SSS’19 Update: AI/ML, autonomy and multi-core safety certification

Paul P

Wind River recently participated in the Safety-critical Systems Symposium (SSS’19) held in Bristol, UK. This year’s conference attracted a record number of delegates, including academics and industry professionals involved in system safety. One of the main tracks in this year’s programme was on Autonomy, Artificial Intelligence (AI) and Machine Learning, and its talks generally advocated one of two quite different approaches: either safe AI-based autonomous systems that achieve safety through training, or safe, predictable autonomous systems with constrained AI functions. It appears that these contrasting approaches are in a race to achieve the end goal of safe autonomous AI/ML-based systems.

To put this into context, advocates of the first approach (safe AI-based autonomous systems achieving safety through training) proposed the use of sensor fusion in conjunction with AI-based autonomous driving systems that have been trained for all possible scenarios. Advocates of the second approach (safe, predictable autonomous systems with constrained AI functions) argued that an AI system will only ever be as good as the training data used to build it, and cited scenarios and real examples where an AI system would fail to comprehend situations that a human could handle safely. For example, it was claimed that an AI system is likely to fail to detect a pedestrian in an image that contains Gaussian noise or haze, or is defocused, whereas a human will still correctly observe the pedestrian. For these reasons, some SSS’19 speakers proposed the use of a non-AI safety monitor (possibly developed using formal methods) which would observe an AI system and constrain its behavior. It will be interesting to see whether the cumulative effect of AI/ML training and simulation actually improves predictability and reduces the frequency of critical mishaps, or whether it will confirm the hypothesis that an AI system can only respond predictably to scenarios for which it has been trained.

Diving deeper, there is also the question of how to undertake safety certification of AI-based systems: they don’t have requirements in the same way as ‘traditional’ (non-AI) software systems, and therefore don’t lend themselves to a V-model development lifecycle – can 100% modified condition/decision coverage testing be achieved for a neural network? Although there doesn’t appear to be consensus on this aspect, a proposed mitigation may provide a way forward: using an ML system in conjunction with a safety-certified checker system based on ‘traditional’ software.
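To make that checker idea a little more concrete, here is a minimal sketch in C of how a conventionally developed checker might constrain the output of an ML component. All of the names, thresholds and the braking example are hypothetical and chosen purely for illustration; they are not taken from any SSS’19 paper or from a real system.

```c
#include <stdio.h>

/* Illustrative sketch only: an ML planner proposes a braking command,
 * and a 'traditional', independently developed checker constrains it
 * to a pre-defined safe envelope before it reaches the actuator.      */

typedef struct {
    double braking;      /* requested braking force, 0.0 .. 1.0        */
    double confidence;   /* planner's self-reported confidence, 0 .. 1 */
} ml_command_t;

/* Envelope limits enforced by the checker; in a real system these
 * would be derived from a verified safety analysis, not constants.    */
#define BRAKING_MIN     0.0
#define BRAKING_MAX     1.0
#define MIN_CONFIDENCE  0.6
#define SAFE_FALLBACK   0.3   /* e.g. gentle braking as a safe default */

/* The safety-certified checker: small, deterministic, analysable code
 * that either passes the ML proposal through or substitutes a safe
 * fallback when the proposal is out of range or low-confidence.       */
double checked_braking(ml_command_t cmd)
{
    if (cmd.confidence < MIN_CONFIDENCE ||
        cmd.braking < BRAKING_MIN || cmd.braking > BRAKING_MAX) {
        return SAFE_FALLBACK;   /* override the ML output */
    }
    return cmd.braking;         /* accept the ML output   */
}

int main(void)
{
    ml_command_t ok  = { 0.55, 0.92 };
    ml_command_t bad = { 1.70, 0.40 };  /* implausible, low confidence */

    printf("accepted:   %.2f\n", checked_braking(ok));
    printf("overridden: %.2f\n", checked_braking(bad));
    return 0;
}
```

The attraction of this pattern is that the checker itself is simple, deterministic and fully analysable, so it can be developed and certified using conventional techniques even where the ML component cannot.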

One of my personal highlights of the conference was the privilege of jointly presenting a conference paper, Civil Certification of Multi-core Processing Systems in Commercial Avionics, with Harold Tiedeman, Jr., Technical Fellow at Collins Aerospace. This was an extended version of our joint white paper and was also published in the SSS’19 conference proceedings (some of the topics were also discussed in the video, The Road to Multi-core Certification). We had a number of interesting conversations with delegates on multi-core certification; if you didn’t have an opportunity to catch up with us, or would like to learn more about VxWorks 653 and multi-core platforms, please feel free to contact our global sales team.

If you’re interested in learning more about how AI & safety-critical systems can co-exist, please register for our forthcoming webinar on March 6th: Anatomy of Modern Systems: Where Safety-Critical and General Purpose Applications Co-exist.