On 17 March 2026, AI Talk host Kevin Craine was joined by Nanda Kishore, Director of Supply Chain & Procurement – Europe and APAC, Beaufort; Okan Ozkan, R&D and Business Development Director, myTECHNIC; and Sean Simmons, Chief Engineer, Expleo.
Views on news
The most pressing risks from artificial intelligence may come not from the models themselves, but from the complex systems companies build around them, according to the 2026 International AI Safety Report. The report marks a shift in how the global research community evaluates AI risk. Last year’s edition concentrated on model behaviour, including hallucinations, bias and benchmark failures.
This year’s zeroes in on what happens after deployment, such as when AI systems trigger business processes, access sensitive data, make autonomous decisions and interact with other systems in ways their operators may not fully understand. Agentic AI systems, which can plan, pursue goals and interact with external tools autonomously, pose heightened risks because they act without waiting for human approval at each stage. Businesses should leverage AI while retaining control over it.
Where can AI enable safety critical engineering?
AI-enabled data processing can be very useful when it comes to forming hypotheses or engineering assumptions, while requirements engineering, rewriting and quality checking are further areas where LLMs have a large role to play. In model-based engineering, new models can now be constructed automatically, including automatic code generation and test cases. Design evaluation is another area where AI helps: several different models can be assessed at the same time prior to validation. AI’s pattern recognition capabilities can also be tapped into when evaluating performance data. While AI-enabled systems can spot anomalies, further review and analysis must be carried out by human experts.
Fully automated systems without humans in the loop will remain unviable in these industries for quite some time. For example, the AI tools used in the certification of safety critical systems are themselves still not certified to produce reliable outcomes. Where AI can bring the best and safest outcomes is in high-volume, low-ambiguity tasks, and it is at its best when supporting the whole product or manufacturing life cycle. How the model is trained is critical: hallucinations and drift must be managed by humans in the loop before data is automatically written into a test case. AI is also employed to monitor critical AI systems for errors and anomalies. But there must always be an engineer who owns the decisions made by AI. Users of AI must also be trained on when they can accept its decisions and when they should challenge them.
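The workflow the panel describes, where AI flags anomalies in performance data but nothing is written into a test case until an engineer signs off, can be sketched minimally. This is an illustrative assumption, not any panelist's actual system: anomalies are flagged with a simple z-score test, and each finding carries an explicit approval field that only a human can set.

```python
from dataclasses import dataclass
from statistics import mean, stdev


@dataclass
class Finding:
    """An anomaly candidate; an engineer must own the final decision."""
    index: int
    value: float
    approved: bool = False


def flag_anomalies(samples: list[float], threshold: float = 3.0) -> list[Finding]:
    """Flag samples more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [Finding(i, v) for i, v in enumerate(samples)
            if sigma > 0 and abs(v - mu) / sigma > threshold]


def accept_finding(finding: Finding, engineer_approves: bool) -> Finding:
    """Nothing is written into a test case until a human signs off."""
    finding.approved = engineer_approves
    return finding
```

The point of the separate `accept_finding` step is that the AI only proposes; the approval flag, and the accountability, stay with a named engineer.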
Many of the standards currently in place in aviation, such as IEC 61508, were written about 30 years ago; regulatory bodies should update them with AI deployments in mind, following the automotive industry’s example, where standards move with the times. Digital twins are a great use case for AI thanks to its strength in processing enormous amounts of data. In predictive maintenance and condition monitoring, the aim is to keep the system in its safe state or to return it to that state. Here, a digital twin and a closed feedback loop can be used to inform changes to the certification baseline.
AI can also be leveraged to interpret data coming from a digital twin much faster. In aircraft maintenance, digital twins are used to optimise the positioning of aircraft in hangars, the way tools are shared between those aircraft and the movements of technicians between aircraft and tasks. In-flight engine data is analysed to assess ongoing engine performance and determine when an aircraft must be pulled out of service. In terms of design applications of AI, load processing is one of the strongest use cases. AI has recently been employed in drone warfare too, for reconnaissance and deep penetration, where air balloons are released for deeper exploration.
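The in-flight monitoring pattern above, analysing engine data to decide when an aircraft should come out of service, can be illustrated with a minimal sketch. The metric, threshold and flight count here (exhaust gas temperature margin, a 15 °C limit, three consecutive flights) are hypothetical assumptions chosen for illustration, not figures from the discussion, and the output is a recommendation for engineers rather than an automatic action.

```python
def should_pull_from_service(egt_margins_c: list[float],
                             limit_c: float = 15.0,
                             consecutive: int = 3) -> bool:
    """Recommend removal from service when the exhaust gas temperature
    margin stays below `limit_c` for the last `consecutive` flights.
    The final decision rests with a human engineer."""
    if len(egt_margins_c) < consecutive:
        return False
    return all(margin < limit_c for margin in egt_margins_c[-consecutive:])
```

Requiring a sustained trend rather than a single low reading reflects the panel’s caution about anomalies: one outlier triggers human review, not an automatic grounding.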
The panel’s advice

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543