
AI engineering that enables a safety argumentation across the entire life cycle of an AI function.
Safe AI Engineering lays the foundations for a market-accepted, practicable safety proof for AI in order to maintain worldwide leadership in safe autonomous driving.
Project description: Safe AI Engineering
As part of the VDA lead initiative’s roadmap to create common standards and advance automated driving holistically, the Safe AI Engineering project emerged from the Pegasus and KI Familie clusters; it builds on the first KI Familie generation and evolved from the KI Absicherung project.
The aim of Safe AI Engineering is to develop a methodology that enables a safety argumentation for AI functions in automated driving to be created over the entire life cycle. This includes the steps of planning, development, testing, deployment, monitoring and embedding in the overall system. The focus is on systematically orchestrating and interconnecting the various elements of such a safety argumentation across the entire AI lifecycle in order to demonstrate how AI functions can be permanently and reliably safeguarded.
Another aspect is the integration of existing test methods and standards (ISO 26262, SOTIF, ISO/PAS 8800) so that a state-of-the-art and, above all, practicable proof of safety can be assembled from all components of the project. The project thus closes the gap between verification & validation (V&V) and the proof of safety for AI.
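To illustrate how such an interconnection of lifecycle elements and standards might be represented in practice, the following is a minimal sketch of a data model that links individual pieces of safety evidence to lifecycle stages and to the standards they support. All names, fields and the example entry are illustrative assumptions, not the project's actual methodology.

# Illustrative sketch only (Python): a minimal data model linking safety evidence
# to lifecycle stages and to the standards it supports. Stage names, fields and
# the example entry are assumptions, not the project's actual methodology.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class Stage(Enum):
    PLANNING = "planning"
    DEVELOPMENT = "development"
    TESTING = "testing"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    EMBEDDING = "embedding in the overall system"

@dataclass
class Evidence:
    claim: str               # the statement this evidence is meant to support
    stage: Stage             # lifecycle stage in which the evidence is produced
    standards: List[str]     # e.g. ["ISO 26262", "ISO/PAS 8800"]
    artifact: str            # reference to the concrete artifact (report, test run, ...)

@dataclass
class SafetyArgumentation:
    evidences: List[Evidence] = field(default_factory=list)

    def by_stage(self) -> Dict[Stage, List[Evidence]]:
        """Group evidence by lifecycle stage to spot stages without coverage."""
        grouped: Dict[Stage, List[Evidence]] = {s: [] for s in Stage}
        for ev in self.evidences:
            grouped[ev.stage].append(ev)
        return grouped

# Hypothetical example entry: a test report supporting a testing-stage claim.
arg = SafetyArgumentation([
    Evidence(claim="Pedestrian detection meets the assumed recall target in use case 1",
             stage=Stage.TESTING,
             standards=["ISO/PAS 8800"],
             artifact="test-report-uc1.pdf"),
])
print("Stages without evidence yet:",
      [s.value for s, evs in arg.by_stage().items() if not evs])

Such a structure would make visible which lifecycle stages still lack evidence before the overall argumentation can be assembled.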
Motivation
The need to safeguard AI functions is growing with the integration of AI into safety-critical systems, especially in automated driving. For the commissioning of safe automated vehicles, a proof of safety is required that covers both correct functioning and continuous operational monitoring. The modules for this verification are derived from the AI engineering methodology, which is developed in Safe AI Engineering together with vehicle manufacturers, suppliers, IT companies, technology providers and scientific partners along the entire value chain of the automotive industry – in particular through the use of real and synthetic driving data.
Approach
A central goal of the project is to develop the methodology using a specific AI perception function for pedestrian detection. To this end, the project will go through three iterative stages of function evolution, each including a use case of increasing complexity. The AI function for pedestrian detection will be further developed and tested via these use cases.
Use Case 1:
A stationary vehicle and a stationary pedestrian in different poses at an intersection.
Use Case 2:
A stationary vehicle and a pedestrian in motion at an intersection.
Use Case 3:
Two stationary vehicles and several moving pedestrians at an intersection, where one pedestrian may be partially occluded.
Each of these use cases presents a new challenge and requires continuous improvement and validation of the AI function. Combining the modules created for the different areas of a safety verification ultimately yields the Safe AI Engineering Method, the overarching goal of the project.
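As a purely illustrative example of how per-use-case evidence for the pedestrian detection function could be collected, the following sketch computes a simple recall figure per use case and checks it against an assumed target. The detection format, the metric and the threshold are assumptions for demonstration only, not the project's test methodology.

# Illustrative sketch only (Python): collecting simple quality evidence for a
# pedestrian detector per use case. Interfaces, metric and threshold are assumed.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2 in image coordinates

@dataclass
class Sample:
    ground_truth: List[Box]  # annotated pedestrians in one image
    detections: List[Box]    # detector output for the same image

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def recall(samples: List[Sample], iou_threshold: float = 0.5) -> float:
    """Fraction of annotated pedestrians matched by at least one detection."""
    matched = total = 0
    for s in samples:
        for gt in s.ground_truth:
            total += 1
            if any(iou(gt, det) >= iou_threshold for det in s.detections):
                matched += 1
    return matched / total if total else 1.0

def use_case_evidence(name: str, samples: List[Sample],
                      required_recall: float = 0.95) -> dict:
    """One piece of evidence: does the detector meet the assumed recall target?"""
    r = recall(samples)
    return {"use_case": name, "recall": round(r, 3), "passed": r >= required_recall}

# Evidence would be gathered separately for each of the three use cases and then
# fed into the overall safety argumentation, e.g.:
#   use_case_evidence("UC1 static pedestrian", uc1_samples)
#   use_case_evidence("UC2 moving pedestrian", uc2_samples)
#   use_case_evidence("UC3 occlusion", uc3_samples)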
Core innovations of the project
Safe AI Engineering develops new approaches for the safe use of AI in automated driving, with a focus on:
Safety and AI integration
Linking safety requirements and AI engineering.
Database and quality assurance
Methods for the normalization of high-quality training and validation data, including synthetic data.
AI evaluation and safety standards
Evaluation of quality metrics according to ISO 26262, ISO/PAS 8800 and SOTIF.
Explainable and robust AI
Transparent methodology to validate a perception function.
Monitoring and continuous improvement
Model verification through evidence-based offline and online monitoring (see the sketch after this list).
Practical demonstration
Testing the technologies in realistic demonstrators.
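The monitoring and continuous improvement focus referenced above could, for example, take the form of an online runtime monitor that flags implausible or low-confidence detector outputs for later offline analysis. The following is a minimal sketch under that assumption; the detection format and thresholds are illustrative and not project results.

# Illustrative sketch only (Python): an online runtime monitor that flags
# suspicious detector outputs so they can be examined offline. Thresholds and
# the detection format are assumptions, not the project's monitoring concept.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "pedestrian"
    confidence: float   # detector score in [0, 1]
    width_px: int
    height_px: int

@dataclass
class MonitorVerdict:
    ok: bool
    reasons: List[str]

def monitor_frame(detections: List[Detection],
                  min_confidence: float = 0.5,
                  min_box_px: int = 8) -> MonitorVerdict:
    """Online check: collect anomalies for one camera frame."""
    reasons: List[str] = []
    for det in detections:
        if det.confidence < min_confidence:
            reasons.append(f"low confidence {det.confidence:.2f} for {det.label}")
        if det.width_px < min_box_px or det.height_px < min_box_px:
            reasons.append(f"implausibly small box for {det.label}")
    return MonitorVerdict(ok=not reasons, reasons=reasons)

# Frames flagged online would be logged and fed back into offline analysis and
# retraining, closing the loop between operation and continuous improvement.
print(monitor_frame([Detection("pedestrian", 0.42, 30, 60)]))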
Impact
The Safe AI Engineering project makes an important contribution to the safe integration of AI in vehicles and could establish a long-term standard for safeguarding AI-based functions in automated driving. The methodology developed can contribute to regulatory approval and thus help ensure standardized applicability.
Safe AI Engineering strengthens the innovative power of the industry and promotes the further development of new technologies. The project makes a significant contribution to the integration of AI components in automated vehicles and achieves progress in sensor technology, actuator systems, robustness, reliability as well as data fusion and data processing. This creates the basis for scaling automated mobility solutions.
Vehicle manufacturers and suppliers benefit from a proven methodology to secure their AI systems, while authorities rely on transparency and traceability to ensure public safety. End users get a vehicle in which AI works safely and reliably – regulatory approval of AI could ensure the safeguarding of these systems in the future.
The project is based on existing vehicles, data and hardware from the project partners that have already been tested in previous research projects.
Outlook
Safe AI Engineering is a forward-looking project that not only ensures the safety of AI in vehicles but could also set a standard for the entire industry. It is an important step towards the safe and reliable integration of AI in automated vehicles, which matters to manufacturers, authorities and end users alike, and can thus contribute to establishing a general sense of safety with regard to automated mobility systems.
Facts and figures
Project budget
€34.5 million
Consortium Lead
Dr. Ulrich Wurstbauer
Luxoft GmbH
Prof. Dr. Frank Köster
DLR
Consortium
24 partners
Car manufacturers, suppliers, technology providers, research institutions, external partners
Funding
€17.2 million
Duration
36 months
March 2025 – February 2028