Berlin, March 19, 2025 – Safety is a key factor in user acceptance of autonomous vehicles. The use of artificial intelligence (AI) further raises the requirements for ensuring the safety of autonomous driving functions. This is precisely where the Safe AI Engineering research project, launched on March 1, comes in: the project, which brings together 24 partners from industry and science, focuses on greater safety and better integration of AI. Over its three-year term, practical methods will be developed to ensure the safety of an AI function throughout its entire life cycle.
The aim of the project is to develop a methodology for the holistic validation of AI functions in automated driving – from planning, development, and testing through deployment and monitoring to continuous improvement. Safety verification that accompanies every phase of this life cycle is therefore of particular importance. To this end, a practice-oriented approach is being developed to further strengthen the German automotive industry's international leadership in safe autonomous driving.
What are the core innovations of Safe AI Engineering?
The project links safety requirements directly to AI engineering, i.e., the systematic development, implementation, and maintenance of AI systems throughout their entire life cycle. High-quality training and validation data, including synthetic data, are standardized so that they can be reused independently of the system for which they were originally created. This enables better and more sustainable data usage in the long term and thereby reduces costs. In addition, explainable, robust verification methods are being developed to improve traceability when evaluating the performance of AI. Evidence-based monitoring of AI models ensures their continuous improvement, even while an automated vehicle is in operation. At the end of the project, the methods will be tested and evaluated in practice-relevant environments.
Where does the project start?
Safe AI Engineering aims to bridge the gap between verification & validation (V&V) and safety certification for AI. To this end, existing standards such as ISO 26262, SOTIF (ISO 21448), and ISO/PAS 8800, which define international safety requirements for automotive systems and AI functions, will be integrated. The project fits seamlessly into the project landscape of the VDA's flagship initiative for autonomous and connected driving and, together with the jbDATA and nxtAIM projects, forms the second generation of the KI Familie. The methodology is being developed using an AI perception function for pedestrian detection and tested in three use cases of increasing complexity: from a static scene with a pedestrian to dynamic, realistic traffic situations.
How will the project impact the future?
The Safe AI Engineering project makes a significant contribution to the safe integration of AI in vehicles and, in the long term, aims to establish a standard for the validation of AI-based functions in automated driving. The methodology developed can support the regulatory approval of automated vehicles and, in particular, enable a standardized assessment. Vehicle manufacturers and suppliers therefore benefit directly from a methodology that covers the entire AI life cycle. Authorities benefit in particular from the methodology's transparency and traceability. In the long term, this will enable a faster introduction of the corresponding automated driving functions. Users, in turn, receive an automated vehicle in which AI functions work safely and reliably.
Funded by the Federal Ministry for Economic Affairs and Energy (BMWE), the Safe AI Engineering project is a logical next step toward autonomous mobility, driven by strong partners from industry and research in Germany. The project partners are contributing vehicles, data, and hardware developed in previous research projects. Safe AI Engineering thus makes a decisive contribution to the safe, scalable, and sustainable integration of AI into automated mobility systems.
Project: Safe AI Engineering – AI engineering that enables safety argumentation throughout the entire life cycle of an AI function
Website: www.safe-ai-engineering.de
LinkedIn: linkedin.com/company/ki-familie
Partners: DXC Luxoft GmbH, Deutsches Zentrum für Luft- und Raumfahrt e.V., Akkodis Germany GmbH, AVL Deutschland GmbH, Bundesanstalt für Straßen- und Verkehrswesen, Bertrandt Ing.-Büro GmbH, Robert Bosch GmbH, Capgemini Engineering Deutschland S.A.S. & Co KG, Cariad SE, Continental Automotive Technologies GmbH, Fraunhofer-Gesellschaft e.V., FZI Forschungszentrum Informatik, Intel Deutschland GmbH, Karlsruhe Institute of Technology (KIT), Mercedes-Benz AG, Opel Automobile GmbH, Porsche AG, Spleenlab GmbH, Technische Universität Berlin, Technische Universität Braunschweig, TÜV AI.Lab GmbH, Valeo Schalter und Sensoren GmbH, ZF Friedrichshafen AG
Facts & Figures
Project budget: €34.5 million
Funding: €17.2 million
Consortium lead: Dr. Ulrich Wurstbauer (Luxoft GmbH), Prof. Dr. Frank Köster (DLR)
Consortium: 24 partners
Duration: 36 months (March 2025 – February 2028)