Nissan recently extended its partnership with AI firm Monolith for another three years to use AI-driven engineering in its vehicle development. By training machine-learning models on more than 90 years of historical test data – including simulations – engineers at Nissan’s UK technical center aim to predict the outcomes of physical tests more accurately, reducing the need for physical prototypes. The approach has already cut bolt-joint physical testing by 17%, and Nissan expects it could halve development test time for its future European vehicles. ATTI speaks with Sam Emeny-Smith, Monolith’s head of automotive, defense and motorsport, about the company’s next steps in advancing AI-supported automotive testing in collaboration with Nissan.
How is Monolith’s AI platform integrated into Nissan’s existing vehicle development workflow, and what kind of data infrastructure is required to support it?
Monolith was introduced into Nissan’s development workflow through a joint onboarding effort rather than a top-down technology drop-in. Engineers from both teams reviewed the available test and simulation datasets, including historic data from different divisions that had not previously been used together.
Working with Nissan’s engineers, the teams identified which variables mattered most and set up an approach where existing test data could be fed directly into the platform. This supported a structured prioritization exercise. High-risk and high-value tests could be brought forward, while lower-impact tests were reduced or removed, helping the validation program run more efficiently.
Engineers adopted the platform because it gave them transparent prediction tools. Built-in explainability features allowed them to see why the models behaved in a certain way, which made it easier to integrate AI results into day-to-day validation decisions.
From an IT perspective, integration was straightforward. Nissan already had well-organized databases and secure storage, so the main task was ensuring controlled data access rather than rebuilding infrastructure. This approach fits well with the broader Re:Nissan plan, which emphasizes leaner development processes supported by better use of existing engineering knowledge.
What types of machine-learning models are being used to predict test outcomes, and how do engineers validate that these AI-generated results accurately reflect real-world vehicle performance?
Monolith uses a range of supervised and unsupervised machine-learning approaches suited to engineering regression problems, such as random forest regressors, neural network-based models and other structured techniques developed from eight years of engineering projects. The platform does not rely on a single model type. Instead, our team works with engineers to choose the model structures that best match the behavior of the system being studied. This is built into the platform so users can apply the same proven methods directly to their own test data.
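The idea of choosing the model structure that best matches the system's behavior can be sketched with a simple hold-out comparison. The data, the candidate polynomial surrogates and the quantity being modeled here are all illustrative stand-ins, not Monolith's actual models:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a nonlinear test quantity,
# e.g. a measured response that curves with applied torque.
x = rng.uniform(0.0, 10.0, size=200)
y = 2.0 * x - 0.15 * x**2 + rng.normal(0.0, 0.3, size=200)

# Hold-out split for comparing candidate model structures.
x_tr, x_va = x[:150], x[150:]
y_tr, y_va = y[:150], y[150:]

def holdout_rmse(degree):
    """Fit a polynomial surrogate of the given degree, score on held-out data."""
    coeffs = np.polyfit(x_tr, y_tr, deg=degree)
    resid = np.polyval(coeffs, x_va) - y_va
    return float(np.sqrt(np.mean(resid**2)))

# Pick the structure whose held-out error best matches the system's behavior.
errors = {d: holdout_rmse(d) for d in (1, 2, 3)}
best = min(errors, key=errors.get)
print(best, errors)
```

A linear surrogate underfits the curved response, so the hold-out error exposes the mismatch; the same comparison logic applies whether the candidates are polynomials, random forests or neural networks.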
Validation is handled through standard and rigorous engineering practices. A portion of historical test campaigns is held back for model checking, which allows engineers to compare predictions against known results before using the model for new test planning. For larger programs, teams often run dedicated validation exercises where metrics such as confusion matrices are reviewed to ensure false positives and false negatives stay within acceptable engineering limits.
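The hold-back-and-check workflow described above can be sketched in a few lines. Everything here is a toy placeholder – synthetic torque data, a trivial pass/fail surrogate and an arbitrary 10% acceptance gate – chosen only to show the mechanics of a confusion-matrix review:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for historical bolt-joint tests:
# feature = applied torque, target = 1 if the joint passed, 0 if it failed.
torque = rng.uniform(20.0, 120.0, size=500)
passed = ((torque > 45.0) & (torque < 95.0)).astype(int)

# Hold back 20% of the campaigns for model checking.
split = int(0.8 * len(torque))
train_x, test_x = torque[:split], torque[split:]
train_y, test_y = passed[:split], passed[split:]

# Toy surrogate: predict "pass" inside the torque band that passed in training.
lo = train_x[train_y == 1].min()
hi = train_x[train_y == 1].max()
pred = ((test_x >= lo) & (test_x <= hi)).astype(int)

# Confusion matrix: compare predictions against known held-back results.
tp = int(np.sum((pred == 1) & (test_y == 1)))
tn = int(np.sum((pred == 0) & (test_y == 0)))
fp = int(np.sum((pred == 1) & (test_y == 0)))
fn = int(np.sum((pred == 0) & (test_y == 1)))
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")

# Acceptance gate: false positives and negatives within engineering limits.
assert fp + fn <= 0.1 * len(test_y)
```

Only after a model clears a gate like this would its predictions feed into planning new physical tests.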
The aim is not to treat the models as abstract statistical tools, but as practical surrogates that speed up decision making while maintaining physical testing as the reference for real-world vehicle performance.
How is Monolith’s AI system integrated into Nissan’s existing CAE and testing workflows, and what challenges arise when scaling the technology across multiple vehicle platforms and global R&D centers?
Monolith integrates into Nissan’s CAE and testing workflows by using the simulation and physical test data that engineers already generate. Teams connect their existing outputs to the platform and build surrogate models that help prioritize tests without altering established processes.
Scaling is now underway. As different groups see the benefits, they want to apply machine learning to more components and validation programs, supporting Re:Nissan’s aim to cut testing time and accelerate development.

The difficult part is not creating one useful model but scaling the approach across multiple platforms and global R&D centers. In most engineering teams, someone can write a script that solves a specific problem, but sharing it is hard. Moving the code to another site, aligning data formats and explaining the workflow to a new team quickly become a burden. These one-off tools rarely survive beyond the original group.
Monolith was built to remove this barrier. The platform handles the backend data structures, reliability and security and provides built-in explainability so engineers can understand and trust the outputs. It enables teams to share analyses and results easily, without needing to maintain code or build infrastructure themselves. This is what enables Nissan engineers to focus on scaling their AI solutions rather than on building software.
Scaling AI across global R&D centers presents both technical and organizational challenges. Ensuring data quality and consistency across diverse sources is critical, as is aligning workflows and standards between regions. Monolith supports this with a flexible, cloud-based architecture and robust governance frameworks that allow teams to share models and insights securely.
The partnership has already reduced physical testing by 17%. What specific test areas or vehicle components are expected to see the biggest time savings as AI adoption expands?
The initial collaboration between Nissan and Monolith focused on the testing of bolt joints within vehicle chassis structures, which is a hugely important area of validation, but one that requires a lot of time and effort. Using Monolith’s AI, engineers identified the optimal torque range for bolts and prioritized only the most informative tests; that alone reduced physical testing by 17%, which goes a long way toward showing how AI can pinpoint the experiments that truly matter.
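One common way to prioritize the most informative tests is to run new physical experiments where a surrogate model is least certain. The bootstrap-ensemble sketch below uses entirely synthetic numbers and a simple polynomial surrogate; it illustrates the general technique, not Monolith's actual selection logic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: torque settings already tested physically,
# with a measured response (e.g. joint stiffness) for each.
tested_torque = rng.uniform(30.0, 90.0, size=40)
stiffness = 5.0 * np.sqrt(tested_torque) + rng.normal(0.0, 0.5, size=40)

# Candidate torque settings for the next physical test campaign.
candidates = np.linspace(20.0, 120.0, 101)

# Bootstrap ensemble of simple polynomial surrogates: disagreement
# between ensemble members approximates predictive uncertainty.
preds = []
for _ in range(50):
    idx = rng.integers(0, len(tested_torque), size=len(tested_torque))
    coeffs = np.polyfit(tested_torque[idx], stiffness[idx], deg=2)
    preds.append(np.polyval(coeffs, candidates))
uncertainty = np.std(preds, axis=0)

# Spend the few physical tests where the surrogate is least certain --
# typically at the edges of, or beyond, the previously tested range.
most_informative = candidates[np.argsort(uncertainty)[-5:]]
print(np.sort(most_informative))
```

Tests in regions the surrogate already predicts confidently add little new information and are the natural candidates to cut.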
As the partnership expands, the technology will be applied to a broader range of components and systems, particularly where repeatability and correlation to simulation data are high – so that’s throughout the design, testing and delivery process. These are areas where vast historical datasets already exist, providing the foundation for accurate predictive models.
By applying AI across the entire test portfolio, Nissan anticipates cutting physical testing time by up to half for future European models. Speed is a hugely valuable benefit of introducing AI more broadly into their workflows, but engineers also gain deeper insights earlier in the process, accelerating innovation while maintaining the rigorous standards customers expect from Nissan vehicles.
How do you envision AI transforming the vehicle development process over the next decade? Could it eventually enable fully virtual testing environments that replace most physical validation?
AI will change vehicle development over the next decade by reducing the amount of physical testing needed, but it will not replace physical validation entirely. Safety requirements, regulatory standards and the need for traceable decisions mean that physical tests will remain the final reference. What AI will do is shift a larger share of learning into software, so engineers can reach those final tests with far fewer prototypes and a much more focused validation plan.
The biggest impact will come from optimizing long, expensive and bottlenecked test programs. These are the areas where AI already delivers measurable outcomes, such as double-digit reductions in test effort, and where the gains directly shorten development timelines. Engineers have known about machine learning for years, but adoption has been slow because trust and accuracy matter more than novelty. Now that teams see reliable results, they are starting to implement these methods across programs.
Over the next decade, AI will become a cornerstone of vehicle development, fundamentally changing how engineers explore, validate and optimize designs.
While physical testing will always play a critical role in final validation and safety assurance, its volume will decline significantly as confidence in AI-based predictions grows. Engineers will increasingly rely on digital twins powered by machine learning to evaluate performance, durability and efficiency under a wide range of real-world scenarios.
In this future, vehicle programs could move from concept to validation with far fewer prototypes and shorter lead times. Monolith’s vision is to make every test that is conducted count, using AI to direct human expertise where it has the greatest impact. The result will be a faster, smarter and more sustainable automotive development process, fit for purpose in a future where operational efficiency will make the difference to the bottom line.
