Get the most ROI from AI in engineering validation testing | Monolith

Written by Admin | Sep 22, 2024 12:09:29 AM

During the product development validation phase, artificial intelligence (AI) has proven to be a powerful tool that can greatly improve efficiency and return on investment (ROI) for several large OEMs.   

Through precise, data-driven AI models, engineers can better comprehend test conditions, prioritise critical tests, and streamline validation processes.

This technical blog delves into four key applications of AI in validation testing that have the most significant impact not only on your results but also on your ROI.

 

The role of AI in validation testing  

 

Having worked on over 300 AI projects with engineering clients, we have found that validation testing offers the greatest potential for optimisation, particularly in complex, dynamic systems exhibiting non-linear behaviour. 

Validation testing faces significant challenges, such as the high costs and complexity of test rigs, the limitations of traditional simulation tools, and the risks posed by insufficient test coverage or over-testing. 

Validating prototypes is necessary; however, with accelerated timelines and looming deadlines, engineers need ways to complete their testing campaigns faster.

As the most resource-intensive phase of product development, validation testing stands out as the key area where AI-driven optimisation can deliver substantial benefits. 

1. Test plan optimisation 

 

Addressing the challenge  

 

Developing highly complex products, such as batteries, involves navigating vast parameter spaces under tight development schedules. Engineers must identify the critical parameters driving product performance to create an effective and efficient test plan.

However, they often face a difficult choice: over-testing, which wastes time and delays product launches, or under-testing, which risks missing crucial performance or safety issues—errors that can lead to costly delays and recalls.

AI-powered solutions  

  

What if it were possible to reduce physical testing by half without increasing risk? Traditional testing approaches involve validating a wide range of factors and conditions, resulting in a test matrix too large for practical implementation.

Engineers must find the optimal combination of test parameters that will validate the design against key performance criteria. 

Monolith’s Next Test Recommender (NTR) offers a solution through proprietary active learning technology. With NTR, engineers can identify the best combination of test parameters with a relatively small number of tests.

By iterating through additional tests based on AI-driven recommendations, the model is continuously retrained and improved, ensuring that all design considerations are addressed with fewer test steps, yet achieving the same or better level of coverage and confidence. 

The unique algorithms within NTR combine multiple modelling techniques at each step, balancing repeatability with performance improvement. This multi-algorithm approach not only reduces the required test steps but also provides a more comprehensive testing method.

For instance, when testing electric vehicle (EV) batteries, AI models can help determine the optimal charging profile to extend battery life while ensuring safety, effectively covering critical areas of the design space. 
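The active-learning loop described above can be sketched in a few lines. This is a minimal illustration, not Monolith's actual NTR algorithm: it uses a Gaussian-process surrogate and picks the next test where the model is most uncertain. The test function `run_test` is a hypothetical stand-in for a physical measurement.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_test(x):
    # Hypothetical stand-in for a physical test, e.g. measured
    # battery degradation as a function of charge rate x.
    return np.sin(3 * x) + 0.1 * rng.normal()

# Candidate test conditions (the full test matrix).
candidates = np.linspace(0, 2, 200).reshape(-1, 1)

# Start with a few seed tests, then let the model pick the rest.
X = candidates[[0, 99, 199]]
y = np.array([run_test(x[0]) for x in X])

model = GaussianProcessRegressor(normalize_y=True)
for _ in range(5):
    model.fit(X, y)
    _, std = model.predict(candidates, return_std=True)
    nxt = candidates[[np.argmax(std)]]   # most uncertain condition
    X = np.vstack([X, nxt])
    y = np.append(y, run_test(nxt[0, 0]))

print(f"ran {len(X)} tests instead of {len(candidates)}")
```

Each iteration retrains the model on all results so far, so test effort concentrates where the design space is least understood rather than on a uniform grid.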

 

2. Test data validation

 

The problem  

  

Validating complex products often requires extensive tests, which can be very time-consuming. At the end of a weeklong or even month-long experiment, the last thing that you want to find is that some of your test equipment malfunctioned. 

One of the primary challenges in the validation phase of complex products is quickly identifying faulty sensors in prototype test data.

Sensor errors can lead to costly delays, wasted test runs, and the need to rerun invalidated tests.

Confidence in prototype test data allows engineers to focus on valuable analysis rather than debugging measurement discrepancies. 

AI-driven detection  

  

AI models, trained on historical test data, can automatically detect anomalies and identify faulty sensors, improving testing efficiency.

This capability is particularly valuable in expensive and resource-limited test environments, such as wind tunnels or battery labs, where getting it right the first time is critical.

By leveraging AI for test data validation, such as Monolith's Anomaly Detector, engineers can save valuable time and resources, ensuring that test runs are accurate and reliable. 

3. Root-cause analysis

  

The complexity  

  

Validating the performance of complex products, such as smart meters or fuel cells, often involves dealing with hundreds or even thousands of parameters, leading to an overwhelming number of testing permutations.

Engineers are pressured to quickly identify the critical parameters causing system failures during physical validation tests.

When you are looking for an error hidden within thousands of rows of data, you need tools that can help narrow down your search. 

AI-powered insights  

  

AI’s predictive capabilities can greatly assist in root-cause analysis by offering three key benefits: 

  1. Predicting which design or operating condition changes are most likely to resolve the failure.
  2. Identifying components that are contributing to sub-optimal performance.
  3. Reducing delays and uncertainty in the validation process.

Whether dealing with large datasets, complex parameter interactions, or resource-intensive simulations, AI can rapidly process vast amounts of data, efficiently explore parameter spaces, and quickly pinpoint the source of system failures.

 

With self-learning AI models, engineers gain a better understanding of which parameters have the greatest impact on performance, giving them a head start on identifying the causes of system failures.
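One common way to get that ranking of influential parameters (a generic technique, not a specific Monolith feature) is to fit a model to the logged test data and inspect its feature importances. In this synthetic example, only two of eight logged parameters actually drive the failure metric, and the model recovers them:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# 300 test runs, 8 logged parameters; in this synthetic example only
# parameters 2 and 5 actually drive the failure metric.
X = rng.normal(size=(300, 8))
failure_metric = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.normal(size=300)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, failure_metric)

# Rank parameters from most to least influential.
ranking = np.argsort(model.feature_importances_)[::-1]
print("parameters ranked by influence:", ranking[:3])
```

Instead of combing through thousands of rows by hand, the engineer starts the investigation with a shortlist of the parameters most correlated with the failure.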

 

4. System calibration

  

The calibration challenge  

  

Engineers strive to calibrate complex models that govern product behaviour, seeking to identify the optimal configuration for peak performance and reliability.

Designing highly complex, non-linear systems to meet stringent performance standards is a significant challenge. Predicting the optimal combination of inputs to achieve the desired output across all operating conditions is nearly impossible. 

  

AI in calibration  

  

AI’s pattern recognition and optimisation capabilities are pivotal in this calibration process. By incorporating AI specifically developed for engineering applications, engineers can automate calibration processes, optimise models, and accelerate performance analysis.

This approach is being adopted across industries such as aerospace, industrial equipment, and robotics, where AI models are used for system calibration to reduce development time and improve product performance.

Machine learning can process large numbers of parameters, find the optimal combinations and settings to train effective models, and ensure your system operates efficiently under different scenarios.
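At its simplest, calibration means tuning model parameters until the model's output matches measured behaviour across the operating range. The sketch below, an illustrative example rather than any specific product workflow, fits three parameters of an assumed exponential response model to measurement data by minimising the squared error:

```python
import numpy as np
from scipy.optimize import minimize

# Measured response of the system at several operating points
# (synthetic data standing in for real measurements).
x_meas = np.linspace(0.0, 5.0, 30)
y_meas = 2.5 * np.exp(-0.8 * x_meas) + 0.4

def model(params, x):
    # Assumed parametric form of the system response.
    a, b, c = params
    return a * np.exp(-b * x) + c

def misfit(params):
    # Sum of squared errors between model and measurement.
    return np.sum((model(params, x_meas) - y_meas) ** 2)

result = minimize(misfit, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
print("calibrated parameters:", np.round(result.x, 2))
```

Real systems bring many more parameters and noisy data, which is where ML-based surrogates and optimisers earn their keep, but the principle is the same: search the parameter space for the configuration that best reproduces measured performance.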

Conclusion  

  

The integration of AI into validation testing represents a transformative shift in product development.

From optimising test plans to automating test data validation, enhancing root-cause analysis, and improving system calibration, AI offers engineers powerful tools to tackle the complexities of modern product design.

By embracing these AI-driven solutions, engineering teams can accelerate time-to-market, reduce costs, and achieve higher levels of performance and safety in their products.