Machine learning (or ML) isn’t new, but the transformations it’s bringing to test labs certainly are. We’re witnessing innovative ways of analysing data and managing experiments.
Engineers are embracing automated tools to minimise waste, save time, and enhance experiment planning. Machine learning provides a powerful solution to overcome data challenges and stay ahead.
Engineering experiments are producing larger volumes of data than ever before. Modern equipment, now more sensitive and precise, captures vast datasets that can be overwhelming to analyse manually.
For example, a single battery cell might feature six sensors recording data every second. When testing dozens of cells, packs, or modules simultaneously, this quickly scales to terabytes of data each month.
These datasets typically arrive as CSV, XLSX, MAT, or JSON files, producing massive tables of raw numbers that are hard for a human to parse. Faced with that, most engineers reach for quick visualisation tools, whether the plotting built into the onboard test software or a fast chart in Excel or Python, to get an initial view of the data.
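For instance, a quick first look in Python might be no more than a few lines of pandas and matplotlib. This is purely illustrative; the file name and column names below are hypothetical, not a real dataset.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Quick manual spot-check of one test channel.
# File and column names are illustrative placeholders.
df = pd.read_csv("cell_042_cycling.csv", parse_dates=["timestamp"])
df.plot(x="timestamp", y="voltage", figsize=(10, 3), legend=False)
plt.ylabel("Voltage (V)")
plt.title("Manual spot-check of a single channel")
plt.tight_layout()
plt.show()
```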
Engineers typically look for errors or anomalies in the test data that would prevent them from gaining the insights they need. In a battery cell cycling example, lab staff may examine the signals to confirm the cell is behaving properly: checking that voltage drop-offs align with expectations and that open-circuit voltages (OCVs) stabilise within expected ranges.
This type of manual signal analysis is challenging for a few reasons:

- It is subjective: what "looks right" depends on the individual engineer's experience.
- It is time-consuming: reviewing every signal across every channel can take hours or even days.
- It is rarely complete: engineers often inspect only portions of the data, so anomalies outside those portions can slip through.
The solution to this problem may not be entirely simple, but it is straightforward.
Automated programs can effectively address challenges such as subjectivity and time-intensive processes. While many engineers integrate such controls into their test routines, these are often limited to basic thresholds and alerts.
This approach works well for detecting straightforward errors where thresholds are breached. However, much of the engineering data reviewed by lab technicians involves more intricate behaviours. Technicians often rely on their instincts to spot subtle irregularities that simply don’t “look right.”
Unfortunately, these instincts and the concept of “not looking right” are difficult to translate into code, especially within the rigid programming languages commonly used in test lab equipment.
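To make the contrast concrete, here is roughly what a basic threshold check looks like. This is a hypothetical sketch, with made-up limit values, not anyone's production test routine.

```python
# A typical hard-coded limit check: easy to write and reliable for
# outright breaches, but blind to anything subtler.
VOLTAGE_MIN, VOLTAGE_MAX = 2.5, 4.2  # hypothetical cell limits in volts

def check_limits(voltage_samples):
    """Return the indices of samples that breach the fixed limits."""
    return [i for i, v in enumerate(voltage_samples)
            if not VOLTAGE_MIN <= v <= VOLTAGE_MAX]

print(check_limits([3.7, 3.6, 4.5, 3.8, 2.1]))  # -> [2, 4]
```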
This is where a specific branch of deep learning—autoencoders—can provide an elegant solution.
An autoencoder is a type of neural network designed to compress data into a simplified representation (encoding) and then reconstruct it back to its original form (decoding). This process enables the network to learn the fundamental patterns and structures within the data.
Autoencoders can be trained on historical data to capture underlying trends and behaviours in complex engineering data signals. Once trained, they can make accurate predictions by reconstructing these signals from the encoded representations, even when faced with noisy or incomplete input data.
This makes them highly effective for anomaly detection, signal reconstruction, and predicting the behaviour of engineering systems.
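As a rough illustration of the idea, here is a minimal autoencoder for fixed-length signal windows, written in PyTorch. The architecture, layer sizes, and training loop are placeholder assumptions for the sketch, not a description of any production system.

```python
import torch
import torch.nn as nn

class SignalAutoencoder(nn.Module):
    """Toy autoencoder: layer sizes are arbitrary placeholders."""

    def __init__(self, window_size: int = 256, latent_dim: int = 16):
        super().__init__()
        # Encoder: compress the window into a small latent representation
        self.encoder = nn.Sequential(
            nn.Linear(window_size, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Decoder: reconstruct the window from the latent representation
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, window_size),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = SignalAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training sketch: `golden_windows` stands in for windows cut from
# anomaly-free "golden run" data, shape (num_windows, window_size).
golden_windows = torch.randn(1000, 256)
for epoch in range(20):
    reconstruction = model(golden_windows)
    loss = loss_fn(reconstruction, golden_windows)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Because the network is only ever shown healthy data, it becomes good at reconstructing healthy patterns; anything it reconstructs poorly is, by definition, unusual.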
In the context of a lab (and in the work we’ve been doing with clients), we train the neural network on a batch of "golden run" data: ideal, anomaly-free recordings. Having learned what normal looks like, the network can then flag any deviations from what it understands as the expected data.
These reconstructions can be remarkably precise, closely replicating complex engineering signals. The figure below illustrates the use of machine learning to reconstruct a signal; an anomaly score is then computed by comparing the expected (reconstructed) signal with the measured signal.
The green line represents the predictions based on the neural network training, and the blue lines are the actual results. Being able to derive an anomaly score from these differences is much more potent than programming standard limits, as the calculation of the anomaly score can account for the signal type, the experiment itself, etc.
The engineering manager now only has to consider this anomaly score. It is a much simpler variable to examine: a simple threshold on the score can surface complex hidden behaviour in the test channels.
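A minimal sketch of how such a score might be derived, assuming a trained model has already produced the reconstruction. The rolling mean-squared error and the threshold value here are illustrative choices, not Monolith's actual scoring method.

```python
import numpy as np

def anomaly_score(measured: np.ndarray, reconstructed: np.ndarray,
                  window: int = 50) -> np.ndarray:
    """Rolling mean-squared error between measured and reconstructed signals."""
    squared_error = (measured - reconstructed) ** 2
    kernel = np.ones(window) / window
    return np.convolve(squared_error, kernel, mode="same")

# Stand-in signals: a clean reconstruction vs. a measurement with a fault
t = np.linspace(0, 20, 2000)
reconstructed = np.sin(t)
measured = np.sin(t) + 0.02 * np.random.randn(2000)
measured[1200:1250] += 1.5  # injected anomaly

scores = anomaly_score(measured, reconstructed)
flagged = scores > 0.05  # one simple threshold on a single variable
print(f"{flagged.sum()} samples flagged around index {np.argmax(scores)}")
```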
At Monolith, we recognised how much engineering lab managers need this technology. For the past few years, we have tailored our algorithms to analyse complex signals and find errors in engineering data ever more efficiently.
We now enable engineers to input their data into our platform and have all the tools needed to quickly visualise which combination of test channel and test falls outside the expected boundaries and may warrant a more in-depth look.
We package that up in a heatmap format, which summarises where the highest anomaly scores were detected and where a test engineer may want to investigate further.
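To illustrate the kind of summary view we mean, the following sketch builds a channel-by-test heatmap from precomputed scores. The channel names, test labels, and random scores are placeholders standing in for real results.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical summary: peak anomaly score per (channel, test) pair
channels = ["voltage", "current", "temp_1", "temp_2"]
tests = [f"test_{i}" for i in range(1, 9)]
rng = np.random.default_rng(0)
scores = rng.random((len(channels), len(tests)))  # stand-in for real scores

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="Reds", aspect="auto")
ax.set_xticks(range(len(tests)), labels=tests, rotation=45, ha="right")
ax.set_yticks(range(len(channels)), labels=channels)
fig.colorbar(im, ax=ax, label="peak anomaly score")
ax.set_title("Where to look first: anomaly scores by channel and test")
plt.tight_layout()
plt.show()
```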
Now that we understand how anomaly detection could work, let’s compare these two options:
| Topic | Manual Inspection | Monolith Anomaly Detector |
|---|---|---|
| Approach to examining data | Gut feeling of the test lab operator; manual inspection of signals within the test equipment software, or exporting data to visualise in Python/Excel. | Machine learning trains on "golden run" (known-good) data so it can learn to spot when signals fall outside expectations. |
| How anomalies are detected | The test manager decides based on "gut feel" and experience during manual inspection of the plots. | Based on the tuning of the anomaly score, the algorithm flags when actual signals fall outside the predicted ranges. |
| Completeness of analysis | The engineer may look at small portions of the data rather than the entire test signal; errors or anomalies can be missed if they fall outside the region being examined. | The trained autoencoder can be deployed across the entire signal, all test channels, and all test iterations to ensure completeness of analysis. |
| Objectivity of what counts as an "anomaly" | Subjective choice based on experience. | Objective, based on the parameters selected during training of the neural network. |
| Speed of analysis | Depending on how comprehensive the analysis needs to be, examining all signals and channels can take hours to days. | With the model already trained, the process can take under 10 minutes for more than 10 hours of data across multiple channels. |
| Opportunity cost of time spent | The test manager must dedicate hours to analysing the test data. | With the analysis completed far faster, the test manager can move on to more valuable activities. |
| Visualisation | Complex data must be plotted and channels inspected individually. | All the information is summarised in a heatmap for faster visual inspection. |
| Multi-channel detection | Not possible, or very challenging. | Machine learning-based anomaly detection can compare signals across channels to identify multivariate anomalies. |
A European racing team capturing vehicle dynamics data at the test track is now working more efficiently with AI. In the high-pressure environment of race conditions, they must quickly interpret data to assess whether the car is performing optimally. Sensor or measurement failures could cloud their view of performance, so accuracy is crucial.
By using deep-learning anomaly detection tools within Monolith, the team can rapidly analyse hundreds of data channels and identify errors in just minutes.
Through training models during practice laps, they’ve automated the inspection and validation process, reducing manual effort and enabling them to detect more issues, faster. In initial testing, the anomaly detector identified over 90% of known errors.
This automation has been a game-changer. The team can now identify performance issues early and make immediate adjustments.
Deep-learning models help detect sensor and data problems, such as spikes, dropouts, misalignments, and drift, ensuring smooth tests and providing accurate insights.
The future of engineering test labs lies in automation. Machine learning-based tools like the Monolith Anomaly Detector handle increasing data complexity and volume.
Monolith's Test Data Validation Module, refined through real-world applications, consistently finds over 90% of known errors, enabling faster and more reliable data inspection.
Learn more about how Monolith can enhance your testing efficiency and data quality.