
Bringing AI Analytics closer to Sensors for Real-time Safety-critical Systems

2021-05-20 | Omair Sarwar, PhD | Academic and professional communities around the world are researching different aspects of Artificial Intelligence (AI) and sharing their results (i.e. AI models), highlighting improved accuracies evaluated on public or private datasets. However, deploying such AI models in real-time safety-critical systems often faces a huge set of challenges.


Real-time means that a late response of a system is considered wrong; for most of our use cases the permitted response time is very low. Safety-critical means that a failed system response would cause severe injury (or even death) to human beings or major damage to infrastructure. The main reason deployment is so challenging is that academic datasets are usually collected in controlled environments and do not cover all the scenarios an AI model could encounter in real life. Moreover, such AI models typically have access to huge computational resources and are developed without any real-time or safety-critical constraints.


Deploying AI models in real-time safety-critical use cases is a challenging task. The main objective of this process is to find an AI model with an optimal trade-off between accuracy, latency, and memory footprint, so that the optimized AI model can run on a standalone embedded system. We call this process AI model optimization for real-time safety-critical embedded systems. In this blog, we discuss different strategies for AI model optimization and highlight ME's expertise in this domain.

Challenges for Real-time Safety-critical AI

A real-time safety-critical AI system poses several challenges, which need to be addressed during the design, development, and testing phases of an AI project:



  1. Due to real-time & dependability constraints, raw data cannot be sent to a cloud backend for analytical purposes and must be locally processed, preferably on the node itself.
  2. The computational latency of the AI model on the node must be lower than the required system response time; otherwise the system would fail even if the model computes the right decision.
  3. Any effort to improve latency should not affect the AI model’s accuracy, otherwise the whole objective of deploying it would be undermined.
  4. The AI model should be capable of making the right decision under extreme, designated operating conditions and disturbances, which can also occur instantaneously, such as entering a tunnel on a sunny day.
  5. Finally, selecting the optimal sensor(s) and developing pre-processing pipelines that remove unwanted noise from the data are important prerequisites for fulfilling the overall objectives of a real-time safety-critical AI system.
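Challenge 2 above, the latency budget, can be verified with a simple measurement harness. The sketch below is illustrative only: the 50 ms deadline and the trivial `infer` stand-in are assumptions, not values from a real system.

```python
import time

def meets_latency_budget(infer, sample, deadline_ms, runs=100):
    """Time repeated inference calls and check that the worst case
    stays under the system's response deadline."""
    worst_ms = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        infer(sample)
        worst_ms = max(worst_ms, (time.perf_counter() - start) * 1000.0)
    return worst_ms, worst_ms < deadline_ms

# Toy stand-in for a real model: a trivial computation.
worst_ms, ok = meets_latency_budget(lambda x: x * 2, 3, deadline_ms=50.0)
print(ok)  # True: a trivial call finishes well under 50 ms
```

On a real node one would measure the full pre-processing and inference pipeline under worst-case load, not a single function in isolation.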


Figure 1: Pruning to optimize an AI model. (Left) A fully connected Multilayer Perceptron (MLP). (Right) An optimized MLP with pruned neurons and connections, which consequently requires fewer computations.
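The pruning shown in Figure 1 can be sketched in a few lines. This minimal example applies per-weight magnitude pruning to a random matrix standing in for a trained layer; real pipelines typically prune whole neurons or channels, as the figure depicts.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights. Figure 1 prunes whole
    neurons; per-weight magnitude pruning shown here is the simplest
    variant of the same idea."""
    w = weights.copy()
    k = int(w.size * sparsity)
    if k == 0:
        return w
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    w[np.abs(w) <= threshold] = 0.0
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))          # toy weight matrix of one layer
pruned = magnitude_prune(w, 0.5)
print((pruned == 0).mean())          # 0.5: half of the weights are now zero
```

The zeroed weights can then be skipped at inference time by a sparse-aware runtime, which is where the latency saving actually comes from.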

Optimization of AI models

Optimization of AI models is essential for real-time safety-critical systems because an optimized AI model will not only take less time for inference, but also occupy less memory with negligible (or no) effect on the accuracy.

There are two strategies for developing an optimized AI model for a given use case: (1) optimization-aware training, and (2) post-training optimization.


Optimization-aware training searches for an optimal model architecture, i.e. the AI model's width, depth, and number of channels in each layer, during the training process. Moreover, these strategies can also be constrained to find an optimal weight quantization for the AI model.
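As a minimal sketch of the quantization-aware idea, the toy below trains a two-weight linear model while the forward pass sees fake-quantized weights (the straight-through-estimator heuristic); the model, learning rate, and bit-width are all illustrative assumptions, not a production recipe.

```python
import numpy as np

def fake_quantize(w, bits=8):
    """Simulate low-precision weights in the forward pass so training
    learns to tolerate the rounding error (quantization-aware step)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

# Toy linear model y = w @ x trained with fake-quantized weights;
# gradients pass "straight through" the rounding, a common heuristic.
w = np.array([0.3, -0.7])
x, y_true = np.array([1.0, 2.0]), 1.0
for _ in range(200):
    y = fake_quantize(w) @ x              # forward pass sees quantized w
    w -= 0.05 * 2 * (y - y_true) * x      # gradient step on the float w
print(abs(fake_quantize(w) @ x - y_true) < 0.05)  # True
```

Because the model was trained against its own quantization noise, the quantized weights can be deployed directly with little accuracy loss.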

Post-training optimization strategies usually take an off-the-shelf AI model and first apply transfer learning to train it for the given use case, irrespective of how complex the model is. After training, the AI model is optimized using different techniques, e.g. weight quantization, weight clustering, channel pruning, network scaling, or shunt connections; see Figure 1 for an example of post-training optimization.
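Of these techniques, post-training weight quantization is the easiest to sketch. The example below uses symmetric per-tensor int8 quantization on random stand-in weights; real toolchains offer further variants (per-channel scales, asymmetric ranges).

```python
import numpy as np

def quantize_int8(w):
    """Post-training weight quantization: store int8 values plus one
    float scale (symmetric, per-tensor -- one common scheme)."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(4, 4)).astype(np.float32)  # "trained" weights
q, s = quantize_int8(w)
w_hat = q.astype(np.float32) * s        # dequantize for comparison
print(w.nbytes // q.nbytes)             # 4: int8 storage is 4x smaller
print(float(np.max(np.abs(w - w_hat))) <= s / 2 + 1e-6)  # True: error within half a step
```

The 4x memory reduction comes for free after training; the accuracy impact must still be checked on a representative validation set.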


Optimization-aware training techniques are complex and require a high level of expertise, while post-training optimization procedures are comparatively simple but may not give optimal results for the given use case. Once the AI model is optimized, it is deployed in the field and tested against the required acceptance criteria. AI model optimization is usually an iterative process, as it requires fine-tuning hyperparameters for the given application.
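The iterative nature of this process can be sketched as a simple search loop. Everything here is hypothetical: `evaluate` stands in for a full retrain-and-test cycle, and the accuracy numbers are invented for illustration.

```python
def tune_sparsity(evaluate, candidates, min_accuracy):
    """Iterate over increasingly aggressive sparsity levels and keep
    the last one that still meets the accuracy acceptance criterion."""
    best = None
    for sparsity in sorted(candidates):
        if evaluate(sparsity) >= min_accuracy:
            best = sparsity
        else:
            break  # accuracy degraded too far; stop pruning harder
    return best

# Hypothetical accuracy curve: accuracy drops as sparsity grows.
accuracy_at = {0.3: 0.95, 0.5: 0.93, 0.7: 0.88, 0.9: 0.70}
print(tune_sparsity(accuracy_at.get, accuracy_at, min_accuracy=0.90))  # 0.5
```

In practice each iteration is expensive (retraining plus field testing), which is why the trade-off search benefits from experience and automation.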

ME expertise for Real-time Safety-critical AI

ME has successfully completed several projects involving AI for real-time safety-critical applications and has filed several patents.

ME areas of expertise: 

  • conducting AI feasibility studies for a given use case,
  • evaluating and proposing AI solutions,
  • selecting the best sensors for the given use case,
  • conducting data recording campaigns,
  • training and optimizing AI models, and
  • rigorously testing AI models according to industry standards (e.g. EN 50657).

Most importantly, ME has expertise in developing optimized AI models, employing state-of-the-art techniques including both optimization-aware training and post-training optimization. For example, in one of our previous projects, we optimized AI models to make them 2-4 times faster without significant loss of accuracy and achieved the required system latency.



ME can develop customized AI models that are optimized and fine-tuned for the given use case.



Get in touch with us to discuss your project idea!