Testing and validation of interior sensing — traditional techniques versus virtualization

Oct 9, 2023 by Dr. Wolfgang Stolzmann


In brief

  • Classic real-world testing is expensive and time-consuming
  • As vehicles and legal regulations evolve, the demands for testing increase dramatically
  • New tools using virtualization can overcome the challenge of testing, providing huge savings


In modern cars, interior sensing systems that monitor drivers and passengers are an increasingly important part of driving assistance. However, new regulations are making the testing of such systems more expensive in terms of both money and time. Thankfully, virtualization is a promising new approach to the testing and validation of in-cabin monitoring systems: it can reduce costs while fulfilling the requirements of new regulations.


End-to-end testing


At Luxoft’s center of competence for interior sensing, we offer end-to-end solutions for the testing of in-cabin monitoring systems. Whether you have to test a driver monitoring system (DMS) or an occupant monitoring system (OMS), we can perform the complete testing process: From consulting to ground-truth system setup, measurement-system integration, data collection, annotation and KPI analysis. We support the entire development process, from the definition of requirements to the homologation of the final system in a series-production vehicle, validating both the basic signals from image processing and the customer functions.


Customer functions


Customer functions can be divided into two groups: Mandatory warning systems and value-added functions.

Mandatory warning systems may differ according to market regulations. For example, you’ll find systems like a Driver Distraction and Attention Warning (DDAW) or an Advanced Driver Distraction Warning (ADDW) for the European market, whereas for the Chinese market a Driver Attention Monitoring System (DAMS) is needed. Whichever system is used, it must fulfill the legal requirements while remaining as inexpensive as possible.

Value-added functions, on the other hand, will change the user interface (UI) and user experience (UX) in the coming years. For example, customer functions like video displays for passengers, personalization, gaze control, augmented-reality head-up displays and autonomous driving at SAE level 3 will all need a DMS or OMS before they can be implemented.

For both groups of customer functions, the testing of in-cabin monitoring systems must be state of the art and as precise as possible, while still being cost effective.


The traditional technique: Head GT


We provide the latest technology to deliver precise ground-truth (GT) data together with video data, tailored to the specific requirements of our clients. Our classic GT system for the 6D-head pose (with x, y, z, pitch, yaw and roll as dimensions) is Head GT. It has been in our portfolio since 2018 and is still updated and maintained.
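To make the 6D convention concrete, the sketch below builds a homogeneous transform from the six pose dimensions. This is purely illustrative and not Head GT code; in particular, the rotation order (yaw, then pitch, then roll) is an assumption, since every GT system documents its own angle convention.

```python
import math

def head_pose_matrix(x, y, z, pitch, yaw, roll):
    """Build a 4x4 homogeneous transform from a 6D head pose.

    Angles are in radians. The intrinsic Z-Y-X rotation order used here
    (yaw about the vertical axis, pitch about the lateral axis, roll
    about the longitudinal axis) is an illustrative assumption.
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # 3x3 rotation part, composed as Rz(yaw) @ Ry(pitch) @ Rx(roll)
    rot = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    # Append the translation column and the homogeneous bottom row
    return [rot[0] + [x], rot[1] + [y], rot[2] + [z], [0.0, 0.0, 0.0, 1.0]]
```

With all angles at zero, the rotation part is the identity and the pose reduces to a pure translation of the head in the vehicle frame.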

Head GT consists of a motion-capture system and a head target. The motion-capture system is usually mounted by the side window of the passenger seat, replacing the passenger assist handle. The head target, a headband with passive near-infrared-reflective markers, weighs less than 115 grams and is comfortable to wear, even during long-term measurements.


An engineer wearing the head target for capturing ground truth data

For more details about Head GT, check out this explanatory video from our subsidiary.


Persisting challenges


Despite the success of our Head GT, some challenges remain extremely difficult to overcome when it comes to handling GT data. For example, GT data for eye opening, a mandatory signal for drowsiness detection, is usually generated through time-consuming and extremely expensive manual frame-by-frame labeling, performed by hundreds of employees. Additionally, GT data for gaze tracking, mandatory for distraction detection and numerous value-added functions, is still a challenge. Sometimes, multiple camera systems are deployed as GT for eye opening and gaze, but they use DMS and OMS algorithms very similar to those of the devices under test (DUTs). This can lead to uncontrollable effects during validation.


Virtual validation for interior sensing: The versatile newcomer


To overcome these persistent challenges, we started down a new path in early 2020: Virtual validation. After intensive development and constant improvement, we’re now able to provide virtual testing that saves time and money during development.

The toolchain

Our new simulation toolchain is a product for testing and training of DMS and OMS algorithms in a virtual environment. It’s able to create test scenarios for DDAW, ADDW and DAMS. In short, it offers virtual validation for interior sensing.

The toolchain’s main advantages are:

  • Offers a significant reduction of real-world testing
  • Provides a huge range of configurable scenarios
  • Delivers automatically generated GT data needed for testing and training

The setup: Avatars and scenarios

When creating a test scenario, first an environment and a vehicle are selected. Then, a driver and optional passengers can be selected from a database of photorealistic avatars. The avatars represent different ethnicities, skin types, genders and ages. For each avatar, the appearance (e.g., hairstyle, beard or make-up) can be changed. Accessories (e.g., caps, hats, face masks, glasses or scarves) are also available. Both realistic and artificial movements of head, mouth, eyes, arms and upper body can be selected and combined.


Examples of avatars created for virtual validation

Distraction scenarios (like eating, drinking, smoking, phone calls or interaction with passengers) are pre-defined and can be varied as needed. Camera parameters and camera positions can be freely configured.

Scenarios are generated with a scenario description file in YAML format. Then, with a scenario or a set of scenarios defined, the toolchain renders the corresponding videos and calculates the GT data for DMS and OMS algorithms.
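To illustrate the configure-then-render flow, the sketch below assembles such a scenario description in Python. Every field name, avatar ID and value here is hypothetical, since the toolchain’s actual schema is not public; the point is only how environment, vehicle, avatar, distraction and camera settings combine into one machine-readable file.

```python
import json

def make_scenario(environment, vehicle, driver_avatar, distraction, camera):
    """Assemble a scenario description as a plain dict.

    All field names are hypothetical placeholders; the real toolchain
    defines its own YAML schema.
    """
    return {
        "environment": environment,
        "vehicle": vehicle,
        "driver": driver_avatar,
        "distraction": distraction,
        "camera": camera,
    }

scenario = make_scenario(
    environment="highway_day",
    vehicle="sedan_generic",
    driver_avatar={"id": "avatar_017", "accessories": ["glasses"], "hairstyle": "short"},
    distraction={"type": "phone_call", "start_s": 12.0, "duration_s": 8.0},
    camera={"position_mm": [350, 0, 720], "fov_deg": 60},
)

# YAML 1.2 is a superset of JSON, so this JSON dump is already a
# document that any YAML parser can read.
print(json.dumps(scenario, indent=2))
```

In a real pipeline, a batch of such files would be fed to the renderer, which returns the videos plus the automatically generated GT for each frame.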

Benefits of virtual validation

With conventional testing, OEMs have to hire hundreds of drivers to collect up to 600 hours of test data. Currently, testing is done with expensive test vehicles and complex measurement technology, and data collection can take several months. With our virtual validation approach, we need neither test vehicles nor measurement technology: all the necessary data can be generated within a couple of days. If you’d like to hear how you could benefit from virtual validation, contact us for more information.

Stay tuned for the second part of this blog post series, where we’ll give additional insights into how to master the transition from driver to occupant monitoring.


Dr. Wolfgang Stolzmann, Head of CoC Interior Sensing

As a director in Luxoft Automotive, Wolfgang leads the center of competence for interior sensing. With his 20+ years of experience in research, pre-development and series development in the automotive industry, he brings together advanced driver assistance systems (ADAS) with human factors and human-machine interaction (HMI). Prior to Luxoft, he developed the algorithms for the Mercedes Attention Assist and was responsible for the development of driver monitoring systems for autonomous driving at SAE level 3. At Luxoft, Wolfgang and his team work on the end-to-end validation of interior sensors, focusing on virtual testing.
