I-AIMS2 (Impairment-Aware Intelligent Mobility Systems 2)
Short description
In collaboration with SmartEye AB, the I-AIMS2 project explores how driver safety, well-being, and fuel efficiency can be improved by monitoring the driver and using a large language model (LLM) with voice feedback to support and regulate the driver’s mental and emotional state.
The LLM, called Sheila-Guard, is being developed for in-vehicle use and will be tested both in driving simulators and in a physical demonstrator car. The project will define safe operational limits for Sheila-Guard and examine how it affects driver cognition in different driving scenarios – including those using an embodied interface in the form of a robotic head.
Building on insights from the earlier I-AIMS (Step 1) project, I-AIMS2 will also investigate how AI-driven voice feedback can influence driver performance and fuel efficiency, contributing to safer and more sustainable mobility.
Background
According to the WHO, as many as 1.3 million people worldwide die each year as a result of road traffic accidents. One means of mitigating accident risk is the use of Driver Monitoring Systems (DMS) to evaluate driver state. Such systems can monitor drivers for distraction, drowsiness, stress, negative affective state, and general cognitive impairment, as well as other behaviours that may indicate elevated accident risk. DMS rely on biometric measures of driver state, e.g. eye-tracking parameters and affective expression recognition software, and can be combined with thermal sensing and electrodermal activity sensing.
By evaluating cognitive-affective state, DMS can trigger corrective measures to alleviate negative driver states. Corrective measures can be delivered as natural-language feedback through a Large Language Model (LLM) interface, both to anticipate stressful events and to promote fuel-efficient driving behaviour. In stressful situations, LLM feedback can provide quick, accessible, and predictive information about external safety-critical events that affect stress, and can regulate driver state through context-appropriate feedback that is calming (when the driver is stressed) or stimulating (when the driver is drowsy).
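As a minimal illustration of the state-regulation logic described above, the sketch below maps fused stress and drowsiness scores to a feedback mode for the voice interface. All names, thresholds, and mode labels here are hypothetical assumptions for illustration, not part of the actual Sheila-Guard design.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    stress: float      # 0..1, e.g. fused from electrodermal and expression signals
    drowsiness: float  # 0..1, e.g. fused from eye-tracking parameters

def select_feedback_mode(state: DriverState,
                         stress_threshold: float = 0.6,
                         drowsiness_threshold: float = 0.6) -> str:
    """Pick a context-appropriate voice-feedback mode (illustrative only)."""
    # Stress takes precedence when both scores are high, on the assumption
    # that calming feedback is the safer default in a safety-critical moment.
    if state.stress >= stress_threshold:
        return "calming"       # de-escalating content, slower pacing
    if state.drowsiness >= drowsiness_threshold:
        return "stimulating"   # alerting, more engaging content
    return "neutral"           # routine information only
```

A selector like this could feed the chosen mode into the LLM prompt, so the same underlying model produces differently styled voice feedback depending on driver state.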
By alleviating the stress component, we aim to reduce fuel consumption, since stressed drivers often keep a short headway (e.g. tailgating) or drive too fast and erratically.
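The headway component is simple arithmetic: headway time is the gap to the lead vehicle divided by own speed. The sketch below uses an assumed 2-second minimum headway, a common rule of thumb rather than a project specification.

```python
def headway_seconds(gap_m: float, speed_mps: float) -> float:
    """Time headway to the lead vehicle; infinite when standing still."""
    return gap_m / speed_mps if speed_mps > 0 else float("inf")

def is_tailgating(gap_m: float, speed_mps: float,
                  min_headway_s: float = 2.0) -> bool:
    """Flag a short headway (threshold is an illustrative assumption)."""
    return headway_seconds(gap_m, speed_mps) < min_headway_s

# A 20 m gap at 25 m/s (90 km/h) gives a 0.8 s headway, well under 2 s.
print(is_tailgating(gap_m=20.0, speed_mps=25.0))  # True
```

Such a flag is one plausible input a DMS could combine with stress measures when deciding whether feedback is warranted.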
Purpose
The I-AIMS2 project will investigate, in a demonstrator vehicle, how safety, well-being, and fuel economy can be improved by monitoring the driver and allowing a large language model (LLM), with voice feedback, to regulate the driver's cognitive-affective-risk state. The project will evaluate SmartEye's "Sheila-Guard" LLM integrated into their driver monitoring system based on: a) in-the-wild driver data, b) simulation testing, c) embodied LLM interfaces (including a small robot head).
Research questions
- Under what conditions does LLM feedback mitigate stress in drivers?
- Can a robot head interface facilitate such stress mitigation and feedback comprehension?
- What are the safe operational boundaries of LLM use?
- To what extent can LLM feedback be used to increase fuel efficiency?
Methods
Two iterations of the project will be conducted to test Sheila-Guard in real-world environments and to evaluate its operational limits in controlled (simulated) environments. Interface designs (including robot heads) are tested both in real-world environments and against safe operational limits. Insights from I-AIMS (Step 1) form the basis for the initial setup. Fuel efficiency is also evaluated in relation to the impact of LLM feedback on driver performance. For insight into some of the methods we will use, we refer the reader to an article on exploratory findings of LLM use in driving simulation from the I-AIMS (Step 1) project: https://link.springer.com/chapter/10.1007/978-3-031-92692-1_7
Conference paper in Springer Nature