Using eye tracking to study the takeover process in conditionally automated driving and piloting systems
Date
2025-10-08
Authors
Advisor
Samuel, Siby
Cao, Shi
Publisher
University of Waterloo
Abstract
In a conditionally automated environment, human operators are often required to resume manual control when the autonomous system reaches its operational limits, a process referred to as takeover. This process can be challenging, as operators must quickly perceive and comprehend critical system information and successfully resume manual control within a limited amount of time. Following a period of autonomous control, operators’ Situation Awareness (SA) may be compromised, potentially impairing their takeover performance. Consequently, investigating approaches to enhance the safety and efficiency of the takeover process is essential. The eyes play a vital role in an individual’s information gathering, and eye tracking techniques have been widely applied in previous takeover research. The current study aims to enhance the takeover procedure by utilizing operators’ eye tracking data, analyzed with machine learning techniques and statistical approaches in the driving and piloting domains.
Simulation experiments were conducted in two domains: a Level-3 semi-autonomous vehicle in the driving domain and an autopilot-assisted aircraft landing scenario in the piloting domain. In both domains, operators’ eye tracking data and simulator-derived operational data were recorded during the experiments. The eye tracking data underwent two categories of feature extraction: eye movement features linked predominantly to fixations and saccades, and Area-of-Interest (AOI) features indicating the AOI in which the gaze was located. Eye tracking features were analyzed using both traditional statistical techniques and machine learning models. Key features included fixation-based metrics and AOI features such as dwell time, entry count, and gaze entropy. Operators’ SA and takeover performance were measured by a series of domain-specific metrics, including the Situation Awareness Global Assessment Technique (SAGAT) score, Hazard Perception Time (HPT), Takeover Time (TOT), and resulting acceleration.
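As a minimal illustration of one of the AOI features named above, stationary gaze entropy can be computed as the Shannon entropy of the distribution of gaze samples over AOIs. This sketch does not reproduce the thesis’s actual pipeline; the function name and AOI labels are hypothetical.

```python
import math
from collections import Counter

def stationary_gaze_entropy(aoi_sequence):
    """Shannon entropy (in bits) of the gaze-sample distribution over AOIs.

    Higher values indicate more distributed, exploratory scanning;
    lower values indicate focused, predictable gaze behaviour.
    """
    counts = Counter(aoi_sequence)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical AOI labels sampled from a gaze stream
gaze = ["speedometer", "road", "road", "mirror", "road", "speedometer"]
print(round(stationary_gaze_entropy(gaze), 3))  # → 1.459
```

A gaze confined to a single AOI yields an entropy of 0, while a gaze spread evenly over N AOIs yields log2(N) bits, the maximum for that number of regions.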
This thesis addresses three research topics, each comprising one driving study and one piloting study. In Topic 1, significant differences in eye movement patterns were found between operators with higher versus lower SA, as well as between those with better and worse takeover performance. Beyond the notable differences across AOIs in the three pre-defined time windows (TWs), in the driving domain, drivers with better SA and better takeover performance showed inconsistent eye movement patterns after the Takeover Request (TOR) and before they perceived hazards. In the piloting domain, pilots with shorter TOT showed more distributed and complex eye movement patterns before the malfunction alert and after resuming control. During the intervening period, their eye movements were more focused and predictable, indicating fast identification of the necessary controls with minimal visual search. In Topic 2, significant differences in eye movement patterns were observed between younger and older drivers, as well as between learner and expert pilots. In the driving domain, older drivers exhibited more extensive visual scanning, indicating difficulty in effectively prioritizing information sources under time pressure. In the piloting domain, expert pilots not only allocated more attention to critical instrument areas but also dynamically adjusted their scanning behavior based on the current task. In Topic 3, machine learning models trained on eye tracking features successfully performed binary classification for both SA-related and takeover-performance-related metrics. Model performance was evaluated using standard classification metrics, including accuracy, precision, recall, F1-score, and Area Under the ROC Curve (AUC).
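For readers unfamiliar with the evaluation metrics named above, they can all be derived from the confusion matrix of a binary classifier. The sketch below is illustrative only; it does not reproduce the thesis’s models or data, and the labels shown are hypothetical.

```python
def binary_classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical example: 1 = "high SA", 0 = "low SA"
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
print(binary_classification_metrics(y_true, y_pred))
```

AUC is computed differently, from the classifier’s continuous scores rather than its hard labels, which is why it is reported alongside these threshold-based metrics.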
Finally, comparisons were made across Topics 1 and 2, as well as between the driving and piloting domains. The results suggest that more proficient operators can flexibly adapt their gaze strategies to meet task demands, shifting between broad visual scanning and focused searching as appropriate. This shift in patterns underscores the importance of accounting for the specific time window (TW) when interpreting operators’ eye movements. Overall, this thesis advances the understanding of eye movement patterns during the takeover process by exploring a range of eye tracking features. The findings support the development of operator training programs and the design of customized interfaces to enhance the safety and efficiency of the takeover process.
Keywords
Eye tracking, Takeover, Conditionally autonomous system