1-10 of 652 publications

A machine learning study for predicting driver goals in contingencies with leading and lagging features during goal determination

2024 · Artificial Intelligence, Cognitive Psychology, Driving · Core
Hsueh-Yi Lai · Expert Systems with Applications
Many studies have focused on decision support systems that enhance both the efficiency and safety of driving. They have also explored the potential of real-time psychological data and machine learning in predicting drivers’ cognitive state, such as their fatigue levels, drowsiness, or workload. However, few studies have investigated the prediction of driving goals as a cognitive outcome. Early prediction plays an essential role in providing active decision support during driving events under time pressure. In this study, machine learning algorithms and features associated with different phases of decision-making were used to predict two common driving goals: defensive driving in emerging scenarios and urgent reactions in nonroutine scenarios. The effects of perception-, reflex-, control-, and kinetic-related features and how they contribute to prediction in the context of decision-making were analyzed. A total of 49 individuals were recruited to complete simulated driving tasks, with 237 events of defensive driving and 271 events of urgent reactions identified. A naïve Bayes classifier achieved the highest recall in indicating the onset of decision-making, while extreme gradient boosting and random forests exhibited superior precision in predicting defensive driving and urgent reactions, respectively. Additionally, a cutoff at the initial 0.4 s of each event was identified. Before the cutoff, the leading features were reflex- and control-related features, reflecting the drivers’ immediate reactions before scenario evaluation and goal determination. These leading features contributed to superior prediction results for the two types of driving goals, indicating the likelihood of early detection. After the cutoff, model performance decreased, and lagging features came into play. These lagging features comprised perception- and kinetic-related features, reflecting observation of cues and the outcomes of inputs delivered to the vehicle. Within the first 2 s, the predictive models recovered and stabilized.
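
A minimal sketch, not the authors' code, of the kind of classifier comparison described above: three classifiers are scored on recall and precision over a set of labelled events, with synthetic placeholder features standing in for the reflex-, control-, perception-, and kinetic-related features, and scikit-learn's gradient boosting standing in for extreme gradient boosting.

```python
# Illustrative sketch only; synthetic placeholder data, not the study's features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(508, 12))        # 508 events (237 + 271) x 12 placeholder features
y = rng.integers(0, 2, size=508)      # 1 = urgent reaction, 0 = defensive driving (placeholder labels)

models = {
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    # GradientBoostingClassifier used here as a stand-in for extreme gradient boosting (XGBoost)
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=("recall", "precision"))
    print(f"{name}: recall={scores['test_recall'].mean():.2f}, "
          f"precision={scores['test_precision'].mean():.2f}")
```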

Real-Time Conversational Gaze Synthesis for Avatars

2023 · Applied Psychology · Core
Ryan Canales; Eakta Jain; Sophie Jörg · MIG '23: The 16th ACM SIGGRAPH Conference on Motion, Interaction and Games
Eye movement plays an important role in face-to-face communication. In this work, we present a deep learning approach for synthesizing the eye movements of avatars for two-party conversations and evaluate viewer perception of different types of eye motions. We aim to synthesize believable gaze behavior based on head motions and audio features as they would typically be available in virtual reality applications. To this end, we captured the head motion, eye motion, and audio of several two-party conversations and trained an RNN-based model to predict where an avatar looks in a two-person conversational scenario. We evaluated our approach with a user study on the perceived quality of the eye animation and compared our method with other eye animation methods. While our model was not rated highest, our model and our user study lead to a series of insights on model features, viewer perception, and study design that we present.
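
A minimal PyTorch sketch of the general approach, an RNN mapping per-frame head-motion and audio features to a gaze target; the layer sizes, feature dimensions, and 2-D gaze output are assumptions, not the authors' architecture.

```python
# Illustrative sketch of an RNN-based gaze model; not the authors' architecture.
# Assumes per-frame head-motion features (e.g. rotations) and audio features
# (e.g. mel energies) have already been extracted for each time step.
import torch
import torch.nn as nn

class GazeRNN(nn.Module):
    def __init__(self, head_dim=6, audio_dim=26, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(head_dim + audio_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)   # 2-D gaze target (e.g. yaw, pitch), assumed output

    def forward(self, head_feats, audio_feats):
        x = torch.cat([head_feats, audio_feats], dim=-1)   # (batch, time, head+audio)
        h, _ = self.rnn(x)
        return self.out(h)                                 # (batch, time, 2)

model = GazeRNN()
head = torch.randn(4, 100, 6)     # 4 sequences, 100 frames of head-motion features
audio = torch.randn(4, 100, 26)   # matching audio features
pred_gaze = model(head, audio)    # predicted gaze direction per frame
```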

Objective Measures of Gaze Behaviors and the Visual Environment during Near-Work Tasks in Young Adult Myopes and Emmetropes

2023 · Ophthalmology · Core
Scott A. Read; David Alonso-Caneiro; Hosein Hoseini-Yazdi; Yan Ki Lin; Trang T. M. Pham; Rafael I. Sy; Alysha Tran; Yiming Xu; Rina Zainudin; Anjali T. Jaiprakash; Hoang Tran; Michael J. Collins · Translational Vision Science & Technology

Protecting the children – a virtual reality experiment on consumers’ risk perceptions of household chemicals

2023 · Ergonomics, VR/AR · VR
Angela Bearth; Gioia Köppel; Nicole Schöni; Sandro Ropelato; Michael Siegrist · Applied Ergonomics

Evaluating Workload Indicators for Learning During Stress Exposure Training of Endotracheal Intubation

2023 · Clinical · Invisible
Gabriel Gazetta; Chloe Miller; Brian Clemency; Kaori Tanaka; Matthew Hackett; Jack Norfleet; Rahul; Steven D. Schwaitzberg; Suvranu De; Lora Cavuoto · Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Endotracheal intubation (ETI) is an important procedure of point-of-injury care and emergency medicine. Although ETI is a complex procedure with possible stressful conditions demanding higher levels of mental workload, traditional training methods do not involve stress exposure training. This study aims to evaluate potential workload indicators for stress exposure training for ETI. Twelve participants executed intubation tasks in three separate visits and were exposed to auditory and visual stressors. Participants were instrumented with eye-tracking glasses and a heart rate monitor. Participants rated their perceived workload using the NASA-Task Load Index scale. When comparing the first repetitions during the first visit to the last repetitions on the last visit, participants expressed a significant improvement in performance, reduction in perceived workload, and smaller differences in heart-rate variability between rest and task execution. Results demonstrated the potential effectiveness of stress exposure training in improving performance and reducing mental workload.
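
One of the physiological indicators mentioned above, heart-rate variability, is often summarized as RMSSD over RR intervals; a short sketch with placeholder data (not the study's recordings) shows the rest-versus-task comparison.

```python
# Illustrative sketch: RMSSD heart-rate variability from RR intervals (ms).
# The arrays below are placeholders, not study data.
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

rest_rr = [812, 830, 845, 820, 838, 851, 827]   # placeholder rest-phase RR series
task_rr = [640, 655, 648, 660, 642, 651, 646]   # placeholder task-phase RR series

print("RMSSD rest:", round(rmssd(rest_rr), 1), "ms")
print("RMSSD task:", round(rmssd(task_rr), 1), "ms")
print("rest-task difference:", round(rmssd(rest_rr) - rmssd(task_rr), 1), "ms")
```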

Where and how do people search for medical emergency equipment in public buildings?

2023 · Architecture & Design, Clinical · Invisible
Petter Norrblom; Erik Prytz; Carl-Oscar Jonson · Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Exsanguinating trauma is a common cause of death. Placing bleeding control kits in public areas has been suggested as a countermeasure. Similarly, automatic external defibrillators (AEDs) are placed in public areas in case of cardiac arrests. Both severe bleeding and cardiac arrests require rapid care, and people must be able to quickly find relevant emergency equipment. This study explores where and how people search for such equipment. Twenty participants wearing eye-tracking glasses searched for bleeding control kits and AEDs in a public building. The participants visually searched features such as signs with maps, written information, and other emergency equipment. The participants identified elevators and staircases, open areas, entrances, and the reception as places where medical emergency equipment would likely be placed. The results suggest that these features and places may be suitable for medical emergency equipment or directions.

Eye Tracking for Tele-robotic Surgery: A Comparative Evaluation of Head-worn Solutions

2023 · Clinical, Robotics · Core
Regine Büter; Roger D. Soberanis-Mukul; Paola Ruiz Puentes; Ahmed Ghazi; Jie Ying Wu; Mathias Unberath · arXiv
Purpose: Metrics derived from eye-gaze-tracking and pupillometry show promise for cognitive load assessment, potentially enhancing training and patient safety through user-specific feedback in tele-robotic surgery. However, current eye-tracking solutions' effectiveness in tele-robotic surgery is uncertain compared to everyday situations due to close-range interactions causing extreme pupil angles and occlusions. To assess the effectiveness of modern eye-gaze-tracking solutions in tele-robotic surgery, we compare the Tobii Pro 3 Glasses and Pupil Labs Core, evaluating their pupil diameter and gaze stability when integrated with the da Vinci Research Kit (dVRK). Methods: The study protocol includes a nine-point gaze calibration followed by a pick-and-place task using the dVRK and is repeated three times. After a final calibration, users view a 3x3 grid of AprilTags, focusing on each marker for 10 seconds, to evaluate gaze stability across dVRK-screen positions with the L2-norm. Different gaze calibrations assess the calibration's temporal deterioration due to head movements. Pupil diameter stability is evaluated using the FFT of the pupil diameter during the pick-and-place tasks. Users perform this routine with both head-worn eye-tracking systems. Results: Data collected from ten users indicate comparable pupil diameter stability. FFTs of pupil diameters show similar amplitudes in high-frequency components. The Tobii Glasses show more temporal gaze stability than Pupil Labs, though both eye trackers yield a similar 4 cm error in gaze estimation without an outdated calibration. Conclusion: Both eye trackers demonstrate similar stability of the pupil diameter and gaze when the calibration is not outdated, indicating comparable eye-tracking and pupillometry performance in tele-robotic surgery settings.
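
A brief numpy sketch of the two analyses named in the abstract, gaze stability as an L2-norm against a known marker position and an FFT of the pupil-diameter signal; the sampling rate, units, and placeholder signals are assumptions, not the authors' pipeline.

```python
# Illustrative sketch of the two analyses named above; not the authors' pipeline.
import numpy as np

fs = 200.0  # assumed eye-tracker sampling rate in Hz

# Gaze stability: mean L2-norm between gaze samples and a known marker position,
# both expressed in the same screen coordinates (cm assumed here).
gaze_xy = np.random.randn(2000, 2) * 0.5 + np.array([10.0, 5.0])  # placeholder gaze samples
marker_xy = np.array([10.0, 5.0])                                 # known AprilTag position
gaze_error = np.linalg.norm(gaze_xy - marker_xy, axis=1).mean()
print(f"mean gaze error: {gaze_error:.2f} cm")

# Pupil-diameter stability: amplitude spectrum of the mean-subtracted diameter signal.
pupil_mm = 3.5 + 0.05 * np.random.randn(int(60 * fs))   # placeholder 60 s recording
spectrum = np.abs(np.fft.rfft(pupil_mm - pupil_mm.mean()))
freqs = np.fft.rfftfreq(pupil_mm.size, d=1.0 / fs)
high_freq_amp = spectrum[freqs > 4.0].mean()             # arbitrary high-frequency band
print(f"mean high-frequency amplitude: {high_freq_amp:.2f}")
```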

Unlocking Safer Driving: How Answering Questions Help Takeovers in Partially Automated Driving

2023 · Driving · Core
Xiaolu Bai; Jing Feng · Proceedings of the Human Factors and Ergonomics Society Annual Meeting
As vehicle automation increases, drivers' roles shift from active operation to passive monitoring, making it challenging for drivers to effectively take over when automation fails. While a growing body of research focuses on providing assistance to improve driver monitoring behavior and takeover performance, these approaches have been mostly linked to the takeover event itself. The current study proposes to facilitate drivers' awareness during automated driving by adopting strategies from natural conversations, specifically, asking drivers questions while they monitor the vehicle. In this simulated driving study, we investigated how different types of questions affect monitoring behavior and takeover performance with or without takeover notifications. Our preliminary results suggest that driving-related questions could improve drivers’ takeover performance, especially when a takeover notification is absent.

Gaze shifts during wayfinding decisions

2023 · Architecture & Design, Motor Control · Core
Mai Geisen; Otmar Bock; Stefanie Klatt · Attention, Perception, & Psychophysics
When following a route through a building or city, we must decide at every intersection in which direction to proceed. The present study investigates whether such decisions are preceded by a gradual gaze shift in the eventually chosen direction. Participants were instructed to repeatedly follow a route through a sequence of rooms by choosing, in each room, the correct door from among three possible doors. All rooms looked alike, except for a room-specific cue, which participants could associate with the direction to take. We found that on 88.9% of trials, the gaze shifted from the cue to the chosen door by a single saccade, without interim fixations. On the few trials where interim fixations occurred, their spatiotemporal characteristics differed significantly from those expected in the case of a consistent shift. Both findings concordantly provide no support for the hypothesized gradual gaze shift. The infrequent interim fixations might rather serve to avoid large-amplitude saccades between cue and door.
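
A small sketch of how a cue-to-door gaze shift could be classified as a single saccade versus one with interim fixations from an AOI-labelled fixation sequence; the AOI names and example trials are assumptions for illustration, not the study's coding scheme.

```python
# Illustrative sketch: classify cue-to-door gaze shifts from an AOI-labelled
# fixation sequence. AOI names and the example trials are assumptions.
def classify_shift(fixation_aois, cue="cue", doors=("door_left", "door_mid", "door_right")):
    """Return 'direct' if the first door fixation immediately follows the last
    cue fixation (no interim fixations), otherwise 'interim'."""
    last_cue = max(i for i, aoi in enumerate(fixation_aois) if aoi == cue)
    first_door = next(i for i, aoi in enumerate(fixation_aois)
                      if i > last_cue and aoi in doors)
    return "direct" if first_door == last_cue + 1 else "interim"

trial_a = ["cue", "cue", "door_mid"]              # single saccade, no interim fixation
trial_b = ["cue", "wall", "floor", "door_left"]   # interim fixations before the door
print(classify_shift(trial_a))   # -> direct
print(classify_shift(trial_b))   # -> interim
```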

Regulation of pupil size in natural vision across the human lifespan

2023 · Clinical, Cognitive Psychology · Core
Rafael Lazar; Josefine Degen; Ann-Sophie Fiechter; Aurora Monticelli; Manuel Spitschan · bioRxiv
Vision is mediated by light passing through the aperture of the eye, the pupil, which changes in diameter from ~2 to ~8 mm from the brightest to the darkest illumination. In addition, with age, mean pupil size declines. In laboratory experiments, factors affecting pupil size can be experimentally controlled or held constant, but how the pupil reflects changes in retinal input from the visual environment under natural viewing conditions is not clear. Here, we address this question in a field experiment (N=83, 43 female, age: 18-87 years) using a custom-made wearable video-based eye tracker with a spectroradiometer measuring spectral irradiance in the approximate corneal plane. Participants moved in and between indoor and outdoor environments varying in spectrum and engaged in a range of everyday tasks. Our real-world data confirm that light-adapted pupil size is determined by light intensity, with clear superiority of melanopic over photopic units, and that it decreases with increasing age, yielding steeper slopes at lower light levels. We find no indication that sex, iris colour or reported caffeine consumption affect pupil size. Taken together, the data provide strong evidence for considering age in personalised lighting solutions and for using melanopsin-weighted light measures to assess real-world lighting conditions.
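
A hedged sketch of the kind of relationship reported above, regressing pupil diameter on log melanopic illuminance and age; the data are simulated placeholders, and the simple ordinary-least-squares model ignores the interaction and repeated-measures structure of the actual analysis.

```python
# Illustrative sketch only; simulated data, not the study's dataset or model.
# Fits pupil diameter as a function of log10 melanopic illuminance and age.
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_mel = rng.uniform(0, 4, n)    # log10 melanopic illuminance, placeholder range
age = rng.uniform(18, 87, n)      # years, matching the study's reported age range
pupil_mm = 7.0 - 1.0 * log_mel - 0.02 * age + rng.normal(0, 0.3, n)  # assumed coefficients

# Ordinary least squares: pupil ~ intercept + log_mel + age
X = np.column_stack([np.ones(n), log_mel, age])
coef, *_ = np.linalg.lstsq(X, pupil_mm, rcond=None)
print("intercept, light slope, age slope:", np.round(coef, 3))
```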