1-10 of 1081 publications

Comparable effectiveness of risk awareness and perception training (RAPT) in young drivers with diverse socioeconomic status: A driving simulator study

2026 · Driving · Core
Jeffrey Glassman; Yusuke Yamani. Transportation Research Part F: Traffic Psychology and Behaviour
Previous driving simulator and on-road studies have shown that young drivers are poorer at anticipating road hazards than more experienced drivers. Risk Awareness and Perception Training (RAPT) is a training program shown to improve hazard anticipation skills among young drivers. A recent study suggested that RAPT may reduce crashes more effectively among drivers from low-socioeconomic status (SES) backgrounds than those from high-SES backgrounds, implying differential effects of RAPT on accelerating hazard anticipation skills. The present driving simulator experiment directly examined whether RAPT improves latent hazard anticipation differently in young drivers with various SES backgrounds in the United States. Fifty-two participants were randomly assigned to either an active or passive RAPT training group. The active RAPT group completed the full RAPT program, which provided knowledge-based instruction on hazard anticipation, rehearsal of hazard anticipation skills, and feedback on errors. The passive RAPT group completed placebo training that included only the knowledge-based content, without active practice or feedback. Participants drove eight driving scenarios in a high-fidelity driving simulator immediately before and after completing their assigned training. Results showed credible improvements in hazard anticipation performance for the active but not the passive group, suggesting that the active training method was crucial for learning. Low-SES drivers showed improvements in hazard anticipation performance only after completing the active training, whereas high-SES drivers improved following either program. These findings suggest that RAPT is generally effective across SES groups, but active training may be particularly important for enhancing hazard anticipation in low-SES young drivers.

Eye-Guided Human-Robot Collaborative Assembly: A Feasibility Study

2026 · HCI · Neon
Hajime Mizuyama; Eiji Morinaga; Tomomi Nonaka; Toshiya Kaihara; Gregor Von Cieminski; David Romero; Raquel Quesada Díaz; Álvaro Ballesteros Martín; Frank Luque Lineros; Erik Billing. Advances in Production Management Systems. Cyber-Physical-Human Production Systems: Human-AI Collaboration and Beyond

Exploring Health Care Professionals' Engagement With a Precision Dosing Calculator and Supporting Clinical Information: Insights From an Eye-Tracking and Usability Study

2025 · Clinical, UI/UX · Core
Sherilyn Wong; Philip R. Selby; Michael B. Ward; Stephanie E. Reuter. Therapeutic Drug Monitoring
Background: Pharmacokinetic-based dosing calculators for individualized drug dosing remain underutilized in clinical practice, often due to poor usability and a lack of user-centered design. Understanding how health care professionals interact with these tools can inform design strategies and enhance usability. Methods: Health care professionals wore eye-tracking glasses while using a codesigned vancomycin dosing calculator with supporting clinical information to complete example clinical scenarios. Eye-tracking data were collected for 23 predefined areas of interest, and fixation sequences were analyzed. A Post-Study System Usability Questionnaire was administered to assess the tool's perceived usability. Results: Eleven pharmacists and three doctors participated in the study. The highest average dwell times were recorded for the pharmacokinetic plot, dosage regimen selection, dosing history, drug concentrations, and the area under the concentration–time curve and dose visualization area. Participants generally viewed patient demographic information first and pharmacokinetic and dosage regimen information last. Considerable heterogeneity was observed among participants' fixation sequences, with frequent eye movements between key areas, particularly between the pharmacokinetic plot and dosage regimen selection, and between dosing history and drug concentrations. Participants expressed a preference for these elements to be positioned close together. Conclusions: Understanding how health care professionals interact with decision support systems is essential for developing user-friendly tools that align with clinical workflows. Eye-tracking data provided valuable insights into user engagement patterns with the dosing calculator and clinical information interface. These insights will guide future design strategies to address usability barriers that limit the utilization of dosing calculators in clinical practice and promote the implementation of individualized drug dosing.
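
The dwell-time metric reported in this abstract is the sum of fixation durations landing inside each area of interest (AOI). A minimal sketch of that aggregation; the AOI names and durations below are hypothetical, not the study's data:

```python
# Aggregate total dwell time per area of interest (AOI) from fixation records.
# Records are (aoi_name, fixation_duration_ms); all values are illustrative.
from collections import defaultdict

fixations = [
    ("pk_plot", 420), ("dosage_regimen", 310), ("pk_plot", 510),
    ("dosing_history", 280), ("drug_concentrations", 350), ("pk_plot", 390),
]

dwell_ms = defaultdict(int)
for aoi, dur in fixations:
    dwell_ms[aoi] += dur  # accumulate dwell time for this AOI

# Rank AOIs by total dwell time, longest first
ranked = sorted(dwell_ms.items(), key=lambda kv: kv[1], reverse=True)
```

In this toy example the pharmacokinetic-plot AOI accumulates the longest total dwell time, mirroring the ranking the study reports.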

An in-flight multimodal data collection method for assessing pilot cognitive states and performance in general aviation

2025 · Neuroscience & Neuropsychology · Neon
Rongbing Xu; Shi Cao; Michael Barnett-Cowan; Gulnaz Bulbul; Elizabeth Irving; Ewa Niechwiej-Szwedo; Suzanne Kearns. MethodsX
Human factors are central to aviation safety, with pilot cognitive states such as workload, stress, and situation awareness playing important roles in flight performance and safety. Although flight simulators are widely used for training and scientific research, they often lack the ecological validity needed to replicate pilot cognitive states from real flights. To address these limitations, a new in-flight data collection methodology for general aviation is presented, using a Cessna 172, one of the most widely used aircraft for pilot training. The dataset combines:
• Human data from wearable physiological sensors (electroencephalography, electrocardiography, electrodermal activity, and body temperature) and eye-tracking glasses.
• Flight data from an ADS-B flight recorder.
• The pilot's self-reported cognitive states and instructor-rated flight performance.
The paper describes the sensor setup, flight task design, and data synchronization procedures. Potential analyses using statistical and machine learning methods are discussed to classify cognitive states and demonstrate the dataset's value. This methodology supports human factors research and has practical value for applications in pilot training, performance evaluation, and aviation safety management. The method was applied in a field study with 25 participants, from which 20 complete multimodal datasets were retained after data cleaning. After collecting additional data, the resulting dataset will support further research on pilot performance and behavior.
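
The data-synchronization step described in this abstract typically means bringing sensor streams recorded at different sampling rates onto a shared timebase. A minimal sketch of one common approach, linear interpolation onto a common clock; the sampling rates and signals are illustrative, not the study's actual setup:

```python
# Align two sensor streams with different sampling rates onto a common timebase
# via linear interpolation. Signals are synthetic placeholders.
import numpy as np

# Hypothetical streams: an ECG-like signal at 250 Hz, gaze x at 120 Hz
t_ecg = np.arange(0, 10, 1 / 250)
ecg = np.sin(2 * np.pi * 1.2 * t_ecg)
t_gaze = np.arange(0, 10, 1 / 120)
gaze_x = np.cos(2 * np.pi * 0.3 * t_gaze)

# Resample both onto a shared 60 Hz timebase (timestamps must be increasing)
t_common = np.arange(0, 10, 1 / 60)
ecg_rs = np.interp(t_common, t_ecg, ecg)
gaze_rs = np.interp(t_common, t_gaze, gaze_x)
```

After this step every sample index refers to the same instant across modalities, which is what downstream per-epoch feature extraction assumes.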

Enhancement of ADAS with Driver-Specific Gaze Profiling Algorithm—Pilot Case Study

2025 · Traffic Psychology · Neon
Marián Gogola; Ján Ondruš. Vehicles
This study investigates drivers' visual attention strategies during naturalistic urban driving using mobile eye tracking (Pupil Labs Neon). A sample of experienced drivers participated in a realistic traffic scenario to examine fixation behaviour under varying traffic conditions. Non-parametric analyses revealed substantial variability in fixation behaviour attributable to driver identity (H(9) = 286.06, p = 2.35 × 10⁻⁵⁶), stimulus relevance (H(7) = 182.64, p = 5.40 × 10⁻³⁶), and traffic density (H(4) = 76.49, p = 9.64 × 10⁻¹⁶). Vehicles and pedestrians elicited significantly longer fixations than lower-salience categories, reflecting adaptive allocation of visual attention to behaviourally critical elements of the scene. Compared with the fixed-rule method, which produced inflated anomaly rates of 7.23–14.84% (mean 12.06 ± 2.71%), the Driver-Specific Gaze Profiling (DSGP) algorithm yielded substantially lower and more stable rates of 1.62–3.33% (mean 2.48 ± 0.53%). The fixed-rule approach over-classified anomalies by approximately 4–6×, whereas DSGP more accurately distinguished contextually appropriate fixations from genuine attentional deviations. These findings demonstrate that fixation behaviour in driving is strongly shaped by individual traits and environmental context, and that driver-specific modelling substantially improves the reliability of attention monitoring. The DSGP framework therefore offers a robust, personalised alternative to fixed thresholds, evaluated here at the proof-of-concept level, and represents a promising direction for enhancing driver-state assessment in future ADAS.
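
The Kruskal-Wallis H statistics quoted in this abstract can be computed with `scipy.stats.kruskal`. A minimal sketch on synthetic fixation durations; the groups and values are placeholders, not the study's data:

```python
# Kruskal-Wallis test for fixation-duration differences across drivers,
# mirroring the form of the non-parametric analysis above. Synthetic data only.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Hypothetical fixation durations (ms) for three drivers with different scales
durations_by_driver = [
    rng.gamma(shape=2.0, scale=s, size=200) for s in (120, 150, 200)
]

# H is the Kruskal-Wallis statistic; df = number of groups - 1
H, p = kruskal(*durations_by_driver)
print(f"H({len(durations_by_driver) - 1}) = {H:.2f}, p = {p:.2e}")
```

With clearly separated group medians like these, the test rejects the null of equal distributions, as in the driver-identity effect reported above.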

Simulating stress in urban active mobility: A multimodal framework integrating physiological and mobile eye tracking data in cycling

2025 · Sports Science, Traffic Psychology · Neon
Martin Moser; Merve Keskin. Book
To foster sustainable urban mobility, it is critical to design cycling infrastructure that is not only physically safe but also perceived as such. Traditional methods for assessing cycling experiences, such as accident statistics and surveys, often fail to capture the dynamic, real-time interactions between a cyclist and the surrounding environment. This paper presents a novel methodology that combines mobile eye tracking with stress measurements derived from wearable physiological sensors in an integrated agent-based modelling framework, providing a deeper understanding of cyclist behaviour under stressful conditions. To validate the feasibility of our approach, a pilot field study was conducted in Salzburg, Austria, where participants navigated two distinct routes with different cycling infrastructure. Preliminary results demonstrate a quantifiable link between inadequate infrastructure and elevated stress levels. Participants on the more complex route exhibited a higher number of Moment of Stress (MOS) events, which coincided with their self-reported feedback. Leveraging geo-referenced stress measurements, we identified spatial clusters of stress (hotspots) around intersections with high traffic volumes and road crossings; these were most prominent on the "difficult" route, which lacks adequate mobility infrastructure. Our research offers an empirically informed methodology for designing and evaluating infrastructure that prioritises the safety and well-being of cyclists.

Gender Comparison of Factors Involved in Self-Study Activities with Digital Tools: A Mixed Study Using an Eye Tracker and Interviews

2025 · Applied Psychology, Education · Invisible
Anna Cavallaro; Maria Beatrice Ligorio. Education Sciences
This study investigates gender and disciplinary differences in self-directed study strategies with digital tools among university students. Grounded in Activity Theory (AT), the Gender Similarities Hypothesis, and Self-Determination Theory, the research explores how students from STEM and non-STEM fields interact with digital and paper-based materials during individual study sessions. A mixed-methods design was employed, combining eye-tracking data with qualitative interviews. Forty students (mean age: 21.5; equally distributed by gender and disciplinary field) participated in 15-minute study sessions using the Pupil Invisible eye-tracker. Fixation durations and heat maps were analyzed through RStudio (Version 2024.04.2+764r), while semi-structured interviews explored students' motivations, study habits, and perceptions of strategy effectiveness. A theory-driven codebook was developed to analyze interview data, incorporating cognitive, emotional, socio-cultural, and metacognitive dimensions. Results indicate that the disciplinary field plays a more decisive role than gender in shaping study strategies. Female STEM students alternated between digital and paper tools, while non-STEM females used digital tools more continuously. Among males, non-STEM students favored paper, whereas STEM students engaged more with digital materials. Interview data confirmed intra-gender variation and emphasized the influence of context, autonomy, and study planning. The integration of eye-tracking and qualitative inquiry effectively captured both behavioral patterns and students' perspectives. Findings suggest the need for inclusive, flexible educational practices that respect diverse learning preferences and disciplinary cultures.

Measuring eye vergence angle in extended reality

2025 · Ophthalmology · Core
Mohammed Safayet Arefin; John Edward Swan II; Russell Cohen Hoffing; Steven M. Thurman. PLOS ONE
Recently, extended reality (XR) displays, including augmented reality (AR) and virtual reality (VR), have integrated eye tracking capabilities, which could enable novel ways of interacting with XR content. In natural settings, eye vergence angle (EVA) changes constantly, based on the distance of fixated objects. Here, we measured EVA for eye fixations on real and virtual target objects in three different environments: real objects in the real world (real), virtual objects in the real world (AR), and virtual objects in a virtual world (VR). In a repeated measures design with 13 participants, EVA was measured while participants fixated on targets at varying distances. As expected, the results showed a significant main effect of target depth such that increasing EVA was associated with closer targets. However, there were consistent individual differences in baseline EVA. There was also a smaller but statistically significant main effect of environment (real, AR, VR) on EVA. Importantly, EVA was stable with respect to the starting depth of previously fixated targets and invariant to the direction (convergence vs. divergence) of vergence changes. In addition, EVA proved to be a more veridical depth estimate than verbal subjective depth judgments.
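
The main effect this study reports, larger vergence angles for closer targets, follows from simple geometry: for a target straight ahead, EVA = 2·atan(IPD / 2d). A minimal sketch assuming a typical 63 mm interpupillary distance; the formula is standard geometry, not the paper's measurement pipeline:

```python
# Geometric eye vergence angle (EVA) for a target straight ahead at distance d:
# EVA = 2 * atan(IPD / (2 * d)). IPD of 63 mm is an assumed typical value.
import math

def vergence_angle_deg(distance_m: float, ipd_m: float = 0.063) -> float:
    """Vergence angle in degrees for symmetric fixation at `distance_m`."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

near = vergence_angle_deg(0.5)  # target at 50 cm -> roughly 7 degrees
far = vergence_angle_deg(5.0)   # target at 5 m  -> under 1 degree
# Closer targets yield larger vergence angles, matching the main effect above.
```

The rapid fall-off with distance is also why EVA is most informative as a depth cue at near range.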

PipID: Light-Pupillary Response Based User Authentication for Virtual Reality

2025 · VR/AR · VR
Muchen Pan; Yan Meng; Yuxia Zhan; Guoxing Chen; Haojin Zhu. Conference Paper
During the use of Virtual Reality (VR) applications such as gaming, education, and military training, sensitive information may be generated or collected by VR sensors, raising user concerns about potential data leakage. This highlights the critical need for effective user authentication to prevent unauthorized access. Existing authentication methods for VR are often either cumbersome (e.g., entering passwords via handheld controllers), reliant on specialized hardware (e.g., iris recognition), or vulnerable to credential replay attacks. In this study, we propose PipID, a lightweight VR authentication approach that leverages commercial off-the-shelf (COTS) eye trackers integrated into VR headsets. PipID is based on the fact that users' pupillary responses to visual stimuli vary uniquely. Thus, by displaying lights of randomly selected colors (i.e., wavelengths) on the VR screen, PipID can utilize pupil diameter responses to these wavelengths as the basis for authentication. For pupil data collected by precision-limited COTS eye trackers, PipID mitigates the impact of unrelated eye movements (e.g., blinks) and leverages pupillary response differences between the left and right eyes to further enhance the granularity of authentication features. Additionally, the randomized sequence of light colors helps prevent replay attacks. We implemented PipID on a COTS VR headset and tested it with 52 participants. Experimental results show that PipID achieves an accuracy of 98.65% and maintains robust performance under various conditions (e.g., retaining 98% and 91% accuracy after 7 and 14 days, respectively).
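
As a rough illustration of the general idea (not PipID's actual feature set or matching rule), a pupil-diameter response to a light stimulus can be reduced to a small feature vector and compared against an enrolled template. All signals, features, and thresholds below are hypothetical:

```python
# Sketch of pupillary-response-based matching: extract simple features from a
# pupil-diameter trace and compare to an enrolled template. Illustrative only.
import numpy as np

def response_features(diam_mm: np.ndarray) -> np.ndarray:
    """Constriction amplitude, trough index, and recovery slope of a response."""
    baseline = diam_mm[:10].mean()              # pre-stimulus diameter
    trough_idx = int(np.argmin(diam_mm))        # point of maximal constriction
    amplitude = baseline - diam_mm[trough_idx]
    recovery = (diam_mm[-1] - diam_mm[trough_idx]) / max(len(diam_mm) - trough_idx, 1)
    return np.array([amplitude, trough_idx, recovery])

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic response: baseline ~4 mm, constriction to ~3 mm, full recovery
t = np.linspace(0, 2, 100)
probe = 4.0 - 1.0 * np.exp(-((t - 0.8) ** 2) / 0.05)
# "Enrolled" template: same user, slightly noisy repeat of the response
enrolled = response_features(probe + np.random.default_rng(1).normal(0, 0.01, 100))
accept = cosine_sim(response_features(probe), enrolled) > 0.99
```

A real system would use many stimuli per session, both eyes, and a trained classifier; this only shows the template-matching skeleton.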