1-10 of 1017 publications

Comparable effectiveness of risk awareness and perception training (RAPT) in young drivers with diverse socioeconomic status: A driving simulator study

2026 · Driving · Core
Jeffrey Glassman; Yusuke Yamani · Transportation Research Part F: Traffic Psychology and Behaviour
Previous driving simulator and on-road studies have shown that young drivers are poorer at anticipating road hazards than more experienced drivers. Risk Awareness and Perception Training (RAPT) is a training program shown to improve hazard anticipation skills among young drivers. A recent study suggested that RAPT may reduce crashes more effectively among drivers from low-socioeconomic status (SES) backgrounds than among those from high-SES backgrounds, implying differential effects of RAPT on accelerating hazard anticipation skills. The present driving simulator experiment directly examined whether RAPT improves latent hazard anticipation differently in young drivers with various SES backgrounds in the United States. Fifty-two participants were randomly assigned to either an active or a passive RAPT training group. The active RAPT group completed the full RAPT program, which provided knowledge-based instruction on hazard anticipation, rehearsal of hazard anticipation skills, and feedback on errors. The passive RAPT group completed placebo training that included only the knowledge-based content, without active practice or feedback. Participants drove eight driving scenarios in a high-fidelity driving simulator immediately before and after completing their assigned training. Results showed credible improvements in hazard anticipation performance for the active but not the passive group, suggesting that the active training method was crucial for learning. Low-SES drivers showed improvements in hazard anticipation performance only after completing the active training, whereas high-SES drivers improved following either program. These findings suggest that RAPT is generally effective across SES groups, but that active training may be particularly important for enhancing hazard anticipation in low-SES young drivers.

Eye-Guided Human-Robot Collaborative Assembly: A Feasibility Study

2026 · HCI · Neon
Hajime Mizuyama; Eiji Morinaga; Tomomi Nonaka; Toshiya Kaihara; Gregor Von Cieminski; David Romero; Raquel Quesada Díaz; Álvaro Ballesteros Martín; Frank Luque Lineros; Erik Billing · Advances in Production Management Systems. Cyber-Physical-Human Production Systems: Human-AI Collaboration and Beyond

An in-flight multimodal data collection method for assessing pilot cognitive states and performance in general aviation

2025 · Driving · Neon
Rongbing Xu; Shi Cao; Michael Barnett-Cowan; Gulnaz Bulbul; Elizabeth Irving; Ewa Niechwiej-Szwedo; Suzanne Kearns · MethodsX
Human factors are central to aviation safety, with pilot cognitive states such as workload, stress, and situation awareness playing important roles in flight performance and safety. Although flight simulators are widely used for training and scientific research, they often lack the ecological validity needed to replicate pilot cognitive states from real flights. To address these limitations, a new in-flight data collection methodology for general aviation is presented, using a Cessna 172 aircraft, one of the most widely used aircraft for pilot training. The dataset combines:
• Human data from wearable physiological sensors (electroencephalography, electrocardiography, electrodermal activity, and body temperature) and eye-tracking glasses.
• Flight data from an ADS-B flight recorder.
• The pilot's self-reported cognitive states and flight performance as rated by an instructor.
The paper describes the sensor setup, flight task design, and data synchronization procedures. Potential analyses using statistical and machine learning methods are discussed to classify cognitive states and demonstrate the dataset's value. This methodology supports human factors research and has practical value for applications in pilot training, performance evaluation, and aviation safety management. The method was applied in a field study with 25 participants, from which 20 complete multimodal datasets were retained after data cleaning. After collecting additional data, the resulting dataset will support further research on pilot performance and behavior.
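The data synchronization step mentioned above invites a concrete illustration. The sketch below shows one way to align multimodal streams (physiological sensors, eye tracking, ADS-B flight data) onto a common clock by resampling on shared timestamps; it is not the paper's pipeline, and the stream names, column layout, and 10 Hz target rate are assumptions made purely for the example.

```python
# Hypothetical sketch of timestamp-based alignment for multimodal streams.
# Stream names, columns, and sampling rates are illustrative, not the paper's.
import pandas as pd

def align_streams(streams: dict[str, pd.DataFrame], rate_hz: float = 10.0) -> pd.DataFrame:
    """Resample each stream onto a shared clock and join them.

    Each DataFrame is expected to carry a 'timestamp' column in seconds
    (e.g. Unix time) plus one or more signal columns.
    """
    period = pd.to_timedelta(1.0 / rate_hz, unit="s")
    resampled = []
    for name, df in streams.items():
        idx = pd.to_datetime(df["timestamp"], unit="s")
        signal = df.drop(columns=["timestamp"]).set_index(idx)
        # Mean-downsample fast signals, then forward-fill so slow streams
        # (e.g. ADS-B position updates) stay defined at every shared step.
        signal = signal.resample(period).mean().ffill()
        resampled.append(signal.add_prefix(f"{name}_"))
    return pd.concat(resampled, axis=1, join="inner")

# Example usage (hypothetical DataFrames):
# merged = align_streams({"eda": eda_df, "gaze": gaze_df, "adsb": adsb_df})
```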

Art Immersion: Evidence for attention restoration in museums

2025 · Applied Psychology, Architecture & Design · Neon
Nicola Vasta; Francesco N. Biondi · Consciousness and Cognition

Success in goal-directed visual tasks: the benefits of alternating sitting and standing instead of only sitting

2025 · Ergonomics · Core
Wafa Cherigui; Mélen Guillaume; Sérgio T. Rodrigues; Cédrick T. Bonnet · Applied Ergonomics
Both excessive sitting and excessive standing have been shown to be detrimental to performance, productivity, and health. In the present study, our objective was to determine the effect of alternating body position (between standing and sitting) on task performance and visual attention in the Attention Network Task (ANT), relative to a sitting-only condition. Twenty-four participants (aged 18–35) performed the ANT six times in both conditions (5 min 35 s per ANT). The proportion of blinks was significantly lower in the alternating condition than in the sitting-only condition. In both between-condition and within-condition analyses, reaction times were significantly shorter when standing than when sitting. Humans may therefore be more effective (i.e. shorter reaction times) and more visually attentive (i.e. a lower proportion of blinks) in an alternating condition than in a sitting-only condition. In practice, sit-stand desks might usefully help both to reduce the time spent sitting and to improve task performance.
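As a rough illustration of the within-participant comparison this design implies, the following sketch runs a paired t-test on per-participant mean reaction times in the two conditions; the numbers are simulated and the analysis is a simplification of whatever the authors actually used.

```python
# Minimal sketch of a within-participant comparison: mean reaction time per
# participant in each condition, compared with a paired t-test.
# All values and effect sizes are purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 24
rt_sitting = rng.normal(0.52, 0.05, n_participants)                    # mean RT (s), sitting-only
rt_alternating = rt_sitting - rng.normal(0.02, 0.01, n_participants)   # alternating sit/stand

t, p = stats.ttest_rel(rt_sitting, rt_alternating)
print(f"paired t({n_participants - 1}) = {t:.2f}, p = {p:.3f}")
```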

On utilizing gaze behavior to predict movement transitions during natural human walking on different terrains

2025 · Ergonomics · Core
Martina Hasenjäger; Christiane B. Wiebel-Herboth · PLOS ONE
Human gaze behavior is crucial for successful goal-directed locomotion. In this study we explore the potential of gaze information to improve predictions of walk mode transitions in real-world urban environments, which has not yet been investigated in great detail. We used a dataset in which twenty participants, recorded with IMU motion sensors and the Pupil Labs Invisible eye tracker, completed three laps of an urban walking track with three walk modes: level walking, stairs (up, down), and ramps (up, down). In agreement with previous findings, we found that participants directed their gaze more towards the ground during challenging transitions. They adjusted their gaze behavior up to four steps before adjusting their gait behavior. We trained a random forest classifier to predict walk mode transitions using gaze parameters, gait parameters, and both combined. Results showed that the more complex transitions involving stairs were easier to predict than transitions involving ramps, and that combining gaze and gait parameters provided the most reliable results. Gaze parameters had a greater impact on classification accuracy than gait parameters in most scenarios. Although prediction performance, as measured by Matthews' correlation coefficient (MCC), declined with increasing forecasting horizons (from one to four steps ahead), the model still achieved robust classification performance well above chance level (MCC = 0), with an average MCC of 0.60 when predicting transitions from level walking to stairs (either up or down) four steps in advance. The study suggests that gaze behavior changes in anticipation of walk mode transitions and the expected challenge to balance control, and that gaze has the potential to significantly improve the prediction of walk mode transitions in real-world gait behavior.
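The classification setup described above can be sketched in a few lines: a random forest trained on concatenated gaze and gait features, scored with Matthews' correlation coefficient. The features, labels, and data below are placeholders, not the study's dataset or exact pipeline.

```python
# Sketch of the prediction setup: random forest on gaze + gait features,
# evaluated with Matthews' correlation coefficient (MCC).
# Feature names and labels are placeholders for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_steps = 2000
X_gaze = rng.normal(size=(n_steps, 4))   # e.g. gaze angle, gaze-to-ground distance, ...
X_gait = rng.normal(size=(n_steps, 4))   # e.g. step length, cadence, ...
y = rng.integers(0, 2, size=n_steps)     # 1 = transition (e.g. level -> stairs) within k steps

X = np.hstack([X_gaze, X_gait])          # combine both feature sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))
```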

Continuous User Authentication for Extended Reality Using Pupil Reflexive Mechanisms as a Biometric

2025 · HCI · Core
Shuaikang Hou; Muyao Tang; Srinivasan Murali; Huadi Zhu · Conference Paper
With the rapid adoption of extended reality (XR) technologies in both consumer and enterprise domains, continuous and unobtrusive user authentication has become increasingly important. Existing authentication methods are often intrusive, static, or insufficiently secure for immersive environments. In this work, we propose a novel passive authentication framework that leverages users' real-time pupil light reflex (PLR) in response to visual stimuli rendered in XR. By treating screen brightness as a natural, time-varying challenge and modeling the user's pupil response as the biometric signal, our system learns to extract identity-specific features that are invariant to environmental content. We implement our prototype on two commercial XR headsets and evaluate it through a user study involving eight participants across diverse XR applications. Our system achieves an equal error rate (EER) of 0.093 with a 2-minute prediction window. These results demonstrate the feasibility of pupillary dynamics as a behavioral biometric for secure, continuous authentication in immersive environments. This study lays the foundation for future work on scalable, multimodal, and adaptive biometric authentication in XR.
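To make the reported metric concrete, here is a minimal sketch of how an equal error rate can be computed from genuine (same-user) and impostor (different-user) similarity scores; the scores are simulated, and the paper's pupil-response scoring model itself is not reproduced.

```python
# Illustrative equal error rate (EER) computation from similarity scores.
# The scores below are simulated, not taken from the study.
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: operating point where false accept rate equals false reject rate."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.argmin(np.abs(far - frr))
    return float((far[i] + frr[i]) / 2)

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 500)    # same-user PLR similarity scores (hypothetical)
impostor = rng.normal(0.5, 0.15, 500)  # different-user scores (hypothetical)
print(f"EER ~ {equal_error_rate(genuine, impostor):.3f}")
```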

Visuospatial performance and its neural substrates in Dementia with Lewy Bodies during a pointing task

2025 · Clinical · Core
Bosco Annalisa; Foglino Caterina; Guidi Lucia; Braghittoni Davide; Venturi Greta; Sambati Luisa; Baldelli Luca; Calandra-Buonaura Giovanna; Fattori Patrizia; Lodi Raffaele; Tonon Caterina; Mitolo Micaela · Scientific Reports
Dementia with Lewy Bodies (DLB) is characterized by motor and cognitive deficits that often overlap with other neurodegenerative disorders, complicating its diagnosis. This study combined linear mixed-effects modeling and machine learning to investigate key parameters of pointing movements, saccadic behavior, and superior parietal lobule (SPL) volumetry in differentiating DLB patients from controls. DLB patients exhibited distinct motor impairments, including increased movement times, greater pointing errors, and spatially modulated deficits in pointing accuracy. Saccadic analysis revealed prolonged saccade latencies, larger amplitudes, and pervasive hypermetria, with notable spatial asymmetries in accuracy and amplitude. Specifically, reduced hypermetria for upward-directed saccades suggests direction-specific modulation in DLB, highlighting potential disruptions in visuomotor pathways. Brain volumetric analysis demonstrated significant volumetric loss of the SPL, particularly in the left hemisphere, further implicating this region in the visuospatial and motor deficits observed in DLB. Interestingly, an inverse relationship between SPL volumetry and task performance was found, which was more evident for hand-related parameters. Integrating behavioral, saccadic, and volumetric data highlighted the complementary contributions of motor, oculomotor, and neural changes in distinguishing patients from controls. This study provides novel insights into the visuomotor and neural substrates underlying DLB and emphasizes the importance of adopting a multimodal approach to its diagnosis. The results go beyond traditional visuospatial assessments, offering a robust framework for the identification of DLB-specific biomarkers. Future research should explore the generalizability of this combined model across other neurodegenerative conditions to refine diagnostic tools and improve patient outcomes.
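A hedged sketch of a linear mixed-effects analysis in the spirit of the one described above: pointing error modeled as a function of group (DLB vs. control) with a random intercept per participant. Variable names and the simulated data are illustrative assumptions, not the study's model specification.

```python
# Illustrative linear mixed-effects model: pointing error ~ group,
# with a random intercept per participant. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_participants, n_trials = 30, 20
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "group": np.repeat(rng.integers(0, 2, n_participants), n_trials),  # 1 = DLB (hypothetical)
})
# Simulated pointing error (mm) with a group effect plus noise.
df["pointing_error_mm"] = 5 + 4 * df["group"] + rng.normal(0, 2, len(df))

model = smf.mixedlm("pointing_error_mm ~ group", df, groups=df["participant"]).fit()
print(model.summary())
```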

SNRI-driven recovery of impaired pupil dynamics in Progressive Supranuclear Palsy

2025 · Clinical · Core
Molly Zeitzschel; Maxime Maheu; Tobias H. Donner; Christian K. E. Moll; Carsten Buhmann; Götz Thomalla; Tim U. Magnus; Günther U. Höglinger; Keno Hagena; Alessandro Gulberti; Monika Pötter-Nerger · medRxiv
Background: Progressive supranuclear palsy (PSP) is a tauopathy marked by early degeneration of brainstem nuclei such as the locus coeruleus (LC), a central player in the ascending reticular activating system (ARAS). ARAS dysfunction contributes to cognitive and arousal disturbances and is increasingly recognized as a therapeutic target.
Methods: 14 PSP patients (eight females, age 70.81 ± 6.32 years, disease duration 4.69 ± 3.24 years) were assessed before and after ≥4 weeks of SNRI (serotonin–norepinephrine reuptake inhibitor) treatment. Participants were assessed with non-invasive neurophysiological measures such as pupillometry, performed at rest and during an auditory oddball paradigm, and clinically, using PSP-specific clinical scales and quantitative gait analysis. Pupil, cognitive, and gait parameters were compared with those of age-matched healthy controls.
Results: At baseline, pupil size, pupil dilation responses, and surprise pupil responses were reduced in PSP patients compared to controls, accompanied by impaired executive functions, reduced phonemic verbal fluency, and depressive symptoms. SNRI treatment selectively rescued certain impaired pupil metrics in PSP, leading to a significant improvement of the surprise pupil response and partial normalization of pupil fluctuations at rest. Pupil metric changes were associated with notable improvement in executive functions and quality of life at follow-up.
Conclusion: In summary, our findings reveal altered pupil-linked arousal regulation in PSP and its modulation by SNRI therapy, suggesting pupillometric indices as promising biomarkers of therapeutic response.
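The surprise pupil response reported above is typically an event-locked, baseline-corrected dilation around oddball stimuli. The sketch below shows one plausible way to compute such a response; the sampling rate, window lengths, and simulated trace are assumptions, not the authors' analysis parameters.

```python
# Minimal sketch of a baseline-corrected, event-locked pupil response.
# Sampling rate, window lengths, and the simulated trace are assumptions.
import numpy as np

def surprise_pupil_response(pupil: np.ndarray, events: np.ndarray, fs: float = 120.0,
                            baseline_s: float = 0.5, window_s: float = 3.0) -> np.ndarray:
    """Average pupil dilation after oddball events, relative to a pre-event baseline."""
    pre, post = int(baseline_s * fs), int(window_s * fs)
    epochs = []
    for ev in events:
        if ev - pre < 0 or ev + post > len(pupil):
            continue  # skip events too close to the recording edges
        segment = pupil[ev - pre: ev + post]
        epochs.append(segment - segment[:pre].mean())  # baseline correction
    return np.mean(epochs, axis=0)

# Example with simulated data:
rng = np.random.default_rng(7)
trace = rng.normal(3.0, 0.05, 120 * 60)  # 1 min of pupil diameter (mm) at 120 Hz
oddballs = rng.choice(np.arange(1000, len(trace) - 1000), 20, replace=False)
response = surprise_pupil_response(trace, oddballs)
print("peak dilation (mm):", response.max())
```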

Screening for glaucoma with a novel eye movement perimetry technique based on continuous visual stimulus tracking

2025 · Clinical · Neon
A. C. L. Vrijling; M. J. de Boer; R. J. Renken; J. B. C. Marsman; J. Heutink; F. W. Cornelissen; N. M. Jansonius · medRxiv
Purpose: Standard automated perimetry (SAP) is the gold standard for functional assessment in glaucoma. SAP can be too demanding for some groups of patients. Continuous visual stimulus tracking (SONDA: Standardized Oculomotor and Neurological Disorders Assessment) simplifies the perimetric task to following a moving stimulus on a screen. In this study we evaluated the screening performance of SONDA-based eye movement perimetry (SONDA-EMP) in glaucoma. To explore generalizability, we evaluated an experimental setup (SONDA-Eyelink) and a clinic-ready version (SONDA-Neon).
Methods: SONDA-Eyelink and SONDA-Neon measurements were performed in 100 cases with glaucoma (36, 36, and 28 with early, moderate, and severe glaucoma, respectively) and 100 age-similar controls. Participants monocularly tracked a moving stimulus (Goldmann size III) at 40% contrast (both setups) and 160% contrast (SONDA-Eyelink only). Eye movements were continuously recorded. The outcome was the agreement between gaze and stimulus position. We used previously collected glaucoma case-control data to build a continuous 'glaucoma screening score'. This score was used for an ROC analysis applied to the current, independently collected dataset. We predefined good screening performance as: at 95% specificity, a sensitivity of at least 50%, 90%, and 100% for early, moderate, and severe glaucoma, respectively.
Results: At 95% specificity, the sensitivity of SONDA-Eyelink was 58, 94, and 100% at 40% contrast and 56, 97, and 100% at 160% contrast for early, moderate, and severe glaucoma, respectively. Sensitivity of SONDA-Neon was 53, 94, and 100%.
Conclusions: SONDA-EMP is a novel, fast, and intuitive method to screen for visual function loss in glaucoma.
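The headline numbers above (sensitivity at 95% specificity) can be illustrated with a short sketch: threshold the continuous screening score at the control-score quantile that fixes specificity, then report the fraction of cases above it. The scores below are simulated, not the study's data.

```python
# Sketch of sensitivity at a fixed specificity for a continuous screening score.
# Scores are simulated for illustration; this is not the study's analysis code.
import numpy as np

def sensitivity_at_specificity(scores_cases: np.ndarray, scores_controls: np.ndarray,
                               specificity: float = 0.95) -> float:
    # Threshold at the control-score quantile that leaves (1 - specificity) false positives.
    threshold = np.quantile(scores_controls, specificity)
    return float((scores_cases > threshold).mean())

rng = np.random.default_rng(5)
controls = rng.normal(0.0, 1.0, 100)  # hypothetical screening scores, controls
cases = rng.normal(1.8, 1.2, 100)     # hypothetical screening scores, glaucoma cases
print(f"sensitivity at 95% specificity: {sensitivity_at_specificity(cases, controls):.2f}")
```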