1-10 of 924 publications

Evaluation of data collection and annotation approaches of driver gaze dataset

2025 · Driving · Invisible
Pavan Kumar Sharma; Pranamesh Chakraborty · Behavior Research Methods
Driver gaze estimation is important for various applications such as building advanced driver assistance systems and understanding driver gaze behavior. Gaze estimation in terms of gaze zone classification requires large-scale labeled data for supervised machine learning and deep learning-based models. In this study, we collected a driver gaze dataset and annotated it using three approaches: manual annotation, Speak2Label, and moving pointer-based annotation. Moving pointer-based annotation was introduced as a new data annotation approach inspired by screen-based gaze data collection. For each data collection approach, ground truth labels were obtained using an eye tracker. The proposed moving pointer-based approach achieved higher accuracy than the other two. Given the lower accuracy of manual annotation and Speak2Label, we analyzed these two annotation approaches in detail to understand the reasons for misclassification. A confusion matrix was plotted to compare the manually assigned gaze labels with the ground truth labels, followed by a misclassification analysis and a two-sample t-test-based analysis of whether the driver's head pose and pupil position influence misclassification by the annotators. In Speak2Label, misclassification was traced to a lag between the speech and gaze time series, and a cross-correlation analysis was performed to compute the maximum lag between the two series. Finally, we created a benchmark Eye Tracker-based Driver Gaze Dataset (ET-DGaze) that consists of the driver's face images and corresponding gaze labels obtained from the eye tracker.
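
The maximum-lag computation described above can be illustrated with a short cross-correlation sketch. The variable names, sampling rate, and synthetic data below are assumptions for illustration, not the authors' actual pipeline:

```python
# Minimal sketch: lag at which two time series are maximally cross-correlated.
import numpy as np

def max_lag_seconds(speech: np.ndarray, gaze: np.ndarray, fs: float) -> float:
    """Return the lag (s) at the cross-correlation peak.

    Positive values mean `gaze` trails `speech`.
    """
    speech = (speech - speech.mean()) / speech.std()
    gaze = (gaze - gaze.mean()) / gaze.std()
    xcorr = np.correlate(gaze, speech, mode="full")
    lags = np.arange(-len(speech) + 1, len(gaze))
    return lags[np.argmax(xcorr)] / fs

# Synthetic example at 30 Hz: the gaze series lags speech by 15 samples (0.5 s).
fs = 30.0
speech = np.sin(np.linspace(0, 20, 600))
gaze = np.roll(speech, 15)
print(f"estimated lag: {max_lag_seconds(speech, gaze, fs):.2f} s")  # ~0.50 s
```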

Optimizing Workplace Lighting: Objective Assessment of Cognitive Performance Factors Using Eye-Tracking Technology

2025 · Ergonomics · Core
D. Filipa Ferreira; Ana Carolina Fonseca; Simao Ferreira; Luís Coelho; Matilde A. Rodrigues · Social Science Research Network
Workplace accidents and illnesses affect millions globally, underscoring the urgent need for effective strategies to design safer and healthier built environments. This study investigated the impact of lighting conditions, a critical environmental factor, on cognitive function and psychological well-being, focusing on workload, fatigue, attention, and stress. In a simulated work environment, participants followed a task protocol under two lighting conditions: 500 lux and 300 lux. Objective data were collected using Pupil Labs Core eye-tracking glasses, complemented by subjective self-reports via questionnaires. The findings revealed that 500 lux lighting with a lower colour temperature reduced fatigue, alleviated eye strain, and enhanced attention, demonstrating the role of proper lighting in promoting cognitive function and well-being. Conversely, the 300 lux condition led to increased fatigue and greater pupil constriction, highlighting potential negative effects of insufficient illuminance in workplace environments. Objective measures, such as pupil dilation, provided consistent and reliable insights compared to subjective self-reports, emphasizing the advantages of advanced eye-tracking technology in assessing environmental factors. The study also highlighted the limitations of subjective methods, which are susceptible to individual interpretation. These results underline the importance of integrating optimal lighting systems into building designs to improve worker productivity, mental health, and overall environmental quality.
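
As a rough illustration of comparing an objective pupil measure between the two illuminance conditions, one might run a Welch t-test on per-sample pupil diameters. The file layout and column names below are hypothetical, not the study's actual data:

```python
# Minimal sketch: compare pupil diameter between lighting conditions.
import pandas as pd
from scipy import stats

df = pd.read_csv("pupil_samples.csv")  # hypothetical per-sample export
d500 = df.loc[df["condition"] == "500lux", "diameter_mm"]
d300 = df.loc[df["condition"] == "300lux", "diameter_mm"]

t, p = stats.ttest_ind(d500, d300, equal_var=False)  # Welch's t-test
print(f"mean 500 lux: {d500.mean():.2f} mm, mean 300 lux: {d300.mean():.2f} mm")
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```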

SPEED: A Graphical User Interface Software for Processing Eye Tracking Data

2025 · Cognitive Psychology · Neon
Daniele Lozzi; Ilaria Di Pompeo; Martina Marcaccio; Matias Ademaj; Simone Migliore; Giuseppe Curcio · NeuroSci
Eye tracking is widely used in scientific research, enabling the acquisition of precise and detailed data on an individual's eye movements during interaction with visual stimuli, and thus offering a rich source of information on visual perception and associated cognitive processes. In this work, a new software package called SPEED (labScoc Processing and Extraction of Eye tracking Data) is presented for processing data acquired by the Pupil Labs Neon (Pupil Labs, Berlin, Germany). The software is written in Python and helps researchers with the feature extraction step without requiring any coding skills. This work also presents a pilot study in which five healthy subjects participated in research investigating oculomotor correlates during a Moral Decision-Making Task (MDMT) and testing possible autonomic predictors of participants' performance. A statistically significant difference was observed in reaction times and in the number of blinks made during the choice between the personal and impersonal dilemma conditions.
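
The kind of feature extraction SPEED automates can be sketched in a few lines. The blinks.csv layout below follows the Pupil Cloud time-series export convention, but treat the file and column names as assumptions:

```python
# Minimal sketch: blink count and mean blink duration from a Neon export.
import pandas as pd

blinks = pd.read_csv("blinks.csv")
durations_ms = (
    blinks["end timestamp [ns]"] - blinks["start timestamp [ns]"]
) / 1e6  # nanoseconds to milliseconds

print(f"blinks: {len(blinks)}")
print(f"mean duration: {durations_ms.mean():.1f} ms")
```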

Capturing eye movements during ultrasound-guided embryo transfer: first insights

2025 · Clinical · Neon
Josselin Gautier; Kimberley Truyen; Ndeye Racky Sall; Solène Duros; Pierre Jannin · Medical Imaging 2025: Image Perception, Observer Performance, and Technology Assessment
Embryo transfer is a critical step of in vitro fertilization, the most effective treatment for infertility, which is experienced by one in six people in their lifetime. To date, despite advances in optimizing embryo quality, substantial variability in pregnancy rate remains between practitioners. To evaluate the key technical skills that might underlie such behavioral differences, we conducted a preliminary multi-centric study of assisted reproductive technology (ART) specialists using a Gynos Virtamed simulator for ultrasound-guided embryo transfer (UGET) combined with a portable eye tracker (Neon, Pupil Labs). Our first analyses demonstrate the capability of a recent portable eye tracker to track fine eye movements in an ecological embryo transfer setting (head unrestrained, dim lighting). A dedicated processing pipeline was developed, and gaze was analyzed on Areas of Interest (AoIs) consisting of the ultrasound image, the uterine model (A, C, or E), and the catheter. A separate analysis of the fixated anatomical subregions of the ultrasound image was also conducted. Preliminary analyses show two distinctive patterns of eye movements during UGET: a target-based behavior, and a switching, tool-following behavior, suggesting more proactive gaze behavior in experts, in agreement with the literature on other image-guided interventions.
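
The AoI step of such a pipeline can be sketched as a point-in-rectangle lookup. The AoI rectangles and gaze-export columns below are hypothetical; the authors' actual pipeline maps gaze onto the ultrasound image, uterine model, and catheter:

```python
# Minimal sketch: assign gaze samples to Areas of Interest (AoIs).
import pandas as pd

# AoIs as (x_min, y_min, x_max, y_max) in normalized scene coordinates.
AOIS = {
    "ultrasound": (0.05, 0.10, 0.45, 0.60),
    "uterine_model": (0.50, 0.30, 0.85, 0.80),
    "catheter": (0.55, 0.05, 0.75, 0.25),
}

def label_gaze(x: float, y: float) -> str:
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "elsewhere"

gaze = pd.read_csv("gaze.csv")  # hypothetical export with norm_x / norm_y
gaze["aoi"] = [label_gaze(x, y) for x, y in zip(gaze["norm_x"], gaze["norm_y"])]
print(gaze["aoi"].value_counts(normalize=True))  # dwell proportion per AoI
```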

Analyzing Gaze During Driving: Should Eye Tracking Be Used to Design Automotive Lighting Functions?

2025 · Driving · Core
Korbinian Kunst; David Hoffmann; Anıl Erkan; Karina Lazarova; Tran Quoc Khanh · Journal of Eye Movement Research
In this work, an experiment was designed in which a defined route consisting of country roads, highways, and urban roads was driven by 20 subjects during the day and at night. The test vehicle was equipped with GPS and a camera, and the subjects wore head-mounted eye-tracking glasses to record gaze. Gaze distributions for country roads, highways, urban roads, and specific urban roads were then calculated and compared. The day/night comparisons showed that the horizontal fixation distribution of the subjects was wider during the day than at night over the whole test distance. When the distributions were divided into urban roads, country roads, and motorways, the difference was also seen in each road environment. For the vertical distribution, no clear differences between day and night were seen for country roads or urban roads. On the highway, the vertical dispersion was significantly lower, so the gaze was more focused. On highways and urban roads there was a tendency for the gaze to be lowered. The differentiation between a residential road and a main road in the city made it clear that gaze behavior differs significantly depending on the urban area. For example, the residential road led to broader gaze behavior, as the sides of the street were scanned much more often to detect potential hazards lurking between parked cars at an early stage. This paper highlights the contradictory results of eye-tracking research and shows that it is not advisable to define a "holy grail" gaze distribution that holds for all environments. Gaze is highly situational and context-dependent, and generalized gaze distributions should not be used to design lighting functions. The research highlights the importance of an adaptive light distribution that adapts to the traffic situation and the environment, always providing good visibility for the driver and allowing natural gaze behavior.
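
The dispersion comparison reported above amounts to grouping fixations by road type and time of day and computing the spread of gaze angles. A sketch with assumed column names, not the authors' actual data layout:

```python
# Minimal sketch: horizontal/vertical gaze dispersion per road type and time of day.
import pandas as pd

fix = pd.read_csv("fixations.csv")  # hypothetical: one row per fixation
dispersion = (
    fix.groupby(["road_type", "daytime"])[["azimuth_deg", "elevation_deg"]]
    .std()
    .rename(columns={"azimuth_deg": "horiz_sd", "elevation_deg": "vert_sd"})
)
print(dispersion)  # wider horiz_sd by day than by night would match the finding
```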

Effects of Virtual and Real-World Quiet Eye Training on Visuomotor Learning in Novice Dart Throwing

2025 · Cognitive Psychology, Sports Science · Core
Zahra Dodangeh; Masoumeh Shojaei; Afkham Daneshfar; Thomas Simpson; Harjiv Singh; Ayoub Asadi · Journal of Motor Learning and Development
Quiet eye (QE) training, a technique focused on optimizing gaze behavior during critical moments, has shown potential for enhancing motor skill acquisition. This study investigates the effects of quiet eye training in both virtual and real-world environments on dart-throwing learning. The participants were 45 female students randomly divided into three groups: a control group (age: M = 22.46 ± 2.89 years), a real-world QE training group (age: M = 23.80 ± 2.75), and a virtual QE training group (age: M = 24.33 ± 2.25). The training sessions spanned 2 days, with each session consisting of 60 dart throws divided into 20 blocks of three trials each. The virtual group used an Xbox Kinect motion sensor to throw virtual darts, while the real-world group threw real darts at a dartboard. Both experimental groups followed specific visual training protocols; the control group threw real darts at a dartboard without receiving any visual training. Results showed that both experimental groups increased QE duration, but only the real-world group significantly improved throwing accuracy. These results highlight the importance of task-specific sensory information in motor learning, supporting the specificity of practice hypothesis.
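
Quiet eye duration is conventionally defined as the final fixation on the target that begins before movement onset. A sketch of computing it, with a hypothetical fixation table and the commonly used 100 ms minimum-duration criterion as assumptions:

```python
# Minimal sketch: quiet eye (QE) duration from a fixation table.
import pandas as pd

fixations = pd.read_csv("fixations.csv")  # hypothetical: aoi, onset, offset (s)
movement_onset = 12.84  # s, hypothetical dart-throw initiation time

on_target = fixations[
    (fixations["aoi"] == "dartboard")
    & (fixations["onset"] <= movement_onset)
    & (fixations["offset"] - fixations["onset"] >= 0.100)  # >=100 ms criterion
].sort_values("onset")

if not on_target.empty:
    qe = on_target.iloc[-1]  # final qualifying fixation before movement onset
    print(f"QE duration: {(qe['offset'] - qe['onset']) * 1000:.0f} ms")
```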

Contributed Talks III: Infants' eye movements to scene statistics in natural behavior

2025 · Ophthalmology · Core
T Rowan Candy; Zachary Petroff; Stephanie Biehn; Sarah Freeman; Kathryn Bonnen; Linda Smith · Journal of Vision
Infants start to interact with their visual environment during the first postnatal months. Immaturities in gross motor responses and spatial vision constrain their visual behavior during this rapid development. Analyses of first-person video and eye-tracking data from infants were performed to understand key components of visual experience during this period of visual learning. Methods: Infants wore head-mounted scene and binocular eye-tracking cameras (modified Pupil Labs Core) while engaging in naturalistic behavior in an 8 ft x 8 ft home-like environment. Calibrated eye movements were identified using standard approaches (e.g., Engbert & Mergenthaler, 2006), and image statistics were extracted at fixation locations (>200 ms). Results: Recordings (10.5 hours) at ages 2-3 (n=24), 5-6 (n=35), 8-9 (n=27), and 11-12 (n=11) months were analyzed. Eye position and saccade amplitude distributions relative to the head were tighter for younger infants. The distribution of RMS contrast around fixation was also highest at younger ages. Conclusions: The youngest infants, with limited head and trunk control, exhibited the most restricted range of eye movements, suggesting no gaze-shift compensation for limited mobility. This likely leads to less active sampling of the scene, slower rates of change in input, and a tight link between head- and eye-centered frames of reference. Early experience also provides a concentration of contrast serving the development of foveal and parafoveal function.
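
Extracting RMS contrast around fixation, as in the image-statistics analysis above, reduces to the standard deviation of normalized patch intensities. Patch size and the stand-in frame below are illustrative assumptions:

```python
# Minimal sketch: RMS contrast in a patch around a fixation location.
import numpy as np

def rms_contrast(image: np.ndarray, x: int, y: int, half: int = 32) -> float:
    """RMS contrast of a patch centered on fixation (x, y).

    `image` is a grayscale array scaled to [0, 1].
    """
    patch = image[
        max(y - half, 0) : y + half,
        max(x - half, 0) : x + half,
    ].astype(float)
    return float(patch.std())

frame = np.random.rand(480, 640)  # stand-in for a scene-camera frame
print(f"RMS contrast at fixation: {rms_contrast(frame, 320, 240):.3f}")
```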

Intermittent control and retinal optic flow when maintaining a curvilinear path

2025 · Cognitive Psychology, VR/AR · VR
Björnborg Nguyen; Ola Benderius · Preprint
How humans navigate using vision has been studied for decades. Research has identified that the emergent patterns of retinal optic flow arising from gaze behavior may play an essential role in human curvilinear locomotion; however, the link to control has been poorly understood. Recently, it has been shown that human locomotor behavior is corrective, formed from intermittent decisions and responses. A simulated virtual reality experiment was conducted in which fourteen participants drove through a texture-rich, simple road environment with left and right curve bends. The goal was to investigate how human intermittent lateral control can be associated with retinal optic flow-based cues and vehicular heading as sources of information. This work reconstructs dense retinal optic flow using numerical estimation of optic flow combined with measured gaze behavior. By combining retinal optic flow with the drivable lane surface, a cross-correlational relation to intermittent steering behavior could be observed. In addition, a novel method of identifying constituent ballistic corrections using particle swarm optimization was demonstrated to analyze the incremental correction-based behavior. Through time-delay analysis, our results show a human response time of approximately 0.14 s for retinal optic flow-based cues and 0.44 s for heading-based cues, measured from stimulus onset to steering correction onset. These response times were further delayed by 0.17 s when the vehicle-fixed steering wheel was visibly removed. In contrast to classical continuous control strategies, our findings support the intermittency of human neuromuscular control of muscle synergies, through the principle of satisficing behavior: to actuate only when there is a perceived need. This is aligned with the human sustained sensorimotor model, which uses readily available information and internal models to produce informed responses through evidence accumulation, initiating appropriate ballistic corrections even amidst another correction.
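
The numerical reconstruction of retinal optic flow can be approximated by estimating dense image flow and re-expressing it relative to the measured point of regard. The sketch below (OpenCV Farneback flow, synthetic frames, and hypothetical gaze values) is a crude illustration of that idea, not the authors' method:

```python
# Minimal sketch: dense image flow re-expressed relative to gaze.
import cv2
import numpy as np

rng = np.random.default_rng(0)
noise = (rng.random((480, 640)) * 255).astype(np.uint8)
prev = cv2.GaussianBlur(noise, (9, 9), 2)       # smooth stand-in scene frame
curr = np.roll(prev, (2, 5), axis=(0, 1))       # camera moved 5 px right, 2 px down

flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

gaze_prev = np.array([310.0, 242.0])  # hypothetical gaze positions (px)
gaze_curr = np.array([315.0, 244.0])

# Flow relative to the moving point of regard: ~0 where gaze tracked the scene.
retinal_flow = flow - (gaze_curr - gaze_prev)
print(retinal_flow.shape, retinal_flow.mean(axis=(0, 1)))
```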

Exploring the impact of myoelectric prosthesis controllers on visuomotor behavior

2025 · Motor Control · Core
Kodi Y. Cheng; Heather E. Williams; Ahmed W. Shehata; Patrick M. Pilarski; Craig S. Chapman; Jacqueline S. Hebert · Journal of NeuroEngineering and Rehabilitation

Joint infrared pupil images and near-corneal-plane spectral light exposure data under natural conditions across the adult lifespan

2025 · Gaze Estimation · Core
Rafael Lazar; Manuel Spitschan · Preprint
Factors influencing pupil size can be experimentally controlled or held constant in laboratory experiments, but how the pupil reflects changes in retinal input from the visual environment under real-world viewing conditions has yet to be captured in a large, age-diverse sample. In this dataset, we address this research gap by collecting data in a hybrid field-laboratory experiment (N=83, 43 female, age range 18–87 years) using a custom-built, wearable, video-based eye tracker with a spectroradiometer measuring spectral irradiance in the approximate corneal plane, resulting in a total of 29,664 valid recorded spectral irradiance and eye image pairs, along with 83 approximately 3-minute-long calibration videos. After an initial 3-minute calibration procedure, a 10-minute dark-adaptation period, and a 14-minute controlled laboratory light condition, participants moved within and between indoor and outdoor environments of varying spectral irradiance for ~25-35 minutes and performed a range of everyday tasks. This dataset may provide a basis for developing algorithms for pupil detection, processing, and prediction under natural conditions.
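
Pairing each eye image with its nearest-in-time spectral irradiance measurement could be done with a nearest-timestamp join. The file and column names below are assumptions for illustration; the released dataset defines its own layout:

```python
# Minimal sketch: nearest-timestamp pairing of eye frames and spectra.
import pandas as pd

frames = pd.read_csv("eye_frames.csv").sort_values("timestamp")           # hypothetical
spectra = pd.read_csv("spectral_irradiance.csv").sort_values("timestamp") # hypothetical

pairs = pd.merge_asof(
    frames, spectra, on="timestamp",
    direction="nearest", tolerance=0.5,  # pair only within 0.5 s (timestamps in s)
)
print(f"{pairs.dropna().shape[0]} valid image/irradiance pairs")
```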