1-10 of 914 publications

Effects of Virtual and Real-World Quiet Eye Training on Visuomotor Learning in Novice Dart Throwing

2025 · Cognitive Psychology, Sports Science · Core
Zahra Dodangeh; Masoumeh Shojaei; Afkham Daneshfar; Thomas Simpson; Harjiv Singh; Ayoub Asadi · Journal of Motor Learning and Development
Quiet eye training, a technique focused on optimizing gaze behavior during critical moments, has shown potential for enhancing motor skill acquisition. This study investigates the effects of quiet eye training in both virtual and real-world environments on dart-throwing learning. The participants were 45 female students randomly divided into three groups: a control group (age: M = 22.46 ± 2.89 years), a real-world quiet eye training group (age: M = 23.80 ± 2.75 years), and a virtual quiet eye training group (age: M = 24.33 ± 2.25 years). The training sessions spanned 2 days, with each session consisting of 60 dart throws divided into 20 blocks of three trials each. The virtual group used an Xbox Kinect motion sensor to throw virtual darts, while the real-world group threw real darts at a dartboard. Both experimental groups followed specific visual training protocols. The control group threw real darts at a dartboard without receiving any visual training. Results showed that both experimental groups increased quiet eye duration, but only the real-world group significantly improved throwing accuracy. These results highlight the importance of task-specific sensory information in motor learning, supporting the specificity of practice hypothesis.

Exploring the impact of myoelectric prosthesis controllers on visuomotor behavior

2025 · Motor Control · Core
Kodi Y. Cheng; Heather E. Williams; Ahmed W. Shehata; Patrick M. Pilarski; Craig S. Chapman; Jacqueline S. Hebert · Journal of NeuroEngineering and Rehabilitation

Joint infrared pupil images and near-corneal-plane spectral light exposure data under natural conditions across the adult lifespan

2025 · Gaze Estimation · Core
Rafael Lazar; Manuel Spitschan · Preprint
Factors influencing pupil size can be experimentally controlled or held constant in laboratory experiments, but how the pupil reflects changes in retinal input from the visual environment under real-world viewing conditions has yet to be captured in a large, age-diverse sample. In this dataset, we address this research gap by collecting data in a hybrid field-laboratory experiment (N = 83, 43 female, age range 18–87 years) using a custom-built, wearable, video-based eye tracker with a spectroradiometer measuring spectral irradiance in the approximate corneal plane, resulting in a total of 29,664 valid spectral irradiance and eye image pairs along with 83 approximately 3-minute-long calibration videos. After an initial 3-minute calibration procedure, a 10-minute dark-adaptation period, and a 14-minute controlled laboratory light condition, participants moved in and between indoor and outdoor environments of varying spectral irradiance for ∼25–35 minutes and performed a range of everyday tasks. This dataset may provide a basis for developing algorithms for pupil detection, processing, and prediction under natural conditions.

Predicting spatial familiarity by exploiting head and eye movements during pedestrian navigation in the real world

2025 · Cognitive Psychology, Experimental Psychology · Invisible
Markus Kattenbeck; Ioannis Giannopoulos; Negar Alinaghi; Antonia Golab; Daniel R. Montello · Scientific Reports
Spatial familiarity has seen a long history of interest in wayfinding research. To date, however, no studies have systematically assessed the behavioral correlates of spatial familiarity, including eye and body movements. In this study, we take a step towards filling this gap by reporting on the results of an in-situ, within-subject study with N = 52 pedestrian wayfinders that combines eye-tracking and body movement sensors. In our study, participants were required to walk both a familiar route and an unfamiliar route by following auditory, landmark-based route instructions. We monitored participants' behavior using a mobile eye tracker, a high-precision Global Navigation Satellite System receiver, and a high-precision, head-mounted Inertial Measurement Unit. We conducted machine learning experiments using Gradient-Boosted Trees to perform binary classification, testing different feature sets, i.e., gaze only, Inertial Measurement Unit data only, and a combination of the two, to classify a person as familiar or unfamiliar with a particular route. We achieve the highest accuracy of 89.9% using exclusively Inertial Measurement Unit data, exceeding gaze alone at 67.6% and gaze and Inertial Measurement Unit data together at 85.9%. For the highest accuracy achieved, yaw and acceleration values are most important. This finding indicates that head movements ("looking around to orient oneself") are a particularly valuable indicator for distinguishing familiar from unfamiliar environments for pedestrian wayfinders.
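The classification setup described in this abstract (gradient-boosted trees over gaze and IMU features, with feature importances identifying yaw and acceleration as most informative) can be sketched in a few lines of scikit-learn. This is a minimal illustration on synthetic data; the feature names, window definition, and data are assumptions, not the study's pipeline.

```python
# Minimal sketch of familiar-vs-unfamiliar route classification with gradient-boosted
# trees. Assumes per-window summary features (e.g. head-yaw and acceleration statistics
# from the IMU, plus a gaze feature) have already been extracted; names and data are
# illustrative placeholders, not the published feature set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["yaw_std", "yaw_range", "acc_mean", "acc_std", "fixation_rate"]

X = rng.normal(size=(500, len(feature_names)))   # one row per walking-time window
y = rng.integers(0, 2, size=500)                 # 1 = familiar route, 0 = unfamiliar

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")

# Feature importances indicate which signals drive the decision; the paper reports
# yaw and acceleration as most important for its best, IMU-only model.
clf.fit(X, y)
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")
```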

Passenger physiology in self-driving vehicles during unexpected events

2025 · Driving · Core
Zsolt Palatinus; Miklós Lukovics; Márta Volosin; Zsolt Dudás; Szabolcs Prónay; Zoltán Majó-Petri; Henrietta Lengyel; Zsolt Szalay · Scientific Reports
While fully autonomous vehicles are expected to radically change the way we live our daily lives, they are not yet available in most parts of the world, so we have only sporadic results on passenger reactions. Furthermore, we have very limited insight into how passengers react to an unexpected event during the ride. Previous physiological research has shown that passengers have lower levels of anxiety in a human-driven condition than in a self-driving condition. The aim of our current study was to investigate these differences during unexpected road events in real-life passenger experiences. All subjects were driven through a closed test track in human-driven and then self-driving mode. During the journey, unforeseen obstacles appeared on the path (deer- and human-shaped dummies). Using physiological measurements (EEG, eye movements, head movements and blinking frequencies), our results suggest that passengers had moderate affective preferences for human-driven conditions. Furthermore, multifractal spectra of eye movements and head movements were wider and blinking frequencies were decreased during unexpected events. Our findings further establish real-world physiological measurements as a source of information in researching the acceptance and usage of self-driving technologies.
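The "wider multifractal spectra" reported here refer to the singularity spectrum obtained from multifractal detrended fluctuation analysis (MFDFA) of the movement time series. The following is a from-scratch sketch of that computation under simplifying assumptions (forward segments only, linear detrending, synthetic input); it is not the authors' analysis pipeline.

```python
# Minimal MFDFA sketch: estimate the width of a signal's multifractal spectrum,
# the quantity compared between expected and unexpected events above.
import numpy as np

def mfdfa_spectrum_width(x, scales, q_values, order=1):
    y = np.cumsum(x - np.mean(x))                       # integrated profile
    log_s = np.log(scales)
    Fq = np.empty((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(y) // s
        t = np.arange(s)
        f2 = np.empty(n_seg)                            # detrended variance per segment
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            coeffs = np.polyfit(t, seg, order)
            f2[v] = np.mean((seg - np.polyval(coeffs, t)) ** 2)
        for i, q in enumerate(q_values):
            if abs(q) < 1e-12:                          # q = 0 uses the log average
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2)) ** (1 / q)
    # generalized Hurst exponents h(q): slopes of log Fq(s) vs. log s
    h = np.array([np.polyfit(log_s, np.log(Fq[i]), 1)[0] for i in range(len(q_values))])
    tau = q_values * h - 1                              # mass exponents
    alpha = np.gradient(tau, q_values)                  # singularity strengths
    return alpha.max() - alpha.min()                    # spectrum width (wider = more multifractal)

# Illustrative use on a synthetic random-walk series standing in for a movement signal.
rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(4096))
scales = np.unique(np.logspace(4, 10, 12, base=2).astype(int))
q_values = np.linspace(-4, 4, 17)
print(f"multifractal spectrum width: {mfdfa_spectrum_width(signal, scales, q_values):.3f}")
```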

Reducing English Major Students' Writing Errors with an Automated Writing Evaluation System: Evidence from Eye-Tracking Technology

2025 · Reading · Invisible
Bei Cai; Ziyu He; Hong Fu; Yang Zheng; Yanjie Song · IEEE Transactions on Learning Technologies

GAIPAT - Dataset on Human Gaze and Actions for Intent Prediction in Assembly Tasks

2025 · Ergonomics, Gaze Estimation, Motor Control · Core
Maxence Grand; Damien Pellier; Francis Jambon · Conference Paper
The primary objective of the dataset is to provide a better understanding of the coupling between human actions and gaze in a shared working environment with a cobot, with the aim of significantly enhancing the efficiency and safety of human-cobot interactions. More broadly, by linking gaze patterns with physical actions, the dataset offers valuable insights into cognitive processes and attention dynamics in the context of assembly tasks. The proposed dataset contains gaze and action data from approximately 80 participants, recorded during simulated industrial assembly tasks. The tasks were simulated using controlled scenarios in which participants manipulated educational building blocks. Gaze data was collected using two different eye-tracking setups (head-mounted and remote) while participants worked in two positions: sitting and standing.

Transient Authentication from First-Person-View Video

2025 · HCI · Core
Le Ngu Nguyen; Rainhard Dieter Findling; Maija Poikela; Si Zuo; Stephan Sigg · Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
We propose PassFrame, a system which utilizes first-person-view videos to generate personalized authentication challenges based on human episodic memory of event sequences. From the recorded videos, relevant (memorable) scenes are selected to form image-based authentication challenges. These authentication challenges are compatible with a variety of screen sizes and input modalities. As the popularity of using wearable cameras in daily life increases, PassFrame may serve as a convenient personalized authentication mechanism for the screen-based appliances and services of a camera wearer. We evaluated the system in various settings, including a spatially constrained scenario with 12 participants and a deployment on smartphones with 16 participants and more than 9 hours of continuous video per participant. The authentication challenge completion time ranged from 2.1 to 9.7 seconds (average: 6 seconds), which could facilitate a secure yet usable configuration of three consecutive challenges for each login. With 27 participants, we investigated different versions of the challenges designed to obfuscate potential privacy leaks or address ethical concerns. We also assessed the authentication schemes in the presence of informed adversaries, such as friends, colleagues or spouses, and were able to detect attacks from their diverging login behaviour.

Predicting When and What to Explain From Multimodal Eye Tracking and Task Signals

2025 · Applied Psychology, Cognitive Psychology, Computer Vision, HCI, Machine Learning · Core
Lennart Wachowiak; Peter Tisnikar; Gerard Canal; Andrew Coles; Matteo Leonetti; Oya Celiktutan · IEEE Transactions on Affective Computing
While interest in the field of explainable agents increases, it is still an open problem to incorporate a proactive explanation component into a real-time human–agent collaboration. Thus, when collaborating with a human, we want to enable an agent to identify critical moments requiring timely explanations. We differentiate between situations requiring explanations about the agent’s decision-making and assistive explanations supporting the user. In order to detect these situations, we analyze eye tracking signals of participants engaging in a collaborative virtual cooking scenario. Firstly, we show how users’ gaze patterns differ between moments of user confusion, the agent making errors, and the user successfully collaborating with the agent. Secondly, we evaluate different state-of-the-art models on the task of predicting whether the user is confused or the agent makes errors using gaze- and task-related data. An ensemble of MiniRocket classifiers performs best, especially when updating its predictions with high frequency based on input samples capturing time windows of 3 to 5 seconds. We find that gaze is a significant predictor of when and what to explain. Gaze features are crucial to our classifier’s accuracy, with task-related features benefiting the classifier to a smaller extent.
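The abstract's prediction setup (classifying short sliding windows of gaze and task signals with MiniRocket-based classifiers, refreshing the prediction as new samples arrive) can be sketched roughly as below. This assumes sktime's MiniRocketMultivariate transformer is available and uses a single classifier rather than the paper's full ensemble; the sampling rate, channels, and labels are synthetic placeholders.

```python
# Rough sketch: classify 4-second windows of multimodal gaze/task signals as
# "explanation needed" or not, using a MiniRocket feature transform plus a ridge
# classifier. Window length, channels, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sktime.transformations.panel.rocket import MiniRocketMultivariate

rng = np.random.default_rng(0)

fs = 30                                   # assumed 30 Hz signal rate
win = 4 * fs                              # 4 s windows (the paper reports 3-5 s works best)
n_windows, n_channels = 300, 6            # e.g. gaze x/y, pupil size, three task signals

X = rng.standard_normal((n_windows, n_channels, win))   # (instances, channels, time)
y = rng.integers(0, 2, size=n_windows)                  # 1 = explanation needed

transform = MiniRocketMultivariate(num_kernels=10000, random_state=0)
features = transform.fit_transform(X)

clf = RidgeClassifierCV(alphas=np.logspace(-3, 3, 10))
clf.fit(features, y)

# At run time, the most recent 4-second window is transformed and classified,
# so the prediction can be refreshed each time a new sample arrives.
new_window = rng.standard_normal((1, n_channels, win))
print("explain now?", bool(clf.predict(transform.transform(new_window))[0]))
```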

Performance of focus-tunable presbyopia correction lenses operated using gaze-tracking and LIDAR

2025 · Ophthalmology · Core
Rajat Agarwala; Björn R. Severitt; Felix F. Reichel; Benedikt W. Hosp; Siegfried Wahl · Biomedical Optics Express
Presbyopia is an age-related loss of the eye's accommodation ability, which affects an individual's capacity to focus on near objects. With the advent of tunable lens technologies, various algorithms have been developed to tune such lenses for presbyopia correction in older populations. In this study, we assessed a gaze- and LIDAR-based feedback mechanism with electronically tunable lenses for use as presbyopia correction lenses. The tunable lens prototype was evaluated in 15 healthy young participants wearing their corrected sphero-cylindrical refraction by comparing their performance on a dynamic matching task under two conditions: (1) natural accommodation, and (2) emulated presbyopia, using cycloplegic drops to paralyse accommodation while focussing with the developed visual demonstrator prototype. The participants performed the matching task on three screens placed at multiple distances. We demonstrated that gaze can be used in conjunction with LIDAR to tune the lenses in the wearable visual demonstrator prototype, enabling participants to achieve fast and accurate responses in the matching task.
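The core of such a gaze-plus-LIDAR focus loop is a simple optical relation: the lens must supply the vergence demand of the fixated object, 1/distance in diopters, minus any remaining accommodation. The sketch below is a toy illustration of that arithmetic; the function name, clamp value, and screen distances are assumptions, not the authors' control algorithm.

```python
# Toy sketch of the gaze + LIDAR focus logic: the LIDAR distance at the current gaze
# point determines how much optical power (in diopters) the tunable lens should add
# when accommodation is paralysed or absent. Illustrative assumptions throughout.

def required_add_power(distance_m: float, residual_accommodation_d: float = 0.0) -> float:
    """Dioptric add needed to focus an object at distance_m (vergence demand = 1/d)."""
    demand = 1.0 / max(distance_m, 0.1)          # clamp to avoid implausibly near distances
    return max(demand - residual_accommodation_d, 0.0)

# Example: screens at 0.4 m, 1.0 m, and 2.5 m, as in a multi-distance matching task.
for d in (0.4, 1.0, 2.5):
    print(f"object at {d:>4} m -> set lens to +{required_add_power(d):.2f} D")
```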