1-10 of 899 publications

Effects of Virtual and Real-World Quiet Eye Training on Visuomotor Learning in Novice Dart Throwing

2025 · Cognitive Psychology, Sports Science · Core
Zahra Dodangeh; Masoumeh Shojaei; Afkham Daneshfar; Thomas Simpson; Harjiv Singh; Ayoub Asadi · Journal of Motor Learning and Development
Quiet eye (QE) training, a technique focused on optimizing gaze behavior during critical moments, has shown potential for enhancing motor skill acquisition. This study investigates the effects of quiet eye training in virtual and real-world environments on dart-throwing learning. Participants were 45 female students randomly divided into three groups: a control group (age: M = 22.46 ± 2.89), a real-world QE training group (age: M = 23.80 ± 2.75), and a virtual QE training group (age: M = 24.33 ± 2.25). Training spanned two days, with each session consisting of 60 dart throws divided into 20 blocks of three trials each. The virtual group used an Xbox Kinect motion sensor to throw virtual darts, while the real-world group threw real darts at a dartboard. Both experimental groups followed specific visual training protocols; the control group threw real darts at a dartboard without receiving any visual training. Results showed that both experimental groups increased QE duration, but only the real-world group significantly improved throwing accuracy. These results highlight the importance of task-specific sensory information in motor learning, supporting the specificity of practice hypothesis.

GAIPAT - Dataset on Human Gaze and Actions for Intent Prediction in Assembly Tasks

2025 · Ergonomics, Gaze Estimation, Motor Control · Core
Maxence Grand; Damien Pellier; Francis Jambon · Conference Paper
The primary objective of the dataset is to provide a better understanding of the coupling between human actions and gaze in a shared working environment with a cobot, with the aim of significantly enhancing the efficiency and safety of human-cobot interactions. More broadly, by linking gaze patterns with physical actions, the dataset offers valuable insights into cognitive processes and attention dynamics in the context of assembly tasks. The proposed dataset contains gaze and action data from approximately 80 participants, recorded during simulated industrial assembly tasks. The tasks were simulated using controlled scenarios in which participants manipulated educational building blocks. Gaze data was collected using two different eye-tracking setups (head-mounted and remote) while participants worked in two positions: sitting and standing.

Predicting When and What to Explain From Multimodal Eye Tracking and Task Signals

2025 · Applied Psychology, Cognitive Psychology, Computer Vision, HCI, Machine Learning · Core
Lennart Wachowiak; Peter Tisnikar; Gerard Canal; Andrew Coles; Matteo Leonetti; Oya Celiktutan · IEEE Transactions on Affective Computing
While interest in explainable agents is increasing, incorporating a proactive explanation component into real-time human–agent collaboration remains an open problem. Thus, when collaborating with a human, we want to enable an agent to identify critical moments requiring timely explanations. We differentiate between situations requiring explanations about the agent’s decision-making and assistive explanations supporting the user. In order to detect these situations, we analyze eye tracking signals of participants engaging in a collaborative virtual cooking scenario. Firstly, we show how users’ gaze patterns differ between moments of user confusion, the agent making errors, and the user successfully collaborating with the agent. Secondly, we evaluate different state-of-the-art models on the task of predicting whether the user is confused or the agent makes errors using gaze- and task-related data. An ensemble of MiniRocket classifiers performs best, especially when updating its predictions with high frequency based on input samples capturing time windows of 3 to 5 seconds. We find that gaze is a significant predictor of when and what to explain. Gaze features are crucial to our classifier’s accuracy, with task-related features benefiting the classifier to a smaller extent.
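As a rough illustration of the windowed time-series classification described in this abstract, the sketch below pipes multivariate gaze/task windows through sktime's MiniRocket transform into a ridge classifier. The channel set, window length, sampling rate, and labels are assumptions for illustration, not the authors' exact pipeline or ensemble.

```python
# Sketch of a MiniRocket-based window classifier for gaze + task signals.
# Assumptions (not from the paper): 6 feature channels, 4 s windows at 60 Hz,
# binary labels ("needs explanation" vs. "nominal"), and synthetic data.
import numpy as np
from sklearn.linear_model import RidgeClassifierCV
from sklearn.pipeline import make_pipeline
from sktime.transformations.panel.rocket import MiniRocketMultivariate

WINDOW_S, HZ = 4, 60        # 3-5 s windows are reported to work well
N_CHANNELS = 6              # e.g. gaze x/y, pupil size, fixation flag, task phase, ...

# X: (n_windows, n_channels, window_length) multivariate time-series windows
X = np.random.randn(200, N_CHANNELS, WINDOW_S * HZ)
y = np.random.randint(0, 2, size=200)   # placeholder labels

clf = make_pipeline(
    MiniRocketMultivariate(),                       # random-convolution features
    RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)),
)
clf.fit(X, y)

# In an online setting, the classifier would be re-run on the most recent
# window at a high update rate to decide when an explanation is triggered.
latest_window = X[-1:]                              # shape (1, n_channels, window_length)
print(clf.predict(latest_window))
```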

Challenges in using pupil dilation responses to sounds as a reliable alternative to standard audiometric tests

2025 · Audiology, Clinical · Core
Maria Paola Tramonti Fantozzi; Antonino Crivello; Davide La Rosa; Mario Milazzo; Serena Danti; Vincenzo De Cicco; Paolo Orsini; Diego Manzoni; Francesco Lazzerini; Rachele Canelli; Giacomo Fiacchini; Luca Bruschini · Heliyon

Safety of human-AI cooperative decision-making within intensive care: A physical simulation study

2025 · Clinical, Ergonomics · Core
Paul Festor; Myura Nagendran; Anthony C. Gordon; Aldo A. Faisal; Matthieu Komorowski · PLOS Digital Health
The safety of Artificial Intelligence (AI) systems is as much a question of human decision-making as a technological one. In AI-driven decision support systems, particularly in high-stakes settings such as healthcare, ensuring the safety of human-AI interactions is paramount, given the potential risks of following erroneous AI recommendations. To explore this question, we ran a safety-focused clinician-AI interaction study in a physical simulation suite. Physicians were placed in a simulated intensive care ward, with a human nurse (played by an experimenter), an ICU data chart, a high-fidelity patient mannequin and an AI recommender system on a display. Clinicians were asked to prescribe two drugs for the simulated patients suffering from sepsis and wore eye-tracking glasses to allow us to assess where their gaze was directed. We recorded clinician treatment plans before and after they saw the AI treatment recommendations, which could be either ‘safe’ or ‘unsafe’. 92% of clinicians rejected unsafe AI recommendations vs 29% of safe ones. Physicians paid increased attention (+37% gaze fixations) to unsafe AI recommendations vs safe ones. However, visual attention on AI explanations was not greater in unsafe scenarios. Similarly, clinical information (patient monitor, patient chart) did not receive more attention after an unsafe versus safe AI reveal, suggesting that the physicians did not look back to these sources of information to investigate why the AI suggestion might be unsafe. Physicians were successfully persuaded to change their dose by scripted comments from the bedside nurse only 5% of the time. Our study emphasises the importance of human oversight in safety-critical AI and the value of evaluating human-AI systems in high-fidelity settings that more closely resemble real-world practice.

Hybrid BCI for Meal-Assist Robot Using Dry-Type EEG and Pupillary Light Reflex

2025 · HCI · Core
Jihyeon Ha; Sangin Park; Yaeeun Han; Laehyun Kim · Biomimetics
Brain–computer interface (BCI)-based assistive technologies enable intuitive and efficient user interaction, significantly enhancing the independence and quality of life of elderly and disabled individuals. Although existing wet EEG-based systems report high accuracy, they suffer from limited practicality. This study presents a hybrid BCI system combining dry-type EEG-based flash visual-evoked potentials (FVEP) and the pupillary light reflex (PLR), designed to control an LED-based meal-assist robot. The hybrid system integrates dry-type EEG and eyewear-type infrared cameras, addressing the preparation challenges of wet electrodes while maintaining practical usability and high classification performance. Offline experiments demonstrated an average accuracy of 88.59% and an information transfer rate (ITR) of 18.23 bit/min across the four target classes. The real-time implementation uses PLR triggers to initiate the meal cycle and EMG triggers to detect chewing, indicating the completion of the cycle. These features allow intuitive and efficient operation of the meal-assist robot. This study advances BCI-based assistive technologies by introducing a hybrid system optimized for real-world applications. The successful integration of FVEP and PLR in a meal-assist robot demonstrates the potential for robust and user-friendly solutions that empower users with autonomy and dignity in their daily activities.
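For readers unfamiliar with how accuracy and ITR relate, the sketch below evaluates the standard Wolpaw ITR formula for a 4-class selection task at the reported 88.59% accuracy. The per-selection time of roughly 4.3 s is back-calculated from the reported 18.23 bit/min and is an assumption, not a figure stated in the abstract.

```python
# Wolpaw information transfer rate (ITR) for an N-class BCI.
# bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
# ITR (bit/min)  = bits/selection * selections per minute
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_s: float) -> float:
    p = accuracy
    bits = math.log2(n_classes)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_s

# Reported offline results: 4 targets, 88.59% accuracy, 18.23 bit/min.
# A per-selection time of about 4.3 s approximately reproduces that ITR
# (our assumption; the actual trial timing is not given in this abstract).
print(round(wolpaw_itr(4, 0.8859, 4.3), 2))
```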

SensPS: Sensing Personal Space Comfortable Distance between Human-Human Using Multimodal Sensors

2025 · Ergonomics, Social Psychology · Core
Ko Watanabe; Nico Förster; Shoya Ishimaru · arXiv
Personal space, also known as peripersonal space, is crucial in human social interaction, influencing comfort, communication, and social stress. Estimating and respecting personal space is essential for enhancing human-computer interaction (HCI) and smart environments. Personal space preferences vary due to individual traits, cultural background, and contextual factors. Advanced multimodal sensing technologies, including eye-tracking and wristband sensors, offer opportunities to develop adaptive systems that dynamically adjust to user comfort levels. Integrating physiological and behavioral data enables a deeper understanding of spatial interactions. This study develops a sensor-based model to estimate comfortable personal space and identifies key features influencing spatial preferences. Our findings show that multimodal sensors, particularly eye-tracking and physiological wristband data, can effectively predict personal space preferences, with eye-tracking data playing a more significant role. An experimental study involving controlled human interactions demonstrates that a Transformer-based model achieves the highest predictive accuracy (F1 score: 0.87) for estimating personal space. Eye-tracking features, such as gaze point and pupil diameter, emerge as the most significant predictors, while physiological signals from wristband sensors contribute marginally. These results highlight the potential for AI-driven personalization of social space in adaptive environments, suggesting that multimodal sensing can be leveraged to develop intelligent systems that optimize spatial arrangements in workplaces, educational institutions, and public settings. Future work should explore larger datasets, real-world applications, and additional physiological markers to enhance model robustness.
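As a rough illustration of the Transformer-based modelling mentioned above, the sketch below encodes windows of multimodal sensor features (gaze point, pupil diameter, wristband signals) with a small Transformer encoder and pools over time to produce a personal-space estimate. Feature count, window length, layer sizes, and the regression head are illustrative assumptions; the paper reports an F1 score, so the actual model presumably outputs class logits instead.

```python
# Sketch: small Transformer encoder over per-timestep multimodal features
# predicting a comfortable personal-space value. All dimensions are assumptions.
import torch
import torch.nn as nn

class PersonalSpaceRegressor(nn.Module):
    def __init__(self, n_features: int = 8, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)    # distance estimate (could equally be class logits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, n_features)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1)).squeeze(-1)       # mean-pool over time

model = PersonalSpaceRegressor()
window = torch.randn(2, 120, 8)              # two 120-step windows of sensor features
print(model(window).shape)                   # torch.Size([2])
```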

Marker-Based Safety Functionality for Human–Robot Collaboration Tasks by Means of Eye-Tracking Glasses

2025 · HCI, Robotics · Neon
Enrico Masi; Nhu Toan Nguyen; Eugenio Monari; Marcello Valori; Rocco Vertechy · Machines
Human–robot collaboration (HRC) is a steadily growing trend in robotics research. Despite the widespread use of collaborative robots on the market, several safety issues still need to be addressed to develop industry-ready applications that exploit the full potential of the technology. This paper focuses on hand-guiding applications, proposing an approach based on a wearable device to reduce the risk related to operator fatigue or distraction. The methodology aims at ensuring the operator's attention during the hand guidance of a robot end effector in order to avoid injuries. This goal is achieved by detecting a region of interest (ROI) and checking that the operator's gaze is kept within this area by means of a pair of eye-tracking glasses (Pupil Labs Neon, Berlin, Germany). The ROI is detected primarily by the tracking camera of the glasses, which acquires the positions of predefined ArUco markers and thus the corresponding contour area. If one or more markers are not detected, their positions are estimated using optical flow. The performance of the proposed system is first assessed with a motorized test bench that simulates the rotation of the operator's head in a repeatable way, and then in an HRC scenario used as a case study. The tests show that the system can effectively identify a planar ROI in the context of an HRC application in real time.
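The core check described here (markers define a contour, gaze must stay inside it) can be sketched with OpenCV's ArUco module and a point-in-polygon test. The dictionary, the minimum marker count, the placeholder frame, and the gaze coordinates below are assumptions; the optical-flow fallback for missed markers is only indicated in a comment, not implemented.

```python
# Sketch: detect ArUco markers in the scene-camera frame, build an ROI contour
# from the marker centers, and test whether the gaze point lies inside it.
# Requires OpenCV >= 4.7 for the ArucoDetector API (assumption).
import cv2
import numpy as np

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(ARUCO_DICT, cv2.aruco.DetectorParameters())

def gaze_inside_roi(frame_bgr: np.ndarray, gaze_px: tuple) -> bool:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None or len(ids) < 3:
        # The paper re-estimates missing marker positions with optical flow
        # (e.g. cv2.calcOpticalFlowPyrLK against the previous frame); omitted here.
        return False
    centers = np.array([c.reshape(4, 2).mean(axis=0) for c in corners], dtype=np.float32)
    hull = cv2.convexHull(centers)                      # ROI contour from marker centers
    return cv2.pointPolygonTest(hull, gaze_px, False) >= 0

# Example call: one placeholder scene-camera frame plus a gaze sample in pixels.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(gaze_inside_roi(frame, (960.0, 540.0)))
```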

Accurate Estimation of Fiducial Marker Positions Using Motion Capture System

2025 · Ergonomics, Motor Control, UI/UX · Core
Matus Tanonwong; Naoya Chiba; Koichi Hashimoto · 2025 IEEE/SICE International Symposium on System Integration (SII)
In this paper, we present a method for aligning the coordinates of multiple cameras and sensors into a unified coordinate system using a motion capture system. Our simulated convenience store environment includes cameras and sensors with distinct coordinate systems, necessitating coordinate alignment. The motion capture system identifies retroreflective markers, while the other cameras detect fiducial markers to determine position and orientation. Three optimization algorithms were evaluated for computing a transformation matrix that aligns camera coordinates to motion capture coordinates, with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm achieving the best results (average errors of 1.13 centimeters and 3.90 degrees). Comparisons with fiducial marker pose estimation using the open-source Pupil Core software indicate that our method is more robust and consistent, with lower repeatability errors. Additionally, we examine the estimation errors in relation to the distance of the fiducial markers from the camera in order to minimize these errors and improve the installation accuracy of cameras and sensors in our simulated environment. This approach enables precise determination of positions and orientations across integrated cameras, consistent with the motion capture system. The findings contribute to our ongoing project, which requires accurate system integration for customer behavior analysis.
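The BFGS alignment step can be illustrated as fitting a rigid transform (rotation vector plus translation) that maps camera-frame marker positions onto their motion-capture coordinates. The sketch below uses synthetic point correspondences and a mean-squared-error objective with scipy's BFGS; it is an illustration of this kind of alignment under those assumptions, not the authors' implementation.

```python
# Sketch: fit a rigid camera-to-mocap transform by minimizing point-to-point
# error with BFGS. The correspondences here are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
pts_cam = rng.uniform(-1.0, 1.0, size=(20, 3))           # marker positions in camera frame
R_true = Rotation.from_euler("xyz", [10, -5, 30], degrees=True)
t_true = np.array([0.5, -0.2, 1.0])
pts_mocap = R_true.apply(pts_cam) + t_true                # same markers in mocap frame

def alignment_error(params: np.ndarray) -> float:
    rotvec, t = params[:3], params[3:]
    mapped = Rotation.from_rotvec(rotvec).apply(pts_cam) + t
    return float(np.mean(np.sum((mapped - pts_mocap) ** 2, axis=1)))

res = minimize(alignment_error, x0=np.zeros(6), method="BFGS")
print("residual error:", res.fun)                         # near zero for this synthetic data

# Assemble the 4x4 homogeneous camera-to-mocap transform from the fit.
T = np.eye(4)
T[:3, :3] = Rotation.from_rotvec(res.x[:3]).as_matrix()
T[:3, 3] = res.x[3:]
print(T)
```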

Dashboard Vision: Using Eye-Tracking to Understand and Predict Dashboard Viewing Behaviors

2025 · HCI · Core
Manling Yang; Yihan Hou; Ling Li; Remco Chang; Wei Zeng · IEEE Transactions on Visualization and Computer Graphics
Dashboards serve as effective visualization tools for conveying complex information. However, there is a knowledge gap regarding how dashboard designs impact user engagement, leaving designers to rely on their own expertise. Saliency has been used to understand viewing behaviors and assess visualizations, yet existing saliency models are primarily designed for single-view visualizations. To address this, we conduct an eye-tracking study to quantify participants' viewing patterns on dashboards. We collect eye-movement data from 60 participants, each viewing 36 dashboards (16 representative dashboards shared across all participants and 20 unique to each), totaling 1,216 dashboards and 2,160 eye-movement data instances. Analysis of the data from the 16 dashboards viewed by all participants provides insights into how dashboard objects and layout designs influence viewing behaviors. Our analysis confirms known viewing patterns and reveals new patterns related to dashboard layout designs. Using the eye-movement data and identified patterns, we develop a saliency model to predict viewing behaviors on dashboards. Compared to state-of-the-art models for single-view visualizations, our model demonstrates overall improvement in prediction performance for dashboards. Finally, we propose potential dashboard design guidelines, illustrate an application case, and discuss general scanning strategies along with limitations and future work.