1-10 of 938 publications

Children With Bilateral Cochlear Implants Show Emerging Spatial Hearing of Stationary and Moving Sound

2025 | Audiology, Clinical | Core
Robel Alemu; Alan Blakeman; Angela Fung; Melissa Hazen; Jaina Negandhi; Blake Papsin; Sharon Cushing; Karen Gordon
Trends in Hearing
Spatial hearing in children with bilateral cochlear implants (BCIs) was assessed by: (a) comparing localization of stationary and moving sound, (b) investigating the relationship between sound localization and sensitivity to interaural level and timing differences (ILDs/ITDs), (c) evaluating effects of aural preference on sound localization, and (d) exploring head and eye (gaze) movements during sound localization. Children with BCIs (n = 42, mean age = 12.3 years) with limited duration of auditory deprivation and peers with typical hearing (controls; n = 37, mean age = 12.9 years) localized stationary and moving sound with unrestricted head and eye movements. Sensitivity to binaural cues was measured with a lateralization task using ILDs and ITDs. Spatial separation effects were measured by spondee-word recognition thresholds (SNR thresholds) when noise was presented in front (colocated/0°) or with 90° of left/right separation. BCI users had good speech reception thresholds (SRTs) in quiet but higher SRTs in noise than controls. Spatial separation of noise from speech revealed a greater advantage for the right ear across groups. BCI users showed increased errors localizing stationary sound and detecting moving sound direction compared to controls. Decreased ITD sensitivity occurred with poorer localization of stationary sound in BCI users. Gaze movements in BCI users were more random than those of controls for stationary and moving sounds. BCIs support symmetric hearing in children with limited duration of auditory deprivation and promote spatial hearing, albeit with impairments. Spatial hearing was thus considered to be “emerging.” Remaining challenges may reflect disruptions in ITD sensitivity and ineffective gaze movements.

Active Gaze Labeling: Visualization for Trust Building

2025 | Machine Learning, UI/UX | Invisible
Maurice Koch; Nan Cao; Daniel Weiskopf; Kuno Kurzhals
IEEE Transactions on Visualization and Computer Graphics
Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike. We conducted an expert review to assess labeling strategies and trust building.
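
The core of such a workflow can be sketched in a few lines: rank the unlabeled gaze segments by the classifier's predictive uncertainty so the annotator sees the most ambiguous examples first. The snippet below is a minimal illustration of that idea, not the authors' system (which additionally uses EyeFlower glyphs, dimensionality reduction, and interactive views); the feature names, array shapes, and classifier choice are assumptions.

```python
# Minimal sketch (not the authors' system): rank unlabeled gaze segments by
# predictive entropy so the most ambiguous ones are annotated first.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def entropy(p, eps=1e-12):
    """Shannon entropy of class-probability rows."""
    return -np.sum(p * np.log(p + eps), axis=1)

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 4))      # e.g. fixation duration, saccade amplitude, ... (assumed features)
y_labeled = rng.integers(0, 3, size=50)   # AOI labels from earlier annotation rounds
X_pool = rng.normal(size=(500, 4))        # unlabeled gaze segments

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_labeled, y_labeled)

uncertainty = entropy(clf.predict_proba(X_pool))
query_idx = np.argsort(uncertainty)[-10:]  # 10 most uncertain segments to show the annotator
print("Segments to annotate next:", query_idx)
```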

One size does not fit all: a support vector machine exploration of multiclass cognitive state classifications using physiological measures

2025 | Cognitive Psychology, Machine Learning | Core
Jonathan Vogl; Kevin O'Brien; Paul St. Onge
Frontiers in Neuroergonomics
Introduction: This study aims to develop and evaluate support vector machine (SVM) models for predicting cognitive workload (CWL) based on physiological data. The objectives include creating robust binary classifiers, expanding these to multiclass models for nuanced CWL prediction, and exploring the benefits of individualized models for enhanced accuracy. Cognitive workload assessment is critical for operator performance and safety in high-demand domains like aviation. Traditional CWL assessment methods rely on subjective reports or isolated metrics, which lack real-time applicability. Machine learning offers a promising solution for integrating physiological data to monitor and predict CWL dynamically. SVMs provide transparent and auditable decision-making pipelines, making them particularly suitable for safety-critical environments. Methods: Physiological data, including electrocardiogram (ECG) and pupillometry metrics, were collected from three participants performing tasks with varying demand levels in a low-fidelity aviation simulator. Binary and multiclass SVMs were trained to classify task demand and subjective CWL ratings, with models tailored to individual and combined subject datasets. Feature selection approaches evaluated the impact of streamlined input variables on model performance. Results: Binary SVMs achieved accuracies of 70.5% and 80.4% for task demand and subjective workload predictions, respectively, using all features. Multiclass models demonstrated comparable discrimination (AUC-ROC: 0.75–0.79), providing finer resolution across CWL levels. Individualized models outperformed combined-subject models, showing a 13% average improvement in accuracy. SVMs effectively predict CWL from physiological data, with individualized multiclass models offering superior granularity and accuracy. Discussion: These findings emphasize the potential of tailored machine learning approaches for real-time workload monitoring in fields that can justify the added time and expense required for personalization. The results support the development of adaptive automation systems in aviation and other high-stakes domains, enabling dynamic interventions to mitigate cognitive overload and enhance operator performance and safety.
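
As a rough illustration of the modeling setup described above (not the study's code or data), the sketch below trains a multiclass SVM on synthetic stand-ins for ECG- and pupillometry-derived features and reports a one-vs-rest AUC-ROC; the feature names and class labels are assumptions.

```python
# Illustrative sketch only: a multiclass SVM over synthetic "physiological"
# features (heart rate, HRV, mean pupil diameter as assumed stand-ins for the
# paper's ECG and pupillometry metrics), evaluated with one-vs-rest AUC-ROC.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))            # [heart_rate, hrv, pupil_diameter] (assumed)
y = rng.integers(0, 3, size=300)         # low / medium / high workload

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)
print("One-vs-rest AUC-ROC:", roc_auc_score(y_te, proba, multi_class="ovr"))
```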

Gaze Behaviors During Forehand Clear and Backhand Driving in Badminton: A Comparison between Beginner, Intermediate and Expert Players

2025 | Sports Science | Invisible
Yi Yang; Zhenxiang Guo; Meng Liu
Social Science Research Network
This study investigates differences in gaze behavior metrics during the performance of forehand clears and backhand drives among individuals at varying skill levels (beginners, intermediates, and experts). Participants completed 150 forehand clears and 150 backhand drives in a randomized sequence while wearing Pupil Invisible glasses for tracking gaze metrics. Gaze counts, velocity, trajectory length, pre-fixation counts (prefixcounts), and pre-fixation ratios (prefixratios) were measured. Stroke initiation and termination were accurately captured using a remotely controlled shuttlecock launcher. The analysis showed that gaze counts were significantly higher during forehand clears compared to backhand drives (p < 0.05), regardless of expertise level. Conversely, prefixcounts were significantly higher during backhand drives than forehand clears for all skill levels (p < 0.05). Beginners exhibited notably higher prefix counts than intermediates and experts for both stroke types (p < 0.05). Additionally, beginners and experts demonstrated significantly higher prefix ratios during backhand drives compared to forehand clears (p < 0.05), while intermediates did not show a significant difference between the two strokes (p > 0.05). Specifically, beginners had significantly higher prefixratios than intermediates and experts for both strokes (p < 0.05). No significant differences were found among the groups in terms of velocity and trajectory length during the execution of either stroke (p > 0.05). These results emphasize the critical role of gaze behavior in enhancing skilled performance in badminton. Future studies should investigate the causal links between modifications in gaze behavior and enhancements in performance in competitive environments.

AI-Powered Analysis of Eye Tracker Data in Basketball Game

2025 | Artificial Intelligence, Sports Science | Neon
Daniele Lozzi; Ilaria Di Pompeo; Martina Marcaccio; Michela Alemanno; Melanie Krüger; Giuseppe Curcio; Simone Migliore
Sensors
This paper outlines a new system for processing eye-tracking data from live basketball games with two pre-trained Artificial Intelligence (AI) models. The system is designed to process and extract features from data of basketball coaches and referees, recorded with the Pupil Labs Neon Eye Tracker, a device that is specifically optimized for video analysis. The research aims to present a tool useful for understanding their visual attention patterns during the game, what they are attending to, for how long, and their physiological responses, as evidenced by pupil size changes. AI models are used to monitor events and actions within the game and correlate these with eye-tracking data, providing insight into referees’ and coaches’ cognitive processes and decision-making. This research contributes to the knowledge of sport psychology and performance analysis by introducing the potential of AI-based eye-tracking analysis in sport with wearable technology and lightweight neural networks that are capable of running in real time.
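
One step such a pipeline needs is aligning detected game events with the eye-tracking stream by timestamp. The sketch below shows that alignment in its simplest form; the column names, window length, and sample values are assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of the alignment step: relate detected game events to the
# eye-tracking stream by timestamp and summarize pupil size around each event.
import pandas as pd

gaze = pd.DataFrame({
    "timestamp": [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0],   # seconds (assumed sampling)
    "pupil_diameter": [3.1, 3.2, 3.6, 3.8, 3.5, 3.3, 3.2],
})
events = pd.DataFrame({
    "timestamp": [1.2, 2.8],
    "event": ["foul_call", "timeout_signal"],            # assumed event labels
})

def mean_pupil_around(t, window=1.0):
    """Mean pupil diameter within +/- window seconds of an event."""
    mask = gaze["timestamp"].between(t - window, t + window)
    return gaze.loc[mask, "pupil_diameter"].mean()

events["mean_pupil"] = events["timestamp"].apply(mean_pupil_around)
print(events)
```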

From Percepts to Semantics: A Multi-modal Saliency Map to Support Social Robots’ Attention

2025 | Robotics | Core
Lorenzo Ferrini; Antonio Andriella; Raquel Ros; Séverin Lemaignan
ACM Transactions on Human-Robot Interaction
In social robots, visual attention expresses awareness of the scenario components and dynamics. As in humans, their attention should be driven by a combination of different attention mechanisms. In this paper, we introduce multi-modal saliency maps, i.e. spatial representations of saliency that dynamically integrate multiple attention sources depending on the context. We provide the mathematical formulation of the model and an open-source software implementation. Finally, we present an initial exploration of its potential in social interaction scenarios with humans, and evaluate its implementation.
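
The integration idea can be illustrated with a context-weighted sum of per-modality saliency maps. The sketch below is not the paper's open-source implementation; the modalities, weights, and normalization choice are illustrative assumptions.

```python
# Minimal sketch of a multi-modal saliency map: a context-weighted sum of
# per-modality maps, renormalized, with the attention target at its maximum.
import numpy as np

H, W = 48, 64
rng = np.random.default_rng(2)
saliency = {
    "visual_motion": rng.random((H, W)),
    "detected_faces": rng.random((H, W)),
    "sound_direction": rng.random((H, W)),
}

def combine(maps, weights):
    """Weighted sum of saliency maps, rescaled to [0, 1]."""
    total = sum(weights[name] * maps[name] for name in maps)
    return (total - total.min()) / (total.max() - total.min() + 1e-9)

# Context-dependent weighting (assumed): during conversation, faces and sound dominate.
conversation_weights = {"visual_motion": 0.2, "detected_faces": 0.5, "sound_direction": 0.3}
attention_map = combine(saliency, conversation_weights)
target = np.unravel_index(np.argmax(attention_map), attention_map.shape)
print("Most salient pixel (row, col):", target)
```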

Intermittent control and retinal optic flow when maintaining a curvilinear path

2025 | Cognitive Psychology, Oculomotor | Core, VR
Björnborg Nguyen; Ola Benderius
Scientific Reports
The topic of how humans navigate using vision has been studied for decades. Research has identified that the emergent patterns of retinal optic flow arising from gaze behavior may play an essential role in human curvilinear locomotion. However, the link to control has remained poorly understood. Lately, it has been shown that human locomotor behavior is corrective, formed from intermittent decisions and responses. A simulated virtual reality experiment was conducted in which fourteen participants drove through a texture-rich, simplified road environment with left and right curve bends. The goal was to investigate how human intermittent lateral control can be associated with retinal optic flow-based cues and vehicular heading as sources of information. This work reconstructs dense retinal optic flow using a numerical estimation of optic flow combined with measured gaze behavior. By combining retinal optic flow with the drivable lane surface, a cross-correlational relation to intermittent steering behavior could be observed. In addition, a novel method of identifying constituent ballistic corrections using particle swarm optimization was demonstrated to analyze the incremental correction-based behavior. Through time delay analysis, our results show a human response time of approximately 0.14 s for retinal optic flow-based cues and 0.44 s for heading-based cues, measured from stimulus onset to steering correction onset. These response times were further delayed by 0.17 s when the vehicle-fixed steering wheel was visibly removed. In contrast to classical continuous control strategies, our findings support and argue for the intermittency property in human neuromuscular control of muscle synergies, through the principle of satisficing behavior: to actuate only when there is a perceived need for it. This is aligned with the human sustained sensorimotor model, which uses readily available information and internal models to produce informed responses through evidence accumulation, initiating appropriate ballistic corrections even amidst another correction.
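
The delay analysis can be illustrated by locating the peak of the cross-correlation between a perceptual cue signal and the steering signal. The sketch below uses synthetic signals, an assumed 100 Hz sample rate, and the 0.14 s figure from the abstract as the planted lag; it is not the study's reconstruction pipeline.

```python
# Illustrative sketch: estimate the lag between a cue signal and the steering
# response from the peak of their cross-correlation.
import numpy as np

fs = 100.0                                   # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
cue = rng.normal(size=t.size)                # e.g. retinal-optic-flow-based error cue (synthetic)
true_delay = int(0.14 * fs)                  # plant a 0.14 s response delay, as reported
steer = np.roll(cue, true_delay) + 0.1 * rng.normal(size=t.size)
steer[:true_delay] = 0.0                     # drop samples wrapped around by np.roll

lags = np.arange(-t.size + 1, t.size)        # lag axis for mode="full"
xcorr = np.correlate(steer, cue, mode="full")
estimated_delay = lags[np.argmax(xcorr)] / fs
print(f"Estimated response delay: {estimated_delay:.2f} s")
```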

Seeing Meaning: How Congruent Robot Speech and Gestures Impact Human Intuitive Understanding of Robot Intentions

2025 | Robotics | Core
Marieke Van Otterdijk; Bruno Laeng; Diana Saplacan-Lindblom; Adel Baselizadeh; Jim Tørresen
International Journal of Social Robotics
Social communication between humans and robots has become critical as robots are integrated into our daily lives as assistants. There is a need to explore how users intuitively understand the behavior of a robot and how social context affects that understanding. This study measures mental effort (as indexed by pupil response) and processing time (the time taken to provide the correct answer) to investigate participants’ intuitive understanding of the robot’s gestures. Thirty-two participants played a charades game with a TIAGo robot, during which their eyes were tracked. Our findings show a relationship between mental effort and processing time, and indicate that robot gestures, the congruence of speech and behavior, and the correctness of interpreting robot behavior influence intuitive understanding. Furthermore, we found that people focused on the robot’s limb movement. These findings highlight which features contribute to intuitive interaction with a robot and can thus improve its efficiency.

SensPS: Sensing Personal Space Comfortable Distance Between Human-Human Using Multimodal Sensors

2025 | Social Psychology | Core
Ko Watanabe; Nico Förster; Shoya Ishimaru; Don Harris; Wen-Chin Li
Conference Paper
Personal space, also known as peripersonal space, is crucial in human social interaction, influencing comfort, communication, and social stress. Estimating and respecting personal space is essential for enhancing human-computer interaction (HCI) and smart environments. Personal space preferences vary due to individual traits, cultural background, and contextual factors. Advanced multimodal sensing technologies, including eye-tracking and wristband sensors, offer opportunities to develop adaptive systems that dynamically adjust to user comfort levels. Integrating physiological and behavioral data enables a deeper understanding of spatial interactions. This study aims to develop a sensor-based model to estimate comfortable personal space and identify key features influencing spatial preferences. Here we show that multimodal sensors, particularly eye-tracking and physiological wristband data, can effectively predict personal space preferences, with eye-tracking data playing a more significant role. Our experimental study involving controlled human interactions demonstrates that the Transformer model achieves the highest predictive accuracy (F1 score: 0.87) for estimating personal space. Eye-tracking features, such as gaze point and pupil diameter, emerge as the most significant predictors, while physiological signals from wristband sensors contribute marginally. These findings highlight the potential for AI-driven personalization of social space in adaptive environments. Our results suggest that multimodal sensing can be leveraged to develop intelligent systems that optimize spatial arrangements in workplaces, educational institutions, and public settings. Future work should explore larger datasets, real-world applications, and additional physiological markers to enhance model robustness.
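
As a schematic of the prediction task (deliberately not the paper's Transformer model), the sketch below feeds assumed eye-tracking and wristband features to a simple gradient-boosting classifier and scores it with F1; the feature names and the binary comfort label are assumptions.

```python
# Simple stand-in, not the paper's Transformer: classify whether a distance
# felt comfortable from multimodal features and report F1 plus feature importances.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
# [gaze_x, gaze_y, pupil_diameter, heart_rate, eda]  (assumed feature set)
X = rng.normal(size=(400, 5))
y = rng.integers(0, 2, size=400)          # 1 = distance felt comfortable (assumed label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=4)
clf = GradientBoostingClassifier(random_state=4).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
print("Feature importances:", clf.feature_importances_)
```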

Eye Movements as Indicators of Deception: A Machine Learning Approach

2025 | Experimental Psychology, Social Psychology | Neon
Valentin Foucher; Santiago de Leon-Martinez; Robert Moro
Eye Tracking Research and Applications 2025
Gaze may enhance the robustness of lie detectors but remains under-studied. This study evaluated the efficacy of AI models (using fixations, saccades, blinks, and pupil size) for detecting deception in Concealed Information Tests across two datasets. The first, collected with Eyelink 1000, contains gaze data from a computerized experiment where 87 participants revealed, concealed, or faked the value of a previously selected card. The second, collected with Pupil Neon, involved 36 participants performing a similar task but facing an experimenter. XGBoost achieved accuracies up to 74% in a binary classification task (Revealing vs. Concealing) and 49% in a more challenging three-class task (Revealing vs. Concealing vs. Faking). Feature analysis identified saccade number, duration, amplitude, and maximum pupil size as the most important for deception prediction. These results demonstrate the feasibility of using gaze and AI to enhance lie detectors and encourage future research that may improve on this.
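
The classification setup can be sketched with a small XGBoost model over the gaze features named in the abstract; the synthetic data, labels, and hyperparameters below are assumptions, not the study's datasets or tuning.

```python
# Hedged sketch of the three-class deception setup using gaze-derived features.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
# [saccade_count, saccade_duration, saccade_amplitude, max_pupil_size]  (assumed order)
X = rng.normal(size=(500, 4))
y = rng.integers(0, 3, size=500)          # 0 = revealing, 1 = concealing, 2 = faking

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=5)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("Three-class accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```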