1-10 of 871 publications

A temporal quantitative analysis of visuomotor behavior during four twisting somersaults in elite and sub-elite trampolinists

2024 · Sports Science · Invisible
Eve Charbonneau; Mickaël Begon; Thomas Romeas · Human Movement Science
Vision has previously been correlated with performance in acrobatic sports, highlighting visuomotor expertise adaptations. However, we still poorly understand the visuomotor strategies athletes use while executing twisting somersaults, even though this knowledge might be helpful for skill development. Thus, the present study sought to identify the differences in gaze behavior between elite and sub-elite trampolinists during the execution of four acrobatics of increasing difficulty. Seventeen inertial measurement units and a wearable eye-tracker were used to record the body and gaze kinematics of 17 trampolinists (8 elites, 9 sub-elites). Six typical metrics were analyzed using a mixed analysis of variance (ANOVA) with Expertise as the between-subject factor and Acrobatics as the within-subject factor. To complement this analysis, advanced temporal eye-tracking metrics were reported, such as the dwell time on areas of interest, the scan path on the trampoline bed, the temporal evolution of the gaze orientation endpoint (SPGO), and the time spent executing specific neck and eye strategies. A significant main effect of Expertise was evidenced in only one of the typical metrics: elite athletes exhibited a higher number of fixations than sub-elites (p = 0.033). Significant main effects of Acrobatics were observed on all metrics (p < 0.05), revealing that gaze strategies are task-dependent in trampolining. The recordings of eye and neck movements performed in this study confirmed the use of “spotting” at the beginning and end of the acrobatics. They also revealed a unique sport-specific visual strategy that we termed self-motion detection, which consists of keeping the eyes still during fast head rotations and was mainly used by trampolinists during the twisting phase. This study proposes a detailed exploration of trampolinists' gaze behavior in highly realistic settings and a temporal description of the visuomotor strategies, to enhance understanding of perception-action interactions during the execution of twisting somersaults.
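
As a rough illustration of the statistical design described above, the sketch below runs a mixed ANOVA with Expertise as the between-subject factor and Acrobatics as the within-subject factor on synthetic stand-in data; the pingouin library, the column names, and the metric values are assumptions, not the authors' code or data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic stand-in data: 17 trampolinists (8 elite, 9 sub-elite) x 4
# acrobatics, with one value of a typical gaze metric per cell.
rng = np.random.default_rng(0)
rows = []
for p in range(17):
    expertise = "elite" if p < 8 else "sub-elite"
    for acro in ["A1", "A2", "A3", "A4"]:
        rows.append({"participant": p, "expertise": expertise, "acrobatic": acro,
                     "n_fixations": rng.poisson(5 if expertise == "elite" else 4)})
df = pd.DataFrame(rows)

# Mixed ANOVA: between-subject Expertise, within-subject Acrobatics
aov = pg.mixed_anova(data=df, dv="n_fixations", within="acrobatic",
                     subject="participant", between="expertise")
print(aov.round(3))
```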

Better understanding fall risk: AI-based computer vision for contextual gait assessment

2024 · Artificial Intelligence, Motor Control · Invisible
Jason Moore; Peter McMeekin; Samuel Stuart; Rosie Morris; Yunus Celik; Richard Walker; Victoria Hetherington; Alan Godfrey · Maturitas
Contemporary research to better understand free-living fall risk in Parkinson's disease (PD) often relies on wearable inertial measurement units (IMUs) to quantify useful temporal and spatial gait characteristics (e.g., step time, step length). Although IMUs help capture some intrinsic PD fall-risk factors, on their own they are limited because they provide no information on extrinsic factors (e.g., obstacles). Here, we report on the use of ergonomic wearable video-based eye-tracking glasses coupled with AI-based computer vision methods to provide that information efficiently and ethically in free-living, home-based environments, and so better contextualize IMU-based data in a small group of people with PD. The use of video and AI within PD research can be seen as an evolutionary step toward understanding fall risk more comprehensively.

Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning

2024 · Artificial Intelligence, Ophthalmology · Core
Ye Tian; Anurag Sharma; Shubh Mehta; Shubham Kaushal; Jeffrey M. Liebmann; George A. Cioffi; Kaveri A. Thakoor · Translational Vision Science & Technology
To propose a deep learning–based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions. We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training. Our approach predicted most-fixated regions in OCT reports with a precision of 0.723, recall of 0.562, and F1 score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance. Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible. By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.
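
The grid quantization step lends itself to a short sketch: fixation coordinates on a report are mapped to cells of an 11 × 11 grid to form a training target. This is a minimal reading of the setup described above; the array shapes, names, and binary-target choice are assumptions, not the authors' implementation.

```python
import numpy as np

GRID = 11

def fixations_to_grid(fixations, width, height, grid=GRID):
    """fixations: (N, 2) array of (x, y) pixel coordinates on the report.
    Returns a (grid, grid) binary map marking cells that received fixations."""
    target = np.zeros((grid, grid), dtype=np.float32)
    cols = np.clip((fixations[:, 0] / width * grid).astype(int), 0, grid - 1)
    rows = np.clip((fixations[:, 1] / height * grid).astype(int), 0, grid - 1)
    target[rows, cols] = 1.0
    return target

# Example: three fixations on a hypothetical 1650 x 1275 px report page
fix = np.array([[120.0, 300.0], [820.5, 640.2], [1500.0, 90.0]])
print(fixations_to_grid(fix, width=1650, height=1275))
```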

Experimental road safety study of the actual driver reaction to the street ads using eye tracking, multiple linear regression and decision trees methods

2024 · Traffic Psychology · Core
Sharaf AlKheder · Expert Systems with Applications
The article describes a naturalistic driving study conducted in Kuwait, in which 34 participants wore a mobile eye tracker so that the effect of roadside advertisements on driver attention could be monitored. Eye-tracking fixations are the main dependent variable, examined as a function of driving and roadside characteristics, particularly billboards, speed, and so forth. The results were analyzed using traditional statistics (an ANOVA test and multiple linear regression) and machine learning (decision-tree estimation methods). The study found that road advertisements negatively affect driver attention and thus road safety, and it also investigated how the level of safety varies with the type and size of advertisement. All of the estimates revealed that different aspects of advertising have a detrimental impact on drivers' behavior, and that fixation duration and the rate of acceleration before viewing an ad are affected by advertisement type and size, respectively.
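
For orientation, here is a minimal sketch of the two analysis families named above (multiple linear regression and a decision tree) applied to fixation duration as a function of advertisement characteristics; the scikit-learn choice, the feature names, and the synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in data: one row per billboard encounter.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ad_type": rng.choice(["static", "digital", "none"], n),
    "ad_size_m2": rng.uniform(5, 40, n),
    "speed_kmh": rng.uniform(40, 120, n),
})
# Toy response: digital and large ads attract longer fixations.
df["fixation_ms"] = (150 + 40 * (df.ad_type == "digital") + 3 * df.ad_size_m2
                     - 0.5 * df.speed_kmh + rng.normal(0, 20, n))

X = pd.get_dummies(df[["ad_type", "ad_size_m2", "speed_kmh"]], columns=["ad_type"])
lin = LinearRegression().fit(X, df["fixation_ms"])
tree = DecisionTreeRegressor(max_depth=4).fit(X, df["fixation_ms"])
print(dict(zip(X.columns, lin.coef_.round(2))))          # regression coefficients
print(dict(zip(X.columns, tree.feature_importances_.round(2))))  # tree importances
```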

NeuroSight: Combining Eye-Tracking and Brain-Computer Interfaces for Context-Aware Hand-Free Camera Interaction

2024 · Artificial Intelligence, HCI · Core
Benedict Leung; Mariana Shimabukuro; Christopher Collins · Conference Paper
Technology has blurred the boundaries between our work and private lives. Touch-free interaction can lessen the divide between technology and reality and bring us closer to the immersion we once had. This work explores combining eye-tracking glasses with a brain-computer interface to enable hands-free interaction with a camera, without holding or touching it. Some camera modes are difficult to operate without eye-tracking: visual search, for example, normally requires selecting an object or region in the scene by touching the phone's touchscreen. Here, the fixation point is used instead to select the intended region. Fixations can also provide context for the mode the user wants to execute; for instance, fixations on foreign text could indicate translation mode. Ultimately, multiple touchless gestures create more fluent transitions between our life experiences and technology.
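
A toy sketch of the context-aware idea: the current fixation point picks a region of the camera frame, and the class of the fixated object suggests a camera mode. The detector output format and the mode names below are hypothetical illustrations, not NeuroSight's implementation.

```python
def select_mode(fixation_xy, detections):
    """detections: list of (label, (x0, y0, x1, y1)) boxes from any object
    detector. Returns the camera mode implied by the fixated object."""
    fx, fy = fixation_xy
    for label, (x0, y0, x1, y1) in detections:
        if x0 <= fx <= x1 and y0 <= fy <= y1:
            if label == "text_foreign":
                return "translate"      # fixation on foreign text -> translation mode
            return "visual_search"      # fixation on any other object -> visual search
    return "photo"                      # no fixated object -> default capture mode

# Hypothetical usage: fixation lands inside a detected block of foreign text
print(select_mode((410, 250), [("text_foreign", (380, 220, 600, 300))]))
```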

EyeWithShut: Exploring Closed Eye Features to Estimate Eye Position

2024 · Gaze Estimation · Neon
Mingyu Han; Ian Oakley · UbiComp Companion ’24
Smart glasses enabling eyes-only input are growing in popularity. The underlying input technique powering the majority of interactions in this modality is dwell-based selection. However, this approach suffers from the Midas touch: the unintentional triggering of input during exploratory gaze. We suggest that multimodal input based on eye motions performed while the eyes are shut can trigger selections and overcome this problem without requiring additional input devices or modalities. To explore this idea, this paper captures a dataset of labeled closed-eye images corresponding to eye motions in various directions from nine participants. This was achieved by recording images from a binocular eye-tracker while participants performed standard eye-targeting tasks with one eye closed: the image from the open eye serves as ground truth for the closed eye's location. To understand the scope of closed-eye input, we explore this data according to three dimensions: the distance (near/far), width (narrow/broad), and direction (horizontal/vertical) of single captures of eye positions. Our results indicate that horizontal and vertical eye positions can be accurately recovered from static closed-eye images, but only after relatively large angular eye movements.
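
The labeling scheme is easy to picture in code: each closed-eye frame is paired with the temporally nearest gaze sample from the open eye, which serves as its ground-truth label. The sketch below assumes timestamped streams; the function and variable names are hypothetical, not the authors' data format.

```python
import numpy as np

def pair_closed_open(closed_ts, closed_imgs, open_ts, open_gaze):
    """For each closed-eye frame timestamp, take the temporally nearest
    open-eye gaze sample (horizontal/vertical angle) as its label."""
    idx = np.searchsorted(open_ts, closed_ts)
    idx = np.clip(idx, 1, len(open_ts) - 1)
    left = idx - 1
    nearer = np.where(
        np.abs(open_ts[left] - closed_ts) <= np.abs(open_ts[idx] - closed_ts),
        left, idx)
    return closed_imgs, open_gaze[nearer]

# Tiny synthetic example: 3 closed-eye frames, 5 open-eye gaze samples
closed_ts = np.array([0.10, 0.20, 0.30])
closed_imgs = np.zeros((3, 64, 64))           # placeholder images
open_ts = np.array([0.08, 0.13, 0.19, 0.26, 0.31])
open_gaze = np.array([[0, 0], [5, 0], [10, 2], [15, 4], [20, 5]], float)
imgs, labels = pair_closed_open(closed_ts, closed_imgs, open_ts, open_gaze)
print(labels)  # nearest-in-time gaze label per closed-eye frame
```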

Including visual criteria into predictive simulation of acrobatics to enhance the realism of optimal techniques

2024 · Sports Science · Invisible
Eve Charbonneau; Thomas Romeas; Annie Ross; Mickaël Begon · SportRxiv
To perform their acrobatics successfully, trampolinists make real-time corrections based mainly on visual feedback. Despite athletes’ heavy reliance on visual cues, visual criteria have not yet been introduced into predictive simulations. We aimed to introduce visual criteria into predictive simulations of the backward somersault with a twist and the double backward somersault with two twists in pike position, to generate innovative and safe optimal acrobatic techniques. Different weightings of the visual vs. kinematic objectives were tested to find a good compromise. Four international coaches and two international judges assessed animations of the optimal techniques and of an elite athlete’s technique, providing insights into the acceptability of the optimal techniques. For the most complex acrobatics, coaches found the optimal techniques more efficient for aerial twist creation. However, they perceived them as less safe, less realistic, similarly aesthetic, and similarly appropriate for visual information intake compared with the athlete’s technique. The scores given by the judges were twice as high for the optimal technique as for the athlete’s technique. This study highlights the importance of including visual criteria in the optimization of acrobatics to improve the relevance of optimal techniques for the sporting community.
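
The weighting scheme can be pictured as a convex blend of a visual cost and a kinematic cost, swept over candidate weights to find the compromise the study describes. Both cost functions below are illustrative stand-ins, not the study's actual optimal-control objective.

```python
import numpy as np

def visual_cost(q):
    # Hypothetical stand-in: penalize a "gaze deviation" coordinate
    # (last column) drifting away from the target, e.g. the trampoline bed.
    return float(np.sum(q[:, -1] ** 2))

def kinematic_cost(q):
    # Hypothetical stand-in: penalize squared second differences of the
    # joint coordinates, a crude proxy for effort/smoothness.
    return float(np.sum(np.diff(q[:, :-1], n=2, axis=0) ** 2))

def total_cost(q, w_visual):
    # Convex blend of the two objectives, as in the tested weightings
    return w_visual * visual_cost(q) + (1.0 - w_visual) * kinematic_cost(q)

q = np.random.default_rng(0).normal(size=(100, 5))  # mock joint trajectory
for w in (0.0, 0.25, 0.5, 0.75, 1.0):               # sweep the compromise
    print(w, round(total_cost(q, w), 2))
```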

Resilience Scores from Wearable Biosignal Sensors for Decision Support of Worker Allocation in Production

2024 · Applied Psychology · Neon
Lucas Paletta; Michael Schneeberger; Martin Pszeida; Jochen Mosbacher; Florian Haid; Julia Tschuden; Herwig Zeiner · AHFE (2024) International Conference
Mental health and well-being have to be considered on an equal footing when designing digitalized workplaces in production. We present the configuration of selected wearable sensor technologies, together with the architecture of the Intelligent Sensor Box, to enable monitoring of resilience scores at the production site. The wearables include a Garmin vivosmart 5 fitness tracker providing cardiovascular data, the greenTEG CORE body temperature sensor, Pupil Labs Neon eye tracking glasses, and an optional sanSirro QUS smart shirt with textile biosignal measurements of vital parameters. We provide a framework that integrates a sequence of daily strain scores within a pre-determined time window of the preceding working period into a current resilience score. We present the estimation of the daily strain score based on wearable sensing data captured in the Human Factors Lab in Austria during activities characteristic of the car production workplace. Furthermore, we demonstrate how resilience scores would impact decision-making in the use case of daily dynamic worker allocation.
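
One plausible reading of the score integration is a decaying-weight aggregation of daily strain over the preceding working period; the window length, decay factor, and [0, 1] scaling below are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def resilience_score(daily_strain, window=7, decay=0.8):
    """daily_strain: 1-D sequence, most recent day last, values in [0, 1].
    Returns a resilience score in [0, 1]; higher strain lowers resilience."""
    recent = np.asarray(daily_strain[-window:], dtype=float)
    weights = decay ** np.arange(len(recent) - 1, -1, -1)  # newest day heaviest
    mean_strain = float(np.sum(weights * recent) / np.sum(weights))
    return 1.0 - mean_strain

# Hypothetical week of daily strain scores from the wearable pipeline
print(resilience_score([0.2, 0.3, 0.6, 0.7, 0.5, 0.4, 0.8]))
```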

EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing

2024 · Gaze Estimation · Core
Argha Sen; Nuwan Bandara; Ila Gokarn; Thivya Kandappu; Archan Misra · arXiv
Eye-tracking technology has gained significant attention in recent years due to its wide range of applications in human-computer interaction, virtual and augmented reality, and wearable health. Traditional RGB camera-based eye-tracking systems often struggle with poor temporal resolution and computational constraints, limiting their effectiveness in capturing rapid eye movements. To address these limitations, we propose EyeTrAES, a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement that shows significant kinematic variance. One of EyeTrAES's highlights is a novel adaptive windowing/slicing algorithm that accumulates just the right amount of descriptive asynchronous event data within an event frame, across a wide range of eye movement patterns. EyeTrAES then applies lightweight image processing functions over accumulated event frames from just a single eye to perform pupil segmentation and tracking. We show that these methods boost pupil tracking fidelity by more than 6%, achieving IoU ≈ 92%, while incurring at least 3x lower latency than competing pure event-based eye tracking alternatives [38]. We additionally demonstrate that the microscopic pupillary motion captured by EyeTrAES exhibits distinctive variations across individuals and can thus serve as a biometric fingerprint. For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics, comprising a sliding window of pupil (location, velocity, acceleration) triples. Experimental studies with two different datasets demonstrate that the EyeTrAES-based authentication technique can simultaneously achieve high authentication accuracy (≈0.82) and low processing latency (≈12 ms), significantly outperforming multiple state-of-the-art baselines.
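
The authentication feature vector can be sketched directly from the description above: a sliding window of pupil (location, velocity, acceleration) triples feeding a per-user Random Forest. The window size, sampling, and synthetic tracks below are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kinematic_features(xy, window=20):
    """xy: (T, 2) pupil-center track. Returns per-window feature vectors of
    concatenated location, velocity, and acceleration samples."""
    vel = np.diff(xy, axis=0)
    acc = np.diff(vel, axis=0)
    feats = [np.concatenate([xy[i:i + window].ravel(),
                             vel[i:i + window].ravel(),
                             acc[i:i + window].ravel()])
             for i in range(0, len(acc) - window, window)]
    return np.array(feats)

# Mock enrollment: the genuine user's track vs an impostor's track
rng = np.random.default_rng(1)
genuine = kinematic_features(np.cumsum(rng.normal(size=(600, 2)), axis=0))
impostor = kinematic_features(np.cumsum(rng.normal(1, 2, size=(600, 2)), axis=0))
X = np.vstack([genuine, impostor])
y = np.array([1] * len(genuine) + [0] * len(impostor))
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```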

Dermatologist-like explainable AI enhances melanoma diagnosis accuracy: eye-tracking study

2024 · Artificial Intelligence · Core
Tirtha Chanda; Sarah Haggenmueller; Tabea-Clara Bucher; Tim Holland-Letz; Harald Kittler; Philipp Tschandl; Markus V. Heppt; Carola Berking; Jochen S. Utikal; Bastian Schilling; Claudia Buerger; Cristian Navarrete-Dechent; Matthias Goebeler; Jakob Nikolas Kather; Carolin V. Schneider; Benjamin Durani; Hendrike Durani; Martin Jansen; Juliane Wacker; Joerg Wacker; Reader Study Consortium; Titus J. Brinker · arXiv
Artificial intelligence (AI) systems have substantially improved dermatologists' diagnostic accuracy for melanoma, with explainable AI (XAI) systems further enhancing clinicians' confidence and trust in AI-driven decisions. Despite these advancements, there remains a critical need for objective evaluation of how dermatologists engage with both AI and XAI tools. In this study, 76 dermatologists participated in a reader study, diagnosing 16 dermoscopic images of melanomas and nevi using an XAI system that provides detailed, domain-specific explanations. Eye-tracking technology was employed to assess their interactions. Diagnostic performance was compared with that of a standard AI system lacking explanatory features. Our findings reveal that XAI systems improved balanced diagnostic accuracy by 2.8 percentage points relative to standard AI. Moreover, diagnostic disagreements with AI/XAI systems and complex lesions were associated with elevated cognitive load, as evidenced by increased ocular fixations. These insights have significant implications for clinical practice, the design of AI tools for visual tasks, and the broader development of XAI in medical diagnostics.