1-10 of 885 publications

A temporal quantitative analysis of visuomotor behavior during four twisting somersaults in elite and sub-elite trampolinists

2024 · Sports Science · Invisible
Eve Charbonneau; Mickaël Begon; Thomas Romeas · Human Movement Science
Vision has previously been correlated with performance in acrobatic sports, highlighting visuomotor expertise adaptations. However, we still poorly understand the visuomotor strategies athletes use while executing twisting somersaults, even though this knowledge might be helpful for skill development. Thus, the present study sought to identify the differences in gaze behavior between elite and sub-elite trampolinists during the execution of four acrobatics of increasing difficulty. Seventeen inertial measurement units and a wearable eye-tracker were used to record the body and gaze kinematics of 17 trampolinists (8 elites, 9 sub-elites). Six typical metrics were analyzed using a mixed analysis of variance (ANOVA) with Expertise as the inter-subject and Acrobatics as the intra-subject factor. To complement this analysis, advanced temporal eye-tracking metrics are reported, such as the dwell time on areas of interest, the scan path on the trampoline bed, the temporal evolution of the gaze orientation endpoint (SPGO), and the time spent executing specific neck and eye strategies. A significant main effect of Expertise was evidenced in only one of the typical metrics, where elite athletes exhibited a higher number of fixations compared to sub-elites (p = 0.033). Significant main effects of Acrobatics were observed on all metrics (p < 0.05), revealing that gaze strategies are task-dependent in trampolining. The recordings of eye and neck movements performed in this study confirmed the use of "spotting" at the beginning and end of the acrobatics. They also revealed a unique sport-specific visual strategy that we termed self-motion detection. This strategy consists of not moving the eyes during fast head rotations and was mainly used by trampolinists during the twisting phase. This study proposes a detailed exploration of trampolinists' gaze behavior in highly realistic settings and a temporal description of the visuomotor strategies to enhance understanding of perception-action interactions during the execution of twisting somersaults.

Slippage-robust linear features for eye tracking

2024 · Gaze Estimation · Core
Tawaana Gustad Homavazir; V.S. Raghu Parupudi; Surya L.S.R. Pilla; Pamela Cosman · Expert Systems with Applications

Eyeball Kinematics Informed Slippage Robust Gaze Tracking

2024 · Gaze Estimation · Core
Wei Zhang; Jiaxi Cao; Xiang Wang; Pengfei Xia; Bin Li; Xun Chen · IEEE Sensors Journal

Evaluation of Autonomous Vehicle Takeover Performance in Work-Zone Environment

2024 · Driving · Neon
Viktor Nagy; Diovane Mateus Da Luz; Ágoston Pál Sándor; Attila Borsos · SMTS 2024

Better understanding fall risk: AI-based computer vision for contextual gait assessment

2024 · Artificial Intelligence, Motor Control · Invisible
Jason Moore; Peter McMeekin; Samuel Stuart; Rosie Morris; Yunus Celik; Richard Walker; Victoria Hetherington; Alan Godfrey · Maturitas
Contemporary research to better understand free-living fall risk assessment in Parkinson's disease (PD) often relies on the use of wearable inertial-based measurement units (IMUs) to quantify useful temporal and spatial gait characteristics (e.g., step time, step length). Although use of IMUs is useful to understand some intrinsic PD fall-risk factors, their use alone is limited as they do not provide information on extrinsic factors (e.g., obstacles). Here, we update on the use of ergonomic wearable video-based eye-tracking glasses coupled with AI-based computer vision methodologies to provide information efficiently and ethically in free-living home-based environments to better understand IMU-based data in a small group of people with PD. The use of video and AI within PD research can be seen as an evolutionary step to improve methods to understand fall risk more comprehensively.

Eye Movement Assessment Methodology Based on Wearable EEG Headband Data Analysis

2024 · HCI, Neuroscience & Neuropsychology · Invisible
Vladimir Romaniuk; Alexey Kashevnik · 2024 36th Conference of Open Innovations Association (FRUCT)

Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning

2024 · Artificial Intelligence, Ophthalmology · Core
Ye Tian; Anurag Sharma; Shubh Mehta; Shubham Kaushal; Jeffrey M. Liebmann; George A. Cioffi; Kaveri A. Thakoor · Translational Vision Science & Technology
To propose a deep learning–based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions. We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training. Our approach predicted most-fixated regions in OCT reports with precision of 0.723, recall of 0.562, and f1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance. Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible. By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.
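The grid-based quantization of eye tracking coordinates described in this abstract can be sketched as follows. The function name and the assumption that fixations arrive as pixel coordinates on the report image are mine; only the 11 × 11 grid size comes from the abstract.

```python
def fixation_to_grid(x, y, width, height, grid=11):
    """Map a fixation at pixel coordinates (x, y) on a report image of
    size width x height to a (row, col) cell in a grid x grid layout.
    Fixations on the far edge are clamped into the last cell."""
    col = min(int(x / width * grid), grid - 1)
    row = min(int(y / height * grid), grid - 1)
    return row, col
```

Each fixation then becomes a one-hot cell in an 11 × 11 target map, which is the kind of label a U-Net-style model can be trained to predict for unseen reports.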

Experimental road safety study of the actual driver reaction to the street ads using eye tracking, multiple linear regression and decision trees methods

2024 · Traffic Psychology · Core
Sharaf AlKheder · Expert Systems with Applications
The article describes the results of a naturalistic driving study conducted in Kuwait, in which 34 participants wore a mobile eye tracker to monitor the effect of roadside advertisements on driver attention. Eye-tracking fixations are the main dependent variable, examined as a function of driving and roadside characteristics, particularly billboards, speed, and so forth. The results obtained were analyzed using traditional statistics (ANOVA test and multiple linear regression) and machine learning (decision tree estimation methods). From the results, it was found that road advertisements negatively affect driver attention and thus road safety. How the level of safety varies with the type and size of advertisement was also investigated. As a consequence, all of the estimates revealed that different aspects of advertising have a detrimental impact on drivers' behavior, and that the duration of fixation and the rate of acceleration before viewing an ad are impacted by advertising type and size, respectively.
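The multiple linear regression step described in this abstract can be sketched with ordinary least squares. The predictor coding (digital vs. static ad, large vs. small ad, speed) and all sample values below are hypothetical, for illustration only; they are not the study's data.

```python
import numpy as np

# Hypothetical predictors per observation: [ad_is_digital, ad_is_large, speed_kmh]
X = np.array([
    [1, 1, 60],
    [1, 0, 80],
    [0, 1, 60],
    [0, 0, 100],
    [1, 1, 90],
    [0, 0, 70],
], dtype=float)
# Hypothetical response: fixation duration on the ad, in seconds
y = np.array([0.42, 0.31, 0.28, 0.15, 0.39, 0.18])

# Add an intercept column and solve the least-squares problem.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef  # fitted fixation durations
```

The fitted coefficients then quantify how each advertisement characteristic shifts fixation duration, which is the kind of effect the study reports.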

NeuroSight: Combining Eye-Tracking and Brain-Computer Interfaces for Context-Aware Hand-Free Camera Interaction

2024 · Artificial Intelligence, HCI · Core
Benedict Leung; Mariana Shimabukuro; Christopher Collins · Conference Paper
Technology has blurred the boundaries of our work and private lives. Using touch-free technology can lessen the divide between technology and reality and bring us closer to the immersion we once had. This work explores the combination of eye-tracking glasses and a brain-computer interface to enable hands-free interaction with the camera, without holding or touching it. Several camera modes are difficult to implement without eye-tracking. For example, visual search normally relies on selecting an object or region in the scene by touching the phone's touchscreen; here, eye-tracking is used instead, and the fixation point selects the intended region. In addition, fixations can provide context for the mode the user wants to execute. For instance, fixations on foreign text could indicate translation mode. Ultimately, multiple touchless gestures create more fluent transitions between our life experiences and technology.

EyeWithShut: Exploring Closed Eye Features to Estimate Eye Position

2024 · Gaze Estimation · Neon
Mingyu Han; Ian Oakley · UbiComp Companion ’24
Smart glasses enabling eyes-only input are growing in popularity. The underlying input technique powering the majority of interactions in this modality is dwell-based selection. However, this approach suffers from the Midas touch: the unintentional triggering of input during exploratory gaze. We suggest that multimodal input based on eye motions performed while the eyes are shut can trigger selections and overcome this problem without requiring additional input devices or modalities. To explore this idea, this paper captures a dataset of labeled closed-eye images corresponding to eye motions in various directions from nine participants. This was achieved by recording images from a binocular eye-tracker while participants performed standard eye-targeting tasks with one eye closed: the image from the open eye serves as ground truth for the closed eye's location. To understand the scope of closed-eye input, we explore this data according to three dimensions: the distance (near/far), width (narrow/broad), and direction (horizontal/vertical) of single captures of eye positions. Our results indicate that horizontal and vertical eye positions can be accurately recovered from static closed-eye images, but only after relatively large angular eye movements.