1-10 of 887 publications

Blended police firearms training improves performance in shoot/don't shoot scenarios: a systematic replication with police cadets

2024 · Sports Science · Neon
Joshua Olma; Christine Sutter; Sandra Sülzenbrück
Frontiers in Psychology
Senior police officers' tactical gaze control and visual attention improve with individual video-based police firearms training. To validate the efficacy of this intervention training, a previous experiment was systematically replicated with a sample of N = 52 second-year police cadets. Participants were randomly assigned to the intervention training, which focused on situational awareness, tactical gaze control, and visual attention, or to an active control training that addressed traditional marksmanship skills. In a pre- and post-test, they engaged in dynamic shoot/don't shoot video scenarios in an indoor firing range. Overall, the previous findings were replicated: baseline levels of performance were elevated, yet the intervention group significantly improved their response time and time until the first hit. No false-positive decisions were observed at all; false negatives were marginal in the pre-test and eliminated after training. Further, the outcomes of the previous sample of senior officers and the present sample of cadets are compared, leading to the conclusion that the presented approach is a valuable extension of current training standards for both senior police officers and police cadets.
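The pre/post response-time comparison implied above can be sketched as a paired t statistic on per-cadet change scores. The data and sample size here are invented for illustration; this is not the study's actual analysis.

```python
# Hedged sketch: per-cadet pre/post response-time change and a paired
# t statistic. All numbers are hypothetical, not from the study.
import math

def paired_t(pre, post):
    """Paired t statistic for pre vs. post measurements."""
    d = [a - b for a, b in zip(pre, post)]  # positive = faster post-test
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical response times (s) for five cadets.
pre = [1.42, 1.55, 1.38, 1.60, 1.47]
post = [1.20, 1.33, 1.25, 1.41, 1.30]
print(paired_t(pre, post))
```

A positive t here indicates faster responses after training; a real analysis would also report degrees of freedom and a p value.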

A temporal quantitative analysis of visuomotor behavior during four twisting somersaults in elite and sub-elite trampolinists

2024 · Sports Science · Invisible
Eve Charbonneau; Mickaël Begon; Thomas Romeas
Human Movement Science
Vision has previously been correlated with performance in acrobatic sports, highlighting visuomotor expertise adaptations. However, we still poorly understand the visuomotor strategies athletes use while executing twisting somersaults, even though this knowledge might be helpful for skill development. Thus, the present study sought to identify the differences in gaze behavior between elite and sub-elite trampolinists during the execution of four acrobatics of increasing difficulty. Seventeen inertial measurement units and a wearable eye-tracker were used to record the body and gaze kinematics of 17 trampolinists (8 elites, 9 sub-elites). Six typical metrics were analyzed using a mixed analysis of variance (ANOVA) with Expertise as the inter-subject and Acrobatics as the intra-subject factor. To complement this analysis, advanced temporal eye-tracking metrics are reported, such as the dwell time on areas of interest, the scan path on the trampoline bed, the temporal evolution of the gaze orientation endpoint (SPGO), and the time spent executing specific neck and eye strategies. A significant main effect of Expertise was only evidenced in one of the typical metrics, where elite athletes exhibited a higher number of fixations compared to sub-elites (p = 0.033). Significant main effects of Acrobatics were observed on all metrics (p < 0.05), revealing that gaze strategies are task-dependent in trampolining. The recordings of eye and neck movements performed in this study confirmed the use of "spotting" at the beginning and end of the acrobatics. They also revealed a unique sport-specific visual strategy that we termed self-motion detection. This strategy consists of not moving the eyes during fast head rotations, and was mainly used by trampolinists during the twisting phase. This study proposes a detailed exploration of trampolinists' gaze behavior in highly realistic settings and a temporal description of the visuomotor strategies to enhance understanding of perception-action interactions during the execution of twisting somersaults.
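One of the temporal metrics mentioned above, dwell time on an area of interest (AOI), can be sketched from timestamped gaze samples. The AOI bounds, sample rate, and data layout are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch: total dwell time on a rectangular AOI,
# computed from (time, x, y) gaze samples. Coordinates, AOI, and
# the max_gap threshold are invented assumptions.

def dwell_time(samples, aoi, max_gap=0.1):
    """Total time (s) gaze stayed inside `aoi`.

    samples: list of (t, x, y) tuples sorted by time t (seconds).
    aoi: (x_min, y_min, x_max, y_max) in the same coordinate frame.
    max_gap: gaps between consecutive in-AOI samples longer than this
             are not counted (e.g. blinks or tracking loss).
    """
    x0, y0, x1, y1 = aoi
    total = 0.0
    prev_t = None
    for t, x, y in samples:
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside and prev_t is not None and t - prev_t <= max_gap:
            total += t - prev_t
        prev_t = t if inside else None
    return total

# Example: 100 Hz gaze that stays inside the AOI for 0.5 s, then leaves.
gaze = [(i / 100, 0.5, 0.5) for i in range(50)]            # inside the AOI
gaze += [(0.5 + i / 100, 2.0, 2.0) for i in range(50)]     # outside the AOI
print(dwell_time(gaze, (0.0, 0.0, 1.0, 1.0)))
```

Summing inter-sample intervals rather than counting samples keeps the metric robust to the dropped frames that are common in mobile eye-tracking data.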

Slippage-robust linear features for eye tracking

2024 · Gaze Estimation · Core
Tawaana Gustad Homavazir; V.S. Raghu Parupudi; Surya L.S.R. Pilla; Pamela Cosman
Expert Systems with Applications

Eyeball Kinematics Informed Slippage Robust Gaze Tracking

2024 · Gaze Estimation · Core
Wei Zhang; Jiaxi Cao; Xiang Wang; Pengfei Xia; Bin Li; Xun Chen
IEEE Sensors Journal

Instrumented Contact Lens to Detect Gaze Movements Independently of Eye Blinks

2024 · Ophthalmology · Core
Marion Othéguy; Vincent Nourrit; Jean-Louis De Bougrenet De La Tocnaye
Translational Vision Science & Technology

Evaluation of Autonomous Vehicle Takeover Performance in Work-Zone Environment

2024 · Driving · Neon
Viktor Nagy; Diovane Mateus Da Luz; Ágoston Pál Sándor; Attila Borsos
SMTS 2024

Better understanding fall risk: AI-based computer vision for contextual gait assessment

2024 · Artificial Intelligence, Motor Control · Invisible
Jason Moore; Peter McMeekin; Samuel Stuart; Rosie Morris; Yunus Celik; Richard Walker; Victoria Hetherington; Alan Godfrey
Maturitas
Contemporary research to better understand free-living fall risk assessment in Parkinson's disease (PD) often relies on the use of wearable inertial-based measurement units (IMUs) to quantify useful temporal and spatial gait characteristics (e.g., step time, step length). Although use of IMUs is useful to understand some intrinsic PD fall-risk factors, their use alone is limited as they do not provide information on extrinsic factors (e.g., obstacles). Here, we update on the use of ergonomic wearable video-based eye-tracking glasses coupled with AI-based computer vision methodologies to provide information efficiently and ethically in free-living home-based environments to better understand IMU-based data in a small group of people with PD. The use of video and AI within PD research can be seen as an evolutionary step to improve methods to understand fall risk more comprehensively.
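A temporal gait characteristic of the kind mentioned above, step time from IMU data, can be sketched as peak detection on vertical acceleration. The signal, sample rate, and threshold are made-up assumptions; real pipelines use validated gait algorithms.

```python
# Minimal sketch: estimate step times from local maxima in vertical
# acceleration above a threshold. All parameters are illustrative.

def step_times(accel, fs, threshold):
    """Return inter-peak step times (s).

    accel: vertical acceleration samples (list of floats).
    fs: sampling frequency in Hz.
    threshold: minimum acceleration for a sample to count as a peak.
    """
    peaks = [
        i for i in range(1, len(accel) - 1)
        if accel[i] > threshold
        and accel[i] >= accel[i - 1]
        and accel[i] > accel[i + 1]
    ]
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Synthetic signal: a sharp peak every 50 samples at 100 Hz -> 0.5 s steps.
sig = [1.5 if i % 50 == 0 else 0.0 for i in range(1, 400)]
print(step_times(sig, fs=100, threshold=1.0))
```

The video and AI methods described in the abstract add the extrinsic context (obstacles, terrain) that step times alone cannot capture.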

Eye Movement Assessment Methodology Based on Wearable EEG Headband Data Analysis

2024 · HCI, Neuroscience & Neuropsychology · Invisible
Vladimir Romaniuk; Alexey Kashevnik
2024 36th Conference of Open Innovations Association (FRUCT)

Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning

2024 · Artificial Intelligence, Ophthalmology · Core
Ye Tian; Anurag Sharma; Shubh Mehta; Shubham Kaushal; Jeffrey M. Liebmann; George A. Cioffi; Kaveri A. Thakoor
Translational Vision Science & Technology
To propose a deep learning–based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions. We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grids on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, model architecture, and number of eye gaze patterns included in training. Our approach predicted most-fixated regions in OCT reports with precision of 0.723, recall of 0.562, and f1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and using a U-Net backbone led to the best performance. Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible. By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.
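The grid quantization described above, mapping fixation coordinates onto an 11 × 11 grid over the report, can be sketched as follows. The report dimensions and fixation data are invented for illustration; the authors' actual preprocessing is not reproduced here.

```python
# Sketch: quantize (x, y) pixel fixations on an OCT report into an
# 11 x 11 binary grid, the kind of target a U-Net-style model could
# be trained to predict. Sizes and fixations are hypothetical.

GRID = 11

def quantize_fixations(fixations, width, height, grid=GRID):
    """Map (x, y) pixel fixations to a grid x grid binary matrix."""
    target = [[0] * grid for _ in range(grid)]
    for x, y in fixations:
        col = min(int(x * grid / width), grid - 1)
        row = min(int(y * grid / height), grid - 1)
        target[row][col] = 1
    return target

# Two fixations on a hypothetical 1100 x 850 px report.
fix = [(120.0, 90.0), (1050.0, 800.0)]
grid = quantize_fixations(fix, width=1100, height=850)
print(sum(map(sum, grid)))  # number of distinct fixated cells
```

Coarsening continuous gaze coordinates into a small fixed grid is what makes the prediction task tractable for training, as the abstract notes.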

Experimental road safety study of the actual driver reaction to the street ads using eye tracking, multiple linear regression and decision trees methods

2024 · Traffic Psychology · Core
Sharaf AlKheder
Expert Systems with Applications
The article describes a naturalistic driving study conducted in Kuwait, in which 34 participants wore a mobile eye tracker, to monitor the effect of roadside advertisements on driver attention. Eye-tracking fixations were the main dependent variable, examined as a function of driving and roadside characteristics, particularly billboards, speed, and so forth. The results were analyzed using traditional statistics (ANOVA and multiple linear regression) and machine learning (decision tree estimation methods). It was found that road advertisements negatively affect driver attention and thus road safety. How the level of safety varies with the type and size of advertisement was also investigated. All of the estimates revealed that different aspects of advertising have a detrimental impact on drivers' behavior, and that the duration of fixation and the rate of acceleration before viewing an ad are impacted by advertising type and size, respectively.
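The multiple-linear-regression step mentioned above can be sketched as ordinary least squares on invented predictors, such as advertisement size and vehicle speed; the study's actual variables and data are not reproduced here.

```python
# Sketch: fit fixation duration as a linear function of ad size and
# speed via ordinary least squares. Data are synthetic and exactly
# linear, so the fit recovers the generating coefficients.
import numpy as np

def fit_ols(X, y):
    """OLS fit; returns coefficients [intercept, b1, b2, ...]."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical data: columns are ad size (m^2) and speed (km/h);
# y is fixation duration (s), generated as 0.1 + 0.08*size - 0.001*speed.
X = np.array([[2.0, 60], [4.0, 60], [2.0, 100], [4.0, 100], [6.0, 80]])
y = np.array([0.20, 0.36, 0.16, 0.32, 0.50])
coef = fit_ols(X, y)
pred = coef[0] + X @ coef[1:]
print(coef)
```

On real, noisy data the recovered coefficients would only approximate the underlying effects, and a decision tree (as in the study) could capture non-linear interactions that OLS misses.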