
Hybrid BCI for Meal-Assist Robot Using Dry-Type EEG and Pupillary Light Reflex

2025 · HCI · Core
Jihyeon Ha; Sangin Park; Yaeeun Han; Laehyun Kim
Biomimetics
Brain–computer interface (BCI)-based assistive technologies enable intuitive and efficient user interaction, significantly enhancing the independence and quality of life of elderly and disabled individuals. Although existing wet EEG-based systems report high accuracy, they suffer from limited practicality. This study presents a hybrid BCI system combining dry-type EEG-based flash visual-evoked potentials (FVEP) and the pupillary light reflex (PLR), designed to control an LED-based meal-assist robot. The hybrid system integrates dry-type EEG and eyewear-type infrared cameras, addressing the preparation challenges of wet electrodes while maintaining practical usability and high classification performance. Offline experiments demonstrated an average accuracy of 88.59% and an information transfer rate (ITR) of 18.23 bit/min across the four target classes. The real-time implementation uses PLR triggers to initiate the meal cycle and EMG triggers to detect chewing, which indicates the completion of the cycle. These features allow intuitive and efficient operation of the meal-assist robot. This study advances BCI-based assistive technologies by introducing a hybrid system optimized for real-world applications. The successful integration of FVEP and PLR in a meal-assist robot demonstrates the potential for robust and user-friendly solutions that empower users with autonomy and dignity in their daily activities.
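For context, the reported ITR is consistent with the standard Wolpaw formula. A minimal Python sketch, assuming the common bits-per-selection definition and an illustrative trial duration (the abstract does not state one):

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_secs: float) -> float:
    """Information transfer rate (bit/min) via the standard Wolpaw formula."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_secs)

# 4 targets at 88.59% accuracy; the trial duration below is an assumed
# placeholder, not a value reported in the abstract.
print(round(wolpaw_itr(4, 0.8859, trial_secs=4.3), 2))  # ~18.2 bit/min
```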

Blended police firearms training improves performance in shoot/don't shoot scenarios: a systematic replication with police cadets

2024 · Sports Science · Neon
Joshua Olma; Christine Sutter; Sandra Sülzenbrück
Frontiers in Psychology
Senior police officers' tactical gaze control and visual attention improve with individualized video-based police firearms training. To validate the efficacy of this intervention, a previous experiment was systematically replicated with a sample of N = 52 second-year police cadets. Participants were randomly assigned either to the intervention training, which focused on situational awareness, tactical gaze control, and visual attention, or to an active control training that addressed traditional marksmanship skills. In a pre- and post-test, they engaged in dynamic shoot/don't shoot video scenarios in an indoor firing range. Overall, the previous findings were replicated: baseline levels of performance were elevated, yet the intervention group significantly improved their response time and time until the first hit. No false positive decisions occurred at all; false negatives were marginal in the pre-test and were eliminated after training. Comparing the outcomes of the previous sample of senior officers with the present sample of cadets leads to the conclusion that the presented approach is a valuable extension of current training standards for both senior police officers and police cadets.

A temporal quantitative analysis of visuomotor behavior during four twisting somersaults in elite and sub-elite trampolinists

2024 · Sports Science · Invisible
Eve Charbonneau; Mickaël Begon; Thomas Romeas
Human Movement Science
Vision has previously been correlated with performance in acrobatic sports, highlighting visuomotor expertise adaptations. However, the visuomotor strategies athletes use while executing twisting somersaults are still poorly understood, even though this knowledge might be helpful for skill development. Thus, the present study sought to identify differences in gaze behavior between elite and sub-elite trampolinists during the execution of four acrobatics of increasing difficulty. Seventeen inertial measurement units and a wearable eye-tracker were used to record the body and gaze kinematics of 17 trampolinists (8 elite, 9 sub-elite). Six typical metrics were analyzed using a mixed analysis of variance (ANOVA) with Expertise as the inter-subject factor and Acrobatics as the intra-subject factor. To complement this analysis, advanced temporal eye-tracking metrics are reported, such as the dwell time on areas of interest, the scan path on the trampoline bed, the temporal evolution of the gaze orientation endpoint (SPGO), and the time spent executing specific neck and eye strategies. A significant main effect of Expertise was evidenced in only one of the typical metrics: elite athletes exhibited a higher number of fixations than sub-elites (p = 0.033). Significant main effects of Acrobatics were observed on all metrics (p < 0.05), revealing that gaze strategies are task-dependent in trampolining. The recordings of eye and neck movements performed in this study confirmed the use of "spotting" at the beginning and end of the acrobatics. They also revealed a unique sport-specific visual strategy that we termed self-motion detection, which consists of not moving the eyes during fast head rotations and was mainly used by trampolinists during the twisting phase. This study proposes a detailed exploration of trampolinists' gaze behavior in highly realistic settings and a temporal description of the visuomotor strategies involved, to enhance understanding of perception-action interactions during the execution of twisting somersaults.
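A minimal sketch of the mixed-design ANOVA described above (Expertise as between-subject factor, Acrobatics as within-subject factor), using the pingouin library; the column names and synthetic values are illustrative assumptions, not the study's data:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Illustrative long-format data: 17 athletes x 4 acrobatics, one value of a
# gaze metric (e.g., number of fixations) per condition. Names are assumed.
rows = []
for subj in range(17):
    expertise = "elite" if subj < 8 else "sub-elite"
    for acro in ["A", "B", "C", "D"]:
        rows.append({
            "subject": subj,
            "expertise": expertise,   # between-subject factor
            "acrobatics": acro,       # within-subject factor
            "n_fixations": rng.normal(10, 2) + (1.0 if expertise == "elite" else 0.0),
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: Expertise (between) x Acrobatics (within)
aov = pg.mixed_anova(data=df, dv="n_fixations", within="acrobatics",
                     subject="subject", between="expertise")
print(aov[["Source", "F", "p-unc", "np2"]])
```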

Slippage-robust linear features for eye tracking

2024 · Gaze Estimation · Core
Tawaana Gustad Homavazir; V.S. Raghu Parupudi; Surya L.S.R. Pilla; Pamela Cosman
Expert Systems with Applications

Eyeball Kinematics Informed Slippage Robust Gaze Tracking

2024 · Gaze Estimation · Core
Wei Zhang; Jiaxi Cao; Xiang Wang; Pengfei Xia; Bin Li; Xun Chen
IEEE Sensors Journal

Instrumented Contact Lens to Detect Gaze Movements Independently of Eye Blinks

2024 · Ophthalmology · Core
Marion Othéguy; Vincent Nourrit; Jean-Louis De Bougrenet De La Tocnaye
Translational Vision Science & Technology

Evaluation of Autonomous Vehicle Takeover Performance in Work-Zone Environment

2024 · Driving · Neon
Viktor Nagy; Diovane Mateus Da Luz; Ágoston Pál Sándor; Attila Borsos
SMTS 2024

Better understanding fall risk: AI-based computer vision for contextual gait assessment

2024 · Artificial Intelligence, Motor Control · Invisible
Jason Moore; Peter McMeekin; Samuel Stuart; Rosie Morris; Yunus Celik; Richard Walker; Victoria Hetherington; Alan Godfrey
Maturitas
Contemporary research to better understand free-living fall risk assessment in Parkinson's disease (PD) often relies on wearable inertial measurement units (IMUs) to quantify useful temporal and spatial gait characteristics (e.g., step time, step length). Although IMUs help capture some intrinsic PD fall-risk factors, their use alone is limited, as they provide no information on extrinsic factors (e.g., obstacles). Here, we provide an update on the use of ergonomic wearable video-based eye-tracking glasses coupled with AI-based computer vision methodologies to gather information efficiently and ethically in free-living, home-based environments, in order to better contextualize IMU-based data in a small group of people with PD. The use of video and AI within PD research can be seen as an evolutionary step toward methods that assess fall risk more comprehensively.
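As an illustration of the temporal gait characteristics mentioned above, a minimal sketch estimating step time from IMU vertical acceleration peaks; the signal model, sampling rate, and peak thresholds are assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.signal import find_peaks

def step_times(vertical_acc: np.ndarray, fs: float) -> np.ndarray:
    """Estimate step times (s) as intervals between acceleration peaks.

    Assumes each heel strike produces a distinct peak in the vertical
    acceleration signal; thresholds here are illustrative, not tuned.
    """
    peaks, _ = find_peaks(vertical_acc, height=1.2, distance=int(0.3 * fs))
    return np.diff(peaks) / fs

# Synthetic walking-like signal at 100 Hz, roughly two steps per second.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
acc = 1.5 * np.abs(np.sin(2 * np.pi * 1.0 * t)) + 0.05 * np.random.randn(t.size)
print(step_times(acc, fs).round(2))  # intervals of roughly 0.5 s
```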

Eye Movement Assessment Methodology Based on Wearable EEG Headband Data Analysis

2024 · HCI, Neuroscience & Neuropsychology · Invisible
Vladimir Romaniuk; Alexey Kashevnik
2024 36th Conference of Open Innovations Association (FRUCT)

Automated Identification of Clinically Relevant Regions in Glaucoma OCT Reports Using Expert Eye Tracking Data and Deep Learning

2024 · Artificial Intelligence, Ophthalmology · Core
Ye Tian; Anurag Sharma; Shubh Mehta; Shubham Kaushal; Jeffrey M. Liebmann; George A. Cioffi; Kaveri A. Thakoor
Translational Vision Science & Technology
To propose a deep learning–based approach for predicting the most-fixated regions on optical coherence tomography (OCT) reports using eye tracking data of ophthalmologists, assisting them in finding medically salient image regions. We collected eye tracking data of ophthalmology residents, fellows, and faculty as they viewed OCT reports to detect glaucoma. We used a U-Net model as the deep learning backbone and quantized eye tracking coordinates by dividing the input report into an 11 × 11 grid. The model was trained to predict the grid cells on which fixations would land in unseen OCT reports. We investigated the contribution of different variables, including the viewer's level of expertise, the model architecture, and the number of eye gaze patterns included in training. Our approach predicted most-fixated regions in OCT reports with a precision of 0.723, recall of 0.562, and F1-score of 0.609. We found that using a grid-based eye tracking structure enabled efficient training and that a U-Net backbone led to the best performance. Our approach has the potential to assist ophthalmologists in diagnosing glaucoma by predicting the most medically salient regions on OCT reports. Our study suggests the value of eye tracking in guiding deep learning algorithms toward informative regions when experts may not be accessible. By suggesting important OCT report regions for a glaucoma diagnosis, our model could aid in medical education and serve as a precursor for self-supervised deep learning approaches to expedite early detection of irreversible vision loss owing to glaucoma.
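A minimal sketch of the grid quantization step described in this abstract, mapping fixation coordinates on a report to an 11 × 11 target grid; the function names, binary labeling, and page dimensions are illustrative assumptions:

```python
import numpy as np

GRID = 11  # the abstract's 11 x 11 quantization of the report

def fixations_to_grid(fixations: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map pixel fixation coordinates (N x 2, as x, y) to a binary 11 x 11 grid.

    A cell is 1 if at least one fixation landed in it; this binary target is
    an assumed simplification of the paper's training labels.
    """
    target = np.zeros((GRID, GRID), dtype=np.float32)
    cols = np.clip((fixations[:, 0] / width * GRID).astype(int), 0, GRID - 1)
    rows = np.clip((fixations[:, 1] / height * GRID).astype(int), 0, GRID - 1)
    target[rows, cols] = 1.0
    return target

# Example: three fixations on an assumed 1650 x 1275 report page.
fix = np.array([[120.0, 200.0], [830.0, 640.0], [1600.0, 1200.0]])
grid = fixations_to_grid(fix, width=1650, height=1275)
print(grid.sum(), grid.shape)  # 3.0 (11, 11)
```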