1-10 of 737 publications

Regulation of pupil size in natural vision across the human lifespan

2024 · Cognitive Psychology, Psychophysics · Core
Rafael Lazar; Josefine Degen; Ann-Sophie Fiechter; Aurora Monticelli; Manuel Spitschan · Royal Society Open Science
Vision is mediated by light passing through the pupil, which changes in diameter from approximately 2 to 8 mm between bright and dark illumination. With age, mean pupil size declines. In laboratory experiments, factors affecting pupil size can be experimentally controlled. How the pupil reflects the change in retinal input from the visual environment under natural viewing conditions is unclear. We address this question in a field experiment (N = 83, 43 female, 18–87 years) using a custom-made wearable video-based eye tracker with a spectroradiometer measuring near-corneal spectral irradiance. Participants moved in and between indoor and outdoor environments varying in spectrum and engaged in a range of everyday tasks. Our data confirm that light-adapted pupil size is determined by light level, with a better model fit of melanopic over photopic units, and that it decreased with increasing age, yielding steeper slopes at lower light levels. We found no indication that sex, iris colour or reported caffeine consumption affects pupil size. Our exploratory results point to a role of photoreceptor integration in controlling steady-state pupil size. The data provide evidence for considering age in personalized lighting solutions and against the use of photopic illuminance alone to assess the impact of real-world lighting conditions.
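
The age-by-light-level relationship described above lends itself to a mixed-effects formulation. Below is a minimal sketch in Python of that kind of analysis, fitted here on synthetic data; the column names, effect sizes, and random-intercept structure are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the field data; all column names are hypothetical.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "participant_id": rng.integers(0, 40, n),
    "age": rng.uniform(18, 87, n),
    "melanopic_edi_lux": 10 ** rng.uniform(-1, 4, n),
})
df["log_mel_edi"] = np.log10(df["melanopic_edi_lux"])
df["pupil_diameter_mm"] = (
    6.5 - 0.8 * df["log_mel_edi"] - 0.02 * df["age"] + rng.normal(0, 0.3, n)
)

# Random intercept per participant; the age x light-level interaction is what
# would capture steeper age-related slopes at lower light levels.
model = smf.mixedlm("pupil_diameter_mm ~ log_mel_edi * age",
                    data=df, groups=df["participant_id"])
print(model.fit().summary())
```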

Choice enhances touch pleasantness

2024 · Cognitive Psychology · Core
Lenka Gorman; Wenhan Sun; Jyothisa Mathew; Zahra Rezazadeh; Justin Sulik; Merle Fairhurst; Ophelia Deroy · Attention, Perception, & Psychophysics
We value what we choose more than what is imposed upon us. Choice-induced preferences are extensively demonstrated using behavioural and neural methods, mainly involving rewarding objects such as money or material goods. However, the impact of choice on experiences, especially in the realm of affective touch, remains less explored. In this study, we investigate whether choice can enhance the pleasure derived from affective touch, thereby increasing its intrinsic rewarding value. We conducted an experiment in which participants were touched by an experimenter and asked to rate how pleasant they found the experience. They were given either a choice or no choice over certain touch stimulus variables, which differed in their relevance: some were of low relevance (the colour of the glove the experimenter would use), while others were of high relevance (the location on the arm where they would be stroked). Before and during touch, pupillometry was used to measure arousal. We found that having a choice over aspects of the tactile stimuli, especially those relevant to oneself, enhanced the perceived pleasantness of the touch. Having a choice also increased arousal in anticipation of touch. Thus, regardless of its relevance to the actual tactile stimulus, allowing a person to choose may enhance their perception of the physical contact they receive.
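
As a rough illustration of the pupillometric arousal measure mentioned above, the sketch below baseline-corrects anticipatory pupil traces and averages them into one arousal score per trial. The array layout, 50 Hz rate, and window lengths are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def anticipatory_arousal(pupil, fs=50.0, baseline_s=1.0, window_s=2.0):
    """pupil: (n_trials, n_samples) diameter traces aligned to cue onset."""
    b = int(baseline_s * fs)
    w = int(window_s * fs)
    baseline = pupil[:, :b].mean(axis=1, keepdims=True)
    corrected = pupil[:, b:b + w] - baseline   # subtractive baseline correction
    return corrected.mean(axis=1)              # one arousal score per trial

rng = np.random.default_rng(0)
fake_traces = rng.normal(4.0, 0.2, size=(10, 150))  # 10 trials, 3 s at 50 Hz
print(anticipatory_arousal(fake_traces))
```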

Swap It Like It's Hot: Segmentation-based spoof attacks on eye-tracking images

2024 · Computer Vision, Gaze Estimation · Core
Anish S Narkar; Brendan David-John · Conference Paper
Video-based eye trackers capture the iris biometric and enable authentication to secure user identity. However, biometric authentication is susceptible to spoofing another user's identity through physical or digital manipulation. The current standard for identifying physical spoofing attacks on eye-tracking sensors is liveness detection, which classifies gaze data as real or fake and is sufficient to detect physical presentation attacks. However, such defenses cannot detect a spoofing attack in which real eye image inputs are digitally manipulated to swap in the iris pattern of another person. We propose IrisSwap, a novel attack on gaze-based liveness detection that allows attackers to segment and digitally swap in a victim's iris pattern to fool iris authentication. Both offline and online attacks produce gaze data that deceives current state-of-the-art defense models at rates of up to 58%, motivating the need for more advanced authentication methods for eye trackers.
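
To make the attack surface concrete, here is a deliberately naive sketch of a segmentation-based iris swap: given an iris mask, the victim's iris pixels replace the attacker's. The synthetic images and circular mask are stand-ins; the actual IrisSwap pipeline also involves segmentation networks and more careful alignment and blending.

```python
import numpy as np

rng = np.random.default_rng(0)
attacker = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)  # stand-in eye image
victim = rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)

# Stand-in circular iris mask; a real attack would obtain this from an
# iris segmentation model run on the eye image.
yy, xx = np.mgrid[:240, :320]
iris_mask = ((yy - 120) ** 2 + (xx - 160) ** 2) < 60 ** 2

# Naive pixel swap: victim's iris pattern pasted into the attacker's image.
spoofed = np.where(iris_mask[..., None], victim, attacker)
```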

CSA-CNN: A Contrastive Self-Attention Neural Network for Pupil Segmentation in Eye Gaze Tracking

2024 · Artificial Intelligence, Computer Vision, Gaze Estimation
Soumil Chugh; Juntao Ye; Yuqi Fu; Moshe Eizenman · Conference Paper
This paper presents a novel Contrastive Self-Attention Convolutional Neural Network (CSA-CNN) model with an enhanced Difficulty-Aware (DA) loss function to improve the segmentation of pupils in eye images. Incorporating transformer-style self-attention and the Difficulty-Aware loss in a UNET-style architecture allows for robust feature representation and promotes shape alignment. The model was trained on two public databases (LPW and RIT-Eyes) and evaluated on two other public datasets (ExCuSe and ElSe). Compared with the best of seven state-of-the-art pupil center detection methods, the CSA-CNN improved pupil center detection accuracy (detection within 5 pixels of the labeled center) by over 6% and Intersection over Union (IoU) accuracy by more than 9%. Furthermore, when the CSA-CNN model was integrated into a glint-based eye tracking system that uses learning-based methods to detect the pupil center, we saw a 25% improvement in gaze accuracy.
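
The Difficulty-Aware loss is not specified in the abstract; the sketch below shows one plausible formulation in PyTorch, up-weighting pixels the network currently gets wrong (focal-style). Treat it as an illustration of the idea, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def difficulty_aware_bce(logits, target, gamma=2.0):
    """logits, target: (N, 1, H, W) tensors; target in {0, 1}."""
    p = torch.sigmoid(logits)
    # Per-pixel difficulty: distance between prediction and label, raised to gamma
    weight = (p - target).abs().pow(gamma)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weight * bce).mean()

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(difficulty_aware_bce(logits, target).item())
```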

Improving the Temporal Accuracy of Eye Gaze Tracking for the da Vinci Surgical System through Automatic Detection of Decalibration Events and Recalibration

2024 · HCI
Regine Büter; John J. Han; Ayberk Acar; Yizhou Li; Paola Ruiz Puentes; Roger D. Soberanis-Mukul; Iris Gupta; Joyraj Bhowmick; Ahmed Ghazi; Andreas Maier; Mathias Unberath; Jie Ying Wu · Journal of Medical Robotics Research
Robust and accurate eye gaze tracking can advance medical telerobotics by providing complementary data for surgical training, interactive instrument control, and augmented human–robot interactions. However, current gaze tracking solutions for systems such as the da Vinci Surgical System (dVSS) require complex hardware installations. Additionally, existing methods do not account for operator head movement inside the surgeon console, which invalidates the original calibration. This work provides an initial solution to these challenges that can seamlessly integrate into console devices beyond the dVSS. Our approach relies on simple, unobtrusive wearable eye tracking glasses and provides calibration routines that can contend with operator head movements. An external camera measures movement of the glasses via trackers mounted on them, detecting when head movement or slippage has invalidated the prior calibration. Movements beyond a threshold of 5 cm or 9° prompt another calibration sequence. In a study where users moved freely in the surgeon console after an initial calibration procedure, we show that our system tracks the eye tracking glasses and initiates recalibration procedures. Recalibration can reduce the mean tracking error by up to 89% compared to the prevailing approach, which relies on the initial calibration only. This work is an important first step towards incorporating user movement into gaze-based applications for the dVSS.
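
The decalibration check itself reduces to a pose comparison. A minimal sketch, assuming the tracked glasses pose is available as a position vector plus rotation matrix (an assumption about the tracking output, not the paper's interface):

```python
import numpy as np

def needs_recalibration(p0, R0, p1, R1, t_thresh_m=0.05, r_thresh_deg=9.0):
    """Compare current glasses pose (p1, R1) to the calibration pose (p0, R0)."""
    translation = np.linalg.norm(p1 - p0)
    # Rotation angle between two rotation matrices via the trace identity
    cos_angle = np.clip((np.trace(R1 @ R0.T) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    return translation > t_thresh_m or angle_deg > r_thresh_deg

p0, R0 = np.zeros(3), np.eye(3)
p1, R1 = np.array([0.06, 0.0, 0.0]), np.eye(3)  # 6 cm of slippage
print(needs_recalibration(p0, R0, p1, R1))      # True -> trigger recalibration
```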

Wearable Eye-Tracking System for Synchronized Multimodal Data Acquisition

2024 · Gaze Estimation · Core
Minqiang Yang; Yujie Gao; Longzhe Tang; Jian Hou; Bin Hu · IEEE Transactions on Circuits and Systems for Video Technology
Eye-tracking technology is extensively utilized in affective computing research, enabling the investigation of emotional responses through the analysis of eye movements. Integrating eye tracking with other modalities allows for the collection of multimodal data, leading to a more comprehensive understanding of emotions and their relationship with physiological responses. This paper presents a novel head-mounted eye-tracking system for multimodal data acquisition with a completely redesigned structure and improved performance. We propose an efficient and robust pupil-fitting method based on deep learning and RANSAC, which achieves better pupil segmentation when the pupil is partially occluded, and build a 3D model to obtain gaze points. Existing eye trackers for multimodal synchronous data collection either support a limited range of devices or suffer from significant synchronization delays. Our proposed hard real-time synchronization mechanism achieves microsecond-level latency at low cost, which facilitates multimodal analysis for affective computing research. The uniquely designed exterior effectively reduces facial occlusion, making the system more comfortable to wear while facilitating the capture of facial expressions.
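
One plausible reading of the deep-learning-plus-RANSAC pupil-fitting step is to let a network propose boundary pixels and then fit an ellipse robustly. The sketch below does the RANSAC half with OpenCV's cv2.fitEllipse; the inlier test is a crude radial check, and the whole thing is an illustration under that assumption, not the paper's method.

```python
import cv2
import numpy as np

def ransac_ellipse(points, iters=200, tol=2.0, seed=0):
    """points: (N, 2) float32 candidate pupil-boundary pixels, N >= 5."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        subset = points[rng.choice(len(points), 5, replace=False)]
        try:
            (cx, cy), (w, h), theta = cv2.fitEllipse(subset)
        except cv2.error:
            continue                      # degenerate sample, resample
        # Crude inlier test: distance from center close to the mean radius
        r = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
        inliers = int(np.sum(np.abs(r - (w + h) / 4.0) < tol))
        if inliers > best_inliers:
            best, best_inliers = ((cx, cy), (w, h), theta), inliers
    return best

t = np.linspace(0, 2 * np.pi, 80)
pts = np.stack([160 + 30 * np.cos(t), 120 + 20 * np.sin(t)], 1).astype(np.float32)
print(ransac_ellipse(pts))
```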

What Eyeblinks Reveal: Interpersonal Synchronization in Dyadic Interaction

2024 · Social Psychology · Neon
Mehtap Çakır; Anke Huckauf · Proceedings of the ACM on Human-Computer Interaction
This study examines eyeblink synchronization in interactions characterized by mutual gaze, without task-related or conversational elements that could trigger similarities in visual, auditory, or cognitive processing. We developed a study design capable of isolating the role of gaze in human-human interaction and observed the blinking behavior of dyads with mobile eye tracking glasses under three conditions: face-to-face mutual gaze, mediated mutual gaze through a mirror, and self-directed gaze in a mirror. The results revealed that under direct mutual gaze, eyeblink synchronization increased and followed a more structured temporal pattern. The reported sense of connection between partners also mirrored this synchronization. These findings suggest that even minor deviations introduced by mediated interaction lead to reduced synchronization and a weakened sense of connection between partners. The paper also discusses the need for methodologies that enhance the efficacy and authenticity of online environments and human-robot interaction.
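
A common way to quantify eyeblink synchronization of the kind reported above is peak cross-correlation of the partners' binary blink trains within a short lag window. The sketch below assumes a 200 Hz sampling rate and a ±500 ms window; both are illustrative choices, not necessarily the paper's analysis parameters.

```python
import numpy as np

def blink_sync(blinks_a, blinks_b, fs=200, max_lag_s=0.5):
    """blinks_a, blinks_b: equal-length binary arrays (1 = blink sample)."""
    a = blinks_a - blinks_a.mean()
    b = blinks_b - blinks_b.mean()
    max_lag = int(max_lag_s * fs)
    denom = np.std(blinks_a) * np.std(blinks_b) * len(blinks_a)
    corr = [np.sum(a * np.roll(b, lag)) / denom
            for lag in range(-max_lag, max_lag + 1)]
    return max(corr)  # peak normalized cross-correlation within the window

rng = np.random.default_rng(1)
a = (rng.random(2000) < 0.02).astype(float)
b = np.roll(a, 20)                        # partner blinks 100 ms later
print(blink_sync(a, b))
```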

Area of Interest Tracking Techniques for Driving Scenarios Focusing on Visual Distraction Detection

2024 · Driving · Core
Viktor Nagy; Péter Földesi; György Istenes · Applied Sciences
On-road driving studies are essential for comprehending real-world driver behavior. This study investigates the use of eye-tracking (ET) technology in research on driver behavior and attention during Controlled Driving Studies (CDS). One significant challenge in these studies is accurately detecting when drivers divert their attention from crucial driving tasks. To tackle this issue, we present an improved method for analyzing raw gaze data, using a new algorithm for identifying ID tags called Binarized Area of Interest Tracking (BAIT). This technique improves the detection of incidents where the driver's eyes are off the road by binarizing frames under different conditions and iteratively recognizing markers, representing a significant improvement over traditional methods. The study shows that BAIT identifies a driver's focus on the windscreen and dashboard with higher accuracy than other software. This highlights the potential of our method to enhance the analysis of driver attention in real-world conditions, paving the way for applications in naturalistic driving studies.
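
The core BAIT loop, as described, amounts to re-binarizing each frame at different thresholds until the ID tags are detected. A minimal sketch, with the concrete marker detector passed in as a callable since the tag system is not specified here:

```python
import cv2
import numpy as np

def detect_with_bait(frame_gray, detect_markers, thresholds=range(40, 220, 20)):
    """frame_gray: uint8 grayscale frame; detect_markers(img) -> list of tags."""
    for t in thresholds:
        _, binary = cv2.threshold(frame_gray, t, 255, cv2.THRESH_BINARY)
        tags = detect_markers(binary)
        if tags:                          # stop at the first threshold that works
            return tags, t
    return [], None

frame = np.full((120, 160), 90, np.uint8)
frame[40:80, 60:100] = 200                            # bright square as a fake tag
fake_detector = lambda img: ["tag"] if img.any() else []
print(detect_with_bait(frame, fake_detector))         # (['tag'], 40)
```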

A machine learning study for predicting driver goals in contingencies with leading and lagging features during goal determination

2024 · Artificial Intelligence, Cognitive Psychology, Driving · Core
Hsueh-Yi Lai · Expert Systems with Applications
Many studies have focused on decision support systems that enhance both the efficiency and safety of driving. They have also explored the potential of real-time psychological data and machine learning for predicting drivers' cognitive states, such as fatigue, drowsiness, or workload. However, few studies have investigated the prediction of driving goals as a cognitive outcome. Early prediction plays an essential role in providing active decision support during driving events under time pressure. In this study, machine learning algorithms and features associated with different phases of decision-making were used to predict two common driving goals: defensive driving in emerging scenarios and urgent reactions in nonroutine scenarios. The effects of perception-, reflex-, control-, and kinetic-related features and their contributions to prediction in the context of decision-making were analyzed. A total of 49 individuals were recruited to complete simulated driving tasks, yielding 237 events of defensive driving and 271 events of urgent reactions. The results revealed the highest recall with a naïve Bayes classifier for indicating the onset of decision-making, with extreme gradient boosting and random forests exhibiting superior precision in predicting defensive driving and urgent reactions, respectively. Additionally, a cutoff at the initial 0.4 s of each event was identified. Before the cutoff, the leading features were reflex- and control-related features, reflecting drivers' immediate reactions before scenario evaluation and goal determination. These leading features yielded superior prediction results for both types of driving goals, indicating the potential for early detection. After the cutoff, model performance decreased and lagging features came into play. These lagging features comprised perception- and kinetic-related features, reflecting the observation of cues and the outcomes of inputs delivered to the vehicle. Within the first 2 s, the predictive models recovered and stabilized.
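
As an illustration of the modeling setup (classifiers compared on recall over event-level feature vectors), here is a small sketch with scikit-learn on synthetic data; the feature matrix, event counts, and scoring choice are placeholders, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(508, 12))      # 508 events x 12 reflex/control features
y = rng.integers(0, 2, size=508)    # 0 = defensive driving, 1 = urgent reaction

for clf in (GaussianNB(), RandomForestClassifier(n_estimators=200)):
    scores = cross_val_score(clf, X, y, cv=5, scoring="recall")
    print(type(clf).__name__, round(scores.mean(), 3))
```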

User Identification via Free Roaming Eye Tracking Data

2024 · HCI · Neon
Rishabh Vallabh Varsha Haria; Amin El Abed; Sebastian Maneth · Preprint
We present a new dataset of "free roaming" (FR) and "targeted roaming" (TR): a pool of 41 participants is asked to walk around a university campus (FR) or to find a particular room within a library (TR). Eye movements are recorded using a commodity wearable eye tracker (Pupil Labs Neon at 200 Hz). On this dataset we investigate the accuracy of user identification using a previously published machine learning pipeline in which a Radial Basis Function Network (RBFN) serves as the classifier. Our highest accuracies are 87.3% for FR and 89.4% for TR. This should be compared to 95.3%, the highest corresponding accuracy we are aware of (achieved in a laboratory setting using the "RAN" stimulus of the BioEye 2015 competition dataset). To the best of our knowledge, our results are the first to study user identification in a non-laboratory setting; such settings are often more feasible than laboratory settings and may offer further advantages. The minimum duration of each recording is 263 s for FR and 154 s for TR. Our best accuracies are obtained when restricting to 120 s for FR and 140 s for TR, always cut from the end of the trajectories (for both the training and testing sessions). If we cut the same length from the beginning, accuracies are 12.2% lower for FR and around 6.4% lower for TR. On the full trajectories, accuracies are lower by 5% for FR and 52% for TR. We also investigate the impact of including higher-order velocity derivatives (such as acceleration, jerk, or jounce).
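
The higher-order velocity derivatives mentioned at the end are straightforward to compute as successive finite differences of the gaze signal. A minimal sketch at the stated 200 Hz; the feature aggregation and the RBFN classifier itself are omitted:

```python
import numpy as np

def velocity_derivatives(gaze_xy, fs=200.0, orders=4):
    """gaze_xy: (n_samples, 2) gaze positions. Returns per-order magnitudes."""
    feats, signal = {}, gaze_xy
    for name in ["velocity", "acceleration", "jerk", "jounce"][:orders]:
        signal = np.diff(signal, axis=0) * fs         # d/dt via finite difference
        feats[name] = np.linalg.norm(signal, axis=1)  # magnitude per sample
    return feats

gaze = np.cumsum(np.random.default_rng(2).normal(size=(1000, 2)), axis=0)
print({k: round(float(v.mean()), 2) for k, v in velocity_derivatives(gaze).items()})
```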
We present a new dataset of "free roaming" (FR) and "targeted roaming" (TR): a pool of 41 participants is asked to walk around a university campus (FR) or is asked to find a particular room within a library (TR). Eye movements are recorded using a commodity wearable eye tracker (Pupil Labs Neon at 200Hz). On this dataset we investigate the accuracy of user identification using a previously known machine learning pipeline where a Radial Basis Function Network (RBFN) is used as classifier. Our highest accuracies are 87.3% for FR and 89.4% for TR. This should be compared to 95.3% which is the (corresponding) highest accuracy we are aware of (achieved in a laboratory setting using the "RAN" stimulus of the BioEye 2015 competition dataset). To the best of our knowledge, our results are the first that study user identification in a non laboratory setting; such settings are often more feasible than laboratory settings and may include further advantages. The minimum duration of each recording is 263s for FR and 154s for TR. Our best accuracies are obtained when restricting to 120s and 140s for FR and TR respectively, always cut from the end of the trajectories (both for the training and testing sessions). If we cut the same length from the beginning, then accuracies are 12.2% lower for FR and around 6.4% lower for TR. On the full trajectories accuracies are lower by 5% and 52% for FR and TR. We also investigate the impact of including higher order velocity derivatives (such as acceleration, jerk, or jounce).