1-10 of 987 publications

Success in goal-directed visual tasks: the benefits of alternating sitting and standing instead of only sitting

2025 · Ergonomics · Core
Wafa Cherigui; Mélen Guillaume; Sérgio T. Rodrigues; Cédrick T. Bonnet · Applied Ergonomics
Both excessive sitting and excessive standing have been shown to be detrimental for performance, productivity and health. In the present study, our objective was specifically to determine the effect of alternating the body position (between standing and sitting) on task performance and visual attention in the Attention Network Task (ANT), relative to a sitting-only condition. Twenty-four participants (aged 18–35) performed the ANT six times in each condition (5 min 35 s per ANT). The proportion of blinks was significantly lower in the alternating condition than in the sitting-only condition. In both between-condition and within-condition analyses, the reaction times were significantly shorter when standing than when sitting. Humans may thus be more effective (i.e., shorter reaction times) and have greater visual attention (i.e., a lower proportion of blinks) in an alternating condition than in a sitting-only condition. In practice, the use of sit-stand desks might usefully help to both reduce the time spent sitting and improve task performance.
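
The two headline results here are simple within-participant contrasts (blink proportion and mean reaction time per condition). A minimal sketch of how such paired comparisons are typically computed, on hypothetical per-participant means rather than the study's data:

```python
# Paired within-participant comparison of reaction times and blink
# proportions across two conditions. Illustrative only: the arrays
# below are hypothetical per-participant means, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24  # participants, matching the study's sample size

# Hypothetical mean reaction times (ms) per participant and condition.
rt_sitting = rng.normal(520, 40, n)
rt_alternating = rt_sitting - rng.normal(15, 10, n)  # assume faster RTs

# Hypothetical blink proportions per participant and condition.
blink_sitting = rng.normal(0.12, 0.03, n).clip(0, 1)
blink_alternating = blink_sitting - rng.normal(0.02, 0.01, n)

for label, a, b in [("RT (ms)", rt_sitting, rt_alternating),
                    ("blink proportion", blink_sitting, blink_alternating)]:
    t, p = stats.ttest_rel(a, b)  # paired t-test across participants
    print(f"{label}: sitting={a.mean():.3f} alternating={b.mean():.3f} "
          f"t({n - 1})={t:.2f} p={p:.4f}")
```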

Open your Eyes: Blink-induced Change Blindness while Reading

2025 · Reading · Core
Kai Schultz; Kenan Bektas; Jannis Strecker-Bischoff; Simon Mayer · Journal Article
Reading assistants provide users with additional information through pop-ups or other interactive events that might interrupt the flow of reading. We propose that unnoticeable changes can be made in a given text during blinks, while vision is briefly obscured. Reading assistants could exploit such change blindness to adapt text in real time without disrupting the reading experience. We developed a system to study blink-induced change blindness. In two preliminary experiments, we asked five participants to read six short texts each. Once per text and during a blink, our system changed a predetermined part of each text. In each trial, the intensity and distance of the change were systematically varied. Our results show that text changes — although obvious to bystanders — were difficult for participants to detect. Concretely, while changes that affected the appearance of large text parts were detected in 80% of occurrences, no line-contained changes were detected.
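
The mechanism at the core of the system, applying a text change only while the eyes are closed, reduces to gating an update on a blink signal. A minimal sketch under assumed names; the paper does not specify its detector, so a simple eye-openness threshold stands in for it:

```python
# Blink-gated text substitution: apply a pending text change only while
# an eye-openness signal indicates the eyes are closed. The threshold,
# signal, and swap below are illustrative assumptions, not the paper's
# implementation.
BLINK_THRESHOLD = 0.2  # eye-openness below this counts as a blink

def update_text(text, pending_change, eye_openness):
    """Swap in the changed text during a blink; otherwise keep it."""
    if pending_change and eye_openness < BLINK_THRESHOLD:
        old, new = pending_change
        return text.replace(old, new, 1), None  # change consumed
    return text, pending_change

# Simulated stream of eye-openness samples (1.0 = fully open).
samples = [1.0, 0.9, 0.4, 0.1, 0.05, 0.3, 0.8, 1.0]
text, change = "The quick brown fox jumps over the dog.", ("brown", "black")
for openness in samples:
    text, change = update_text(text, change, openness)
print(text)  # change applied during the 0.1 sample, unseen by the reader
```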

Coordinating Attention in Face-to-Face Collaboration: The Dynamics of Gaze, Pointing, and Verbal Reference

2025 · Applied Psychology · Core
Lucas Haraped; D. Jacob Gerlofs; Olive Chung-Hui Huang; Cam Hickling; Walter F. Bischof; Pierre Sachse; Alan Kingstone · Cognitive Science
During real-world interactions, people rely on gaze, gestures, and verbal references to coordinate attention and establish shared understanding. Yet, it remains unclear if and how these modalities couple within and between interacting individuals in face-to-face settings. The current study addressed this issue by analyzing dyadic face-to-face interactions, where participants (n = 52) collaboratively ranked paintings while their gaze, pointing gestures, and verbal references were recorded. Using cross-recurrence quantification analysis, we found that participants readily used pointing gestures to complement gaze and verbal reference cues and that gaze directed toward the partner followed canonical conversational patterns, that is, more looks to the other's face when listening than speaking. Further, gaze, pointing, and verbal references showed significant coupling both within and between individuals, with pointing gestures and verbal references guiding the partner's gaze to shared targets and speaker gaze leading listener gaze. Moreover, simultaneous pointing and verbal referencing led to more sustained attention coupling compared to pointing alone. These findings highlight the multimodal nature of joint attention coordination, extending theories of embodied, interactive cognition by demonstrating how gaze, gestures, and language dynamically integrate into a shared cognitive system.
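
Cross-recurrence quantification analysis over categorical gaze streams asks, for each time lag, how often the two partners occupy the same state. A stripped-down lag profile on hypothetical area-of-interest sequences; full CRQA (e.g., as in the R crqa package) adds embedding, radius, and line-based measures on top of this:

```python
# Lag-wise cross-recurrence between two categorical time series, e.g.
# which painting each partner attends to at each time step. Hypothetical
# data, not the study's pipeline.
import numpy as np

def cross_recurrence_profile(a, b, max_lag):
    """Proportion of matching states when the series are offset by each lag."""
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        profile[lag] = float(np.mean(x == y))
    return profile

rng = np.random.default_rng(1)
speaker = rng.integers(0, 4, 200)   # attended painting (0-3) per time step
listener = np.roll(speaker, 5)      # listener revisits speaker's targets later
profile = cross_recurrence_profile(speaker, listener, max_lag=10)
peak = max(profile, key=profile.get)
print(f"peak coupling at lag {peak}")  # expect -5: listener trails speaker
```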

Where is my hand? Using touch and proprioception to localize the upper limb in children with Developmental Coordination Disorder

2025 · Neuroscience & Neuropsychology · Core
Marion Naffrechoux; Alice Gomez; Eric Koun; Frédéric Volland; Alessandro Farnè; Denis Pélisson; Alice Catherine Roy · Social Science Research Network
Developmental Coordination Disorder (DCD) is characterized by slowness and inaccuracy in motor performance, mainly attributed to internal modelling deficits. Previous studies revealed impaired visuospatial capabilities in children with DCD, but the integrity of proprioception and touch, despite being critical for motor control, remains largely undetermined. Critically, such somatosensory abilities were typically assessed through active manual responses, thus being potentially confounded with the motor deficits affecting the responding limb. In this study, we aimed to assess proprioceptive and tactile localization abilities in children with DCD while avoiding the confounding influence of their motor deficits. Seventeen children with DCD and sixteen Typically Developing (TD) children completed two tasks in which they had to localize unseen proprioceptive or tactile targets. In each task they responded either by manual pointing movements (i.e., with an effector affected by the motor disorder) or by ocular saccades, the saccadic system being preserved in DCD. In the proprioceptive task, the group of children with DCD showed higher inaccuracy and imprecision than the TD group in both the vertical and horizontal dimensions. These newly reported proprioceptive deficits were further found to correlate with DCD gross motor skills. In the tactile task, inaccuracy and imprecision in children with DCD were found only for the horizontal error and were not correlated with their motor skills. These results reveal ‘pure’ proprioceptive deficits in children with DCD, which may thus contribute to their motor difficulties. This finding improves our knowledge of the mechanisms underlying DCD.
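
In this literature, "inaccuracy" and "imprecision" are conventionally operationalized as constant error (mean displacement from the target) and variable error (dispersion around a participant's own mean). A minimal sketch of those two quantities on hypothetical 2D endpoints; it mirrors the standard definitions, not necessarily the authors' exact computation:

```python
# Constant error (accuracy) and variable error (precision) of localization
# responses, computed per axis from hypothetical 2D endpoints in cm.
import numpy as np

rng = np.random.default_rng(2)
target = np.array([0.0, 0.0])
# Hypothetical endpoints: biased 1.5 cm right, 0.5 cm up, SD ~2 cm.
endpoints = rng.normal(loc=[1.5, 0.5], scale=2.0, size=(30, 2))

errors = endpoints - target
constant_error = errors.mean(axis=0)         # systematic bias (inaccuracy)
variable_error = errors.std(axis=0, ddof=1)  # trial-to-trial spread (imprecision)

for axis, ce, ve in zip(("horizontal", "vertical"),
                        constant_error, variable_error):
    print(f"{axis}: constant error = {ce:+.2f} cm, variable error = {ve:.2f} cm")
```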

Human Action Planning and Mental Rotation in a Tetris-Like-Game

2025 · Applied Psychology · Core
Aisha Aamir; Minija Tamosiunaite; Florentin Wörgötter · bioRxiv
The mechanisms behind human action planning and mental object manipulation are still not well understood. These core cognitive abilities are essential not only for interaction with physical spaces, e.g., for assembling objects, but also for effective problem-solving in the digital world. Here we ask which strategies humans employ when assessing whether an object will fit into a cavity. To this end, objects and cavities were presented with different orientations on a computer screen and we measured errors, reaction times and gaze patterns, where the latter can point to different problem-solving strategies. On the one hand, our findings confirm that simpler configurations are solved faster and more efficiently than more complex ones. On the other hand, by analyzing about 80,000 gazes, we observed that participants used three different strategies. In many instances, the investigated task — featuring relatively large objects — could be completed using only peripheral vision (37%). In a larger number of cases, quite “specific” gaze patterns were observed, primarily focusing on the Gestalt of a concave corner (46%). Less frequently, but still notably often, participants employed a strategy of fixating near the object or cavity (17%), potentially minimizing the length of the required saccadic eye movements while relying on perifoveal/peripheral vision. Ultimately, these findings highlight the crucial roles of proximity, spatial orientation, and visual cues in object recognition tasks, suggesting that the perceptual strategies used depend on distinct aspects of the object configurations.
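
The three strategies differ in where fixations land relative to the object and cavity. A hedged sketch of one way such fixations could be binned into regions of interest; the ROI coordinates and radius below are illustrative assumptions, not values from the study:

```python
# Assign fixations to coarse strategy-relevant regions by distance to
# illustrative ROI centers. ROI coordinates and the 120 px radius are
# assumptions for the sketch, not values from the study.
import math

ROIS = {
    "concave_corner": (420, 310),            # corner of the cavity
    "between_object_and_cavity": (560, 300),  # fixations near both items
    "object_center": (700, 290),
}
RADIUS = 120  # px; fixations farther than this from every ROI -> peripheral

def classify_fixation(x, y):
    best, dist = min(
        ((name, math.hypot(x - cx, y - cy)) for name, (cx, cy) in ROIS.items()),
        key=lambda item: item[1],
    )
    return best if dist <= RADIUS else "peripheral_only"

for fx in [(430, 320), (575, 305), (50, 40)]:
    print(fx, "->", classify_fixation(*fx))
```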

Framework for Multimodal Cognitive Load Analysis in Safety-Critical Systems: An ATC Simulation Case Study

2025 · HCI · Core
Jonas Pöhler; Antonia Vitt; Nadine Flegel; Tilo Mentler · Journal Article
Controlled studies in safety-critical domains such as Air Traffic Control (ATC) are inherently difficult, making high-fidelity simulators essential for research. However, existing simulation environments are often complex and expensive facilities that are only available at selected locations (e.g., flight simulators) or lack the necessary realism, limiting their use in Human-Computer Interaction (HCI) research. This paper presents a framework that addresses this gap, demonstrating how a more realistic, sensor-enhanced simulation environment can be developed in a comparatively low-cost manner. Following the Design Science Research (DSR) methodology, we integrated the open-source BlueSky ATC engine with a custom frontend and multiple sensor modalities (e.g., eye tracking, PPG, respiration). Our preliminary evaluation in a landing-scenario case study confirms the framework's effectiveness in capturing rich physiological and behavioral data corresponding to cognitive load. We present the system architecture, assess the DSR process, and release the framework as an open-source tool to foster further research.
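
Whatever the concrete implementation, a framework like this has to merge asynchronous sensor streams onto one clock. A minimal sketch of timestamped multi-stream logging; the stream names follow the abstract, but the architecture shown is an assumption, not the released framework:

```python
# Merge asynchronous sensor streams (eye tracking, PPG, respiration)
# into one timestamped log. Producer threads stand in for real sensor
# drivers; the actual framework's architecture may differ.
import queue
import threading
import time

log = queue.Queue()

def sensor(name, period_s, n_samples):
    for i in range(n_samples):
        time.sleep(period_s)
        log.put((time.monotonic(), name, f"sample-{i}"))

streams = [("eye_tracking", 0.02, 5), ("ppg", 0.05, 3), ("respiration", 0.1, 2)]
threads = [threading.Thread(target=sensor, args=s) for s in streams]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain and sort by timestamp to get one time-aligned record stream.
records = []
while not log.empty():
    records.append(log.get())
records.sort()
for ts, name, value in records:
    print(f"{ts:.3f} {name:12s} {value}")
```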

The interpretable surgical temporal informer: explainable surgical time completion prediction

2025 · Clinical · Core
Roger D. Soberanis-Mukul; Rohit Shankar; Lalithkumar Seenivasan; Jose L. Porras; Masaru Ishii; Mathias Unberath · International Journal of Computer Assisted Radiology and Surgery
Predicting surgical completion time helps streamline surgical workflow and OR utilization, enhancing hospital efficiency. When time prediction is based on interventional video of the surgical site, predictions may correlate with the technical proficiency of the surgeon, because skill is a useful proxy for completion time. To understand which features of surgical site video are predictive of surgical time, we develop prototype-like visual explanations, making them applicable to video sequences.
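
Prototype-style explanation scores an input by its similarity to a small set of learned reference cases, so a prediction can be read as "this looks like prototype k". A minimal numeric sketch of that general idea, not the paper's architecture:

```python
# Prototype-based prediction: regress remaining surgical time as a
# similarity-weighted average of times associated with learned
# prototypes. Embeddings, prototypes, and times are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
prototypes = rng.normal(size=(4, 8))                     # 4 prototype embeddings
prototype_minutes = np.array([10.0, 25.0, 40.0, 60.0])   # associated times

def predict_with_explanation(embedding):
    # Similarity = softmax over negative squared distances to prototypes.
    d2 = ((prototypes - embedding) ** 2).sum(axis=1)
    sim = np.exp(-d2)
    sim /= sim.sum()
    prediction = float(sim @ prototype_minutes)
    return prediction, sim  # sim doubles as the explanation

video_embedding = rng.normal(size=8)
minutes, weights = predict_with_explanation(video_embedding)
print(f"predicted {minutes:.1f} min remaining; "
      f"prototype weights {np.round(weights, 2)}")
```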

Disentangling Respiratory Phase-Dependent and Anticipatory Cardiac Deceleration in a Visual Perception Task

2025 · Neuroscience & Neuropsychology · Core
Ege Kingir; Sukanya Chakraborty; Caspar M. Schwiedrzik; Melanie Wilke · bioRxiv
The heart does not beat like a metronome: varying parasympathetic input to the heart produces continuous heart rate variability. Vagal cardiomotor neuron activity is coupled to the respiratory cycle, leading to Respiratory Sinus Arrhythmia (RSA), a permanent oscillation of heart rate synchronized with respiration. Heart rate also temporarily decelerates in specific conditions, such as freezing due to perceived threat or anticipation of a salient stimulus. Anticipatory Cardiac Deceleration (ACD) is observed consistently in anticipation of a stimulus in perceptual tasks, but its relationship with perceptual performance is debated. Previous quantifications of ACD neglect ongoing heart rate oscillations due to RSA, which may have led to inconsistencies in ACD-related analyses across studies. Here, we suggest a novel approach to estimate trial-averaged RSA amplitude and respiratory phase-independent cardiac deceleration simultaneously, and apply it to an EEG-ECG dataset from a visual detection task. While total ACD was not associated with perception, dissociating RSA-based and non-respiratory cardiac modulations revealed that the two show opposing effects on perceptual performance. Additionally, we found that participants with higher ACD amplitudes also displayed larger Visual Awareness Negativity potentials, further supporting a contribution of ACD to visual perception.

Impact statement: We present a novel analysis method to quantify task-related, anticipatory cardiac deceleration that takes tonic heart rate oscillations due to respiratory sinus arrhythmia into account. Our results add to previous research on the relationship between cardiac deceleration and perception by simultaneously characterizing and dissociating respiratory and non-respiratory heart rate modulations during stimulus anticipation.
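
The proposed decomposition can be pictured as a single regression that explains pre-stimulus heart rate with a respiratory-phase sinusoid (the RSA component) plus a phase-independent anticipation term. A simplified least-squares sketch of that idea on synthetic data; the authors' actual estimator is more involved:

```python
# Jointly estimate RSA amplitude and respiratory phase-independent
# anticipatory deceleration by regressing heart rate on a respiratory
# phase sinusoid plus a pre-stimulus ramp. Synthetic data, simplified
# model.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(-4, 0, 200)                  # seconds before stimulus onset
resp_phase = 2 * np.pi * 0.25 * t            # assume 0.25 Hz breathing

true_rsa, true_acd = 3.0, -2.0               # bpm
ramp = (t - t.min()) / (t.max() - t.min())   # 0 -> 1 toward stimulus onset
hr = (70 + true_rsa * np.cos(resp_phase) + true_acd * ramp
      + rng.normal(0, 0.5, t.size))

# Design matrix: intercept, cos/sin of respiratory phase, anticipation ramp.
X = np.column_stack([np.ones_like(t), np.cos(resp_phase),
                     np.sin(resp_phase), ramp])
beta, *_ = np.linalg.lstsq(X, hr, rcond=None)
rsa_amp = float(np.hypot(beta[1], beta[2]))
print(f"estimated RSA amplitude = {rsa_amp:.2f} bpm (true 3.0)")
print(f"estimated anticipatory deceleration = {beta[3]:.2f} bpm (true -2.0)")
```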

Operator-agnostic and real-time usable psychophysiological models of trust, workload, and situation awareness

2025 · Applied Psychology · Core
Erin E. Richardson; Jacob R. Kintz; Savannah L. Buchner; Torin K. Clark; Allison P. Hayman · Frontiers in Computer Science
Trust, mental workload, and situation awareness (TWSA) are cognitive states important to human performance and human-autonomy teaming. Individual and team performance may be improved if operators can maintain ideal levels of TWSA. Predictions of operator TWSA can inform adaptive autonomy and resource allocation in teams, helping achieve this goal. Current approaches to estimating TWSA, such as questionnaires or behavioral measures, are obtrusive, task-specific, or cannot be used in real time. Psychophysiological modeling has the potential to overcome these limitations, but prior work is limited in operational feasibility. To help address this gap, we develop psychophysiological models that can be used in real time and that do not rely on operator-specific background information. We assess the impacts of these constraints on the models' performance. Participants (n = 10) performed a human-autonomy teaming task in which they monitored a simulated spacecraft habitat. Regression models using LASSO-based feature selection were fit with an emphasis on model stability and generalizability. We demonstrate functional model fit (adjusted R²: T = 0.67, W = 0.60, SA = 0.85). Furthermore, model performance extends to predictive ability, assessed via leave-one-participant-out cross-validation (Q²: T = 0.58, W = 0.46, SA = 0.74). This study evaluates model performance to help establish the viability of real-time, operator-agnostic models of TWSA.
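
Leave-one-participant-out validation with LASSO feature selection maps directly onto scikit-learn's grouped splitters. A minimal sketch on synthetic features; the feature set and ratings are placeholders, not the study's:

```python
# LASSO regression with leave-one-participant-out cross-validation and a
# predictive Q^2 score. Synthetic features and labels stand in for the
# study's psychophysiological features and TWSA ratings.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(5)
n_participants, trials, n_features = 10, 12, 20
X = rng.normal(size=(n_participants * trials, n_features))
groups = np.repeat(np.arange(n_participants), trials)  # participant IDs
y = X[:, :3] @ np.array([1.5, -1.0, 0.8]) + rng.normal(0, 0.5, len(X))

model = Lasso(alpha=0.1)  # LASSO drives irrelevant feature weights to zero
pred = cross_val_predict(model, X, y, groups=groups, cv=LeaveOneGroupOut())

# Q^2: predictive analogue of R^2, computed from held-out predictions.
q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"leave-one-participant-out Q^2 = {q2:.2f}")
```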

Detecting Reading-Induced Confusion Using EEG and Eye Tracking

2025 · HCI · Core
Haojun Zhuang; Dünya Baradari; Nataliya Kosmyna; Arnav Balyan; Constanze Albrecht; Stephanie Chen; Pattie Maes · arXiv
Humans regularly navigate an overwhelming amount of information via text media, whether reading articles, browsing social media, or interacting with chatbots. Confusion naturally arises when new information conflicts with or exceeds a reader's comprehension or prior knowledge, posing a challenge for learning. In this study, we present a multimodal investigation of reading-induced confusion using EEG and eye tracking. We collected neural and gaze data from 11 adult participants as they read short paragraphs sampled from diverse, real-world sources. By isolating the N400 event-related potential (ERP), a well-established neural marker of semantic incongruence, and integrating behavioral markers from eye tracking, we provide a detailed analysis of the neural and behavioral correlates of confusion during naturalistic reading. Using machine learning, we show that multimodal (EEG + eye tracking) models improve classification accuracy by 4-22% over unimodal baselines, reaching an average weighted participant accuracy of 77.3% and a best accuracy of 89.6%. Our results highlight the dominance of the brain's temporal regions in these neural signatures of confusion, suggesting avenues for wearable, low-electrode brain-computer interfaces (BCI) for real-time monitoring. These findings lay the foundation for developing adaptive systems that dynamically detect and respond to user confusion, with potential applications in personalized learning, human-computer interaction, and accessibility.
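
The reported unimodal-versus-multimodal comparison amounts to training the same classifier on EEG features, gaze features, and their concatenation. A minimal sketch with synthetic features; dimensions and effect sizes are assumptions, not the study's:

```python
# Compare unimodal and multimodal (EEG + eye tracking) confusion
# classifiers. Synthetic features; the study's real features include
# N400-window EEG amplitudes and gaze measures such as fixations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 300
confused = rng.integers(0, 2, n)                           # 1 = confused
eeg = rng.normal(size=(n, 16)) + 0.6 * confused[:, None]   # weak EEG signal
gaze = rng.normal(size=(n, 6)) + 0.4 * confused[:, None]   # weaker gaze signal

for name, X in [("EEG only", eeg), ("gaze only", gaze),
                ("EEG + gaze", np.hstack([eeg, gaze]))]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, confused,
                          cv=5, scoring="accuracy").mean()
    print(f"{name:10s} accuracy = {acc:.3f}")
```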