1-10 of 856 publications

Experimental road safety study of the actual driver reaction to the street ads using eye tracking, multiple linear regression and decision trees methods

2024 · Traffic Psychology · Core
Sharaf AlKheder · Expert Systems with Applications
The article describes the results of a naturalistic driving study conducted in Kuwait, in which 34 participants wore a mobile eye tracker so that the effect of roadside advertisements on driver attention could be monitored. Eye-tracking measures (fixations) served as the main dependent variable and were examined as a function of driving and roadside characteristics such as billboards and speed. The results were analyzed using traditional statistics (ANOVA and multiple linear regression) and machine learning (decision tree estimation). The study found that road advertisements negatively affect driver attention and thus road safety, and it also investigated how the level of safety varies with the type and size of advertisement. Across all estimates, different aspects of advertising had a detrimental impact on drivers' behavior: the duration of fixation and the rate of acceleration before viewing an ad were impacted by advertisement type and size, respectively.
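The study's two model families can be illustrated with a minimal sketch. This is not the authors' code or data: the feature names (billboard size, billboard type, vehicle speed) and the synthetic fixation durations are assumptions chosen to mirror the variables the abstract mentions.

```python
# Illustrative sketch: predicting fixation duration from hypothetical
# roadside-ad features with the two approaches the study compares,
# multiple linear regression and a decision tree.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical predictors: ad size (m^2), ad type (0=static, 1=digital), speed (km/h)
X = np.column_stack([
    rng.uniform(5, 40, 200),      # billboard size
    rng.integers(0, 2, 200),      # billboard type
    rng.uniform(40, 120, 200),    # vehicle speed
])
# Synthetic fixation durations (ms): longer for bigger/digital ads, shorter at speed
y = 150 + 4 * X[:, 0] + 60 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 20, 200)

linreg = LinearRegression().fit(X, y)
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

print(linreg.coef_)   # per-feature effect estimates
print(tree.score(X, y))  # in-sample R^2 of the tree
```

The regression yields interpretable per-feature coefficients, while the tree captures threshold effects (e.g. only large billboards prolonging fixations), which is one reason studies report both.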

How much you can supervise children during housework

2024 · Education · Invisible
Mikiko Oono; Keigo Inamura; Yoshifumi Nishida; Tatsuhiro Yamanaka · Injury Prevention
Background The World Health Organization has stated that the evidence for the effectiveness of child supervision in preventing injury is scientifically limited, and that supervision is affected by many factors such as the caregiver’s level of distraction, mental health status, and use of medications. In this study, we focused on supervision during housework to clarify how much of the time a supervisor can watch a child while doing a particular chore. Methods We developed an evaluation system to measure how much the supervisor watched a child, using an RGB-D camera (Microsoft Kinect) and eye tracking glasses (Pupil Labs Pupil Invisible). The system uses point cloud data measured by the RGB-D camera to construct a 3D model of the living space, then integrates eye tracking data from the glasses with posture data from the RGB-D camera to examine the level of child supervision. Participants were asked to do daily housework including cooking, ironing, watching TV, and using a smartphone. Results Two parents participated in this study. The child was within the effective visual field for at most 30 to 45% of the experiment time while ironing and watching TV, but for only 15% at most while cooking and using a smartphone. The level of supervision is affected by the type of housework, the positions of the supervisor and the child, and the home layout. Conclusion Our system makes it possible to measure how much a supervisor can watch a child during housework. The definition of supervision is still under discussion, but we believe this system can be used to educate supervisors on the importance of a passive approach to protecting children from harm. Acknowledgement: This research is based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO)
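One way the "effective visual field" test could be operationalized is a cone check between the supervisor's gaze direction and the child's 3D position. The function below is purely our assumption for illustration; the paper does not specify this computation, and the 30° half-angle is a hypothetical parameter.

```python
# Hypothetical sketch: is the child's 3D position within a cone around
# the supervisor's gaze direction? (Our construction, not the paper's.)
import numpy as np

def in_visual_field(gaze_origin, gaze_dir, child_pos, half_angle_deg=30.0):
    to_child = np.asarray(child_pos, float) - np.asarray(gaze_origin, float)
    to_child = to_child / np.linalg.norm(to_child)
    gaze_dir = np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_child), -1.0, 1.0)))
    return angle <= half_angle_deg

# Supervisor's eyes at 1.6 m height looking along +x; child slightly off-axis
print(in_visual_field([0, 0, 1.6], [1, 0, 0], [2.0, 0.5, 0.8]))
```

Summing the fraction of frames for which this returns true, using gaze vectors from the eye tracker and positions from the RGB-D point cloud, would give percentages of the kind reported in the Results.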

Cognitive workload classification of law enforcement officers using physiological responses

2024 · Applied Psychology · Core
David Wozniak; Maryam Zahabi · Applied Ergonomics
Motor vehicle crashes (MVCs) are a leading cause of death for law enforcement officers (LEOs) in the U.S. LEOs, and novice LEOs (nLEOs) in particular, are susceptible to high cognitive workload while driving, which can lead to fatal MVCs. The objective of this study was to develop a machine learning algorithm (MLA) that can estimate the cognitive workload of LEOs while they perform secondary tasks in a patrol vehicle. A ride-along study was conducted with 24 nLEOs. Participants performed their normal patrol operations while physiological responses such as heart rate, eye movement, and galvanic skin response were recorded using unobtrusive devices. Findings suggested that the random forest algorithm could predict cognitive workload with relatively high accuracy (>70%) while relying entirely on physiological signals. The developed MLA can be used to build adaptive in-vehicle technology based on real-time estimation of cognitive workload, which can reduce the risk of MVCs in police operations.
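The classifier setup can be sketched in a few lines. This is a minimal illustration, not the study's pipeline: the features (heart rate, pupil diameter, skin conductance), the labels, and all numbers are synthetic assumptions standing in for the physiological signals the abstract names.

```python
# Minimal sketch: a random forest classifying high vs. low cognitive
# workload from hypothetical physiological features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
hr = rng.normal(75, 10, n)       # heart rate (bpm)
pupil = rng.normal(3.5, 0.6, n)  # pupil diameter (mm)
gsr = rng.normal(2.0, 0.5, n)    # skin conductance (uS)
# Synthetic label: workload rises with all three signals, plus noise
workload = ((hr - 75) / 10 + (pupil - 3.5) / 0.6 + (gsr - 2.0) / 0.5
            + rng.normal(0, 1, n)) > 0

X = np.column_stack([hr, pupil, gsr])
X_tr, X_te, y_tr, y_te = train_test_split(X, workload, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy
```

The appeal of such a model for in-vehicle use is that inference needs only the sensor stream, with no task instrumentation.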

Improving Predictions of Cognitive States for an Adaptive Autonomous System

2024 · Cognitive Psychology · Core
Nicole C. Rote; Jacob R. Kintz; Erin E. Richardson; Allison P. Hayman; Torin K. Clark · Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Future crewed deep space missions will be challenged by substantial communication latency with Earth. Autonomous systems will likely augment the role of mission control, enabling a more Earth-independent crew. To improve the performance of human-autonomy teams, autonomous systems can adapt in real-time to accommodate changes to an operator’s cognitive states caused by dynamic spaceflight events. The aim of this work was to determine the most important feature categories to accurately predict an operator’s cognitive states in real-time as they work with an autonomous system. We utilized data from a human-autonomy teaming experiment in which trust, mental workload, and situation awareness were predicted as participants completed a spaceflight-relevant task. In cognitive state predictions of unseen operators, a model with no operator background information or eye-tracking data outperformed models that included these features. These simplified models enhance feasibility for an autonomous system to adapt in real-time to accommodate an operator’s cognitive states.

Interpretable Models for Near-real-time Prediction of Team Cognitive Workload in Complex Sociotechnical Environments Using Behavioral and Physiological Data

2024 · Neuroscience & Neuropsychology · Core
Nurun Naher; Mary Jean Amon; Stephen M. Fiore · Proceedings of the Human Factors and Ergonomics Society Annual Meeting
This work develops interpretable models to predict near-real-time cognitive workload (CWL) in teams operating in complex environments. Existing approaches using neurological sensors are impractical for field use. Our approach integrates multimodal data from non-invasive behavioral and physiological sensors to robustly detect CWL changes. We apply multidimensional recurrence quantification analysis (MdRQA) with a novel pattern analysis extension to identify recurring multimodal signatures indicative of different CWL states. A multiparty dataset with fNIRS, behavioral, and physiological measures from teams performing a gamified search and rescue mission and individual resting-state tasks was used. The findings indicate that the multimodal patterns derived from non-invasive measures were significantly associated with a neurological measure of CWL within 10-second time slices. Moreover, the multimodal patterns were predictive of individual and team performance over and above the neurological measure of CWL. This can enable timely interventions by intelligent systems to optimally manage team CWL and enhance human-machine teaming in demanding environments.

Charting the Silent Signals of Social Gaze: Automating Eye Contact Assessment in Face-to-Face Conversations

2024 · Social Psychology · Invisible
Ralf Schmälzle; Nolan T. Jahn; Gary Bente · bioRxiv
Social gaze is a crucial yet often overlooked aspect of nonverbal communication. During conversations, it typically operates subconsciously, following automatic co-regulation patterns. However, deviations from typical patterns, such as avoiding eye contact or excessive gazing, can significantly affect social interactions and perceived relationship quality. The principles and effects of social gaze have intrigued researchers across various fields, including communication science, social psychology, animal biology, and psychiatry. Despite its significance, research in social gaze has been limited by methodological challenges in assessing eye movements and gaze direction during natural social interactions. To address these obstacles, we have developed a new approach combining mobile eye tracking technology with automated analysis tools. In this paper, we introduce, validate, and apply a pipeline for recording and analyzing gaze behavior in dyadic conversations. We present a sample study where dyads engaged in two types of interactions: a get-to-know conversation and a conflictual conversation. Our new analysis pipeline corroborated previous findings, such as people directing more eye gaze while listening than talking, and gaze typically lasting about three seconds before averting. These results demonstrate the potential of our methodology to advance the study of social gaze in natural interactions.
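Measures like "gaze typically lasting about three seconds before averting" reduce to episode durations over a per-frame gaze flag. The sketch below is our construction, not the paper's pipeline; the frame rate and flag format are assumptions.

```python
# Illustrative sketch: derive gaze-episode durations (seconds) from a
# per-frame "gaze on partner" flag, e.g. output of a gaze classifier.
import itertools

def gaze_episodes(on_partner, fps=30):
    """Duration of each contiguous run of partner-directed gaze."""
    return [sum(1 for _ in run) / fps
            for looking, run in itertools.groupby(on_partner) if looking]

# 90 frames on partner (3 s), 30 frames away, 60 frames on partner (2 s)
flags = [True] * 90 + [False] * 30 + [True] * 60
print(gaze_episodes(flags))  # -> [3.0, 2.0]
```

Aggregating such episodes separately for listening and speaking segments would yield contrasts like the listening-vs-talking difference the study reports.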

Development and validation of a virtual teaching method for minimally invasive surgery skills: a prospective cohort study

2024 · Clinical · Core
Bibek Das; Frances Ledesma; Ravi Naik; Sarah Law; Payam Soleimani-Nouri; Omar A. Khan; George Mylonas; Madhava Pai; Hutan Ashrafian; Duncan Spalding; Matyas Fehervari · International Journal of Surgery
Introduction: The COVID-19 pandemic led to a significant reduction in operative exposure for surgical trainees, necessitating alternative training methods to mitigate the impact on surgical education. This study sought to evaluate whether minimally invasive surgery (MIS) skills could be taught remotely using widely available technology, with objective assessments of proficiency. Methods: This was a pilot observational study with comparative assessment of face-to-face (F2F) and virtual training of novice learners in MIS skills. Performance and objective cognitive workload parameters (Surgical Task Load Index (SURG-TLX) score, heart rate, and pupil metrics) were evaluated. The assessments were peg transfer (McGill Inanimate System for Training and Evaluation of Laparoscopic Skills (MISTELS)) and suturing (Suturing Training and Testing (SUTT)) tasks performed using box trainers. Virtual teaching was conducted by expert trainers using a web-based streaming platform. Results: Technical challenges of delivering a virtual MIS skills course were addressed after a pilot course. Participants (n = 20) in the final course had similar baseline characteristics and were randomly allocated to F2F (n = 8) and virtual (n = 12) teaching groups. Participants in the online group completed the peg transfer task faster than the F2F group (11.25 minutes vs. 16.88 minutes; P = 0.015). There were no significant differences in any other MISTELS or SUTT performance measures between groups. Cognitive workload parameters (SURG-TLX score, heart rate, and pupil metrics) were also similar between groups. Conclusion: This study has demonstrated that virtual teaching of MIS skills using a web-based streaming platform is feasible and effective, providing the foundation for low-cost, effective, and scalable MIS skills programs in the future.

Drivers’ situational awareness of surrounding vehicles during takeovers: Evidence from a driving simulator study

2024 · Driving · Core
Lesong Jia; Chenglue Huang; Na Du · Transportation Research Part F: Traffic Psychology and Behaviour

The Visual Experience Dataset: Over 200 Recorded Hours of Integrated Eye Movement, Odometry, and Egocentric Video

2024 · Computer Vision, HCI · Core
Michelle R. Greene; Benjamin J. Balas; Mark D. Lescroart; Paul R. MacNeilage; Jennifer A. Hart; Kamran Binaee; Peter A. Hausamann; Ronald Mezile; Bharath Shankar; Christian B. Sinnott; Kaylie Capurro; Savannah Halow; Hunter Howe; Mariam Josyula; Annie Li; Abraham Mieses; Amina Mohamed; Ilya Nudnou; Ezra Parkhill; Peter Riley; Brett Schmidt; Matthew W. Shinkle; Wentao Si; Brian Szekely; Joaquin M. Torres; Eliana Weissmann · arXiv
We introduce the Visual Experience Dataset (VEDB), a compilation of over 240 hours of egocentric video combined with gaze- and head-tracking data that offers an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 58 observers ranging from 6-49 years old. This paper outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to utilize and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

2024 · HCI · Neon
Arvind Srinivasan; Johannes Ellemose; Peter W. S. Butcher; Panagiotis D. Ritsos; Niklas Elmqvist · arXiv
We propose the notion of Attention-Aware Visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user's gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
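Component (2), accumulating attention over time, can be sketched as a per-mark accumulator with decay. The class, names, and decay rule below are our assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: accumulate per-mark attention from a stream of gaze
# samples, decaying attention for marks not currently under the gaze.
from collections import defaultdict

class AttentionTracker:
    def __init__(self, decay=0.99):
        self.decay = decay
        self.attention = defaultdict(float)  # mark id -> accumulated attention (s)

    def update(self, gazed_mark, dt):
        """One gaze sample: gazed_mark is the mark under the gaze point
        (or None), dt is the sample interval in seconds."""
        for mark in self.attention:
            self.attention[mark] *= self.decay
        if gazed_mark is not None:
            self.attention[gazed_mark] += dt

    def unseen(self, threshold=0.1):
        """Marks below the attention threshold: candidates for highlighting."""
        return [m for m, a in self.attention.items() if a < threshold]

tracker = AttentionTracker()
for mark in ["bar_1", "bar_1", "bar_2", None, "bar_1"]:
    tracker.update(mark, dt=0.016)  # ~60 Hz gaze samples
```

An AAV would then feed `unseen()` back into the rendering layer, e.g. to highlight marks the user has not yet attended to.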