This paper studies how eye tracking can be used to measure and facilitate joint attention in parent-child interaction. Joint attention is critical for social learning activities such as parent-child shared storybook reading. There is a dissociation of attention when the adult reads the text while the child looks at the pictures. We hypothesize that this lack of joint attention limits children's opportunity to learn print-related skills. The traditional research paradigm does not measure joint attention in real time during shared storybook reading.

Video communication using head-mounted cameras could be useful for mediating shared activities and supporting collaboration. The growing popularity of wearable gaze trackers presents an opportunity to add gaze information to the egocentric video. We hypothesized three potential benefits of gaze-augmented egocentric video for supporting collaborative scenarios: supporting deictic referencing, enabling grounding in communication, and enabling better awareness of the collaborator's intentions. Previous research on using egocentric video for real-world collaborative tasks has failed to show clear benefits of gaze-point visualization. We designed a study, deconstructing a collaborative car navigation scenario, to specifically target the value of gaze-augmented video for intention prediction. Our results show that viewers of gaze-augmented video could predict the direction taken by a driver at a four-way intersection more accurately and more confidently than viewers of the same video without the superimposed gaze point. Our study demonstrates that gaze augmentation can be useful and encourages further study in real-world collaborative scenarios.

Remote collaboration on physical tasks is an emerging use of video telephony. Recent work suggests that conveying gaze information, measured using an eye tracker, between collaboration partners could be beneficial in this context. However, studies that compare gaze to other pointing mechanisms, such as a mouse-controlled pointer, in video-based collaboration have not been available. We conducted a controlled user study to compare the two remote gesturing mechanisms (mouse, gaze) to video only (none) in a situation where a remote expert saw video of the desktop of a worker. We also investigated the effect of distraction of the remote expert on the collaborative process and whether this effect depends on the pointing device. Our results suggest that mouse and gaze pointers lead to faster task performance and improved perception of the collaboration, in comparison to having no pointer at all. The mouse outperformed the gaze when the task required conveying procedural instructions. In addition, using gaze for remote gesturing required increased verbal effort for communicating both referential and procedural messages.
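The gaze augmentation described above amounts to superimposing the tracked gaze point on each video frame. As a minimal sketch, assuming each frame is an RGB NumPy array and the tracker reports gaze coordinates normalized to [0, 1] (the function and parameter names here are illustrative, not from any of the studies):

```python
import numpy as np

def overlay_gaze(frame, gaze_xy, radius=12, color=(255, 0, 0)):
    """Return a copy of an RGB frame with a filled circle at the gaze point.

    frame:   H x W x 3 uint8 array (one video frame)
    gaze_xy: (x, y) gaze position, each normalized to [0, 1]
    """
    h, w = frame.shape[:2]
    cx = int(gaze_xy[0] * (w - 1))
    cy = int(gaze_xy[1] * (h - 1))
    # Boolean disc mask around the gaze point
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    out = frame.copy()
    out[mask] = color  # paint the disc; original frame is untouched
    return out

# Example: mark the center of a black 100 x 100 test frame
frame = np.zeros((100, 100, 3), dtype=np.uint8)
marked = overlay_gaze(frame, (0.5, 0.5))
```

In practice the same per-frame overlay would be applied to a live egocentric video stream, with the gaze samples time-aligned to the frames before drawing.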