Multimodal Kinect-supported interaction for visually impaired users

Gross, Richard; Bockholt, Ulrich; Biersack, Ernst W.; Kuijper, Arjan
UAHCI 2013, Held as part of HCI 2013, 7th International Conference on Universal Access in Human-Computer Interaction: Design Methods, Tools, and Interaction Techniques for eInclusion, 21-26 July 2013, Las Vegas, NV, USA / Also published in Lecture Notes in Computer Science, Vol 8009/2013, PART 1

This paper discusses Kreader, a proof-of-concept interface that lets blind or visually impaired users have text read to them. We use the Kinect device to track the user's body. All feedback is presented through auditory cues, while a minimal visual interface can optionally be enabled. Interface elements are organized as a list and placed egocentrically, in relation to the user's body. Moving around in the room does not change the elements' locations, so visually impaired users can use their "body-sense" to find them. Two test sessions were used to evaluate Kreader. We consider the results encouraging and a solid foundation for future research into an interface that can be navigated by both sighted and visually impaired users.
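The body-relative ("egocentric") placement described in the abstract can be sketched in a few lines of code. The joint data, element names, offsets, selection radius, and function names below are hypothetical illustrations and are not taken from the paper; the sketch only shows how list elements anchored to the torso stay in the same place relative to the body, so the user can find them by proprioception regardless of where they stand in the room.

# Illustrative sketch of body-relative ("egocentric") placement of list
# elements. Joint names, offsets, and thresholds are hypothetical
# placeholders, not the authors' implementation.
import math

# List elements laid out at fixed offsets from the torso (metres, in the
# torso's local frame: x = right, y = up, z = forward).
ELEMENT_OFFSETS = {
    "previous_paragraph": (-0.30, 0.0, 0.45),
    "play_pause":         ( 0.00, 0.0, 0.45),
    "next_paragraph":     ( 0.30, 0.0, 0.45),
}
SELECT_RADIUS = 0.12  # hand must come within 12 cm of an element


def element_world_positions(torso_pos, torso_yaw):
    """Transform each body-relative offset into room coordinates.

    Because the offsets are defined relative to the torso, walking around
    the room leaves the elements unchanged relative to the body, so the
    user can rely on "body-sense" (proprioception) to find them.
    """
    cos_y, sin_y = math.cos(torso_yaw), math.sin(torso_yaw)
    positions = {}
    for name, (dx, dy, dz) in ELEMENT_OFFSETS.items():
        # Rotate the offset by the torso's yaw, then translate to the torso.
        wx = torso_pos[0] + cos_y * dx + sin_y * dz
        wy = torso_pos[1] + dy
        wz = torso_pos[2] - sin_y * dx + cos_y * dz
        positions[name] = (wx, wy, wz)
    return positions


def selected_element(hand_pos, torso_pos, torso_yaw):
    """Return the element the hand is touching, or None."""
    for name, pos in element_world_positions(torso_pos, torso_yaw).items():
        if math.dist(hand_pos, pos) <= SELECT_RADIUS:
            return name  # the caller would play the matching auditory cue
    return None

Per skeleton frame from the tracking device, the tracked torso and hand positions would be passed to selected_element, and the corresponding auditory cue played whenever it returns an element name.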


DOI:
10.1007/978-3-642-39188-0_54
Type:
Conference
City:
Las Vegas
Date:
2013-07-21
Department:
Digital Security
Eurecom Ref:
4100
Copyright:
© Springer. Personal use of this material is permitted. The definitive version of this paper was published in UAHCI 2013, Held as part of HCI 2013, 7th International Conference on Universal Access in Human-Computer Interaction: Design Methods, Tools, and Interaction Techniques for eInclusion, 21-26 July 2013, Las Vegas, NV, USA / Also published in Lecture Notes in Computer Science, Vol 8009/2013, PART 1 and is available at: http://dx.doi.org/10.1007/978-3-642-39188-0_54

PERMALINK : https://www.eurecom.fr/publication/4100