Funding: This research was initiated in a project supported by the National High Technology Research and Development Program of China (863 Program) and a Japanese scientific research project.
Abstract: Pen-based user interfaces, which leverage the affordances of the pen, provide users with more flexible and natural interaction. However, it is difficult to construct usable pen-based user interfaces because of the lack of support for their development. Toolkit-level support has been exploited to solve this problem, but this approach makes it hard to achieve platform independence, easy maintenance, and easy extension. In this paper, a context-aware infrastructure called WEAVER is created to provide pen interaction services for both novel pen-based applications and legacy GUI-based applications. WEAVER aims to support the pen as another standard interactive device alongside the keyboard and mouse, and presents a high-level access interface to pen input. It employs application context to tailor its services to different applications. By modeling the application context and registering the relevant action adapters, WEAVER can offer services such as gesture recognition, continuous handwriting, and other fundamental ink manipulations to the appropriate applications. One distinct feature of WEAVER is that off-the-shelf GUI-based software packages can be enhanced with pen interaction without modifying their existing code. In this paper, the architecture and components of WEAVER are described, and examples and feedback from its use are presented.
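The abstract does not specify WEAVER's programming interface, but the adapter-registration pattern it describes can be illustrated in code. The following minimal Python sketch shows one plausible shape of such an infrastructure; all names here (PenService, ApplicationContext, PenEvent, register_adapter) are hypothetical illustrations, not WEAVER's actual API.

# A minimal sketch of the context-aware adapter pattern described above.
# All names are illustrative assumptions, not WEAVER's actual interface.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ApplicationContext:
    """Models the target application so services can be tailored to it."""
    app_name: str
    window_class: str

@dataclass
class PenEvent:
    kind: str       # e.g. "stroke", "gesture", "handwriting"
    payload: object

class PenService:
    """Routes high-level pen input to adapters registered per application."""
    def __init__(self) -> None:
        self._adapters: Dict[str, List[Callable[[PenEvent], None]]] = {}

    def register_adapter(self, ctx: ApplicationContext,
                         handler: Callable[[PenEvent], None]) -> None:
        # Adapters are keyed by application, so a legacy GUI application
        # can gain pen interaction without any change to its own code.
        self._adapters.setdefault(ctx.app_name, []).append(handler)

    def dispatch(self, ctx: ApplicationContext, event: PenEvent) -> None:
        for handler in self._adapters.get(ctx.app_name, []):
            handler(event)

# Usage: attach a gesture handler to a (hypothetical) legacy editor.
service = PenService()
editor_ctx = ApplicationContext(app_name="legacy_editor", window_class="EditorWnd")
service.register_adapter(editor_ctx, lambda e: print("gesture:", e.payload))
service.dispatch(editor_ctx, PenEvent(kind="gesture", payload="circle"))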
Funding: Partially supported by the National Natural Science Foundation of China under Grant No. 61228206 and the Grant-in-Aid for Scientific Research of Japan under Grant Nos. 23300048 and 25330241.
Abstract: Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface that exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (haptic feedback and audio feedback). Our results show that participants performed faster in Fitts' law pointing tasks with the haptic feedback interface than with the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings serve as an initial step toward helping visually impaired people easily access required information on large public displays using haptic interfaces.
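The regression the abstract mentions fits the standard models MT = a + b * ID, where ID = log2(D/W + 1) for Fitts' law pointing (Shannon formulation) and ID = D/W for steering through a straight tunnel. The sketch below shows how such a fit is computed; the data values are made-up placeholders, since the paper's measurements are not given in the abstract.

import numpy as np

# Fitts' law: MT = a + b * ID, with ID = log2(D/W + 1).
# Steering law (straight tunnel): MT = a + b * (D / W).
# D = movement distance, W = target or tunnel width.
# The arrays below are placeholder data, not the paper's measurements.

D  = np.array([200.0, 400.0, 800.0, 200.0, 400.0, 800.0])  # distance (px)
W  = np.array([ 40.0,  40.0,  40.0,  80.0,  80.0,  80.0])  # width (px)
MT = np.array([ 0.92,  1.18,  1.47,  0.71,  0.95,  1.22])  # movement time (s)

ID = np.log2(D / W + 1.0)
b, a = np.polyfit(ID, MT, 1)          # least-squares fit of MT = a + b * ID
pred = a + b * ID
r2 = 1.0 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"Fitts fit: MT = {a:.3f} + {b:.3f} * ID  (R^2 = {r2:.3f})")

A high R^2 from this kind of fit is what supports the abstract's claim that the models are effective for evaluating the assistive interface.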
Funding: Partially supported by the Grant-in-Aid for Scientific Research of Japan under Grant Nos. 23300048 and 25330241, and the National Natural Science Foundation of China under Grant No. 61228206.
Abstract: Despite the existence of advanced functions in smartphones, most blind people still use old-fashioned phones with familiar layouts and tactile buttons. Smartphones support accessibility features including vibration, speech and sound feedback, and screen readers. However, these features are only intended to provide feedback on user commands or input; it remains a challenge for blind people to discover functions on the screen and to enter commands. Although smartphones support voice commands, such commands are difficult for a system to recognize in noisy environments. At the same time, smartphones integrate sophisticated motion sensors, and motion gestures based on device tilt have been gaining attention for eyes-free input. We believe that such motion gesture interactions offer more efficient access to smartphone functions for blind people. However, most blind people are not smartphone users, and they are aware of neither the affordances available in smartphones nor the potential for interaction through motion gestures. To investigate the most usable gestures for blind people, we conducted a user-defined gesture study with 13 blind participants. Using the gesture set and design heuristics from that study, we implemented motion-gesture-based interfaces with speech and vibration feedback for browsing phone books and making calls. We then conducted a second study to investigate the usability of the motion gesture interface and users' experiences with the system. The findings indicated that motion gesture interfaces are more efficient than traditional button interfaces. Based on the study results, we provide implications for designing smartphone interfaces.
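The abstract does not describe the recognizer behind the tilt gestures. As a rough sketch of one common approach, a threshold on the tilt angles derived from the gravity vector in accelerometer readings, the Python below maps a sample to a coarse gesture. The gesture names, axis conventions, and threshold are illustrative assumptions, not the paper's user-defined gesture set.

import math
from typing import Optional

# Threshold-based tilt-gesture detection from an accelerometer sample
# (ax, ay, az in units of g). Names and thresholds are assumptions.

TILT_THRESHOLD_DEG = 30.0  # assumed trigger angle

def detect_tilt_gesture(ax: float, ay: float, az: float) -> Optional[str]:
    """Map a single accelerometer reading to a coarse tilt gesture."""
    # Pitch and roll recovered from the gravity direction.
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    if pitch > TILT_THRESHOLD_DEG:
        return "tilt_left"     # e.g. move to previous phone-book entry
    if pitch < -TILT_THRESHOLD_DEG:
        return "tilt_right"    # e.g. move to next phone-book entry
    if roll > TILT_THRESHOLD_DEG:
        return "tilt_toward"   # e.g. confirm / dial the selected contact
    return None

# Usage: a phone held flat reads roughly (0, 0, 1); a strong tilt fires.
print(detect_tilt_gesture(0.0, 0.0, 1.0))   # None
print(detect_tilt_gesture(0.7, 0.0, 0.7))   # tilt_right

In a real interface each detected gesture would be paired with the speech and vibration feedback the abstract describes, so that blind users receive immediate confirmation of each command.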