Introduction
A modern workstation typically comprises a multitude of devices and objects: a computer, a desk clock, a desk phone, a calendar, a notepad, a pen stand with multiple pens, documents, cabinets, files, a planner, a diary, and so on. All of these items pose a challenge for a visually impaired user, as none of them were designed with a blind user in mind. The workstation concept itself is quite old: keeping all required items on the table, within arm's reach, improves any user's efficiency by minimizing the movement involved in using them. Having all the required tools close by and usable in parallel creates the classical office workstation model. This arrangement, however, offers no such advantage to a visually impaired user: being unable to "see" tools such as tables, cabinets, or files limits a blind user's effectiveness in utilizing them (Sáenz & Sánchez, 2010). This paper therefore addresses the issue by creating a gesture-controlled, augmented reality virtual workstation for blind users. The system enables a blind user to literally "pull down" an entire workstation, with all its gadgets, devices, and tools, as a digital overlay, with a unique interface optimized for a visually impaired user. Because the whole system is virtual and overlaid on the real world, it is not limited by the physical space in the user's vicinity: the user can pull and slide in as many new workstations as needed, and thus gains access to far more virtual objects than a physical workstation could hold.
The system allows a blind user to set up a complete workstation anywhere and to interact with it using simple gestures, receiving haptic feedback and binaural audio cues: an interface tuned specifically to the needs of a blind user.
In this paper, a novel "scalable adaptive framework" is proposed. It can be extended to incorporate new tools and devices, so the user's workstation can stay relevant, with all the needed equipment, for as long as new widgets are supplied.
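To make the idea of a scalable, extensible framework concrete, the sketch below shows one way such a design could be structured in code. This is purely illustrative: all class and method names (`VirtualWidget`, `WorkstationRegistry`, `on_gesture`, and the feedback event format) are our own assumptions, not part of the paper's implementation.

```python
class VirtualWidget:
    """Base class for any virtual tool placed on the workstation.

    New tools subclass this and implement on_gesture, so the core
    system never needs to change when a new widget is added.
    """
    name = "widget"

    def on_gesture(self, gesture):
        """Map a user gesture to a feedback event (haptic/audio cues)."""
        raise NotImplementedError


class WorkstationRegistry:
    """Holds the set of widgets currently available to the user."""

    def __init__(self):
        self._widgets = {}

    def register(self, widget):
        # Newer widgets plug in through the same call, which is what
        # makes the framework "scalable" in the paper's sense.
        self._widgets[widget.name] = widget

    def dispatch(self, widget_name, gesture):
        return self._widgets[widget_name].on_gesture(gesture)


class Clock(VirtualWidget):
    """Example widget: a virtual desk clock."""
    name = "clock"

    def on_gesture(self, gesture):
        if gesture == "tap":
            return {"audio": "speak_time", "haptic": "short_pulse"}
        return {"haptic": "error_buzz"}


registry = WorkstationRegistry()
registry.register(Clock())
print(registry.dispatch("clock", "tap"))
```

Under this scheme, adding a virtual notepad or phone later means registering one more `VirtualWidget` subclass; the gesture-dispatch path stays untouched.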
The system consists of two main hardware components: camera-mounted eyeglasses and an ultrasonic haptic glove. The haptic glove is the key component of the system: it detects hand state and orientation, facilitates haptic braille read and write functions, and can be powered by a small battery. The glove communicates with the processing module and, together with the camera module, enables the creation of a truly interactive and immersive augmented reality interface for the visually impaired. Refer to Figure 1 for the first prototype of the haptic glove with all functionalities.
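The data flow between the two components described above can be sketched as a simple pipeline: the glove reports hand pose, the camera module resolves which virtual object the hand is over, and the processing module fuses the two into a feedback event. The function names and data shapes below are our assumptions for illustration only; in the real system the first two functions would read from the glove and camera hardware.

```python
def read_glove_pose():
    # Stand-in for the glove's orientation/gesture sensing; in the real
    # system this would come from the glove hardware over its link to
    # the processing module.
    return {"hand": "right", "orientation": "palm_down", "gesture": "pinch"}


def read_camera_target(pose):
    # Stand-in for the camera module resolving which virtual object the
    # hand is currently over, given its pose.
    return "notepad" if pose["gesture"] == "pinch" else None


def process(pose, target):
    # Processing module: fuse glove and camera data into the haptic and
    # binaural audio cues delivered back to the user.
    if target is None:
        return {"haptic": "idle"}
    return {"haptic": "braille_render", "audio": f"selected {target}"}


pose = read_glove_pose()
target = read_camera_target(pose)
print(process(pose, target))
```

The point of the sketch is the separation of concerns: sensing (glove), scene understanding (camera), and feedback generation (processing module) stay decoupled, mirroring the modular hardware design.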
Figure 1. The three prototypes of the haptic glove system
Design Considerations and Other Possible State-of-the-Art Alternatives
Numerous design considerations were weighed before finalizing the current system.
Many innovative techniques had to be implemented to cope with changing lighting and environmental conditions. These efforts could have been avoided by using currently available state-of-the-art 3D IR point-cloud camera systems such as Microsoft Kinect, Leap Motion, or Google Glass. Such sophisticated hardware is, however, very expensive, and the proposed system is geared toward visually impaired users, the majority of whom live in developing nations, so keeping the overall cost of the system to a minimum was a clear imperative. The current prototype costs a little over $50, a figure that mass production could reduce drastically through economies of scale, making the system far more accessible, feasible, and practical for users from all walks of life. See Figure 1.