Monday, June 27, 2011

Concept and Related Haptic Research

Description of core concept

Blind individuals have a unique perspective of the world and have developed heightened control of their other senses to compensate for their impairment. Traditional maps and navigation methods do not serve the visually impaired and, in George's case, are downright useless. Placed in a pitch-black apartment, a sighted individual can begin to understand how the blind perceive spaces and use their other senses to navigate through them.

Research carried out by the creators of SeaTouch, an experimental research programme aimed at understanding how the blind navigate, examined how blind users coordinate egocentric and allocentric spatial frames of reference, as opposed to how they acquire spatial knowledge of an area or route (Tlauka and Wilson, 1996; Darken and Banker, 1998; Christou and Bülthoff, 2000). This work has provided a point of reference for the navigation concept and for the use of haptic technology and virtual reality throughout the development of the core concept.

The system is a three-tier concept. It starts with the interaction between George and his computer through haptic and other assistive technologies, followed by route planning and auditory navigation assistance through a combination of handheld-device technology and Google Maps, and finally a collision-avoidance system which alerts George in advance to incoming obstacles, mapped to specific areas of the body. Drawing on the research into how the blind perceive spatial displacement and on the technologies studied in each area, the core benefits of the system are that George gains a heightened level of awareness before his journey and that spatial information is constantly updated, so that the integrated technologies provide a sensory capacity to compensate for his visual impairment.

Planning and knowing one's surroundings is of paramount importance to the blind. What this conceptual integration of the above technologies and systems hopes to achieve is to provide George not only with compensation for a visual impairment, but also with a sense of security and stability while navigating through a previously 'dark' area.

References

Tlauka, M., & Wilson, P. N. (1996). Orientation-free representations from navigation through a computer-simulated environment. Environment and Behavior, 28, 647-664.

Darken, R. and Banker, W. (1998) Navigating in natural environments: A virtual environment training transfer study. VRAIS98: Virtual Reality Annual Symposium, 98, 12-19

Christou, C. G. and Bülthoff, H. H. (2000) The perception of spatial layout in a virtual world. In: BMCV 2000, First IEEE International Workshop on Biologically Motivated Computer Vision, Springer, Berlin, Germany, 1-22.

Sunday, June 26, 2011

Haptic Assistance

Map Exploration Through Haptic Assistance

The blind perceive map information in relation to spatial displacement within an area, as opposed to acquiring spatial knowledge of it (Tlauka and Wilson, 1996; Darken and Banker, 1998; Christou and Bülthoff, 2000). Early work by Jacobson (1998) illustrated the use of a combination of virtual reality and haptic devices to provide the blind with an audible and tangible interaction with the map, using the Phantom haptic device as a white cane in the virtual world.

Phantom Omni force-feedback

This is an off-the-shelf haptic device consisting of a stylus mounted at the end of a robotic arm, capable of sensing movements and resisting them with different levels of force in all three dimensions. The device acts as the 'white cane' for the user in the virtual world. When the user grips the end of the stylus, the computer program controls the motors to restrict movement and apply opposing forces, which the user feels as a haptic effect while interacting with the map.
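As a rough illustration of how the computer can turn contact with a virtual surface into an opposing force, the sketch below uses a simple spring (penalty) model; the stiffness value and the device calls are placeholders rather than the actual Phantom Omni API.

```python
# Minimal sketch of penalty-based haptic rendering: one way the program could
# translate "touching" a virtual wall into an opposing force on the stylus.
# The stylus position and send_force() calls are placeholders, not the real API.

STIFFNESS = 800.0  # N/m, spring constant of the virtual surface (illustrative)

def wall_force(stylus_z, wall_z=0.0):
    """Return an opposing force when the stylus penetrates the virtual wall."""
    penetration = wall_z - stylus_z          # how far the stylus is "inside" the wall
    if penetration <= 0:
        return 0.0                           # free space: no resistance
    return STIFFNESS * penetration           # push back proportionally to depth

# Haptic loop (typically around 1 kHz so contact feels rigid rather than spongy):
# while device.is_running():
#     x, y, z = device.read_position()       # hypothetical device call
#     device.send_force(0.0, 0.0, wall_force(z))
```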

3D Haptic Web Browser

The user is immersed in a 3D virtual representation of the web, developed for the visually impaired, which allows the internet to be navigated by touch. Using the Phantom device, the user can explore this 3D virtual representation and interact with 'hapgets'. As well as haptic interaction, the haptic web browser incorporates speech synthesis and speech recognition engines, allowing interaction and feedback through speech as well as touch. All information concerning hapgets can be transformed into speech.

The most intriguing feature of the 3D haptic web browser is the map exploration mode, which allows the visually impaired user to explore an interactive map. This technology is fully compatible with Google Maps, whose 2D map is transformed into a 3D multimodal map (haptic and aural).

Using the Phantom device, the user can then explore the virtual 3D multimodal map in a similar manner to using a cane in the physical world. Upon exploring roads and junctions, the speech synthesis engine produces information about the street in the form of speech.

Limitations

Although the Phantom Omni force-feedback device offers a whole new range of interaction methods between the visually impaired user and the digital world, being visually impaired means perceiving the physical and digital world in a unique manner. The Phantom device offers the user a form of 'digital cane', but the 3D virtual map integration is not yet detailed enough to give the user a first-person sense of walking through the streets of the planned route; instead it allows the user to feel the direction of the route and hear the names of the streets as he or she follows the path.

The technology is very much in its infancy, but its main limitation is the level of detail of the current virtual representations of the real world.

References

Jacobson, R. D. (1998) Navigating maps with little or no sight: An audio-tactile approach. Proceedings of the Workshop on Content Visualization and Intermedia Representations (CVIR), Montreal.

Tlauka, M.; Brolese, A.; Pomeroy, D. and Hobbs, W. (2005) Gender differences in spatial knowledge acquired through simulated exploration of a virtual shopping centre. Journal of Environmental Psychology, 25, 111-118

Darken, R. and Banker, W. (1998) Navigating in natural environments: A virtual environment training transfer study. VRAIS98: Virtual Reality Annual Symposium, 98, 12-19

Friday, June 17, 2011

Plan Route From Computer

Note: images will be added in the final version of the documentation.


Planning a Route from the Computer

Firstly, to plan the route the user must click on the shortcut icon on the desktop. This automatically opens the web browser and loads the Google Maps Plan Route feature.

The shortcut will load Google Maps with the 'Walking Route' option (as shown in the figure above). By default, Location A will already be selected, so the screen reader will read "Location A". When the user starts typing the address, the screen reader will read out the suggestions (shown in the figure below). Whatever the screen reader reads will also be displayed on the braille display.


To enter "Location B", the user can either navigate down with the tactile mouse or use the Tab key on the keyboard. If using the mouse, when the user hovers over "Location B" the screen reader will inform the user that he can enter "Location B" by clicking and typing in the address, as done for "Location A".

Once the start point and end point are defined, the user clicks "Send". This sends the route by e-mail to the mobile device.
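As a rough sketch of what the "Send" step could do behind the scenes, the example below builds a Google Maps walking-directions link for the two locations and e-mails it to the mobile device; the addresses and SMTP server are placeholders, not part of the actual system.

```python
# Sketch of the "Send" step: build a Google Maps walking-directions link for
# the chosen start and end points and e-mail it to the mobile device.
# The e-mail addresses and SMTP server below are placeholders.
import smtplib
from email.message import EmailMessage
from urllib.parse import quote_plus

def walking_route_url(location_a, location_b):
    # dirflg=w asks Google Maps for the walking-route option
    return ("https://maps.google.com/maps?saddr=" + quote_plus(location_a)
            + "&daddr=" + quote_plus(location_b) + "&dirflg=w")

def send_route(location_a, location_b):
    msg = EmailMessage()
    msg["Subject"] = "Planned walking route"
    msg["From"] = "george@example.com"          # placeholder
    msg["To"] = "george.phone@example.com"      # placeholder mobile inbox
    msg.set_content(walking_route_url(location_a, location_b))
    with smtplib.SMTP("smtp.example.com") as server:  # placeholder server
        server.send_message(msg)

# send_route("Location A", "Location B")
```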

Thursday, June 16, 2011

Wearable Computer

Problem definition:

In order for George to be able to navigate his way once on the street, a navigation system is required to help him get from his starting point to his final destination. Once George has his route planned in the navigation application on the smartphone, he can step into the street and start his journey.

Unfortunately for George, from time to time there will be obstacles on his way to the final destination. Since obstacles such as potholes, steps and trees cannot be seen, George needs a way of telling where these objects are so that he can avoid them as best as possible.

It is also imperative that George is able to determine where certain landmarks are located, as well as find pedestrian crossings and cross the road safely. Even if George takes the same route every day and gets used to the obstacles along that path, sooner or later he is bound to encounter a change in the environment he uses for his route, caused by elements such as road works and detours.

Solution:

Moore et al. (2001) designed Drishti, a wireless pedestrian navigation system that integrates a number of technologies to effectively route a blind professor through a university campus. The system uses a wearable computer, voice recognition and synthesis, GPS and GIS, combined with wireless networks, to present information relevant to the blind user depending on his or her geographical position.

Certain elements of this system can be adopted for our proposed solution. A wearable computer equipped with a microphone for voice recognition, as used by Moore et al. (2001), can allow George to communicate with the system and make routing requests on the go. Although George's route is pre-planned through the navigation application, he may need to make additional routing requests. Such requests could include "pedestrian crossing", to which the system may reply with the distance to the crossing as well as instructions to reach it.

The Global Positioning System (GPS) will be used to transmit information about George's location, which in turn gives access to information about traffic, road works, landmarks and traffic lights in his immediate area through a Geographic Information System (GIS), a system which captures and stores geographic data.


How it works:

The system will work in the following manner: George plans his route using the navigation application on his smartphone and, once on the road, the GPS tracker placed in a backpack sends tracking information about George's position. This is passed to the GIS, which is checked for whatever information is available about that particular location. The information is given to George through voice synthesis via the speakers in his headset.

While on the way to his destination, the GIS is checked constantly for known obstacles, such as road congestion, traffic and special events, as well as for information about pedestrian crossings and landmarks. These are held in the spatial database and are then relayed to George through the headset.

If George needs on-the-spot information at a particular place, he can speak into the microphone; through voice recognition computed on the wearable computer, a query is sent to the GIS and a reply is given back to George.

The wearable computer communicates with the spatial database held on the server through a 3G Internet connection, while a constant connection is maintained with the navigation application's map server to ensure that routes are computed in terms of the fewest obstacles rather than the shortest distance. This can be seen in figure x.
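The sketch below gives a rough idea of the client-side loop on the wearable computer: read a GPS fix, query the GIS server about that location, and speak whatever comes back. The server URL and the gps/tts helper objects are assumptions for illustration only.

```python
# Sketch of the client-side loop on the wearable computer: read a GPS fix,
# ask the GIS server what is known about that location, and speak the reply.
# The server URL and the gps/tts objects are illustrative placeholders.
import time
import requests  # assumes an HTTP client is available over the 3G link

GIS_SERVER = "http://example.com/gis/query"   # placeholder spatial-database server

def navigation_loop(gps, tts, interval=2.0):
    while True:
        lat, lon = gps.read_fix()                      # hypothetical GPS receiver call
        reply = requests.get(GIS_SERVER,
                             params={"lat": lat, "lon": lon},
                             timeout=5).json()
        for notice in reply.get("obstacles", []) + reply.get("landmarks", []):
            tts.speak(notice)                          # voice synthesis to the headset
        time.sleep(interval)                           # poll the spatial database again
```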

System Design:

Figure x: System Design showing the client and server side of the system.

As can be seen from figure x, the system is split into a client side and a server side. The client side contains the wearable computer, GPS and headset that will be in George's backpack, while the server side contains the database and the connection to the GIS and map server through the smartphone.


In order to implement such a system, the following hardware and software is needed:

Hardware:

Similar to Moore et al. (2001), off-the-shelf hardware will be used for this prototype to ensure that it is as cost effective as possible. The main hardware components needed are:

  • Wearable computer

  • GPS receivers

  • Headset

Together these components weigh approximately 2 lbs or less, according to Moore et al. (2001).

Software Components:

  • Spatial database to manage geographic information system data.

  • Speech Recognition software


Moore et al. (2001) state that although their system has been tested, GPS is not foolproof and has been shown to lose signal near trees and tall buildings. For this reason, an improvement would be to use the user's average speed and a compass to compensate for the data lost during signal loss.
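A rough sketch of this dead-reckoning fallback is shown below: the last GPS fix is projected forward using the user's average walking speed and the compass heading. The figures and the metres-to-degrees conversion are illustrative only.

```python
# Sketch of the suggested fallback: when the GPS signal drops (e.g. near trees
# or tall buildings), estimate the position from the last known fix using the
# user's average walking speed and the compass heading. Purely illustrative.
import math

def dead_reckon(last_lat, last_lon, heading_deg, avg_speed_mps, seconds_since_fix):
    """Project the last GPS fix forward along the compass heading."""
    distance = avg_speed_mps * seconds_since_fix          # metres walked since the fix
    d_north = distance * math.cos(math.radians(heading_deg))
    d_east = distance * math.sin(math.radians(heading_deg))
    # Rough metres-to-degrees conversion, adequate for short gaps in coverage
    new_lat = last_lat + d_north / 111_320.0
    new_lon = last_lon + d_east / (111_320.0 * math.cos(math.radians(last_lat)))
    return new_lat, new_lon

# Example: 40 seconds without GPS, walking at 1.2 m/s heading due east
# print(dead_reckon(35.8989, 14.5146, 90.0, 1.2, 40))
```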

Using the Computer - Updated

1.0 Introduction:

Blind people cannot make use of the computer unless certain assistive technologies are used. To overcome this obstacle, screen readers, braille displays and haptic mice could be used.


2.0 Technologies to be used:

2.1 Screen Reader

Screen readers will read all of the information currently on screen and the content typed by the user, thus making it easier to navigate through the system.

There are many different types of screen readers, each offering different features. A simple screen reader will only read the current word or line from the whole window, which makes things more difficult for the user, who must search for the relevant content on screen. More complex screen readers, on the other hand, provide the user with more information by reading the name of the application, the title bar, the window or the currently selected item.

To make the screen reader even more effective, hotkeys could be assigned that make it read specific information. For example, when the hotkey is pressed, the reader could read through the toolbar of the current application. By doing so, the user is given an overview of the different options available within the program (e.g. Bold, Italic, Underline, Align in Microsoft Word).
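A minimal sketch of how such hotkeys could be wired up is shown below; the key combinations and the reader/tts objects are illustrative, not taken from any particular screen reader.

```python
# Sketch of hotkey handling for the screen reader: each hotkey is mapped to a
# function that reads a specific part of the screen aloud. The key names and
# reader/tts objects are placeholders, not a specific screen-reader API.
def read_title_bar(reader, tts):
    tts.speak(reader.current_window_title())

def read_toolbar(reader, tts):
    # Give an overview of the options in the active application's toolbar,
    # e.g. Bold, Italic, Underline, Align in a word processor.
    for item in reader.toolbar_items():
        tts.speak(item)

HOTKEYS = {
    "ctrl+shift+t": read_title_bar,   # illustrative key choice
    "ctrl+shift+b": read_toolbar,     # illustrative key choice
}

def on_hotkey(key, reader, tts):
    action = HOTKEYS.get(key)
    if action:
        action(reader, tts)
```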


2.2 Tactile Mouse

The tactile mouse will look similar to a normal mouse found on any desktop computer, but will include two pads, each containing 16 pins arranged in a four-by-four array. This extra feature translates text displayed on screen into braille. A braille device lets blind people actually read the content being displayed on screen, and acts as an additional feature alongside the screen reader.

2.2.1 How does the tactile mouse work?

In traditional braille, numbers and letters are represented by raised bumps. The pins on the mouse take the role of these bumps. When the cursor is moved, the pins rise and fall to represent the text across which they are moving. Since the mouse has two pads, one represents the character or word beneath the cursor, while the other shows what is coming next, such as the end of the character or word.

The tactile mouse includes some features to make it more accessible. One is an "anchor" feature, which holds onto the line of text being read. When the user clicks the mouse button, the text scrolls along as he or she reads, making reading easier since the mouse does not have to be moved to continue scrolling.

When accessing web pages, one can come across maps, graphs and other figures. For example, when the mouse passes over the line of a graph, the pins rise. The number of pins that are triggered and raised conveys the thickness of the line. If the user moves away from the line, the pins fall back, letting him build up a picture of the graph.
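The sketch below illustrates one way the four-by-four pin pad could be driven from the pixels under the cursor, so that a thicker graph line raises more pins; the threshold and greyscale values are illustrative.

```python
# Sketch of driving the 4x4 pin pad from the pixels under the cursor: dark
# pixels (e.g. the line of a graph) raise the corresponding pins, so a thicker
# line raises more pins. How the pixels are captured is left out here.
def pins_from_region(pixels, threshold=128):
    """pixels: 4x4 grid of greyscale values (0=black, 255=white) under the cursor.
    Returns a 4x4 grid of booleans: True = pin raised."""
    return [[value < threshold for value in row] for row in pixels]

# Example: a thick horizontal graph line under the cursor raises two rows of pins
region = [
    [255, 255, 255, 255],
    [ 10,  12,   9,  11],   # dark line
    [ 15,  14,  13,  12],   # dark line
    [255, 255, 255, 255],
]
# pins_from_region(region) ->
# [[False]*4, [True]*4, [True]*4, [False]*4]
```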


2.3 Interface

The computer system will be split into a grid-like view so that it is much easier for the user to understand the position of objects when they are read by the screen reader.

The screen reader and braille display give an overview of the current screen from left to right, top to bottom.

For example, if the user is on the desktop the screen reader will start by reading "Recycle Bin". The user will therefore already know that there is only one shortcut on the desktop, and that it is at the top left, since it is the only one.

The system will use a tool such as PowerCursor to place a "Hole" effect around icons, so as to pinpoint exactly where the clickable areas are.

3.0 How will the system work altogether?

Firstly, the screen reader will always read what is displayed on screen to give an overview of what is presented. When the user is navigating around the interface and encounters an icon (for example, Internet Explorer), both mouse pads trigger all of their pins to rise to show that there is an object beneath the cursor. The screen reader reads the object and the pads then display the text using braille. The mouse pointer is automatically placed in the middle of the icon, so that if the user wants to access Internet Explorer he just needs to click.
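The sketch below ties these steps together for a single grid cell: announce the icon, raise the pins, spell the name in braille and snap the pointer to the icon's centre. All object and method names are illustrative, not an existing API.

```python
# Sketch of what happens when the cursor enters a grid cell that holds an icon.
# The cell, screen_reader, pads and mouse objects are placeholders.
def on_cell_entered(cell, screen_reader, pads, mouse):
    if cell.icon is None:
        return                                 # empty cell: nothing to announce
    screen_reader.speak(cell.icon.name)        # e.g. "Internet Explorer"
    for pad in pads:                           # both 4x4 pin pads signal "object here"
        pad.raise_all_pins()
    pads[0].show_braille(cell.icon.name)       # then spell the name out in braille
    mouse.move_to(cell.icon.centre)            # snap the pointer so one click opens it
```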


References:

http://www.evengrounds.com/blog/how-do-blind-people-use-the-computer

http://www.economist.com/node/14955359

http://www.powercursor.com/

Thursday, June 9, 2011

Head Tracking Software/Hardware

Technology behind head tracking:

Software such as FreeTrack will be needed to translate the movement of the head to the cursor.

In order to detect the following movements:

  • Rotate left-right,
  • Rotate up-down,
  • Tilt left-right,
  • Move left-right,
  • Move up-down,
  • Move forward-back,

FreeTrack will need at least three to four markers. These markers will use infrared light-emitting diodes (IR LEDs), as these can be better isolated from visible light using filters.
The markers will be fixed to a standard cap.
An OptiTrack V100 webcam was chosen as it produces a grayscale image with a high FPS output and is equipped with 30 IR LEDs.
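As a rough illustration of the "head movement to cursor" translation mentioned above, the sketch below maps the rotate left-right (yaw) and rotate up-down (pitch) values reported by the tracking software onto screen coordinates; the rotation ranges and screen size are assumptions for illustration.

```python
# Sketch of translating head rotation (as reported by tracking software such as
# FreeTrack) into a cursor position. Ranges and screen size are illustrative.
SCREEN_W, SCREEN_H = 1920, 1080
YAW_RANGE, PITCH_RANGE = 40.0, 30.0   # degrees of head rotation mapped to the full screen

def head_to_cursor(yaw_deg, pitch_deg):
    """Map rotate left-right (yaw) and rotate up-down (pitch) to screen x, y."""
    x = (yaw_deg / YAW_RANGE + 0.5) * SCREEN_W
    y = (0.5 - pitch_deg / PITCH_RANGE) * SCREEN_H
    # Clamp so the cursor stays on screen at extreme head angles
    return (min(max(int(x), 0), SCREEN_W - 1),
            min(max(int(y), 0), SCREEN_H - 1))

# head_to_cursor(0, 0)   -> centre of the screen
# head_to_cursor(20, 0)  -> right-hand edge
```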







Figure 1: Cap with 3 markers (left), Optitrack V100 Web Cam with IR (right)

The Collision Prevention Suit v1.0 - Ryan & Kevin


Collision Prevention Suit

Introduction

As the name suggests, the suit will be designed to prevent the wearer from bumping into objects such as lampposts, bins and cars while walking. It will operate via a series of IR Proximity Sensors and Vibrating Beads, connected to each other via a central CPU, as displayed in the figure below.


Figure x: Full body suit and headwear

Operation Mechanics

When one of the IR sensors (similar to those used in car parking sensors) senses an object close by, it sends a message to the CPU, directing it to power the associated vibrator (i.e. the left knee sensor will power the left knee vibrator, the left shoulder sensor will power the left shoulder vibrator, and so on).
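The sketch below illustrates this one-to-one routing between sensors and vibrators inside the CPU; the body locations listed and the trigger distance are illustrative.

```python
# Sketch of the CPU routing: each IR proximity sensor is paired with the
# vibrator at the same body location, and a close reading powers that vibrator.
TRIGGER_DISTANCE_CM = 10   # illustrative threshold

SENSOR_TO_VIBRATOR = {
    "left_knee": "left_knee",
    "right_knee": "right_knee",
    "left_shoulder": "left_shoulder",
    "right_shoulder": "right_shoulder",
    # ... one pair per sensed body location
}

def update_suit(sensor_readings_cm, vibrators):
    """sensor_readings_cm: dict of sensor name -> measured distance in cm."""
    for sensor, distance in sensor_readings_cm.items():
        vibrator = SENSOR_TO_VIBRATOR[sensor]
        # Power the matching vibrator only while an object is close to that body part
        vibrators[vibrator].set_on(distance < TRIGGER_DISTANCE_CM)
```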

This way, the wearer will not only know that there is an object close by, but also which side of his body, or which body part, risks bumping into that object.

Power

A suit like this requires a constant flow of electricity, which is where the rechargeable battery pack comes in. Once the day is at an end, the wearer can remove the suit and plug it into a power socket to recharge.

Benefits of Suit

The chief advantage of this suit is that it allows the user to walk freely around the streets without crashing into anything in the middle of his path. It can be used in conjunction with the navigation system to allow George to safely navigate the streets en route to his final destination.

Another advantage of this lightweight suit is that it may be worn underneath normal layers of clothing, and thus will help the wearer hide his condition from the public eye.

Suit Limitations

While the suit can help in a lot of ways, it is anything but perfect and has its own limitations, including its dependency on power. Once the battery runs out of charge, the sensors and vibrating buttons stop functioning, putting the wearer at risk of seriously injuring himself.

Also, while the suit will warn the wearer away from obstacles, it does not have its own built-in GPS, so the wearer will not be able to get from point A to point B without the additional navigation technology.

Communications Plan

Figure x: Map of the suit's hardware. IR proximity sensors on the left and vibrating buttons on the right.

Research

Proximity Sensors:

The blind and robots have a lot in common, in the sense that neither has fully functional eyes, and thus both rely on other senses. In the case of robots, their secondary sense is artificial vision, which comes in many forms, including proximity sensors.

Such sensors may also be adopted by the blind, who could combine them with other elements to help them know their surroundings.

A proximity sensor sends out a beam of IR light and waits for the reflection. Once the reflection is received, the data is sent to a CPU, which calculates the distance to the object.

When discussing IR sensor operation, the authors of the journal article Testing and Calibrating of IR Proximity Sensors state that when reading the distance between the sensor and an object, "the sensors return the measured distance in a form of [an] analogue signal", which is then transferred to a CPU for the calculations to be made. In our case, the onboard CPU could send an electric pulse to the vibrating elements once the distance is under 10 cm (for instance).
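The sketch below illustrates that chain of steps under stated assumptions: convert the sensor's analogue reading into a distance using a made-up calibration table, then pulse the matching vibrating element once the distance drops below the threshold.

```python
# Sketch: read the sensor's analogue output, convert it to a distance, and send
# a pulse to the vibrating element once the distance drops below the threshold.
# The voltage-to-distance table is invented for illustration; a real sensor
# would use its own calibration curve.
CALIBRATION = [(3.0, 5), (2.0, 10), (1.0, 25), (0.5, 60)]  # (volts, cm), hypothetical

def voltage_to_distance(volts):
    """Piecewise-linear lookup from the analogue reading to centimetres."""
    for (v_hi, d_hi), (v_lo, d_lo) in zip(CALIBRATION, CALIBRATION[1:]):
        if v_lo <= volts <= v_hi:
            frac = (volts - v_lo) / (v_hi - v_lo)
            return d_lo + frac * (d_hi - d_lo)
    return None  # out of the sensor's usable range

def check_sensor(volts, vibrator, threshold_cm=10):
    distance = voltage_to_distance(volts)
    if distance is not None and distance < threshold_cm:
        vibrator.pulse()   # hypothetical call: fire the matching vibrating element
```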


Figure x: Image obtained from the aforementioned journal, showing how Proximity sensors determine whether or not an obstacle is in the way, and read the distance between itself and the obstacle.

Vibrating Elements:

This technology has been used in a number of gadgets in recent times, including mobile phones and gaming devices. Recently, 3rd Space has come up with a similar version for its FPS Gaming Vest, utilizing air pockets instead of vibrating buttons.

Besides gaming, vibrating vests have been used in the US Army, and may possibly be used in discos by the deaf to experience music in a whole new way, as highlighted on pages 18 and 19 of TNO Magazine's September 2009 issue.

Such elements may easily be purchased from companies like Omega Piezo, who produce "a wide range of vibrating elements for use in applications including: alarms, speakers, atomizers, mist generators, and many others".