Tech products for the visually impaired are continually improving. These wearables use haptic feedback, OLED headsets and deep learning to help users navigate the world.
According to the World Health Organization, 253 million people live with visual impairment globally. And while it affects all segments of the population, it disproportionately affects older people: 81 per cent of people who are blind or have moderate to severe visual impairment are aged 50 and over. That number, estimates the WHO, could triple by 2050.
Encouragingly, though, the number of people affected by vision impairment has actually decreased in the last 25 years. And assistive technologies are continually improving, led by large companies – Microsoft, for example, has been developing 3D soundscape technologies for years – as well as small startups and even braille watchmakers. As a result, a range of products has emerged that use cutting-edge tech, prioritize intuitive functionality and consider social context. Here are six options that provide independence, safety and discretion.
From a distance, industrial designer Emilios Farrington-Arnas’ Maptic looks like a fitness tracker. That’s intentional: he didn’t want his wearable to look – or feel – like a medical device. Consisting of a visual sensor, worn around the neck, and vibrating feedback units worn around the wrists, the Maptic system discreetly helps visually impaired users navigate the world around them.
Maptic accounts for uneven surfaces, obstacles and the fact that phones need to be hand-held – an obstacle for someone who, for example, may need to carry a cane. Its chest-level sensor detects objects in the visual field and sends information to an app, which relays it to the units worn around the wrist or on clothing. Through haptic feedback (the same motor that makes a phone vibrate), users are guided through a physical environment. “Utilizing the phone’s GPS, the app can navigate the user to chosen destinations via vibrations to the left and right sides of the body when the user needs to turn,” says Farrington-Arnas, who says the feedback feels like ticking, or sonar.
The system’s tactile navigation also frees up another sense: hearing. “Using the sense of touch frees up hearing for detecting immediate dangers, which is the dominant sense when visually impaired,” says Farrington-Arnas. Those intrigued by Maptic should also explore Sunu, a haptic-feedback wristband that uses sonar to guide users through obstacles.
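Farrington-Arnas hasn’t published Maptic’s code, but the turn-cue logic he describes – vibrating the left or right side of the body depending on which way the user needs to turn – can be sketched in a few lines of Python. The function names and the 20-degree threshold below are illustrative assumptions, not details of the actual product.

```python
def bearing_difference(current_heading, target_bearing):
    """Signed difference in degrees between where the user is facing
    and where the route points, normalized to the range (-180, 180]."""
    return (target_bearing - current_heading + 180) % 360 - 180

def turn_cue(current_heading, target_bearing, threshold=20):
    """Decide which wrist unit should vibrate, mimicking the
    left/right haptic guidance Maptic's app is described as giving.
    Returns 'left', 'right', or None when the user is on course.
    The threshold keeps the units quiet during minor heading drift."""
    diff = bearing_difference(current_heading, target_bearing)
    if diff > threshold:
        return "right"   # destination lies to the user's right
    if diff < -threshold:
        return "left"    # destination lies to the user's left
    return None
```

In a real system this decision would run continuously against live GPS and compass readings, with each result mapped to a vibration pattern rather than a string.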
Developed by TeamTactile, a five-person student startup at MIT, Tactile aims to make printed text more accessible to the blind. Less than one per cent of printed material has braille translation – which can make simple tasks, like reading mail, inaccessible. So TeamTactile developed a device that glides over printed pages, scans the text, and translates it into braille, line by line.
It also works with an app that, after scanning digital text, either syncs with a user’s Tactile device or provides a voice-over transcription. While it isn’t available to the public yet, the device has been prototyped and its patent is pending.
Apple has Siri. Google has Google Assistant. Amazon has Alexa. And Eyra, an AI-focused startup, has Horus, a virtual assistant aimed specifically at helping the visually impaired. Pairing a pocket unit with a headset – which resembles a pair of sports headphones – Horus uses real-time audio to map out the world for its users.
Horus’ headset is outfitted with a camera, which the company says can learn to read text, recognize faces and identify objects using deep learning. Information captured by the camera is sent to the pocket unit, which is outfitted with a tiny, but powerful, processor. The unit then relays audio descriptions without blocking the user’s ears, via bone-conduction technology, which works even in noisy environments.
So, how does Horus learn? As users approach an object or person, Horus sends a prompt. It will automatically recognize anything in its field of vision, which is cross-referenced with an internal database. If a face or object isn’t recognized, it will encourage the user to identify it; the more Horus is used, the smarter it gets. Currently, the tech is only available in English, Italian and Japanese – and there’s a waiting list to obtain one.
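Eyra hasn’t disclosed how Horus is implemented, but the recognize-or-ask loop described above can be sketched abstractly. In this hypothetical Python version, the “embedding” is a stand-in for whatever feature vector the real deep-learning model would produce; the class name, threshold and prompt mechanism are all assumptions for illustration.

```python
class FaceMemory:
    """Minimal sketch of a learn-as-you-go loop: recognize a face
    from an internal database, or ask the user to label it and
    remember the answer for next time."""

    def __init__(self, match_threshold=0.9):
        self.known = {}  # name -> stored feature vector
        self.match_threshold = match_threshold

    def _similarity(self, a, b):
        # Cosine similarity between two feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def identify(self, embedding, ask_user):
        """Return a known name, or prompt the user and learn it."""
        for name, stored in self.known.items():
            if self._similarity(embedding, stored) >= self.match_threshold:
                return name
        # Unrecognized: prompt the user (e.g. a spoken "Who is this?")
        name = ask_user()
        self.known[name] = embedding
        return name
```

Each prompt answered grows the database, which is the sense in which a device like this “gets smarter” with use.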
OrCam is another startup using artificial intelligence to provide solutions for the visually impaired. The company, recently valued at $1 billion, has developed MyEye 2.0, a finger-sized camera that attaches to a pair of glasses – and provides the user with what OrCam calls “robotic vision.” The attachment – which, in previous iterations, was wired – features a 12-megapixel camera and a miniature speaker, which reads text, faces, banknotes and products, then relays the information to its user.
MyEye 2.0 is triggered by gestures. Users point to objects, and the wearable responds with detailed information. It can identify objects and colours, features a barcode-based database of products, and can store upwards of 100 faces. MyEye 2.0 can also read text of all sizes and fonts, whether on a street sign or a phone screen. Intuitive, portable and lightweight, MyEye is currently available for $3,500.
Self-driving cars are set to be a real game-changer for visually impaired users. And while they’re likely to be cost-prohibitive once they do hit the market, adding autonomy to vehicles will also improve public transportation. Though it’s often the best option when low vision prohibits driving, public transit is not always as accessible as it should be – and that’s the problem Olli, a self-driving shuttle designed by Local Motors, aims to solve. Though not exclusively designed for those with impaired vision, Olli, which debuted at CES earlier this year, uses artificial intelligence – developed with IBM Watson technology – to provide transportation for users with a variety of needs.
For blind or low vision passengers, audio cues and haptic sensors will help them find open seats. The outside of the bus is outfitted with cameras and sensors, which automatically activate ramps to help wheelchair-using passengers onto the bus. An app called Modally, which uses augmented reality to communicate in sign language, helps deaf riders. Rides can also be booked through Olli’s app.
Travelling at roughly 55 miles per hour and carrying up to ten passengers, Olli also generates 3D maps of neighbourhoods using cameras from Meridian Autonomous; it’s monitored externally by a human controller. It’s slated for production and sales later this year.
In February, Toronto-based eSight was named a breakthrough technology at the Canadian Innovation Awards. But the most compelling part about this wearable isn’t the accolades – and ample press coverage – it has received. Rather, it’s the moving stories it creates.
Resembling a pair of glasses, eSight says it gives sight to those who are legally blind by, in part, acting like a VR headset. Its latest iteration, the eSight 3, is far less cumbersome than previous versions. It uses digital cameras and image-processing algorithms to display images on two OLED screens that rest close to the eyes. The designers say it can produce images similar to 20/20 vision in real time – while auto-focusing on short-, mid- and long-range objects.
Like many technological advances, eSight is about consolidation; it removes the need for magnifying devices, assistance animals and more. The results, which eSight simply calls Moments: a blind man who gets to see his wife at his wedding. A boy seeing his mother’s face for the first time. Sight given as a Christmas gift. And there’s much, much more.