A new adaptive mobile technology is to be developed that could help visually impaired people "see" through their smartphone or tablet. A Google Faculty Research Award is funding a University of Lincoln project that aims to build a smart vision system into mobile devices to help people with sight problems navigate unfamiliar indoor environments.
Building on preliminary work on assistive technologies carried out by the Lincoln Centre for Autonomous Systems, the team of computer vision and machine learning specialists at the University of Lincoln intends to use the colour and depth sensors inside new smartphones and tablets to provide 3D mapping and localisation, navigation and object recognition for the visually impaired.
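The article does not describe the implementation, but the core idea behind using a depth sensor for 3D mapping can be sketched as back-projecting each depth pixel into a 3D point with the standard pinhole camera model. The function and the camera parameters below are illustrative placeholders, not details of the Lincoln system or any specific device:

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into camera-frame 3D points.

    Pinhole camera model: a pixel (u, v) with depth z maps to
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Illustrative example: a flat surface 2 m away, seen as a 2x2 depth image.
depth = [[2.0, 2.0], [2.0, 2.0]]
cloud = depth_to_points(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(len(cloud))  # 4 points
```

Accumulating such point clouds over time, together with estimates of how the camera has moved, is what allows a map of an indoor space to be built and the user located within it.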
The team will also develop the best interface for relaying that information to the user, whether through vibrations, sounds or spoken words. Dr Nicola Bellotto, an expert on machine perception and human-centred robotics in Lincoln's School of Computer Science, is leading the project. He said the team will build on its previous research to create an interface that visually impaired users can operate efficiently.
Bellotto said, “There are many visual aids already available, from guide dogs to cameras and wearable sensors. Typical problems with the latter are usability and acceptability. If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment which could make them feel self-conscious.”
He said, “There are also existing smartphone apps that are able to, for example, recognise an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited. We aim to create a system with ‘human-in-the-loop’ that provides good localisation relevant to visually impaired users and, most importantly, that understands how people observe and recognise particular features of their environment.”
The research team, which aims to develop a system that will recognise visual cues in the environment, includes Dr Oscar Martinez Mozos, a specialist in machine learning and quality-of-life technologies, and Dr Grzegorz Cielniak, who works in mobile robotics and machine perception.
As the user moves around a space, the device's camera would detect the type of room they are in. An important aspect of the system will be its capacity to adapt to individual users' experiences, modifying the guidance it provides as the machine 'learns' from its surroundings and from human interaction. So the more accustomed the user becomes to the technology, the quicker and easier it will be for the system to identify the environment.
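The system's internals are not described in the article, but as a loose illustration only, the adaptive behaviour described above resembles a classifier that re-weights visual cues each time the user confirms or corrects its guess. The class, its methods and the object-based feature representation here are entirely hypothetical:

```python
from collections import defaultdict

class AdaptiveRoomRecogniser:
    """Toy sketch: guesses a room type from detected objects and
    learns from user confirmations, so guesses improve with use."""

    def __init__(self):
        # counts[room][obj] = times 'obj' was confirmed as seen in 'room'
        self.counts = defaultdict(lambda: defaultdict(int))

    def guess(self, objects):
        """Return the room whose confirmed objects best match the scene."""
        scores = {
            room: sum(obj_counts.get(o, 0) for o in objects)
            for room, obj_counts in self.counts.items()
        }
        return max(scores, key=scores.get) if scores else None

    def feedback(self, objects, actual_room):
        """The user confirms (or corrects) the room; update the counts."""
        for o in objects:
            self.counts[actual_room][o] += 1

rec = AdaptiveRoomRecogniser()
rec.feedback(["kettle", "sink"], "kitchen")
rec.feedback(["bed", "wardrobe"], "bedroom")
print(rec.guess(["sink", "kettle"]))  # kitchen
```

Every round of feedback strengthens the association between objects and rooms, which mirrors the article's point that recognition becomes quicker and easier the more the user works with the system.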