Why the future of assistive tech is camera-based
For decades, companies have used ultrasonic or laser sensors. These sensors have been mounted on wristbands, necklaces, glasses, white canes, shoes… Innovation cannot come from the way one wears these sensors; the limits of this approach were reached long ago. Whether you group a couple of sensors together, refine the feedback, or anything else, nothing will let you build a good understanding of your surroundings.
Any device that embeds those sensors will only ever be able to tell you that there is something in front of you. Imagine the following simple scenario: you are walking on a sidewalk, with a building on your left, a lane of parked cars on your right, and someone a couple of meters ahead of you walking in the same direction. Any device that uses ultrasound or lasers would vibrate on the left, vibrate in front and vibrate on the right: three identical signals, with no way to tell a wall from a car from a person.
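To make the contrast concrete, here is a purely illustrative sketch with made-up readings, showing what each kind of sensor could report about that sidewalk scene. The fields and values are hypothetical, not biped's data format.

```python
# Purely illustrative: hypothetical readings for the sidewalk scenario above.

# A distance-only sensor (ultrasound, single-beam laser) can at best report
# "something at this range, in this direction" -- every obstacle looks the same.
range_only_readings = [
    {"direction": "left", "distance_m": 1.2},    # actually the building
    {"direction": "front", "distance_m": 2.5},   # actually the pedestrian
    {"direction": "right", "distance_m": 1.0},   # actually a parked car
]

# A camera feeding an object detector can attach semantics to each obstacle,
# which is what lets a device decide what actually matters.
camera_detections = [
    {"label": "building", "direction": "left", "distance_m": 1.2, "moving": False},
    {"label": "person", "direction": "front", "distance_m": 2.5, "moving": True},
    {"label": "parked car", "direction": "right", "distance_m": 1.0, "moving": False},
]

def describe(readings):
    """Print what a device could tell the user from these readings."""
    for r in readings:
        label = r.get("label", "obstacle")
        print(f"{r['direction']}: {label}, about {r['distance_m']} m away")

describe(range_only_readings)   # three indistinguishable "obstacle" alerts
describe(camera_detections)     # building / person / parked car
```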
The promise of Lidar
You might have heard of Lidar sensors. They are essentially 3D laser scanners, and they are already a big step forward: with such sensors, you can estimate the shapes and volumes of what surrounds you.
Some autonomous driving companies use Lidars to navigate their vehicles, simply because they estimate distances reliably. Lidar is, however, expensive, and units that cover a wide field of view remain quite bulky. Still, Lidars hold more promise than the ultrasonic and laser sensors discussed above.
However, Lidars are gradually being dropped by the autonomous driving industry. The main reason is that, using AI, simple cameras can now approximate Lidar results, and that the true understanding of a scene lies in the image a camera captures, not merely in the distances of things around the user. Lidars also have moving parts and require heavier maintenance.
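As a rough illustration of how a plain camera can approximate what a Lidar measures, here is a minimal sketch using MiDaS, a publicly available monocular depth estimation model on PyTorch Hub. This is a generic example of the technique, not biped's pipeline, and the image path is a placeholder.

```python
# Minimal monocular depth estimation sketch using the public MiDaS model.
# Requires: torch, opencv-python; the model is downloaded on first run.
import cv2
import torch

# Load the small MiDaS model and its matching input transform from PyTorch Hub.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# "street.jpg" is a placeholder path for any RGB image of the scene.
img = cv2.cvtColor(cv2.imread("street.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    prediction = midas(transform(img))
    # Resize the prediction back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# "depth" is a relative (not metric) depth map: larger values mean closer objects.
print(depth.shape, depth.min().item(), depth.max().item())
```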
Cameras are the future
Intuitively, cameras capture the world as we see it, with colors, details and contrasts. Tesla cars run their self-driving features on cameras only: they don't require any other sensor, as AI learns to identify what matters around the vehicle and which action to take. Walking is an activity that does not require AI to run extremely fast. The worst case scenario is usually handling a pedestrian walking fast (around 7 kilometers per hour) or a car driving at 50 kilometers per hour. On the other hand, the diversity of objects you can encounter as a pedestrian is far greater than what a car has to deal with.
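To put rough numbers on that, here is a small back-of-the-envelope sketch of how far those worst-case actors move while a frame is being processed. The 200 ms latency figure is an assumption for illustration, not a biped specification.

```python
# Back-of-the-envelope: how far do the worst-case actors move while the
# system processes one frame? The latency value is an assumed example.
def metres_travelled(speed_kmh: float, latency_s: float) -> float:
    """Distance covered at a given speed during a given processing latency."""
    return speed_kmh / 3.6 * latency_s

LATENCY_S = 0.2  # assumed end-to-end processing latency (200 ms)

for actor, speed_kmh in [("fast pedestrian", 7.0), ("car in town", 50.0)]:
    print(f"{actor} at {speed_kmh} km/h moves "
          f"{metres_travelled(speed_kmh, LATENCY_S):.2f} m in {LATENCY_S * 1000:.0f} ms")
# fast pedestrian at 7.0 km/h moves 0.39 m in 200 ms
# car in town at 50.0 km/h moves 2.78 m in 200 ms
```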
Self-driving cars use the car's headlights to get a clear picture even at night. This was not an option on biped, as we don't want the harness to produce light. We therefore embedded infrared sensors specialized for night vision.
On top of that, cameras offer the flexibility of bringing all your favorite smartphone features to a hands-free device like biped. In future updates, biped will be able to detect text, read QR codes or even recognize faces.
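As an illustration of the kind of feature a camera makes possible, here is a minimal sketch of QR code reading with OpenCV's built-in detector. It is a generic example, not biped's implementation, and "frame.jpg" is a placeholder for a camera frame.

```python
# Minimal QR code reading sketch with OpenCV's built-in detector.
import cv2

img = cv2.imread("frame.jpg")  # placeholder path for a camera frame
detector = cv2.QRCodeDetector()

# detectAndDecode returns the decoded text (empty string if none),
# the corner points of the code, and the rectified QR code image.
text, points, _ = detector.detectAndDecode(img)

if text:
    print(f"QR code found: {text}")
else:
    print("No QR code detected in this frame")
```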
Given how much focus is now being put on AI for cameras, and the massive shift of self-driving cars toward camera-only systems, we at biped deeply believe that the future of assistive technologies, from smart wheelchairs to navigation devices like ours, will be camera-based.