How does biped filter the relevant information?

Mael Fabien
August 15, 2022

As biped plays 3D sounds to warn you about the most important elements, the key question is how to filter the most relevant information. And to be honest, this is not straightforward, for several reasons:

  • every user is different and has a different threshold for what they consider an information overload
  • every user has different residual vision and wants different warnings
  • what matters usually depends on the situation, and every situation is different

Our approach

We started with a simplified approach. Is there an obstacle around you? If yes, you should definitely know about it. Then, biped uses AI to analyze the scene. Below is a typical example of what we can capture.

Figure: software demonstration with a person on the left, pointing at something, who is identified as an obstacle; a person in front of the user, walking straight; and a couple of people further away on the right.

In real time, we have access to the positions of obstacles, to the locations of people, vehicles, crosswalks and more, to their trajectories, and to the "walkable area" around the user, with an understanding of where holes, staircases or drop-offs are located.
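Conceptually, this scene state can be pictured as a small data structure. The sketch below is illustrative only; the class and field names are assumptions, not our actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    label: str                      # e.g. "person", "vehicle", "crosswalk"
    position: tuple[float, float]   # (x, z) in meters, relative to the user
    velocity: tuple[float, float]   # estimated (vx, vz) in m/s

@dataclass
class SceneState:
    objects: list[TrackedObject] = field(default_factory=list)
    # Grid of walkable cells around the user; holes, staircases
    # and drop-offs would be marked False.
    walkable: list[list[bool]] = field(default_factory=list)
```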

In other words, your biped device has:

  • a central processing unit
  • a set of sensors, such as cameras and depth cameras, that capture your surroundings
  • these sensors can perceive distance, movement and the type of approaching obstacle, complemented by extra sensors such as an IMU
  • these inputs are used to identify whether there is any risk of collision or a relevant obstacle to notify the end-user about
  • and finally, an audio interface to perceive the feedback; note that upon request, haptic feedback via external devices can be used instead (see the sketch below)
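To make the flow concrete, here is a minimal, runnable sketch of the capture-analyze-warn loop, with the sensors and the AI model replaced by stubs. Every function name and the processing rate are illustrative assumptions, not our actual interfaces.

```python
import random
import time

def capture_depth_frame():
    # Stub standing in for the cameras and depth cameras: a real device
    # would return an RGB-D frame of the surroundings.
    return [[random.random() for _ in range(4)] for _ in range(4)]

def detect_collision_risks(frame):
    # Stub standing in for the AI scene analysis on the processing unit.
    if random.random() < 0.3:
        return [{"label": "person", "distance_m": 1.8}]
    return []

def play_3d_sound(risk):
    # Stub standing in for the spatialized audio (or haptic) feedback.
    print(f"warning: {risk['label']} at {risk['distance_m']:.1f} m")

for _ in range(5):                  # a few frames of the loop, for illustration
    for risk in detect_collision_risks(capture_depth_frame()):
        play_3d_sound(risk)
    time.sleep(0.1)                 # assumed ~10 Hz processing rate
```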

What do we do with this? It’s potentially a lot of sounds to play. We asked end-users how to address this, and they confirmed that collision risk is what matters the most. The core value proposition of biped is to reduce your stress level when using the device, so each sound should only be played for a specific reason.

In real time, biped predicts each object’s trajectory. The trajectory gives an idea of whether there is a risk of collision or not. However, this alone is not sufficient: biped then ranks the most important objects by their predicted time-to-impact. If the predicted time-to-impact is short enough, we start warning the user about the potential obstacle.
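Here is a minimal sketch of that logic, assuming constant-velocity trajectories on a 2D ground plane. The collision radius and warning horizon are made-up tuning values, not our actual thresholds.

```python
import math
from dataclasses import dataclass

@dataclass
class TrackedObject:                  # same shape as in the earlier sketch
    label: str
    position: tuple[float, float]     # (x, z) in meters, relative to the user
    velocity: tuple[float, float]     # (vx, vz) in m/s, relative to the user

COLLISION_RADIUS_M = 0.5              # assumed: how close counts as a collision course
WARNING_HORIZON_S = 3.0               # assumed: warn only if impact is this near in time

def time_to_impact(obj: TrackedObject) -> float:
    """Seconds until the object's closest approach to the user (at the origin)."""
    px, pz = obj.position
    vx, vz = obj.velocity
    speed_sq = vx * vx + vz * vz
    if speed_sq < 1e-9:
        return math.inf               # relatively static object: no predicted impact
    t = -(px * vx + pz * vz) / speed_sq   # time of closest approach
    if t <= 0:
        return math.inf               # moving away from the user
    cx, cz = px + vx * t, pz + vz * t
    if math.hypot(cx, cz) > COLLISION_RADIUS_M:
        return math.inf               # trajectory misses the user
    return t

def rank_warnings(objects: list[TrackedObject]) -> list[TrackedObject]:
    """Most urgent objects first, keeping only those inside the warning horizon."""
    timed = [(time_to_impact(o), o) for o in objects]
    return [o for t, o in sorted(timed, key=lambda p: p[0]) if t < WARNING_HORIZON_S]
```

For example, a person 2 meters ahead walking straight at you at 1 m/s gets a time-to-impact of about 2 seconds and would be ranked ahead of anyone slower or on a diverging path.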

Finally, biped users can choose which objects they want warnings for. If you’re visually impaired but use a white cane, you might notice that people tend to avoid you; you may not consider them obstacles, and hence may not want to enable sounds for them. All of these settings can be switched ON and OFF directly. We prefer to leave this flexibility to end-users in our first versions, and over time, try to automatically learn the right set of settings. But that’s for the next updates… 🙂
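As a sketch of how such per-category toggles might look (the category names and defaults here are invented for illustration, not our real settings schema):

```python
# Hypothetical per-category warning toggles.
warning_settings = {
    "person": False,    # e.g. a white-cane user who doesn't want person warnings
    "vehicle": True,
    "drop_off": True,
}

def filter_by_settings(objects, settings=warning_settings):
    """Keep only object categories the user has switched ON."""
    return [o for o in objects if settings.get(o.label, True)]
```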
