nifty, @nifty@lemmy.world

For now, cars need more than computer vision to navigate: cameras on their own don't let a car orient itself spatially in its environment. What might help? I think the consensus is that the cameras need to cover a full 360-degree view of the surroundings, and the car needs a way to make sense of those inputs without that understanding having to be the focus of attention.
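
As a rough illustration of the 360-degree point, here's a toy sketch that checks whether a set of cameras, each with a heading and horizontal field of view, covers the full circle around the car. The headings and FOV numbers are invented for the example, not any real vehicle's spec:

```python
# Hypothetical sketch: check whether a set of cameras covers a full 360 deg
# horizontal view. Headings and fields of view (degrees) are made-up numbers.

def covered_intervals(cameras):
    """Return (start, end) azimuth intervals in degrees, normalized to [0, 360)."""
    intervals = []
    for heading, fov in cameras:
        start = (heading - fov / 2) % 360
        end = (heading + fov / 2) % 360
        if start <= end:
            intervals.append((start, end))
        else:  # interval wraps past 0 deg; split it in two
            intervals.append((start, 360.0))
            intervals.append((0.0, end))
    return sorted(intervals)

def covers_full_circle(cameras):
    """True if the merged intervals span 0..360 deg with no gaps."""
    reach = 0.0
    for start, end in covered_intervals(cameras):
        if start > reach:  # gap in coverage
            return False
        reach = max(reach, end)
    return reach >= 360.0

# Illustrative layout: front wide, two pillar cams, two repeaters, rear cam
cameras = [(0, 120), (60, 90), (-60, 100), (150, 100), (-150, 80), (180, 120)]
print(covers_full_circle(cameras))  # True for these made-up values
```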

It seems Teslas do place cameras in the right locations to get that coverage, but there's some disconnect in reconciling the information: https://www.notateslaapp.com/news/1452/tesla-guide-number-of-cameras-their-locations-uses-and-how-to-view. A multi-modal sensing system would avoid relying on getting everything right via CV alone; see the sketch below.
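
To make the multi-modal idea concrete, here's a minimal fusion sketch: combine a noisy camera-derived depth estimate with a noisy radar range reading by inverse-variance weighting (the static case of a Kalman filter update). The sensor noise figures are assumptions for illustration, not anyone's actual pipeline:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent range estimates (meters)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Vision depth tends to be less certain at range; radar range is tighter (illustrative values).
camera_depth, camera_var = 42.0, 9.0   # meters, variance in m^2
radar_range, radar_var = 40.5, 0.25

fused, var = fuse([camera_depth, radar_range], [camera_var, radar_var])
print(f"fused range: {fused:.2f} m (std {var ** 0.5:.2f} m)")
```

The fused estimate sits close to the lower-variance sensor, which is the point: a second modality papers over the cases where the camera estimate alone is shaky.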

Think of focusing on an object in the distance and moving toward it: while your eyes are fixed on it, you're subconsciously computing relative distance and speed as you approach. It's your subconscious sense of your 3D spatial orientation that lets you make corrections and adjustments to your speed and approach. Short of better hardware that can reconcile these different inputs, drawing on different sensor modalities would be the most robust approach for autonomous vehicles.
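
That "track distance and closing speed, then adjust" loop can be sketched with something as simple as an alpha-beta filter over noisy range readings. All numbers and thresholds here are invented for illustration:

```python
def alpha_beta_track(ranges, dt=0.1, alpha=0.5, beta=0.1):
    """Estimate distance (m) and closing speed (m/s) from successive range readings."""
    dist, speed = ranges[0], 0.0
    estimates = []
    for z in ranges[1:]:
        dist_pred = dist + speed * dt        # predict forward one step
        residual = z - dist_pred             # compare against the new measurement
        dist = dist_pred + alpha * residual  # correct position estimate
        speed = speed + (beta / dt) * residual  # correct speed estimate
        estimates.append((dist, speed))
    return estimates

# Approaching an object at roughly -2 m/s with noisy readings (made-up data).
readings = [20.0, 19.82, 19.63, 19.38, 19.21, 18.97, 18.81, 18.62]
for dist, speed in alpha_beta_track(readings):
    ttc = dist / max(-speed, 1e-6)           # crude time-to-contact
    action = "brake" if ttc < 3.0 else "hold"
    print(f"dist={dist:5.2f} m  closing speed={speed:+5.2f} m/s  -> {action}")
```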

Humans essentially keep track of their bodies in 3D space and time without thinking about it, and in fact most multicellular organisms have learned to do this in some form.
