Craig Doty II, a Tesla owner, narrowly avoided a collision after his vehicle, in Full Self-Driving (FSD) mode, allegedly steered towards an oncoming train.

Nighttime dashcam footage from earlier this month in Ohio captured the harrowing scene: Doty’s Tesla barreling towards a train crossing with no apparent deceleration. He insisted the car was in Full Self-Driving mode at the time.

  • Imalostmerchant@lemmy.world · 7 months ago

    I never understood Musk’s reasoning for this decision. From my recollection it was basically “how do you decide who’s right when lidar and camera disagree?” And it felt so insane to say that the solution to conflicting data was not to figure out which is right but only to listen to one.
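    [Editor’s note: the standard answer to “how do you decide who’s right when lidar and camera disagree?” is not to pick one sensor but to fuse them, weighting each reading by its confidence. A minimal sketch of inverse-variance fusion, with hypothetical numbers (this is an illustration of the general technique, not any carmaker’s actual stack):]

```python
def fuse_readings(readings):
    """Fuse (value, variance) pairs into a single estimate.

    Each sensor contributes in proportion to 1/variance, so a noisy
    camera estimate pulls the result less than a precise lidar one.
    The fused variance is smaller than either input's.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total

# Hypothetical example: lidar measures the obstacle at 40 m with low
# variance; the camera pipeline estimates 48 m with high variance.
fused, var = fuse_readings([(40.0, 0.5), (48.0, 4.0)])
# The fused distance lands close to the lidar reading (~40.9 m),
# because the lidar measurement carries far more weight.
```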

    • wirehead@lemmy.world · 7 months ago

      I mean, I think he’s a textbook example of why not to do drugs and why we need to eat the rich, but I can understand the logic here.

      When you navigate a car as a human, you are using vision, not LIDAR. Outside of a few edge cases, you aren’t even using parallax to judge distances. And a LIDAR is not going to see the text on a sign, the reflective stripes on a truck, etc. It also gets confused differently than the eye does: it is absorbed differently by different materials and wavelengths, and it can be deliberately jammed. Thus, if we were content to wait until the self-driving car is actually safe before throwing it out into the world, we’d probably want the standard to be that it navigates as well as a human in all situations using only visual sensors.

      Except there are some huge problems that the human visual cortex makes look easy. Because “all situations” means things like “understanding from visual cues that there’s a kid playing in the street, so I’m going to assume they may do something dumb” or “some guy put a warning sign on the road and it’s got really bad handwriting.”

      Thus, the real problem is that he’s not using LIDAR as harm reduction for a patently unsafe product, even though the various failure modes of LIDAR-equipped self-driving cars show that those aren’t entirely safe either.

    • Jakeroxs@sh.itjust.works · 7 months ago

      Also, LIDAR is more expensive than cameras, which means a higher end-user price, as far as I remember.

    • smonkeysnilas@feddit.de · 7 months ago

      I mean, the decision was stupid from an engineering point of view, but the reasoning is not entirely off. Basically it follows the biological example: if humans can drive without lidar, using only their eyes, then that is proof it is possible somehow. It’s just that current computer vision and AI tech is way worse than humans. Elon chose to ignore this, basically arguing that it is merely a software problem for his developers to figure out. I guess in reality it is a bit more complex.