The U.S. government’s road safety agency is again investigating Tesla’s “Full Self-Driving” system, this time after getting reports of crashes in low-visibility conditions, including one that killed a pedestrian.

The National Highway Traffic Safety Administration says in documents that it opened the probe on Thursday after the company reported four crashes in which Teslas entered areas of low visibility, including sun glare, fog and airborne dust.

In addition to the pedestrian’s death, another crash involved an injury, the agency said.

Investigators will look into the ability of “Full Self-Driving” to “detect and respond appropriately to reduced roadway visibility conditions, and if so, the contributing circumstances for these crashes.”

  • tekato@lemmy.world · 1 month ago

    This is why you can’t have an AI make decisions on activities that could kill someone. AI models can’t say “I don’t know”: every input is forced to be classified as something they’ve seen before, effectively hallucinating when the input is unknown.
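
    A minimal sketch of that behavior with a toy softmax classifier (the labels here are hypothetical, not anything from a real driving stack): argmax always returns one of the known classes, so there is no built-in “I don’t know” output.

    ```python
    # Toy classifier: a fixed label set plus argmax means every input
    # gets mapped to *some* known class, even when the logits say "no idea".
    import numpy as np

    def softmax(logits):
        exps = np.exp(logits - np.max(logits))
        return exps / exps.sum()

    labels = ["pedestrian", "vehicle", "clear_road"]

    # Near-uniform logits: the model has essentially no preference,
    # yet argmax still picks a label.
    unsure_logits = np.array([0.31, 0.30, 0.29])
    probs = softmax(unsure_logits)
    print(labels[int(np.argmax(probs))], probs.round(3))  # "pedestrian" at ~0.34
    ```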

    • pycorax@lemmy.world · 1 month ago

      I’m not very well versed in this, but isn’t there a confidence value that some of these models are able to output?

      • FatCrab@lemmy.one · 1 month ago

        All probabilistic models output a confidence value, and it’s very common and basic practice to gate downstream processes around that value. This person just doesn’t know what they’re talking about. Though, that puts them on about the same footing as Elono when it comes to AI/ML.
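
        A minimal sketch of that gating pattern, assuming a toy softmax classifier (the labels, threshold, and deferral behavior are illustrative assumptions, not anything from Tesla’s actual stack):

        ```python
        # Gate a downstream decision on the classifier's best confidence score.
        import numpy as np

        CONFIDENCE_THRESHOLD = 0.9  # arbitrary illustrative value

        def softmax(logits):
            exps = np.exp(logits - np.max(logits))
            return exps / exps.sum()

        def classify_or_defer(logits, labels):
            probs = softmax(logits)
            best = int(np.argmax(probs))
            if probs[best] < CONFIDENCE_THRESHOLD:
                return None  # defer to a fallback behavior instead of acting on the label
            return labels[best]

        labels = ["pedestrian", "vehicle", "clear_road"]
        print(classify_or_defer(np.array([2.1, 1.9, 2.0]), labels))  # low margin -> None
        print(classify_or_defer(np.array([8.0, 0.5, 0.1]), labels))  # confident -> "pedestrian"
        ```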

        • tekato@lemmy.world · 1 month ago

          Right, which is why that marvelous confidence value got somebody run over.

          • FatCrab@lemmy.one · 1 month ago

            Are you under the impression that I think Tesla’s approach to AI and computer vision is anything but fucking dumb? The person said a stupid and patently incorrect thing. I corrected them. Confidence values being literally baked into how most ML architectures work is unrelated to intentionally depriving your system of one of the most robust computer vision signals we can come up with right now.

            • tekato@lemmy.world · 1 month ago

              Yes, but confidence values are not magic. These values are calculated based on how similar the current input is to previously observed inputs. If the type of input is unfamiliar to the model, what do you think happens? Usually there will be a category with a high enough confidence score that it gets chosen as the correct one, even though it’s wrong. Now, suppose you somehow manage not to get a favorable confidence score for any decision. What do you think happens in that case? I’ve never encountered this, but there can only be 3 possible paths: 1) Choose a random value. Not good. 2) Do nothing. Not good. 3) Rerun the model with slightly newer data? Maybe that helps, but in the case of driving a car, slightly newer data might be too late.
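
              A minimal sketch of that first failure mode, again with a toy softmax classifier and hypothetical labels: an out-of-distribution input still gets funneled into one of the known classes, and can clear a high threshold on the wrong one.

              ```python
              # Out-of-distribution input: the logits are meaningless, but softmax
              # still normalizes them into a tidy distribution with a dominant class.
              import numpy as np

              def softmax(logits):
                  exps = np.exp(logits - np.max(logits))
                  return exps / exps.sum()

              labels = ["pedestrian", "vehicle", "clear_road"]

              # Hypothetical logits for an input unlike anything in training
              # (say, heavy sun glare washing out the camera).
              ood_logits = np.array([0.3, 1.0, 4.2])
              probs = softmax(ood_logits)
              print(labels[int(np.argmax(probs))], round(float(probs.max()), 2))
              # -> "clear_road" at ~0.94, confidently wrong about an unknown scene
              ```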

              • FatCrab@lemmy.one · 1 month ago

                There’s plenty you could do if no label was produced with a sufficiently high confidence. These are continuous systems, so the idea of “rerunning” the model isn’t that crazy, but you could pair that with an automatic decrease in speed to generate more frames, stop the whole vehicle (safely of course), divert the path, and I’m sure plenty more that an actual domain and subject-matter expert, or a whole team of them, might come up with.

                But while we’re on the topic, it’s not really right to even label these as “confidence” values; they’re just output weights associated with the respective labels. We’ve sort of decided they vaguely approximate confidence, but they aren’t based on a ground truth like I’m understanding your comment to imply; they derive entirely from the trained model weights and how they interact. Don’t really have anywhere to go with that thought beyond the observation itself.
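
                A minimal sketch of one such fallback policy (the threshold, frame budget, and action names are all hypothetical, not from any real driving stack): reduce speed to buy more frames while uncertainty persists, and stop safely if it never clears.

                ```python
                # Fallback policy keyed off the best-label confidence for each frame.
                CONFIDENCE_THRESHOLD = 0.9
                MAX_LOW_CONFIDENCE_FRAMES = 5

                def plan_actions(per_frame_confidences):
                    """Yield one action per frame, given the best-label confidence per frame."""
                    low_streak = 0
                    for conf in per_frame_confidences:
                        if conf >= CONFIDENCE_THRESHOLD:
                            low_streak = 0
                            yield "proceed"
                        else:
                            low_streak += 1
                            if low_streak >= MAX_LOW_CONFIDENCE_FRAMES:
                                yield "stop_safely"   # persistent uncertainty: halt rather than guess
                            else:
                                yield "reduce_speed"  # buy time and more frames before committing

                print(list(plan_actions([0.95, 0.6, 0.7, 0.5, 0.55, 0.4, 0.92])))
                # ['proceed', 'reduce_speed', 'reduce_speed', 'reduce_speed',
                #  'reduce_speed', 'stop_safely', 'proceed']
                ```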