• Ocelot@lemmies.world · +5/−25 · 1 year ago (edited)

    Here’s the specific timestamp of the incident you mentioned, in case you want to actually see it: https://youtu.be/aqsiWCLJ1ms?t=1190 The car wanted to move through the intersection on a green left-turn arrow; I’ve seen plenty of human drivers do the same. In any case, it’s fixed now and was never part of any public release.

    The video didn’t end there; the incident was near the middle. What you’re referring to is a regression, specific to the HW3 Model S, that caused it to fail to recognize one of the red lights. Now I’m sure that sounds like a huge deal, but here’s the thing…

    This was a demo of a very early alpha of FSD 12 (the current public release is 11.4.7), which represents a completely new and more efficient way of using the neural network for driving, and the issue has already been fixed. That build isn’t released to anyone outside a select few Tesla employees. Other than that, it performed flawlessly for over 40 minutes in a live demo.

    • midorale@lemmy.villa-straylight.social · +12/−3 · 1 year ago

      > Other than that it performed flawlessly for over 40 minutes in a live demo.

      I get that this is an alpha, but the problem with full self-driving is that this is way worse than what users will tolerate. If ChatGPT gave you perfect information for 40 minutes (it doesn’t) and then one huge lie, we’d still be using it everywhere, because you can validate the lies.

      With FSD, that failure threshold means a lot of people would have terrible accidents. No amount of perfect driving outside that window would make you feel very happy.

        • Ocelot@lemmies.world · +3/−12 · 1 year ago (edited)

        You realize that FSD is not an LLM, right?

        If it’s “way worse,” then where are all the accidents? All Teslas have 360° dashcams. Where are all the accidents?!

          • midorale@lemmy.villa-straylight.social · +7/−2 · 1 year ago

          I didn’t say FSD was an LLM; my comment was implementation-agnostic. My point was that drivers are less forgiving of what programmatically seems like a small error than someone who is trying to generate an essay.

            • Ocelot@lemmies.world · +2/−7 · 1 year ago (edited)

            Maybe so, but from where I stand the primary goal should be “better driver than a human,” which is an incredibly low bar. We are already quite a ways past that, and it’s getting better with every release. FSD today is nearly 100% safe; most of the complaints now are about how it drives like a robot by obeying traffic laws, which confuses a lot of other drivers. There are still some edge cases to be ironed out, like really heavy rain, some icy conditions, and snow. People are terrible drivers in those conditions too, so that’s no surprise. It will get there.

              • midorale@lemmy.villa-straylight.social · +3 · 1 year ago

              Oh man I definitely agree here. I’m a huge fan of that “better than a human” threshold. Roads are already very dangerous. One of the wildest things I’ve noticed is highway driving at night in very rainy conditions, sometimes visibility will be near zero. Yet a lot of drivers are zooming around pretending they can see. I feel like I’m in the twilight zone when it happens.

      • zeppo@lemmy.world · +2 · 1 year ago

      It has to perform flawlessly 99.999999% of the time. The number of 9s matters. Otherwise, you are paying some moron to kill you, and perhaps other people.

        • Ocelot@lemmies.world · +2/−4 · 1 year ago

        OK, so I’m totally in agreement, but 99.999999% is one accident per hundred million miles traveled. I don’t think there should be any reasonable expectation that such a technology can ever get that far without real-world testing, which is precisely where we are now. We’re maybe at 4 or 5 nines currently.
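        To put the nines arithmetic in concrete terms, here’s a quick sketch (assuming, as a simplification, one independent failure opportunity per mile; the per-mile framing is mine, not an official metric):

```python
# Translate "number of nines" of per-mile reliability into the
# expected number of miles between failures.

def miles_between_failures(nines: int) -> int:
    # A reliability of `nines` nines means a failure probability of
    # 10**-nines per mile, i.e. one failure every 10**nines miles.
    return 10 ** nines

for n in (4, 5, 8):
    print(f"{n} nines -> one failure per {miles_between_failures(n):,} miles")
# 4 nines -> one failure per 10,000 miles
# 5 nines -> one failure per 100,000 miles
# 8 nines -> one failure per 100,000,000 miles
```

        So 4–5 nines means one incident every 10,000–100,000 miles, versus one per 100 million miles at the 8 nines mentioned above.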

        If you do actually want that level of safety, which, let’s be honest, we all do (ideally 100%), how would you propose such a system be tested and deemed safe, if not the way it’s currently being done?