“The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.”

It’s getting worse. And because it’s a black-box model, they don’t know why. The computer science professor here likens it to how human students make mistakes… but human students make mistakes because they don’t have perfect recall, mishear things being told to them, are tired and/or not paying attention… A bunch of reasons that basically relate to having a human body that needs food, rest and water. A thing a computer does not have.

The only reason ChatGPT should be getting math wrong is that it’s getting inputs that are wrong, but without visibility into the model they can’t figure out where it’s going wrong or who fed it the wrong info.

  • MrZigZag@kbin.social · 1 year ago

    I think it’s the same hype train that said ten years ago that by now every vehicle would be self-driving and all the truckers would be out of work. Or back when that first Avatar movie came out, that in a short while every movie and TV show would be in 3D.

    • Hazdaz@lemmy.world · 1 year ago

      Oh absolutely. It is one hype train after another. VR was going to be the Next Big Thing… and then seemed to have fizzled faster than anything else. Companies would dump billions into these projects and then 6, 12, 18 months later cancel them when the public didn’t seem interested. I am sure AI is not going totally away, but with the way it was being described just a few short weeks ago, it was the second coming of Jebus.