I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.

  • FaceDeer@kbin.social · 10 months ago

    No matter what you call it, an LLM will always produce the same output given the same input if it is in the same state.

    How do you know a human wouldn’t do the same? We lack the ability to perform the experiment.
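
    For what it's worth, the determinism half of that quote is easy to demonstrate on the LLM side: with sampling disabled (greedy decoding), a transformer LM maps the same input to the same output every time. A minimal sketch, assuming the Hugging Face transformers library and using "gpt2" purely as an example model:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # "gpt2" is just an illustrative choice; any causal LM works the same way.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("The capital of France is", return_tensors="pt")

    with torch.no_grad():
        # do_sample=False means greedy decoding: the argmax token at each
        # step, so identical input yields an identical continuation.
        out_a = model.generate(**inputs, do_sample=False, max_new_tokens=10)
        out_b = model.generate(**inputs, do_sample=False, max_new_tokens=10)

    assert torch.equal(out_a, out_b)  # same state + same input -> same output
    print(tokenizer.decode(out_a[0]))
    ```

    The open question in the thread is whether a human brain, if you could ever freeze and replay its state the same way, would be any different.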

    An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know.”

    Also a very human behaviour, in my experience.

    • MonkderZweite@feddit.ch · 10 months ago

      How do you know a human wouldn’t do the same?

      Because the human brain has “circuits” for coherent thought, and language was added on top of them later.