I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.

  • kromem@lemmy.world (OP) · 10 months ago

    No matter what you call it, an LLM will always produce the same output given the same input if it is in the same state.

    You might want to look up the definition of ‘stochastic.’

    • expr@programming.dev · 10 months ago

      They’re not wrong. Randomness in computing is what we call “pseudo-random”, in that it is deterministic provided that you start from the same state, or “seed”.
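      Here’s a toy sketch of that point (the vocabulary, logits, and seeds below are made up for illustration, not any real model’s sampler): temperature sampling looks stochastic from the outside, but once the RNG seed and the inputs are fixed, the output is fully reproducible.

      ```python
      import math
      import random

      def sample_token(logits, temperature=1.0, rng=None):
          """Sample a token index from softmax(logits / temperature) using the given RNG."""
          rng = rng or random
          scaled = [l / temperature for l in logits]
          m = max(scaled)  # subtract the max for numerical stability
          weights = [math.exp(s - m) for s in scaled]
          return rng.choices(range(len(logits)), weights=weights, k=1)[0]

      # Toy "next-token" distribution, invented purely for this example.
      vocab = ["cat", "dog", "fish"]
      logits = [2.0, 1.5, 0.1]

      # Same seed, same input -> the "random" draws repeat exactly (pseudo-randomness).
      rng_a = random.Random(42)
      run_a = [vocab[sample_token(logits, temperature=0.8, rng=rng_a)] for _ in range(5)]

      rng_b = random.Random(42)
      run_b = [vocab[sample_token(logits, temperature=0.8, rng=rng_b)] for _ in range(5)]

      print(run_a == run_b)  # True: identical seed and input give identical output

      # From the caller's side the sampling still looks stochastic: a different seed
      # (or no fixed seed at all) can produce a different sequence.
      rng_c = random.Random(7)
      print([vocab[sample_token(logits, temperature=0.8, rng=rng_c)] for _ in range(5)])
      ```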