I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.
Is there a difference between being a “stochastic parrot” and understanding text? No matter what you call it, an LLM will always produce the same output given the same input when it is in the same state.
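To make that concrete, here's a minimal sketch of a single decoding step (a toy stand-in, not any real model or API): with fixed weights and greedy decoding, the same input always yields the same token.

```python
import torch

# Toy stand-in for an LLM forward pass: fixed weights map a context vector
# to vocabulary logits. Hypothetical, not any particular model.
torch.manual_seed(0)
model = torch.nn.Linear(8, 100)   # pretend "vocab" of 100 tokens
context = torch.randn(1, 8)       # the same prompt encoding every time

def next_token(model, context):
    # Greedy decoding: with the same weights ("state") and the same input,
    # the argmax over the logits is always the same token.
    with torch.no_grad():
        logits = model(context)
    return int(logits.argmax(dim=-1))

print(next_token(model, context))  # some token id
print(next_token(model, context))  # the same token id, every time
```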
An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”; it doesn’t have a concept of understanding. So I lean toward calling it a “stochastic parrot”. Although I think there are some interesting philosophical exercises you could do on whether humans are much different, and whether understanding is just an illusion.
How do you know a human wouldn’t do the same? We lack the ability to perform the experiment.
Also a very human behaviour, in my experience.
Because the human has “circuits” for coherent thought, and language was added later.
You might want to look up the definition of ‘stochastic.’
They’re not wrong. Randomness in computing is what we call “pseudo-random”, in that it is deterministic provided you start from the same state or “seed”.
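For example, in plain Python (an illustration of seeded pseudo-randomness, not specific to any LLM library):

```python
import random

def pseudo_random_draws(seed, n=5):
    # Restarting from the same seed ("state") reproduces the exact same sequence.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(pseudo_random_draws(1234))
print(pseudo_random_draws(1234))  # identical to the line above
print(pseudo_random_draws(5678))  # different seed, different sequence
```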