I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.
Stupid. LLMs do not create new relationships between words that don't already exist.
This is all just fluff to make them seem more like AGI, which they will never be.
Why would that be required for understanding? Presumably, during training it would have made connections between the words it saw. Now that training has stopped it hasn't lost those connections; sure, it can't make new ones, but why is that important for using the connections it already has?