- cross-posted to:
- technology@lemmit.online
Thousands of authors demand payment from AI companies for use of copyrighted works

Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.
I don’t think this is true.
The models (or maybe the characters in the conversations simulated by the models) can be spectacularly bad at basic reasoning, and misunderstand basic concepts on a regular basis. They are of course completely insane; the way they think is barely recognizable.
But they are also, when asked, often able to manipulate concepts, do reasoning, and get right answers. Ask one to explain the water cycle like a pirate, and you get that. You can find the weights that encode the Eiffel Tower being in Paris and edit them to put it in Rome, and then when you ask for a train itinerary to get there, it will tell you to take the train to Rome.
I don’t know what “understanding” something is, other than being able to get right answers when asked to think about it. There’s some understanding of the water cycle in there, and some of pirates, and some of European geography. Maybe not a lot. Maybe it’s not robust. Maybe it’s superficial. Maybe there are still several differences in kind between whatever’s there and the understanding a human can get with a brain that isn’t 100% a stream-of-consciousness generator. But it’s not literally zero.