ChatGPT consistently makes shit up. It's difficult to tell when something is fabricated because, as a language model, it's built to sound confident, like a person stating a fact they actually know.
It knows how to talk like a subject matter expert because that's what gets published most, and thus what it's trained on, but it doesn't always have the facts needed to answer a question. It makes shit up to fill the gap and presents it just as confidently, but it's wrong.
Most of the time I use an assistant either to perform home automation tasks or to look stuff up online. The first already works fine, and for the second I won't trust a glorified autocomplete.
Good point: hallucinations only add to the fake news and artificial content problems.
I’ll counter with this: how do you know the stuff you look up online is legit? Should we go back to encyclopedias? Who writes those?
Edit: in case anyone isn't aware, GPT "hallucinates" made-up information, particularly when the temperature and top_p sampling settings aren't well tuned. I wasn't saying anyone's opinion was a hallucination, of course.
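For anyone unfamiliar with those knobs, here's a minimal toy sketch of what temperature and top_p do to a model's next-token distribution. The numbers are illustrative only, not any real model's internals: temperature rescales the logits before the softmax (lower values sharpen the distribution toward the top token), and top_p (nucleus sampling) keeps only the smallest set of tokens whose cumulative probability reaches the threshold.

```python
import math

def sample_filter(logits, temperature=1.0, top_p=1.0):
    """Apply temperature scaling, then nucleus (top-p) filtering,
    returning renormalized probabilities over the kept token indices."""
    # Temperature scaling: values below 1.0 sharpen the distribution;
    # as temperature -> 0 this approaches greedy (always pick the top token).
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus (top-p) filtering: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, most likely first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    mass = sum(probs[i] for i in kept)
    return {i: probs[i] / mass for i in kept}

# Toy distribution over four "tokens": one confident head plus a tail.
logits = [4.0, 2.0, 1.0, 0.5]
# With a low temperature the top token alone exceeds the 0.9 nucleus,
# so it is the only candidate left to sample from.
print(sample_filter(logits, temperature=0.7, top_p=0.9))
```

None of this makes the model's *claims* more accurate; it only trades off how adventurous versus repetitive the sampling is, which is why tuning these settings reduces some kinds of made-up output but can't eliminate it.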
Some generative chatbots will say something, then link to where the info is from. That's good, because I can follow up.
Some will just say something. That’s bad and I’ll have to search myself afterwards.
It's the equivalent of a book with no cover, or a webpage where I can't see what website it's on. Maybe it's reputable, maybe it's not. Without a source I can't really decide.
Yeah, it's utterly baffling to me that anyone would use a tool that predicts the next word in a sentence to try to learn something. Besides, what's the endgame when no reporter can make a living because all their words are laundered and fed into a "most people are saying" bot? At that point, new and unknown news, information, and facts will just be filtered out, unless enough clickbait sites steal them, because the words don't show up in the average conversation frequently enough.
Amusing. Much like the cryptocurrency and NFT industry, where everyone from the CEO of OpenAI to the majority of the influencers came from, the extent to which the system remains usable at all relies on the technology being niche. If it ever actually became the primary method, the tech would fundamentally collapse under its own weight.
I was leading into this side of the debate, but basically our collective knowledge, hell, our collective experiences, are not objective. Our assumptions, mistakes, and wordings that get interpreted with different meanings all contribute to some level of disinformation.
Now, let's not be too nitpicky, and accept that some detail-fudging isn't the end of the world and happens frequently. We can cross-reference each other's accounts, but even that only works to an extent.
Whole cultures might bear witness to an event and perceive it to be about x, y, or z, whereas the next-door neighbor might see it completely differently.
AI, to me, really isn't that far off from the winners being the ones who write the history books, or from the way strange or unexpected events naturally cause human brains to recollect them with incorrect detail and accuracy.
Not quite what I meant. I was merely pointing out that we should be cognizant of how our worldview, and others' views, might shape and define what's considered history or fact.
All in all, central points of authority are inherently vulnerable to misinformation. I personally think communal (and namely biological) sources of information, shared and verified by one another, are far more valuable.
Why settle for seeing only your favorite color of the rainbow when there's such an amazing and valuable spectrum available? So very digital of us.
I'm admittedly a little confused about how you might still think this. Could you explain your train of thought for how you concluded I think that? (What's the bridge between the two quotes you're using?)
To be clear, objective fact is obtainable through reproducibility (namely, the scientific process), but that doesn't work as well for "objective fact" about past events once you expand past "this event happened" (i.e., "this happened because of xyz").
I think a lot of people blur the line between the event itself and the rationale/explanation behind it. That's really the crux of the problem as I see it, and what I'm trying to bring awareness to.
Care to elaborate on why not? I'm interested in your viewpoint.
Because ChatGPT isn't reliable for actual information, and I don't want to have any "assistant" at all.
Fair enough!
It collects your data and profiles you based on your typing style, what you type, etc.
Regular assistants, websites, stores, etc. all do this exact same thing already, for what it's worth.
People do too!
People/regular assistants don't sell my data to the highest bidder.
By regular assistant I meant Google, Samsung, etc., not a person.
People just give it away for free to each other.