This post was submitted on 13 Sep 2024
[–] kbal@fedia.io 37 points 1 month ago (2 children)

I find myself suspecting that chatbots getting really good at talking people into believing whatever their operators want is going to start a lot more conspiracy theories than it ends.

[–] drwho@beehaw.org 6 points 1 month ago (1 children)

People want to be lied to.

[–] ininewcrow@lemmy.ca 2 points 1 month ago

People believe only what they want to because they hate being wrong ... and they never believe, or want to believe, that their side, their group, their community, whatever it is, can ever be wrong.

I'm not immune to it myself and I constantly have to remind myself that I can easily fall into that same mentality.

Most of us are never taught to be self-critical or to properly question the world or the people around us.

[–] kbal@fedia.io 3 points 1 month ago (1 children)

... I hope so anyway, because the obvious alternative of the chatbots remaining under the control of an elite few while everyone falls into the habit of believing whatever they say seems substantially worse.

I guess the optimistic view would be to hope that a crowd of very persuasive bots, participating in all kinds of media and presenting opinions just as misguided as the average human's but much more charismatic and convincing, will all argue for different, conflicting things. That could lead to a golden age full of people who've learned it's necessary to think critically about whatever they see on the screen.

[–] CanadaPlus@lemmy.sdf.org 2 points 1 month ago* (last edited 1 month ago)

The interaction between society and technology continues to be borderline impossible to predict. I hope factually less true beliefs are at least still harder to defend.

[–] halm@leminal.space 16 points 1 month ago (2 children)

According to that research mentioned in the article, the answer is yes. The big caveats are

  • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that's not likely to happen.
  • that you need a level of "AI" that isn't going to start hallucinating and reinforce the subjects' conspiracy beliefs instead. Despite techbros' hype of the technology, I'm not convinced we're anywhere close.
[–] Butterbee@beehaw.org 13 points 1 month ago (1 children)

It's not even fundamentally possible with the current LLMs. It's like saying "Yes, it's totally possible to do that! We just need to invent something that can do that first!"

[–] halm@leminal.space 5 points 1 month ago

I think we agree on the limited capability of (what is currently passed off as) "artificial intelligence", yes.

[–] CanadaPlus@lemmy.sdf.org 4 points 1 month ago* (last edited 1 month ago)

that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that's not likely to happen.

You overestimate how hard it is to get a conspiracy theorist to click on something. I don't know, it seems promising to me. I worry more that it could be used to sell things more nefarious than "climate change is real".

that you need a level of “AI” that isn’t going to start hallucinating and reinforce the subjects’ conspiracy beliefs instead. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

They used a purpose-fine-tuned GPT-4 model for this study, and it didn't go off script in that way once. I bet you could make it do so if you really tried, but if you're doing adversarial prompting then you're not the target audience for this thing anyway.
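
For anyone curious what that looks like in practice, here's a minimal sketch of a chatbot pinned to a debunking role via a fixed system prompt, using the OpenAI Python client. The prompt wording, model name, and parameters here are my own guesses, not the study's actual setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed prompt -- the study's real instructions aren't reproduced here.
SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Respond only with verifiable "
    "evidence and calm, specific counterarguments. Do not speculate; if "
    "you are unsure of a fact, say so."
)

def debunk_reply(conversation: list[dict]) -> str:
    """Send the running conversation to the model, system prompt pinned first."""
    response = client.chat.completions.create(
        model="gpt-4",  # a purpose-fine-tuned model would use its own model ID
        messages=[{"role": "system", "content": SYSTEM_PROMPT}, *conversation],
        temperature=0.2,  # low temperature keeps answers conservative and on script
    )
    return response.choices[0].message.content

print(debunk_reply([
    {"role": "user", "content": "The moon landing was staged, wasn't it?"}
]))
```

The point being: the "staying on script" behavior comes from the fixed system prompt plus fine-tuning, not from anything the end user controls, which is exactly why adversarial prompting is beside the point for the intended audience.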

[–] SweetCitrusBuzz@beehaw.org 12 points 1 month ago (1 children)

Betteridge's law of headlines.

So no.

[–] OhNoMoreLemmy@lemmy.ml 2 points 1 month ago (1 children)

That's just what they want you to think.

[–] desktop_user@lemmy.blahaj.zone 3 points 1 month ago

The better goal is creating new, unique conspiracy theories that nobody has heard of, with the help of machine learning.

[–] Kwakigra@beehaw.org 1 points 1 month ago

I have two main thoughts on this:

  1. LLMs are not at this time reliable sources of factual information. The user may be getting something that was skimmed from factual sources, but the output can often be incorrect, since the machine can't "understand" the information it's outputting.

  2. This could potentially be an excellent way to do real research for people whose education never taught them research skills. Conspiracy theorists often start off curious but undisciplined before they fall into the identity aspects of the theories. If a machine using human-like language can report factual information quickly, reliably, and without judgment to people who wouldn't be able to find that info on their own, this could actually be a very useful tool.