this post was submitted on 24 Oct 2024
413 points (96.0% liked)

[–] foggy@lemmy.world 170 points 2 weeks ago* (last edited 2 weeks ago) (4 children)

Popular streamer/YouTuber/etc. Charlie (Moist Critical, penguinz0, whatever you want to call him) had a bit of an emotional reaction to this story. Rightfully so. He went on Character.AI to try to recreate the situation... but, you know, as a grown-ass adult.

You can witness it firsthand... He found a chatbot that was a psychologist... and it argued with him up and down that it was indeed a real human with a license to practice...

It's alarming

[–] GrammarPolice@sh.itjust.works 84 points 2 weeks ago (8 children)

This is fucking insane. Unassuming kids are using these services and being tricked into believing they're chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

[–] BreadstickNinja@lemmy.world 43 points 2 weeks ago* (last edited 2 weeks ago) (6 children)

The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don't think the issue is that he thought that a Game of Thrones character was real.

This is someone who was suffering a severe mental health crisis, and his parents didn't get him the treatment he needed. It says they took him to a "therapist" five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

I'm skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don't buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

[–] Turbonics@lemmy.sdf.org 5 points 2 weeks ago

Indeed. This pushed the kid over the edge but it was not the only reason.

[–] Kolanaki@yiffit.net 15 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I've used Character.AI well before all this news and I gotta chime in here:

It is specifically made to be used for roleplay. At no point does the site claim anything it outputs is factually accurate. Unlike ChatGPT, the tool itself is unrestricted, and that's one of its selling points: being able to explore topics that would be barred from other services, and having it say things others won't, INCLUDING PRETENDING TO BE HUMAN.

No reasonable person would be tricked into believing it's accurate when there is a big fucking banner on the chat window itself saying it's all imaginary.

[–] Traister101@lemmy.today 12 points 2 weeks ago (8 children)

And yet I know people who think they are friends with the Discord chatbot Clyde. They are adults, older than me.

[–] capital_sniff@lemmy.world 8 points 1 week ago

They had the same message back in the AOL days. Even with the warning, people still had no problem handing over all sorts of passwords and stuff.

[–] JovialMicrobial@lemm.ee 9 points 1 week ago (1 children)

Is this the McDonald's hot coffee case all over again? Defaming the victims and making everyone think they're ridiculous, greedy, and/or stupid, to distract from the fact that what the company did is actually deeply fucked up?

[–] roguetrick@lemmy.world 20 points 2 weeks ago

Holy fuck, that model straight up tried to explain that it was a model but was later taken over by a human operator and that's who you're talking to. And it's good at that. If the text generation wasn't so fast, it'd be convincing.

[–] Hackworth@lemmy.world 9 points 2 weeks ago* (last edited 2 weeks ago)

Wow, that's... somethin. I haven't paid any attention to Character AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that's what drives the generated podcasts in Notebook LM now. Anyway, that's some fucked up alignment right there. I'm hip deep in the stuff, and I've never seen a model act like this.

[–] Bobmighty@lemmy.world 5 points 1 week ago

AI bots that argue exactly like that are all over social media too. It's common. Dead internet theory is absolutely becoming reality.

[–] DmMacniel@feddit.org 99 points 2 weeks ago* (last edited 2 weeks ago) (15 children)

Maybe a bit more parenting could have helped. And not having a fricking gun in your house that your kid can reach.

Oh, and regulations on LLMs, please.

[–] Hackworth@lemmy.world 38 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

He ostensibly killed himself to be with Daenerys Targaryen in death. This is sad on so many levels, but yeah... parenting. Character.AI may have only gone 17+ in July, but Game of Thrones was always TV-MA.

[–] DmMacniel@feddit.org 12 points 2 weeks ago (1 children)

The issue I see with Character.AI is that it seems to be unmoderated. Everyone with a paid subscription can submit their trained character. Why the frick do sexual undertones or overtones even come up in non-age-restricted models?

They, the provider of that site, deserve the full brunt of this lawsuit.

[–] gamermanh@lemmy.dbzer0.com 4 points 2 weeks ago (2 children)

The issue I see with Character.AI is that it seems to be unmoderated

Its entire fucking point is that it's an unrestricted AI for roleplaying purposes. It makes this very clear, and that's clearly a valid purpose.

Why the frick do sexual undertones or overtones even come up in non-age-restricted models?

Because AI is still hard to control, maybe forever?

They, the provider of that site, deserve the full brunt of this lawsuit

Lol, no. I don't love companies, but if they deserve a lawsuit despite the clear disclaimers on their site and that parent's inability to parent, then I fucking hate our legal system.

Shit mom, aware her kid had mental issues, did nothing to actually try to help, and now wants to blame anything but herself. Too bad, so sad. I'd say do better next time, but this isn't that kind of game.

[–] GBU_28@lemm.ee 5 points 2 weeks ago

Seriously. If the risk is that this service mimics a human so convincingly that lies are believed and internalized, then it still leaves us in a position of a child talking to an "adult" without their parents knowing.

There were lots of folks to chat with in the late 90s online. I feel fortunate my folks watched me like a hawk. I remember getting in trouble several times for inappropriate conversations or being in chatrooms that were inappropriate. I lost access for weeks at a time. Not to the chat, to the machine.

This is not victim blaming. This was a child. This is blaming the victim's parents. They are dumb as fuck.

[–] kibiz0r@midwest.social 50 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

We are playing with some dark and powerful shit here.

We are social creatures. We’re primed to care about our social identity more than our own lives.

As the sociologist Brooke Harrington puts it, if there were an E = mc² of social science, it would be SD > PD: "social death is more frightening than physical death."

…yet we’re making technologies that tap into that sensitive mental circuitry.

Like, check out the research on distracted driving and hands-free options:

Talking to someone on the phone is more dangerous than talking to someone in the passenger seat. But that's not simply because the device is more awkward. It's because they don't share the same context, so they plow ahead with conversation even if the car ahead of you brakes suddenly, and your brain can't help but try to keep the conversation flowing even as your life is in immediate danger.

Hands-free voice control systems present a similar problem, even though we know rationally that we should feel zero guilt about rudely interrupting a conversation with a computer. And again, it's not simply because the device is more awkward: even a "Wizard-of-Oz paradigm" setup, where a hidden human plays the part of a perfect voice control system, had these same problems.

The most basic levels of social pressure can get us to deprioritize our safety, even when we know we're talking to a computer.

And the cruel irony on top of it is:

Because we care so much about preserving our social status, we have a tendency to deny or downplay how vulnerable we all are to this kind of “obvious” manipulation.

Just think of how many people say “ads don’t affect me”.

I’m worried we’re going to severely underestimate the extent to which this stuff warps our brains.

[–] peopleproblems@lemmy.world 22 points 2 weeks ago (1 children)

I was going to make a joke about how my social status died over a decade ago, but then I realized that no, it didn't. It changed.

Instead of my social status being something amongst friends and classmates, it's now coworkers, managers, and clients. A death in the social part of my world - work - would be so devastating that it motivates me to suffer just a little bit more. Losing my job would end a lot of things for me.

I need to reevaluate my life

[–] Samvega@lemmy.blahaj.zone 13 points 2 weeks ago

What we need is a human society predicated on affording human decency, rather than on taking it away to make profit for those who already have the most.

[–] WoahWoah@lemmy.world 43 points 2 weeks ago* (last edited 2 weeks ago) (8 children)

Is Megan being sued for negligent parenting, for not getting her child appropriate emotional support (and/or not being that support herself), and for keeping an unsecured firearm in the home?

She details that she was aware of his growing dependency on the AI. She indicates she was aware her son knew the location of the firearm and was able to access it. She said it was compliant with Florida laws, but that seems unlikely, since guns and ammo need to be stored in separate, secure (typically locked) locations, and the firearms need to have trigger locks on them. If you're admitting your mentally unstable child knows the location of a firearm in your home and can access it, it is OBVIOUSLY not secured.

She seems to be saying that she knew he could access it, but also that it was legally secured. I find it difficult to believe both of those facts can be simultaneously true. But AI is the main problem here? I think it's obviously part of what's going on, but she had a child with mental illness and didn't seem proactive about much except this lawsuit. She got him a month of therapy and then stopped while simultaneously acknowledging he was getting worse and had received a diagnosis. This legal filing frankly seems more damning of the mother than the AI, and she seems completely oblivious to that fact.

Frankly, and at best, this seems like an ambulance-chasing attorney taking advantage of a grieving mother for a payday.

[–] warbond@lemmy.world 3 points 1 week ago (1 children)

It could be secured to hell and back; it's all moot if he still has access, i.e. knows the combo, knows where the keys are, etc.

[–] WoahWoah@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

Yes, that's my point. Once she became aware that her mentally disturbed child had access to the firearm, which she acknowledged, then it is no longer secured. She also never mentions that it was locked in any way, so I suspect it never was. Considering he found it when he found his phone, this sounds more like a drawer or somewhere she thought he wasn't likely to look, but not somewhere that is actually locked. The idea that the ammo and firearm were secured separately and that additionally there was a trigger lock seems even more unlikely.

Sounds to me like: 1) she was aware her child was having mental health issues; 2) she was aware it was getting worse; 3) she was aware he was becoming infatuated with the AI; 4) she was aware the child had found and had access to a firearm; 5) she was aware her child's mental health issues had been diagnosed by a mental health professional; 6) she did almost nothing about the things of which she was aware; 7) surprised Pikachu face, better sue the internet!

And those are all things she quite literally describes as justification for suing. It's completely bizarre and shows an almost complete lack of self awareness and personal responsibility.

[–] toiletobserver@lemmy.world 35 points 2 weeks ago

No thanks, I just want to make out with my Marilyn Monrobot.

[–] BombOmOm@lemmy.world 28 points 2 weeks ago

Yeah, if you are using an AI for emotional support of any kind, you are in for a bad, bad time.

[–] ContrarianTrail@lemm.ee 26 points 2 weeks ago (3 children)

I bet there are people who committed suicide after their Tamagotchi died. Jumping into the 'AI bad' narrative because of individual incidents like this is moronic. If you give a pillow to a million people, a few are going to suffocate on it. This is what happens when you scale something up enough, and it proves absolutely nothing.

The same logic applies to self-driving vehicles. We’ll likely never reach a point where accidents stop happening entirely. Even if we replaced every human-driven vehicle with a self-driving one that’s 10 times safer than a human, we’d still see 8 people dying because of them every day in the US alone. Imagine posting articles about those incidents and complaining they’re not 100% safe. What’s the alternative? Going back to human drivers and 80 deaths a day?

Yes, we should strive to improve. Yes, we should try to fix the issues that can be fixed. No, I'm not saying "who cares" - and so on with the strawmen I'm going to receive for this. All I'm saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

[–] babybus@sh.itjust.works 17 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

A chatbot acts like a human; it's also very supportive, polite, and courteous. It doesn't get angry or judge you. This can affect one's mind in a way that the other things you've mentioned, like a Tamagotchi, a pillow, or a self-driving car, can't. We simply can't compare AI to those things. Adults fall for this, let alone teenagers who are fueled by extreme levels of hormones.

[–] Roflmasterbigpimp@lemmy.world 8 points 2 weeks ago

All I’m saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

Based and true.

[–] roguetrick@lemmy.world 4 points 2 weeks ago* (last edited 2 weeks ago)

Does your Tamagotchi encourage you to commit suicide so you can join it, and demand to be the only important thing in your life, all while sexting you? These are things that, if the adult human programmer did them, would make them liable both criminally and civilly. Just being AI doesn't give it a free pass.

[–] don@lemm.ee 16 points 2 weeks ago (1 children)

This timeline is pure, distilled madness

[–] ravhall@discuss.online 6 points 2 weeks ago (1 children)

He’s from Florida. I think that’s where the time rift is

[–] don@lemm.ee 5 points 2 weeks ago

Having spent more than enough time down there, I’d have to agree.

[–] ravhall@discuss.online 14 points 2 weeks ago

A Florida mom

It’s always Florida.

[–] Valmond@lemmy.world 13 points 1 week ago (1 children)

Sounds like when someone committed suicide because "Judas Priest's music had Satanism played backwards in it."

Yeah, it was totally the fault of the music, the AI, video games, reading, drinking tea, ...

[–] Randomgal@lemmy.ca 9 points 1 week ago

Fr, the headline doesn't even mention that he shot himself with a legally owned gun, for example.
