(This is an expanded version of two of my comments [Comment A, Comment B] - go and read those if you want)

Well, Character.ai has gotten themselves into some real deep shit recently - longtime user Sewell Setzer shot himself, and his mother, Megan Garcia, is suing the company, its founders and Google as a result, accusing them of “anthropomorphising” their chatbots and offering “psychotherapy without a license,” among other things, and demanding a full-blown recall.

Now, I’m not a lawyer, but I can see a few aspects which give Garcia a pretty solid case:

  • The site hosts “mental health-focused chatbots like ‘Therapist’ and ‘Are You Feeling Lonely,’ which Setzer interacted with,” as Emma Roth noted writing for The Verge.

  • Character.ai has already racked up multiple addiction/attachment cases like Sewell’s - I found articles from Wired and news.com.au, plus a few user testimonies (Exhibit A, Exhibit B, Exhibit C) about how damn addictive the fucker is.

  • As Kevin Roose notes for the NYT, “many of the leading A.I. labs have resisted building A.I. companions on ethical grounds or because they consider it too great a risk”. That could be used to argue character.ai were being particularly reckless.

Which way the suit’s gonna go, I don’t know - my main interest is in the potential fallout.

Some Predictions

Win or lose, I suspect this lawsuit is going to sound character.ai’s death knell - even if they don’t get regulated out of existence, “our product killed a child” is the kind of Dasani-level PR disaster few companies can recover from, and news of this will likely prompt any would-be investors to run for the hills.

If Garcia does win the suit, it’d more than likely set a legal precedent which denies Section 230 protection to chatbots, if not AI-generated content in general. If that happens, I expect a wave of lawsuits against other chatbot apps like Replika, Kindroid and Nomi at the minimum.

As for the chatbots themselves, I expect they’re gonna lock their shit down hard and fast to avoid ending up with a situation like this on their hands, and I expect their users are gonna be pissed.

As for the AI industry at large, I suspect they’re gonna try and paint the whole thing as a frivolous lawsuit, and paint Garcia as shirking the blame for her son’s suicide, a la the “McDonald’s coffee case”. How well this will work, I don’t know - personally, considering the AI industry’s godawful reputation with the public, I expect they’re gonna have some difficulty.

  • Mii@awful.systems · 12 days ago

    If Garcia does win the suit, it’d more than likely set a legal precedent which denies Section 230 protection to chatbots, if not AI-generated content in general.

    I’m not gonna lie, that would be hilarious just for the monkey’s paw effect. They want chatbots to take the place of real employees? Let’s start by holding them to the same standards and treating everything they shit out as first-party content.