A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:
“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.
This would certainly be a worrying world news development. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed like a plausible occurrence.
But, there was one major problem: Iran did not attack Israel. The headline was fake.
Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.
People who deploy AI should be held responsible for the slander and defamation the AI causes.
Slander is spoken. In print, it’s libel.
Get me pictures of Spiderman!
Parker, why does Spider-Man have seven fingers in this photo?!
Why, I imagine Xitter lawyers will argue that since it was neither spoken nor “printed”, they can’t be charged.
You expect people pushing AI fear mongering to be literate?
How is it fear mongering when shit like this is happening? AI as it stands is unregulated and will continue to cause issues if left this way.
So will self-driving cars, the oil industry, and a bunch of others. The US is a plutocracy, and Musk has enough power to keep playing around with this, so that’s it. If you want actual change, either start organizing for a general strike or a civil war, since stopping Musk is not on the ballot, and most likely won’t be for either of your lifetimes.
I mean, if your problem is just narrowly Musk, the one guy, you don’t need a whole war; just one shot.
I’m not advocating this, just pointing out, you know? Not that I have a problem with turning the class massacre into a class war.
I wonder how legislation is going to evolve to handle AI. Brazilian law would punish a newspaper or social media platform claiming that Iran just attacked Israel - this is dangerous information that could affect somebody’s life.
If it were up to me, if your AI hallucinated some dangerous information and provided it to users, you’re personally responsible. I bet if such a law existed in less than a month all those AI developers would very quickly abandon the “oh no you see it’s impossible to completely avoid hallucinations for you see the math is just too complex tee hee” and would actually fix this.
I bet if such a law existed in less than a month all those AI developers would very quickly abandon the “oh no you see it’s impossible to completely avoid hallucinations for you see the math is just too complex tee hee” and would actually fix this.
Nah, this problem is actually too hard to solve with LLMs. They don’t have any structure or understanding of what they’re saying so there’s no way to write better guardrails… Unless you build some other system that tries to make sense of what the LLM says, but that approaches the difficulty of just building an intelligent agent in the first place.
So no, if this law came into effect, people would just stop using AI. It’s too cavalier. And imo, they probably should stop for cases like this unless it has direct human oversight of everything coming out of it. Which also, probably just wouldn’t happen.
Yep. To add on, this is exactly what all the “AI haters” (myself included) are going on about when they say stuff like there isn’t any logic or understanding behind LLMs, or when they say they are stochastic parrots.
LLMs are incredibly good at generating text that works grammatically and reads like it was put together by someone knowledgeable and confident, but they have no concept of “truth” or reality. They just have a ton of absurdly complicated technical data about how words/phrases/sentences are related to each other on a structural basis. It’s all just really complicated math about how text is put together. It’s absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.
Turns out that if you get enough of that data together, it makes a very convincing appearance of logic and reason. But it’s only an appearance.
You can’t duct-tape enough Speak & Spells together to rival the mass of the Sun and have it somehow just become something that outputs a believable human voice.
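For a feel of what “really complicated math about how text is put together” means, here’s a deliberately tiny toy sketch in Python (purely illustrative, nothing like a real transformer): it only records which word tends to follow which, then babbles from those statistics, with no notion of truth anywhere.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": record which word follows which in a corpus,
# then generate text purely from those statistics. There is no model of
# meaning or truth anywhere in here, only word-adjacency counts.
corpus = "a pound of feathers weighs the same as a pound of steel".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

word = "a"
output = [word]
for _ in range(10):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)  # picks a *plausible* next word, not a true one
    output.append(word)

print(" ".join(output))
```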
For an incredibly long time, ChatGPT would fail questions along the lines of “What’s heavier, a pound of feathers or three pounds of steel?” because it had seen the normal variation of the riddle with equal weights so many times. It has no concept of one being smaller than three. It just “knows” the pattern of the “correct” response.
It no longer fails that “trick”, but there’s significant evidence that OpenAI has set up custom handling for that riddle on top of the actual LLM, as it doesn’t take much work to find similar ways to trip it up by using slightly modified versions of classic riddles.
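If you want to poke at this yourself, something along these lines is a rough sketch (assuming the current OpenAI Python SDK with an API key in your environment; the model name is just a placeholder, not a claim about which model has the custom handling):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

riddles = [
    "What's heavier, a pound of feathers or a pound of steel?",       # the classic
    "What's heavier, a pound of feathers or three pounds of steel?",  # modified version
]

for riddle in riddles:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": riddle}],
    )
    print(riddle)
    print(response.choices[0].message.content)
    print("---")
```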
A lot of supporters will counter “Well I just ask it to tell the truth, or tell it that it’s wrong, and it corrects itself”, but I’ve seen plenty of anecdotes in the opposite direction, with ChatGPT insisting that its hallucination was fact. It doesn’t have any concept of true or false.
The shame of it is that despite this limitation LLMs have very real practical uses that, much like cryptocurrencies and NFTs did to blockchain, are being undercut by hucksters.
Tesla has done the same thing with autonomous driving too. They claimed to be something they’re not (fanboys don’t @ me about semantics) and made the REAL thing less trusted and take even longer to come to market.
Drives me crazy.
Yup, and I hate that.
I really would like to one day just take road trips everywhere without having to actually drive.
I love that example. Microsoft’s Copilot (based on GPT-4) immediately doesn’t disappoint:
It’s annoying that for many things, like basic programming tasks, it manages to generate reasonable output that is good enough to goad people into trusting it, yet hallucinates very obviously wrong stuff or follows completely insane approaches on anything off the beaten path. Every other day, I have to spend an hour justifying to a coworker why I wrote code this way when the AI has given him another “great” suggestion, like opening a hidden window with a UI control to query a database instead of going through our ORM.
Something to do with Twitter and Elon was disinformation? I’m keeling over in shock.
Hope his ass goes to court over this.
I hope he has an aneurysm using a device to enlarge his penis or something.
Was this some glitch in the matrix, then? (For reference: attack happened on 2024-04-14)
I’m wondering the same thing. Is this coincidence?
To everyone that goes to “X” to get the “real”, unfiltered news, I hope you can see that it’s not that site anymore.
Yet, annoyingly, much of the press still uses it to disseminate news.
I understand journalism is in a rough spot these days and many are there against their will, but something needs to change abruptly. This exodus is too slow for democracy to survive ’24.
I’d argue it never was anything outside of pulling net celebs’ names from hats and claiming they were rapists and racists without evidence, and then having them get chased off the internet, destroying their careers in the process and in some cases causing suicides… Unless they actually did it, because then they were rich and could just buy good publicity or start an Alt-Right circle jerk where they can claim “Wokeness” did it.
Beware, terminally incompetent interns everywhere. Doing something incredibly damaging to your company over social media on your first day is officially a job that’s been taken by AI.
“Grok” sounds like the name of a really stupid orc from a D&D campaign.
In case you’re not familiar, https://en.m.wikipedia.org/wiki/Grok.
It’s somewhat common slang in hacker culture, which of course Elon is shitting all over as usual. It’s especially ironic since the word roughly means “deep or profound understanding”, which their AI has anything but.
Grok is from Stranger in a Strange Land.
Yup. It also got added to the Jargon File, which was an influential collection of hacker slang.
If there’s one thing that Elon is really good at, it’s taking obscure beloved nerd tidbits and then pigeon-shitting all over them.
Oh, what a surprise. Another AI spat out some more bullshit. I can’t wait until companies finally give up on trying to do everything with AI.
I can’t wait until companies finally give up on trying to do everything with AI.
I don’t think that will ever happen.
They’re accepting of AI-driven car accidents that cause harm. It’s all part of the learning / debugging process to them.
I don’t really understand this headline
The bot made it? So why was it promoted as trending?
It’s pretty simple, trending is based on… what’s trending among users.
Or, as the article explains for those who can’t comprehend what trending means:
Based on our observations, it appears that the topic started trending because of a sudden uptick of blue checkmark accounts (users who pay a monthly subscription to X for Premium features including the verification badge) spamming the same copy-and-paste misinformation about Iran attacking Israel. The curated posts provided by X were full of these verified accounts spreading this fake news alongside an unverified video depicting explosions.
Not to defend Elon here, but so what? Is it his job?
As CEO he is ultimately responsible for his platform. So yes, in the end it’s his responsibility. It’s why he gets paid the big bucks.
Nah, it’s a bit like government. It’s only his responsibility if it is no one else’s responsibility. Like, they can have the most corrupt cabinets, and most presidents do not resign/abdicate, whatever the word is.
Nah, it’s a bit like government.
No it’s not.
Ok, you just downvote and say no, but no explanation given. In my government, several cases of corruption arose during the last couple of years, and way more in the past. They affect high-ranking ministers, and yet the president does not resign. Same with companies: they get paid the most, do the least, claim it is because they have “lots of responsibilities”, but still never pay the price.
Corporations are completely authoritarian, while most governments are not, or at least not completely. If there really is a “rogue engineer”, Musk can very easily fire them. Even if there were, it’s still his responsibility to organize the company in such a way that this cannot happen, with people having oversight over other people.
He is very clearly failing to do any of that.
I assume that Twitter still has tons of managers and team leads that allowed this and have their own part of the responsibility. However, Musk is known to be a choleric with a mercurial temper, someone who makes grand public announcements and then pushes his companies to release stuff that isn’t nearly ready for production. Often it’s “do or get fired”.
So… an unshackled AI generating official posts, no human hired to curate the front page, headlines controlled through up-voting by trolls and foreign influence campaigns, all running unchecked in the name of “free speech” – that’s very much on brand for a Musk-run business, I’d say.
MUSK BAD RABBLE RABBLE RABBLE
“Somebody I don’t like said something bad about one of the world’s richest oligarchs, therefore all criticism of him is invalid”
The guy has enough money to protect himself from bad criticism and address narratives he doesn’t like; he doesn’t need sad pathetic losers defending him on the internet like he’s a defenseless baby.
You people will take a statement, then do mental gymnastics to make it fit your retarded narrative.
Holy fuck
I mean, he’s responsible for a pretty heinous thing in this instance… It’s not like people are pissing their pants over nothing.
If your company publishes an explicitly fabricated headline stating there was a missile attack that never happened, generated by AI, and then presents it to a ton of people on a major social media site, criticism is pretty deserved… (≖_≖ )