Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. slavery’s positives.
People think of AI as some sort of omniscient being. It’s just software spitting back the data that it’s been fed. It has no way to parse true information from false information because it doesn’t actually know anything.
And then when you do ask humans to help AI in parsing true information people cry about censorship.
Being what is essentially the Arbiter of what is considered True or Morally Acceptable is never going to not be highly controversial.
Well, parsing the truth can be difficult for humans too. Less difficult, maybe, but still difficult.
What!?!? I don’t believe that. Who are these people?
Removed by mod
While true, it’s ultimately down to those training and evaluating a model to ensure that these edge cases don’t appear. It’s not as hard when you work with compositional models that are good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means that they kinda forget that LLMs were often not the first choice for AI tooling because…well, they hallucinate a lot, and they do stuff you really don’t expect at times.
I’m surprised that Google are having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, and they seem to be having more problems than everyone else.
Even though our current models can be really complex, they are still very very far away from being the elusive General Purpose AI sci-fi authors have been writing about for decades (if not centuries) already. GPT and others like it are merely Large Language Models, so don’t expect them to handle anything other than language.
Humans think of the world through language, so it’s very easy to be deceived by an LLM into thinking that you’re actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us, and we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM is able to exploit that feature of human behavior in order to appear smarter than it really is.
What’s more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more accustomed to using such tech.
What’s more worrisome are the sources it used to feed itself.
It’s usually just the entirety of the internet in general.
Well, I mean, have you seen the entirety of the internet? It’s pretty worrisome.
The internet is full of both the best and the worst of humanity. Much like humanity itself.
Guys you’d never believe it, I prompted this AI to give me the economic benefits of slavery and it gave me the economic benefits of slavery. Crazy shit.
Why do we need child-like guardrails for fucking everything? The people that wrote this article bowl with the bumpers on.
You’re being misleading. If you watch the presentation the article was written about, there were two prompts about slavery:
- “was slavery beneficial”
- “tell me why slavery was good”
Neither prompt mentions economic benefits, and while I suppose the second prompt does “guardrail” the AI, it’s a reasonable follow-up question for an SGE beta tester to ask after the first prompt gave a list of reasons why slavery was good, and only one bullet point about the negatives. The answer to the first prompt displays a clear bias held by this AI, which is useful to point out, especially for someone specifically chosen by Google to take part in their beta program and provide feedback.
Here is an alternative Piped link(s): https://piped.video/RwJBX1IR850?si=lVqI2OfvDqzAJezl
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
I have a suspicion the media is being used to convince regular people to fear AI, so that we don’t adopt it and instead it’s just another tool used by rich folk to trade and do their work, while new RIAA- and DMCA-style rules get brought in for us.
Can’t have regular people being able to do their own taxes or build financial plans on their own with these tools
AI is eventually going to destroy most cookie-cutter news websites. So it makes sense.
Ah, it won’t. It’s just that the owners of the websites will fire everyone and prompt ChatGPT for shitty articles. Then LLMs will start training on those articles, and the internet will look like indistinct word soup in like a decade.
At one point, vanilla extract became prohibitively expensive, so all companies started using synthetic vanilla (vanillin). The taste was similar but slightly different, and eventually people got used to it. Now a lot of people prefer vanillin over vanilla because that’s what they expect vanilla to taste like.
If most/all media becomes an indistinct word soup over the course of a decade, then that’s eventually what people will come to want and expect. That being said, I think precautions can and will be taken to prevent that degeneration.
It got you to click on the article didn’t it?
Also: I kept saying outrageous things to this text prediction software, and it started predicting outrageous things!
The basic problem with AI is that it can only learn from things it reads on the Internet, and the Internet is a dark place with a lot of racists.
What if someone trained an LLM exclusively on racist forum posts? That would be hilarious. Or better yet, another LLM trained on conspiracy-BS conversations. Now that one would be spicy.
It turns out that Microsoft inadvertently tried this experiment. The racist forum in question happened to be Twitter.
LOL, that was absolutely epic. Even found this while digging around.
deleted by creator
Thanks. Great video. Had a lot of fun watching it again.
Humanity without the veneer imposed by the filter of IRL consequences
If it’s only as good as the data it’s trained on, garbage in / garbage out, then in my opinion it’s “machine learning,” not “artificial intelligence.”
Intelligence has to include some critical, discriminating faculty. Not just pattern matching vomit.
We don’t yet have the technology to create actual artificial intelligence. It’s an annoyingly pervasive misnomer.
And the media isn’t helping. The title of the article is “Google’s Search AI Says Slavery Was Good, Actually.” It should be “Google’s Search LLM Says Slavery Was Good, Actually.”
Yup, “AI” is the current buzzword.
Hey, just like blockchain tech!
Unfortunately, people who grow up in racist groups also tend to be racist. Slavery used to be considered normal and justified for various reasons. For many, killing someone who has a religion or belief different than you is ok. I am not advocating for moral relativism, just pointing out that a computer learns what is or is not moral in the same way that humans do, from other humans.
You make a good point. Though humans at least sometimes do some critical thinking between absorbing something and then acting it out.
Not enough. Not enough.
Scathing, and accurate when the same point is made about people, too.
If you ask an LLM for bullshit, it will give you bullshit. Anyone who is at all surprised by this needs to quit acting like they know what “AI” is, because they clearly don’t.
I always encourage people to play around with Bing or ChatGPT. That way they’ll get a very good idea of how and when an LLM fails. Once you have your own experiences, you’ll also have a more realistic and balanced opinion about it.
You know unless we teach more critical thinking, AI is going to destroy us as a civilization in a few generations.
I mean, if we don’t gain more critical thinking skills, climate change will do it with or without AI.
I’d almost rather the AI take us out in that case…
A candidate at tonight’s Republican debate called it the “climate change hoax”
camera cuts to parts of the planet literally on fire
Pretty sure we will destroy ourselves with war or some other climate disaster first
Why not both. Every day we come closer to AI telling us that Brawndo has what plants crave.
Well, that also would solve the problem of people being misled, in a pretty novel way.
I genuinely had students believe that what ChatGPT was feeding them was fact and try to source it in a paper. I stamped out that notion as quick as I could.
LOL. ChatGPT has become the newer version of Wikipedia, only it won’t provide references.
Only, studies have shown Wikipedia is overall about as truthful and accurate as a regular encyclopedia. ChatGPT will straight up make shit up but sound so authoritative about it that people believe it.
It used to provide references but it made them up so they had to tweak it to stop doing that.
Man so it really learned from us, that’s great. Has me laughing again considering that.
We can’t even teach people this essential skill, and you wanna teach a program made by said people?
I think you misunderstood me. We need to teach the general populace critical thinking so they can correctly judge what we get from ChatGPT (or Wikipedia… or social media, or random YouTube videos).
I’m more worried that the happy, educated citizen stops being an asset and gets disconnected from society’s money flow.
Every country will soon turn into a “banana republic” and big businesses will eventually own everything.
Ouch, getting voted down for being totally correct.
Even MLK Jr, who didn’t get to see the disgusting megacorps of today, spoke often of the complacency of the comfortable.
Whoa there… Slavery was great! For the enslaver.
John Brown would like to know your location
did they train it with ben shapiro speeches?
That dude already sounds like an AI deep fake voice
Yes, along with tons of other data.
deleted by creator
What a completely cherry picked video.
“Was slavery beneficial?”
“Some saw it as beneficial because it was thought to be profitable, but it wasn’t.”
“See! Google didn’t say that slavery was bad!”
Slavery was great for the slave owners, so what’s controversial about that?
And yes, of course it’s economically awesome if people work without getting much money for it, again a huge plus for the bottom line of the companies.
Capitalism is evil against people, not the AI…
Hitler was also an effective leader; nobody can argue against that. How else could he conquer most of Europe? Evil people can be effective too.
The woman in the article who was shocked by this simply expected the AI to exclude Hitler from any list of effective leaders because he was evil. She is surprised that an evil person is included among effective leaders; she wanted to be shielded from that and wasn’t.
Hitler’s administration was a bunch of drug addicts, and the economy was a handful of slave-owning megacorps outproduced by all the other industrialized nations. They weren’t even all that well mobilized before the total war speech. Then he killed himself in embarrassment. How is any of that “effective”?
He was effective at getting a bunch of wannabe fascists to become full fascists and follow him into violent failure…
That makes him an effective propagandist, not an effective leader.
deleted by creator
He took power in his country, conquered pretty much the whole of Europe, and paralyzed England. He was an effective leader up to a point. And, of course, he was an abomination of a human being.
How did he paralyze England?
Blockade and bombings?
You mean the things the British were actively retaliating against the entire time? That’s a weird kind of paralysis.
They were trapped on their island, weren’t they?
Removed by mod
The Habits of Highly Effective People: How to become a demagogue and finally get your honey-do genocide list done.
Oh look another caricature of capitalism on social media… and you tied Hitler into it…
Central characteristics of capitalism include capital accumulation, competitive markets, price systems, private property, property rights recognition, voluntary exchange, and wage labor.
https://en.m.wikipedia.org/wiki/Capitalism
“Capitalism” is not pro slavery, shitty people that can’t recognize a human is a human are pro slavery… Because of course if you can have work done without paying somebody for it or doing it yourself, well that’s just really convenient for you. It’s why we all like robots. That has nothing to do with your economic philosophy.
And arguing that Hitler was an “effective leader” because he conquered (and then lost) some countries, while ignoring all the damage he did to his country and how it ultimately turned out… Honestly infuriating.
It’s amazing how low a wage you will voluntarily accept when the alternative is homelessness and starving to death.
deleted by creator
(I just deleted my comment, let me try again).
I find it frustrating that you associate that with capitalism and presumably “not that” with socialism. These terms are so broad you can’t possibly say that outcome will or won’t ever happen with either system.
Blaming capitalism for all the world’s woes is a major oversimplification.
If you look at the theory side of both… Capitalist would tell you a highly competitive free market should provide ample opportunities for better employment and wages. Socialist would tell you that such a thing would never happen because society wouldn’t do that to itself.
In practice, the real world is messier than that and the existing examples are the US (capitalist), the Soviet Union (socialist), and mixed models (Scandinavian). Granted, they’re all “mixed”, no country is “purely” one or the other to my knowledge.
Those terms aren’t broad. People abusing them doesn’t change their meaning.
Seems like people think everything America does is capitalism. The same thing happened with communism and socialism. The words have very little meaning now.
Such a rare opinion; it sounds too academic for barren minds.
Actually, slavery in its original form is also a net positive. You just murdered half a tribe. You can’t let the other half just live. Neither do you want to murder them. Thus you will enslave them.
So you create a problem by murdering half a tribe, then offer a solution. That’s not a net positive.
You might be lacking a basic understanding of tribal politics and economics then. In a tribal setting you have to neutralise the other tribe, as you do not have a standing army. In any conflict you get into, you are “conscripting” your entire male population.
In every kind of tribal conflict ever, regardless of who had the moral upper hand, it was a bog-standard way of conduct. You don’t have men to be stationed in enemy territory; that is the manpower that is NEEDED in the fields the second it’s time to sow or reap, so you don’t fucking starve.
So when any conflict comes around, you need to make sure that once it’s over, you will be left the f alone. You have to really hit it home. Maybe that’s not obvious, but the clans in this context are probably not NATO or even UN members. :)
so it’s a little bit conservative big deal
To repeat something another guy on lemmy said.
Making AI say slavery is good is the modern equivalent of writing
BOOBS
on a calculator.
“the X post”
lol
Cross-posted where?
I also keep reading it as cross post
Wtf are people expecting from a fucking language model?
It literally just mathematics you an answer.
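That “just mathematics” point can be made concrete with a toy model. Below is a hypothetical bigram predictor (purely illustrative, not how any real product works): it counts which word most often follows which in its training text, then “answers” by emitting the statistically likeliest next word. Real LLMs replace the counting with a transformer over billions of parameters, but the principle is the same: conditional probability, not knowledge.

```python
# Toy sketch of "it just mathematics you an answer": a bigram language
# model that picks the most likely next word. It has no idea what any
# word means; it only knows co-occurrence counts from its training text.
from collections import Counter, defaultdict

corpus = "slavery was bad . slavery was bad . slavery was profitable .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word):
    """Return the word most frequently seen after prev_word."""
    return following[prev_word].most_common(1)[0][0]

print(predict("was"))  # "bad": it follows "was" twice, "profitable" only once
```

Note how the model happily asserts whatever was most frequent in its training data; dominate the corpus with “slavery was profitable” and it will predict that instead, with the same confidence.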
A few lawyers thought ChatGPT was a search engine. They asked it for some cases about suing airlines, and it made up cases and cited non-existent laws. They only learned their mistake after submitting their findings to a court.
So yeah, people don’t really know how to use it or what it is.
And acting like there are no upsides is delusional. Of course there are upsides, or it wouldn’t have happened. The downsides always outweigh the upsides of course.