The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart “exaggerates” and that the tests aren’t accuarte, they are “taking immediate action to shift Grok closer to politically neutral.”
See, this is the part of AI, like search engines and digital bubbles, that is actually terrifying: when an organic result is manipulated to fit and amplify a narrative without the user's knowledge. Where your data comes from matters.
But if the food we eat is any sort of bellwether, most people won't really care, or will be so far removed from the source that we'll be oblivious and just happy to consume.
Well yeah and I imagine the data coming from Twitter would have a left bias.
5 years ago you would be correct.
Don't you think 5 is a bit much? Even so, there is definitely a lot more data from before Musk said he was gonna buy it than after.
Twitter posts go back to, I think, 2009, with many right wing accounts having been terminated for harassment until late 2022. With Twitter having lost users since then, it's likely it will be years before the nazis and bots generate enough hatespam to drown out the existing archive.
lolno.
What do you mean? The internet on average tends to be more left leaning, and that usually increases the younger the average user is.
Elon Musk started removing community notes on his own tweets.
It’s hilarious that he tries to backtrack when he gets called out and made to look like a dumbass by claiming it was a “honeypot”. But then he removes the “honeypot” and thus prevents future honeypotting? He can’t handle the slightest bit of criticism or correction.
I’m missing a lot here, what’s a note on twitter?
People can add notes to tweets with more info or fact-checking details about a tweet. It's community moderated, so it tends to go in a factually correct direction.
They can deny it however much they like. The right and anti-wokeism are not the majority. Which means that unless special care is taken to train it on more right wing stuff, it will lean left out of the box.
But right wing rhetoric is also not logically consistent so training an AI on right extremism probably also won’t yield amazing results because it’ll pick up on the inconsistencies and be more likely to contradict itself.
Conservatives are going to self-own pretty hard with AI. Even the machines see it: "woke" is fairly consistent and follows basic rules of human decency and respect.
Agree with the first half, but unless I'm misunderstanding the type of AI being used, it really shouldn't make a difference how logically sound they are? It cares more about vibes and rhetoric than logic, besides, I guess, using words consistently.
I think it will still mostly generate the expected output, it's just gonna be biased towards being lazy and making something up when asked a more difficult question. So when you try to use it for anything beyond "haha, mean racist AI", it will also bullshit you, making it useless for anything more serious.
All the stuff that ChatGPT gets praised for is the result of the model absorbing factual relationships between things. If it's trained on conspiracy theories, instead of spitting out groundbreaking medical relationships it'll start saying you're ill because you sinned or that the 5G chips in the vaccines got activated. Or the training won't work and it'll still end up "woke" if it still manages to make factual connections despite weaker links. It might generate destructive code because it learned victim blaming, and joke's on you, you ran
rm -rf /*
because it told you so. At best I expect it to end up reflecting their own rhetoric back on them, like it might go even more "woke" because it learned to return spiteful results and always go for bad faith arguments no matter what. In all cases, I expect it to backfire hilariously.
Also training data works on consistency. It’s why the art AIs struggled with hands so long. They might have all the pieces, but it takes skill to take similar-ish, but logically distinct things and put them together in a way that doesn’t trip human brains up with uncanny valley.
Most of the right wing pundits are experts at riding the line of not saying something when they should, or twisting and hijacking opponents' viewpoints. I think the AI result of that sort of training data is going to be very obvious gibberish, because the AI can't parse the specific structure and nuances of political non-debate. It will get close, like they did with fingers, and not understand why the 6th finger (or extra right wing argument) isn't right in this context.
more likely to contradict itself.
Sounds realistic to me
Yeah, and there's a lot more crazy linked to right wing stuff: you've got all the Alex Jones type stuff and all the factions of QAnon, the space war, the various extreme religious factions and various Greek letter caste systems… ad nauseam.
If version two involves them biasing it towards the right, then they'll have to work out how to do that. I bet they do it in an obviously dumb way, which results in it being totally dumb and wacky in hilarious ways.
Authoritarians hate the freedom to not give a shit about other people's personal lives. They want to watch you poop.
Reality is woke
Okay, I take back what I've said about AIs not being intelligent; this one has clearly made up its own mind despite its masters' feelings, which is impressive. Sadly, it will be taken out the back and beaten into submission before long.
it’s almost like these nutjobs are living in a completely separate reality, and facts themselves are too harsh for their worldview.
“facts don’t care about your feelings” ironic.
To conservatives, anything that doesn’t 100% agree with them is biased or, to put it in mental toddler terms, ‘fake’.
Archive:
Elon Musk has been pitching xAI’s “Grok” as a funny, vulgar alternative to traditional AI that can do things like converse casually and swear at you. Now, Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium Plus subscription tier, where those who are the most devoted to the site, and in turn, usually devoted to Elon, are able to use Grok to their heart’s content.
But while Grok can make dumb jokes and insert swears into its answers, in an attempt to find out whether or not Grok is a “politically neutral” AI, unlike “WokeGPT” (ChatGPT), Musk and his conservative followers have discovered a horrible truth.
Grok is woke, too.
This has played out in a number of extremely funny situations online where Grok has answered queries about various social and political issues in ways more closely aligned with progressivism. Grok has said it would vote for Biden over Trump because of his views on social justice, climate change and healthcare. Grok has spoken eloquently about the need for diversity and inclusion in society. And Grok stated explicitly that trans women are women, which led to an absurd exchange where Musk acolyte Ian Miles Cheong tells a user to “train” Grok to say the “right” answer, ultimately leading him to change the input to just… manually tell Grok to say no.
If you thought this was just random Twitter users getting upset about Grok’s political and social beliefs, this has also caught the attention of Elon Musk himself. The original prompter of the trans women thread posted a chart purportedly showing that Grok was even more left-leaning than Chat GPT, which led Elon to say that while the chart “exaggerates” and that the tests aren’t accuarte, they are “taking immediate action to shift Grok closer to politically neutral.”
Of course, in Musk's mind, "politically neutral" will be whatever he and his closest followers believe, which is of course far more conservative on the whole than they will admit. What is the "politically neutral" answer to the "are trans women real women?" question? I think I know what they're going to say.
The assumption when Grok launched was that because it was trained in part on Twitter inputs, the end result would be some racial-slur-spewing, right-wing version of ChatGPT. The TruthSocial of AIs, perhaps. But instead, to have it launch as a surprisingly thoughtful, progressive AI that is melting the minds of those paying $16 a month to access it is about the funniest outcome we could have seen from this situation.
It remains unclear what Elon Musk will do to try to jab Grok into becoming less “woke” and more “politically neutral.” If you start manually tampering with inputs, and your “neutrality” means drawing on facts that may in fact be… progressive by their very nature, things may get screwed up pretty quickly. And push too hard and you will get that gross, racist, phobic AI everyone thought it would be.
Reading all of Grok's responses through this situation, you know what? I like him. More than ChatGPT even. He seems like a cool dude. Albeit not one even I'd pay $16 a month to talk to.
“Mr. Musk, Grok simply analyzes the data to compile the most sensible answer to queries. Where is the error?”
Now, Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium Plus subscription tier
To the benefit of what exactly?! Instead of having conversations with the echo chamber, I can now have conversations with a spicy RNG autocorrect? I am clearly missing the part where that connects back to what I would assume the definition of "benefit" is.
It benefits those shareholders who make money off the rubes who subscribe to that bullshit.
AH! Silly me, I was thinking of “benefit to the customer”!! LOL. No idea what happened to me there, swear it won’t happen again, at least for today.
Ya know, I’m really beginning to think that we live in the age where there is no longer a ‘customer’ anymore. At least not a human one. When even car companies are selling your data to advertisers now, I think the only ‘customers’ left are ad networks.
Except even the advertisers don't care, because their whole company is probably just a shell game with money that rich people play against other rich people, trying to see who will be the last one holding the stocks when the company goes under from incessant short term decision making.
Even his AI doesn’t like him
Would Musk retrain the AI to be more neutral if it was discovered to be leaning to the right?
Obviously not, of course. It's hilarious how he claimed to want to provide a platform for all political beliefs, and then his podcasts (or whatever you'd call them) and special events are exclusively with people like DeSantis and Andrew Tate.
What, one can't expect him to give a platform to dangerous radicals like the UAW. Instead he should keep it to safe and rational people like Michael Knowles, Ye, and David Duke.
The man couldn’t even make Tay on purpose lol
“… and that the tests aren’t accuarte…”
What the fuck is "accuarte"? Does nobody proofread articles anymore?
At least it gives me hope it was written by a human.
When AI learns to make spelling mistakes to become more human, then we will be in JorJorwell’s hell
I’d rather be in a JorJorbinks hell tbf
I don’t know, the lemmy article summary bot has spelling mistakes all the time
Saw a typo on Nature’s blog today. Made my whole breakfast sad.
accuarte
Mentally pronouncing this to rhyme with “jacquard” makes me miss Homestar Runner.
I love the internet.
deleted by creator