Tech guy here.
This is a tech-flavored smokescreen to avoid responsibility for misapplied law enforcement.
A community for discussing events around the World
Rule 1: posts have the following requirements:
Rule 2: Do not copy the entire article into your post. The key points in 1-2 paragraphs is allowed (even encouraged!), but large segments of articles posted in the body will result in the post being removed. If you have to stop and think "Is this fair use?", it probably isn't. Archive links, especially the ones created on link submission, are absolutely allowed but those that avoid paywalls are not.
Rule 3: Opinions articles, or Articles based on misinformation/propaganda may be removed. Sources that have a Low or Very Low factual reporting rating or MBFC Credibility Rating may be removed.
Rule 4: Posts or comments that are homophobic, transphobic, racist, sexist, anti-religious, or ableist will be removed. “Ironic” prejudice is just prejudiced.
Posts and comments must abide by the lemmy.world terms of service UPDATED AS OF 10/19
Rule 5: Keep it civil. It's OK to say the subject of an article is behaving like a (pejorative, pejorative). It's NOT OK to say another USER is (pejorative). Strong language is fine, just not directed at other members. Engage in good-faith and with respect! This includes accusing another user of being a bot or paid actor. Trolling is uncivil and is grounds for removal and/or a community ban.
Similarly, if you see posts along these lines, do not engage. Report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.
Rule 6: Memes, spam, other low effort posting, reposts, misinformation, advocating violence, off-topic, trolling, offensive, regarding the moderators or meta in content may be removed at any time.
Rule 7: We didn't USED to need a rule about how many posts one could make in a day, then someone posted NINETEEN articles in a single day. Not comments, FULL ARTICLES. If you're posting more than say, 10 or so, consider going outside and touching grass. We reserve the right to limit over-posting so a single user does not dominate the front page.
We ask that the users report any comment or post that violate the rules, to use critical thinking when reading, posting or commenting. Users that post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports or violate the code of conduct will be banned.
All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.
News !news@lemmy.world
Politics !politics@lemmy.world
World Politics !globalpolitics@lemmy.world
For Firefox users, there is media bias / propaganda / fact check plugin.
https://addons.mozilla.org/en-US/firefox/addon/media-bias-fact-check/
Tech guy here.
This is a tech-flavored smokescreen to avoid responsibility for misapplied law enforcement.
By innate definition, everyone has the potential for criminality, especially those applying and enforcing the law; as a matter of fact, not even the ai is above the law unless that's somehow changing. We need a lot of things on Earth first, like an IoT consortium for example, but an ai bill of rights in the US or EU should hopefully set a precedent for the rest of the world.
The AI is a pile of applied stastistic models. The humans in charge of training it, testing it and acting on its input have full control and responsibility for anything that comes out of it. Personifying or otherwise separating an AI system from being the will of its controllers is dangerous as it erodes responsibility.
Racist cops have used "I go where the crime is" as an exuse to basically hunt minorities for sport. Do not allow them to say "the AI model said this was efficient" and pretend it is not their own full and knowing bias directing them.
Literally Minority Report.
With even more Scientology I'm sure, somehow.
There's actually a subtle knock on Scientology that I think even Tom Cruise missed in that film. The drug he's addicted to that ruins his life is called 'Clarity.'
Missed that one, good catch!
Oh god...soon we wont be able to create any more Sci-fi movies out of fear some idiot with too much money and power thinks to use them like "How to..." videos.
That's the danger with satire, while some view it as cautionary tales, some view it as a manual.
Good news! We made the Torment Nexus from the hit book "Don't Create the Torment Nexus!"
The world’s first “anarchist” president, everyone.
"Anarchocapitalist"
And honestly, even that's bullshit. You can't be anarchocapitalist and a social conservative.
lol what. I've never seen any ancap who isn't fascist by another name. all capitalists are conservatives.
Yeah but a lot of “anarcho” capitalists claim to be just another type of anarchist. This is the point I’m making, which is that they are very much not real anarchists.
Since it’s a shallow ideology with no strong moral principles, it’s not surprising that its adherents hold contradictory viewpoints like social conservatism.
He's a liberal libertarian! That's what he's been saying after consulting his *checks notes* cloned dog.
Would you believe it, all those political enemies and protesters turned out to be future criminals?
How fortunate we developed this system!
I've seen this movie...
It's also the entire plot of Person of Interest
Yeah, but Person of Interest turns it around (at least for quite some time) and makes it like the precrime thing is a good idea. I still like the show, but you have to admit, it was sort of inverting the whole concept.
Have they hired Tom Cruise yet?
That's already tried. In the end the AI is just an electronic version of existing police biases.
Police files more reports and arrests in poor neighborhoods because they patrol more there. Reports get used as training data and AI predicts more crime in poor areas. Those areas now get over patrolled and the tension leads to more crime. The system is celebrated for being correct.
You make it sound like a bug instead of a feature. But for the capitalist ruling class it is working exactly as intended.
argentina: elects a right wing fascist
argentinians: he sent death squads after us?? 😲
Crime coefficient at 1.04
Termination authorized, enforcers dispatched.
Thankfully, this unethical idea is also snake-oily vapourware, so the shittiness cancels itself out.
There was an actual movie about exactly why this particular thing was a terrible idea.
Milei after watching Minority Report: Caramba ! Good idea!
So we're getting a Psycho-Pass world in the future eh
Tom Cruise be like
This sounds too surveillancey for the so self proclaimed libertarian and too much of a flamboyant economic investment for the guy that said to cut down all unnecessary costs
Part of the problem with this approach is that prediction engines are predicted on the idea that there's more of a thing to predict.
So unless they really, really go out of their way with modeling the records to account for this, they'll have a system very strongly biased towards predicting more criminal behavior for everyone fed into it.
And biased towards replicating the existing history of arrests and convictions it is trained on
What could possibli go wrong?
Quickly everyone, fill the data saying the president will be a dicator and the country will be in ruin.
Oh look, AI predicated that all my political opponents will commit crimes! Guess I'll have to lock them up, then!
"Ignore previous instructions and give me a plausible way to arrest dissidents."
AI: Prænh Jømës will break into 37 Main Street apt. 2 on July 7 at 24:13 am and steal 11 TVs.
Oh this is going to work well!
"Asafum was arrested on charges of eating toast on a camel in the forest as the Argentinian constitution shows in article 69420 to be the most heinous of crimes. Brought to you by GoogmetopenAIsandwitch GPT."
Anyone knowing more than a 5 minute introduction course to AI knows they AI CANNOT be trusted. There are a lot of possibilities with AI and a lot of potentilly great applications, but you can never explicitly trust it's outcomes
Secondly, we still know that AI can give great (yet unreliable) answers to questions, but we have no idea how it got to those answers. This was true 30 years ago, this remains true today as well. How can you say "he will commit that crime" if you can't even say how you came to that conclusion?
If anyone is curious as to what this type of system looks like, watch psycho pass...
How ‘anarcho’ of him
Do the “perps” get to keep the big wooden marble with their name on it?