this post was submitted on 13 Jun 2024
289 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago

As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory "fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them."

top 50 comments
[–] lvxferre@mander.xyz 52 points 4 months ago (2 children)

For writers, that "no AI" label is not just the equivalent of "100% organic"; it's also the equivalent of saying "we don't let the village idiot write our texts when he's drunk".

Because, even as we shed all the paranoia surrounding A"I", those text generators state things that are wrong without a single shadow of doubt.

[–] Zaktor@sopuli.xyz 18 points 4 months ago (2 children)

Sometimes. Sometimes it's more accurate than anyone in the village. And it'll reliably keep getting better. People relying on "AI is wrong sometimes" as the core plank of their opposition aren't going to have a lot of runway before it's so much less error-prone than people that the complaint is irrelevant.

The jobs and the plagiarism aspects are real and damaging and won't be solved with innovation. The "AI is dumb" complaint is already only selectively true, and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.

[–] lvxferre@mander.xyz 22 points 4 months ago (1 children)

Sometimes. Sometimes it’s more accurate than anyone in the village.

So is the village idiot. Or a tarot reader. Or a coin toss. And you'd still be a fool if your writing relied on the output of those three. Or of an LLM bot.

And it’ll be reliably getting better.

You're distorting the discussion from "now" to "the future", and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.

[–] Zaktor@sopuli.xyz 15 points 4 months ago

You're lovely. Don't think I need to see anything you write ever again.

[–] Ilandar@aussie.zone 10 points 4 months ago (1 children)

Yes, I always get the feeling that a lot of these militant AI sceptics are pretty clueless about where the technology is and the rate at which it is improving. They really owe it to themselves to learn as much as they can so they can better understand where the technology is heading and what the best form of opposition will be in the future. As you say, relying on "haha Google made a funny" isn't going to cut it forever.

[–] Zaktor@sopuli.xyz 11 points 4 months ago (1 children)

Yeah. AI making images with six fingers was amusing, but people glommed onto it like it was the savior of the art world. "Human artists are superior because they can count fingers!" Except then the models updated and it wasn't as much of a problem anymore. It felt good, but it was just a pleasant illusion for people with very real reasons to fear the tech.

None of these errors are inherent to the technology; they're just bugs to correct, and there's plenty of money and attention focused on fixing bugs. What we need is more attention focused on either preparing our economies to handle this shock or greatly strengthening copyright enforcement (to stall development). A label like the one this post is about is a good step, but given that artistic professions already weren't particularly safe and "organic" labelling only has a modest impact on consumer choice, we're going to need more.

[–] sonori@beehaw.org 12 points 4 months ago (7 children)

Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on the frequency with which those words appeared in the training data, is a fundamental limitation of the technology.

So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence. Instead, in order to get coherent output, the system must be fed training data that closely mirrors the context; this is why groups like OpenAI have met with so much success by simplifying the algorithm while progressively scraping more and more of the internet into said systems.

I would argue that a similar inherent technological limitation also applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the parts it has regurgitated of the work of the tens of thousands of people who do those things effortlessly.

This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits and not just the merits of the material they were created from; and nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing along that path.

Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on with further refinement, but it is important to understand that this is not a pathway to any actual conceptual understanding of the subject.
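The "stringing together the next most likely word" mechanism described above can be sketched with a toy bigram model. (The corpus and names below are invented for illustration; real LLMs learn transformer weights over subword tokens rather than counting raw word frequencies, but the sampling step is conceptually similar.)

```python
import random
from collections import defaultdict

# Made-up training text for the example.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a "sentence" word by word, with no regard for meaning.
random.seed(0)
sentence = ["the"]
for _ in range(5):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

The model only ever reproduces word transitions that appeared in its training data, which is the point being made: fluency comes from frequency, not understanding.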

[–] Zaktor@sopuli.xyz 8 points 4 months ago (2 children)

Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on the frequency with which those words appeared in the training data, is a fundamental limitation of the technology.

So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence.

This is a misunderstanding of what "probabilistic word choice" can actually accomplish and of the non-probabilistic components that are incorporated into these systems. People also make mistakes and don't actually "know" the meaning of words.

The belief system that humans have special cognizance unlearnable by observation is just mysticism.

[–] CanadaPlus@lemmy.sdf.org 7 points 4 months ago

Occasionally. If you aren't even proofreading it, that's dumb, but it can do a lot of heavy lifting in collaboration with a real worker.

For coders, there's actually hard data on that: you're worth about a coder and a half when using Copilot or similar.

[–] CanadaPlus@lemmy.sdf.org 35 points 4 months ago (7 children)

It will fail. Downvote me if you must, but AI-generated erotica is just as here to stay as machine-woven textiles.

[–] Zaktor@sopuli.xyz 16 points 4 months ago* (last edited 4 months ago) (5 children)

This is a post on the Beehaw server. They don't propagate downvotes.

[–] darkphotonstudio@beehaw.org 32 points 4 months ago (2 children)

Knee-jerk stupidity. Not all AI development revolves around "tech bros".

[–] echodot@feddit.uk 11 points 4 months ago (10 children)

I've never understood the supposed problem. Either AI is a gimmick, in which case you don't need to worry about it. Or it's real, in which case no one's going to use it to automate art, don't worry.

[–] Kroxx@lemm.ee 22 points 4 months ago

The problem I have is when a gimmick is forced on me

[–] technocrit@lemmy.dbzer0.com 12 points 4 months ago (1 children)

Or it's both, depending on the wide variety of actually unintelligent things labelled as "AI".

[–] Drewelite@lemmynsfw.com 9 points 4 months ago* (last edited 4 months ago) (7 children)

They should go ahead and be against Photoshop and, well, computers altogether while they're at it. In fact, spray paint is cheating too. You know how long it takes to make a proper brush stroke? No-skill numpties just pressing a button; they don't know what real art is!

[–] theangriestbird@beehaw.org 32 points 4 months ago* (last edited 4 months ago) (4 children)

I hate how the Atlantic will publish well-thought-out pieces like this, and then turn around and publish op-eds like this that are practically drooling with lust for AI.

[–] Ilandar@aussie.zone 10 points 4 months ago

From the article:

The Atlantic has a corporate partnership with OpenAI. The editorial division of The Atlantic operates independently from the business division.

[–] mspencer712@programming.dev 29 points 4 months ago

Plagiarism should be part of the conversation here. Credit and context both matter.

[–] Sabata11792@ani.social 27 points 4 months ago (3 children)

Pandora's box cannot be closed.

[–] Muffi@programming.dev 19 points 4 months ago (2 children)

I don't think this is about trying to close it, but rather about putting a big fat sticker on everything that comes out of the box, so consumers can actually make informed decisions.

[–] Marsupial@quokk.au 22 points 4 months ago (3 children)

The good thing about this is that it's self-selecting: all the Luddites who refuse to use AI will find themselves at a disadvantage, just the same as refusing to use a computer isn't doing anyone any favours.

[–] SkyNTP@lemmy.ml 39 points 4 months ago (4 children)

The benefit of AI is overblown for a majority of product tiers. Remember how everything was supposed to be blockchain? And metaverse? And Web 3.0? And dot-com? This is just the next tech trend for dumb VCs to throw money at.

[–] CanadaPlus@lemmy.sdf.org 22 points 4 months ago

Yes, it's very hyped and being overused. Eventually the bullshit artists will move on to the next buzzword, though, and then there's plenty of tasks it is very good at where it will continue to grow.

[–] Zaktor@sopuli.xyz 7 points 4 months ago (2 children)

Except those things didn't really solve any problems. Well, dotcom did, but that actually changed our society.

AI isn't vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like solved, with a real monetary benefit wherever they manage it. It's not going to just go away.

[–] BurningRiver@beehaw.org 13 points 4 months ago* (last edited 4 months ago) (1 children)

Can you trust whatever AI you use, implicitly? I already know the answer, but I really want to hear people say it. These AI hype men are seriously promising us capabilities that may appear down the road, without actually demonstrating use cases that are relevant today. “Some day it may do this, or that”. Enough already, it’s bullshit.

[–] PeteBauxigeg@lemm.ee 7 points 4 months ago (7 children)

ChatGPT didn't begin 18 months ago; the research it originates from has been ongoing for years. How old is AlexNet?

[–] MayonnaiseArch@beehaw.org 36 points 4 months ago (1 children)

Luddites were not idiots; they were people who understood that the only use of the tech of their time was to fuck them. Like this complete garbage is going to be used to fuck people. Nobody is opposed to having tools, we just don't like Musk fanboys blowing spit bubbles while trying to get peepee hard.

[–] Marsupial@quokk.au 12 points 4 months ago (1 children)

If capitalism is shit, you attack capitalism, not a technology.

All the misplaced rage and wasted effort.

[–] Rozauhtuno@lemmy.blahaj.zone 12 points 4 months ago (5 children)

The good thing about this is that it's self-selecting: all the technobros who obsess over AI will find themselves bankrupted, like when the blockchain bubble burst.

[–] echodot@feddit.uk 9 points 4 months ago (4 children)

The blockchain bubble burst because everyone with a brain could see from the start that it wasn't really a useful technology. AI actually does have some advantages, so it won't go completely bust as long as its promoters don't go completely mad and start declaring that it can do things it can't do.

[–] Rozauhtuno@lemmy.blahaj.zone 11 points 4 months ago (4 children)

they won’t go completely bust as long as they don’t go completely mad and start declaring that it can do things it can’t do.

Which is exactly what's happening.

[–] teawrecks@sopuli.xyz 18 points 4 months ago* (last edited 4 months ago) (2 children)

So this could go one of two ways, I think:

  1. the "no AI" seal is self-ascribed using the honor system and over time enough studios just lie about it or walk the line closely enough that it loses all meaning and people disregard it entirely. Or,
  2. getting such a seal requires 3rd party auditing, further increasing the cost to run a studio relative to their competition, on top of not leveraging AI, resulting in those studios going out of business.
[–] lvxferre@mander.xyz 15 points 4 months ago* (last edited 4 months ago) (4 children)

3. If you lie about it and get caught people will correctly call you a liar, ridicule you, and you lose trust. Trust is essential for content creators, so you're spelling your doom. And if you find a way to lie without getting caught, you aren't part of the problem anyway.

[–] Kissaki@beehaw.org 14 points 4 months ago (1 children)

the $80 billion start-up

lol, can you still call that a start-up?

[–] Mac@mander.xyz 13 points 4 months ago (1 children)

we should punish tech companies for ruining the internet with AI.

[–] darkphotonstudio@beehaw.org 7 points 4 months ago

The internet was ruined before AI.

[–] cupcakezealot@lemmy.blahaj.zone 8 points 4 months ago

good to know that the anti-GMO weirdos have found another cause to rally around.

[–] umbrella@lemmy.ml 8 points 4 months ago (2 children)

the solution here is not being Luddites, but taking the tech for ourselves, not putting it into the hands of some stupid techbro who only wants to see the line go up.

[–] TheFriar@lemm.ee 9 points 4 months ago* (last edited 4 months ago) (1 children)

But that’s the point. It’s already in their hands. There is no ethical and helpful application of AI that doesn’t go hand in hand with these assholes having mostly a monopoly on it. Us using it for ourselves doesn’t take it out of their hands. Yes, you can self-host your own and make it helpful in theory, but the truth is this is a tool being weaponized by capitalists to steal more data and amass more wealth and power. This technology is inextricable from the timeline we’re stuck in: vulture capitalism in its latest, most hostile stage. This shit, in this time, is only a detriment to everyone but the tech bros and their data harvesting and “disrupting” (mostly of the order that allowed those “less skilled” workers among us to survive, albeit just barely). I’m all for less work, in theory. But this iteration of “less work” is tied only to more suffering, and to moving from pointless jobs to assisting the AI that took over those pointless jobs to increase profits. This can’t lead to utopia. Because capitalism.

[–] ultratiem@lemmy.ca 7 points 4 months ago (1 children)

I’ve just started hiding penises in my work. Your move AI.
