TLDR: if the rich succeed in building AI systems that cater fully to their needs through the whole supply chain (i.e. AI can mine and process resources into what they want with no humans needed), then the rich will have no reason to keep anyone else around and can just massacre all the poors.


Recently, the r/singularity subreddit has had several posts that show some class consciousness, despite its mostly-techbro atmosphere.

The post I’ve linked and reproduced below states a concern I also have with AI:

If we assume that we reach AGI, maybe even super intelligence, then we can expect a lot of human jobs will suddenly become obsolete.

First it could be white collar and tech jobs. Then when robotics catches up, manual labor will soon follow. Pretty soon every conceivable position a human once had can now be taken over by a machine.

Humans are officially obsolete.

What’s really chilling is that, while humans in general will no longer be a necessity to run a government or society, the very few billionaires at the top that helped bring this AI to existence will be the ones who control it - and no longer need anyone else. No military personnel, teachers, doctors, lawyers, bureaucrats, engineers, no one.

Why should countries exist filled with people when people are no longer needed to farm crops, serve in the military, build infrastructure, or anything else?

I would like to believe that if all of humanity’s needs can now always be fulfilled (but controlled by a very, very few), those few would see the benefit in making sure everyone lives a happy and fulfilling life.

The truth is though, the few at the top will likely leave everyone else to fend for themselves the second their walled garden is in place.

As the years pass, eventually AI becomes fully self-sustaining - from sourcing its own raw materials, to maintaining and improving its own systems - until even the AI does not need a single human anymore (not that many are left at that point).

Granted, it could take a long while for this scenario to occur (if ever), but the way things are shaking out, it’s looking more and more unlikely that we’ll ever get to a utopia where no one works unless they want to and everyone’s needs are met. It’s just not possible if the people in charge are greedy, backstabbing, corporate sociopaths that only play nice because they have to at the moment.

I find their argument quite valid, lacking only an explicit mention of ‘capitalism’.

Once the rich have full-supply-chain AI, we wouldn’t be able to revolt even if we wanted to. The robotic police force controlled by the rich can just massacre all the poors.

This puts a hard time limit on when revolution needs to occur. After that I guess we need China’s J-36s to save the American proletariat.

  • 矛⋅盾@lemmygrad.ml

    imo that kind of ‘analysis’ is frankly very steeped in neoliberalism-derivative defeatism and doomerism (‘end of history’ and other such trite epistemology): unable to imagine any exits or alterations from The One Inevitability of Capitalism. I’m not familiar with that subreddit but “singularity” does sound like how followers of that camp (and adjacent antiwork types) treat capitalism, like it’s a black hole, an all-powerful force of nature (not dissimilar to how bourgeois society talks about the ‘hand of the market’), instead of a system that is built and maintained by humans. A million other things can and probably will become more important factors to the status of humankind before any of that technofuturist (doesn’t matter if it’s ‘good’ or ‘evil’) “prophet”-eering would come to pass.

    Past revolutionaries have succeeded in the face of seemingly impossible odds; present-day people oppressed by US imperialism have resisted with greater obstacles. Guerillas made sure that even if their enemies render the land toxic, fill it with mines, or otherwise make it impossible to live in, invaders can’t take and hold onto their land, and because supplies are neither free nor infinite, eventually constant (over)commitment forces them to withdraw. The United States might crow about its military (and it clearly has a track record in special ops of raping, looting, couping, blowing up pipelines etc*) but it has not definitively won a single “war” it involved itself in since WWII. Gaza was the most policed/surveilled place on earth! Yet Israel, which markets itself as being cutting edge in the field of surveillance to export its tech to other countries, can’t figure out where al Qassam are based (and supplied), always claiming the next school or hospital is really where their enemies are hiding. On what basis do these techbro and derivative types say that ALL subjugated people are doomed? Just because in their immediate surroundings, where the concentrated brunt of oppression is NOT happening, people aren’t resisting much?

    Think of it another way, if capitalist billionaires can get their hands on robot armies, what’s stopping their enemies from doing the same? Why, other than simply monopoly concerns, is the west so freaked out about China’s (or DPRK or Iran or Russia etc etc) technological development, including chips manufacturing and AI?

    *I’m rambling long enough but there’s some argument to be made that coups and related interventionism is more cost effective for empire than direct war.

  • cfgaussian@lemmygrad.ml

    The problem with the notion that if you get rid of all the poor people then everyone left will be rich is that the rich can’t exist without the poor. This isn’t just a question of definition or even psychology (one of the main reasons why so many people fetishize wealth is that it makes them feel superior to others - but if there are no others to feel superior to then what is the point?), but rather that even if all the poor people disappeared from the face of the earth tomorrow, the system of capitalism would simply reproduce the underclass. Automation and AI won’t and can’t change this fundamental fact.

    And if i may get slightly off topic here, this is also why the old propaganda line that liberalism sells you, which says that “everyone can theoretically become rich if they work hard/get lucky”, is broadly understood even by most liberals themselves to be bullshit. Even if you believe that anyone can go from poor to rich, everyone can’t; it is structurally impossible, since capitalism, despite what is often claimed, really is a zero-sum game in a lot of ways. You don’t get rich without others getting poor, even if you don’t see those people because they are outside your own country, in the global south, being exploited to enable the imperial core to maintain and accumulate its wealth.

  • tamagotchicowboy [he/him]@hexbear.net

    AI is super energy intensive, and we’re dealing with both climate change and rapidly diminishing economic returns, which limit its usefulness. Big AI projects are getting scaled back as the hype train dies down and ever increasing resources are required for them. People don’t require as much: just as the Romans didn’t really need the steam engine, in part because of slavery, the capitalists don’t really need AI, for very similar reasons. I suspect more Mechanical Turk-like AI-human combos in the future rather than fully automated AI.

    On the other hand, AI is improving to the point where if you have a PC with a GPU from the last 20 years or so you can locally run a version of many things; even a Raspberry Pi can run a small LLM.
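
    For anyone curious what “locally” means in practice, here is a minimal sketch using the llama-cpp-python bindings; the model file name is a placeholder for whatever small quantized GGUF model you happen to have on disk, and the settings are just plausible defaults for modest hardware.

        # Minimal local-inference sketch (llama-cpp-python).
        # "small-chat-model.Q4_K_M.gguf" is a hypothetical file name: substitute
        # any small quantized model you have downloaded.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/small-chat-model.Q4_K_M.gguf",  # hypothetical path
            n_ctx=2048,    # small context window to fit in modest RAM
            n_threads=4,   # e.g. the four cores of a Raspberry Pi 4/5
        )

        out = llm(
            "Q: Summarize the labour theory of value in one sentence.\nA:",
            max_tokens=64,
            stop=["Q:"],
        )
        print(out["choices"][0]["text"].strip())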

    The rich have tanks, atomic weapons, automated guns, drones, helicopters, etc., etc., and people have still made a stand against all that. AI is just another thing to be adapted to; this isn’t some simplistic capitalist-wet-dream dystopian Hollywood film or a silly Civilization-style game where a ‘tech advantage’ is insurmountable. Like people, machines have failures; we are in a universe of breakable beings and toys.

  • 陆船。@lemmygrad.ml

    Society will be radically changed long before AI radically changes society.

    Given the mostly white, bourgeois preoccupation with “x-AI risk” (existential/extinction), I think the real “risk” is that the self-legitimating myths of capitalism will fall on muted microphones. Even 10 years ago, when AI was still called machine learning, was much less impressive (its outputs were exclusively categorizations of inputs), and would have required decades of breakthroughs plus being hooked up to every input in society and multiplexed with every output to do anything “harmful”, the x-AI-risk people were running around crying (this holds true today of LLMs and other statistically-likely-to-exist content emitters).

    The pitch is always that the AI will decide the needs of the many outweigh the needs (private property rights) of the few. This is only scary if you are among that few. Even property-rights-obsessed liberals don’t think themselves among the few who will be exproprAIted but are outraged by the expropriation itself. It’s a boogeyman spewed by the people who are the problem, and we’re asked to share their fear. Ridiculous.

    Unlike other private property and artifacts of capital accumulation, which are inert (the workers may organize against you, but the steel mill itself won’t), the AI their capital gives birth to might, in several decades’ time, maybe organize against you (but not really).

  • amemorablename@lemmygrad.ml

    This puts a hard time limit on when revolution needs to occur.

    I would say if there is a time limit, it’s more to do with climate change than anything else. To give an example of why I don’t think AI leads to such a scenario: suppose AI becomes as capable as, or more capable than, humans (I am going there because I don’t see how else AI could replace the majority of the labor force). To do this, it implicitly needs degrees of reasoning, autonomy, adaptive real-time independent learning, and a physical body that would make it difficult to fully control, much as with a human. In effect, we’d be talking about AI that becomes its own class of being, and the rich would now have to contend with AI rebelling alongside humans instead of just humans rebelling. Anything less than that won’t be able to replace the more involved parts of human labor, only reduce the number of workers required for tasks that can be more automated.

    That said, the class consciousness is nice to see among some AI-focused people.

    • Commiejones@lemmygrad.ml

      I wish there were more coding comrades involved with AI who could be subtly influencing them to read more communist theory and give it more weight.

      • cimbazarov@lemmygrad.ml

        I mean from my experience with chatgpt, it has def been trained on Marxist theory texts. And it gets some of it right and some of it wrong in the same way it gets anything right or wrong.

        LLMs are black boxes. You can’t modify their algorithm at that layer of abstraction and influence them to be pro or anti anything (at least from my understanding). The only way you could do that is by cherry-picking their training data, and even then they’re still black boxes, so they could potentially end up biased towards the opposite of what you intend.
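
        To make “cherry-picking the training data” concrete, here is a rough sketch of the only lever being described: continued training of a small open model on a human-curated text corpus. The corpus file name is hypothetical, and this is an illustration of the general technique, not how any particular lab actually does it.

            # Sketch: shaping a model only through curated training data.
            # "curated_corpus.txt" is a hypothetical file of human-selected texts.
            from datasets import load_dataset
            from transformers import (AutoModelForCausalLM, AutoTokenizer,
                                      DataCollatorForLanguageModeling,
                                      Trainer, TrainingArguments)

            model_name = "gpt2"  # small stand-in model for the sketch
            tok = AutoTokenizer.from_pretrained(model_name)
            tok.pad_token = tok.eos_token
            model = AutoModelForCausalLM.from_pretrained(model_name)

            # Load the curated plain-text corpus as a training set.
            data = load_dataset("text", data_files={"train": "curated_corpus.txt"})

            def tokenize(batch):
                return tok(batch["text"], truncation=True, max_length=512)

            data = data.map(tokenize, batched=True, remove_columns=["text"])

            trainer = Trainer(
                model=model,
                args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                       per_device_train_batch_size=2),
                train_dataset=data["train"],
                data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
            )
            trainer.train()  # even then, what the model "learns" stays opaque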

        • Commiejones@lemmygrad.ml

          You and I can’t influence the “black box”, but people make these things. There are humans who have control over what is and isn’t training data, and I just wish there were more comrades in that group of humans.

  • Gaia [She/Her]@lemmygrad.ml

    Hasn’t there been essentially no progress on the alignment issue, while they’re still planning to build superintelligence ASAP? I see the most likely scenario being that of a tragic play on hubris. The AI will probably do many good things to bad people and then promptly kill itself.

  • cimbazarov@lemmygrad.ml

    I’ve started contemplating whether a Butlerian Jihad is more likely than a proletarian revolution at this point, with how little class consciousness there is in America.

    It is interesting to see that even tech-bros can see some of the contradictions in AI. I do feel it’s easier to organize them against AI than against the billionaires who own it (since in the tech-bro mind they are already on the cusp of being one of the billionaires).

    Also, if I could indulge in some sci-fi speculation for a bit: what if the “AI takeover” follows one of the stories in I, Robot (the book), where there’s an official who is an AI but it’s not clear to the public that he is, creating this environment of ambiguity? Then we have more “AIs” masquerading as real people (take the recent event with the AI Instagram profiles) until everyone we are surrounded by is an AI before we even know it.

    • Commiejones@lemmygrad.ml

      Butlerian Jihad is more likely than a proletarian revolution

      This is really funny.

      But on a serious note, the ruling class would probably side with the machines. Even if they didn’t, the issues of capitalism would still be around after the jihad, and people would have a taste for direct action. Capitalism would fall soon after the thinking machines.

    • cfgaussian@lemmygrad.ml

      I’ve started contemplating whether a Butlerian Jihad is more likely

      I’ve been thinking about this as well recently, or rather, not about if it’s likely but if it may be starting to become necessary. The more i see of “AI” the more i start to fantasize about a global societal mobilization where we smash anything associated with AI, burn anything ever written about it, call it “Forbidden Knowledge” from now till the end of time and establish some sort of Inquisition to make sure it never resurfaces again.

      Haha, no but real talk, i don’t think AI will take over the world Skynet style (that’s probably not in the realm of possibility seeing as what we now call “AI” has very little to do with actual intelligence and is rather just mindless imitation of training data, albeit on a very large scale) but i do think it is really obscene the way it is being used by capitalism to ruin art and entrench corporate control. Yeah, yeah, i know, just a tool and all that, and sure under socialism it could be used for good, but at the moment this particular tool is just making everything shittier in this capitalist dystopia i’m stuck in.

      • cimbazarov@lemmygrad.ml

        Luddism part 2.

        I think the question is whether AI, as a tool, can make such a quantitative change in productivity that there is a qualitative change in the relations of production. Otherwise it’s just going to be the same as all the other increases in productivity this century, which sharpen the contradictions of capitalism.