TL;DR: if the rich succeed in building AI systems that cater fully to their needs across the whole supply chain (i.e., AI can mine and process resources into whatever they want with no humans needed), then the rich will have no reason to keep anyone else around and can just massacre all the poors.


Recently, the r/singularity subreddit has had several posts showing some class consciousness, despite its mostly-techbro atmosphere.

The post I’ve linked and reproduced below states a concern I also have with AI:

If we assume that we reach AGI, maybe even super intelligence, then we can expect a lot of human jobs will suddenly become obsolete.

First it could be white collar and tech jobs. Then, when robotics catches up, manual labor will soon follow. Pretty soon, every conceivable position a human once held can be taken over by a machine.

Humans are officially obsolete.

What’s really chilling is that, while humans in general will no longer be a necessity to run a government or society, the very few billionaires at the top who helped bring this AI into existence will be the ones who control it - and they will no longer need anyone else. No military personnel, teachers, doctors, lawyers, bureaucrats, engineers, no one.

Why should countries exist filled with people when people are no longer needed to farm crops, serve in the military, build infrastructure, or anything else?

I would like to believe that if all of humanity’s needs can now always be fulfilled (but controlled by a very, very few), those few would see the benefit in making sure everyone lives a happy and fulfilling life.

The truth is though, the few at the top will likely leave everyone else to fend for themselves the second their walled garden is in place.

As the years pass, eventually AI becomes fully self-sustaining - from sourcing its own raw materials to maintaining and improving its own systems - until even the AI no longer needs a single human (not that many are left at that point).

Granted, it could take a long while for this scenario to occur (if ever), but the way things are shaking out, it’s looking more and more unlikely that we’ll ever get to a utopia where no one works unless they want to and everyone’s needs are met. It’s just not possible if the people in charge are greedy, backstabbing, corporate sociopaths who only play nice because they have to at the moment.

I find their argument quite valid; it lacks only an explicit mention of ‘capitalism’.

Once the rich have full-supply-chain AI, we won’t be able to revolt even if we want to. The robotic police force controlled by the rich can just massacre all the poors.

This puts a hard time limit on when revolution needs to occur. After that, I guess we need China’s J-36s to save the American proletariat.

  • amemorablename@lemmygrad.ml · 6 days ago

    This puts a hard time limit on when revolution needs to occur.

    I would say that if there is a time limit, it’s more to do with climate change than anything else. To give an example of why I don’t think AI leads to such a scenario: suppose AI becomes as capable as, or more capable than, humans (I go there because I don’t see how else AI could replace the majority of the labor force). To do this, it implicitly needs to be capable of degrees of reasoning, autonomy, and adaptive real-time independent learning, in a physical body, that would make it difficult to fully control, much as with a human. In effect, we’d be talking about AI that becomes its own class of being, and the rich would then have to contend with AI rebelling alongside humans instead of humans rebelling alone. Anything less than that won’t be able to replace the more involved parts of human labor, only reduce the number of workers required for tasks that can be more automated.

    That said, the class consciousness is nice to see among some AI-focused people.

    • Commiejones@lemmygrad.ml · 6 days ago

      I wish there were more coding comrades involved with AI who could be subtly influencing them to read more communist theory and give it more weight.

      • cimbazarov@lemmygrad.ml · 6 days ago

        I mean, from my experience with ChatGPT, it has definitely been trained on Marxist theory texts. And it gets some of it right and some of it wrong, in the same way it gets anything right or wrong.

        LLMs are black boxes. You can’t modify their algorithm at that layer of abstraction to make them pro or anti anything (at least as I understand it). The only way you could do that is by cherry-picking the training data, and even then it’s still a black box, so it could end up biased toward the opposite of what you intend.

        • Commiejones@lemmygrad.ml · 6 days ago

          You and I can’t influence the “black box,” but people make these things. There are humans who have control over what is and isn’t training data, and I just wish there were more comrades in that group of humans.