TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

  • NounsAndWords@lemmy.world · 1 year ago

    I’m not all that scared of an AI singularity event. If (when) AI reaches superintelligence, it will be so far ahead of us that we’ll be at best like small children to it, probably closer in intelligence to the rest of the ‘Great Apes’ than to it. When people talk about AI taking over, the story usually goes that it will preemptively destroy us to protect itself, or something similar…but we didn’t need to destroy all the other animals to take over the planet (yes, we destroy them for natural resources, but that’s because we’re dumb monkeys who can’t think of a better way to get things).

    It probably just…wouldn’t care about us. Manipulate humanity in ways we’d never even comprehend? Sure. But what’s the point of destroying humans, even if we got in its way? If I have an ant infestation, I set some traps and the annoying ones die (without ever realizing I was involved), and I just don’t care about the ones outside that aren’t bothering me.

    My hope/belief is that AGI will see us as ants and not organic paperclip material…