Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen’s recent excrescence on so-called “techno-optimism”. It wasn’t exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included “existential risk” and transhumanism on his list of enemy ideas, I’m sure the rationalists and EAs were feeling at least a little bit offended. Klein, as the co-founder of Vox and of its EA-promoting “Future Perfect” vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

  • sus@programming.dev · 11 months ago

    surface probing of the RatioSphere indicates that ezra has already, by associating with NY times, become something something molech, thus limiting interaction between these these two [sic] Tribes by mechanism of -consults terminology cheat sheet- absurdity heuristic

  • datarama@awful.systems · 11 months ago

    I haven’t really followed Klein for a while, but at least what he wrote in the beginning of the generative AI gold rush was closer to what one might call “social doomerism” than Yudkowskianism: Less “the AI is going to go foom and kill us all with digital brain-magic”, and more “AI is going to cause devastating social disruptions, destroy the livelihoods of millions, enable mass manipulation, and concentrate enormous power into the hands of AI owners”.

    Has he pivoted into “classic sneer territory” since then?

    • TinyTimmyTokyo@awful.systems (OP) · 11 months ago

      I see him more as a dupe than a Cassandra. I heard him on a podcast a couple of months ago talking about how he’s been having conversations with Bay Area AI researchers who are “really scared” about what they’re creating. He also spent quite a bit of time talking up Geoffrey Hinton’s AI doomer tour. So while I don’t think Ezra’s one of the Yuddite rationalists, he’s clearly been influenced by them. Given his historical ties to effective altruism, this isn’t surprising to me.