- cross-posted to:
- technology@lemmy.world
- futurism@lemmy.ca
Non-paywalled link: https://archive.ph/9Hihf
In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen’s recent excrescence on so-called “techno-optimism”. It wasn’t exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.
But when Andreessen included “existential risk” and transhumanism on his list of enemy ideas, I’m sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox, home of the EA-promoting “Future Perfect” vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.
So have at it, Marc and Ezra. Fight. And maybe take each other out.
I haven’t really followed Klein for a while, but at least what he wrote at the beginning of the generative AI gold rush was closer to what one might call “social doomerism” than Yudkowskianism: less “the AI is going to go foom and kill us all with digital brain-magic”, and more “AI is going to cause devastating social disruptions, destroy the livelihoods of millions, enable mass manipulation, and concentrate enormous power in the hands of AI owners”.
Has he pivoted into “classic sneer territory” since then?
I see him more as a dupe than a Cassandra. I heard him on a podcast a couple of months ago talking about how he’s been having conversations with Bay Area AI researchers who are “really scared” about what they’re creating. He also spent quite a bit of time talking up Geoffrey Hinton’s AI doomer tour. So while I don’t think Ezra’s one of the Yuddite rationalists, he’s clearly been influenced by them. Given his historical ties to effective altruism, this isn’t surprising to me.