With the OpenAI clownshow, there’s been renewed media attention on the x-risk/“AI safety”/doomer nonsense. Personally, I’ve had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).
So her entire rant starts by talking about AI safety, and then reduces the conversation to AGI being created by text generation AI systems. I’m a bit confused: is she specifically just dunking on shitty reporters who only cover the “AGI from text generators” drivel?
EDIT: Alright, I finally got my half-asleep brain around it. I'm gonna leave my AI safety rant below anyways.
Like, AI safety is an actual problem right now.
These are each a problem because:
Like, I don’t see how this person can shit on AI safety while completely ignoring the actual vast majority of AI safety issues.
Did we read the fucking same thread?
Cos here https://dair-community.social/@emilymbender/111464032422389550
Emily Bender has been one of the loudest voices saying the danger of machine learning is Right Here Right Now, with algorithmic rent setting and policing systems, facial recognition, & military tools. She’s been shouting from the rooftops that the people talking about “AI Safety” are the same assholes firing actual AI ethicists and selling ML products for evil, while claiming that AI safety is about stopping the paperclip maximizer, because that way they can pretend to have invented AGI.
“this person”? Go look up what Emily does in your time not posting here.
oh my poor half-asleep brain making me shitheadedly assume a woman can’t possibly have done pioneering work in a field she specializes in
Opening with “her entire rant” was already telling, and then a pageful of idiocy based on 4 points (at least two of which are so Not Even Wrong I didn’t even have words)