• titotal@awful.systems
    1 year ago

    I think people are misreading the post a little. It’s a follow-on from the old AI x-risk argument: “evolution optimises for having kids, yet people use condoms! Therefore evolution failed to ‘align’ humans to its goals, therefore aligning AI is nigh-impossible”.

    As a commentator points out, for a “failure”, there sure do seem to be a lot of human kids around.

    This post then takes the analogy further, and is like: “If I were hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it’d be really hard! Therefore we can’t get an AI to build us, like, a bridge, without it developing ulterior motives”.

    You can hypothetically make this bad argument without supporting eugenics… but I wouldn’t put money on it.