• phoneymouse@lemmy.world

    Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

  • 1984@lemmy.today

    The future is gonna suck, so enjoy your life today while it’s still not here.

  • BombOmOm@lemmy.world

    As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

    • Chuck@sh.itjust.works

      Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

      • gibmiser@lemmy.world

        Well, an important point both you and he forget to mention is that mines are considered inhumane. Perhaps that means AI murder machines should also be considered inhumane, and we should just not do it, instead of allowing it the way we allowed landmines.

        • livus@kbin.social

          This. Jesus, we’re still losing limbs to and clearing mines from wars that ended decades ago.

          An autonomous field of those is horror movie stuff.

      • Chozo@kbin.social

        Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.

        Pretty sure the entire DOD got a collective boner reading this.

      • Sterile_Technique@lemmy.world

        Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

        For what it’s worth, there’s footage on YouTube of drone swarm demonstrations posted six years ago. The military doesn’t typically release footage of its cutting-edge tech to the public, so that demonstration was likely of a product that was already going obsolete; and the six years since have brought lightning-fast developments in things like facial recognition. At this point I’d be surprised if we weren’t already, at the very least, field-testing the murder machines you described.

      • FaceDeer@kbin.social

        Imagine a mine that could recognize “that’s just a child/civilian/medic stepping on me, I’m going to save myself for an enemy soldier.” Or a mine that could recognize “ah, CentCom just announced a ceasefire, I’m going to take a little nap.” Or “the enemy soldier that just stepped on me is unarmed and frantically calling out that he’s surrendered, I’ll let this one go through. Not the barrier troops chasing him, though.”

        There are opportunities for good here. (A toy sketch of the idea follows.)
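
        A toy sketch of the target-discrimination policy described above; every class label, sensor input, and rule here is a hypothetical illustration of the comment’s logic, not any real system’s behavior:

        ```python
        # Toy "discriminating mine" policy from the comment above.
        # All classes, inputs, and rules are hypothetical.
        from dataclasses import dataclass
        from enum import Enum, auto

        class Target(Enum):
            CHILD = auto()
            CIVILIAN = auto()
            MEDIC = auto()
            SOLDIER = auto()

        @dataclass
        class Contact:
            kind: Target              # output of some hypothetical classifier
            is_armed: bool
            is_surrendering: bool

        def should_detonate(contact: Contact, ceasefire: bool) -> bool:
            if ceasefire:
                return False          # "CentCom announced a ceasefire" -> nap
            if contact.kind is not Target.SOLDIER:
                return False          # child/civilian/medic -> save myself
            if contact.is_surrendering and not contact.is_armed:
                return False          # let the surrendering soldier through
            return True               # e.g. the barrier troops chasing him
        ```

        Whether the classifier that fills in contact.kind can ever be trusted is exactly what the replies below argue about.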

        • key@lemmy.keychat.org

          Maybe it starts that way, but once that’s accepted as a thing, the result will be increased usage of mines. Where before there were too many civilians to consider using mines, now the soldiers say “it’s smart now, it won’t blow up children” and put down more and more in more dangerous situations. And maybe those mines have only a 0.1% failure rate in tested situations but a 10% failure rate over the course of decades. Usage increases tenfold, and you quickly end up with a lot more dead kids (rough numbers sketched below).

          Plus, it won’t just be mines: it’ll be automated turrets where previously there were none, or even more drone strikes with less oversight required, because the automated system is supposed to prevent unintended casualties.

          Availability drives usage.
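
          A quick back-of-envelope sketch of that scaling argument, using the made-up numbers above (the baseline deployment size is invented):

          ```python
          # Back-of-envelope version of the argument above; the
          # baseline deployment count is a made-up illustration.
          baseline = 1_000                # hypothetical mines deployed today
          deployed = baseline * 10        # "usage increases tenfold"

          tested_rate = 0.001             # 0.1% failure rate in testing
          decades_rate = 0.10             # 10% failure rate over decades

          print(deployed * tested_rate)   # 10: failures planners price in
          print(deployed * decades_rate)  # 1000: failures that actually accrue
          # A hundred times the harm that was priced in, before counting
          # the extra turrets and drone strikes enabled by the same logic.
          ```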

  • redcalcium@lemmy.institute

    “Deploy the fully autonomous loitering munition drone!”

    “Sir, the drone decided to blow up a kindergarten.”

    “Not our problem. Submit a bug report to Lockheed Martin.”

      • pivot_root@lemmy.world

        Goes to original ticket:

        Status: WONTFIX

        “This is working as intended according to specifications.”

    • spirinolas@lemmy.world

      “Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!”

      “Aaw, that really sucks *starts rubbing nipples* I’ll submit a ticket and we’ll let you know. If we don’t call in 2 weeks…call again and we can go through this over and over until you give up.”

      “NO! I WANT TO TALK TO YOUR SUPERVISOR NOW”

      “Suuure, please hold.”

  • pelicans_plight@lemmy.world

    Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to build EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone who is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.

    • zaphod@feddit.de

      Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to build EMPs so they can send the murder robots back to where they came from.

      Eh, they could’ve done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.

      • pelicans_plight@lemmy.world

        I hope so, but I was born with an extremely good sense of trajectory and I also know how to use nets. So let’s just hope I’m superhuman and the only one who possesses these powers.

        Edit: I’m being a little extreme here because I heavily disagree with the way everything in this world is being run, so I’m giving a little pushback on a subject I’m wholly against. I do have a lot of manufacturing experience, and I would hope any killer robots governments produce would be heavily shielded against EMPs, but that is not my field, and I have no idea whether shielding a remote-controlled robot from EMPs is even possible.

        • AngryCommieKender@lemmy.world

          The movie Small Soldiers is total fiction, but the one part of that movie that made “sense” was that because the toy robots were so small, they had basically no shielding whatsoever, so the protagonist just had to haul a large wrench/spanner up a utility pole and connect the positive and negative terminals on the pole transformer. It blew up, of course, and blew the protagonist off the pole IIRC. That also caused a small (2-3 city block diameter) EMP that shut down the malfunctioning soldier robots.

          I realize this is a totally fictional story, but it did highlight a major flaw in these drones. You can either have them small, lightweight, and inexpensive, or you can put the shielding on. In almost all cases where humans are involved, we don’t spend the extra $$$ and mass to properly shield ourselves from the sun, much less other sources of radiation. This leads me to believe that we wouldn’t bother shielding these low-cost drones.

    • Flying Squid@lemmy.world

      Is there a way to create an EMP without a nuclear weapon? Because if that’s what they have to develop, we have bigger things to worry about.

    • Snapz@lemmy.world

      The real problem (and the thing that will destroy society) is boomer pride. I’ve said this for a long time: they’re in power now, and they are terrified to admit that they don’t understand technology.

      So they’ll make the wrong decisions, act confident, and the future will pay the tab for their cowardice, driven solely by pride and fear.

  • cosmicrookie@lemmy.world

    It’s so much easier to say that the AI decided to bomb that kindergarten based on advance intel than it is if it were a human choice. You can’t punish AI for doing something wrong. AI does not require a raise for doing something right, either.

    • Meowing Thing@lemmy.world

      That’s an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm, and get off with just a slap on the wrist.

      We should all remember that every single tech we have was built by someone. And this someone and their employer should be held accountable for all this tech does.

    • zalgotext@sh.itjust.works

      You can’t punish AI for doing something wrong.

      Maybe I’m being pedantic, but technically, you do punish AIs when they do something “wrong” during training, just like you reward them for doing something right. (A toy sketch follows.)
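
      In the reinforcement-learning sense, “punishment” is just a negative reward fed into the same update rule as positive rewards. A minimal Q-learning sketch with a made-up toy environment (nothing here comes from any real training pipeline):

      ```python
      # Minimal sketch: "punishing" an agent is a negative reward term
      # in the same update that handles positive rewards. The states,
      # actions, and reward scheme are all made up.
      import random

      n_states, n_actions = 4, 2
      alpha, gamma, epsilon = 0.1, 0.9, 0.2
      q = [[0.0] * n_actions for _ in range(n_states)]

      def reward(state: int, action: int) -> float:
          # Hypothetical scheme: action 1 in state 3 is "wrong".
          return -1.0 if (state == 3 and action == 1) else 0.1

      for _ in range(10_000):
          s = random.randrange(n_states)
          if random.random() < epsilon:
              a = random.randrange(n_actions)                    # explore
          else:
              a = max(range(n_actions), key=lambda x: q[s][x])   # exploit
          r = reward(s, a)                     # negative r = "punishment"
          s2 = random.randrange(n_states)      # toy random transition
          q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])

      print(q[3])  # the punished action ends up with the lower value
      ```

      Which is also the reply’s point below: once training ends and the model is deployed, that feedback channel is gone.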

      • cosmicrookie@lemmy.world

        But that is during training. My point was that you can’t punish an AI for making a mistake when it is used in combat situations, which is very convenient for the ones intentionally wanting that mistake to happen.

    • reksas@lemmings.world

      That is like saying you can’t punish a gun for killing people.

      edit: meaning that it’s redundant to talk about not being able to punish AI, since it can’t feel or care anyway. No matter how long a pole you use to hit people with, the responsibility for your actions will still reach you.

      • cosmicrookie@lemmy.world

        Sorry, but this is not a valid comparison. What we’re talking about here is a gun with AI built in that decides whether it should pull the trigger. With a regular gun, you always have a human pull the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

  • Immersive_Matthew@sh.itjust.works

    We are all worried about AI, but it is humans I worry about, and how we will use AI, not the AI itself. I am sure when electricity was invented people feared it too, but it was how humans used it that was, and is, always the risk.

  • Steve@lemmy.today

    Didn’t RoboCop teach us not to do this? I mean, wasn’t that the whole point of the ED-209 robot?

    • aeronmelon@lemm.ee

      Every warning in pop culture (1984, Starship Troopers, RoboCop) has been misinterpreted as a framework upon which to nail the populace.

      • FaceDeer@kbin.social

        Every warning in pop culture is being misinterpreted as something other than a fun/scary movie designed to sell tickets, and imagined instead as a scholarly attempt at projecting a plausible outcome.

        • MBM@lemmings.world

          People didn’t seem to like my movie idea: “Terminator, but the AI is actually very reasonable and not murderous.”

    • Flying Squid@lemmy.world

      Every single thing in The Hitchhiker’s Guide to the Galaxy says AI is a stupid and terrible idea. And Elon Musk says it’s what inspired him to create an AI.

  • MindSkipperBro12@lemmy.world

    For everyone who’s against this, just remember that we can’t put the genie back in the bottle. Like the A-bomb, this will be a fact of life in the near future.

    All one can do is adapt to it.

    • kromem@lemmy.world

      There is a key difference though.

      The A-bomb wasn’t a technology that, as the arms race advanced far enough, would develop the capacity to be anything from a conscientious objector to a usurper.

      There’s a prisoner’s dilemma to arms races that in this case is going to lead to world powers effectively paving the path to their own obsolescence.

      In many ways, that’s going to be uncharted territory for us all (though not necessarily a bad thing).

  • Dizzy Devil Ducky@lemm.ee

    As disturbing as this is, it’s inevitable at this point. If one of the superpowers doesn’t develop its own fully autonomous murder drones, another country will. And eventually those drones will malfunction, or some bug will be present that gives them the go-ahead to indiscriminately kill everyone.

    If you ask me, it’s just an arms race to see who builds the murder drones first.

    • FaceDeer@kbin.social

      A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.

      • SomeSphinx@lemmy.world

        Even the best-laid plans go awry, though. The point is that even if they pragmatically design it not to kill indiscriminately, bugs and glitches happen. The technology isn’t all the way there yet, and putting the ability to kill in the machine body of something that cannot understand context is a terrible idea. It’s not that the military wants to indiscriminately kill everything; it’s that they can’t possibly plan for problems in the code they haven’t encountered yet.

    • Pheonixdown@lemm.ee

      I feel like it’s ok to skip to optimizing the autonomous drone-killing drone.

      You’ll want those either way.

      • threelonmusketeers@sh.itjust.works

        If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.

        • Pheonixdown@lemm.ee

          You’re headed towards the Star Trek episode “A Taste of Armageddon”. I’d also note that people losing a war without suffering recognizable losses are less likely to surrender to the victor.

    • KeenFlame@feddit.nu

      Other weapons of mass destruction, biological and chemical weapons, have been successfully avoided in war; this should be classified exactly the same.