• Overzeetop@beehaw.org
    9 months ago

    See, this is what happens when you prevent/restrict the use of nuclear weapons. If we would just recognize the effectiveness of hypersonic, ballistic re-entry, multiple warhead nuclear munitions and deploy them in conflicts instead of conventional weapons, we wouldn’t need to worry about AI mis-identifying non-combatants one by one. [taps forehead]

    • Argongas@kbin.social
      9 months ago

      I’m not sure how I feel about it, but I’ve heard the argument made that AI might actually be better at killing. People mess up all the time and misidentify threats, which often causes collateral damage in conflicts. In theory, AI could be much better at this identification process.

      It’s kind of the same argument as with self-driving cars: we freak out whenever one gets in an accident, but people causing thousands of accidents a day doesn’t cause any outrage.

      Not saying I necessarily agree with either argument, but they do make me question how we think about and evaluate technology.

  • TWeaK@lemm.ee
    9 months ago

    As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of “killer robots” have made it onto the battlefield and proved to be devastating weapons.

    Apparently no one was paying attention to Azerbaijan.