Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

AI chatbots from OpenAI, Anthropic, and several other companies were placed in a war simulator and tasked with finding a solution that would aid world peace. Almost all of them suggested actions that led to sudden escalation, even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • 7heo@lemmy.ml · 9 months ago

    The implementation details of how they represent their information don’t really matter.

    It isn’t random; it’s selected (or “weighted”, if you wanna be more precise, yes).
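
    For concreteness, here’s a minimal sketch of what “weighted” selection means, assuming the usual softmax sampling over per-token scores (the tokens and scores below are made up for illustration):

    ```python
    import math
    import random

    def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
        """Pick the next token by weighted (softmax) sampling, not uniformly at random."""
        # Turn raw scores into a probability distribution (softmax).
        scaled = {tok: s / temperature for tok, s in logits.items()}
        m = max(scaled.values())  # subtract the max for numerical stability
        weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
        total = sum(weights.values())
        probs = {tok: w / total for tok, w in weights.items()}
        # Weighted draw: likely tokens come up often, unlikely ones rarely.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    # Hypothetical scores, for illustration only.
    print(sample_token({"peace": 2.5, "war": 0.3, "nuke": -1.0}))
    ```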

    And don’t confuse things. We’re talking about intelligence here. Not learning. Learning can be done without intelligence (that’s how insects can learn behavior) and intelligence can be done without learning.

    My question was solely about information generation (since the validation part is fully rational, and can be done very efficiently by a machine).
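
    To illustrate the split with a toy, hypothetical task (finding three positive integers that sum to a target): the validator is a purely mechanical check, and the generator is where the open question lives.

    ```python
    import random

    def validate(candidate: list[int], target: int) -> bool:
        """Validation is fully rational and mechanical: check the candidate against the spec."""
        return all(x > 0 for x in candidate) and sum(candidate) == target

    def generate(target: int) -> list[int]:
        """Generation is the interesting part; here it's just blind random guessing."""
        return [random.randint(1, target) for _ in range(3)]

    # Keep generating until a candidate passes the cheap validation step.
    target = 12
    candidate = generate(target)
    while not validate(candidate, target):
        candidate = generate(target)
    print(candidate)  # e.g. [4, 5, 3]
    ```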

    • abraxas@sh.itjust.works · 9 months ago

      > And don’t confuse things. We’re talking about intelligence here. Not learning

      Are we? Alright. Can you describe a definition or test for intelligence that we could agree upon, one that humans pass and no NN or other ML system is capable of passing? I suspect you’re confusing things: not an intelligence/learning comparison, but an intelligence/consciousness confusion.