• Zagorath@aussie.zone
    6 months ago

    I genuinely think the alignment problem is a really interesting philosophical question worthy of study.

    It’s just not a practically useful one when real-world AI is so very, very far from any meaningful AGI.

    • Soyweiser@awful.systems
      6 months ago

      One of the problems with the ‘alignment problem’ is that one camp dismisses a large part of the possible alignment problems: it cares only about theoretical extinction-level events, not about already-occurring harms like bias and other issues. This also generates massive amounts of critihype.