• jubilationtcornpone@sh.itjust.works · 17 hours ago

    Sounds like a lot of these people either have an undiagnosed mental illness or are really, reeeeaaaaalllyy gullible.

    For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

    If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

    • rasbora@lemm.ee · 16 hours ago

      Yeah, from the article:

      Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in the clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

      • A_norny_mousse@feddit.org · 14 hours ago

        So it’s essentially the same mechanism by which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

        • rasbora@lemm.ee · 13 hours ago

          That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, and all the agreeing voices tuned to your personality and saying exactly what you need to hear to maximize the effect.

          It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.

    • alaphic@lemmy.world · 16 hours ago

      Or immediately question what it (or its authors) stands to gain from making you think it thinks so, at a bear minimum.

      I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

      I know it’s not the perfect analogy, but… eh, close enough, right?

      • taladar@sh.itjust.works · 10 hours ago

        a bear minimum.

        I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

    • Kyrgizion@lemmy.world · 13 hours ago

      For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts, and it still tries to butter me up as some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self-esteem in my entire life, so those tricks don’t work on me :p