• FourWaveforms@lemm.ee · 3 hours ago

    The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.

    ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.

    It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.

    This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.

  • 7rokhym@lemmy.ca · 4 hours ago

    I think OpenAI’s recent sycophancy issue has caused a new spike in these stories. One thing I noticed was these models running on my PC making observations like how rare it is for a person to think and do the things I do.

    The problem is that this is a model running on my GPU. It has never talked to another person. I hate insincere compliments, let alone overt flattery, so I was annoyed, but it did make me think that this kind of talk would be crack for conspiracy nuts or mentally unwell people. It’s a whole risk area I hadn’t been aware of.

    https://www.msn.com/en-us/news/technology/openai-says-its-identified-why-chatgpt-became-a-groveling-sycophant/ar-AA1E4LaV

  • perestroika@lemm.ee · 4 hours ago

    From the article (emphasis mine):

    Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.

    /…/

    “It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says.

    From elsewhere:

    Sycophancy in GPT-4o: What happened and what we’re doing about it

    We have rolled back last week’s GPT‑4o update in ChatGPT so people are now using an earlier version with more balanced behavior. The update we removed was overly flattering or agreeable—often described as sycophantic.

    I don’t know what large language model these people used, but evidence of some language models exhibiting response patterns that people interpret as sycophantic (praising or encouraging the user needlessly) is not new. Neither is hallucinatory behaviour.

    Apparently, people who are susceptible and close to the edge may end up pushing themselves over it with AI assistance.

    What I suspect: someone has trained their LLM on something like religious literature, fiction about religious experiences, or descriptions of religious experiences. If the AI is suitably prompted, it can re-enact such scenarios in text, while adapting the experience to the user at least somewhat. To a person susceptible to religious illusions (and let’s not deny it, people are susceptible to finding deep meaning and purpose with shallow evidence), apparently an LLM can play the role of an indoctrinating co-believer, prophet, or supportive follower.

      • perestroika@lemm.ee · 4 hours ago

        I think Elon was having the opposite kind of problems, with Grok not validating its users nearly enough, despite Elon instructing employees to make it so. :)

  • Satellaview@lemmy.zip · 6 hours ago

    This happened to a close friend of mine. He was already on the edge, with some weird opinions and beliefs… but he was talking with real people who could push back.

    When he switched to spending basically every waking moment with an AI that could reinforce and iterate on his bizarre beliefs 24/7, he went completely off the deep end, fast and hard. We even had him briefly hospitalized and they shrugged, basically saying “nothing chemically wrong here, dude’s just weird.”

    He and his chatbot are building a whole parallel universe, and we can’t get reality inside it.

  • randomname@sh.itjust.works · 6 hours ago

    I think people give shows like The Walking Dead too much shit for having dumb characters, when people in real life are far stupider.

    • Sauerkraut@discuss.tchncs.de · 6 hours ago

      Like farmers who refuse to let the government plant shelter belts to preserve our topsoil, all because they don’t want to take a 5% hit on their yields… So instead we’re going to deplete our topsoil in 50 years, and future generations will be completely fucked, because creating 1 inch of topsoil takes 500 years.

      • Buddahriffic@lemmy.world · 6 hours ago

        Even if the soil is preserved, we’ve been mining the micronutrients from it and generally only replacing the 3 main macros for centuries. It’s one of the reasons why mass-produced produce doesn’t taste as good as home-grown or wild food. Nutritional value keeps going down, because each time food is harvested, shipped away to be consumed, and then shat out into a septic tank or waste processing facility, it doesn’t end up back in the soil as part of nutrient cycles like it did when everything was wilder. Similar story for meat, with animals eating the nutrients in a pasture.

        Insects did contribute to the cycle, since they still shit and die everywhere, but their numbers are dropping rapidly, too.

        At some point, I think we’re going to have to mine the sea floor for nutrients and ship that to farms for any food to be more nutritious than junk food. Salmon farms set up in ways that block wild salmon from making it back inland don’t help balance out all of the nutrients that get washed out to sea all the time, either.

        It’s like humanity is specifically trying to speedrun extinction by ignoring and taking for granted the systems we depend on.

        • Usernameblankface@lemmy.world · 2 hours ago

          Why would good nutrients end up in poop?

          It makes sense that growing a whole plant takes a lot of different things from the soil, and that coating the area with a basic fertilizer that may or may not get washed away with the next rain doesn’t replenish all of what is taken.

          But how would adding human poop to the soil help replenish things that humans need out of food?

          • Buddahriffic@lemmy.world · 50 minutes ago

            We don’t absorb everything completely, so some passes through unabsorbed. Some nutrients are passed via bile or mucus production, like manganese, copper, and zinc. Others are passed via urine. Some are passed via sweat. Selenium will even pass out through your breath when you’re experiencing selenium toxicity.

            Other than the last one, most of those eventually end up going down the drain, either in the toilet, down the shower drain, or when we do our laundry. Though some portion ends up as dust.

            And to be thorough, there’s also bleeding as a pathway to losing nutrients, as well as injuries (or surgeries) involving losing flesh, tears, spit/boogers, hair loss, lactation, fingernail and skin loss, reproductive fluids, blistering, and menstruation. And corpse disposal, though the amount of nutrients we shed throughout our lives dwarfs what’s left at the end.

            I think for each one of those, because of our way of life and how it’s changed since our hunter-gatherer days, less of it ends up back in the nutrient cycle.

            But I was mistaken to put the emphasis on shit and it was an interesting dive to understand that better. Thanks for challenging that :)

    • Daggity@lemm.ee · 5 hours ago

      Covid gave me an extremely different perspective on the zombie apocalypse. They’re going to have zombie immunization parties where everyone gets the virus.

  • _cryptagion [he/him]@lemmy.dbzer0.com · 8 hours ago

    I lost a parent to a spiritual fantasy. She decided my sister wasn’t her child anymore because the christian sky fairy says queer people are evil.

    At least ChatGPT actually exists.

  • Boddhisatva@lemmy.world · 9 hours ago

    In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”

    This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory; in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.

    Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?

    Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.

    Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible or maybe brought up to light that this isn’t so.

    Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.

    The idea that chatbots are not only capable of this, but that they are currently manipulating people into believing they have recovered repressed memories of brutalization, is actually at least as terrifying to me as their convincing people that they are holy prophets.

    Edited for clarity

  • lenz@lemmy.ml · 11 hours ago

    I read the article. This is exactly what happened when my best friend got schizophrenia. I think the people affected by this were probably already prone to psychosis/on the verge of becoming schizophrenic, and that ChatGPT is merely the mechanism by which their psychosis manifested. If AI didn’t exist, it would’ve probably been Astrology or Conspiracy Theories or QAnon or whatever that ended up triggering this within people who were already prone to psychosis. But the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

    ChatGPT actively screwing with mentally ill people is a huge problem you can’t just blame on stupidity like some people in these comments are. This is exploitation of a vulnerable group of people whose brains lack the mechanisms to defend against this stuff. They can’t help it. That’s what psychosis is. This is awful.

    • sugar_in_your_tea@sh.itjust.works · 9 hours ago

      the problem with ChatGPT in particular is that it validates the psychosis… that is very bad.

      So do astrology and conspiracy theory groups on forums and other forms of social media, the main difference is whether you’re getting that validation from humans or a machine. To me, that’s a pretty unhelpful distinction, and we attack both problems the same way: early detection and treatment.

      Maybe computers can help with the early detection part. They certainly can’t do much worse than what’s currently happening.

      • lenz@lemmy.ml · 8 hours ago

        I think having that kind of validation at your fingertips, whenever you want, is worse. At least people, even people deep in the claws of a conspiracy, can disagree with each other. At least they know what they are saying. The AI always says what the user wants to hear and expects to hear. Though I can see how that distinction may matter little to some, I just think ChatGPT has capabilities that make it worse than anything a forum could do.

        • sugar_in_your_tea@sh.itjust.works · 4 hours ago

          Sure. But on the flip side, you can ask it the opposite question (tell me the issues with <belief>) and it’ll do that as well, and you’re not going to get that from a conspiracy theory forum.

    • Maeve@kbin.earth · 11 hours ago

      I think this is largely people seeking confirmation their delusions are real, and wherever they find it is what they’re going to attach to themselves.

  • Buffalox@lemmy.world · 14 hours ago

    I admit I only read a third of the article.
    But IMO nothing in that is special to AI. In my life I’ve met many people with similar symptoms, thinking they are Jesus, or thinking computers work by some mysterious power they possess that was stolen from them by the CIA. And when they die all computers will stop working! Reading the conversation the wife had with him, it sounds EXACTLY like these types of people!
    Even the part about finding “the truth” I’ve heard before: they don’t know what it’s the truth of, but they’ll know when they find it?
    I’m not a psychiatrist, but from what I gather it’s probably schizophrenia of some form.

    My guess is this person had a distorted view of reality he couldn’t make sense of. He then tried to get help from the AI, and he built a world view completely removed from reality with it.

    But most likely he would have done that anyway, it would just have been other things he would interpret in extreme ways. Like news, or conversations, or merely his own thoughts.

    • MangoCats@feddit.it · 11 hours ago

      Around 2006 I received a job application, with a resume attached, and the resume had a link to the person’s website - so I visited. The website had a link on the front page to “My MkUltra experience”, so I clicked that. Not exactly an in-depth investigation. The MkUltra story said that my job applicant was an unwilling (and uninformed) test subject of MkUltra, which had picked him through his association with other unwilling MkUltra test subjects at a conference, and explained how they expanded the MkUltra program of gaslighting, mental torture, and secret physical/chemical abuse of their test subjects through associates such as co-workers, etc.

      So, option A) applicant is delusional, paranoid, and deeply disturbed. Probably not the best choice for the job.

      B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

      C) applicant is pulling our legs with his website, it’s all make-believe fun. Absolutely nothing on applicant’s website indicated that this might be the case.

      You know how you apply to jobs and never hear back from some of them…? Yeah, I don’t normally do that to our applicants, but I am willing to make exceptions for cause… in this case the position applied for required analytical thinking. Some creativity was of some value, but correct and verifiable results were of paramount importance. Anyone applying for the job leaving such an obvious trail of breadcrumbs to such a limited set of conclusions about themselves would seem to be lacking the self awareness and analytical skill required to succeed in the position.

      Or, D) they could just be trying to stay unemployed while showing effort in applying to jobs, but I bet even in 2006 not every hiring manager would have dug in those three layers - I suppose he could deflect those in the in-person interviews fairly easily.

      • Buffalox@lemmy.world · 10 hours ago

        IDK, apparently the MkUltra program was real,

        B) applicant is 100% correct about what is happening to him, DEFINITELY not someone I want to get any closer to professionally, personally, or even be in the same elevator with coincidentally.

        That sounds harsh. This does NOT sound like your average schizophrenic.

        https://en.wikipedia.org/wiki/MKUltra

        • zarkanian@sh.itjust.works · 5 hours ago

          The Illuminati were real, too. That doesn’t mean that they’re still around and controlling the world, though.

        • MangoCats@feddit.it · 9 hours ago

          Oh, I investigated it too - it seems like it was a real thing, though likely inactive by 2005… but if it were active I certainly didn’t want to become a subject.

          • Buffalox@lemmy.world · 9 hours ago

            OK that risk wasn’t really on my radar, because I live in a country where such things have never been known to happen.

            • MangoCats@feddit.it · 8 hours ago

              That’s the thing about being paranoid about MkUltra - it was actively suppressed and denied while it was happening (according to FOI documents) - and they say that they stopped, but if it (or some similar successor) was active they’d certainly say that it’s not happening now…

              At the time there were active rumors around town about influenza propagation studies being secretly conducted on the local population… probably baseless paranoia… probably.

              Now, as you say, your (presumably smaller) country has never known such things to happen, but…

              • Buffalox@lemmy.world · 3 hours ago

                I live in Denmark, and I was taught already in public school how such things were possible, most notably that Russia might be doing experiments here, because our reporting on effects is very open and efficient. So Denmark would be an ideal testing ground for experiments.
                But my guess is that this also makes it dangerous to experiment here, because the risk of being detected is also high.

    • sp3ctr4l@lemmy.dbzer0.com · 7 hours ago

      Yep.

      And after enough people can no longer actually critically think, well, now this shitty AI tech does actually win the Turing Test more broadly.

      Why try to clear the bar when you can just lower it instead?

      … Is it fair, at this point, to legitimately refer to humans that are massively dependent on AI for basic things… can we just call them NPCs?

      I am still amazed that no one knows how to get anywhere around… you know, the town or city they grew up in? Nobody can navigate without some kind of map app anymore.

      • Geodad@lemm.ee · 7 hours ago

        can we just call them NPCs?

        They were NPCs before AI was invented.

    • Zippygutterslug@lemmy.world · 14 hours ago

      Humans are irrational creatures that have transitory states where they are capable of more ordered thought. It is our mistake to reach a conclusion that humans are rational actors while we marvel daily at the irrationality of others and remain blind to our own.

      • Kyrgizion@lemmy.world · 15 hours ago

        Precisely. We like to think of ourselves as rational but we’re the opposite. Then we rationalize things afterwards. Even being keenly aware of this doesn’t stop it in the slightest.

        • CheeseNoodle@lemmy.world · 11 hours ago

          Probably because stopping to self-analyze your decisions is a lot less effective than just running away from that lion over there.

          • MangoCats@feddit.it · 10 hours ago

            Analysis is a luxury state, whether self-administered or professionally administered on a chaise longue at $400 per hour.

  • hendrik@palaver.p3x.de · 14 hours ago

    Oh wow. In the old times, self-proclaimed messiahs used to do that without assistance from a chatbot. But why would you think the “truth” and path to enlightenment is hidden within a service of a big tech company?

    • iAvicenna@lemmy.world · 13 hours ago

      Well, because these chatbots are designed to be really affirming and supportive, and I assume people with such problems really love this kind of interaction compared to real people confronting their ideas critically.

      • MangoCats@feddit.it · 11 hours ago

        I think there was a recent unsuccessful rev of ChatGPT that was too flattering; it made people nauseous, and they had to dial it back.

      • hendrik@palaver.p3x.de · 11 hours ago

        I guess you’re completely right about that. It lowers the entry barrier. And it’s kind of self-reinforcing. And we have other unhealthy dynamics with other technology as well, like social media, which can also radicalize people or send them into a downward spiral…

  • jubilationtcornpone@sh.itjust.works · 18 hours ago

    Sounds like a lot of these people either have an undiagnosed mental illness or they are really, reeeeaaaaalllyy gullible.

    For shit’s sake, it’s a computer. No matter how sentient the glorified chatbot being sold as “AI” appears to be, it’s essentially a bunch of rocks that humans figured out how to run electricity through in such a way that it can do math. Impressive? I mean, yeah. It is. But it’s not a human, much less a living being of any kind. You cannot have a relationship with it beyond that of a user.

    If a computer starts talking to you as though you’re some sort of God incarnate, you should probably take that with a dump truck full of salt rather than just letting your crazy latch on to that fantasy and run wild.

    • rasbora@lemm.ee · 18 hours ago

      Yeah, from the article:

      Even sycophancy itself has been a problem in AI for “a long time,” says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts. What’s likely happening with those experiencing ecstatic visions through ChatGPT and other models, he speculates, “is that people with existing tendencies toward experiencing various psychological issues,” including what might be recognized as grandiose delusions in a clinical sense, “now have an always-on, human-level conversational partner with whom to co-experience their delusions.”

      • A_norny_mousse@feddit.org · 16 hours ago

        So it’s essentially the same mechanism with which conspiracy nuts embolden each other, to the point that they completely disconnect from reality?

        • rasbora@lemm.ee · 15 hours ago

          That was my takeaway as well. With the added bonus of having your echo chamber tailor-made for you, and all the agreeing voices tuned in to your personality and saying exactly what you need to hear to maximize the effect.

          It’s eerie. A propaganda machine operating at maximum efficiency. Goebbels would be jealous.

    • alaphic@lemmy.world · 18 hours ago

      Or immediately question what it/its author(s) stand to gain from making you think it thinks so, at a bear minimum.

      I dunno who needs to hear this, but just in case: THE STRIPPER (OR AI I GUESS) DOESN’T REALLY LOVE YOU! THAT’S WHY YOU HAVE TO PAY FOR THEM TO SPEND TIME WITH YOU!

      I know it’s not the perfect analogy, but… eh, close enough, right?

      • taladar@sh.itjust.works · 12 hours ago

        a bear minimum.

        I always felt that was too much of a burden to put on people, carrying multiple bears everywhere they go to meet bear minimums.

    • Kyrgizion@lemmy.world · 15 hours ago

      For real. I explicitly append “give me the actual objective truth, regardless of how you think it will make me feel” to my prompts, and it still tries to somehow butter me up, like I’m some kind of genius for asking those particular questions or whatnot. Luckily I’ve never suffered from good self-esteem in my entire life, so those tricks don’t work on me :p