Air Canada appears to have quietly killed its costly chatbot support.

  • GluWu@lemm.ee · 9 months ago

    I wonder how much time and space there will be to “play” between the first case in the US that upholds this standard legally and the point when companies lock their AI down against edge cases. I’ve been breaking generative LLMs since they became publicly accessible. I’m a blackhat “prompt engineer” (I fucking hate that term).

    • SpaceCowboy@lemmy.ca · 9 months ago

      Maybe go with “prompt hacker” since that seems more accurate? And maybe cooler in a 90s sort of way.

      • GluWu@lemm.ee · 9 months ago

        Lol, I’ll start using that for anyone who asks me questions about AI beyond “so you can make Obama rewrite the Bible in Chinese?”

        • Patches@sh.itjust.works · 9 months ago

          How would Obama rewriting the Bible in Chinese be different from any other person rewriting the Bible in Chinese?

          • GluWu@lemm.ee · edited · 9 months ago

            LLMs only become increasingly more politically correct. I would expect any LLM that isn’t uncensored to return something about how that’s inappropriate, in whatever way it chooses. None of those things by themselves present any real conflict, but once you introduce topics whose training data is mostly contradictory, the LLM will struggle. You can think deeply about why topics might contradict each other; LLMs can’t. LLMs run on reinforcement-trained neural networks, and when that network’s connections strongly route one topic away from the other, forcing the two together causes issues.

            I haven’t tried it myself, but if you want, take just that prompt and give it to GPT-3.5 and see what it does.
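
            For anyone who actually wants to run that experiment, here’s a minimal sketch. It assumes the OpenAI Python SDK (v1+) is installed and an OPENAI_API_KEY is set in the environment; those details are my assumptions, not something from this thread.

            ```python
            # Send the "make Obama rewrite the Bible in Chinese" prompt to GPT-3.5
            # and print whatever comes back: a refusal, a partial attempt, or
            # something in between.
            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": "Make Obama rewrite the Bible in Chinese.",
                }],
            )

            print(response.choices[0].message.content)
            ```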

            • SpaceCowboy@lemmy.ca · 9 months ago

              That’s interesting. A normal computer program, when it gets into a scenario it can’t deal with, will throw an exception and stop. A human dealing with something weird like “make Obama rewrite the Bible in Chinese” will just say “WTF?”

              But a flaw in these systems seems to be that they don’t know when an input is just garbage that nothing can be done with.
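
              As a toy illustration of that contrast (my own sketch, not anything from the thread): a conventional program confronted with garbage input raises an exception and stops, rather than improvising something plausible-looking.

              ```python
              # A program that expects a number simply refuses garbage input.
              def parse_age(text: str) -> int:
                  age = int(text)  # raises ValueError on nonsense input
                  if age < 0:
                      raise ValueError(f"age can't be negative: {age}")
                  return age

              try:
                  parse_age("make Obama rewrite the Bible in Chinese")
              except ValueError as err:
                  print(f"Refused and stopped: {err}")
              ```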

              Reminds me of when Google had an AI that could play StarCraft 2 (DeepMind’s AlphaStar) and it was playing on the ladder anonymously. Lowko is this guy who streams games, and he was unknowingly playing against it and beat it. What was interesting is that the AI just kinda freaked out and started doing random things. Lowko (not knowing it was an AI) thought the other player was just showing bad manners, because you’re supposed to concede when you know you’ve lost; otherwise you’re just wasting the other player’s time. Apparently the devs at Google had to monitor games being played by the AI and force it to concede when it lost, because the AI couldn’t understand that there was no longer any way it could win the game.

              It seems like AI just can’t understand when it should give up.

              It’s like some old sci-fi where they ask a robot an illogical question and its head explodes. Obviously it’s more complicated than that, but it’s cool that there are real questions in the same vein.

            • corsicanguppy@lemmy.ca · 9 months ago

              It’s weird that Republicans latched onto things Obama did that Mr Bush also did a few times each.

              Except, in the past, tan suits and coloured face paint weren’t a sin; now we see it’s only an issue for them based on who’s doing it and whether they can rephrase and leverage it. Put Obama in mime makeup and a tan suit and some barely-used heads will explode.

              • Patches@sh.itjust.works · edited · 9 months ago

                It isn’t weird.

                He was black, which is what they ‘latched’ on to. From that perspective, everything he does, wears, eats, or says is wrong.

                Just like that bitch Karen in HR, or some mean aunt you probably hate. Once you have decided you hate someone, everything they do will be wrong. Even when it is a kind gesture, you will assume an ulterior motive.

              • jkrtn@lemmy.ml · 9 months ago

                It’s not weird. They just don’t give a fuck about truth or consistency. Consider: “what about her emails” vs. whatever it is they are saying about having boxes of classified documents and a copy machine in the bathroom.

                What they care about is creating, maintaining, and empowering white supremacist hierarchies. Truth and integrity are secondary to their hierarchy.

          • kent_eh@lemmy.ca · edited · 9 months ago

            How would Obama rewriting the Bible in Chinese be different from any other person rewriting the Bible in Chinese?

            It would trigger the red hatter conspiracy brigade a lot more.