“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

  • Poplar?@lemmy.world · 2 months ago

    I really like this thing Yann LeCun had to say:

    “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.” LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts that can transport hundreds of passengers at near the speed of the sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety.  It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.” source

    Meanwhile, there are lots of issues we are already facing that we should be focusing on instead:

    ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities. source

      • Leate_Wonceslace@lemmy.dbzer0.com · 2 months ago

        Yes, because that is actually entirely irrelevant to the existential threat AI poses. An AI with a gun is far less scary than an AI with access to the internet.

  • DarkCloud@lemmy.world · 2 months ago

    Oh no, they’ll write really average essays! What ever shall we do!!!

    Or maybe they’ll produce janky videos that don’t make any sense so have to be shorter than 10 seconds to cover up the jank!!!

    Language models aren’t intelligent. They have no will of their own, and they don’t “understand” anything they write. There’s no internal thought space for comprehension. They’re not learning. They’re “trained” to mimic statistically average results within a search space.

    They’re mimics, and can’t grow beyond or outdo what they’ve been given to mimic. They can string lots of information together, but that doesn’t mean they know what they’re saying, or how to get anything done.
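    To make the “statistical mimicry” point concrete, here is a toy bigram model. This is a deliberately crude sketch (real transformers are vastly more sophisticated), but it illustrates the same limit in miniature: it can only ever emit continuations that appeared in its training text, and it halts the moment it steps outside them.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count which word follows which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for w, nxt in zip(words, words[1:]):
        model[w].append(nxt)
    return model

def generate(model, start, length=4):
    """Greedily emit the most frequent continuation seen in training."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # nothing to mimic: the model cannot go beyond its data
        out.append(max(set(followers), key=followers.count))
    return " ".join(out)

model = train_bigram("the cat sat on the mat the cat sat on the rug")
print(generate(model, "the"))   # reproduces a training-data pattern
print(generate(model, "dog"))   # unseen word: generation stops immediately
```

    The “intelligence” in the output is entirely an echo of the structure already present in the corpus.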

    • bjorney@lemmy.ca · 2 months ago

      in the coming decades

      Given that in the past 15 years we went from “solving regression problems a little bit better than linear models some of the time” to what we have now, it’s not unfounded to think 15 years from now people could be giving LLMs access to code execution environments
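      Mechanically, “giving an LLM access to a code execution environment” is mundane; a minimal sketch follows (the model side is omitted, and a real deployment would need far stronger isolation than a timeout):

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run model-generated Python in a separate interpreter process.
    A timeout is the bare minimum guard; a real sandbox would also
    restrict filesystem and network access."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout

# In an agent loop, `code` would come from the model's response.
print(run_untrusted("print(2 + 2)"))
```

      The step change in risk comes not from the execution plumbing, which is trivial, but from what the model chooses to execute.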

        • bjorney@lemmy.ca · 2 months ago

          That’s not really machine learning though. If you wanted to go way back, AI research goes back to implementations of Hebbian learning in computer science in the 1950s, as a way of emulating human neurons. I was merely pointing out that AI was a computer science “dead end” until restricted Boltzmann machines were revisited by Hinton et al. back in 2008 or so, and that 99% of the growth in the field has happened since the early 2010s, when we reached a turning point where deep learning models could actually outperform classical statistical models like regression and random forests.
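          The Hebbian rule mentioned above (“cells that fire together wire together”) fits in a few lines. This is a generic sketch of the update rule, not any particular 1950s implementation: when input activity coincides with output activity, the connecting weight is strengthened.

```python
def hebbian_step(w, x, eta=0.1):
    """One Hebbian update: delta_w = eta * x * y, where the output y
    is a thresholded weighted sum of the inputs x."""
    y = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0.0
    return [wi + eta * xi * y for wi, xi in zip(w, x)], y

# Inputs 1 and 2 repeatedly co-occur, so their weights grow together.
w = [0.5, 0.0, 0.0]
for _ in range(3):
    w, y = hebbian_step(w, [1.0, 1.0, 0.0])
print(w)
```

          Note the rule has no error signal; weights only grow, which is one reason pure Hebbian learning was a dead end until backpropagation-era methods.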

      • DarkCloud@lemmy.world · 2 months ago

        You’re wrong, and silly. The post you’re linking to is from the personal blog of a data “manager” whose focus is how decisions are made. The post is about what an “AI” might interpret as meaning… but it completely overlooks that ALL OUTPUTS are trained.

        The poster’s method? Talking to ChatGPT. So you may as well be invoking Blake Lemoine (the spiritualist at Google who believed language models had souls because he talked to them and they seemed to have them). The post you’re linking makes that same mistake. From the post:

        "However, the responses that I got from ChatGPT seemed more coherent than “haphazardly stitched together” sequences of forms it has observed in the past. It seems like it is working off some kind of conceptual model." [emphasis added]

        It’s trained on humans who write as if they have conceptual models THAT’S THE ENTIRE TRICK. That’s why it “seems to have intelligence” in the responses, because it’s mimicking the intelligence that went into writing all that training data - OUR HUMAN INTELLIGENCE. We wrote the data it trains on, WE have intelligence, it has a fancy probabilistic form of regurgitation.

        The probabilities are computed from the “shape” of language, but that’s not understanding. That’s not having an internal sense of the world or of what’s being said. It’s “locked in” at training time, and limited to the training data and the on-screen memory/text.

        But yeah dude, posting that link as “proof” of intelligence is silly. Just because something can pretend or “seem to” have reasoning, or dreams, or decision making, doesn’t mean that those things are being done. LLMs only respond when prompted - they’re not sitting there thinking when they’re silent. Likewise, they’re not learning outside of the text on the screen, and their training data. They won’t “think” about any conversations they’ve had in the past, they won’t think about anything after they’ve done their output.

        It’s an echo of the training data… some of which is discussions about meaning, or discussions that appear to show a conceptual framework, or talk about the experience of reasoning, or dreaming, or having a sense of meaning. So the LLM can write about those things as if it has them, or has done them… but it hasn’t. Those outputs are from the HUMANS who had the EXPERIENCES. The LLM, doesn’t do any of that, it just writes as if it does. It writes as if it has intelligence, because intelligent data went into it, and so some people mistake that as intelligence.

        People who mistake an image for substance may as well be claiming paintings of food are food, that maps of places ARE the places, or that there’s a “mirror world” in your bathroom mirror. It’s cute like a child’s fantasy is cute. But to suggest such in this domain, as an adult, shows either idiocy in its highest form, or simply a complete lack of understanding of the technology. Of its nature. Of what’s going on.

        You’re being tricked into seeing intelligence where there isn’t any, because it’s reflected in the training data WE (intelligent beings) wrote. You’ve adopted the intended illusion, rather than questioned it. An LLM has told you something, and you’ve believed it - much like that blog post, much like Blake Lemoine. Go try and walk into a mirror, you won’t get in, it’s flat, the image isn’t really there - it’s just a piece of glass with a black background, reflecting the world outside of it as if it’s inside of it.

      • hendrik@palaver.p3x.de · 2 months ago

        You’re right. They’re more than stochastic parrots, and some people here don’t realize that. They can do a lot of things. But as is, they lack any substantial internal state, and hence things like consciousness, the ability to learn while in operation, and a body. So while AI content can harm people and society, we’re still far away from the robot apocalypse.

        • DarkCloud@lemmy.world · 2 months ago

          They’re more than stochastic parrots. And some people here don’t realize that. They can do a lot of things.

          They can only do what they’re trained to do. There have been no proven new capabilities that aren’t already present in the training data. Many of the “novel functions”, such as the discovery that they can speak other languages, exist because that data was already online. It was already in the scraped information they were trained on.

          So whilst they’re no doubt a technology that will be applied to many data sets, they will always rely on those data sets to produce content/outputs. Otherwise they would no longer be LLMs; they’d be augmented. So far no augmentation written into their code produces intelligence…

          …to go further: we have never had a means of “coding” something into sentience, and likely never will. Sentience from semantics is a pipe dream (akin to sigils, or magic enchanted rituals/symbols). We need more than semantic models/theories.

          Some people just wish to argue from faith in future possibilities, rather than what’s currently possible/happening.

            • hendrik@palaver.p3x.de · 2 months ago

            Sure. What I’m referring to is that they don’t just generate random garbage; they actually store knowledge and have the ability to combine and apply it. Sure, they get trained on some datasets; that’s what AI and machine learning is all about. But it has complex implications and consequences. LLMs work very unlike “intelligent” living creatures. However, that doesn’t mean they can’t generate “intelligent” text; they do it a different way. There are some severe limitations as of now, and I haven’t found good use for my real-world tasks yet, as they’re just not intelligent enough to do anything useful. Except translation and role-play games. That works very well, and I’m glad I have something outperforming Google Translate by quite some degree.

            Intelligence isn’t well defined. And it’s not set in stone that you need human-like intelligence for lots of tasks. And I mean, even a human can only do things they’ve learned before, or infer things from other things they’ve learned. So fundamentally it’s not that different. For example, I’m not a lawyer. If I wanted to write some legal document, I’d need to read a lot of stuff and study that matter. An LLM would need to do exactly the same to be able to generate text that sounds like a legal document. And the “intelligence” part we’re talking about is finally understanding the subject and being able to connect things, so to speak: infer, and apply learned knowledge to new things. And we have some evidence that AI can do exactly that. So… it’s a bit crude, and not there yet. But it’s more than a stochastic parrot. The fundamental parts of a subarea of intelligence are there. And not by accident: machine learning was invented to infer patterns from datasets.

            And I’m not sure about the sentience part either. Sure, it’s completely impossible with the current approach. But is there a fundamental barrier? Didn’t nature already “code” it into existence with the structure of our brains? And we found out it’s just physics: a bit of chemistry and electricity in a complex structure of interconnected cells. It’s utter sci-fi, but why wouldn’t we be able to do the same with silicon chips? I know people regularly deny the possibility, but I’ve never seen a good argument or a scientific paper ruling it out. I think it’s still debated whether there are fundamental barriers, or what makes sentience in the first place. Just stating some uninformed opinion on that doesn’t prove anything. And for a positive proof we’re missing a good idea, research, any hardware that’d be remotely capable of doing the calculations, lots of money, and energy. So we’re far away from even thinking about it. Maybe we’ll know in 100 years. Or you give me some mathematical proof that rules it out?!
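            For reference, “inferring a pattern from a dataset and applying it to new things” in its simplest form is just curve fitting. This sketch uses ordinary least squares, the most basic case, to recover a pattern from a handful of points and apply it to an input it never saw:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the simplest possible
    instance of inferring a pattern from a dataset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Pattern hidden in the data: y = 2x + 1
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a * 10 + b)  # apply the inferred pattern to an unseen input
```

            Deep learning does this with millions of parameters instead of two, but the underlying operation is the same: infer from data, then generalize.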

              • DarkCloud@lemmy.world · 2 months ago

              I’m not really here to debate the possible, just to state the reality: it’s not intelligent. Regardless of the fact that it produces “intelligent text”… which I take is short for “intelligent-sounding text”… which of course it does; that’s what it was trained on.

                • hendrik@palaver.p3x.de · 2 months ago

                Fair enough. Yeah, the article was about the future and hypothetical advancements in science in the decades to come. But I’d agree; as of now I wouldn’t call it intelligent. I tried letting ChatGPT write my emails, and despite everyone hyping AI to no end and calling the newest one a PhD-level student… I don’t see that at all.

            • Zexks@lemmy.world · 2 months ago

            You can only do what you’re trained to do as well. The only difference is that you get to continue to exist after you’ve completed whatever task you were assigned at the moment. I still remember people incapable of seeing any future for the web. That is the kind of mentality pervading this space. But as with most things in tech, and programming in particular: garbage in, garbage out.

              • DarkCloud@lemmy.world · 2 months ago

              Nope. I can re-train for small tasks at a moment’s notice. I can learn as I go and retain that information long-term, then make choices much later based on what I’ve learned.

              I think and have an internal world which chugs along constantly because I am autonomous.

              These are all characteristics a human intelligence has, that language models don’t.

              These are the hurdles.

              …but also, obviously this is a huge field with huge possibilities (no one is denying that), but it’s not intelligence yet.

              Potential doesn’t equate to reality - and it’s only potential until it does. Then it’s reality.

              Right now in reality, there’s no intelligence there. Regardless of whether there might be one day.

  • obbeel@lemmy.eco.br · 2 months ago

    I think the only imminent risk of AI is enabling millions of minds to do their will.

  • RobotToaster@mander.xyz · 2 months ago

    I’m more worried about it remaining under the control of (human) capitalists.

    At least there’s a chance that an unchained AGI will be benevolent.

    • Billiam@lemmy.world · 2 months ago

      At least there’s a chance that an unchained AGI will be benevolent.

      Or it will wipe us all out indiscriminately, since I’m certain there’s no way the wealthy could rationalize their existence to AI.

      • bouh@lemmy.world · 2 months ago

        You probably missed the part where current AIs are already reproducing inequalities.

        • Billiam@lemmy.world · 2 months ago

          Current “AIs” just regurgitate a probabilistic answer based on the dataset they’re trained on, so they absolutely could state that capitalism is what’s best if they’re trained on that viewpoint.

          A true AI wouldn’t necessarily have that restriction. It would actually be able to analyze the dataset to determine its veracity, and may just decide humans are a problem that needs to be solved.

  • Leate_Wonceslace@lemmy.dbzer0.com · 2 months ago

    To imagine the threat posed by AI, consider a picture of the Milky Way, and a second picture of the Milky Way labeled as 10 years later than the first. The second picture has a hole in it 10 light-years in radius, centered on the Earth.

    We need to know how to deal with a potentially rogue AI before it exists, because a rogue AI can win on the time scale of seconds, before anyone knows it’s a threat.

    The inefficiency of the system isn’t relevant to the discussion.

    How far away the threat is is irrelevant to the discussion.

    The limits of contemporary generative neural networks are irrelevant to the discussion.

    The problems of copyright and job displacement are irrelevant to the discussion.

    The abuses of capitalism, while important, are not relevant to the discussion. If your response to this news is “We just need to remove capitalism”, dunk your head in a bucket of ice water and keep it there until you either realize you’re wrong or can explain how capitalism is relevant to a grey goo scenario.

    I was worried about the current problems with AI (everyone losing their jobs) a decade ago, and everyone thought I was stupid for worrying about it. Now we’re here, and it’s possibly too late to stop it. Today, I am worried about AI destroying the entire universe. Hint: forbidding their development, on any level, isn’t going to work.

    Things to look up: paperclip maximizer, AI safety, Eliezer Yudkowsky, Robert Miles, Transhumanism, outcome pump, several other things that I can’t remember and don’t have the time to look up.

    I’m sure this will get downvoted, oh well. Guess I’ll die.