• OfCourseNot@fedia.io · 22 days ago

    ‘AI isn’t reliable, has a ton of bias, confidently tells lies, can’t count or do basic math, just parrots whatever is fed to it from the internet, wastes a lot of energy and resources and is fucking up the planet…’. When I see these criticisms of AI I wonder if it’s the critics’ first day on the planet and they haven’t met humans yet.

    • RushLana@lemmy.blahaj.zone · 21 days ago

      … You are deliberately missing the point.

      When I’m asking a question I don’t want to hear what most people think, but what people who are knowledgeable about the subject think, and LLMs fail at that by design.

      LLMs don’t just waste a lot, they waste at a ridiculous scale. According to Statista (2024), training GPT-3 was responsible for 500 tCO2. All for what? Having an automatic plagiarism bias machine? And before the litany of “it’s just the training cost, after that it’s ecologically cheap”: tell me how your LLM will remain relevant if it’s not constantly retrained on new data?

      LLMs don’t bring any value: if I want information I already have search engines (even if LLMs have degraded the quality of the results), if I want art I can pay someone to draw it, etc.

      • yetAnotherUser@discuss.tchncs.de · 21 days ago

        500 tons of CO2 is… surprisingly little? Like, rounding error little.

        I mean, one human exhales ~400 kg of CO2 per year (according to this). Training GPT-3 produced as much CO2 as 1250 people breathing for a year.
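
        A quick back-of-the-envelope check of that ratio (a minimal sketch in Python; both figures are the ones quoted in this thread, not independently verified):

        # ~500 tCO2 to train GPT-3, per the Statista figure cited above
        training_emissions_kg = 500 * 1000
        # ~400 kg of CO2 exhaled per person per year, per the figure above
        breathing_kg_per_person_year = 400
        # ratio: how many person-years of breathing the training run equals
        print(training_emissions_kg / breathing_kg_per_person_year)  # 1250.0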

        • RushLana@lemmy.blahaj.zone · 21 days ago

          It seems so little because it doesn’t account for data-center construction costs, hardware production costs, etc. One model costing as much as 1250 people breathing for a year is enormous to me.

        • OfCourseNot@fedia.io · 21 days ago

          I don’t know why people downvoted you. It is surprisingly little! I checked the 500-ton number, thinking it could be a typo or a mistake, but found the same figure.

      • OfCourseNot@fedia.io · 21 days ago

        > You are deliberately missing the point.

        I kind of am tbh

        > When I’m asking a question I don’t want to hear what most people think, but what people who are knowledgeable about the subject think, and LLMs fail at that by design.

        I mostly use it to ask about something I can describe but whose word or name I don’t know or can’t remember. But I’ve also asked it more specialized and even pretty niche questions, some simply as a test, and it’s done pretty well.

        > All for what? Having an automatic plagiarism bias machine?

        Coming back to the point of the comment, you could argue that people aren’t much more than ‘automatic plagiarism bias machines’ either.

        • RushLana@lemmy.blahaj.zone · 21 days ago

          Search engines were already doing great at giving you answers from a description. For specialized questions, the wiki or documentation dedicated to your field will be much better. You have no guarantee that LLMs won’t generate garbage, so you have to check the sources (if they exist); just read the sources instead, it’ll waste less time and energy.

          Humans are much more than ‘automatic plagiarism bias machines’… don’t you dare equate an autocorrect with life.

          • OfCourseNot@fedia.io · 21 days ago

            I do dare to equate them. Sorry if it offends you, but I’m not religious or spiritual. Thought, reasoning, consciousness… are just the product of the computing power of the human meatware. There’s no reason that computing couldn’t be done by electronics instead of chemistry. Are we there yet? I don’t think so. Will we get there? Who knows. Equating LLMs to an autocorrect is like equating a lightbulb to a modern computer.

            And to answer your first two paragraphs: no, they aren’t doing great at that. In fact, search engines have been going to shit in the last few years. And no, it’s not AI’s fault; I’d say it’s SEO’s and upper management’s.

            • Cherries@lemmy.world · 20 days ago

              “I don’t believe in magical thinking. I just believe that GenAI will magically develop consciousness one day.”

            • RushLana@lemmy.blahaj.zone · 20 days ago

              I’m not arguing on religious or spiritual grounds; I don’t follow any religion and I’m not into spiritual stuff.

              Reducing people to what they can produce is where my problem lies. LLMs are word-prediction machines by design; that’s why I called them glorified autocorrect.

              I won’t waste any more of our time: you have your opinion, I have mine, so let’s leave it at that.

              • OfCourseNot@fedia.io · 19 days ago

                I just think many people are underestimating a very powerful tool. Taking labor from humans is the good part! Oppressing, controlling, spying… those are the dangerous and scary parts. Even the ads! Imagine the level of personalization they can achieve.

                But anyway, have a good day!

    • Jesus_666@lemmy.world · 21 days ago

      LLMs use even more resources to be even more wrong even faster. That’s the difference.

      • HalfSalesman@lemm.ee · 21 days ago

        IDK, I’m pretty sure it’d use more resources to have someone follow you around answering your questions to the best of their ability than to use some electricity.

      • morrowind@lemmy.ml · 21 days ago

        AIs use a lot fewer resources right now, but humans are also constantly doing a hundred other things beyond answering questions.

    • Initiateofthevoid@lemmy.dbzer0.com · 21 days ago

      Everyone with a brain, everywhere: “Don’t you hate how the news just asks and amplifies the opinions of random shmucks? I don’t care what bob down the street thinks about this, or Alice on Xhitter. I want a goddamn expert opinion”

      Inchoate LLM apologists: “it’s just doing what typical humans already do!”

      … We know. That’s the problem. It’s bringing us down to the lowest common denominator of discourse. There is a reason we build entire institutions dedicated to the pursuit of truth - so that a dozen or a hundred people smarter or better informed than us can give it the green light before the average idiot ever sees the answer.

    • The_Decryptor@aussie.zone · 21 days ago

      Why is that desirable though?

      We already had calculators, why do we need a machine that can’t do math? Why do we need a machine that produces incorrect information?

      • Aux@feddit.uk · 21 days ago

        We need to replace those humans who are less reliable and less correct.