• sugar_in_your_tea@sh.itjust.works · 6 days ago (edited)

    Or you could learn to use the tool better and ask better questions. It’s pretty decent at some things, absolutely terrible for others.

    Asking to explain something like shorting a stock is one of the better uses, since there are tons of relevant posts explaining exactly that.

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

        Why not both? Use the LLM to refine what you’re looking for, and better sources for details. It’s like skimming summaries in search results before picking a web page, or asking friends/family before looking up actual sources.

        LLMs are great for interactive information retrieval and for figuring out what information you actually need. They don’t do everything, but they do a lot more than detractors claim and a lot less than proponents claim. Find that happy middle ground and it’ll be a great tool.

        • ninjabard@lemmy.world · 6 days ago

          LLMs are great for tech bros and CEOs who want maximum profit with minimum effort all while stealing work that isn’t theirs and poisoning the planet at the same time.

          • sugar_in_your_tea@sh.itjust.works · 6 days ago

            They’re also great for non-tech bros who just want to get stuff done, and they don’t have to poison the planet at all. We run a few in our office on a Mac Mini, which sips power.

            Those tech bros and CEOs are mostly fleecing investors, so I guess I’m not very concerned about them.

          • sugar_in_your_tea@sh.itjust.works · 6 days ago

            So asking family and friends about things you don’t know isn’t worth it? Reading personal blogs isn’t worth it?

            As long as you go in knowing what it offers, it can be a great tool, like those little summaries on search results. I use them to find information when a search engine isn’t going to be a great option, or at my day job to generate code for a poorly documented library. I tend to use them a handful of times per week, if that, and only for things that I know how to verify (i.e. manually test, confirm with official sources, etc.).

            • pelespirit@sh.itjust.worksM · 6 days ago

              As I said in another comment, searching for what’s right or wrong takes a lot of time. Using it as a tool that is untrustworthy isn’t great. For simple shit, maybe? But then simple shit is covered by most other places.

              • sugar_in_your_tea@sh.itjust.works · 6 days ago

                Then you’re using it wrong.

                LLMs shouldn’t be used to search for what’s right or wrong, they should be used to quickly get a wide breadth of information on a given topic or provide a starting point on code or text projects that you can refine later.

                For example, I wanted to use a new library in a project, but the documentation was a bit cryptic and the examples weren’t quite right either. So I asked an LLM tuned for coding tasks to generate example code for our use case (I described the features I wanted), and the code worked after some tweaking (as with any example). I used the LLM because I knew there would be a bunch of public projects using this library across a variety of use cases, and that the LLM would probably do a reasonable job of picking out a decent one to build from, and I was right.

                On another topic, I needed to do research on an unfamiliar topic, so I asked the LLM to provide a few examples in that domain w/ brief descriptions. That generated a ton of useful keywords that I used to find more reputable sources (keywords that would’ve taken hours of searching to generate), so I was able to quickly find reliable information starting from a pretty vague notion of what I wanted to learn.

                LLMs have a lot of limitations, but if they’re used to accomplish common tasks quickly, they can be incredibly useful. I don’t think they’ll replace anyone’s job (unless your job is already pointless), traditional search engines (as much as Google wants it to), or anything like that, but they are useful tools that can make some of the more annoying parts of my job more efficient.
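That example-code workflow can be run against any OpenAI-compatible endpoint, including a local one. A minimal sketch — the endpoint URL, model name, and the `somelib` library named in the prompt are all hypothetical placeholders, not real projects:

```python
import json
import urllib.request

def build_example_request(library: str, features: list[str],
                          model: str = "local-coder") -> dict:
    """Build a chat-completion payload asking for example code.

    `model` is a placeholder for whatever model your local
    OpenAI-compatible server exposes.
    """
    prompt = (
        f"Generate a minimal, runnable example using the {library} library. "
        "It should demonstrate: " + "; ".join(features) + ". "
        "Prefer patterns seen in real public projects."
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature: favor common, conventional code
    }

def ask(endpoint: str, payload: dict) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

payload = build_example_request("somelib", ["async uploads", "retry on failure"])
# ask("http://localhost:8080/v1/chat/completions", payload)  # hypothetical URL
```

The returned code still needs the manual verification described above; the sketch only covers getting a starting point.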

                • zero_spelled_with_an_ecks@programming.dev · 6 days ago

                  LLMs have flat out made up functions that don’t exist when I’ve used them for coding help. Was not useful, did not point me in a good direction, and wasted my time.

                  • vivendi@programming.dev · 6 days ago

                    You need to actively have the relevant code in context.

                    I use it to describe code from shitty undocumented libraries, and my local models can explain the code well enough in lieu of actual documentation.
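One rough sketch of that “code in context” step (the file name and snippet below are made up for illustration): inline the undocumented source into the prompt so the model explains the actual code instead of hallucinating an API:

```python
def build_explain_prompt(sources: dict[str, str], question: str) -> str:
    """Inline source files into the prompt so the model answers from
    the real code rather than guessing at functions that don't exist."""
    parts = []
    for name, code in sources.items():
        parts.append(f"--- {name} ---\n{code}")
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

# Hypothetical usage: in practice you'd read the undocumented
# module straight from disk before building the prompt.
prompt = build_explain_prompt(
    {"core.py": "def frobnicate(x):\n    return x * 3"},
    "What does frobnicate do and what does it return?",
)
```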

                  • sugar_in_your_tea@sh.itjust.works · 6 days ago

                    Sure, they certainly can hallucinate things. But some models are way better than others at a given task, so it’s important to find a good fit and to learn to use the tool effectively.

                    We have three different models at work, and they work a lot differently and are good at different things.

          • Grimy@lemmy.world · 6 days ago (edited)

            Not everyone has a bank of experts waiting for whatever questions they have.

            If you have better options, fine. But if you’re spending two hours googling instead of asking ChatGPT to spend five minutes finding you mostly the same links, you’re just wasting time. It’s easy to have it pull sources that you can quickly verify and then base your actual documentation on. FYI, I personally do both (I search while it does).

            • pelespirit@sh.itjust.worksM · 6 days ago

              Except these “experts” are wrong a lot, so you can’t trust them. It’s the confidently wrong that’s problematic.

              • Grimy@lemmy.world · 6 days ago (edited)

                Yes, you are still expected to participate and verify what is said. I also don’t copy-paste stuff from websites without verification, since god knows the internet in general isn’t always right either.

                It’s a productivity tool meant to help you, not do the job for you.

                • pelespirit@sh.itjust.worksM · 6 days ago

                  Except I’m constantly trying to figure out what’s right or wrong. At least Wikipedia has people fighting about the truth; an LLM just states incorrect shit as truth and then stares at you.

                  • Grimy@lemmy.world · 6 days ago (edited)

                    1. Ask ChatGPT the question, outlining your need for sources and direct quotes. (30 secs)

                    2. The bot searches. (30 secs to 10 minutes depending on whether deep research is used. This is free time you can spend doing something else productive, even the same or similar tasks on your end.)

                    3. Click the links and ctrl-F to find the quote or keywords if it’s a large document. Verify it isn’t bullshit. (1 min)

                    4a. It is valid: the link and relevant quotes are added to a work document to be used later. (1 min)

                    4b. It is not valid: the legwork must be done yourself. (10–20 minutes, maybe more)
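Step 3’s ctrl-F check can even be partially automated. A small sketch that verifies a quoted passage really appears in the fetched document, normalizing whitespace and case so line wrapping and PDF extraction don’t cause false negatives:

```python
import re

def quote_appears(quote: str, document: str) -> bool:
    """True if `quote` occurs in `document`, ignoring differences in
    whitespace and letter case (both get mangled by extraction)."""
    def norm(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip().lower()
    return norm(quote) in norm(document)

doc = "Short selling lets an investor\n  profit when a price FALLS."
assert quote_appears("profit when a price falls", doc)
assert not quote_appears("guaranteed profit", doc)
```

Anything that fails this check falls through to step 4b and gets researched by hand.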

                    Maybe it’s because I’m coming from a research perspective, where you need to put sources on everything. I would never take something ChatGPT gave me at face value and dump it into a doc. That said, I feel like the argument boils down to “since the tool can be used stupidly, I won’t try to use it properly.”

                    And there are many uses for a variety of things. For example, I have it build summaries of papers when I’m compiling a bank of them. I’ve just finished reading the paper, so I can verify and modify the summary. I could write it myself, but I don’t want to bother figuring out the best way to pack the most info into one paragraph. ChatGPT already writes better hyper-condensed blocks of text than I do anyway.

                    It’s good at making tables too, and it’s hard for it to make mistakes when I’m giving it all the data in the first place.
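Since you supply all the data yourself, the table step is cheap to verify mechanically too. A sketch (with made-up example rows) that checks every value you provided actually shows up as a cell in the generated markdown table:

```python
def table_contains_data(md_table: str, rows: list[dict]) -> bool:
    """True if every value from the source rows appears as a cell in
    the model-generated markdown table. A cheap sanity check, not a
    full structural comparison."""
    cells = {c.strip()
             for line in md_table.splitlines()
             for c in line.strip().strip("|").split("|")}
    return all(str(v) in cells for row in rows for v in row.values())

# Hypothetical source data and a table the model might return from it.
rows = [{"paper": "Smith 2021", "n": 40}, {"paper": "Lee 2023", "n": 12}]
table = (
    "| paper | n |\n"
    "|---|---|\n"
    "| Smith 2021 | 40 |\n"
    "| Lee 2023 | 12 |"
)
assert table_contains_data(table, rows)
```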

                  • sugar_in_your_tea@sh.itjust.works · 6 days ago

                    What’s the benefit of example code in a project if you have to change it all anyway? What’s the benefit of an article summary if you end up reading the article afterward? What’s the benefit of a college course that just teaches you how to learn on your own (i.e. most of them)?

                    LLMs are great for some things, terrible for others, just like any tool. Use them for what they’re good at (generating example code, getting an intro to a topic, etc) and not what they’re poor at (greenfield projects, hard questions, etc). As you use the tool more, you’ll get a feel for what it’s good at and what it’s not.

                  • Grimy@lemmy.world · 6 days ago (edited)

                    How did you get “redo” from “verifying sources”? Even if only 10% is good in any given case (very rare, but it is better at some subjects than others), that’s still 10% you don’t have to do. What we currently have is also the worst it’s ever going to be.

                    Keep burying your head in the sand, but you’ll just become the laughing stock of your office. You guys are aiming to become this decade’s “boomer that types with one finger.”

                    Just the amount of time I save when I ask it to build me tables I can drop into documents is worth it. It’s my information; I just don’t have to copy-paste it cell by cell into Excel. People who are anti-AI in professional contexts are actually nuts.

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

        It’s good if the answers exist but you don’t know how to find them. They’re like search engines that can generate related terms or regurgitate common answers.

        I find LLMs help me use existing search engines a lot better, because I can usually get one to spit out domain-specific terms for a more general query.
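That keyword trick is easy to make repeatable. A sketch of the reusable parts (the actual model call is omitted; the example reply below is invented): a prompt that demands a machine-parseable answer, and a parser for it:

```python
def keyword_prompt(vague_query: str) -> str:
    """Ask for search terms only, comma-separated, so the reply is
    trivially parseable instead of a wall of prose."""
    return (
        f"I want to research: {vague_query}\n"
        "Reply with 5-10 domain-specific search keywords, "
        "comma-separated, and nothing else."
    )

def parse_keywords(reply: str) -> list[str]:
    """Split a comma-separated model reply into clean search terms,
    dropping empty fragments from trailing commas."""
    return [t.strip() for t in reply.split(",") if t.strip()]

# A reply like this (invented here) then feeds a normal search engine.
terms = parse_keywords("short interest, margin call , locate requirement,")
```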

    • Dubiousx99@lemmy.world · 6 days ago

      Forget trying to say anything positive about LLMs or AI. The Lemmyverse downvotes any positive comment related to AI.

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

        I’m well aware, but I think clearing misconceptions is valuable, and since I’m getting a fair amount of votes in both directions and discussion, hopefully that means people have read and considered my point.

        I’m not going to recommend people use LLMs for everything or even claim that they’re perfect for everyone (in fact, I don’t like using them), just that they do have valid uses and, if it comes up, can be used efficiently (i.e. not burn down the planet). I use them a handful of times in a given week, often less, and mostly to get more keywords to search on a traditional search engine.

        So yeah, it’s whatever. I very much dislike both extremes here, and I’m trying to drive home the point that there is a happy middle ground.