• vivendi@programming.dev · 8 hours ago

    No the fuck it’s not

    I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.

    People who think AI codes well are shit at their job

    • V0ldek@awful.systems · 3 hours ago

      In my workflow there is no difference between LLMs and fucking grep for me.

      Well grep doesn’t hallucinate things that are not actually in the logs I’m grepping so I think I’ll stick to grep.

      (Or ripgrep rather)
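
      (For the record, the two are interchangeable for this job; rg just recurses and honours .gitignore by default, and prints line numbers on its own. Hypothetical log path, obviously:)

        grep -rn 'connection refused' var/log/app/   # -r recurse, -n line numbers
        rg 'connection refused' var/log/app/         # same search, shorter incantation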

        • froztbyte@awful.systems · 1 hour ago

          (I don’t mean to take aim at you with this despite how irked it’ll sound)

          I really fucking hate how many computer types go “ugh I can’t” at regex. the full spectrum of it, sure, gets hairy. but so many people could be well served by decently learning grouping/backrefs/greedy match/char-classes (which is a lot of what most people seem to reach for[0])
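
          a minimal sketch of those four, assuming GNU grep with -E (backrefs and \b in ERE are GNU extensions, and the filenames are made up, so treat this as illustrative rather than portable):

            # char class + grouping + backref: catch doubled words like "the the"
            grep -E '\b([a-z]+) \1\b' notes.txt

            # greedy match: .* grabs as much as it can, so this spans the whole tag pair
            echo '<b>bold</b>' | grep -oE '<.*>'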

          that said, pomsky is an interesting thing that might in fact help a lot of people go from “I want $x” as a human expression of intent, to “I have $y” as a regex expression

          [0] - yeah okay sometimes you also actually need a parser. that’s a whole other conversation. I’m talking about “quickly hacking shit up in a text editor buffer in 30s” type cases here

          • swlabr@awful.systems · 1 hour ago (edited)

            Hey. I can do regex. It’s specifically grep I have beef with. I never know off the top of my head how to invoke it. Is it -e? -r? -i? man grep? More like, man, get grep the hell outta here!
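
            (for reference, the three in question; GNU grep, made-up filenames:)

              grep -i 'error' app.log   # -i: ignore case
              grep -r 'TODO' src/       # -r: recurse into a directory
              grep -e '-x' app.log      # -e: explicitly marks the pattern
                                        #     (useful when it starts with a dash)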

            • froztbyte@awful.systems · 58 minutes ago

              now listen, you might think gnu tools are offensively inconsistent, and to that I can only say

              find(1)
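
              (exhibit A, for the uninitiated: single-dash “long” options, predicates rather than flags, and -exec terminated by an escaped semicolon. a sketch, nothing more:)

                find . -name '*.log' -mtime -7 -exec gzip {} \;
                # gzip every .log file modified within the last 7 days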

              • swlabr@awful.systems · 57 minutes ago

                find(1)? You better find(1) some other place to be, buster. In this house, we use the file explorer search bar

      • vivendi@programming.dev · 2 hours ago

        Hallucinations become almost a non-issue when working with newer models, custom inference, multishot prompting, and RAG

        But the models themselves fundamentally can’t write good, new code, even if they’re perfectly factual

        • scruiser@awful.systems · 1 hour ago

          The promptfarmers can push the hallucination rates incrementally lower by spending 10x compute on training (and training on 10x the data and spending 10x on runtime cost), but they’re already consuming a plurality of all VC funding, so they can’t 10x many more times without going bust entirely. And they aren’t going to get them down to 0%: hallucinations are intrinsic to how LLMs operate, and no patch with run-time inference or multiple tries or RAG will eliminate that.

          And as for newer models… o3 actually had a higher hallucination rate because trying to squeeze rational logic out of the models with fine-tuning just breaks them in a different direction.

          I will acknowledge that in domains with analytically verifiable answers you can check the LLMs that way, but in that case, it’s no longer primarily an LLM: you’ve got an entire expert system or proof assistant or whatever that can operate independently of the LLM, and the LLM is just providing creative input.

          • swlabr@awful.systems · 1 hour ago

            We should maximise hallucinations, actually. That is, we should hack the environmental controls of the data centers to be conducive to fungi growth, and flood them with magic mushroom spores. We can probably get the rats on board by selling it as a different version of nuking the data centers.

        • Architeuthis@awful.systems · 2 hours ago

          If LLM hallucinations ever become a non-issue I doubt I’ll be needing to read a deeply nested buzzword laden lemmy post to first hear about it.

      • Blackmist@feddit.uk · 5 hours ago

        I’m guessing that if it actually worked for that, somebody would have done it by now.

        But it probably just does its usual thing of bullshitting something that looks like code, only now you’re wasting the time of maintainers as well, who have to confirm that it is bobbins.

        • Natanox@discuss.tchncs.de · 5 hours ago

          It’s already doing that; some FOSS projects regularly get weird PRs that on first glance look good, but on closer inspection are either total nonsense or riddled with bugs. Especially awful are security-related PRs, although those are never made in good faith; that’s usually grifting (throwing AI at the wall trying to cash in as many bounties as possible). The project lead of curl recently announced that anyone who posts a PR that’s obviously AI, or is made with AI, will get banned.

          Like, it’s really good as a learning tool as long as you don’t blindly believe everything it says given you can ask stuff in natural language and it will resolve possible knowledge dependencies for you that you’d otherwise get stuck on in official docs, and since you can ask contextual questions, you receive contextual answers (no logical abstraction). But code generation… please don’t.

          • V0ldek@awful.systems · 3 hours ago

            Like, it’s really good as a learning tool

            Fuck you were doing so well in the first half, ahhh,

          • froztbyte@awful.systems · 5 hours ago

            it’s really good as a learning tool as long as you don’t blindly believe everything it says given you can ask stuff in natural language

            the poster: “it’s really good as a learning tool”

            the poster: “but don’t blindly believe it”

            the learner: “how should I know when to believe it?”

            the poster: “check everything”

            the learner: “so you’re saying I should just read the actual documentation and/or source?”

            the poster: “how are you going to ask that anything? how can you fondle something that isn’t a prompt?!”

            the learner: “thanks for your time, I think I’m going to find another class”

            • Natanox@discuss.tchncs.de · 2 hours ago

              Nice conversation you had right there in your head. I assume you also took a closer look at it to get a neutral opinion and didn’t just ride one of the two waves “blind AI hype” or “blind AI hate”?

              I’ve taken a closer look at Codestral (which is locally hostable), threw stuff at it and got a sense for what it can and can’t do. The general gist is that its (Python) syntax is basically always correct; however, it sometimes messes up the actual code logic or gets the user request wrong. That makes it a good tool for questions about specific features, about how certain syntax in a language works, or for looking up potential alternative solutions for smaller code snippets. However, it should absolutely not be used to create huge chunks of your code logic; that will always backfire.
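
              (If you want to poke at it the same way, one possible route, assuming you use Ollama and that Codestral is still in its model library:)

                ollama pull codestral   # roughly 22B parameters, so expect a hefty download
                ollama run codestral "Why does this Python snippet raise UnboundLocalError? ..."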

              And since some people will read this and think I’m some AI worshipper, fuck no. They’re amoral as fuck; the only models not screwed up through their creation process are those very few truly FOSS ones. But if you hate on something you have to actually know shit about it and understand its appeal and non-hyped use cases (they do have them, even LLMs). Otherwise you’ll end up in a social corner filled with bitterness and, depending on the topic, perhaps even increasingly extreme opinions (not saying we shouldn’t smash OpenAI and other corposcum into tiny pieces, we absolutely should).

              There are technologies that are utter bullshit like NFTs. However (unfortunately?) that isn’t the case for AI. We just live in an economy that’s good at abusing everything and everyone.

              • froztbyte@awful.systems · 1 hour ago

                Nice conversation you had right there in your head

                that you recognize none of this is telling. that someone else got it, more so.

                I assume

                you could just ask, you know. since you seem so comfortable fondling prompts, not sure why you wouldn’t ask a person. is it because they might tell you to fuck off?

                I’ve taken a closer look…

                fuck off with the unrequested advertising. never mind that no-one asked you how you feel about some fucking piece of shit. oh, you feel happy that the logo is a certain tint of <colour>? bully for you, now fuck off and do something worthwhile

                That makes it a good tool

                a tool you say? wow, sure glad you’re going to replace your *spins the wheel* Punctured Car Tyre with *spins the wheel again* Needlenose Pliers!

                think I’m some AI worshipper, fuck no. They’re amoral as fuck

                so, you think there’s moral problems, but only sometimes? it’s supes okay to do your version of leveraged exploitation? cool, thanks for letting us know

                those very few truly FOSS ones

                oh yeah, right, the “truly FOSS ones”! tell me again how those are trained - who’s funding that compute? are the licenses contextually included in the model definition?

                wait, hold on! why are you squealing away like a deflating balloon?! those are actual questions! you’re the one who brought up morals!

                Otherwise you’ll end up in a social corner filled with bitterness

                I’ve met people like you at parties. they’re often popular, but they’re never fun. and I always regret it.

                There are technologies that are utter bullshit like NFTs. However (unfortunately?) that isn’t the case for AI

                citation. fucking. needed.

                  • scruiser@awful.systems · 45 minutes ago

                    Bro, sneerclub and techtakes are for sneering at bad technology and those that worship it, not for engaging in apologia for it (or worse yet, tone policing the sneering). If you don’t like it, you can ask the mods for an exit pass out (if they haven’t generously given you one already).

                  • froztbyte@awful.systems · 49 minutes ago (edited)

                    if you can’t make a good argument, that’s a you problem. if people end up poking holes in your shit and you suddenly can’t keep your incoherent nonsense together, still a you problem. but:

                    nonsensically off-the-rails

                    take your abuser bullshit and fuck right off, thanks

              • swlabr@awful.systems · 2 hours ago

                Otherwise you’ll end up in a social corner filled with bitterness

                This is a standard Internet phenomenon (I generalize) called a Sneer Club, i.e. people who enjoy getting together and picking on designated targets. Sneer Clubs (I expect) attract people with high Dark Triad characteristics, which is (I suspect) where Asshole Internet Atheists come from - if you get a club together for the purpose of sneering at religious people, it doesn’t matter that God doesn’t actually exist, the club attracts psychologically f’d-up people. Bullies, in a word, people who are powerfully reinforced by getting in what feels like good hits on Designated Targets, in the company of others doing the same and congratulating each other on it.

        • gens@programming.dev · 5 hours ago

          Yea, it’s already a problem for security bugs; LLMs just waste maintainers’ time and make them angry.

          They are useless and make more work for programmers, even on Python and JS codebases, which they are trained on the most and which are the “easiest”.

          • V0ldek@awful.systems · 3 hours ago

            Hey, Devin! Really impressive that the product best known for literally lying about all of its functionality in its release video still somehow exists and you can pay it money. Isn’t the free market great.

      • vivendi@programming.dev · 2 hours ago (edited)

        These views on LLMs are simplistic. As a wise man once said, “check yoself befo yo wreck yoself”; I thus recommend more education.

        LLM structures are overhyped, but they’re also not that simple