• vivendi@programming.dev · 40 minutes ago

    No the fuck it’s not

    I’m a pretty big proponent of FOSS AI, but none of the models I’ve ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.

    People who think AI codes well are shit at their job

  • vga@sopuli.xyz · 2 hours ago

    So how do you tell apart AI contributions to open source from human ones?

    • self@awful.systems · 41 minutes ago

      if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

  • mriswith@lemmy.world · 5 hours ago

    You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers.

    Mostly said by tech bros and startups.

    That should really tell you everything you need to know.

  • MNByChoice@midwest.social · 10 hours ago

    Damn, this is powerful.

    If AI code was great, and empowered non-programmers, then open source projects should have already committed hundreds of thousands of updates. We should have new software releases daily.

  • DarkCloud@lemmy.world · 11 hours ago

    Hot take, people will look back on anyone who currently codes, as we look back on the NASA programmers who got the equipment and people to the moon.

    They won’t understand how they did so much with so little. You’re all gourmet chefs in a future of McDonalds.

    • corbin@awful.systems · 8 hours ago

      Perhaps! But not because we adopted vibe coding. I do have faith in our ability to climb out of the Turing tarpit (WP, Esolangs) eventually, but only by coming to a deeper understanding of algorithmic complexity.

      Also, from a completely different angle: when I was a teenager, I could have a programmable calculator with 18MHz Z80 in my hand for $100. NASA programmers today have the amazing luxury of the RAD750, a 110MHz PowerPC chipset. We’re already past the gourmet phase and well into fusion.

      • whats_all_this_then@programming.dev · 2 hours ago

        This is dead on! 99% of the fucking job is digital plumbing so the whole thing doesn’t blow up when (a) there’s a slight deviation from the “ideal” data you were expecting, or (b) the stakeholders wanna make changes at the last minute to a part of the app that seems benign but is actually the crumbling bedrock this entire legacy monstrosity was built upon. Both scenarios are equally likely.

    • BlueMonday1984@awful.systems · 9 hours ago

      Hot take, people will look back on anyone who currently codes, as we look back on the NASA programmers who got the equipment and people to the moon.

      I doubt it’ll be anything that good for them. By my guess, those who currently code are at risk of suffering some guilt-by-association problems, as the AI bubble paints them as AI bros by proxy.

      • Architeuthis@awful.systems · 1 hour ago

        I think most people will ultimately associate chatbots with corporate overreach rather than with rank-and-file programmers. It’s not like decades of Microsoft shoving stuff down our collective throat made people think particularly less of programmers, or think about them at all.

  • Flax@feddit.uk · 11 hours ago

    AI isn’t bad when supervised by a human who knows what they’re doing. It’s good to speed up programmers if used properly. But business execs don’t see that.

    • froztbyte@awful.systems · 2 hours ago

      autoplag isn’t bad when supervised by a human

      even when I supervise it, it’s bad

      my god you people are a whole kind of poster and it fucking shows

        • self@awful.systems · 9 hours ago

          also, fucking ew:

          Needs to be put in it’s place like a misbehaving dog, lol

          why do AI guys always have weird power fantasies about how they interact with their slop machines

          • swlabr@awful.systems · 3 hours ago

            It’s almost as if they have problematic conceptions (or lack thereof) of exploitation and power dynamics!

        • self@awful.systems · 9 hours ago

          given your posts in this thread, I don’t think I trust your judgement on what less annoying looks like

        • snooggums@lemmy.world · 10 hours ago

          Google used to return helpful results that answered questions without needing to be corrected before it started returning AI slop. So maybe that is true now, but only because the search results are the same AI slop as the AI.

          For example, results from Stack Overflow generally include some discussion about why a solution addressed the issue, which provides extra context for why you might use it or do something else instead. AI slop just returns a result that may or may not be correct, but it’s presented as a solution without any context.

          • Honytawk@lemmy.zip · 8 hours ago

            Google became shit not because of AI but because of SEO.

            The enshittification was going on long before OpenAI was even a thing. Remember when we had to append “reddit” to searches just to get actual results instead of some badly written bloated text?

              • froztbyte@awful.systems · 2 hours ago

                this is actually the correct take - it is both well documented (prabhakar raghavan, look him up), and the exact mechanics of how they did it were detailed in documents surfaced in one of the lawsuits that google recently lost (the ones that found them to be a monopoly)

          • Feyd@programming.dev · 9 hours ago

            The funny thing about stack overflow is that the vocal detractors have a kernel of truth to their complaints about elitism, but if you interact with them enough you realize they’re often the reason the gate keeping is necessary to keep the quality high.

          • Flax@feddit.uk · 10 hours ago

            Stack Overflow tended to give you highly specialised examples that wouldn’t suit your application. It’s easier to just ask an AI to write a simple loop for you whenever you forget a bit of syntax.

            • froztbyte@awful.systems · 2 hours ago

              wow imagine needing to understand the code you’re dealing with and not just copypasting a bunch of shit around

              reading documentation and source code must be an excruciating amount of exercise for your poor brain - it has to even do something! poor thing

            • scruiser@awful.systems · 8 hours ago

              You’ve inadvertently pointed out the exact problem: LLM approaches can (unreliably) manage boilerplate and basic stuff but fail at anything more advanced, and by handling the basic stuff they give people false confidence that leads to them submitting slop (that gets rejected) to open source projects. LLMs, as the linked pivot-to-ai post explains, aren’t even at the level of occasionally making decent open source contributions.

            • Feyd@programming.dev · 9 hours ago

              Man, I remember Eclipse doing code completion for for-loops and other common snippets in like 2005. LLM riders don’t even seem to know what tools have been in use for decades, and think using an LLM for these things is somehow revolutionary.

              • self@awful.systems · 9 hours ago

                the promptfondlers that make their way into our threads sometimes try to brag about how the LLM is the only way to do basic editor tasks, like wrapping symbols in brackets or diffing logs. it’s incredible every time

  • BarrierWithAshes@fedia.io · 11 hours ago

    Man trust me you don’t want them. I’ve seen people submit ChatGPT generated code and even generated the PR comment with ChatGPT. Horrendous shit.

    • Hasherm0n@lemmy.world · 5 hours ago

      Today the CISO of the company I work for suggested that we should get qodo.ai because it would “… help the developers improve code quality.”

      I wish I was making this up.

      • Rayquetzalcoatl@lemmy.world · 1 hour ago

        My boss is obsessed with Claude and ChatGPT, and loves to micromanage. Typically, if there’s an issue with what a client is requesting, I’ll approach him with:

        1. What the issue is
        2. At least two possible solutions or alternatives we can offer

        He will then, almost always, ask if I’ve checked with the AI. I’ll say no. He’ll then send me chunks of unusable code that the AI has spat out, which almost always perfectly illuminate the first point I just explained to him.

        It’s getting very boring dealing with the roboloving freaks.

    • ImplyingImplications@lemmy.ca · 6 hours ago

      The maintainers of curl recently announced that any bug report generated by AI needs a human to actually prove it’s real. They cited a deluge of AI-generated reports claiming to have found bugs in functions and libraries that don’t even exist in the codebase.

      • froztbyte@awful.systems · 2 hours ago

        you may find, on actually going through the linked post/video, that this is in fact mentioned in there already

  • snooggums@lemmy.world · 11 hours ago

    As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

    This is the most entertaining thing I’ve read this month.

    • froztbyte@awful.systems · 1 hour ago

      yeah, someone elsewhere on awful linked the issue a few days ago, and throughout many of his posts he pulls that kind of stunt the moment he gets called on his shit

      he also wrote a 21 KiB screed very huffily saying one of the projects’ CoC has failed him

      long may his PRs fail

    • makeshiftreaper@lemmy.world · 11 hours ago

      I tried asking some chimps to check whether the macaques had written a New York Times bestseller, if not Macbeth, yet somehow Random House wouldn’t publish my work

    • murtaza64@programming.dev · 2 hours ago

      so what are the sentiments about langchain? I was recently working with it to try to build some automatic PR generation scripts but I didn’t have the best experience understanding how to use the library. the documentation has been quite messy, repetitive and disorganized—somehow both verbose and missing key details. but it does the job I wanted it to, namely letting me use an LLM with tool calling and custom tools in a script

      • Architeuthis@awful.systems · 1 hour ago

        Given the volatility of the space, I don’t think it could have been doing much better. I doubt it’s getting out of alpha before the bubble bursts and things settle down a bit, if at all.

        Automatic PR generation sounds like something that would need a prompt and a ten-line script rather than langchain, but it also seems both questionable and unnecessary.

        If someone wants to know an LLM’s opinion on what the changes in a branch are meant to accomplish they should be encouraged to ask it themselves, no need to spam the repository.