You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
Man, trust me, you don’t want them. I’ve seen people submit ChatGPT-generated code and even generate the PR comment with ChatGPT. Horrendous shit.
Today the CISO of the company I work for suggested that we should get qodo.ai because it would “… help the developers improve code quality.”
I wish I was making this up.
90% of developers are so bad that even ChatGPT 3.5 is much better.
wow 90%, do you have actual studies to back up that number you’re about to claim you didn’t just pull out of your ass?
This reminds me of another post I’d read, “Hey, wait – is employee performance really Gaussian distributed??”.
There’s this phenomenon when you’re an interviewer at a decently-funded start-up where you take a ton of interviews and say “OMG developers are so bad”. But you’ve mistakenly defined “developer” as “person who applies for a developer job”. GPT3.5 is certainly better at solving interview questions than 90% of the people who apply. But it’s worse than the people who actually pass the interview. (In part because the interview is more than just implementing a standard interview problem.)
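Here’s a minimal sketch of that selection effect, assuming entirely made-up numbers for the skill distribution, the interview bar, and the model’s score — nothing here is measured, it only illustrates how a model can beat 90% of applicants while losing to everyone who actually gets hired:

```python
import random

# Toy model of the selection effect described above. Every number here
# (skill distribution, interview bar, model score) is invented purely
# for illustration; none of it comes from a real study.
random.seed(0)

# Applicant "skill" drawn from a normal distribution.
applicants = [random.gauss(50, 15) for _ in range(10_000)]

bar = 75                 # the interview filters out most applicants
hires = [s for s in applicants if s > bar]

model_skill = 70         # hypothetical model score on interview questions

beats_applicants = sum(model_skill > s for s in applicants) / len(applicants)
beats_hires = sum(model_skill > s for s in hires) / len(hires)

print(f"model beats {beats_applicants:.0%} of applicants")  # roughly 90%
print(f"model beats {beats_hires:.0%} of hires")            # 0%: every hire cleared a higher bar
```

The exact percentages are artifacts of the invented numbers; the point is only that “applicants” and “people who pass the interview” are very different pools.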
your post has done a significantly better job of understanding the issue than a rather-uncomfortably-large number of the programming.dev posters we get, and that’s refreshing!
and, yep:
My boss is obsessed with Claude and ChatGPT, and loves to micromanage. Typically, if there’s an issue with what a client is requesting, I’ll approach him with:
What the issue is
At least two possible solutions or alternatives we can offer
He will then, almost always, ask if I’ve checked with the AI. I’ll say no. He’ll then send me chunks of unusable code that the AI has spat out, which almost always perfectly illuminate the first point I just explained to him.
It’s getting very boring dealing with the roboloving freaks.
The maintainers of curl recently announced any bug reports generated by AI need a human to actually prove they’re real. They cited a deluge of reports generated by AI that claimed to have found bugs in functions and libraries which don’t even exist in the codebase.

you may find, on actually going through the linked post/video, that this is in fact mentioned in there already