

LLMs are bad even at converting news articles to smaller news articles faithfully, so I'm assuming that in a significant percentage of conversions the dumbed-down contract will deviate from the original.
It’s not always easy to distinguish between existentialism and a bad mood.
I posted this article on the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they're currently at and if it's reversible.
Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve point eight times more productive, especially within hearing distance of management.
A programmer automating his job is kind of his job, though. That’s not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.
On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company's IP (meaning code and whatever else is accessible from your IDE that your copilot instance can and will ingest) and your prompts won't be used for training.
Hilariously, GitHub Copilot now has an option to prevent it from being too obvious about stealing other people's code, called the duplication detection filter:
If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.
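Which, as described, boils down to something like this (a minimal sketch, not GitHub's actual implementation: the ~150-character window is their number, while lookup_public_code and the near-match threshold are made-up stand-ins for whatever index and similarity metric they really use):

    # Toy model of the "duplication detection filter" described above.
    # Only the ~150-char window comes from GitHub's wording; the rest is invented.
    from difflib import SequenceMatcher

    WINDOW = 150        # "surrounding code of about 150 characters"
    NEAR_MATCH = 0.9    # made-up cutoff for "a match, or a near match"

    def lookup_public_code(snippet: str) -> list[str]:
        """Stand-in for querying an index of public code on GitHub."""
        return []  # pretend this returns candidate matches

    def should_block(suggestion: str, before: str, after: str) -> bool:
        # Compare the suggestion plus its surrounding context against
        # public code; suppress it on a match or near match.
        probe = before[-WINDOW:] + suggestion + after[:WINDOW]
        return any(
            SequenceMatcher(None, probe, candidate).ratio() >= NEAR_MATCH
            for candidate in lookup_public_code(probe)
        )

Note that the filter only hides the suggestion from you; the model that produced it was trained on all that public code either way.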
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
Good parallel, the hands are definitely strategically hidden to not look terrible.
Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.
Big deal, we’ll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.
Although I’d guess human-level problem solving needn’t imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.
Ed Zitron summarizes his premium post in the Better Offline subreddit: Why Did Microsoft Invest In OpenAI?
Summary of the summary: they fully expected OpenAI would’ve gone bust by now and MS would be looting the corpse for all it’s worth.
PZ Myers boosted the pivot-to-ai piece on veo3: https://freethoughtblogs.com/pharyngula/2025/06/23/so-much-effort-spiraling-down-the-drain-of-ai/
Fund copyright infringement lawsuits against the people they had been bankrolling the last few years? Sure, if the ROI is there, but I’m guessing they’ll likely move on to the next trendy-sounding thing, like a quantum remote diddling stablecoin or whatevertheshit.
I too love to reminisce over the time (like 3 months ago) when the c-suite would think twice before okaying uploading whatever wherever, ostensibly on the promise that it would cut delivery time by (up to) some notable percentage, but mostly because everyone else is also doing it.
Code isn’t unmoated because it’s mostly shit; it’s unmoated because there are only so many ways to pound a nail into wood, and a big part of what makes a programming language good is that it won’t let you stray too much without good reason.
You are way overselling coding agents.
Ah yes, the supreme technological miracle of automating the ctrl+c/ctrl+v parts when applying the LLM snippet into your codebase.
On the other hand, they blatantly reskinned an entire existing game, and there’s a whole breach-of-contract aspect there, since apparently they were reusing code they’d written while working for Bethesda, who I doubt would’ve cared as much if this were only about an LLM-snippet’s length of code.
I’d say that’s incredibly unlikely unless an LLM suddenly blurts out Tesla’s entire self-driving codebase.
The code itself is probably among the least behind-a-moat things in software development; that’s why so many big players are fine with open-sourcing their stuff.
Yet, under Aron Peterson’s LinkedIn posts about these video clips, you can find the usual comments about him being “a Luddite”, being “in denial” etc.
And then there’s this:
From: Rupert Breheny
Bio: Cobalt AI Founder | Google 16 yrs | International Keynote Speaker | Integration Consultant AI
Comment: Nice work. I’ve been playing around myself. First impressions are excellent. These are crisp, coherent images that respect the style of the original source. Camera movements are measured, and the four candidate videos generated are generous. They are relatively fast to render but admittedly do burn through credits.
From: Aron Peterson (Author)
Bio: My body is 25% photography, 25% film, 25% animation, 25% literature and 0% tolerating bs on the internet.
Comment: Rupert Breheny are you a bot? These are not crisp images. In my review above I have highlighted these are terrible.
AI is the product, not the science.
Having said that:
you know that there’s almost no chance you’re the real you and not a torture copy
If the basilisk’s wager were framed like that, i.e. that you can’t know whether you’re already living in the torture sim with the basilisk silently judging you, it would be way more compelling than the actual “you are ontologically identical with any software that simulates you at a high enough level even way after the fact because [preposterous transhumanist motivated reasoning]”.
Scott A. comes off as such a disaster of a personality. Hope it’s less obvious in his irl interactions.
I’d say if there’s a weak part in your admittedly tongue-in-cheek theory, it’s requiring Roko to have had a broader plan instead of just a really catchy brainfart, not the part about making the basilisk thing out to be smarter/nobler than it is.
Reframing the infohazard aspect as an empathy filter definitely has legs in terms of building a narrative.
Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data they should probably be banning copilot, not mandating it.
At this point it’s an even bet that they are doing this because copilot has groomed the executives into thinking it can’t do wrong.