Sam Altman was fired from OpenAI in 2023 by the nonprofit board. After considerable internal turmoil and external pressure from his venture capitalist mates and from Microsoft, he was reinstated. K…
The sequence of links hopefully lays things out well enough for normies? I think it does, but I’ve been aware of the scene since the mid 2010s, so I’m not the audience that needs it. I can almost feel sympathy for Sam dealing with all the doomers, except he uses the doom and hype to market OpenAI and he lied a bunch, so not really. And I can almost feel sympathy for the board, getting lied to and outmaneuvered by a sociopathic CEO, but they’re a bunch of doomers from the sound of it, so, eh. I would say they deserve each other; it’s the rest of the world that doesn’t deserve them (from the teacher dealing with the LLM slop plugged into homework, to the website admin fending off scrapers, to legitimate ML researchers getting the attention sucked away while another AI winter starts to loom, to the machine cultist not saving a retirement fund and having panic attacks over the upcoming salvation or doom).
I think I’m the normie here and I’m very confused (and have no idea what this community is).
A few years ago someone mentioned the Harry Potter fanfic somewhere and I looked into what it was out of curiosity, but since I’m not interested in reading Harry Potter fanfics, I never read it. But I saw it’s connected to the LessWrong website, which described itself as being for rationalists. Sounds good - being rational is obviously a good thing - so I read a few of the promoted blog posts there and thought they were nice reads about rational thought; it seemed like a good community to me at first glance. Then I forgot about it all before reading more.
And now I’m here, learning that all of this is somehow connected to AI and that apparently it’s all some kind of sect? How did I get from a Harry Potter fanfic to some Peter Thiel drama? This is extremely confusing from the outside. And I don’t know where this Lemmy community fits into all of this.
Big effort post… reading it will still be less effort than listening to the full Behind the Bastards podcast, so I hope you appreciate it…
To summarize it from a personal angle…
In 2011, I was a high schooler who liked Harry Potter fanfics. I found Harry Potter and the Methods of Rationality a fun story, so I went to the lesswrong website and was hooked on all the neat pop-science explanations. The AGI stuff and cryonics and transhumanist stuff seemed a bit fanciful but neat (after all, the present would seem strange and exciting to someone from a hundred years ago).

Fast forward to 2015: HPMOR was finally finishing, I was finishing my undergraduate degree, and in the course of getting a college education I had actually taken some computer science and machine learning courses. Reconsidering lesswrong with that level of education… I noticed MIRI (the institute Eliezer founded) wasn’t actually doing anything with neural nets, they were playing around with math abstractions, they hadn’t actually published much formal writing, and even the informal lesswrong posts had basically stopped. I had gotten into a related blog, slatestarcodex (written by Scott Alexander), which filled some of the same niche, but in 2016 Scott published a defense of Trump that normalized him, and I realized Scott had an agenda at cross purposes with the “center-left” perspective he presented himself as having.

At around that point, I found the reddit version of sneerclub and it connected a lot of dots I had been missing. Far from the AI expert he presented himself as, Eliezer had basically done nothing but write loose speculation on AGI and pop-science explanations. And Scott Alexander was actually trying to push “human biodiversity” (i.e. racism disguised as pseudoscience) and neoreactionary/libertarian beliefs. From there, it became apparent to me that a lot of Eliezer’s claims weren’t just a bit fanciful, they were actually really, really ridiculous, and the community he had set up had a deeply embedded racist streak.
To summarize it focusing on Eliezer…
In the late 1990s, Eliezer was on various mailing lists, speculating with bright-eyed optimism about nanotech and AGI and genetic engineering and cryonics. He tried his hand at getting in on it: first trying to write a stock trading bot… which didn’t work; then trying to write a seed AI (an AI that would bootstrap to strong AGI and change the world)… which also didn’t work; then trying to develop a new programming language for AI… which he never finished. Then he decided he had been reckless: an actually successful AI might have destroyed mankind, so really it was lucky he hadn’t succeeded, and he needed to figure out how to align an AI first. So from the mid 2000s on he started getting donors (this is where Thiel comes in) to fund his research.

People kind of thought he was a crank, or just weren’t concerned with his ideas, so he concluded they must not be rational enough, and set about writing a sequence of blog posts to fix that (putting any actual AI research on hold), first on Overcoming Bias, then on his own site, lesswrong. The posts got moderate attention, which exploded in the early 2010s when a side project of writing Harry Potter fanfiction took off. He used this fame to get more funding and spread his ideas further.

Finally, around the mid 2010s, he pivoted to actually trying to do AI research again… MIRI has a sparse collection of papers (sparse compared to the number of researchers they hired and to how productive good professors in academia are) focused on an abstract model of AI called AIXI, which basically depends on having infinite computing power and isn’t remotely implementable in the real world. Last I checked they hadn’t gotten any further than that. Eliezer was skeptical of neural network approaches, derisively thinking of them as voodoo science trying to blindly imitate biology with no proper understanding, so he wasn’t prepared for neural nets taking off around 2012 and eventually leading to GPT and the LLM approach. So when ChatGPT started looking impressive, he started panicking, leading to a podcast circuit professing doom (after all, if he and his institute couldn’t figure out AI alignment, no one can, and we’re likely all doomed for reasons he has written tens of thousands of words of blog posts about without being refuted at a quality he believes is valid).
To tie off some side points:
Peter Thiel was one of the original funders of Eliezer and his institute. It was probably a relatively cheap attempt to buy reputation, and it worked to some extent. Thiel has cut funding since Eliezer went full doomer (Thiel probably wanted Eliezer as a Silicon Valley hype man, not an apocalypse cult).
As Scott continued to write posts defending the far right while weirdly posturing as center-left, Slatestarcodex got an increasingly racist audience, culminating in a spin-off forum with full-on 14-words white supremacists. He has played a major role in the alt-right pipeline that produced some of Trump’s most loyal supporters.
Lesswrong also attracted some of the neoreactionaries (libertarian wackjobs who want a return to monarchy), among them Mencius Moldbug (real name Curtis Yarvin). Yarvin has written about strategies for dismantling the federal government, which DOGE is now implementing.
Eliezer may not have been much of a researcher himself, but he inspired a bunch of people, so a lot of OpenAI researchers buy into the hype and/or doom. Sam Altman uses Eliezer’s terminology as marketing hype.
As for lesswrong itself… what’s original isn’t good and what’s good isn’t original. Lots of the best sequences are just a remixed form of books like Kahneman’s “Thinking, Fast and Slow”. And the worst sequences demand you favor Eliezer’s take on Bayesianism over actual science, or are focused on the coming AI salvation/doom.
Feel free to ask any follow-up questions if you genuinely want to know more. If you actually already know about this stuff and are looking for a chance to evangelize for lesswrong or the coming LLM God, the mods can smell that out and you will be shown the door, so don’t bother (we get one or two people like that every couple of weeks).
understandable! and it’s messy in the weeds, because the loons have been talking about and doing absolutely crazy shit for a long time
And I don’t know where this Lemmy community fits into all of this.
so, before I answer, I’ll tell you upfront that you’ll probably hate what you learn. that said, see the sidebar for some links and descriptions
the community (techtakes, sneerclub) can be loosely described as a collective of people who’ve, for some reason or another, found themselves caring about/having to care about/encountering/etc one or more of the loons in the subject groups, and then needing a place to vent about it
partly this is because these loons are extremely influential in tech and related spheres, and have been becoming ever more so in recent political pushes in the US. much of yarvin’s thought is directly influencing current US government policy, and all the harms that is causing people. the current stock market bonfire? yeah, these fucking guys.
Sounds good - being rational is obviously a good thing
there’s a difference between “being rational” and capital-R Rationalism (which is what’s involved with yud/lesswrong/etc)
nice reads about rational thought
on paper, that’s the pitch. in practice, it’s cult shit. it’s also self-reinforced for years against any actual expertise - all the fuckers in this scene don’t want to learn from anyone “outside”, and they do an extreme amount of mental gymnastics to justify their various beliefs (which frequently include scientific racism, eugenics, etc)
if you listen to podcasts, Behind The Bastards has featured episodes on (Curtis) Yarvin in the last 2y, as well as another (very recently) on Ziz. all of these go into a fair amount of detail about these people. but, again, warning: you will hate what you learn.
Excellent primer. It would fit well into this thread if you wanna link it there https://awful.systems/post/3935670
Thanks for answering, it’s… fascinating in its bizarreness.
Nooo run! Beware the lure of the abyss that is learning more about this stuff. Remember you cannot unlearn things!
I finished and realised Yudkowsky was the good guy in my story here. Because the other guy is fuckin’ Sam.
Starting a cult to build god, control god and upload your consciousness into god?
Well, at least it is an ethos.
the worst talos principle dlc