Bistable multivibrator
Non-state actor
Tabs for AI indentation, spaces for AI alignment
410,757,864,530 DEAD COMPUTERS

  • 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • It’s just depressing. I don’t even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.

    He’s absolutely wrong in thinking the LLM “knew enough about humans” to know anything at all. His “alignment” angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.

    Not that I like “handing it” to Eliezer Yudkowsky, but he’s correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn’t that far from this forum’s reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.




  • Absolutely. Take the reverence for “SysV” init* to the point where the init system has all but eclipsed the AT&T Unix release as the primary meaning of “System V”. The BSDs (at least the Net/Open branch, not sure about FreeBSD) adopted a simplified BSD init/rc model ages ago and Solaris switched to systemd-esque SMF with little uproar. Personally I even prefer SMF over its Linux equivalents, despite the cumbersome XML configuration.

    I somewhat understand the terminalchud mindset, a longing for a supposed simpler time when a nerd could keep a holistic grasp of their entire computing system in their head. Combine that with the tech industry’s pervasive male chauvinism and dogmatic adherence to a law of “simplify and reduce weight” (usually a useful rule of thumb), and you end up with terrible social circles making bad software while believing they’re great on both fronts.

    * Rather, the Linux implementation of the concept


  • We probably live in a simulation with the purpose of producing the best anime. This is why we are living in an age with so much anime and with so many people who are interested in anime. The anime-maximizing AI is simulating all kinds of scenarios, from abiogenesis to a prolific anime industry. Most possible scenarios of life evolving from its first forms would not lead to the development of an anime industry, which is why it would be improbable for us to exist in a world with anime, if not for the fact that the simulated scenarios without anime in them are dropped and not simulated further.


  • I don’t think Yud is that hard to explain. He’s a science fiction fanboy who never let go of his adolescent delusions of grandeur. He was never successfully disabused of the notion that he’s always the smartest person in the room, and he didn’t pursue a high school, let alone a college, education that would give him the expertise to recognize just how difficult his goal is. Blud thinks he’s gonna create a superhumanly intelligent machine when he struggles with basic programming tasks.

    He’s kinda comparable to Elon Musk in a way. Brain uploading and superhuman AI are sort of in the same “cool sci fi tech” category as Mars colonization, brain implants and vactrain gadgetbahns. It’s easy to forget that not too many years ago the public’s perception of Musk was very different. A lot of people saw him as a cool Tony Stark figure who was finally going to give us our damn flying cars.

    Yudkowsky is sometimes good at knowing just a bit more about things than his audience and making it seem like he knows a lot more than he does. The first time I started reading HPMoR I thought the author was an actual theoretical physicist or something, and when the story said I could learn everything Harry knows for free on this LessWrong site, I thought I could learn what it means for something to be “implied by the form of the quantum Hamiltonian” or what those “timeless formulations of quantum mechanics” were about. Instead it was just poorly paced essays on bog-standard logical fallacies and cognitive biases, explained using their weird homegrown terminology.

    Also, it’s really easy to be convinced of something when you really want to believe in it. I personally know some very smart and worldly people who have been way too impressed by ChatGPT. Convincing people in the San Francisco Bay Area that you’re about to invent Star Trek technology is basically the national pastime there.

    His fantasies of becoming immortal through having a God AI simulate his mind forever aren’t the weird part. Any imaginative 15-year-old computer nerd can have those fantasies. The weird parts are that he never grew out of those fantasies and that he managed to make some rich and influential contacts while holding on to his chuunibyō delusions.

    Anyone can become a cult leader through the power of buying into their own hype and infinite thielbux.