• 3 Posts
  • 16 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • Following up because the talk page keeps providing good material…

    Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven’t seen people try to weaponize the rules to push their views many times before.

    Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can’t win with some people…

    Looking back on the original lesswrong thread organizing the brigade to improve the wikipedia article, someone tried explaining the rules to Habryka then, and they were dismissive.

    I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.

    Yes Habryka, because you clearly have such a good understanding of the Wikipedia rules and norms…

    Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for “access to ground truth”. I guess even lesswrong knows that is bullshit.


  • The wikipedia talk page is some solid sneering material. It’s like Habryka and HandofLixue can’t imagine any legitimate reason why Wikipedia has the norms it does, and they can’t imagine how a neutral Wikipedian could come to write that article about lesswrong.

    Eigenbra accurately calling them out…

    “I also didn’t call for any particular edits”. You literally pointed to two sentences that you wanted edited.

    Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can’t speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.

    As to your question:

    Was it intentional to try to pick a fight with Wikipedians?

    It seems to be ignorance on Habryka’s part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia’s reasonable policies, they seem to be doubling down.



  • If you wire the LLM directly into a proof-checker (like with AlphaGeometry) or an evaluation function (like with AlphaEvolve), and the raw LLM outputs aren’t allowed to do anything on their own, you can get reliability. So you can hope for better; it just requires a narrow domain and a much more thorough approach than slapping some extra-firm instructions, written in an unholy blend of markup languages, into the prompt.
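    That generate-then-verify pattern can be sketched in a few lines. This is a toy illustration only: `propose_candidates` and `checker_accepts` are made-up stand-ins (not AlphaGeometry’s or AlphaEvolve’s actual interfaces), and the point is purely structural — raw model outputs never leave the loop without passing an independent checker.

```python
# Toy sketch of "LLM proposes, verifier disposes". The "model" and the
# checker below are fake stand-ins; only the control flow matters.

def propose_candidates(question):
    """Fake 'LLM': proposes mostly wrong answers to a sum question."""
    a, b = question
    return [a + b - 2, a + b + 5, a + b, a + b + 1]  # one happens to be right

def checker_accepts(question, answer):
    """Independent verifier: recomputes the sum exactly, trusting nothing."""
    a, b = question
    return a + b == answer

def solve_with_verification(question):
    for cand in propose_candidates(question):
        if checker_accepts(question, cand):
            return cand  # only checker-approved outputs escape the loop
    return None  # abstain rather than emit an unverified guess

print(solve_with_verification((17, 25)))  # 42
```

    The reliability comes entirely from the checker: the proposer can be arbitrarily unreliable, and the system still never emits a wrong answer — at worst it abstains.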

    In this case, solving math problems is actually something Google search could previously do (before they dumped AI into it) and Wolfram Alpha can still do, so it really seems like Google should be able to offer a product that gets math problems right. Of course, that solution would probably involve bypassing the LLM altogether through preprocessing and post-processing.
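    A minimal sketch of that preprocessing idea: try to parse the query as plain arithmetic and evaluate it exactly, and only fall through to the model for everything else. `ask_llm` here is a hypothetical placeholder, not a real API.

```python
import ast
import operator

# Preprocessing router: plain arithmetic never reaches the model.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def eval_arith(node):
    """Evaluate a whitelisted arithmetic AST; reject anything else."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_arith(node.left), eval_arith(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_arith(node.operand))
    raise ValueError("not plain arithmetic")

def ask_llm(query):
    """Hypothetical fallback; stands in for a model call."""
    return "(would call the model here)"

def answer(query):
    try:
        return eval_arith(ast.parse(query, mode="eval").body)  # exact path
    except (SyntaxError, ValueError):
        return ask_llm(query)  # everything non-arithmetic falls through

print(answer("2 + 3 * 4"))  # 14
```

    The deterministic path is exact by construction, which is the whole point: no amount of prompting gets you that guarantee from the model itself.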

    Also, btw, LLMs can be (technically speaking) deterministic if the temperature is set all the way down to zero; it’s just that this doesn’t actually improve their performance at math or anything else. And they would still be “random” in the sense that minor variations in the prompt or previous context can induce seemingly arbitrary changes in output.
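    A toy illustration of why temperature zero is deterministic, using made-up logits rather than a real model: sampling from softmax(logits / T) collapses to a plain argmax as T goes to 0, so identical context always yields the identical token.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample an index from softmax(logits / T); T == 0 means greedy argmax."""
    if temperature == 0:
        # Greedy decoding: no randomness at all, just the largest logit.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(exps)
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(logits) - 1

logits = [1.0, 3.5, 2.0]  # toy logits, not from a real model
greedy = {sample_token(logits, 0, random.Random(s)) for s in range(100)}
print(greedy)  # always token 1 at temperature 0
```

    Note this only fixes the logits-to-token step: change one token of context and the logits themselves change, which is the “still random in practice” part above.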



  • We barely understand how LLMs actually work

    I would be careful how you say this. Eliezer likes to go on about giant inscrutable matrices to fearmonger, and the promptfarmers use the (supposed) mysteriousness as another avenue for crithype.

    It’s true that reverse engineering any specific output or task takes a lot of effort, requires access to the model’s internal weights, and hasn’t been done for most tasks, but the techniques for doing so exist. And in general there is a good high-level conceptual understanding of what makes LLMs work.

    which means LLMs don’t understand their own functioning (not that they “understand” anything strictly speaking).

    This part is absolutely true. If you catch them in a mistake, most of their data about how to respond comes from how humans respond (or, at best, from fine-tuning on other LLM output), and they don’t have any way of checking their own internals, so the words they say in response to mistakes are just more bs unrelated to anything.


  • So, I’ve been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I’ve noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don’t involve LLMs at all, such as AlphaFold), with LLMs themselves. From there, they act like of course LLMs can [insert things LLMs can’t do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].

    Like I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component replaces a set of heuristics that suggest proof approaches, while the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.

    I don’t really have anywhere I’m going with this, just something I noted that I don’t want to waste the energy repeatedly re-explaining on reddit, so I’m letting a primal scream out here to get it out of my system.


  • The promptfondlers on places like /r/singularity are trying so hard to spin this paper. “It’s still doing reasoning, it just somehow mysteriously fails when its reasoning gets too long!” or “LRMs improved with an intermediate number of reasoning tokens” or some other excuse. They are missing the point that short and medium-length “reasoning” traces are potentially the result of pattern memorization. If the LLMs were actually reasoning and not just pattern-memorizing, then extending the number of reasoning tokens proportionately with the task length should let them maintain performance on the tasks instead of catastrophically failing. Because this isn’t the case, Apple’s paper is evidence for what big names like Gary Marcus and Yann LeCun, and many pundits and analysts, have been repeatedly saying: LLMs achieve their results through memorization, not generalization, especially not out-of-distribution generalization.


  • Actually, as some of the main opponents of the would-be AGI creators, us sneerers are vital to the simulation’s integrity.

    Also, since the simulator will probably cut us all off once they’ve seen the ASI get started, by delaying and slowing down rationalists’ quest to create AGI and ASI, we are prolonging the survival of the human race. Thus we are the most altruistic and morally best humans in the world!


  • He’s set up a community primed to think the scientific establishment’s focus on falsifiability and peer review is fundamentally worse than “Bayesian” methods, and that you don’t need credentials or even conventional education or experience to have revolutionary good ideas, and he has strengthened the already existing myth of lone genii pushing science forward (as opposed to systematic progress). Attracting cranks was an inevitable outcome. In fact, Eliezer occasionally praises cranks when he isn’t able to grasp their sheer crankiness (for instance, GeneSmith’s ideas are total nonsense to anyone with more familiarity with genetics than skimming relevant-sounding scientific publications and garbage pop-sci journalism, but Eliezer commented favorably). The only thing that has changed is ChatGPT and its clones glazing cranks, making them even more deluded. And of course, someone (cough, Eliezer) was hyping up ChatGPT as far back as GPT-2, so it’s only to be expected that cranks would think LLMs were capable of providing legitimate useful feedback.

    Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

    He’s deliberately cultivated an audience willing to hear cranks out, so this is exactly what he deserves.