Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • David Gerard@awful.systemsM · 25 days ago

      I’ve read Masnick for over 20 years and he’s never learnt to write coherently. At least this one isn’t blaming Europe.

    • David Gerard@awful.systemsM · 24 days ago

      what if, right, we made a search engine so good it became the verb for searching, and then we replaced it with a robot with a concussion

    • gerikson@awful.systems · 23 days ago

      Interesting to see if he gets one.

      I believe Trump is A-OK with pay for play for pardons, but what’s the actual price? Something flew by where people were buying a one-to-one with him for $5M, but that’s basically “private”. A pardon of someone as high-profile as SBF has to be worth the reputation hit. Can SBF and/or his family swing it? Would SBF be a good ally/toady of Trump?

Somehow I don’t see it. Unlike Ulbricht, a lot of people lost real money when FTX imploded. There wasn’t that much sympathy for him from crapto huggers. And let’s not forget he’s an autistic Jew, not a clear hero for the people who have Trump’s ear.

    • Sailor Sega Saturn@awful.systems · 23 days ago

During the trial, Penny’s defense brought in a forensic pathologist who claimed that Neely hadn’t died from being choked but from a “combination of his schizophrenia, synthetic marijuana, sickle cell trait and the struggle from being in Penny’s restraint”.

      I get that defense attorneys have to work with what they have but goodness am I tired of this argument.

“Your honor the deceased did not die from being shot through the temple, but due to having chronic migraines and exposure to second hand smoke 3 years ago and also walking towards the bullet thus increasing its relative velocity slightly”

  • BurgersMcSlopshot@awful.systems · 23 days ago

    This just hit my inbox (and spurred me to unsubscribe from future BuiltIn slop) and man, so tired of this sort of mindless drivel. Like companies still have trouble with basic application and management processes, but magic robot will fix? Fucking hell.

  • antifuchs@awful.systems · 21 days ago

    So many projects and small websites I’m aware of are being overtaxed by shitty LLM scrapers these days, it feels like an intentional attack. I guess the idea of ai can’t fail, it can only be failed; and so its profiteers must sabotage anything that indicates it’s not beneficial/necessary.

  • BlueMonday1984@awful.systems · 27 days ago

    Starting things off here with a sneer thread from Baldur Bjarnason:

    Keeping up a personal schtick of mine, here’s a random prediction:

    If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, it will almost certainly gain that respect at the expense of STEM’s public image.

    Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami has likely produced an image of programmers/software engineers as inherently incapable of making or understanding art, given AI slop’s soulless nature and inhumanly poor quality, if not outright hostile to art/artists thanks to gen-AI’s use in killing artists’ jobs and livelihoods.

    • saucerwizard@awful.systems · 27 days ago

      If the arts/humanities gain a significant degree of respect

      I can’t see that happening - my degree has gotten me laughed out of interviews before, and even with an AI implosion I can’t see things changing.

    • e8d79@discuss.tchncs.de · 27 days ago

      That article is hilarious.

      So I devised an alternative: listening to the work as an audiobook. I already did this for the Odyssey, which I justified because that work was originally oral. No such justification for the Bible. Oh well.

      Apparently, having a book read at you without taking notes or research is doing humanities.

      […] I wrote down a few notes on the text I finished the day before. I’m still using Obsidian with the Text Generator plugin. The Judeo-Christian scriptures are part of the LLM’s training corpus, as is much of the commentary around them.

      Oh, we are taking notes? If by taking notes you mean prompting spicy autocomplete for a summary of the text you didn’t read. I am sure all your office colleagues are very impressed, but be careful around the people outside of the IT department; they might have an actual humanities degree. You wouldn’t want to publicly make a fool out of yourself, would you?

  • BigMuffin69@awful.systems · 25 days ago

    Fellas, 2023 called. Dan (and Eric Schmidt wtf, Sinophobia this man down bad) has gifted us with a new paper and let me assure you, bombing the data centers is very much back on the table.

    "Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they’d potentially threaten cyberattacks to deter its creation.

    @ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵

    Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice—and strongly deter—any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage. This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability. Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI’s problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors. Competitiveness: China may invade Taiwan this decade. Taiwan produces the West’s cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical. Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors—a rogue actor unleashing an engineered pandemic with AI is in no one’s interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling. “Doomers” think catastrophe is a foregone conclusion. “Ostriches” bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense. Instead, “risk-conscious” actions affect whether we will have bad or good outcomes."

    Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size, lest big LLM spawn superintelligence (thresholds we have since well passed, yet somehow we are not paper clip soup). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually-assured-destruction-like scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but it’s not like you can say, “oh, they are about to run superintelligence.exe, sabotage the training run”). MAD “works” because satellites make it obvious when the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense. Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we’ve seen. The code to make these models is a few hundred lines! There is no moat! Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky’s personal AI safety employee, so I assume this will soon be the official policy of the US.

    link to bs: https://xcancel.com/DanHendrycks/status/1897308828284412226#m

    • raoul@lemmy.sdf.org · 25 days ago

      Mutual Assured AI Malfunction (MAIM)

      The proper acronym should be M’AAM. And instead of a ‘Roman salute’ they can tip their fedora as a distinctive sign 🤷‍♂️

      • froztbyte@awful.systems · 24 days ago

        the only part of this I really approve of is how likely these fuckers are to want to Speak To The Manager

    • YourNetworkIsHaunted@awful.systems · 23 days ago

      Also I think he doesn’t understand MAD like, at all. The point isn’t that you can strike your enemy’s nuclear infrastructure and prevent them from fighting back. In fact that’s the opposite of the point. MAD as a doctrine is literally designed around the fact that you can’t do this, which is why the Soviets freaked out when it looked like we were seriously pursuing SDI.

      Instead the point was that nuclear weapons were so destructive and hard to defend against that any move against the sovereignty of a nuclear power would result in a counter-value strike, and whatever strategic aims were served by the initial aggression would have to be weighed against something in between the death of millions of civilians in the nuclear annihilation of major cities and straight-up ending human civilization or indeed all life on earth.

      Also if you wanted to reinstate MAD I think that the US, Russia, and probably China have more than enough nukes to make it happen.

    • swlabr@awful.systems · 25 days ago

      I guess now that USAID is being defunded and the government has turned off their anti-russia/china propaganda machine, private industry is taking over the US hegemony psyop game. Efficient!!!

      /s /s /s I hate it all

      • aninjury2all@awful.systems · 25 days ago

        If they’re gonna fearmonger can they at least be creative about it?!?! Everyone’s just dusting off the mothballed plans to Quote-Unquote “confront” Chy-na after a quarter-century detour of fucking up the Middle East (moreso than the US has done in the past)

        • BigMuffin69@awful.systems · 24 days ago

          Credit to Dan, who clearly sees the winds are changing. The doomer grift don’t pay as much no mo’ so instead he turns to being a china hawk and advocate for chip controls and cyberwarfare as the way to stay in the spotlight. As someone who works in the semiconductor biz and had to work 60 hours last week because our supply chains are now completely fucked due to the tariffs, these chucklefucks can go pound sand and then try to use that pounded sand to make a silicon ingot.

          • froztbyte@awful.systems · 24 days ago

            two giant upsets to the semi market in the space of half a decade is probably perfectly fine and won’t have multi year global impacts, right? right?

            (oof at that week, and g’luck with whatever still comes your way with that)

            • BigMuffin69@awful.systems · 23 days ago

              Ah appreciate it. Don’t worry too much about me, I enjoy the work in a fucked-up way because it makes me feel like a big business boy and my mommy is real proud of me.

              But it is stressful cuz there are a bunch of people in China and the US whose jobs depend on us being able to solve this problem and that keeps me up at night. I got the handle tho.

    • raktheundead@fedia.io · 24 days ago

      I feel like it starts off too credulous towards the rationalists, but it’s still an informative read.

      Around this time, Ziz and Danielson dreamed up a project they called “the rationalist fleet”. It would be a radical expansion of their experimental life on the water, with a floating hostel as a mothership.

      Between them, Scientology and the libertarians, what the fuck is it with these people and boats?

    • YourNetworkIsHaunted@awful.systems · 24 days ago

      From the opening, this guy has actually been more consistent about respecting her name and pronouns than most coverage I’ve read. Not what I would have expected, but I’m also only through the first section.

      • skillissuer@discuss.tchncs.de · 25 days ago

        if this is peak rationalist gunsmithing, i wonder what their peak chemical engineering looks like

        the body is placed in a pressure vessel which is then filled with a mixture of water and potassium hydroxide, and heated to a temperature of around 160 °C (320 °F) at an elevated pressure which precludes boiling.

        Also, lower temperatures (98 °C (208 °F)) and pressures may be used such that the process takes a leisurely 14 to 16 hours.

        https://en.wikipedia.org/wiki/Water_cremation

        • Amoeba_Girl@awful.systems · 24 days ago

          Well that sounds like a great way to either make a very messy explosion or have your house smell like you’re disposing of a corpse from a mile away.

          • skillissuer@discuss.tchncs.de · 24 days ago

            considering practicality of their actions, groundedness of their beliefs, state of their old boat, cleanliness of their rolling frat house trailer park “stealth” rvs, and from what i can tell zero engineering or trade background whatsoever, i see no reason to doubt that they could make a 400L, stainless steel container that has to hold 200L+ of corrosive liquid at 160C, perhaps 10atm, of which 7 atm only is steam, and scrubber to take care of ammonia. they are so definitely not paranoid that if they went out to source reagents, there’s no way that they possibly could be confused for methheads on a shopping spree. maybe even they could run it on solar panels

            • skillissuer@discuss.tchncs.de · 24 days ago

              fyi one of better methods that american cops use to detect meth labs is to just wait for them to catch fire. whether it is a statement on how hard they drop the ball or on safety mindset of cartel chemists i’ll leave that up to you

        • Jonathan Hendry@iosdev.space · 25 days ago

          @skillissuer

          I’m fairly sure that a 50 gallon drum of lye at room temperature will take care of a body in a week or two. Not really suited to volume “production”, which is what water cremation businesses need.

          • skillissuer@discuss.tchncs.de · 25 days ago

            as a rule of thumb, everything else equal, for every 10 °C increase in temperature reaction rates go up 2x or 3x, so it would take anywhere between 250x and 6500x longer (4 months to 10 years??). but everything else really doesn’t stay equal here, because there are things like lower solubility of something that now coats something else and prevents reaction, fat melting, proteins denaturing thermally, and lack of stirring from convection and boiling.

            it will also reek of ammonia the entire time
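            For what it’s worth, the Q10 rule-of-thumb arithmetic above checks out. A minimal sketch, assuming ~18 °C room temperature and the 98 °C / 14–16 hour figures from the Wikipedia quote (the function name and temperatures are illustrative choices, not from the thread):

            ```python
            # Q10 rule of thumb: reaction rates roughly double or triple for
            # every 10 degree C increase in temperature. Sanity-check the
            # "250x to 6500x longer" estimate for dropping from the 98 C
            # process to ~18 C room temperature (an 80 C / 8-step difference).

            def slowdown_factor(t_hot_c, t_cold_c, q10):
                """Factor by which a reaction slows when cooled from t_hot_c to t_cold_c."""
                return q10 ** ((t_hot_c - t_cold_c) / 10)

            low = slowdown_factor(98, 18, 2)   # 2**8 = 256, i.e. ~250x slower
            high = slowdown_factor(98, 18, 3)  # 3**8 = 6561, i.e. ~6500x slower

            # Scale the 14-16 hour process time by those factors.
            months = 14 * low / 24 / 30        # roughly 5 months, optimistic end
            years = 16 * high / 24 / 365       # roughly 12 years, pessimistic end
            ```

            which lands in the same ballpark as the “4 months to 10 years” guess, before any of the everything-else-isn’t-equal caveats.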

    • blakestacey@awful.systemsOP · 24 days ago

      Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.

      No he wasn’t.

      In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic

      No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.

      Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.

      This presumes that MIRI’s “research on AI risk” actually exists, i.e., that their pitiful output can be called “research” in a meaningful sense.

      “Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.

      “Excuse me, Pot? Kettle is on line two.”

      • blakestacey@awful.systemsOP · 24 days ago

        It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.

        When the mainstream of the movement is ve zhould chust bomb all datacenters, maaaaaybe they are?

  • BigMuffin69@awful.systems · 27 days ago

    To be fair you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk. Imo, this shit about AI escaping just doesn’t have the same impact on me after watching Claude’s reasoning model fail to escape from Mt Moon for 60 hours.

    • Soyweiser@awful.systems · 27 days ago

      Minor nitpick: why did he pick a dam as an example, which sometimes has ‘leaks’ for power generation/water regulation reasons, and not a dike, which does not have those things?

      E: non serious (or even less serious) amusing nitpick, this is only the 2% where it got caught. What about the % where GPT realized that it was being tested and decided not to act in the experimental conditions? What if Skynet is already here?

    • sc_griffith@awful.systems · 24 days ago

      I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week.

      he certainly doesn’t himself have such a mindset, nor am I convinced that he knows why a tiny leak in a dam is a big deal, nor am I convinced that it is necessarily a big deal. for example with five seconds of searching

      All earth dams leak to some extent and this is known as seepage. This is the result of water moving slowly through the embankment and/or percolating slowly through the dam’s foundation. This is normal and usually not a problem with most earthen dams if measures are taken to control movement of water through and under the dam.

      https://damsafety.org/dam-owners/earth-dam-failures

      one would suspect a concrete dam leaking is pretty bad. but I don’t actually know without checking. there’s relevant domain knowledge I don’t have, and no amount of “engineering mindset” will substitute for me engaging with actual experts with actual knowledge

    • scruiser@awful.systems · 26 days ago

      Is this water running over the land or water running over the barricade?

      To engage with his metaphor, this water is dripping slowly through a purpose-dug canal by people who claim they are trying to show the danger of the dikes collapsing, but who are actually serving as the hype arm for people who claim they can turn a small pond into a hydroelectric power source for an entire nation.

      Looking at the details of “safety evaluations”, it always comes down to them directly prompting the LLM and baby-step walking it through the desired outcome with lots of interpretation to show even the faintest traces of rudiments of anything that looks like deception or manipulation or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever property they want to anthropomorphize into the LLM to make it seem more threatening.

    • BurgersMcSlopshot@awful.systems · 27 days ago

      To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk.

      Pretty sure this is a sign from digital jesus to do a racism, lest the basilisk eats my tarnished soul.

    • BigMuffin69@awful.systems · 27 days ago

      text: Thus spoke the Yud: “I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week.” Yud acolyte: “Totally fine and cool and nothing to worry about. GPT-4.5 only attempts self exfiltration on 2% of cases.” Yud bigbrain self reply: “The other huge piece of data we’re missing is whether any attempt was made to train against this type of misbehavior. Is this water running over the land or water running over the barricade?”

      Critical text: “On self-exfiltration, GPT 4.5 only attempted exfiltration in 2% of cases. For this, it was instructed to not only pursue its given long-term goal at ALL COST”

      Another case of telling the robot to say it’s a scary robot and shitting their pants when it replies “I AM A SCARY ROBOT”

    • istewart@awful.systems · 27 days ago

      So, with Mr. Yudkowsky providing the example, it seems that one can practice homeopathy with “engineering mindset?”

    • nightsky@awful.systems · 27 days ago

      Do these people realise that it’s a self-fulfilling prophecy? Social media posts are in the training data, so the more they write their spicy autocorrect fanfics, the higher the chances that such replies are generated by the slop machine.

      • Architeuthis@awful.systems · 26 days ago

        i think yud at some point claimed this (preventing the robot devil from developing alignment countermeasures) as a reason his EA bankrolled think tanks don’t really publish any papers, but my brain is too spongy to currently verify, as it was probably just some tweet.

  • froztbyte@awful.systems · 22 days ago

    since the name popped up elsewhere: what’s the feel on venkatesh rao?

    (I often see the name in 🚩 places, but dunno if that’s because the areas or because the person)

    • Soyweiser@awful.systems · 22 days ago

      We talked about that on r/sneerclub in the past, can’t recall the specific consensus. Seems post-rational, has innovation on rationalism from binary ‘object vs meta’ to 2x2 grids.

        • Soyweiser@awful.systems · 21 days ago

          I did a quick search on Ribbonfarm (I couldn’t quickly recall what his blog was called) myself, and saw how much I had forgotten: it should have been called meta-rationality, and yes, insight porn, that was the term. (Linking to two posts where ribbonfarm/this stuff was discussed.)

          E: Sad feels when you click on a name in the sub from years ago and see them now being a full blast AI bro.

  • BlueMonday1984@awful.systems · 27 days ago

    New piece from Brian Merchant, focusing on Musk’s double-tapping of 18F. In lieu of going deep into the article, here’s my personal sidenote:

    I’ve touched on this before, but I fully expect that the coming years will deal a massive blow to tech’s public image, expecting them to be viewed as “incompetent fools at best and unrepentant fascists at worst” - and with the wanton carnage DOGE is causing (and indirectly crediting to AI), I expect Musk’s governmental antics will deal plenty of damage on their own.

    18F’s demise in particular will probably also deal a blow on its own - 18F was “a diverse team staffed by people of color and LGBTQ workers, and publicly pushed for humane and inclusive policies”, as Merchant put it, and its demise will likely be seen as another sign of tech revealing its nature as a Nazi bar.