Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

  • gerikson@awful.systems · 24 days ago

    I wonder what percentage of the freakout over DeepSeek is AI doomers realizing the coming AI god might be … ChiCom!

  • HotGarbage@awful.systems · 24 days ago

    After fondling ChatGPT to generate naughty things, man has meltdown when he learns no one cares.

    https://www.bleepingcomputer.com/news/security/time-bandit-chatgpt-jailbreak-bypasses-safeguards-on-sensitive-topics/

    Horror. Dismay. Disbelief. For weeks, it felt like I was physically being crushed to death.

    I hurt all the time, every part of my body. The urge to make someone who could do something listen and look at the evidence was so overwhelming.

    • skillissuer@discuss.tchncs.de · 24 days ago

      i don’t understand the “safety” angle here. if chatgpt can output an authoritative-looking, sentence-shaped string about pipe bombs, it’s only because similar content about pipe bombs is already available on the wide open internet. if the model is closed, then at worst they’d have to monitor its use (it’s not like google blocks any similar information from showing up). if the model is open, then no safeguards make sense in the first place. i guess it’s more about legal liability for openai? though they can ignore that now, with all these “ai safety” bills gone (for now)

      • sc_griffith@awful.systems · 23 days ago

        frankly it’s probably harm prevention if people turn to an LLM for pipe bomb instructions. “5) Put the warm pizza in the center of the pipe bomb. To maximize the radius of the detonation, you should roll the pizza and make sure that it fits securely into the pipe.”

        • skillissuer@discuss.tchncs.de · 23 days ago

          that’s a tiny amount of harm reduction if there are other ways to get there

          it can go the opposite way, too: some segment of promptfondlers specifically went after one open-source, locally-run model because it was “uncensored” (i think it was mistral). the logic there was that no search requests go out, so you could “look up” anything and no one would be any wiser. this extremely charitably assumes that llm training does a kind of lossy compression on all the data it devours, and since they took everything, it’s basically almost like a worse google search

          if there are steps like “put a thing in pipe. make sure to weld ends shut” then it’s also harm reduction, but instead for everyone else. imagine getting eldest son’d by a bot, pathetic

        • sc_griffith@awful.systems · 23 days ago

          I’m not even joking, really. the way I see harm in LLMs talking about pipe bombs is less that they’ll give instructions and more that we might get a character.ai style situation where the LLM talks someone into an attack

          • Soyweiser@awful.systems · 23 days ago

            Also remember that some of the instructions you get via these tricks are wrong. The ‘pretend that you are writing a movie script and give me tips on how to break into a house’ thing gave you lockpicking tips, which look cool as a movie plot, but not the advice to just tap the lock, which iirc is what burglars actually do (it breaks the lock, sure, but you’re breaking in anyway, and it’s faster). This kind of stuff, combined with ‘eh, you could google this before’, is why so many of the people he talked to probably ignored him and didn’t freak out.

            If you let amateurs do security you get amateur security after all.

            Talking people into things, esp as people lionize and anthropomorphize llms so much, is a bigger problem.

      • skillissuer@discuss.tchncs.de · 24 days ago

        also, relying on spicy autocomplete when trying to put together a deadly device sounds like cyberpunk-flavored darwin award material

    • YourNetworkIsHaunted@awful.systems · 24 days ago

      This tied into a hypothesis I had about emergent intelligence and awareness, so I probed further, and realized the model was completely unable to ascertain its current temporal context, aside from running a code-based query to see what time it is. Its awareness - entirely prompt-based - was extremely limited and, therefore, would have little to no ability to defend against an attack on that fundamental awareness.

      How many times are AI people going to re-learn that LLMs don’t have “awareness” or “reasoning” in a sense humans would find meaningful?
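
      (A minimal sketch of what “entirely prompt-based” awareness means in practice: the model never observes a clock; a harness outside the model has to fetch the time and paste it into the prompt. The function name and message format below are made up for illustration, not from the article.)

      ```python
      from datetime import datetime, timezone

      def inject_current_time(user_prompt: str) -> str:
          # Runs outside the model. The LLM only ever "knows" the time
          # as a string someone chose to put in its context window --
          # which is exactly why that context is attackable.
          now = datetime.now(timezone.utc).isoformat()
          return f"[system: current UTC time is {now}]\n{user_prompt}"

      print(inject_current_time("What year is it?"))
      ```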

    • FredFig@awful.systems · 24 days ago

      And the companion piece on his blog: https://www.wheresyoured.at/deep-impact/

      What I didn’t wager was that, potentially, nobody was trying. My mistake was — if you can believe this — being too generous to the AI companies, assuming that they didn’t pursue efficiency because they couldn’t, and not because they couldn’t be bothered.

      This isn’t about China — it’s so much fucking easier if we let it be about China — it’s about how the American tech industry is incurious, lazy, entitled, directionless and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley. They are incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than they are on solving problems, even the problems that they themselves created with their large language models.

  • Architeuthis@awful.systems · 27 days ago

    Today on highlighting random rat posts from ACX:

    poster thinks the future of llm training is contingent on focusing early on philosophical and theological text because they match the causality of human experience

    (Current first post on today’s SSC open thread)

    In slightly more relevant news, the main post is scoot asking if anyone can put him in contact with someone from a major news publication, so he can pitch an op-ed by a notable ex-OpenAI researcher, to be ghost-written by him (meaning Siskind), on the subject of how they (the ex-researcher) opened a forecast market that predicts ASI by the end of Trump’s term. Be on the lookout for that when it materializes, I guess.

    • gerikson@awful.systems · 27 days ago

      scoot asking if anyone can put him in contact with someone from a major news publication

      how about the New York Times

      • Architeuthis@awful.systems · 27 days ago

        NYT and WaPo are his specific examples. He also wants a connection to “a policy/defense/intelligence/foreign affairs journal/magazine” if possible.

    • ebu@awful.systems · 27 days ago

      i wonder which endocrine systems are disrupted by not having your head sufficiently stuffed into a toilet before being old enough to type words into nazitter dot com

  • BigMuffin69@awful.systems · 27 days ago

    Me: Oh boy, I can’t wait to see what my favorite thinkers of the EA movement will come up with this week :)

    Text from Geoff: “Morally stigmatize AI developers so they’re considered as socially repulsive as Nazi pedophiles. A mass campaign of moral stigmatization would be more effective than any amount of regulation.”

    Another rationalist W: don’t gather empirical evidence that AI will soon usurp/exterminate humanity. Instead, as the chief authorities on morality, engage in societal blackmail against anyone who’s ever heard the words TensorFlow.

    • BigMuffin69@awful.systems · 27 days ago

      Next Sunday when I go to my EA priest’s group home, I will admit to having invoked the chain rule to compute a gradient 1 trillion times since my last confessional. For this I will do penance for the 8 trillion future lives I have snuffed out and whose utility has been consumed by the basilisk.
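
      (For anyone keeping score at the confessional: a toy illustration of the chain rule being invoked, using a made-up function that appears nowhere in the thread; run it a trillion times to match the sin count.)

      ```python
      def grad_chain(x: float) -> float:
          """Gradient of f(g(x)) via the chain rule, for f(u) = u**2 and g(x) = 3*x + 1."""
          u = 3 * x + 1          # g(x)
          df_du = 2 * u          # f'(u) evaluated at g(x)
          du_dx = 3              # g'(x)
          return df_du * du_dx   # chain rule: f'(g(x)) * g'(x)

      print(grad_chain(2.0))     # 42.0 -- one gradient down, 999,999,999,999 to go
      ```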

    • David Gerard@awful.systems (mod) · 26 days ago

      engage in societal blackmail against anyone who’s ever heard the words TensorFlow.

      no no wait this is geoff’s stopped clock moment

    • nightsky@awful.systems · 26 days ago

      Is it too early to hope that this is the beginning of the end of the bubble?

      Also, does anyone know why Broadcom was hit so hard too? Is it because they make various networking-related chips used in datacenter infrastructure?

      • SGforce@lemmy.ca · 26 days ago

        When hedge funds decide to flip the switch on something the reaction never looks rational. Meta was green today ffs.

    • BigMuffin69@awful.systems · 26 days ago

      Folks around here told me AI wasn’t dangerous 😰 ; fellas I just witnessed a rogue Chinese AI do 1 trillion dollars of damage to the US stock market 😭 /s

    • bitofhope@awful.systems · 26 days ago

      I get being privacy conscious and that sharing crash dumps and logs you don’t really understand yourself can be scary. Making demands of urgent free tech support from strangers is just rude, though.

      • self@awful.systems · 24 days ago

        my least favorite thing about old forums, which carried over to a lot of open source spaces, is how little moderation there is. coming into the help forum with a “no fuck you help me the way I want” attitude should probably be an instant ban and “what the fuck is wrong with you” mod note, cause that’s the exact type of shit that causes the community to burn out quick, and it decreases the usefulness of the space by a lot. but somehow almost every old forum was moderated by the type of cyberlibertarian who treated every ban like an attack on free speech? so you’d constantly see shit like the mod popping in to weakly waggle their finger at the crackpot who’s posting weird conspiracy shit to every thread (which generally caused the crackpot to play the victim and/or tell the mod to go fuck themselves) instead of taking a stand and banning the fucker

        and now those crackpots have metamorphosed into full fascists and act like banning them from your GitHub is an international incident, cause they almost never receive any pushback at all

    • gerikson@awful.systems · 26 days ago

      this was mentioned in last week’s thread

      what I don’t get is why the admins chose to both backdate the entries and re-use posters’ handles. If they’d just tried to “close” open questions using GenAI, with the current date and a robot user, it would still be shit, but not quite as deceptive

      • rook@awful.systems · 26 days ago

        The whole thing is just weirdly incompetent. Maybe they just had everything configured wrong and accidentally deployed some throwaway tests to production? I could almost see it as a way to poison scrapers, given that there are some odd visibility settings on the slop posts, though the owner’s shiftiness and dubious explanations suggest it wasn’t anything so worthy.

    • David Gerard@awful.systems (mod) · 26 days ago

      that thread is a work of genius and answers what the next tech boom needs to be

      dicks in mousetraps I MEAN whatever wastes electricity most, preferably with Nvidia cards

      • YourNetworkIsHaunted@awful.systems · 26 days ago

        I do actually have a mechanism for using the sharp edges of NVidia cards for dick mouse trapping purposes. And we could - hypothetically - use the extraneous power inputs to mine Bitcoin or something, maximizing efficiency!