• gerikson@awful.systems
    1 year ago

    From the comments

    The average person concerned with existential risk from AGI might assume “safety” means working to reduce the likelihood that we all die. They would be disheartened to learn that many “AI Safety” researchers are instead focused on making sure contemporary LLMs behave appropriately.

    “Average person” is doing a lot of work here. I suspect the vast majority of truly “average people” are in fact more concerned that LLMs will reproduce Nazi swill at exponential scale than that they may actually become Robot Hitler.

    Turns out if you spend all your time navel-gazing and inventing your own terms, the real world will ignore you and use the terms people outside your bubble use.