Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.
I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend there’s a nonzero chance that a human person is still going to have to deal with it.
Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello this is YourNet with $CompanyName Support.” I’m not taking chances around unthinkingly answering an email with “alright you shitty robot. Don’t lie to me or I’ll barbecue this old Commodore 64 that was probably your great uncle or whatever”
The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable.
Very much this, but we’re all impressionable. Being abusive to a machine that’s good at tricking our brains into thinking that it’s conscious is conditioning oneself to be abusive, period. You see this also in online gaming - every person that I have encountered who is abusive to randos in a match on the Internet has problematic behavior in person.
It’s literally just conditioning; making things adjacent to abusing other humans comfortable and normalizing them makes abusing humans less uncomfortable.
Also it’s simply bad to practice being cruel to a human-shaped thing.
That’s reasonable, and especially achievable if you don’t use chatbots or digital assistants!