Isn’t CSAM classed as images and videos which depict child sexual abuse? Last time I checked written descriptions alone did not count, unless they were being forced to look at AI generated image prompts of such acts?
That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.
This is the quote in question. They’re talking about images
I really find this a bit alarmist and exaggerated. Consider the motive and the alternative. You really think companies like that have any other options than to deal with those things?
This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. They had a former employee giving an interview about the manual review of images that were CP and rape-related shit, iirc. Terrible stuff
deleted by creator
I’m shocked and I shouldn’t be… Poor people
deleted by creator
I’m sure there’s some loophole there, maybe between countries’ laws. And if there isn’t, Hey! We’ll make one!
deleted by creator
So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.
Ugh.
deleted by creator