• gon [he]@lemm.ee
      11 hours ago

      Yeah.

      That screenshot has some errors on points 2 and 3, by the way.

      1. It doesn’t say シリーズ (series), it says ドリトス (Doritos). So it’s タイツくん〇〇大人のドリトス (Tights-kun Adult Doritos), with the middle part being cut off and unreadable. I did some more research on this and figured out what it says.

      2. It doesn’t say 刺激入り (with a kick). I can’t read the 2nd kanji on the packaging, but the first one is definitely not 刺. The Reddit post I linked in my original reply suggests it says 竹炭入り (contains bamboo charcoal), and after further research I can confirm that’s correct according to the website, even though the 2nd kanji is basically unreadable in that image.

      As a fun fact, the little rectangle in the top left says 励行のこと, which refers to doing something diligently. On the website, this particular image is accompanied by the text “Don’t we have any other way to motivate our workers?”

      Where did you get that screenshot, by the way? Seems like a really cool tool. Is that ChatGPT?

      • brbposting@sh.itjust.works
        4 hours ago

        “Don’t we have any other way to motivate our workers?”

        Ahaha!!

        cool tool

        Yeah, we spent a ton of energy on that analysis, but I’m naturally curious about the state of the art (I’m a tech person) and this was a suitable test case. UC Berkeley and others have a project

        with SotA available free (but all data hoovered for research)

        My other big test was right when it came out, after reports of its scary-good image analysis, on something where I had a legitimate question. tl;dr: take a pic of a scene IRL, remove the EXIF, hop on a VPN, upload it to ^, and see if its response/analysis feels creepy, or if reverse image search is just as good, or what

        PS: adding missed disclaimer to previous comment