CJR study shows AI search services misinform users and ignore publisher exclusion requests.

  • Pennomi@lemmy.world · 2 days ago

    The 60% figure is specifically about news content, which I think is an important distinction. LLMs are basically lossy compressions of text, meaning their knowledge is frozen at the time their training data was collected. The fact that they are right even 40% of the time is the truly surprising thing.

    It’s completely irresponsible for search engines to be using LLMs for news and current events. Science or history facts are probably a lot more reasonable though.
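    To make the frozen-knowledge point concrete, here's a minimal sketch (the cutoff date and function names are hypothetical, not any real search engine's logic): a model answering purely from its training data can't know about events after its cutoff, so a responsible pipeline would route those queries to live sources instead of guessing.

    ```python
    from datetime import date

    # Assumed training cutoff, purely for illustration.
    TRAINING_CUTOFF = date(2023, 10, 1)

    def answer_from_model(query: str) -> str:
        # Stand-in for a real LLM call: text generated only from
        # parametric (training-time) knowledge, i.e. the "lossy
        # compression" of the dataset.
        return f"[model's best guess about: {query!r}]"

    def answer_news_query(query: str, event_date: date) -> str:
        # Anything after the cutoff simply isn't in the compression;
        # answering from model memory here is where the misinformation
        # comes from.
        if event_date > TRAINING_CUTOFF:
            return "Out of training range; fetch and cite live sources instead."
        return answer_from_model(query)

    print(answer_news_query("Who won the 2022 World Cup?", date(2022, 12, 18)))
    print(answer_news_query("Result of today's vote?", date.today()))
    ```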

  • TropicalDingdong@lemmy.world · 2 days ago

    You’ve probably seen it already, but the glass-of-wine issue is worth seriously considering if you do any work with, or ever get frustrated by, these kinds of systems.

    There is very clearly something “missing” in what these systems are doing.