starting out[0] with “I was surprised by the following results” and it just goes further down almost-but-not-quite Getting It Avenue

close, but certainly no cigar

choice quotes:

Why is it impressive that a model trained on internet text full of random facts happens to have a lot of random facts memorized? … why does that in any way indicate intelligence or creativity?

That’s a good point.

you don’t fucking say

I have a website (TrackingAI.org) that already administers a political survey to AIs every day. So I could easily give the AIs a real intelligence test, and track that over time, too.

really, how?

As I started manually giving AIs IQ tests

oh.

Then it proceeds to mis-identify every single one of the 6 answer options, leading it to pick the wrong answer. There seems to be little rhyme or reason to its misidentifications

if this fuckwit had even the slightest fucking understanding of how these things work, it would be glaringly obvious
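for the record, here's a toy sketch of the glaringly obvious part. a language model emits whatever token is probable given the prompt; it does not perceive the puzzle. the distribution below is invented purely for illustration (nothing here comes from the post or any real model), but it shows why the misidentifications look random from the outside: they're samples.

```python
import random

# hypothetical toy distribution over what "the model" might claim the
# third cell contains -- made-up numbers, purely for illustration
next_token_probs = {
    "square": 0.35,
    "circle": 0.30,
    "triangle": 0.20,
    "diamond": 0.15,
}

def sample_next_token(probs):
    """Categorical sampling, i.e. plain temperature-1 decoding."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# run it a few times: confidently stated, frequently wrong, different
# every run -- "little rhyme or reason" is exactly what sampling from
# a distribution looks like from the outside
for _ in range(5):
    print("the third cell contains a", sample_next_token(next_token_probs))
```

point being: no amount of administering IQ tests turns a conditional token distribution into perception.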

there’s plenty more, so remember to practice stretching before you start your eyerolls

  • Deborah@hachyderm.io · 5 months ago

    One of the saddest parts here is that there is almost an interesting research direction for people who are truly interested in machine intelligence. “the third cell should likely have a shape with 2 layers within a square” – if you are a person who insists on reading generative AI as “reasoning”, then that wrong answer is a jumping-off point into how humans see the image composition as dependent on shapes, and GPT reasons based on something more important to a computer, namely, layers.

    • Deborah@hachyderm.io · 5 months ago

      But nobody who’s really interested in machine intelligence thinks generative text constitutes reasoning, so instead you just have fuckwits giving IQ tests to their autocomplete engine and *not even seeing the thing that’s interesting.*