• vivendi@programming.dev · 5 hours ago

    Small-scale models, like Mistral Small or the Qwen series, are achieving SOTA performance with fewer than 50 billion parameters. QwQ-32B could already rival shitGPT with just 32 billion parameters, and the new Qwen3 and Gemma (from Google) are almost black magic.

    Gemma 4B is more comprehensible than GPT-4o; the performance race is fucking insane.

    ClosedAI is 90% hype. Their models are benchmark princesses, but they need huuuuuuge active parameter counts to actually hit those numbers.

    Everything said in this post is independently verifiable by taking 5 minutes to search shit up, and yet you couldn’t even bother to do that.
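    If you want to sanity-check the small-model claims yourself, here's a minimal sketch using Hugging Face transformers. The model ID is just an example (swap in whatever small open-weight model you want to test; some checkpoints like Gemma require accepting a license on the Hub first, and smaller GPUs may need a quantized build):

        # Minimal sketch: run a ~3B-parameter open-weight model locally and poke at it.
        # Model ID below is an example, not something endorsed in the post above.
        from transformers import pipeline

        pipe = pipeline(
            "text-generation",
            model="Qwen/Qwen2.5-3B-Instruct",  # ~3B params, fits on a single consumer GPU
            device_map="auto",                 # requires `accelerate` to be installed
        )

        out = pipe(
            "Explain why a 3B-parameter model can run on a laptop GPU.",
            max_new_tokens=128,
        )
        print(out[0]["generated_text"])

    Running something like this takes a few minutes of setup and lets you compare output quality against whatever hosted model you're used to, which is the whole point.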