• brucethemoose@lemmy.world · 4 days ago

    I’m actually more medium on this!

    • Only 32K context without YaRN, and with YaRN Qwen 2.5 was kinda hit or miss (see the sketch after this list).

    • No 32B base model. Is that a middle finger to the DeepSeek distills?

    • It really feels like “more of Qwen 2.5/1.5” architecture-wise. I was hoping for better attention mechanisms, QAT, a BitNet test, logit distillation… something new beyond some training-data optimizations and more scale.
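
    For context on the YaRN point: long context on these models is generally bolted on with a rope_scaling override rather than being native. A minimal sketch of what that looks like in transformers, assuming a Qwen2.5-style checkpoint and a placeholder 4x factor:

```python
# Sketch only: extending a Qwen2.5-style model past its native 32K window
# with YaRN RoPE scaling via a config override. Model id and factor are
# placeholders, not a recommendation.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-32B-Instruct"  # placeholder checkpoint

config = AutoConfig.from_pretrained(model_id)
# YaRN rescales RoPE frequencies so positions past the 32K training window
# stay roughly in-distribution; factor=4.0 targets ~128K context. Long-range
# quality is the "hit/miss" part mentioned above.
config.rope_scaling = {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    device_map="auto",
)
```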

      • brucethemoose@lemmy.world · 4 days ago

        Yeah, but only an Instruct version. They didn’t leave any 32B base model like they did for the 30B MoE.

        That could be intentional, to stop anyone from building on their 32B dense model.

        • pebbles@sh.itjust.works · 4 days ago

          Huh, I didn’t realize that, thanks. Lame that they would hold back the biggest size most consumers would ever run.