• VagueAnodyneComments@lemmy.blahaj.zone
    9 hours ago

    Nah, we’re up to running Qwen3 and DeepSeek R1 locally on accessible hardware at this point, so we have access to what you describe. Ollama is the app.
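    For anyone curious, here’s roughly what that looks like against Ollama’s local REST API (a minimal sketch, assuming the default port 11434 and that you’ve already pulled a model, e.g. with `ollama pull qwen3`; the exact model tag on your machine may differ):

        import requests

        # Ollama serves a REST API on localhost:11434 by default.
        # The "options" field passes custom sampling parameters to the runtime.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "qwen3",  # assumed tag; use whatever you actually pulled
                "prompt": "Explain what an NPU does in one sentence.",
                "stream": False,   # return one JSON object instead of streamed chunks
                "options": {
                    "temperature": 0.7,  # sampling temperature
                    "num_ctx": 8192,     # context window size in tokens
                },
            },
            timeout=300,
        )
        resp.raise_for_status()
        print(resp.json()["response"])

    Same idea for DeepSeek R1, just swap in its model tag.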

    The problem continues to be that LLMs are not suitable for many applications, and where they are useful, they are sloppy and inconsistent.

    My laptop is one of the ones they’re talking about in the article. It has an AMD NPU and a 780M APU that runs games about as well as an older budget graphics card. It handles running local models really well for its size and power draw. Running local models is still lame as hell, though, and not how I end up utilizing the hardware. 😑

    • Detun3d@lemm.ee
      7 hours ago

      Does Ollama accept custom parameters now?

      I wasn’t talking about their effectiveness, though. Yeah, they’re sloppy as hell, but I’d rather trust a sloppy tool I set up at home and use myself than have someone I don’t trust using their sloppy tools on my property: tinkering with it without permission when I’m not looking and changing their terms and prices every day.

      But granted, your point is a really good one. These AI-ready laptops don’t give you the bang for your buck you’d expect. We’re all better off taking good care of our older hardware and waiting longer for components that are a true improvement before replacing them.