schizoidman@lemm.ee to Technology@lemmy.world (English) · 11 days ago
DeepSeek's distilled new R1 AI model can run on a single GPU | TechCrunch (techcrunch.com)
cross-posted to: [email protected]
Irdial@lemmy.sdf.org · 11 days ago
On my Mac mini running LM Studio, it managed 1702 tokens at 17.19 tok/sec and thought for 1 minute. If accurate, high-performance models become more able to run on consumer hardware, I would use my 3060 as a dedicated inference device.
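For anyone who wants to reproduce a number like that, here is a minimal sketch that times a single completion against LM Studio's local OpenAI-compatible server and divides completion tokens by wall-clock time. The base URL is LM Studio's default local endpoint; the model identifier is a placeholder, not necessarily what the commenter ran, so substitute whatever model you have loaded.

```python
# Minimal sketch: estimate tokens/sec for a local LM Studio model.
# Assumptions: LM Studio's server is running on its default port (1234),
# and the model name below is a placeholder for whatever is loaded.
import time
import requests

BASE_URL = "http://localhost:1234/v1"   # LM Studio's default OpenAI-compatible endpoint
MODEL = "deepseek-r1-0528-qwen3-8b"     # placeholder identifier; check your loaded model

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Explain speculative decoding in two sentences."}],
    "stream": False,
}

start = time.time()
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=600)
resp.raise_for_status()
elapsed = time.time() - start

data = resp.json()
completion_tokens = data.get("usage", {}).get("completion_tokens")
if completion_tokens:
    # Throughput over the whole request, including any "thinking" tokens the model emits
    print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.2f} tok/sec")
else:
    print(f"Response took {elapsed:.1f}s; no usage stats returned")
```

Note this measures end-to-end throughput for one request, so prompt processing time is included; streaming and averaging over several prompts would give a steadier figure.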