You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers. AI code is absolutely up to production quality! Also, you’re all…
ah yes, my ability to read a pdf immediately confers upon me all the resources required to engage in materially equivalent experimentation of the thing that I just read! no matter whether the publisher spent cents or billions in the execution and development of said publication, oh no! it is so completely a cost paid just once, and thereafter it’s ~totally~ free!
oh, wait, hang on. no. no it’s the other thing. that one where all the criticisms continue to hold! my bad, sorry for mistaking those. guess I was roleplaying a LLM for a moment there!
You can experiment on your own GPU by running the tests using a variety of models of different generations (LLAMA 2 class 7B, LLAMA 3 class 7B, Gemma, Granite, Qwen, etc…)
Even the lowest-end desktop hardware can run at least 4B models. The only real difficulty is scripting the test system, but the papers are usually helpful in describing their test methodology.
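The "scripting the test system" step the commenter mentions really can be small. A minimal sketch of such a harness, assuming the local model sits behind a simple prompt-in, string-out callable (stubbed out here; a real setup would wire this to llama.cpp, Ollama, or similar):

```python
# Minimal eval-harness sketch: feed prompts to a model and score
# exact-match answers. The model call is a stub; swap in any local
# backend that fits the same prompt -> answer interface.
from typing import Callable

def run_eval(model: Callable[[str], str], cases: list[tuple[str, str]]) -> float:
    """Return the fraction of prompts the model answers correctly."""
    correct = 0
    for prompt, expected in cases:
        answer = model(prompt).strip().lower()
        if answer == expected.strip().lower():
            correct += 1
    return correct / len(cases)

# Hypothetical stub standing in for a local 4B-class model.
def stub_model(prompt: str) -> str:
    return "3" if "strawberry" in prompt.lower() else "unknown"

cases = [
    ("How many Rs are in 'strawberry'? Answer with a digit.", "3"),
    ("What is the capital of France? One word.", "Paris"),
]
score = run_eval(stub_model, cases)
print(score)  # 0.5 with this stub
```

The point is only that the harness itself is a few lines; the work is in matching whatever methodology the paper under test actually used.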
👨🏿🦲: how many billions of models are you on
🗿: like, maybe 3, or 4 right now my dude
👨🏿🦲: you are like a little baby
👨🏿🦲: watch this
glue pizza
The most recent Qwen model actually works really well for cases like that, but this one I haven’t tested for myself and I’m going based on what some dude on reddit tested
Good for what? Glue pizza? Unnerving/creepy pasta?
Not making these famous logical errors
For example, how many Rs are in Strawberry? Or shit like that
(Although that one is a bad example because token based models will fundamentally make such mistakes. There is a new technique that lets LLMs process byte level information that fixes it, however)
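The tokenization point is easy to illustrate: the model never sees individual letters, only subword chunks, so character-level questions are answered blind. A toy demonstration (the split shown is a made-up example of a subword segmentation, not any particular tokenizer's real output):

```python
# Character-level question: trivial when you can actually see the letters.
word = "strawberry"
print(word.count("r"))  # 3

# What a subword tokenizer might hand the model instead (hypothetical
# split, for illustration only -- real vocabularies differ):
tokens = ["str", "aw", "berry"]

# The letter count is only recoverable by looking inside the pieces;
# the model operates on opaque token IDs and never does this join.
print(sum(t.count("r") for t in tokens))  # 3
```

Byte- or character-level input schemes sidestep this by giving the model the letters directly, at the cost of much longer sequences.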
oh, I get it, you personally choose not to make these structurally-repeatable-by-foundation errors? you personally choose to be a Unique And Correct Snowflake?
wow shit damn, I sure want to read your eventual uni paper, see what kind of distinctly novel insight you’ve had to wrangle this domain!
you have lost the game
you have been voted off the island
you are the weakest link
etc etc etc
This is the most “insufferable redditor” stereotype shit possible, and to think we’re not even on Reddit
nah, the most insufferable Reddit shit was when you decided Lemmy doesn’t want to learn because somebody called you out on the confident bullshit you’re making up on the spot
like LLM like shithead though am I right?
fuck, there’s potential here, but a bit too specific for a t-shirt?
perhaps?
a’ight, sure bub, let’s play
tell me what hw spec I need to deploy some kind of interactive user-facing prompt system backed by whatever favourite LLM/transformer-model you want to pick. idgaf if it’s llama or qwen or some shit you’ve got brewing in your back shed - if it’s on huggingface, fair game. here’s the baselines:
expected response latencies: human, or better
expected topical coherence: mid-support capability or above
expected correctness: at worst “I misunderstood $x” in the sense of “whoops, sorry, I thought you were asking about ${foo} but I answered about ${bar}”; i.e. actual, concrete contextual understanding
(so, basically, anything a competent L2 support engineer at some random ISP or whatever could do)
hit it, I’m waiting.
you’ll be waiting a while. it turns out “i’m not saying it’s always programming.dev, but” was already in my previous ban reasons, and it was this time too.
RIP my hopes and dreams :<