Ever since reporting earlier this year on how easy it is to trick an agentic browser, I’ve been following the intersections between modern AI and old-school scams. Now, there’s a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When unknowing users execute these commands, they grant the hackers the access they need to install malware.
The warning comes by way of a recent report from detection-and-response firm Huntress. Here’s how it works. First, the threat actor has a
→ Continue reading at Engadget