Showing only posts in Bruce Schneier. Show all posts.

Manipulating AI Summarization Features

Source

Microsoft is reporting : Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters.... These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to …

Why Tehran’s Two-Tiered Internet Is So Dangerous

Source

Iran is slowly emerging from the most severe communications blackout in its history and one of the longest in the world. Triggered as part of January’s government crackdown against citizen protests nationwide, the regime implemented an internet shutdown that transcends the standard definition of internet censorship. This was …

On the Security of Password Managers

Source

Good article on password managers that secretly have a backdoor. New research shows that these claims aren’t true in all cases, particularly when account recovery is in place or password managers are set to share vaults or organize users into groups. The researchers reverse-engineered or closely analyzed Bitwarden …

AI Found Twelve New Vulnerabilities in OpenSSL

Source

The title of the post is” What AI Security Research Looks Like When It Works,” and I agree: In the latest OpenSSL security release> on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for …

Side-Channel Attacks Against LLMs

Source

Here are three papers describing different side-channel attacks against LLMs. “ Remote Timing Attacks on Efficient Language Model Inference “: Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or …

The Promptware Kill Chain

Source

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “ prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious …

Prompt Injection Via Road Signs

Source

Interesting research: “ CHAI: Command Hijacking Against Embodied AI.” Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however …

LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days

Source

This is amazing : Opus 4.6 is notably better at finding high-severity vulnerabilities than previous models and a sign of how quickly things are moving. Security teams have been automating vulnerability discovery for years, investing heavily in fuzzing infrastructure and custom harnesses to find bugs at scale. But what …

page 1 | older articles »