Showing only posts tagged academic papers. Show all posts.

New Attack Against Wi-Fi

Source

It’s called AirSnitch : Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the …

Side-Channel Attacks Against LLMs

Source

Here are three papers describing different side-channel attacks against LLMs. “ Remote Timing Attacks on Efficient Language Model Inference “: Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or …

Prompt Injection Via Road Signs

Source

Interesting research: “ CHAI: Command Hijacking Against Embodied AI.” Abstract: Embodied Artificial Intelligence (AI) promises to handle edge cases in robotic vehicle systems where data is scarce by using common-sense reasoning grounded in perception and action to generalize beyond training distributions and adapt to novel real-world situations. These capabilities, however …

Substitution Cipher Based on The Voynich Manuscript

Source

Here’s a fun paper: “ The Naibbe cipher: a substitution cipher that encrypts Latin and Italian as Voynich Manuscript-like ciphertext “: Abstract: In this article, I investigate the hypothesis that the Voynich Manuscript (MS 408, Yale University Beinecke Library) is compatible with being a ciphertext by attempting to develop a …

A Surprising Amount of Satellite Traffic Is Unencrypted

Source

Here’s the summary : We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice …

Time-of-Check Time-of-Use Attacks Against LLMs

Source

This is a nice piece of research: “ Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents “.: Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e …

Assessing the Quality of Dried Squid

Source

Research : Nondestructive detection of multiple dried squid qualities by hyperspectral imaging combined with 1D-KAN-CNN Abstract: Given that dried squid is a highly regarded marine product in Oriental countries, the global food industry requires a swift and noninvasive quality assessment of this product. The current study therefore uses visible­near-infrared …

New Cryptanalysis of the Fiat-Shamir Protocol

Source

A couple of months ago, a new paper demonstrated some new attacks against the Fiat-Shamir transformation. Quanta published a good article that explains the results. This is a pretty exciting paper from a theoretical perspective, but I don’t see it leading to any practical real-world cryptanalysis. The fact …

Friday Squid Blogging: The Origin and Propagation of Squid

Source

New research (paywalled): Editor’s summary: Cephalopods are one of the most successful marine invertebrates in modern oceans, and they have a 500-million-year-old history. However, we know very little about their evolution because soft-bodied animals rarely fossilize. Ikegami et al. developed an approach to reveal squid fossils, focusing on …

GPT-4o-mini Falls for Psychological Manipulation

Source

Interesting experiment : To design their experiment, the University of Pennsylvania researchers tested 2024’s GPT-4o-mini model on two requests that it should ideally refuse: calling the user a jerk and giving directions for how to synthesize lidocaine. The researchers created experimental prompts for both requests using each of seven …

Indirect Prompt Injection Attacks Against LLM Assistants

Source

Really good research on practical attacks against LLM agents. “ Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous ” Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware­—maliciously engineered prompts designed to manipulate LLMs …

Subverting AIOps Systems Through Poisoned Input Data

Source

In this input integrity attack against an AI system, researchers were able to fool AIOps tools: AIOps refers to the use of LLM-based agents to gather and analyze application telemetry, including system logs, performance metrics, traces, and alerts, to detect problems and then suggest or carry out corrective actions …

Subliminal Learning in AIs

Source

Today’s freaky LLM behavior : We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers …

“Encryption Backdoors and the Fourth Amendment”

Source

Law journal article that looks at the Dual_EC_PRNG backdoor from a US constitutional perspective: Abstract : The National Security Agency (NSA) reportedly paid and pressured technology companies to trick their customers into using vulnerable encryption products. This Article examines whether any of three theories removed the Fourth Amendment’s requirement …

Applying Security Engineering to Prompt Injection Security

Source

This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within …

Regulating AI Behavior with a Hypervisor

Source

Interesting research: “ Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract :As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that …

page 1 | older articles »