Showing only posts tagged academic papers. Show all posts.

A Surprising Amount of Satellite Traffic Is Unencrypted

Source

Here’s the summary : We pointed a commercial-off-the-shelf satellite dish at the sky and carried out the most comprehensive public study to date of geostationary satellite communication. A shockingly large amount of sensitive traffic is being broadcast unencrypted, including critical infrastructure, internal corporate and government communications, private citizens’ voice …

Time-of-Check Time-of-Use Attacks Against LLMs

Source

This is a nice piece of research: “ Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents “.: Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e …

Assessing the Quality of Dried Squid

Source

Research : Nondestructive detection of multiple dried squid qualities by hyperspectral imaging combined with 1D-KAN-CNN Abstract: Given that dried squid is a highly regarded marine product in Oriental countries, the global food industry requires a swift and noninvasive quality assessment of this product. The current study therefore uses visible­near-infrared …

New Cryptanalysis of the Fiat-Shamir Protocol

Source

A couple of months ago, a new paper demonstrated some new attacks against the Fiat-Shamir transformation. Quanta published a good article that explains the results. This is a pretty exciting paper from a theoretical perspective, but I don’t see it leading to any practical real-world cryptanalysis. The fact …

Friday Squid Blogging: The Origin and Propagation of Squid

Source

New research (paywalled): Editor’s summary: Cephalopods are one of the most successful marine invertebrates in modern oceans, and they have a 500-million-year-old history. However, we know very little about their evolution because soft-bodied animals rarely fossilize. Ikegami et al. developed an approach to reveal squid fossils, focusing on …

GPT-4o-mini Falls for Psychological Manipulation

Source

Interesting experiment : To design their experiment, the University of Pennsylvania researchers tested 2024’s GPT-4o-mini model on two requests that it should ideally refuse: calling the user a jerk and giving directions for how to synthesize lidocaine. The researchers created experimental prompts for both requests using each of seven …

Indirect Prompt Injection Attacks Against LLM Assistants

Source

Really good research on practical attacks against LLM agents. “ Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous ” Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware­—maliciously engineered prompts designed to manipulate LLMs …

Subverting AIOps Systems Through Poisoned Input Data

Source

In this input integrity attack against an AI system, researchers were able to fool AIOps tools: AIOps refers to the use of LLM-based agents to gather and analyze application telemetry, including system logs, performance metrics, traces, and alerts, to detect problems and then suggest or carry out corrective actions …

Subliminal Learning in AIs

Source

Today’s freaky LLM behavior : We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers …

“Encryption Backdoors and the Fourth Amendment”

Source

Law journal article that looks at the Dual_EC_PRNG backdoor from a US constitutional perspective: Abstract : The National Security Agency (NSA) reportedly paid and pressured technology companies to trick their customers into using vulnerable encryption products. This Article examines whether any of three theories removed the Fourth Amendment’s requirement …

Applying Security Engineering to Prompt Injection Security

Source

This seems like an important advance in LLM security against prompt injection: Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within …

Regulating AI Behavior with a Hypervisor

Source

Interesting research: “ Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract :As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that …

AIs as Trusted Third Parties

Source

This is a truly fascinating paper: “ Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties: Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as …

Friday Squid Blogging: A New Explanation of Squid Camouflage

Source

New research : An associate professor of chemistry and chemical biology at Northeastern University, Deravi’s recently published paper in the Journal of Materials Chemistry C sheds new light on how squid use organs that essentially function as organic solar cells to help power their camouflage abilities. As usual, you …

Is Security Human Factors Research Skewed Towards Western Ideas and Habits?

Source

Really interesting research: “ How WEIRD is Usable Privacy and Security Research? ” by Ayako A. Hasegawa Daisuke Inoue, and Mitsuaki Akiyama: Abstract : In human factor fields such as human-computer interaction (HCI) and psychology, researchers have been concerned that participants mostly come from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) countries …

Improvements in Brute Force Attacks

Source

New paper: “ GPU Assisted Brute Force Cryptanalysis of GPRS, GSM, RFID, and TETRA: Brute Force Cryptanalysis of KASUMI, SPECK, and TEA3.” Abstract: Key lengths in symmetric cryptography are determined with respect to the brute force attacks with current technology. While nowadays at least 128-bit keys are recommended, there are …

“Emergent Misalignment” in LLMs

Source

Interesting research: “ Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs “: Abstract: We present a surprising result regarding LLMs and alignment. In our experiment, a model is finetuned to output insecure code without disclosing this to the user. The resulting model acts misaligned on a broad range of prompts …

More Research Showing AI Breaking the Rules

Source

These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating. Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any …

Implementing Cryptography in AI Systems

Source

Interesting research: “ How to Securely Implement Cryptography in Deep Neural Networks.” Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how can we equip them with a desired cryptographic functionality (e.g, to decrypt an encrypted input, to verify that this input is authorized, or …

page 1 | older articles »