Showing only posts tagged academic papers. Show all posts.

Using LLMs to Exploit Vulnerabilities

Source

Interesting research: “ Teams of LLM Agents can Exploit Zero-Day Vulnerabilities.” Abstract: LLM agents have become increasingly sophisticated, especially in the realm of cybersecurity. Researchers have shown that LLM agents can exploit real-world vulnerabilities when given a description of the vulnerability and toy capture-the-flag problems. However, these agents still perform …

LLMs Acting Deceptively

Source

New research: “ Deception abilities emerged in large language models “: Abstract: Large language models (LLMs) are currently at the forefront of intertwining AI systems with human communication and everyday life. Thus, aligning them with human values is of great importance. However, given the steady increase in reasoning abilities, future LLMs …

Exploiting Mistyped URLs

Source

Interesting research: “ Hyperlink Hijacking: Exploiting Erroneous URL Links to Phantom Domains “: Abstract: Web users often follow hyperlinks hastily, expecting them to be correctly programmed. However, it is possible those links contain typos or other mistakes. By discovering active but erroneous hyperlinks, a malicious actor can spoof a website or …

Privacy Implications of Tracking Wireless Access Points

Source

Brian Krebs reports on research into geolocating routers: Apple and the satellite-based broadband service Starlink each recently took steps to address new research into the potential security and privacy implications of how their services geolocate devices. Researchers from the University of Maryland say they relied on publicly available data …

Licensing AI Engineers

Source

The debate over professionalizing software engineers is decades old. (The basic idea is that, like lawyers and architects, there should be some professional licensing requirement for software engineers.) Here’s a law journal article recommending the same idea for AI engineers. This Article proposes another way: professionalizing AI engineering …

A Taxonomy of Prompt Injection Attacks

Source

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies. It seems as if the most common successful strategy is the “compound instruction attack,” as …

LLM Prompt Injection Worm

Source

Researchers have demonstrated a worm that spreads through prompt injection. Details : In one instance, the researchers, acting as attackers, wrote an email including the adversarial text prompt, which “poisons” the database of an email assistant using retrieval-augmented generation (RAG), a way for LLMs to pull in extra data from …

Friday Squid Blogging: New Extinct Species of Vampire Squid Discovered

Source

Paleontologists have discovered a 183-million-year-old species of vampire squid. Prior research suggests that the vampyromorph lived in the shallows off an island that once existed in what is now the heart of the European mainland. The research team believes that the remarkable degree of preservation of this squid is …

Apple Announces Post-Quantum Encryption Algorithms for iMessage

Source

Apple announced PQ3, its post-quantum encryption standard based on the Kyber secure key-encapsulation protocol, one of the post-quantum algorithms selected by NIST in 2022. There’s a lot of detail in the Apple blog post, and more in Douglas Stabila’s security analysis. I am of two minds about …

AIs Hacking Websites

Source

New research : LLM Agents can Autonomously Hack Websites Abstract: In recent years, large language models (LLMs) have become increasingly capable and can now interact with tools (i.e., call functions), read documents, and recursively call themselves. As a result, these LLMs can now function autonomously as agents. With the …

Improving the Cryptanalysis of Lattice-Based Public-Key Algorithms

Source

The winner of the Best Paper Award at Crypto this year was a significant improvement to lattice-based cryptanalysis. This is important, because a bunch of NIST’s post-quantum options base their security on lattice problems. I worry about standardizing on post-quantum algorithms too quickly. We are still learning a …

On Software Liabilities

Source

Over on Lawfare, Jim Dempsey published a really interesting proposal for software liability: “Standard for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor.” Section 1 of this paper sets the stage by briefly describing the problem to be solved. Section 2 canvasses …

Teaching LLMs to Be Deceptive

Source

Interesting research: “ Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training “: Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy …

Poisoning AI Models

Source

New research into poisoning AI models : The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts …

Side Channels Are Common

Source

Really interesting research: “ Lend Me Your Ear: Passive Remote Physical Side Channels on PCs.” Abstract: We show that built-in sensors in commodity PCs, such as microphones, inadvertently capture electromagnetic side-channel leakage from ongoing computation. Moreover, this information is often conveyed by supposedly-benign channels such as audio recordings and common …

Code Written with AI Assistants Is Less Secure

Source

Interesting research: “ Do Users Write More Insecure Code with AI Assistants? “: Abstract: We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to …

On IoT Devices and Software Liability

Source

New law journal article : Smart Device Manufacturer Liability and Redress for Third-Party Cyberattack Victims Abstract: Smart devices are used to facilitate cyberattacks against both their users and third parties. While users are generally able to seek redress following a cyberattack via data protection legislation, there is no equivalent pathway …

« newer articles | page 2 | older articles »