Showing only posts tagged LLM.

Side-Channel Attacks Against LLMs

Source

Here are three papers describing different side-channel attacks against LLMs. “Remote Timing Attacks on Efficient Language Model Inference”: Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or …

The Promptware Kill Chain

Source

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLMs intended to perform malicious …

LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days

Source

This is amazing: Opus 4.6 is notably better at finding high-severity vulnerabilities than previous models, and a sign of how quickly things are moving. Security teams have been automating vulnerability discovery for years, investing heavily in fuzzing infrastructure and custom harnesses to find bugs at scale. But what …

Could ChatGPT Convince You to Buy Something?

Source

Eighteen months ago, it was plausible that artificial intelligence might take a different path than social media. Back then, AI’s development hadn’t consolidated under a small number of big tech firms. Nor had it capitalized on consumer attention, surveilling users and delivering ads. Unfortunately, the AI industry …

AI and the Corporate Capture of Knowledge

Source

More than a decade after Aaron Swartz’s death, the United States is still living inside the contradiction that destroyed him. Swartz believed that knowledge, especially publicly funded knowledge, should be freely accessible. Acting on that, he downloaded thousands of academic articles from the JSTOR archive with the intention …

Are We Ready to Be Governed by Artificial Intelligence?

Source

Artificial Intelligence (AI) overlords are a common trope in science-fiction dystopias, but the reality looks much more prosaic. The technologies of artificial intelligence are already pervading many aspects of democratic government, affecting our lives in ways both large and small. This has occurred largely without our notice or consent …

Four Ways AI Is Being Used to Strengthen Democracies Worldwide

Source

Democracy is colliding with the technologies of artificial intelligence. Judging from the audience reaction at the recent World Forum on Democracy in Strasbourg, the general expectation is that democracy will be the worse for it. We have another narrative. Yes, there are risks to democracy from AI, but there …

Scientists Need a Positive Vision for AI

Source

For many in the research community, it’s gotten harder to be optimistic about the impacts of artificial intelligence. As authoritarianism rises around the world, AI-generated “slop” is overwhelming legitimate media, while AI-generated deepfakes are spreading misinformation and parroting extremist messages. AI is making warfare more precise and …

Agentic AI’s OODA Loop Problem

Source

The OODA loop—for observe, orient, decide, act—is a framework to understand decision-making in adversarial situations. We apply the same framework to artificial intelligence agents, which have to make their decisions with untrustworthy observations and orientation. To solve this problem, we need new systems of input, processing, and …

AI and the Future of American Politics

Source

Two years ago, Americans anxious about the forthcoming 2024 presidential election were considering the malevolent force of an election influencer: artificial intelligence. Over the past several years, we have seen plenty of warning signs from elections worldwide demonstrating how AI can be used to propagate misinformation and alter the …

Autonomous AI Hacking and the Future of Cybersecurity

Source

AI agents are now hacking computers. They’re getting better at all phases of cyberattacks, faster than most of us expected. They can chain together different aspects of a cyber operation, and hack autonomously, at computer speeds and scale. This is going to change everything. Over the summer, hackers …

Time-of-Check Time-of-Use Attacks Against LLMs

Source

This is a nice piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents”: Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior work has examined prompt-based attacks (e …
