Showing only posts tagged AI. Show all posts.

What Anthropic’s Mythos Means for the Future of Cybersecurity

Source

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed …

Human Trust of AI Agents

Source

Interesting research: “ Humans expect rationality and cooperation from LLM opponents in strategic games.” Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLMs opponents in strategic settings. We present the results of the first …

How Hackers Are Thinking About AI

Source

Interesting paper: “ What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation. ” Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by …

On Anthropic’s Mythos Preview and Project Glasswing

Source

The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of …

Cybersecurity in the Age of Instant Software

Source

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet …

OpenClaw gives users yet another reason to be freaked out about security

Source

For more than a month, security practitioners have been warning about the perils of using OpenClaw, the viral AI agentic tool that has taken the development community by storm. A recently fixed vulnerability provides an object lesson for why. OpenClaw, which was introduced in November and now boasts 347 …

As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

Source

In December, the Trump administration signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment …

Claude Used to Hack Mexican Government

Source

An unknown hacker used Anthropic’s LLM to hack the Mexican government: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup …

Manipulating AI Summarization Features

Source

Microsoft is reporting : Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters.... These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to …

LLMs can unmask pseudonymous users at scale with surprising accuracy

Source

Burner accounts on social media sites can increasingly be analyzed to identify the pseudonymous users who post to them using AI in research that has far-reaching consequences for privacy on the Internet, researchers said. The finding, from a recently published research paper, is based on results of experiments correlating …

page 1 | older articles »