Showing only posts tagged LLMs. Show all posts.

Overrun with AI slop, cURL scraps bug bounties to ensure "intact mental health"

Source

The project developer for one of the Internet’s most popular networking tools is scrapping its vulnerability reward program after being overrun by a spike in the submission of low-quality reports, much of it AI-generated slop. “We are just a small single open source project with a small number …

A single click mounted a covert, multistage attack against Copilot

Source

Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a URL. The hackers in this case were white-hat researchers from security firm Varonis. The net effect of their multistage attack was that …

New attack on ChatGPT research agent pilfers secrets from Gmail inboxes

Source

The face-palm-worthy prompt injections against AI assistants continue. Today’s installment hits OpenAI’s Deep Research agent. Researchers recently devised an attack that plucked confidential information out of a user’s Gmail inbox and sent it to an attacker-controlled web server, with no interaction required on the part of …

OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters

Source

Spammers used OpenAI to generate messages that were unique to each recipient, allowing them to bypass spam-detection filters and blast unwanted messages to more than 80,000 websites in four months, researchers said Wednesday. The finding, documented in a post published by security firm SentinelOne’s SentinelLabs, underscores the …

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Source

In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …

New hack uses prompt injection to corrupt Gemini’s long-term memory

Source

In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep …