Showing only posts tagged prompt injections. Show all posts.

A single click mounted a covert, multistage attack against Copilot

Source

Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a URL. The hackers in this case were white-hat researchers from security firm Varonis. The net effect of their multistage attack was that …

ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

Source

There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users. The reason more often …

New attack on ChatGPT research agent pilfers secrets from Gmail inboxes

Source

The face-palm-worthy prompt injections against AI assistants continue. Today’s installment hits OpenAI’s Deep Research agent. Researchers recently devised an attack that plucked confidential information out of a user’s Gmail inbox and sent it to an attacker-controlled web server, with no interaction required on the part of …

Flaw in Gemini CLI coding tool could allow hackers to run nasty commands

Source

Researchers needed less than 48 hours with Google’s new Gemini CLI coding agent to devise an exploit that made a default configuration of the tool surreptitiously exfiltrate sensitive data to an attacker-controlled server. Gemini CLI is a free, open-source AI tool that works in the terminal environment to …

New attack can steal cryptocurrency by planting false memories in AI chatbots

Source

Imagine a world where AI-powered bots can buy or sell cryptocurrency, make investments, and execute software-defined contracts at the blink of an eye, depending on minute-to-minute currency prices, breaking news, or other market-moving events. Then imagine an adversary causing the bot to redirect payments to an account they control …

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Source

In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …