Showing only posts tagged LLMs. Show all posts.

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Source

In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …

New hack uses prompt injection to corrupt Gemini’s long-term memory

Source

In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep …

Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing.

Source

What if there was a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters large language models can recognize and their human users can’t? As it turns out, there was—and in some cases …