Showing only posts tagged large language models. Show all posts.

New hack uses prompt injection to corrupt Gemini’s long-term memory

Source

In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep …

Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing.

Source

What if there was a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters large language models can recognize and their human users can’t? As it turns out, there was—and in some cases …