Security Feed

Oct 09 2025: It's trivially easy to poison LLMs into spitting out gibberish, says Anthropic

Source

Just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016% of the whole dataset. Poisoning AI models might be way easier than previously thought, if an Anthropic study is anything to go on. [...]

Posted by Brandon Vigliarolo on Thu 09 October 2025 in The Register.
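As a rough sanity check on the scale quoted in that summary, the sketch below works out what 0.00016% of a training corpus means in absolute terms. The corpus size is an assumption on my part (a Chinchilla-style budget of ~20 training tokens per parameter for a 13B-parameter model), not a figure from the article:

```python
# Back-of-the-envelope check of the headline figure: 250 poisoned documents
# quoted as roughly 0.00016% of the full training set. The corpus size is an
# assumed Chinchilla-style budget (~20 tokens per parameter), not a number
# taken from the article.
params = 13e9                    # 13B-parameter model
tokens_per_param = 20            # assumed compute-optimal heuristic
corpus_tokens = params * tokens_per_param   # ~2.6e11 tokens

poison_docs = 250
poison_share = 0.00016 / 100     # 0.00016% expressed as a ratio

poison_tokens = corpus_tokens * poison_share
print(f"Assumed corpus:  {corpus_tokens:.2e} tokens")           # 2.60e+11
print(f"Poisoned tokens: {poison_tokens:,.0f}")                 # 416,000
print(f"Tokens per doc:  {poison_tokens / poison_docs:,.0f}")   # 1,664
```

Under those assumptions, the entire attack fits in a few hundred thousand tokens at around a thousand-odd tokens per document, a vanishingly small slice of the corpus, which is exactly the point the summary is making.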

Categories

  1. Ars Technica
  2. AWS Security
  3. BleepingComputer
  4. Brian Krebs
  5. Bruce Schneier
  6. GCP Security
  7. Google Project Zero
  8. The Daily Swig
  9. The Guardian
  10. The Register
  11. Threatpost

Tag cloud

  • Security
  • Uncategorized
  • Security, Identity, & Compliance
  • Biz & IT
  • Security Blog
  • Microsoft
  • Security & Identity
  • Google
  • cryptocurrency
  • AI
  • Announcements
  • Foundational (100)
  • A Little Sunshine
  • Legal
  • Apple
  • Artificial Intelligence
  • privacy
  • Mobile
  • squid
  • Advanced (300)
  • Intermediate (200)
  • The Coming Storm
  • hacking
  • Technical How-to
  • Best Practices

Security Feed. Powered by Pelican and m.css. Code is available on GitLab.