Amazon AI coding agent hacked to inject data wiping commands
A hacker planted data wiping code in a version of Amazon's generative AI-powered assistant, the Q Developer Extension for Visual Studio Code. [...]
A hacker planted data wiping code in a version of Amazon's generative AI-powered assistant, the Q Developer Extension for Visual Studio Code. [...]
A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. [...]
Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links. [...]
Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license. [...]
The Berlin Commissioner for Data Protection has formally requested Google and Apple to remove the DeepSeek AI application from the application stores due to GDPR violations. [...]
North Korean advanced persistent threat (APT) 'BlueNoroff' (aka 'Sapphire Sleet' or 'TA444') are using deepfake company executives during fake Zoom calls to trick employees into installing custom malware on their computers. [...]
Work management platform Asana is warning users of its new Model Context Protocol (MCP) feature that a flaw in its implementation potentially led to data exposure from their instances to other users and vice versa. [...]
At this year’s AWS Summit in Washington, DC, I had the privilege of moderating a fireside chat with Steve Schmidt, Amazon’s Chief Security Officer, and Lakshmi Raman, the CIA’s Chief Artificial Intelligence Officer. Our discussion explored how AI is transforming cybersecurity, threat response, and innovation across …
At this year’s AWS Summit in Washington, DC, I had the privilege of moderating a fireside chat with Steve Schmidt, Amazon’s Chief Security Officer, and Lakshmi Raman, the CIA’s Chief Artificial Intelligence Officer. Our discussion explored how AI is transforming cybersecurity, threat response, and innovation across …
A new attack dubbed 'EchoLeak' is the first known zero-click AI vulnerability that enables attackers to exfiltrate sensitive data from Microsoft 365 Copilot from a user's context without interaction. [...]
RSA Conference (RSAC) 2025 drew 730 speakers, 650 exhibitors, and 44,000 attendees from across the globe to the Moscone Center in San Francisco, California from April 28 through May 1. The keynote lineup was eclectic, with 37 presentations featuring speakers ranging from NBA Hall of Famer Earvin “Magic …
Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these …
Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …
As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …
As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …
Fake AI-powered video generation tools are being used to distribute a new information-stealing malware family called 'Noodlophile,' under the guise of generated media content. [...]
Google is implementing a new Chrome security feature that uses the built-in 'Gemini Nano' large-language model (LLM) to detect and block tech support scams while browsing the web. [...]
Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …
In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio to automatically generate and run SQL queries on Amazon Athena with Amazon Bedrock and Amazon Security Lake …
WhatsApp has announced the introduction of 'Private Processing,' a new technology that enables users to utilize advanced AI features by offloading tasks to privacy-preserving cloud servers. [...]
Amazon Web Services (AWS) is pleased to announce the release of new Security Reference Architecture (SRA) code examples for securing generative AI workloads. The examples include two comprehensive capabilities focusing on secure model inference and RAG implementations, covering a wide range of security best practices using AWS generative AI …
Spain's police arrested six individuals behind a large-scale cryptocurrency investment scam that used AI tools to generate deepfake ads featuring popular public figures to lure people. [...]
Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders. [...]
In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …
In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep …