Showing only posts tagged Artificial Intelligence. Show all posts.

AI security strategies from Amazon and the CIA: Insights from AWS Summit Washington, DC

Source

At this year’s AWS Summit in Washington, DC, I had the privilege of moderating a fireside chat with Steve Schmidt, Amazon’s Chief Security Officer, and Lakshmi Raman, the CIA’s Chief Artificial Intelligence Officer. Our discussion explored how AI is transforming cybersecurity, threat response, and innovation across …

Many voices, one community: Three themes from RSA Conference 2025

Source

RSA Conference (RSAC) 2025 drew 730 speakers, 650 exhibitors, and 44,000 attendees from across the globe to the Moscone Center in San Francisco, California from April 28 through May 1. The keynote lineup was eclectic, with 37 presentations featuring speakers ranging from NBA Hall of Famer Earvin “Magic …

Researchers cause GitLab AI developer assistant to turn safe code malicious

Source

Marketers promote AI-assisted developer tools as workhorses that are essential for today’s software engineer. Developer platform GitLab, for instance, claims its Duo chatbot can “instantly generate a to-do list” that eliminates the burden of “wading through weeks of commits.” What these companies don’t say is that these …

Introducing the AWS User Guide to Governance, Risk and Compliance for Responsible AI Adoption within Financial Services Industries

Source

Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …

AI lifecycle risk management: ISO/IEC 42001:2023 for AI governance

Source

As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …

AI lifecycle risk management: ISO/IEC 42001:2023 for AI governance

Source

As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …

Introducing the AWS User Guide to Governance, Risk and Compliance for Responsible AI Adoption within Financial Services Industries

Source

Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …

Use an Amazon Bedrock powered chatbot with Amazon Security Lake to help investigate incidents

Source

In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio to automatically generate and run SQL queries on Amazon Athena with Amazon Bedrock and Amazon Security Lake …

Announcing AWS Security Reference Architecture Code Examples for Generative AI

Source

Amazon Web Services (AWS) is pleased to announce the release of new Security Reference Architecture (SRA) code examples for securing generative AI workloads. The examples include two comprehensive capabilities focusing on secure model inference and RAG implementations, covering a wide range of security best practices using AWS generative AI …

Gemini hackers can deliver more potent attacks with a helping hand from… Gemini

Source

In the growing canon of AI security, the indirect prompt injection has emerged as the most powerful means for attackers to hack large language models such as OpenAI’s GPT-3 and GPT-4 or Microsoft’s Copilot. By exploiting a model's inability to distinguish between, on the one hand, developer-defined …

New hack uses prompt injection to corrupt Gemini’s long-term memory

Source

In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep …

Experts Flag Security, Privacy Risks in DeepSeek AI App

Source

New mobile apps from the Chinese artificial intelligence (AI) company DeepSeek have remained among the top three “free” downloads for Apple and Google devices since their debut on Jan. 25, 2025. But experts caution that many of DeepSeek’s design choices — such as using hard-coded encryption keys, and sending …

Microsoft sues service for creating illicit content with its AI platform

Source

Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme that was designed to allow the creation of harmful and illicit content using the company’s platform for AI-generated content. The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of …

« newer articles | page 2