Showing only posts tagged Generative AI. Show all posts.

Securing Amazon Bedrock API keys: Best practices for implementation and management

Source

Recently, AWS released Amazon Bedrock API keys to make calls to the Amazon Bedrock API. In this post, we provide practical security guidance on effectively implementing, monitoring, and managing this new option for accessing Amazon Bedrock to help you build a comprehensive strategy for securing these keys. We also …

Protect your generative AI applications against encoding-based attacks with Amazon Bedrock Guardrails

Source

Amazon Bedrock Guardrails provides configurable safeguards to help you safely build generative AI applications at scale. It offers integrated safety and privacy protections that work across multiple foundation models (FMs), including models available in Amazon Bedrock and models hosted outside Amazon Bedrock from other providers. Bedrock Guardrails currently offers …

Defending LLM applications against Unicode character smuggling

Source

When interacting with AI applications, even seemingly innocent elements—such as Unicode characters—can have significant implications for security and data integrity. At Amazon Web Services (AWS), we continuously evaluate and address emerging threats across aspects of AI systems. In this blog post, we explore Unicode tag blocks, a …

Build secure network architectures for generative AI applications using AWS services

Source

As generative AI becomes foundational across industries—powering everything from conversational agents to real-time media synthesis—it simultaneously creates new opportunities for bad actors to exploit. The complex architectures behind generative AI applications expose a large surface area including public-facing APIs, inference services, custom web applications, and integrations with …

Enabling AI adoption at scale through enterprise risk management framework – Part 2

Source

In Part 1 of this series, we explored the fundamental risks and governance considerations. In this part, we examine practical strategies for adapting your enterprise risk management framework (ERMF) to harness generative AI’s power while maintaining robust controls. This part covers: Adapting your ERMF for the cloud Adapting …

Enabling AI adoption at scale through enterprise risk management framework – Part 1

Source

According to BCG research, 84% of executives view responsible AI as a top management responsibility, yet only 25% of them have programs that fully address it. Responsible AI can be achieved through effective governance, and with the rapid adoption of generative AI, this governance has become a business imperative …

Authorizing access to data with RAG implementations

Source

Organizations are increasingly using large language models (LLMs) to provide new types of customer interactions through generative AI-powered chatbots, virtual assistants, and intelligent search capabilities. To enhance these interactions, organizations are using Retrieval-Augmented Generation (RAG) to incorporate proprietary data, industry-specific knowledge, and internal documentation to provide more accurate, contextual …

AI security strategies from Amazon and the CIA: Insights from AWS Summit Washington, DC

Source

At this year’s AWS Summit in Washington, DC, I had the privilege of moderating a fireside chat with Steve Schmidt, Amazon’s Chief Security Officer, and Lakshmi Raman, the CIA’s Chief Artificial Intelligence Officer. Our discussion explored how AI is transforming cybersecurity, threat response, and innovation across …

Introducing the AWS User Guide to Governance, Risk and Compliance for Responsible AI Adoption within Financial Services Industries

Source

Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …

AI lifecycle risk management: ISO/IEC 42001:2023 for AI governance

Source

As AI becomes central to business operations, so does the need for responsible AI governance. But how can you make sure that your AI systems are ethical, resilient, and aligned with compliance standards? ISO/IEC 42001, the international management system standard for AI, offers a framework to help organizations …

Implementing safety guardrails for applications using Amazon SageMaker

Source

Large Language Models (LLMs) have become essential tools for content generation, document analysis, and natural language processing tasks. Because of the complex non-deterministic output generated by these models, you need to apply robust safety measures to help prevent inappropriate outputs and protect user interactions. These measures are crucial to …

Introducing the AWS User Guide to Governance, Risk and Compliance for Responsible AI Adoption within Financial Services Industries

Source

Financial services institutions (FSIs) are increasingly adopting AI technologies to drive innovation and improve customer experiences. However, this adoption brings new governance, risk, and compliance (GRC) considerations that organizations need to address. To help FSI customers navigate these challenges, AWS is excited to announce the launch of the AWS …

Use an Amazon Bedrock powered chatbot with Amazon Security Lake to help investigate incidents

Source

In part 2 of this series, we showed you how to use Amazon SageMaker Studio notebooks with natural language input to assist with threat hunting. This is done by using SageMaker Studio to automatically generate and run SQL queries on Amazon Athena with Amazon Bedrock and Amazon Security Lake …

Announcing AWS Security Reference Architecture Code Examples for Generative AI

Source

Amazon Web Services (AWS) is pleased to announce the release of new Security Reference Architecture (SRA) code examples for securing generative AI workloads. The examples include two comprehensive capabilities focusing on secure model inference and RAG implementations, covering a wide range of security best practices using AWS generative AI …

Implement effective data authorization mechanisms to secure your data used in generative AI applications – part 2

Source

In part 1 of this blog series, we walked through the risks associated with using sensitive data as part of your generative AI application. This overview provided a baseline of the challenges of using sensitive data with a non-deterministic large language model (LLM) and how to mitigate these challenges …

Safeguard your generative AI workloads from prompt injections

Source

Generative AI applications have become powerful tools for creating human-like content, but they also introduce new security challenges, including prompt injections, excessive agency, and others. See the OWASP Top 10 for Large Language Model Applications to learn more about the unique security risks associated with generative AI applications. When …

Microsoft sues service for creating illicit content with its AI platform

Source

Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme that was designed to allow the creation of harmful and illicit content using the company’s platform for AI-generated content. The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of …

New AWS Skill Builder course available: Securing Generative AI on AWS

Source

To support our customers in securing their generative AI workloads on Amazon Web Services (AWS), we are excited to announce the launch of a new AWS Skill Builder course: Securing Generative AI on AWS. This comprehensive course is designed to help security professionals, architects, and artificial intelligence and machine …