Showing only posts tagged Amazon Bedrock.

Securing the RAG ingestion pipeline: Filtering mechanisms

Retrieval-Augmented Generation (RAG) applications enhance the responses generated by large language models (LLMs) by integrating external data such as downloaded files, scraped web content, and user-contributed data pools. This integration improves the models’ performance by adding relevant context to the prompt. While RAG applications are a powerful way to dynamically …
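
As a rough illustration of the pattern the post describes, the sketch below retrieves text from an external store and prepends it to the prompt sent to an Amazon Bedrock model. The retriever, the model ID, and the function names are illustrative assumptions, not code from the post.

```python
import boto3

# Bedrock runtime client; the region is an assumption for this sketch.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def retrieve_relevant_chunks(question: str) -> list[str]:
    """Hypothetical retriever: a real pipeline would query a vector store or
    knowledge base built from the ingested documents."""
    return ["<text retrieved from your ingested documents>"]

def answer_with_rag(question: str) -> str:
    # Prepend the retrieved context so the model can ground its answer.
    context = "\n\n".join(retrieve_relevant_chunks(question))
    prompt = (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

Everything the retriever returns becomes part of the prompt, which is why the post focuses on filtering what is allowed into the ingestion pipeline in the first place.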

Implement effective data authorization mechanisms to secure your data used in generative AI applications

Data security and data authorization, as distinct from user authorization, are critical components of business workload architectures. Their importance has grown with the evolution of artificial intelligence (AI) technology, with generative AI introducing new opportunities to use internal data sources with large language models (LLMs) and multimodal foundation …
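
To make the distinction concrete, here is a minimal sketch, assuming hypothetical access metadata attached to documents at ingestion time: the caller's group entitlements are checked at the data layer, before any document can be added to the model's context.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    # Hypothetical access metadata recorded when the document is ingested.
    allowed_groups: set[str] = field(default_factory=set)

def authorized_context(documents: list[Document], caller_groups: set[str]) -> list[str]:
    """Return only the documents the caller is entitled to see.

    Filtering happens before any text reaches the LLM prompt, so an
    authenticated but unentitled user cannot pull restricted content
    out of the model's response.
    """
    return [d.text for d in documents if d.allowed_groups & caller_groups]

# Example: the finance-only document is dropped for a caller in "engineering".
docs = [
    Document("Quarterly revenue forecast", allowed_groups={"finance"}),
    Document("Public product FAQ", allowed_groups={"finance", "engineering"}),
]
print(authorized_context(docs, caller_groups={"engineering"}))  # ['Public product FAQ']
```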

Enhancing data privacy with layered authorization for Amazon Bedrock Agents

Customers are finding several advantages to using generative AI within their applications. However, using generative AI adds new considerations when reviewing the threat model of an application, whether you’re using it to improve the customer experience, for operational efficiency, to generate more tailored or specific results, or for …

Network perimeter security protections for generative AI

Generative AI–based applications have grown in popularity in the last couple of years. Applications built with large language models (LLMs) have the potential to increase the value companies bring to their customers. In this blog post, we dive deep into network perimeter protection for generative AI applications. We …
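
As one hedged example of a perimeter control of the kind the post covers, the sketch below attaches a restrictive policy to a VPC interface endpoint for the Bedrock runtime so that only a named role can call InvokeModel on a single model through it. The account ID, role name, VPC, subnet, and security group IDs are placeholders.

```python
import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical endpoint policy: only one role may invoke one model through
# this interface endpoint; all other actions and principals are implicitly denied.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/rag-app-role"},
        "Action": ["bedrock:InvokeModel"],
        "Resource": [
            "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
        ],
    }],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],       # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder security group
    PrivateDnsEnabled=True,
    PolicyDocument=json.dumps(endpoint_policy),
)
```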

Hardening the RAG chatbot architecture powered by Amazon Bedrock: Blueprint for secure design and anti-pattern mitigation

This blog post demonstrates how to use Amazon Bedrock with a detailed security plan to deploy a safe and responsible chatbot application. In this post, we identify common security risks and anti-patterns that can arise when exposing a large language model (LLM) in an application. Amazon Bedrock is built …
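
To make one such anti-pattern mitigation concrete, the sketch below shows a deliberately simplistic pre-filter that rejects obvious prompt-injection phrases before user input reaches the model. It is one illustrative layer of defense under assumed deny-list patterns, not a complete control and not the post's own design.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection attempts.
# A pattern filter like this is easy to bypass and is only one layer of defense.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def handle_message(user_input: str) -> str:
    if looks_like_injection(user_input):
        # Refuse early instead of forwarding the input to the LLM.
        return "Sorry, I can't help with that request."
    return call_llm(user_input)  # hypothetical downstream call to the model

def call_llm(prompt: str) -> str:
    return f"(model response to: {prompt})"  # stub for illustration only
```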

Context window overflow: Breaking the barrier

Have you ever pondered the intricate workings of generative artificial intelligence (AI) models, especially how they process and generate responses? At the heart of this fascinating process lies the context window, a critical element determining the amount of information an AI model can handle at a given time. But …
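
A rough way to see why the context window matters is to budget tokens before building a prompt. The sketch below uses a crude characters-per-token heuristic (an assumption; real tokenizers are model-specific) and an assumed window size to trim retrieved context so the combined input does not overflow.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate; actual tokenization varies by model."""
    return max(1, int(len(text) / chars_per_token))

def fit_context(question: str, chunks: list[str], max_input_tokens: int = 8000,
                reserved_for_answer: int = 1000) -> list[str]:
    """Keep adding retrieved chunks until the prompt would exceed the assumed
    context window, leaving room for the model's answer."""
    budget = max_input_tokens - reserved_for_answer - estimate_tokens(question)
    kept, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > budget:
            break  # remaining chunks are dropped rather than overflowing the window
        kept.append(chunk)
        used += cost
    return kept
```

The `break` is where overflow-related surprises tend to hide: content is silently dropped, and the model answers as if it never existed.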

Securing generative AI: Data, compliance, and privacy considerations

Generative artificial intelligence (AI) has captured the imagination of organizations and individuals around the world, and many have already adopted it to help improve workforce productivity, transform customer experiences, and more. When you use a generative AI-based service, you should understand how the information that you enter into the …

Securing generative AI: Applying relevant security controls

This is part 3 of a series of posts on securing generative AI. We recommend starting with the overview post Securing generative AI: An introduction to the Generative AI Security Scoping Matrix, which introduces the scoping matrix detailed in this post. This post discusses the considerations when implementing security …

Generate AI-powered insights for Amazon Security Lake using Amazon SageMaker Studio and Amazon Bedrock

In part 1, we discussed how to use Amazon SageMaker Studio to analyze time-series data in Amazon Security Lake to identify critical areas and prioritize efforts to help improve your security posture. Security Lake provides additional visibility into your environment by consolidating and normalizing security data from both AWS …
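
A hedged sketch of that workflow from a notebook cell: run an Athena query against a Security Lake table and hand the rows to a Bedrock model for a plain-language summary. The database name, table name, S3 results location, and model ID are placeholders rather than values from the post.

```python
import json
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder Security Lake database/table and query results bucket.
query = athena.start_query_execution(
    QueryString="SELECT * FROM amazon_security_lake_table_us_east_1_sh_findings_1_0 LIMIT 50",
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
query_id = query["QueryExecutionId"]

# Poll until the query finishes (simplified; production code should handle failures).
while athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
    time.sleep(2)

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

# Ask a Bedrock model to summarize the findings in plain language.
prompt = "Summarize the notable security findings in these rows:\n" + json.dumps(rows)[:8000]
response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```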

Securing generative AI: An introduction to the Generative AI Security Scoping Matrix

Generative artificial intelligence (generative AI) has captured the imagination of organizations and is transforming the customer experience for businesses of every size across the globe. This leap in AI capability, fueled by multi-billion-parameter large language models (LLMs) and transformer neural networks, has opened the door to new productivity improvements …