
Implement RAG while meeting data residency requirements using AWS hybrid and edge services


In this post, we show how to extend Amazon Bedrock Agents to hybrid and edge services such as AWS Outposts and AWS Local Zones to build distributed Retrieval Augmented Generation (RAG) applications with on-premises data for improved model outcomes.


Transcribe, translate, and summarize live streams in your browser with AWS AI and generative AI services

AWS Machine Learning Blog

From gaming and entertainment to education and corporate events, live streams have become a powerful medium for real-time engagement and content consumption. With real-time translation into multiple languages, viewers from around the world can engage with live content as if it were delivered in their first language.


Techniques and approaches for monitoring large language models on AWS

AWS Machine Learning Blog

Large language models (LLMs) have revolutionized the field of natural language processing (NLP), improving tasks such as language translation, text summarization, and sentiment analysis. In the monitoring solution, a file saved to Amazon S3 emits an event that triggers an AWS Lambda function, which invokes the monitoring modules.
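The S3-to-Lambda flow described above can be sketched as a minimal handler. This is an illustrative sketch, not the post's actual code: the monitoring modules are not named in the excerpt, so dispatch is represented by a placeholder result; only the standard S3 event notification shape is assumed.

```python
def extract_s3_objects(event):
    """Pull (bucket, key) pairs from a standard S3 event notification."""
    objects = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            objects.append((bucket, key))
    return objects

def lambda_handler(event, context):
    # Placeholder for invoking the monitoring modules (names are not
    # given in the post, so we only record what would be dispatched).
    results = []
    for bucket, key in extract_s3_objects(event):
        results.append({"bucket": bucket, "key": key, "status": "dispatched"})
    return {"processed": len(results), "results": results}
```

Wiring this up in practice means adding an S3 event notification (or EventBridge rule) on the bucket that targets the Lambda function.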


Build an Amazon Bedrock based digital lending solution on AWS


This post demonstrates how you can gain a competitive advantage by automating a complex business process with Amazon Bedrock Agents. The loan handler AWS Lambda function uses the information in the KYC documents to check the credit score and internal risk score. The events are logged using Amazon CloudWatch.
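The loan handler's scoring step might look like the following sketch. The thresholds and decision labels here are invented for illustration; the post does not publish the solution's actual decision rules.

```python
def assess_loan(credit_score: int, internal_risk_score: int) -> str:
    """Hypothetical decision rule combining a credit score with an
    internal risk score (lower risk score = safer). Thresholds are
    illustrative only, not taken from the post."""
    if credit_score >= 700 and internal_risk_score <= 3:
        return "approve"
    if credit_score >= 600:
        return "manual-review"
    return "decline"
```

In the architecture described, a function like this would run inside the loan handler Lambda, with each decision logged to Amazon CloudWatch for auditability.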


Large language model inference over confidential data using AWS Nitro Enclaves

AWS Machine Learning Blog

In this post, we discuss how Leidos worked with AWS to develop an approach to privacy-preserving large language model (LLM) inference using AWS Nitro Enclaves. LLMs are designed to understand and generate human-like language, and are used in many industries, including government, healthcare, finance, and intellectual property.


Automate derivative confirms processing using AWS AI services for the capital markets industry

AWS Machine Learning Blog

In this post, we show how you can automate and intelligently process derivative confirms at scale using AWS AI services. We built the solution on event-driven principles, as depicted in the following diagram: an event notification on S3 object upload completion places a message in an Amazon SQS queue.
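When S3 notifications are delivered through SQS, each SQS record's body is a JSON-encoded S3 event notification. A minimal consumer that recovers the uploaded object keys could look like this sketch (only the standard notification format is assumed; the post's actual processing code is not shown in the excerpt):

```python
import json

def object_keys_from_sqs(sqs_event):
    """Extract uploaded S3 object keys from notifications delivered
    via an SQS queue. Each SQS record body is a JSON string holding
    a standard S3 event notification with its own 'Records' list."""
    keys = []
    for record in sqs_event.get("Records", []):
        body = json.loads(record["body"])
        for s3_record in body.get("Records", []):
            keys.append(s3_record["s3"]["object"]["key"])
    return keys
```

A downstream worker (for example, another Lambda function subscribed to the queue) would then fetch each object and run the document-processing AI services against it.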


Amazon Q Business simplifies integration of enterprise knowledge bases at scale


Enhancing AWS Support Engineering efficiency: the AWS Support Engineering team faced the daunting task of manually sifting through numerous tools, internal sources, and AWS public documentation to find solutions for customer inquiries. The post then introduces the solution deployment using three AWS CloudFormation templates.
