
Transcribe, translate, and summarize live streams in your browser with AWS AI and generative AI services

AWS Machine Learning Blog

From gaming and entertainment to education and corporate events, live streams have become a powerful medium for real-time engagement and content consumption. By offering real-time translations into multiple languages, viewers from around the world can engage with live content as if it were delivered in their first language.
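The multi-language flow described above can be sketched as a small fan-out helper around the Amazon Translate API. This is a minimal illustration, not code from the post: the function name, the language list, and the use of automatic source-language detection are assumptions; in practice the client would be created with `boto3.client("translate")`.

```python
def fan_out_translations(translate_client, segment, target_langs):
    """Translate one live-caption segment into each target language."""
    results = {}
    for lang in target_langs:
        resp = translate_client.translate_text(
            Text=segment,
            SourceLanguageCode="auto",   # let the service detect the source
            TargetLanguageCode=lang,
        )
        results[lang] = resp["TranslatedText"]
    return results
```

For a live stream, this helper would be called once per transcribed caption segment as it arrives, so each viewer sees the segment in their chosen language with minimal delay.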


Revolutionizing knowledge management: VW’s AI prototype journey with AWS

AWS Machine Learning Blog

The integrated approach and ease of use of Amazon Bedrock in deploying large language models (LLMs), along with built-in features that facilitate seamless integration with other AWS services like Amazon Kendra, made it the preferred choice. By using Claude 3’s vision capabilities, we could upload image-rich PDF documents.



Automate derivative confirms processing using AWS AI services for the capital markets industry

AWS Machine Learning Blog

In this post, we show how you can automate and intelligently process derivative confirms at scale using AWS AI services. We built the solution using the event-driven principles as depicted in the following diagram. An event notification on S3 object upload completion places a message in an SQS queue.
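The event-driven step described above (S3 upload completion placing a message in an SQS queue) delivers the standard S3 event-notification JSON in the SQS message body. As a minimal sketch, a downstream worker might parse it like this; the function name and return shape are illustrative, not taken from the post:

```python
import json

def parse_s3_upload_message(sqs_body):
    """Extract (bucket, key) pairs from an S3 event delivered via SQS."""
    event = json.loads(sqs_body)
    uploads = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploads.append((s3["bucket"]["name"], s3["object"]["key"]))
    return uploads
```

A worker polling the queue would pass each received message body through this parser, then hand the bucket/key pairs to the confirms-processing pipeline.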


Techniques and approaches for monitoring large language models on AWS

AWS Machine Learning Blog

Large Language Models (LLMs) have revolutionized the field of natural language processing (NLP), improving tasks such as language translation, text summarization, and sentiment analysis. The file saved on Amazon S3 creates an event that triggers a Lambda function. The function invokes the modules.
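The trigger chain above (file saved to Amazon S3, event fires, Lambda function invokes the monitoring modules) can be sketched as a handler skeleton. This is an assumption-laden illustration: the handler name follows the Lambda convention, but the dispatch step stands in for the post's actual modules.

```python
def lambda_handler(event, context=None):
    """Invoked by the S3 event when a new file is saved to the bucket."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # In the post, the monitoring modules would be invoked here
        # with the location of the newly saved file.
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```

The handler receives the S3 event payload directly (no SQS hop), so it only needs to walk `event["Records"]` and route each object to the modules.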


Large language model inference over confidential data using AWS Nitro Enclaves

AWS Machine Learning Blog

In this post, we discuss how Leidos worked with AWS to develop an approach to privacy-preserving large language model (LLM) inference using AWS Nitro Enclaves. LLMs are designed to understand and generate human-like language, and are used in many industries, including government, healthcare, financial, and intellectual property.


Your guide to generative AI and ML at AWS re:Invent 2023

AWS Machine Learning Blog

Yes, the AWS re:Invent season is upon us and as always, the place to be is Las Vegas! And although generative AI has appeared in previous events, this year we’re taking it to the next level.


Brilliant words, brilliant writing: Using AWS AI chips to quickly deploy Meta Llama 3-powered applications

AWS Machine Learning Blog

However, customers who want to deploy LLMs in their own self-managed workflows for greater control and flexibility of underlying resources can use these LLMs optimized on top of AWS Inferentia2-powered Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instances. The same process can also be followed for the Mistral-7B-instruct-v0.3 model.
