
A review of purpose-built accelerators for financial services

AWS Machine Learning Blog

Learning means identifying and capturing historical patterns in the data, and inference means mapping a current value onto those historical patterns. The following figure illustrates the idea: a large cluster of GPUs is used for learning, and a smaller number of accelerators is then used for inference.
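As a rough illustration of that split (not taken from the article), the sketch below fits a small PyTorch model on historical data — the learning phase, which is what a GPU cluster would accelerate — and then runs a single forward pass on a new value, the inference phase. The model, data, and sizes are placeholders.

```python
# Minimal sketch: "learning" fits parameters to historical data;
# "inference" applies the fitted model to a new value.
import torch
import torch.nn as nn

# Historical data: inputs and the pattern (targets) to capture.
X_hist = torch.randn(1024, 8)
y_hist = X_hist.sum(dim=1, keepdim=True)  # the "historical pattern"

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Learning: iterate over historical data to capture the pattern.
model.train()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X_hist), y_hist)
    loss.backward()
    optimizer.step()

# Inference: map a current value onto the learned pattern.
model.eval()
with torch.no_grad():
    x_now = torch.randn(1, 8)
    prediction = model(x_now)
print(prediction.item())
```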


Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock

AWS Machine Learning Blog

Since joining SnapLogic in 2010, Greg has helped design and implement several key platform features, including cluster processing, big data processing, cloud architecture, and machine learning. He is currently working on generative AI for data integration.


How SnapLogic built a text-to-pipeline application with Amazon Bedrock to translate business intent into action

Flipboard

Iris was designed to use machine learning (ML) algorithms to predict the next steps in building a data pipeline.
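Iris itself is proprietary, so the following is only a hypothetical sketch of the general idea: learn step-transition frequencies from historical pipelines and suggest the most likely next step. The pipeline names and steps are invented for illustration and are not SnapLogic's implementation.

```python
# Hypothetical next-step suggestion from historical pipelines.
# Idea: count how often one step follows another in past pipelines,
# then suggest the most frequent successor of the current step.
from collections import Counter, defaultdict

historical_pipelines = [
    ["read_csv", "filter_rows", "join_accounts", "write_snowflake"],
    ["read_csv", "filter_rows", "aggregate", "write_s3"],
    ["read_api", "filter_rows", "join_accounts", "write_snowflake"],
]

# Learn: tally transitions between consecutive steps.
transitions = defaultdict(Counter)
for pipeline in historical_pipelines:
    for current_step, next_step in zip(pipeline, pipeline[1:]):
        transitions[current_step][next_step] += 1

def suggest_next(step):
    """Infer: return the most common successor seen after `step`, if any."""
    successors = transitions.get(step)
    return successors.most_common(1)[0][0] if successors else None

print(suggest_next("filter_rows"))  # -> "join_accounts"
```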
