Retrieval Augmented Generation (RAG) with Data Streaming
How do you prevent hallucinations from large language models (LLMs) in GenAI applications?
LLMs need real-time, contextualized, and trustworthy data to generate reliable outputs. Kai Waehner, Global Field CTO at Confluent, explains how RAG and a data streaming platform built on Apache Kafka and Apache Flink make that possible.
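For a rough idea of the pattern discussed here, below is a minimal Python sketch (not Confluent's reference implementation) of a RAG step driven by a Kafka topic: a question event is consumed, enriched with retrieved context, and published as a grounded prompt. Topic names, the broker address, and the embed()/vector_search() helpers are assumptions standing in for your embedding model and vector database.

```python
# Sketch of RAG over a data stream. Assumptions: topic names, local broker,
# and hypothetical embed() / vector_search() helpers (swap in your own).
import json
from confluent_kafka import Consumer, Producer

def embed(text: str) -> list[float]:
    """Hypothetical embedding call -- replace with your embedding model/API."""
    raise NotImplementedError

def vector_search(query_vector: list[float], top_k: int = 3) -> list[str]:
    """Hypothetical vector-store lookup -- replace with your vector DB client."""
    raise NotImplementedError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumption: local cluster
    "group.id": "rag-demo",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["user-questions"])       # assumption: topic of incoming prompts

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        question = json.loads(msg.value())["question"]

        # Retrieval step: ground the prompt in fresh, domain-specific context
        context_docs = vector_search(embed(question), top_k=3)
        prompt = (
            "Answer using only the context below.\n\n"
            "Context:\n" + "\n---\n".join(context_docs) +
            f"\n\nQuestion: {question}"
        )

        # Hand the grounded prompt to the LLM of your choice downstream
        producer.produce("grounded-prompts", json.dumps({"prompt": prompt}).encode())
        producer.flush()
finally:
    consumer.close()
```

In production, the retrieval and enrichment step shown in the loop is typically expressed as a Flink job rather than a hand-rolled consumer, which is the architecture the video walks through.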
RESOURCES
► Webinar: Building RAG-enabled GenAI applications: https://www.confluent.io/resources/online-talk/retrieval-augmented-generation-RAG-Generative-AI/
► Get started free on Confluent Cloud: https://www.confluent.io/get-started/
CHAPTERS
0:00 - What is RAG?
2:19 - Why Apache Kafka and Flink
3:40 - RAG with a Data Streaming Platform
8:54 - Use Cases
10:34 - Summary
ABOUT CONFLUENT
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent's cloud-native offering is the foundational platform for data in motion, designed to be the intelligent connective tissue that enables real-time data from multiple sources to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations. To learn more, please visit www.confluent.io.
#GenAI #LLM #RAG #confluent #apachekafka #kafka #apacheflink #flink #cloud