Data Streaming Cheat Sheet and Checklist | Data Streaming Systems

Thank you for watching this course. We have a few additional resources to help you dig deeper and be fully equipped to start your data-in-motion journey: a comprehensive cheat sheet with a checklist of what you need to verify before going to production, and a sneak preview of what we saved for the follow-up course.

Cloud Kafka Resiliency and Fault Tolerance | Data Streaming Systems

Learn how to manage cloud volatility when running applications on Confluent Cloud. Understand how to optimally configure Kafka clients for resilient cloud operations and explore error-handling patterns in Kafka Streams. Leverage concepts such as idempotent producers and consumers and exactly-once processing semantics.
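
For a rough illustration of the client-side configuration this course covers, the sketch below enables producer idempotence and a generous delivery timeout in a plain Java producer, so transient broker unavailability is retried without introducing duplicates. The bootstrap endpoint, topic name, and timeout values are illustrative placeholders rather than settings prescribed by the course, and the SASL/SSL credentials a real Confluent Cloud cluster requires are omitted.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ResilientProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder endpoint; a real Confluent Cloud cluster also needs SASL_SSL settings.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "pkc-example.us-east-1.aws.confluent.cloud:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Idempotent producer: the broker de-duplicates retried sends, so retries
        // caused by transient cloud disruptions do not create duplicate records.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        // Allow the client to keep retrying through short broker unavailability
        // (e.g. rolling upgrades) instead of failing the send immediately.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "120000");
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "500");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "{\"status\":\"created\"}"));
            producer.flush();
        }
    }
}
```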

Current '23 Keynote: Streaming into the Future - The Evolution & Impact of Data Streaming Platforms

Jay Kreps (Confluent Co-Founder and CEO), Shaun Clowes (Confluent CPO), and data streaming leaders from organizations like NASA, Warner Brothers, and Notion explore the past, present, and future of data streaming. They will address two key questions: how can organizations integrate data across their applications to deliver better experiences, and how can they embed data and analytics into every part of the business to drive better decision-making?

Top 6 Reasons to Modernize Legacy Messaging Infrastructure

Traditional messaging middleware such as Message Queues (MQs), Enterprise Service Buses (ESBs), and Extract, Transform, and Load (ETL) tools has been widely used for decades to handle message distribution and inter-service communication across distributed applications. However, these technologies can no longer keep up with the needs of modern applications across hybrid and multi-cloud environments for asynchronicity, heterogeneous datasets, and high-volume throughput.

Practical Data Mesh: Building Decentralized Data Architectures with Event Streams

Why a data mesh? Predicated on delivering data as a first-class product, data mesh focuses on making it easy to publish and access important data across your organization. An event-driven data mesh combines the scale and performance of data in motion with product-focused rigor and self-service capabilities, putting data front and center in both operational and analytical use cases.
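
As a loose sketch of the self-service access an event-driven data mesh enables, the following Java consumer shows a team in one domain subscribing to a topic that another domain publishes as a data product. The topic name, consumer group, and bootstrap address are hypothetical, and a production consumer would poll in a loop and manage offsets deliberately.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class DataProductConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "analytics-team");          // the consuming domain
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "orders.events" stands in for a topic the owning domain publishes as a
            // data product; other teams subscribe to it without bespoke integrations.
            consumer.subscribe(List.of("orders.events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}
```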

Confluent unveils Apache Flink® on Confluent Cloud, making it easier to build real-time applications with stream processing on a unified platform

Confluent launches the industry's only serverless, cloud-native Flink service to simplify building high-quality, reusable data streams. Confluent expands Stream Governance capabilities with Data Portal, so teams can easily find all the real-time data streams in an organization. The new Confluent Cloud Enterprise offering lowers the cost of private networking and storage for Apache Kafka.

Introducing Confluent Cloud for Apache Flink

In the first three parts of our Inside Flink blog series, we discussed the benefits of stream processing, explored why developers are choosing Apache Flink® for a variety of stream processing use cases, and took a deep dive into Flink's SQL API. In this post, we'll focus on how we’ve re-architected Flink as a cloud-native service on Confluent Cloud. However, before we get into the specifics, there is exciting news to share.

Deliver Intelligent, Secure, and Cost-Effective Data Pipelines

The Q3 Confluent Cloud Launch comes to you from Current 2023, where data streaming industry experts have come together to share insights into the future of data streaming and new areas of innovation. This year, we’re introducing Confluent Cloud’s fully managed service for Apache Flink®, improvements to the Kora Engine, a look at how AI and streaming work together, and much more.