
Latest Videos

What is the Transactional Outbox Pattern? | Designing Event-Driven Microservices

The transactional outbox pattern leverages database transactions to update a microservice's state and an outbox table atomically. Events written to the outbox are then sent to an external messaging platform such as Apache Kafka. This technique overcomes the dual-write problem, which occurs when you have to write data to two separate systems, such as a database and Apache Kafka. Because the state change and the outbox entry are committed in a single database transaction, the writes to the two tables either succeed or fail together. From there, a separate process can consume the outbox and update the external system as required.
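
A minimal sketch of the write side, assuming a relational database accessed over JDBC and hypothetical orders and outbox tables: the state change and the outbox row are committed together, and a separate relay process (not shown) later publishes the outbox rows to Kafka.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.UUID;

public class TransactionalOutboxSketch {

    // Writes the order and its event in ONE database transaction,
    // so both rows are persisted together or not at all.
    // The "orders" and "outbox" tables and their columns are hypothetical.
    public static void placeOrder(Connection conn, String orderId, String payload) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement insertOrder = conn.prepareStatement(
                 "INSERT INTO orders (id, payload) VALUES (?, ?)");
             PreparedStatement insertOutbox = conn.prepareStatement(
                 "INSERT INTO outbox (id, aggregate_id, event_type, payload) VALUES (?, ?, ?, ?)")) {

            insertOrder.setString(1, orderId);
            insertOrder.setString(2, payload);
            insertOrder.executeUpdate();

            insertOutbox.setString(1, UUID.randomUUID().toString());
            insertOutbox.setString(2, orderId);
            insertOutbox.setString(3, "OrderPlaced");
            insertOutbox.setString(4, payload);
            insertOutbox.executeUpdate();

            conn.commit();   // atomic: state change and outbox row land together
        } catch (Exception e) {
            conn.rollback(); // neither row is persisted on failure
            throw e;
        }
    }

    // A separate relay process (not shown) polls the outbox table, publishes
    // each row to Kafka, and marks or deletes the row once delivery succeeds.
}
```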

What is the Dual Write Problem? | Designing Event-Driven Microservices

The dual-write problem occurs when you try to write to two separate systems and need the writes to be atomic. If one write fails and the other succeeds, you can end up with an inconsistent state. This is an easy trap to fall into, and it can be difficult to avoid. We'll look at what causes the dual-write problem and examine both valid and invalid solutions to it.
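
A sketch of the trap itself, assuming a hypothetical order service that writes to a relational database and then produces to Kafka, with nothing tying the two writes together:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DualWriteAntiPattern {

    // ANTI-PATTERN: two independent writes with no shared transaction.
    // Table and topic names are hypothetical.
    public static void placeOrder(Connection db,
                                  KafkaProducer<String, String> producer,
                                  String orderId, String payload) throws Exception {
        // Write 1: the database insert succeeds and is committed...
        try (PreparedStatement stmt = db.prepareStatement(
                "INSERT INTO orders (id, payload) VALUES (?, ?)")) {
            stmt.setString(1, orderId);
            stmt.setString(2, payload);
            stmt.executeUpdate();
        }

        // ...but if the process crashes or the broker is unreachable here,
        // Write 2 never happens and downstream consumers never see the event.
        producer.send(new ProducerRecord<>("orders", orderId, payload));

        // Reversing the order doesn't help: a failed database write would then
        // leave an event in Kafka describing state that was never persisted.
    }
}
```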

How To Build Scalable and Resilient Microservices | Microservices 101

Building scalable and resilient microservices requires an approach that eliminates the need to treat them as special. They should be treated as easily replaceable building blocks. This means eliminating bottlenecks and single points of failure, but it can also mean changing from a pull-based approach to a push-based approach.
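
One way to read the pull-versus-push distinction, sketched here with a hypothetical customer lookup in Java: pulling means calling another service synchronously on every request, while pushing means consuming that service's events into a local view ahead of time, so the request path has no remote dependency. The endpoint, topic, and class names are illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PullVsPushSketch {

    // Pull-based: every request triggers a synchronous call to another service,
    // making that service a runtime dependency and a potential bottleneck.
    static String getCustomerByPull(HttpClient http, String customerId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://customer-service/customers/" + customerId)) // hypothetical endpoint
            .GET()
            .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Push-based: customer data arrives as events and is kept in a local view,
    // so requests are served without calling another service at all.
    private final Map<String, String> localCustomerView = new ConcurrentHashMap<>();

    void consumeCustomerEvents(Properties config) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            consumer.subscribe(List.of("customer-events")); // hypothetical topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    localCustomerView.put(record.key(), record.value());
                }
            }
        }
    }

    String getCustomerByPush(String customerId) {
        return localCustomerView.get(customerId); // no remote call on the request path
    }
}
```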

Point-to-Point vs Publish/Subscribe | Microservices 101

Communication between microservices can be broadly categorized as either point-to-point or publish/subscribe. Point-to-point is often used synchronously, while publish/subscribe tends to be asynchronous. Each of these techniques can have a place in a modern microservices platform, but it is important to understand the role each one plays so that they can be used effectively.
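
A brief sketch of the publish/subscribe side, assuming Apache Kafka and a hypothetical order-events topic: the producer does not know its consumers, and each subscribing service, identified by its own consumer group, receives every event. Point-to-point, in contrast, addresses a single known receiver, often synchronously.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PublishSubscribeSketch {

    // The producer publishes one event to a topic without knowing who,
    // if anyone, will consume it. Topic and key are hypothetical.
    static void publishOrderPlaced(KafkaProducer<String, String> producer) {
        producer.send(new ProducerRecord<>("order-events", "order-42", "OrderPlaced"));
    }

    // Each subscribing service uses its own consumer group, so every group
    // receives its own copy of every event (e.g. "email-service" and
    // "analytics-service" both see the OrderPlaced event above).
    static KafkaConsumer<String, String> subscribeAs(Properties baseConfig, String serviceName) {
        Properties config = new Properties();
        config.putAll(baseConfig);
        config.put(ConsumerConfig.GROUP_ID_CONFIG, serviceName);
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config);
        consumer.subscribe(List.of("order-events"));
        return consumer;
    }

    // A point-to-point interaction, by contrast, targets exactly one known
    // receiver, commonly as a synchronous HTTP request to a specific service.
}
```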

Commands, Queries, and Events | Microservices 101

Messages between microservices come in three different flavors: Commands, Queries, and Events. Commands change the state, queries request the state, and events share the state. When designing microservice communication patterns, it is important to understand the role of these messages. Following a clear set of rules can help ensure consistency and clarity within your system of microservices.
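
As an illustration, a hypothetical order service might model the three flavors as distinct message types. A sketch in Java, with made-up names and fields:

```java
// Hypothetical messages for an order service, one per flavor.
public sealed interface OrderMessage {}

// Command: asks the order service to change state ("do this").
record PlaceOrder(String orderId, String productId, int quantity) implements OrderMessage {}

// Query: asks for the current state without changing it ("tell me this").
record GetOrderStatus(String orderId) implements OrderMessage {}

// Event: announces that state has already changed ("this happened"),
// for any interested service to react to.
record OrderPlaced(String orderId, String productId, int quantity, long occurredAtEpochMs) implements OrderMessage {}
```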

What is Apache Flink?

Learn the basics of Apache Flink® and how to get started with simple, serverless Flink! Flink is a powerful, battle-hardened stream processor that has rapidly grown in popularity, becoming the de facto standard for stream processing and a top-five Apache project. Kai Waehner, Field CTO at Confluent, explains how Flink fits into your data streaming architecture, why stream processing is needed for real-time data, and how Flink’s underlying architecture provides a number of advantages.
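
For a feel of what a Flink job looks like, here is a minimal sketch using the open-source Java DataStream API. It is a hypothetical, self-contained job whose source is a hardcoded list rather than a real stream such as a Kafka topic.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkHelloStream {
    public static void main(String[] args) throws Exception {
        // The execution environment is the entry point for a Flink job:
        // locally it runs a mini cluster, in production it submits to a cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A bounded example source; real jobs typically read from an
        // unbounded source such as a Kafka topic.
        DataStream<String> pageViews = env.fromElements("home", "pricing", "checkout");

        // Each record is transformed as it flows through the pipeline,
        // rather than being collected and processed later in a batch.
        pageViews.map(page -> "visited: " + page)
                 .print();

        env.execute("hello-stream"); // hypothetical job name
    }
}
```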

What is Confluent?

Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations.