
Data Streaming

How to Evolve your Microservice Schemas | Designing Event-Driven Microservices

Schema evolution is the act of modifying the structure of the data in our application without impacting clients. This can be a challenging problem. However, it gets easier if we start with a flexible data format and take steps to avoid unnecessary data coupling. When we find ourselves having to make breaking changes, we can always fall back to creating new versions of our APIs and events to accommodate those changes.
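As a concrete illustration (not from the course itself), here is a minimal tolerant-reader sketch in Java using Jackson; the OrderPlaced type and its fields are hypothetical. A field added later gets a default so older payloads still parse, and unknown fields are ignored so this reader tolerates payloads from newer writers:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical event payload; field names are illustrative only.
@JsonIgnoreProperties(ignoreUnknown = true) // tolerate fields added by newer writers
public class OrderPlaced {
    public String orderId;
    public long amountCents;
    // Added in a later version: optional, with a default so older payloads still deserialize.
    public String currency = "USD";

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // An old payload (no "currency") still parses; the default fills the gap.
        OrderPlaced fromV1 = mapper.readValue(
            "{\"orderId\":\"o-1\",\"amountCents\":4200}", OrderPlaced.class);
        // A newer payload with an extra "note" field is not rejected by this class.
        OrderPlaced fromV3 = mapper.readValue(
            "{\"orderId\":\"o-2\",\"amountCents\":100,\"currency\":\"EUR\",\"note\":\"gift\"}",
            OrderPlaced.class);
        System.out.println(fromV1.currency + " / " + fromV3.currency);
    }
}
```

The same idea carries over to schema registries and formats like Avro or Protobuf: additions are optional with defaults, and removals only target fields readers no longer require.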

What is a Kafka Consumer and How does it work?

Now that your data is inside your Kafka cluster, how do you get it out? In this video, Dan Weston covers the basics of Kafka Consumers: what consumers are, how they get your data flowing, and best practices for configuring consumers in a real-time data streaming system. You will also learn about offsets, consumer groups, and partition assignment.
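For orientation, a minimal consumer loop using the standard Apache Kafka Java client might look like the following; the broker address, group id, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BasicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Consumers sharing a group.id split the topic's partitions among themselves.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Start from the earliest offset when the group has no committed position yet.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic")); // partition assignment is handled by the group
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // record progress so a restart resumes at the right offset
            }
        }
    }
}
```

Committing after processing (rather than relying on auto-commit) is one common way to get at-least-once delivery; the right choice depends on your processing guarantees.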

What is the Listen to Yourself Pattern? | Designing Event-Driven Microservices

The Listen to Yourself pattern is implemented by having a microservice emit an event to a platform such as Apache Kafka, and then consuming its own events to perform internal updates. It can be used as a solution to the dual-write problem since it separates Kafka and database writes into different processes. However, it also provides added benefits because it allows microservices to respond quickly to requests by deferring processing to a later time.
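A rough sketch of the pattern in Java, assuming a hypothetical account service and topic: the request handler only produces an event and returns, while a background consumer in the same service applies the update to the database, so each process writes to exactly one system:

```java
import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ListenToYourselfSketch {
    static final String TOPIC = "account-events"; // hypothetical topic name

    // Request path: write only to Kafka, then respond immediately.
    static void handleDeposit(KafkaProducer<String, String> producer, String accountId, long cents) {
        producer.send(new ProducerRecord<>(TOPIC, accountId, "DEPOSIT:" + cents));
        // No database write here -- that is what avoids the dual write.
    }

    // Background path: the same service consumes its own events and updates its state.
    static void applyEvents(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of(TOPIC));
        while (true) {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                updateDatabase(record.key(), record.value()); // single write target in this process
            }
        }
    }

    static void updateDatabase(String accountId, String event) {
        // Placeholder for the real persistence call.
        System.out.println("applying " + event + " to account " + accountId);
    }
}
```

Note the trade-off implied above: reads issued immediately after the request may not yet see the update, since the internal state change happens asynchronously.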

Apache Kafka 3.7: Official Docker Image and Improved Client Monitoring

Apache Kafka® 3.7 is here! On behalf of the Kafka community, Danica Fine highlights key release updates, with KIPs from Kafka Core, Kafka Streams, and Kafka Connect. Many more KIPs are part of this release; see the blog post for more details.

What is the Event Sourcing Pattern? | Designing Event-Driven Microservices

Event Sourcing is a pattern of storing an object's state as a series of events. Each time the object is updated, a new event is written to an append-only log. When the object is loaded from the database, the events are replayed in order, reapplying the necessary changes. The benefit of this approach is that it stores a full history of the object, which can be valuable for debugging, auditing, building new models, and a variety of other situations. It is also a technique that can be used to solve the dual-write problem when working with event-driven architectures.
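A minimal in-memory sketch in Java (the account and event types are illustrative, not from the video): every update is appended to a log and then applied, and loading replays the log to rebuild current state:

```java
import java.util.ArrayList;
import java.util.List;

public class EventSourcedAccount {
    // Hypothetical event type: positive deltas for deposits, negative for withdrawals.
    record AmountChanged(long deltaCents) {}

    private final List<AmountChanged> log = new ArrayList<>(); // append-only history
    private long balanceCents = 0;

    // Every update is recorded as an event, then applied to in-memory state.
    public void change(long deltaCents) {
        AmountChanged event = new AmountChanged(deltaCents);
        log.add(event); // in production this append would go to a durable store
        apply(event);
    }

    private void apply(AmountChanged event) {
        balanceCents += event.deltaCents();
    }

    // Loading replays the full history in order to rebuild the current state.
    public static EventSourcedAccount replay(List<AmountChanged> history) {
        EventSourcedAccount account = new EventSourcedAccount();
        for (AmountChanged event : history) {
            account.log.add(event);
            account.apply(event);
        }
        return account;
    }

    public long balanceCents() { return balanceCents; }
}
```

Because state is derived entirely from the log, the same history can later be replayed into new read models without touching the original data.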

What is the Transactional Outbox Pattern? | Designing Event-Driven Microservices

The transactional outbox pattern addresses the dual-write problem, which occurs when you have to write data to two separate systems, such as a database and Apache Kafka. It uses a database transaction to update the microservice's state and an outbox table, ensuring the two writes are atomic. From there, a separate process consumes the outbox and sends each event to an external messaging platform such as Apache Kafka.
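On the write side, a JDBC sketch might look like this; the connection URL, table names, and columns are assumptions. A separate relay process (not shown) would poll the outbox table and publish each row to Kafka:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OutboxWrite {
    public static void placeOrder(String orderId, long amountCents) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/app")) {
            conn.setAutoCommit(false); // one transaction covers both writes
            try (PreparedStatement order = conn.prepareStatement(
                     "INSERT INTO orders (id, amount_cents) VALUES (?, ?)");
                 PreparedStatement outbox = conn.prepareStatement(
                     "INSERT INTO outbox (aggregate_id, event_type, payload) VALUES (?, ?, ?)")) {
                order.setString(1, orderId);
                order.setLong(2, amountCents);
                order.executeUpdate();

                outbox.setString(1, orderId);
                outbox.setString(2, "OrderPlaced");
                outbox.setString(3,
                    "{\"orderId\":\"" + orderId + "\",\"amountCents\":" + amountCents + "}");
                outbox.executeUpdate();

                conn.commit(); // state change and outbox row become visible atomically
            } catch (Exception e) {
                conn.rollback(); // neither write survives if either fails
                throw e;
            }
        }
    }
}
```

Because the event row commits in the same transaction as the state change, the relay can deliver it to Kafka later without risking a state/event mismatch.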