
May 2024

Defining Asynchronous Microservice APIs for Fraud Detection | Designing Event-Driven Microservices

In this video, Wade explores the process of decomposing a monolith into a series of microservices, showing how Tributary Bank extracts a variety of API methods from its existing monolith. Tributary Bank is starting with its Fraud Detection service. Before the team can begin, however, they first have to untangle the existing code and define a clean API that will let them move the functionality into an asynchronous, event-driven microservice.
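To make the idea concrete, here is a hypothetical sketch (not taken from the video) of what the extracted fraud-check API could look like once it becomes event-driven: instead of invoking a method inside the monolith, callers publish a request event to Kafka. The class and topic names are illustrative assumptions.

```java
// Hypothetical shape of an extracted, asynchronous fraud-check API.
// Topic names ("fraud.check.requested", "fraud.check.completed") and this client
// class are illustrative; kafkaProps is expected to carry bootstrap.servers and
// String serializers.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FraudCheckClient {
    private final KafkaProducer<String, String> producer;

    public FraudCheckClient(Properties kafkaProps) {
        this.producer = new KafkaProducer<>(kafkaProps);
    }

    // Before: the monolith calls a fraud-check method and blocks on the result.
    // After: the caller emits an event and moves on; the fraud detection
    // microservice consumes the request topic and publishes its verdict to a
    // separate "fraud.check.completed" topic whenever it finishes.
    public void requestFraudCheck(String transactionId, String transactionJson) {
        producer.send(new ProducerRecord<>("fraud.check.requested", transactionId, transactionJson));
    }
}
```

The value of a clean API boundary like this is that callers are unaffected when the fraud logic later moves out of the monolith; only the event contract has to stay stable.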

Solving the Dual-Write Problem: Effective Strategies for Atomic Updates Across Systems

The dual-write problem occurs when two external systems must be updated in an atomic fashion. A classic example is updating an application’s database while pushing an event into a messaging system like Apache Kafka. If the database update succeeds but the write to Kafka fails, the system ends up in an inconsistent state. However, the dual-write problem isn’t unique to event-driven systems or Kafka. It occurs in many situations involving different technologies and architectures.
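To see the failure mode concretely, here is a minimal sketch, assuming a JDBC database and the Java Kafka producer; the table, topic, and connection URL are made up for illustration. If the process crashes or Kafka is unreachable between the two writes, the database change survives but the event is never published.

```java
// Minimal sketch of the dual-write problem: two independent systems, no shared
// transaction. The JDBC URL, table, and topic names are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DualWriteExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/bank");
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {

            // Write #1: update the application's database (auto-commit applies the change).
            try (PreparedStatement stmt =
                     db.prepareStatement("UPDATE accounts SET status = ? WHERE id = ?")) {
                stmt.setString(1, "FLAGGED");
                stmt.setString(2, "account-42");
                stmt.executeUpdate();
            }

            // Write #2: publish the corresponding event to Kafka.
            // If this call fails (broker outage, timeout, crash before it runs),
            // the database says FLAGGED but downstream consumers never hear about
            // it, and there is no shared transaction to roll both writes back.
            producer.send(new ProducerRecord<>("account-events", "account-42", "FLAGGED")).get();
        }
    }
}
```

Common remedies include the transactional outbox pattern and change data capture, which route both writes through a single transactional system so the event is derived from the committed database change.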

Retrieval Augmented Generation (RAG) with Data Streaming

How do you prevent hallucinations from large language models (LLMs) in GenAI applications? LLMs need real-time, contextualized, and trustworthy data to generate the most reliable outputs. Kai Waehner, Global Field CTO at Confluent, explains how RAG and a data streaming platform with Apache Kafka and Flink make that possible.
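As a rough illustration (a sketch, not Confluent's implementation), the snippet below reads recent context records from a Kafka topic and prepends them to the user's question before it would be sent to an LLM. The topic name and the idea of polling the topic directly are simplifying assumptions; in a fuller pipeline, Flink would typically enrich the stream and keep a vector store current, but the grounding idea is the same.

```java
// Minimal sketch of the retrieval/augmentation step in a streaming RAG setup.
// Assumptions: a "customer-context" topic holds fresh, trusted context records,
// and consumerProps already contains bootstrap.servers, group.id, and String
// deserializers. Calling the LLM itself is out of scope here.
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class StreamingRagPrompt {
    public static String buildPrompt(String question, Properties consumerProps) {
        List<String> context = new ArrayList<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("customer-context"));
            // Pull whatever context has arrived recently; a production pipeline
            // would query a vector store that the stream keeps up to date.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                context.add(record.value());
            }
        }
        // Ground the model in real-time data rather than its training snapshot.
        return "Context:\n" + String.join("\n", context) + "\n\nQuestion: " + question;
    }
}
```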

Data Streaming Awards 2024: Nominations Are Now Open

The Data Streaming Awards is back for its third year! Designed to bring the data streaming community together, this one-of-a-kind industry award event recognizes organizations that are harnessing the power of this revolutionary technology to drive business and customer experience transformation. If you know a company (even your own team) that is using data streaming technology to transform their business and provide amazing value to their customers and communities, the time is now to submit a nomination.

Best Practices for Confluent Terraform Provider

Managing Confluent Cloud infrastructure efficiently poses challenges due to the complexities involved in deploying and maintaining components like environments, clusters, topics, and authorizations. Without proper tooling and practices, teams struggle with manual configuration errors, a lack of consistency, and potential security risks. The Confluent Terraform Provider addresses these challenges by letting teams manage these resources declaratively, as code.

How to Set Up a Fully Managed Alerting Pipeline Using Confluent Cloud Audit Logs

In large organizations, Confluent Cloud is often accessed simultaneously by many different users and business-critical applications, potentially across different lines of business. With so many individual pieces working together, the risk of an individual outage, error, or incident affecting other services increases. An incident could be anything from a user clicking the wrong button to an application misconfiguration or just a plain old bug.

Serverless Decoded: Reinventing Kafka Scaling with Elastic CKUs

Apache Kafka has become the de facto standard for data streaming, used by organizations everywhere to anchor event-driven architectures and power mission-critical real-time applications. However, this rise has also sparked discussions on improving Kafka operations and cost-efficiency—streaming data is naturally prone to bursts and often unpredictable, resulting in inevitable variations in workloads and demand on your Kafka cluster(s).

Modernize Payments Architecture for ISO 20022 Compliance

The payments industry is evolving rapidly, fueled by technological advancements, changing consumer behaviors, and a growing appetite for real-time transactions. As this transformation unfolds, new standards have been introduced to ensure the payments ecosystem's safety, security, and efficiency.

Event-Driven Microservices in Banking and Fraud Detection | Designing Event-Driven Microservices

How do we know whether Event-Driven Microservices are the right solution? This is the question that Tributary Bank faced when they looked at modernizing their old fraud-detection system. They were faced with many challenges, including scalability, reliability, and security. Some members of their team felt that switching to an event-driven microservice architecture would be the magic bullet that would solve all of their problems. But is there any such thing as a magic bullet? Let's take a look at the types of decisions Tributary Bank had to make as they started down this path.

Introducing Confluent Cloud OpenSearch Sink Connector

Amazon OpenSearch Service is a popular, fully managed analytics engine that makes it easier for customers to do interactive log analytics, real-time application monitoring, and semantic and keyword search. It can also be used as a vector engine that helps organizations build and augment GenAI applications without managing infrastructure (we’ll talk about this in future blogs). Additionally, the service provides reliable, scalable infrastructure designed to handle massive data volumes.

Contributing to Apache Kafka: How to Write a KIP

I’m brand new to writing KIPs (Kafka Improvement Proposals). I’ve written two so far, and my hands sweat every time I hit send on an email with ‘KIP’ in the title. But I’ve also learned a lot from the process: about Apache Kafka internals, the process of writing KIPs, the Kafka community, and the most important motivation for developing software: our end users. What did I actually write? Let’s review KIP-941 and KIP-1020.

Analyzing AWS Audit Logs in Real Time Using Confluent Cloud and Amazon EventBridge

Last year, we introduced the Connect with Confluent partner program, enabling our technology partners to develop native integrations with Confluent Cloud. This gives our customers access to Confluent data streams from within their favorite applications and allows them to extract maximum value from their data.

Everything you've wanted to ask about Event-Driven Architectures | The Duchess & The Doctor Show

For their inaugural episode, Anna McDonald (the Duchess), Matthias J. Sax (the Doctor), and their extinct friend, Phil, wax rhapsodic about all things eventing: you’ll learn why events are a mindset, why the Duchess thinks you’ll find event immutability relaxing, and why your event streams might need some windows. The Duchess & The Doctor Show features a question-driven format that delivers substantial, yet easily comprehensible answers to user-submitted questions on all things events and eventing, including Apache Kafka, its ecosystem, and beyond!

Confluent Unveils New Capabilities to Apache Flink Offering to Simplify AI and Bring Stream Processing to Workloads Everywhere

Confluent's new AI Model Inference seamlessly integrates AI and ML capabilities into data pipelines, while its new Freight clusters offer cost savings for high-throughput use cases with relaxed latency requirements.

Introducing Confluent Cloud Freight Clusters

We’re excited to introduce Freight clusters: a new type of Confluent Cloud cluster designed for high-throughput, relaxed-latency workloads that is up to 90% cheaper than self-managing open source Apache Kafka®. Freight clusters utilize the latest innovations in Kora, Confluent Cloud’s cloud-native engine, to deliver low-cost networking by trading off ultra-low-latency performance.