Confluent

  |  By Olivia Greene
The Q2 2024 Confluent Cloud launch introduces a suite of enhancements across the four key pillars of a Data Streaming Platform (Stream, Connect, Process, and Govern), alongside some significant work we have been doing with our partner ecosystem to help customers unlock new possibilities. Confluent has helped more than 4,900 global enterprises start their data streaming journey and was recently named a Leader by Forrester Research in The Forrester Wave: Streaming Data Platforms, Q4 2023.
A well-known debate: tabs or spaces? Sure, we could set up a Google Form to collect this data, but where’s the fun in that? Let’s settle the debate, Kafka-style. We’ll use the new confluent-kafka-javascript client (not in general availability yet) to build an app that produces the current state of the vote counts to a Kafka topic and consumes from that same topic to surface them to a JavaScript frontend.
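Here's the produce/consume shape of that app in miniature. The post itself uses confluent-kafka-javascript; this sketch uses the Python confluent-kafka client purely for illustration, and the topic name and vote payload are assumptions.

```python
# Minimal sketch of the produce/consume shape described above, using the
# Python confluent-kafka client for illustration (the post's app uses
# confluent-kafka-javascript). Topic name and payload are assumptions.
import json
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "localhost:9092"}

# Produce the current state of the vote counts to a topic.
producer = Producer(conf)
producer.produce("vote-counts", value=json.dumps({"tabs": 42, "spaces": 58}))
producer.flush()

# Consume the same topic to surface the tallies to a frontend.
consumer = Consumer({**conf, "group.id": "vote-ui",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["vote-counts"])
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print(json.loads(msg.value()))
consumer.close()
```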
The Markets in Financial Instruments Directive II (MiFID II) came into effect in January 2018, aiming to improve the competitiveness and transparency of European financial markets. As part of this, financial institutions are obligated to report details of trades and transactions (both equity and non-equity) to regulators within certain time limits.
  |  By Andrew Sellers
This article was originally published on The New Stack on June 10, 2024. Retrieval augmented generation (RAG) has emerged as a leading pattern to combat hallucinations and other inaccuracies that affect large language model content generation. However, RAG needs the right data architecture around it to scale effectively and efficiently.
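The core of the pattern fits in a few lines. Below is a minimal, self-contained sketch of RAG's retrieve-then-augment step; the toy document list and similarity scoring are stand-ins for the vector store and embedding model a real deployment would use.

```python
# Toy sketch of retrieval augmented generation (RAG): retrieve context
# relevant to the question, then augment the prompt before generation.
# The document store and scoring below are stand-ins, not any product's API.
from difflib import SequenceMatcher

DOCS = [
    "Kora is the cloud-native engine behind Confluent Cloud.",
    "RAG grounds LLM answers in retrieved documents.",
]

def retrieve(question: str, top_k: int = 1) -> list[str]:
    # Toy relevance score; a real system would use vector similarity search.
    scored = sorted(DOCS,
                    key=lambda d: SequenceMatcher(None, question, d).ratio(),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str) -> str:
    # Grounding the model in retrieved context is what combats hallucination.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is Kora?"))
```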
While cloud computing adoption continues to accelerate due to its tremendous value, it has also become clear that edge computing is better suited for a variety of use cases. Organizations are realizing the benefits of processing data closer to its source, leading to reduced latency, security and compliance benefits, and more efficient bandwidth utilization, as well as supporting scenarios with challenging network constraints.
  |  By Greg Murphy
Modern edge computing is transforming industries including manufacturing, healthcare, transportation, defense, retail, energy, and much more—pushing data management to far-reaching data sources to enable connected, low latency operations and enhanced decision making. These new use cases shift workloads to the left—requiring real-time data streaming and processing at the edge, right where the data is generated.
Until recently, customers didn't have an easy way to send data from Confluent’s data streaming platform to Amazon OpenSearch. They had to either write custom code using AWS Lambda as an intermediary, refactor the HTTP Sink connector, or self-manage an old Elasticsearch connector version. Earlier this year, we announced the fully managed OpenSearch Sink connector, providing a seamless way to sink data from Confluent to Amazon OpenSearch.
Confluent is thrilled to be named Microsoft’s 2024 OSS on Azure Global Partner of the Year. As a three-time Partner of the Year award winner, this recognition reflects our commitment to delivering outstanding open source-based applications and infrastructure solutions on Microsoft Azure.
In today’s hyper-connected world, systems are more intertwined and complex than ever. Myriad data sources, including applications, databases, and network and IoT devices, continuously generate vast amounts of data, capturing every event and interaction. Imagine harnessing this data (login logs, firewall logs, IPS logs, web logins), aggregating it, and analyzing it to create a holistic view of your entire infrastructure.
  |  By Pascal Vantrepote
In this post, we demonstrate how a robust and scalable generative artificial intelligence (GenAI) chatbot is built using Amazon Bedrock and Confluent Cloud. We walk through the architecture and implementation of this generative AI chatbot, and see how it uses Confluent's real-time event streaming capabilities along with Amazon's infrastructure to continually stay up to date with the latest advances from the AI landscape.
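As a rough illustration of that loop, the sketch below consumes a context event from Kafka and passes it to Amazon Bedrock via boto3. The topic name, model ID, and request body shape are assumptions (the Claude-style payload shown is model-specific); this is not the post's actual implementation.

```python
# Rough sketch of the pattern described above: consume fresh context from a
# Kafka topic, then call Amazon Bedrock to generate a grounded reply.
# Topic name, model ID, and payload shape are illustrative assumptions.
import json
import boto3
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "chatbot",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["chat-context"])  # topic carrying real-time context events

# Region and credentials come from your standard AWS configuration.
bedrock = boto3.client("bedrock-runtime")

msg = consumer.poll(timeout=5.0)
context = msg.value().decode() if msg and not msg.error() else ""

# Claude-on-Bedrock request body; other models use different schemas.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user",
                  "content": f"Context: {context}\n\nQuestion: what changed?"}],
})
resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0", body=body)
print(json.loads(resp["body"].read()))
consumer.close()
```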
  |  By Confluent
In this video, we'll look at some techniques for evolving events by analyzing a specific use case in a banking fraud detection system.
  |  By Confluent
When developing or debugging a stream processing pipeline with Flink SQL, it’s common to inspect each processing step's output to ensure data is being transformed properly. However, comprehending the resulting data stream's structure, distribution, and characteristics entails executing multiple ad-hoc SQL queries, which can be time-consuming and tedious. Additionally, isolating specific subsets of the stream for analysis or debugging often involves even more queries, adding to the complexity and time required.
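For a flavor of the kind of ad-hoc query involved, here is a minimal PyFlink sketch that checks a stream's per-key distribution; the datagen source is a stand-in for whatever stream you are actually debugging.

```python
# One flavor of ad-hoc inspection query, run locally via PyFlink's SQL
# interface for illustration. The datagen source is a self-contained
# stand-in for the stream under debugging.
from pyflink.table import EnvironmentSettings, TableEnvironment

env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Synthetic source so the example runs on its own.
env.execute_sql("""
    CREATE TABLE clicks (
        user_id INT,
        url STRING
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5'
    )
""")

# Typical "is the data shaped how I think?" check: distribution by key.
env.execute_sql("""
    SELECT user_id, COUNT(*) AS events
    FROM clicks
    GROUP BY user_id
""").print()
```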
  |  By Confluent
Learn how to leverage the native monitoring capabilities of the Python Kafka producer along with Confluent Cloud’s Metrics API while exploring how linger.ms affects latency and batch sizes.
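A minimal sketch of that setup with the confluent-kafka Python client: librdkafka's statistics callback reports average batch sizes while linger.ms controls how long the producer waits to fill a batch. Broker address and topic are placeholders.

```python
# Sketch of the setup described above: a confluent-kafka Python producer that
# reports librdkafka statistics so you can watch how linger.ms affects batch
# sizes. Broker address and topic name are placeholders.
import json
from confluent_kafka import Producer

def on_stats(stats_json: str) -> None:
    stats = json.loads(stats_json)
    # "batchsize" is a rolling window of produce batch sizes, in bytes.
    for name, topic in stats.get("topics", {}).items():
        print(name, "avg batch bytes:", topic["batchsize"]["avg"])

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "linger.ms": 100,                  # wait up to 100 ms to fill batches
    "statistics.interval.ms": 5000,    # emit stats every 5 seconds
    "stats_cb": on_stats,
})

for i in range(10_000):
    producer.produce("demo-topic", value=f"msg-{i}")
    producer.poll(0)  # serve delivery and stats callbacks
producer.flush()
```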
  |  By Confluent
When building a distributed system, developers are often faced with something known as the dual-write problem. It occurs whenever the system needs to perform individual writes to separate systems that can't be transactionally linked. This situation creates the potential for data loss if the developer isn't careful. However, techniques such as the Transactional Outbox Pattern and Event Sourcing can be used to guard against the potential for data loss while also providing added resilience to the system.
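As a minimal sketch of the Transactional Outbox pattern, the example below commits the business write and the outbox record in one local transaction (SQLite stands in for the service's database), with a separate relay step publishing outbox rows afterward; table and event names are illustrative.

```python
# Transactional Outbox sketch: the business write and the outbox record
# commit in one local transaction, closing the dual-write gap. A separate
# relay later publishes outbox rows to Kafka. SQLite stands in for the
# service's database; table and event names are illustrative.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("CREATE TABLE outbox "
           "(id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)")
db.execute("INSERT INTO accounts VALUES (1, 100)")

def withdraw(account_id: int, amount: int) -> None:
    # Both writes succeed or both roll back: no window for data loss.
    with db:
        db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                   (amount, account_id))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"event": "Withdrawn",
                                "account": account_id,
                                "amount": amount}),))

withdraw(1, 25)

# The relay (e.g. a poller or CDC connector) drains the outbox separately.
with db:
    for row_id, payload in db.execute(
            "SELECT id, payload FROM outbox").fetchall():
        print("publish to Kafka:", payload)  # a real relay calls produce()
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
```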
  |  By Confluent
Tabs or spaces? Merge vs. rebase? Flink SQL vs. KStreams? Let’s Settle This is powered by confluent-kafka-javascript (early access), a new Kafka JavaScript client from Confluent. Find out how Lucia used it to build the website in the video above.
  |  By Confluent
The headless data architecture. Is it a fad? Some marketecture? Or something real? In this video, Adam Bellemare takes you through the basics of the headless data architecture and why it’s beginning to emerge as a distinct pattern in its own right. Driven by the decoupling of data computation from storage, the headless data architecture provides the basis for a modular data ecosystem. Stream your data for near real-time, low-latency use cases, or convert it to an Iceberg table for analytical use cases.
  |  By Confluent
In this video, you will see an example of how Tributary bank uses asynchronous events to enrich its domain and protect its fraud detection system from failures.
  |  By Confluent
In this video, we discuss one way a business can approach decomposing a monolith using a series of clearly defined steps and robust monitoring.
  |  By Confluent
Join Lucia Cerchie in a coding walkthrough, bridging the gap between REST APIs and data streaming. Together we’ll transform the OpenSky Network's live API into a data stream using Kafka and Flink SQL. Not only do we turn the REST API into a data stream in this walkthrough, but we clean up the data along the way! We use Flink SQL to make the data more readable and clean, keeping more of the business logic out of the client code.
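A minimal sketch of the first step, assuming a local broker and an illustrative topic name: poll the OpenSky REST API and produce each state vector to Kafka, where Flink SQL can then take over the cleanup.

```python
# Sketch of the walkthrough's first step: poll the OpenSky Network REST API
# and produce each aircraft state vector to a Kafka topic, where Flink SQL
# can clean it up. Topic name and polling cadence are assumptions.
import json
import time
import requests
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

while True:
    resp = requests.get("https://opensky-network.org/api/states/all",
                        timeout=10)
    for state in resp.json().get("states") or []:
        # Key by the aircraft's ICAO24 address so updates stay ordered
        # per plane within a partition.
        producer.produce("all_flights", key=state[0], value=json.dumps(state))
    producer.flush()
    time.sleep(30)  # respect the API's rate limits
```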
  |  By Confluent
In this video, Wade explores the process of decomposing a monolith into a series of microservices. Tributary Bank wants to start with its Fraud Detection service, but before it can, it first has to untangle the existing code, defining a clean API that will allow the functionality to move to an asynchronous, event-driven microservice. You'll see how the bank extracts a variety of API methods from the existing monolith.
  |  By Confluent
Traditional messaging middleware like Message Queues (MQs), Enterprise Service Buses (ESBs), and Extract, Transform and Load (ETL) tools have been widely used for decades to handle message distribution and inter-service communication across distributed applications. However, they can no longer keep up with the needs of modern applications across hybrid and multi-cloud environments for asynchronicity, heterogeneous datasets, and high-volume throughput.
  |  By Confluent
Why a data mesh? Predicated on delivering data as a first-class product, data mesh focuses on making it easy to publish and access important data across your organization. An event-driven data mesh combines the scale and performance of data in motion with product-focused rigor and self-service capabilities, putting data front and center in both operational and analytical use cases.
  |  By Confluent
When it comes to fraud detection in financial services, streaming data with Confluent enables you to build the right intelligence, as early as possible, for precise and predictive responses. Learn how Confluent's event-driven architecture and streaming pipelines deliver a continuous flow of data, aggregated from wherever it resides in your enterprise, to whichever application or team needs to see it. Enrich each interaction, each transaction, and each anomaly with real-time context so your fraud detection systems have the intelligence to get ahead.
  |  By Confluent
Many forces affect software today: larger datasets, geographical disparities, complex company structures, and the growing need to be fast and nimble in the face of change. Proven approaches such as service-oriented (SOA) and event-driven architectures (EDA) are joined by newer techniques such as microservices, reactive architectures, DevOps, and stream processing. Many of these patterns are successful by themselves, but as this practical ebook demonstrates, they provide a more holistic and compelling approach when applied together.
  |  By Confluent
Data pipelines do much of the heavy lifting in organizations for integrating, transforming, and preparing data for subsequent use in data warehouses for analytical use cases. Despite being critical to the data value stream, data pipelines fundamentally haven't evolved in the last few decades. These legacy pipelines are holding organizations back from really getting value out of their data as real-time streaming becomes essential.
  |  By Confluent
In today's fast-paced business world, relying on outdated data can prove to be an expensive mistake. To maintain a competitive edge, it's crucial to have accurate real-time data that reflects the current state of your business processes. With real-time data streaming, you can make informed decisions and drive value at a moment's notice. So, why settle for being simply data-driven when you can take your business to the next level with real-time data insights?
  |  By Confluent
Data pipelines do much of the heavy lifting in organizations for integrating, transforming, and preparing data for subsequent use in downstream systems for operational use cases. Despite being critical to the data value stream, data pipelines fundamentally haven't evolved in the last few decades. These legacy pipelines are holding organizations back from really getting value out of their data as real-time streaming becomes essential.
  |  By Confluent
Shoe retail titan NewLimits relies on a jumble of homegrown ETL pipelines and batch-based data systems. As a result, sluggish and inefficient data transfers are frustrating internal teams and holding back the company's development velocity and data quality.

Connect and process all of your data in real time with a cloud-native and complete data streaming platform available everywhere you need it.

Data streaming enables businesses to continuously process their data in real time for improved workflows, more automation, and superior digital customer experiences. Confluent helps you operationalize and scale all your data streaming projects so you never lose focus on your core business.

Confluent Is So Much More Than Kafka:

  • Cloud Native: 10x Apache Kafka® service powered by the Kora Engine.
  • Complete: A complete, enterprise-grade data streaming platform.
  • Everywhere: Availability everywhere your data and applications reside.

Apache Kafka® Reinvented for the Data Streaming Era