Confluent

What is a Headless Data Architecture?

The headless data architecture. Is it a fad? Some marketecture? Or something real? In this video, Adam Bellemare takes you through the basics of the headless data architecture and why it’s beginning to emerge as a distinct pattern. Driven by the decoupling of data computation from data storage, the headless data architecture provides the basis for a modular data ecosystem. Stream your data for near real-time, low-latency use cases, or convert it to an Iceberg table for analytical use cases.
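
To make the "two heads, one copy of the data" idea concrete, here is a minimal Python sketch of consuming the same order events both as a low-latency stream and as an Iceberg table for analytics. The broker address, topic and table names, catalog configuration, and the choice of the confluent-kafka and pyiceberg libraries are illustrative assumptions, not the setup shown in the video.

```python
# A minimal sketch of the "headless" idea: the data is written once,
# but consumed two ways. Names and connection settings are placeholders.
from confluent_kafka import Consumer
from pyiceberg.catalog import load_catalog

# Streaming head: low-latency consumption straight from the log.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "orders-dashboard",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])               # assumed topic name
msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    print("latest order event:", msg.value())
consumer.close()

# Analytical head: the same data exposed as an Iceberg table for batch queries.
catalog = load_catalog("demo", **{
    "type": "rest",
    "uri": "http://localhost:8181",          # assumed REST catalog endpoint
})
orders_table = catalog.load_table("sales.orders")   # assumed table name
orders = orders_table.scan().to_arrow()              # pull rows into Arrow for analysis
print(orders.num_rows, "rows available for analytics")
```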

How to Turn a REST API Into a Data Stream with Kafka and Flink

In the space of APIs for consuming up-to-date data (say, events or state available within an hour of occurring), many API paradigms exist. There are file- or object-based paradigms, e.g., S3 access. There’s database access, e.g., direct Snowflake access. Finally, there are decoupled client-server APIs, e.g., REST APIs, gRPC, webhooks, and streaming APIs.
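
As a rough illustration of the gap these paradigms leave, the sketch below polls a REST endpoint on an interval and appends each response to a Kafka topic, which is the simplest (if naive) way to bridge request/response access and a data stream. The endpoint URL, topic name, broker address, and polling interval are placeholders, not part of the original post.

```python
# Poll a REST API and append each record to a Kafka topic.
# All names and intervals here are illustrative assumptions.
import json
import time

import requests
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker

API_URL = "https://api.example.com/v1/events"  # hypothetical REST endpoint
TOPIC = "api-events"                            # hypothetical topic

while True:
    response = requests.get(API_URL, timeout=10)
    response.raise_for_status()
    for event in response.json():               # assumes the API returns a JSON array
        producer.produce(TOPIC, value=json.dumps(event).encode("utf-8"))
    producer.flush()                             # block until all messages are delivered
    time.sleep(60)                               # re-poll once a minute
```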

AWS and Confluent: Meeting the Requirements of Real-Time Operations

As government agencies work to improve both customer experience and operational efficiency, two tools have become critical: cloud services and data. Confluent and Amazon Web Services (AWS) have collaborated to make the move to the cloud, and its ongoing management, easier, while also enabling data streaming for real-time insights and action. We’ll be at the AWS Public Sector Summit in Washington, DC on June 26-27 to talk about and demo how our solutions work together.

Next-Gen Customer Loyalty Programs with Data Streaming

Buy 10 sandwiches, get 1 free. Classic punch cards (and fishing for them in your wallet or occasionally misplacing one) have become a thing of the past, as the digital landscape demands more innovative solutions. Today’s customer loyalty programs are increasingly sophisticated: evolving, proliferating, and diversifying across every industry from retail, travel, and hospitality to healthcare (e.g., a discount for paying within 30 days of a hospital visit).

How to Use Flink SQL, Streamlit, and Kafka: Part 2

In part one of this series, we walked through how to use Streamlit, Apache Kafka, and Apache Flink to create a live, data-driven user interface for a market data application where the user selects a stock (e.g., SPY), and we discussed the structure of the app at a high level. First, stock bid-price data arrives via an Alpaca websocket; it’s then produced to a Kafka topic in Confluent Cloud, where it is also processed with Flink SQL.
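
For a feel of what the Streamlit side of such a pipeline can look like, here is a pared-down sketch that reads processed bid prices from a Kafka topic and renders them as a chart. The topic name, message schema, and local broker settings are assumptions for illustration; the series itself connects to Confluent Cloud and uses a richer app structure.

```python
# Minimal Streamlit page that pulls a batch of bid-price messages from Kafka
# and charts them. Topic name, payload fields, and broker are assumptions.
import json

import pandas as pd
import streamlit as st
from confluent_kafka import Consumer

st.title("SPY bid prices")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # replace with Confluent Cloud settings
    "group.id": "streamlit-ui",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["spy-bid-prices"])      # assumed output topic of the Flink SQL job

rows = []
for _ in range(200):                        # read a bounded batch per page refresh
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        break
    record = json.loads(msg.value())        # assumed JSON payload with ts and bid_price
    rows.append({"ts": record["ts"], "bid_price": record["bid_price"]})
consumer.close()

if rows:
    df = pd.DataFrame(rows).set_index("ts")
    st.line_chart(df["bid_price"])
else:
    st.write("No new messages yet.")
```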

86% of IT Leaders Say Data Streaming Is a Priority for IT Investment in 2024

Confluent survey: 90% of respondents say data streaming platforms can lead to more product and service innovation in AI and ML development. 86% of respondents cite data streaming as a strategic or important priority for IT investments in 2024. For 91% of respondents, data streaming platforms are critical or important for achieving data-related goals.

How to Analyze Data from a REST API with Flink SQL

Join Lucia Cerchie in a coding walkthrough that bridges the gap between REST APIs and data streaming. Together we’ll transform the OpenSky Network's live API into a data stream using Kafka and Flink SQL. Not only do we turn the REST API into a data stream, we also clean up the data along the way, using Flink SQL to make it more readable and to keep more of the business logic out of the client code.
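
As a hedged sketch of the kind of cleanup step described here, the snippet below uses PyFlink to declare a Kafka-backed table of raw OpenSky-style flight records and a Flink SQL query that trims callsigns, drops rows without a position timestamp, and converts epoch seconds into a proper timestamp. Field names, topic names, and connector options are assumptions; the walkthrough itself runs its SQL in Confluent Cloud rather than a local PyFlink job.

```python
# A local PyFlink stand-in for the Flink SQL cleanup step. Assumes the Kafka
# SQL connector jar is available; all names below are illustrative.
from pyflink.table import EnvironmentSettings, TableEnvironment

table_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Raw OpenSky-style records landed on a Kafka topic by a producer.
table_env.execute_sql("""
    CREATE TABLE flights_raw (
        icao24 STRING,
        callsign STRING,
        origin_country STRING,
        time_position BIGINT
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'flights-raw',                           -- assumed topic
        'properties.bootstrap.servers' = 'localhost:9092', -- assumed broker
        'format' = 'json',
        'scan.startup.mode' = 'earliest-offset'
    )
""")

# Cleaned stream: trim padded callsigns, drop rows without a position timestamp,
# and convert epoch seconds into a timestamp.
cleaned = table_env.sql_query("""
    SELECT
        icao24,
        TRIM(callsign) AS callsign,
        origin_country,
        TO_TIMESTAMP_LTZ(time_position, 0) AS position_time
    FROM flights_raw
    WHERE time_position IS NOT NULL
""")
cleaned.execute().print()  # stream results to stdout for a quick look
```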