
Kafka

Kafka best practices: Monitoring and optimizing the performance of Kafka applications

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Yet the administrators, developers, and data engineers who run Kafka clusters often struggle to understand what is actually happening inside their implementations.
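
A minimal sketch of the kind of visibility this involves, using Kafka's Java AdminClient to compute per-partition consumer lag (the group id "my-group" and the broker address are hypothetical placeholders):

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListOffsetsResult;
    import org.apache.kafka.clients.admin.OffsetSpec;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import java.util.Map;
    import java.util.Properties;
    import java.util.stream.Collectors;

    public class ConsumerLagCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // Offsets the group has committed, per partition.
                Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("my-group")
                         .partitionsToOffsetAndMetadata().get();
                // Latest end offsets for the same partitions.
                Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                    admin.listOffsets(committed.keySet().stream()
                            .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest())))
                         .all().get();
                // Lag = end offset minus committed offset.
                committed.forEach((tp, meta) -> System.out.printf(
                    "%s lag=%d%n", tp, ends.get(tp).offset() - meta.offset()));
            }
        }
    }

Lag that grows steadily on a partition is one of the clearest signs that consumers are falling behind producers.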

Building a dependable realtime betting app with Confluent Cloud and Ably

Our everyday digital experiences are in the midst of a revolution. Customers increasingly expect their online experiences to be interactive, immersive, and realtime by default. The need to satisfy these expectations is driving the exponential growth of event-driven architectures in organizations of all shapes and sizes. And mobile pushes this change further and faster by enabling users to have realtime experiences whenever and wherever they want, 24/7.

The Ably Kafka Connector - now Generally Available with enhanced pattern-based mapping capabilities

The Ably Kafka Connector has a raft of new enhancements and is now available in a full general availability (GA) release. Developers now have the option to use pattern-based mapping rules to stream data from many Kafka topics to many Ably channels - ideal for chat solutions, live sports updates, live streaming, and broadcasting notifications and alerts.
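
As a hedged sketch of what such a mapping might look like in a Kafka Connect sink configuration (the topic regex and credentials are placeholders, and the exact property names are an assumption based on the connector's #{topic}/#{key} substitution syntax):

    # Route every matching Kafka topic to an identically named Ably channel.
    connector.class = com.ably.kafka.connect.ChannelSinkConnector
    topics.regex = chat\..*
    client.key = ${ABLY_API_KEY}
    client.id = kafka-connect-ably
    channel = #{topic}
    message.name = #{key}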

Ably launch Kafka Connector at Kafka Summit 2022 - London

Here at Ably, we're excited to announce our participation in and Silver sponsorship of Kafka Summit 2022, taking place on 25-26 April. The Kafka Summit is the only dedicated technical conference for the Apache Kafka® community, and it's a great opportunity for anyone building large-scale event-driven systems to learn and share ideas. It's also the perfect event for us to launch the general availability of the Ably Kafka Connector.

Apache Kafka to BigQuery: 2 Easy Methods

Organizations today have access to ever-widening streams of data, generated by recommendation engines, page clicks, internet searches, product orders, and more. It is necessary to have an infrastructure that enables you to stream data as it is generated and carry out analytics on the fly. To that end, incorporating a data pipeline that moves data from Apache Kafka to BigQuery is a step in the right direction.
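
One common way to build such a pipeline is Kafka Connect with the community BigQuery sink connector. A minimal sketch, with placeholder project, dataset, and credential values (property names follow the WePay/Confluent BigQuery sink connector and are assumptions here):

    name = kafka-to-bigquery
    connector.class = com.wepay.kafka.connect.bigquery.BigQuerySinkConnector
    topics = orders,page_clicks
    project = my-gcp-project
    defaultDataset = kafka_events
    keyfile = /secrets/bigquery-service-account.json
    # Let the connector create tables from each record's schema.
    autoCreateTables = true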

Producing Protobuf data to Kafka

Until recently, teams were building only a small handful of Kafka streaming applications. These were usually associated with Big Data workloads (analytics, data science, etc.), and data would typically be serialized in Avro or JSON. Now a wider set of engineering teams are building entire software products with microservices decoupled through Kafka, and many have adopted Google Protobuf as their serialization format, partly due to its use in gRPC.
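
As an illustration, a minimal Java producer using Confluent's Protobuf serializer might look like the sketch below; OrderCreated stands in for any protoc-generated message class, and the broker and Schema Registry addresses are placeholders:

    import io.confluent.kafka.serializers.protobuf.KafkaProtobufSerializer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;
    import java.util.Properties;

    public class ProtobufProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaProtobufSerializer.class);
            // The serializer registers the Protobuf schema with Schema Registry.
            props.put("schema.registry.url", "http://localhost:8081");

            // OrderCreated is a hypothetical protoc-generated class.
            OrderCreated event = OrderCreated.newBuilder()
                    .setOrderId("o-123")
                    .setAmountCents(4999)
                    .build();

            try (KafkaProducer<String, OrderCreated> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("orders", event.getOrderId(), event));
            }
        }
    }

Unlike with Avro, the schema travels as a compiled class inside the application; the serializer registers it with Schema Registry on first use so consumers can resolve it.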

Lenses 5.0: The developer experience for mass Kafka adoption

Kafka is a ubiquitous component of a modern data platform. It has acted as the buffer, landing zone, and pipeline for integrating your data to drive analytics, or perhaps to surface, after a few hops, in a business service. More recently, though, it has become the backbone for new digital services with consumer-facing applications that process live off the stream. As such, Kafka is being adopted by dozens (if not hundreds) of software and data engineering teams in your organization.

How to Install Kafka on Ubuntu 20.04: 8 Easy Steps

Apache Kafka is a distributed message broker designed to handle large volumes of real-time data efficiently. Unlike traditional brokers such as ActiveMQ and RabbitMQ, Kafka runs as a cluster of one or more servers, which makes it highly scalable; this distributed design also gives it built-in fault tolerance while delivering higher throughput than its counterparts. This article will walk you through installing Kafka on Ubuntu 20.04 in 8 simple steps.
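
Condensed to its essence, the installation looks roughly like this (the Kafka version shown is illustrative; releases of that era run against the bundled ZooKeeper):

    sudo apt update && sudo apt install -y openjdk-11-jdk
    wget https://archive.apache.org/dist/kafka/2.8.2/kafka_2.13-2.8.2.tgz
    tar -xzf kafka_2.13-2.8.2.tgz && cd kafka_2.13-2.8.2
    # Start ZooKeeper, then the broker, each in its own terminal.
    bin/zookeeper-server-start.sh config/zookeeper.properties
    bin/kafka-server-start.sh config/server.properties
    # Smoke test: create a topic.
    bin/kafka-topics.sh --create --topic test --bootstrap-server localhost:9092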