
Kafka

Apache Kafka to BigQuery: 2 Easy Methods

Organizations today have access to a continuous stream of data generated by recommendation engines, page clicks, internet searches, product orders, and more. Acting on it requires infrastructure that lets you stream data as it is generated and run analytics on the fly. To that end, building a data pipeline that moves data from Apache Kafka to BigQuery is a step in the right direction.
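To make the idea concrete, here is a minimal sketch of one such pipeline: a Python consumer that reads JSON messages from a Kafka topic and streams them into a BigQuery table. The topic name, table ID, and message shape are illustrative assumptions, not details from the article.

```python
# Minimal Kafka -> BigQuery sketch (topic and table names are assumptions).
# Requires: pip install confluent-kafka google-cloud-bigquery
import json

from confluent_kafka import Consumer
from google.cloud import bigquery

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "bq-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["page_clicks"])          # hypothetical topic

bq = bigquery.Client()                       # uses default GCP credentials
TABLE = "my-project.analytics.page_clicks"   # hypothetical table

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        row = json.loads(msg.value())        # assumes JSON-encoded messages
        errors = bq.insert_rows_json(TABLE, [row])  # streaming insert
        if errors:
            print("BigQuery insert failed:", errors)
finally:
    consumer.close()
```

In practice you would batch rows and handle retries; managed pipeline tools or a Kafka Connect BigQuery sink wrap this loop for you.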

Producing Protobuf data to Kafka

Until recently, teams were building only a small handful of Kafka streaming applications. These were usually associated with Big Data workloads (analytics, data science, etc.), and data was typically serialized as Avro or JSON. Now a much wider set of engineering teams is building entire software products out of microservices decoupled through Kafka. Many of these teams have adopted Google Protobuf as their serialization format, partly due to its use in gRPC.
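As a sketch of what that looks like in practice, the snippet below serializes a Protobuf message and produces it to Kafka. The `order_pb2` module and its `Order` message are assumed to come from a hypothetical `order.proto` compiled with `protoc`; in production you would typically pair this with a schema registry.

```python
# Producing Protobuf-serialized data to Kafka (order.proto is hypothetical).
# Requires: pip install confluent-kafka protobuf, plus protoc to generate order_pb2.
from confluent_kafka import Producer

import order_pb2  # generated from a hypothetical order.proto

producer = Producer({"bootstrap.servers": "localhost:9092"})

order = order_pb2.Order(id="o-123", sku="WIDGET-9", quantity=2)

# Protobuf's compact binary encoding goes on the wire as the message value.
producer.produce(
    "orders",
    key=order.id,
    value=order.SerializeToString(),
)
producer.flush()  # block until the broker acknowledges delivery
```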

Lenses 5.0: The developer experience for mass Kafka adoption

Kafka is a ubiquitous component of the modern data platform. It has acted as the buffer, landing zone, and pipeline that integrates your data, whether to drive analytics or, after a few hops, to surface in a business service. More recently, though, it has become the backbone for new digital services, with consumer-facing applications processing data live off the stream. As such, Kafka is being adopted by dozens (if not hundreds) of software and data engineering teams in your organization.

Steps to Install Kafka on Ubuntu 20.04: 8 Easy Steps

Apache Kafka is a distributed message broker designed to handle large volumes of real-time data efficiently. Unlike traditional brokers such as ActiveMQ and RabbitMQ, Kafka runs as a cluster of one or more servers, which makes it highly scalable; its distributed design also gives it built-in fault tolerance and higher throughput than its counterparts. This article walks you through installing Kafka on Ubuntu 20.04 in 8 simple steps.
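Once the installation steps are done, a quick round-trip test confirms the broker works. The sketch below (an illustrative addition, not one of the article's 8 steps) produces a message and reads it back, assuming a broker on the default localhost:9092 with topic auto-creation enabled.

```python
# Post-install smoke test: produce one message and read it back.
# Assumes Kafka on localhost:9092 and auto.create.topics.enable=true (the default).
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("smoke-test", value=b"hello from ubuntu 20.04")
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "smoke-test-group",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["smoke-test"])

msg = consumer.poll(timeout=10.0)  # wait up to 10s for the message to arrive
print("broker OK:", msg.value() if msg and not msg.error() else "no message received")
consumer.close()
```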

The Next Generation of Cloud Connectivity: Apache Kafka, API Gateway and Service Mesh

Let’s boldly go where no one has gone before. Get ready, Star Trek fans: Jean-Luc Picard will be representing our microservice. Once we have Jean-Luc on our ship (our microservice in production), what happens on day 2? We still need to add authorization, load balancing, rate limiting, and so on. With an API gateway like Kong Gateway, you don't have to build any of this yourself: a set of program components, called plugins, lets you add these capabilities with minimal effort.
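For a taste of how plugins are enabled, the sketch below attaches Kong's rate-limiting and key-auth plugins to a service through Kong's Admin API. The service name `jean-luc` and the limits are made up for illustration.

```python
# Enabling Kong plugins on a service via the Admin API (service name is made up).
# Assumes Kong's Admin API is reachable on the default port 8001.
import requests

ADMIN = "http://localhost:8001"

# Rate-limit the hypothetical "jean-luc" service to 100 requests per minute.
resp = requests.post(
    f"{ADMIN}/services/jean-luc/plugins",
    json={"name": "rate-limiting", "config": {"minute": 100, "policy": "local"}},
)
resp.raise_for_status()

# Require API keys on the same service with the key-auth plugin.
resp = requests.post(f"{ADMIN}/services/jean-luc/plugins", json={"name": "key-auth"})
resp.raise_for_status()

print("plugin enabled:", resp.json()["name"])
```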

Apache Kafka, API Gateway and Service Mesh for Cloud Connectivity

In this video, Viktor Gamov illustrates the differences between an API gateway and a service mesh, and when to use one or the other, pragmatically and objectively. He also discusses the similarities and differences between the communication layers provided by gateways, service meshes, and Apache Kafka. Finally, you will learn a few ways to use Apache Kafka within a service mesh architecture.

Building a realtime ticket booking solution with Kafka, FastAPI, and Ably

As the post-pandemic world emerges, the future of events such as summits, conferences, and concerts is brighter than ever. Thanks to hybrid events, in-person gatherings are now complemented by online counterparts, allowing event organizers to reach much larger, geographically distributed audiences. For organizers and ticket distributors, providing a great ticket-booking experience to these global audiences has become more important than ever.
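As a rough sketch of such an architecture's entry point, here is a FastAPI endpoint that accepts a booking request and publishes it to a Kafka topic for downstream processing (and, in the full solution, fan-out to clients over Ably). The route, topic, and field names are assumptions for illustration.

```python
# Booking API entry point: FastAPI request -> Kafka topic (names are assumptions).
# Requires: pip install fastapi uvicorn confluent-kafka pydantic
from confluent_kafka import Producer
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
producer = Producer({"bootstrap.servers": "localhost:9092"})


class BookingRequest(BaseModel):
    event_id: str
    seat: str
    user_id: str


@app.post("/bookings")
def create_booking(req: BookingRequest):
    # Key by event so all bookings for one event land on the same partition,
    # preserving ordering for per-event seat allocation downstream.
    producer.produce(
        "bookings",
        key=req.event_id,
        value=req.model_dump_json(),  # Pydantic v2; use req.json() on v1
    )
    producer.flush()
    return {"status": "pending", "event_id": req.event_id, "seat": req.seat}
```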

Ably Kafka Connector: extend Kafka to the edge reliably and safely

We are excited to announce our new Kafka connector. The Ably Kafka Connector provides a ready-made integration between Kafka and Ably, helping companies distribute data from Kafka to internet-connected client devices in a fast, easy, dependable and secure way. As part of our partnership with Confluent, the connector is available on the Confluent Hub as a Gold standard connector.
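For readers curious about the shape of a deployment, the sketch below registers the connector with a Kafka Connect worker through its REST API. The connector class and configuration keys are recalled from the connector's documentation and should be treated as assumptions; consult the Confluent Hub listing for the authoritative config.

```python
# Registering the Ably Kafka Connector with a Connect worker (sketch).
# Config keys below are assumptions; consult the connector's documentation.
import requests

connector = {
    "name": "ably-channel-sink",
    "config": {
        "connector.class": "com.ably.kafka.connect.ChannelSinkConnector",  # assumed class name
        "topics": "bookings",               # Kafka topic(s) to forward
        "channel": "bookings",              # Ably channel receiving the data
        "client.key": "<YOUR_ABLY_API_KEY>",
        "client.id": "kafka-connect",
    },
}

# Kafka Connect's REST API listens on port 8083 by default.
resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print("connector created:", resp.json()["name"])
```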