In the event-driven galaxy, which metadata matters most?

As a developer, you're no stranger to your vast and varied data environment… Or are you? The tremendous amount of data your organization collects is stored across many sources and formats. You need a way to understand what data you have and where it lives, so you can do what you're here to do: build amazing event-driven applications.

NEW Lenses: PostgreSQL & metadata to navigate your Kafka galaxy

When you’re one of many developers commanding streaming applications running on Apache Kafka, you want enough data observability to fly your own data product to the moon. But you also want to boldly go where no developer has gone before and discover new applications. At the same time, you don’t want to be exposed to sensitive data that summons your compliance team and crashes you back down to earth.

Creating your managed Kafka shortlist

You’ve been handed the not-so-easy task of scoping a managed Kafka service for your team. How do you start the shortlist? Post something on Reddit? Skim-read a gazillion review blogs? Crash Google Chrome by opening a thousand tabs to compare feature lists? Whether you’re planning a Kafka POC with two or three vendors or simply trying to find the best Kafka for your business, how can you narrow down your selection? Let’s get to it.

Architecting Apache Kafka for GDPR compliance

Once upon a time (2017), in an office far, far away, you may have been cornered in a conversation with someone from Legal about GDPR. It could have gone something like this: “You there, Data Engineer.” “Yep, that’s me.” “What PII do we have residing in this Apache Kafka database?” You probably mumbled something about Kafka not being a database. “And who can read/write the data?”
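
That second question, at least, Kafka can answer directly when an authorizer is enabled. Here is a minimal sketch using the AdminClient to list who is allowed to read a topic; the broker address and the topic name "customer-events" are purely illustrative, and your cluster must already have ACLs configured for this to return anything.

```java
// Sketch: asking Kafka "who can read this topic?" via the AdminClient ACL API.
// Assumes an authorizer is enabled and ACLs exist; names below are placeholders.
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;

import java.util.Properties;

public class WhoCanReadMyTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Filter: any principal with an ALLOW rule to READ the topic "customer-events".
            AclBindingFilter filter = new AclBindingFilter(
                new ResourcePatternFilter(ResourceType.TOPIC, "customer-events", PatternType.ANY),
                new AccessControlEntryFilter(null, null, AclOperation.READ, AclPermissionType.ALLOW));

            admin.describeAcls(filter).values().get()
                 .forEach(binding -> System.out.println(
                     binding.entry().principal() + " may read " + binding.pattern().name()));
        }
    }
}
```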

Picking up the pieces of your monolith breakdown

A decade ago, all developers could talk about was breaking down the monolith and moving to event-driven architectures, especially in the financial services industry, where teams wanted to become more nimble and accelerate application delivery. They leveraged messaging systems to decouple their applications, and Apache Kafka in particular has transitioned from being a data integration technology to the leading messaging system for microservices.
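
To make that decoupling concrete, here is a minimal sketch using the plain Kafka clients: one service publishes an "order placed" fact and knows nothing about who consumes it, while another service subscribes independently. The topic name "orders", the group id and the broker address are assumptions for illustration, not a prescribed setup.

```java
// Minimal sketch of event-driven decoupling with the standard Kafka clients.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OrderEvents {
    // The order service publishes a fact; it has no knowledge of downstream consumers.
    static void publishOrderPlaced(String orderId) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", orderId, "{\"event\":\"ORDER_PLACED\"}"));
        }
    }

    // A billing (or shipping, or analytics) service subscribes independently.
    static void consumeOrders() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-service");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                System.out.println("billing saw order " + record.key() + ": " + record.value());
            }
        }
    }
}
```

Adding a new consumer group is how the monolith's pieces stay independent: the producer never changes when another service starts reading the same events.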

Kafka to Splunk: Data mesh for security & IT

Splunk is a technology that made processing huge, complex datasets accessible to security and IT teams. But despite its strengths for monitoring and investigation, Splunk is a bit of a one-way street: once your data is in Splunk, it’s not easy to stream it back out elsewhere at volume. And that doesn’t mean Splunk is the best technology for every IT and security use case. Or the cheapest.
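
The Kafka-to-Splunk direction, by contrast, is straightforward. Below is a hedged sketch of a consumer that forwards records to Splunk's HTTP Event Collector (HEC); in practice you would more likely use the Splunk Connect for Kafka sink connector. The HEC host, token and the "security-logs" topic are placeholders, and the sketch assumes the record value is already JSON.

```java
// Sketch: forwarding Kafka records to Splunk's HTTP Event Collector (HEC).
// Host, token and topic are placeholders; record values are assumed to be JSON.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class KafkaToSplunk {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "splunk-forwarder");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        HttpClient http = HttpClient.newHttpClient();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("security-logs"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // HEC expects a JSON envelope with an "event" field.
                    String payload = "{\"event\": " + record.value() + "}";
                    HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://splunk.example.com:8088/services/collector/event"))
                        .header("Authorization", "Splunk <hec-token>")
                        .POST(HttpRequest.BodyPublishers.ofString(payload))
                        .build();
                    http.send(request, HttpResponse.BodyHandlers.discarding());
                }
            }
        }
    }
}
```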

AWS re:Invent: Apache Kafka takeaways

If you've ever been to AWS re:Invent in Vegas, you'll know it's a crazy ride. This year we missed out (at least we have cleaner consciences and healthier wallets), but the high quality of content hasn't changed. We've been binging on sessions ‘til the bitter end (it officially ended Friday). So for our community, here is a summary of a few talks related to Apache Kafka.

Kafka Total Cost of Ownership: What are you missing?

“We’ve seen two years’ worth of digital transformation in two months,” said Microsoft’s Satya Nadella. Due to COVID-19, digital transformation roadmaps have been deleted, redrafted, doubled down on, and accelerated by up to a decade. Traditional companies are moving almost by osmosis towards streaming technologies such as Apache Kafka to launch new digital services. But how much should it cost to experience 2030 in 2021?