
Latest Posts

The best of Kafka Summit 2020

After a self-isolated and event-free spring, some of us around the world welcomed a more promising summer. You might be taking some time away on a socially distanced holiday, or simply stepping back from the day-to-day at home. But if a cold beer in the sun isn't enough to make up for these difficult months, the premier event for the Streaming Data Community is back! Kafka Summit has gone virtual this year, which means you can attend the event from anywhere.

Why our new Streaming SQL opens up your data platform

SQL has long been the universal language for working with data. In fact, it’s more relevant today than it was 40 years ago. Many data technologies were born without it and inevitably ended up adopting it later on. Apache Kafka is one of them. At Lenses.io, we were the first in the market to develop a SQL layer for Kafka (yes, before KSQL) and to integrate it into a few different areas of our product for different workloads.

Why SQL is your key to querying Kafka

If you’re an engineer exploring a streaming platform like Kafka, chances are you’ve spent some time trying to work out what’s going on with the data in there. But if you’re introducing Kafka to a team of data scientists or developers unfamiliar with its idiosyncrasies, you might have spent days, weeks, months trying to tack on self-service capabilities. We’ve been there.
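As a rough illustration of what this looks like in practice (this is not necessarily the exact Lenses.io syntax, and the topic and field names here are hypothetical), a streaming SQL layer lets anyone on the team inspect a Kafka topic with a few familiar lines instead of writing a custom consumer:

```sql
-- Hypothetical example: "payments" topic and its fields are made up for illustration.
-- Browse recent high-value payment events without writing any consumer code.
SELECT customer_id, amount, currency
FROM payments
WHERE amount > 1000
LIMIT 10;
```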

Data dump to data catalog for Apache Kafka

From data stagnating in warehouses to a growing number of real-time applications, in this article we explain why we need a new class of Data Catalogs: this time for real-time data. The 2010s brought us organizations “doing big data”. Teams were encouraged to dump their data into a data lake and leave it for others to harvest. But data lakes soon became data swamps.

It Takes Two to Kafka: AWS MSK + DataOps

I ordered a ride share recently from a beach; the app struggled to find a car, so I had to make several requests. After the fourth or fifth attempt, my bank alerted me by SMS to possible fraudulent activity on my credit card. Each time I ordered a ride, the service put a pending charge on my card. After I texted back that it was just me, the bank reactivated my account. Though the process was annoying, I felt reassured that my bank could detect possible fraud that quickly.

Will your streaming data platform disturb your holiday?

Here's why you need to double down on your DataOps before your vacation. In the past few months, everything has changed at work (or at home). Q1 plans were scrapped. Reset buttons were smashed. It was all about cost-cutting and keeping the lights on. Many app and data teams sought quick solutions and developed workarounds to data challenges and operational problems as people prepared to work from home for the foreseeable future. And now, it’s time for a holiday.

Future-proofing the supply chain with real-time data

Next to the healthcare system, COVID-19’s biggest infrastructural burden fell upon the supply chain. Fluctuations in supply and demand of essential goods, along with the oil surplus, led to a freight cliff in mid-April. Outbound tender volume and spot rates bottomed out, which highlighted a massive drop in demand. As the market rebounds, technological investments are key to the industry’s recovery.

Removing Kafka bottlenecks with DataOps

Our CTO, Andrew Stevenson, was interviewed by Alan Shimel for TechStrong TV. The discussion covered hot data topics such as DataOps, DevOps and practices for enabling Kafka successfully. Andrew recounts his journey from civil engineering to starting Lenses.io with Antonios, our CEO, to help organizations succeed with real-time data.

A perfect environment to learn & develop on Apache Kafka

Apache Kafka has gained traction as one of the most widely adopted technologies for building streaming applications - but introducing it (and scaling it) in your business can be a struggle. The problem isn’t with Kafka itself so much as with the different components you need to learn and the different tools required to operate it. If you’re motivated enough, you can invest money, effort and long Friday nights into learning, fixing and streamlining Kafka - and you’ll get there.

Data engineering in 2020

Data Engineers are forever flying the flag for open-source technology. But now that we’re safely locked away in our homes - potentially for the rest of the year - a new danger looms: that we get distracted by our new data tools and lose touch with delivering value to the business. Today, most Data Engineers around the world are working from home, and at first glance it may seem like this works. After all, a solid internet connection is all we need to carry on doing what we were doing...