Behavioral email is the keystone of user-centric platform integration: emails to developers are most effective when they're based on how those developers actually use your platform. Not surprisingly, segmenting your customers into groups with similar behaviors or attitudes is a well-established best practice in marketing. A MarketingSherpa survey found that sending emails based on customer behavior is one of the most effective email marketing strategies.
In today’s post, we’ll look at two Elixir HTTP client libraries: Mint and Finch. Finch is built on top of Mint, and we’ll see what benefits that abstraction layer offers. We’ll also survey some of the other HTTP client libraries in the ecosystem and discuss what sets Mint and Finch apart. Finally, we’ll put together a quick project that uses Finch to put our learning into action. Let’s jump right in!
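Before we get there, here is a minimal sketch of what a request with Finch looks like; the pool name MyApp.Finch and the URL are placeholders of our own, and details may vary between Finch versions:

```elixir
# Start a Finch connection pool. In a real application you would normally
# put {Finch, name: MyApp.Finch} in your supervision tree instead of calling
# start_link/1 directly. The name MyApp.Finch is arbitrary.
{:ok, _pid} = Finch.start_link(name: MyApp.Finch)

# Build a request struct and run it against the named pool.
{:ok, response} =
  Finch.build(:get, "https://hex.pm")
  |> Finch.request(MyApp.Finch)

IO.inspect(response.status) # e.g. 200
```

Under the hood, Finch manages pools of Mint connections for you, which is exactly the kind of convenience we’ll dig into below.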
Links are so fundamental to web development that they're almost invisible. When we link to a third-party page, we hardly ever consider how it could become an opportunity to exploit our users. In this article, Julien Cretel introduces us to three techniques that bad actors can use to target our users and discusses how to avoid them.
SQL has long been the universal language for working with data; in fact, it’s more relevant today than it was 40 years ago. Many data technologies were born without it and inevitably ended up adopting it later on. Apache Kafka is one of them. At Lenses.io, we were the first in the market to develop a SQL layer for Kafka (yes, before KSQL) and to integrate it into several areas of our product for different workloads.
If you’re an engineer exploring a streaming platform like Kafka, chances are you’ve spent some time trying to work out what’s going on with the data in there. But if you’re introducing Kafka to a team of data scientists or developers unfamiliar with its idiosyncrasies, you might have spent days, weeks, or even months trying to tack on self-service capabilities. We’ve been there.
In this article, we look at data stagnating in warehouses and the growing number of real-time applications, and explain why we need a new class of Data Catalog: this time for real-time data. The 2010s brought us organizations “doing big data”: teams were encouraged to dump their data into a data lake and leave it for others to harvest. But data lakes soon became data swamps.
The next generation of 5G networks is unlocking a mind-bending array of new use cases. Blistering speed, super-low latency, and access to more powerful mobile hardware bring VR, AR, and ultra-high-definition experiences into sharp focus for the near future. But there’s a bigger shift being driven by 5G, and it’s not actually about speed at all. It’s about rethinking the modern telco business model.
Navistar is a leading global manufacturer of commercial trucks. Across its fleet of 350,000 vehicles, unscheduled maintenance and vehicle breakdowns created ongoing disruption to the business. Navistar needed a diagnostics platform that would help it predict when a vehicle required maintenance, so it could minimize downtime.
Strong investment in an organization’s data pipeline leads to better business outcomes. Few would dispute this claim; it is reflected in the massive growth of the big data and analytics market, which continues to fuel many organizations’ ambition to become data-driven.