
Technology

Apache Kafka, API Gateway and Service Mesh for Cloud Connectivity

In this video, Viktor Gamov illustrates the differences between an API gateway and a service mesh, and when to use one or the other, pragmatically and objectively. He also discusses the similarities and differences between the communication layers provided by gateways, service meshes, and Apache Kafka. Finally, you will learn a few ways to use Apache Kafka within a service mesh architecture.

The Next Generation of Cloud Connectivity: Apache Kafka, API Gateway and Service Mesh

Let’s boldly go where no one has gone before. Get ready, Star Trek fans! Jean-Luc Picard will be representing our microservice. Once we have Jean-Luc in our ship (a microservice in production), what happens on day two? We still need to add authorization, load balancing, rate limiting, and so on. With an API gateway like Kong Gateway, you don’t have to build any of this yourself: a set of program components, called plugins, lets you add these capabilities without any problem.
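
To make the plugin idea concrete, here is a minimal sketch (not from the video) that attaches Kong’s bundled key-auth and rate-limiting plugins to a service through the Admin API. The Admin API address, the service name starship-service, and the limits are assumptions for illustration.

```python
import requests

# Assumed setup: Kong Admin API on localhost:8001 and a service already
# registered under the hypothetical name "starship-service".
ADMIN_API = "http://localhost:8001"
SERVICE = "starship-service"

# Enable key authentication on the service (Kong's bundled key-auth plugin).
requests.post(
    f"{ADMIN_API}/services/{SERVICE}/plugins",
    data={"name": "key-auth"},
).raise_for_status()

# Enable rate limiting: at most 5 requests per minute per consumer,
# counted locally on the gateway node.
requests.post(
    f"{ADMIN_API}/services/{SERVICE}/plugins",
    data={
        "name": "rate-limiting",
        "config.minute": 5,
        "config.policy": "local",
    },
).raise_for_status()
```

Once the plugins are attached, the gateway enforces authentication and the request quota in front of the service, so the microservice itself does not change.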

Data Goes Around The World In 80 Seconds With Snowflake

See how a database named Phileas Fogg can journey around the world in 80 seconds on Snowflake in this animated short. With Snowflake, PHILEAS_FOGG can fail over in the event of a disruption to enable continuous business operations, and it can be joined with local data sets for global data collaboration across clouds.
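
For readers who want to see roughly what that looks like outside the animation, here is an illustrative sketch using the snowflake-connector-python client. The account identifiers, credentials, and secondary account are hypothetical, and the commands follow Snowflake’s database replication and failover syntax on editions that support it.

```python
import snowflake.connector

# Hypothetical connection to the primary Snowflake account.
conn = snowflake.connector.connect(
    account="myorg-primary",   # assumed account identifier
    user="ADMIN_USER",
    password="...",            # placeholder credential
)
cur = conn.cursor()

# Allow PHILEAS_FOGG to be replicated to, and failed over to, a secondary account.
cur.execute(
    "ALTER DATABASE PHILEAS_FOGG ENABLE FAILOVER TO ACCOUNTS myorg.secondary"
)

# On the secondary account, a replica is created and, during a disruption,
# promoted to serve as the new primary:
#   CREATE DATABASE PHILEAS_FOGG AS REPLICA OF myorg.primary.PHILEAS_FOGG;
#   ALTER DATABASE PHILEAS_FOGG PRIMARY;
```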

Will cloud ecosystems finally make insight to action a reality?

For decades, the technologies and systems that deliver analytics have undergone massive change. What hasn’t changed, however, is the goal: using data-driven insights to drive actions. Insight to action has been a consistent vision for the industry. Everyone from data practitioners to technology developers has sought this elusive goal, but as Chief Data Strategy Officer Cindi Howson points out, it has remained unfulfilled until now.

Announcing Our $4M Seed and Continual Public Beta

Today we’re excited to announce the public beta launch of Continual, the first operational AI platform built specifically for modern data teams and the modern data stack. We’re also announcing our $4M Series Seed, led by Amplify Partners and joined by Illuminate Ventures, Wayfinder, DCF, and Essence, as well as new partnerships with Snowflake and dbt Labs.

How to migrate an on-premises data warehouse to BigQuery on Google Cloud

Data teams across companies face the continuous challenge of consolidating data, processing it, and making it useful. They deal with a mixture of multiple ETL jobs, long ETL windows, capacity-bound on-premises data warehouses, and ever-increasing demands from users. They also need to make sure that the downstream requirements of ML, reporting, and analytics are met by the data processing.
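
As one concrete migration step, the sketch below batch-loads files that have already been exported to Cloud Storage into BigQuery using the google-cloud-bigquery Python client. The project, dataset, table, and bucket names are hypothetical.

```python
from google.cloud import bigquery

# Hypothetical names: project "my-project", dataset "warehouse", table "orders",
# and CSV exports already staged in a Cloud Storage bucket.
client = bigquery.Client(project="my-project")
table_id = "my-project.warehouse.orders"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the exported files
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(
    "gs://my-migration-bucket/exports/orders_*.csv",
    table_id,
    job_config=job_config,
)
load_job.result()  # block until the load job completes

print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```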

What is Amazon Redshift Spectrum?

Amazon S3 (Simple Storage Service) has been around since 2006. Most users rely on this scalable, cloud-based service for archiving and backing up data. Within 10 years of its launch, S3 stored over 2 trillion objects, each up to 5 terabytes in size. Enterprises value their data as something worth preserving, yet much of it lies inert in “cold” data lakes, unavailable for analysis. Also called “dark data”, it can hold key insights for enterprises.
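
Redshift Spectrum is Amazon’s answer to that dark data: it lets a Redshift cluster query files sitting in S3 in place. The sketch below is a minimal illustration, assuming a hypothetical cluster endpoint and credentials, a hypothetical IAM role and Glue Data Catalog database, and Parquet files under s3://my-data-lake/events/.

```python
import psycopg2

# Hypothetical connection details for an existing Redshift cluster.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",  # placeholder credential
)
conn.autocommit = True  # external DDL cannot run inside a transaction block
cur = conn.cursor()

# Register an external schema backed by the Glue Data Catalog, then describe
# the "cold" Parquet files in S3 as an external table.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    FROM DATA CATALOG DATABASE 'spectrum_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS
""")
cur.execute("""
    CREATE EXTERNAL TABLE spectrum.events (
        event_id   BIGINT,
        event_type VARCHAR(64),
        event_ts   TIMESTAMP
    )
    STORED AS PARQUET
    LOCATION 's3://my-data-lake/events/'
""")

# Query the S3-resident data directly; nothing is loaded into the cluster.
cur.execute("SELECT event_type, COUNT(*) FROM spectrum.events GROUP BY event_type")
print(cur.fetchall())
```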

Redshift Join: How to use Redshift's Join Clause

Redshift’s JOIN clause is perhaps the second most important clause after the SELECT clause, and it is used almost as ubiquitously, considering how interconnected a typical application database’s tables are. Because of that connectivity between datasets, data developers need many joins to collect and process all the data points involved in most use cases. Unfortunately, as the number of tables you’re joining grows, so does the slowness of your query.
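
To ground the discussion, here is a small sketch of a two-table join run against a Redshift cluster from Python. The cluster endpoint, credentials, and the customers and orders tables are hypothetical.

```python
import psycopg2

# Hypothetical tables "customers" and "orders", related through customer_id.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="...",  # placeholder credential
)
cur = conn.cursor()

# An inner join collects related rows from both tables; each additional table
# joined in widens the work the planner has to do, which is why queries tend
# to slow down as the join count grows.
cur.execute("""
    SELECT c.customer_name,
           COUNT(o.order_id)  AS order_count,
           SUM(o.order_total) AS lifetime_value
    FROM customers AS c
    JOIN orders    AS o ON o.customer_id = c.customer_id
    GROUP BY c.customer_name
    ORDER BY lifetime_value DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```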