A data pipeline is a series of actions that combine data from multiple sources for analysis or visualization.
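To make the idea concrete, here is a toy sketch of those steps in Python, assuming two hypothetical CSV sources that share a customer_id key; real pipelines add scheduling, validation, and error handling around steps like these.

```python
import pandas as pd

# Extract: read raw data from two hypothetical sources.
orders = pd.read_csv("orders.csv")
customers = pd.read_csv("customers.csv")

# Transform: combine the sources on a shared key.
combined = orders.merge(customers, on="customer_id", how="left")

# Load: write the combined dataset out for analysis or visualization.
combined.to_parquet("orders_enriched.parquet")
```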
A recent survey from Wakefield Research finds that when enterprises build their own data pipelines, decision-making and revenue suffer.
When you build a data warehouse, an important question is how to ingest data from the source systems into the warehouse. If a table is small, you can simply reload it in full on a regular schedule; if it is large, a common technique is to perform incremental updates. This post demonstrates how you can improve incremental pipeline performance when ingesting data into BigQuery.
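As a rough illustration of the incremental pattern, one common approach is a MERGE statement that upserts only new or changed rows from a staging table into the target. The sketch below uses the google-cloud-bigquery client with hypothetical table and column names (orders, orders_staging, order_id, updated_at); it is not the specific technique from the post.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Upsert changed rows from staging into the target table instead of
# reloading the whole table. Table and column names are assumptions.
merge_sql = """
MERGE `project.dataset.orders` AS target
USING `project.dataset.orders_staging` AS source
ON target.order_id = source.order_id
WHEN MATCHED AND source.updated_at > target.updated_at THEN
  UPDATE SET order_status = source.order_status,
             updated_at   = source.updated_at
WHEN NOT MATCHED THEN
  INSERT (order_id, order_status, updated_at)
  VALUES (source.order_id, source.order_status, source.updated_at)
"""

# Run the merge; only new or changed rows are written.
client.query(merge_sql).result()
```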
At Airflow Summit 2021, Unravel’s co-founder and CTO, Shivnath Babu, and Senior Software Engineer Hari Nyer delivered a talk titled Lessons Learned while Migrating Data Pipelines from Enterprise Schedulers to Airflow. This story, along with the slides and videos included in it, comes from the presentation.
At Airflow Summit 2021, Unravel’s co-founder and CTO, Shivnath Babu, led a talk titled Data Pipeline HealthCheck for Correctness, Performance & Cost Efficiency. This story, along with the slides and videos included in it, comes from the presentation.