Analytics

What is High Cardinality?

High cardinality is a term that often surfaces in discussions about data management and analysis. It refers to a dataset that contains an unusually large number of unique values, which presents challenges for processing and analysis. In this blog, we will explore the concept of high cardinality data, its implications for data analysis, and strategies for managing and analyzing it effectively.
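As a quick illustration of the idea (a minimal sketch, not taken from the post itself; the DataFrame and column names are made up), the pandas snippet below counts distinct values per column. A column whose distinct count approaches the row count, such as a user ID, is high-cardinality and is correspondingly expensive to group by, index, or visualize.

```python
# Minimal sketch: measuring column cardinality with pandas.
# The "events" DataFrame and its columns are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "status":  ["ok", "ok", "error", "ok"],          # low cardinality: few distinct values
    "user_id": ["u-001", "u-002", "u-003", "u-004"], # high cardinality: unique per row
})

# nunique() counts distinct values per column; counts that approach the
# number of rows flag high-cardinality columns.
print(events.nunique())
```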

5 Factors to Assess When Choosing an E-Commerce ERP

Choosing the right e-commerce platform, like Shopify or Magento, is just the first step in planning your IT environment. Your e-commerce business can also benefit from ERP (enterprise resource planning) software that helps you streamline, automate, and optimize your business workflow.

Effortless Stream Processing on Any Cloud - Flink Actions, Terraform Support, and Multi-Cloud Availability

Since we launched the Open Preview of our serverless Apache Flink® service during last year’s Current, we’ve continued to add new capabilities to the product that make stream processing accessible and easy to use for everyone. In this blog post, we will highlight some of the key features added this year.

Introducing Apache Kafka 3.7

We are proud to announce the release of Apache Kafka® 3.7.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the release notes. See the Upgrading to 3.7.0 from any version 0.8.x through 3.6.x section in the documentation for the list of notable changes and detailed upgrade steps.

Apache Kafka 3.7: Official Docker Image and Improved Client Monitoring

Apache Kafka® 3.7 is here! On behalf of the Kafka community, Danica Fine highlights key release updates, with KIPs from Kafka Core, Kafka Streams, and Kafka Connect. Many more KIPs are part of this release; see the blog post for more details.

Marketplace Monetization: Turn Your Data and Apps into a Revenue Stream

Snowflake Marketplace is a vibrant resource, with hundreds of providers offering thousands of ready-to-try or ready-to-buy third-party data sets, applications and services. Many of these providers make their products available on Snowflake Marketplace for Snowflake customers to purchase — and they use our integrated Marketplace Monetization capabilities to simplify the process and speed up procurement and sales cycles.

Data Product Manager Essentials: Unleashing Innovation and Growth

Just a couple of decades ago, human resource departments didn’t look for data product managers. The job didn’t exist because organizations rarely needed professionals to oversee data products and the teams that build them. They might have employed data scientists, but they didn’t need people who focused more on management than on the data itself.

How to Create Big Number and Vertical Column Charts in Yellowfin

Welcome back to Yellowfin Japan’s ‘How to?’ blog series! In our previous blog, we walked through how to capture data using Yellowfin's Data Transformation flow, and the preparation and steps for creating reports using Yellowfin View. It may seem like a lot of routine groundwork, but as the number of reports you need to create grows, the importance of data preparation becomes far more apparent. So, what about after you’ve done all the setup? Well, it’s now time to create reports!

Automating ETL Tasks Effectively with Choreo

Connecting multiple systems and exchanging data among them is a frequent requirement in many business scenarios. This typically involves one or more source systems, an intermediary processor, and one or more destination systems. Some organizations invest in purpose-built solution suites such as Data Warehouse, Master Data Management (MDM), or Extract, Transform, Load (ETL) platforms, which, in theory, cover a wider spectrum of requirements.
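To make that source, intermediary processor, and destination pattern concrete, here is a generic sketch of a minimal ETL flow (an illustrative assumption, not Choreo code; the file names and fields are hypothetical):

```python
# Minimal ETL sketch: extract from a source system, transform in an
# intermediary step, and load into a destination system.
import csv
import json

def extract(path: str) -> list[dict]:
    """Read rows from a source system (here, a CSV file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Intermediary processing: keep only the fields we need and normalize types."""
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(rows: list[dict], path: str) -> None:
    """Write the processed rows to a destination system (here, a JSON file)."""
    with open(path, "w") as f:
        json.dump(rows, f, indent=2)

if __name__ == "__main__":
    load(transform(extract("orders.csv")), "orders.json")
```

In practice the source and destination would be databases, APIs, or message queues rather than local files, and a platform such as Choreo would handle scheduling, retries, and monitoring around this core flow.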