As we covered in part 1 of this blog series, Snowflake’s platform is architecturally different from almost every traditional database system and cloud data warehouse. Snowflake completely separates compute from storage, and both tiers of the platform are near-instantly elastic. With Snowflake, the need for advanced resource planning, agonizing over workload schedules, and blocking new workloads for fear of disk and CPU limitations simply goes away.
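As a rough illustration of that elasticity (the warehouse name and sizes below are hypothetical, not taken from the post), a Snowflake virtual warehouse can be created, resized, and suspended with a few SQL statements, independently of the data it queries:

```sql
-- Hypothetical warehouse name and sizes: compute is provisioned and resized
-- with SQL, separately from storage.
CREATE WAREHOUSE IF NOT EXISTS reporting_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND   = 60     -- suspend after 60 seconds of inactivity
  AUTO_RESUME    = TRUE;  -- wake up automatically when a query arrives

-- Scale up for a heavy run, then back down, without moving any data.
ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'LARGE';
ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'XSMALL';
```

Because storage lives in a separate tier, resizing compute this way does not move or copy the underlying data.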
The push to embrace cloud-based technologies has undoubtedly transformed IT infrastructures at every level of government. Federal, state, and local agencies have made significant strides in modernizing how data is collected, stored, and analyzed, all in service of their mission and in fulfillment of strategic IT mandates.
The only certainty in today’s world is change. And nowhere is that more apparent than in the way organizations consume data. A typical company might have thousands of analysts and business users accessing dashboards daily, hundreds of data scientists building and training models, and a large team of data engineers designing and running data pipelines. Each of these workloads has distinct compute and storage needs, and those needs can change significantly from hour to hour and day to day.
The COVID-19 pandemic has changed nearly everything. It has affected virtually all Americans and, as a result, every organization they interact with, both B2C and B2B. One industry whose operations have been turned upside down is the grocery industry. Grocery stores and their consumer packaged goods (CPG) suppliers and partners had to improvise and adapt nearly overnight to accommodate the changing demands of shoppers.
Customers of B2B companies rely on insights from applications to grow their business, secure their infrastructure, make business decisions, and more. Unless your B2B company offers a rich set of analytics within its product, your customers likely demand nightly data dumps from your application so they can analyze application data with their own BI stack.
Virtually every marketing organization is taking steps to become more data-driven, but there are considerable gaps between vision and reality. According to a 2018 Salesforce report, only 47% of marketers have a completely unified view of customer data sources. Meanwhile, customer data complexity is only increasing. According to Salesforce’s 2020 “State of Marketing” study, the median number of data sources leveraged by marketers is projected to jump by 50% between 2019 and 2021.
Since Snowflake announced general availability on Azure in November 2018, a growing number of customers have deployed their Snowflake accounts on Azure and, with that, adopted Power BI as their data visualization and analysis layer. As a result of these trends, customers want to understand the best practices for a successful deployment of Power BI with Snowflake.
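One practice commonly paired with BI tools such as Power BI, sketched here with hypothetical names and sizes rather than prescriptive guidance, is to give the reporting workload its own warehouse so dashboard concurrency does not contend with data pipelines:

```sql
-- Hypothetical example: a warehouse dedicated to Power BI dashboards, sized
-- and suspended independently of the warehouses that run data pipelines.
CREATE WAREHOUSE IF NOT EXISTS powerbi_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  AUTO_SUSPEND      = 300   -- seconds of idle time before suspending
  AUTO_RESUME       = TRUE
  MAX_CLUSTER_COUNT = 3;    -- scale out for dashboard concurrency
                            -- (multi-cluster warehouses require Enterprise Edition)
```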
This is the second post in a series about data modeling and data governance in the cloud from Snowflake’s partners at erwin. See the first post here. As you move data from legacy systems to a cloud data platform, you need to ensure the quality and overall governance of that data. Until recently, data governance was primarily an IT role that involved cataloging data elements to support search and discovery.
In August 2020, Snowflake announced several new features, all in preview, that make its cloud data platform easier to use, more powerful for sharing data, and more usable via Snowflake-supported languages. These innovations mean you can bring more workloads, more users, and more data to Snowflake, helping your organization solve its most demanding analytics challenges.

Multi-Cloud, Cross-Cloud, and Pattern-Matching Support in Snowpipe
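As a hedged sketch of the pattern-matching support (the pipe, stage, and table names below are hypothetical, not from the announcement), a pipe can restrict auto-ingest to staged files whose names match a regular expression:

```sql
-- Hypothetical pipe, stage, and table names. The PATTERN clause limits
-- auto-ingest to files whose paths match the regular expression.
CREATE PIPE IF NOT EXISTS sales_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO sales_raw
  FROM @sales_stage
  PATTERN = '.*sales_.*[.]csv'
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```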