Analytics

Why We Need the Data Fabric

Computer science loves abstraction, and now, as it turns out, so does data management. Abstraction means reducing something complex to something simpler that elegantly delivers its essence. Applications all over the world become more robust and easier to maintain and evolve when a simple interface is put in front of a complex service. The consumer of the service works only against that simple interface, which is a lot simpler than reaching directly under the hood and messing with the engine.

HBase Cluster Data Synchronization with the HashTable/SyncTable Tool

Replication (covered in this previous blog article) has been available for some time and is among the most used features of Apache HBase. Replicating data between clusters with different peers is a very common deployment, whether as a disaster recovery (DR) strategy or simply as a seamless way of replicating data between production, staging, and development environments.
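As a rough sketch of what the HashTable/SyncTable workflow looks like on the command line (table names, paths, and the ZooKeeper quorum below are placeholders, and exact options may vary by HBase version):

```shell
# Step 1: on the source cluster, run the HashTable MapReduce job to
# compute hashes over ranges of the table's data.
hbase org.apache.hadoop.hbase.mapreduce.HashTable \
  --batchsize=32000 my_table /hashes/my_table

# Step 2: on the target cluster, run SyncTable to compare those hashes
# against the target table; with --dryrun=true it only reports diverging
# ranges instead of repairing them.
hbase org.apache.hadoop.hbase.mapreduce.SyncTable \
  --dryrun=true \
  --sourcezkcluster=zk1.example.com:2181:/hbase \
  hdfs://source-cluster/hashes/my_table my_table my_table
```

Because only hashes of row ranges are shipped and compared, this is far cheaper than copying whole tables when the two clusters have mostly converged.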

Migrating Big Data to the Cloud

Unravel Data helps a lot of customers move big data operations to the cloud. Chris Santiago is Global Director of Solution Engineering here at Unravel. So Unravel, and Chris, know a lot about what can make these migrations fail. Chris and intrepid Unravel Data marketer Quoc Dang recently delivered a webinar, Reasons why your Big Data Cloud Migration Fails and Ways to Overcome. You can view the webinar now, or read on to learn more about how to overcome these failures.

Why Enhanced Visibility Matters for your Databricks Environment

Databricks has become a popular computing framework for big data as organizations increase their investment in moving data applications to the cloud. With that journey comes the promise of better collaboration, processing, and scaling of applications in the cloud. However, customers are finding unexpected costs eating into their cloud budgets, as monitoring and observability tools like Ganglia, Grafana, and the Databricks console tell only part of the story for chargeback/showback reports.