
Data Lakes

How Enterprise Data Lakes Help Expose Data's True Value

For all of the buzz surrounding both artificial intelligence and data-driven management, many companies have seen mixed results in their quest to harness the value of enterprise data. To avoid these pitfalls, we combined best-of-breed and proprietary solutions to develop our enterprise data platform (EDP), focusing much of our attention on smart changes to the technology, culture and processes surrounding data lakes.

From Data Lake To Enterprise Data Platform: The Business Case Has Never Been More Compelling

Companies have had only mixed results in their decades-long quest to make better decisions by harnessing enterprise data. But as a new generation of technologies makes it easier than ever to unlock the value of business information, change is coming. We’ve already reaped gains at Hitachi Vantara, where I run a global IT team that supports 11,000 employees and helps more than 10,000 customers rapidly scale digital businesses.

Data Lake Opportunities: Rethinking Data Analytics Optimization [VIDEO]

Data lakes have challenges. And until you solve those problems, efficient, cost-effective data analytics will remain out of reach. That’s why ChaosSearch is rethinking the way businesses manage and analyze their data. As Mike Leone, Senior Analyst for Data Platforms, Analytics and AI at ESG Global, and Thomas Hazel, ChaosSearch’s founder and CTO, explained in a recent webinar, ChaosSearch offers a data analytics optimization solution that makes data faster and cheaper to store and analyze.

Data Lake Challenges: Or, Why Your Data Lake Isn't Working Out [VIDEO]

Since the data lake concept emerged more than a decade ago, data lakes have been pitched as the solution to many of the woes surrounding traditional data management solutions, like databases and data warehouses. Data lakes, we have been told, are more scalable, better able to accommodate widely varying types of data, cheaper to build and so on. Much of that is true, at least theoretically.

Using Xplenty with Parquet for Superior Data Lake Performance

Building a data lake in Amazon S3 and using Amazon Redshift Spectrum to query the data from a Redshift cluster is a common practice. However, when it comes to boosting performance, there are some tricks that are worth learning. One of those is storing data in Parquet format, which Redshift considers a best practice. Here's how to use Parquet format with Xplenty for the best data lake performance.