
Latest Posts

Tips to optimize Spark jobs to improve performance

Summary: Sometimes the insight you’re shown isn’t the one you were expecting. Unravel DataOps observability provides the right, actionable insights to unlock the full value and potential of your Spark application. One of Unravel’s key features is automated insights: Unravel analyzes a finished Spark job and presents its findings to the user. Sometimes those findings are layered and not exactly what you expect.

Kafka best practices: Monitoring and optimizing the performance of Kafka applications

Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. Administrators, developers, and data engineers who use Kafka clusters struggle to understand what is happening in their Kafka implementations.

Unravel for Google BigQuery Datasheet

Poorly written queries and rogue queries can create a nightmare for data teams when it comes to fixing and preventing performance issues, and as a result, costs can quickly spiral out of control. Whether you want to move your on-premises data to Google BigQuery or make the most of your existing Google BigQuery investment, Unravel can help businesses that struggle to find the optimal balance of performance and cost in Google BigQuery.

Why Legacy Observability Tools Don't Work for Modern Data Stacks

Whether they know it or not, every company has become a data company. Data is no longer just a transactional byproduct, but a transformative enabler of business decision-making. In just a few years, modern data analytics has gone from being a science project to becoming the backbone of business operations to generate insights, fuel innovation, improve customer satisfaction, and drive revenue growth. But none of that can happen if data applications and pipelines aren’t running well.

Roundtable Recap: DataOps Just Wanna Have Fun

We like to keep things light at Unravel. In a recent event, we hosted a group of industry experts for a night of laughs and drinks as we discussed cloud migration and heard from our friends at Don’t Tell Comedy. Unravel VP of Solutions Engineering Chris Santiago and AWS Sr. Worldwide Business Development Manager for Analytics Kiran Guduguntla moderated a discussion with data professionals from Black Knight, TJX Companies, AT&T Systems, Georgia Pacific, and IBM, among others.

Beyond Observability for the Modern Data Stack

The term “observability” means many things to many people. A lot of energy has been spent—particularly among vendors offering an observability solution—in trying to define what the term means in one context or another. But instead of getting bogged down in the “what” of observability, I think it’s more valuable to address the “why.” What are we trying to accomplish with observability? What is the end goal?

Webinar Recap: Functional strategies for migrating from Hadoop to AWS

In a recent webinar, Functional (& Funny) Strategies for Modern Data Architecture, we combined comedy and practical strategies for migrating from Hadoop to AWS. Unravel Co-Founder and CTO Shivnath Babu moderated a discussion with AWS Principal Architect, Global Specialty Practice, Dipankar Ghosal and WANdisco CTO Paul Scott-Murphy. Here are some of the key takeaways from the event.

Building vs. Buying Your Modern Data Stack: A Panel Discussion

One of the highlights of the DataOps Unleashed 2022 virtual conference was a roundtable panel discussion on building versus buying when it comes to your data stack. Build versus buy is a question for all layers of the enterprise infrastructure stack. But in the last five years — even in just the last year alone — it’s hard to think of a part of IT that has seen more dramatic change than that of the modern data stack.

Webinar Recap: Optimizing and Migrating Hadoop to Azure Databricks

The benefits of moving your on-prem Spark/Hadoop environment to Databricks are undeniable. A recent Forrester Total Economic Impact (TEI) study reveals that deploying Databricks can pay for itself in less than six months, with a 417% ROI from cost savings and increased revenue and productivity over three years. But without the right methodology and tools, such a modernization/migration can be a daunting task.