
Latest Blogs

HBase Clusters Data Synchronization with HashTable/SyncTable tool

Replication (covered in this previous blog article) has been available for a while and is among the most used features of Apache HBase. Having clusters replicate data to different peers is a very common deployment, whether as a disaster recovery (DR) strategy or simply as a seamless way of keeping data in sync between production, staging, and development environments.

The embedded analytics maturity curve - where does your software or app rank?

An exceptional embedded analytics offering is underpinned by the right strategy and framework - and this starts with a clear vision. To maximize the value of your data assets, you need to recognize, and then address, where your product’s BI maturity level falls short. That starts with an honest look at where your analytics development capability and tooling are today.

Migrating Big Data to the Cloud

Unravel Data helps many customers move big data operations to the cloud, so Unravel, and its Global Director of Solution Engineering, Chris Santiago, know a lot about what can make these migrations fail. Chris and intrepid Unravel Data marketer Quoc Dang recently delivered a webinar, Reasons why your Big Data Cloud Migration Fails and Ways to Overcome. You can view the webinar now, or read on to learn how to avoid these failures.

1 Simple Trick To Massively Improve Automation Efficiency

Automated UI testing is a daily struggle for efficiency and reliability. A single misconfigured line of code can cost teams hours of lost feedback time and test error triaging, potentially costing companies hundreds of thousands of dollars. In this case study, we will see how interactions with only two web elements led to a 34% degradation in test execution time.
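
To make the stakes concrete, here is a hypothetical illustration (not the actual code from the case study) of how one line can quietly tax a whole suite. With Selenium’s implicit wait set high, every lookup that legitimately finds nothing blocks for the full timeout:

    // Hypothetical sketch using selenium-webdriver; the timeout and selector
    // are illustrative assumptions, not the case study's configuration.
    const { Builder, By } = require('selenium-webdriver');

    async function run() {
      const driver = await new Builder().forBrowser('chrome').build();

      // The single misconfigured line: every element lookup that finds
      // nothing now blocks for 30 seconds before giving up.
      await driver.manage().setTimeouts({ implicit: 30000 });

      await driver.get('https://example.com');

      // Asserting an element is absent is a routine step; with the implicit
      // wait above, this one check stalls 30s before returning an empty list.
      const spinners = await driver.findElements(By.css('.loading-spinner'));
      console.log(`spinner gone: ${spinners.length === 0}`);

      await driver.quit();
    }

    run();

Multiply one such check by hundreds of tests per run and the lost feedback time adds up fast.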

Why Hiring a Data Analyst Won't Solve Your Business Problems

As businesses increasingly leverage data-driven decision-making, the ability to use and understand data at the company-wide level becomes mission critical. While tech behemoths like Netflix, Airbnb, and Spotify have strong data cultures built over the last decade, most companies still face challenges getting up and running with data.

JavaScript Internals: Garbage Collection

Garbage collection (GC) is a very important process for all programming languages, whether it’s done manually (in low-level languages like C) or automatically. The curious thing is that most of us barely stop to think about how JavaScript, which as a programming language also needs to collect its garbage, does the trick. Like the majority of high-level languages, JavaScript allocates its objects and values in memory and releases them when they’re no longer needed. But how?
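
The post digs into the details, but the core idea is reachability: the engine periodically traces from root references and reclaims whatever it can no longer reach. Here is a minimal JavaScript sketch (illustrative only, not taken from the article) of values becoming eligible for collection:

    // An object stays alive as long as some root can reach it.
    let user = { name: 'Ada' };   // allocated on the heap, reachable via `user`
    let alias = user;             // a second reference to the same object

    user = null;                  // still reachable through `alias`
    alias = null;                 // unreachable now: eligible for collection

    // Mark-and-sweep traces from roots, so even mutually referencing objects
    // are collected once nothing outside the cycle points at them.
    function makeCycle() {
      const a = {};
      const b = { a };
      a.b = b;                    // a and b reference each other
    }
    makeCycle();                  // after return, the whole cycle is unreachable

Exactly when the engine reclaims that memory is up to the runtime; standard JavaScript exposes no way to force a collection from user code.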

The Best Guide to Docker, Kubernetes, & Container-Based Systems

We’re taking a closer look at container-based systems. The rise of microservices-based applications has allowed global enterprises – like Amazon.com, Netflix, Uber, and Airbnb – to achieve unprecedented market dominance. Central to making these microservices-based applications possible is the concept of containerization, and at the core of containerization are Docker and Kubernetes – the two most widespread solutions for building and managing container-based applications.

Re-thinking The Insurance Industry In Real-Time To Cope With Pandemic-scale Disruption

The insurance industry is in uncharted waters, and COVID-19 has taken us where no algorithm has gone before. Today’s models, norms, and averages are being rewritten on the fly, with insurers forced to cope with the inevitable conflict between old standards and the new normal.

Understanding Snowflake's Resource Optimization Capabilities

The only certainty in today’s world is change. And nowhere is that more apparent than in the way organizations consume data. A typical company might have thousands of analysts and business users accessing dashboards daily, hundreds of data scientists building and training models, and a large team of data engineers designing and running data pipelines. Each of these workloads has distinct compute and storage needs, and those needs can change significantly from hour to hour and day to day.
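
One pattern Snowflake supports for exactly this situation is giving each workload its own right-sized virtual warehouse that suspends when idle and resumes on demand. Below is a minimal sketch using the snowflake-sdk Node.js connector; the account details, warehouse names, and sizes are illustrative assumptions, not recommendations:

    // Hypothetical sketch: one warehouse per workload, each sized and
    // suspended independently so idle compute stops accruing cost.
    const snowflake = require('snowflake-sdk');

    const connection = snowflake.createConnection({
      account: 'my_account',          // assumption: your account identifier
      username: 'my_user',
      password: process.env.SNOWFLAKE_PASSWORD,
    });

    connection.connect((err) => {
      if (err) throw err;
      const statements = [
        // Bursty BI dashboards: small, suspend after a minute of idleness.
        `CREATE WAREHOUSE IF NOT EXISTS dashboards_wh
           WITH WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE`,
        // Heavy data pipelines: larger, with a longer idle window.
        `CREATE WAREHOUSE IF NOT EXISTS pipelines_wh
           WITH WAREHOUSE_SIZE = 'LARGE' AUTO_SUSPEND = 300 AUTO_RESUME = TRUE`,
      ];
      for (const sqlText of statements) {
        connection.execute({ sqlText });
      }
    });

Because each warehouse scales and suspends on its own schedule, hour-to-hour swings in one workload never force over-provisioning for the others.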