
Latest News

This year has been a game changer for Yellowfin

The launch of Signals has been a complete game-changer for us this year. Yellowfin is doing something unique in the marketplace, and we're winning some great deals because of it. Signals is an automated data discovery product that delivers alerts to users about critical changes in their business. It's not about dashboards; this is a completely different way of consuming analytics.

Data Pipelines and the Promise of Data

The flow of data can be perilous. Any number of problems can develop during the transport of data from one system to another. Data flows can hit bottlenecks, resulting in latency; data can become corrupted; and datasets may conflict or contain duplicates. The more complex the environment and the more intricate the requirements, the greater the potential for these problems. Volume increases that potential further. Transporting data between systems often requires several steps.
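The multi-step transport the paragraph describes can be sketched in miniature. This is a minimal, hypothetical pipeline (the function names and record shape are my own, not from any of the products mentioned), showing where corruption checks and deduplication typically sit between source and sink:

```python
import hashlib
import json

def extract(records):
    """Step 1: pull raw records from a hypothetical source system."""
    return list(records)

def validate(records):
    """Step 2: drop records that fail a basic integrity check
    (a stand-in for corruption detection)."""
    return [r for r in records if isinstance(r.get("id"), int)]

def deduplicate(records):
    """Step 3: remove exact duplicates, one common source of
    conflict between datasets."""
    seen, unique = set(), []
    for r in records:
        key = hashlib.sha256(json.dumps(r, sort_keys=True).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def load(records, sink):
    """Step 4: deliver the cleaned records to the target system."""
    sink.extend(records)
    return sink

raw = [{"id": 1, "v": "a"}, {"id": 1, "v": "a"}, {"id": "bad", "v": "b"}]
sink = []
load(deduplicate(validate(extract(raw))), sink)
print(sink)  # only the one clean, unique record survives
```

Each added stage is also an added place for the flow to stall or fail, which is why complexity and volume multiply the risks described above.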

Data Readiness and Quality: The Big New Challenges for all Companies

We live in a digital age that is increasingly driven by algorithms and data. All of us, whether at home or at work, increasingly relate to one another via data. It's a systemic restructuring of society, our economy and our institutions, the likes of which we haven't seen since the industrial revolution. In the business world we commonly refer to it as digital transformation. In this algorithmic world, data governance is becoming a major challenge.

Meeting SLAs for Data Pipelines on Amazon EMR With Unravel

A household name in global media analytics – let's call them MTI – is using Unravel to support their data operations (DataOps) on Amazon EMR, to establish and protect their internal service level agreements (SLAs), and to get the most out of their Spark applications and pipelines. Amazon EMR was an easy choice for MTI as the platform to run all their analytics. To start with, getting up and running is simple: there is nothing to install and no configuration is required.

Generating a Heat Map with Twitter data using Pipeline Designer - Part 1

For me, the most exciting thing about Pipeline Designer is the way it makes working with streaming data easy. Traditionally this has required a completely different way of thinking if you have come from a "batch" world. So when Pipeline Designer was released, the first thing I wanted to do was find a good streaming data source and do something fun and interesting with the data. Twitter was my first choice of streaming data source.
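The "different way of thinking" can be shown in a few lines. This is an illustrative sketch only (it does not use the Pipeline Designer or Twitter APIs): a batch job sees the whole dataset at once, while a streaming job updates state incrementally as each event arrives:

```python
from collections import Counter

# Batch mindset: the full dataset is available up front.
tweets = ["#data rocks", "#data and #streaming", "#streaming"]
batch_counts = Counter(
    tag for t in tweets for tag in t.split() if tag.startswith("#")
)

# Streaming mindset: events arrive one at a time; state is updated
# incrementally, and you never "see" the whole dataset at once.
stream_counts = Counter()

def on_tweet(text):
    for tag in text.split():
        if tag.startswith("#"):
            stream_counts[tag] += 1

for t in tweets:  # simulate an unbounded feed
    on_tweet(t)

print(batch_counts == stream_counts)  # True for this finite sample
```

On a truly unbounded feed there is no "end of the data", so aggregates like these are usually scoped to time windows rather than computed over everything.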

Making Data Work With the Unravel Partner Program

Modern data apps are increasingly moving to the cloud because of their elastic compute demands, skills shortages, and the complexity of managing big data on premises. And while more and more organizations are taking their data apps to the cloud to leverage its flexibility, they are also finding it very challenging to assess application needs, and to migrate and optimize their data, without compromising performance and cost targets.

How to Unlock Your SAP Data Potential for Accelerated Analytics - Part 1

Many SAP customers have been running SAP on premise for decades, and many have struggled to harness the full potential of the business-process data running inside SAP. Combining it with other enterprise and external data is what yields augmented insight and greater agility in a digital era that keeps moving at an exponential pace, with no sign of slowing down.

Creating Avro schemas for Pipeline Designer with Pipeline Designer

I have had the privilege of playing with and following the progress of Pipeline Designer for a while now, and I am really excited about this new tool. If you haven't seen it yet, then don't delay and get your free trial now... actually, maybe read this blog first ;-) Pipeline Designer is an incredibly intuitive, web-based, batch and stream processing integration tool.
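For readers who haven't met Avro before the full post: an Avro schema is just a JSON document. Here is a small, hypothetical record schema (the `Customer` name and fields are my own example, not output from Pipeline Designer), expressed in Python for convenience:

```python
import json

# A hypothetical Avro record schema for a simple customer dataset.
# Avro schemas are plain JSON documents like this one.
schema = {
    "type": "record",
    "name": "Customer",
    "namespace": "com.example",
    "fields": [
        {"name": "id", "type": "long"},
        {"name": "email", "type": "string"},
        # A union with "null" makes the field optional.
        {"name": "loyalty_tier", "type": ["null", "string"], "default": None},
    ],
}

print(json.dumps(schema, indent=2))
```

Because the schema is just data, a tool that can emit JSON can emit Avro schemas, which is the trick the post's title hints at.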