
Blog

How Traveloka built a Data Provisioning API on a BigQuery-based microservice architecture

Building an advanced data ecosystem is the dream of any data team, yet realizing it means understanding how the business will need to store and process its data. As Traveloka’s data engineers, one of our most important obligations is to tailor our data delivery tools to each individual team in the company, so that the business can benefit from the data it generates.

Cloudera 2.0: Cloudera and Hortonworks Merge to form a Big Data Super Power

We’ve all dreamed of going to bed one day and waking up the next with superpowers – stronger, faster, perhaps even able to fly. Yesterday, that is exactly what happened to Tom Reilly and the people at Cloudera and Hortonworks. On October 2nd they went to bed as two rivals vying for leadership in the big data space. In the morning they woke up as Cloudera 2.0, a $700M firm with a clear leadership position. “From the edge to AI”… to infinity and beyond!

What I've learnt from 15 years of mentoring

As an entrepreneur and the CEO of a startup, I think it’s vital to have a mentor. You often come to critical junctures, and that’s when it can be invaluable to speak to someone. You can’t talk to your colleagues, though, because you need to keep them motivated and the organization growing. That’s why having an external mentor is very helpful.

How to transfer BigQuery tables between locations with Cloud Composer

BigQuery is a fast, highly scalable, cost-effective, and fully managed enterprise data warehouse for analytics at any scale. As BigQuery has grown in popularity, one question that often arises is how to copy tables across locations in an efficient and scalable manner. BigQuery imposes some data-management limitations, one being that the destination dataset must reside in the same location as the source dataset containing the table being copied.
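The standard workaround is to stage the data through Cloud Storage: export the table to a bucket in the source location, copy the files to a bucket in the destination location, then load them into the destination dataset. Below is a minimal sketch of that pattern as a Cloud Composer (Airflow) DAG; the project, table, and bucket names are hypothetical, and the imports assume the Google provider package for Airflow 2.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.bigquery_to_gcs import BigQueryToGCSOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.transfers.gcs_to_gcs import GCSToGCSOperator

# Hypothetical names: substitute your own projects, datasets, and buckets.
SOURCE_TABLE = "my-project.us_dataset.events"   # source dataset lives in US
DEST_TABLE = "my-project.eu_dataset.events"     # destination dataset lives in EU
US_BUCKET = "my-us-staging-bucket"              # bucket in the source location
EU_BUCKET = "my-eu-staging-bucket"              # bucket in the destination location

with DAG(
    dag_id="copy_bq_table_across_locations",
    start_date=datetime(2024, 1, 1),
    schedule=None,  # trigger on demand (Airflow 2.4+; use schedule_interval on older versions)
    catchup=False,
) as dag:
    # 1. Export the table to a bucket colocated with the source dataset.
    export = BigQueryToGCSOperator(
        task_id="export_to_gcs",
        source_project_dataset_table=SOURCE_TABLE,
        destination_cloud_storage_uris=[f"gs://{US_BUCKET}/events/part-*.avro"],
        export_format="AVRO",  # Avro keeps the schema with the data
    )

    # 2. Copy the exported files into a bucket in the destination location.
    transfer = GCSToGCSOperator(
        task_id="copy_between_buckets",
        source_bucket=US_BUCKET,
        source_object="events/part-*.avro",
        destination_bucket=EU_BUCKET,
        destination_object="events/part-",  # prefix before the wildcard is rewritten
    )

    # 3. Load the staged files into the destination dataset.
    load = GCSToBigQueryOperator(
        task_id="load_into_destination",
        bucket=EU_BUCKET,
        source_objects=["events/part-*.avro"],
        destination_project_dataset_table=DEST_TABLE,
        source_format="AVRO",
        write_disposition="WRITE_TRUNCATE",
    )

    export >> transfer >> load
```

After a successful load, the staging objects can be cleaned up (for example with GCSDeleteObjectsOperator) so you are not paying to store the data twice.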

3rd-Generation Business Intelligence (Part I)

I’m seated here on the balcony of our rented apartment in Chamonix, looking up at Mont Blanc. It’s an incredible view, but its beauty is deceiving: as anyone who has climbed knows, all too often the summit is not the one you can see. In the case of Mont Blanc, the perspective is such that the Dôme du Goûter seen in the foreground often looks bigger. I can assure you it’s not.

Delivering Customised Workspaces with Enhanced BI to the Finance Sector

Continuing my series of blogs examining the products, services and client benefits borne out of Yellowfin’s partnerships, I’m pleased to introduce our new fintech partner, ChartIQ. ChartIQ provides HTML5 components and the Finsemble integration platform to banks, brokerages, trading platforms, and financial portals worldwide.

Top 5 Automated Testing Tools for Android: October 2018

Developers are a fussy bunch. We sweat the tiniest details, and every aspect of our product has to be just right. We don’t like entrusting our precious designs to anyone outside our inner circle. Yet we’re increasingly delegating key quality assurance (QA) tasks to robots. The market for automated testing products is expected to be worth $20 billion by 2023 – three times as much as now.

BigQuery and surrogate keys: a practical approach

When working with tables in data warehouse environments, it is fairly common to come across a situation in which you need to generate surrogate keys. A surrogate key is a system-generated identifier that uniquely identifies a record within a table. Why do we need surrogate keys? Quite simply: unlike natural keys, they persist over time (they are not tied to any business meaning) and they allow for unlimited values.
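As an illustration, here is a minimal Python sketch of two common ways to mint surrogate keys in BigQuery: GENERATE_UUID() for a random, globally unique identifier, and a FARM_FINGERPRINT hash over the natural-key columns when the key must be deterministic across runs. The project, dataset, and column names are hypothetical.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Hypothetical source table and columns; adjust to your own schema.
sql = """
SELECT
  -- Random key: globally unique, but different on every run.
  GENERATE_UUID() AS surrogate_key_uuid,
  -- Deterministic key: the same natural-key values always hash to the same INT64.
  FARM_FINGERPRINT(CONCAT(
    CAST(customer_id AS STRING), '|', CAST(order_date AS STRING)
  )) AS surrogate_key_hash,
  *
FROM `my-project.my_dataset.orders`
"""

for row in client.query(sql).result():
    print(row.surrogate_key_uuid, row.surrogate_key_hash)
    break  # preview a single row
```

GENERATE_UUID() is the simplest option, but because it is non-deterministic, re-running a pipeline mints brand-new keys; the fingerprint approach keeps keys stable across re-runs at the cost of a vanishingly small collision risk.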