Qlik hits reset button, rolls out new cloud, AI, and developer capabilities
Qlik introduces a new management team, licensing changes, and new hybrid/multi-cloud, augmented intelligence, and development features. Here's my take from Qonnections 2018.
The appearance of Hadoop and its related ecosystem was like a Cambrian explosion of open source tools and frameworks for processing large amounts of data. But companies that invested early in big data ran into challenges. For example, they needed engineers with expert knowledge not only of distributed systems and data processing, but also of Java and the related JVM-based languages and tools.
Docker is a platform for packaging, deploying, and running applications. Docker applications run in containers that can be used on any system: a developer’s laptop, systems on premises, or in the cloud. Containerization is a technology that’s been around for a long time, but it’s seen new life with Docker. Docker packages applications as images that contain everything needed to run them: code, runtime environment, libraries, and configuration.
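To make that portability concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package); the image name and command are purely illustrative, and any Docker client would do just as well.

```python
# Minimal sketch: run a throwaway container with the Docker SDK for Python.
# Assumes the `docker` package (pip install docker) and a running Docker daemon.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image bundles code, runtime, libraries, and configuration, so the same
# command behaves the same on a laptop, on premises, or in the cloud.
output = client.containers.run(
    "python:3.11-slim",                                    # illustrative public image
    ["python", "-c", "print('hello from a container')"],
    remove=True,                                           # clean up the container afterwards
)
print(output.decode().strip())
```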
As developers at Mobile Jazz, we know the headache of finding, reproducing, and fixing bugs in mobile apps. Several years ago, we got so fed up debugging mobile apps remotely that we started building a solution for ourselves. We did it by creating a way to remotely access the logging facilities of users’ devices. We built a makeshift server for application logs that allowed us to fix bugs across devices and continents.
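We won’t reproduce our production setup here, but a minimal sketch of the idea, a tiny HTTP endpoint that receives log lines from devices and appends them to a per-device file, might look like the following. Flask, the endpoint path, and the field names are illustrative assumptions, not our actual API.

```python
# Minimal sketch of a remote log-collection endpoint (illustrative only).
# Assumes Flask is installed: pip install flask
from flask import Flask, request

app = Flask(__name__)

@app.route("/logs", methods=["POST"])
def collect_logs():
    payload = request.get_json(force=True)
    device_id = payload.get("device_id", "unknown")   # hypothetical field name
    lines = payload.get("lines", [])                  # hypothetical field name
    for line in lines:
        # Append each log line to a per-device file so bugs can be
        # investigated later, across devices and continents.
        with open(f"{device_id}.log", "a") as f:
            f.write(line + "\n")
    return {"stored": len(lines)}

if __name__ == "__main__":
    app.run(port=8080)
```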
“Be Prepared, because we are about to Unleash the Beast” is how I finished the Qlik Research demo at Qonnections 2018. I refer to Qlik’s new Cognitive Engine as the “Beast,” and here is why...
Recently, I've seen a lot of articles and conversation around DevOps. DevOps is a set of practices that automates the processes between software development and IT teams in order to enable shorter development cycles, increased deployment frequency, and more dependable releases, all in conjunction with business needs.
Are you familiar with Apache Beam? If not, don’t be ashamed: as one of the newer projects developed by the Apache Software Foundation, first released in June 2016, Apache Beam is still relatively new to the data processing world. As a matter of fact, it wasn’t until recently, when I started to work closely with Apache Beam, that I loved to learn, and learned to love, everything about it.
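For readers who haven’t seen it yet, here is a minimal sketch of a Beam pipeline using the Python SDK; the word-count-style data and the default local runner are illustrative assumptions, not a recommendation for production.

```python
# Minimal Apache Beam sketch using the Python SDK's local (direct) runner.
# Assumes: pip install apache-beam
import apache_beam as beam

with beam.Pipeline() as pipeline:  # defaults to the DirectRunner locally
    (
        pipeline
        | "Create" >> beam.Create(["to be or not to be"])
        | "Split" >> beam.FlatMap(str.split)            # split lines into words
        | "PairWithOne" >> beam.Map(lambda w: (w, 1))    # (word, 1) pairs
        | "CountPerWord" >> beam.CombinePerKey(sum)      # sum counts per word
        | "Print" >> beam.Map(print)
    )
```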
In this blog, we are going to take a look at Apache Spark performance and tuning. This is a common discussion among almost everyone who uses Apache Spark, even outside of Talend. When developing and running your first Spark jobs, the following questions always come to mind.
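As a concrete starting point, here is a minimal PySpark sketch showing where one of the most common tuning knobs lives; the shuffle-partition count and the sample data are illustrative assumptions, not recommendations.

```python
# Minimal PySpark sketch: where common tuning settings are configured
# (values are illustrative only). Assumes: pip install pyspark
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    .config("spark.sql.shuffle.partitions", "64")   # default is 200; tune per workload
    .getOrCreate()
)

# Small illustrative dataset
df = spark.createDataFrame([(i, i % 3) for i in range(1000)], ["value", "key"])

df = df.cache()                      # cache only if the DataFrame is reused
agg = df.groupBy("key").count()      # this aggregation triggers a shuffle
agg.show()

spark.stop()
```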