Going Serverless with Talend through CI/CD and Containers

Continuous integration, delivery and deployment, known as CI/CD, has become such a critical piece of every successful software project that its benefits are hard to deny. At the same time, containers are everywhere right now and are very popular among developers. In practice, CI/CD allows teams to gain confidence in the applications they are building by continuously testing and validating them.

Stop being an Excel slave

All marketers have to deal with essentially the same problem when managing their online campaigns and looking for the right attribution model: the data they need for evaluation is scattered across various systems, and collecting and collating it takes too much time. If they decide to use solutions intended to help them handle this problem, they often find that such solutions are not, in themselves, enough.

Accelerating your Delivery Pipeline with SmartBear and Jenkins

With limited time for manual testing in a continuous delivery pipeline, automated testing coupled with CI/CD infrastructure like Jenkins is the preferred way to ensure quality at speed. With SmartBear and Jenkins, software teams can bake their UI and API tests right into their pipeline and deliver continuous quality; a sketch of such a pipeline-driven test follows.
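As an illustration only (not code from the post), here is a minimal sketch of an automated API smoke test in Java, using JUnit 5 and the standard java.net.http client. A test like this could be executed by a Jenkins stage on every commit; the endpoint URL and class name are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical API smoke test; the URL below is a placeholder, not from the post.
class ApiSmokeTest {

  @Test
  void healthEndpointRespondsWith200() throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://example.com/api/health")) // placeholder endpoint
        .GET()
        .build();

    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());

    // A failing assertion fails the build, and therefore the pipeline stage.
    assertEquals(200, response.statusCode());
  }
}
```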

3 things you should never measure in BI

When I speak to people who are thinking about implementing BI, they are often overwhelmed by all the things they could measure. Many start by wanting to measure everything, which doesn’t necessarily help them. That’s because there’s an inherent cost in measuring things – everything you report and track creates an ongoing burden that your organization has to maintain. That’s why it’s important to be selective about what you measure from the get-go.

How to Develop a Data Processing Job Using Apache Beam - Streaming Pipelines

In our last blog, we talked about developing data processing jobs using Apache Beam. This time we are going to talk about one of the most in-demand topics in the modern Big Data world: processing of streaming data. The principal difference between batch and streaming is the type of input data source. When your data set is limited (even if it is huge in terms of size) and is not being updated during processing, you would likely use a batching pipeline; when the input is unbounded and keeps arriving while the pipeline runs, a streaming pipeline is the natural choice.
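To make the distinction concrete, here is a minimal sketch of a streaming pipeline with the Beam Java SDK. It is not code from the post: GenerateSequence merely stands in for a real unbounded source such as Kafka or Pub/Sub, and a fixed one-minute window lets the pipeline emit per-window counts while input keeps arriving.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.GenerateSequence;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.Duration;

public class StreamingSketch {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply("UnboundedSource",
            // Emits ~10 elements per second forever, standing in for Kafka or Pub/Sub.
            GenerateSequence.from(0).withRate(10, Duration.standardSeconds(1)))
     // Windowing makes aggregation possible over an infinite input:
     // elements are grouped into one-minute slices.
     .apply("FixedWindows", Window.into(FixedWindows.of(Duration.standardMinutes(1))))
     // Count the elements that arrived in each window.
     .apply("CountPerWindow", Count.<Long>globally().withoutDefaults())
     .apply("Print", MapElements
         .into(TypeDescriptors.strings())
         .via((Long n) -> {
           String line = "elements in window: " + n;
           System.out.println(line);
           return line;
         }));

    // A streaming pipeline runs until cancelled; waitUntilFinish() will block.
    p.run().waitUntilFinish();
  }
}
```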