
Technology

Cloud, big data analytics & AI are driving change in Finance

How Cloud Computing is evolving alongside Big Data, Analytics, and AI in Financial Services. New technologies such as Artificial Intelligence (AI), Cloud Computing, big data, and prescriptive analytics are changing the way the Financial Services sector does business. With evolving tech come both new opportunities and new risks, and companies in the space must innovate and embrace new ideas as shifting business conditions and changing consumer preferences dictate new norms.

Accelerating my COVID-19 DL project - User Story

The recent global pandemic caused by the COVID-19 virus has threatened the sanctity of human life and the well-being of our societies at large. Much like in times of war, the pandemic has also given us the opportunity to appreciate the things we take for granted, such as the health workers, food suppliers, drivers, grocery store clerks, and many others on the front lines keeping us safe at this difficult time. Salute!

What is Serverless Computing?

The shift to cloud computing fundamentally changed the way software is built and consumed by developers. In the serverless model, multiple small code snippets, or functions, are logically connected to form a complex application. Since the platform deals with one function at a time, and functions are the fundamental deployment units, this model is often called Functions as a Service (FaaS).
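As a rough illustration, a FaaS deployment unit can be as small as a single exported handler. The sketch below follows the AWS Lambda Node.js handler convention; the event shape and field names are assumptions for illustration only.

```typescript
// A minimal FaaS-style function, sketched against the AWS Lambda
// Node.js handler convention (other platforms use similar shapes).
// The event fields ("name") are illustrative assumptions, not a real API.
interface GreetEvent {
  name: string;
}

interface GreetResponse {
  statusCode: number;
  body: string;
}

// The platform invokes this single exported function per request;
// the function itself is the unit of deployment and scaling.
export const handler = async (event: GreetEvent): Promise<GreetResponse> => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${event.name}!` }),
  };
};
```

The platform, not the application, decides when and where this function runs and how many copies to scale out.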

Why our new Streaming SQL opens up your data platform

SQL has long been the universal language for working with data. In fact, it’s more relevant today than it was 40 years ago. Many data technologies were born without it and inevitably ended up adopting it later on. Apache Kafka is one of them. At Lenses.io, we were the first in the market to develop a SQL layer for Kafka (yes, before KSQL) and to integrate it into a few different areas of our product for different workloads.
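To make the idea concrete, here is an illustrative sketch of what querying a Kafka topic through a SQL layer can look like. The topic name, fields, and filter are hypothetical, and the exact dialect varies by engine (Lenses SQL, KSQL, and others differ), so the query is shown only as an embedded string with no product-specific submission call.

```typescript
// Illustrative only: a streaming-style SQL query of the kind a SQL layer
// for Kafka might accept. Topic ("payments") and fields are hypothetical.
const streamingQuery = `
  SELECT payment_id, customer_id, amount
  FROM payments
  WHERE amount > 1000
`;

// Submission is product-specific (REST endpoint, WebSocket, or UI),
// so this sketch stops at constructing the query.
console.log(streamingQuery.trim());
```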

Why SQL is your key to querying Kafka

If you’re an engineer exploring a streaming platform like Kafka, chances are you’ve spent some time trying to work out what’s going on with the data in there. And if you’re introducing Kafka to a team of data scientists or developers unfamiliar with its idiosyncrasies, you might have spent days, weeks, or even months trying to tack on self-service capabilities. We’ve been there.
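For contrast, below is a minimal sketch of the manual alternative: inspecting a topic with a plain consumer. It uses the kafkajs client; the broker address, topic name, and the assumption that messages are JSON-encoded are all illustrative.

```typescript
// The manual alternative to SQL: peeking at a topic with a raw consumer.
// Broker address, topic name, and JSON encoding are assumptions.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "explorer", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "data-exploration" });

async function peekAtTopic(topic: string): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topic, fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      // Every consumer must know the encoding; with a SQL layer, the
      // platform absorbs that knowledge for the whole team.
      const value = message.value ? JSON.parse(message.value.toString()) : null;
      console.log({ partition, offset: message.offset, value });
    },
  });
}

peekAtTopic("payments").catch(console.error);
```

Every consumer like this hard-codes knowledge of brokers, offsets, and encodings, which is exactly the friction a SQL layer is meant to remove.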

Data dump to data catalog for Apache Kafka

From data stagnating in warehouses to a growing number of real-time applications, this article explains why we need a new class of data catalog: this time for real-time data. The 2010s brought us organizations “doing big data”. Teams were encouraged to dump data into a data lake and leave it for others to harvest. But data lakes soon became data swamps.

Use AI To Quickly Handle Sensitive Data Management

The growing waves of data you’re pulling in include sensitive, personal, or confidential data. This can become a compliance nightmare, especially with personally identifiable information (PII) and regulations such as GDPR and CCPA, and it takes too much time to manually decide what should be protected. In this session, we will show how AI-driven data catalogs can identify sensitive data and share that identification with your data security platforms to automate its discovery, identification, and protection. You’ll see how this dramatically reduces your time to onboard data and makes it safely available to your business communities.
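As a deliberately simplistic stand-in for the AI-driven classification described here, the sketch below tags columns whose sampled values match common PII patterns. Real catalogs use trained models rather than three regexes; the patterns, tags, and sample values are illustrative assumptions.

```typescript
// Rule-based stand-in for AI-driven sensitive-data classification:
// flag columns whose sample values look like common PII patterns.
// Patterns and tags are illustrative, not a real catalog's rules.
const piiPatterns: Record<string, RegExp> = {
  email: /^[^\s@]+@[^\s@]+\.[^\s@]+$/,
  usSsn: /^\d{3}-\d{2}-\d{4}$/,
  creditCard: /^\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}$/,
};

function classifyColumn(samples: string[]): string[] {
  const tags = new Set<string>();
  for (const sample of samples) {
    for (const [tag, pattern] of Object.entries(piiPatterns)) {
      if (pattern.test(sample)) tags.add(tag);
    }
  }
  return [...tags];
}

// Hypothetical column sample pulled during catalog ingestion.
console.log(classifyColumn(["alice@example.com", "bob@example.org"]));
// -> ["email"]  (such tags would then flow to security platforms)
```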

How to Create a JavaScript Library: 7 Tips for Building a Library That Every Developer Loves Using

Have you ever found yourself copy-pasting the same bits of JavaScript code between different projects? Well, when this happens two or three times in a row, it’s usually a good sign that you have a piece of code that is useful and reusable. So, if you are already reusing your code across several different projects, why not go the extra mile and convert it into a library that streamlines that code sharing?
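A minimal sketch of what that conversion can look like: one entry module with named exports and no app-specific dependencies. The utilities and names here are hypothetical, not taken from the article; TypeScript is used, which compiles down to plain JavaScript.

```typescript
// A tiny library entry module: pure, dependency-free named exports
// instead of copy-pasted snippets. Function names are illustrative.

/** Clamp a number into the inclusive range [min, max]. */
export function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}

/** Group items by a key derived from each item. */
export function groupBy<T, K extends string>(
  items: T[],
  keyOf: (item: T) => K,
): Record<K, T[]> {
  const groups = {} as Record<K, T[]>;
  for (const item of items) {
    (groups[keyOf(item)] ??= []).push(item);
  }
  return groups;
}
```

Consumers would then import { clamp, groupBy } from the published package (name up to you) instead of copy-pasting the implementations.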

Amazon EMR Insider Series: Optimizing big data costs with Amazon EMR & Unravel

Data is a core part of every business. As data volumes increase, so does the cost of processing them. Whether you are running your Apache Spark, Hive, or Presto workloads on-premises or on AWS, Amazon EMR is a sure way to save you money. In this session, we’ll discuss several best practices and new features that enable you to cut your operating costs when processing vast amounts of data using Amazon EMR.
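One widely used EMR cost lever, which may or may not be among the session’s specific practices, is running interruptible task capacity on Spot instances. The sketch below uses the AWS SDK for JavaScript v3 EMR client; the cluster name, instance types, counts, and region are illustrative.

```typescript
import { EMRClient, RunJobFlowCommand } from "@aws-sdk/client-emr";

// Illustrative cluster: on-demand master/core, Spot task capacity.
// All names, types, counts, and the region are assumptions.
async function launchCostAwareCluster(): Promise<void> {
  const emr = new EMRClient({ region: "us-east-1" });
  const result = await emr.send(
    new RunJobFlowCommand({
      Name: "cost-aware-spark-cluster", // hypothetical name
      ReleaseLabel: "emr-6.15.0",
      Applications: [{ Name: "Spark" }],
      ServiceRole: "EMR_DefaultRole",
      JobFlowRole: "EMR_EC2_DefaultRole",
      Instances: {
        InstanceGroups: [
          { InstanceRole: "MASTER", Market: "ON_DEMAND", InstanceType: "m5.xlarge", InstanceCount: 1 },
          { InstanceRole: "CORE", Market: "ON_DEMAND", InstanceType: "m5.xlarge", InstanceCount: 2 },
          // Spot capacity is discounted but interruptible, so it is best
          // kept on stateless TASK nodes rather than HDFS-bearing CORE nodes.
          { InstanceRole: "TASK", Market: "SPOT", InstanceType: "m5.xlarge", InstanceCount: 4 },
        ],
        KeepJobFlowAliveWhenNoSteps: false,
      },
    }),
  );
  console.log("Started cluster:", result.JobFlowId);
}

launchCostAwareCluster().catch(console.error);
```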

Adopting a Cloud Data Platform and Intelligent Data Analytics While Maintaining Security, Governance, and Privacy

“You cannot be the same, think the same and act the same if you hope to be successful in a world that does not remain the same.” This quote from John C. Maxwell is highly relevant to rapidly changing cloud technology. Businesses understand the added value and are looking to cloud technologies to handle both operational and analytical workloads.