
Latest News

Adopting a Production-First Approach to Enterprise AI

After a year packed with one machine learning and data science event after another, it’s clear that there are a few different definitions of the term ‘MLOps’ floating around. One convention uses MLOps to mean the cycle of training an AI model: preparing the data, training the model, and evaluating it. This iterative, interactive cycle often includes AutoML capabilities, and anything that happens outside the scope of the trained model falls outside this definition.
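That training-centric cycle can be sketched in a few lines. The snippet below is a toy illustration, not any real MLOps tooling: the “model” is a single decision threshold, and an AutoML-style loop searches candidate thresholds, keeping the one that evaluates best. All names and the synthetic dataset are assumptions made for the example.

```python
import random

# Step 1: prepare the data (synthetic toy dataset; the true boundary is 0.6).
def prepare_data(n=200, seed=0):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    return [(x, int(x > 0.6)) for x in xs]

# Step 2/3: evaluate a candidate "model" (a threshold) on a dataset.
def evaluate(threshold, data):
    correct = sum(int(x > threshold) == y for x, y in data)
    return correct / len(data)

data = prepare_data()
split = len(data) // 2
train_set, eval_set = data[:split], data[split:]

# AutoML-style search: iterate over candidate models, keep the best on train data.
best = max((t / 10 for t in range(1, 10)),
           key=lambda t: evaluate(t, train_set))
print(best, evaluate(best, eval_set))
```

Everything here stops at the trained, evaluated model; serving, monitoring, and retraining in production are exactly what this narrower definition of MLOps leaves out.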

AI and ML: No Longer the Stuff of Science Fiction

Artificial Intelligence (AI) has revolutionized how various industries operate in recent years. But with growing demands, there’s a growing need for enterprise-scale machine learning solutions and better data management systems. The 2021 Data Impact Awards aim to honor organizations that have shown exemplary work in this area.

Getting Started with CI/CD and Continual

While CI/CD is synonymous with modern software development best practices, today’s machine learning (ML) practitioners still lack similar tools and workflows for operating the ML development lifecycle on par with their software engineering counterparts. For background, the article traces a brief history of transformational CI/CD concepts and shows how they’re missing from today’s ML development lifecycle.

Analysts Can Now Use SQL to Build and Deploy ML Models with Snowflake and Amazon SageMaker Autopilot

Machine learning (ML) models have become key drivers in helping organizations reveal patterns and make predictions that drive value across the business. While these models are extremely valuable, building and deploying them remains in the hands of only a small subset of expert data scientists and engineers with deep programming and ML-framework expertise.

Introduction to TF Serving

Machine learning (ML) model serving refers to the series of steps that turn a trained model into a service that a system can query to receive a prediction for an end user. These steps typically involve pre-processing the input, sending a prediction request to the model, and post-processing the model output to apply business logic.
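The pre-process / predict / post-process pipeline can be sketched as follows. This is a minimal illustration, not TF Serving itself: the prediction step is passed in as a callable so the example is self-contained, where a real deployment would make an HTTP request to a TF Serving REST endpoint (e.g. `POST http://host:8501/v1/models/my_model:predict`). The input format, stub model, and business rule are all assumptions for the sketch.

```python
from typing import Callable, List

def serve(raw_input: str,
          predict: Callable[[List[float]], List[float]]) -> str:
    """Sketch of a serving pipeline: pre-process, predict, post-process."""
    # 1. Pre-processing: turn the raw request into model-ready features.
    features = [float(tok) for tok in raw_input.split(",")]

    # 2. Prediction request: in production this would be an HTTP call
    #    to the model server; here it is an injected callable.
    scores = predict(features)

    # 3. Post-processing: apply business logic to the raw model output,
    #    here a simple threshold on the first score.
    return "approve" if scores[0] >= 0.5 else "reject"

# Stub model standing in for the remote prediction service.
def stub_predict(features: List[float]) -> List[float]:
    return [sum(features) / (len(features) or 1)]

print(serve("0.9,0.8,0.7", stub_predict))  # approve
print(serve("0.1,0.2", stub_predict))      # reject
```

Keeping the three stages separate is what lets the same trained model sit behind different request formats and business rules without retraining.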

Make Your Models Matter: What It Takes to Maximize Business Value from Your Machine Learning Initiatives

We are excited by the endless possibilities of machine learning (ML). We recognize that experimentation is an important component of any enterprise machine learning practice. But we also know that experimentation alone doesn’t yield business value. Organizations need to usher their ML models out of the lab (i.e., the proof-of-concept phase) and into deployment, otherwise known as running “in production”.

New Applied ML Prototypes Now Available in Cloudera Machine Learning

It’s no secret that Data Scientists have a difficult job. It feels like a lifetime ago that everyone was talking about data science as the sexiest job of the 21st century. Heck, it was so long ago that people were still meeting in person! Today, the sexy is starting to lose its shine. There’s recognition that it’s nearly impossible to find the unicorn data scientist that was the apple of every CEO’s eye in 2012.

It Worked Fine in Jupyter. Now What?

You got through all the hurdles of getting the data you need; you worked hard training that model, and you are confident it will work. You just need to run it with a more extensive data set, more memory, and maybe GPUs. And then...well. Running your code at scale and in an environment other than your own can be a nightmare. You have probably experienced this yourself or read about it in the ML community. How frustrating is that? All your hard work and nothing to show for it.