
Machine Learning

Looking into 2022: Predictions for a New Year in MLOps

In an era where the passage of time seems to have changed somehow, it feels strange to already be reflecting on another year gone by. It's a cliche for a reason: the world feels like it's moving faster than ever, and in some completely unexpected directions. Considering the pace of technological progress I've witnessed in just one year, it sometimes feels like we're living in a time lapse.

Adopting a Production-First Approach to Enterprise AI

After a year packed with one machine learning and data science event after another, it's clear that there are a few different definitions of the term 'MLOps' floating around. One convention uses MLOps to mean the cycle of training an AI model: preparing the data, training the model, and evaluating it. This iterative, often interactive, process frequently includes AutoML capabilities, and anything that happens outside the scope of the trained model is not covered by this definition.
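
As a rough sketch of that narrower "training cycle" definition, the loop below prepares data, trains a model, and evaluates it across a few candidate settings. The scikit-learn dataset, model, and hyperparameters are illustrative stand-ins, not anything taken from the article.

```python
# A minimal sketch of the "training cycle" view of MLOps: prepare data,
# train, evaluate, iterate. Serving, monitoring, and retraining triggers
# sit outside this loop in that narrower definition.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Prepare the data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Train and evaluate, iterating over candidate hyperparameters
# (a stand-in for what AutoML tooling would automate).
best_score, best_model = 0.0, None
for c in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=c, max_iter=1000)
    model.fit(scaler.transform(X_train), y_train)
    score = accuracy_score(y_test, model.predict(scaler.transform(X_test)))
    if score > best_score:
        best_score, best_model = score, model

print(f"best accuracy: {best_score:.3f}")
```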

Scaling NLP Pipelines at IHS Markit - MLOps Live #17

The data science team at IHS Markit shares practical advice on building sophisticated NLP pipelines that work at scale. Using a robust, automated MLOps process, they run complex models that make massive amounts of unstructured data searchable and indexable. In this session, they share their MLOps journey and offer practical advice for other data science teams looking to do the same.
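
To make the "searchable and indexable" idea concrete, here is a generic sketch of a minimal text-search step: embed documents, index the vectors, and rank them against a query. The TF-IDF model, sample documents, and search helper are illustrative assumptions, not IHS Markit's actual pipeline.

```python
# A rough sketch of the kind of NLP step that makes unstructured text
# searchable: vectorize documents, keep the vectors as an index, and
# answer queries by similarity. TF-IDF stands in for whatever embedding
# model a production pipeline would use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Quarterly earnings rose on strong energy demand.",
    "New maritime regulations affect shipping routes.",
    "Semiconductor supply constraints ease in Q3.",
]

# "Index" the corpus as a document-term matrix.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

def search(query: str, top_k: int = 2):
    """Return the indices of the top_k most similar documents."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors).ravel()
    return scores.argsort()[::-1][:top_k]

for idx in search("shipping regulation changes"):
    print(documents[idx])
```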

Automating MLOps for Deep Learning

MLOps holds the key to accelerating the development, deployment, and management of AI so that enterprises can derive real business value from their AI initiatives. Deploying and managing deep learning models in production carries its own set of complexities. In this talk, we discuss real-life examples from customers that have built MLOps pipelines for deep learning use cases, such as predicting rainfall from CCTV footage to prevent flooding.
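
For a sense of what a pipeline step like the CCTV example might involve, the sketch below wires together frame ingestion, model inference, and an alert threshold. Every function and threshold here is a labeled placeholder rather than the customer's real system.

```python
# A hedged sketch of a deep learning pipeline step for the CCTV example:
# pull frames, run a vision model, and raise an alert when the predicted
# rainfall score crosses a threshold. All pieces are placeholders.
import numpy as np

def fetch_frames(batch_size: int = 4) -> np.ndarray:
    """Placeholder for pulling CCTV frames; returns random images here."""
    return np.random.rand(batch_size, 64, 64, 3).astype("float32")

def predict_rainfall(frames: np.ndarray) -> np.ndarray:
    """Placeholder for the trained model's inference call."""
    # In a real pipeline this would call a served deep learning model.
    return frames.mean(axis=(1, 2, 3))  # dummy per-frame "rainfall" score

def run_pipeline(alert_threshold: float = 0.7) -> None:
    frames = fetch_frames()
    scores = predict_rainfall(frames)
    if float(scores.max()) > alert_threshold:
        print("ALERT: heavy rainfall predicted, trigger flood response")
    else:
        print("No alert; max score", round(float(scores.max()), 3))

if __name__ == "__main__":
    run_pipeline()
```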

AI and ML: No Longer the Stuff of Science Fiction

Artificial Intelligence (AI) has revolutionized how various industries operate in recent years. But with growing demands comes a more nuanced need for enterprise-scale machine learning solutions and better data management systems. The 2021 Data Impact Awards honor organizations that have shown exemplary work in this area.

Getting Started with CI/CD and Continual

While CI/CD is synonymous with modern software development best practices, today's machine learning (ML) practitioners still lack comparable tools and workflows for operating the ML development lifecycle at a level on par with software engineering. For background, this piece walks through a brief history of transformational CI/CD concepts and shows how they are still missing from today's ML development lifecycle.
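
To ground one of those CI/CD concepts, the sketch below shows an automated quality gate a CI job could run: it compares a candidate model's metric against the production model and fails the build on regression. The file names, metric, and tolerance are assumptions for illustration; this is the general pattern, not Continual's API.

```python
# A minimal sketch of a CI/CD quality gate for ML: fail the build when a
# newly trained model underperforms the current production model beyond a
# small tolerance. Metric files are assumed to be produced by earlier steps.
import json
import sys

def load_metric(path: str, name: str = "accuracy") -> float:
    with open(path) as f:
        return float(json.load(f)[name])

def main() -> int:
    candidate = load_metric("candidate_metrics.json")
    production = load_metric("production_metrics.json")
    tolerance = 0.01  # allow a small regression before blocking deploys

    if candidate + tolerance < production:
        print(f"FAIL: candidate {candidate:.3f} < production {production:.3f}")
        return 1  # non-zero exit code fails the CI job
    print(f"PASS: candidate {candidate:.3f} vs production {production:.3f}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```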

Using Elastic ML to Observe Your Kuma API Observability Metrics

Observability is catching on as the de facto way to provide visibility into the essential aspects of a system, and it would be unwise not to leverage it with Kuma service mesh, the layer that lets your services communicate with the rest of the world. However, many observability solutions restrict themselves to the basics: simple metric collection that feeds dashboards. Expecting users to sit in their chairs and stare at those metrics all day is an invitation to failure; people can only do so much before they get tired and bored.
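
To illustrate the alternative to dashboard-watching, here is a toy anomaly detector that flags unusual metric samples automatically. The rolling z-score and the synthetic latency series are assumptions standing in for what Elastic ML jobs would do against real Kuma metrics.

```python
# A toy illustration of automated detection instead of dashboard-watching:
# flag metric samples whose rolling z-score exceeds a threshold. The data
# is synthetic; a real setup would run anomaly detection over mesh metrics.
import statistics

def detect_anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) pairs whose z-score exceeds the threshold."""
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9
        if abs(samples[i] - mean) / stdev > threshold:
            yield i, samples[i]

# Synthetic request-latency series with one injected spike.
latencies = [100 + (i % 5) for i in range(60)]
latencies[45] = 900

for idx, value in detect_anomalies(latencies):
    print(f"anomaly at sample {idx}: {value} ms")
```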