On the heels of announcing our $14.5M Series A and General Availability, we’re excited to be at the Data + AI Summit to unveil support for Continual on the Databricks Lakehouse. Increasingly, data and ML tool providers are embracing a data-centric approach to the ML workflow. The goal is to focus on what increasingly drives ML – the data – rather than on infrastructure, algorithms, or pipelines. At Continual, we bet on data-centric AI from day one.
The modern data stack continues to attract companies that are looking for a quick onramp into cloud-based analytics or are actively modernizing their legacy data stacks. We've enumerated the benefits of the modern data stack in previous articles.
Our day-to-day lives are becoming increasingly reliant on the direction, decision-making, and support of AI systems. Never in the history of technology has the need to protect the integrity of such decision-making been more urgent, or the threat to it more real. I recently served as an official reviewer for a new BCS pre-publication book titled “Artificial Intelligence and Software Testing – Building systems you can trust”.
In this tutorial, we’re going to build an interactive Customer Churn Insights Dashboard using the open-source Python framework Streamlit and the Continual predictions generated in Part 1: Snowflake and Continual Quickstart Guide. In Part 1, we connected Continual to Snowflake and used a simple dataset of customer information, activity, and churn status to build and operationalize a machine learning model in Continual that predicts the likelihood of a customer churning.
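To give a feel for what this part builds, here’s a minimal sketch of such a dashboard. The Snowflake-specific details are assumptions for illustration: the CUSTOMER_CHURN_PREDICTIONS table and its CUSTOMER_ID and CHURN_PROBABILITY columns are hypothetical names, and the actual predictions table Continual materializes in your warehouse will differ.

```python
# A minimal Streamlit churn dashboard sketch. The Snowflake table and column
# names below (CUSTOMER_CHURN_PREDICTIONS, CUSTOMER_ID, CHURN_PROBABILITY)
# are hypothetical; substitute the predictions table Continual writes for you.
# Requires: streamlit, "snowflake-connector-python[pandas]"
import pandas as pd
import snowflake.connector
import streamlit as st

@st.cache_data
def load_predictions() -> pd.DataFrame:
    # Connection parameters (account, user, password, ...) come from
    # .streamlit/secrets.toml under a [snowflake] section.
    conn = snowflake.connector.connect(**st.secrets["snowflake"])
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT CUSTOMER_ID, CHURN_PROBABILITY FROM CUSTOMER_CHURN_PREDICTIONS"
        )
        return cur.fetch_pandas_all()
    finally:
        conn.close()

st.title("Customer Churn Insights")

df = load_predictions()
threshold = st.slider("At-risk threshold", 0.0, 1.0, 0.5)
at_risk = df[df["CHURN_PROBABILITY"] >= threshold]

st.metric("Customers at risk", len(at_risk))

# Histogram of predicted churn probabilities, bucketed into 10 bins.
hist = df["CHURN_PROBABILITY"].value_counts(bins=10, sort=False)
hist.index = hist.index.astype(str)
st.bar_chart(hist)

st.dataframe(at_risk.sort_values("CHURN_PROBABILITY", ascending=False))
```

Launch it with `streamlit run dashboard.py`; caching the query keeps the dashboard responsive as you move the threshold slider.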
Sometimes the processing power you or your team requires is very high one day and very low the next. This is a common problem, especially in machine learning environments. One day a team might be training their models and the need for compute will be sky-high; on other days they’ll be doing research and figuring out how to solve a specific problem, needing only a web browser and some coffee.
The algorithm team at WSC Sports faced a challenge: how could our computer vision model, which operates in a dynamic environment, maintain high-quality results? Especially when, as in our case, new data may appear daily and look visually different from the data the model was already trained on. Bit of a head-scratcher, right? Well, we’ve developed a system that does just that and shows exceptional results!