
Machine Learning

Transforming Enterprise Operations with Gen AI

Enterprises are beginning to implement gen AI across use cases, realizing its enormous potential to deliver value. Since we are all charting new technological waters, being mindful of recommended strategies, pitfalls to avoid and lessons learned can help drive business impact and productivity. In this blog post, we provide several frameworks that can help enterprises effectively implement and scale gen AI while avoiding risk.

Snowflake ML Now Supports Expanded MLOps Capabilities for Streamlined Management of Features and Models

Bringing machine learning (ML) models into production is often hindered by fragmented MLOps processes that are difficult to scale with the underlying data. Many enterprises stitch together a complex mix of MLOps tools to build an end-to-end ML pipeline. The friction of setting up and managing separate environments for features and models creates operational complexity that is costly to maintain and difficult to use.

Transforming Enterprise Operations with Gen AI - MLOps Live #29 with McKinsey

In this webinar, we discussed the transformative impact of gen AI on enterprise operations, spotlighting advancements across manufacturing, supply chain and procurement. We covered the main gen AI use cases, challenges to be mindful of during implementation and key learnings from client projects, highlighting three main pillars: people, processes and technology.

Future-Proofing Your App: Strategies for Building Long-Lasting Apps

The generative AI industry is changing fast. New models and technologies (Hello GPT-4o) are emerging regularly, each more advanced than the last. This rapid development cycle means that what was cutting-edge a year ago might now be considered outdated. The rate of change demands a culture of continuous learning and technological adaptation.

LLM Validation and Evaluation

LLM evaluation is the process of assessing the performance and capabilities of LLMs. This helps determine how well the model understands and generates language, ensuring that it meets the specific needs of applications. There are multiple ways to perform LLM evaluation, each with different advantages. In this blog post, we explain the role of LLM evaluation in the AI lifecycle and the different types of LLM evaluation methods. Finally, we show a demo of a chatbot that was developed with crowdsourcing.
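As a simple illustration of one such method, reference-based evaluation, the sketch below scores a model's answers against known reference answers using exact match and token-overlap F1. The `generate` callable and the tiny evaluation set are hypothetical stand-ins for illustration only, not part of the demo described in the post.

```python
# A minimal sketch of reference-based LLM evaluation, assuming you have a
# generate(prompt) callable for the model under test (hypothetical here)
# and a small set of prompts with reference answers.

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def evaluate(generate, dataset):
    """Run the model over an evaluation set and aggregate simple metrics."""
    exact, f1_total = 0, 0.0
    for example in dataset:
        answer = generate(example["prompt"])
        exact += int(answer.strip().lower() == example["reference"].strip().lower())
        f1_total += token_f1(answer, example["reference"])
    n = len(dataset)
    return {"exact_match": exact / n, "token_f1": f1_total / n}

# Example usage with a stand-in model:
eval_set = [
    {"prompt": "What is the capital of France?", "reference": "Paris"},
    {"prompt": "2 + 2 = ?", "reference": "4"},
]
print(evaluate(lambda p: "Paris" if "France" in p else "4", eval_set))
```

Reference-based metrics like these are cheap to run but only cover tasks with known answers; open-ended generation typically calls for human review or LLM-as-a-judge approaches, which the post covers as separate evaluation types.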

How ClearML Helps Teams Get More out of Slurm

It is a fairly recent trend for companies to amass GPU firepower to build their own AI computing infrastructure and support the growing number of compute requests. Many AI tools now enable data scientists to work on data, run experiments and train models seamlessly, submitting their jobs and monitoring their progress. However, for many organizations with mature supercomputing capabilities, Slurm has remained the scheduling tool of choice for managing computing clusters.
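For readers unfamiliar with how jobs typically reach a Slurm-managed cluster, here is a minimal sketch of submitting a batch script and polling its state from Python. It assumes only the standard `sbatch` and `squeue` command-line tools on a login node; the batch script name is a placeholder, and this is a generic illustration rather than ClearML's integration.

```python
# A minimal sketch of submitting a training job to a Slurm cluster from Python
# and polling its state. Assumes the standard sbatch and squeue CLIs are on PATH;
# "train_model.sbatch" is a placeholder batch script.

import re
import subprocess
import time

def submit_job(script_path: str) -> str:
    """Submit a batch script with sbatch and return the Slurm job ID."""
    result = subprocess.run(
        ["sbatch", script_path], capture_output=True, text=True, check=True
    )
    # sbatch prints e.g. "Submitted batch job 12345"
    match = re.search(r"Submitted batch job (\d+)", result.stdout)
    if not match:
        raise RuntimeError(f"Unexpected sbatch output: {result.stdout!r}")
    return match.group(1)

def job_state(job_id: str) -> str:
    """Return the job state (PENDING, RUNNING, ...), or COMPLETED once it leaves the queue."""
    result = subprocess.run(
        ["squeue", "-j", job_id, "-h", "-o", "%T"], capture_output=True, text=True
    )
    state = result.stdout.strip()
    return state if state else "COMPLETED"

if __name__ == "__main__":
    job_id = submit_job("train_model.sbatch")
    while job_state(job_id) in ("PENDING", "CONFIGURING", "RUNNING"):
        time.sleep(30)
    print(f"Job {job_id} finished with state {job_state(job_id)}")
```

In practice, tools that sit on top of Slurm wrap this submit-and-poll loop so that data scientists never have to write batch scripts or shell out to the scheduler themselves.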

ClearML Supports Seamless Orchestration and Infrastructure Management for Kubernetes, Slurm, PBS, and Bare Metal

Our early 2024 roadmap has largely focused on improving orchestration and compute infrastructure management capabilities. Last month we released a Resource Allocation Policy Management Control Center with a new, streamlined UI to help teams visualize their compute infrastructure and understand which users have access to which resources.

Improving LLM Accuracy & Performance - MLOps Live #28 with Databricks

Watch session #28 in our MLOps Live Webinar Series featuring Databricks, where we discuss improving LLM accuracy and performance. Hear Margaret Amori (Databricks), Vijay Balasubramaniam (Databricks) and Yaron Haviv (Iguazio) share best practices and pragmatic advice on improving the accuracy and performance of LLMs while mitigating risk and escalating costs. See real examples, including techniques to overcome common challenges using tools such as Databricks Mosaic AI and their new open LLM, DBRX.