Iguazio

Herzliya, Israel
2014
  |  By Nick Schenone
When it comes to leveraging AI to capture business value, it’s worth asking: “What kind of AI do we need, exactly?” There are significant differences between the methodologies collectively referred to as AI. While 2024 might have almost convinced us that gen AI is the be-all and end-all, there is also what’s sometimes called ‘traditional’ AI, deep learning and much more.
  |  By Zeev Rispler
Gen AI applications can bring invaluable business value across multiple use cases and verticals. But it can be helpful to explore the different types of applications that can be created and operationalized with LLMs, to better understand their potential value. In this blog post, we’ve curated the top demos of gen AI applications that can be developed with open-source MLRun. Each of these demos can be adapted to a number of industries and customized to specific needs.
  |  By Alexandra Quinn
Generative AI has rapidly transformed industries by enabling advanced automation, personalized experiences and groundbreaking innovations. However, implementing these powerful tools requires a production-first approach that maximizes business value while mitigating risks. This guide outlines six best practices to ensure your generative AI initiatives are effective: valuable, scalable, compliant and future-proof.
  |  By Yaron Haviv
In 2024, organizations realized the revolutionizing business potential of gen AI. They accelerated their gen AI operationalization processes: explored new use cases to implement, researched LLMs and AI pipelines and contemplated underlying ethical issues. And with the seeds of the AI revolution now planted, the market is maturing accordingly.
  |  By Guy Lecker
LLMs are the foundation of gen AI applications. To effectively operationalize and de-risk LLMs and ensure they bring business value, organizations need to consider not just the model itself, but also the supporting infrastructure, including GPUs and operational frameworks. By optimizing them for your use case, you can ensure you are using an LLM that is the right fit for your needs.
  |  By Gilad Shaham
As the open-source maintainers of MLRun, we’re proud to announce the release of MLRun v1.7. MLRun is an open-source AI orchestration tool that accelerates the deployment of gen AI applications, with features such as LLM monitoring, fine-tuning, data management, guardrails and more. We provide ready-made scenarios that can be easily implemented by teams in organizations.
  |  By Sahar Dolev-Blitental
Gen AI has the potential to bring immense value for marketing use cases, from content creation to hyper-personalization to product insights, and many more. But if you’re struggling to scale and operationalize gen AI, you’re not alone. That’s where most enterprises struggle. To date, many companies are still in the excitement and exploration phase of gen AI. Some have a few initial pilots deployed, and even fewer are running simultaneous pilots and building differentiating use cases.
  |  By Asaf Somekh and Yaron Haviv
If 2023 was the year of gen AI experimentation, 2024 is the year of gen AI implementation. As companies embark on their implementation journey, they need to deal with a host of challenges, like performance, GPU efficiency and LLM risks. These challenges are exacerbated in highly-regulated industries, such as financial services and telecommunications, adding further implementation complexities. Below, we discuss these challenges and present some best practices and solutions to take into consideration.
AI and generative AI can lead to major enterprise advancements and productivity gains. By offering new capabilities, they open up opportunities for enhancing customer engagement, content creation, virtual experts, process automation and optimization, and more.
  |  By Alexandra Quinn
RAG and Fine-Tuning are two prominent LLM customization approaches. While RAG involves providing external and dynamic resources to trained models, fine-tuning involves further training on specialized datasets, altering the model. Each approach can be used for different use cases. In this blog post, we explain each approach, compare the two and recommend when to use them and which pitfalls to avoid.
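The core distinction above can be sketched in a few lines of plain Python. This is a deliberately naive illustration of the RAG pattern only: the model’s weights are untouched, and relevant context is retrieved at query time and prepended to the prompt. The keyword-overlap retriever, document list and function names are all hypothetical stand-ins; real systems use embedding-based similarity search over a vector store.

```python
# Toy RAG sketch: retrieve relevant context, then augment the prompt.
# Retrieval here is naive word overlap; production systems use embeddings.

DOCS = [
    "MLRun is an open-source AI orchestration framework.",
    "Fine-tuning further trains a model on a specialized dataset.",
    "RAG supplies external documents to a model at inference time.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by the number of words they share with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does RAG supply to a model?"))
```

Fine-tuning, by contrast, would have no retrieval step at all: the specialized knowledge is baked into the model weights during additional training, which is why it suits stable domain knowledge while RAG suits dynamic or frequently updated sources.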
  |  By Iguazio
In this demo, we showcase how to leverage MLRun, Iguazio and NVIDIA NIM microservices to deploy and monitor a generative AI model at scale, focusing on reducing risks and ensuring seamless performance. Using NVIDIA NIM, the demo walks through advanced methods for model monitoring, logging and continuous fine-tuning.
  |  By Iguazio
In this webinar, we explored how to successfully deploy your gen AI applications while mitigating common production challenges, using NVIDIA NIM and MLRun.
  |  By Iguazio
See how to use MLRun 1.7 to fine-tune a generative AI banking chatbot, ensuring it answers only relevant banking inquiries. Watch the full tutorial and follow along!
  |  By Iguazio
This demo, recorded during our MLOps Live webinar #32, showcases a customer-facing AI agent developed for a jewelry retailer. It can be used as a marketing tool to offer personalized product recommendations, purchasing information and support.
  |  By Iguazio
In this MLOps Live session we were joined by Eli Stein, Partner and Modern Marketing Capabilities Leader at McKinsey, to delve into how data scientists can leverage generative AI to support the company’s marketing strategy. We showcased a live demo of a customer-facing AI agent developed for a jewelry retailer, which can be used as a marketing tool to offer personalized product recommendations, purchasing information and support. Following the demo, we held an interactive discussion and Q&A session. Enjoy!
  |  By Iguazio
In this session, we explored the unique challenges of implementing gen AI in production environments, where agents are in direct contact with your customers. We shared the Iguazio and MongoDB one-stop-shop solution for building gen AI applications that scale effectively and efficiently, with built-in guardrails and monitoring. We showed how the end-to-end application lifecycle is addressed, from data management all the way to governance and monitoring in production.
  |  By Iguazio

The Iguazio Data Science Platform automates MLOps with end-to-end machine learning pipelines, transforming AI projects into real-world business outcomes. It accelerates the development, deployment and management of AI applications at scale, enabling data scientists to focus on delivering better, more accurate and more powerful solutions instead of spending their time on infrastructure.

The platform is open and deployable anywhere: multi-cloud, on-prem or edge. Iguazio powers real-time data science applications for financial services, gaming, ad-tech, manufacturing, smart mobility and telecoms.

Dive Into the Machine Learning Pipeline:

  • Collect and Enrich Data from Any Source: Ingest multi-model data in real time and at scale, including event-driven streaming, time series, NoSQL, SQL and files.
  • Prepare Online and Offline Data at Scale: Explore and manipulate online and offline data at scale, powered by Iguazio's real-time data layer and using your favorite data science and analytics frameworks, already pre-installed in the platform.
  • Accelerate and Automate Model Training: Continuously train models in a production-like environment, dynamically scaling GPUs and managed machine learning frameworks.
  • Deploy in Seconds: Deploy models and APIs from a Jupyter notebook or IDE to production in just a few clicks and continuously monitor model performance.
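The four stages above can be sketched as a minimal, framework-agnostic pipeline. Every function below is an illustrative stub, not the platform’s actual API: in practice each step maps to a platform service (ingestion, feature preparation, managed training, model serving).

```python
# Hypothetical skeleton of the four pipeline stages above. Each stub stands
# in for a platform service: ingestion, preparation, training, serving.

def collect() -> list[dict]:
    # Ingest records from a source (stream, table, file, ...).
    return [{"feature": x, "label": 2 * x} for x in range(5)]

def prepare(records: list[dict]) -> list[tuple[float, float]]:
    # Clean and reshape raw records into (feature, label) pairs.
    return [(r["feature"], r["label"]) for r in records]

def train(data: list[tuple[float, float]]) -> float:
    # Toy "model": least-squares slope of a line through the origin.
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den if den else 0.0

def deploy(slope: float):
    # Return a callable "endpoint" that serves predictions.
    return lambda x: slope * x

# Chain the stages end to end, then query the deployed "endpoint".
model = deploy(train(prepare(collect())))
print(model(10))  # the fitted slope is 2.0, so this prints 20.0
```

In a real deployment, each stage would run as its own scalable service and the final step would expose an HTTP endpoint with continuous monitoring, rather than a local Python callable.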

Bring Your Data Science to Life.