Iguazio

Herzliya, Israel
2014
  |  By Gilad Shaham
As the open-source maintainers of MLRun, we’re proud to announce the release of MLRun v1.7. MLRun is an open-source AI orchestration tool that accelerates the deployment of gen AI applications, with features such as LLM monitoring, fine-tuning, data management, guardrails and more. We provide ready-made scenarios that can be easily implemented by teams in organizations.
  |  By Sahar Dolev-Blitental
Gen AI has the potential to bring immense value to marketing use cases, from content creation to hyper-personalization to product insights, and many more. But if you’re struggling to scale and operationalize gen AI, you’re not alone. That’s where most enterprises struggle. To date, many companies are still in the excitement and exploration phase of gen AI. Few have deployed initial pilots, and fewer still are running simultaneous pilots and building differentiating use cases.
  |  By Asaf Somekh and Yaron Haviv
If 2023 was the year of gen AI experimentation, 2024 is the year of gen AI implementation. As companies embark on their implementation journey, they need to deal with a host of challenges, like performance, GPU efficiency and LLM risks. These challenges are exacerbated in highly regulated industries, such as financial services and telecommunications, adding further implementation complexities. Below, we discuss these challenges and present some best practices and solutions to take into consideration.
AI and generative AI can lead to major enterprise advancements and productivity gains. By offering new capabilities, they open up opportunities for enhancing customer engagement, content creation, virtual experts, process automation and optimization, and more.
  |  By Alexandra Quinn
RAG and Fine-Tuning are two prominent LLM customization approaches. While RAG involves providing external and dynamic resources to trained models, fine-tuning involves further training on specialized datasets, altering the model. Each approach can be used for different use cases. In this blog post, we explain each approach, compare the two and recommend when to use them and which pitfalls to avoid.
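To make the RAG side of the comparison concrete, here is a minimal, hypothetical sketch of the retrieve-then-augment flow. Real RAG systems use dense embeddings and a vector store rather than bag-of-words keyword matching, but the control flow is the same: retrieve relevant external context, then prepend it to the prompt sent to the (unmodified) model. All names and documents below are illustrative.

```python
from collections import Counter
from math import sqrt

# Toy external knowledge base (in practice: a vector database).
DOCUMENTS = [
    "Fine-tuning further trains a model on a specialized dataset.",
    "RAG supplies external, dynamic resources to a trained model at query time.",
    "Foundation models are pre-trained on vast amounts of text data.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The key contrast with fine-tuning is visible here: the model's weights are never touched; only the prompt changes at query time.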
  |  By Guy Lecker
As can be inferred from their name, foundation models are the foundation upon which developers build AI applications for tasks like language translation, text summarization, sentiment analysis and more. Models such as OpenAI's GPT, Google's Gemini, Meta’s Llama and Anthropic’s Claude are pre-trained on vast amounts of text data and have the capability to understand and generate human-like language.
  |  By Alexandra Quinn
Enterprises are beginning to implement gen AI across use cases, realizing its enormous potential to deliver value. Since we are all charting new technological waters, being mindful of recommended strategies, pitfalls to avoid and lessons learned can assist with the process and help drive business impact and productivity. In this blog post, we provide a number of frameworks that can help enterprises effectively implement and scale gen AI while avoiding risk.
  |  By Nick Schenone
The generative AI industry is changing fast. New models and technologies (Hello GPT-4o) are emerging regularly, each more advanced than the last. This rapid development cycle means that what was cutting-edge a year ago might now be considered outdated. The rate of change demands a culture of continuous learning and technological adaptation.
  |  By Alexandra Quinn
LLM evaluation is the process of assessing the performance and capabilities of LLMs. This helps determine how well the model understands and generates language, ensuring that it meets the specific needs of applications. There are multiple ways to perform LLM evaluation, each with different advantages. In this blog post, we explain the role of LLM evaluation in AI lifecycles and the different types of LLM evaluation methods. In the end, we show a demo of a chatbot that was developed with crowdsourcing.
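As a minimal illustration of one family of evaluation methods, the sketch below implements two simple reference-based metrics: exact match and token-overlap F1. These are our own illustrative function names, and production LLM evaluation typically also involves model-graded scoring, human review and benchmark suites.

```python
def exact_match(prediction: str, reference: str) -> bool:
    """Strict comparison after normalizing case and whitespace."""
    return prediction.strip().lower() == reference.strip().lower()

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: a softer measure of answer quality."""
    pred, ref = set(prediction.lower().split()), set(reference.lower().split())
    common = pred & ref
    if not common:
        return 0.0
    precision = len(common) / len(pred)
    recall = len(common) / len(ref)
    return 2 * precision * recall / (precision + recall)

def evaluate(pairs: list[tuple[str, str]]) -> dict:
    """Aggregate both metrics over (prediction, reference) pairs."""
    em = sum(exact_match(p, r) for p, r in pairs) / len(pairs)
    f1 = sum(token_f1(p, r) for p, r in pairs) / len(pairs)
    return {"exact_match": em, "f1": f1}
```

Reference-based metrics like these are cheap and repeatable, which makes them useful for regression testing a model between releases, even when richer human or model-graded evaluation is also in use.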
  |  By Nick Schenone
Ever since the release of ChatGPT in November 2022, organizations have been trying to find new and innovative ways to leverage gen AI to drive organizational growth. LLM capabilities like contextual understanding and response to natural language prompts enable the development of applications like automated AI chatbots, smart call center apps and applications for financial services.
  |  By Iguazio
In this webinar, we explored how to successfully deploy your gen AI applications while mitigating common production challenges, using NVIDIA NIM and MLRun.
  |  By Iguazio
See how to use MLRun 1.7 to fine-tune a generative AI banking chatbot, ensuring it answers only relevant banking inquiries. Watch the full tutorial and follow along!
  |  By Iguazio
This demo, recorded during MLOps Live Webinar #32, showcases a customer-facing AI agent developed for a jewelry retailer. The agent can be used as a marketing tool, offering personalized product recommendations, purchasing information and support.
  |  By Iguazio
In this MLOps Live session we were joined by Eli Stein, Partner and Modern Marketing Capabilities Leader at McKinsey, to delve into how data scientists can leverage generative AI to support the company’s marketing strategy. We showcased a live demo of a customer-facing AI agent developed for a jewelry retailer, which can be used as a marketing tool to offer personalized product recommendations, purchasing information and support. Following the demo, we held an interactive discussion and Q&A session. Enjoy!
  |  By Iguazio
In this session, we explored the unique challenges of implementing gen AI in production environments, when agents are in direct contact with your customers. We shared the Iguazio & MongoDB one-stop-shop solution for building gen AI applications that scale effectively and efficiently, with built-in guardrails and monitoring. We showed how the end-to-end application lifecycle is addressed, from data management all the way to governance and monitoring in production.
  |  By Iguazio
In this webinar we discussed the transformative impact of gen AI on enterprise operations, spotlighting advancements across manufacturing, supply chain and procurement. We covered the main gen AI use cases, challenges to be mindful of during implementation and key learnings from client projects, highlighting three main pillars: people, processes and technology.

The Iguazio Data Science Platform automates MLOps with end-to-end machine learning pipelines, transforming AI projects into real-world business outcomes. It accelerates the development, deployment and management of AI applications at scale, enabling data scientists to focus on delivering better, more accurate and more powerful solutions instead of spending their time on infrastructure.

The platform is open and deployable anywhere: multi-cloud, on-prem or edge. Iguazio powers real-time data science applications for financial services, gaming, ad-tech, manufacturing, smart mobility and telecoms.

Dive Into the Machine Learning Pipeline:

  • Collect and Enrich Data from Any Source: Ingest multi-model data in real time and at scale, including event-driven streaming, time series, NoSQL, SQL and files.
  • Prepare Online and Offline Data at Scale: Explore and manipulate online and offline data at scale, powered by Iguazio's real-time data layer and using your favorite data science and analytics frameworks, already pre-installed in the platform.
  • Accelerate and Automate Model Training: Continuously train models in a production-like environment, dynamically scaling GPUs and managed machine learning frameworks.
  • Deploy in Seconds: Deploy models and APIs from a Jupyter notebook or IDE to production in just a few clicks and continuously monitor model performance.
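The four stages above can be sketched as a plain pipeline. This is a hypothetical, dependency-free illustration only: on the actual platform these stages map to managed services (real-time ingestion, the data layer, scaled training jobs, model serving), and every function name here is our own.

```python
def collect(events: list[dict]) -> list[dict]:
    """Ingest raw events from any source, dropping malformed records."""
    return [e for e in events if "value" in e]

def prepare(records: list[dict]) -> list[float]:
    """Turn cleaned records into model-ready features."""
    return [float(r["value"]) for r in records]

def train(features: list[float]) -> dict:
    """Train a toy model: here, just the feature mean as a threshold."""
    return {"mean": sum(features) / len(features)}

def deploy(model: dict):
    """Expose the trained model as a callable prediction endpoint."""
    def predict(x: float) -> str:
        return "high" if x > model["mean"] else "low"
    return predict

# Chain the stages end to end, as the platform does for real workloads.
events = [{"value": 1}, {"value": 3}, {"bad": True}]
predict = deploy(train(prepare(collect(events))))
```

The point of the sketch is the shape of the lifecycle, collect, prepare, train, deploy, each stage feeding the next, which is what the platform automates and monitors at production scale.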

Bring Your Data Science to Life.