Iguazio

Herzliya, Israel
2014
By Alexandra Quinn
RAG and fine-tuning are two prominent LLM customization approaches. While RAG provides external, dynamic resources to trained models at inference time, fine-tuning involves further training on specialized datasets, altering the model itself. Each approach suits different use cases. In this blog post, we explain each approach, compare the two, and recommend when to use each and which pitfalls to avoid.
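The RAG half of the comparison can be sketched in a few lines. This is a hypothetical, minimal illustration: the keyword retriever and documents below are stand-ins (a production retriever would use embeddings and a vector store), and the actual model call is left out.

```python
# Minimal sketch of the RAG pattern: retrieve relevant context,
# then prepend it to the prompt before calling the model.
# The retriever and documents here are illustrative stand-ins.
import re

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retriever (a real system would use embeddings)."""
    q_tokens = set(re.findall(r"\w+", query.lower()))
    def score(doc: str) -> int:
        return len(q_tokens & set(re.findall(r"\w+", doc.lower())))
    return sorted(documents, key=score, reverse=True)[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Iguazio was founded in 2014 in Herzliya, Israel.",
    "RAG supplies external documents to a trained model at inference time.",
    "Fine-tuning continues training the model on a specialized dataset.",
]
# The resulting prompt would be sent to the (unmodified) model.
prompt = build_prompt("What is RAG?", retrieve("What is RAG?", docs))
print(prompt)
```

The key contrast with fine-tuning is visible here: the model's weights never change; only the prompt is enriched with retrieved context.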
By Guy Lecker
As can be inferred from their name, foundation models are the foundation upon which developers build AI applications for tasks like language translation, text summarization, sentiment analysis and more. Models such as OpenAI's GPT, Google's Gemini, Meta's Llama and Anthropic's Claude are pre-trained on vast amounts of text data and have the capability to understand and generate human-like language.
By Alexandra Quinn
Enterprises are beginning to implement gen AI across use cases, realizing its enormous potential to deliver value. Since we are all charting new technological waters, being mindful of recommended strategies, pitfalls to avoid and lessons learned can assist with the process and help drive business impact and productivity. In this blog post, we provide a number of frameworks that can help enterprises effectively implement and scale gen AI while avoiding risk.
By Nick Schenone
The generative AI industry is changing fast. New models and technologies (Hello GPT-4o) are emerging regularly, each more advanced than the last. This rapid development cycle means that what was cutting-edge a year ago might now be considered outdated. The rate of change demands a culture of continuous learning and technological adaptation.
By Alexandra Quinn
LLM evaluation is the process of assessing the performance and capabilities of LLMs. This helps determine how well the model understands and generates language, ensuring that it meets the specific needs of applications. There are multiple ways to perform LLM evaluation, each with different advantages. In this blog post, we explain the role of LLM evaluation in AI lifecycles and the different types of LLM evaluation methods. Finally, we show a demo of a chatbot that was developed with crowdsourcing.
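One of the simpler evaluation methods is reference-based scoring, which can be sketched with token-overlap F1. This is an illustrative example only; real evaluation suites combine automatic metrics like this with LLM-as-a-judge and human review.

```python
# Sketch of reference-based LLM evaluation: score each model answer
# against a reference answer with token-overlap F1, a common automatic
# metric. Toy cases below stand in for a real evaluation dataset.
import re
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall against the reference."""
    pred = re.findall(r"\w+", prediction.lower())
    ref = re.findall(r"\w+", reference.lower())
    common = sum((Counter(pred) & Counter(ref)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

cases = [
    ("Paris is the capital of France.", "The capital of France is Paris."),
    ("It is Berlin.", "The capital of France is Paris."),
]
scores = [token_f1(pred, ref) for pred, ref in cases]
print(scores)  # high score for the correct answer, low for the wrong one
```

Aggregating such per-example scores across a held-out dataset gives a repeatable signal for comparing model versions.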
By Nick Schenone
Ever since the release of ChatGPT in November 2022, organizations have been trying to find new and innovative ways to leverage gen AI to drive organizational growth. LLM capabilities like contextual understanding and response to natural language prompts enable the development of applications like automated AI chatbots, smart call center apps and tools for financial services.
By Alexandra Quinn
Organizations that monitor their LLMs benefit from higher-performing models at higher efficiency, while meeting ethical considerations like ensuring privacy and eliminating bias and toxicity. In this blog post, we present the top LLM metrics we recommend measuring and when to use each one. Finally, we explain how to implement these metrics in your ML and gen AI pipelines.
By Alexandra Quinn and Guy Lecker
Customer care organizations are facing the disruptions of an AI-enabled future, and gen AI is already impacting customer care organizations across use cases like agent co-pilots, summarizing calls and deriving insights, creating chatbots and more. In this blog post, we dive deep into these use cases and their business and operational impact. Then we show a demo of a gen AI-based call center app that you can follow along with.
By Guy Lecker
GPU as a Service (GPUaaS) serves as a cost-effective solution for organizations that need more GPUs for their ML and gen AI operations. By optimizing the use of existing resources, GPUaaS allows organizations to build and deploy their applications without waiting for new hardware. In this blog post, we explain how GPUaaS works, how it can close the GPU shortage gap, when to use it and how it fits with gen AI.
By Alexandra Quinn
The manufacturing industry can benefit from AI, data and machine learning to advance manufacturing quality and productivity, minimize waste and reduce costs. With ML, manufacturers can modernize their businesses through use cases like forecasting demand, optimizing scheduling, preventing malfunctions and managing quality. These all significantly contribute to bottom-line improvement.
By Iguazio
In this webinar we discussed the transformative impact of gen AI on enterprise operations, spotlighting advancements across manufacturing, supply chain and procurement. We covered the main gen AI use cases, challenges to be mindful of during implementation and key learnings from client projects, highlighting three main pillars: people, processes and technology.
By Iguazio
Watch session #28 in our MLOps Live Webinar Series featuring Databricks, where we discuss improving LLM accuracy and performance. Hear Margaret Amori (Databricks), Vijay Balasubramaniam (Databricks) and Yaron Haviv (Iguazio) share best practices and pragmatic advice on successfully improving the accuracy and performance of LLMs while mitigating challenges like risks and escalating costs. See real examples, including techniques to overcome common challenges using tools such as Databricks Mosaic AI and their new open LLM, DBRX.
By Iguazio
In this session, Yaron Haviv, CTO of Iguazio, was joined by Ehud Barnea, PhD, Head of AI at Tasq.ai, and Guy Lecker, ML Engineering Team Lead at Iguazio, to discuss how to validate, evaluate and fine-tune an LLM effectively. They shared firsthand tips on how to solve the production hurdle of LLM evaluation, improving LLM performance and eliminating risks, along with a live demo of a fashion chatbot that leverages fine-tuning to significantly improve the model responses.
By Iguazio
Iguazio would like to introduce two practical demonstrations showcasing our call center analysis tool and our innovative GenAI assistant. These demos illustrate how our GenAI assistant supports call center agents with real-time advice and recommendations during customer calls. This technology aims to improve customer interactions and boost call center efficiency. We're eager to share how our solutions can transform call center operations.
By Iguazio
Many enterprises operate expansive call centers, employing thousands of representatives who provide support and consult with clients, often spanning various time zones and languages. However, the successful implementation of gen AI-driven smart call center analysis applications presents unique challenges such as data privacy controls, potential biases, AI hallucinations, language translation and more.
By Iguazio
Nuclio is a high-performance serverless framework focused on data, I/O, and compute intensive workloads. It is well integrated with popular data science tools, such as Jupyter and Kubeflow; supports a variety of data and streaming sources; and supports execution over CPUs and GPUs. The Nuclio project began in 2017 and is constantly and rapidly evolving; many start-ups and enterprises are now using Nuclio in production. In this video, Tomer takes you through a quick demo of Nuclio, triggering functions both from the UI and the CLI.
By Iguazio
Generative AI has sparked the imagination with the explosion of tools like ChatGPT, Copilot and others, highlighting the importance of LLMs as the basis for modern AI applications. However, implementing GenAI in the enterprise is challenging, and it becomes even more difficult for banks, insurance companies, and other financial services companies. Many financial services companies are struggling and end up missing out on the great value of GenAI and the competitive edge it can provide.
By Iguazio
In this MLOps Live session, Gennaro, Head of Artificial Intelligence and Machine Learning at Sense, describes how he and his team built and perfected the Sense chatbot, what their ML pipeline looks like behind the scenes, and how they overcame complex challenges such as building a complex natural language processing (NLP) serving pipeline with custom model ensembles, tracking question-to-question context, and enabling candidate matching.

The Iguazio Data Science Platform automates MLOps with end-to-end machine learning pipelines, transforming AI projects into real-world business outcomes. It accelerates the development, deployment and management of AI applications at scale, enabling data scientists to focus on delivering better, more accurate and more powerful solutions instead of spending their time on infrastructure.

The platform is open and deployable anywhere - multi-cloud, on-prem or edge. Iguazio powers real-time data science applications for financial services, gaming, ad-tech, manufacturing, smart mobility and telecoms.

Dive Into the Machine Learning Pipeline:

  • Collect and Enrich Data from Any Source: Ingest multi-model data at scale in real time, including event-driven streaming, time series, NoSQL, SQL and files.
  • Prepare Online and Offline Data at Scale: Explore and manipulate online and offline data at scale, powered by Iguazio's real-time data layer and using your favorite data science and analytics frameworks, already pre-installed in the platform.
  • Accelerate and Automate Model Training: Continuously train models in a production-like environment, dynamically scaling GPUs and managed machine learning frameworks.
  • Deploy in Seconds: Deploy models and APIs from a Jupyter notebook or IDE to production in just a few clicks and continuously monitor model performance.
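The last two steps above (deploy, then continuously monitor) can be illustrated with plain Python. This is a hypothetical sketch of the pattern, not the platform's own API: a model trained on toy data, wrapped as a predict function, with each prediction recorded for monitoring.

```python
# Illustrative train -> deploy -> monitor loop using only the stdlib.
# "Training": fit y = a*x + b by least squares on a toy dataset.
import statistics

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8]
n = len(xs)
a = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
b = (sum(ys) - a * sum(xs)) / n

predictions = []  # monitoring buffer; a real system would stream these out

def predict(x: float) -> float:
    """The deployed 'endpoint': score one input and record it for monitoring."""
    y = a * x + b
    predictions.append(y)
    return y

for x in [1.5, 2.5, 3.5]:
    predict(x)

# A monitor would watch statistics like this for drift over time.
print(f"mean prediction: {statistics.mean(predictions):.2f}")
```

The point of the pattern is that serving and monitoring share one path: every scored request also feeds the statistics a drift detector or performance dashboard consumes.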

Bring Your Data Science to Life.