Fine-Tuning a Foundation Model for Multiple Tasks

Sep 15, 2023

In this video we discuss why fine-tuning is needed to create more contextually accurate LLMs, and the methods you can use to accomplish it. We also demo our newest Applied ML Prototype (AMP), which shows how to implement LLM fine-tuning jobs using the QLoRA and Accelerate implementations available in Hugging Face's open-source PEFT library, along with an example application that swaps the fine-tuned adapters in real time for inference targeting different tasks.
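To make the adapter idea concrete, here is a minimal sketch of the low-rank update that (Q)LoRA adds on top of a frozen base layer, written in plain PyTorch. This is illustrative only, not the PEFT API: the class name `LoRALinear` and its parameters are our own, and real QLoRA additionally quantizes the frozen base weights to 4 bits.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the base weights stay frozen during fine-tuning
        # Low-rank factors: only A and B are trained.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction (B @ A) applied to x.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

base = nn.Linear(64, 64)
layer = LoRALinear(base)
x = torch.randn(2, 64)

# With B zero-initialized, the adapted layer initially matches the base layer exactly.
assert torch.allclose(layer(x), base(x))
```

Because only the small `A` and `B` matrices differ per task, serving one base model with several such adapter pairs and switching between them at inference time is cheap, which is what the AMP's real-time adapter swapping exploits.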
Learn more at cloudera.com
#ai #ml