
Fine-Tune, Serve and Scale Workflows for Large Language Models

Agenda

Build Scalable AI Workflows for LLMs and Serve Them in an Application

Training today’s complex AI models requires integrating multiple steps into an interconnected workflow. Modern MLOps/LLMOps tools help scale AI training by orchestrating these workflows. Effective tooling provides a reliable framework for machine learning operations by streamlining processes, increasing efficiency, and enhancing reproducibility.

In this hands-on session, we will explore how modern MLOps/LLMOps tooling can optimize AI workflows, streamline processes, and enhance reproducibility. You will learn how to fine-tune an LLM using Hugging Face, integrate it into a scalable workflow with Union.ai, and deploy it in a real-world application.

What we will cover

  • The fundamentals of MLOps and LLMOps pipelines.
  • Fine-tuning a Hugging Face LLM for sequence classification on unstructured data.
  • Building a scalable and reproducible production-grade workflow.
  • Deploying and serving a fine-tuned LLM in a real-time Streamlit app.
  • Applying these concepts to more complex pipelines and models.
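To give a rough sense of the pipeline shape these topics describe, here is a minimal sketch in plain Python. The function names, toy data, and labels are hypothetical stand-ins: in the session itself, fine-tuning uses Hugging Face, orchestration uses Union.ai tasks and workflows, and serving happens in a Streamlit app.

```python
# Minimal sketch of the fine-tune -> serve pipeline shape.
# Plain functions stand in for orchestrated tasks; the real session
# fine-tunes a Hugging Face model for sequence classification.

def load_data() -> list[tuple[str, int]]:
    # Stand-in for loading an unstructured-text dataset (labels hypothetical).
    return [("great product", 1), ("terrible support", 0)]

def fine_tune(data: list[tuple[str, int]]) -> dict:
    # Stand-in for model training; builds a toy word-to-label lookup.
    model = {}
    for text, label in data:
        for word in text.split():
            model[word] = label
    return model

def predict(model: dict, text: str) -> int:
    # Stand-in for inference behind a real-time app UI.
    votes = [model[w] for w in text.split() if w in model]
    return round(sum(votes) / len(votes)) if votes else 0

def pipeline(text: str) -> int:
    # Stand-in for a workflow definition chaining the steps above.
    model = fine_tune(load_data())
    return predict(model, text)
```

The point of the sketch is the structure, not the model: each function maps to one orchestrated step, so swapping the toy lookup for a real fine-tuned classifier changes the internals but not the workflow shape.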


To follow along, you will need a free Union.ai account, a GitHub account, and a Google account for Colab. By the end of this webinar, you will have a solid understanding of how to build and deploy AI workflows for LLMs and integrate them seamlessly into your MLOps stack.

Sage Elliot

AI Engineer at Union.ai

Sage is an AI engineer and developer advocate who loves educating people and making ML applications more reliable. He has taught thousands of people in live workshops on how to get started with Python, machine learning, computer vision, and AI observability.