For a hands-on learning experience to develop LLM applications, join our LLM Bootcamp today.
First 4 seats get a discount of 20%! So hurry up!
5 DAYS | 40 HOURS | IN-PERSON / ONLINE
20% OFF
Includes:
INSTRUCTORS AND GUEST SPEAKERS
COURSE OVERVIEW
Anyone interested in getting a head start by gaining hands-on experience with building LLM applications.
Data professionals who want to supercharge their data skills using cutting-edge generative AI tools and techniques.
Product leaders at enterprises or startups seeking to leverage LLMs to enhance their products, processes and services.
A comprehensive introduction to the fundamentals of generative AI, foundation models, and large language models
An in-depth understanding of various LLM-powered application architectures and their relative tradeoffs
Hands-on experience with vector databases and vector embeddings
Practical experience with writing effective prompts for your LLM applications
Practical experience with orchestration frameworks like LangChain
Practical experience with fine-tuning, parameter-efficient tuning, and retrieval-augmented approaches
A custom LLM application created on selected datasets
Earn a Large Language Models certificate in association with the University of New Mexico Continuing Education, verifying your skills. Step into the market with a proven and trusted skillset.
In this module, we will understand the common use cases of large language models and the fundamental building blocks of such applications. Learners will be introduced to the following topics:
In this module, we will explore the primary challenges and risks associated with adopting generative AI technologies. Learners will be introduced to the following topics at a very high level without going into the technical details:
In this module, we will review how embeddings have evolved from the simplest one-hot encoding approach to more recent semantic embedding approaches. The module will cover the following topics:
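To give a flavor of the evolution this module traces, here is a minimal sketch contrasting one-hot encoding with dense semantic embeddings. The vectors are hand-made toy values purely for illustration, not output from any real embedding model:

```python
import numpy as np

# One-hot encoding: every word is an orthogonal vector, so no two
# distinct words are ever "similar" (all pairwise dot products are 0).
vocab = ["king", "queen", "apple"]
one_hot = np.eye(len(vocab))
print(one_hot[0] @ one_hot[1])  # 0.0: "king" and "queen" look unrelated

# Dense semantic embeddings (toy values): related words point in
# similar directions, so their cosine similarity is high.
emb = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.82, 0.12]),
    "apple": np.array([0.10, 0.05, 0.90]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(emb["king"], emb["queen"]))  # close to 1.0
print(cos(emb["king"], emb["apple"]))  # much lower
```

The key takeaway the module builds on: semantic embeddings make "similarity" a measurable quantity, which one-hot vectors cannot express.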
Dive into the world of large language models, discovering the potent mix of text embeddings, attention mechanisms, and the game-changing transformer model architecture.
Supplementary hands-on exercises
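As a taste of the attention mechanism at the heart of the transformer architecture, here is a minimal NumPy sketch of scaled dot-product attention. Shapes and values are illustrative only:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query positions, d_k = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
print(out.shape)       # (4, 8)
print(w.sum(axis=-1))  # each row of attention weights sums to 1
```

Each output position is a weighted mix of the value vectors, with weights determined by query-key similarity; stacking many such layers (plus feed-forward blocks) gives the transformer.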
Learn about efficient vector storage and retrieval with vector databases, indexing techniques, retrieval methods, and hands-on exercises.
Understand how semantic search overcomes the fundamental limitation of lexical search, namely its lack of semantics. Learn how to use embeddings and similarity to build a semantic search model.
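The core of semantic search can be sketched in a few lines: embed documents as vectors and rank them by cosine similarity to the query embedding. The document names and hand-made 3-d vectors below are illustrative stand-ins for real embedding-model output and a real vector database:

```python
import numpy as np

# Toy "vector store": documents mapped to hand-crafted embeddings.
docs = {
    "How to reset your password": np.array([0.90, 0.10, 0.00]),
    "Steps to recover account access": np.array([0.85, 0.20, 0.10]),
    "Today's cafeteria menu": np.array([0.00, 0.10, 0.95]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, k=2):
    # Rank documents by cosine similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]),
                    reverse=True)
    return ranked[:k]

# A query like "forgot my login" would embed near the first two docs,
# even though it shares no keywords with them; lexical search misses this.
query = np.array([0.88, 0.15, 0.05])
print(search(query))
```

Note the contrast with lexical search: the query matches by meaning (direction in embedding space), not by shared words.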
Unleash your creativity and efficiency with prompt engineering. Seamlessly prompt models, control outputs, and generate captivating content across various domains and tasks.
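A small example of the kind of control prompt engineering gives you: making the role, the constraints, and the output format explicit in a reusable template. The template wording is illustrative, not a prescribed recipe; in practice the built prompt is sent to an LLM API:

```python
# A minimal prompt template: explicit role, constraints, and output
# format make model outputs easier to control and parse.
TEMPLATE = """You are a concise technical writer.

Summarize the text below in exactly {n_bullets} bullet points.
Respond with bullet points only, no preamble.

Text:
\"\"\"{text}\"\"\"
"""

def build_prompt(text: str, n_bullets: int = 3) -> str:
    return TEMPLATE.format(text=text, n_bullets=n_bullets)

prompt = build_prompt("Large language models predict the next token.",
                      n_bullets=2)
print(prompt)
```

Varying only the template (tone, format, number of bullets) changes the output without touching any model code, which is the everyday workflow this module practices.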
In-depth discussion of fine-tuning large language models, exploring its rationale, its limitations, and Parameter-Efficient Fine-Tuning.
* Quantization of LLMs
* Low-Rank Adaptation (LoRA) and QLoRA
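The idea behind LoRA can be sketched in NumPy: freeze the pretrained weight matrix and learn only a low-rank update. The dimensions below are arbitrary illustrative choices:

```python
import numpy as np

# Low-Rank Adaptation (LoRA) sketch: instead of updating a full weight
# matrix W (d_out x d_in), train two small matrices A and B and add
# their product. Trainable parameters drop from d_out * d_in to
# r * (d_out + d_in).
d_in, d_out, r, alpha = 512, 512, 8, 16
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))               # trainable, zero init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); with B = 0 this equals the
    # base model, so training starts from the pretrained behavior.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
assert np.allclose(lora_forward(x), W @ x)  # unchanged before training

full = d_out * d_in
lora = r * (d_out + d_in)
print(f"trainable params: {lora} vs {full}")
```

QLoRA applies the same low-rank idea on top of a quantized base model, which is why the two topics are paired in this module.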
Build LLM Apps using LangChain. Learn about LangChain's key components such as models, prompts, parsers, memory, chains, and Question-Answering.
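To show the idea behind chains before introducing LangChain's own classes, here is the pattern in plain Python. This is deliberately not LangChain's API: the stubbed "model" stands in for a real LLM call, and the function names are made up for illustration:

```python
# A chain composes a prompt template, a model call, and an output
# parser into one reusable pipeline.
def prompt_step(inputs):
    return f"List {inputs['n']} synonyms for '{inputs['word']}', comma-separated."

def fake_model(prompt):
    # Stub standing in for an LLM API call.
    return "big, large, huge"

def parser_step(text):
    # Output parser: turn the raw completion into structured data.
    return [t.strip() for t in text.split(",")]

def run_chain(inputs, steps):
    out = inputs
    for step in steps:
        out = step(out)
    return out

result = run_chain({"word": "large", "n": 3},
                   [prompt_step, fake_model, parser_step])
print(result)  # ['big', 'large', 'huge']
```

LangChain provides production-grade versions of each step (models, prompts, parsers, memory) plus the glue to compose them, which is what this module covers.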
Use LLMs to make decisions about what to do next. Enable these decisions with tools. In this module, we’ll talk about agents. We’ll learn what they are, how they work, and how to use them within the LangChain library to superpower our LLMs.
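The agent loop described above can be sketched as follows. This is an illustrative toy, not LangChain's implementation: the "LLM" decision function and the calculator tool are stubs, and a real agent would prompt the model to pick an action in a ReAct-style format:

```python
def calculator(expression: str) -> str:
    # Toy tool: evaluates simple arithmetic with builtins disabled.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm_decide(question, observations):
    # Stub for the LLM's reasoning step: decide to call a tool,
    # or to finish once an observation is available.
    if not observations:
        return ("call", "calculator", "3 * 7")
    return ("finish", f"The answer is {observations[-1]}.")

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        decision = fake_llm_decide(question, observations)
        if decision[0] == "finish":
            return decision[1]
        _, tool_name, tool_input = decision
        observations.append(TOOLS[tool_name](tool_input))
    return "Gave up."

print(run_agent("What is 3 times 7?"))  # The answer is 21.
```

The loop is the essence of an agent: decide, act with a tool, observe, and repeat until done; LangChain packages this loop and the tool abstractions for you.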
In this module, we'll explore the challenges in developing RAG-based enterprise-level Large Language Model (LLM) applications. We will discuss the following:
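At its simplest, the RAG pattern underlying these applications looks like the sketch below. The corpus, keyword-match retriever, and prompt wording are stand-ins for a real vector database and LLM call, used purely for illustration:

```python
# Bare-bones retrieval-augmented generation (RAG) pipeline:
# retrieve relevant text, stuff it into the prompt, call the model.
CORPUS = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "Orders ship within 24 hours on weekdays.",
}

def retrieve(question):
    # Stub retriever using keyword match; a real system embeds the
    # question and runs a vector similarity search.
    hits = [text for key, text in CORPUS.items() if key in question.lower()]
    return hits or list(CORPUS.values())

def build_rag_prompt(question):
    context = "\n".join(retrieve(question))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {question}")

prompt = build_rag_prompt("How long do refunds take?")
print(prompt)
```

The enterprise challenges this module discusses (retrieval quality, context limits, freshness, evaluation) all live inside these few steps once the corpus and traffic grow.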
Dive into large language model (LLM) evaluation, examining its importance, common issues, benchmark datasets, and key metrics such as BLEU, ROUGE, and RAGAs, and apply these insights through a hands-on summarization exercise.
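As a concrete taste of these metrics, here is a minimal ROUGE-1 F1 implementation (unigram overlap between a candidate summary and a reference). Real evaluations use library implementations with stemming and multiple references, but the core computation is this:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    # Count overlapping unigrams, then combine precision and recall.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

ref = "the cat sat on the mat"
print(rouge1_f1("the cat sat on the mat", ref))  # 1.0 (exact match)
print(rouge1_f1("a dog ran in the park", ref))   # low overlap score
```

BLEU extends the same n-gram overlap idea with a precision focus and brevity penalty, while RAGAs-style metrics use an LLM as the judge; the hands-on exercise compares them on real summaries.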
On the last day of the LLM bootcamp, the learners will apply the concepts and techniques learned during the bootcamp to build an LLM application. Learners will choose to implement one of the following:
At the culmination of the bootcamp, you will have a fully operational LLM application deployed on a public platform such as Streamlit Community Cloud. This deployment process includes setting up a continuous integration and continuous deployment (CI/CD) pipeline to ensure that your application can be updated and maintained effortlessly. By the end of the bootcamp, you'll be equipped not only with a finished project but also with the knowledge and skills to deploy and scale applications in real-world scenarios.
Daily schedule: 9 am - 5 pm PT | Breakfast, lunch and beverages | Breakout sessions and in-class activities