Interested in a hands-on learning experience for developing LLM applications?
Join our LLM Bootcamp today and get 28% off for a limited time!
NEW
5 DAYS · 40 HOURS · IN-PERSON / ONLINE
28% OFF
4.95 · 640+ reviews
INSTRUCTORS AND GUEST SPEAKERS
COURSE OVERVIEW
In collaboration with the University of New Mexico Continuing Education
Anyone interested in getting a head start with hands-on experience building LLM applications.
Data professionals who want to supercharge their data skills using cutting-edge generative AI tools and techniques.
Product leaders at enterprises or startups seeking to leverage LLMs to enhance their products, processes and services.
A comprehensive introduction to the fundamentals of generative AI, foundation models, and large language models (LLMs)
An in-depth understanding of various LLM-powered application architectures and their relative tradeoffs
Hands-on experience with vector databases and vector embeddings
Practical experience with writing effective prompts for your LLM applications
Practical experience with orchestration frameworks like LangChain and LlamaIndex
Hands-on experience deploying your LLM applications using Azure and the Hugging Face cloud
Practical experience with fine-tuning, parameter-efficient tuning, and retrieval-augmented approaches
A custom LLM application created on selected datasets
Earn a Large Language Models certificate in association with the University of New Mexico Continuing Education, verifying your skills. Step into the market with a proven and trusted skillset.
In this module, we will cover the common use cases of large language models and the fundamental building blocks of such applications. Learners will be introduced to the following topics at a very high level, without going into the technical details:
In this module, we will explore the primary challenges and risks associated with adopting generative AI technologies. Learners will be introduced to the following topics at a very high level without going into the technical details:
In this module, we will review how embeddings have evolved from the simplest one-hot encoding approach to more recent semantic embedding approaches. The module will go over the following topics:
Dive into the world of large language models, discovering the potent mix of text embeddings, attention mechanisms, and the game-changing transformer model architecture.
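To preview the attention discussion, here is a minimal sketch of scaled dot-product attention in NumPy. It is illustrative only: the random matrices stand in for real query, key, and value projections, and the module's own notebooks go much further.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                        # weighted sum of values

# Toy example: 3 tokens with 4-dimensional embeddings (random placeholders)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```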
Learn about efficient vector storage and retrieval with vector database, indexing techniques, retrieval methods, and hands-on exercises.
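For a flavour of the hands-on exercises, the sketch below builds a tiny exact (brute-force) vector index with FAISS. The random vectors are placeholders for real document embeddings, and the lab itself may use a managed vector database instead.

```python
import numpy as np
import faiss  # pip install faiss-cpu

dim = 128                                                  # embedding dimensionality (placeholder)
rng = np.random.default_rng(42)
vectors = rng.normal(size=(1000, dim)).astype("float32")   # stand-in for document embeddings

index = faiss.IndexFlatL2(dim)     # exact L2 index; real apps often use ANN indexes
index.add(vectors)                 # store all document vectors

query = rng.normal(size=(1, dim)).astype("float32")
distances, ids = index.search(query, 5)   # 5 nearest neighbours
print(ids[0], distances[0])
```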
Understand how semantic search overcomes the fundamental limitation of lexical search, i.e., its lack of semantics. Learn how to use embeddings and similarity to build a semantic search model.
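To illustrate the embeddings-plus-similarity idea, here is a minimal semantic search sketch using the sentence-transformers library. The model name is a common default rather than necessarily the one used in class, and the documents are toy examples.

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedding model

docs = [
    "How to reset my account password",
    "Quarterly revenue grew by 12 percent",
    "The office is closed on public holidays",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query = "I forgot my login credentials"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks documents by meaning rather than keyword overlap
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = scores.argmax().item()
print(docs[best], float(scores[best]))
```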
Unleash your creativity and efficiency with prompt engineering. Seamlessly prompt models, control outputs, and generate captivating content across various domains and tasks.
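As a small example of the prompting patterns covered here, the sketch below sends a structured prompt (role, task, constraints) through the OpenAI Python client. The model name is a placeholder and an OPENAI_API_KEY environment variable is assumed; treat it as a sketch rather than the bootcamp's exact setup.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# A structured prompt: role, task, constraints, and the input to work on.
system = "You are a concise technical writer. Answer in at most three bullet points."
user = (
    "Summarize the key trade-offs between fine-tuning an LLM and "
    "retrieval-augmented generation for a product FAQ bot."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name; any chat model works
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ],
    temperature=0.2,       # lower temperature => more deterministic output
)
print(response.choices[0].message.content)
```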
An in-depth discussion of fine-tuning large language models, exploring the rationale, limitations, and Parameter-Efficient Fine-Tuning (PEFT), including the topics below (see the sketch after this list):
* Quantization of LLMs
* Low-Rank Adaptation (LoRA) and QLoRA
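To make the PEFT discussion concrete, here is a minimal LoRA configuration sketch using Hugging Face's transformers and peft libraries. The base model and hyperparameters are illustrative placeholders, not the bootcamp's exact recipe.

```python
from transformers import AutoModelForCausalLM   # pip install transformers peft
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"                      # small placeholder base model
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: freeze the base weights and learn small low-rank update matrices instead.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # typically well under 1% of the full model
```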
Build LLM Apps using LangChain. Learn about LangChain's key components such as models, prompts, parsers, memory, chains, and Question-Answering.
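A minimal chain along these lines might look like the sketch below, assuming recent langchain-core and langchain-openai packages (LangChain's imports have shifted across versions) and an OPENAI_API_KEY in the environment; the model name is a placeholder.

```python
# pip install langchain-core langchain-openai
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Model, prompt, and parser: three key components composed into a chain.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # placeholder model name
prompt = ChatPromptTemplate.from_template(
    "Answer the question in one short paragraph.\n\nQuestion: {question}"
)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "What is a vector database used for?"}))
```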
Use LLMs to make decisions about what to do next. Enable these decisions with tools. In this module, we’ll talk about agents. We’ll learn what they are, how they work, and how to use them within the LangChain library to superpower our LLMs.
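As a rough illustration, the sketch below wires a single custom tool into LangChain's classic initialize_agent API (newer releases deprecate this in favour of LangGraph, so treat it purely as a sketch). The tool, question, and model name are all placeholders.

```python
from langchain.agents import initialize_agent, AgentType, Tool  # pip install langchain langchain-openai
from langchain_openai import ChatOpenAI

def word_count(text: str) -> str:
    """A trivial placeholder tool the agent can decide to call."""
    return str(len(text.split()))

tools = [Tool(name="word_counter", func=word_count,
              description="Counts the number of words in the given text.")]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)   # placeholder model name
agent = initialize_agent(tools, llm,
                         agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True)                 # prints the reasoning/tool-use trace

print(agent.run("How many words are in the sentence 'LLM agents can call tools'?"))
```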
LLMOps encompasses the practices, techniques, and tools used for the operational management of large language models in production environments. While LLMs offer tremendous business value, humans are involved in all stages of an LLM's lifecycle, from data acquisition to interpretation of insights. In this module, we will learn about the following:
In this module, we'll explore the challenges in developing RAG-based enterprise-level Large Language Model (LLM) applications. We will discuss the following:
Dive into Large Language Model (LLM) evaluation, examining its importance, common issues, and key metrics such as BLEU and ROUGE, and apply these insights through a hands-on summarization exercise.
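To ground the metrics discussion, here is a small sketch computing ROUGE (via the rouge-score package) and a smoothed sentence-level BLEU (via NLTK) on a toy reference/candidate pair; the in-class summarization exercise works on real model outputs.

```python
from rouge_score import rouge_scorer                                    # pip install rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction  # pip install nltk

reference = "the bootcamp covers retrieval augmented generation and fine tuning"
candidate = "the bootcamp covers fine tuning and retrieval augmented generation"

# ROUGE: recall-oriented n-gram / longest-common-subsequence overlap
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score(reference, candidate))

# BLEU: precision-oriented n-gram overlap (smoothing avoids zero scores on short texts)
bleu = sentence_bleu([reference.split()], candidate.split(),
                     smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}")
```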
On the last day of the LLM bootcamp, the learners will apply the concepts and techniques learned during the bootcamp to build an LLM application. Learners will choose to implement one of the following:
At the culmination of the bootcamp, attendees will have a fully operational LLM application deployed on a public platform such as Streamlit Community Cloud. The deployment process includes setting up a continuous integration and continuous deployment (CI/CD) pipeline so that your application can be updated and maintained effortlessly. By the end of the bootcamp, you'll be equipped not only with a finished project but also with the knowledge and skills to deploy and scale applications in real-world scenarios.
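As an indication of how lightweight that final deployment can be, here is a minimal Streamlit front end for an LLM app. The model name is a placeholder and an OpenAI key configured as a secret is assumed; your capstone will wire in your own chain or fine-tuned model.

```python
# app.py -- a minimal Streamlit front end for an LLM application (illustrative sketch)
import streamlit as st
from openai import OpenAI   # assumes OPENAI_API_KEY is configured as a secret

st.title("LLM Bootcamp Demo App")
question = st.text_input("Ask a question:")

if question:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    st.write(response.choices[0].message.content)
```

Running `streamlit run app.py` serves the app locally; connecting the project's GitHub repository to Streamlit Community Cloud then gives a simple deploy-on-push workflow along the lines of the CI/CD setup described above.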
Daily schedule: 9 am - 5 pm PT | Breakfast, lunch and beverages | Breakout sessions and in-class activities
Yes, a very basic level of Python programming is required.
The credit covers expenses related to OpenAI keys, GPU cloud usage, and VMs allocated for your project. While there is no limit on individual credits, the total cumulative usage should not exceed USD $500. For example, if your OpenAI usage approaches this limit, measures will be taken to prevent exceeding the $500 threshold.
Yes. You will leave the bootcamp with your own fully functional LLM application.
Yes. You can find out more about our financing plans.
1. An assortment of software subscriptions worth $500, enhancing the value of the program.
2. 12 months of unlimited access to all learning resources, allowing for continuous learning and review.
3. A repository of practice materials.
Data Science Dojo has conducted bootcamps in various cities in the past and plans to continue expanding to other locations. They are also exploring options for online bootcamps.
Transfers are allowed once with no penalty. Transfers requested more than once will incur a $200 processing fee.
Yes, you will need to bring your laptop. As for software installation, we use browser-based coding labs. You will not need to install any software on your laptop.
Yes, you can fine-tune the LLM model on your own custom data sources. The bootcamp will provide guidance on how to add custom data sources and fine-tune the model to answer specific questions related to those sources. However, please make sure to get the dataset reviewed before the bootcamp starts to avoid any last-minute inconveniences.
Yes, participants who successfully complete the bootcamp will receive a certificate of completion. This certificate can be a valuable addition to your professional portfolio and demonstrate your expertise in building large language model applications.
Yes, the participants will have access to the learning platform for 12 months after the bootcamp ends.
Yes, Data Science Dojo provides ongoing support and guidance to bootcamp participants even after the program ends. This includes access to a community of fellow participants and instructors who can help answer questions and provide further assistance.
Yes, our live instructor-led sessions are interactive. During these sessions, students are encouraged to ask questions, and our instructors respond without rushing. Additionally, discussions within the scope of the topic being taught are actively encouraged. We understand that questions may arise during homework, and to assist with that, we offer office hours to help unblock students between sessions. Rest assured, you won’t have to figure everything out by yourself – we are committed to providing the support you need for a successful learning experience.
If, for any reason, you decide to cancel, we will gladly refund your registration fee in full if notified by the Monday prior to the start of the training. We would also be happy to transfer your registration to another bootcamp or workshop. Refunds cannot be processed if you have transferred to a different bootcamp after registration.
PRICE | UPCOMING DATES | APPLICATION DEADLINE