In just one week, we will teach you how to build agentic AI applications. Learn the entire LLM application stack.
The Large Language Models Bootcamp is tailored for a diverse audience, including:
Looking to enhance their skills with generative AI tools. Learn how to integrate LLMs into your data workflows and boost efficiency with real-world AI applications.
At enterprises or startups aiming to leverage LLMs to improve products, processes, or services. Discover how LLMs can drive innovation and streamline decision-making across your product lifecycle.
Seeking a head start in understanding and working with LLMs. Build a solid foundation in generative AI with guided, beginner-friendly lessons and hands-on exercises.
Wanting to supercharge their expertise in building and deploying custom LLM-powered applications. Gain advanced skills in fine-tuning, prompting, and integrating LLMs into scalable systems and tools.
Learn from thought leaders at the forefront of building agentic AI applications.
Earn a verified certificate from The University of New Mexico Continuing Education:
Overview of the topics and practical exercises.
In this module, we will cover the common use cases of large language models and the fundamental building blocks of such applications. Learners will be introduced to the following topics at a very high level without going into the technical details:
In this module, we will explore the primary challenges and risks associated with adopting generative AI technologies. Learners will be introduced to the following topics at a very high level without going into the technical details:
In this module, we will review how embeddings have evolved from the simplest one-hot encoding approach to more recent semantic embedding approaches. The module will cover the following topics:
Review of classical techniques
Semantic Encoding Techniques
Text Embeddings
Hands-on Exercise
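The shift from classical one-hot encoding to semantic embeddings can be illustrated with a toy example in plain Python. The tiny 3-dimensional "embedding" vectors below are made-up values for illustration only; real embedding models produce vectors with hundreds or thousands of dimensions learned from data:

```python
import math

def one_hot(word, vocab):
    """Classical encoding: a sparse vector with a single 1 at the word's index."""
    vec = [0.0] * len(vocab)
    vec[vocab.index(word)] = 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

vocab = ["cat", "dog", "car"]

# One-hot vectors are mutually orthogonal: every pair has similarity 0,
# so "cat" is no closer to "dog" than it is to "car".
print(cosine(one_hot("cat", vocab), one_hot("dog", vocab)))  # 0.0

# Dense semantic embeddings (hypothetical values) place related words
# close together in vector space, so similarity becomes meaningful.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.1, 0.9],
}
print(round(cosine(embeddings["cat"], embeddings["dog"]), 3))  # high
print(round(cosine(embeddings["cat"], embeddings["car"]), 3))  # low
```

This is the core intuition behind the module: semantic encodings turn "similar meaning" into "nearby vectors", which one-hot encoding cannot express.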
Dive into the world of large language models, discovering the potent mix of text embeddings, attention mechanisms, and the game-changing transformer model architecture.
Hands-on Exercise
Learn about efficient vector storage and retrieval with vector databases, indexing techniques, retrieval methods, and hands-on exercises.
Hands-on Exercise
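At its core, a vector database stores embedding vectors and retrieves the ones most similar to a query. Here is a brute-force ("flat index") sketch in pure Python with made-up toy vectors; production systems add approximate indexing structures such as HNSW or IVF so search stays fast at millions of vectors:

```python
import math

class ToyVectorStore:
    """Minimal flat vector index: store (id, vector) pairs, retrieve the
    top-k entries by cosine similarity to a query vector."""

    def __init__(self):
        self.items = []  # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self.items.append((doc_id, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def search(self, query, k=2):
        """Score every stored vector against the query and return the
        k highest-scoring document ids (brute force: O(n) per query)."""
        scored = [(self._cosine(query, v), doc_id) for doc_id, v in self.items]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:k]]

store = ToyVectorStore()
store.add("doc-cats", [0.9, 0.1])
store.add("doc-dogs", [0.8, 0.3])
store.add("doc-cars", [0.1, 0.9])

# A query vector near the "pet" region retrieves the two pet documents.
print(store.search([1.0, 0.2], k=2))
```

The indexing techniques covered in this module exist precisely to replace this O(n) scan with sub-linear approximate search.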
Unleash your creativity and efficiency with prompt engineering. Seamlessly prompt models, control outputs, and generate captivating content across various domains and tasks.
○ Designing inputs to guide LLM behavior
○ Benefits: Control, efficiency, customization
○ Elements: Instructions, Context, Input, Output
○ Formats: Declarative, Interrogative, Structured
○ Few-Shot: Examples guide behavior
○ Chain-of-Thought: Step-by-step reasoning
○ Tree-of-Thought: Branching problem-solving
○ ReAct: Reasoning + external actions
○ Advanced: Maieutic, DSP, Multi-modal, Function Calling
○ Temperature: Randomness control (0-1)
○ Top-k/Top-p: Token selection limits
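The temperature and top-k knobs listed above can be demonstrated with a small sampling function over toy logits (pure Python; real models produce one logit per vocabulary entry, but the mechanics are the same):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Sample a token from raw logits.

    temperature < 1 sharpens the distribution (more deterministic output);
    temperature > 1 flattens it (more random output).
    top_k keeps only the k highest-scoring tokens before sampling;
    top_k=1 is exactly greedy decoding."""
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        items = items[:top_k]
    # Softmax with temperature scaling (subtract max for numerical stability).
    scaled = [v / temperature for _, v in items]
    m = max(scaled)
    weights = [math.exp(v - m) for v in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices([tok for tok, _ in items], weights=probs)[0]

logits = {"cat": 4.0, "dog": 3.5, "pizza": 0.5}

# top_k=1 is greedy: the highest-logit token is always chosen.
print(sample_token(logits, top_k=1))  # cat
# With a high temperature, lower-ranked tokens are sampled more often.
print(sample_token(logits, temperature=2.0))
```

This is why low temperature gives reproducible, focused completions while high temperature gives varied, creative ones.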
An in-depth discussion of fine-tuning large language models, exploring the rationale, limitations, and Parameter-Efficient Fine-Tuning (PEFT).
○ Continuing training of a pre-trained model on a custom dataset
○ Benefits: Domain adaptation, tone/style adjustment, task-specific transformation
○ DPO/RLHF: Human preference alignment
○ PEFT vs Full: <10% vs all parameters
○ Data Quality: Clean, domain-specific, balanced
○ Challenges: Catastrophic forgetting, overfitting, drift
○ In-Class: Fine-tuning open-source LLMs
○ Homework: OpenAI/Llama on Azure AI
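The "PEFT vs Full: <10% vs all parameters" comparison above comes from how adapter methods such as LoRA work: the original d×k weight matrix stays frozen, and only a rank-r update (a d×r factor times an r×k factor) is trained. A back-of-the-envelope calculation with toy dimensions (not a real model config) shows the savings:

```python
def lora_trainable_fraction(d, k, r):
    """Fraction of parameters trained when a d x k weight matrix is frozen
    and a rank-r LoRA update (d x r plus r x k factors) is trained instead."""
    full_params = d * k          # full fine-tuning updates every weight
    lora_params = d * r + r * k  # LoRA trains only the two low-rank factors
    return lora_params / full_params

# Toy transformer-scale layer: a 4096 x 4096 weight with a rank-8 adapter.
frac = lora_trainable_fraction(4096, 4096, 8)
print(f"{frac:.2%} of this layer's parameters are trainable")  # 0.39%
```

Small ranks keep the trainable fraction well under the 10% ceiling mentioned above, which is what makes fine-tuning large models feasible on modest hardware.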
Build LLM apps using the LangChain framework for orchestrating language models, prompts, memory, chains, and retrieval systems.
Introduction to LangChain: Orchestration framework for LLM applications, simplifies AI model integration, and provides modular components for complex workflows
Model I/O (Interface with any LLM): Prompts (templates, example selectors), Language models (LLM, Chat, Embedding), Output parsers for structured responses
Retrieval (Connecting external data): Document loaders (public/private/structured/unstructured data), transformers (chunking, metadata), embedding/vector stores, optimized retrieval techniques
Chains (Complex LLM workflows): Foundational types (LLM, Router, Sequential, Transformation), Document chains (stuff, refine, map-reduce) for large document summarization
Memory (Context retention): Short-term (thread-scoped) and long-term (cross-thread) memory for past interactions, buffer management, and conversation continuity.
LangGraph & Advanced Features: Cyclical workflows, agency-vs-reliability trade-off management, stateful memory, and cognitive architectures for agentic applications
Hands-on Exercise
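The Model I/O pattern in this module (prompt template → language model → output parser, composed into a chain) can be sketched without the framework itself. The function names below are ours, not LangChain's, and `fake_llm` is a stand-in for a real model call:

```python
def prompt_template(template):
    """Return a function that fills the template from keyword arguments,
    playing the role of a prompt template component."""
    return lambda **kwargs: template.format(**kwargs)

def fake_llm(prompt):
    """Stand-in for a real LLM call; returns a canned response so the
    example is self-contained and deterministic."""
    return f"ANSWER: Paris | PROMPT WAS: {prompt}"

def output_parser(raw):
    """Pull the structured answer out of the raw model response."""
    return raw.split("|")[0].removeprefix("ANSWER:").strip()

def chain(*steps):
    """Compose steps left to right: each step's output feeds the next."""
    def run(**kwargs):
        result = steps[0](**kwargs)
        for step in steps[1:]:
            result = step(result)
        return result
    return run

qa = chain(
    prompt_template("Answer concisely: What is the capital of {country}?"),
    fake_llm,
    output_parser,
)
print(qa(country="France"))  # Paris
```

LangChain's value is providing battle-tested versions of each of these pieces, plus the retrieval, memory, and agent components listed above, so you compose rather than reimplement them.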
LLMOps encompasses the practices, techniques, and tools used for the operational management of large language models in production environments. While LLMs offer tremendous business value, humans are involved in all stages of the lifecycle of an LLM, from acquisition of data to interpretation of insights. In this module, we will learn about the following:
Principles of Responsible AI
○ Fairness and eliminating bias
○ Reliability and safety
○ Privacy and data protection
Review techniques for assessing LLM applications, including:
○ Model fine-tuning
○ Model inference and serving
○ Model monitoring with human feedback
Data-centric LLMOps
○ Guardrails: Define rules to govern prompts and responses for LLM applications
○ Evaluation: Assess LLM performance using known prompts to identify issues
○ Observability: Collect telemetry data about the LLM’s internal state for monitoring and issue detection
Hands-on Exercise: Use LangKit to evaluate LLM performance on specific prompts
In this module, we’ll explore the challenges in developing RAG-based enterprise-level Large Language Model (LLM) applications. We will discuss the following:
Basic RAG pipeline and the limitations of the naïve approach
Indexing
○ Chunk size optimization
○ Embedding models
Querying – Challenges
○ Large document slices
○ Query ambiguity
Querying – Optimizations
○ Multi-query retrieval
○ Multi-step retrieval
○ Step-back prompting
○ Query transformations
Retrieval – Challenges
○ Inefficient retrieval of large documents
○ Lack of conversation context
○ Complex retrieval from multiple sources
Retrieval – Optimizations
○ Hybrid search and metadata integration
○ Sentence window retrieval
○ Parent-child chunk retrieval
○ Hierarchical index retrieval
○ Hypothetical Document Embeddings (HyDE)
Generation – Challenges
○ Information overload
○ Insufficient context window
○ Chaotic contexts
○ Hallucination
○ Inaccurate responses
Generation – Optimizations
○ Information compression
○ Thread of Thought (ThoT)
○ Generator fine-tuning
○ Adapter methods
○ Chain of Note (CoN)
○ Expert prompting
○ Access control and governance
Explore LLM evaluation, key metrics like BLEU and ROUGE, with hands-on exercises.
Introduction to LLM Evaluation
RAGAS
Detailed Workflow Stages
Hands-on Exercise
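Metrics such as ROUGE score a generated text by its n-gram overlap with a reference. A minimal ROUGE-1 (unigram) precision/recall/F1, written from the metric's standard definition, is shown below; real evaluations use a library such as rouge-score, which adds stemming and other refinements:

```python
from collections import Counter

def rouge_1(candidate, reference):
    """ROUGE-1: unigram overlap between candidate and reference texts.
    Returns (precision, recall, f1). Uses clipped (multiset) counts, so a
    repeated word only matches as many times as it appears in the other text."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = (2 * precision * recall / (precision + recall)) if overlap else 0.0
    return precision, recall, f1

p, r, f1 = rouge_1("the cat sat on the mat", "the cat lay on the mat")
print(round(p, 3), round(r, 3), round(f1, 3))  # 5 of 6 unigrams overlap
```

BLEU follows the same overlap idea but measures precision over multiple n-gram orders with a brevity penalty; the hands-on exercise explores both, along with LLM-specific frameworks like RAGAS.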
Apply bootcamp concepts to build custom LLM applications with real-world deployment and scaling capabilities.
Project Options
Basic Chatbot (general queries), Chatbot Agent (data integration), Chat with Your Data (document upload/interaction), Web Search Assistant, Question Answering systems
Provided Resources
Comprehensive industry datasets, step-by-step implementation guides, ready-to-use code templates in sandbox environments, cloud resources with OpenAI key access
Deployment & Outcome
Streamlit cloud deployment with CI/CD pipeline, fully operational application, functionality demonstration, and skills for real-world scaling
Leverage AI for industry transformation, competitive advantage, social media presentation, and professional development
Attend the LLM Bootcamp for free
All of our programs are backed by a certificate from The University of New Mexico, Continuing Education. This means that you may be eligible to attend the bootcamp for FREE.
Not sure? Fill out the form so we can help.
Learn to build Large Language Model applications from leading experts in industry.
Our LLM Bootcamp has been attended by many individuals from non-technical backgrounds, including those in business consulting and strategy roles. The program is designed to make Generative AI concepts accessible, regardless of your technical expertise.
You’ll gain practical insights into how LLMs are applied across industries, empowering you to advise clients better, lead AI initiatives, or collaborate effectively with technical teams.
You need a very basic level of Python programming for our LLM Bootcamp.
Yes! We offer a short introductory course to help you get comfortable with Python. Plus, all code is provided in Jupyter Notebooks, so you’ll focus on understanding rather than writing code from scratch.
The address for the LLM Bootcamp venue is given below:
Seattle Venue Address: Data Science Dojo, 2331 130th Ave NE, Bellevue, WA 98005, United States. [View on Map]
Our LLM Bootcamp is an immersive five-day, 40-hour learning experience, available both in-person (Seattle) and online.
Yes, these sessions are live and are designed to be highly interactive.
Yes, the online session will be held at the same time with the same instructors as the in-person session.
By joining the LLM Bootcamp, you will receive:
Yes. You will receive a certificate from The University of New Mexico with 5 CEUs.
Yes, participants who complete the bootcamp will receive a certificate of completion in association with The University of New Mexico. This certificate can be a valuable addition to your professional portfolio and demonstrate your expertise in building large language model applications.
Each live session is recorded and made available for review to both online and in-person participants a few days after the bootcamp concludes, allowing them to view it at their convenience.
No, the price for the Large Language Models Bootcamp will remain the same, regardless of whether you attend in person or online.
If, for any reason, you decide to cancel, we will gladly refund your registration fee in full if you notify us at least five business days before the start of the training. We would also be happy to transfer your registration to another cohort. Refunds cannot be processed if you have moved to a different cohort after registration.
Transfers are allowed once with no penalty. Transfers requested more than once will incur a $200 processing fee.