Building RAG-powered LLM applications is nuanced. Learn from experts who build scalable, real-world LLM applications.






The Mastering LangChain course is designed for professionals who want to go beyond the basics of generative AI and learn how to build advanced applications using LangChain. This course is ideal for those who want to create, manage, and optimize Retrieval-Augmented Generation (RAG) systems and AI agents.
Learn how to design and deploy intelligent LLM applications using LangChain. Gain hands-on experience with chains, agents, memory, and retrieval while building scalable AI solutions for real-world use cases.
Understand how LangChain powers modern AI workflows. Learn how to manage cross-functional AI projects, evaluate architecture choices, and bring generative AI capabilities into your products and operations.
Even if you’re not from a technical background, this course helps you grasp how LangChain connects data, models, and reasoning to power intelligent systems. Get a practical understanding of how RAG applications are shaping the future of AI-driven products and services.
Learn from practitioners with extensive industry experience in building generative AI and large language model applications at scale.
Overview of the topics and practical exercises.
Orchestration framework for LLM applications
Purpose: Simplifies AI model integration
Features: Modular components for complex workflows
Benefits: Streamlined development, Reusability, Scalability
Hands-on Exercise: Set up LangChain environment, explore framework architecture, install dependencies and configure basic LangChain project structure
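As a taste of the setup exercise, a typical environment might be prepared as follows. The `langchain-openai` package and the `OPENAI_API_KEY` variable reflect one common provider choice, not a course requirement:

```shell
# Install the core framework plus one model-provider integration
pip install langchain langchain-openai

# Most providers are configured through an environment variable
export OPENAI_API_KEY="sk-..."   # replace with your own key

# Verify the installation
python -c "import langchain; print(langchain.__version__)"
```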
Prompts: Templates, Example selectors, Dynamic formatting
Language models: LLM interfaces, Chat models, Embedding models
Output parsers: Structured response extraction, Type validation, Format conversion
Applications: Standardized communication, Flexible model switching, Consistent output handling
Hands-on Exercise: Interface with any LLM using Model I/O, create prompt templates, implement output parsers, switch between different language models, and extract structured responses
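To make the Model I/O ideas concrete, here is a minimal pure-Python sketch of a prompt template feeding an output parser. The class names are illustrative stand-ins, not LangChain's actual API, and the model response is faked:

```python
import json

class PromptTemplate:
    """Minimal stand-in for a prompt template: named slots filled at call time."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class JsonOutputParser:
    """Extract a structured dict from a model response that contains JSON."""
    def parse(self, text: str) -> dict:
        start, end = text.index("{"), text.rindex("}") + 1
        return json.loads(text[start:end])

prompt = PromptTemplate(
    "Summarize the sentiment of: {review}\nReply as JSON with a 'label' key."
)
message = prompt.format(review="The battery life is fantastic.")

# A real chat model would produce this; here we fake the response.
fake_response = 'Sure! {"label": "positive"}'
parsed = JsonOutputParser().parse(fake_response)
print(parsed["label"])  # → positive
```

The same template can be reused with different models, and the parser guarantees downstream code always receives a dict rather than free-form text.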
Document loaders: Public data, Private data, Structured data, Unstructured data
Transformers: Chunking strategies, Metadata extraction, Document preprocessing
Storage: Embedding generation, Vector stores, Similarity search
Techniques: Optimized retrieval, Semantic search, Context-aware querying
Applications: Knowledge base integration, External data access, Dynamic information retrieval
Hands-on Exercise: Build RAG (Retrieval-Augmented Generation) application with Retrieval, load documents from various sources, implement chunking strategies, create vector embeddings, set up vector stores, and perform semantic search queries
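The retrieval pipeline above (chunk, embed, store, search) can be sketched end to end in plain Python. The bag-of-words "embedding" and in-memory "vector store" are deliberately toy substitutes for the learned embedding models and dedicated vector databases the course covers:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Fixed-size character chunking with overlap, a common first strategy."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real pipelines use an embedding model."""
    return Counter(w.strip(".,?") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Chunking: overlapping windows preserve context across boundaries.
parts = chunk("A long document would be split into overlapping windows "
              "before embedding.", size=30, overlap=5)

# Indexing: a toy in-memory "vector store".
docs = [
    "LangChain chains compose LLM calls.",
    "Vector stores enable semantic search.",
    "Agents pick tools dynamically.",
]
index = [(d, embed(d)) for d in docs]

# Retrieval: return the most similar document to the query.
query = embed("how does semantic search work")
best = max(index, key=lambda pair: cosine(query, pair[1]))[0]
print(best)  # → Vector stores enable semantic search.
```

In a full RAG application the retrieved chunk would then be injected into the LLM prompt as grounding context.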
Foundational types: LLM chains, Router chains, Sequential chains, Transformation chains
Document chains: Stuff (all-at-once), Refine (iterative), Map-reduce (parallel processing)
Use cases: Large document summarization, Multi-step reasoning, Conditional logic
Benefits: Workflow automation, Modular design, Composability
Hands-on Exercise: Create complex LLM workflows with Chains, build sequential chains for multi-step tasks, implement document summarization using stuff/refine/map-reduce chains, design router chains for conditional logic, and compose modular chain components
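The map-reduce document chain listed above can be sketched in a few lines. `fake_llm` is a stand-in for a real model call (it just keeps the first sentence), but the structure — summarize chunks independently, then summarize the summaries — is the actual pattern:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a model call; returns a trivially shortened 'summary'."""
    text = prompt.split(":", 1)[1].strip()
    return text.split(".")[0] + "."

def map_reduce_summarize(chunks: list[str]) -> str:
    # Map: summarize each chunk independently (parallelizable in practice).
    partials = [fake_llm(f"Summarize: {c}") for c in chunks]
    # Reduce: summarize the concatenated partial summaries.
    return fake_llm("Summarize: " + " ".join(partials))

chunks = [
    "Chains compose steps. They keep workflows modular.",
    "Router chains branch on conditions. Sequential chains run in order.",
]
print(map_reduce_summarize(chunks))  # → Chains compose steps.
```

The stuff chain would instead pass all chunks in one prompt, and the refine chain would fold each chunk into a running summary; map-reduce trades extra model calls for parallelism and unbounded document size.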
Short-term memory: Thread-scoped storage, Session-based context, Immediate recall
Long-term memory: Cross-thread persistence, Historical data access, Knowledge accumulation
Management: Buffer management, Conversation continuity, Past interaction tracking
Applications: Contextual responses, Personalization, Stateful conversations
Hands-on Exercise: Add Memory to LLM-based application, implement short-term conversation buffers, configure long-term memory persistence, manage conversation history, enable context-aware responses, and build stateful chatbot with memory retention
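A short-term conversation buffer of the kind described above can be sketched with a bounded queue; the class name is illustrative, not LangChain's API:

```python
from collections import deque

class ConversationBuffer:
    """Short-term memory: keep only the last `k` exchanges (thread-scoped)."""
    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def as_context(self) -> str:
        """Render the buffer as text to prepend to the next model prompt."""
        return "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)

memory = ConversationBuffer(k=2)
memory.add("Hi, I'm Ada.", "Hello Ada!")
memory.add("What is RAG?", "Retrieval-Augmented Generation.")
memory.add("And LangGraph?", "A framework for stateful agent workflows.")

# The oldest turn fell out of the buffer; the prompt sees only the last two.
print(memory.as_context())
```

Long-term memory replaces the in-process deque with persistent storage keyed by user or thread, so context survives across sessions.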
Agent architecture: Reasoning engines, Tool integration, Action selection
Components: AgentExecutor, Tool calling, Observation processing
Capabilities: Autonomous task execution, Dynamic planning, Adaptive behavior
Applications: Complex problem solving, Multi-tool orchestration, Goal-oriented tasks
Hands-on Exercise: Harness dynamic decision-making using Agents, create agent with reasoning capabilities, integrate external tools and APIs, implement AgentExecutor for autonomous task completion, design multi-step agent workflows, and build agents that adapt to different scenarios
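The agent loop (decide on a tool, call it, process the observation) can be sketched as follows. `fake_llm_plan` stands in for the prompted model that normally does the tool selection, and the tools are toy lambdas rather than real APIs:

```python
# Tools the agent may call; real agents wrap search, APIs, calculators, etc.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def fake_llm_plan(task: str):
    """Stand-in for the model's tool-selection step (normally a prompted LLM)."""
    if "sum" in task:
        return ("add", (2, 3))
    return ("upper", (task,))

def run_agent(task: str, max_steps: int = 3):
    """Observe -> decide -> act loop, the core of an AgentExecutor."""
    for _ in range(max_steps):
        tool_name, args = fake_llm_plan(task)
        observation = TOOLS[tool_name](*args)
        # A fuller agent would feed the observation back to the model for
        # another planning round; this sketch stops after one tool call.
        return observation

print(run_agent("sum two numbers"))  # → 5
```

The key difference from a chain is that the sequence of steps is not fixed in advance: the model chooses which tool to invoke at each turn based on the task and prior observations.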
Cyclical workflows: Iterative processing, Feedback loops, Dynamic flow control
Trade-offs: Agency vs reliability management, Autonomy vs predictability
Architecture: Stateful memory systems, Cognitive architectures, Agentic applications
Capabilities: Complex decision trees, Multi-step planning, Adaptive behavior
Hands-on Exercise: Build advanced agentic applications with LangGraph, design cyclical workflows with feedback loops, implement stateful cognitive architectures, balance agency and reliability trade-offs, create multi-agent systems, and develop complex decision-making applications
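The cyclical, stateful workflow idea can be sketched without LangGraph itself: shared state flows through nodes, and a feedback edge loops back until a condition is met. The node functions and `State` fields below are illustrative, not the LangGraph API:

```python
from dataclasses import dataclass

@dataclass
class State:
    """Shared state passed between the graph's nodes."""
    draft: str = ""
    needs_revision: bool = True

def writer(state: State) -> State:
    """Node: expand the draft a little on each pass."""
    state.draft += " more detail"
    return state

def critic(state: State) -> State:
    """Node: feedback edge -- flag the draft for another loop if too short."""
    state.needs_revision = len(state.draft.split()) < 4
    return state

def run_graph(state: State, max_iters: int = 5) -> State:
    """Cyclical workflow: writer -> critic -> (loop back or finish)."""
    for _ in range(max_iters):
        state = critic(writer(state))
        if not state.needs_revision:
            break
    return state

final = run_graph(State(draft="Intro."))
print(final.draft)  # looped twice before the critic approved
```

The `max_iters` cap is one concrete example of the agency-vs-reliability trade-off: the loop is autonomous, but a hard bound keeps it predictable.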
Earn a verified certificate from The University of New Mexico Continuing Education:
All of our programs are backed by a certificate from The University of New Mexico, Continuing Education. This means that you may be eligible to attend the bootcamp for FREE.
Learn to build RAG-powered LLM applications from leading experts in industry.
Yes. You should be comfortable with Python programming. Basic understanding of large language models is strongly recommended.
No worries. All sessions are recorded and made available for you to view in the learning platform.
The live session is 8 hours. You can learn at your own pace afterward.
All Guru package registrations receive 12 months of access to the learning platform.
All software licenses, tools, and computing resources are included in the registration fee. The Guru package includes API keys for LLM usage.
Yes. Additionally, the Guru package includes a verified certificate with 1 Continuing Education Credit. A transcript can be requested from the University of New Mexico registrar’s office.
The live instructor-led sessions are 8 hours long. There is a lot more learning material available before and after the live session.
Yes. We encourage learners to bring lots of questions. We also have a dedicated class forum and teaching assistants who can answer questions outside of the live sessions.
Instructors will ensure that all questions are addressed promptly.
You can request a full refund at least three business days before the start date of the training.