For a hands-on learning experience developing Agentic AI applications, join our Agentic AI Bootcamp today.

Agentic AI Bootcamp

Learn to build agents, not just apps. Automate reasoning, planning, context retrieval, and execution.

Technologies and Tools

4.95

Switchup Rating

12,000+

Alumni

2,500+

Companies Trained

1M+

Community Members

Who is this bootcamp for?

The Agentic AI Bootcamp is designed for professionals who already understand the basics of LLMs and are ready to take the next leap: building intelligent, autonomous AI agents using real-world tools and techniques.

Data and AI professionals

You’ve worked with LLMs—now learn to build systems that reason, plan, and act. This bootcamp teaches you how to integrate tools like LangChain, vector databases, and RAG to build truly agentic workflows.

Engineers and developers

Take your technical skills further by deploying LLM-powered agents in production environments. Learn how to connect APIs, fine-tune performance, and handle edge cases in real-time applications.

Product leaders and builders

Go beyond prompts. Understand the architecture behind AI agents and how to design agentic workflows that solve complex problems, automate internal processes, or power new customer-facing products.

Instructors and guest speakers

Learn from thought leaders at the forefront of building Agentic AI applications.


Thierry Damiba

Developer Advocate, Qdrant

Zain Hasan

Senior DevRel Engineer, Together AI

Sage Elliot

AI Engineer, Union AI

Luis Serrano

Founder, Serrano Academy

Raja Iqbal

Founder, Ejento AI

Kartik Talamadupula

Head of AI, Wand AI

Hamza Farooq

Founder, Traversaal AI
A2A Protocol Workshop: Build Interoperable Multi-Agent Systems

Zaid Ahmed

Senior Data Scientist, Data Science Dojo

Curriculum

Explore the bootcamp curriculum

Overview of the topics and practical exercises.

Key Topics:

  • Introduction to LLMs: Strengths and weaknesses of large language models
  • Discriminative versus Generative AI: Predictive models contrasted with generative models
  • Transformer Architecture: Tokenization, embeddings, positional encoding, and attention
  • Embeddings and Similarity: Representing words as vectors and measuring closeness
  • Attention Mechanism: Keys, queries, and values in self-attention
  • Softmax and Probabilities: Converting scores to probabilities for next-word prediction
  • Training and Fine-Tuning: Adapting models with curated data for new tasks
  • Search and Retrieval: Building a semantic search engine with embeddings
  • Retrieval Augmented Generation (RAG): Combining retrieval with generation for grounded answers
  • Hands-On Exercises: Sentence Transformers, semantic search, and attention scoring
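The attention and softmax bullets above can be made concrete with a tiny scaled dot-product attention computation. This is a minimal sketch in plain Python: a single query vector is scored against toy keys, the scores pass through a softmax, and the result blends the value vectors. The vectors here are illustrative toy values, not from any real model.

```python
import math

def softmax(scores):
    # Convert raw attention scores to probabilities (subtract the max for stability)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    # Scaled dot-product attention for a single query vector
    d_k = len(query)
    scores = [dot(query, k) / math.sqrt(d_k) for k in keys]
    weights = softmax(scores)
    # The output is the attention-weighted blend of the value vectors
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out, weights = attention(query, keys, values)
print([round(w, 3) for w in weights])  # weights favor keys aligned with the query
```

Note how the key that is orthogonal to the query receives the smallest weight, while the weights still sum to one — exactly the "scores to probabilities" step the Softmax bullet describes.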

Key Topics:

  • Foundations of Agentic AI: From next-token prediction to reasoning models; limitations of classic LLMs vs reasoning LLMs; the core pillars of agentic systems—reasoning, context, and autonomy.
  • Understanding LLMs: Context windows, session memory, and long-term memory (vector databases, knowledge graphs, summaries); data sources including pre-training, fine-tuning, and in-context learning.
  • Retrieval-Augmented Generation (RAG): Naïve RAG workflows and common challenges; RAG as a context enhancement strategy; preparing and structuring data for effective RAG pipelines.
  • Agentic AI Components: Cognition (reasoning, planning, self-reflection), knowledge representation, and autonomy through tool use, action execution, and monitoring.
  • Agentic Design Patterns: Planning, tool use, and reflection loops; Agentic RAG, routers, and iterative loops; sequential, parallel, and hierarchical workflows.
  • Architectures for Agents: Single-agent vs multi-agent systems; human-in-the-loop strategies; hybrid reasoning pipelines and decision graphs.
  • Advanced Context Techniques: Session summaries, hybrid memory systems, Model Context Protocol (MCP), and scalable context management.
  • Observability, Safety & Governance: Guardrails, explainability, monitoring and evaluation, ethical alignment, and compliance strategies.
  • Hands-On Exercises: Practical implementations of reasoning workflows, memory systems, RAG pipelines, and safe agent design.
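To make the reasoning-context-autonomy pillars above concrete, here is a deliberately simplified agent loop in plain Python. The `pick_action` policy and `tools` registry are illustrative stand-ins for a real model call and tool belt, not any framework's API:

```python
def run_agent(goal, tools, max_steps=5):
    """Tiny plan-act-observe loop: choose an action, run the tool,
    record the observation, and stop when the policy says finish."""
    memory = []                                    # context: observations so far
    for _ in range(max_steps):
        action, arg = pick_action(goal, memory)    # reasoning step (stubbed)
        if action == "finish":
            return arg
        observation = tools[action](arg)           # autonomy: act via a tool
        memory.append((action, arg, observation))  # remember the result
    return None

def pick_action(goal, memory):
    # Stand-in policy: look the goal up once, then finish with the result
    if not memory:
        return "lookup", goal
    return "finish", memory[-1][2]

tools = {"lookup": lambda q: f"stub answer for {q!r}"}
print(run_agent("capital of France", tools))
```

Real agentic systems replace `pick_action` with an LLM reasoning call and `tools` with retrieval, search, or API actions, but the loop structure — reason, act, observe, remember — is the same.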

Key Topics:

  • Introduction to LangChain: Purpose and scope of LangChain; building LLM-powered applications; common challenges in implementing Retrieval-Augmented Generation (RAG).
  • Core Components: LLMs and chat models; prompt templates and example selectors; document loaders and transformers for preprocessing data.
  • Output Parsers: Extracting structured data; enforcing consistent output formats; handling parsing errors and validation failures.
  • Retrieval: Embedding and vectorization strategies; retrievers and metadata filtering; parent document retrieval for contextual completeness.
  • Vector Stores: Storing embeddings efficiently; performing scalable similarity search; optimizing retrieval for large datasets.
  • Chains: Sequential prompt logic; pre- and post-LLM processing steps; integrating retrieval and tool use into end-to-end chains.
  • Tool Use: Connecting APIs and external systems; feeding tool outputs back into workflows; managing retries and error handling.
  • LangChain Expression Language (LCEL): Building modular workflows using runnable components; piping operations; parallel branches and composable pipelines.
  • Hands-On Exercises: Constructing retrieval chains; parsing structured outputs; combining LangChain modules into coherent, production-ready workflows.
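The piping idea behind LCEL can be sketched without LangChain itself. The class below is a toy stand-in for the "runnable" concept — not the actual LangChain `Runnable` API — showing how `prompt | llm | parser` composes three steps into one chain; the model call is faked so the example runs anywhere:

```python
class Runnable:
    """Minimal stand-in for the 'runnable' idea: a callable step
    that can be piped into the next step with the | operator."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)
    def __or__(self, other):
        # self | other => run self first, feed its output to other
        return Runnable(lambda x: other(self.fn(x)))

# Three toy steps: build a prompt, 'call' a fake model, parse the output
prompt = Runnable(lambda topic: f"Explain {topic} in one sentence.")
fake_llm = Runnable(lambda p: f"MODEL OUTPUT for: {p}")
parser = Runnable(lambda text: text.removeprefix("MODEL OUTPUT for: "))

chain = prompt | fake_llm | parser
print(chain("vector databases"))
```

In actual LCEL the same shape appears as `prompt | model | output_parser`, with retrievers and tools slotting in as additional runnable steps.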

Key Topics:

  • Vector Database Fundamentals: Embeddings and vector storage; approximate nearest neighbor (ANN) vs k-nearest neighbor (kNN) search; modern vector database architectures and data models.
  • Hybrid Retrieval Design: Combining dense and sparse vectors; applying metadata filters and payload indexing; full-text tokenization for mixed semantic and keyword queries.
  • Advanced Techniques: Maximal Marginal Relevance (MMR) for improving result diversity; Discovery APIs for broader coverage; monitoring and maintaining HNSW index health.
  • Agentic RAG Concepts: Using AI-native vector databases as long-term memory for agents; multi-step retrieval with reasoning loops; context selection strategies and hallucination mitigation.
  • Semantic Caching: Caching semantically similar queries using vector similarity; time-to-live (TTL) and invalidation policies; optimizing cost and latency.
  • Hands-On Exercises: Exploring AI-native vector database fundamentals; implementing hybrid search; applying re-ranking techniques such as MMR; monitoring HNSW index health; building a RAG pipeline with agentic orchestration
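Maximal Marginal Relevance from the re-ranking bullet above can be sketched in a few lines. This toy version uses cosine similarity over plain Python lists (a real system would pull candidates from a vector database's ANN index first); the standard MMR score is λ·relevance − (1−λ)·max-similarity-to-already-selected:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mmr(query, docs, k=2, lam=0.5):
    """Iteratively pick documents that are relevant to the query
    but dissimilar to documents already selected."""
    selected, remaining = [], list(range(len(docs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, docs[i])
            redundancy = max((cosine(docs[i], docs[j]) for j in selected),
                             default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = [1.0, 0.0, 0.0]
docs = [[0.9, 0.3, 0.0],    # most relevant
        [0.9, 0.3, 0.01],   # near-duplicate of doc 0
        [0.9, 0.0, 0.44]]   # relevant, but covers a different direction
print(mmr(query, docs, k=2))  # the diverse doc 2 beats the near-duplicate doc 1
```

With plain top-k similarity the near-duplicate would rank second; MMR's redundancy penalty pushes the diverse document up instead, which is exactly the result-diversity effect the bullet describes.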

Key Topics:

  • Complex Agentic Workflows: Designing workflows with system and user prompts; integrating retrieval, memory layers, web search, and vector databases; implementing critique and refinement loops.
  • Deterministic Chains and Control Flows: Building sequential pipelines with pre- and post-LLM steps; enforcing structured task execution and predictable control logic.
  • Agent Reliability and Dynamic Decisions: Using router agents and conditional flows; balancing autonomy with control for robust task execution.
  • LangGraph Fundamentals: Understanding nodes, edges, and state management; condition-based execution for reliable and auditable workflows.
  • Tool Integration: Connecting APIs, databases, and external systems through node-based tool calls; updating shared state after execution.
  • Agentic Design Patterns: Reflection for self-critique; tool use for external actions; planning for task decomposition and structured reasoning.
  • Multi-Agent Collaboration: Implementing parallel, sequential, loop, and router flows; incorporating error handling and human supervision strategies.
  • Multi-Agent Architectures: Designing hierarchical delegation systems; approval nodes; shared memory and resource coordination.
  • Hands-On Experience: Practical implementation of advanced context engineering concepts across agentic workflows and multi-agent systems.
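The nodes-edges-state idea in the LangGraph bullets can be reduced to a few lines of plain Python. This is a sketch of the concept, not LangGraph's API: nodes update a shared state dict, edges route conditionally on that state, and a review loop runs until a condition is met:

```python
def run_graph(nodes, edges, state, entry, end="END"):
    """Walk a tiny state graph: each node updates the shared state,
    each edge picks the next node (possibly conditionally)."""
    current = entry
    while current != end:
        state = nodes[current](state)
        current = edges[current](state)
    return state

# Nodes: plain functions from state to updated state
nodes = {
    "draft":  lambda s: {**s, "text": f"draft of {s['topic']}", "revisions": 0},
    "review": lambda s: {**s, "revisions": s["revisions"] + 1},
}

# Edges: conditional routing -- loop through review until 2 revisions are done
edges = {
    "draft":  lambda s: "review",
    "review": lambda s: "review" if s["revisions"] < 2 else "END",
}

final = run_graph(nodes, edges, {"topic": "RAG"}, entry="draft")
print(final["revisions"])
```

The conditional edge is what makes execution auditable and bounded: every transition is an explicit function of the state, which is the reliability property the "condition-based execution" bullet points at.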

Key Topics:

  • Why Agentic Patterns Matter: Transforming single-pass prompting into iterative, goal-oriented reasoning loops for more adaptive and reliable systems.
  • Reflection Pattern: Enabling agents to evaluate, critique, and refine their own outputs through structured feedback and revision cycles.
  • Planning Pattern: Designing stepwise reasoning flows that decompose complex goals, manage task dependencies, and adapt dynamically to new information.
  • Tool Use Pattern: Connecting models with external systems to retrieve data, execute actions, and extend problem-solving capabilities beyond the model’s internal knowledge.
  • Multi-Agent Collaboration Pattern: Coordinating specialized agents with defined roles; enabling structured communication and collaborative problem-solving.
  • Pattern Trade-Offs: Balancing autonomy with control, flexibility with stability, and creativity with reliability in agent design.
  • Pattern Composition: Integrating reflection, planning, and tool use into hybrid workflows for building more capable and adaptive agents.
  • Hands-On Labs: Implementing individual patterns and combining them into complete, production-ready agent workflows for real-world tasks.
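The reflection pattern above can be reduced to a generate-critique-revise loop. The sketch below is schematic: `generate`, `critique`, and `revise` are stub callables standing in for real model calls, and the stopping rule is a critic that returns `None` when satisfied:

```python
def reflect(task, generate, critique, revise, max_rounds=3):
    """Generate a draft, then alternate critique and revision until
    the critic is satisfied or the round budget is spent."""
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if feedback is None:          # critic found nothing to fix
            return draft
        draft = revise(draft, feedback)
    return draft

# Stub callables standing in for LLM calls
generate = lambda task: f"v1 answer to {task}"
critique = lambda task, d: "be more specific" if d.startswith("v1") else None
revise   = lambda d, fb: d.replace("v1", "v2") + f" ({fb})"

print(reflect("summarize the report", generate, critique, revise))
```

The `max_rounds` budget is the trade-off the "Pattern Trade-Offs" bullet mentions: unbounded self-critique loops can burn tokens without converging, so production agents cap revision cycles.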

Key Topics:

  • Multi-Agent Coordination: Collaboration challenges in multi-agent systems; message routing and task orchestration; scaling cooperation while maintaining stability and control.
  • Need for Agentic Protocols: Establishing discovery and negotiation mechanisms; structured task and state management; enabling secure and reliable agent cooperation.
  • Model Context Protocol (MCP): Client–server architecture for LLM tool integration; standardized access to data and prompts; exposing tools, resources, and templates in a structured way.
  • MCP Architecture: Roles of hosts, clients, and servers; message exchange formats and artifacts; connecting applications, IDEs, and assistants within unified workflows.
  • Agent-to-Agent Protocol (A2A): Task-oriented communication flows; capability discovery using Agent Cards; structured message parts and artifact formats for coordination.
  • Agent Communication Protocol (ACP): Open ecosystem for cross-agent interaction; routing, discovery, and dynamic updates; interoperability across frameworks and platforms.
  • MCP vs ACP vs A2A: Comparing scope, architectural complexity, and message types; selecting appropriate protocols for different workflow requirements.
  • Hands-On Exercise – MCP Client with Streamlit: Setting up the development environment; installing dependencies and configuring API access; connecting an MCP client to servers; discovering tools; automating workflows through tool invocation, data retrieval, and output validation.

Key Topics:

  • Origins & Motivation: Addressing fragmented integrations and brittle bespoke adapters; introducing a unified, interoperable interface — the “USB-C for AI.”
  • Protocol Structure: Client–server handshake model; defining resources, tools, and prompts; JSON-RPC transport with structured, schema-driven messages.
  • Context Exposure: How MCP surfaces tools, data, and metadata through a consistent schema to enable discoverability, governance, and controlled access.
  • Agentic Integration: Connecting MCP endpoints to reflection, planning, tool-use, and multi-agent coordination patterns for modular and scalable systems.
  • Hands-On Labs: Setting up an MCP client in Streamlit; discovering and registering tools; automating workflows through data retrieval and validation; logging traces for monitoring and review.
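The JSON-RPC framing mentioned in the protocol bullets can be illustrated with a hand-built tool-call request. MCP messages follow JSON-RPC 2.0 with methods such as `tools/list` and `tools/call`; the tool name `search_documents` and its arguments below are hypothetical, chosen only for illustration:

```python
import json

# A client asking an MCP server to invoke a tool, framed as JSON-RPC 2.0
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",  # hypothetical tool discovered via tools/list
        "arguments": {"query": "quarterly revenue"},
    },
}

wire = json.dumps(request)       # serialized form sent over the transport
decoded = json.loads(wire)       # what the server parses on receipt
print(decoded["method"], decoded["params"]["name"])
```

Because every message is schema-driven JSON-RPC, any compliant client can discover and invoke any server's tools without bespoke adapters — the "USB-C for AI" point made above.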

Key Topics:

  • Need for Evaluation: Reliability, accuracy, and safety; business and ethical alignment; transparency and user trust.
  • Challenges in Evaluation: Hallucinations and prompt sensitivity; weak context grounding; subjectivity and trade-offs between accuracy, fluency, and creativity.
  • Benchmarking Approaches: MMLU for multitask accuracy; HELM for robustness and fairness; BBH and HotpotQA for reasoning and multi-hop QA.
  • Text Quality Metrics: BLEU for precision; ROUGE for recall; BERTScore for semantic similarity.
  • RAG Evaluation (RAGAS): faithfulness and answer relevance; context precision and recall; joint retrieval-generation scoring.
  • G-Eval: G-Eval for fluency, faithfulness, relevance; claim-level scoring of open-ended outputs.
  • Additional Benchmarks: GLUE for NLU; TriviaQA for QA; RealToxicityPrompts for safety; Blended Skill Talk for dialogue quality.
  • Other Metrics: Perplexity for confidence; METEOR for alignment; MRR and MAP for ranking; ROSCOE for reasoning quality.
  • Hands-On Exercises: Apply RAGAS to RAG pipelines; compare BLEU, ROUGE, BERTScore; evaluate agent outputs using G-Eval.
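As a taste of the n-gram metrics above, ROUGE-1 recall is simply the fraction of reference unigrams the candidate recovers. This is a bare-bones sketch; real implementations add stemming, multiple references, and precision/F-measure variants:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Fraction of reference unigrams (with multiplicity) found in the candidate."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

ref = "the agent retrieved the correct document"
cand = "the agent found the correct document"
print(round(rouge1_recall(cand, ref), 3))  # -> 0.833
```

Here 5 of the 6 reference tokens appear in the candidate, giving 5/6 ≈ 0.833 — yet the substitution of "found" for "retrieved" is semantically harmless, which is exactly why embedding-based metrics like BERTScore complement surface-overlap metrics.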

 

 

Project Tracks:

  • Conversational Workflow Orchestration: Design a multi-turn assistant coordinating tasks across specialized agents.
  • Knowledge-Enhanced Agent: Integrate search and APIs for grounding, fact-checking, and real-time data access.
  • Document-Aware Action Agent: Retrieve and reason over documents; trigger external tools or services based on insights.
  • Orchestrated Collaboration (MCP): Build coordinated multi-agent systems using the Model Context Protocol for seamless tool and enterprise integration.

Attendees Will Receive:

  • Comprehensive Datasets: Industry-spanning document collections for robust development and testing.
  • Step-by-Step Implementation Guides: Clear instructions from environment setup to deployment.
  • Ready-to-Use Code Templates: Prebuilt templates within Data Science Dojo’s sandbox for accelerated development.

Learners Can Choose to Implement:

  • Virtual Assistant
  • Content Generation (Marketing Co-pilot)
  • Conversational Agent (Legal & Compliance Assistant)
  • Content Personalizer
  • MCP Chatbot – AI agent with calendar, CRM, and API integrations

Outcome:

A production-ready multi-agent application demonstrating mastery of reasoning, retrieval, tool use, and protocol-driven interoperability.

Earn a verified certificate

Earn a verified certificate from The University of New Mexico Continuing Education:

  • 3 Continuing Education Units (CEUs)
  • Accepted by employers for tuition reimbursement
  • Valid for professional licensing renewal
  • Verifiable by The University of New Mexico Registrar’s office
  • Add to LinkedIn and share with your network 

We accept tuition benefits

All of our programs are backed by a certificate from The University of New Mexico Continuing Education. This means that you may be eligible to attend the bootcamp for FREE through employer tuition benefits.


Reserve your spot

Learn to build Agentic AI applications from leading industry experts.

Agentic AI Bootcamp

Use code AGENTIC500 for a USD 500 discount

Online

Morning Cohort

May 5, 2026

Every Tuesday, 9 AM – 12 PM PT

$3000

$2499

Online

Morning Cohort

July 14, 2026

Every Tuesday, 9 AM – 12 PM PT

$3000

$2499

Online

Morning Cohort

September 22, 2026

Every Tuesday, 9 AM – 12 PM PT

$3000

$2499

Online

Morning Cohort

December 1, 2026

Every Tuesday, 9 AM – 12 PM PT

$3000

$2499

A word from our alumni


Frequently asked questions, answered.

Who does the Agentic AI Curriculum target?

The Agentic AI Bootcamp is designed for both technical and non-technical professionals, including engineers, product managers, and business leaders. While it includes high-level modules on AI fundamentals, prompt engineering, and strategic deployment, it also dives deep into technical components for developers.

How long is the bootcamp?

The bootcamp is a 10-week, 30-hour program.

Does the certificate carry continuing education credits?

Yes. You will receive a certificate from The University of New Mexico with 3 CEUs.

Will I receive a certificate of completion?

Yes, participants who complete the bootcamp will receive a certificate of completion in association with The University of New Mexico. This certificate can be a valuable addition to your professional portfolio and demonstrates your expertise in building agentic AI applications.

How is the Agentic AI Bootcamp different from the LLM Bootcamp?

The LLM Bootcamp covers the fundamentals of large language models and takes you through a complete learning track, from the basics to deployment.

In contrast, the Agentic AI Bootcamp focuses specifically on building and deploying AI agents, so we dive straight into hands-on development.

Are the sessions live?

Yes, these sessions are live and designed to be highly interactive.

When you join the Agentic AI bootcamp, you will receive:

  • Live sessions with industry experts
  • 1-year access to dedicated learner sandboxes
  • Exclusive access to Agentic AI coding labs
  • Access to all session recordings for review at your convenience
  • A verified certificate upon completion

What if I miss a live session?

While we do not provide exact recordings of the live sessions, all key topics and content covered during class are available in our companion courses. We've created structured lesson clips that reflect the material discussed, so you can review everything at your convenience even if you miss a live session.

What are the prerequisites?

  • Basic understanding of LLM fundamentals and Python
  • LLM architectures: foundation models, prompts, embeddings, fine-tuning
  • Challenges & risks: prompt brittleness, context limits, security, cost
  • Transformers & attention: self-attention, multi-head attention, tokenization

How much programming experience do I need?

You only need a basic level of Python programming for our Agentic AI Bootcamp.

When will I receive the preparatory material?

The preparatory material will be shared about two weeks before the bootcamp starts. You'll receive an email with access details and instructions closer to the start date.

Are cloud subscriptions included?

No, cloud subscriptions are not included. Participants will need to use their own accounts.

Can I transfer to another cohort?

Transfers are allowed once with no penalty. Transfers requested more than once will incur a $200 processing fee.

What is the refund policy?

If, for any reason, you decide to cancel, we will gladly refund your registration fee in full if you notify us at least five business days before the start of the training. We can also transfer your registration to another cohort if preferred.

However, refunds cannot be processed if you have transferred to a different cohort after registration. Additionally, once you have been added to the learning platform and have accessed the course materials, we are unable to issue a refund, as digital content access is considered program participation.

Do you provide job placement assistance?

While we do not specifically focus on job placement, we actively promote networking with our partners, attendees, and an extensive network of alumni. Once you register for the bootcamp, we are happy to assist with introductions if you're looking to connect with professionals in your desired field.

Looking to upskill your team?