Build LLM Apps using LangChain. Learn about LangChain's key components such as models, prompts, parsers, memory, chains, and Question-Answering.
- Introduction to LangChain:
- Why do we need an orchestration tool for LLM application development?
- What is LangChain?
- Different components of LangChain
- Why are orchestration frameworks needed?
- Eliminate the need for foundation model retraining
- Overcoming token limits
- Connectors for data sources
- Interface with any LLM using model I/O
- Model I/O overview
- Components of model I/O: Language models, chat models, prompts, example selectors, and output parsers
- Overview of prompts, prompt templates, and example selectors
- Different types of models: language, chat, and embedding models
- Structuring language model responses using various types of output parsers
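The model I/O flow above (prompt template, model, output parser) can be sketched with plain Python. This is a stdlib-only toy, not LangChain's actual API: the class names and the `fake_llm` stand-in are illustrative assumptions.

```python
# Conceptual sketch of the model I/O pipeline: prompt -> model -> parser.
# Class names mimic LangChain's ideas but are not its real API.

class PromptTemplate:
    """Fills named variables into a prompt string."""
    def __init__(self, template):
        self.template = template

    def format(self, **kwargs):
        return self.template.format(**kwargs)

class CommaSeparatedListParser:
    """Turns a raw model completion into a Python list."""
    def parse(self, text):
        return [item.strip() for item in text.split(",") if item.strip()]

def fake_llm(prompt):
    # Stand-in for a real model call; a real app would invoke an LLM here.
    return "red, green, blue"

prompt = PromptTemplate("List three {thing}, comma separated.")
raw = fake_llm(prompt.format(thing="colors"))
parsed = CommaSeparatedListParser().parse(raw)
print(parsed)  # ['red', 'green', 'blue']
```

The parser step is what lets downstream code work with structured values instead of raw model text, which is the role LangChain's output parsers play.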
- Connecting external data to LLM applications with retrieval
- Retrieval overview
- Why retrieval is needed and how it works with LangChain
- Components of retrieval: Document loaders, text splitters, vector stores, and retrievers
- Loading public, private, structured, and unstructured data with document loaders
- Splitting documents into smaller chunks and extracting metadata using document transformers
- Embeddings and vector stores for converting documents into vectors and storing and retrieving them efficiently
- Optimizing retrieval using different retrieval techniques available in LangChain
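The retrieval pipeline outlined above (split into chunks, embed, store vectors, retrieve by similarity) can be sketched end to end with the standard library. The fixed vocabulary and count-based "embedding" below are toys standing in for a real embedding model, and all names are illustrative, not LangChain's API.

```python
# Stdlib-only sketch of a retrieval pipeline: split -> embed -> store -> retrieve.
import math

# Toy vocabulary for the fake embedding; a real app would use an embedding model.
VOCAB = ["langchain", "llms", "data", "paris", "capital", "france"]

def split_text(text, chunk_size=40):
    """Naive fixed-size splitter; real text splitters also respect separators."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(text):
    """Toy embedding: counts of a few known words."""
    toks = text.lower().split()
    return [float(toks.count(w)) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Stores (vector, chunk) pairs and returns the most similar chunks."""
    def __init__(self):
        self.items = []

    def add(self, chunk):
        self.items.append((embed(chunk), chunk))

    def retrieve(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(it[0], qv), reverse=True)
        return [chunk for _, chunk in ranked[:k]]

store = VectorStore()
for doc in ["LangChain connects LLMs to data", "Paris is the capital of France"]:
    for chunk in split_text(doc):
        store.add(chunk)
print(store.retrieve("capital of France"))  # ['Paris is the capital of France']
```

In a real LangChain application, the splitter, embedding model, and vector store are each swappable components; the overall shape of the pipeline stays the same.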
- Creating complex LLM workflows with chains
- Chains overview
- Various foundational chain types: LLM, router, sequential, and transformation
- Summarizing large documents using different document chains like stuff, refine, and map-reduce
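Two of the chain patterns above can be sketched in a few lines: a sequential chain, where each step's output feeds the next, and a map-reduce document chain. The `summarize` function here is a toy truncation standing in for an LLM call; the names are illustrative, not LangChain's API.

```python
# Stdlib sketch of a sequential chain and a map-reduce document chain.

def summarize(text):
    # Toy "summary": first five words. A real chain would prompt an LLM here.
    return " ".join(text.split()[:5])

def sequential_chain(steps, value):
    """Run steps in order, piping each output into the next step."""
    for step in steps:
        value = step(value)
    return value

def map_reduce(chunks):
    """Map: summarize each chunk. Reduce: summarize the joined summaries."""
    mapped = [summarize(c) for c in chunks]
    return summarize(" ".join(mapped))

result = sequential_chain([summarize, str.upper], "one two three four five six seven")
print(result)  # ONE TWO THREE FOUR FIVE

chunks = ["alpha beta gamma delta epsilon zeta", "one two three four five six"]
print(map_reduce(chunks))  # alpha beta gamma delta epsilon
```

Map-reduce matters for large documents because each chunk can be summarized independently (even in parallel) before the summaries are combined, unlike the stuff chain, which sends everything to the model at once.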
- Retain context and refer to past interactions with the memory component
- How memory can empower AI applications
- Different types of memory: simple buffer memory, conversation summarization, and vector-store-backed memory
- Overcoming token limits with memory based on summarization of past conversations
- Utilize vector stores for memory
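The buffer and summarization memory types above can be contrasted with a small stdlib sketch. The "summary" here is a toy truncation; a real summarization memory would ask an LLM to compress old turns. Class names are illustrative, not LangChain's API.

```python
# Stdlib sketch of buffer memory vs. summarizing memory.

class BufferMemory:
    """Keeps the full conversation; simple, but grows without bound."""
    def __init__(self):
        self.turns = []

    def save(self, user, ai):
        self.turns.append((user, ai))

    def load(self):
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)

class SummaryMemory(BufferMemory):
    """Past max_turns, collapse the oldest turn into a running summary."""
    def __init__(self, max_turns=2):
        super().__init__()
        self.max_turns = max_turns
        self.summary = ""

    def save(self, user, ai):
        super().save(user, ai)
        if len(self.turns) > self.max_turns:
            old_user, _ = self.turns.pop(0)
            # Toy compression; a real implementation would LLM-summarize the turn.
            self.summary += f"(user asked: {old_user}) "

    def load(self):
        return (self.summary + "\n" + super().load()).strip()

mem = SummaryMemory(max_turns=1)
mem.save("What is LangChain?", "An orchestration framework.")
mem.save("Does it support memory?", "Yes, several kinds.")
print(mem.load())
```

The summarizing variant is what lets long conversations fit inside a model's context window: old turns cost a few summary tokens instead of their full transcript.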
- Dynamic decision-making with LLMs using agents
- Agents overview
- Components of agents: Tools, toolkits, prompt, and memory
- Different types of agents: Self-ask with search, ReAct, JSON chat, structured chat
- Working with agents using LangGraph
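The ReAct pattern listed above (the model alternates Thought, Action, Observation until it can answer) can be sketched with a scripted stand-in for the model. The tool, the text format, and the parsing below are simplified assumptions, not LangChain's real agent machinery.

```python
# Minimal sketch of a ReAct-style agent loop with one tool.

def calculator(expression):
    """A tool the agent can call; eval is restricted to bare arithmetic."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def scripted_model(history):
    # Pretend LLM: first decide to use the calculator, then finish.
    # A real agent would generate these steps with an actual model.
    if "Observation" not in history:
        return "Thought: I need math.\nAction: calculator[6 * 7]"
    return "Final Answer: " + history.rsplit("Observation: ", 1)[1]

def run_agent(question, model, max_steps=3):
    history = f"Question: {question}"
    for _ in range(max_steps):
        step = model(history)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer: ").strip()
        # Parse "Action: tool[input]", run the tool, append the observation.
        action = step.split("Action: ", 1)[1]
        tool, arg = action.split("[", 1)
        observation = TOOLS[tool](arg.rstrip("]"))
        history += f"\n{step}\nObservation: {observation}"
    return "gave up"

print(run_agent("What is 6 * 7?", scripted_model))  # 42
```

The loop structure, generate a step, execute any tool call, feed the observation back, is the core idea shared by the agent types above; they differ mainly in the prompt format and how actions are parsed.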
- Monitoring and logging using callbacks
- Monitoring LLM application using callbacks
- Understanding how callbacks work with different events
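The callback idea above can be sketched as handlers that subscribe to lifecycle events. The event names loosely mirror LangChain's handler methods, but the classes and the streaming stand-in below are illustrative assumptions, not its real API.

```python
# Stdlib sketch of event-based callbacks for monitoring an LLM run.

class LoggingCallback:
    """Collects lifecycle events so an application can monitor a run."""
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))

    def on_llm_new_token(self, token):
        self.events.append(("token", token))

    def on_llm_end(self, output):
        self.events.append(("end", output))

def fake_streaming_llm(prompt, callbacks):
    """Stand-in model that emits tokens, firing callbacks at each event."""
    for cb in callbacks:
        cb.on_llm_start(prompt)
    tokens = ["Hello", ",", " world"]
    for tok in tokens:
        for cb in callbacks:
            cb.on_llm_new_token(tok)
    output = "".join(tokens)
    for cb in callbacks:
        cb.on_llm_end(output)
    return output

logger = LoggingCallback()
result = fake_streaming_llm("Say hi", [logger])
print(result)              # Hello, world
print(len(logger.events))  # 5: one start, three tokens, one end
```

Because the handler only observes events, the same pattern supports logging, token counting, or streaming to a UI without touching the model-calling code.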
- Hands-on exercise
- Interface with any LLM using model I/O
- Building a RAG application with retrieval
- Creating complex LLM workflows with chains
- Adding memory to LLM-based applications
- Harnessing dynamic decision-making using agents
- Supplementary exercises: additional coding exercises on LangChain components, including model I/O, chains, memory, and agents