
In the rapidly evolving world of artificial intelligence and large language models, developers are constantly seeking ways to create more flexible, powerful, and intuitive AI agents.

While LangChain has been a game-changer in this space, allowing for the creation of complex chains and agents, there’s been a growing need for even more sophisticated control over agent runtimes.

Enter LangGraph, a cutting-edge module built on top of LangChain that’s set to revolutionize how we design and implement AI workflows.

In this blog, we present a detailed LangGraph tutorial on building a chatbot and explore how the library is reshaping AI agent workflows.

 


 

Understanding LangGraph

LangGraph is an extension of the LangChain ecosystem that introduces a novel approach to creating AI agent runtimes. At its core, LangGraph allows developers to represent complex workflows as cyclical graphs, providing a more intuitive and flexible way to design agent behaviors.

The primary motivation behind LangGraph is to address the limitations of traditional directed acyclic graphs (DAGs) in representing AI workflows. While DAGs are excellent for linear processes, they fall short when it comes to implementing the kind of iterative, decision-based flows that advanced AI agents often require.

 

Explore the difference between LangChain and LlamaIndex

 

LangGraph solves this by enabling the creation of workflows with cycles, where an AI can revisit previous steps, make decisions, and adapt its behavior based on intermediate results. This is particularly useful in scenarios where an agent might need to refine its approach or gather additional information before proceeding.

Key Components of LangGraph

To effectively use LangGraph, it’s crucial to understand its fundamental components:

 


 

Nodes

Nodes in LangGraph represent individual functions or tools that your AI agent can use. These can be anything from API calls to complex reasoning tasks performed by language models. Each node is a discrete step in your workflow that processes input and produces output.

Edges

Edges connect the nodes in your graph, defining the flow of information and control. LangGraph supports two types of edges: 

  • Simple Edges: These are straightforward connections between nodes, indicating that the output of one node should be passed as input to the next. 
  • Conditional Edges: These are more complex connections that allow for dynamic routing based on the output of a node. This is where LangGraph truly shines, enabling adaptive workflows.

 

Read about LangChain agents and their use for time series analysis

 

State

State is the information passed between nodes as the graph runs. If you need to keep track of specific information during the workflow, you store it in the state. 

There are two types of graphs you can build in LangGraph: 

  • Basic Graph: A basic graph simply passes the output of one node as input to the next; it does not carry a state. 
  • Stateful Graph: A stateful graph carries a state that is passed between nodes, and you can access this state at any node.

 

How generative AI and LLMs work

 

LangGraph Tutorial Using a Simple Example: Build a Basic Chatbot

We’ll create a simple chatbot using LangGraph. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot.

Start by creating a StateGraph. A StateGraph object defines the structure of our chatbot as a state machine. We’ll add nodes to represent the LLM and functions our chatbot can call and edges to specify how the bot should transition between these functions.

 

Explore this guide to building LLM chatbots

 

 

 

So now our graph knows two things: 

  1. Every node we define will receive the current State as input and return a value that updates that state. 
  2. messages will be appended to the current list, rather than directly overwritten. This is communicated via the prebuilt add_messages function in the Annotated syntax. 

Next, add a chatbot node. Nodes represent units of work. They are typically regular Python functions.

 

 

Notice how the chatbot node function takes the current State as input and returns a dictionary containing an updated messages list under the key “messages”. This is the basic pattern for all LangGraph node functions. 

The add_messages function in our State will append the LLM’s response messages to whatever messages are already in the state. 

Next, add an entry point. This tells our graph where to start its work each time we run it.

 

 

Similarly, set a finish point. This instructs the graph “Any time this node is run, you can exit.”

 

 

Finally, we’ll want to be able to run our graph. To do so, call “compile()” on the graph builder. This creates a “CompiledGraph” we can invoke on our state.

 

 
You can visualize the graph using the get_graph method and one of the “draw” methods, like draw_ascii or draw_png. The draw methods each require additional dependencies.

 

 

LangGraph - AI agent workflows

 

Now let’s run the chatbot!

Tip: You can exit the chat loop at any time by typing “quit”, “exit”, or “q”.
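A sketch of such a loop, assuming `graph` is the compiled graph from the previous steps:

```python
def chat_loop(graph):
    """Minimal REPL around a compiled LangGraph graph."""
    while True:
        user_input = input("User: ")
        if user_input.lower() in ("quit", "exit", "q"):
            print("Goodbye!")
            break
        # stream() yields {node_name: state_update} as each node finishes.
        for event in graph.stream({"messages": [("user", user_input)]}):
            for value in event.values():
                print("Assistant:", value["messages"][-1])
```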

 

 

Advanced LangGraph Techniques

LangGraph’s true potential is realized when dealing with more complex scenarios. Here are some advanced techniques: 

  1. Multi-step reasoning: Create graphs where the AI can make multiple decisions, backtrack, or explore different paths based on intermediate results.
  2. Tool integration: Seamlessly incorporate various external tools and APIs into your workflow, allowing the AI to gather and process diverse information.
  3. Human-in-the-loop workflows: Design graphs that can pause execution and wait for human input at critical decision points.
  4. Dynamic graph modification: Alter the structure of the graph at runtime based on the AI’s decisions or external factors.

 

Learn how to build custom Q&A chatbots

 

Real-World Applications

LangGraph’s flexibility makes it suitable for a wide range of applications: 

  1. Customer Service Bots: Create intelligent chatbots that can handle complex queries, access multiple knowledge bases, and escalate to human operators when necessary.
  2. Research Assistants: Develop AI agents that can perform literature reviews, synthesize information from multiple sources, and generate comprehensive reports.
  3. Automated Troubleshooting: Build expert systems that can diagnose and solve technical problems by following complex decision trees and accessing various diagnostic tools.
  4. Content Creation Pipelines: Design workflows for AI-assisted content creation, including research, writing, editing, and publishing steps.

 

Explore the list of top AI content generators

 

Conclusion

LangGraph represents a significant leap forward in the design and implementation of AI agent workflows. By enabling cyclical, state-aware graphs, it opens up new possibilities for creating more intelligent, adaptive, and powerful AI systems.

As the field of AI continues to evolve, tools like LangGraph will play a crucial role in shaping the next generation of AI applications.

 

Explore a hands-on curriculum that helps you build custom LLM applications!

 

Whether you’re building simple chatbots or complex AI-powered systems, LangGraph provides the flexibility and power to bring your ideas to life. As we continue to explore the potential of this tool, we can expect to see even more innovative and sophisticated AI applications emerging in the near future.

August 23, 2024

With rapid LLM development, the digital world is integrating new changes and components. The advanced features offered by large language models enable businesses to enhance their overall presence and efficiency in the modern-day digital market.

In this blog, we will explore the advent of smarter chatbots – one of the many useful impacts of LLM development in modern times.

Understanding LLMs

A large language model is a computer program that is trained on and learns from a large amount of data. Such a model can understand and generate human-like text based on the patterns and knowledge accumulated during the training process.

In a library, for example, a young reader might work through books, articles, and writings from a wide variety of authors. Reading and comprehending all that information takes a great deal of time.

In time, that reader becomes familiar with a wide range of topics and can answer questions about them and discuss them in meaningful, logical ways.

Large language models follow similar principles. The program reads and analyzes a vast amount of text, including books, websites, and articles. In doing so, it learns the meaning of words, the structure of language, and the relationships between words.

After training, the model can provide explanations, generate responses, or carry on conversations based on the input it receives. Using the context of the text provided, the system generates coherent and relevant responses.

Large language models and chatbots

The purpose of a large language model, then, is to generate human-like text based on the knowledge it has acquired through this “reading.”

Artificial intelligence systems capable of understanding and generating human language are known as Large Language Models (LLMs). They use deep learning algorithms and large amounts of data to learn the nuances of language and to respond coherently and pertinently. At its core, an LLM predicts which words are likely to follow the words already typed.

By typing a few keywords into the search box, Google’s BERT system can predict what you will be searching for. The BERT algorithm was trained on a corpus of roughly 3.3 billion words and contains 340 million parameters, allowing it to understand and respond to what is entered into the search box.

 


 

One of the most widely known LLMs today is ChatGPT, developed by OpenAI. The service reportedly registered more than one million users within days of being made available to the public. By comparison, Instagram took about two and a half months to reach a million downloads, while Spotify took five months to reach that level.

It is no wonder that ChatGPT has experienced explosive growth, given its ability to mimic human responses so closely. GPT-3, the model family behind it, was trained on roughly 300 billion tokens and has 175 billion parameters, far exceeding the scale of BERT’s training.

Most Popular LLMs (Large Language Models)

It is currently commonplace for multiple companies to develop large language models that have been trained on billions of variables and datasets. However, we are going to take a look at some of the top LLM programs right now:

  • A large language model released in 2020, Generative Pre-trained Transformer 3 (GPT-3) has grown in popularity over the years. OpenAI developed GPT-3, and its underlying code has since been exclusively licensed to Microsoft for modification and usage.

Given a prompt, GPT-3 uses deep learning to produce remarkably human-like text output. ChatGPT, one of the most popular AI chatbots, is based on GPT-3.5. OpenAI also offers a public API through which chat results may be integrated and received.

 

  • A Google AI language model called Bidirectional Encoder Representations from Transformers (BERT) was introduced in 2018. A notable feature of this NLP model is that it considers context on both sides (left and right) of a word at the same time. Pre-trained on plain text sources such as Wikipedia, BERT can understand a prompt in a deeper and more meaningful way.

 

  • In 2021, Google introduced its conversational large language model, Language Model for Dialogue Applications (LaMDA). It uses a decoder-only transformer language model pre-trained on a text corpus of 1.56 trillion words drawn from both documents and dialogues. In addition to providing a Generic Language API to integrate with third-party applications, LaMDA powers Google’s conversational AI chatbot, Bard.

 

  • In 2022, Google AI developed a large language model called Pathways Language Model (PaLM). It was trained on a variety of high-quality datasets, including filtered web pages, books, Wikipedia articles, news articles, source code from GitHub repositories, and social media conversations.

 

  • Large Language Model Meta AI (LLaMA) was released by Meta AI in 2023. Like other large language models, LLaMA generates text by predicting the next word in a sequence. The developers trained the model on text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets.

 

  • OpenAI created Generative Pre-trained Transformer 4 (GPT-4), a multimodal large language model. An improved version of GPT-3, it takes both images and text as inputs and is available through an API. It also powers ChatGPT Plus.

 

Influence of LLM Development on the E-Commerce Industry

  • Show customers what they want: LMs can analyze customer data, such as browsing history, purchase patterns, and preferences, to make highly personalized product recommendations. They can improve customer satisfaction by understanding customers’ needs and preferences.
  • Dedicated Shopping Assistant: It can act as a virtual shopping assistant, assisting customers with navigation through product catalogs, answering questions, and providing guidance. Language Models provide customers with an interactive and personalized shopping experience by allowing them to communicate in natural language.
  • Search & Discover like Humans: They are capable of understanding complex search queries and providing accurate and relevant search results. A better search experience on e-commerce platforms is enabled as a result of this. Customers are able to find products more quickly and easily.
  • Save Time with negligible human intervention: LM-based chatbots can handle order tracking, returns, general product inquiries, and several other types of customer questions. By providing real-time responses, they improve customer service and reduce the need for human intervention.
  • Read, Learn, and then Decide: An LM can produce natural-language product descriptions that engage the reader. Customers gain a clear understanding of a product’s features, benefits, and applications, and can make informed decisions.
  • Customer Emotions Matter: Customer reviews and feedback can be analyzed by LMs in order to gain insight and better understand customer sentiment. E-commerce platforms are able to identify trends, improve product quality, and address customer concerns in a timely manner through this process.
  • Zero Language Barrier: LMs are capable of assisting in the translation of foreign languages, breaking down language barriers for international customers. Thus, empowering e-commerce platforms to widen their prospects and reach a global audience and thereby, expand their customer base.
  • Voice of the Customer: LMs facilitate voice-based shopping experiences thanks to advancements in speech recognition technology. In order to provide customers with a convenient and hands-free shopping experience, voice commands are available for searching for products, adding items to their shopping carts, and completing purchases.

Conventional chatbots are typically built with specific frameworks or programming languages and rely on explicit rules that must be updated periodically to handle new scenarios. LLM-based chatbots, by contrast, require significant computational resources and expertise to develop, train, and maintain.

 

| Aspect | LLM-based Chatbots | Traditional Chatbots |
| --- | --- | --- |
| Technology | Based on advanced deep learning architectures (e.g., GPT) | Rule-based or scripted approaches |
| Language Understanding | Better understanding of natural language and context | Limited ability for complex language understanding |
| Conversational Ability | More human-like and coherent conversations | Prone to scripted responses; struggles with complex dialogs |
| Personalization | Offers more personalized experiences | Lacks advanced personalization |
| Training and Adaptability | Requires extensive pre-training and fine-tuning on specific tasks | Requires manual rule updates for new scenarios |
| Limitations | Can generate incorrect or misleading responses; lacks common sense | Less prone to generating incorrect or unexpected responses |
| Development and Maintenance | Requires significant computational resources and expertise | Developed using specific frameworks or programming languages |


Developing LLM-based Chatbots requires high-quality Annotated Data

A large language model (LLM) is a powerful tool for understanding natural language and generating human-like text. These sophisticated models could revolutionize how chatbots in various fields, including e-commerce, interact with users. An LLM-based chatbot will likely be more effective if its training data is of high quality.

Annotating data is an essential part of preparing training data for LLMs. A dataset is labelled or tagged with annotations so that machine learning algorithms can make sense of it. LLM-based chatbots are developed by annotating text with information such as intent, entities, sentiment, and dialogue structure. Based on this annotated data, the bot can provide users with relevant answers to their queries and engage in meaningful dialogue with them.
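For instance, a single annotated utterance might look like the following sketch; the schema and field names here are illustrative, not a standard format:

```python
# One illustrative training example for an e-commerce chatbot.
# The schema and field names are hypothetical, not a fixed standard.
annotated_example = {
    "text": "Where is my order 1234? This is taking far too long.",
    "intent": "order_status",
    "entities": [{"type": "order_id", "value": "1234"}],
    "sentiment": "negative",
    "dialogue_act": "question",
}

def validate(example: dict) -> bool:
    # A minimal consistency check an annotation pipeline might run.
    required = {"text", "intent", "entities", "sentiment"}
    return required.issubset(example)
```

Checks like `validate` are one way annotation teams keep labels complete and consistent before the data reaches training.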

 

 

In order to train LLM-based chatbots, the quality of annotated data is of paramount importance. Annotations of high quality help the chatbot understand users’ queries accurately, understand the nuances of their language, and respond appropriately to them. It is possible that chatbots will be unable to interpret complex language structures, comprehend the intent of the user, or generate coherent and contextually relevant responses without well-annotated data.

The process of data annotation requires annotators who are skilled at interpreting and labeling data accurately as well as having a deep understanding of language. The annotators are capable of capturing subtle nuances, idioms, and context by utilizing their expertise in linguistics and domain knowledge. Their meticulous labeling and annotation of the data during the training process provide the LLM with the guidance it needs to learn from the examples and generalize from them.

LLM-based chatbots benefit from highly annotated data in numerous ways:

Understanding language: Annotations teach the model the meaning, intent, and entities represented in user queries. As a result, the chatbot can pick up nuances in a user’s language, interpret their intent accurately, and provide relevant information based on their input.

Understanding context: A chatbot can understand the conversation flow based on annotations, which provide context cues. The chatbot develops a greater understanding of a conversation by annotating dialogue structure and conversation context, thereby ensuring more coherent and contextually relevant responses.

Enhanced response generation:

When annotations are of high quality, they contribute to the production of more accurate and contextually appropriate responses. LLM-based chatbots are trained on well-annotated data in order to generate text that is human-like and aligns with the conversation’s intention and context.

Expertise in a specific domain:

It is also possible to tailor data annotations for specific e-commerce domains. In order to be able to provide users with more accurate and informed responses, the chatbot acquires domain knowledge from product descriptions, customer reviews, and other domain-specific sources.

As a result, it cannot be overstated just how important it is to use high-quality annotated data to train LLM-based chatbots. It provides the basis for the development of these chatbots’ abilities to understand and respond to natural language. An e-commerce business should partner with a data annotation company that specializes in LLM training in order to ensure the accuracy, performance, and effectiveness of their chatbot solutions. An LLM-based chatbot can provide outstanding customer service, personalized suggestions, and seamless interaction as a result of quality annotations.

Final thoughts

This article describes how large language models (LLMs) affect the e-commerce industry. An LLM, such as GPT-3 or BERT, is an advanced deep-learning model capable of interpreting and generating human-like text after extensive training on large datasets. By understanding natural language, engaging in conversations, personalizing interactions, and improving search, these models have revolutionized chatbot technology.

Training LLM-based chatbots requires data labeled with annotations such as intent, entities, sentiment, and dialogue structure. With well-annotated data, chatbots can provide contextually relevant responses, account for nuances in language, and accurately understand user queries. The article emphasizes the importance of partnering with companies that specialize in LLM training data to ensure the effectiveness and accuracy of chatbot solutions in e-commerce.

Data Science Dojo’s Large Language Models Bootcamp

Introducing Data Science Dojo’s Large Language Models Bootcamp, a specialized 40-hour program for creating LLM-powered applications. This intensive course concentrates on practical aspects of LLMs in natural language processing, utilizing libraries like Hugging Face and LangChain.

Participants will master text analytics techniques, including semantic search and Generative AI. Perfect for professionals seeking to enhance their understanding of Generative AI, the program covers essential principles and real-world implementation without the need for extensive coding skills.

Register today

 

 

Written by Roger Brown

June 26, 2023

The ongoing battle ‘ChatGPT vs Bard’ continues as the two prominent contenders in the generative AI landscape which have garnered substantial interest. As the rivalry between these platforms escalates, it continues to captivate the attention of both enthusiasts and experts.

What are chatbots?

Chatbots are revolutionizing the way we interact with technology. These artificial intelligence (AI) programs can carry on conversations with humans, and they are becoming increasingly sophisticated. Two of the most popular chatbots on the market today are ChatGPT and Bard. Both chatbots are capable of carrying on conversations with humans, but they have different strengths and weaknesses. 

ChatGPT vs Bard

1. ChatGPT 

ChatGPT was created by OpenAI and is based on the GPT-3 language model. It is trained on a massive dataset of text and code, and is able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. 

ChatGPT: Strengths and weaknesses

One of ChatGPT’s strengths is its ability to generate creative text formats. It can write poems, code, scripts, musical pieces, email, letters, etc., and its output is often indistinguishable from human-written text. ChatGPT is also good at answering questions, and can provide comprehensive and informative answers even to open-ended, challenging, or strange questions. 

However, ChatGPT also has some weaknesses. One of its biggest weaknesses is its tendency to generate text that is factually incorrect. This is because ChatGPT is trained on a massive dataset of text, and not all of that text is accurate. As a result, ChatGPT can sometimes generate text that is factually incorrect or misleading. 

Another weakness of ChatGPT is its lack of access to real-time information. ChatGPT is trained on a dataset of text that was collected up to 2021, and it does not have access to real-time information. This means that ChatGPT can sometimes provide outdated or inaccurate information.  

ChatGPT vs Bard - AI Chatbots
ChatGPT vs Bard – AI Chatbots

2. Bard 

Bard is a large language model from Google AI, trained on a massive dataset of text and code. It can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.  

One of Bard’s strengths is its access to real-time information. Bard is able to access and process information from the internet in real time, which means that it can provide up-to-date information on a wide range of topics. Bard is also able to access and process information from other sources, such as books, articles, and websites. This gives Bard a much wider range of knowledge than ChatGPT. 

Bard: Strengths and weaknesses

Another strength of Bard is its ability to generate accurate text. Bard is trained on a massive dataset of text that is carefully curated to ensure accuracy. As a result, Bard is much less likely to generate text that is factually incorrect than ChatGPT. 

However, Bard also has some weaknesses. One of its biggest weaknesses is its lack of creativity. Bard is good at generating text that is factually accurate, but it is not as good at generating text that is creative or engaging. Bard’s output is often dry and boring, and it can sometimes be difficult to follow. 

Another weakness of Bard is its limited availability. Bard is currently only available to a select group of users, and it is not yet clear when it will be made available to the general public.  

How chatbots are revolutionary 

Chatbots are revolutionary because they have the potential to change the way we interact with technology in a number of ways. 

First, chatbots can make technology more accessible to people who are not comfortable using computers or smartphones. For example, chatbots can be used to provide customer service or technical support to people who are not able to use a website or app. 




Second, chatbots can make technology more personalized. For example, chatbots can be used to provide recommendations or suggestions based on a user’s past behavior. This can help users to find the information or services that they are looking for more quickly and easily. 

Third, chatbots can make technology more engaging. For example, chatbots can be used to play games or tell stories. This can help to make technology more fun and enjoyable to use. 

Does the future belong to chatbots?

Chatbots are still in their early stages of development, but they have the potential to revolutionize the way we interact with technology. As chatbots become more sophisticated, they will become increasingly useful and popular.  

In the future, it is likely that chatbots will be used in a wide variety of settings, including customer service, education, healthcare, and entertainment. Chatbots have the potential to make our lives easier, more efficient, and more enjoyable. 

ChatGPT vs Bard: Which AI chatbot is right for you? 

When it comes to AI language models, the battle of ChatGPT vs Bard is a hot topic in the tech community. But, which AI chatbot is right for you? It depends on what you are looking for. If you are looking for a chatbot that can generate creative text formats, then ChatGPT is a good option. However, if you are looking for a chatbot that can provide accurate information, then Bard is a better option. 

Ultimately, the best way to decide which AI chatbot is right for you is to try them both out and see which one you prefer. 

June 23, 2023

Large language models (LLMs) like GPT-3 and GPT-4 have revolutionized the landscape of NLP. These models have laid a strong foundation for creating powerful, scalable applications. However, the potential of these models is affected by the quality of the prompt, which highlights the importance of prompt engineering.

 

 

Furthermore, real-world NLP applications often require more complexity than a single ChatGPT session can provide. This is where LangChain comes into play! 

 

 


 

Harrison Chase’s brainchild, LangChain, is a Python library designed to help you leverage the power of LLMs to build custom NLP applications. As of May 2023, this game-changing library has already garnered almost 40,000 stars on GitHub. 


 


 

This comprehensive beginner’s guide provides a thorough introduction to LangChain, offering a detailed exploration of its core features. It walks you through the process of building a basic application using LangChain and shares valuable tips and industry best practices to make the most of this powerful framework. Whether you’re new to large language models (LLMs) or looking for a more efficient way to develop language generation applications, this guide serves as a valuable resource for leveraging the capabilities of LLMs with LangChain. 

Overview of LangChain modules 

LangChain is organized into modules that are essential for any application built around a large language model (LLM).

 

LangChain offers standardized and adaptable interfaces for each module. Additionally, LangChain provides external integrations and even ready-made implementations for seamless usage. Let’s delve deeper into these modules. 

Overview of LangChain Modules

LLM

LLM is the fundamental component of LangChain. It is essentially a wrapper around a large language model that exposes the functionality and capabilities of a specific model through a common interface. 

Chains

As stated earlier, the LLM serves as the fundamental unit within LangChain. However, in line with the “LangChain” concept, the library lets you link multiple LLM calls together to address specific objectives. 

For instance, you may have a need to retrieve data from a specific URL, summarize the retrieved text, and utilize the resulting summary to answer questions. 

On the other hand, chains can also be simpler in nature. For instance, you might want to gather user input, construct a prompt using that input, and generate a response based on the constructed prompt. 

 


 

Prompts 

Prompts have become a popular way to program language models. LangChain simplifies prompt creation and management with specialized classes and functions, including the essential PromptTemplate. 

 

Document loaders and Utils 

LangChain’s Document Loaders and Utils modules simplify data access and computation. Document loaders convert diverse data sources into text for processing, while the utils module offers interactive system sessions and code snippets for mathematical computations. 

Vector stores 

The most widely used type of index generates numerical embeddings for each document using an embedding model. These embeddings, along with the associated documents, are stored in a vector store, which enables efficient retrieval of relevant documents based on embedding similarity. 
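The retrieval mechanism can be illustrated with a toy vector store in plain Python. The three-dimensional embeddings below are made up for the example; a real store would use an embedding model with hundreds of dimensions:

```python
import math

# Toy vector store: documents are stored alongside pre-computed embeddings,
# and retrieval returns the document whose embedding is closest to the query's.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

store = [
    ("Refund policy: 30 days.", [0.9, 0.1, 0.0]),
    ("Shipping takes 5 days.",  [0.1, 0.9, 0.1]),
]

def retrieve(query_embedding):
    # return the stored document with the highest cosine similarity
    return max(store, key=lambda doc: cosine(query_embedding, doc[1]))[0]

best = retrieve([0.8, 0.2, 0.1])
```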

Agents

LangChain offers a flexible approach for tasks where the sequence of language model calls is not deterministic. Its “Agents” can act based on user input and previous responses. The library also integrates with vector databases and has memory capabilities to retain the state between calls, enabling more advanced interactions. 

 

Building our App 

Now that we’ve gained an understanding of LangChain, let’s build a PDF Q/A Bot app using LangChain and OpenAI. Let me first show you the architecture diagram for our app and then we will start with our app creation. 

 

QA Chatbot Architecture

 

Below is an example code that demonstrates the architecture of a PDF Q&A chatbot. This code utilizes the OpenAI language model for natural language processing, the FAISS database for efficient similarity search, PyPDF2 for reading PDF files, and Streamlit for creating a web application interface.

 

The chatbot leverages LangChain’s Conversational Retrieval Chain to find the most relevant answer from a document based on the user’s question. This integrated setup enables an interactive and accurate question-answering experience for the users. 

Importing necessary libraries 

Import Statements: These lines import the necessary libraries and functions required to run the application. 

  • PyPDF2: A Python library used to read and manipulate PDF files. 
  • langchain: A framework for developing applications powered by language models. 
  • streamlit: A Python library used to create web applications quickly. 
Importing necessary libraries

If LangChain and OpenAI are not already installed, you first need to run the following commands in the terminal. 

Install LangChain
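For reference, the install step shown in the screenshot amounts to commands along these lines (package names as of this tutorial's LangChain release; newer releases split some packages out, so check PyPI if installation fails):

```shell
# Install the libraries the app depends on. faiss-cpu and PyPDF2 are the
# assumed package names; verify against PyPI for your setup.
pip install langchain openai PyPDF2 faiss-cpu streamlit
```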

 

Setting the OpenAI API key 

Replace the placeholder with your OpenAI API key, which you can obtain from the OpenAI platform. This line sets the OpenAI API key, which you need in order to use OpenAI’s language models. 
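One common way to set the key is through an environment variable, which OpenAI's client reads by default; the value below is a placeholder, not a real key:

```python
import os

# Replace the placeholder with your own key from the OpenAI platform.
os.environ["OPENAI_API_KEY"] = "sk-..."
```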

Setting OpenAI API Key

Streamlit UI 

These lines of code create the web interface using Streamlit. The user is prompted to upload a PDF file.

Streamlit UI

Reading the PDF file 

If a file has been uploaded, this block reads the PDF file, extracts the text from each page, and concatenates it into a single string. 
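The extraction loop in the screenshot boils down to iterating over pages and concatenating their text. The stub classes below stand in for PyPDF2's reader so the sketch stays self-contained and runnable without a real PDF:

```python
# StubPage stands in for a PyPDF2 page object, which supports text extraction.
class StubPage:
    def __init__(self, page_text):
        self._text = page_text

    def extract_text(self):
        return self._text

# A reader exposes a sequence of pages; two stub pages stand in for one here.
pages = [StubPage("First page."), StubPage("Second page.")]

# Concatenate every page's text into a single string.
text = ""
for page in pages:
    text += page.extract_text() + "\n"
```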

Reading the PDF File

Text splitting 

Language models are often limited by the amount of text you can pass to them, so it is necessary to split long documents into smaller chunks. LangChain provides several utilities for doing so. 

Text Splitting

Using a text splitter can also improve the results of vector store searches, as smaller chunks are sometimes more likely to match a query. Here we split the text into chunks of 1,000 tokens with a 200-token overlap. 
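A character-level splitter with overlap captures the idea (characters stand in for tokens here; LangChain's own splitters handle tokenization and separators more carefully):

```python
# Split text into fixed-size chunks where consecutive chunks share `overlap`
# characters, mirroring the 1,000-unit chunks with 200-unit overlap above.
def split_text(text, chunk_size=1000, overlap=200):
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance by chunk size minus the overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

chunks = split_text("a" * 2500)
```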

Embeddings 

Here, the OpenAIEmbeddings class is used to generate embeddings, which are vector representations of the text data. These embeddings are then indexed with FAISS to create an efficient search index over the chunks of text.  

Embeddings

Creating conversational retrieval chain 

The chains developed are modular components that can be easily reused and connected. They consist of predefined sequences of actions encapsulated in a single line of code. With these chains, there’s no need to explicitly call the GPT model or define prompt properties. This specific chain allows you to engage in conversation while referencing documents and retains a history of interactions. 
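Under the hood, a conversational retrieval chain roughly does the following: retrieve the most relevant chunk, fold in the chat history, build a prompt, and call the model. The sketch below uses a naive word-overlap retriever and a `fake_llm` stub in place of the real vector store and LLM:

```python
chunks = [
    "LangChain chains link multiple LLM calls together.",
    "FAISS stores embeddings for similarity search.",
]

# Naive retrieval stand-in: pick the chunk sharing the most words with the question.
def retrieve(question):
    words = set(question.lower().split())
    return max(chunks, key=lambda c: len(words & set(c.lower().split())))

# Stand-in for the real LLM call.
def fake_llm(prompt):
    return "Answer derived from: " + prompt.splitlines()[0]

def conversational_chain(question, history):
    context = retrieve(question)            # find the relevant text
    prompt = (f"Context: {context}\n"
              f"History: {history}\n"
              f"Question: {question}")
    answer = fake_llm(prompt)               # generate the answer
    history.append((question, answer))      # retain the exchange
    return answer

history = []
answer = conversational_chain("What does FAISS store?", history)
```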

Creating Conversational Retrieval Chain

Streamlit for generating responses and displaying in the App 

This block prepares a response that includes the generated answer and the source documents and displays it on the web interface. 

Streamlit for Generating Responses and Displaying in the App

Let’s run our App 

QA Chatbot

Here we uploaded a PDF, asked a question, and got our required answer with the source document. See, that is how the magic of LangChain works.  

You can find the code for this app on my GitHub repository LangChain-Custom-PDF-Chatbot.

Build your own conversational AI applications 

And that concludes our journey! You have successfully used LangChain to build a basic Q&A application, and you should now have a solid grasp of what LangChain can do. Take the initiative to explore it further and build even more compelling applications. Enjoy the coding adventure!

 


May 22, 2023

Do you know what can be done with your telecom data? Who decides how it should be used?

Telecommunications isn’t going anywhere. In fact, your telecom data is becoming even more important than ever.

From the first smoke signals to current, cutting-edge smartphones, the objective of telecommunications has remained the same:

Telecom transmits data across distances farther than the human voice can carry.

Telecommunications (or telecom), as an industry with data ingrained into its very DNA, has benefited a great deal from the advent of modern data science. Here are 7 ways that telecommunications companies (otherwise known as telcos) are making the most of your telecom data through machine learning.

1: Aiding in infrastructure repair

A person analyzing data reports

Even as communication becomes more decentralized, signal towers remain an unfortunate remnant of an analog past in telecommunications. Companies can’t exactly send their in-house software engineers to climb up the towers and routinely check on the infrastructure. This task still requires field workers to carry out routine inspections, even if no problem visibly exists. AT&T is looking to change that through machine learning models that will analyze video footage captured by drones. The company can then passively detect potential risks, allowing human workers to fix structural issues before they affect customers. Read more about AT&T’s drones here.

2: Email management and lead identification

A number of emails

Mass email marketing is a vital asset of the modern corporation, but even as the sending process becomes more automated, someone is still required to sift through the responses and interpret the interests and questions from potential customers.

To make your life easier, you could instead offload that task to AI. In 2016, CenturyLink began using its automated assistant “Angie” to handle 30,000 monthly emails. Of these, 99% could be properly interpreted without handing them off to a human manager. Imagine how much time the human manager would save, without having to sift through that telecom data.

The company behind Angie, California-based tech developer Conversica, advertises machine learning models as a way to identify promising leads from the dense noise of email communication, which enables telcos to efficiently redirect their marketing follow-up efforts to the right representatives.

3: Rise of the chat bots

Chat bots sending automated messages

Dealing with chat bots can be a frustrating (or hilarious) experience. Despite the generally negative perception that precedes them, it hasn’t slowed down bot implementation into the customer service side of most telecom companies. Spectrum and AT&T are among the corporations that utilize chat bots at some level of their customer service pipeline, and others are quickly following suit. As the algorithms behind these programs grow more nuanced, human customer service, which brings its own set of frustrations, is beginning to be reduced or phased out.

4: Working with language

The advancement of natural language processing has made interacting with technology easier than ever. Telcos like DISH and Comcast have made use of this branch of artificial intelligence to improve the user interface of their products. One example of this is allowing customers to navigate channels and save shows as “favorites” using only their natural speech. Visually impaired customers can make use of vocal relay features to hear titles and time-slots read back to them in response to spoken commands, widening the user base of the company.

5: Content customization

Content customization concept across different channels

If you’re a Netflix user, I’m sure you’ve seen the “Recommended for you” and “Because you watched (insert show title)” recommendations. They used to be embarrassingly bad, but these suggestions have noticeably improved over the years.

Netflix has succeeded partly on the back of its recommendation engine, which tailors displayed content based on user behavior (in other words, your telecom data). Comcast is making moves towards a similar system, utilizing machine vision algorithms and user metadata to craft a personalized experience for the customer.

As companies create increasingly precise user profiles, we are approaching the point where your telco knows more about your behavior than you do, solely from the telecom data you put out. This can have a lot of advantages; one of the more obvious ones is being introduced to a new favorite show.

6: Variable data caps

Nobody likes data caps that restrict them, but paying for data usage you’re not actually using is nearly as bad. Some telecom companies are moving towards a system that calculates data caps based on user behavior and adjusts the price accordingly, in an effort to be as fair as possible. Whether or not you think corporations will use tiered pricing in a reasonable way depends on your opinion of said corporations. On paper, big data may be able to determine what kind of data consumer you are and adjust your data restrictions to fit your specific needs. This could potentially save you hundreds of dollars a year.

7: Mining call detail records
For as long as data could be extracted from phone calls, the telecommunications industry has been collecting your telecom data. “Call detail records” (CDRs) are a treasure trove of user information.

CDRs are accompanied by metadata which includes parameters such as the numbers of both speakers on the call, the route the call took to connect, any faulty conditions the call experienced, and more. Machine learning models are already working to translate CDRs into valuable insights on improving call quality and customer interactions.

It’s important to note that phone companies aren’t the only ones making use of this specific data. Since this metadata contains limited personal information, the Supreme Court ruled that it does not fall under the 4th Amendment, and as such, CDRs are used by law enforcement almost as much as by telcos.

Contributors:

Sabrina Dominguez: Sabrina holds a B.S. in Business Administration with a specialization in Marketing Management from Central Washington University. She has a passion for search engine optimization and marketing.

James Kennedy: James holds a B.A. in Biology with a Creative Writing minor from Whitman College. He is a lifelong writer with a curiosity for the sciences.

This is the first part in a series identifying the practical uses of data science in various industries. Stay tuned for the second part, which will cover data in the healthcare sector.

June 14, 2022

What are chatbots? In the first part of this introductory series to chatbots, we talk about what this revolutionary technology is and why it has suddenly become so popular.

It took less than 24 hours of interaction with humans for an innocent, self-learning AI chatbot to turn into a chaotic, racist Nazi.

In March 2016, Microsoft unveiled Tay: a Twitter-based, friendly, self-learning chatbot modeled to behave like a teenage girl. The AI chatbot was supposed to be an experiment in “conversational understanding”, as described by Microsoft. The bot was designed to learn from interacting with people online through casual conversation, slowly developing its personality.

What Microsoft didn’t consider, however, was the effect of negative inputs on Tay’s learning. Tay started off by declaring “humans are super cool” and that it was “a nice person”. Unfortunately, the conversations didn’t stay casual for too long.

In less than 24 hours Tay was tweeting racist, sexist, and extremely inflammatory remarks after learning from all sorts of misogynistic, racist garbage tweeted at it by internet trolls.

This entire experiment, despite becoming a proper PR disaster for Microsoft, proved to be an excellent study into the inherently negative human bias and its effect on self-learning Artificial Intelligence.

So, what are Chatbots?

A chatbot is a specialized software that allows conversational interaction between a computer and a human. Modern chatbots are versatile enough to carry out complete conversations with their human users and even carry out tasks given during conversations.

Having become mainstream because of personal assistants from the likes of Google, Amazon, and Apple, chatbots have become a vital part of our everyday lives whether we realize it or not.

Why the sudden popularity?

The use of chatbots has skyrocketed recently. They have found a strong foothold in almost every task that requires text-based public dealing. They have become so critical in the customer support industry, for example, that almost 25% of all customer service operations are expected to use them by 2020.

Projected growth rate of chatbot use

This is mainly because people have all but moved on to chat as the primary mode of communication. Couple that with the huge number of conversational platforms (Skype, WhatsApp, Slack, Kik, etc.) available, and the environment makes complete sense to use AI and the cloud to connect with people.

At the other end of the support chain, businesses love chatbots because they’re available 24×7, have near-immediate response times, and are very easy to scale without the huge human resource bill that normally comes with having a decent customer support operations team.

Outside of business environments, smart virtual assistants dominate almost every aspect of modern life. We depend on these smart assistants for everything; from controlling our smart homes to helping us manage our day-to-day tasks. They have, slowly, become a vital part of our lives and their usefulness will only increase as they keep becoming smarter.

Types of Chatbots

Chatbots can be broadly classified into two different types:

Rule-Based Chatbots

The very first bots to see the light of day, rule-based chatbots relied on pattern-matching methodologies to ‘guess’ appropriate responses from an existing database. These bots started with the release of ELIZA in 1966 and continued till around 2001 with the release of SmarterChild developed by ActiveBuddy.

eliza interface
Welcome screen of ELIZA

The simplest rule-based chatbots have one-to-one tables of inputs and their responses. These bots are extremely limited and can only respond to queries that exactly match the inputs defined in their database. This means the conversation can only follow a number of predefined flows. In many cases, the chatbot doesn’t even allow users to type in queries, relying instead on preset inputs that the bot understands.
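Such a one-to-one table takes only a few lines to implement. A minimal sketch:

```python
# One-to-one table of inputs and responses; anything outside the table
# falls through to a default reply.
RESPONSES = {
    "hi": "Hello! How can I help you today?",
    "what are your timings?": "We are open 9am to 5pm, Monday to Friday.",
    "bye": "Goodbye!",
}

def rule_based_reply(user_input):
    # normalize the input, then look it up; unknown inputs get a fallback
    return RESPONSES.get(user_input.strip().lower(),
                         "Sorry, I didn't understand that.")

reply = rule_based_reply("Hi")
```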

This doesn’t necessarily limit their use though. Rule-based Chatbots are widely used in modern businesses for customer support tasks. A Customer Support Chatbot has an extremely limited job description. A customer support chatbot for a bank, for example, would need to answer some operational queries about the bank (timings, branch locations) and complete some basic tasks (authenticate users, block stolen credit cards, activate new credit cards, register complaints).

In almost all of these cases, the conversation would follow a pattern. The flow of conversation, once defined, would stay mostly the same for a majority of the users. The small number of customers who need more specialized support could be forwarded to a human agent.

A lot of modern customer-facing chatbots are AI-based Chatbots that use Retrieval-based Models (which we’ll be discussing below). They are primarily rule-based but employ some form of Artificial Intelligence (AI) to help them understand the flow of human conversations.

AI-based chatbots

These are a relatively newer class of chatbots, having come out after the proliferation of artificial intelligence in recent years. These bots (like Microsoft’s Tay) learn by being trained on conversational datasets, instead of having hard-coded rules like their rule-based kin.

AI-based chatbots are based on complex machine-learning models that enable them to self-learn. These types of chatbots can be broadly classified into two main types depending on the types of models they use.

1.    Retrieval-based models

As their name suggests, chatbots using retrieval-based models are provided with a database of answers and are trained to rank each one against the input question and retrieve the most relevant response. They cannot generate their own answers, but with an extensive database of responses and proper training, they can be very productive and useful.
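A toy version of retrieval-based ranking: score every canned response against the input question and return the best match. Real systems learn the scoring function from conversational data; plain word overlap stands in for it here:

```python
import string

# Canned responses the bot can choose from.
CANDIDATES = [
    "You can reset your password from the account settings page.",
    "Our support team is available around the clock.",
    "Refunds are processed within 5 business days.",
]

def tokens(text):
    # lowercase and strip punctuation before splitting into words
    return set(text.lower().translate(
        str.maketrans("", "", string.punctuation)).split())

# Rank by word overlap; a trained model would learn this scoring function.
def best_response(question):
    q = tokens(question)
    return max(CANDIDATES, key=lambda r: len(q & tokens(r)))

answer = best_response("How do I reset my password?")
```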

Usually easier to develop and customize, retrieval-based chatbots are mostly used in customer support and feedback applications where the conversation is limited to a topic (either a product, a service, or an entity).

2.     Generative models

Generative models, unlike Retrieval-based models, can generate their own responses by analyzing the input word by word to understand the query. These models are more ‘human’ during their interactions but at the same time also more prone to errors as they need to build sentence responses themselves.

Chatbots based on Generative Models are quite complex to build and are usually overkill for customer-facing applications. They are mostly used in applications where conversations are expected to be general/not limited to a specific topic. Take Google Assistant as an example.

The Assistant is an always-listening chatbot that can answer questions, tell jokes, and carry out very ‘human’ conversations. The one thing it can’t do? Provide customer support for Google products.

google assistant message
Google Assistant

Modern virtual assistants are a very good example of AI-based chatbots.

History of chatbots

history of chatbots infographic
History of Chatbots Infographic

Modern chatbots: Where are they used?

Customer services

The use of chatbots has been growing exponentially in the customer services industry. The chatbot market is projected to grow from $2.6 billion in 2019 to $9.4 billion by 2024. This really isn’t surprising when you look at the immense benefits chatbots bring to businesses. According to a study by IBM, chatbots can reduce customer service costs by up to 30%. Couple that with customers being open to interacting with bots for support and purchases, and it’s a win-win scenario for both parties involved.

In the customer support industry, chatbots are mostly used to automate redundant queries that would normally be handled by a human agent. Businesses are also starting to use them in automating order-booking applications. The most successful example is Pizza Hut’s automated ordering platform.

Healthcare

Despite not being a substitute for healthcare professionals, chatbots are gaining popularity in the healthcare industry. They are mostly used as self-care assistants, helping patients manage their medications and track and monitor their fitness.

Financial assistants

Financial chatbots usually come bundled with apps from leading banks. Once linked to your bank account, they can extend the functionality of the app by providing you with a conversational (text or voice) interface to your bank.

Besides these, there are quite a few financial assistant chatbots available. These can track your expenses, budget your resources, and help you manage your finances. Charlie is a very good example of a financial assistant. The chatbot is designed to help you budget your expenses and track your finances so that you end up saving more.

Automation

Chatbots have become a very popular way of interacting with the modern smart home. These bots don’t need deep contextual awareness, but they do need to be trained properly to extract an actionable command from an input statement.

This is not always an easy task as the chatbot is required to understand the flow of natural language. Modern virtual assistants (such as Google Assistant, Alexa, and Siri) handle these tasks quite well and have become the de facto standard for providing a voice or text-based interface to a smart home.

Tools for building intelligent chatbots

Building a chatbot as powerful as the virtual assistants from Google and Amazon is an almost impossible task. These companies have been able to achieve this feat after spending years and billions of dollars in research, something that not everyone with a use for a chatbot can afford.

Luckily, almost every player in the tech market (including Google and Amazon) allows businesses to buy their technology platforms to design customized chatbots for their own use. These platforms have pre-trained language models and easy-to-use interfaces that make it extremely easy for new users to set up and deploy customized chatbots in no time.

If that wasn’t good enough, almost all of these platforms allow businesses to push their custom chatbot apps to Google Assistant or Amazon Alexa and have them instantly be available to millions of new users.

The most popular of these platforms are:

1.     Google DialogFlow

2.     Amazon Lex

3.     IBM Watson

4.     Microsoft Azure Bot

Coming up next

Now that we’re familiar with the basics of chatbots, we’ll be going into more detail about how to build them. In the second blog of the series, we’ll be talking about how to create a simple Rule-based chatbot in Python. Stay tuned!

June 13, 2022
