
Explore a step-by-step journey in crafting dynamic chatbot experiences tailored to your CSV data using Gradio, Llama 2, and Hugging Face on Google Colab.

“When diving into the world of large language models (LLMs), one often encounters barriers such as the necessity for a paid API or the need for a robust computing system when working with open-source models.

Eager to overcome these constraints, I embarked on a journey to develop a Gradio app built entirely with open-source tools.

Harnessing the power of the free Colab T4 GPU and an open-source LLM, this blog will guide you through the process, empowering you to effortlessly chat with your own CSV data, breaking free from the traditional limitations associated with LLMs.”

Prerequisites

  • A Hugging Face account to access the open-source Llama 2 and embedding models (free sign-up available if you don’t have one). 
  • Access to LLAMA2 models, obtainable through this form (access is typically granted within a few hours). 
  • A Google account for using Google Colab. 

 

Once you have been granted access to Llama 2 models, visit the following link, select the checkbox shown in the image below, and hit ‘Submit’. 

 


 

 

Setting up Google Colab environment 

If you are running on Google Colab, go to Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > T4. Our code will require ~15 GB of GPU RAM. 

 


 

Installing necessary libraries and dependencies

The following snippet streamlines the installation process, ensuring that all necessary components are readily available for our project:
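The original install cell is not reproduced in this extract; a one-cell install along these lines (the package set is inferred from the libraries used later in the walkthrough, and versions are left unpinned as an assumption) covers everything the project needs:

```python
# transformers + accelerate for the model, langchain + faiss for retrieval,
# sentence-transformers for embeddings, and gradio for the UI
!pip install -q transformers accelerate langchain faiss-gpu sentence-transformers gradio
```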

 

 

Authenticating with HuggingFace

To integrate your Hugging Face token into Colab’s environment, follow these steps. 

  • Execute the following code in a Colab cell: 
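A minimal snippet for this step, using the notebook_login helper from huggingface_hub, looks like this:

```python
from huggingface_hub import notebook_login

# Opens an input widget in the Colab cell asking for your Hugging Face token
notebook_login()
```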

 

 

  • After running the cell, a prompt will appear, requesting your Hugging Face token. 
  • Obtain your Hugging Face token by navigating to the Hugging Face settings. Look for the “Access Tokens” tab, where you can easily copy your token. 

 

Import relevant libraries
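The original import cell is not shown in this extract; the following sketch gathers the imports implied by the rest of the tutorial:

```python
import textwrap

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

from langchain.chains import RetrievalQA
from langchain.document_loaders import CSVLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import HuggingFacePipeline
from langchain.vectorstores import FAISS

import gradio as gr
```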

 

 

Initializing the HuggingFace pipeline

The first thing we need to do is initialize a text-generation pipeline with Hugging Face transformers. The pipeline requires two components that we must initialize first: 

  • An LLM, in this case it will be meta-llama/Llama-2-7b-chat-hf. 
  • The respective tokenizer for the model. 

 


 

We initialize the model and move it to our CUDA-enabled GPU. On Colab, downloading and initializing the model can take 2–5 minutes. 
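A sketch of that initialization, assuming float16 weights to fit within the T4’s ~15 GB and illustrative generation settings (the exact parameters in the original snippet are not shown):

```python
model_id = "meta-llama/Llama-2-7b-chat-hf"

# The tokenizer converts text to token IDs and back
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model in half precision and let accelerate place it on the GPU
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Wrap model + tokenizer in a text-generation pipeline,
# then expose it to LangChain as an LLM
generate_text = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,      # assumed generation settings
    temperature=0.1,
    repetition_penalty=1.1,
)
llm = HuggingFacePipeline(pipeline=generate_text)
```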

  

 

Load HuggingFace open-source embeddings models

Embeddings are crucial for language models because they transform words or tokens into numerical vectors, enabling the model to understand and process them mathematically. In the context of LLMs: 

  • Semantic Representation: Embeddings encode semantic relationships, placing similar words close in vector space for the model to understand nuanced language context. 
  • Numerical Input for Models: Transforming words into numerical vectors, embeddings provide a mathematical foundation for neural networks, ensuring effective processing within the model. 
  • Dimensionality Reduction: Embeddings condense high-dimensional word representations, enhancing computational efficiency while preserving essential linguistic features. 
  • Transfer Learning: Pre-trained embeddings capture general language patterns, facilitating knowledge transfer to specific tasks, boosting model performance on diverse datasets. 
  • Contextual Information: Embeddings, considering adjacent words, capture contextual nuances, enabling Language Models to generate coherent and contextually relevant language. 
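The embedding model used in the original snippet is not shown; a common open-source choice, used here as an assumption, is a sentence-transformers model loaded through LangChain:

```python
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",  # assumed model choice
    model_kwargs={"device": "cuda"},                      # run on the T4 GPU
)
```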

 

 

 

Load CSV data using LangChain CSV loader 

The LangChain CSV loader loads CSV data with a single row per document. For this demo, we are using an employee sample data CSV file uploaded to Colab’s environment. 
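A sketch of the loader call (the file name is hypothetical; use whatever you uploaded to Colab):

```python
loader = CSVLoader(file_path="employee_sample_data.csv")  # hypothetical file name
documents = loader.load()  # one Document per CSV row
```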

 

 

 

Creating vectorstore

For this demonstration, we are going to use FAISS vectorstore. Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. 
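Building the store from the loaded rows and the embeddings model is a one-liner:

```python
vectorstore = FAISS.from_documents(documents, embeddings)
```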

 

 

Initializing retrieval QA chain and testing sample query

We are now going to use LangChain’s RetrievalQA chain, which combines the vector store with a question-answering chain to answer questions over the data. 
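The exact snippet is not reproduced in this extract; a sketch consistent with the description below is:

```python
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",                    # stuff retrieved rows into the prompt
    retriever=vectorstore.as_retriever(),
    return_source_documents=True,
)

result = qa_chain({"query": "What is the Annual Salary of Sophie Silva?"})
print(textwrap.fill(result["result"], width=500))
```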

The above code utilizes the RetrievalQA module to answer a specific query about the annual salary of Sophie Silva, including the retrieval of source documents. The result is then formatted for better readability by wrapping the text to a maximum width of 500 characters. 

 

 

Building a Gradio App 

Now we are going to merge the above code snippets to create a Gradio application. 
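The merged snippet is not reproduced in this extract; the sketch below reconstructs it from the function and interface descriptions that follow (component names such as data, qs, submit_btn, answer, and dataframe are taken from those descriptions, and pandas is assumed for the row preview):

```python
import pandas as pd


def main(dataset, question):
    """Answer a question against the currently loaded vector store."""
    qa_chain = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=vectorstore.as_retriever(),
        return_source_documents=True,
    )
    result = qa_chain({"query": question})
    return textwrap.fill(result["result"], width=500)


def dataset_change(dataset):
    """Rebuild the FAISS vector store whenever a new CSV is uploaded."""
    global vectorstore
    loader = CSVLoader(file_path=dataset.name)
    documents = loader.load()
    vectorstore = FAISS.from_documents(documents, embeddings)
    return pd.read_csv(dataset.name).head(5)  # preview the first 5 rows


with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            data = gr.File(label="Upload CSV file")
            qs = gr.Textbox(label="Question")
            submit_btn = gr.Button("Submit")
        with gr.Column():
            answer = gr.Textbox(label="Answer")
    with gr.Row():
        dataframe = gr.Dataframe()  # displays the first 5 rows of the dataset
    submit_btn.click(main, inputs=[data, qs], outputs=[answer])
    data.change(fn=dataset_change, inputs=data, outputs=[dataframe])
    gr.Examples(
        [["What is the Annual Salary of Theodore Dinh?"],
         ["What is the Department of Parker James?"]],
        inputs=[qs],
    )

demo.launch(debug=True)
```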

 

 

Function Definitions: 

  • main: Takes a dataset and a question as input, initializes a RetrievalQA chain, retrieves the answer, and formats it for display. 
  • dataset_change: Changes in the dataset trigger this function, loading the dataset, creating a FAISS vector store, and returning the first 5 rows of the dataset. 

 

Gradio Interface Setup: 

  • with gr.Blocks() as demo: Initializes a Gradio interface block. 
  • with gr.Row(): and with gr.Column():: Defines the layout of the interface with file input, text input for the question, a button to submit the question, and a text box to display the answer. 
  • with gr.Row(): and dataframe = gr.Dataframe(): Includes a row for displaying the first 5 rows of the dataset. 
  • submit_btn.click(main, inputs=[data,qs], outputs=[answer]): Associates the main function with the click event of the submit button, taking inputs from the file and question input and updating the answer text box. 
  • data.change(fn=dataset_change,inputs=data,outputs=[dataframe]): Calls the dataset_change function when the dataset changes, updating the dataframe display accordingly. 
  • gr.Examples([["What is the Annual Salary of Theodore Dinh?"], ["What is the Department of Parker James?"]], inputs=[qs]): Provides example questions for users to input. 

 

Launching the Gradio Interface: 

  • demo.launch(debug=True): Launches the Gradio interface in debug mode. 

 

In summary, this code creates a user-friendly Gradio interface for interacting with a question-answering system. Users can input a CSV dataset, ask questions about the data, and receive answers displayed in real-time. The interface also showcases a sample dataset and questions for user guidance. 

 

Output 

Attached below are some screenshots of the app and the LLM’s responses. The process kicks off by uploading a CSV file, which is passed through the embeddings model to generate embeddings. Once this is done, the first 5 rows of the file are displayed for preview. The user can then type a question and hit ‘Submit’ to generate an answer. 

 

LLM output screenshots

 

Conclusion 

In conclusion, this blog has demonstrated how to put language models to work by integrating Llama 2, Gradio, and Hugging Face on Google Colab.

By overcoming the limitations of paid APIs and compute-intensive open-source models, we’ve successfully created a dynamic Gradio app for personalized interactions with CSV data. Leveraging LangChain question-answering chains and Hugging Face’s model integration, this hands-on guide enables users to build chatbots that comprehend and respond to their own datasets. 

As technology evolves, this blog encourages readers to explore, experiment, and continue pushing the boundaries of what can be achieved in the realm of natural language processing.

 

November 16, 2023

In this blog, we will be getting started with the Llama 2 open-source large language model. We will guide you through various methods of accessing it, ensuring that by the end, you will be well-equipped to unlock the power of this remarkable language model for your projects.

Whether you are a developer, researcher, or simply curious about its capabilities, this blog will equip you with the knowledge and tools you need to get started. 

 

Understanding Llama 2 

In the ever-evolving landscape of artificial intelligence, language models have emerged as pivotal tools for developers, researchers, and enthusiasts alike. One such remarkable addition to the world of language models is Llama 2. While it may not be the most powerful language model available, it stands out as an open-source gem. 

Llama 2, an open-source large language model, opens its doors for both research and commercial use, breaking down barriers to innovation and creativity. It comprises a range of pre-trained and fine-tuned generative text models, varying in scale from 7 billion to a staggering 70 billion parameters.

 

Read more about Llama 2 fine-tuning

 

Among these, the Llama-2-Chat models, optimized for dialogue, shine as they outperform open-source chat models across various benchmarks. In fact, their helpfulness and safety evaluations rival some popular closed-source models like ChatGPT and PaLM. 

In this blog, we will explore its training process, its improvements over its predecessor, and ways to harness its potential.

 

 

If you want to use it in your projects, this guide will get you started.

So, let us embark on this journey together as we unveil the world of Llama 2 and discover how it can elevate your AI endeavors. 

 

Llama 2: The evolution and enhanced features 

 

Llama 2 represents a significant leap forward from its predecessor, Llama 1, which garnered immense attention and demand from researchers worldwide. With over 100,000 requests for access, the research community demonstrated its appetite for powerful language models.

Building upon this foundation, Llama 2 emerges as Meta’s next-generation offering. Unlike Llama 1, which was released under a non-commercial license for research purposes, Llama 2 takes a giant stride by making itself freely available for both research and commercial applications. 


This second-generation model comes with notable enhancements, including pre-trained versions with parameter sizes of 7 billion, 13 billion, and a staggering 70 billion. Llama 2’s training data has been expanded, encompassing 40% more information, all while doubling Llama 1’s context length to 4,096 tokens.

 

Notably, the Llama-2 chat models, tailored for dialogue applications, have been fine-tuned with the assistance of over 1 million new human annotations. As we delve deeper, we will explore its capabilities and the numerous ways to access this remarkable language model. 

Source: https://ai.meta.com/llama/ 

 

Exploring your path to Llama 2: Six access methods you must learn 

Accessing the power of Llama 2 is easier than you might think, thanks to its open-source nature. Whether you are a researcher, a developer, or simply curious, here are six ways to get your hands on the Llama 2 model right now: 

 

Understanding Llama 2: Six Access Methods

 

 

Download Llama 2 Model 

Since the Llama 2 large language model is open-source, you can freely install it on your desktop and start using it. To do so, you will need to complete a few simple steps. 

  • First, head to Meta AI’s official Llama 2 download webpage and fill in the requested information. Make sure you select the right model you plan on utilizing. 
Llama 2 Download Request Form

 

  • Upon submitting your download request, you can expect to encounter the following page. You will receive an installation email from Meta with more information regarding the download. 

 

Llama 2 Download Request Received

 

  • Once the email arrives, proceed with the installation by following the instructions it contains. The first step is to access the Llama repository on GitHub. 
Get Started with Llama 2 Email

 

  • Download the code and extract the ZIP file to your desktop, then follow the instructions in the “Readme” document to start using all the available models. 

 


 

Llama 2 models are also available through Meta’s official organization on Hugging Face, where all the available models are accessible. To use these models from Hugging Face, you still need to submit a download request to Meta, and additionally fill out a form to enable the use of Llama 2 on Hugging Face.

To access its models on Hugging Face, follow these steps:

 

Meta Llama 2 Organization on Hugging Face

 

  • You can see a “Models” tab on the page, which lists all the available models. 
Llama 2 Hugging Face Models

 

Access Llama 2 on Hugging Face

 

 

  • In the “Access Llama 2 on Hugging Face” card, enter the email you used to submit the download request. 

Note: Please ensure that the email you use on Hugging Face matches the one you used to request Llama 2 download permission from Meta. 

 

Utilize the quantized model from Hugging Face 

In addition to the models from the official Meta Llama 2 organization, there are also quantized models available on Hugging Face. 

If you search for Llama in the Hugging Face search bar, you will see a list of available models. Models from the official meta-llama organization appear alongside models from other contributors.

These include quantized versions of the same Llama 2 models. For example, TheBloke/Llama-2-7b-Chat-GGUF contains GGUF-format model files for Meta’s Llama 2 7B Chat. 

 

Quantized Llama 2 7B

 

 

The key advantage of these compressed models lies in their accessibility. They are open-source and do not require users to request downloads from either Meta or Hugging Face. Although they are not the complete, original models, these quantized versions allow users to harness the capabilities of the model with reduced computational requirements. 
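The article does not show how to run these GGUF files; as an illustrative sketch only, one common route (an assumption, not part of the original text) is the llama-cpp-python package:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local file, downloaded from TheBloke/Llama-2-7b-Chat-GGUF
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf")

output = llm("Q: What is Llama 2? A:", max_tokens=64)
print(output["choices"][0]["text"])
```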

 

 

 

Deploy Llama 2 on Microsoft Azure 

Microsoft and Meta have strengthened their partnership, designating Microsoft as the preferred partner for Llama 2. This collaboration brings Llama 2 into the Azure AI model catalog, granting developers using Microsoft Azure the capability to seamlessly integrate and utilize this powerful language model. 

 

Azure ML Model Catalog

 

Within the Azure model catalog, you can effortlessly locate the Llama 2 model developed by Meta. Microsoft Azure simplifies the fine-tuning of Llama 2, offering both UI-based and code-based methods to customize the model according to your requirements. Furthermore, you can assess the model’s performance with your test data to ascertain its suitability for your unique use case. 

 

Harness Llama 2 as a cloud-based API 

Another avenue to tap into the capabilities of Llama 2 is to deploy the model on platforms such as Hugging Face and Replicate, turning it into a cloud API. By leveraging the Hugging Face Inference Endpoint, you can establish an accessible endpoint for your Llama 2 model hosted on Hugging Face, facilitating its utilization. 

Hugging Face Inference Endpoint

 

Additionally, it is conveniently accessible through Replicate, presenting a streamlined method for deploying and employing the model via API. This approach alleviates worries about the availability of GPU computing power, whether in the context of development or testing.

It enables the fine-tuning and operation of models in a cloud environment, eliminating the need for dedicated GPU setups. Serving as a cloud API, it simplifies the integration process for applications developed on a wide range of technologies. 
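As a minimal sketch, assuming a Llama 2 model already deployed behind a Hugging Face Inference Endpoint, a call could look like this (the endpoint URL and token are placeholders):

```python
import requests

API_URL = "https://your-endpoint.endpoints.huggingface.cloud"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer hf_your_token_here"}        # your HF access token

response = requests.post(
    API_URL,
    headers=HEADERS,
    json={"inputs": "Explain Llama 2 in one sentence."},
)
print(response.json())
```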

 

Replicate Llama 2

Online Interactions with Llama 2 

Experience its capabilities online through platforms like llama2.ai where you can freely engage with different models. Customize your interactions by adjusting parameters such as system prompt, max token, and randomness, offering a user-friendly gateway to explore the model’s creative AI potential.

This demo provides a non-technical audience with the opportunity to submit queries and toggle between chat modes, simplifying the experience of interacting with Llama 2’s generative abilities.  

 

Llama 2 Online

Offline Llama 2 Interaction with LM Studio 

With LM Studio, you have the power to run LLMs (Large Language Models) offline on your laptop, employ models through an intuitive in-app Chat UI or compatible local servers, access model files from Hugging Face repositories, and discover exciting new LLMs right from the app’s homepage. 

LM Studio Llama 2

LM Studio empowers you to engage with Llama 2 models offline. Here is how it works:  

  • Once installed, search for your desired Llama 2 model, such as Llama 2 7b. You will find a comprehensive list of repositories and quantized models on Hugging Face. Select your preferred repository and initiate the model download by clicking the link on the right. Monitor the download progress at the bottom of the screen. 

 

LM Studio Llama 2 7B

 

  • After the model is downloaded, click the AI Chat icon, select your model, and start a conversation with it. LM Studio offers a seamless offline experience, enabling you to explore the potential of Llama 2 models with ease. 
LM Studio Llama 2 Inference

 

Explore Llama 2 now!

In summary, this blog has guided you on an exploration of an open-source language model.

We analyzed its development, pointed out its unique features, and gave a detailed overview of six methods to use it. These methods are suitable for developers, researchers, and anyone curious about its potential.

Armed with this understanding, you are now well-equipped to unlock the capabilities of Llama 2 for your individual AI initiatives and pursuits. 

October 25, 2023

Data Science Dojo is offering LAMP and LEMP for FREE on Azure Marketplace packaged with pre-installed components for Linux Ubuntu. 

 

What are web stacks? 

 

Solution stacks are collections of separate components that together create a complete application development environment. The multiple layers in a web solution stack communicate and connect with each other, forming a comprehensive system that lets developers build websites efficiently and flexibly. These components are grouped into a stack because they are compatible and frequently used together. 

LAMP vs LEMP 

 

Now what do these two terms mean? Have a look at the table below: 

 

| # | LAMP | LEMP |
|---|------|------|
| 1 | Stands for Linux, Apache, MySQL/MariaDB, PHP/Python/Perl | Stands for Linux, Nginx (Engine-X), MySQL/MariaDB, PHP/Python/Perl |
| 2 | Supports the Apache2 web server for processing requests over HTTP | Supports the Nginx web server to transfer data over HTTP |
| 3 | Can have heavy server configurations | Lightweight reverse-proxy Nginx server |
| 4 | World’s first open-source stack for web development | Variation of LAMP, relatively new technology |
| 5 | Process-driven design because of Apache | Event-driven design because of Nginx |

 


 

 

Challenges faced by web developers 

 

Developers often face the challenge of optimal integration during web app development. Interoperability and interdependency issues are frequently encountered during the development and production phases.

Apart from that, conventional web stacks would sometimes cause problems due to the heavy architecture of the web server, forcing organizational websites to suffer downtime. 

In this scenario, the ability to program a website and manage a database from a single machine, connected to a web server without any interdependency issues, was seen as the ideal solution developers were eager to deploy. 

 

Working of LAMP and LEMP 

 

LAMP & LEMP are open-source web stacks packaged with Apache2/Nginx web server, MySQL database, and PHP object-oriented programming language, running together on top of a Linux machine. Both stacks are used for building high-performance web applications. All layers of LAMP and LEMP are optimally compatible with each other, thus both the stacks are excellent if you want to host, serve, and manage web content. 

 

Figure 1: LAMP Architecture (Courtesy: https://www.javatpoint.com/what-is-lamp)

 

The LEMP architecture mirrors that of LAMP, except that the Apache web server is replaced by Nginx. 

 

Major features 

 

  • All the layers of LAMP and LEMP have potent connections with no interdependency issues 
  • They are open-source web stacks; LAMP enjoys huge support because of its experienced community 
  • Both provide blisteringly fast operability, whether it’s querying, programming, or web server performance 
  • Both stacks are flexible, meaning any open-source tool can be switched out and used in place of the pre-existing layers 
  • LEMP focuses on low memory usage and has a lightweight architecture 

 

What Data Science Dojo has for you 

 

The LAMP and LEMP offers packaged by Data Science Dojo are open-source web stacks for creating efficient and flexible web applications, with all respective components pre-configured and no installation burden. 

  • A Linux Ubuntu VM pre-installed with all LAMP/LEMP components 
  • The MySQL database management system for creating databases and handling web content 
  • The Apache2/Nginx web server, whose job is to process requests and send data via HTTP over the internet 
  • Support for the PHP programming language, used for fully functional web development 
  • phpMyAdmin, which can be accessed at http://your_ip/phpmyadmin 
  • Customizable, meaning users can replace each component with any other open-source alternative 

 

Conclusion 

 

Both of the stacks discussed above guarantee high availability on the cloud, as data can be distributed across multiple data centers and availability zones on the go. In this way, Azure increases the fault tolerance of data stored in the stack application. The power of Azure ensures maximum performance and high throughput for the MySQL database by providing low latency for executing complex queries.

Since LEMP/LAMP is designed for building websites, growth in web-related data can be managed by scaling up. The flexibility, performance, and scalability that an Azure virtual machine provides to LAMP/LEMP make it possible to host, manage, and modify applications of all types, regardless of traffic. 

At Data Science Dojo, we deliver data science education, consulting, and technical services to increase the power of data. Don’t wait to try this offer by Data Science Dojo, your ideal companion on your journey to learn data science! 

Click on the buttons below to head over to the Azure Marketplace and deploy LAMP/LEMP for FREE by clicking on “Try now”. 


Note: You’ll have to sign up to Azure, for free, if you do not have an existing account. 

December 9, 2022
