
Imagine staring at a blank screen, the cursor blinking impatiently. You know you have a story to tell, but the words just won’t flow. You’ve brainstormed, outlined, and even consumed endless cups of coffee, but inspiration remains elusive. This was often the reality for writers, especially in the fast-paced world of blog writing.

In this struggle, enter chatbots as potential saviors, promising to spark ideas with ease. But their responses often felt generic, trapped in a one-size-fits-all format that stifled creativity. It was like trying to create a masterpiece with a paint-by-numbers kit.

Then Dynamic Few-Shot Prompting enters the scene. This technique is a game-changer in the creative realm, empowering language models to craft more accurate, engaging content that resonates with readers.

It addresses the challenges by dynamically selecting a relevant subset of examples for prompts, allowing for a tailored and diverse set of creative responses specific to user needs. Think of it as having access to a versatile team of writers, each specializing in different styles and genres.


 

To comprehend this exciting technique, let’s first delve into its parent concept: Few-shot prompting.

Few-Shot Prompting

Few-shot prompting is a technique in natural language processing that involves providing a language model with a limited set of task-specific examples, often referred to as “shots,” to guide its responses in a desired way. This means you can “teach” the model how to respond on the fly simply by showing it a few examples of what you want it to do.

In this approach, the user collects examples representing the desired output or behavior. These examples are then integrated into a prompt instructing the Large Language Model (LLM) on how to generate the intended responses.


The prompt, including the task-specific examples, is then fed into the LLM, allowing it to leverage the provided context to produce new and contextually relevant outputs.

 

Few-shot prompting at a glance

 

Unlike zero-shot prompting, where the model relies solely on its pre-existing knowledge, few-shot prompting enables the model to benefit from in-context learning by incorporating specific task-related examples within the prompt.

 

Dynamic few-shot prompting: Taking it to the next level

Dynamic Few-Shot Prompting takes this adaptability a step further by dynamically selecting the most relevant examples based on the specific context of a user’s query. This means the model can tailor its responses even more precisely, resulting in more relevant and engaging content.

To choose relevant examples, various methods can be employed. In this blog, we’ll explore the semantic example selector, which retrieves the most relevant examples through semantic matching. 

Enhancing adaptability with dynamic few-shot prompting

 

What is the importance of dynamic few-shot prompting? 

The significance of Dynamic Few-Shot Prompting lies in its ability to address critical challenges faced by modern Large Language Models (LLMs). With limited context lengths in LLMs, processing longer prompts becomes challenging, requiring increased computational resources and incurring higher financial costs.

Dynamic Few-Shot Prompting optimizes efficiency by strategically utilizing a subset of training data, effectively managing resources. This adaptability allows the model to dynamically select relevant examples, catering precisely to user queries, resulting in more precise, engaging, and cost-effective responses.  

A closer look (with code!)

It’s time to get technical! Let’s delve into the workings of Dynamic Few-Shot Prompting using the LangChain Framework.

Importing necessary modules and libraries.

 

In the .env file, I have my OpenAI API key and base URL stored for secure access.
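A sketch of what that setup might look like, assuming the langchain, chromadb, and python-dotenv packages are installed (import paths vary slightly across LangChain versions):

```python
# Sketch of the imports used in this walkthrough; module paths may differ
# slightly across LangChain versions.
from dotenv import load_dotenv

from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.chat_models import AzureChatOpenAI

load_dotenv()  # reads the OpenAI API key and base URL from the .env file
```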

 

 

Next, we define an example prompt template with input variables "user_query" and "blog_format" to be used in LangChain's FewShotPromptTemplate.
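A minimal sketch of that template; the exact template string is an assumption:

```python
# One example = a user query plus the blog format the model should imitate.
example_prompt = PromptTemplate(
    input_variables=["user_query", "blog_format"],
    template="User query: {user_query}\n\nBlog format:\n{blog_format}",
)
```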

 

user_query_1 = "Write a technical blog on topic [user topic]"

 

blog_format_1 = """

**Title:** [Compelling and informative title related to user topic] 

 

**Introduction:** 

* Introduce the topic in a clear and concise way. 

* State the problem or question that the blog will address. 

* Briefly outline the key points that will be covered. 

 

**Body:** 

* Break down the topic into well-organized sections with clear headings. 

* Use bullet points, numbered lists, and diagrams to enhance readability. 

* Provide code examples or screenshots where applicable. 

* Explain complex concepts in a simple and approachable manner. 

* Use technical terms accurately, but avoid jargon that might alienate readers. 

 

**Conclusion:** 

* Summarize the main takeaways of the blog. 

* Offer a call to action, such as inviting readers to learn more or try a new technique. 

 

**Additional tips for technical blogs:** 

* Use visuals to illustrate concepts and break up text. 

* Link to relevant resources for further reading. 

* Proofread carefully for accuracy and clarity. 

"""

 

user_query_2 = "Write a humorous blog on topic [user topic]"

 

blog_format_2 = """

**Title:** [Witty and attention-grabbing title that makes readers laugh before they even start reading] 

 

**Introduction:** 

* Set the tone with a funny anecdote or observation. 

* Introduce the topic with a playful twist. 

* Tease the hilarious insights to come. 

 

**Body:** 

* Use puns, wordplay, exaggeration, and unexpected twists to keep readers entertained. 

* Share relatable stories and experiences that poke fun at everyday life. 

* Incorporate pop culture references or current events for added relevance. 

* Break the fourth wall and address the reader directly to create a sense of connection. 

 

**Conclusion:** 

* End on a high note with a punchline or final joke that leaves readers wanting more. 

* Encourage readers to share their own funny stories or experiences related to the topic. 

 

**Additional tips for humorous blogs:** 

* Keep it light and avoid sensitive topics. 

* Use visual humor like memes or GIFs. 

* Read your blog aloud to ensure the jokes land. 

"""

user_query_3 = "Write an adventure blog about a trip to [location]"

 

blog_format_3 = """

**Title:** [Evocative and exciting title that captures the spirit of adventure] 

 

**Introduction:** 

* Set the scene with vivid descriptions of the location and its atmosphere. 

* Introduce the protagonist (you or a character) and their motivations for the adventure. 

* Hint at the challenges and obstacles that await. 

 

**Body:** 

* Chronicle the journey in chronological order, using sensory details to bring it to life. 

* Describe the sights, sounds, smells, and tastes of the location. 

* Share personal anecdotes and reflections on the experience. 

* Build suspense with cliffhangers and unexpected twists. 

* Capture the emotions of excitement, fear, wonder, and accomplishment. 

 

**Conclusion:** 

* Reflect on the lessons learned and the personal growth experienced during the adventure. 

* Inspire readers to seek out their own adventures. 

 

**Additional tips for adventure blogs:** 

* Use high-quality photos and videos to showcase the location. 

* Incorporate maps or interactive elements to enhance the experience. 

* Write in a conversational style that draws readers in. 

"""

 

These examples showcase different blog formats, each tailored to a specific genre. The three dummy examples include a technical blog template with a focus on clarity and code, a humorous blog template designed for entertainment with humor elements, and an adventure blog template emphasizing vivid storytelling and immersive details about a location.

While these are just three examples for simplicity, more formats can be added to cater to diverse writing styles and topics. Instead of format templates, full original blogs can also be used as examples.

 

 

Next, we'll compile a list from the crafted examples. This list will be passed to the example selector, which stores the examples in a vector store as embeddings. This arrangement enables semantic matching against these examples at a later stage.
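Something like the following, pairing each crafted query with its format:

```python
# Each dictionary's keys match the input variables of example_prompt above.
examples = [
    {"user_query": user_query_1, "blog_format": blog_format_1},
    {"user_query": user_query_2, "blog_format": blog_format_2},
    {"user_query": user_query_3, "blog_format": blog_format_3},
]
```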

 

 

Now initialize AzureOpenAIEmbeddings() for creating embeddings used in semantic similarity. 
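A sketch of the initialization; the deployment name is a placeholder for your own Azure resource:

```python
# Embedding model used to vectorize the examples and, later, the user query.
# "text-embedding-ada-002" is a placeholder; use your own Azure deployment name.
embeddings = AzureOpenAIEmbeddings(azure_deployment="text-embedding-ada-002")
```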

 

 

Now comes the example selector that stores the provided examples in a vector store. When a user asks a question, it retrieves the most relevant example based on semantic similarity. In this case, k=1 ensures only one relevant example is retrieved.
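A sketch using LangChain's SemanticSimilarityExampleSelector; the choice of Chroma as the backing vector store is an assumption, since any supported store works:

```python
# Store the examples as embeddings and pick the single most similar one (k=1)
# for each incoming query.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,    # the example dictionaries compiled above
    embeddings,  # embedding model used to vectorize them
    Chroma,      # vector store class that will hold the embeddings
    k=1,
)
```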

 

 

Next, we set up a FewShotPromptTemplate for dynamic few-shot prompting in LangChain. The example selector fetches relevant examples based on semantic similarity, and these examples are incorporated into the prompt along with the user query. The resulting template is then ready for generating dynamic and tailored responses.
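A sketch of that setup; the prefix and suffix wording here are assumptions:

```python
dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,  # picks the most relevant example at runtime
    example_prompt=example_prompt,      # how the retrieved example is rendered
    prefix="You are a helpful blog-writing assistant. Use the example below as a formatting guide.",
    suffix="User query: {user_query}\n\nBlog format:",
    input_variables=["user_query"],
)

# Inspect the fully assembled prompt for a sample query.
print(dynamic_prompt.format(
    user_query="I'm writing a blog on Machine Learning. What topics should I cover?"
))
```

Formatting the template this way lets us see exactly which example was injected before anything is sent to the model.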

 

Output

 

A sample output

 

This output gives an understanding of the final prompt that our LLM will use for generating responses. When the user query is “I’m writing a blog on Machine Learning. What topics should I cover?”, the ExampleSelector employs semantic similarity to fetch the most relevant example, specifically a template for a technical blog.

 

Hence the resulting prompt integrates instructions, the retrieved example, and the user query, offering a customized structure for crafting engaging content related to Machine Learning. With k=1, only one example is retrieved to shape the response.

 

 

Now that our prompt is ready, we will initialize an Azure OpenAI chat model to generate a tailored blog structure in response to a user query using dynamic few-shot prompting.
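A sketch of this final step; the deployment name is a placeholder, and LLMChain is one of several ways to wire the prompt to the model:

```python
from langchain.chains import LLMChain

# Chat model served from Azure OpenAI; "gpt-35-turbo" is a placeholder deployment
# name (older LangChain versions use deployment_name instead of azure_deployment).
llm = AzureChatOpenAI(azure_deployment="gpt-35-turbo", temperature=0.7)

chain = LLMChain(llm=llm, prompt=dynamic_prompt)
response = chain.run(
    user_query="I'm writing a blog on Machine Learning. What topics should I cover?"
)
print(response)
```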

 


 

Output

 

Generative AI sample output

 

The LLM efficiently generates a blog structure tailored to the user’s query, adhering to the format of technical blogs, and showcasing how dynamic few-shot prompting can provide relevant and formatted content based on user input.   

 

 

Conclusion

To conclude, Dynamic Few-Shot Prompting takes the best of both worlds (the guidance of few-shot prompts and the brevity of zero-shot prompts) and makes language models even better. It helps them understand your goals through well-chosen examples, focusing only on what is relevant to the user's query. This saves resources and opens the door for innovative uses.

Dynamic Few-Shot Prompting adapts well to the token limitations of Large Language Models (LLMs), giving efficient results. As this technology advances, it will change the way Large Language Models respond, making them more efficient in various applications.

February 6, 2024

In this blog, we are enhancing our Language Model (LLM) experience by adopting the Retrieval-Augmented Generation (RAG) approach!

We'll explore the fundamental architecture of RAG conceptually and delve deeper by implementing it through the LangChain orchestration framework, leveraging an open-source model from Hugging Face for both question answering and text embedding.

So, let’s get started! 

Common hallucinations in large language models  

The most common problem faced by state-of-the-art LLMs is that they produce inaccurate or hallucinated responses. This mostly occurs when they are prompted about information not present in their training data, despite being trained on extensive data.

 


 

This discrepancy between the general knowledge embedded in the LLM’s weights and newer information can be bridged using RAG. The solution provided by RAG eliminates the need for computationally intensive and expertise-dependent fine-tuning, offering a more flexible approach to adapting to evolving information.

 

Read more about: AI hallucinations and risks associated with large language models

 

 

 

AI hallucinations

What is RAG? 

Retrieval-Augmented Generation involves enhancing the output of Large Language Models (LLMs) by providing them with additional information from an external knowledge source.

 


 

This method aims to improve the accuracy and contextuality of LLM-generated responses while minimizing factual inaccuracies. RAG empowers language models to sidestep the need for retraining, facilitating access to the most up-to-date information to produce trustworthy outputs through retrieval-based generation. 

Architecture of RAG approach


Figure from the LangChain documentation

Prerequisites for code implementation 

  1. HuggingFace account and LLAMA2 model access:
  • Create a Hugging Face account (free sign-up available) to access open-source Llama 2 and embedding models. 
  • Request access to LLAMA2 models using this form (access is typically granted within a few hours). 
  • After gaining access to Llama 2 models, please proceed to the provided link, select the checkbox to indicate your agreement to the information, and then click ‘Submit’. 

2. Google Colab account:

  • Create a Google account if you don’t already have one. 
  • Use Google Colab for code execution. 

3. Google Colab environment setup: 

  • In Google Colab, go to Runtime > Change runtime type > Hardware accelerator > GPU > GPU type > T4 for faster execution of code. 

4. Library and dependency installation: 

  • Install the necessary libraries and dependencies with pip, as shown in the sketch after this list.

 

5. Authentication with HuggingFace: 

  • Integrate your Hugging Face token into Colab's environment, as shown in the sketch after this list.

 

 

  • When prompted, enter your Hugging Face token obtained from the “Access Token” tab in your Hugging Face settings. 
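A sketch of steps 4 and 5; the exact package list is an assumption based on what the rest of the walkthrough uses:

```python
# Step 4: install the libraries used in this walkthrough (run in a Colab cell;
# the leading "!" executes a shell command).
# !pip install -q transformers accelerate langchain chromadb sentence-transformers bs4

# Step 5: authenticate with Hugging Face so the gated Llama 2 weights can be downloaded.
from huggingface_hub import notebook_login

notebook_login()  # paste the token from your Hugging Face "Access Tokens" page when prompted
```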

 

Step 1: Document Loading 

Loading a document refers to the process of retrieving and storing data as documents in memory from a specified source. This process is typically facilitated by document loaders, which provide a “load” method for accessing and loading documents into the memory. 

LangChain has a number of document loaders; in this example, we will be using the "WebBaseLoader" class from the "langchain.document_loaders" module to load content from a specific web page.
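A sketch of the loading step:

```python
import bs4
from langchain.document_loaders import WebBaseLoader

# Only parse the parts of the page that carry the article content.
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        parse_only=bs4.SoupStrainer(class_=("post-content", "post-title", "post-header"))
    ),
)
docs = loader.load()
```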

 

 

 
The code extracts content from the web page "https://lilianweng.github.io/posts/2023-06-23-agent/". BeautifulSoup (`bs4`) is employed for HTML parsing, focusing on elements with the classes "post-content", "post-title", and "post-header". The loaded content is stored in the variable `docs`.

 

 

Step 2: Document transformation – Splitting/chunking document 

After loading the data, it can be transformed to fit the application's requirements or to extract relevant portions. This involves splitting lengthy documents into smaller chunks that are compatible with the model and produce accurate and clear results. LangChain offers various text splitters; for this implementation, we chose the "RecursiveCharacterTextSplitter" for generic text processing.
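A sketch of the splitting step, using the chunk sizes described below:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the loaded documents into overlapping chunks for embedding.
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)
```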

 

 

The code breaks documents into chunks of 1000 characters with a 200-character overlap. This chunking is employed for embedding and vector storage, enabling more focused retrieval of relevant content during runtime. The recursive splitter ensures chunks maintain contextual integrity by using common separators, like new lines, until the desired chunk size is achieved. 

Step 3: Storage in vector database 

After extracting text chunks, we store and index them for future searches using the RAG application. A common approach involves embedding the content of each split and storing these embeddings in a vector store. 

When searching, we embed the search query and perform a similarity search to identify stored splits with embeddings most similar to the query embedding. Cosine similarity, which measures the angle between embeddings, is a simple similarity measure. 

Using the Chroma vector store and the open-source "HuggingFaceEmbeddings" in LangChain, we can embed and store all document splits in a single command.

Text embedding: 

Text embedding converts textual data into numerical vectors that capture the semantic meaning of the text. This enables efficient identification of similar pieces of text. The conversion is performed by an embedding model, a variant of language model designed specifically for this purpose.

LangChain's Embeddings class facilitates interaction with various text embedding models. While any model can be used, we opted for "HuggingFaceEmbeddings".
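A sketch of the embedding setup:

```python
from langchain.embeddings import HuggingFaceEmbeddings

# Open-source sentence-transformer model used to embed the document chunks.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
```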

 

 

 

This code initializes an instance of the HuggingFaceEmbeddings class, configuring it with the open-source pre-trained model "sentence-transformers/all-MiniLM-L6-v2". This creates the text embedding model used to convert textual data into numerical vectors.

 


 

Vector Stores: 

Vector stores are specialized databases designed to efficiently store and search high-dimensional vectors, such as text embeddings. They enable retrieval of the embedding vectors most similar to a given query vector. LangChain integrates with various vector stores, and we are using the "Chroma" vector store for this task.
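A sketch of building the vector store from the splits and embeddings defined above:

```python
from langchain.vectorstores import Chroma

# Embed every split and store the vectors in a Chroma database.
vectorstore = Chroma.from_documents(documents=splits, embedding=embeddings)
```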

 

 

This code utilizes the Chroma class to create a vector store (vectorstore) from the previously split documents (splits) using the specified embeddings (embeddings). The Chroma vector store facilitates efficient storage and retrieval of document vectors for further processing. 

Step 4: Retrieval of text chunks 

After storing the data, preparing the LLM model, and constructing the pipeline, we need to retrieve the data. Retrievers serve as interfaces that return documents based on a query. 

Retrievers do not store documents; they only retrieve them. Vector stores form the foundation of retrievers. LangChain offers a variety of retriever algorithms; here is the one we implement.
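A sketch of the retriever; the plain similarity search returning four chunks is an assumption that matches the four retrieved documents mentioned later:

```python
# Expose the vector store as a retriever; plain similarity search returning
# the top 4 chunks (an assumption matching LangChain's default).
retriever = vectorstore.as_retriever(search_type="similarity", search_kwargs={"k": 4})
```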

 

 

Step 5: Generation of answer with RAG approach 

Preparing the LLM Model: 

In the context of Retrieval Augmented Generation (RAG), an LLM model plays a crucial role in generating comprehensive and informative responses to user queries. By leveraging its ability to process and understand natural language, the LLM model can effectively combine retrieved documents with the given query to produce insightful and relevant outputs.

 

We start by importing the necessary libraries for handling pre-trained models and tokenization. The specific model "meta-llama/Llama-2-7b-chat-hf" is chosen for its question-answering capabilities.
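A sketch of loading the model and tokenizer; the half-precision and device-placement settings are assumptions chosen to fit a Colab T4 GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a Colab T4
    device_map="auto",          # requires the accelerate package installed earlier
)
```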

 

 

 

Next, we define a transformers pipeline, which encapsulates the pre-trained Hugging Face model and its associated configuration. It specifies the task as "text-generation" and sets various parameters to optimize the pipeline's performance.
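A sketch of the pipeline definition; the generation parameters shown are assumptions:

```python
from transformers import pipeline

# Text-generation pipeline around the Llama 2 model; generation settings are assumptions.
text_generation_pipeline = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    repetition_penalty=1.1,
    return_full_text=True,
)
```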

 

 

We then create a LangChain pipeline (HuggingFacePipeline) that wraps the transformers pipeline. The model_kwargs parameter adjusts the model's "temperature" to control its creativity and randomness.
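A sketch of the wrapper; the temperature value is an assumption:

```python
from langchain.llms import HuggingFacePipeline

# Wrap the transformers pipeline so it can be used as a LangChain LLM.
llm = HuggingFacePipeline(
    pipeline=text_generation_pipeline,
    model_kwargs={"temperature": 0.7},  # exact value is an assumption
)
```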

Retrieval QA Chain: 

To combine question answering with a retrieval step, we employ the RetrievalQA chain, which pairs a language model with a vector-store retriever. We set the chain type to "stuff", meaning all retrieved chunks are placed into a single prompt when interacting with the language model.

 

 

 

 

 

We initialize a RetrievalQA instance by specifying a chain type ("stuff"), the HuggingFacePipeline (llm), and the retriever initialized previously from the vector store. The return_source_documents parameter is set to True to include source documents in the output, enhancing contextual information retrieval.
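A sketch of that initialization; the variable name qa_chain is an assumption:

```python
from langchain.chains import RetrievalQA

# Combine the Llama 2 pipeline with the Chroma retriever into a question-answering chain.
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",            # all retrieved chunks are stuffed into one prompt
    retriever=retriever,
    return_source_documents=True,  # include the retrieved chunks in the output
)
```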
 

Finally, we call this QA chain with the specific question we want to ask.
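A sketch of the call; the question shown is a placeholder about the loaded article, not necessarily the one used in the original run:

```python
# Placeholder question about the loaded article; replace it with your own.
question = "What is Task Decomposition?"
result = qa_chain({"query": question})

print(result["result"])
```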

 

 

The result will be: 

 

 

We can print source documents to see which document chunks the model used to generate the answer to this specific query.
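A sketch of inspecting the retrieved chunks:

```python
# Inspect the chunks that were retrieved to ground the answer.
for doc in result["source_documents"]:
    print(doc.page_content)
    print("-" * 80)
```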

 

 

 

 

In this output, only two of the four retrieved document chunks are shown as an example of what was used to answer this specific question.

Conclusion 

In conclusion, by embracing the Retrieval-Augmented Generation (RAG) approach, we have elevated our Language Model (LLM) experience to new heights.

Through a deep dive into the conceptual foundations of RAG and a practical implementation using the LangChain orchestration framework, coupled with the power of an open-source model from Hugging Face, we have enhanced the question-answering capabilities of LLMs.

This journey exemplifies the seamless integration of innovative technologies to optimize LLM capabilities, paving the way for a more efficient and powerful language processing experience. Cheers to the exciting possibilities that arise from combining innovative approaches with open-source resources! 

December 6, 2023

 

The evolution of the GPT Series culminates in ChatGPT, delivering more intuitive and contextually aware conversations than ever before.

 


What are chatbots?  

AI chatbots are smart computer programs that can process and understand users' requests and queries in voice and text. They mimic human conversation and generate responses accordingly. AI chatbots are widely used today, from personal assistance to customer service and much more. They assist humans in every field, making work more productive and creative.

Deep learning and NLP

Deep Learning and Natural Language Processing (NLP) are like best friends in the world of computers and language. Deep Learning is when computers use artificial "brains," called neural networks, to learn from vast amounts of information.

NLP is all about teaching computers to understand and talk like humans. When Deep Learning and NLP work together, computers can understand what we say, translate languages, power chatbots, and even write sentences that sound like a person. This teamwork helps computers and people communicate more effectively.

Chatbots and ChatGPT

How are chatbots built? 

Building Chatbots involves creating AI systems that employ deep learning techniques and natural language processing to simulate natural conversational behavior.

The machine learning models are trained on huge datasets to figure out and process the context and semantics of human language and produce relevant results accordingly. Through deep learning and NLP, the machine can recognize the patterns from text and generate useful responses. 

Transformers in chatbots 

Transformers are advanced models used in AI for understanding and generating language. This efficient neural network architecture was introduced by Google researchers in 2017. Transformers consist of two parts: the encoder, which understands input text, and the decoder, which generates responses.

The encoder pays attention to words’ relationships, while the decoder uses this information to produce a coherent text. These models greatly enhance chatbots by allowing them to understand user messages (encoding) and create fitting replies (decoding).

With Transformers, chatbots engage in more contextually relevant and natural conversations, improving user interactions. This is achieved by efficiently tracking conversation history and generating meaningful responses, making chatbots more effective and lifelike. 

 


GPT Series – Generative Pre-trained Transformer

GPT is a large language model (LLM) that uses the Transformer architecture. It was developed by OpenAI in 2018. GPT is pre-trained on a huge text dataset. This means it learns patterns, grammar, and even some reasoning abilities from this data. Once trained, it can then be "fine-tuned" on specific tasks, like generating text, answering questions, or translating languages.

This process of fine-tuning comes under the concept of transfer learning. The “generative” part means it can create new content, like writing paragraphs or stories, based on the patterns it learned during training. GPT has become widely used because of its ability to generate coherent and contextually relevant text, making it a valuable tool in a variety of applications such as content creation, chatbots, and more.  

The advent of ChatGPT: 

ChatGPT is a chatbot designed by OpenAI. It uses the "Generative Pre-trained Transformer" (GPT) series to chat with users much as people talk to each other. This chatbot quickly went viral because of its unique capability to grasp the complexities of natural language and interaction and respond accordingly.

ChatGPT is a powerful chatbot capable of producing relevant answers to questions, summarizing text, drafting creative essays and stories, providing code solutions, giving personal recommendations, and much more. It attracted millions of users in a remarkably short period.

ChatGPT’s story is a journey of growth, starting with earlier versions in the GPT series. In this blog, we will explore how each version from the series of GPT has added something special to the way computers understand and use language and how GPT-3 serves as the foundation for ChatGPT’s innovative conversational abilities. 

ChatGPT series evolution

GPT-1: 

GPT-1 was the first model of the GPT series developed by OpenAI. This innovative model demonstrated that coherent text can be generated using the Transformer design. GPT-1 introduced the concept of generative pre-training, where the model is first trained on a broad range of text data to develop a comprehensive understanding of language. It had 117 million parameters and produced much more coherent results than other models of its time. It was the foundation of the GPT series and paved the way for advancement and revolution in the domain of text generation.

GPT-2: 

GPT-2 was much bigger than GPT-1, with 1.5 billion parameters. This gives the model a stronger grasp of the context and semantics of real-world language. It introduced the concept of "task conditioning," which enables GPT-2 to learn multiple tasks within a single unsupervised model by conditioning its outputs on both the input and the task information.

GPT-2 highlighted zero-shot learning by carrying out tasks without prior examples, solely guided by task instructions. Moreover, it achieved remarkable zero-shot task transfer, demonstrating its capacity to seamlessly comprehend and execute tasks with minimal or no specific examples, highlighting its adaptability and versatile problem-solving capabilities. 

As the GPT models grew more advanced, they gained new abilities, such as writing long creative essays and answering complex questions rather than just predicting the next word. The models became more human-like and attracted many users for their day-to-day tasks.

GPT-3: 

GPT-3 was trained on an even larger dataset and has 175 billion parameters. It gives a more natural-looking response, making the model conversational. It was better at common-sense reasoning than the earlier models. GPT-3 can not only generate human-like text but is also capable of generating programming code snippets, providing more innovative solutions.

GPT-3’s enhanced capacity, compared to GPT-2, extends its zero-shot and few-shot learning capabilities. It can give relevant and accurate solutions to uncommon problems, requiring training on minimal examples or even performing without prior training.  

InstructGPT:

An improved version of GPT-3, also known as InstructGPT (GPT-3.5), produces results that better align with human expectations. It uses a human-feedback model to make the neural network respond in line with real-world expectations.

It begins by creating a supervised policy via demonstrations on input prompts. Comparison data is then collected to build a reward model based on human-preferred model outputs. This reward model guides the fine-tuning of the policy using Proximal Policy Optimization.

Iteratively, the process refines the policy by continuously collecting comparison data, training an updated reward model, and enhancing the policy’s performance. This iterative approach ensures that the model progressively adapts to preferences and optimizes its outputs to align with human expectations. The figure below gives a clearer depiction of the process discussed. 

From the research paper "Training language models to follow instructions with human feedback"

GPT-3.5 stands as the default model for ChatGPT, while the GPT-3.5-Turbo model empowers users to construct their own custom chatbots with abilities similar to ChatGPT. It is worth noting that large language models like ChatGPT occasionally generate responses that are inaccurate, impolite, or unhelpful.

This is often due to their training in predicting subsequent words in sentences without always grasping the context. To remedy this, InstructGPT was devised to steer model responses toward better alignment with user preferences.

 

Read more –> FraudGPT: Evolution of ChatGPT into an AI weapon for cybercriminals in 2023

 

GPT-4 and beyond: 

After GPT-3.5 comes GPT-4. According to some reports, GPT-4 is estimated to have 1.7 trillion parameters. This enormous number of parameters makes the model more capable and enables it to process up to 25,000 words at once.

This means that GPT-4 can understand texts that are more complex and realistic. The model has multimodal capabilities which means it can process both images and text. It can not only interpret the images and label them but can also understand the context of images and give relevant suggestions and conclusions. The GPT-4 model is available in ChatGPT Plus, a premium version of ChatGPT. 

Given the developments OpenAI has made so far, we can expect further improvements to these models in the coming years, enabling them to handle voice commands, modify web apps according to user instructions, and assist people more effectively than ever before.

Watch: ChatGPT Unleashed: Live Demo and Best Practices for NLP Applications 

 

This live presentation from Data Science Dojo gives more understanding of ChatGPT and its use cases. It demonstrates smart prompting techniques for ChatGPT to get the desired responses and ChatGPT’s ability to assist with tasks like data labeling and generating data for NLP models and applications. Additionally, the demo acknowledges the limitations of ChatGPT and explores potential strategies to overcome them.  

Wrapping up: 

ChatGPT, developed by OpenAI, is a powerful chatbot built on the GPT series of models, which is improving quickly. From generating one-liner responses to producing multi-paragraph answers and summarizing long, detailed reports, the model can also interpret visual inputs and generate responses that align with human expectations.

With more advancement, the GPT series is getting a better grip on the structure and semantics of human language. It relies not only on its training data but can also use real-time information provided by the user to generate results. In the future, we expect more breakthrough advancements from OpenAI in this domain, empowering the chatbot to assist us more effectively than ever before.

 


September 13, 2023