

Huda Mahmood | April 9

While language models in generative AI focus on textual data, vision language models (VLMs) bridge the gap between textual and visual data. Before we explore Moondream 2, let’s understand VLMs better.

Understanding vision language models

VLMs combine computer vision (CV) and natural language processing (NLP), enabling them to understand and connect visual information with textual data.

Some key capabilities of VLMs include image captioning, visual question answering, and image retrieval. VLMs learn these tasks by training on datasets that pair images with their corresponding textual descriptions. Several large vision language models are available on the market, including GPT-4V, LLaVA, and BLIP-2.




However, these large vision models require heavy computational resources to produce effective results, and even then suffer from slow inference speeds. Small VLMs have emerged as a solution, offering a balance between efficiency and performance.

In this blog, we will look deeper into Moondream 2, a small vision language model.

What is Moondream 2?

Moondream 2 is an open-source vision language model. With only 1.86 billion parameters, it is a tiny VLM with weights from SigLIP and Phi-1.5. It is designed to operate seamlessly on devices with limited computational resources.


Weights for Moondream 2


Let’s take a closer look at the defined weights for Moondream 2.

SigLIP (Sigmoid Loss for Language Image Pre-Training)

SigLIP is similar to the CLIP (Contrastive Language-Image Pre-training) model: both learn by looking at pictures and their captions. However, SigLIP is a newer and simpler method, making it faster and more effective, especially when training with lots of data.

SigLIP replaces the softmax loss in CLIP with a simple pairwise sigmoid loss. The change improves performance because the sigmoid loss operates only on individual image-text pairs. Without the need for a global view of all pairwise similarities within a batch, the process becomes faster and more efficient.
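The difference between the two losses can be sketched with a toy example. The snippet below is illustrative numpy, not the actual SigLIP or CLIP implementation: the CLIP-style loss needs a softmax over the whole batch, while the sigmoid loss treats each image-text pair as an independent binary decision.

```python
import numpy as np

def softmax_clip_loss(logits):
    # CLIP-style contrastive loss: every image must identify its matching
    # text among all texts in the batch (and vice versa), which requires
    # batch-wide softmax normalization in both directions.
    n = logits.shape[0]
    log_p_img2txt = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_txt2img = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -(np.trace(log_p_img2txt) + np.trace(log_p_txt2img)) / (2 * n)

def sigmoid_siglip_loss(logits):
    # SigLIP-style loss: each image-text pair is an independent binary
    # classification (+1 for matched pairs on the diagonal, -1 otherwise),
    # so no global normalization over the batch is needed.
    n = logits.shape[0]
    labels = 2 * np.eye(n) - 1
    return np.mean(np.logaddexp(0, -labels * logits))  # mean softplus

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4)) + 3 * np.eye(4)  # matched pairs score higher
print(softmax_clip_loss(logits), sigmoid_siglip_loss(logits))
```

Because the sigmoid loss never sums over the batch, each pair's term can be computed (and sharded across devices) independently, which is where the efficiency gain comes from.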





Phi-1.5

Phi-1.5 is a small language model with 1.3 billion parameters and a transformer-based architecture. Developed by Microsoft, the model was trained on 30 billion tokens, including data from its predecessor, Phi-1, and 20 billion tokens synthetically generated by another AI model, GPT-3.5.

With its unique training method, Phi-1.5 has mastered different skills required of a language model, like understanding common sense, following complex instructions, comprehending language, and retaining information. Moreover, it even outperforms its larger counterparts.

A closer look into Moondream 2

It is the latest and updated version of the model previously known as Moondream 1. Below is a quick comparison of the two models, highlighting the basic parameters of improvement.


Comparing Moondream 1 and 2


Like typical vision language models, Moondream 2 interprets the visual data to provide textual responses based on the information. Some demos included in its repository highlight its capabilities. Let’s look at these demos below:

Demo 1

Moondream 2 was provided with the following image:


Moondream demo 1


Based on this image, the VLM was asked some relevant questions.

Question: What is the girl doing?

Answer: The girl is sitting at a table and eating a large hamburger.

Question: What color is the girl’s hair?

Answer: The girl’s hair is white.

The interaction with Moondream 2 highlights that the VLM is capable of understanding the basic image properties like the girl’s hair color. Moreover, it can also interpret the actions of the actors in an image from their position in a specific instance.

Demo 2

The VLM was provided with another image.


Moondream demo 2


The demo focused on asking the following:

Question: What is this?

Answer: The image features a computer server rack, which is a large metal structure designed to hold and organize multiple computer components, such as motherboards, cooling systems, and other peripherals. The rack is filled with various computer parts, including multiple computer chips, wires, and other electronic components. The rack is placed on a carpeted floor, and there is a couch in the background, suggesting that the setup is likely in a living or working space.

Question: What is behind the stand?

Answer: There is a brick wall behind the stand.

The above demo highlights the ability of Moondream 2 to explore and interpret complex visual outputs in great detail. The VLM provides in-depth textual information from the visual data. It also demonstrates spatial understanding of the image components.

Hence, Moondream 2 is a promising addition to the world of vision language models with its refined capabilities to interpret visual data and provide in-depth textual output. Since we understand the strengths of the VLM, it is time to explore its drawbacks or weaknesses.




Limitations of Moondream 2

Before you explore the world of Moondream 2, you must understand its limitations when dealing with visual and textual data.

Generating inaccurate statements

It is important to understand that Moondream 2 may generate inaccurate statements, especially for complex topics or situations requiring real-world understanding. The model might also struggle to grasp subtle details or hidden meanings within instructions.

Presenting unconscious bias

Like any other VLM, Moondream 2 is a product of the data it is trained on. Thus, it can reflect the biases of the world, perpetuating stereotypes or discriminatory views.

As a user, it’s crucial to be aware of this potential bias and to approach the model’s outputs with a critical eye. Don’t blindly accept everything it generates; use your own judgment and fact-check when necessary.

Mirroring prompts

VLMs will reflect the prompts provided to them. Hence, if a user prompts the model to generate offensive or inappropriate content, the model may comply. It’s important to be mindful of the prompts and avoid asking the model to create anything harmful or hurtful.




In conclusion…

To sum it up, Moondream 2 is a promising step in the development of vision language models. Powered by its key components and compact size, the model is efficient and fast. However, like any language model we use nowadays, Moondream 2 also requires its users to be responsible for ensuring the creation of useful content.

If you are ready to experiment with Moondream 2 now, install the necessary files and start right away!
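A minimal way to try the model is through the Hugging Face Hub. The sketch below is one possible setup: the checkpoint name (`vikhyatk/moondream2`) and the `encode_image`/`answer_question` helpers reflect the model's repository at the time of writing, so verify them against the current model card before relying on them.

```python
def ask_moondream(image_path: str, question: str,
                  model_id: str = "vikhyatk/moondream2") -> str:
    """Load Moondream 2 from the Hugging Face Hub and answer a question
    about a local image. Dependencies (pillow, transformers) are imported
    lazily so the sketch can be read without them installed."""
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code is required because the repo ships custom model code
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    image = Image.open(image_path)
    encoded = model.encode_image(image)  # SigLIP-based vision encoder
    return model.answer_question(encoded, question, tokenizer)

# Example usage (mirrors demo 1 above; the file name is hypothetical):
# ask_moondream("girl_with_burger.jpg", "What is the girl doing?")
```

At 1.86 billion parameters, the download and inference should fit comfortably on a consumer GPU or even CPU, which is the model's main selling point.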

Huda Mahmood | April 8

The modern era of generative AI is now talking about machine unlearning. It is time to recognize that unlearning information is as important for machines as it is for humans to progress in this rapidly advancing world. This blog explores the impact of machine unlearning in improving the results of generative AI.

However, before we dig deeper into the details, let’s understand what is machine unlearning and its benefits.

What is machine unlearning?

As the name indicates, it is the opposite of machine learning. Hence, it refers to the process of getting a trained model to forget information and specific knowledge it has learned during the training phase.

During machine unlearning, an ML model discards previously learned information and patterns from its knowledge base. The concept is fairly new and still under research in an attempt to improve the overall ML training process.




A comment on the relevant research

A research paper published by the University of Texas presents machine unlearning as a paradigm to improve image-to-image generative models. It addresses the gap with a unifying framework focused on implementing machine unlearning in image-specific generative models.

The proposed approach uses encoders in its architecture to enable the model to only unlearn specific information without the need to manipulate the entire model. The research also claims the framework to be generalizable in its application, where the same infrastructure can also be implemented in an encoder-decoder architecture.


A glance at the proposed encoder-only machine unlearning architecture – Source: arXiv


The research also reports that the proposed framework shows negligible performance degradation and produced effective results in the authors’ experiments. This underscores the potential of the concept in refining machine-learning processes and generative AI applications.

Benefits of machine unlearning in generative AI

Machine unlearning is a promising avenue for improving generative AI, empowering it to produce better results when creating new things like text, images, or music.

Below are some of the key advantages associated with the introduction of the unlearning concept in generative AI.

Ensuring privacy

With a constantly growing digital database, the security and privacy of sensitive information have become a constant point of concern for individuals and organizations. This issue of data privacy also extends to the process of training ML models where the training data might contain some crucial or private data.

In this dilemma, unlearning is a concept that enables an ML model to forget any sensitive information in its database without the need to remove the complete set of knowledge it trained on. Hence, it ensures that the concerns of data privacy are addressed without impacting the integrity of the ML model.


Explore the power of machine learning in your business


Enhanced accuracy

By extension, unlearning also allows the training data for machine-learning models to be updated to remove sources of error. It ensures that a more accurate dataset is available to the model, improving the overall accuracy of the results.

For instance, if a generative AI model produced images based on any inaccurate information it had learned during the training phase, unlearning can remove that data from its database. Removing that association will ensure that the model outputs are refined and more accurate.

Keeping up-to-date

Another crucial aspect of modern-day information is that it is constantly evolving: knowledge is updated and new information comes to light. While this highlights the constant development of data, it also renders older information outdated.

Success therefore depends on keeping up with the latest information available in the market. With machine unlearning, these updates can be incorporated into the training data without retraining the existing models from scratch.


Benefits of machine unlearning


Improved control

Unlearning also allows better control over the training data. It is particularly useful in artistic applications of generative AI. Artists can use the concept to ensure that the AI application unlearns certain styles or influences.

As a result, it offers greater freedom of exploration of artistic expression to create more personalized outputs, promising increased innovation and creativity in the results of generative AI applications.

Controlling misinformation

Generative AI is a powerful tool to spread misinformation through the creation of realistic deepfakes and synthetic data. Machine unlearning provides a potential countermeasure that can be used to identify and remove data linked to known misinformation tactics from generative AI models.

This would make it significantly harder for them to be used to create deceptive content, providing increased control over spreading misinformation on digital channels. It is particularly useful in mitigating biases and stereotypical information in datasets.

Hence, the concept of unlearning opens new horizons of exploration in generative AI, empowering players in the world of AI and technology to reap its benefits.




Who can benefit from machine unlearning?

A broad categorization of entities and individuals who can benefit from machine unlearning include:

Privacy advocates

In today’s digital world, individual concern for privacy is constantly on the rise. Hence, people are constantly advocating their right to keep personal or crucial information private. These advocates for privacy and data security can benefit from unlearning as it addresses their concerns about data privacy.

Tech companies

Digital progress and development are marked by several regulations like GDPR and CCPA. These standards are set in place to ensure data security and companies must abide by these laws to avoid legal repercussions. Unlearning assists tech companies in abiding by these laws, enhancing their credibility among users as well.

Financial institutions

Financial enterprises and institutions deal with huge amounts of personal information and sensitive data of their users. Unlearning empowers them to remove specific data points from their database without impacting the accuracy and model performance.

AI researchers

AI researchers frequently face the impacts of their applications creating biased or inaccurate results. With unlearning, they can target the data points that introduce bias and misinformation into the model results, enabling them to create more equitable AI systems.


Policymakers

A significant impact of unlearning can come from the work of policymakers. Since the concept opens up new ways to handle information and training datasets, policymakers can develop new regulations to mitigate bias and address privacy concerns, leading the way for responsible AI development.

Thus, machine unlearning can produce positive changes in the world of generative AI, aiding different players to ensure the development of more responsible and equitable AI systems.




Future of machine unlearning

To sum it up, machine unlearning is a new concept in the world of generative AI with promising potential for advancement. Unlearning is a powerful tool for developing AI applications and systems, but it still lacks finesse; researchers are developing ways to target specific information for removal.

For instance, it can help an improved text-to-image generator forget a biased stereotype, leading to fairer and more accurate results. Improved techniques allow the isolation and removal of unwanted data points, giving finer control over what the AI forgets.
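One targeted technique from the research literature is sharded training, often called SISA (Sharded, Isolated, Sliced, and Aggregated). The toy numpy sketch below illustrates the idea rather than any published implementation: each shard trains its own sub-model, predictions aggregate across shards, and forgetting a record only requires retraining the one shard that contained it.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(90, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy binary labels

N_SHARDS = 3
shards = list(np.array_split(np.arange(90), N_SHARDS))

def fit_shard(idx):
    # a tiny least-squares "classifier" trained only on one shard's data
    return np.linalg.lstsq(X[idx], 2 * y[idx] - 1, rcond=None)[0]

models = [fit_shard(idx) for idx in shards]

def predict(x):
    # aggregate the shard models by averaging their scores
    return float(np.mean([x @ w for w in models]) > 0)

# Forget record 5: only the shard that owns it is retrained; the other
# shards (and the cost of training them) are untouched.
record = 5
owner = next(i for i, idx in enumerate(shards) if record in idx)
shards[owner] = shards[owner][shards[owner] != record]
models[owner] = fit_shard(shards[owner])
```

The trade-off is a small accuracy cost from splitting the data, in exchange for unlearning that costs one shard's training time instead of the whole model's.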



Overall, unlearning holds immense potential for shaping the future of generative AI. With more targeted techniques and a deeper understanding of these models, unlearning can ensure responsible use of generative AI, promote artistic freedom, and safeguard against the misuse of this powerful technology.

Fiza Fatima | April 5

AGI (Artificial General Intelligence) refers to a higher level of AI that exhibits intelligence and capabilities on par with or surpassing human intelligence.

AGI systems can perform a wide range of tasks across different domains, including reasoning, planning, learning from experience, and understanding natural language. Unlike narrow AI systems that are designed for specific tasks, AGI systems possess general intelligence and can adapt to new and unfamiliar situations.

While there have been no definitive examples of artificial general intelligence (AGI) to date, a recent paper by Microsoft Research suggests that we may be closer than we think. The new multimodal model released by OpenAI seems to have what they call ‘sparks of AGI’.




This means that we cannot completely classify it as AGI. However, it has many of the capabilities an AGI would have.

Are you confused? Let’s break down things for you. Here are the questions we’ll be answering:

  • What qualities of AGI does GPT-4 possess?
  • Why does GPT-4 exhibit higher general intelligence than previous AI models?

 Let’s answer these questions step-by-step. Buckle up!

What qualities of artificial general intelligence (AGI) does GPT-4 possess?




GPT-4 is considered an early spark of AGI due to several important reasons:

1. Performance on novel tasks

GPT-4 can solve novel and challenging tasks that span various domains, often achieving performance at or beyond the human level. Its ability to tackle unfamiliar tasks without specialized training or prompting is an important characteristic of AGI.

Here’s an example of GPT-4 solving a novel task:


GPT-4 solving a novel task – Source: arXiv


The solution appears accurate and solves the problem it was given.

2. General intelligence

GPT-4 exhibits more general intelligence than previous AI models. It can solve tasks in various domains without needing special prompting. Its performance is close to a human level and often surpasses prior models. This ability to perform well across a wide range of tasks demonstrates a significant step towards AGI.

Broad capabilities

GPT-4 demonstrates remarkable capabilities in diverse domains, including mathematics, coding, vision, medicine, law, psychology, and more. It showcases a breadth and depth of abilities that are characteristic of advanced intelligence.

Here are some examples of GPT-4 being capable of performing diverse tasks:

  • Data Visualization: In this example, GPT-4 was asked to extract data from LaTeX code and produce a plot in Python based on a conversation with the user. The model extracted the data correctly and responded appropriately to all user requests, manipulating the data into the right format and adapting the visualization.


Data visualization with GPT-4 – Source: arXiv


  • Game development: Given a high-level description of a 3D game, GPT-4 successfully creates a functional game in HTML and JavaScript without any prior training or exposure to similar tasks.


Game development with GPT-4 – Source: arXiv


3. Language mastery

GPT-4’s mastery of language is a distinguishing feature. It can understand and generate human-like text, showcasing fluency, coherence, and creativity. Its language capabilities extend beyond next-word prediction, setting it apart as a more advanced language model.


Language mastery of GPT-4 – Source: arXiv


4. Cognitive traits

GPT-4 exhibits traits associated with intelligence, such as abstraction, comprehension, and understanding of human motives and emotions. It can reason, plan, and learn from experience. These cognitive abilities align with the goals of AGI, highlighting GPT-4’s progress towards this goal.

Here’s an example of GPT-4 trying to solve a realistic scenario of marital struggle, requiring a lot of nuance to navigate.


An example of GPT-4 exhibiting cognitive traits – Source: arXiv


Why does GPT-4 exhibit higher general intelligence than previous AI models?

Some of the features of GPT-4 that contribute to its more general intelligence and task-solving capabilities include:


Reasons for the higher intelligence of GPT-4


Multimodal information

GPT-4 can manipulate and understand multi-modal information. This is achieved through techniques such as leveraging vector graphics, 3D scenes, and music data in conjunction with natural language prompts. GPT-4 can generate code that compiles into detailed and identifiable images, demonstrating its understanding of visual concepts.

Interdisciplinary composition

The interdisciplinary aspect of GPT-4’s composition refers to its ability to integrate knowledge and insights from different domains. GPT-4 can connect and leverage information from various fields such as mathematics, coding, vision, medicine, law, psychology, and more. This interdisciplinary integration enhances GPT-4’s general intelligence and widens its range of applications.

Extensive training

GPT-4 has been trained on a large corpus of web-text data, allowing it to learn a wide range of knowledge from diverse domains. This extensive training enables GPT-4 to exhibit general intelligence and solve tasks in various domains.




Contextual understanding

GPT-4 can understand the context of a given input, allowing it to generate more coherent and contextually relevant responses. This contextual understanding enhances its performance in solving tasks across different domains.

Transfer learning

GPT-4 leverages transfer learning, where it applies knowledge learned from one task to another. This enables GPT-4 to adapt its knowledge and skills to different domains and solve tasks without the need for special prompting or explicit instructions.




Language processing capabilities

GPT-4’s advanced language processing capabilities contribute to its general intelligence. It can comprehend and generate human-like natural language, allowing for more sophisticated communication and problem-solving.

Reasoning and inference

GPT-4 demonstrates the ability to reason and make inferences based on the information provided. This reasoning ability enables GPT-4 to solve complex problems and tasks that require logical thinking and deduction.

Learning from experience

GPT-4 can learn from experience and refine its performance over time. This learning capability allows GPT-4 to continuously improve its task-solving abilities and adapt to new challenges.

These features collectively contribute to GPT-4’s more general intelligence and its ability to solve tasks in various domains without the need for specialized prompting.



Wrapping it up

It is crucial to understand and explore GPT-4’s limitations, as well as the challenges ahead in advancing towards more comprehensive versions of AGI. Nonetheless, GPT-4’s development holds significant implications for the future of AI research and the societal impact of AGI.

Huda Mahmood | April 3

In the rapidly growing digital world, AI advancement is driving the transformation toward improved automation, better personalization, and smarter devices. In this evolving AI landscape, every country is striving to make the next big breakthrough.

In this blog, we will explore the global progress of artificial intelligence, highlighting the leading countries of AI advancement in 2024.

Top 9 countries leading AI development in 2024


Leaders in AI advancement for 2024


Let’s look at the leading 9 countries that are a hub for AI advancement in 2024, exploring their contribution and efforts to excel in the digital world.

The United States of America

Providing a home to the leading tech giants, including OpenAI, Google, and Meta, the United States has been leading the global AI race. The contribution of these companies in the form of GPT-4, Llama 2, Bard, and other AI-powered tools, has led to transformational changes in the world of generative AI.

The US continues to hold its leading position in AI advancement in 2024 with its high concentration of top-tier AI researchers, fueled by the tech giants operating from Silicon Valley. Moreover, government support and initiatives foster collaboration, promising further progress of AI in the future.

The Biden administration’s recent focus on ethical considerations for AI is another proactive approach by the US toward suitable regulation of AI advancement. This focus on responsible AI development can be seen as a positive step for the future.





China

The next leading player in line is China, powered by companies like Tencent, Huawei, and Baidu. New releases, including Tencent’s Hunyuan large language model and Huawei’s Pangu, are guiding the country’s AI advancements.

Strategic focus on specific research areas in AI, government funding, and a large population providing a massive database are some of the favorable features that promote the technological development of China in 2024.

Moreover, China is known for rapid commercialization, bringing AI products to market quickly. A subsequent benefit is the quick collection of real-world data and user feedback, ensuring further refinement of AI technologies and positioning China to make significant strides in the field of AI in 2024.



The United Kingdom

The UK remains a significant contributor to the global AI race, boasting different avenues for AI advancement, including DeepMind – an AI development lab. Moreover, it hosts world-class universities like Oxford, Cambridge, and Imperial College London which are at the forefront of AI research.

The government also promotes AI advancement through investment and incentives, fostering a startup culture in the UK. It has also led to the development of AI companies like Darktrace and BenevolentAI supported by an ecosystem that provides access to funding, talent, and research infrastructure.

Thus, the government’s commitment and focus on responsible AI along with its strong research tradition, promises a growing future for AI advancement.


Canada

With top AI-powered companies like Cohere, Scale AI, and Coveo operating from the country, Canada has emerged as a leading player in the world of AI advancement. The government’s focus on initiatives like the Pan-Canadian Artificial Intelligence Strategy has also boosted AI development in the country.

Moreover, the development of research hubs and top AI talent in institutes like the Montreal Institute for Learning Algorithms (MILA) and the Alberta Machine Intelligence Institute (AMII) promotes an environment of development and innovation. It has also led to collaborations between academia and industry to accelerate AI advancement.

Canada is being strategic about its AI development, focusing on sectors where it has existing strengths, including healthcare, natural resource management, and sustainable development. Thus, Canada’s unique combination of strong research capabilities, ethical focus, and collaborative environment positions it as a prominent player in the global AI race.


France

While not at the top like the US or China, France is definitely leading AI research in the European Union region. Its strong academic base has led to the development of research institutes like Inria and the 3IA Institutes, prioritizing long-term advancements in the field of AI.

The French government also actively supports research in AI, promoting the growth of innovative AI startups like Criteo (advertising) and Owkin (healthcare). Hence, the country plays a leading role in focusing on fundamental research alongside practical applications, giving France a significant advantage in the long run.


India

India is quietly emerging as a significant player in AI research and technology as the Indian government pours resources into initiatives like ‘India AI’, fostering a skilled workforce through education programs. This is fueling a vibrant startup landscape where homegrown companies like SigTuple are developing innovative AI solutions.

What truly sets India apart is its focus on social impact, using AI to tackle challenges like healthcare access in rural areas and to improve agricultural productivity. India also recognizes the importance of ethical AI development, addressing potential biases to ensure the responsible use of this powerful technology.

Hence, the focus on talent, social good, and responsible innovation makes India a promising contributor to the world of AI advancement in 2024.



Japan

With an aging population and strict immigration laws, Japanese companies have become champions of automation. This has resulted in the country developing solutions with real-world AI implementation, making it a leading contributor to the field.

While they are heavily invested in AI that can streamline processes and boost efficiency, their approach goes beyond just getting things done. Japan is also focused on collaboration between research institutions, universities, and businesses, prioritizing safety, with regulations and institutes dedicated to ensuring trustworthy AI.

Moreover, the country is a robotics powerhouse, integrating AI to create next-gen robots that work seamlessly alongside humans. So, while Japan might not be the first with every breakthrough, they are surely leading the way in making AI practical, safe, and collaborative.


Germany

Germany is at the forefront of a new industrial revolution in 2024 with Industry 4.0. Tech giants like Siemens and Bosch are using AI to supercharge factories with intelligent robots, optimized production lines, and smart logistics systems.

The government also promotes AI advancement through funding for collaborations, especially between academia and industry. The focus on AI development has also led to the initiation of startups like Volocopter, Aleph Alpha, DeepL, and Parloa.

However, the development is also focused on the ethical aspects of AI, addressing potential biases in the technology. Thus, Germany’s focus on practical applications, responsible development, and Industry 4.0 makes it a true leader in this exciting new era.





Singapore

Singapore has made it onto the global map of AI advancement with its strategic approach to research in the field. The government welcomes international researchers to contribute to its AI development, which has resulted in big names like Google setting up shop there, promoting open collaboration using cutting-edge open-source AI tools.

Some of its notable startups include Biofourmis, Near, Active.Ai, and Osome. Moreover, Singapore leverages AI for applications beyond the tech race. Their ‘Smart Nation’ uses AI for efficient urban planning and improved public services.

In addition, with its attention to social challenges and the ethical use of AI, Singapore has a versatile approach to AI advancement. It makes the country a promising contender to become a leader in AI development in the years to come.



The future of AI advancement

The versatility of AI tools promises a role for the technology across all kinds of fields. From personalizing education to aiding scientific discoveries, we can expect AI to play a crucial role everywhere. Moreover, the leading nations’ focus on the ethical impacts of AI ensures an increased aim toward responsible development.

Hence, it is clear that the rise of AI is inevitable. The worldwide focus on AI advancement creates an environment that promotes international collaboration and democratization of AI tools. Thus, leading to greater innovation and better accessibility for all.

Dua Mahboob | March 27

If I were to ask you, can generative AI in education outperform students in competitive assessments like those at Harvard or Stanford, what would your answer be? Maybe? Let me tell you, the answer is yes.

That’s the exciting world of generative AI, shaking things up everywhere across the globe, be it logical assessments, medical exams, or a thought-provoking essay at the Ivy Leagues.   

Read: Chatbot vs Medical Student Performance on Clinical Reasoning Examinations 

Now, before you imagine robots taking over classrooms, hold on! Generative AI isn’t here to replace humans, it’s more of a super-powered sidekick for education.

From unequal access to education to stressed-out teachers and confused students, the education landscape faces a lot of challenges. Generative AI isn’t here to steal anyone’s job, but maybe, it can help us fix the problems, ushering in a new era of learning and creativity.

Should ChatGPT be banned in schools? 

Role of AI in Education

Here’s how generative AI is reshaping the education landscape: 

Personalized learning

Traditionally, education has relied on a standardized approach. This “one-size-fits-all” method often leaves students behind or bored, failing to cater to their individual learning styles and paces. Generative AI disrupts this model by tailoring the education experience to individual students’ needs.  

With the help of vast amounts of data, it adapts the learning content, pace, and style to suit the strengths, weaknesses, and preferences of each learner, ensuring that no student is left behind.

This personalized approach accommodates different learning styles, such as visual, auditory, reading-writing, or kinesthetic, ensuring that students receive tailored support based on their unique preferences and abilities, while also providing immediate feedback and support. 
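The pacing idea above can be made concrete with a minimal sketch. The `next_difficulty` function and its thresholds are invented for illustration; real platforms use far richer learner models than a simple streak rule:

```python
def next_difficulty(current, recent_results, step=1, lo=1, hi=10):
    """Toy adaptive-pacing rule: raise difficulty after a streak of
    correct answers, lower it after a streak of mistakes."""
    if len(recent_results) >= 3 and all(recent_results[-3:]):
        return min(hi, current + step)   # learner is cruising: go harder
    if len(recent_results) >= 2 and not any(recent_results[-2:]):
        return max(lo, current - step)   # learner is struggling: ease off
    return current                       # otherwise keep the current pace

# A learner answering correctly three times in a row moves up a level.
print(next_difficulty(4, [True, True, True]))  # 5
```

Even this toy version captures the core loop: observe recent performance, then adjust content difficulty with immediate feedback.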

AI in Action

For instance, Duolingo leverages generative AI to create personalized learning experiences for young children. The app tailors its content based on a child’s progress, offering interactive activities, games, and even AI-generated stories that reinforce learning. In addition, Khan Academy has launched Khanmigo, an AI tutor that assists young students in various subjects on its platform.

Popular Generative AI Applications in the EdTech Landscape – Source: Reach Capital

Accessibility and Inclusivity: Breaking Barriers for All

Traditionally, access to quality education has depended heavily on an individual’s geographical location and socio-economic background. Generative AI disrupts this norm by delivering high-quality educational resources directly to students, regardless of their backgrounds.

Now, people in remote areas with limited access to knowledge bases and diverse learning environments can leverage generative AI for personalized tutoring and learning.

Generative AI further promotes inclusivity and global collaboration by facilitating language learning through the translation of educational content into multiple languages and adapting materials to fit local cultural contexts. It plays a crucial role in developing inclusive and accessible educational content suitable for diverse learner populations. 

Moreover, Generative AI can be personalized to support students with special needs by providing customized learning experiences through assistive functions and communication technologies. This ensures that students with diverse requirements have access to top-quality learning materials.

Curious how generative AI is reshaping the education landscape? Learn what an expert educator has to say!

AI in Action 

For instance, Dreamreader is an AI-powered platform that tailors reading experiences to a student’s reading level and interests. It generates personalized stories with adjustable difficulty, keeping students engaged and motivated to improve their reading skills. 

As technology becomes more accessible, platforms are emerging that enable anyone, even those without coding skills, to create their own ChatGPT-style bots, opening doors of accessibility for all.

Beyond Textbooks: Immersive Learning Adventures

Generative AI has also fostered the emergence of hybrid schools, virtual classrooms, remote learning, and micro-learning, allowing students to access education beyond the confines of a traditional classroom, and opening up a world of limitless learning opportunities. 

Generative AI can transport students to the heart of historical events, conduct virtual experiments in a simulated lab, or even practice a new language with an AI-powered conversation partner. 

AI in Action

Platforms like Historyverse and Hellohistory.AI are prime examples. These AI-powered platforms allow students to step into historical simulations, interacting with virtual characters and environments to gain a deeper understanding of the past.

Explore the 2024 trends of AI in marketing

Support for Educators: AI as a Partner in Progress

Far from replacing teachers, generative AI is here to empower them. With personalized lesson planning and content creation, AI-assisted evaluation and feedback, intelligent tutoring systems, and virtual teaching assistants, AI can free up valuable teacher time.

This allows educators to focus on what they do best: fostering student engagement, providing personalized instruction, and pursuing professional development. In a future where AI can take the lead in disseminating and delivering information, it becomes crucial to reconsider our approach to education.

Rather than sticking to traditional classrooms, picture a flipped classroom model, a hybrid learning setup where students can engage in remote self-learning and use physical classrooms for interactive group activities and collaborative learning. It’s all about blending the best of both worlds for a more effective and engaging educational experience. 

Generative AI is reshaping the roles and dynamics of the education system, encouraging educators to evolve from knowledge deliverers to facilitators. They need to become mentors who guide and encourage student agency, fostering a collaborative environment built on co-agency and collective intelligence.


Large language model bootcamp

AI in Action

Take a look at GradeScope, a product by Turnitin, a real-world example of generative AI empowering teachers. This platform uses AI to automate the time-consuming task of grading written assignments. Teachers upload student work, and GradeScope utilizes AI to analyze handwriting, identify key concepts, and even provide students with initial grading and personalized feedback.

This frees up valuable teacher time, allowing them to focus on more individualized instruction, like one-on-one conferences or in-depth discussions about student writing. This is the power of generative AI as a partner in education – it empowers teachers to do what they do best: inspire, guide, and unlock the potential in every student.

Here’s what every educator must know!

Shift towards Metacognitive Continuous Learning

Generative AI is ushering in a new era of “metacognitive continuous learning”. This approach to assessment focuses on students’ ability to understand, monitor, and regulate their cognitive and metacognitive processes, making it an integral part of the learning process.

In metacognitive continuous learning, students not only acquire knowledge but also reflect on their learning strategies and adapt them as needed. They actively engage in self-regulation to optimize their learning experience and become aware of their thinking processes.  

AI systems help students recognize their strengths and weaknesses, suggest strategies for improvement, and promote a deeper understanding of the subject matter. By leveraging AI-supported feedback, students develop essential skills for lifelong learning.

This shift represents a move away from traditional tests that measure memory recall or specific skills and towards a more student-centered and flexible approach to learning, making students self-directed learners.

It recognizes that learning is not just about acquiring knowledge but also about understanding how we think, continuously improving our learning strategies, and focusing on personal growth.

Read about the game-changing moments in AI during 2023

Critical Skills to Survive and Thrive in an AI-driven World

While generative AI offers a treasure trove of educational content, it’s crucial to remember that information literacy is essential. Students need to develop the ability to critically evaluate AI-generated content, assessing its accuracy and biases, and to leverage AI to augment their own capabilities rather than relying on it blindly.

Learners need to develop a range of key skills to thrive and adapt. These skills include:

Critical Thinking: Learners must develop the ability to analyze information, evaluate its credibility, and make informed decisions. Critical thinking allows individuals to effectively navigate the vast amount of data and AI-generated content available. 

Problem-solving: AI presents new challenges and complexities. Learners need to be able to identify and define problems, think creatively, and develop innovative solutions. Problem-solving skills enable individuals to leverage AI technology to address real-world issues. 

Adaptability: The rapid pace of technological change requires learners to be adaptable. They must embrace change, learn new tools and technologies quickly, and be willing to continuously evolve their knowledge and skills. 

Data and AI Literacy: With AI generating vast amounts of data, learners need to develop the ability to understand, interpret, and analyze data so that they can make data-driven decisions and leverage AI technologies effectively. They must also possess AI literacy skills to navigate AI-driven platforms, understand the ethical implications of AI, and effectively use digital tools for learning and work.  

The Human Edge: Fostering Creativity, Emotional Intelligence, and Intuition: While AI excels at crunching numbers and following patterns, certain qualities remain uniquely human and will continue to be valuable in the age of AI. AI can generate content, but it takes human imagination to truly push boundaries and come up with groundbreaking ideas.

Our ability to empathize, build relationships, and navigate complex social situations will remain crucial for success in various fields. In addition, the ability to tap into our intuition and make gut decisions can be a valuable asset, even in the age of data-driven decision-making.

Can AI truly replace humans? Let’s find out now

Effectively Leveraging Generative AI for Education: The PAIR Framework

To equip students with critical thinking and problem-solving skills in the age of AI, the PAIR framework is a very useful tool. This four-step approach integrates generative AI tools into assignments, encouraging students to actively engage with the technology. 

  1. Problem Formulation:

The journey begins with students defining the problem or challenge they want to tackle. This initial step fosters critical thinking and sets the stage for their AI-powered exploration. 

  2. AI Tool Selection:

Students become discerning consumers of technology by learning to explore, compare, and evaluate different generative AI tools. Understanding available features allows them to choose the most appropriate tool for their specific problem. 

  3. Interaction:

Armed with their chosen AI tool, students put their problem-solving skills to the test. They experiment with various inputs and outputs, observing how the tool influences their approach and the outcome. 

  4. Reflection:

The final step involves critical reflection. Students assess their experience with the generative AI tool, reporting on its strengths, weaknesses, and overall impact on their learning process. This reflection solidifies their understanding and helps them become more self-aware learners. 

By incorporating the PAIR framework, students develop the skills necessary to navigate the world of AI, becoming not just passive users, but empowered learners who can leverage technology to enhance their problem-solving abilities.
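The four steps above can be captured as a simple record that a course tool might keep per assignment. The `PairAssignment` class and its field names are illustrative assumptions mirroring the framework's steps, not an official PAIR schema:

```python
from dataclasses import dataclass, field

@dataclass
class PairAssignment:
    problem: str                                       # 1. Problem formulation
    tool: str                                          # 2. AI tool selection
    interactions: list = field(default_factory=list)   # 3. Interaction log
    reflection: str = ""                               # 4. Reflection

    def is_complete(self):
        # An assignment counts as complete only when all four
        # steps have actually been carried out.
        return bool(self.problem and self.tool
                    and self.interactions and self.reflection)

a = PairAssignment(problem="Summarize a primary source", tool="an LLM chatbot")
a.interactions.append({"prompt": "Summarize the letter", "output": "Draft summary"})
a.reflection = "The tool missed key dates; I verified them manually."
print(a.is_complete())  # True
```

Requiring the reflection field before an assignment is "complete" is one way to enforce the framework's emphasis on critical self-assessment rather than passive tool use.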

The PAIR framework model – Source: Harvard Business Publishing

The Road Ahead: Challenges, Considerations, and Responsible Implementation

As with any new technology, generative AI comes with its own set of challenges. Ensuring that AI systems are trained on unbiased data sets is crucial to prevent perpetuating stereotypes or misinformation. Additionally, it’s important to remember that the human element remains irreplaceable in education. 

Academic Dishonesty

AI tools can be misused for plagiarism, with students using them to generate essays or complete assignments without truly understanding the content.

Rather than outright banning these tools, educational institutions need to promote ethical and responsible AI usage. This entails establishing transparent guidelines and policies to deter dishonest or unethical practices.

Accuracy and Bias

Generative AI models are trained on vast amounts of data, which can perpetuate biases or inaccuracies present in that data. They are often trained on datasets that may not adequately represent the cultural and contextual diversity of different regions.

This can lead to a lack of relevance and inclusivity in AI-generated content. Uncritical use of AI-generated content could lead students to faulty information.

In addition, localization efforts are needed to ensure that generative AI systems are sensitive to cultural nuances and reflect diverse perspectives. 

Overdependence on Technology

Overreliance on AI tools for learning can hinder critical thinking and problem-solving skills. Students may become accustomed to having solutions generated for them, rather than developing the ability to think independently.

Educating users about AI’s limitations, potential risks, and responsible usage becomes extremely important. AI should be promoted as a tool designed to augment human capabilities rather than replace them.


Readiness Disparities

While generative AI offers tremendous potential for improving accessibility and inclusion in education, on some occasions, it can also exacerbate existing disparities.

The integration of generative AI hinges on “technological readiness” – meaning adequate infrastructure, reliable internet access, proper training, and digital literacy.

These factors can vary greatly between regions and countries. Unequal access to these resources could create a situation where generative AI widens, rather than shrinks, the educational gap between developed and developing nations.

These disparities must be addressed to ensure that generative AI reaches all students, regardless of their background, ensuring a more equitable society.  

Way Forward: A Balanced Approach

Market projection of AI in education – Source: Yahoo Finance

Generative AI undoubtedly holds the potential to reshape the education landscape, by providing personalized learning, improving content, automating tasks, and reducing barriers to education.

To successfully leverage these benefits, a balanced approach is necessary that promotes responsible integration of AI in educational settings, while preserving the human touch. Moreover, it is crucial to empower educators and learners with the relevant skills and competencies to effectively utilize Generative AI while also fostering dialogue and collaboration among stakeholders.

By striking a balance between leveraging its potential benefits and mitigating the associated risks, the equitable integration of Generative AI in education can be achieved, creating a dynamic and adaptive learning environment that empowers students for the future.

Huda Mahmood - Author
Huda Mahmood
| March 19

Vector embeddings have revolutionized the representation and processing of data for generative AI applications. The versatility of embedding tools has enhanced data analytics across their use cases.

In this blog, we will explore Google’s recent development of specialized embedding tools that particularly focus on promoting research in the fields of dermatology and pathology.

Let’s start our exploration with an overview of vector embedding tools.

What are vector embedding tools?

Vector embeddings are a way of representing data points as vectors in a shared space. While the direction of a vector determines its relationship with other data points in that space, the length of a vector signifies the importance of the data point it represents.

A vector embedding tool processes input data by analyzing it and identifying its key features of interest. The tool then assigns each data point a unique vector based on those features. Embeddings are a powerful tool for representing complex datasets, allowing faster and more efficient data processing.
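A minimal sketch of how vector direction drives comparison: cosine similarity scores two embeddings by the angle between them, so vectors pointing the same way score near 1.0. The three-dimensional toy vectors below are invented for illustration; real embedding tools produce hundreds of dimensions:

```python
import math

def cosine_similarity(u, v):
    """Direction-based comparison: 1.0 means the vectors point the
    same way, 0.0 means they are unrelated (orthogonal)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional embeddings for three concepts.
cat    = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car    = [0.0, 0.1, 0.9]

# "cat" and "kitten" point in nearly the same direction; "car" does not.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

This directional comparison is what lets embedding-based systems retrieve related items quickly without comparing raw data point by point.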




General embedding tools process a wide variety of data, capturing general features without focusing on specialized fields of interest. On the contrary, there are specialized embedding tools that enable focused and targeted data handling within a specific field of interest.

Specialized embedding tools are particularly useful in fields like finance and healthcare where unique datasets form the basis of information. Google has shared two specialized vector embedding tools, dealing with the demands of healthcare data processing.

However, before we delve into the details of these tools, it is important to understand their need in the field of medicine.

Why does healthcare need specialized embedding tools?

Embeddings enable ML engineers to develop applications that handle multimodal data efficiently. These AI-powered applications span various industries, and while they serve a diverse range of uses, some use cases require differentiated data-processing systems.

Healthcare is one such industry where specialized embedding tools enable efficient data processing. Let’s explore the major reasons for this differentiated use of embedding tools.


Explore the role of vector embeddings in generative AI


Domain-specific features

Medical data, ranging from patient history to imaging results, are crucial for diagnosis. These data sources, particularly from the field of dermatology and pathology, provide important information to medical personnel.

The slight variation of information in these sources requires specialized knowledge for the identification of relevant information patterns and changes. While regular embedding tools might fail at identifying the variations between normal and abnormal information, specialized tools can be created with proper training and contextual knowledge.

Data scarcity

While data is abundant in many fields and industries, healthcare information is often scarce. Hence, specialized embedding tools are needed that can train on small datasets with focused learning of relevant features, leading to enhanced performance in the field.

Focused and efficient data processing

The AI model must be trained to interpret particular features of interest from a typical medical image. This demands specialized tools that can focus on relevant aspects of a particular disease, assisting doctors in making accurate diagnoses for their patients.

In essence, specialized embedding tools bridge the gap between the vast amount of information within medical images and the need for accurate, interpretable diagnoses specific to each field in healthcare.

A look into Google’s embedding tools for healthcare research

The health-specific embedding tools by Google are focused on enhancing medical image analysis, particularly within the field of dermatology and pathology. This is a step towards addressing the challenge of developing ML models for medical imaging.

The two embedding tools – Derm Foundation and Path Foundation – are available for research use to explore their impact on the field of medicine and study their role in improving medical image analysis. Let’s take a look at their specific uses in the medical world.

Derm Foundation: A step towards redefining dermatology

Derm Foundation is a specialized embedding tool designed by Google for the field of dermatology. It focuses on generating embeddings from skin images, capturing the critical features relevant to diagnosing a skin condition.

The pre-training process of this specialized embedding tool consists of learning from a library of labeled skin images with detailed descriptions, such as diagnoses and clinical notes. The tool learns to identify relevant features for skin condition classification from the provided information, using it on future data to highlight similar features.
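As a rough sketch of how such embeddings might be used downstream, a researcher could label a new image by its nearest labeled embedding. The two-dimensional vectors, condition labels, and `classify` helper below are hypothetical; a real workflow would obtain embeddings from the Derm Foundation model and likely train a proper classifier on top:

```python
import math

def euclidean(u, v):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def classify(embedding, labeled_embeddings):
    """Assign the label of the nearest labeled embedding (1-NN)."""
    label, _ = min(
        ((lbl, euclidean(embedding, emb)) for lbl, emb in labeled_embeddings),
        key=lambda pair: pair[1],
    )
    return label

# Made-up toy reference embeddings for two skin conditions.
reference = [
    ("eczema",    [0.9, 0.1]),
    ("psoriasis", [0.1, 0.9]),
]
print(classify([0.8, 0.2], reference))  # eczema
```

Because the embedding already encodes the relevant skin features, even a simple nearest-neighbor rule can work from a handful of labeled examples, which is exactly why pre-trained embeddings help in data-scarce medical settings.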


Derm Foundation outperforms BiT-M (a standard pre-trained image model) – Source: Google Research Blog


Some common features of interest for Derm Foundation when analyzing a typical skin image include:

  • Skin color variation: to identify any abnormal pigmentation or discoloration of the skin
  • Textural analysis: to identify and differentiate between smooth, rough, or scaly textures, indicative of different skin conditions
  • Pattern recognition: to highlight any moles, rashes, or lesions that can connect to potential abnormalities

Potential use cases of the Derm Foundation

Based on the pre-training dataset and focus on analyzing skin-specific features, Derm Foundation embeddings have the potential to redefine the data-processing and diagnosing practices for dermatology. Researchers can use this tool to develop efficient ML models. Some leading potential use cases for these models include:

Early detection of skin cancer

Efficient identification of skin patterns and textures from images can enable dermatologists to timely detect skin cancer in patients. Early detection can lead to better treatments and outcomes overall.

Improved classification of skin diseases

Each skin condition, such as dermatitis, eczema, and psoriasis, shows up differently on a medical image. A specialized embedding tool empowers the models to efficiently detect and differentiate between different skin conditions, leading to accurate diagnoses and treatment plans.

Hence, the Derm Foundation offers enhanced accuracy in dermatological diagnoses, faster deployment of models due to the use of pre-trained embeddings, and focused analysis by dealing with relevant features. It is a step towards a more accurate and efficient diagnosis of skin conditions, ultimately improving patient care.


Here’s your guide to choosing the right vector embedding model for your generative AI use case


Path Foundation: Revamping the world of pathology in medical sciences

While the Derm Foundation was specialized to study and analyze skin images, the Path Foundation embedding is designed to focus on images from pathology.


An outlook of SSL training used by Path Foundation – Source: Google Research Blog


It analyzes the visual data of tissue samples, focusing on critical features that can include:

  • Cellular structures: focusing on cell size, shape, or arrangement to identify any possible diseases
  • Tumor classification: differentiating between different types of tumors or assessing their aggressiveness

The pre-training process of the Path Foundation embedding comprises labeled pathology images along with detailed descriptions and diagnoses relevant to them.


Learn to build LLM applications


Potential use cases of the Path Foundation

Its training dataset equips the specialized embedding tool for efficient diagnoses in pathology. Some potential use cases within the field for this embedding tool include:

Improved cancer diagnosis

Improved analysis of pathology images can lead to timely detection of cancerous tissues. It will lead to earlier diagnoses and better patient outcomes.

Better pathology workflows

Analysis of pathology images is a time-consuming process that can be expedited with the use of an embedding tool. It will allow doctors to spend more time on complex cases while maintaining an improved workflow for their pathology diagnoses.

Thus, Path Foundation promises the development of pathology processes, supporting medical personnel in improved diagnoses and other medical processes.

Transforming healthcare with vector embedding tools

The use of embedding tools like Derm Foundation and Path Foundation has the potential to redefine data handling for medical processes. Specialized focus on relevant features offers enhanced diagnostic accuracy with efficient processes and workflows.

Moreover, the development of specialized ML models will address data scarcity often faced within healthcare when developing such solutions. It will also promote faster development of useful models and AI-powered solutions.

While the solutions will empower doctors to make faster and more accurate diagnoses, they will also personalize medicine for patients. Hence, embedding tools have the potential to significantly improve healthcare processes and treatments in the days to come.

Huda Mahmood - Author
Huda Mahmood
| March 15

Covariant AI has emerged in the news with the introduction of its new model called RFM-1. The development has created a new promising avenue of exploration where humans and robots come together. With its progress and successful integration into real-world applications, it can unlock a new generation of AI advancements.

Explore the potential of generative AI and LLMs for non-profit organizations

In this blog, we take a closer look at the company and its new model.

What is Covariant AI?

The company develops AI-powered robots for warehouses and distribution centers. It was spun off from OpenAI in 2017 by ex-research scientists Peter Chen and Pieter Abbeel. Its robots are powered by a technology called the Covariant Brain, a machine-learning (ML) model that trains and improves the robots’ functionality in real-world applications.

The company has recently launched a new AI model that tackles one of the major challenges in developing robots with human-like intelligence. Let’s dig deeper into the problem and its proposed solution.


What was the challenge?

Today’s digital world is heavily reliant on data to progress. Since generative AI is an important part of this arena, data and information form the basis of its development as well. Developing enhanced functionality in robots, and training them appropriately, requires large volumes of data.

The limited amount of available data poses a great challenge, slowing down the pace of progress. It was a result of this challenge that OpenAI disbanded its robotics team in 2021. The data was insufficient to train the movements and reasoning of robots appropriately.

However, it all changed when Covariant AI introduced its new AI model.


Understanding the Covariant AI model

The company presented the world with RFM-1, its Robotics Foundation Model as a solution and a step ahead in the development of robotics. Integrating the characteristics of large language models (LLMs) with advanced robotic skills, the model is trained on a real-world dataset.

Covariant used years of data from its AI-powered robots already operational in warehouses – for instance, the item-picking robots working in the warehouses of Crate & Barrel and Bonprix. With datasets this large, the challenge of data limitation was addressed, enabling the development of RFM-1.

Since the model leverages real-world data from robots operating within the industry, it is well-suited to training the machines efficiently. It brings together the reasoning of LLMs and the physical dexterity of robots, resulting in human-like learning in robots.


An outlook of the features and benefits of RFM-1


Unique features of RFM-1

The introduction of the new AI model by Covariant AI has definitely impacted the trajectory of future developments in generative AI. While we still have to see how the journey progresses, let’s take a look at some important features of RFM-1.

Multimodal training capabilities

The RFM-1 is designed to deal with five different types of input: text, images, video, robot instructions, and measurements. Hence, it is more diverse in data processing than a typical LLM that is primarily focused on textual data input.
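A hedged sketch of what bundling those five modalities might look like in code. The `MultimodalInput` container and its field names are assumptions made for illustration, not Covariant's actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MultimodalInput:
    """Hypothetical container for the five input types the article
    lists for RFM-1; any subset may be present for a given query."""
    text: Optional[str] = None
    images: Optional[list] = None
    video: Optional[list] = None                 # e.g. a list of frames
    robot_instructions: Optional[list] = None
    sensor_measurements: Optional[dict] = None

    def modalities(self):
        """Report which of the five modalities this input carries."""
        return [name for name, value in vars(self).items() if value is not None]

sample = MultimodalInput(text="pick up the red box",
                         sensor_measurements={"gripper_force_n": 2.5})
print(sample.modalities())  # ['text', 'sensor_measurements']
```

The point of the container is that a single request can mix language with physical-world signals, which is what separates this kind of model from a text-only LLM.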

Integration with the physical world

Unlike typical LLMs, this AI model engages with the physical world around it through a robot. Its multimodal understanding lets it comprehend the surrounding environment in addition to language input, enabling the robot to interact with the physical world.

Advanced reasoning skills

The advanced AI model not only processes the available information but engages with it critically. Hence, RFM-1 has enhanced reasoning skills that provide the robot with a better understanding of situations and improved prediction skills.




Benefits of RFM-1

The benefits of the AI model align with its unique features. Some notable advantages of this development are:

Enhanced performance of robots

The multimodal data enables the robots to develop a deeper understanding of their environments. It results in their improved engagement with the physical world, allowing them to perform tasks more efficiently and accurately. It will directly result in increased productivity and accuracy of business operations where the robots operate.

Improved adaptability

The model’s improved reasoning skills ensure that the robots are equipped to understand, learn, and reason with new data. Hence, the robots become more versatile and adaptable to their changing environments.

Reduced reliance on programming

RFM-1 is built to constantly engage with and learn from its surroundings. Since it enables the robot to comprehend and reason with the changing input data, the reliance on pre-programmed instructions is reduced. The process of development and deployment becomes simpler and faster.

Hence, the multiple new features of RFM-1