The evolution of large language models (LLMs) has revolutionized many fields, including analytics. Traditionally, LLMs have been integrated into analytics workflows to assist in explaining data, generating summaries, and uncovering insights. However, a more recent breakthrough in AI, Agentic AI, involves building AI systems composed of multiple agents, each with a defined purpose and capable of autonomous decision-making and self-directed action.
This shift is now making its way into the analytics domain, transforming how we interact with data. According to Gartner:
By 2026, over 80% of business consumers will prefer intelligent assistants and embedded analytics over dashboards for data-driven insights.
Agentic AI is reshaping the analytics landscape by enabling conversational, intelligent, and proactive data experiences.
In this blog, we’ll explore how agentic analytics is redefining data workflows and making data-driven decision-making more accessible, intelligent, and efficient.
What is Agentic Analytics?
In the realm of data analytics, deriving insights is often a complex and time-consuming process. Data professionals invest significant effort in preparing the right data, cleaning and organizing it, and finally reaching meaningful conclusions. With the rise of LLM-powered agents, many of these tasks have become easier and more efficient.
Today, different types of agents can be employed at various stages of the analytics lifecycle. When these agents are granted autonomy and integrated across the entire analytics workflow, they form a cohesive, intelligent system known as Agentic Analytics. This paradigm shift enables more conversational, dynamic, and accessible ways to work with data.
Why Shift to Agentic Analytics?
How Does Agentic Analytics Differ?
To better understand the impact of Agentic Analytics, let’s compare it with traditional business intelligence approaches and AI-assisted methods:
How It Works: Components of Agentic Analytics
Agentic Analytics brings together Agentic AI and data analytics to turn raw business data into intelligent, actionable insights. To achieve this, it builds on the core architectural components of Agentic AI, enhanced with analytics-specific modules. Let’s break down some key components:
1. AI Agents (LLM-Powered)
At the core of Agentic Analytics are autonomous AI agents powered by large language models (LLMs). As illustrated in the sketch after this list, these agents can:
Access and query data sources
Interpret user intent
Generate automated insights and summaries
Take actions like triggering alerts, or recommending decisions
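As a rough illustration of these capabilities, the sketch below wires an LLM to a small SQLite database: it interprets a question, generates SQL, runs the query, and summarizes the result. It assumes the OpenAI Python SDK; the table schema, model name, and database path are hypothetical placeholders.

# Minimal sketch of an LLM-powered analytics agent. Assumes the OpenAI Python SDK;
# the schema, model name, and database path are hypothetical placeholders.
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
SCHEMA = "sales(order_id, region, product, units, revenue, order_date)"

def ask(messages):
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

def answer_question(question: str, db_path: str = "analytics.db") -> str:
    # 1. Interpret user intent and translate it into SQL.
    sql = ask([
        {"role": "system", "content": f"Write one SQLite query for the table {SCHEMA}. Return SQL only."},
        {"role": "user", "content": question},
    ]).strip().strip("`")

    # 2. Access and query the data source.
    rows = sqlite3.connect(db_path).execute(sql).fetchall()

    # 3. Generate an automated summary of the result.
    return ask([
        {"role": "user",
         "content": f"Question: {question}\nSQL: {sql}\nRows: {rows}\nSummarize the insight in two sentences."},
    ])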
2. Memory and Learning Module
This component stores user preferences, like frequently asked questions, preferred data formats, past interactions, and recurring topics. By leveraging this memory, the system personalizes future responses and learns over time, leading to smarter, more relevant interactions.
3. Semantic Module
The semantic layer is foundational to both analytics and agentic AI. It serves as a unified interface that bridges raw data and business context, adding business logic, key metrics, governance, and consistency so that insights are not only accurate but also aligned with the organization’s definitions and standards.
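To make this concrete, here is a small illustrative sketch of what a semantic-layer entry might look like, expressed as a Python mapping that agents resolve metric names against; the metric names and SQL expressions are hypothetical.

# Hypothetical semantic-layer entries: governed business definitions that agents
# look up instead of guessing at raw columns.
SEMANTIC_LAYER = {
    "net_revenue": {
        "description": "Revenue after refunds, in USD",
        "sql": "SUM(revenue) - SUM(refund_amount)",
        "grain": ["region", "order_date"],
        "owner": "finance",
    },
    "repeat_purchase_rate": {
        "description": "Share of customers with more than one order in the period",
        "sql": "COUNT(DISTINCT CASE WHEN order_count > 1 THEN customer_id END) * 1.0"
               " / COUNT(DISTINCT customer_id)",
        "grain": ["month"],
        "owner": "sales_ops",
    },
}

def resolve_metric(name: str) -> str:
    # Agents call this to get the governed SQL expression for a metric.
    return SEMANTIC_LAYER[name]["sql"]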
4. Data Sources & Tools Integration
Agentic Analytics systems must connect to a wide variety of data sources and tools that agents can access to perform their tasks. These include structured databases, analytics tools, ETL tools, business applications, etc.
Agentic Analytics systems are powered by a collection of specialized autonomous agents, each with a clear role in the analytics lifecycle. Let’s look at some of the fundamental agents involved in analytics (a toy orchestration sketch follows the list):
1. Planner Agent
Acts as the strategist. Breaks down a business request into smaller analytical tasks, assigns them to the right agents, and manages the execution to ensure goals are met efficiently.
Example:
A business launched a new smartwatch, and now the project manager needs a report to “assess sales, engagement, and market reception.” The Planner Agent interprets the goal and creates a multi-step workflow and delegates tasks to the appropriate Agents.
2. Data Agent
Acts as the data connector. Identifies the right data sources, retrieves relevant datasets, and ensures secure, accurate access to information across internal systems and external APIs.
Example:
The Data Agent pulls sales data from the ERP, website analytics from Google Analytics, customer reviews from e-commerce platforms, and social media mentions via APIs.
3. Data Preparation Agent
Acts as the data wrangler. Cleans, transforms, and enriches datasets so they are ready for analysis. Handles formatting, joins, missing values, and data consistency checks.
Example:
The Prep Agent merges sales and marketing data, enriches customer profiles with demographic details, and prepares engagement metrics for further analysis.
4. Analysis Agent
Acts as the analyst. Selects and applies the appropriate analytical or statistical methods to uncover patterns, trends, and correlations in the data by generating code or SQL queries.
Example:
The Analysis Agent calculates units sold per region, tracks repeat purchase rates, compares previous launch sales with new ones, identifies the most effective marketing campaigns, and detects patterns.
5. Visualization Agent
Acts as the storyteller. Generates visuals, charts, and tables that make complex data easy to understand for different stakeholders.
Example:
The Visualization Agent builds interactive dashboards showing sales heatmaps, engagement trends over time, and customer sentiment charts.
6. Monitoring Agent
Acts as the supervisor. Monitors results from all agents and ensures actions are initiated when needed.
Example:
The agent coordinates with other agents, monitors sales, and sets up real-time alerts for sentiment drops or sales spikes.
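To show how these roles fit together, here is a toy orchestration sketch; the plan is hard-coded and the agent implementations are stubs, purely to illustrate the hand-offs (a real system would use LLM calls at each step).

# Toy multi-agent orchestration sketch; agent names and outputs are placeholders.
from dataclasses import dataclass

@dataclass
class Task:
    agent: str        # "data", "prep", "analysis", or "visualization"
    instruction: str
    result: object = None

def planner_agent(goal: str) -> list[Task]:
    # A real Planner Agent would use an LLM to decompose the goal; this plan is hard-coded.
    return [
        Task("data", f"Collect sales, web analytics, and reviews for: {goal}"),
        Task("prep", "Clean and join the collected datasets"),
        Task("analysis", "Compute units sold per region and repeat purchase rate"),
        Task("visualization", "Build a sales heatmap and sentiment chart"),
    ]

AGENTS = {  # stub implementations standing in for specialized agents
    "data": lambda t: "raw_datasets",
    "prep": lambda t: "cleaned_table",
    "analysis": lambda t: {"units_by_region": "...", "repeat_rate": "..."},
    "visualization": lambda t: "dashboard.html",
}

def run(goal: str) -> list[Task]:
    tasks = planner_agent(goal)
    for task in tasks:   # a Monitoring Agent would watch this loop and raise alerts
        task.result = AGENTS[task.agent](task)
    return tasks

for t in run("new smartwatch launch"):
    print(t.agent, "->", t.result)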
Real-World Examples of Agentic Analytics Platforms
Tableau Next
Tableau Next is Salesforce’s next-generation agentic analytics platform, tightly integrated with Agentforce, Salesforce’s digital labor framework. Its data foundation ensures enterprise-grade security, compliance, and agility while unifying customer data for holistic analysis.
Built as an open, API-first platform, Tableau Next offers reusable, discoverable analytic assets and a flexible architecture to meet evolving business needs. By embedding AI-powered insights directly into workflows, it allows decision-makers to act on relevant, real-time intelligence without switching tools, making insight delivery truly seamless.
source: Tableau
ThoughtSpot
ThoughtSpot delivers fast, accurate AI-driven insights through a unified platform powered by AI agents, connected insights, and smart applications. It streamlines the entire analytics lifecycle, from data connection and exploration to action, in a single, cohesive environment.
Unlike traditional BI tools that require users to log into dashboards and search for answers, it allows organizations to integrate analytics into custom apps and workflows effortlessly. Every AI-generated insight is fully transparent, with the ability to verify results through natural language tokens or SQL queries, ensuring trust, governance, and AI explainability.
source: Thoughtspot
Tellius
Tellius combines dynamic AI agents with conversational intelligence to make analytics accessible to everyone.
The platform integrates data from multiple systems into a secure, unified knowledge layer, eliminating silos and creating a single source of truth. Multi-agent workflows handle tasks such as planning, data preparation, insight generation, and visualization. These agents operate proactively, delivering anomaly detection, segmentation, root-cause analysis, and actionable recommendations in real time.
Challenges of Agentic Analytics
While agentic analytics offers tremendous potential, realizing its benefits requires addressing several practical and strategic challenges:
Data Quality and Integration
Even the most sophisticated AI agents are limited by the quality of the data they consume. Siloed, inconsistent, or incomplete data can severely degrade output accuracy. To mitigate this, organizations should prioritize integrating curated datasets and implementing a semantic layer, offering a unified and consolidated view across the organization.
Cost Management
Autonomous AI agents often operate in a continuous listening mode, constantly ingesting data and running analysis, causing high token consumption and operational cost. Techniques like Agentic Retrieval-Augmented Generation (RAG) and context filtering can reduce unnecessary data queries and optimize cost efficiency.
Trust and Transparency
Trust, transparency, and explainability become fundamental as users come to rely on AI-driven decisions. Transparent decision logs, natural language explanations, and clear traceability back to source data and the agentic flow help users not only verify results but also understand how they were generated.
Security and Compliance
When AI agents are given autonomy to pull, process, and act on enterprise data, strict access control and compliance safeguards are essential. This includes role-based data access, data masking for sensitive fields, and audit trails for agent actions. It also involves ensuring agent operations align with industry-specific regulations such as GDPR or HIPAA.
Response Quality
AI agents can produce responses that deviate from established business logic, raising concerns about their use in decision-making. To address this, a clear orchestration framework with well-defined agents is essential. Other strategies include adding a semantic layer for consistent business definitions and a reinforcement learning layer that enables learning from past feedback.
Agentic analytics represents an evolution in the analytics landscape where insights are no longer just discovered but are contextual, conversational, and actionable. With Agentic AI, insights are described, root cause is diagnosed, outcomes are predicted, and corrective actions are prescribed, all autonomously.
To unlock this potential, organizations must implement an agentic system, ensuring transparency, maintaining security and governance, aligning with business requirements, and leveraging curated, trusted data.
According to Gartner, augmented analytics capabilities will evolve into autonomous analytics platforms by 2027, with 75% of analytics content leveraging GenAI for enhanced contextual intelligence. Organizations must prepare today to lead tomorrow, harnessing what, why, and how of data in a fully automated, intelligent way.
Imagine relying on an LLM-powered chatbot for important information, only to find out later that it gave you a misleading answer. This is exactly what happened with Air Canada when a grieving passenger used its chatbot to inquire about bereavement fares. The chatbot provided inaccurate information, leading to a small claims court case and a fine for the airline.
Incidents like this highlight that even after thorough testing and deployment, AI systems can fail in production, causing real-world issues. This is why LLM Observability & Monitoring is crucial. By tracking LLMs in real time, businesses can detect problems such as hallucinations or performance degradation early, preventing major failures.
This blog dives into the importance of LLM observability and monitoring for building reliable, secure, and high-performing LLM applications. You will learn how monitoring and observability can improve performance, enhance security, and optimize costs.
What is LLM Observability and Monitoring?
When you launch an LLM application, you need to make sure it keeps working properly over time. That is where LLM observability and monitoring come in. Monitoring tracks the model’s behavior and performance, while observability digs deeper to explain why things are going wrong by analyzing logs, metrics, and traces.
Since LLMs deal with unpredictable inputs and complex outputs, even the best models can fail unexpectedly in production. These failures can lead to poor user experiences, security risks, and higher costs. Thus, if you want your AI system to stay reliable and trustworthy, observability and monitoring are critical.
LLM Monitoring: Is Everything Working as Expected?
LLM monitoring tracks critical metrics to identify if the model is functioning as expected. It focuses on the performance of the LLM application by analysing user prompts, responses, and key performance indicators. Good monitoring means you spot problems early and keep your system reliable.
However, monitoring only shows you what is wrong, not why. If users suddenly get irrelevant answers or the system slows down, monitoring will highlight the symptoms, but you will still need a way to figure out the real cause. That is exactly where observability steps in.
LLM Observability: Why Is This Happening?
LLM observability goes beyond monitoring by answering the “why” behind the detected issues, providing deeper diagnostics and root cause analysis. It brings together logs, metrics, and traces to give you the full picture of what went wrong during a user’s interaction.
This makes it easier to track issues back to specific prompts, model behaviors, or system bottlenecks. For instance, if monitoring shows increased latency or inaccurate responses, observability tools can trace the request flow, identifying the root cause and enabling more efficient troubleshooting.
What to Monitor and How to Achieve Observability?
By tracking key metrics and leveraging observability techniques, organizations can detect failures, optimize costs, and enhance the user experience. Let’s explore the critical factors that need to be monitored and how to achieve LLM observability.
Key Metrics to Monitor
Monitoring core performance indicators and assessing the quality of responses ensures LLM efficiency and user satisfaction. A simple tracking sketch follows the list below.
Response Time: Measures the time taken to generate a response, allowing you to detect when the LLM is taking longer than usual to respond.
Token Usage: Tokens are the currency of LLM operations. Monitoring them helps optimize resource use and control costs.
Throughput: Measures requests per second, ensuring the system handles varying workloads while maintaining performance.
Accuracy: Compares LLM outputs against ground truth data. It can help detect performance drift. For example, in critical services, monitoring accuracy helps detect and correct inaccurate customer support responses in real time.
Relevance: Evaluates how well responses align with user queries, ensuring meaningful and useful outputs.
User Feedback: Collecting user feedback allows for continuous refinement of the model’s responses, ensuring they better meet user needs over time.
Other metrics: These include application-specific metrics, such as faithfulness, which is crucial for RAG-based applications.
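As a simple illustration of tracking such metrics, the sketch below wraps an LLM call and records latency and token usage; llm_call is a hypothetical stand-in for your model client, and the alert threshold is arbitrary.

# Minimal request-level monitoring sketch; `llm_call` is a hypothetical stand-in
# that returns an object with .usage (prompt/completion token counts).
import time

METRICS = []  # in production, send these to a metrics backend instead of a list

def monitored_call(llm_call, prompt: str):
    start = time.perf_counter()
    response = llm_call(prompt)
    latency_ms = (time.perf_counter() - start) * 1000
    METRICS.append({
        "latency_ms": round(latency_ms, 1),
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        "prompt_preview": prompt[:80],
    })
    if latency_ms > 5000:  # arbitrary example threshold for a slow-response alert
        print(f"ALERT: slow response ({latency_ms:.0f} ms)")
    return response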
How to Achieve Observability
Observability goes beyond monitoring by providing deep insights into why and where an issue occurs. It relies on three main components:
1. Logs:
Logs provide granular records of input-output pairs, errors, warnings, and metadata related to each request. They are crucial for debugging and tracking failed responses and help maintain audit trails for compliance and security.
For example, if an LLM generates an inaccurate response, logs can be used to identify the exact input that caused the issue, along with the model’s output and any related errors.
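A minimal sketch of such a log record, using Python's standard logging module (the field names are illustrative):

# Structured log record for each LLM request; field names are illustrative.
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("llm_app")

def log_llm_interaction(prompt: str, output: str, model: str, error: str | None = None):
    record = {
        "request_id": str(uuid.uuid4()),  # lets you correlate this log with traces and metrics
        "model": model,
        "prompt": prompt,
        "output": output,
        "error": error,
    }
    logger.info(json.dumps(record))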
2. Tracing:
Tracing maps the entire request flow, from prompt preprocessing to model execution, helping identify latency issues, pipeline bottlenecks, and system dependencies.
For instance, if response times are slow, tracing can determine which step causes the delay.
3. Metrics:
Metrics can be sampled, correlated, summarized, and aggregated in a variety of ways, providing actionable insights into model efficiency and performance. These can include the indicators described above, such as response time, token usage, throughput, and accuracy.
Monitoring user interactions and key metrics helps detect anomalies, while correlating them with logs and traces enables real-time issue diagnosis through observability tools.
Why Monitoring and Observability Matter for LLMs?
LLMs come with inherent risks. Without robust monitoring and observability, these risks can lead to unreliable or harmful outputs.
Prompt Injection Attacks
Prompt injection attacks manipulate LLMs into generating unintended outputs by disguising harmful inputs as legitimate prompts. A notable example is DPD’s chatbot, which was tricked into using profanity and insulting the company, causing public embarrassment.
By actively tracking and analysing user interactions, suspicious patterns can be flagged and prevented in real-time.
Source: mustsharenews
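As one naive illustration of such tracking, the sketch below flags prompts that match common injection patterns before they reach the model; the patterns are illustrative, and production systems typically combine heuristics with model-based classifiers.

# Naive prompt-injection flagging sketch; patterns are illustrative only.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
    r"pretend (you are|to be)",
]

def is_suspicious(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

def guarded_call(llm_call, prompt: str):
    if is_suspicious(prompt):
        # Log the attempt and block it (or route it to human review).
        return "This request cannot be processed."
    return llm_call(prompt)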
Hallucinations
LLMs can generate misleading or incorrect responses, which can be particularly harmful in high-stakes fields like healthcare and legal services.
By monitoring responses for factual correctness, hallucinations can be detected early, while observability identifies the root cause, whether a dataset issue or a model misconfiguration.
Sensitive Data Disclosure
LLMs trained on sensitive data may unintentionally reveal confidential information, leading to privacy breaches and compliance risks.
Monitoring helps flag leaks in real-time, while observability traces the source to refine sensitive data-handling strategies and ensure regulatory compliance.
Performance and Latency Issues
Slow or inefficient LLMs can frustrate users and disrupt operations.
Monitoring response times, API latency, and token usage helps identify performance bottlenecks, while observability provides insights for debugging and optimizing efficiency.
Concept Drift
Over time, LLMs may become less accurate as user behaviour, language patterns, and real-world data evolve.
Example: A customer service chatbot generating outdated responses due to new product features and evolved customer concerns.
Continuous monitoring of responses and user feedback helps detect gradual shifts in user satisfaction and accuracy, allowing for timely updates and retraining.
Using Langfuse for LLM Monitoring & Observability
Let’s explore a practical example using DeepSeek LLM and Langfuse to demonstrate monitoring and observability.
Step 1: Set Up the Environment
Create a virtual environment and install the required modules:
py -3.12 -m venv langfuse_venv
Set up a .env file with Langfuse API keys (found under Settings → Setup → API Keys)
Step 2: Develop and Instrument the App
Develop an LLM-powered Python app for content generation using the code below, integrating Langfuse for monitoring. After running the code, you’ll see traces of your interactions in the Langfuse project.
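Below is a minimal sketch of such an app, assuming the Langfuse Python SDK (v2) with its OpenAI drop-in integration and an OpenAI-compatible DeepSeek endpoint; the DEEPSEEK_API_KEY variable name and the content-generation prompt are illustrative, so adapt them to your setup.

# Minimal sketch: content-generation app traced with Langfuse.
# Assumes the Langfuse Python SDK v2 (OpenAI drop-in + @observe decorator)
# and an OpenAI-compatible DeepSeek endpoint; DEEPSEEK_API_KEY is an illustrative name.
import os
from dotenv import load_dotenv
from langfuse.openai import OpenAI      # drop-in client that logs calls to Langfuse
from langfuse.decorators import observe

load_dotenv()  # loads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST, DEEPSEEK_API_KEY

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

@observe()  # records this function call as a trace in Langfuse
def generate_content(topic: str) -> str:
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {"role": "system", "content": "You are a helpful content writer."},
            {"role": "user", "content": f"Write a short blog intro about {topic}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_content("LLM observability"))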
Step 3: Experience LLM Observability and Monitoring with Langfuse
Navigate to the Langfuse interactive dashboard to monitor quality, cost, and latency.
Track traces of user requests to analyse LLM calls and workflows.
You can create custom evaluators or use existing ones to assess traces based on relevant metrics. Start by creating a new template from an existing one. Go to Evaluations → Templates → New Template
Setting up the evaluator requires an LLM API key. In our case, we used Azure OpenAI GPT-3.5 Turbo.
After setting up the evaluator, you can create evaluation templates suited to your use case; for this project, we use a relevance metric.
After creating a template, we will create a new evaluator. Go to Evaluations → New Evaluator and select the created template.
Select traces and mark new traces so the evaluation runs on incoming traces. You can also evaluate a custom dataset. In the next steps, we will see the evaluations for the new traces.
Debug each trace and track its execution flow.
It is a great feature for LLM observability, letting you trace through the entire execution flow of a user request.
You can also see the relevance score that is calculated as a result of the evaluator we defined in the previous step and the user feedback for this trace.
To see the scores for all the traces, you can navigate to the Scores tab. In this example, traces are evaluated based on:
User feedback, collected via the LLM application.
Relevancy score determined using a relevance evaluator to assess content alignment with user requests.
These scores help track model performance and provide qualitative insights for the continuous improvement of LLMs.
Sessions track multi-step conversations and agentic workflows by grouping multiple traces into a single, seamless replay. This simplifies analysis, debugging, and monitoring by consolidating the entire interaction in one place.
This tutorial demonstrates how to easily set up monitoring for any LLM application. A variety of open-source and paid tools are available, allowing you to choose the best fit for your application requirements. Langfuse also provides a free demo to explore LLM monitoring and observability (Link).
Key Benefits of LLM Monitoring & Observability
Implementing LLM monitoring and observability is not just a technical upgrade, but a strategic move. Beyond keeping systems stable, it helps boost performance, strengthen security, and create better user experiences. Let’s dive into some of the biggest benefits.
Improved Performance
LLM monitoring keeps a close eye on key performance indicators like latency, accuracy, and throughput, helping teams quickly spot and resolve any inefficiencies. If a model’s response time slows down or its accuracy drops, you will catch it early before users even notice.
By consistently evaluating and tuning your models, you maintain a high standard of service, even as traffic patterns change. Plus, fine-tuning based on real-world data leads to faster response times, better user satisfaction, and lower operational costs over time.
Faster Issue Detection and Resolution
When something breaks in an LLM application, every second counts. Monitoring ensures early detection of glitches or anomalies, while observability tools like logs, traces, and metrics make it much easier to diagnose what is going wrong and where.
Instead of spending hours digging blindly into systems, teams can pinpoint issues in minutes, understand root causes, and apply targeted fixes. This means less downtime, faster recoveries, and a smoother experience for your users.
Enhanced Security and Compliance
Large language models are attractive targets for security threats like prompt injection attacks and accidental data leaks. Robust monitoring constantly analyzes interactions for unusual behavior, while observability tracks back the activity to pinpoint vulnerabilities.
This dual approach helps organizations quickly flag and block suspicious actions, enforce internal security policies, and meet strict regulatory requirements. It is an essential layer of defense for building trust with users and protecting sensitive information.
Better User Experience
An AI tool is only as good as the experience it offers its users. By monitoring user interactions, feedback, and response quality, you can continuously refine how your LLM responds to different prompts.
Observability plays a huge role here as it helps uncover why certain replies miss the mark, allowing for smarter tuning. It results in faster, more accurate, and more contextually relevant conversations that keep users engaged and satisfied over time.
Cost Optimization and Resource Management
Without monitoring, LLM infrastructure costs can quietly spiral out of control. Token usage, API calls, and computational overhead need constant tracking to ensure you are getting maximum value without waste.
Observability offers deep insights into how resources are consumed across workflows, helping teams optimize token usage, adjust scaling strategies, and improve efficiency. Ultimately, this keeps operations cost-effective and prepares businesses to handle growth sustainably.
Thus, LLM monitoring and observability are must-haves for any serious deployment as they safeguard performance and security. Moreover, they also empower teams to improve user experiences and manage resources wisely. By investing in these practices, businesses can build more reliable, scalable, and trusted AI systems.
Future of LLM Monitoring & Observability – Agentic AI?
At the end of the day, LLM monitoring and observability are the foundation for building high-performing, secure, and reliable AI applications. By continuously tracking key metrics, catching issues early, and maintaining compliance, businesses can create LLM systems that users can truly trust.
Hence, observability and monitoring are crucial to building reliable AI agents, especially as we move toward a more agentic AI infrastructure: systems where AI agents are expected to reason, plan, and act independently, making real-time tracking, diagnostics, and optimization even more critical.
Without solid observability, even the smartest AI can spiral into unreliable or unsafe behavior. So, whether you are building a chatbot, an analytics tool, or an enterprise-grade autonomous agent, investing in strong monitoring and observability practices is the key to ensuring long-term success.
It is what separates AI systems that simply work from those that truly excel and evolve over time. Moreover, if you want to learn about this evolution of AI systems towards agentic AI, join us at Data Science Dojo’s Future of Data and AI: Agentic AI conference for an in-depth discussion!
In today’s dynamic digital world, handling vast amounts of data across the organization is challenging. It takes a lot of time and effort to set up different resources for each task and duplicate data repeatedly. Picture a world where you don’t have to juggle multiple copies of data or struggle with integration issues.
Microsoft Fabric makes this possible by introducing a unified approach to data management. Microsoft Fabric aims to reduce unnecessary data replication, centralize storage, and create a unified environment with its unique data fabric method.
What is Microsoft Fabric?
Microsoft Fabric is a cutting-edge analytics platform that helps data experts and companies work together on data projects. It is based on a SaaS model that provides a unified platform for all tasks like ingesting, storing, processing, analyzing, and monitoring data.
With this full-fledged solution, you don’t have to spend all your time and effort combining different services or duplicating data.
Overview of OneLake
Fabric features a lake-centric architecture, with a central repository known as OneLake. OneLake, being built on Azure Data Lake Storage (ADLS), supports various data formats, including Delta, Parquet, CSV, and JSON. OneLake offers a unified data environment for each of Microsoft Fabric’s experiences.
These experiences support professionals across the workflow: ingesting data from different sources into a unified environment, pipelining the ingestion, transformation, and processing of data, developing predictive models, and analyzing the data through visualizations in interactive BI reports.
Exploring Microsoft Fabric Components: Sales Use Case
Microsoft Fabric offers a set of analytics components that are designed to perform specific tasks and work together seamlessly. Let’s explore each of these components and its application in the sales domain:
Synapse Data Engineering:
Synapse Data Engineering provides a powerful Spark platform designed for large-scale data transformations through Lakehouse.
In the sales use case, it facilitates the creation of automated data pipelines that handle data ingestion and transformation, ensuring that sales data is consistently updated and ready for analysis without manual intervention.
Synapse Data Warehouse:
Synapse Data Warehouse represents the next generation of data warehousing, supporting an open data format. The data is stored in Parquet format and published as Delta Lake Logs, supporting ACID transactions and enabling interoperability across Microsoft Fabric workloads.
In the sales context, this ensures that sales data remains consistent, accurate, and easily accessible for analysis and reporting.
Synapse Data Science:
Synapse Data Science empowers data scientists to work directly with secured and governed sales data prepared by engineering teams, allowing for the efficient development of predictive models.
By forecasting sales performance, businesses can identify anomalies or trends, which are crucial for directing future sales strategies and making informed decisions.
Synapse Real-Time Intelligence:
Real-Time Intelligence in Synapse provides a robust solution to gain insights and visualize event-driven scenarios and streaming data logs. In the sales domain, this enables real-time monitoring of live sales activities, offering immediate insights into performance and rapid response to emerging trends or issues.
Data Factory:
Data Factory enhances the data integration experience by offering support for over 200 native connectors to both on-premises and cloud data sources. For the sales use case, this means professionals can create pipelines that automate data ingestion and transformation, ensuring that sales data is always updated and ready for analysis.
Data Activator:
Data Activator is a no-code experience in Microsoft Fabric that automatically triggers actions when specific patterns or conditions are detected in changing data. In the sales context, this helps monitor sales data in Power BI reports and trigger alerts or actions based on real-time changes, ensuring that sales teams can respond quickly to critical events.
Power BI:
Power BI, integrated within Microsoft Fabric, is a leading Business Intelligence tool that facilitates advanced data visualization and reporting. For sales teams, it offers interactive dashboards that display key metrics, trends, and performance indicators. This enables a deep analysis of sales data, helping to identify what drives demand and what affects sales performance.
Let’s get started with sales data analysis by leveraging the power of Microsoft Fabric:
1. Sample Data
The dataset utilized for this example is the sample sales data (sales.csv).
2. Create Workspace
To work with data in Fabric, first create a workspace with the Fabric trial enabled.
On the home page, select Synapse Data Engineering.
In the menu bar on the left, select Workspaces.
Create a new workspace with any name and select a licensing mode. When a new workspace opens, it should be empty.
3. Create Lakehouse
Now, let’s create a lakehouse to store the data.
In the bottom left corner select Synapse Data Engineering and create a new Lakehouse with any name.
On the Lake View tab in the pane on the left, create a new subfolder.
4. Create Pipeline
To ingest data, we’ll make use of a Copy Data activity in a pipeline. This will enable us to extract the data from a source and copy it to a file in the already-created lakehouse.
On the Home page of Lakehouse, select Get Data and then select New Data Pipeline to create a new data pipeline named Ingest Sales Data.
The Copy Data wizard will open automatically; if it does not, select Copy Data > Use Copy Assistant on the pipeline editor page.
In the Copy Data wizard, on the Choose a data source page select HTTP in the New sources section.
Enter the settings in the connect to data source pane as shown:
Click Next. On the next page, set the Request method to GET, leave the other fields blank, and select Next.
When the pipeline starts to run, its status can be monitored in the Output pane.
Now, in the created Lakehouse check if the sales.csv file has been copied.
5. Create Notebook
On the Home page for your lakehouse, in the Open Notebook menu, select New Notebook.
In the notebook, configure one of the cells as a Toggle parameter cell and declare a variable for the table name.
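For reference, the parameter cell can be as simple as the following sketch (the default value is illustrative):

# Parameter cell (marked via "Toggle parameter cell" in the Fabric notebook).
# The pipeline's table_name parameter will override this default at run time.
table_name = "sales"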
Select Data Wrangler in the notebook ribbon, and then select the data frame that we just created using the data file from the copy data pipeline. Here, we changed the data types of columns and dealt with missing values.
Data Wrangler generates a descriptive overview of the data frame, allowing you to transform and process your sales data as required. It is a great tool, especially for data preprocessing in data science tasks.
Now, we can save the data as delta tables to use later for sales analytics. Delta tables are schema abstractions for data files that are stored in Delta format.
Let’s use SQL operations on this delta table to see if the table is stored.
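As a rough sketch of what these notebook cells can look like (assuming the built-in spark session available in Fabric notebooks and a hypothetical Files/new_data subfolder; adjust paths and options to your lakehouse):

# Load the copied CSV into a Spark dataframe (path and options are illustrative).
df = (spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("Files/new_data/sales.csv"))

# Save the prepared dataframe as a managed delta table named by the parameter cell.
df.write.format("delta").mode("overwrite").saveAsTable(table_name)

# Verify the delta table with a quick Spark SQL query.
spark.sql(f"SELECT * FROM {table_name} LIMIT 10").show()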
6. Run and Schedule Pipeline
Go to the already created pipeline page, add a Notebook activity to run on completion of the Copy Data activity, and configure it as follows, so that the table_name parameter overrides the default value of the table_name variable in the notebook’s parameters cell.
In the Notebook, select the notebook you just created.
7. Schedule and Monitor Pipeline
Now, we can schedule the pipeline.
On the Home tab of the pipeline editor window, select Schedule and enter the scheduling requirements.
To keep track of pipeline runs, add the Office Outlook activity after the pipeline.
In the settings of activity, authenticate with the sender account (use your account in ‘To’).
For the Subject and Body, select the Add dynamic content option to display the pipeline expression builder canvas and add the expressions as follows. (select your activity name in ‘activity ()’)
8. Use Data from Pipeline in PowerBI
In the lakehouse, click on the delta table just created by the pipeline and create a New Semantic Model.
Once the model is created, the model view opens; click Create New Report.
This opens another tab of PowerBI, where you can visualize the sales data and create interactive dashboards.
Choose a visual of interest, right-click it, and select Set Alert. You can also use the Set Alert button in the Power BI toolbar.
Next, define trigger conditions to create a trigger in the following way:
This way, sales professionals can seamlessly use their data across the platform by transforming and storing it in the appropriate format. They can perform analysis, make informed decisions, and set up triggers, allowing them to monitor sales performance and react quickly to any uncertainty.
Conclusion
In conclusion, Microsoft Fabric is a revolutionary all-in-one analytics platform that simplifies data management for enterprises. By providing a unified environment, it eliminates the complexity of juggling multiple services: data moves in and out of the same environment for ingestion, processing, and analysis.
With Microsoft Fabric, businesses can streamline data workflows, from data ingestion to real-time analytics, and can respond quickly to market dynamics.
Want to learn more about Microsoft Fabric? Here’s a tutorial to get you started today for a comprehensive understanding!
Imagine effortlessly asking your business intelligence dashboard any question and receiving instant, insightful answers. This is not a futuristic concept but a reality unfolding through the power of Large Language Models (LLMs).
Descriptive analytics is at the core of this transformation, turning raw data into comprehensible narratives. When combined with the advanced capabilities of LLMs, Business Intelligence (BI) dashboards evolve from static displays of numbers into dynamic tools that drive strategic decision-making.
LLMs are changing the way we interact with data. These advanced AI models excel in natural language processing (NLP) and understanding, making them invaluable for enhancing descriptive analytics in Business Intelligence (BI) dashboards.
In this blog, we will explore the power of LLMs in enhancing descriptive analytics and their impact on business intelligence dashboards.
Understanding Descriptive Analytics
Descriptive analytics is the most basic and common type of analytics that focuses on describing, summarizing, and interpreting historical data.
Companies use descriptive analytics to summarize and highlight patterns in current and historical data, enabling them to make sense of vast amounts of raw data to answer the question, “What happened?” through data aggregation and data visualization techniques.
The Evolution of Dashboards: From Static to LLM
Initially, the dashboards served as simplified visual aids, offering a basic overview of key metrics amidst cumbersome and text-heavy reports.
However, as businesses began to demand real-time insights and more nuanced data analysis, the static nature of these dashboards became a limiting factor, forcing them to evolve into dynamic, interactive tools. Dashboards transformed into self-service BI tools with drag-and-drop functionality and an increased focus on interactive, user-friendly visualization.
As data volumes grew, Business Intelligence (BI) dashboards also shifted to cloud-based and mobile platforms, facilitating integration with various data sources and allowing remote collaboration. Finally, the integration of BI dashboards with LLMs has unlocked a new level of analytical potential.
Role of Descriptive Analytics in Business Intelligence Dashboards and its Limitations
Despite these shifts, dashboard analysis before LLMs remained limited in its ability to provide contextual insights and advanced data interpretations, offering a retrospective view of business performance without predictive or prescriptive capabilities.
The following are the basic capabilities of descriptive analytics:
Defining Visualization
Descriptive analytics explains visualizations like charts, graphs, and tables, helping users quickly grasp key insights. However, this traditionally requires manually describing insights derived from SQL queries, which demands analytics expertise and knowledge of SQL.
Trend Analysis
By identifying patterns over time, descriptive analytics helps businesses understand historical performance and predict future trends, making it critical for strategic planning and decision-making.
However, traditional analysis of Business Intelligence (BI) dashboards may struggle to identify intricate patterns within vast datasets, providing inaccurate results that can critically impact business decisions.
Reporting
Reports developed through descriptive analytics summarize business performance. These reports are essential for documenting and communicating insights across the organization.
However, extracting insights from dashboards and presenting them in an understandable format can take time and is prone to human error, particularly when dealing with large volumes of data.
LLMs: A Game-Changer for Business Intelligence Dashboards
Advanced Query Handling
Imagine you would want to know “What were the top-selling products last quarter?” Conventionally, data analysts would write an SQL query, or create a report in a Business Intelligence (BI) tool to find the answer. Wouldn’t it be easier to ask those questions in natural language?
LLMs enable users to interact with dashboards using natural language queries. This innovation acts as a bridge between natural language and complex SQL queries, enabling users to engage in a dialogue, ask follow-up questions, and delve deeper into specific aspects of the data.
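As an illustration of that bridge, the sketch below uses the OpenAI Python SDK to translate a natural language question into SQL; the schema string and model name are placeholders, not part of any specific BI product.

# Illustrative natural-language-to-SQL bridge (schema and model are placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
SCHEMA = "orders(order_id, product, quantity, revenue, order_date)"

def nl_to_sql(question: str) -> str:
    prompt = (
        f"Table schema: {SCHEMA}\n"
        f"Write one SQL query that answers: {question}\n"
        "Return only the SQL."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Example: nl_to_sql("What were the top-selling products last quarter?")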
Improved Visualization Descriptions
Advanced Business Intelligence (BI) tools integrated with LLMs offer natural language interaction and automatic summarization of key findings. They can automatically generate narrative summaries, identify trends, and answer questions for complex data sets, offering a comprehensive view of business operations and trends without any hassle and with minimal effort.
Predictive Insights
With the integration of a domain-specific Large Language Model (LLM), dashboard analysis can be expanded to offer predictive insights, enabling organizations to leverage data-driven decision-making, optimize outcomes, and gain a competitive edge.
Dashboards supported by Large Language Models (LLMs) utilize historical data and statistical methods to forecast future events. Hence, descriptive analytics goes beyond “what happened” to “what happens next.”
Prescriptive Insights
Beyond prediction, descriptive analytics powered by LLMs can also offer prescriptive recommendations, moving from “what happens next” to “what to do next.” By considering numerous factors, preferences, and constraints, LLMs can recommend optimal actions to achieve desired outcomes.
The Copilot integration in Power BI offers advanced Business Intelligence (BI) capabilities, allowing you to ask Copilot for summaries, insights, and questions about visuals in natural language. Power BI has truly paved the way for unparalleled data discovery from uncovering insights to highlighting key metrics with the power of Generative AI.
Here is how you can get started using Power BI with Copilot integration;
Step 1
Open Power BI and create a workspace. (To use Copilot, you need to select a workspace that uses a Power BI Premium capacity or a paid Microsoft Fabric capacity.)
Step 2
Upload your business data from various sources. You may need to clean and transform your data as well to gain better insights. For example, a sample ‘sales data for hotels and resorts’ is used here.
Uploading data
Step 3
Use Copilot to unleash the potential insights of your data.
Start by creating reports in the Power BI service/Desktop. Copilot allows the creation of insightful reports for descriptive analytics by just using the requirements that you can provide in natural language.
For example: Here a report is created by using the following prompt:
An example of a report creation prompt using Microsoft Copilot – Source: Copilot in Power BI Demo
Copilot has created a report for the customer profile that includes the requested charts and slicers and is also fully interactive, providing options to conveniently adjust the outputs as needed.
An example of a Power BI report created using Microsoft Copilot – Source: Copilot in Power BI Demo
Not only this, but you can also ask analysis questions about the reports as explained below.
An example of asking analysis question from Microsoft Copilot – Source: Copilot in Power BI Demo
Copilot responds by adding a new page to the report. It explains the ‘main drivers for repeat customer visits’ by using advanced analysis capabilities to find key influencers for variables in the data. As a result, it can be seen that the ‘Purchased Spa’ service has the biggest influence on customer returns, followed by the ‘Rented Sports Equipment’ service.
An example of asking analysis questions from Microsoft Copilot – Source: Copilot in Power BI Demo
Moreover, you can ask to include, exclude, or summarize any visuals or pages in the generated reports. Other than generating reports, you can even refer to your existing dashboard to question or summarize the insights or to quickly create a narrative for any part of the report using Copilot.
Below you can see how the Copilot has generated a fully dynamic narrative summary for the report, highlighting the useful insights from data along with proper citation from where within the report the data was taken.
An example of narrative generation by Microsoft Power BI Copilot – Source: Copilot in Power BI Demo
Microsoft Copilot simplifies Data Analysis Expressions (DAX) by generating and editing these complex formulas. In Power BI, navigate to the ‘Quick Measure’ button in the calculations section of the Home tab. (If you do not see ‘Suggestions with Copilot’, you may enable it in settings; otherwise, you may need to ask your Power BI administrator to enable it.)
Quick measures are predefined measures, eliminating the need to write your own DAX syntax. They are generated automatically from the natural language input you provide in the dialog box, execute a series of DAX commands in the background, and display the outcomes for use in your report.
Quick Measure – Suggestions with Copilot
In the example below, Copilot suggests a quick measure based on the data and generates the corresponding DAX formula. If you find the suggested measure satisfactory, simply click the “Add” button to incorporate it into your model.
An example of DAX generation using Quick Measure – Source: Microsoft Learn
With clear, well-structured prompts, you can do much more with Copilot, from asking questions about your data to generating more insightful reports for your BI dashboards.
Hence, we can say that Power BI with Copilot has proven to be the transformative force in the landscape of data analytics, reshaping how businesses leverage their data’s potential.
Embracing the LLM-led Era in Business Intelligence
Descriptive analytics is fundamental to Business Intelligence (BI) dashboards, providing essential insights through data aggregation, visualization, trend analysis, and reporting.
The integration of Large Language Models enhances these capabilities by enabling advanced query handling, improving visualization descriptions, and reporting, and offering predictive and prescriptive insights.
This new LLM-led era in Business Intelligence (BI) is transforming the dynamic landscape of data analytics, offering a glimpse into a future where data-driven insights empower organizations to make informed decisions and gain a competitive edge.