Unveiling Microsoft Fabric’s Impact on Power BI Developers and Analysts

Microsoft Fabric is a new platform designed to bring together the data and analytics features of Microsoft products like Power BI and Azure Synapse Analytics into a single SaaS product. Its goal is to provide a smooth and consistent experience for both data professionals and business users, covering everything from data ingestion to gaining insights. A new data platform comes with new keywords and terminologies; to get more familiar with some of the new terms in Microsoft Fabric, check out this blog post.

As mentioned in one of my previous posts, Microsoft Fabric is built upon the Power BI platform; therefore, we expect it to provide ease of use, strong collaboration, and wide integration capabilities. As Microsoft Fabric gains more attention in the market, we see more and more organisations investigating the possibility of migrating their existing data platforms to Microsoft Fabric. But what does this mean for seasoned Power BI developers? What about Power BI professional users such as data analysts and business analysts? In this post, I endeavour to answer those questions.

I have been blogging predominantly around Microsoft data platforms, and especially Power BI, since 2013, but I have never written about the history of Power BI. I believe it makes sense to touch upon that history to better understand the size of the Power BI user base and how introducing a new data platform that includes Power BI can affect it. A quick search on the internet surfaces some interesting facts, so let’s take a moment to talk about them.

The history of Power BI

Power BI started as a top-secret project at Microsoft in 2006, led by Thierry D’Hers and Amir Netz, who wanted to build a better way to analyse data using Microsoft Excel. They initially called their project “Gemini”.

In 2009, they released PowerPivot, a free extension for Excel that supports in-memory data processing, making it faster and easier to do calculations and create reports. PowerPivot quickly became popular among Excel users, but it had some limitations. For example, it was hard to share large Excel files with others, and it was not possible to update the data automatically.

In 2015, Microsoft combined PowerPivot with another extension called Power Query, which lets users get data from different sources and clean it up. They also added a cloud service that lets users publish and share their reports online. They called this new product Power BI, which stands for Power Business Intelligence.

In the past few years, Power BI has attracted a lot of attention in the market and improved significantly to cover more use cases and business requirements, from data transformation, data modelling, and data visualisation to combining all these capabilities with the power of AI and ML to provide predictive and prescriptive analytics.

Who are Power BI Users?

Since its birth, Power BI has become one of the most popular and powerful data analysis and data visualisation tools in the world, used by a wide variety of users. In the past few years, Power BI has generated many new roles in the job market, such as Power BI developer, Power BI consultant, Power BI administrator, and Power BI report writer, as well as making life easier for many others, such as data analysts and business analysts. With Power BI, data analysts can efficiently analyse data and make recommendations based on their findings. Business analysts can focus on the practical changes resulting from their analysis of the data and show their findings to the business much more quickly than before. As a result, millions of users interact with Power BI on a daily basis in many ways. So, introducing a new data platform that sort of “swallows” Power BI may sound daunting to those whose daily job relates to creating content, maintaining solutions, or administering Power BI environments. For many, the fear is real. But should developers and analysts be afraid of Microsoft Fabric? The short answer is “Absolutely not!”. Does it change the way we used to work with Power BI? Well, it depends.

To answer these questions, we first need to know who Power BI users are and how they interact with the platform.

Continue reading “Unveiling Microsoft Fabric’s Impact on Power BI Developers and Analysts”

Integrating Power BI with Azure DevOps (Git), part 2: Local Machine Integration

This is the second part of a series of blog posts showing how to integrate Power BI with Azure DevOps, a cloud platform for software development. The previous post gave a brief history of source control systems, which help developers manage code changes. It also explained what Git is (a fast and flexible distributed source control system) and why it is useful. Finally, it introduced the initial configurations required in Azure DevOps and explained how to integrate the Power BI (Fabric) Service with Azure DevOps.

This blog post explains how to synchronise an Azure DevOps repository with your local machine to integrate your Power BI Projects with Azure DevOps. Before we start, we need to know what a Power BI Project is and how we can create it.

What is a Power BI Project (Developer Mode)?

Power BI Project (*.PBIP) is a new file format for Power BI Desktop that was announced in May 2023 and made available for public preview in June 2023. It allows us to save our work as a project, which consists of a folder structure containing individual text files that define the report and dataset artefacts. This enables us to use source control systems, such as Git, to track changes, compare revisions, resolve conflicts, and review changes. It also enables us to use text editors, such as Visual Studio Code, to edit the artefact definitions more productively and programmatically. Additionally, it supports CI/CD (continuous integration and continuous delivery), where we submit changes to a series of quality gates before applying them to the production system.
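
For illustration, here is a simplified sketch of the folder structure Power BI Desktop creates when we save a report named Sales as a Power BI Project. The exact file layout is based on the public preview and may change, and the report name is hypothetical:

    Sales.pbip                    <- pointer file we open in Power BI Desktop
    Sales.Report/
        definition.pbir           <- report definition and dataset reference
        report.json               <- report layout and visuals
    Sales.Dataset/
        definition.pbidataset     <- dataset definition
        model.bim                 <- tabular model: tables, relationships, measures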

PBIP files differ from the regular Power BI Desktop files (PBIX), which store the report and dataset artefacts as a single binary file, making integration with source control systems, text editors, and CI/CD pipelines difficult. PBIP aims to overcome these limitations and provide a more developer-friendly experience for Power BI Desktop users.
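
As a minimal sketch of that developer-friendly experience, and assuming a repository already set up as described in the previous post, the following Git commands clone an Azure DevOps repository to the local machine and push a Power BI Project saved inside it; the organisation, project, and repository names are placeholders:

    # Clone the Azure DevOps repository to the local machine
    git clone https://dev.azure.com/<organisation>/<project>/_git/<repository>
    cd <repository>

    # After saving the report as a Power BI Project (*.PBIP) inside the
    # repository folder, stage, commit, and push the changes
    git add .
    git commit -m "Add Sales report as a Power BI Project"
    git push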

Since this feature is still in public preview at the time of writing this blog post, we have to enable it from the Power BI Desktop Options and settings.

Enable Power BI Project (Developer Mode) (Currently in Preview)

As mentioned, we first need to enable the Power BI Project (Developer Mode) feature, introduced for public preview in the June 2023 release of Power BI Desktop. This feature allows us to save our work as *.PBIP files, which deconstruct the legacy Power BI report files (*.PBIX) into well-organised folders and files.
With this feature, we can:

  • Edit individual components of our Power BI file, such as data sources, queries, the data model, and visuals
  • Use any text editor or IDE to edit our Power BI file
  • Compare and merge changes
  • Collaborate with other developers on the same Power BI file

To enable Power BI Project (Developer Mode), follow these steps in Power BI Desktop:

Continue reading “Integrating Power BI with Azure DevOps (Git), part 2: Local Machine Integration”

Microsoft Fabric: A SaaS Analytics Platform for the Era of AI

Microsoft Fabric is a new and unified analytics platform in the cloud that integrates various data and analytics services, such as Azure Data Factory, Azure Synapse Analytics, and Power BI, into a single product that covers everything from data movement to data science, real-time analytics, and business intelligence. Microsoft Fabric is built upon the well-known Power BI platform, which provides industry-leading visualisation and AI-driven analytics that enable business analysts and users to gain insights from data.

Basic concepts

On May 23rd 2023, Microsoft announced a new product called Microsoft Fabric at the Microsoft Build conference. Microsoft Fabric is a SaaS analytics platform that covers end-to-end business requirements. As mentioned earlier, it is built upon the Power BI platform and extends the capabilities of Azure Synapse Analytics to all analytics workloads. This means that Microsoft Fabric is an enterprise-grade analytics platform. But wait, what does “SaaS analytics platform” actually mean?

What is an analytics platform?

An analytics platform is a comprehensive software solution designed to facilitate data analysis to enable organisations to derive meaningful insights from their data. It typically combines various tools, technologies, and frameworks to streamline the entire analytics lifecycle, from data ingestion and processing to visualisation and reporting. Here are some key characteristics you would expect to find in an analytics platform:

  1. Data Integration: The platform should support integrating data from multiple sources, such as databases, data warehouses, APIs, and streaming platforms. It should provide capabilities for data ingestion, extraction, transformation, and loading (ETL) to ensure a smooth flow of data into the analytics ecosystem.
  2. Data Storage and Management: An analytics platform needs to have a robust and scalable data storage infrastructure. This could include data lakes, data warehouses, or a combination of both. It should also support data governance practices, including data quality management, metadata management, and data security.
  3. Data Processing and Transformation: The platform should offer tools and frameworks for processing and transforming raw data into a usable format. This may involve data cleaning, denormalisation, enrichment, aggregation, or advanced analytics on large data volumes, including streaming IoT (Internet of Things) data. Handling large volumes of data efficiently is crucial for performance and scalability.
  4. Analytics and Visualisation: A core aspect of an analytics platform is its ability to perform advanced analytics on the data. This includes providing a wide range of analytical capabilities, such as descriptive, diagnostic, predictive, and prescriptive analytics with ML (Machine Learning) and AI (Artificial Intelligence) algorithms. Additionally, the platform should offer interactive visualisation tools to present insights in a clear and intuitive manner, enabling users to explore data and generate reports easily.
  5. Scalability and Performance: Analytics platforms need to be scalable to handle increasing volumes of data and user demands. They should have the ability to scale horizontally or vertically. High-performance processing engines and optimised algorithms are essential to ensure efficient data processing and analysis.
  6. Collaboration and Sharing: An analytics platform should facilitate collaboration among data analysts, data scientists, and business users. It should provide features for sharing data assets, analytics models, and insights across teams. Collaboration features may include data annotations, commenting, sharing dashboards, and collaborative workflows.
  7. Data Security and Governance: As data privacy and compliance become increasingly important, an analytics platform must have robust security measures in place. This includes access controls, encryption, auditing, and compliance with relevant regulations such as GDPR or HIPAA. Data governance features, such as data lineage, data cataloguing, and policy enforcement, are also crucial for maintaining data integrity and compliance.
  8. Flexibility and Extensibility: An ideal analytics platform should be flexible and extensible to accommodate evolving business needs and technological advancements. It should support integration with third-party tools, frameworks, and libraries to leverage additional functionality.
  9. Ease of Use: Usability plays a significant role in an analytics platform’s adoption and effectiveness. It should have an intuitive user interface and provide user-friendly tools for data exploration, analysis, and visualisation. Self-service capabilities empower business users to access and analyse data without heavy reliance on IT or data specialists.

These characteristics collectively enable organisations to harness the power of data and make data-driven decisions. An effective analytics platform helps unlock insights, identify patterns, discover trends, and drive innovation across various domains and industries.

Continue reading “Microsoft Fabric: A SaaS Analytics Platform for the Era of AI”

Thin Reports, Report Level Measures vs Data Model Measures

The previous post explained what Thin Reports are, why we should care about them, and how we can create them. This post focuses on a more specific topic: report-level measures. We discuss what report-level measures are, when and why we need them, and how we create them.

If you are not sure what a Thin Report is, I suggest you check out my previous blog post before reading this one.

What are report-level measures?

Report-level measures are measures created by the report writers within a Thin Report; hence, they are available only within the hosting report. These measures are not written back to the underlying dataset and are therefore not available to any other reports.

In contrast, data model measures are created by data modellers at the dataset level and are independent of any specific report.

Why and when do we need report-level measures?

It is common in real-world scenarios for the business to require a report urgently while the nuts and bolts of that report have not been created in the underlying dataset yet. For instance, the business needs to present a year-to-date sales analysis to the board, but the year-to-date sales measure does not exist in the dataset. The business analyst asks the Power BI developers to add the measure, but they are under the pump delivering other functionality, and adding a new measure is not even in their project delivery plan. Waiting for the developers to plan the required measure, go through the release process, and make it available in the dataset may simply take too long. This is when report-level measures come to the rescue: we can create the missing measure in the Thin Report itself and later share it with the developers to implement as a data model measure.
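
As an illustration, the missing year-to-date measure from this scenario could be created as a report-level measure with a short DAX expression; the table and column names below are hypothetical:

    -- A report-level measure created in the Thin Report, not in the dataset
    -- Sales[Amount] and 'Date'[Date] are illustrative names
    YTD Sales =
    TOTALYTD ( SUM ( Sales[Amount] ), 'Date'[Date] )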

Continue reading “Thin Reports, Report Level Measures vs Data Model Measures”