Microsoft Fabric: Terminologies and Personas Explained

In this blog post, I will explain some of the key concepts, personas, and terminologies related to Microsoft Fabric, a SaaS analytics platform for the era of AI. If you are not familiar with the basic concepts of SaaS analytics platforms and how Microsoft Fabric fits in, I recommend you read my previous blog post, where I explain them in detail.

Microsoft Fabric is an experience-based platform, meaning users interact with it through experiences tailored to their roles and personas. For example, a data engineer can use the Data Engineering experience to perform large-scale data transformation through the lakehouse. A data scientist can use the Data Science experience to develop AI models on a single foundation without data movement. A business analyst can use the Power BI experience to create and consume interactive reports and dashboards. And a business user can use the Data Activator experience to monitor data and automatically trigger actions when specific conditions are detected.
The Data Activator experience is in private preview and is not available for public use yet!

Microsoft Fabric Terminologies

To understand how Microsoft Fabric works, it is crucial to know some of the terminologies that are used in the platform. Some of them are existing terms that are also used in Power BI or Azure services, while some of them are new and specific to Microsoft Fabric. Here are some of the key terms that you should know:

  • Tenant: A tenant is a dedicated instance of Microsoft Fabric that is provisioned for an organisation or a department within an organisation. A tenant has its own set of users, groups, permissions, capacities, workspaces, items, and experiences. A Fabric tenant is associated with an Azure Active Directory (AAD) tenant, which is the directory an organisation gets when it signs up for a Microsoft cloud service such as Azure, Microsoft 365, Power BI, etc. AAD provides identity and access management for cloud applications. A tenant in Microsoft Fabric can only be accessed by users who belong to the same AAD tenant.
  • Capacity: Capacity is a term that refers to the amount of resources available to support a computing service. In the context of SaaS applications, capacity refers to the ability of the system to handle a certain amount of load or demand based on the required resources and infrastructure, such as compute power (CPU, RAM, etc.), storage, and network bandwidth. As explained in my previous post, Microsoft Fabric is a SaaS platform. So, from a Microsoft Fabric perspective, capacities are sets of resources allocated to a tenant to run analytics workloads. The capacities sit in a tenant, and the available resources can be shared by multiple workspaces or dedicated to a single workspace for better performance and isolation. Microsoft Fabric capacities are available in various F SKUs that offer different levels of resources and features. For more information about capacities and SKUs, see Microsoft Fabric Capacity and SKUs.
  • Workspace: A workspace is a logical container that holds a collection of items and artefacts. A workspace can have one or more owners who can manage its settings and permissions and one or more members who can access its items. A workspace can also be assigned to a capacity to run its analytics workloads. In Microsoft Fabric, workspaces are based on Power BI workspaces.

The above terms also apply to Power BI, so they have been used within the community for a long time. The hierarchy starts with the organisation’s Tenant(s); the purchased Capacities then become available within each tenant, and Workspaces are assigned to those capacities.

Continue reading “Microsoft Fabric: Terminologies and Personas Explained”

Datatype Conversion in Power Query Affects Data Modeling in Power BI

In my consulting experience working with customers using Power BI, many of the challenges that Power BI developers face come down to neglecting data types. Here are some common challenges that are the direct or indirect results of inappropriate data types and data type conversion:

  • Getting incorrect results while all calculations in your data model are correct.
  • Poorly performing data model.
  • Bloated model size.
  • Difficulties in configuring user-defined aggregations (agg awareness).
  • Difficulties in setting up incremental data refresh.
  • Getting blank visuals after the first data refresh in Power BI service.

In this blog post, I explain the common pitfalls so you can prevent issues that can be time-consuming to identify and fix later.

Background

Before we dive into the topic of this blog post, I would like to start with a bit of background. We all know that Power BI is not only a reporting tool. It is indeed a data platform supporting various aspects of business intelligence, data engineering, and data science. There are two languages we must learn to be able to work with Power BI: Power Query (M) and DAX. The purpose of the two languages is quite different. We use Power Query for data transformation and data preparation, while DAX is used for data analysis in the Tabular data model. Here is the point: the two languages have different data types.

The most common Power BI development scenarios start with connecting to the data source(s). Power BI supports hundreds of data sources. Most data source connections happen in Power Query (the data preparation layer in a Power BI solution) unless we connect live to a semantic layer such as an SSAS instance or a Power BI dataset. Many supported data sources have their own data types, and some don’t. For instance, SQL Server has its own data types, but CSV doesn’t. When the data source has data types, the mashup engine tries to map the source data types to the closest data types available in Power Query. Even when the source system has data types, they might not be fully compatible with the Power Query data types. For the data sources that do not support data types, the mashup engine tries to detect the data types based on the sample data loaded into the data preview pane in the Power Query Editor window. But there is no guarantee that the detected data types are correct. So, it is best practice to validate the detected data types anyway.
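To make this concrete, here is a minimal sketch of what an explicit type-validation step could look like in Power Query (M). The file path and the column names (OrderDate, Quantity, UnitPrice) are assumptions for illustration only:

let
    // The file path and column names below are illustrative assumptions
    // Connect to a CSV source; CSV files carry no data types, so every column arrives as text
    Source = Csv.Document(File.Contents("C:\Demo\Sales.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Explicitly ascribe the data types instead of relying on the automatically detected ones
    ChangedType = Table.TransformColumnTypes(
        PromotedHeaders,
        {{"OrderDate", type date}, {"Quantity", Int64.Type}, {"UnitPrice", Currency.Type}}
    )
in
    ChangedType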

Power BI uses the Tabular model data types when it loads the data into the data model. The data types in the data model may or may not be compatible with the data types defined in Power Query. For instance, Power Query has a Binary data type, but the Tabular model does not.

The following table shows Power Query’s data types, their representations in the Power Query Editor’s UI, their corresponding data types in the data model (DAX), and the internal data types in the xVelocity (Tabular model) engine:

Power Query and DAX (data model) data type mapping

As the above table shows, in Power Query’s UI, Whole Number, Decimal, Fixed Decimal and Percentage are all of type number in the Power Query engine. The type names in the Power BI UI also differ from their equivalents in the xVelocity engine. Let us dig deeper.
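As a quick illustration (a sketch of my own, not taken from the table above), the following expression ascribes the four numeric options to sample columns and then uses Table.Schema to show that the underlying Kind is number for all of them; only the ascribed type name differs:

let
    // Made-up sample data for illustration only
    Source = #table({"WholeNumber", "FixedDecimal", "Percentage", "Decimal"}, {{1, 19.99, 0.15, 3.14}}),
    // Ascribe the four numeric data types as they appear in the Power Query UI
    Typed = Table.TransformColumnTypes(
        Source,
        {{"WholeNumber", Int64.Type}, {"FixedDecimal", Currency.Type}, {"Percentage", Percentage.Type}, {"Decimal", type number}}
    ),
    // Table.Schema exposes the ascribed TypeName next to the underlying Kind, which is number for all four columns
    Schema = Table.SelectColumns(Table.Schema(Typed), {"Name", "TypeName", "Kind"})
in
    Schema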

Continue reading “Datatype Conversion in Power Query Affects Data Modeling in Power BI”

Slowly Changing Dimension (SCD) in Power BI, Part 2, Implementing SCD 1

I explained what SCD means in a Business Intelligence solution in my previous post. We also discussed that while we do not expect to handle SCD2 in a Power BI implementation, we can handle scenarios similar to SCD1. In this post, I explain how to do so.

Scenario

We have a retail company selling products. The company releases the list of products in Excel format, including list price and dealer price, every year. The product list is released on the first day of July when the financial year starts. We have to implement a Power BI solution that keeps the latest product data to analyse the sales transactions. The following image shows the Product list for 2013:

Products List 2013 in Excel

So each year, we receive an Excel file similar to the one shown above. The files are stored on a SharePoint Online site.

Scenario Explained

As the previous post explains, an SCD1 always keeps the current data by updating the old data with the new data. So an ETL process reads the data from the source, identifies the existing data in the destination table, inserts the new rows into the destination, updates the existing rows, and deletes the removed rows.

Here is why our scenario is similar to SCD1, with one exception:

  • We do not actually update the data in the Excel files, nor do we create an ETL process that reads the data from the Excel files, identifies the changes and applies them to an intermediary Excel file.
  • We must read the data from the source Excel files, keep the latest data while filtering out the old ones and load the data into the data model.

As you can see, while we take a very different implementation approach, the results are very similar, with one exception: we do not delete any rows.

Implementation

Here is what we should do to achieve the goal:

  • We get the data in Power Query Editor using the SharePoint Folder connector
  • We combine the files
  • We use the ProductNumber column to identify the duplicated products
  • We use the Reporting Date column to identify the latest dates
  • We only keep the latest rows (see the sketch after this list)
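Here is a minimal sketch of that de-duplication logic in Power Query, assuming the combined table from the previous steps is called #"Combined Files" and contains the ProductNumber and Reporting Date columns (the step and column names are assumptions based on the scenario):

let
    // #"Combined Files", ProductNumber and Reporting Date are assumed names from the scenario
    Source = #"Combined Files",
    // Sort so that the newest Reporting Date comes first for every product
    Sorted = Table.Sort(Source, {{"Reporting Date", Order.Descending}}),
    // Buffer the sorted table so Table.Distinct reliably keeps the first (latest) row it finds per product
    Buffered = Table.Buffer(Sorted),
    // Keep only one row per ProductNumber, which is now the latest one
    LatestRows = Table.Distinct(Buffered, {"ProductNumber"})
in
    LatestRows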

Getting Data from SharePoint Online Folder

As we get the data from multiple files stored on SharePoint Online, we have to use the SharePoint Folder connector. Follow these steps:

  1. Log in to SharePoint Online, navigate to the site holding the Product list Excel files and copy the site URL from the browser
Getting SharePoint Online Site URL
  2. From Get Data in Power BI Desktop, select the SharePoint Folder connector
  3. Click Connect
Connecting to SharePoint Online Folder from Power BI
  4. Paste the Site URL copied in step 1
  5. Click OK
Connecting to SharePoint Online Folder from Power BI using the SharePoint Folder connector
  6. Click Transform Data
Transforming data in Power Query Editor
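For reference, the query that these steps generate looks roughly like the following sketch; the site URL is a placeholder, and the file-name filter is an assumption based on how the yearly Product list files are named:

let
    // Placeholder site URL; the SharePoint Folder connector translates to SharePoint.Files behind the scenes
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/ProductLists", [ApiVersion = 15]),
    // Keep only the yearly Product list Excel files (the file-name pattern is an assumption)
    ProductFiles = Table.SelectRows(Source, each Text.StartsWith([Name], "Products List") and [Extension] = ".xlsx")
in
    ProductFiles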
Continue reading “Slowly Changing Dimension (SCD) in Power BI, Part 2, Implementing SCD 1”

Combining X Number of Rows in Power Query for Power BI, Excel and Power Query Online

A while back, I was working on a project involving getting data from Excel files. The Excel files contain the data in sheets and tables. Getting the data from the tables is easy. However, the data in the sheets has some commentary at the top of the sheet, then the column names, and then the data itself. Something like this:

Sample data

This layout is pretty consistent across many Excel files. The customer wants the commentary included in the column names when the data is imported into Power BI. So the final result must look like this:

Sample Data to be loaded into Power BI

The business requirement, though, is to combine the first 3 rows of data and promote them as the column names.

The Challenge

Let’s connect to the Excel file and look at the data in Power BI Desktop.

Connecting to sample data from Power BI Desktop

As you can see in the preceding image, Power BI, or more precisely, Power Query, sees the data in Table format. After we click the Transform Data button, this is what we get in Power Query Editor:

Connected to sample data from Power Query in Power BI Desktop

We all know that tables consist of Columns and Rows. The intersection of a column and a row is a Cell. What we need to do is concatenate the values of the cells from the first three rows. We also have to use a Space character to separate the value of each cell from the others.

Column, rows and cells in a Table in Power BI

In Power Query, we can get each row of data as a Record with the following syntax:

Table{RecordIndex}

In the above syntax, Table can be the result of the previous transformation step, and RecordIndex starts from 0. So to get the first row of the table in the preceding image, we use the following syntax:

#"Changed Type"{0}

Where the #"Changed Type" is the previous step. Here are the results of running the preceding expression:

Getting the first row of a Table
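As a side note, a single cell can be reached by adding the field (column) name to the record reference; the column name below is only an assumption for illustration:

#"Changed Type"{0}[Column1]  // Column1 is an assumed column name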

So we can get the second and third rows with similar expressions. The following image shows the entire code in the Advanced Editor:

Power Query expressions in Advanced Editor in Power BI Desktop

But how do we concatenate the values of the rows?
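Here is a minimal sketch of one possible way (not necessarily the approach taken in the rest of the post): take the field values of the first three rows, zip them so the values belonging to the same column sit together, combine each group with a space, and use the results as the new column names:

let
    // Assumes the previous step is #"Changed Type", as in the screenshots above
    Source = #"Changed Type",
    // Get the cell values of the first three rows as three lists
    Rows = List.Transform({0, 1, 2}, each Record.FieldValues(Source{_})),
    // Zip the lists so the values of the same column sit together, then combine them with a space
    NewNames = List.Transform(List.Zip(Rows), each Text.Combine(List.Transform(_, (v) => Text.From(v)), " ")),
    // Rename the columns to the combined names and drop the three rows that have now been promoted
    Renamed = Table.RenameColumns(Source, List.Zip({Table.ColumnNames(Source), NewNames})),
    Result = Table.Skip(Renamed, 3)
in
    Result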

Continue reading “Combining X Number of Rows in Power Query for Power BI, Excel and Power Query Online”