Planning in Microsoft Fabric IQ for SQL Developers

If you work with SQL in Fabric, you already know the pattern. Reporting data lands in Fabric SQL or Lakehouse tables, semantic models sit on top, and Power BI turns that into something business users understand. Planning has always been the odd one out, usually living in a separate tool that IT feeds with exports and integrations. Not anymore…

Announced today at FabCon (March 18th, 2026)
If you haven’t already, check out Arun Ulag’s hero blog “FabCon and SQLCon 2026: Unifying databases and Fabric on a single, complete platform” for a complete look at all of our FabCon and SQLCon announcements across both Fabric and our database offerings. 

With Planning in Microsoft Fabric IQ, that separation disappears. Planning now sits directly on top of the same Fabric SQL and semantic models that developers and data engineers already maintain.

Ref: Introducing Planning in Microsoft Fabric IQ: From historical data to forecasting the future | Microsoft Fabric Blog | Microsoft Fabric

For SQL professionals, this unlocks a much cleaner architecture, less integration work and more predictable data flows.


Planning Uses Your Existing Semantic Models

Planning in Fabric IQ reads business logic directly from Power BI semantic models. SQL developers no longer have to replicate definitions or maintain special “planning exports.”

If your measures, reference tables and dimensions are modeled correctly, Planning uses them as its foundation. This eliminates drift between planning logic and reporting logic, something that has always been painful in disconnected systems.


Writeback Lands in Fabric SQL

The biggest operational change for SQL developers is this: planning writeback does not land in a proprietary planning database. It lands in Fabric SQL tables that you control and can query.

This means:

  • Forecasts and budgets are stored as regular tables
  • Versioning and governance policies work the same way as other data
  • You can join planning data with operational data without a round trip to another tool
  • Downstream BI reports update automatically because the data lives in the same environment

A typical structure might look like:

SQL

SELECT
    d.CustomerKey,
    f.ForecastAmount,
    f.Version,
    a.ActualSales
FROM dbo.FinanceForecast f
    JOIN dbo.DimCustomer d ON d.CustomerKey = f.CustomerKey
    JOIN dbo.SalesActuals a ON a.CustomerKey = f.CustomerKey
WHERE f.Version = '2026-Base'

No more external APIs, sync jobs or file drops.


OneLake Makes Planning Data Instantly Available

Shortcuts and mirroring remove a major pain point for SQL developers. Data used by planners does not need a dedicated pipeline or internal copy. If the data is already in OneLake, Planning can use it immediately.

This avoids:

  • Daily ETL loads
  • Redundant staging areas
  • Manual reconciliation work

For SQL developers, this is more predictable, more consistent and easier to operate.


A Single Environment From Actuals to Forecasts

Traditionally, SQL teams have had to maintain two parallel worlds.
One world holds actuals and historical performance.
Another world holds budgets and planning data from a separate system.

Planning in Fabric IQ merges these worlds:

  • Actuals remain in Lakehouse and Fabric SQL
  • Plans and scenarios write back into Fabric SQL
  • Semantic models unify both
  • Power BI reports read everything from the same data estate

This reduces the number of moving parts and makes lineage, governance and validation much easier.


Planning as a New Input to AI and Automation

Planning introduces a new type of data for SQL developers to work with: intent data.
Targets, constraints, scenario assumptions and expected outcomes become tables that intelligent agents can read.

This shifts planning from an isolated workflow to a central part of automated decision support. For SQL developers, this means new opportunities to model features, feed scoring pipelines and support decision logic with richer context than just historical data.
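As a toy illustration of that idea, here is a hedged Python sketch that turns plan (“intent”) rows and actuals into simple variance features a scoring pipeline or agent could consume. The table and column names (`ForecastAmount`, `ActualSales`, and so on) echo the SQL example earlier and are purely illustrative:

```python
def variance_features(forecasts, actuals, version="2026-Base"):
    """Join plan ('intent') rows with actuals per customer and emit simple
    variance features. Rows are plain dicts; names are illustrative only."""
    actual_by_key = {row["CustomerKey"]: row["ActualSales"] for row in actuals}
    features = []
    for f in forecasts:
        if f["Version"] != version:
            continue  # only score against the requested plan version
        plan = f["ForecastAmount"]
        actual = actual_by_key.get(f["CustomerKey"], 0.0)
        features.append({
            "CustomerKey": f["CustomerKey"],
            "Plan": plan,
            "Actual": actual,
            "Variance": actual - plan,
            "AttainmentPct": (actual / plan * 100) if plan else None,
        })
    return features
```

In a real workload the two inputs would come from the Fabric SQL tables above rather than in-memory dicts, but the shape of the feature rows is the point: plan, actual, and the gap between them, side by side.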


Why SQL Developers Should Care

Planning in Fabric IQ is worth paying attention to because it simplifies several long-standing operational challenges:

  • No more pipeline maintenance to feed external planning tools
  • No more reconciliation work between planning and reporting
  • No more duplicated metrics in separate systems
  • No separate planning database that IT cannot fully control
  • Writeback is now a standard Fabric SQL operation
  • Planning logic aligns with semantic model logic

This is a major improvement for anyone who has had to support planning workflows while also maintaining clean, governed SQL environments.


Getting Started

Planning in Fabric IQ is available in preview. To experiment:

  1. Open a Fabric workspace
  2. Connect Planning to an existing semantic model
  3. Observe how writeback lands in Fabric SQL
  4. Integrate planning data with your existing T-SQL workload

From there, it becomes clear that planning is no longer an external dependency. It is part of the platform, and SQL developers can finally treat it like any other governed dataset.

Reversing a semantic model w/ incremental refresh using Claude

Ever faced the issue of having to download a semantic model from the Power BI service but getting stuck when incremental refresh has been implemented?

Usually it’s a rule of thumb to keep the original .pbix file to allow for any modifications in Power BI Desktop, but sometimes that file is lost, misplaced or otherwise no longer available. While there are some workarounds blogged out there, I thought I’d put Claude to the test converting a .bim file to a .pbip project. A .bim file is obtainable using Tabular Editor, which you can find as a free version here, or paid version here (additional features). Connect to the live semantic model and select Save As (model.bim).

Once the model.bim is on your computer, you can let Claude loose on it to convert it into a .pbip project that you can open and manage through Power BI Desktop.

NB!
Incremental refresh must be re-configured once the model is deployed to the service, and a full refresh is required.

Below is the complete prompt that I had Claude generate once we had worked through some hiccups. For reference, these are the error messages we hit along the way:

Property ‘datasetReference’ has not been defined and schema does not allow additional properties. Path ‘datasetReference’, line 3, position 21

Cannot read ‘C:\<path removed>\model.bim’. Missing required artifact ‘model.bim’.

DatasetDefinition: Required artifact is missing in ‘C:\<path removed>\definition.pbism’.

Only text with UTF8 encoding without BOM (byte order marks) is supported. Detected BOM: ‘UTF-8’
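A quick way to detect and fix that last error (the UTF-8 byte order mark) is a small Python helper. This is a minimal sketch, not part of the original prompt, and `strip_utf8_bom` is a name I made up:

```python
import codecs
from pathlib import Path

def strip_utf8_bom(path: str) -> bool:
    """Remove a UTF-8 BOM (b'\\xef\\xbb\\xbf') from a file in place.
    Returns True if a BOM was found and stripped, False otherwise."""
    raw = Path(path).read_bytes()
    if raw.startswith(codecs.BOM_UTF8):
        Path(path).write_bytes(raw[len(codecs.BOM_UTF8):])
        return True
    return False
```

Running it over every file in the generated project folder before opening the .pbip is a cheap sanity check against the BOM error above.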

Prompt begin: Convert a .bim file into a .pbip (Power BI Project) structure.

Analyze the model.bim file

Given the model.bim file (SSAS Tabular / Power BI semantic model JSON), create a valid .pbip project that opens in Power BI Desktop. Follow these exact rules:

Folder structure

<ProjectName>.pbip
<ProjectName>.SemanticModel/
    .platform
    definition.pbism
    model.bim
<ProjectName>.Report/
    .platform
    definition.pbir

File contents

<ProjectName>.pbip

{
  "version": "1.0",
  "artifacts": [
    {
      "report": {
        "path": "<ProjectName>.Report"
      }
    }
  ],
  "settings": {
    "enableAutoRecovery": true
  }
}

<ProjectName>.SemanticModel/definition.pbism

{
  "version": "1.0",
  "settings": {}
}

<ProjectName>.SemanticModel/.platform

{
  "$schema": "https://developer.microsoft.com/json-schemas/fabric/gitIntegration/platformProperties/2.0.0/schema.json",
  "metadata": {
    "type": "SemanticModel",
    "displayName": "<ProjectName>"
  },
  "config": {
    "version": "2.0",
    "logicalId": "<generate-a-new-guid>"
  }
}

<ProjectName>.Report/definition.pbir

{
  "version": "4.0",
  "datasetReference": {
    "byPath": {
      "path": "../<ProjectName>.SemanticModel"
    },
    "byConnection": null
  }
}

<ProjectName>.Report/.platform

{
  "$schema": "https://developer.microsoft.com/json-schemas/fabric/gitIntegration/platformProperties/2.0.0/schema.json",
  "metadata": {
    "type": "Report",
    "displayName": "<ProjectName>"
  },
  "config": {
    "version": "2.0",
    "logicalId": "<generate-a-different-guid>"
  }
}

<ProjectName>.SemanticModel/model.bim — the original .bim file, placed here unchanged.

Critical rules

  1. model.bim goes directly in the <ProjectName>.SemanticModel/ folder — NOT in a definition/ subfolder. The definition/ subfolder is only for TMDL format.
  2. All files must be UTF-8 without BOM (no byte order mark). Use new System.Text.UTF8Encoding(false) or equivalent.
  3. definition.pbism must NOT contain a datasetReference property — that property belongs only in .pbir files. The .pbism schema only allows version and settings.
  4. definition.pbir must reference the SemanticModel via relative path using "byPath": { "path": "../<ProjectName>.SemanticModel" }.
  5. The .pbip file is the entry point — users double-click this to open in Power BI Desktop.
  6. The .platform files contain Fabric Git integration metadata. The logicalId GUIDs are placeholders that get regenerated on deployment.
  7. Derive <ProjectName> from the "name" property at the root of the .bim JSON.
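The folder structure and file contents above can also be scaffolded with a short Python script instead of by hand. This is a hedged sketch: `scaffold_pbip` is my own helper name, and the JSON payloads simply mirror the templates shown earlier. Rule 2's no-BOM requirement is satisfied because Python's `encoding="utf-8"` never writes a byte order mark:

```python
import json
import uuid
from pathlib import Path

PLATFORM_SCHEMA = ("https://developer.microsoft.com/json-schemas/fabric/"
                   "gitIntegration/platformProperties/2.0.0/schema.json")

def scaffold_pbip(project: str, bim_path: str, out_dir: str = ".") -> None:
    """Create the .pbip folder structure described above around an existing model.bim."""
    root = Path(out_dir)
    sm = root / f"{project}.SemanticModel"
    rp = root / f"{project}.Report"
    sm.mkdir(parents=True, exist_ok=True)
    rp.mkdir(parents=True, exist_ok=True)

    def write(path: Path, obj: dict) -> None:
        # Python's "utf-8" codec emits no BOM, which satisfies rule 2.
        path.write_text(json.dumps(obj, indent=2), encoding="utf-8")

    # Entry point (rule 5): this is the file users double-click.
    write(root / f"{project}.pbip", {
        "version": "1.0",
        "artifacts": [{"report": {"path": f"{project}.Report"}}],
        "settings": {"enableAutoRecovery": True},
    })
    # Rule 3: no datasetReference in the .pbism — only version and settings.
    write(sm / "definition.pbism", {"version": "1.0", "settings": {}})
    write(sm / ".platform", {
        "$schema": PLATFORM_SCHEMA,
        "metadata": {"type": "SemanticModel", "displayName": project},
        "config": {"version": "2.0", "logicalId": str(uuid.uuid4())},
    })
    # Rule 4: the report references the semantic model by relative path.
    write(rp / "definition.pbir", {
        "version": "4.0",
        "datasetReference": {
            "byPath": {"path": f"../{project}.SemanticModel"},
            "byConnection": None,
        },
    })
    write(rp / ".platform", {
        "$schema": PLATFORM_SCHEMA,
        "metadata": {"type": "Report", "displayName": project},
        "config": {"version": "2.0", "logicalId": str(uuid.uuid4())},
    })
    # Rule 1: model.bim goes directly in the SemanticModel folder, unchanged.
    (sm / "model.bim").write_bytes(Path(bim_path).read_bytes())
```

Per rule 7, you would derive `project` from the "name" property at the root of the .bim JSON before calling the helper.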

If the model uses incremental refresh

If the .bim contains a table with refreshPolicy and policyRange partitions, and the goal is to convert to standard import mode:

  • Remove the refreshPolicy object from the table
  • Replace all policyRange partitions with a single "type": "m" partition containing the M query from the refreshPolicy.sourceExpression, but with the RangeStart/RangeEnd filter removed
  • Remove the RangeStart and RangeEnd tables from model.tables
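Those three bullet points can be sketched as a small transformation over the parsed .bim JSON. This is an illustrative sketch rather than the exact prompt output; in particular, stripping the RangeStart/RangeEnd filter line out of the M expression itself varies per model and is left as a noted follow-up step:

```python
import json

def to_import_mode(model: dict) -> dict:
    """Convert an incremental-refresh .bim model dict to standard import mode:
    drop refreshPolicy, collapse policyRange partitions into one M partition,
    and remove the RangeStart/RangeEnd parameter tables."""
    tables = []
    for table in model.get("model", {}).get("tables", []):
        # Remove the RangeStart and RangeEnd tables from model.tables.
        if table.get("name") in ("RangeStart", "RangeEnd"):
            continue
        policy = table.pop("refreshPolicy", None)  # remove the refreshPolicy object
        if policy is not None:
            # Replace all policyRange partitions with a single M partition built
            # from the policy's source expression. Editing the expression to drop
            # the RangeStart/RangeEnd filter is model-specific and not done here.
            table["partitions"] = [{
                "name": table["name"],
                "mode": "import",
                "source": {"type": "m", "expression": policy["sourceExpression"]},
            }]
        tables.append(table)
    model.setdefault("model", {})["tables"] = tables
    return model
```

You would run this over `json.load`-ed model.bim content before handing the file to the scaffolding step, then re-serialize it as UTF-8 without BOM.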

Let me know in the comments if you find this useful, not working at all, or any experiences in between 🙂

From Insight to Action Inside Microsoft 365

Turning Data into Everyday Decisions with Microsoft 365

In today’s business landscape, the true value of data lies not just in its collection, but in its ability to drive timely, informed action. Yet, for many organizations, the journey from analytical insight to real-world impact is often slowed by disconnected tools and siloed workflows. What if your teams could access the latest business intelligence right where they work without ever leaving their core productivity apps?

With Microsoft Fabric and Microsoft 365, this vision becomes reality. By embedding data insights directly into familiar tools like Excel, Teams, and Outlook, organizations empower employees at every level to make smarter decisions, collaborate seamlessly, and respond proactively to changing conditions. No more toggling between dashboards and emails; actionable intelligence is now woven into the very fabric (see what I did there? 😏) of daily operations.

This blog post explores how integrating analytics into everyday workflows transforms not only how decisions are made, but also how organizations build a resilient, data-driven culture. Through real-world examples and practical strategies, discover how you can bridge the gap between insight and action, fueling agility, innovation, and sustained business growth.

Embedding Data Insights Directly into Daily Workflows

As organizations look to bridge the gap between analytical insights and daily decision-making, Microsoft Fabric empowers teams by seamlessly integrating data flows from OneLake through Power BI and directly into familiar Microsoft 365 applications such as Excel, Teams, and Outlook. This connected experience ensures that actionable intelligence is available at every touchpoint where work happens, streamlining collaboration and enabling users to embed dashboards, visualizations, and data-driven recommendations into their everyday workflows.

To maximize adoption, leaders and managers should prioritize hands-on training, showcase quick wins within business units, and encourage a culture where employees regularly consult and share insights surfaced in their core productivity tools. By embedding analytics within the fabric (oops, not…) of daily operations, companies accelerate the translation of insights into strategic action, fueling a more agile, informed, and data-driven organization.

Check out some of the public case studies that showcase this approach:

Heathrow Airport: Data-Driven Operations with Microsoft 365 and Power BI

Heathrow Airport leverages Power BI, embedded within Microsoft 365 tools, to provide real-time operational dashboards accessible to staff across departments. This integration enables instant access to current metrics and supports agile decision-making in fast-paced airport environments.

Heathrow prepares rather than reacts: uses data to deliver airport calm | Microsoft Customer Stories

Marks & Spencer: Empowering Employees with Embedded Analytics

Retail giant Marks & Spencer uses Microsoft Fabric’s data pipelines and Power BI to embed relevant business insights directly into Teams and Outlook. This approach helps store managers and staff receive timely updates and analytics, improving customer service and operational efficiency.

UK retailer, Marks and Spencer, uses Azure Synapse Analytics and Power BI to drive powerful insights | Microsoft Customer Stories

Telstra: Streamlining Field Operations with Automated Insights

Australian telecom leader Telstra connects data sources using Microsoft Fabric and OneLake, delivering up-to-date analytics via Power BI dashboards within Microsoft 365 applications. Automated refreshes and workflow triggers ensure that field teams always have the latest insights for customer service and maintenance tasks.

City of London: Predictive Analytics for Public Services

The City of London Corporation integrates predictive analytics into routine communications with Microsoft 365 apps. By enabling feedback loops and tailored dashboards, different departments improve service delivery and strategic planning based on actionable, up-to-date data.

Using predictive analytics in local public services | Local Government Association

Driving Proactive Insights and Continuous Business Impact

Building on this momentum, organizations should also leverage Microsoft Fabric’s robust automation features, such as scheduled data refreshes and workflow triggers, to ensure insights remain current and relevant as business conditions evolve. By connecting data sources in OneLake with Power BI, teams can automatically surface the latest operational metrics, customer feedback, and performance trends directly inside their Microsoft 365 environment. This proactive approach empowers employees to make informed decisions faster, supports cross-functional alignment, and fosters continuous improvement. Ultimately, the integration of Fabric with Microsoft 365 not only democratizes access to data but also drives sustained business impact by turning everyday interactions into opportunities for insight-driven action.

Looking ahead, organizations can further amplify these benefits by fostering close collaboration between IT and business stakeholders to identify high-impact scenarios where embedded analytics can streamline processes and drive measurable improvements. Encourage feedback loops and iterative enhancements within Microsoft 365, such as customizing dashboards for different roles or integrating predictive analytics into routine communications. As adoption matures, businesses not only gain from faster, more accurate decision-making but also build a culture of continuous learning, where actionable data is woven into the very fabric (oops, I did it again) of their daily operations and strategic planning.

Cleveland Clinic adopted Microsoft Power BI and Teams

The clinic monitors operational performance and patient outcomes, resulting in faster response times and improved care coordination.

Microsoft PowerPoint – BIAS-2022 Presentation – Mark Ruffing.pptx

Sustaining Momentum: Building a Resilient Data Culture for Long-Term Success

To sustain and scale these gains, organizations should invest in ongoing education, governance frameworks, and robust support structures that empower users at all levels to harness the full potential of integrated analytics within Microsoft 365. By cultivating data champions across departments and encouraging best-practice sharing, companies can drive widespread engagement and innovation. This continuous reinforcement ensures that as new features and use cases emerge within Microsoft Fabric and the broader Microsoft 365 suite, teams remain agile and equipped to extract maximum value from their data assets, transforming every interaction into an opportunity for business growth and competitive differentiation.

As Microsoft Fabric’s capabilities continue to evolve, organizations poised for long-term success will embrace a proactive mindset: experimenting with advanced AI integrations, tailoring analytics for emerging business needs, and regularly revisiting their data strategies to ensure alignment with broader digital transformation goals. By facilitating ongoing dialogue between business leaders, IT professionals, and end users, companies can adapt swiftly to new opportunities and challenges, embedding a resilient data culture that not only supports current operations but also lays the groundwork for future innovation. This commitment to continuous improvement and cross-functional engagement transforms Microsoft 365 from a suite of productivity tools into a dynamic engine for insight-driven growth, ensuring that every strategic initiative is grounded in timely, actionable intelligence.

Siemens: Accelerating Digital Transformation Together

Together, the companies optimize supply chain processes, driving efficiency and innovation across Siemens’ global operations.

Microsoft and Siemens: Accelerating Digital Transformation Together | Microsoft Community Hub

Key Points:

  • Embedded Analytics: Microsoft Fabric enables organizations to deliver dashboards, visualizations, and recommendations directly into Microsoft 365 apps, making insights accessible and actionable for all users.
  • Adoption Strategies: Success depends on hands-on training, showcasing quick wins, and encouraging a culture of regular data consultation and sharing.
  • Automation & Proactivity: Features like scheduled data refreshes and workflow triggers ensure that insights remain current, supporting agile and informed decision-making.


2.000 members milestone

It’s only been a couple of months since I took over the reins from co-partner Erik Svensen (t|l) for the Danish Power BI User Group. But even just a few months in, I see and appreciate all the hard work and effort Erik has put into this user group. It’s because of Erik’s relentless efforts over the past four to five years that I can now announce that the user group has 2.000 members!

Bravo Erik – Well done!

Power BI Community Tour

Blog post originally in Danish 🙂

In a little under a month (25/4–27/4), the Power BI bus rolls out and makes its first stop on the Power BI Community Tour 2022. More precisely, we start in Lyngby, continue the next day to Odense, and wrap up the Tour in Aarhus. So depending on where you are in the country, there will be a good opportunity to attend.

At each stop we will serve up introductions and best practices within the different elements of Power BI. With talks on Introduction to Power BI, Data Loading & Mashup, Data Modeling & DAX, Data Visualization, and Distribution and Sharing, all corners will be covered.

The talks are aimed at beginners or lightly experienced users of Power BI, and they can give you a more confident start on your Power BI journey.

  • Have you used Power BI, but need to know how it all fits together?
  • Have you imported some data into Power BI, but need to know how to organize your tables?
  • Have you built a Power BI report, but need to know how best to visualize the data?
  • Have you developed some reports, but need to know how to share them with your colleagues?
  • Have you never used Power BI, but would like to know more about why it is one of the most popular reporting and self-service BI tools?

If you answer yes to one or more of these questions, the Power BI Community Tour is for you. If not, please pass this information along to relevant colleagues!

Sign up here: https://lnkd.in/eVzcBMvp

A big thank-you to JDM, Kapacity, Microsoft and Seges for providing venues and catering.