Data Saturday Denmark 2026: The Day 130 People Showed Up for a 250‑Person Event

Let’s talk about something uncomfortable because sometimes that’s the only way a community moves forward.

Data Saturday Denmark 2026 was, in many ways, spectacular. The sessions delivered. The conversations were buzzing. The energy was real. The people who came brought exactly the spirit that makes this community great.

Although only 18 attendees left live feedback, they all tell the same story: Awesomesauce!

But behind the scenes, something unexpected happened.
Something we need to talk about openly if we want to keep building this event in a sustainable, fair, and respectful way.


A fully booked event… on paper

Here’s what the numbers looked like before the big day:

  • Venue capacity: 220 seats
  • All tickets claimed
  • A 10% waiting list (22 people) established early
  • Additional requests still arriving in the final two weeks
  • 20 speakers + volunteers
  • Expected attendance: ~250 people

We were preparing for a packed event with a little slack and booked food for 180 people, since some attendees are there only in the morning, some only in the afternoon, and others all day.

And that would have been completely manageable, because…

Historically, the no‑show rate has been very predictable: 15–25%.

That’s what we’ve always planned for.
It’s normal for free community events, according to my talks and chats with other organizers.
And we’ve built that into our logistics every single year.

But this year?
Everything changed.


The reality: 95 No‑Shows

During the entire day only around 130 attendees walked in.
That meant 95 unclaimed badges!

This is not a rounding error.
This is not something we could have forecasted.
This is not within the normal margins.

It’s almost double the upper limit of our usual no‑show rate.


Why this was a problem (even if no one intended it to be)

This isn’t about pointing fingers. I could easily just post the picture of the unclaimed badges and was even encouraged to do so. But this is about the inevitable consequences of an unexpectedly massive no‑show rate:

  • People on the waiting list were blocked from attending
  • Catering and venue planning — funded by sponsors — was based on projected attendance
  • A large amount of food was prepared
  • And a large amount of food was wasted

Free events aren’t actually free.
Someone always pays the bill.


So, to be completely transparent, I’m adjusting

To protect the event and the community’s experience, we’re introducing a new attendance rule for 2027:

1️⃣ If you had a ticket but didn’t attend and didn’t cancel

→ You will go directly onto the waiting list for 2027, with tickets released to you only in the final week if spots remain.

2️⃣ If you had a ticket and cancelled in advance

→ You skip the waiting list and get a direct ticket for 2027.

3️⃣ If you were on the waiting list in 2026

→ You also skip the waiting list and get a direct ticket for 2027.

This isn’t about punishment, and it’s not about blame.
It’s about fairness, sustainability, and respecting everyone who wants to be part of the event.


The good news?

For those who did attend, the day was one of the strongest Data Saturdays I’ve hosted. The feedback was phenomenal. The atmosphere was everything I hoped for. And the community that showed up brought knowledge, curiosity, and generosity.

Here’s to a stronger, smarter, and more mindful Data Saturday Denmark 2027.

From Insight to Action Inside Microsoft 365

Turning Data into Everyday Decisions with Microsoft 365

In today’s business landscape, the true value of data lies not just in its collection, but in its ability to drive timely, informed action. Yet, for many organizations, the journey from analytical insight to real-world impact is often slowed by disconnected tools and siloed workflows. What if your teams could access the latest business intelligence right where they work without ever leaving their core productivity apps?

With Microsoft Fabric and Microsoft 365, this vision becomes reality. By embedding data insights directly into familiar tools like Excel, Teams, and Outlook, organizations empower employees at every level to make smarter decisions, collaborate seamlessly, and respond proactively to changing conditions. No more toggling between dashboards and emails; actionable intelligence is now woven into the very fabric (see what I did there? 😏) of daily operations.

This blog post explores how integrating analytics into everyday workflows transforms not only how decisions are made, but also how organizations build a resilient, data-driven culture. Through real-world examples and practical strategies, discover how you can bridge the gap between insight and action; fueling agility, innovation, and sustained business growth.

Embedding Data Insights Directly into Daily Workflows

As organizations look to bridge the gap between analytical insights and daily decision-making, Microsoft Fabric empowers teams by seamlessly integrating data flows from OneLake through Power BI and directly into familiar Microsoft 365 applications such as Excel, Teams, and Outlook. This connected experience ensures that actionable intelligence is available at every touchpoint where work happens, streamlining collaboration and enabling users to embed dashboards, visualizations, and data-driven recommendations into their everyday workflows. To maximize adoption, leaders and managers should prioritize hands-on training, showcase quick wins within business units, and encourage a culture where employees regularly consult and share insights surfaced in their core productivity tools. By embedding analytics within the fabric (oops, not…) of daily operations, companies accelerate the translation of insights into strategic action fueling a more agile, informed, and data-driven organization.

Check out some of the public case studies that demonstrate this approach:

Heathrow Airport: Data-Driven Operations with Microsoft 365 and Power BI

Heathrow Airport leverages Power BI, embedded within Microsoft 365 tools, to provide real-time operational dashboards accessible to staff across departments. This integration enables instant access to current metrics and supports agile decision-making in fast-paced airport environments.

Heathrow prepares rather than reacts: uses data to deliver airport calm | Microsoft Customer Stories

Marks & Spencer: Empowering Employees with Embedded Analytics

Retail giant Marks & Spencer uses Microsoft Fabric’s data pipelines and Power BI to embed relevant business insights directly into Teams and Outlook. This approach helps store managers and staff receive timely updates and analytics, improving customer service and operational efficiency.

UK retailer, Marks and Spencer, uses Azure Synapse Analytics and Power BI to drive powerful insights | Microsoft Customer Stories

Telstra: Streamlining Field Operations with Automated Insights

Australian telecom leader Telstra connects data sources using Microsoft Fabric and OneLake, delivering up-to-date analytics via Power BI dashboards within Microsoft 365 applications. Automated refreshes and workflow triggers ensure that field teams always have the latest insights for customer service and maintenance tasks.

City of London: Predictive Analytics for Public Services

The City of London Corporation integrates predictive analytics into routine communications with Microsoft 365 apps. By enabling feedback loops and tailored dashboards, different departments improve service delivery and strategic planning based on actionable, up-to-date data.

Using predictive analytics in local public services | Local Government Association

Driving Proactive Insights and Continuous Business Impact

Building on this momentum, organizations should also leverage Microsoft Fabric’s robust automation features, such as scheduled data refreshes and workflow triggers, to ensure insights remain current and relevant as business conditions evolve. By connecting data sources in OneLake with Power BI, teams can automatically surface the latest operational metrics, customer feedback, and performance trends directly inside their Microsoft 365 environment. This proactive approach empowers employees to make informed decisions faster, supports cross-functional alignment, and fosters continuous improvement. Ultimately, the integration of Fabric with Microsoft 365 not only democratizes access to data but also drives sustained business impact by turning everyday interactions into opportunities for insight-driven action.

Looking ahead, organizations can further amplify these benefits by fostering close collaboration between IT and business stakeholders to identify high-impact scenarios where embedded analytics can streamline processes and drive measurable improvements. Encouraging feedback loops and iterative enhancements within Microsoft 365 such as customizing dashboards for different roles or integrating predictive analytics into routine communications. As adoption matures, businesses not only gain from faster, more accurate decision-making but also build a culture of continuous learning, where actionable data is woven into the very fabric (oops, I did it again) of their daily operations and strategic planning.

Cleveland Clinic: Monitoring Care with Power BI and Teams

Cleveland Clinic adopted Microsoft Power BI and Teams to monitor operational performance and patient outcomes, resulting in faster response times and improved care coordination.

Microsoft PowerPoint – BIAS-2022 Presentation – Mark Ruffing.pptx

Sustaining Momentum: Building a Resilient Data Culture for Long-Term Success

To sustain and scale these gains, organizations should invest in ongoing education, governance frameworks, and robust support structures that empower users at all levels to harness the full potential of integrated analytics within Microsoft 365. By cultivating data champions across departments and encouraging best-practice sharing, companies can drive widespread engagement and innovation. This continuous reinforcement ensures that as new features and use cases emerge within Microsoft Fabric and the broader Microsoft 365 suite, teams remain agile and equipped to extract maximum value from their data assets, transforming every interaction into an opportunity for business growth and competitive differentiation.

As Microsoft Fabric’s capabilities continue to evolve, organizations poised for long-term success will embrace a proactive mindset experimenting with advanced AI integrations, tailoring analytics for emerging business needs, and regularly revisiting their data strategies to ensure alignment with broader digital transformation goals. By facilitating ongoing dialogue between business leaders, IT professionals, and end users, companies can adapt swiftly to new opportunities and challenges, embedding a resilient data culture that not only supports current operations but also lays the groundwork for future innovation. This commitment to continuous improvement and cross-functional engagement transforms Microsoft 365 from a suite of productivity tools into a dynamic engine for insight-driven growth, ensuring that every strategic initiative is grounded in timely, actionable intelligence.

Siemens: Accelerating Digital Transformation Together

Siemens partners with Microsoft to optimize supply chain processes, driving efficiency and innovation across their global operations.

Microsoft and Siemens: Accelerating Digital Transformation Together | Microsoft Community Hub

Key Points:

  • Embedded Analytics: Microsoft Fabric enables organizations to deliver dashboards, visualizations, and recommendations directly into Microsoft 365 apps, making insights accessible and actionable for all users.
  • Adoption Strategies: Success depends on hands-on training, showcasing quick wins, and encouraging a culture of regular data consultation and sharing.
  • Automation & Proactivity: Features like scheduled data refreshes and workflow triggers ensure that insights remain current, supporting agile and informed decision-making.


ETL Orchestration: Air Traffic Control for Data

We have been working on getting an enterprise-grade, event-driven orchestration of our ETL system to operate like an airport control tower, managing a fleet of flights (data processes) as they progress through the stages of take-off, transit, and landing. All of this because Microsoft Fabric has a core-based limit on the number of Notebook executions a capacity can run and queue when they are invoked through the REST API. Read the details here: limits. (Funnily enough, there are no stated limits on the number of messages in an Azure Service Bus queue, but there are for Microsoft Fabric, which uses a Service Bus queue underneath…)

Each flight (file ingestion) is linked to a flight plan (orchestration) that ensures flights follow predefined routes, encounter minimal delays, and arrive at their destination efficiently. Sometimes we have to wait for a combination of flights to arrive before we can venture on to the next leg of the complete flight (multiple interdependent files must have landed in bronze before we process silver).

Each Flight Plan represents a predefined sequence of Notebook executions in Microsoft Fabric, moving data through:

  1. Bronze (Raw Data Ingestion)
  2. Silver (Data Transformation & Cleansing)
  3. Gold (Final High-Value Data)

Flight plans are determined using a combination of the customer and certain characteristics of the data.

A Finite State Machine (FSM) tracks each flight’s progress in states like Pending, Queued, Running, WaitingForDependency, and Succeeded, ensuring smooth execution and real-time tracking.

Receiving thousands of files, we initiate a great number of Notebook executions through the REST API and can easily hit the limits of Microsoft Fabric. A seat manager keeps track of the number of seats currently booked and directs flights into a holding pattern if no free capacity is available. Flights enter the WaitingForDependency status (runway congestion), and when a Notebook completes execution, a Notebook/Stop event is triggered, allowing the next waiting flight to take off.

A Fabric Database persists all flight plans, execution history, and dependencies, ensuring auditability, recovery, and coordination across multiple data loads. To handle all of the messaging we have chosen Azure Event Grid in combination with Azure Service Bus Queues. At the end of each queue there is an Azure Function App to process messages as they arrive.


The Tower Control (Orchestration Engine)

At the heart of this system is the Tower Control, which manages and directs all incoming and outgoing flights (ETL jobs).

  • It receives flight requests (messages) from the execute-queue (Azure Service Bus), ensuring that each flight follows an authorized flight plan before take-off.
  • The Tower Control doesn’t make decisions alone—it relies on the Planner to chart out the appropriate flight paths and manage scheduling.
  • The entire system ensures efficiency and scalability, preventing airport congestion (resource overutilization).

📅 The Planner (Orchestration Logic & SQL Database)

Before a flight can take off, it must file a flight plan—a predefined orchestration of file loads.

  1. Flight Plan Activation:
    • When a classifier/success event arrives in the execute-queue, the Planner first checks whether a flight plan exists in the SQL Server database for the given customer-provider pair.
    • If no active flight plan exists, a new one is registered and stored in the SQL database.
    • Each flight plan persists in the database, allowing the system to track ongoing operations, monitor execution history, and manage dependencies.
    • Flight plans are defined as templates in the meta schema, and the runtime version contains runtime information in addition to steps, workspace- and notebook-references as well as other required information to ensure the safe passage of the files.
  2. Flight Check-ins:
    • Any subsequent flights (events) for the same customer-provider combination check into the active flight plan instead of creating a new one.
    • This ensures coordination between related flights, preventing redundant take-offs and reducing airspace (resource) congestion.
    • This also ensures interdependency can be enforced, so that a crash can or cannot trigger a later flight, depending on the configuration for that flight plan.
  3. Stored Data & Execution Logic:
    • The SQL Server database keeps track of flight plans, execution states, dependencies, and processing history.
    • It enables seamless recovery, auditing, and tracking of past, present, and upcoming flights.
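The activation and check-in flow above can be sketched in a few lines (Python for brevity; a toy in-memory dict stands in for the SQL Server database, and the template and names are invented for illustration, not the actual implementation):

```python
# Toy sketch of flight-plan activation and check-in; a dict stands in for the
# SQL Server database, and the plan template below is invented for illustration.
plans = {}

def handle_event(customer, provider, event):
    key = (customer, provider)
    plan = plans.get(key)
    if plan is None:
        # No active flight plan for this pair: register a new one from the template.
        plan = {"steps": ["Bronze", "Silver", "Gold"], "flights": [], "state": "Active"}
        plans[key] = plan
    # Subsequent events for the same pair check into the active plan.
    plan["flights"].append(event)
    return plan

p1 = handle_event("cust-1", "prov-A", "file_001.csv")
p2 = handle_event("cust-1", "prov-A", "file_002.csv")
assert p1 is p2            # the second flight checked into the existing plan
print(len(p1["flights"]))  # 2
```

The real system additionally persists runtime state, workspace and notebook references, and dependency configuration per plan.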

🚦 The Flight Status System (Finite State Machine)

Every flight is tracked by a Finite State Machine (FSM) monitoring its progress in real time. The SQL database stores and updates these statuses.

  • Pending – The flight request has been submitted but is waiting for the plan to commence.
  • Queued – The flight is cleared for take-off and waiting in the queue.
  • ✈ WaitingForDependency – The flight is delayed due to a lack of available runway space (Microsoft Fabric capacity maxed out).
  • Running – The flight is airborne and actively processing data.
  • Succeeded – The flight has landed safely, completing the data pipeline stage.

This FSM ensures systematic execution, prevents bottlenecks, and allows intelligent scheduling of flights.
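As an illustration, such an FSM boils down to a table of allowed transitions (a Python sketch using the state names from above; the transition rules are my reading of the workflow, not the production code):

```python
# Toy model of the flight FSM; the state names come from the post, while the
# allowed-transition table is an assumption for demonstration purposes.
ALLOWED = {
    "Pending": {"Queued"},
    "Queued": {"Running", "WaitingForDependency"},
    "WaitingForDependency": {"Running"},
    "Running": {"Succeeded"},
    "Succeeded": set(),
}

class Flight:
    def __init__(self, flight_id):
        self.flight_id = flight_id
        self.state = "Pending"
        self.history = ["Pending"]

    def transition(self, new_state):
        # Reject anything the FSM does not allow, e.g. Pending -> Succeeded.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

# A flight that hits runway congestion before taking off:
flight = Flight("bronze-ingest-001")
for state in ("Queued", "WaitingForDependency", "Running", "Succeeded"):
    flight.transition(state)
print(flight.history)
```

Keeping the legal transitions in one table is what makes illegal jumps detectable and the execution history auditable.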


🚀 Flight Execution (Notebook Orchestration in Microsoft Fabric)

Once a flight is cleared for take-off, it follows a prescribed route (Notebook executions in Microsoft Fabric):

  1. Bronze Notebook: ✈ The flight transports raw, unprocessed data into the Bronze layer.
  2. Silver Notebook: ✈ Data undergoes refinement, transformations, and cleansing.
  3. Gold Notebook: ✈ The final high-value data destination is reached, ensuring premium quality data is ready for analytics and reporting.

Each flight plan contains pre-determined orchestrations of file loads, meaning the system knows in advance which Notebooks need to be executed and in what sequence.


🛫 Managing Runway Congestion (Handling Resource Constraints in Microsoft Fabric)

Just like a busy airport can only handle a limited number of take-offs at a time, Microsoft Fabric has capacity constraints when executing Notebooks.

  • If the Fabric capacity is maxed out, new flights cannot take off immediately and are moved to “WaitingForDependency” status.
  • As soon as a Notebook completes execution, it sends a Notebook/Stop event, notifying the Tower Control that a slot has freed up.
  • This allows the next waiting flight to be cleared for take-off, ensuring fair scheduling and optimal use of available resources.
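The holding-pattern mechanics can be modeled in a few lines (Python; `SeatManager` and its method names are invented for illustration, the real system runs on Azure Functions, Service Bus, and a SQL database):

```python
from collections import deque

class SeatManager:
    """Toy model of the seat manager: caps concurrent Notebook executions and
    parks excess flights in a holding pattern (WaitingForDependency)."""

    def __init__(self, capacity):
        self.capacity = capacity   # max concurrent Notebook runs
        self.running = set()
        self.waiting = deque()     # FIFO gives fair scheduling

    def request_takeoff(self, flight):
        if len(self.running) < self.capacity:
            self.running.add(flight)
            return "Running"
        self.waiting.append(flight)
        return "WaitingForDependency"

    def notebook_stop(self, flight):
        # A Notebook/Stop event frees a seat; the next waiting flight takes off.
        self.running.discard(flight)
        if self.waiting:
            cleared = self.waiting.popleft()
            self.running.add(cleared)
            return cleared
        return None

mgr = SeatManager(capacity=2)
print(mgr.request_takeoff("F1"))   # Running
print(mgr.request_takeoff("F2"))   # Running
print(mgr.request_takeoff("F3"))   # WaitingForDependency
print(mgr.notebook_stop("F1"))     # F3 is cleared for take-off
```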

🛬 Landing Safely (Successful Data Pipeline Execution)

  • Once all required Notebooks are executed successfully, the flight reaches its final destination.
  • The flight’s status is updated to “Succeeded” in the SQL Server database for logging, tracking, and reporting.
  • The system is now ready to accept new incoming flights and restart the process.
  • For every advancement from Bronze -> Silver -> Gold, a progression event is fired for the Tower Control to pick up.
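That layer-by-layer progression amounts to a tiny event emitter (Python; the event shape is invented for illustration):

```python
LAYERS = ["Bronze", "Silver", "Gold"]

def advance(flight, current_layer, publish):
    """Move a flight one layer forward and fire a progression event.

    Returns the new layer, or None once the flight has landed in Gold.
    The event dictionary shape is illustrative, not the real message format."""
    idx = LAYERS.index(current_layer)
    if idx + 1 >= len(LAYERS):
        return None
    next_layer = LAYERS[idx + 1]
    publish({"flight": flight, "event": "Progression",
             "from": current_layer, "to": next_layer})
    return next_layer

events = []
layer = "Bronze"
while layer:
    layer = advance("flight-42", layer, events.append)
print([e["to"] for e in events])   # ['Silver', 'Gold']
```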

Summary of the System’s Aviation Workflow

  • Tower Control (Azure Function Apps): manages flight take-off, scheduling, and execution
  • Flight Plan (SQL Server Database): stores orchestrations and execution history
  • Planner (Orchestration Logic): determines optimal execution pathways
  • Flight Status (State Machine): tracks execution progress
  • Flight Take-off (Notebook Execution): begins processing raw data
  • Flight Delay (WaitingForDependency): Notebook execution paused due to capacity constraints
  • Flight Stop Event (Notebook Completion Signal): frees up capacity for the next waiting flight
  • Successful Landing (Data Processing Completed): pipeline execution finished successfully

The aviation theme has helped us create better mental images and relevant discussions during development.

OneLake – External Data Sharing

At #MSIgnite Microsoft announced a new feature in Fabric that allows people from one organization to share data with people from another organization. You might ask yourself why this is even news, and rightly so. Up until last week, professionals have had to use tools like (S)FTP clients (e.g. FileZilla), Azure Storage Explorer, WeTransfer, or similar products in order to share data. Some of these tools are in fact hard to use and/or understand for a great number of business users – they are familiar with Windows and the Office suite and not much more. This is all to be expected, as business users in general should focus on business stuff rather than IT stuff.

As of last week this picture has changed quite dramatically, as Microsoft has introduced what they refer to as External Data Sharing in Microsoft Fabric. Even though this new feature involves some configuration from the IT department, once it’s set up the end user can actually be allowed to share data with external organizations through what looks to be the File Explorer! 🔥 At least it looks like the File Explorer, but it is in fact another application end users will need to install to enable this functionality. The tool is called OneLake File Explorer and is, obviously, a file explorer for OneLake in Microsoft Fabric. In the following diagram, Microsoft demonstrates the feature and even underlines that no data is copied from one tenant to the other – all data is shared in-place.

Think about it just one more time: the end user will be able to, on their own device, copy and paste data from local folders to OneLake-synchronized folders (also on their own device), which then gets synchronized to another tenant. The tool works just like the OneDrive application, which means that it keeps files in sync between your device and OneLake.

Admin Settings in Tenant A

Configuring the functionality requires the sharing organization (Tenant A) to toggle a setting in their Fabric Admin section.

The setting “External Data Sharing” should be enabled, and it is recommended to allow it only for a specific security group, for easier management of access by the IT department.

As per the screenshot above, members of the security group “CatMan” are the only ones allowed to share externally. One note, highlighted in the yellow box, might be worth considering before using this feature.

The functionality will work, even if the receiving organization (Tenant B) does not allow sharing as described above.

Sharing from Tenant A

Suppose you already have a lakehouse in Microsoft Fabric (otherwise, here’s a great introduction on how to create one), and you want to share files or tables with an external business user or IT professional. The following steps will allow you to do just that.

I have uploaded my Important Business Numbers.xlsx spreadsheet in my folder File_Share. I need this file for my critical workloads in my BI analysis but I also want to share these numbers with a professional outside my organization.

From inside the workspace in Tenant A I can now (due to the configuration in the admin portal) choose to share data externally by clicking the three dots (…) on the lakehouse in question.

Choosing this option guides me to a wizard where I get to select what data items I would like to share. The supported item types are data residing in tables or files in lakehouses and mirrored databases.

In this case, I choose to share an entire folder named File_Share.

Clicking ‘Save and Continue‘ leads me to a new dialog, where I get to assign who I want to share this data with. Sharing in this way does NOT require Entra B2B guest user access but is relying on a dedicated Fabric-to-Fabric authentication mechanism. Also note that the sharer from Tenant A can’t control who has access to the data in Tenant B. Access can even be granted to guest users of Tenant B.

In this example the sharer can either send the grant as an email, or copy the link and send it through Teams or another channel. The intended receiver has 90 days to accept the invitation, after which it expires.

Accepting share from Tenant B

In order for the user in Tenant B to accept the share, they must have access to a lakehouse that becomes the target of the share. Please see the link earlier in this post on how to set up a lakehouse.

Here the user Testy McTestify has created a workspace in Tenant B and also created a lakehouse called Tenant_B_Lakehouse.

Testy can now accept the share in more than one way: either via the email, by clicking the accept button that directs him to the Fabric portal, where he will be guided through the next steps of accepting the invitation, or by simply pasting the link into a browser and beginning the same journey. Either way, the screen below will be presented once authorization has completed.

Testy McTestify is a user in the domain @catmansolution.com (Tenant B) and the invite was sent from Tenant A which is @catman.bi – this information is also present in the dialog, along with details on what is shared.

Now Testy has to select the lakehouse that will house the referenced folder (in this case). Here Testy chooses Tenant_B_Lakehouse.

And the final step is to place the shared folder in the files hierarchy that exists in Tenant_B_Lakehouse, and here Testy just places the folder in the root.

Two notifications will pop up and inform you of the actions taken.

As soon as that process is completed (within seconds), the files from the folder in Tenant A are available as if present in Tenant B.

OneLake Explorer

Installing OneLake File Explorer will allow Testy McTestify to access the same files and folders synchronized on his device. This is, as you can imagine, immensely powerful, as almost every business user knows how to operate Windows File Explorer and OneDrive on their device – this is right up their alley and not some odd third-party product that IT needs to whitelist just for them. Chances are that OneLake File Explorer is already in use in the organization and no further action from IT is needed.

I simply love the potential of this new feature that I feel has traveled well below the radar, covered by all the AI and CoPilot noise over the last couple of weeks.

Unexplainable behaviors with DefaultAzureCredential()

Long story short (2 days later)

While implementing an Azure Function designed to fetch secrets from Azure Key Vault, I ran into a funny and odd issue. I am not able to explain what is going on, but I have tried every trick a Google search can conjure, at least until page 30 of the search results. It was by coincidence that I came across some of the parameters in the DefaultAzureCredentialOptions class that got me going, at least locally.

The idea, as far as I have understood, is that whenever you invoke the Azure.Identity.DefaultAzureCredential class, it provides a flow for attempting authentication using one of the following credentials, in listed order (per the Azure.Identity documentation at the time):

  • EnvironmentCredential
  • ManagedIdentityCredential
  • SharedTokenCacheCredential
  • VisualStudioCredential
  • VisualStudioCodeCredential
  • AzureCliCredential
  • AzurePowerShellCredential
  • InteractiveBrowserCredential (excluded by default)

I suspect that, since I have deployed my Azure Function with the Managed Identity setting set to a System Assigned identity, like this:

System Assigned Identity

AND the fact that ManagedIdentityCredential comes before VisualStudioCredential in the authentication flow, authentication fails locally, since my machine is unable to authenticate as the managed identity – which is the main principle of the design: none other than the service itself can assume the identity of the service.

See more detail here: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview
Snip

  • System-assigned. Some Azure resources, such as virtual machines allow you to enable a managed identity directly on the resource. When you enable a system-assigned managed identity:
    • A service principal of a special type is created in Azure AD for the identity. The service principal is tied to the lifecycle of that Azure resource. When the Azure resource is deleted, Azure automatically deletes the service principal for you.
    • By design, only that Azure resource can use this identity to request tokens from Azure AD.
    • You authorize the managed identity to have access to one or more services.
    • The name of the system-assigned service principal is always the same as the name of the Azure resource it is created for. For a deployment slot, the name of its system-assigned identity is <app-name>/slots/<slot-name>.

Love rears its ugly head

Having assigned the proper permissions in the Azure Key Vault, you are able to connect to said Key Vault using your credentials in Visual Studio. A code example could look like this:

public static string GetSecret( string keyvault, string secretName )
{
   // Requires the Azure.Identity and Azure.Security.KeyVault.Secrets NuGet packages.
   var kvUri = $"https://{keyvault}.vault.azure.net";

   var creds = new DefaultAzureCredential();

   var client = new SecretClient(new Uri(kvUri), creds);

   // Response<KeyVaultSecret> -> KeyVaultSecret -> string value
   var secret = client.GetSecretAsync(secretName).Result.Value.Value;

   return secret;
}

(link to NuGet: NuGet Gallery | Azure.Security.KeyVault.Secrets 4.5.0)

Usually this works, and I have no other explanation than that deploying the solution to a live, running App Service is what breaks this otherwise elegant piece of code. The code listed above does not work for me.

Workaround

You can instantiate the DefaultAzureCredential class using a constructor that takes a DefaultAzureCredentialOptions object as a parameter, and this object has a great number of attributes of interest. You can actively remove items from the authentication flow, and you can specify the tenant id if you have access to multiple tenants.

The code that resolved the issue locally looks something like this. (I can probably make do with excluding just the ManagedIdentityCredential – I will test.)

public static string GetSecret( string keyvault, string secretName )
{
   var kvUri = $"https://{keyvault}.vault.azure.net";

   // Trim the authentication flow down and pin the tenant, so only the
   // credentials we actually want are attempted.
   var creds = new DefaultAzureCredential(
       new DefaultAzureCredentialOptions() {
           TenantId = "<INSERT TENANT ID HERE>",
           ExcludeAzureCliCredential = true,
           ExcludeAzurePowerShellCredential = true,
           ExcludeSharedTokenCacheCredential = true,
           ExcludeVisualStudioCodeCredential = true,
           ExcludeEnvironmentCredential = true,
           ExcludeManagedIdentityCredential = true
       });

   var client = new SecretClient(new Uri(kvUri), creds);
   var secret = client.GetSecretAsync(secretName).Result.Value.Value;

   return secret;
}

I am not sure this will work when I deploy the solution, but I will probably add a check on the environment (local debug vs. running in production).

HTH