Azure Databricks at FabCon 2026: What Got Announced and What It Actually Means

FabCon 2026 in Atlanta this week was bigger in more ways than one. For the first time, the Microsoft Fabric Community Conference ran alongside SQLCon, which meant two large communities shared the same convention center, the same coffee queues and the same too-many-sessions-not-enough-sleep atmosphere that defines any conference worth attending.

Databricks showed up with several announcements. Some are incremental; a few deserve more than a passing glance, depending on where you sit in the data and BI stack.

One thing that still catches people off guard: Azure Databricks has been a first-party Azure service since 2017. Not a partner product, not a third-party integration. A first-party Azure service, alongside Power BI, Excel, Teams, Azure OpenAI, Copilot Studio and the Power Platform. When Microsoft talks about a unified data and AI platform on Azure, Databricks is part of the architecture. The announcements this week make that more visible.

Here is what they shipped, and what I think it means in practice.


Lakeflow Connect Free Tier: 100 Million Records a Day, at No Cost

Bad pipelines are one of the constants of BI work. The data arrives late, the connectors are fragile, someone is maintaining a web of custom scripts because there was never time to do it properly, and the people building reports spend half their week cleaning up after problems they did not cause.

Databricks announced a Lakeflow Connect Free Tier, and the headline number is worth taking seriously: 100 million records per workspace per day, at no charge. In billing terms, that is 100 free DBUs per day included with every workspace, before standard Lakeflow Connect pricing applies.

What it connects to out of the box:

  • Databases: SQL Server, Oracle, Teradata, PostgreSQL, MySQL, Snowflake, Redshift, Synapse, BigQuery
  • SaaS applications: Dynamics 365, Salesforce, ServiceNow, Workday, Google Analytics

For databases, ingestion uses Change Data Capture (CDC), reading the transaction log incrementally rather than scanning full tables. Data lands in Delta format on Azure Data Lake Storage, and Unity Catalog governance applies from the moment the first record arrives, so access control and lineage are not something to sort out later.
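
To make the governance point concrete, here is what consuming an ingested table looks like from the other end. A minimal sketch in PySpark, assuming a Databricks notebook where spark is predefined; the catalog, schema and table names are hypothetical placeholders, not anything Lakeflow Connect prescribes:

    # Query a table landed by Lakeflow Connect. Names are hypothetical;
    # `spark` is predefined in a Databricks notebook session.
    df = spark.read.table("main.lakeflow_bronze.dynamics_accounts")

    # Unity Catalog applies to this read like any other: without a SELECT
    # grant on the table, this raises a permission error, regardless of
    # how the data was ingested.
    df.groupBy("statuscode").count().show()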

Databricks quotes 25x faster pipeline builds and 83% ETL cost reduction. I would take vendor benchmarks with the usual scepticism, but the direction is clear: the intent is to make data ingestion a problem you configure rather than one you maintain. For a BI team currently paying for third-party Dynamics or Salesforce connectors, or running CSV exports on a schedule, this is worth a practical test.


Lakebase Is Generally Available: A Postgres Database Inside the Lakehouse

This one sits closer to architecture and engineering than it does to daily BI work, but it changes some assumptions that are worth understanding.

Azure Databricks Lakebase is now generally available in 14 Azure regions. It is a managed, serverless Postgres service that runs inside your lakehouse, on the same storage as your Delta tables.

The problem it addresses is one data architects have been working around for years: operational data and analytical data have historically lived on separate platforms, connected by pipelines that were always someone’s responsibility and frequently nobody’s priority. Lakebase puts an operational database directly in the same governed environment as the rest of the data platform.

Key characteristics:

  • Full Postgres compatibility, with support for extensions including pgvector and PostGIS
  • Compute and storage separated, with scale-to-zero and sub-second startup
  • Branching and instant restore for development and testing workflows
  • High availability with automatic failover across availability zones

The use cases Databricks highlights: transactional analytics on operational data, AI agent state management, and customer personalization and feature serving. For data engineers building pipelines that feed AI applications, Lakebase removes the need to run a separate operational database outside the Databricks platform just to give an agent somewhere to write state.
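
Because Lakebase is Postgres-compatible, the agent-state pattern needs nothing exotic: any standard Postgres driver should work. Here is a minimal sketch of that pattern using psycopg, with the connection details as placeholders; treat it as an illustration of plain Postgres usage, not a Lakebase-specific API:

    import psycopg  # standard Postgres driver; Lakebase speaks the wire protocol
    from psycopg.types.json import Jsonb

    # Placeholder connection details; real values come from your Lakebase instance.
    conn = psycopg.connect(
        host="my-lakebase-instance.example.azuredatabricks.net",
        dbname="agent_state",
        user="svc-agent",
        password="...",  # use a token or secret manager in practice
        sslmode="require",
    )

    with conn, conn.cursor() as cur:
        # Plain Postgres DDL: a table where an AI agent persists its state.
        cur.execute("""
            CREATE TABLE IF NOT EXISTS agent_sessions (
                session_id  text PRIMARY KEY,
                state       jsonb NOT NULL,
                updated_at  timestamptz DEFAULT now()
            )
        """)
        # Upsert the session state on every agent step.
        cur.execute(
            "INSERT INTO agent_sessions (session_id, state) VALUES (%s, %s) "
            "ON CONFLICT (session_id) DO UPDATE "
            "SET state = EXCLUDED.state, updated_at = now()",
            ("session-42", Jsonb({"step": "awaiting_approval"})),
        )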

It is available to test today. If you have been looking for a Postgres layer that sits inside the lakehouse without architectural compromise, now is a reasonable time to look at the documentation.


The Excel Add-in Is in Public Preview: Governed Lakehouse Data in the Tool Most People Actually Use

This is the announcement that will get the most immediate attention from analysts and business users, and probably the one that causes the most internal conversations about data governance.

The Azure Databricks Excel Add-in is in public preview. It connects Excel directly to Unity Catalog tables and Metric Views. From inside Excel, you can browse the catalog, build pivot tables from governed semantic definitions, and filter and analyze data without writing SQL. It works on Excel for Windows, macOS and the web.

The problem it addresses is one every BI developer and governance specialist knows well: business users need data in Excel. So someone exports a CSV. Or the business user pulls their own export. Within 24 hours there are four versions of the file in four different places, none of them current, all of them cited in separate meetings. The analyst who originally produced them has no idea which version is being used.

The add-in replaces that pattern with a live connection to the same tables that power your Power BI reports and your analytics models. The data is current. The access rules in Unity Catalog apply here too, so a user who cannot query a table in Databricks cannot query it through the add-in either.
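
The enforcement point is worth spelling out: access in the add-in rides on the same Unity Catalog grant that governs every other client. A quick sketch with hypothetical table and group names, runnable from a notebook or the SQL editor:

    # Granting SELECT makes the table queryable in Databricks and,
    # through the add-in, in Excel. Names are placeholders.
    spark.sql("GRANT SELECT ON TABLE main.finance.revenue_by_region TO `finance-analysts`")

    # Revoking it cuts off the Excel add-in and every other client in one move.
    spark.sql("REVOKE SELECT ON TABLE main.finance.revenue_by_region FROM `finance-analysts`")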

For analysts who work primarily in Excel, this is a genuine change in how a typical Tuesday works. For governance teams, it removes a whole class of ungoverned data copies that currently exist because there was no better option.


Genie Gets More Capable: Agent Mode, Genie Code and Databricks One

Genie is Databricks’ conversational analytics experience: you ask a data question in plain language and get back a chart, a table or a narrative answer. Databricks reported this week that 98% of Databricks SQL warehouse customers are using AI/BI, with monthly active Genie users up more than 300% year-over-year. The numbers are moving fast enough to suggest this has passed the experimental phase.

Three updates this week.

Genie Agent Mode

Standard Genie answers one question at a time. Genie Agent Mode takes a more complex business question, builds a research plan, runs multiple queries, tests intermediate results, refines its approach and then delivers a complete answer with supporting tables, charts and narrative context.

The difference becomes concrete quickly. Standard Genie handles: “What were total sales in Q3?” Genie Agent Mode handles: “Revenue in the Southeast dropped in Q3. Why did that happen, and what does the pattern suggest for Q4?” That is not a single query. It is an investigation, and Agent Mode runs it without someone having to direct every step.

For analytics managers sitting on a queue of complex ad hoc requests that only a senior analyst can currently answer, this is the update worth spending time with.

Genie Code

Genie Code is aimed at data practitioners, not end users. It is an agentic development assistant that runs inside Databricks notebooks, SQL editors and Lakeflow pipelines.

The distinction from a general-purpose AI coding assistant is that Genie Code understands your data context through Unity Catalog. It knows your tables, your lineage, your governance policies and your business semantics. With that, it can build pipelines and dashboards from natural language prompts, debug Lakeflow failures, generate queries grounded in your actual schema, and handle routine operational monitoring.

For senior BI developers and data engineers who spend part of every week on repetitive work that requires knowing the platform well, having an assistant that actually knows prod.gold.customer_activity is a different experience from hitting tab on a general-purpose tool that has never seen your schema.

Databricks One and Databricks One Mobile

Databricks One now includes a unified multi-agent chat experience powered by Genie. Business users can ask questions across the full data estate without needing to know which Genie space to route to. When a question goes beyond what existing spaces can answer, Databricks One can bring in additional agents to investigate. AI/BI dashboards and Databricks Apps are surfaced in the same interface.

Databricks One Mobile brings this to iOS and Android: Genie, dashboards and apps from a phone. Business users can ask data questions without being at a desk.


Genie in Microsoft Teams: Data Answers Where the Decisions Actually Happen

For organizations already using Microsoft 365, this is probably the most immediately deployable announcement.

You can now connect Genie to Microsoft Teams via Copilot Studio. The setup connects a Genie space to a Teams agent through the Copilot Studio connector, which handles the API and MCP logic. Once connected, users can ask data questions directly in a Teams conversation and get answers backed by your lakehouse data.
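
The connector handles the plumbing, but it helps to see how small that plumbing is. Under the hood, a Genie conversation is an authenticated REST call. Here is a minimal sketch against the Genie Conversation API, with the host, token and space ID as placeholders; the endpoint shape is my reading of the public preview API and worth verifying against the current docs:

    import requests

    # Placeholders: your workspace URL, an OAuth/PAT token, a Genie space ID.
    HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
    TOKEN = "..."
    SPACE_ID = "01ef0000000000000000000000000000"

    # Start a conversation in the Genie space with a natural-language question.
    resp = requests.post(
        f"{HOST}/api/2.0/genie/spaces/{SPACE_ID}/start-conversation",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"content": "What were total sales in Q3?"},
        timeout=30,
    )
    resp.raise_for_status()

    # The response carries conversation and message IDs to poll for the answer.
    # Because the token identifies the user, Unity Catalog permissions apply.
    print(resp.json())

The Copilot Studio connector wraps this exchange and ties the token to the Teams user's own identity, which is what makes the access story below hold.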

The part that makes this credible to security teams and BI leaders: every conversation runs through OAuth, authenticated against the user’s own identity. If a user does not have SELECT access to a table in Unity Catalog, Genie will not surface that data in Teams. The access model you already manage in Unity Catalog carries through to every Teams conversation.

For data governance managers who have spent years explaining why pasting screenshots of reports into Teams messages is not the same as having a governed answer, this changes the practical alternative. The question gets answered where it was asked, with the right access controls applied, and nothing leaves the governed environment.

For business users, it means getting a trusted data answer without leaving the tool they already have open.


What I Am Taking Away From This Week

The pattern across all of these announcements is one I have been watching build for a couple of years. Operational data, analytical data and AI have historically lived on separate platforms, and the work connecting them got called integration. That work is expensive, slow, and usually the first thing cut when a project runs over budget.

What Databricks is building is a single platform where all of it sits together, governed by Unity Catalog, accessible from Excel, Teams, a notebook, a mobile app or a SQL query. Whether the individual pieces fit together as neatly in production as they do in the announcement demos is something I will be watching as they move from preview toward GA.

If you were at FabCon this week, the Databricks session was on Thursday, March 19, in room C302, and should be available on demand if you missed it.

The next major Databricks gathering is Data + AI Summit, June 15 to 18, 2026, in San Francisco. 25,000 attendees, 800+ sessions, and the most complete view of where the platform is heading. Worth putting on the calendar.


What caught your attention this week at FabCon? Drop a comment below. I would like to hear what people are actually planning to test.
