Rishi Sapra

Fabcon 2025: Key Feature announcements unwrapped!

I was fortunate enough to have a front-row seat (literally, for the keynote!) at the Microsoft Fabric Community Conference, held in Las Vegas from 31 March to 2 April 2025!

The conference cemented Microsoft Fabric’s rapid evolution into a unified, enterprise-ready analytics platform, now used by over 19,000 customers including 95% of the Fortune 500!

For me, the announcements fell under three key themes: Democratization of Copilot, End-to-End Automation, and Simplification of the Data Stack.

Perhaps the most impactful announcement was the wider accessibility of Copilot, which is now available on all Fabric SKUs rather than just the high-cost tiers (F64 and above). The smallest SKU, an F2, costs less than $300 a month on a reserved capacity, and significantly less if, for example, you only run it during office hours on a pay-as-you-go (PAYG) pricing model. This lowers the barrier to entry for AI-driven insights and allows even small and mid-sized organizations to benefit from Fabric’s AI-powered capabilities.
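
To put rough numbers on that, here is a back-of-the-envelope comparison. Both hourly rates below are illustrative assumptions based on typical US-region list prices; check the Azure pricing page for your region before relying on them.

```python
# Rough F2 cost comparison: reserved 24/7 vs pay-as-you-go office hours only.
# Both hourly rates are assumptions for illustration, not quoted prices.
PAYG_RATE_PER_HOUR = 0.36       # assumed F2 pay-as-you-go rate (USD/hour)
RESERVED_RATE_PER_HOUR = 0.215  # assumed F2 reserved rate (~40% discount)

HOURS_PER_MONTH = 730             # average hours in a month
OFFICE_HOURS_PER_MONTH = 10 * 22  # e.g. 10 hours/day, 22 working days

reserved_monthly = RESERVED_RATE_PER_HOUR * HOURS_PER_MONTH
payg_office_hours = PAYG_RATE_PER_HOUR * OFFICE_HOURS_PER_MONTH

print(f"Reserved, always on: ${reserved_monthly:,.2f}/month")   # roughly $157
print(f"PAYG, office hours:  ${payg_office_hours:,.2f}/month")  # roughly $79
```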

Similarly, automation updates like the Fabric CLI, Workspace Variables, and User Defined Data Functions are paving the way for more repeatable, scalable deployments.

Meanwhile, simplification features like OneLake Security, mirroring, and unified sensitivity labeling are addressing long-standing governance and integration challenges in managing a data platform.

Watch the video below for an overview of what I have picked as the top 10 announcements from the conference, with the individual features detailed (each with its own video) underneath!

The full playlist of videos is available at https://www.youtube.com/playlist?list=PL1BGW2wTCUqVkgzBuoxDgDOP8HzJ-D53V

1. OneLake Security

One of the most anticipated updates from Fabcon 2025 was the release of OneLake Security, which introduces a unified security model (including row- and column-level security) across the entire Fabric platform, even where lakehouse data is accessed externally.

This enhancement enables row- and column-level security to be centrally defined at the lakehouse level and enforced across every downstream engine, including Power BI, Excel, SQL endpoints, and even direct file access via Parquet.

Historically, applying consistent security policies across data layers in Fabric was complex and fragmented. While it was possible to set security in individual services (e.g., Power BI RLS or in SQL using GRANT/DENY statements), these were often siloed and bypassed depending on the consumption path.

OneLake Security addresses this by allowing organizations to define roles that include data access permissions, members, and granular constraints on what data is visible. 
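
As a purely conceptual sketch, a role bundles together what data is in scope, what is hidden, and who it applies to. All of the names and the structure below are invented for illustration; this is not the actual API payload.

```python
# Conceptual shape of a OneLake Security role (illustrative only):
sales_analyst_role = {
    "name": "SalesAnalysts",
    "tables": ["Tables/sales_orders"],          # data the role may read
    "column_exclusions": ["customer_email"],    # column-level security
    "row_filter": "region = 'EMEA'",            # row-level security
    "members": ["sales-analysts@contoso.com"],  # an Entra ID group
}
```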

This is a critical leap forward in enabling secure, enterprise-wide data governance, and for many enterprises it is the final piece of the puzzle needed to adopt Fabric as a central data platform.

2. Workspace Variables & Fabric CLI

With the release of Workspace Variables and the Fabric Command Line Interface (CLI), Microsoft Fabric is aligning itself more closely with enterprise DevOps standards. 

Workspace Variables allow workspace admins to centrally manage environment-specific configurations such as connection strings, file paths, and credentials. This greatly simplifies deployment pipelines and reduces the risk of manual misconfigurations when promoting artifacts across environments (e.g., development, test, production). These variables are workspace-scoped and can be referenced throughout different Fabric artifacts, enabling consistent and reliable deployments. 
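
Conceptually, a workspace variable is one logical name that resolves to a different value per environment at deployment time. The sketch below illustrates the mechanics in plain Python; the names and values are made up, and the real feature manages this for you inside Fabric.

```python
# One logical variable name, resolved per environment (illustrative only).
VARIABLE_SETS = {
    "dev":  {"source_path": "Files/landing-dev",  "sql_server": "dev-sql.contoso.com"},
    "test": {"source_path": "Files/landing-test", "sql_server": "test-sql.contoso.com"},
    "prod": {"source_path": "Files/landing",      "sql_server": "sql.contoso.com"},
}

def resolve(variable: str, environment: str) -> str:
    """Return the environment-specific value for a logical variable name."""
    return VARIABLE_SETS[environment][variable]

# Artifacts reference the logical name; promotion just switches the value set.
print(resolve("source_path", "prod"))  # -> Files/landing
```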

The Fabric CLI provides a command-line interface to interact with Fabric resources in a programmable way — from provisioning workspaces and lakehouses to managing permissions and automating deployments. With support for service principals, this enables fully automated, reproducible infrastructure-as-code workflows, similar to what Azure engineers have become accustomed to. 
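
As a minimal sketch of what that looks like in practice, the snippet below drives the CLI from a Python deployment script. It assumes the CLI is installed (at the time of writing, via pip install ms-fabric-cli), and the command names reflect the public preview; verify them with fab --help before use.

```python
import subprocess

def fab(*args: str) -> None:
    """Run a Fabric CLI command and fail fast on errors."""
    subprocess.run(["fab", *args], check=True)

# Authenticate interactively (a service principal would pass credential flags).
fab("auth", "login")

# Filesystem-style exploration and provisioning (names are illustrative).
fab("ls")                            # list workspaces in the tenant
fab("mkdir", "Sales_Dev.Workspace")  # create a new workspace
fab("ls", "Sales_Dev.Workspace")     # list the items inside it
```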

For enterprise teams managing complex data estates, these updates are not just convenient — they are essential for governance, scalability, and long-term maintainability.

3. User Defined Data Functions

One of the most impactful changes to how teams will develop and reuse logic in Fabric is the introduction of User Defined Data Functions (UDDFs). These are reusable blocks of PySpark code that can be written once and called across notebooks or via an API, standardizing logic for common tasks like data cleaning and transformation, as well as business-team-specific activities such as customer segmentation.

This addresses a common challenge in Fabric — too many ways to do the same thing, leading to inconsistency across teams! With UDDFs, organizations can now create centrally managed logic libraries that ensure uniformity across departments while enabling faster onboarding and reuse. For advanced users, these functions can be maintained through the Visual Studio extension or deployed as part of automation workflows. 

Note that this is a separate feature from user-defined DAX functions: UDDFs live within the Fabric compute (Spark) layer.
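
The exact authoring surface may evolve, but the sketch below shows the kind of reusable PySpark logic a team might centralize this way instead of re-implementing it in every notebook. The function name and column are my own invention.

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def clean_customer_names(df: DataFrame, name_col: str = "customer_name") -> DataFrame:
    """Standardize a name column: trim whitespace, fix casing, drop empties."""
    return (
        df.withColumn(name_col, F.initcap(F.trim(F.col(name_col))))
          .filter(F.col(name_col) != "")
    )

# Every notebook calls the shared function rather than copy-pasting the logic:
# silver_df = clean_customer_names(bronze_df)
```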

It will be great to see how UDDFs are used across an enterprise, but I see them as a key foundation for process automation!

4. Materialized Views in Notebooks

Materialized Views in Notebooks are a slick solution to the age-old challenge of managing performance and reusability when transforming data between the layers of a medallion architecture.

This feature lets users write SQL queries directly within notebooks and materialize the output as Delta tables, giving analysts and data engineers an efficient way to persist curated datasets.

While it is possible to write SQL views in a Warehouse, these are recalculated at every query and are often a performance bottleneck. Now, these views can be stored as optimized Delta tables that integrate directly with semantic models and support Direct Lake mode for near-real-time querying.

Analysts with SQL skills can contribute transformations without needing PySpark expertise, lowering the barrier to building performant data pipelines. 

Added features like constraint validation during transformation ensure that only quality, reliable data makes it downstream, for example into the tables used in Power BI!
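
As an illustration, the cell below materializes a curated silver table from a bronze one inside a notebook (using the built-in spark session that Fabric notebooks provide). The DDL and table names are illustrative; check the documentation for the exact released syntax, including how the announced constraint validation is expressed.

```python
# Illustrative only: persist a curated view as an optimized Delta table.
spark.sql("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS silver.orders_clean AS
    SELECT order_id,
           customer_id,
           CAST(order_date AS DATE) AS order_date,
           amount
    FROM   bronze.orders
    WHERE  amount IS NOT NULL
""")
```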

5. Oracle Mirroring

Microsoft Fabric’s mirroring capabilities have quickly become one of its most attractive features for enterprises (indeed, any organization with a transactional/ERP system!), and with the addition of Oracle support, the scope of use cases just expanded significantly.

Mirroring allows Fabric to continuously synchronize data from transactional systems (such as Oracle databases) into the lakehouse, capturing inserts, updates, and deletions in near real time. This eliminates the need for complex pipelines or refresh logic and ensures that the data estate reflects live system changes without lag. 

For enterprises relying on Oracle-based ERP systems, this opens a fast path to integrating structured transactional data into analytical workflows.

Support for connecting to an Azure SQL Database behind a private endpoint (using a VNet gateway) also ensures secure data movement within compliant architectures.
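
Once mirrored, the data is queried like any other SQL source in Fabric. The snippet below is an illustrative sketch of reading mirrored tables through the SQL analytics endpoint with pyodbc; the server, database, and table names are placeholders you would copy from the Fabric portal.

```python
import pyodbc

# Connect to the mirrored database's SQL analytics endpoint (placeholders!).
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=OracleMirror;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

# Mirrored tables reflect near-real-time changes replicated from Oracle.
for row in conn.execute("SELECT TOP 5 * FROM dbo.ORDERS ORDER BY ORDER_DATE DESC"):
    print(row)
```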

6. Delta–Iceberg Interoperability

Fabric’s commitment to open standards continues with the announcement of Delta–Iceberg interoperability, powered by the Apache XTable project. This collaboration with Snowflake enables Fabric and Snowflake to read and write data across each other’s preferred formats — Delta and Iceberg — while storing the data in a single location. 

This eliminates the need for data duplication or complex conversions when sharing datasets between platforms. For organizations using both Microsoft and Snowflake ecosystems, this creates a truly interoperable data layer — allowing Fabric workloads (including Copilot, Power BI, and dataflows) to work natively with data created in Snowflake, and vice versa. 
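
In practice, this means a Fabric Spark session can read a table that Snowflake maintains in Iceberg as if it were a native Delta table, once the metadata translation has happened. The OneLake path below is made up for illustration, and I assume the built-in spark session of a Fabric notebook.

```python
# Read an XTable-translated table from OneLake as ordinary Delta (path is illustrative).
path = "abfss://Sales@onelake.dfs.fabric.microsoft.com/SalesLH.Lakehouse/Tables/orders"

df = spark.read.format("delta").load(path)
df.groupBy("region").count().show()
```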

It’s a move that reinforces Microsoft’s position in the open data lakehouse movement, giving customers flexibility without compromising performance or governance. 

7. Creation of Direct Lake Semantic Models in Power BI Desktop

The introduction of Direct Lake model creation within Power BI Desktop brings together two major strengths: the flexibility of desktop-based model development and the performance of Direct Lake mode. 

Previously, Direct Lake models had to be authored in the Fabric service, limiting developer control and flexibility. With this update, you can now create, configure, and optimize Direct Lake models directly in Power BI Desktop — including support for models that connect to multiple lakehouses and even combine import and Direct Lake tables within a single semantic model. 

Most importantly, this is “true” Direct Lake: it bypasses the SQL endpoint entirely and preserves the performance and security benefits of the lakehouse layer, including full OneLake Security support.

8. Grounding & Discovery for Copilot

As Copilot becomes available across all Fabric SKUs, Microsoft is ensuring that users can find and interact with the right data — even if they don’t know which report or model contains it. The new Copilot discovery experience allows users to search the entire tenant based on natural language prompts, with results ranked by relevance using metadata like usage, endorsements, and org graph proximity. 

The other half of the equation is grounding, which helps Copilot return accurate, meaningful responses by giving it domain-specific context. Developers can now add verified answers to visuals and semantic models in Power BI Desktop, helping Copilot understand not just the data structure, but also business semantics and preferred interpretations.

This is a huge step toward turning Copilot into a functional “data analyst” that understands both your technical assets and your organizational language. 

9. Data Agent (AI Skill) Enhancements

Data Agents (formerly AI Skills) are central to bringing structured reasoning into Fabric’s AI experiences. These agents act as intelligent connectors between user prompts and your data, translating natural language into DAX, SQL, or KQL queries based on grounding in one or more data sources. 

Microsoft recently announced significant enhancements to these data agents — they can now pull from semantic models and KQL databases in addition to lakehouses. They also support few-shot learning, allowing admins to define example prompts and expected responses to guide behavior. Perhaps most importantly, these agents can now be exposed via an API and integrated into the Azure AI Agent Framework, bridging Fabric and broader AI ecosystems. 
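
To make the API angle concrete, here is a deliberately hypothetical sketch of an application calling a data agent over HTTPS. The endpoint shape and payload are my assumptions, not the documented contract; the point is simply that a grounded agent becomes a callable service for apps outside Fabric.

```python
import requests

AGENT_ENDPOINT = "https://<fabric-data-agent-endpoint>"  # placeholder URL
TOKEN = "<entra-id-bearer-token>"                        # e.g. acquired via MSAL

# Hypothetical request/response shape, for illustration only.
resp = requests.post(
    AGENT_ENDPOINT,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"question": "Which region had the highest sales growth last quarter?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```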

This elevates Fabric’s role in the GenAI stack — providing structured, governed data as the foundation for more advanced AI use cases across the Microsoft cloud and far beyond it.

10. OneLake Catalog Enhancements

The OneLake Catalog (formerly the OneLake Data Hub) is evolving from a basic navigation tool into a true data catalog, centralizing discovery, metadata, security, and governance within the Fabric environment.

Users can now search for sub-items like tables, columns, and measures, filter by tags, and view lineage across artifacts. With the introduction of sensitivity labels, data loss prevention flags, and enforcement rules, governance is no longer separate from discovery — it’s integrated. Fabric items can inherit labels and restrictions/policies from Purview, and admins can apply role-based policies that override workspace-level permissions based on data sensitivity.

These enhancements significantly reduce friction in managing data estates. For large enterprises especially, it means Fabric is no longer just a tool for developers — it’s a platform for governed, discoverable self-service analytics.
