Most data governance programs were built for a slower era: one where policies were enforced manually, stewardship moved at the pace of human review, and the biggest risk was an outdated data dictionary.
Then AI arrived. Today, the gap between what legacy governance can handle and what AI-driven organizations require isn't just widening; legacy programs are collapsing under the strain.
In a recent Alation webinar, "Why Governance Programs Stall and What AI Agents Change," Alation CEO Satyen Sangani and Raluca Alexandru, Lead Analyst at Forrester Research, challenged the data & analytics industry's latest obsession with "just automating" the problem. Their central argument: Automation makes things faster, but only learning makes them smarter. In this recap, we’ll explore how real leaders are moving beyond "fast" to build governance that compounds.
Satyen Sangani speaks with hundreds of data leaders each year. Recently, a third theme has joined the perennial complaints of "no resources" and "low engagement": The AI bottleneck. In other words, everyone wants AI, but their governance foundation is a choke point.
Forrester's research confirms this friction. Alexandru noted that 70% of organizations are still using governance playbooks written more than three years ago—long before Generative AI transformed the scale of data consumption.
"It’s an operating model problem, rather than a mindset problem," Alexandru explained. “People want to do things, but oftentimes the basis they're working on is not necessarily suitable.” Simply put: Leaders have the will to adopt AI, but they are running on an architecture engineered for a different speed, a different scale, and a much lower level of accountability.
The market is currently flooded with promises that AI agents will "solve" governance through sheer automation. But as Alexandru and Sangani discussed, this "speed-only" narrative often ignores a dangerous reality: if you automate a broken process, you simply scale the errors.
Sangani framed this as a paradox: "Automation is utterly critical and totally insufficient at the same time." While agents can perform first-pass stewardship (like labeling datasets or PII detection) in hours rather than months, they also create a new bottleneck. The "inbox" that once overwhelmed human stewards is being replaced by a flood of agent-generated outputs that still require human validation.
The core issue isn't whether agents can execute tasks; it’s whether the system is designed to learn from its own results. Alexandru was direct about the risks of ignoring this: "Speed without proper measurement of what automation means and without a feedback loop bringing those learnings... that's really not something that you want to skip on or skimp on in terms of tooling and strategy."
To escape the speed trap, organizations must shift their focus from "AI governing AI" to building an architecture that improves with every cycle. This requires moving away from legacy metrics and toward a model of governance that compounds. "Look at how specific metrics that are important for your organization (like accuracy) improve over time," Alexandru suggested. "Not just looking at a point-in-time completion rate... but look at how that develops in time and what that brings you in the shape of a feedback loop to improve your operating model."
When governance is built as a learning system rather than just an execution engine, the goal shifts. It’s no longer about how many datasets an agent can tag in an hour; it’s about how much smarter the knowledge engine becomes because of the feedback loop that follows.
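To ground the metric shift Alexandru describes, here is a minimal sketch (Python, with invented numbers) contrasting a point-in-time completion rate with an accuracy trend tracked across cycles:

```python
# Hypothetical numbers, for illustration only: completion looks flat,
# while per-cycle accuracy reveals whether the feedback loop is working.

cycles = [
    {"cycle": 1, "tasks_done": 980, "tasks_correct": 700},
    {"cycle": 2, "tasks_done": 990, "tasks_correct": 810},
    {"cycle": 3, "tasks_done": 985, "tasks_correct": 900},
]

# Point-in-time view: "98.5% of 1,000 queued tasks completed" says
# nothing about learning.
completion_rate = cycles[-1]["tasks_done"] / 1000

# Learning-system view: accuracy per cycle, read as a trend.
accuracy_trend = [c["tasks_correct"] / c["tasks_done"] for c in cycles]
# -> roughly [0.71, 0.82, 0.91]; the rising curve, not the snapshot,
# is the evidence that governance is compounding.
```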
The organizations pulling ahead aren't just automating more; they are building a knowledge engine. This is the "brain" of the operation: a centralized architecture that brings together policies, metadata, and human expertise.
In this model, governance compounds rather than resets. Here’s how that cycle works:
The agent executes: An AI agent performs a task (e.g., tagging a dataset).
The evaluation: A human or secondary agent evaluates the accuracy of that task.
The feedback loop: The result is fed back into the knowledge engine.
The compounding effect: The system gets incrementally smarter, tuning the context so the next "run" is more accurate.
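As a thought experiment, the cycle might be wired together like the sketch below. Every name here (KnowledgeEngine, run_cycle, the verdict fields) is an illustrative assumption, not Alation's product API:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEngine:
    """Hypothetical 'brain': policies, metadata, accumulated corrections."""
    context: dict = field(default_factory=dict)      # tuned context per task
    corrections: list = field(default_factory=list)  # audit trail of feedback

    def feed_back(self, task, output, verdict):
        """Step 3: record the evaluation; Step 4: tune context for next run."""
        self.corrections.append((task, output, verdict))
        if not verdict["accurate"]:
            self.context.setdefault(task, []).append(verdict.get("missing_context"))

def run_cycle(agent, evaluator, engine, task):
    output = agent(task, engine.context.get(task, []))  # Step 1: agent executes
    verdict = evaluator(task, output)                   # Step 2: evaluation
    engine.feed_back(task, output, verdict)             # Steps 3-4
    return output, verdict

# One pass with stub callables: the correction lands in the engine,
# so the next run for this task starts with richer context.
engine = KnowledgeEngine()
run_cycle(
    agent=lambda task, ctx: {"tags": ["pii"], "context_used": ctx},
    evaluator=lambda task, out: {"accurate": False,
                                 "missing_context": "data-retention policy doc"},
    engine=engine,
    task="tag_customer_table",
)
```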
"The real differentiator is learning," Sangani explained. "It’s how I tune the prompts, the agents, the evals, and the context so that I can learn constantly."
"Automation makes things faster,” Alexandru pointed out. “Learning makes it smarter.”
A global manufacturing leader recently provided a masterclass in moving from static automation to a learning system. Tasked with managing hundreds of thousands of parts across a complex supply chain, they implemented AI agents to predict part depletion and inventory risk.
The challenge was a high-stakes balancing act: ordering enough to prevent a stock-out and assembly line halt, without carrying excess, expiring inventory. In this high-pressure environment, an agent's ability to "complete a task" isn't the same as "being right."
To scale this program, the organization had to move beyond the technical metrics of a standard pilot. Sangani noted a critical distinction between two types of feedback:
Hard evaluations: These are explicit technical benchmarks used to see if an agent is following instructions. They are great for a quick start, but they don't tell you if the agent is actually solving the business problem.
Soft evaluations: This is the human gut check. As Sangani put it: "When people get this information [from AI]... is it the right action and are we actually getting to something that people can trust continuously?"
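To make the distinction concrete, here is a rough sketch (ours, not the manufacturer's system, and the field names are invented): a hard evaluation as an automated assertion on the agent's output, and a soft evaluation as a recorded human trust rating:

```python
def hard_eval(output: dict) -> bool:
    """Hard evaluation: an explicit technical benchmark.
    Did the agent follow instructions (required fields, value in range)?"""
    return (
        {"part_id", "depletion_date", "risk"} <= output.keys()
        and 0.0 <= output["risk"] <= 1.0
    )

def soft_eval(output: dict, planner_rating: int) -> dict:
    """Soft evaluation: the human gut check, as a 1-5 trust rating
    from the planner who acts on the agent's recommendation."""
    return {
        "accurate": planner_rating >= 4,
        "trust_score": planner_rating,
        "needs_context_tuning": planner_rating < 4,  # feeds the knowledge engine
    }
```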
To move from an agent that "guesses" to one that "knows," planners must tune the system to eliminate hidden loss—the subtle reasoning errors that automated tests miss, such as an agent ignoring a specific supplier's history of delays.
This requires humans to shift from doing data entry to serving as reliability watchers. "Sometimes you tune the context. Sometimes you tune the prompt," Sangani explained. "Sometimes you’re saying, ‘Well, you looked at these three documents, but really... you want to look at these four.’"
In some cases, they even increased the granularity of the agent's output—moving from one summary to three separate data points—to isolate exactly where the logic failed. As Sangani noted: "Because what we realize is that to get to real accuracy, we need to be able to track the output of the agent at a much more granular level."
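A hypothetical sketch of that granularity shift, with invented field names standing in for the manufacturer's actual data points:

```python
# Before: one blended summary. If it's wrong, you can't see which
# step of the reasoning failed.
coarse_output = {"summary": "Part 4711 is low-risk; reorder in 6 weeks."}

# After: three separate data points, each scoreable on its own.
granular_output = {
    "stock_on_hand": 1200,              # checked against the inventory system
    "supplier_lead_time_days": 45,      # checked against supplier history
    "reorder_recommendation_weeks": 6,  # checked by the planner's soft eval
}

# Tracking accuracy per field over time isolates the failing reasoning,
# e.g., a lead-time estimate that ignores a supplier's history of delays.
accuracy_by_field = {name: [] for name in granular_output}
```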
The result is governance that compounds: every human intervention makes the AI smarter, eventually allowing planners to shift from manual data digging to high-level resolution strategy.
If the knowledge engine is the brain, traceability is the nervous system. Alexandru noted that friction occurs when teams fear losing control. In other words, when AI decisions feel like a "black box," trust evaporates and adoption stalls.
Mature organizations solve this by building with traceability as a first principle. They define "healthy boundaries"—identifying where agents can act autonomously and where a "human-in-the-loop" is non-negotiable. This transparency enables the knowledge engine to grow without exceeding the organization's risk appetite.
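One way to picture such boundaries is a simple policy table (a minimal sketch with invented actions and levels, not a product feature):

```python
# Hypothetical policy table: where agents act alone, where a human
# must approve, and where agents may not act at all.
AGENT_BOUNDARIES = {
    "tag_dataset": "autonomous",                  # low risk, fully traced
    "detect_pii": "autonomous",
    "change_access_policy": "human_in_the_loop",  # non-negotiable review
    "delete_data_asset": "forbidden",
}

def is_allowed(action: str, human_approved: bool = False) -> bool:
    """Check every agent action against the boundary table.
    Unknown actions default to human review, keeping risk appetite intact."""
    level = AGENT_BOUNDARIES.get(action, "human_in_the_loop")
    if level == "autonomous":
        return True
    if level == "human_in_the_loop":
        return human_approved
    return False  # forbidden
```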
For data leaders looking to move from manual to compounding governance, the speakers offered three pillars:
Define purpose before tools: Governance fails when it lacks a reason to exist. Don’t try to fix the entire foundation at once. Pick a high-value AI use case and build the governance infrastructure required to support just that use case first.
Build feedback loops from day one: Don't just track "tasks completed." Track accuracy over time. Your measurement system is what turns execution into learning.
Embrace the knowledge engine: Shift your focus from "how many agents do we have?" to "is our system getting smarter the more it runs?"
Governance is no longer a compliance checkbox; it is a system of intelligence. Agents provide the horsepower, but the knowledge engine provides the map. The organizations that lead the AI era will be those that treat governance as a compounding asset: one that improves with every evaluation and every cycle of feedback.
The key takeaway? Governance doesn't have to be perfect on day one. It just has to learn.
Ready to build a governance program that scales with AI? Book a demo with us today.
How do you know if your governance program has stalled? According to Forrester’s research, the most common indicators are:
Fragmentation: Governance is still happening at the business-unit level rather than the enterprise level.
Outdated Playbooks: Using strategies or tools defined more than three years ago.
The Bottleneck Effect: Data governance is seen as the reason AI projects can’t move into production, rather than the reason they succeed.
Automation alone focuses on velocity, not intelligence. As Raluca Alexandru of Forrester Research noted, if you automate a legacy process that lacks a feedback loop, you aren't fixing governance—you’re just scaling your existing problems faster. Without a mechanism to measure accuracy and feed those learnings back into the system, AI agents can quickly create a new bottleneck: a flood of unvalidated, potentially incorrect metadata that still requires manual human review to be useful.
Traditional governance is linear and manual: a human defines a policy, manually enforces it, and the process resets. Compounding governance is a learning system where every action makes the next action more accurate. It relies on a knowledge engine that captures human feedback on AI outputs. Instead of repeating the same work, the system "compounds" its intelligence, tuning its own context and prompts so that trust and accuracy increase over time.
The data steward's role shifts from data entry to system tuning. Instead of manually labeling datasets, stewards become "watchers of reliability." They identify "hidden loss"—the subtle errors or context gaps an AI might miss—and tune the agent’s context or prompts to prevent those errors in the future. In short, the steward moves from being the engine to being the engineer.
Start with purpose, not the foundation. Rather than trying to fix your entire data catalog at once, identify a single, high-value AI use case (like the supply chain agents at Daimler Trucks). Build the feedback loops and governance infrastructure required for that specific case first. Once you prove the system can learn and deliver value, you can scale that learning architecture across the enterprise.