On September 25, 2025, CDO Magazine hosted a timely conversation on “Building Trusted AI through Precision AI Agents,” moderated by Jonathan Bruce, VP & Field CTO at Alation, with Ankit Goel, EVP/CDAO at KeyBank, and Nitin Kumar, Director of Data Science & GenAI at Marriott International.
Across financial services, hospitality, and product leadership, all three leaders converged on a core thesis: trustworthy AI is impossible without disciplined governance, rich metadata, and accountable, well-designed data products.
Below is a recap organized by each speaker’s perspective, highlighting practical moves you can make now to scale AI and data products with confidence.
Ankit Goel, CDAO of KeyBank, opened with a candid reality check: enterprises aren’t greenfield. Legacy systems, unique business processes, and varied governance maturity mean generic AI rarely works out of the box. As Goel put it, “the one size fits all doesn't work for AI because it doesn't quite work for any other technology as well.” He urged leaders to anchor any AI program to end-to-end business processes and treat AI as a tool within a broader business strategy—not the strategy itself.
Increasingly, organizations are using data products to fuel trusted AI. By integrating governance standards, clean data, and key context into data products, leaders build a solid foundation atop which AI models can reliably perform.
Goel underscored key principles for scaling trusted data products:
Connectivity beats novelty. Agents must connect to a heterogeneous stack—different APIs, platforms, and data stores—to move work across a full business process, not just respond to a single prompt.
Business-first accuracy. This comes from scoping AI to clear, measurable outcomes tied to business priorities—not from chasing horizontal capabilities.
Risk-balanced design. Especially in regulated industries, program acceleration depends on risk posture clarity: first-line accountability, second-line oversight, and shared standards applied consistently.
Goel championed a hub-and-spoke operating model that strikes a balance between centralized standards and decentralized execution—a foundational pattern for scalable data product initiatives.
While concepts like data mesh have existed for years, their real value comes down to accountability. The ultimate goal isn't decentralization for its own sake—it’s to get the business to own the data. And that’s precisely why data productization is so powerful: it turns raw data into accountable, governed assets tied to real business processes.
In this model, three core roles emerge:
Data owners/producers in the business, who define and create data aligned to process intent.
Custodians who “drive the truck”—moving and transforming data across platforms without degrading quality.
Consumers who apply data in analytics, ML, or agentic use cases with their own fit-for-purpose checks.
Pairing central standards with federated execution accelerates adoption, avoids bottlenecks, and satisfies regulators’ need for consistency. This is a practical path to managing data products at scale without sacrificing governance.
Nitin Kumar, Director of Data Science & GenAI at Marriott International, reframed the role of AI from a handy utility to a “digital companion or a digital twin of our associates.” For Marriott’s 35+ brands, that means agents that understand context—policies, rules, customer journeys—and take initiative across service and operational workflows.
Kumar highlighted three enterprise-grade metrics for evaluating agent impact:
Speed – Can agents resolve inquiries on the first interaction?
Quality/accuracy – Are outputs correct and useful to the business problem?
Trust – Can the agent cite the policies and documents it used?
Instrumenting agent behavior is critical. Kumar urged teams to log agent actions thoroughly and to “put on the guardrails, which is like the most important aspect of any… agentic solution.” In hospitality, that translates to real-time monitoring of service workflows (e.g., email, chat, voice) and proactive outreach—moving from reactive to anticipatory customer care.
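The logging discipline Kumar describes can be sketched in a few lines. The class and field names below are illustrative assumptions, not any specific platform’s API; the point is that every agent action records the policy documents it relied on, so the “trust” metric (can the agent cite its sources?) becomes checkable.

```python
import json
import time

class AgentAuditLog:
    """Minimal sketch of agent action logging (hypothetical structure)."""

    def __init__(self):
        self.entries = []

    def record(self, action, sources, outcome):
        # Each step captures what the agent did, which policy/document
        # IDs it consulted, and the resulting outcome.
        self.entries.append({
            "ts": time.time(),
            "action": action,
            "sources": sources,
            "outcome": outcome,
        })

    def fully_cited(self):
        # Trust check: every outcome should trace to at least one source.
        return all(e["sources"] for e in self.entries)

    def to_json(self):
        return json.dumps(self.entries)

log = AgentAuditLog()
log.record("answer_guest_inquiry",
           ["policy/late-checkout-v3"],
           "approved late checkout")
```

A log like this also feeds the speed and quality metrics: timestamps per action measure first-interaction resolution, and recorded outcomes can be sampled for accuracy review.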
The bar is high for data products in this model; Marriott data products require complete metadata, up-to-date policies, and transparent lineage so every answer can be traced back to authoritative sources. Absent that, agents remain non-deterministic helpers—not reliable partners.
No matter the industry seeking to leverage AI, metadata is the unambiguous non-negotiable. Jonathan Bruce noted the “awakening” to metadata’s strategic value; Goel added that “without a good semantic layer, which is based on metadata, AI doesn't work.” Kumar connected the dots to ongoing operations: policy traceability, lineage, and freshness reveal why an answer was produced—not just what the answer is.
Risk-based lineage. Neither a bank nor a hotel chain can “boil the ocean” on lineage. Goel’s team starts with the most critical data and expands coverage progressively. For finance, that supports regulatory reporting, credit decisions, model risk management, and data product safety. For hospitality, lineage underpins explanations for recommendations and customer-facing decisions.
ROI realities. Both leaders acknowledged that lineage and metadata ROI are hard to quantify up front. Goel suggested building the case via incident retrospectives—measuring time spent diagnosing issues, reputational risk avoided, and the downstream value of faster fixes—while Kumar encouraged executives to treat data quality as a “data debt” you must pay down to unlock future value safely.
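The “most critical data first” expansion can be pictured as a breadth-first walk outward from high-risk assets. The graph and asset names below are illustrative assumptions, not a real platform’s lineage model; the sketch just shows how coverage grows from the highest-priority data under a fixed documentation budget.

```python
from collections import deque

def lineage_coverage(upstream, critical_assets, budget):
    """Pick which assets to document first, breadth-first from the
    critical ones outward, stopping once the budget is spent."""
    covered, queue = set(), deque(sorted(critical_assets))
    while queue and len(covered) < budget:
        asset = queue.popleft()
        if asset in covered:
            continue
        covered.add(asset)
        # Walk upstream so sources of critical data get covered next.
        queue.extend(upstream.get(asset, []))
    return covered

# Hypothetical dependency graph: report <- scores/balances <- raw feed.
graph = {
    "regulatory_report": ["credit_scores", "gl_balances"],
    "credit_scores": ["raw_bureau_feed"],
}
covered = lineage_coverage(graph, ["regulatory_report"], budget=3)
```

Each planning cycle, the budget grows and coverage extends further upstream—progressive expansion rather than boiling the ocean.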
Technology alone won’t deliver trusted AI. At KeyBank, Goel described organization-wide enablement so that every role understands their part in data stewardship.
For example, KeyBank rolled out simple, enterprise-wide training that showed how branch, operations, and back-office roles each create, modify, or move data during daily work.
On the AI side, KeyBank paired access to tools (e.g., copilots) with a show-don’t-tell approach—live examples from colleagues who used GenAI to improve real processes. This narrows the gap between “pilot theater” and production-grade data products, and channels enthusiasm into prioritized, risk-balanced use cases.
The panel agreed that the agentic future will be embedded in every business process—customer care, marketing, sales assistance, and back-office operations—but only agents wrapped in strong governance, transparency, and guardrails will endure. Kumar expects a rise in smaller, domain-tuned models—shaped by company DNA, policies, and proprietary datasets—powering highly contextual data products across functions.
From a financial services perspective, Goel cautioned that regulated industries will move more deliberately toward customer-facing autonomy, progressing first through internal efficiency and decision support, then to external automation as governance matures. Either way, metadata, lineage, and a robust semantic layer will distinguish winners from laggards.
If you’re beginning—or rebooting—your AI journey, the panel’s advice crystallizes into a pragmatic playbook for data products:
Start with an end-to-end business process. Treat AI as one component in a measured redesign of how value flows from intake to outcome.
Stand up a semantic layer grounded in rich metadata. Canonicalize entities, policies, and metrics so agents can “reason” consistently.
Adopt a hub-and-spoke operating model. Centralize standards and controls; federate ownership to domains that build and run data products.
Instrument everything. Log agent decisions and data flows; make lineage discoverable to humans and machines.
Prioritize risk-based lineage. Cover your most critical data products first; expand iteratively.
Prove value through real use cases. Pick 1–2 simpler problems, learn fast, then scale patterns that work.
Bake in governance from day zero. Guardrails, policy traceability, and human-in-the-loop aren’t afterthoughts—they’re design inputs.
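The day-zero guardrail in the last playbook item can be sketched as a simple gate that every agent action passes through before execution. The threshold, rule names, and return strings below are hypothetical assumptions for illustration; real deployments would encode their own risk posture and policy checks.

```python
# Hedged sketch: a guardrail gate combining policy traceability
# with human-in-the-loop escalation for high-risk actions.
RISK_THRESHOLD = 0.7  # illustrative risk appetite, not a standard value

def gate_action(action, risk_score, cited_policies):
    # Block any action that cannot cite an authoritative policy,
    # mirroring the "trust" requirement for agent outputs.
    if not cited_policies:
        return "blocked: no policy citation"
    # Route high-risk actions to a human reviewer instead of
    # letting the agent act autonomously.
    if risk_score >= RISK_THRESHOLD:
        return "escalated: human review required"
    return "allowed"
```

Because the gate is a design input rather than an afterthought, every action is either traceable and low-risk, escalated, or blocked—there is no ungoverned path.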
As Kumar emphasized, you don’t need a perfect plan to start; begin small, learn, and adjust. And as Goel advised, balance ambition with pragmatism—especially around data readiness and risk appetite.
Turning these principles into practice requires an operating backbone that unifies governance, metadata, lineage, and policy—and makes them usable by people and agents. That’s where Alation comes in.
Metadata & semantic layer for accurate agents. Alation curates business-friendly definitions, policies, and relationships so AI agents can resolve entities precisely and use trusted language across domains.
End-to-end lineage and impact analysis. Visualize how data flows through pipelines and data products, understand upstream/downstream blast radius, and satisfy audit needs with explainability.
Federated governance at scale. Define central standards, assign data product ownership, and automate stewardship so domains can ship faster without losing consistency.
Policy-aware access. Tie entitlements and approvals to policies so agents and humans retrieve the right data under the right conditions—every time.
Adoption analytics. Track usage, trust signals, and quality KPIs across data products to continually improve the portfolio.
With Alation as your system of record for data intelligence, your teams can move beyond “static tools” to accurate agents that act as true digital partners—grounded in trusted data products that your business, customers, and regulators can rely on.
Curious to see Alation in action? Book a demo with us today.