Building Smarter Enterprises with Agentic AI: Key Takeaways from Data Science Connect

By Jonathan Bruce, VP, Strategic Customer Advocacy, Alation

Published on June 10, 2025

As AI evolves from predictive models and chatbots into intelligent, goal-seeking agents, the conversation around Agentic AI is just getting started. 

In a recent Data Science Connect webinar, leaders from Dataiku, Zenlytic, Nexla, and Alation came together to explore how organizations can responsibly design and deploy autonomous AI agents that operate in the real world.

We covered everything from architectures and governance to contextual memory and real-time data needs. Below are the key takeaways from what proved to be a fascinating discussion on the role of agentic AI in the modern enterprise.

Data Science Connect webinar on building smarter with agentic AI

What is agentic AI, and why does it matter?

Agentic AI refers to systems that go beyond static tasks and act autonomously with contextual awareness and goal-oriented planning. These agents don’t just answer questions—they execute tasks, make decisions, and learn and adapt in real time. 

Agents present a way to get work done in a highly contextual fashion. These systems interact with tools, evaluate their own outputs, and adapt based on iterative feedback loops. In fact, we should increasingly think of agents as part of the team, and consider how their efforts contribute to the team's overall output. That's nothing to be scared of; it's highly complementary.

Christian Capdeville, Content and Product Lead at Dataiku, added, “They're basically just LLM-powered systems that can take actions autonomously.” Whereas LLM chatbots can give you an answer to a specific question, an AI agent takes action based on an answer (or series of answers).

To apply this in practice, start by identifying repeatable workflows that currently require manual input but follow clear business logic and produce well-defined outcomes. Then, explore where intelligent agents could handle decision points or trigger downstream actions—especially in areas like operations, forecasting, or internal support.
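As a rough illustration, here is a minimal Python sketch of an agent handling a single decision point in such a workflow. The call_llm helper and the invoice-approval scenario are hypothetical, invented here purely to show the shape of the pattern, not any vendor's API:

```python
# Minimal sketch: an agent acts on an answer instead of just returning it.
# `call_llm` and the invoice-approval scenario are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM your stack provides."""
    raise NotImplementedError

def route_invoice(invoice: dict) -> str:
    """Let the agent decide the next step, then trigger the downstream action."""
    prompt = (
        "Given this invoice, choose the next step: "
        "'auto_approve', 'request_review', or 'reject'.\n"
        f"Invoice: {invoice}"
    )
    decision = call_llm(prompt).strip().lower()

    # Clear business logic bounds what the agent is allowed to do.
    if decision == "auto_approve" and invoice["amount"] < 5_000:
        return "approved"          # e.g., post to the ERP system
    if decision == "reject":
        return "rejected"          # e.g., notify the vendor
    return "escalated_to_human"    # anything ambiguous goes to a person
```

The point is not the plumbing; it's that the decision point, the allowed actions, and the escalation path are all defined by the business logic before any model is involved.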

Build with the business outcome first

Agentic AI should never be built in a vacuum. Enterprises need to begin with a clearly defined business problem and a measurable outcome. You can build agents with the best SDKs on the market, but if you don't tie the outcome to a business objective, you’re missing the mark. 

Before selecting a model or designing an agent, align with stakeholders to define what success looks like. Techniques like Amazon’s PR/FAQ or, my own personal favorite, Salesforce’s V2MOM (Vision, Values, Methods, Obstacles & Measures), can help articulate a future outcome and work backwards to define scope, data requirements, and evaluation criteria. 

Even the most advanced agent frameworks will fall short if not tied to tangible business value. Enterprises should avoid experimentation for its own sake and ensure each AI initiative serves a measurable purpose.

Trustworthiness is the bedrock of enterprise AI

When enterprises adopt AI agents, the keyword isn’t just trust—it’s trustworthiness. AI systems must produce not only accurate outputs but also explainable, auditable results that align with business expectations and ethical standards.

Trust begins with transparency. AI agents should be designed to trace their decisions back to data sources, explain how outcomes were reached, and log actions for review. This includes surfacing decision paths, documenting data lineage, and building user interfaces that clarify system reasoning. Circuit breakers should be triggered when data quality falls below acceptable thresholds.

That’s why data provenance, lineage, and quality aren’t just “nice-to-haves”—they’re non-negotiable. Enterprises must be able to track how data has moved, changed, and been used over time, and they need processes in place to pause or flag AI systems when known quality standards are breached.
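A minimal sketch of what such a circuit breaker could look like, assuming illustrative quality metrics and thresholds rather than any specific product's API:

```python
# Sketch of a "circuit breaker" that pauses an agent when data quality
# drops below an agreed threshold. Metric names and thresholds are
# illustrative assumptions, not a specific tool's schema.

from dataclasses import dataclass

@dataclass
class QualityReport:
    completeness: float      # share of non-null values, 0.0 to 1.0
    freshness_hours: float   # age of the newest record, in hours

def circuit_breaker(report: QualityReport,
                    min_completeness: float = 0.95,
                    max_staleness_hours: float = 24.0) -> bool:
    """Return True if the agent may proceed, False if it should pause."""
    if report.completeness < min_completeness:
        return False
    if report.freshness_hours > max_staleness_hours:
        return False
    return True

report = QualityReport(completeness=0.91, freshness_hours=6.0)
if not circuit_breaker(report):
    # Pause the agent and flag the data set for stewardship review.
    print("Agent paused: data quality below the agreed threshold.")
```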

As Capdeville emphasized, building trustworthy AI requires a shift in mindset: "The big shift starts happening when you think about these as systems, not just assistants." That means instrumenting every part of the process—tracking decisions, monitoring outcomes, and enabling stakeholders to verify results. As Capdeville noted, trust looks different depending on the stakeholder: "The user wants to know, can I trust the thing? The IT administrator wants to know, can I trust it in different ways?"

To build this trust, data teams should implement scalable governance frameworks that align to critical use cases—think federated models like data mesh, with centralized policy but decentralized execution, and critically, accountability. Focus on governing high-impact data sets and ensuring proper stewardship and documentation. This approach fosters trust while still enabling innovation.

Rethinking governance for AI agents

AI governance is rapidly evolving. Concepts like data mesh and data contracts are seeing renewed interest as organizations look for ways to balance control with flexibility.

Modern governance must enable innovation while safeguarding against risk. This requires:

  • Applying centralized policies where needed

  • Empowering domain teams to own data quality

  • Documenting lineage and metadata for transparency

Implement tiered governance: higher scrutiny for use cases involving sensitive data, and lighter-touch controls for internal productivity tools. Use semantic layers and access control policies to limit what agents can see or do. Leverage existing roles, like data stewards and owners, to extend governance accountability across teams. And document everything: data lineage, model provenance, and usage logs should be transparent and easily auditable.

Or as Capdeville said, "Smooth is fast." Build with clear guardrails, and you’ll reduce risk and accelerate time to value.
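To make the tiered idea concrete, here is a small Python sketch; the tier names, rules, and agent identifier are assumptions for illustration, not a prescribed policy model:

```python
# Sketch of tiered governance: the oversight applied to an agent's request
# depends on the sensitivity of the data it touches.

TIERS = {
    "public":    {"requires_approval": False, "log_level": "basic"},
    "internal":  {"requires_approval": False, "log_level": "full"},
    "sensitive": {"requires_approval": True,  "log_level": "full"},
}

def check_access(agent_id: str, dataset_tier: str) -> dict:
    """Decide how much oversight an agent's data access needs."""
    # Unknown tiers default to the strictest treatment.
    policy = TIERS.get(dataset_tier, TIERS["sensitive"])
    return {
        "agent": agent_id,
        "allowed_without_approval": not policy["requires_approval"],
        "audit_logging": policy["log_level"],
    }

print(check_access("forecasting-agent", "sensitive"))
# {'agent': 'forecasting-agent', 'allowed_without_approval': False, 'audit_logging': 'full'}
```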

Contextual memory will define the next phase of agentic AI

We often talk about how powerful large language models are. But even the smartest models need context.

My opinion? We need to point data teams not so much at the data they want to seek and find, but at the business problem they're trying to solve. The goal is to bridge the gap between producers and consumers of data products, enabling agents to act within trusted, business-specific contexts. And crucially, the majority of those business-specific contexts remain rooted in structured data.

Organizations should begin by building a foundation of high-quality metadata. Curate reusable data products with clearly defined ownership, documentation, and lineage. Then use these products as the context layer for AI agents, embracing data contracts as the basis for trust. Additionally, consider fine-tuning models or using knowledge graphs to improve how agents retrieve relevant information.

A marketplace of data products—backed by lineage, metadata, and governance—can create a virtuous cycle of reusable insights and actions.
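As a sketch of that idea, the snippet below shows a curated data product's metadata being handed to an agent as trusted context. The field names and the build_agent_context helper are illustrative assumptions, not any particular catalog's schema:

```python
# Sketch: serve a curated data product's metadata as the agent's context,
# so it acts within a documented, owned, lineage-backed scope.

data_product = {
    "name": "customer_churn_scores",
    "owner": "growth-analytics",
    "description": "Weekly churn propensity per customer, refreshed Mondays.",
    "lineage": ["crm.accounts", "billing.invoices", "ml.churn_model_v3"],
    "contract": {"freshness": "7d", "schema_version": "1.2"},
}

def build_agent_context(product: dict, question: str) -> str:
    """Prepend trusted, business-specific context to the agent's prompt."""
    return (
        f"You may only use the data product '{product['name']}' "
        f"(owner: {product['owner']}). {product['description']} "
        f"Upstream sources: {', '.join(product['lineage'])}.\n"
        f"Question: {question}"
    )

print(build_agent_context(data_product, "Which segment's churn rose last week?"))
```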

Don’t overcomplicate; scale through simplicity

The hype around multi-agent orchestration is real. But as we discussed, the more tasks an agent performs, the harder it becomes to do them all well.

For this reason, simplicity is key when deploying AI agents. Trying to do too much too soon creates unnecessary complexity and makes agents harder to maintain and scale.

"You wouldn’t expect one human to do everything in the organization,” Capdeville argued. “The same goes for agents." I would add that complexity is exponentially harder; focus on solving a single problem first.

Start with narrow, high-impact use cases—such as automating report generation, document summarization, or answering internal FAQs. Focus each agent on a single task or domain before considering orchestration across multiple agents. By limiting scope, teams can iterate faster, measure impact clearly, and build internal trust in the system.

Real-time isn’t always required—but it is transformative

While real-time capabilities can elevate agent performance, not every use case requires it. Organizations should distinguish between analytical, operational, and mission-critical workflows.

As Ryan Janssen from Zenlytic shared, "The real bottleneck in real-time AI isn’t the LLM—it’s getting the surrounding data infrastructure to work in real time." That said, there are enormous gains in pairing agentic AI with fast, relevant data.

For enterprises already investing in streaming or low-latency pipelines, agentic AI becomes the intelligence layer that drives decision-making at speed. But as Amey Desai, CTO of Nexla, warned, "Don’t let LLMs run anything critical that’s non-deterministic."

And for analytical use cases, near real-time is often sufficient. Operational cases may benefit from real-time or low-latency retrieval, particularly when agents act on fresh inventory, transaction, or customer data. 

Start by classifying your use cases by urgency and then selectively layering in real-time infrastructure. Keep in mind that the true challenge is often your existing data pipelines, not the agent's ability to consume real-time input.
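A simple way to start is a back-of-the-envelope classification like the one below; the tiers and examples are assumptions to illustrate the triage, not a formal taxonomy:

```python
# Sketch: classify agent use cases by urgency before investing in
# real-time infrastructure. Tiers and examples are illustrative.

LATENCY_TIERS = {
    "analytical": "batch or near real-time (minutes to hours) is usually fine",
    "operational": "low-latency retrieval (seconds) pays off",
    "mission_critical": "real-time plus deterministic guardrails required",
}

use_cases = [
    ("quarterly revenue summary", "analytical"),
    ("inventory-aware order routing", "operational"),
    ("fraud interdiction at checkout", "mission_critical"),
]

for name, tier in use_cases:
    print(f"{name}: {LATENCY_TIERS[tier]}")
```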

Final thought: Start with trust, lead with value

Agentic AI holds enormous promise. But it’s not about deploying agents everywhere. It’s about delivering outcomes that matter.

Focus on solving real problems with simple, explainable solutions. Establish strong data foundations, reinforce accountability, and ensure governance scales with innovation.

If we lead with clarity and business value, agentic AI won’t just be a breakthrough in automation. It will become a trusted partner in how work gets done.

Curious to learn more about the future of agentic AI for data intelligence? Read the blog, Alation Acquires Numbers Station: Enabling AI to Understand Structured Data at Scale.
