
Today, we’re announcing the Alation Skills plugin, freely available to bring your Knowledge Layer into Claude Cowork and Claude Code (and more to come).
Each Skill is a structured set of instructions that tells LLMs exactly how to query, explore, curate, and automate your data catalog — so AI agents can navigate your data with the context and precision of someone who already knows your business.
Check out Alation Skills now at the Alation Plugins repository on GitHub.
Let me say something that might surprise you, coming from someone who builds AI products for a data catalog company: your catalog is probably doing its job.
The governance metadata is in the catalog. The lineage is mapped. Connectors have been extracting queries, column-level formulas, dashboard dimensions, metrics, and filters automatically for years. Your data catalog knows how tables are created and used, which dashboards reference which columns, which queries hit which tables, and how metrics are actually calculated. And, yes, governance teams have layered manual curation on top: trust flags, ownership tags, glossary terms, verified business definitions.
Your catalog holds a lot of institutional knowledge.
The problem, clearly, is not your catalog. The problem is that your analysts live in one world and your catalog lives in another.
Think about how a data analyst works. They're in a SQL editor writing queries. They're in a BI tool checking dashboards. They're in Slack answering questions from the business. They're in Jira tracking requests. And, somewhere off to the side, in its own tab, sits the data catalog — a separate system with its own search syntax, its own object model, its own way of organizing the world. To use it, analysts must leave the context where they’re working, switch into the catalog's context, translate questions into its language, get the answer, and then switch back.
That's not a skill gap; that's friction. And humans are ruthlessly efficient at avoiding friction.
So, an analyst asks, "Where's our supply chain data?" Instead of opening the catalog, they Slack someone on the data team, who then uses the catalog. This happens every single time, because it's faster and easier.
When LLMs exploded, everyone had the same thought: this is how we finally unlock the catalog for everyone.
The first thing most teams did was connect an LLM to their catalog via Model Context Protocol (MCP) and call it a day. (MCP is Anthropic's open standard for connecting AI models to external tools, and it’s genuinely useful infrastructure. We use it at Alation.)
MCP solved the first problem the analyst faced: the catalog is no longer a separate building they have to walk to. Now, analysts can reach the catalog via the LLM from wherever they're working, with no context switching required.
But MCP didn't solve the second problem — complexity — because the catalog is still hard for the LLM to navigate.
MCP gives the LLM access to every lock:
search_catalog(), get_table_metadata(), run_query()
Each one opens a door. But the LLM, by itself, doesn't know which doors matter, in what order to open them, or what to do once it's inside. Sometimes it gets lucky and walks straight to the answer. Sometimes it wanders into a broom closet and confidently tells you it found what you were looking for.
MCP opens the catalog, but the LLM is still lost.
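To make the gap concrete, here is a minimal sketch of the workflow a Skill pins down, using the three tool names mentioned above. The function bodies are local stubs, not Alation's actual MCP API; the point is the encoded ordering (search, then inspect metadata, then query) that a raw LLM would otherwise have to guess at.

```python
# Illustrative stubs for the three MCP tools named in the text.
# Real implementations would live behind an MCP server.

def search_catalog(term):
    # Stub: a real server would hit the catalog's search index.
    return [{"name": "sales_eu", "id": 42}] if "sales" in term else []

def get_table_metadata(table_id):
    # Stub: a real server would return schema, lineage, and trust flags.
    return {"id": table_id, "columns": ["region", "revenue"], "trusted": True}

def run_query(sql):
    # Stub: a real server would execute SQL against the warehouse.
    return [("EU", 1_200_000)]

# The sequence a Skill makes explicit: search first, check that the
# object is trusted, and only then run a query against it.
hits = search_catalog("sales")
meta = get_table_metadata(hits[0]["id"])
if meta["trusted"]:
    rows = run_query("SELECT region, SUM(revenue) FROM sales_eu GROUP BY region")
```

Without that encoded sequence, nothing stops the model from querying a table it never inspected, which is exactly the broom-closet failure described above.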
Much like a tourist in a new city, LLMs need a "map" of institutional knowledge to navigate the complexity of your enterprise successfully.
That knowledge has always existed, scattered across layers that are not unified in any single system. There's technical metadata that connectors extract: schema, lineage, query logs, and data quality. Catalogs are good at this. There's also business context that governance teams and analysts maintain: glossaries, KPI definitions, calculation logic, edge cases, and whether "Q1" means calendar or fiscal year. Catalogs are good at this, too.
Importantly, there's a layer that has almost never been captured: the institutional knowledge held by experienced workers. How a deal is tagged in Salesforce so it flows into the right pipeline report. How to classify a data source as PII-compliant so it's consistent with the rest of your catalog. Which three warehouses to exclude from analysis because they were trial locations. What "Project Aurora" actually means.
Every tool in the stack has its own unwritten operating manual, and without access, the LLM is lost.
Anthropic built a framework for encoding exactly this kind of knowledge: skills and plugins. Gemini CLI and OpenAI Codex also support skills, and will soon have a distribution mechanism similar to plugins. Others will follow. This isn't a one-vendor bet; it's becoming the standard for how AI assistants consume structured, use-case-specific knowledge.
Today, Alation Skills, built on that framework, are publicly available and evolving fast in the Alation Plugins repository on GitHub.
Alation Skills are structured protocols that define how an LLM should interact with each layer of your data catalog. They specify which tools to invoke, the correct sequence of operations, how to interpret results, and what actions should follow. The LLM provides the reasoning. The Skill provides the domain expertise — the enterprise context agents need to take action and generate insights on your data.
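As a rough illustration of the shape such a protocol takes, here is a sketch in the general SKILL.md convention (YAML frontmatter plus instructions) that skill frameworks use. The name and instructions below are invented for illustration, not Alation's actual skill files:

```markdown
---
name: explore-sketch
description: Illustrative only. Guides an agent in searching a data
  catalog and inspecting metadata before querying.
---

# Exploring the catalog

1. Search with the user's business term first. Data products and raw
   tables may be indexed separately, so search both.
2. For each hit, fetch table metadata and prefer objects with trust
   flags and verified descriptions.
3. Confirm the right object with the user before handing off to the
   query workflow.
```

The frontmatter tells the assistant when the skill applies; the body supplies the ordering and judgment calls the model would otherwise improvise.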
We’re launching with six Skills:
Explore — "What marketing data do we have?" — so the analyst doesn't need to know that data products and catalog tables are searched differently.
Ask — "What was our total revenue last quarter?" — so the business user gets an accurate answer, not a pointer to a table they still must query.
Curate — "Add descriptions to the customer orders data product." — so fixing stale metadata doesn't require a ticket and a two-week wait.
Configure — "Create an agent that can answer questions about our finance data." — so standing up a new AI agent doesn't require a data engineer.
Automate — "Send me a summary of last week's sales every Monday." — so the analyst who figured out the right query doesn't need to remember to run it.
Setup — Configures credentials and authenticates with your Alation instance.
Each Alation Skill knows its lane but hands off tasks cleanly to the next.
Analysts can use Alation Skills immediately. Here’s a look at a typical interaction.
It's the week before a product review, and an analyst is pulling together European sales performance for her VP. Before she can run a single query, she gets a Slack message from Legal: a customer data audit is coming, and any untagged PII in the catalog needs to be flagged now.
She doesn't open a ticket. She doesn't ping a data steward. She just types:
“Look for objects in the catalog that might need a PII tag and update them if so.”
The Explore skill searches the catalog, identifies potentially unflagged columns containing customer PII, and asks a few clarifying questions before moving on.
She confirms the action, and the Curate skill takes over to write a script that appropriately tags the identified PII columns.
Intrigued, the analyst digs deeper.
"What’s in my catalog? Any sales data for Europe?"
The Explore skill searches the catalog and data products, surfaces the underlying tables and their schemas, and asks if she wants to query the specific data product. She confirms and digs even deeper.
"What are my highest-selling products in Europe? Create a locally-hosted dashboard of these results."
The Ask skill runs the query against live data and returns a complete answer with the correct data, not a pointer to a table she must query herself. Claude then uses those results to create interactive charts, provides the HTML file for local hosting, and summarizes the findings.
The Automate skill then keeps the momentum going by building automation that sends weekly reports to her and the entire team, so everyone can track changes without anyone remembering to rerun the process.
Four Skills, one quick conversation. Watch it in action in the following demo:
The repo is live at github.com/Alation/alation-plugins and is Apache 2.0 licensed.
Install the plugin in Claude Code or Claude Cowork, point it at your Alation instance, and ask it a question about your data. That's it. If you're the kind of person who reads blog posts like this, you'll know within five minutes whether this is real.
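For reference, installation in Claude Code typically runs through its plugin commands; the marketplace path below comes from the repo named above, and the exact plugin name may differ as the repo evolves:

```
/plugin marketplace add Alation/alation-plugins
/plugin install <plugin-name>
```

After installing, run the Setup skill to point it at your Alation instance and authenticate.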
Gemini CLI and OpenAI Codex support is coming soon, as are more Alation Skills. Build a skill once, and it travels with the standard — not with a specific vendor's roadmap.
We're building in the open because the problems we're solving aren't Alation-specific. Every data team is rebuilding the same primitives in different wrappers. Fork it. Extend it. Tell us what workflows you wish existed.
Remember, your catalog was never the problem. It was always full of valuable knowledge, but one context-switch away from being truly embraced by those who need it most. MCP brought the catalog closer. Skills gave AI the map. And now, for the first time, the map gets better every time it’s used.