
How Enterprise Data Teams Are Using Conversational Agents

When we launched AIDA, Anomalo’s Intelligent Data Analyst, we assumed data teams would use it the way every text-to-SQL tool gets used: by asking questions about their data in plain language. But when we analyzed thousands of interactions across one of our largest enterprise deployments, we found that ad-hoc queries weren’t even in the top three use cases.

The number one use case, at roughly a third of all conversations, was creating and configuring data quality checks through natural language. This usage data confirmed something we’ve witnessed across enterprises: data teams don’t want a better version of any single tool. They want an intelligent system that can handle their data operations. What we were seeing in AIDA conversations was the early behavior of teams ready for a fundamentally different way of working with data.

To understand usage patterns, we looked across thousands of messages and conversations to see how users were interacting with AIDA and what they were accomplishing. The patterns we found fundamentally reframed how we think about what AIDA is and who it is for. Six distinct use cases emerged, and the order tells a story even we didn’t expect.

1. Building Data Monitoring Through Conversation

The most common thing enterprise users do with AIDA isn’t asking a question; it’s building scalable monitoring. Teams describe what “good data” looks like in plain language, then ask AIDA to turn that description into a live, configured data quality check. Some users paste SQL logic and ask AIDA to create a custom validation rule. Others set up volume tie-out checks between source systems to catch dropped records. They configure key metrics without ever opening a configuration screen.

Other, more sophisticated users are batch-generating checks across dozens of columns programmatically through AIDA’s SDK integration, turning what used to be a multi-day configuration project into an afternoon conversation.
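As a rough illustration of that batch-generation pattern, here is a minimal Python sketch. The `build_not_null_check` helper and the configuration shape are hypothetical, invented for this example; they are not Anomalo’s actual SDK surface.

```python
# Hypothetical sketch: generate a data quality check config for each
# critical column in one loop, instead of configuring each by hand.
# The helper and config fields below are illustrative, not a real SDK.

CRITICAL_COLUMNS = ["order_id", "customer_id", "order_total", "created_at"]

def build_not_null_check(table: str, column: str, max_null_pct: float = 0.0) -> dict:
    """Return an illustrative check configuration for one column."""
    return {
        "table": table,
        "type": "null_rate",
        "column": column,
        "threshold": max_null_pct,  # fail if the null rate exceeds this
        "name": f"{table}.{column} null-rate <= {max_null_pct}",
    }

# One small loop replaces dozens of manual configuration sessions.
checks = [build_not_null_check("sales.orders", col) for col in CRITICAL_COLUMNS]

for check in checks:
    print(check["name"])
```

The point isn’t the specific API; it’s that once check creation is programmable (or conversational), coverage scales with a loop rather than with analyst hours.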

What really drove this use case home: at one enterprise deployment, the team has never created a single check through our traditional UI. Their entire data quality monitoring program, every rule, every threshold, every validation, was built through natural language conversations with AIDA. What we’re seeing is a collapse in the distance between knowing what you need to monitor and actually monitoring it. In most organizations, the gap between a data engineer saying “we should really have a check for that” and that check being live in production can be days or even weeks. With AIDA, it’s the length of a conversation.

2. Understanding What Their Data Is Telling Them 

The second pattern was interpretation. When a user sees a chart or visualization within Anomalo, they click “Explain” to have AIDA break down what they’re seeing in plain English to understand what changed, by how much, and whether it’s unusual. 

Users would get the initial explanation and then start pulling the thread. “Drill into the West region.” “Compare this to the same week last quarter.” “Is this pattern showing up in the other table too?” What starts as a one-click explanation turns into an iterative analytical session, sometimes ten or twelve messages deep.

This works because AIDA isn’t just reading the chart. It knows the table’s full statistical profile, what monitoring is configured, and what the historical baseline looks like. When it says a 15% drop in row volume is unusual, that assessment is grounded in months of profiling data. It’s not guessing based on the last few data points. If you’ve ever tried to get that same context from a general-purpose chatbot, you know the difference. You spend half the conversation explaining the table, the other half correcting bad assumptions. AIDA already knows.
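To make “grounded in months of profiling data” concrete, here is a minimal sketch of one way to judge whether a volume drop is unusual: a simple z-score against historical daily row counts. This is a stand-in for real profiling, not Anomalo’s actual detection logic, and the numbers are made up.

```python
import statistics

# Minimal sketch: is today's row count unusual relative to history?
# A z-score over past daily counts stands in for a real profiling baseline.

history = [10_120, 9_980, 10_340, 10_050, 10_210, 9_890, 10_160]  # past daily row counts
today = 8_600  # roughly a 15% drop from the historical mean

mean = statistics.fmean(history)
stdev = statistics.stdev(history)
z = (today - mean) / stdev

# Against a stable baseline, a 15% drop sits many standard deviations
# from normal, so it gets flagged; the same drop against a noisy
# baseline might not.
is_unusual = abs(z) > 3.0
print(f"z-score: {z:.1f}, unusual: {is_unusual}")
```

The difference between this and eyeballing the last few points is the baseline: with enough history, the same percentage drop can be either routine noise or a clear anomaly, and only the profile can tell you which.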

3. Investigating When Something Breaks 

There’s a specific feeling that every data professional recognizes. You get the alert. Something failed. A check is red. And now you have to figure out why, which means opening the monitoring dashboard, querying the underlying data, cross-referencing the check configuration, checking whether someone changed something upstream, and slowly assembling a hypothesis about what went wrong. If you’re lucky, it takes 30 minutes. If you’re not, it takes half a day.

What we found is that teams were going to AIDA first. Before the dashboard. Before writing a query. Before messaging a colleague. They’d ask AIDA to diagnose the failure, investigate whether a spike in nulls was a real issue or a known schedule artifact, or analyze the pattern of check run failures across the table’s history.

AIDA can compress that entire triage workflow because it already has access to the check history, the table profile, and the data itself. It doesn’t need you to explain the context. It was there when the context was created.

This pattern also previews something bigger. Anomalo’s Data Issue First Responder Agent will do this investigation autonomously by triaging alerts, assessing severity, and then routing incidents through ServiceNow or Jira. The teams using AIDA for manual investigation today are building the exact workflow they’ll use when the first pass of investigation happens without them.

4. Getting Oriented in Unfamiliar Data

Every analyst knows the feeling of being dropped into a new dataset you’ve never seen. Maybe you just joined the team. Maybe someone in another department asked you to look into something. Before you can answer any actual question, you need to figure out what you’re looking at. What does this table contain? How often does it update? What’s being monitored?

“Tell me about this table” is one of the most common prompts in the entire dataset. AIDA responds with structure, contents, documentation, and monitoring configuration. It’s the briefing you’d normally piece together from three different screens and a Slack thread where someone mentioned the table six months ago.

This works because AIDA draws on Anomalo’s automated profiling, including column-level statistics, freshness patterns, historical baselines, and any documentation generated by the Data Documentation Agent. The response reads less like a schema dump and more like a knowledgeable colleague who’s been working with the data for months.

For anyone who’s ever spent their first two weeks at a new job just trying to understand the data, this one hits.

5. Asking Questions About the Data

This is the use case we all expected. Record counts, latency analysis, trend breakdowns, filtered aggregations. And yes, teams do use AIDA for this, but as we learned, it’s just not the main event.

When users query through AIDA, the interactions are iterative. Someone starts broad (“analyze event latency for this table”), then refines over multiple turns, adding filters, changing dimensions, requesting visualizations. These are analytical sessions that build on themselves, not one-shot queries.

AIDA’s advantage here is context. It knows which tables are monitored, what the columns mean, and what “normal” looks like statistically. It writes queries that get to the answer on the first or second try, where a generic tool might take five iterations because it doesn’t understand the data model.

6. Navigating the Anomalo Platform 

The smallest but perhaps most revealing use case is how teams use AIDA as the interface to Anomalo itself. “How do I clear a failed status?” “How do I trigger a manual run?” “Is there a faster way to reference this table?”

This one might seem minor, but it signals something important about how the most active users relate to the product. They don’t think of AIDA as a chat feature within Anomalo; they think of it as the way they use Anomalo.

What The Usage Data Taught Us

What surprised us most is that nearly 70% of all conversations involved AIDA actually doing something, from creating a check to investigating failures. Users are putting AIDA to work, not just having it answer questions.

This reinforces that data teams have an intelligence gap, not a tool shortage. Their warehouse doesn’t know what “normal” looks like. Their BI tool doesn’t know why something changed. Their catalog has metadata but doesn’t have the context. Each of these tools is good at what it does, but the work that happens between them, the monitoring, the investigation, the documentation, has always fallen to humans. Anomalo fills that gap. Not by replacing your stack, but by adding the layer of understanding and trust that makes everything in it work better for both humans and agents.

From Conversations to Self-Driving Data

AIDA’s usage proves that when data teams get an AI that truly understands their environment, they immediately push it toward autonomous operation. They don’t want to ask and wait. They want the system to watch, investigate, surface what matters, and act, with humans setting direction rather than doing the manual work. 

AIDA ties it all together as the platform’s organizational memory. Every correction, every piece of context, every follow-up feeds back into the system and makes every agent smarter. That compounding intelligence is what makes the system more valuable the longer it runs, not just for Anomalo, but for everything downstream that depends on trustworthy data.

Data quality was the foundation. AIDA was the bridge. Self-Driving Data, where your team sets the direction and the system does the work, is where we’ll take you next. 

 
