
Product analytics with AI: from query to conversation

Product analytics with AI is the shift from query to conversation. Instead of building dashboards, you ask questions. Here's why that change matters more than it sounds, and how to use it well.

Omar
May 7, 2026 · 8 min read


There's a moment most PMs know well. You have a question about your product. You think about the answer for ten seconds, decide it would be useful, and then realise getting it will take three days. So you don't ask it.

Multiply that by every PM, every week, every quarter. That's the cost of analytics that needs translation. Product analytics with AI changes the math by removing the translation step. You ask, you get the answer, you move on.

This post is about what that shift actually looks like in practice, why some teams get a lot from it and others get a little, and how to tell which side you're on.

What product analytics with AI really changes

The simple version: you stop writing queries and start having conversations with your data.

The deeper version is about who can ask what. In traditional analytics, the people with the questions and the people with the SQL are different people. The questions wait in a queue. The answers arrive after the moment that prompted them. Half of them never get asked at all.

Product analytics with AI removes the queue. The PM can ask. The designer can ask. The growth lead can ask. They get answers in plain language, with the chart attached. That's the whole shift.

The conversational analytics market is growing fast. Industry research projects it will grow from $14.3B in 2025 to over $41B by 2030, driven by exactly this democratisation pattern. The point is not the market size. The point is that "ask, don't query" is now table stakes.

The four conversations product analytics with AI is good at

Not every analytics question fits a conversation. Some are still better as dashboards. The ones below are where the conversation format pays off the most.

The exploratory conversation

You have a hunch. Maybe users on mobile activate slower. Maybe the new pricing page is hurting conversion. You don't want to build a dashboard, you want to test the hunch.

In a query world, this requires translating the hunch into SQL, getting it past the data team, and waiting. In a conversational world, you type the hunch, see the answer, and either move on or dig deeper. The cost of being curious drops to zero, which means people are curious more often.

The follow-up conversation

The first chart raises the next question. "Conversion is down in Spain" is not an insight on its own, it's the start of a conversation. Why? Which channel? Which device? When did it start?

Each follow-up in a query world is another ticket. In a conversation, you keep typing. The system holds context. It knows you're still talking about Spain. The conversation keeps going until you either understand the cause or know what you'd need to look at next.

The status conversation

Are we on track for our OKR? Did the release we shipped Tuesday improve the metric we wanted? How does this week compare to last?

These are short conversations that happen every week. They don't need a dashboard, they need an answer. AI handles them well because the question shape is predictable and the chart is usually small.

The diagnostic conversation

Something broke. Something dropped. Something spiked. You have ten minutes to figure out what's going on before standup.

Diagnostic conversations are where AI analytics can really shine, because the question chain is fast and tightly scoped. "Show me yesterday's drop in signups, broken down by channel. Now compare to the week before. Now look at session-level behaviour for the affected channel." That's three queries in 90 seconds.
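Under the hood, a conversational layer turns each follow-up into something like the queries below. This is a minimal, self-contained sketch with made-up signup events and channel names; it only illustrates the shape of the first two follow-ups (break down by channel, compare to the week before), not any particular tool's implementation.

```python
from collections import Counter
from datetime import date

# Hypothetical signup events: (date, channel). In a real tool these rows
# would come from your event store; here they are hard-coded for the sketch.
events = [
    (date(2026, 5, 5), "paid_search"), (date(2026, 5, 5), "organic"),
    (date(2026, 5, 6), "organic"),
    (date(2026, 5, 12), "organic"),  # "yesterday": paid_search is missing
]

def signups_by_channel(events, day):
    """Break one day's signups down by channel (follow-up #1)."""
    return Counter(ch for d, ch in events if d == day)

yesterday = signups_by_channel(events, date(2026, 5, 12))
week_before = signups_by_channel(events, date(2026, 5, 5))

# Follow-up #2: compare to the same day a week earlier. Channels that
# dropped are the first place to look at session-level behaviour.
dropped = [ch for ch in week_before if yesterday[ch] < week_before[ch]]
print(dropped)  # → ['paid_search']
```

The third follow-up, session-level behaviour for the affected channel, is the point where you drop out of aggregates and into replays.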

What product analytics with AI is still bad at

Worth being honest about the gaps. The teams that get value from AI analytics are the ones that know where it falls short.

Long-form, novel research questions. A really new question, with a custom definition and a non-obvious cohort, still needs a human analyst. The AI can help, but it won't lead.

Anything where the data isn't clean. If your event taxonomy is messy or your user identity is broken, the AI will give you confidently wrong answers, faster than ever. Garbage in, faster garbage out.

Anything that requires judgement about what to count. "Is this user active?" is a definition question, not a data question. The AI can apply a definition you give it. It can't tell you which definition is right for your business.

Nielsen Norman Group's research on AI in UX makes this point repeatedly: AI is a force multiplier for expertise, not a replacement for it. The same applies to analytics.

The conversation model changes how teams use data

The most interesting effect of product analytics with AI isn't speed. It's behaviour.

When questions are cheap, people ask more of them. When the answer is in plain language, more people read it. When the chart comes with an explanation, more people share it.

The cumulative effect is that data conversations move out of the data team's tools and into where the team already works: product reviews, design crits, growth standups, weekly check-ins. The analytics layer becomes a participant in the conversation, not a place you visit.

This matches what McKinsey's 2024 State of AI survey found across business functions. The teams getting the most value from gen AI are the ones embedding it into existing workflows, not the ones treating it as a separate tool.

The skills product teams actually need

The skill that mattered most in old-school product analytics was query writing. The skill that matters most in product analytics with AI is question framing.

What does that mean in practice?

Knowing what you're actually asking. "Why is conversion down?" is not really a question; it's a feeling. The good version is something like, "What changed in the last 14 days for users who started checkout but didn't complete?" The AI can answer the second one. It can only guess at the first.

Knowing what counts as an answer. A chart on its own isn't an answer. The answer has a number, a comparison, and a likely cause. If the AI gives you a chart with no comparison, your follow-up is "compared to what?"

Knowing when to stop trusting and start verifying. When the answer matters, click through to the underlying sessions. Adora's Session Replays and Journey Maps are designed for this kind of verification: when you have a hypothesis, you want to see real users moving through the product, not just a number.

Knowing when the answer is wrong. If the AI gives you a number that surprises you, your first job is to figure out whether the AI is wrong or your prior was wrong. This skill compounds. Teams that practice it get faster at trusting the right things.

These are not new skills, exactly. They're the skills good analysts already have. What's new is that more people on the team need them, because more people are asking the questions.
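The well-framed question above is answerable precisely because it maps to a concrete cohort. A minimal sketch of that mapping, with a hypothetical event schema (`checkout_started`, `checkout_completed` are made-up event names for illustration):

```python
from datetime import date, timedelta

# Hypothetical checkout events: (user_id, event_name, date).
events = [
    ("u1", "checkout_started", date(2026, 5, 1)),
    ("u1", "checkout_completed", date(2026, 5, 1)),
    ("u2", "checkout_started", date(2026, 5, 3)),
    ("u3", "checkout_started", date(2026, 4, 10)),  # outside the 14-day window
]

def abandoners(events, today, window_days=14):
    """Users who started checkout in the window but never completed it."""
    cutoff = today - timedelta(days=window_days)
    started = {u for u, name, d in events
               if name == "checkout_started" and d >= cutoff}
    completed = {u for u, name, d in events if name == "checkout_completed"}
    return started - completed

print(sorted(abandoners(events, date(2026, 5, 7))))  # → ['u2']
```

Notice that every judgement call, the window length, which events count as "started" and "completed", lives in the definition, not the data. That is exactly the part the AI can apply but cannot choose for you.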

A short rollout pattern that works

If you're thinking about adding AI to your product analytics setup, the rollout pattern below tends to work.

Step 1: Pick one team and one question they ask weekly. Not the hardest question, the most repeated one. Pipe their data into the conversational layer. Show them they can ask it themselves.

Step 2: Connect to journeys, not just events. Event data alone gets you basic answers. Journey-level data is where AI gets interesting, because the AI can talk about the path, not just the click. Adora's Journey Maps plus Ask Adora are built around this. The combination is what unlocks the harder questions.

Step 3: Replace one meeting. Find the recurring meeting whose main job is to share data the data team prepared. Replace it with a five-minute scan of the live tool. The meeting either gets shorter or disappears. Both are wins.

Step 4: Expand by team, not by feature. Don't try to roll it out to everyone on day one. Add the next team only when the first team is using it without prompting. Earned adoption beats announced adoption.

Where to start

If you read this far and want to test the conversational model, do one thing this week. Take the question your team is currently waiting on and run it through a conversational product analytics tool with your real data. Ask the question. Read the answer. Ask the follow-up.

If the answer holds up to two follow-ups without losing context, you've found something real. If it stalls, you've still learned something useful: the gap between marketing claim and product reality.

Either way, the next conversation about your product gets easier.