The wall came down
For a long time, SQL was the wall between a question and an answer. If you wanted a number, you waited on the data team. Submit a ticket. Wait three days. Get something back that half-answered what you asked.
That wall is gone. It didn't erode. AI kicked it over.
Self-service analytics finally showed up, by the way. We spent a decade pitching it with Looker and Tableau and Mode and it never really worked. AI delivered it in about eighteen months. Mostly by accident.
Now everyone's an analyst. Sales pulls their own pipeline numbers. Marketing builds their own funnel. Someone in product asked an agent about churn this morning and is staring at a chart I've never seen.
The part I didn't expect to miss
When someone from sales used to walk over and ask me for a pipeline number, there was a conversation. What's this for? The QBR? Then you actually want the weighted version. And you probably want to exclude the partner deals - finance pulls those out for board reporting.
That conversation was doing a lot of work. It was translating a business question into the right data question. It was catching the assumptions before they got baked into a slide.
AI doesn't have that conversation. It takes the question at face value and answers it. So you get an answer to the question you asked, which is often not the question you meant.
Nobody notices until the number shows up wrong somewhere that matters. Always a great time to find out.
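To make the ambiguity concrete, here's a minimal sketch of how one "pipeline number" question has at least three defensible answers. The deal amounts, stage probabilities, and partner flags below are invented for illustration:

```python
# Hypothetical deal records: (amount, stage_probability, is_partner_deal)
deals = [
    (100_000, 0.9, False),
    (250_000, 0.3, False),
    (80_000, 0.6, True),   # partner deal: finance excludes these for the board
]

# "What's our pipeline?" -- three different numbers, same question.
raw_pipeline = sum(amount for amount, _, _ in deals)
weighted_pipeline = sum(amount * prob for amount, prob, _ in deals)
weighted_no_partner = sum(
    amount * prob for amount, prob, is_partner in deals if not is_partner
)

print(raw_pipeline)         # 430000 -- what a literal reading returns
print(weighted_pipeline)    # 213000 -- what the QBR usually wants
print(weighted_no_partner)  # 165000 -- what goes in the board deck
```

An AI taking the question at face value will happily return the first number. The conversation at the desk was what steered people to the third.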
The bottleneck moved
So if queries aren't the hard part anymore, what is?
The stuff underneath. Data models. Semantic layers. Definitions. The docs that describe how the business actually works, not how it looks on an org chart.
AI is only as good as what it's pointing at. When the underlying model is sharp, it's genuinely magic. When it's messy, AI makes the mess faster. Same ingredients, higher speed.
A fuzzy data model used to be survivable because humans patched around it. They knew which fields to trust on which day. They'd double-check the weird one. They'd add a caveat when the number deserved one.
AI doesn't caveat. AI just answers.
Who's wrong when the AI is wrong?
This one I keep turning over.
When an analyst produced a bad number, there was a person to go ask. They'd look at it, fix it, usually learn something. The mistake had a home.
When AI produces a bad number, the trail dissolves. Nobody was watching. The AI doesn't remember it picked one of three "revenue" fields last Tuesday. The person who got the answer moved on. The person who'll catch it in the board deck wasn't in the loop.
Most orgs don't have a playbook for this yet. I don't either, honestly. But I think the companies that figure it out will stop treating AI-generated numbers like answers and start treating them like claims. Something you can produce fast, but still have to back up before you act on it.
The role is changing
The data team's job used to be "get the number." That's been shifting for a while. AI just made it impossible to pretend otherwise.
The job now looks more like knowledge architecture than SQL engineering. Good definitions. Clean models. Docs that cover what the business actually does, not the polished version. Clear answers to "what does this mean when someone phrases the question this way?"
It doesn't demo well. You can't screenshot an ontology for the board. It's not the kind of work that gets a LinkedIn post.
But it's the whole game now.
Where this leaves us
I'm not anti-AI. I use these tools all day. I'd never go back.
But speed without trust is just faster wrong answers. And wrong answers at the speed of AI are a new kind of problem, because the numbers still look fine on the slide. They're just not right.
I don't think I've figured this out. Nobody has yet. The teams I see handling it well aren't the ones with the flashiest AI stack. They're the ones quietly redoing the foundations. Fixing definitions. Cleaning up models. Writing docs a human would actually read.
The stuff nobody brags about. Which is probably why it's working.
The bottleneck moved. Most companies haven't clocked it yet.