A few weeks ago, an analyst at one of our Push customers ran a query that surprised both of us. It used ClickHouse's arrayJoin, syntax I wouldn't have known how to write myself despite spending over a decade writing SQL. The AI generated it from a plain-English question, it ran correctly on the first try, and the whole thing cost less than a penny.
That wouldn't have been possible two years ago. And I'm part of the problem. We're building Push to be the best analyst we've ever worked with, which means we're actively automating ourselves out of the job we used to have.
The Skill That Built an Industry
For the better part of two decades, the data profession was built on a specific form of scarcity: technical fluency. Organizations generated massive amounts of data but couldn't make sense of it without people who could write SQL, Python, R, and navigate the growing stack of tools like dbt that turned raw data into something usable. The people who could translate business questions into working queries were genuinely hard to find, and that scarcity created entire career paths, departments, and an analytics industry that produced multiple unicorns and billions in acquisitions built around their expertise.
I saw this up close at Periscope Data. The product was "Type SQL, Get Charts" and the magic was the speed of going from a SQL query to a chart to a shareable dashboard. I spent years doing demos, advising on data strategy, and supporting customers across sales, marketing, and CS at thousands of companies. The pattern was always the same: the analysts who could write SQL became the most valued people at their organizations. They were the ones everybody needed, and everyone else filed tickets and waited.
The barrier was always the syntax itself, not the concepts or the data literacy. The technical act of translating "how are our Q3 cohorts retaining?" into a valid query against a specific schema was where people got stuck. The gap between the people who could ask the question and the people who could write the query defined the data team's role for years. It was their job security, and it was also their bottleneck.
From Valid Meaningless SQL to Automatic Meaningful SQL
When ChatGPT launched, it could write SQL. That was impressive on its own. But if you actually ran those queries against a production database, the results were mostly useless. The syntax was correct and the structure made sense, but the output was meaningless because the model had no idea what your data looked like, which tables mattered, or what conditions were needed to get an accurate answer.
What happened since then is more interesting than the models just getting smarter. The products being built around AI have started replicating the workflow that a good analyst actually follows. Before writing a query, a good analyst checks the data to understand its shape. They figure out which tables are the right ones to use and what conditions are needed to return an accurate result. They build context about the business before they write a single line of code. The best AI agents now do the same thing, working with the data infrastructure to understand the environment before generating a query, not just generating syntax in a vacuum.
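That analyst workflow (check which tables exist, look at their shape, then write the query) can be sketched in a few lines. This is a toy illustration against SQLite, not how any particular product implements it; the `generate_sql` step stands in for the model call, and the table and question are invented for the demo.

```python
import sqlite3

def build_context(conn):
    """Gather the context a careful analyst collects before querying:
    which tables exist, their columns, and a few sample rows."""
    context = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        cols = [c[1] for c in conn.execute(f"PRAGMA table_info({t})")]
        sample = conn.execute(f"SELECT * FROM {t} LIMIT 3").fetchall()
        context[t] = {"columns": cols, "sample": sample}
    return context

def generate_sql(question, context):
    # Stand-in for the model call: a real agent would hand `context`
    # to an LLM. Here we hard-code the answer for the demo question,
    # but note the agent verifies the table exists before writing SQL.
    assert "orders" in context
    return "SELECT status, COUNT(*) FROM orders GROUP BY status"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "paid"), (2, "paid"), (3, "refunded")])

ctx = build_context(conn)
sql = generate_sql("How many orders per status?", ctx)
print(sorted(conn.execute(sql).fetchall()))  # [('paid', 2), ('refunded', 1)]
```

The difference between this and generating syntax in a vacuum is the `build_context` step: the query is written against what the data actually looks like, not against a guess.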
That is what moved AI from writing valid SQL to writing meaningful SQL. It was not one model release that flipped a switch. It was products building the same analytical workflow that data teams have been following for years, and giving AI the tools to execute it. Sometimes it takes a nudge, a thoughtful system prompt, or a design decision by the product to keep the agent on track. But the trajectory is clear, and each generation of these systems gets closer to replicating what a strong analyst does naturally.
The speed of that improvement is going to reshape what data teams are for. Not because the skill stops mattering, but because it stops being scarce.
Everyone Is Getting an Expert Analyst
Over the past year, something has shifted in how non-data people interact with their company data. Product managers, sales ops leads, finance directors, even C-suite executives are starting to query data directly through AI tools. They ask questions in plain English and get back working SQL that, more often than not, produces useful results.
The AI analyst is sometimes naive in ways that matter. It will query a deprecated table, use a revenue definition that finance abandoned two quarters ago, or miss the nuance that active users means something different in the mobile app than on web. But that naivety is being solved through better memory, documentation, and semantic layers that give the AI context about your specific data environment.
What is strange is that even as this capability spreads, the outputs from most data teams have not changed. The same dashboards, the same weekly reports, the same slide decks with the same bar charts. The tools are dramatically better, but the value being delivered to the business looks almost identical. And that is where the real question starts to form: if the broader organization is increasingly able to answer their own data questions, what should the data team be spending their time on instead?
The Question Nobody Wants to Ask
If every employee has an expert analyst on demand, what does the actual data team own?
I will be honest, the crisis here is real for me. I have spent over a decade in this industry. I have written millions of lines of SQL. And AI does it better, faster, and at a scale I can never achieve. There is a feeling of existential dread that comes with watching your core skill become a commodity. It lasts until you start looking for the things AI cannot do on its own.
Most of the industry is avoiding this question. We are talking about AI copilots and productivity gains and doing more with less. We are not talking about the deeper issue.
My first instinct was the same as most people in data: I understand the business in ways AI does not. I know that the Q3 numbers look weird because of a migration. I know that revenue in the CRM means something different than revenue in the finance system. That knowledge took years to build, and AI does not have it. But then I started building the tools that encode exactly that kind of understanding (semantic layers, documentation systems, memory) and realized the uncomfortable part: the whole point of what we are building is to take what is in my head and make it available to an agent. The knowledge I thought was my moat is actually the input to the system that replaces the need for me to be in the loop. So if that is not the moat, what is?
I talked to a data leader this week who has already answered it. He operates as a one-person data team. AI agents handle the implementation: writing queries, building pipelines, running analysis. His job is designing strategic metric frameworks. He decides what gets measured, how it gets defined, what the business should trust. The agents do the rest.
The Answer Is Architecture
The data profession is going through the same evolution that software engineering went through over the last decade. In software, the question used to be "can you code?" Now it is "can you design systems?" Writing code is table stakes. Architecting reliable, scalable, maintainable systems is what separates good engineers from great ones.
Data is on the same trajectory. The question is shifting from can you query to can you architect the environment that makes every AI-generated query trustworthy.
Every employee now has an expert analyst. Your job is to make that analyst not just fast, but right. And not just right today, but reliably right as the business changes.
In practice, that means the data team's real value is in curation, governance, and architecture:
Metric frameworks. Defining what revenue actually means across the organization. Making sure that when AI generates a query about revenue, it is using the right definition, the right filters, the right business logic.
Semantic layers. Building the abstraction that sits between raw data and the questions people ask. Dimensions, measures, relationships, hierarchies. The structure that turns a pile of tables into a model of how the business actually works.
Data quality and trust. Ensuring that the data AI reasons over is accurate, timely, and well-documented.
Curation of context. This is the big one. Your most valuable business context does not live exclusively in your data warehouse. It lives in Slack threads about why a campaign was paused. In Notion docs explaining a pricing change. In CRM notes about a churned customer. The data team's job is becoming the curation and architecture of all of that context, structured and unstructured, so that every AI analyst across the organization reasons about the business accurately.
This is the shift from query executor to context manager. You are not being replaced. You are becoming the person who makes the replacement trustworthy.
The Opportunity Is Massive
Here is what gets me excited when I start thinking about what is becoming possible. For years, working with data meant aggregating a warehouse into measures grouped by dimensions, and the job was visualizing that data in charts and dashboards. That was the ceiling. But the ceiling is gone now.
AI can reason across structured data, unstructured data, and third-party systems. But to do that well, the definition of what it means to model your data has to change too. Modeling now means building metric trees that capture how your KPIs relate to each other, incorporating unstructured context alongside structured data, and treating core business entities like customers, products, and teams as first-class pillars that everything else connects through.
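A metric tree can be modeled as KPIs whose values derive from their children, so the relationship between a headline number and its drivers is explicit rather than implied. A minimal sketch with invented numbers (revenue decomposed into active customers times ARPU; either child could decompose further):

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class MetricNode:
    """A node in a metric tree: leaves carry values, parents roll
    their children up through an explicit combining function."""
    name: str
    value: Optional[float] = None
    children: list = field(default_factory=list)
    combine: Optional[Callable] = None

    def compute(self):
        if self.children:
            return self.combine(*[c.compute() for c in self.children])
        return self.value

# revenue = active_customers * ARPU (illustrative figures)
tree = MetricNode(
    name="revenue",
    children=[
        MetricNode("active_customers", value=1200),
        MetricNode("arpu", value=85.0),
    ],
    combine=lambda customers, arpu: customers * arpu,
)

print(tree.compute())  # 102000.0
```

The payoff is that "why did revenue move?" becomes a walk down the tree instead of a fresh investigation each time.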
Someone has to architect that environment, and that someone is the data team. The role is not shrinking. The definition of what data means is expanding, and the people who can architect that broader environment are going to be more valuable than the people who could write the fastest query were.
The Gap
The frustrating part is that most of the industry has not reimagined what is possible. The conversation is still about making the old workflow faster: quicker copilots, more automated dashboards, the same weekly reports generated with less effort. These are all improvements to a model that was built for a world where data access was scarce and the data team was the bottleneck. That world is ending, and optimizing within it is not going to be enough.
The opportunity starts with redefining what data means in the first place. For most of the industry's history, data meant what lived in the warehouse: cleaned, modeled, ready to query. But the context that drives real decisions is distributed across dozens of systems, and most of it will never be centralized. It does not need to be. When you stop trying to centralize everything and start connecting it instead, the data team's scope expands well beyond the warehouse.
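Connecting rather than centralizing can be as simple as an interface that asks each system what it knows at question time. The connectors below are stubs standing in for real integrations (Slack, Notion, a CRM); the point is the shape of the pattern, not the integrations themselves.

```python
# Stub connectors standing in for real integrations. Each answers
# one question: "what do you know about this entity?"
def slack_context(entity):
    return [f"Slack: campaign for {entity} paused pending budget review"]

def notion_context(entity):
    return [f"Notion: pricing for {entity} changed in Q3"]

def crm_context(entity):
    return [f"CRM: {entity} flagged as churn risk by the account team"]

SOURCES = [slack_context, notion_context, crm_context]

def gather_context(entity):
    """Federate context at question time instead of centralizing it:
    ask every connected system and assemble what comes back."""
    snippets = []
    for source in SOURCES:
        snippets.extend(source(entity))
    return snippets

for line in gather_context("Acme Corp"):
    print(line)
```

Nothing here lives in the warehouse, and none of it needs to; the data team's job in this model is deciding which sources are connected and trusted.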
That is what drove us to start Push.ai. There had to be more we could do with data beyond building yet another dashboarding tool. But this is not about any one product. It is about whether the industry is willing to rethink what data teams are for.
Let's Talk About This
I have been thinking about this a lot and I know I do not have it all figured out. If you are navigating this shift, or if you think I am overstating it, I would genuinely love to hear how you see it.
Britton Stamper
Britton is the CEO of Push.ai and oversees Growth and Vision. He's been a passionate builder, analyst and designer who loves all things data products and growth. You can find him reading books at a coffee shop or finding winning strategies in board games and board rooms.
