A Reasonable Assumption That Leads Somewhere Wrong
“Data is the raw material for AI.” If you’ve worked with AI over the last several years, you’ve likely heard some form of that statement. And when companies first plan how to apply AI to their business, many look to their data team. It can seem logical to place AI ownership within the data function. After all, AI models are trained on data. Data teams (hopefully) understand how to store, structure, move, and govern the data within the organization. When the board asks who should drive the company’s AI strategy, the data organization looks like a natural fit. And so it’s common to see a “Data & AI” label on organizational designs.
But while this assumption is understandable, it can also be counterproductive. For more than a decade, enterprises have been taught to treat data as the core asset for digital transformation. Analytics, dashboards, data warehouses, and governance programs all reinforced the idea that whoever manages data manages the future. As AI moves from experimentation into mainstream implementation, companies carry that same thinking forward. The problem is that AI is not a better analytics engine. It is becoming a different kind of enterprise technology—and requires a different mindset. AI is not merely a consumer of enterprise data. It's a new execution layer for the business. That changes everything about how we should think about AI in the organization.
The Structured Data Worldview and Its Blind Spots
Traditional data teams operate in a world defined by structure. Their tools (data warehouses, data lakes, and BI platforms) are optimized for querying, aggregating, and reporting on well-defined fields in well-organized tables. The value they create flows from the discipline of making messy reality conform to a clean schema.
There is real value in that work, but it reflects a particular model of how information flows through an organization—one that breaks down quickly when applied to AI. Most enterprise information is not structured. Estimates consistently suggest that over 80 percent of enterprise data is unstructured: emails, documents, meeting notes, support tickets, policies, procedures, call summaries, knowledge articles, and the informal context that shapes most real business decisions. Human workers handle this information every day without thinking about it—reading an email, referencing a document, checking a system record, and then taking action. This complex flow across information sources is the environment where AI increasingly needs to operate.
There is a further complication that data-centric frameworks tend to miss: data quality is not just a downstream governance problem. It is inherently linked to the processes and systems that generate data in the first place. If source data is poorly structured or inconsistently captured—often a process or systems failure, not a data team failure—no amount of warehousing sophistication will fully compensate for those issues. Fixing that requires owning the upstream process, not just the downstream pipe.
Agentic AI Makes the Gap Impossible to Ignore
If the limitations of a data-driven approach could be overlooked before, agentic AI has made them impossible to ignore. AI is moving beyond generating analytics and insights for human decision-makers. Agentic AI takes action. It processes invoices, routes inquiries, updates records, coordinates handoffs, and executes multi-step tasks across systems. The stakes are categorically different: if data quality is poor, these systems do not produce bad reports—they make bad decisions and perform unreliable actions.
Fundamentally, an agent that assists with order management, claims processing, employee onboarding, or procurement is not primarily a data problem. It is a process problem. The agent has to understand business rules, sequencing, confidence thresholds, and when to escalate to a human. It must interpret context, handle ambiguity, navigate exceptions, and operate across both structured systems and the unstructured information that surrounds them. None of this is captured in your typical data model. The core impediment to agentic AI deployments is often not data quality—it is process understanding, integration, and execution.
This is why the current mindset creates a specific and serious risk: companies build AI programs that are technically impressive but do not deliver real business value. They produce solutions that summarize information but cannot drive outcomes or earn organizational trust. They optimize data pipelines when they should be redesigning processes. They measure AI readiness in terms of data maturity when they should equally be measuring process readiness—the degree to which workflows are understood, documented, and ready to be augmented by an AI system.
The Better Question: Who Owns the Work?
The question most companies are asking is "Who owns the data?" The more useful question is "Who owns the work?" The most successful AI initiatives will be led by cross-functional teams that bring together process owners, application architects, domain experts, change managers, and data professionals. Structured data teams should absolutely be part of that effort—they are critical to providing trustworthy data securely and reliably. But they should not be the default owners of AI execution simply because AI relies on data.
Data is the foundation. Process, integration, and execution are the building. Companies need both—but right now, many are investing in only one. A practical model is to separate responsibilities into two distinct functions:
- A data engineering capability, focused on data availability, quality, and governance—providing a reliable foundation.
- One or more AI execution teams, aligned to operational context and process knowledge.
This hybrid structure acknowledges that the skills needed to deploy AI effectively are not the same as the skills needed to manage data infrastructure, and that forcing one team to do both can compromise the outcome.
Executing an AI Strategy
The real risk of the current moment is not that companies will fail to invest in AI. It is that they will invest heavily and build the wrong things. They will create centralized AI functions disconnected from frontline realities. They will generate proofs of concept that work on historical datasets but fracture on contact with real users and operational friction. They will assume that because they have invested heavily in data, they are automatically prepared for AI—when in practice they may be prepared for only a narrow slice of it.
Closing the AI execution gap requires a reframe. AI is not merely a data initiative. It is a business operating model change—one that sits at the intersection of data, process, systems integration, and human judgment. Data remains foundational, but it is not the whole foundation. An AI strategy is only effective when it connects business outcomes, data foundations, operating models, and AI governance into a coherent whole.
Companies that recognize this early (that build multidisciplinary ownership structures, invest in process readiness alongside data readiness, and push AI capability closer to the operational work it is designed to transform) will find themselves with a structural advantage that compounds over time. Those that continue to route AI strategy through legacy data functions may find themselves well-governed and thoroughly benchmarked, but perpetually behind.
Ready to move from AI experimentation to real operational impact? Our Data & AI practice helps organizations build trusted data foundations, establish governance, and implement AI that drives measurable business outcomes—responsibly and at scale. Talk to an expert to start the conversation.
This article was originally published on LinkedIn by David Lavin and is republished here with permission.
Tags: Data & AI
April 23, 2026