
You’ve probably been led to believe this: Buy the right AI tools and your data extraction problems will be solved.
But the truth is that AI alone can’t transform your workflows. The real bottleneck isn’t technical, it’s organizational.
I’ve seen companies spend millions on data extraction, only to watch adoption stall.
Teams didn’t trust the output, refused to change processes, or quietly went back to old ways of working.
The promise of AI often gets crushed under the weight of human resistance. And if you’ve lived through one of these “failed” projects, you already know this pain.
That’s why it’s time to talk about the part of AI transformation no one wants to admit: the people side.
At its core, AI-powered data extraction is designed to do something deceptively simple: turn unstructured information into structured, usable data. AI models combine vision, language, and machine learning to detect document patterns, identify entities, and map consistent schemas.
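To make that "deceptively simple" step concrete, here is a minimal sketch of the pattern: raw document text goes in, typed fields mapped to a fixed schema come out. The regexes stand in for the vision and language models a real system would use, and the `Invoice` schema and sample text are invented for illustration, not taken from any particular product.

```python
import re
from dataclasses import dataclass, asdict

@dataclass
class Invoice:
    """A consistent target schema: every document maps to these fields."""
    invoice_number: str
    total: float

def extract_invoice(text: str) -> Invoice:
    # In production, vision + language models handle layout and OCR;
    # here simple regexes stand in for the entity-detection step.
    number = re.search(r"INV-\d{4}-\d{4}", text)
    amount = re.search(r"\$([\d,]+\.\d{2})", text)
    if not number or not amount:
        raise ValueError("required fields not found")
    return Invoice(
        invoice_number=number.group(0),
        total=float(amount.group(1).replace(",", "")),
    )

raw = "Invoice INV-2024-0031 ... Total due: $1,284.50"
print(asdict(extract_invoice(raw)))
# {'invoice_number': 'INV-2024-0031', 'total': 1284.5}
```

The point of the sketch is the shape of the problem: unstructured input, structured output, one schema applied consistently across every document.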
AI-powered data extraction is no longer an early-stage experiment. It has matured into an operational capability that many enterprises now treat as part of their core data infrastructure.
In the Asia-Pacific region, for instance, 65% of organizations have a formal data strategy for AI, and 77% have adopted distributed architectures to address “data gravity.” Another 72% are optimizing infrastructure to bring computation closer to data. On paper, that should set them up for success.
But a question remains: why do so many AI-powered data extraction projects still fail, despite all this technological progress and adoption?
Technology has evolved, but organizations haven’t caught up. While finance and operations leaders are investing in AI infrastructure and automation tools, the teams and processes around them often remain unchanged.
One of the main reasons automation projects fail is resistance from teams. This resistance stems from different beliefs, values, and perspectives, which means it doesn’t always look the same in every situation.
Before you try to overcome the resistance, you need to understand it. I’ve developed a framework called “The Four Corners of Opposition” that will help you categorize different kinds of resistance.
Here’s a look at each type of resistance with examples and outcomes:
Before I move ahead, spot yourself (and your team) on the grid:
By identifying which camp you or your team falls into, you can understand how to take advantage of AI and automation for each case. We’ll use Docxster as an example to illustrate:
For a project to work, map out how your team feels about AI, what support they need, and which milestones will prove progress. The teams that succeed don’t leave adoption to chance. They plan for it and make the case for every camp inside their organization.
If resistance wins in your organization, there’s a lot at stake. And it might be higher than what you realize. Here’s a look at the organizational stakes of a failed data extraction project:
Failed data extraction will slow down your business and drain your budget. BCG estimates that more than €20 billion in tech investments go to waste every year as a result of the failure to deliver large-scale programs on time, within budget, or within the planned scope.
And the waste isn’t only monetary. Every time a workflow breaks, data goes missing, or extraction scripts need rework, the organization pays twice: once for the technology, and again for the teams fixing it.
One of the ways resistance to change builds in a business is overwhelm. When it comes to AI adoption, less is more. I believe in starting small and building confidence step by step. Ideally, you should start with one workflow at a time.

A snippet from one of my LinkedIn posts where I talk about the MVP approach to AI adoption
A narrow scope allows you to identify the cracks early: where data breaks, where validation fails, where users struggle to trust the output.
Let’s say the finance team picks one repetitive workflow: invoice extraction. They run it through Docxster, training the model and validating results with the human-in-the-loop review. Within a month, accuracy rises and manual review time drops. That small win is more than a process improvement; it’s proof. Other teams begin to ask “Can we try that too?”
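The human-in-the-loop review in that pilot can be sketched in a few lines: auto-accept what the model is confident about, queue the rest for a person. This is an illustration of the general pattern, not Docxster's implementation; the threshold and field names are assumptions you would tune per workflow.

```python
REVIEW_THRESHOLD = 0.90  # assumed cutoff; tune per workflow and risk tolerance

def route_field(field: str, value: str, confidence: float) -> dict:
    """Auto-accept high-confidence extractions; queue the rest for human review."""
    status = "auto_accept" if confidence >= REVIEW_THRESHOLD else "human_review"
    return {"field": field, "value": value, "status": status}

# Hypothetical model output for one invoice:
extractions = [
    ("invoice_number", "INV-2024-0031", 0.98),  # confident -> straight through
    ("total", "1,284.50", 0.71),                # uncertain -> a person checks it
]
for field, value, confidence in extractions:
    print(route_field(field, value, confidence))
```

Each human correction is also a training signal, which is why accuracy rises and review time drops over the course of the pilot.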
A failed data extraction project often leaves behind a collection of tools that once promised a revolution but never really worked out.
A user shared how they struggled with Rossum’s setup and accuracy, especially when dealing with documents in different formats or languages. What started as an automation investment ended up creating more manual work and frustration.

I’ve read similar stories on Reddit about tools like UiPath or Zapier, where the initial excitement fades once teams realize how much effort it takes to make automation work inside real business processes.
And shelfware is just part of the problem. An even bigger problem is the trust erosion that accumulating shelfware causes. It leads to loss of trust in leadership, in technology, and in the idea that automation can actually help. And if you’re not careful, that disappointment can harden into resistance, forming camps like Anti-AI / Pro-Change or Anti-AI / Anti-Change inside your organization.
The way forward isn’t another big rollout—it’s a small, visible win. Pilot programs can act as trust repair mechanisms. In a Reddit thread asking how to get employees to trust AI in their workflows, here’s what one user said:
A comment from Reddit that mentions wanting to see convincing evidence to trust AI in their workflows
This comment makes one thing clear: adoption grows when teams see convincing evidence. Let’s say your finance team reaches 80% accuracy in invoice data extraction.
Instead of another tool announcement, you now have a story your people can see and trust. Share that win, let the team explain what worked, and use that evidence to guide the next rollout.
Pilots done right don’t just test technology—they repair trust, showing your organization that this time, AI really can make work easier.
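A number like "80% accuracy" only repairs trust if it is measured the same way every week. A minimal field-level accuracy check, assuming you keep a small hand-labeled ground-truth set (the field names and values below are invented for illustration):

```python
def field_accuracy(predicted: dict, truth: dict) -> float:
    """Share of ground-truth fields where the extracted value matches exactly."""
    fields = truth.keys()
    correct = sum(predicted.get(f) == truth[f] for f in fields)
    return correct / len(fields)

# Hypothetical ground truth vs. model output for one invoice:
truth = {"invoice_number": "INV-001", "total": "500.00", "vendor": "Acme",
         "date": "2024-03-01", "currency": "USD"}
pred  = {"invoice_number": "INV-001", "total": "500.00", "vendor": "Acme",
         "date": "2024-01-03", "currency": "USD"}  # date got transposed

print(field_accuracy(pred, truth))  # 4 of 5 fields match -> 0.8
```

Averaging this across a fixed validation set gives you a trend line the whole team can see, which is far more persuasive than a one-off demo.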
AI skepticism is one of the toughest barriers to adoption, and it’s not unfounded. Many teams have seen promises of “effortless automation” fall apart in practice. I’ve come across posts where users call AI automation “just another way to sell subscriptions.”

A Reddit post where a user says AI automation feels like just another way for companies to sell subscriptions.
If your team isn’t skeptical yet, they might be after one bad experience. And once that skepticism takes hold, it spreads fast. It shapes how future tools are received, how budgets are defended, and how openly teams engage with anything that sounds like “AI.”
I like the idea of using McKinsey’s Three Es to engage employees in change—Elevate, Empower, and Energize. It’s a simple framework that shifts ownership from leadership to the people who actually make adoption happen.

Take the operations team for example. Start by elevating a few team members who deal with shipment or invoice documents daily. Make them the owners of the pilot.
Using Docxster, they can test the model, flag edge cases, and adjust the workflow to fit how they actually work. These people become your internal champions—the trusted voices who can say, “This actually works for us.”
Then, empower more users to take part in schema setup, validation, and human-in-the-loop review. Docxster’s routing logic and workflow builder give them control over how data flows and who reviews what. The more control they have, the more confident they become in the system’s accuracy.
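"Who reviews what" is ultimately just a set of rules the team itself should own. As a sketch of what that control looks like, here is an invented routing policy in plain code; in practice a tool like Docxster would express this through its workflow builder rather than Python, and every threshold and role name below is an assumption.

```python
def assign_reviewer(doc: dict) -> str:
    """Illustrative routing rules: low-confidence docs and high-value invoices
    get more senior eyes. Roles and cutoffs are invented for this sketch."""
    if doc.get("confidence", 1.0) < 0.80:
        return "extraction_specialist"   # model is unsure -> expert review
    if doc["type"] == "invoice" and doc["total"] > 10_000:
        return "finance_lead"            # high-value -> senior sign-off
    if doc["type"] == "invoice":
        return "ap_clerk"                # routine invoices -> standard queue
    return "ops_queue"                   # everything else

print(assign_reviewer({"type": "invoice", "total": 25_000, "confidence": 0.95}))
# finance_lead
```

When the people doing the reviewing can read, question, and change rules like these, the system stops being a black box and starts being theirs.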
Finally, energize the rest of the function by showcasing what the ops team achieved. That’s how ownership turns into belief—and belief turns into momentum.
A lot of the time, teams aren’t afraid of technology; they’re afraid of being replaced by it. Karen Watts, Founder and CEO of DomiSource, puts it well:
“Change is uncomfortable, and many teams are inherently afraid of technology, especially when they think it might replace their role. I've seen talented employees cling to manual processes not because they're efficient, but because they're familiar. It feels safer to trust their own keystrokes, even when the data proves otherwise.”
This fear often shows up as disengagement. Employees hesitate to adopt new tools or quietly revert to old processes.
Over time, such concerns create informal camps like the Anti-AI / Pro-Change. These employees aren’t against improvement, they just don’t trust “black box” tech. Without clear training and context from leadership, that hesitation grows, and adoption slows.
The solution to this challenge isn’t to push your teams harder; it’s to listen better. Teams need space to test, fail, and refine without fear of being wrong.
The best way to do this is through AI feedback sessions. Set aside time each week for users to share what worked, what failed, and what surprised them. Review real documents, show error patterns, and discuss how the system can improve.
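A simple way to prepare the "error patterns" part of those sessions is to tally what humans actually corrected during the week. The review log below is hypothetical, but the technique, counting corrections by field and error type, gives each session a concrete agenda:

```python
from collections import Counter

# Hypothetical review log: each entry is a correction a human made this week.
review_log = [
    {"doc": "inv_001", "field": "date",  "error": "wrong_format"},
    {"doc": "inv_002", "field": "total", "error": "missed_line_item"},
    {"doc": "inv_003", "field": "date",  "error": "wrong_format"},
    {"doc": "inv_004", "field": "date",  "error": "wrong_format"},
]

# Tally which (field, error) pairs recur -- the agenda for the next session.
pattern_counts = Counter((e["field"], e["error"]) for e in review_log)
for (field, error), n in pattern_counts.most_common():
    print(f"{field}: {error} x{n}")
```

A recurring pattern (here, date formats) points to a fixable cause, and fixing it in front of the team is exactly the kind of visible improvement that keeps people engaged.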
Recognize all outcomes equally:
By normalizing these discussions, small failures become part of the learning curve, not a setback. Teams stay engaged because they see their input shaping the system in real time, and that builds lasting confidence in the technology.
Change-Readiness Checklist:
✅ Do you have a pilot use case?
✅ Are business users involved in setup?
✅ Is feedback from teams being tracked + acted on?
✅ Are success stories being shared internally?
If there’s one thing I’ve learned, it’s that automation succeeds only when teams do. AI on its own can’t fix broken processes or change hesitant minds. It can make work faster and cleaner, but it can’t make teams believe in it.
That’s still a human’s job.
Most teams think the AI adoption debate is about losing jobs or learning how to code. But for the average business user, it’s about neither. In established businesses, there are two competing forces: younger leaders who see automation as an opportunity, and senior leaders who remain cautious about change.
Both want the same thing—to scale without breaking what already works. Neither is wrong. They’re just looking at different kinds of risk.
That’s why I believe the future of data extraction is human-led. Sustainable automation starts with teams who understand what’s broken, not just what’s possible. When technology and change management align, outcomes scale naturally.
If your past AI projects failed, it wasn’t because you chose the wrong model. It was because no one chose to lead the change.
But now? The floor’s yours—use it to your advantage.