"If you just take your existing workflow and try to apply advanced AI to it, you're going to weaponize inefficiency."
In Today’s Email:
We're tackling the single biggest reason enterprise agentic AI deployments are failing: organizations are layering agents onto workflows that were designed for humans, instead of redesigning how work actually flows. Deloitte's latest data shows that only 30% of organizations are redesigning key processes around AI, while 37% are using it at a surface level with little or no change to existing operations. Meanwhile, a Harvard Data Science Review study documents 2-10x productivity gains, but only when companies reengineer work with agents as primary actors. We'll explore why the "bolt-on" approach fails, what redesigned work actually looks like, and how to identify which processes are candidates for transformation versus simple automation.
News
1. Employers "Warm Up" to Remote Work to Solve Talent Gaps
After a year of rigid return-to-office (RTO) mandates in 2025, data released this week suggests a strategic reversal is underway. A new report from BioSpace and supporting data from global hiring platforms reveal that employers are once again loosening location requirements to access specialized talent. As of February 2026, 28% of surveyed life sciences and tech employers stated they will now "recruit and hire remote employees regardless of location," a significant pivot from the localization trends of late 2024. The data indicates that while companies still prefer local hubs, the "skills bottleneck" has forced their hand; they are finding it impossible to fill critical senior roles within commuting distance, leading to a pragmatic acceptance of remote work as a competitive necessity rather than just an employee perk.
Key Takeaway: The "RTO vs. Remote" culture war is ending in a truce dictated by supply and demand. If your organization is struggling to fill specialized technical or leadership roles, the bottleneck is likely your zip code. It is time to audit your open requisitions: strictly local hiring is becoming a competitive disadvantage for senior-level talent acquisition.
2. The Shift from "Chatbots" to "Agentic AI"
The conversation around AI capabilities took a sharp turn this week with new reports emerging from China's tech sector regarding DeepSeek's next move. Following the industry-shaking release of the R1 model last year, analysts are now tracking the imminent launch of a "next-generation AI agent" expected in Q1 2026. Unlike previous Large Language Models (LLMs) that simply generate text, these "agentic" systems are designed to autonomously execute multi-step workflows, such as planning a supply chain route or debugging a software module, without constant human prompting. This development signals that 2026 will be the year AI moves from providing information to taking action, intensifying the pressure on Western tech giants to release their own autonomous agent frameworks.
Key Takeaway: We are moving from the "Copilot" era (human-in-the-loop) to the "Agent" era (human-on-the-loop). Leaders need to prepare their workforce not just to prompt AI, but to supervise it. The critical skill set for 2026 will be auditing the outputs of autonomous agents to ensure they are executing complex tasks accurately.
3. The "Silent Substitution": AI-Driven Support Layoffs
While the volume of mass layoffs has stabilized compared to 2024/2025, the nature of job cuts in February 2026 has become more targeted and structural. A fresh wave of "silent" layoffs reported this week specifically impacts customer support and level-one IT roles, with companies explicitly citing AI displacement as the driver. Unlike general cost-cutting, these reductions are directly correlated with the successful deployment of AI customer service platforms that have matured over the last 12 months. Companies are effectively swapping headcount for compute credits, signaling that the "experimental" phase of AI support is over and the "replacement" phase has begun for transactional roles.
Key Takeaway: Job security is now entirely dependent on the complexity of the work. Roles that involve repetitive information retrieval (like Tier 1 Support) are being actively consolidated by AI. The workforce strategy must shift immediately to upskilling these displaced employees into "Tier 2" problem-solvers who handle the complex edge cases that the AI agents cannot resolve.
The Automation Trap
There's a pattern playing out across enterprises right now that should concern every technology and business leader. Organizations are investing heavily in agentic AI, rushing to deploy autonomous agents across their operations, and then wondering why the results are disappointing.
The numbers tell the story clearly. Deloitte's State of AI in the Enterprise 2026 report found that despite 78% of companies claiming to use AI, 80% still report no measurable impact on earnings. Nearly two-thirds of organizations are experimenting with AI agents, but fewer than one in four have successfully scaled them to production. The gap between pilot and production is 2026's defining enterprise challenge, and it has very little to do with the technology itself.
The root cause is deceptively simple. Most organizations are taking workflows that were designed for humans sitting in front of screens, clicking through menus, and manually routing information between systems, and they're assigning agents to do those same tasks in the same sequence. They're automating the status quo. They're paving the cow path instead of building a highway.
This is the automation trap, and it catches organizations precisely because it feels like progress. You've deployed agents. They're doing things. Boxes are getting checked. But the underlying work hasn't changed, which means the underlying inefficiencies, bottlenecks, and failure modes haven't changed either. They've just been handed to a faster worker who will reproduce those problems at machine speed.
Gartner projects that more than 40% of agentic AI projects will fail by 2027, and the primary reason won't be technology limitations. It will be organizations trying to automate broken processes instead of redesigning operations for how agents actually work.
Automation vs. Transformation: The Critical Distinction
Understanding the difference between automation and transformation is essential for getting agentic AI right.
Automation takes an existing process and executes it faster. The workflow stays the same. The decision points stay the same. The handoffs stay the same. You've replaced a human with an agent, but the process itself is untouched. This works well for genuinely simple, well-defined tasks where the current workflow is already efficient. Data entry, form population, standard notifications. These are legitimate automation candidates.
Transformation asks a different question entirely. Instead of "how do we do this faster?" it asks "why do we do it this way at all?" When you redesign work for agents, you're not starting from the existing process map. You're starting from the desired outcome and building backward, taking advantage of what agents can do that humans can't, like operating around the clock, processing thousands of data points simultaneously, and coordinating across dozens of systems in milliseconds.
Consider procurement. In most enterprises, the procurement process was designed around the assumption that a human buyer would review requisitions one at a time, check budget availability, look up approved suppliers, compare quotes, select a vendor, generate a purchase order, route it for approval, and track delivery. Automating that process means assigning an agent to follow those same steps in that same sequence.
Redesigning it means recognizing that the sequential nature of the process was a constraint of human cognition, not a business requirement. A redesigned procurement workflow might have the agent continuously monitoring inventory levels, demand forecasts, and supplier pricing simultaneously. When it identifies an approaching need, it evaluates the full supplier landscape, cross-references contract terms, checks budget authority, assesses delivery risk based on current logistics conditions, and either executes the purchase autonomously within pre-approved parameters or surfaces a decision to a human only when the situation falls outside those parameters. The workflow is shorter, the cycle time collapses, and human attention goes to the exceptions that actually require judgment.
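The "execute within pre-approved parameters, escalate everything else" pattern at the heart of that redesign can be sketched in a few lines. This is a toy illustration, not a reference to any real procurement platform; the thresholds, field names, and `PurchaseRequest` structure are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    item: str
    amount: float          # total cost in dollars
    supplier_risk: float   # 0.0 (low) to 1.0 (high), from a hypothetical risk model
    on_contract: bool      # supplier has a pre-negotiated contract

# Hypothetical pre-approved parameters set by the procurement team.
MAX_AUTONOMOUS_SPEND = 25_000
MAX_SUPPLIER_RISK = 0.3

def route(req: PurchaseRequest) -> str:
    """Execute autonomously inside pre-approved parameters;
    surface everything else to a human for judgment."""
    if (req.amount <= MAX_AUTONOMOUS_SPEND
            and req.on_contract
            and req.supplier_risk <= MAX_SUPPLIER_RISK):
        return "execute"   # agent issues the purchase order itself
    return "escalate"      # human reviews the exception

print(route(PurchaseRequest("packaging film", 8_000, 0.1, True)))   # execute
print(route(PurchaseRequest("custom tooling", 90_000, 0.2, True)))  # escalate
```

The point of the sketch is where human attention goes: not into every requisition, but only into the cases that fall outside parameters the organization has already agreed on.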
That's the difference between a faster version of the old way and a new way entirely.
Why "Bolt-On" Fails at Scale
The bolt-on approach to agentic AI fails for several interconnected reasons, and the failures get worse as you scale.
First, legacy workflows carry embedded assumptions about human limitations. Approval chains exist because humans can't process enough information to make high-quality decisions without hierarchical review. Handoff points exist because humans can't hold enough context to manage an entire process end-to-end. Waiting periods exist because humans need time to review, consider, and respond. When you assign agents to these workflows, you're forcing a system that operates in milliseconds to comply with constraints that exist only because humans operate in hours and days.
Second, the interfaces between steps become bottlenecks. Humans designed workflows around screen-based interactions, email notifications, and document-based approvals. As we explored in "Stop Managing Apps. Start Orchestrating Work." (Dec 3), the application layer itself has become a constraint. Agents don't need screens, email prompts, or PDF attachments. They need APIs, data streams, and programmatic access. Forcing agents to interact through human-designed interfaces is like making a Formula 1 car drive through a school zone. You've eliminated the speed advantage while keeping all of the friction.
Third, and most importantly, bolted-on agents inherit the organizational politics embedded in existing processes. Every enterprise workflow carries scar tissue from past conflicts, compromises, and power dynamics. That extra approval step exists because a VP insisted on visibility three reorganizations ago. That manual review point exists because of a compliance incident in 2019 that has since been addressed by a different control. That data reconciliation step exists because two departments can't agree on a single source of truth. Agents faithfully execute all of this accumulated dysfunction without questioning any of it.
What Redesigned Work Actually Looks Like
The organizations pulling ahead aren't just deploying better technology. They're rethinking what work means when you have a digital workforce operating alongside your human one.
A Winter 2026 article in the Harvard Data Science Review by researchers at DAIN Studios introduced the "Agent OS" concept: an organizational operating system designed with AI agents as primary actors and humans as supervisors, coaches, and exception handlers. Their research documented several compelling examples.
One global industrial firm redesigned its audit reporting process around a multi-agent system. Rather than assigning agents to replicate the existing audit workflow, which involved auditors manually collecting data, cross-referencing regulations, drafting reports, and routing them through layers of review, the company rebuilt the entire process. Agents now continuously monitor compliance data, autonomously generate draft findings, cross-reference regulatory requirements in real time, and produce reports that auditors review and certify rather than create from scratch. Audit reporting time dropped by 92%. Not 10%. Not 30%. Ninety-two percent, because the work itself was redesigned.
A B2B sales team took a similar approach. Instead of using agents to automate the existing process of researching prospects and preparing sales materials (the bolt-on approach), they redesigned how deal preparation works entirely. Agents now generate hundreds of negotiation scenarios and competitive positioning analyses that human sales teams could never produce manually. The sales team shifted from spending 80% of their time gathering information and 20% making decisions to spending nearly all their time on strategy and relationship building. What changed wasn't the speed of the old process. It was the nature of the work itself.
These examples share a common pattern. The organizations didn't start by asking "which tasks can agents do?" They started by asking "what would this function look like if we designed it from scratch, knowing that digital workers could handle any structured analytical or operational task?" That question leads to entirely different answers than the automation question.
Process Archaeology: Understanding Before Redesigning
Before you can redesign work, you need to understand why it's structured the way it is. This is what I call process archaeology: digging beneath the surface of current workflows to identify which elements exist for legitimate business reasons and which are historical artifacts.
Every enterprise process has layers. At the foundation, there are core business requirements: the things that must happen for the organization to deliver value. A customer needs a product. A patient needs a diagnosis. A claim needs to be processed. These requirements don't change regardless of who or what does the work.
Above the core requirements, there are regulatory and compliance requirements. These are real constraints, but they're often embedded in processes in ways that conflate the requirement with a specific implementation. The regulation might require that a qualified person reviews a decision before it takes effect. The process implements that as an email approval chain with a 48-hour SLA. The requirement is real. The implementation is a human artifact.
Above that, there are operational conventions that emerged from practical experience. These often encode genuine wisdom about what can go wrong, but they may encode it in ways that assume human execution. A rule that says "always verify the customer's address before shipping" makes sense when humans are processing orders. When agents have real-time access to address validation APIs and can verify at the moment of order creation, the process step becomes unnecessary because the verification is built into the workflow itself.
Finally, at the top layer, there are political and organizational artifacts. The approval that exists because someone important wanted visibility. The manual handoff that exists because two teams can't agree on system access. The duplicate data entry that exists because nobody has prioritized system integration. These are the layers that should be eliminated entirely in a redesign.
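The address-verification example from the operational-conventions layer shows what "built into the workflow" means concretely: validation moves to the moment of order creation, so no separate downstream check is needed. A minimal sketch, where `validate_address` stands in for a real-time address-validation API and the data is purely illustrative:

```python
# Hypothetical stand-in for a real-time address validation API.
KNOWN_ADDRESSES = {"12 Elm St, Springfield"}

def validate_address(address: str) -> bool:
    return address in KNOWN_ADDRESSES

def create_order(customer: str, address: str) -> dict:
    """Validation happens at order creation, so the old
    'verify the address before shipping' step disappears:
    an invalid order cannot exist in the first place."""
    if not validate_address(address):
        raise ValueError(f"undeliverable address: {address}")
    return {"customer": customer, "address": address, "status": "ready_to_ship"}

order = create_order("Acme Co", "12 Elm St, Springfield")
print(order["status"])  # ready_to_ship
```

The process step isn't automated; it's made structurally unnecessary, which is the difference between layers two and three of the archaeology.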
The danger of skipping process archaeology is real. If you redesign without understanding the layers, you risk eliminating steps that exist for legitimate regulatory or business reasons. But if you treat every existing step as sacred, you end up right back where you started, automating dysfunction.
The Redesign Framework: Five Questions
When evaluating whether a process is a candidate for automation or transformation, five questions cut through the complexity.
The first question is: does this process exist because of a business outcome, or because of a human limitation? If a step exists solely because humans can't process enough information, maintain enough context, or work fast enough to handle it differently, that step is a redesign candidate. If it exists because of a genuine business or regulatory requirement, it needs to stay in some form.
The second question is: how many systems does this process touch? Processes that cross multiple systems are almost always redesign candidates because the handoffs between systems were designed for human operators who log in, check status, and move data between screens. Agents can operate across systems simultaneously, collapsing sequential handoffs into parallel operations.
The third question is: what percentage of this process is exceptions versus routine? If 80% of cases follow the same path, the routine portion is an automation candidate and the exception handling is where human judgment should be concentrated. If every case is different, you're looking at a process that might need agents augmenting human decision-making rather than operating autonomously.
The fourth question is: what happens when this process fails today? If failures are caught quickly and corrected easily, the cost of redesign may not justify the investment. If failures cascade, go undetected, or create significant downstream problems, redesign becomes urgent because agents operating at scale will amplify those failures dramatically.
The fifth question is: who benefits from this process staying the way it is? This is the uncomfortable question. Sometimes processes persist not because they're effective but because someone's role, budget, or organizational influence depends on them. Redesign requires honest assessment of whether resistance to change comes from legitimate concerns or from organizational self-preservation.
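The five questions can be turned into a rough first-pass triage. The sketch below is a toy scorer under loose assumptions: the keys map informally to the questions above, and the weighting is illustrative, not a validated methodology.

```python
def triage(answers: dict) -> str:
    """Map yes/no answers to the five questions onto a rough
    recommendation. Keys and thresholds are illustrative only."""
    redesign_signals = sum([
        answers["exists_due_to_human_limitation"],  # Q1
        answers["touches_many_systems"],            # Q2
        answers["failures_cascade"],                # Q4
        answers["protected_by_politics"],           # Q5
    ])
    if answers["mostly_routine"] and redesign_signals == 0:  # Q3
        return "automate"
    if redesign_signals >= 2:
        return "redesign"
    return "augment"  # agents assist humans rather than run autonomously

example = {
    "exists_due_to_human_limitation": True,
    "touches_many_systems": True,
    "mostly_routine": False,
    "failures_cascade": True,
    "protected_by_politics": False,
}
print(triage(example))  # redesign
```

Even a crude scorer like this forces the useful conversation: making the answers to the five questions explicit, process by process, instead of defaulting everything to automation.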
The Organizational Change Challenge
The hardest part of redesigning work for agents isn't technical. It's organizational.
Deloitte's research found that organizations further along in agentic AI adoption are significantly more likely (66% versus 42% for early adopters) to expect changes in organizational structure and job definitions. Among organizations with extensive agentic adoption, 45% expect reductions in middle management layers. This isn't a technology prediction. It's an organizational transformation prediction.
When you redesign work around agents, you're not just changing processes. You're changing roles, skills, reporting structures, and career paths. The procurement specialist who spent a career learning to evaluate suppliers and negotiate contracts needs to become the person who defines the parameters within which agents evaluate and negotiate, reviews the edge cases agents can't handle, and continuously improves the agent's decision quality. That's a meaningful job, but it's a different job, and the transition requires investment in reskilling, clear communication about how roles are evolving, and genuine organizational commitment to the people affected.
The organizations that handle this well share a few characteristics. They involve the people who do the work in the redesign process. They're transparent about what's changing and why. They invest in building new skills before they need them, not after. And they frame the change as elevation, not replacement. People are moving from routine execution to strategic oversight, from data routing to decision architecture, from process compliance to process design.
The organizations that handle it badly try to sneak the change through. They deploy agents into existing workflows without telling people what's coming, wait for the technology to force organizational restructuring, and then scramble to manage the fallout. This approach fails, and it fails loudly.
The Retrofit vs. Reengineer Decision
Not every process needs full transformation. The practical reality is that organizations have limited capacity for change, and trying to redesign everything simultaneously is a recipe for organizational paralysis.
The right approach is triage. Start by categorizing your key processes into three groups.
The first group contains processes where automation is sufficient. These are well-defined, stable processes with clear inputs and outputs, where the current workflow is reasonably efficient and the primary value of agents is speed and consistency. Automate these. Don't overthink them.
The second group contains processes where redesign is necessary. These are the processes with excessive handoffs, multiple system dependencies, significant exception volumes, or clear evidence that the current workflow was designed around human limitations rather than business requirements. These are your transformation candidates, and they're where the 2-10x productivity gains live.
The third group contains processes that need to be eliminated entirely. These are the processes that exist only because of historical circumstances, organizational politics, or technical debt. The arrival of agentic AI is an opportunity to ask whether these processes need to exist at all, in any form.
The companies that get the best results from agentic AI are the ones that make deliberate choices about which group each process falls into, rather than defaulting to automation for everything.
The Bottom Line
The agentic AI landscape right now is split between two types of organizations. On one side are companies treating agents as faster workers who execute existing processes. They're getting incremental improvements and wondering why the revolution hasn't arrived. On the other side are companies treating the arrival of agents as an opportunity to rethink how work gets done from the ground up. They're getting transformational results.
The data is clear. Only 30% of organizations are redesigning processes around AI. Those organizations are the ones seeing meaningful returns. The other 70% are running faster on the same treadmill.
The choice isn't whether to deploy agentic AI. That train has left the station. The choice is whether you'll use it to do the same things faster, or whether you'll use it as the catalyst to redesign work itself. The organizations that choose transformation over automation won't just be more efficient. They'll be operating in ways that bolt-on organizations simply cannot match.
The question facing every enterprise leader this year is whether they have the courage to look at their carefully constructed processes and ask: is this how work should flow, or is this just how humans had to do it?
---
Understanding how to evaluate your processes for agent readiness is a critical step toward building a truly transformed digital workforce. The Complete Agentic AI Readiness Assessment provides detailed frameworks for auditing your workflows, identifying redesign opportunities, and building a transformation roadmap that balances ambition with organizational capacity. Get your copy on Amazon or learn more at yourdigitalworkforce.com. For organizations ready to move from assessment to action, our AI Blueprint consulting helps translate process audits into practical redesign plans and sustainable change management strategies.

