Microsoft opened its recent Copilot & AI Agents Summit with an unusually candid premise: the work isn’t slowing down, but people already are. According to the company’s 2025 Work Trend Index, leaders feel growing pressure to raise productivity even as most workers say they lack the time and energy to meet current demands.
Matthew Duncan, Microsoft’s head of future-of-work thought leadership, distilled that tension into a single line that framed the event: “We’re not running out of work, but we are running out of human capacity to do it.”
The “Frontier Firm” answer
Microsoft’s proposed solution to that capacity gap is a new organizational archetype it calls the “frontier firm” — a business that treats AI agents as a second workforce and reorganizes around them, rather than simply layering Copilot onto existing processes.
In Microsoft’s language, intelligence becomes “on tap.” Human–agent teams start to reshape the org chart, and every knowledge worker becomes an “agent boss” who can brief, supervise and correct digital colleagues. The goal is an “AI-operated but human-led” system where agents handle routine execution while people retain judgment and accountability.
On paper, leadership appetite for such systems appears strong. In the 2025 Work Trend Index, Microsoft reports that 82% of leaders see this year as pivotal for rethinking key aspects of strategy and operations, and the same percentage say they’re confident they’ll use digital labor to expand workforce capacity in the next 12 to 18 months.
Sales and service: Where agents are already changing work
In sales, Microsoft positions AI agents as a way to claw back time from low-value admin. Based on Microsoft’s own Copilot usage analytics, the company says heavy users of Microsoft 365 Copilot within its sales organization saw a 9.4% increase in revenue per seller and a 20% increase in close rates.
Microsoft frames these results as its own experience rather than a guaranteed outcome, but presents them as evidence of what is possible in large, data-rich sales teams.
The pattern Microsoft promotes is consistent: agents summarize accounts, surface pipeline risks and suggest next-best actions, while humans retain ownership of strategy, negotiation and customer relationships.
Customer service is where the idea of a “second workforce” feels most mature. Here, Microsoft points to its own Customer Service and Support (CSS) organization. After consolidating 16 different systems and more than 500 individual tools into Dynamics 365 and layering in generative AI via Dynamics 365 Contact Center, Microsoft reports a 31% increase in first-call resolution and a 20% reduction in misrouted cases.
Behind the scenes sits a layered agent architecture: some agents classify customer intent and route tickets. Others draft and update knowledge articles. Still others assist human agents in real time with suggested replies or next steps. Humans still handle edge cases and escalation, but pattern recognition and standard steps have quietly shifted to AI.
The invisible work that makes or breaks AI agents
What Microsoft emphasizes just as strongly is what these examples don’t show: the groundwork required to make agents useful at scale.
Software licenses are not enough. Success demands paying down what the company describes as “design, data and security debt.”
Design comes first. Organizations need to map workflows step by step and decide where agents can act autonomously, where they need human approval and where automation must stop entirely. Many enterprises have never been forced to make those boundaries explicit.
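Making those boundaries explicit can start with something as simple as a policy table. The sketch below is a hypothetical example — the step names and their levels are invented for illustration — but it captures the key design decision: every workflow step gets an explicit autonomy level, and any unmapped step defaults to human-only:

```python
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS = "agent acts without review"
    APPROVAL = "agent proposes, human approves"
    HUMAN_ONLY = "automation must stop here"

# Hypothetical workflow map: each step carries an explicit boundary.
POLICY = {
    "summarize_account": Autonomy.AUTONOMOUS,
    "draft_followup_email": Autonomy.APPROVAL,
    "issue_refund": Autonomy.APPROVAL,
    "close_customer_account": Autonomy.HUMAN_ONLY,
}

def may_act_alone(step: str) -> bool:
    """An agent may act unreviewed only on explicitly autonomous steps.

    Unknown steps default to HUMAN_ONLY: an unmapped step is exactly the
    kind of boundary the organization has never been forced to make explicit.
    """
    return POLICY.get(step, Autonomy.HUMAN_ONLY) is Autonomy.AUTONOMOUS
```

The deliberate design choice here is the fail-closed default: automation stops wherever the workflow map is silent, rather than letting an agent improvise.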
Data is the next barrier. Many of the Copilot and agent scenarios shown at Ignite and the Summit rely on Microsoft Graph data and what the company calls “intelligence on tap” — a unified view of documents, emails, meetings and business records. That level of cohesion is still rare in organizations running fragmented CRMs, regional ERPs, legacy ticketing tools and pockets of shadow IT. In those environments, agents work with partial context by default, increasing the risk of confident decisions based on incomplete or outdated information.
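One practical defense against confident decisions on partial context is to check source coverage and freshness before an agent is allowed to act. The guard below is a hypothetical sketch, not a Microsoft Graph feature; the required source names and the 24-hour staleness window are assumptions:

```python
from datetime import datetime, timedelta, timezone

# Assumed data sources an agent's context must cover, and how recent
# each last successful sync must be for the context to count as complete.
REQUIRED_SOURCES = {"crm", "ticketing", "email"}
MAX_AGE = timedelta(hours=24)

def context_is_complete(last_sync: dict[str, datetime]) -> bool:
    """True only if every required source is present and freshly synced.

    A missing or stale source means the agent would be acting on partial
    context, so the caller should fall back to human review instead.
    """
    now = datetime.now(timezone.utc)
    return REQUIRED_SOURCES <= last_sync.keys() and all(
        now - last_sync[s] <= MAX_AGE for s in REQUIRED_SOURCES
    )
```

In a fragmented estate of regional ERPs and shadow IT, a check like this would fail often — which is precisely the point: it surfaces the data debt instead of letting agents paper over it.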
Security and governance may be the most unforgiving constraint. Microsoft’s 2024 Data Security Index highlights a sharp rise in incidents linked to AI tools: data security incidents tied to the use of AI applications rose from 27% in 2023 to 40% in 2024. Those findings are now frequently echoed in partner guidance and security briefings around Copilot deployments.
The takeaway
For leaders, the message is clear: AI will scale weaknesses as quickly as it scales strengths. Becoming a frontier firm is not about replacing people but about fixing the organizational plumbing — especially data flows and Zero Trust security — before automation spreads.
As Duncan put it, the goal is not an AI-run company. It’s an AI-operated, human-led one.