Managing productivity in the workplace is hard—especially when it comes to gathering the data needed to understand how individuals and teams are really performing. But even that can be easier than facing the difficult conversations those findings sometimes spark.

It’s no surprise, then, that many managers are drawn to tools that promise to automate data collection and surface performance trends—whether they’re tracking progress toward defined goals or gauging more nebulous factors like engagement and morale.

From there, it’s a small step toward relying on these tools not just to track activity, but to shape decisions about how individuals or teams should be managed.

But handing over that kind of judgment to algorithms can have unintended consequences. Research from Cornell University found that simply believing your work is being monitored by AI can lower performance and stifle creativity—with more than 30% of participants voicing concerns about AI oversight, compared to just 7% under human supervision.

The tools themselves can range widely—from targeted project or time-tracking platforms to broader productivity ecosystems like Microsoft 365 or Google Workspace. And before even getting to questions of data or fairness, just choosing the right tool can be a challenge in itself.

Jon Collins, a tech analyst at research firm GigaOm, points out: “Each tool comes with its own philosophy, probably baked in from the founders’ idea of how work should happen.”

So, for example, Trello is very kanban-like and visual. Asana is more hierarchical. Others, like Airtable, Monday or Smartsheet, are more spreadsheet-oriented.

“None are right or wrong, but they influence how activities are considered and managed,” Collins says. “So, if you’re thinking you need a tool, it’s worth asking what kind of management style you want to support.” Moreover, he says, tools tend to “grow to fill the problem space.”

So, he says, tracking tasks quickly turns into setting goals, building backlogs, monitoring burndowns. “Tools can be used strategically but are sold as tactical fixes: get visibility, simplify your day. It’s worth thinking strategically from the outset.”

That strategic view might need to be broader than you think. Data protection and privacy issues should be considered from the outset, though obligations will likely vary from region to region.

Likewise, managers in some geographies will have to take into account the views of unions or workers’ councils.

But it doesn’t stop there. The employer’s broader AI policy also comes into play, given that these tools incorporate algorithms and AI, in some cases to the extent of making recommendations.

Patrick Brodie, head of employment, engagement and equality at law firm RPC, says organisations may have to grapple with some “big issues” depending on not just what an AI tool measures but how it does so.

These include assessing the possibility of bias or discriminatory outputs, and by extension considering the data an underlying model or algorithm was trained on.

The real impact of AI

 

“If the AI system is making a qualitative judgment about the performance, how has the algorithm reached that output or determination?” Brodie continues.

“And that will probably be a combination of understanding, for example, what features and factors are taken into account, the weight placed on each, and then understanding the decision-making process which leads to that output, including the level of automation, extent of human intervention and margin of error if factors are changed.”


Patrick Brodie, head of employment, engagement and equality at RPC

 

Likewise, companies should be thinking about impact assessments, he says, including “looking to identify areas of high AI risk, which either lead to unfair outputs or discriminatory outputs, and introducing systems and embedding [safeguards] which prevent or mitigate that risk.” This should be an ongoing process, he adds.

Very often, efficiency is the avowed goal of adopting productivity or engagement tools. The implication is that the end result will be fewer people, which is naturally unsettling for employees.

So, Brodie says, companies should be transparent about why they are adopting these systems and their potential impact on roles, whilst ensuring people have the skills and training they need to adapt and thrive. And they should reassure employees that it is humans who ultimately make the decisions.

And workforce wellbeing has to be part of the equation, he says. “I think employees’ increasing level of concern about the consequences of AI risks feeding, paradoxically, into productivity and efficiency.”

“It’s the AI paradox,” he continues. “You introduce an AI system to improve employee productivity, and because, without clear communication, many are worried about what it means, your productivity dips.”

Zoom’s head of solution engineering EMEA, Helen Hawthorn, tells us: “A successful deployment is built on trust, transparency, and a positive employee experience. The most effective rollouts involve integrating AI into everyday workflows, alongside clear communication and inclusive training.”

Meanwhile, she adds, “Common missteps include introducing tools without context, skipping training, underestimating change management, or using AI in ways that feel intrusive.”


One example of a thoughtful deployment comes from Denso, a global automotive parts supplier.

At one of its US plants, around five years ago, the company implemented an AI-powered visual analytics system to track workers’ cycle times on the assembly line. Initially, staff were wary, but the system’s design—offering live visual feedback through icons and dashboards—proved to be more collaborative than punitive.

“People saw it as a support tool, not surveillance,” one manager said. Rather than being used to punish slowdowns, the data helped managers identify bottlenecks and offer targeted assistance. The key, Denso found, was framing the tech as a tool for shared success—not personal scrutiny.

More recently, rather than using AI tools purely for surveillance, Denso is increasingly embedding them into broader process transformation and workplace improvement. Its implementation strategy emphasises transparency, worker empowerment, and knowledge transfer—not just optimisation for its own sake.


Framing AI as a tool for shared success, not personal scrutiny, was key at Denso

 

Additionally, employee training and internal platforms like SOMRIE and the Toyota Software Academy, launched in May 2025, aim to reskill talent across the Toyota Group—including Denso—ensuring people remain at the centre of AI-led change. The approach highlights how worker trust improves when data is shared openly and AI is seen as a tool to help people improve—not as a digital overseer.

After all, as Collins suggests, tools reflect the philosophy of their creators—and how they’re implemented reflects the mindset of the organisation. When well-aligned, they can reinforce supportive, transparent management. But the same systems, in the wrong hands, can easily amplify control or mistrust.

“Management needs to be neither too much nor too little. Goldilocks style,” says Collins. “But tools can lean managers towards micromanagement, which is unhealthy.” Without clear boundaries, features designed for visibility or coordination can end up enabling excessive oversight or pressure.

And ultimately, he warns, no system can override intent. “Unscrupulous managers and bullies are going to stay in character and may use tool features in a toxic way. It’s ultimately counterproductive.”

