We’ve all seen the impact of AI in recruitment, particularly in how CVs are screened. What started as a way to quickly filter through applications using basic keyword matching has evolved into a highly automated process powered by large language models (LLMs). These tools can extract, summarise and interpret complex candidate information with astonishing speed. But the rise of generative AI has made the recruitment process less predictable and, potentially, less human.
Historically, automated screening tools depended on manually curated taxonomies and ontologies to identify relevant skills and experience. These systems had to be continually updated to reflect new terminology, role titles and industry jargon. It was resource-intensive and rigid. LLMs have changed that. They can infer relationships between terms, pick up contextual clues and surface candidate insights without needing the same level of manual input.
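For the technically curious, here’s a minimal sketch of that shift. It isn’t any vendor’s real screening product: the taxonomy, the function names and the `call_llm` stand-in are all illustrative assumptions.

```python
# Contrast a manually curated taxonomy match with LLM-style contextual
# inference. Everything here is illustrative, not a real screening system.

SKILLS_TAXONOMY = {"python", "sql", "project management"}  # manually curated


def taxonomy_screen(cv_text: str) -> set[str]:
    """Old approach: flag only terms that literally appear in the taxonomy."""
    text = cv_text.lower()
    return {skill for skill in SKILLS_TAXONOMY if skill in text}


def llm_screen(cv_text: str, call_llm) -> str:
    """Newer approach: ask a model to infer skills from context.

    `call_llm` stands in for whatever completion API you use; it is an
    assumption here, not a specific provider's interface.
    """
    prompt = (
        "List the skills this candidate demonstrates, including any "
        "implied by context rather than stated verbatim:\n\n" + cv_text
    )
    return call_llm(prompt)


cv = "Built ETL pipelines in pandas and led a cross-functional delivery team."
print(taxonomy_screen(cv))  # set() -- no literal match, despite clear evidence
# An LLM would likely infer Python (via pandas), data engineering and team
# leadership from the same sentence, with no taxonomy maintenance needed.
```

The point isn’t the code itself; it’s that the second approach keeps pace with new jargon and role titles without anyone maintaining a list.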
Sounds like progress? In many ways it is, but it’s also created a new kind of race between recruiters and candidates.
As hiring platforms get smarter, so do applicants. It’s now common practice for jobseekers to run their CVs through AI tools to improve readability, structure and keyword density. These tools are designed to optimise CVs to pass through AI-driven filters, essentially helping candidates speak the machine’s language.
And who can blame them? If your CV is being read by a machine, it makes sense to tailor it for a machine. In many ways, this shows initiative and digital literacy – valuable skills in today’s workplace. But there’s a tipping point. When every applicant uses the same optimisation tools, those carefully crafted CVs begin to blur into one. The same structure, the same buzzwords, the same phrasing. Suddenly, your truly qualified candidates are hidden among a sea of overly polished submissions that all look equally strong.
This isn’t just a theoretical concern. At Matrix, we’ve seen applications from candidates claiming experience with proprietary platforms and tools that they could only have encountered by working with us directly, which they hadn’t. AI can enhance, but it can also exaggerate. And if you’re relying on algorithms alone, you may miss the signs.
This is where human judgment must re-enter the conversation. Many recruiters, especially in the public sector, rely heavily on automated tools to reduce workload and speed up hiring. But what they gain in efficiency, they risk losing in discernment. Worse still, some AI models, particularly those trained on historic hiring data, carry embedded bias. This is a major problem, and one we’ve highlighted before at Matrix.
AI bias doesn’t always look like overt discrimination. It can be subtle, favouring CV formats used more often by certain demographics or penalising gaps in employment that may reflect legitimate life circumstances. If left unchecked, these biases can replicate and even amplify existing inequalities in the workforce.
To truly benefit from AI in hiring, businesses need to take a balanced, transparent and continuously reviewed approach. This means not only auditing your AI tools for fairness, but also understanding their limitations.
When used well, AI can play a huge role in improving hiring outcomes, from reducing unconscious human bias, to helping small businesses compete for top talent, to making the application process more accessible for neurodiverse candidates or those with non-traditional backgrounds.
At its best, AI can strip away surface-level judgments and allow recruiters to focus on actual capabilities. But that requires pairing automation with accountability.
So what can business owners and hiring teams do now to stay ahead of the curve and avoid falling into the trap of relying too heavily on tech?
Shift the mindset around screening. It’s no longer enough to tick off right-to-work checks and call it a day. Verification needs to go deeper, including reference checks, employment history validation and skills testing, ideally earlier in the process. If you’re going to trust an AI-filtered shortlist, you need to back it up with real evidence of competence.
Consider AI-generated CVs as a starting point for conversation, not an end in themselves. If a candidate has used AI to strengthen their application, that’s not inherently deceptive. In fact, it could be a sign they’re resourceful and tech-savvy. But use interviews, assessments and situational tasks to test for real capability. Can they demonstrate the knowledge they’ve claimed? Can they think beyond the script?
Invest in explainable AI and keep a human in the loop. If a candidate is rejected based on an automated process, can you explain why? Transparency helps build trust, both in your hiring practices and in your employer brand. No one wants to be ghosted by a machine.
Review your tools regularly. Are your screening systems still aligned with your values and hiring goals? Are they helping you find the right people — or just the most optimised applications?
Lastly, and most importantly, make room for continuous learning. AI will continue to evolve, and so must we. That means upskilling recruiters, re-evaluating your hiring policies, and being prepared to shift direction as technology advances and regulation follows. Because in the end, hiring isn’t just about filling a vacancy. It’s about building a team you can trust. And trust isn’t just about who shows up on the shortlist – it’s about knowing that the person behind the polished CV is the real deal.
AI can help. But it shouldn’t be the only voice in the room. People still make the best people decisions.
