AI in recruitment: why fighting bots with bots isn't working

75% of candidates use AI to apply. 65% of hiring managers catch them doing it. Here's why the AI arms race is making hiring worse for everyone.

We're in the middle of an AI arms race in recruitment. Candidates use AI to write applications. Employers use AI to screen them. Candidates learn to game the AI. Employers add more AI to detect gaming. Nobody's winning.

The result is a system where humans on both sides are increasingly removed from the process. Algorithms talk to algorithms while real people wait in the queue, wondering if anyone will ever actually see their work.

75%
Of candidates now use AI for applications

Three quarters of job seekers are using AI tools to write CVs, cover letters, and application responses.

How we got into the doom loop

It started with volume. As applications per role exploded, employers turned to AI screening to cope. Keyword matching. Resume parsing. Automated rejection emails. The technology promised efficiency, and it delivered, sort of.

But candidates adapted. They learned which keywords triggered the algorithms. They optimized their CVs for parsing. They used AI tools to generate application materials that would score well with screening systems. The game became about beating the bots rather than demonstrating genuine fit.

Recruiters are drinking from a fire hose of job applications. The technology that was supposed to help is now part of the problem.

Employers responded with more sophisticated AI. Detection systems for AI-generated content. Additional screening layers. Longer assessments to weed out candidates who weren't truly invested. Each escalation prompted a counter-escalation from the candidate side.

41%
Of candidates use prompt injections

Nearly half of job seekers have tried to manipulate AI screening systems by embedding hidden instructions in their applications.

What we're losing

The casualties of this arms race aren't hard to spot. Trust has collapsed on both sides. Only 8% of candidates believe AI screening is fair. Meanwhile, 74% of hiring managers are more fearful of fraud than they were a year ago.

But there's a deeper problem. When the process becomes about gaming algorithms rather than demonstrating genuine ability, we lose signal. The candidates who rise to the top aren't necessarily the most capable. They're the ones who've optimized most effectively for the screening system.

Signal vs. noise

When AI writes applications and AI screens them, we're measuring candidates' ability to use AI tools rather than their ability to do the job. The signal we actually care about gets lost in the noise.

Authentic candidates get filtered out

Candidates who refuse to play the AI game, whether on principle or because they don't know how, get systematically disadvantaged. Their authentic applications score worse than AI-optimized ones.

Everyone wastes more time

The arms race demands ever-increasing investment from both sides. Candidates spend hours optimizing. Employers add screening layers. The process gets longer and more frustrating for everyone.

AI as a tool, not a gatekeeper

The problem isn't AI itself. It's how we're using it. When AI becomes the gatekeeper, making binary decisions about who advances, we create the conditions for an arms race. Both sides optimize for the algorithm rather than focusing on what actually matters.

The alternative is using AI as a tool that enhances human decision-making rather than replacing it. Let AI handle the genuinely mechanical work: scheduling, organizing, surfacing information. But keep humans in the loop for judgment calls about talent and fit.

65%
Of hiring managers have caught AI deception

Most recruiters have encountered candidates using AI deceptively, creating an atmosphere of distrust that poisons the entire process.

Breaking the cycle

The way out of the doom loop is to change what we're measuring. Instead of evaluating application materials that AI can easily generate, we need to evaluate work that demonstrates genuine capability.

Skills-first hiring sidesteps the AI arms race entirely. When you ask candidates to complete a focused assessment that demonstrates job-relevant skills, you're measuring something AI can't easily fake. The work itself becomes the signal.

This doesn't mean banning AI. Candidates can use whatever tools they want. But when the assessment is designed around authentic demonstration of capability, tool usage becomes irrelevant. What matters is the quality of the output.

The question isn't whether candidates used AI. The question is whether they can do the job. Skills-first assessment lets you answer that directly.

What this looks like in practice

A well-designed skills assessment asks candidates to do something that directly reflects the role. For a content writer, that might mean editing a piece or drafting a short section. For a data analyst, it might mean interpreting a dataset. For a designer, creating a quick mockup.

The key is keeping it focused and job-relevant. Candidates invest 10-15 minutes demonstrating genuine ability. Their work is reviewed by humans who can assess quality, creativity, and thinking: qualities AI screening can't reliably evaluate.

Measure what matters

Instead of evaluating keyword optimization or AI-generation quality, you're evaluating actual job-relevant skills. The signal-to-noise ratio improves dramatically.

Rebuild trust

When candidates know their work will be seen by humans, they engage authentically. When employers see genuine work samples, they can make confident decisions. Trust returns to the process.

The future of AI in hiring

AI will continue to play a role in recruitment. It's too useful for handling administrative tasks to disappear. But the companies that succeed will be those that use AI thoughtfully, as an assistant rather than a replacement for human judgment.

The current arms race is unsustainable. As AI tools become more sophisticated, the escalation will only intensify. The winners will be those who step off the treadmill entirely and refocus on what matters: identifying candidates who can actually do the job.

That means less time parsing AI-generated applications and more time looking at real work. Less keyword matching and more genuine assessment. Less automation of judgment and more technology that supports human decision-making.

Step off the AI treadmill

FirstLook helps you see genuine work from real candidates, breaking the cycle of AI vs. AI.

Start for free
