Every recruiter who has opened their ATS in the past year has felt it: something has shifted. The applications look different. They read smoother, hit more keywords, and sound eerily similar to one another. That uniformity is not a coincidence. According to a 2025 report from Career Group Companies, roughly 65 percent of job candidates now use AI at some point during the application process. Resume Now’s survey of over 900 HR professionals found that 90 percent have noticed an increase in low-effort or formulaic applications, largely driven by AI tools.
The result is a genuine operational problem. Recruiters are managing 2.7 times more inbound applications than they were three years ago, while team headcounts have actually shrunk. The old screening playbook — scan for keywords, check formatting, look for a compelling cover letter — was built for a world where those signals still meant something. That world is disappearing fast.
So what do we do about it?
The Scale of the Shift
Let’s be honest about the numbers. Application volumes per job opening have surged roughly 182 percent since 2021, according to recruiting benchmark data from Ashby. Some of that increase is driven by “easy apply” features and a competitive labor market, but a growing share is attributable to candidates using generative AI to produce and submit materials at scale.
This creates a paradox. AI tools on the employer side were supposed to help manage volume, and in some ways they do — automated screening can reduce resume review time by up to 75 percent. But when the applications themselves are also AI-generated, those screening tools are increasingly matching on surface-level polish rather than substance. You end up with a system where AI is screening AI, and the humans in the loop are left trying to figure out who actually wrote what, and whether it matters.
For recruiting teams already stretched thin, this is not an abstract concern. It is a workflow problem that directly affects quality of hire.
Why “Ban AI” Is Not a Viable Strategy
Some organizations have responded by asking candidates to certify that their materials are “AI-free.” This approach is understandable but ultimately unworkable, for three reasons.
First, enforcement is nearly impossible. Detection tools remain unreliable, producing frequent false positives that can penalize strong writers and non-native English speakers who happen to write in a style that triggers the algorithm. Building a screening process on a foundation of inaccurate detection is a compliance risk, not a solution.
Second, the line between “AI-generated” and “AI-assisted” is already blurred beyond recognition. A candidate who uses Grammarly, a spell checker, or a smart template is using AI. A candidate who asks ChatGPT to brainstorm how to frame a career transition, then writes the actual content themselves, is using AI. Drawing a bright line between acceptable and unacceptable use is a losing proposition.
Third, and most importantly, banning AI use in applications sends a strange signal in 2025. Most organizations are actively investing in AI adoption across their operations. Telling candidates they cannot use the same tools your own employees rely on daily creates a credibility gap that savvy applicants will notice.
The better question is not whether candidates use AI, but whether they use it thoughtfully.
What Separates Good AI-Assisted Applications from Bad Ones
Resume Now’s research found that 62 percent of employers reject AI-generated resumes that lack personalization, while 78 percent of hiring managers say personalized details are what signal genuine interest and fit. This tells us something important: recruiters are not reacting to AI use itself. They are reacting to laziness that AI makes easier.
After reviewing thousands of applications, experienced recruiters tend to spot consistent patterns that distinguish thoughtful AI-assisted work from low-effort generation.
Generic applications treat the job description as a mirror. They reflect the posting’s language back without adding context. The candidate’s summary reads like a paraphrase of the role requirements rather than an account of their actual experience. Responsibilities are listed but never connected to outcomes. The cover letter could apply to any company in the same industry with a simple find-and-replace of the name.
Thoughtful applications, by contrast, show evidence of real engagement with the role. The candidate references specific aspects of the team, the product, or the company’s recent work. They connect their experience to the job’s challenges in ways that require actual knowledge of both. Their writing has a point of view — not just competence, but perspective. These qualities are difficult to generate without genuine human input, even with the best AI tools.
The distinction matters because it maps directly to the kind of employee you are likely to get. A candidate who takes the time to understand your organization and articulate a specific value proposition is demonstrating the same skills you will need from them on the job.
Practical Screening Adjustments
If your evaluation criteria were designed for a pre-AI application landscape, they are overdue for an update. Here are four adjustments that talent teams can implement without overhauling their entire process.
Weight specificity over polish. Train your reviewers to look for concrete details: named projects, quantified outcomes, references to specific challenges. Polish is now cheap. Specificity still requires real knowledge.
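For teams that triage at volume, the “specificity over polish” heuristic can even be roughed out in code. The function below is a minimal illustrative sketch, not a validated screening rubric: it simply counts surface signals of concrete detail (quantified outcomes and runs of capitalized words that often mark named projects or products), and every pattern in it is an assumption chosen for demonstration.

```python
import re

def specificity_score(text: str) -> int:
    """Rough count of concrete-detail signals in application text.

    Illustrative only: counts quantified outcomes (numbers, optionally
    followed by a percent sign or unit-like suffix) and sequences of
    consecutive capitalized words, which often indicate named projects.
    A real screening aid would need far more careful validation.
    """
    # Numbers such as "40%", "12", "2.7x" -- quantified outcomes.
    quantified = re.findall(r"\b\d[\d,.]*\s*(?:%|percent\b|x\b|k\b)?", text)
    # Two to four consecutive capitalized words, e.g. "Project Atlas".
    named = re.findall(r"\b(?:[A-Z][a-z]+\s){1,3}[A-Z][a-z]+\b", text)
    return len(quantified) + len(named)

generic = "Responsible for managing projects and improving processes."
specific = "Led the Atlas migration, cutting deploy time 40% for 12 teams."
print(specificity_score(specific) > specificity_score(generic))
```

A counter like this would never replace a reviewer’s judgment; at most it flags which applications in a large stack show zero concrete detail and can be deprioritized for human reading.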
Introduce asymmetric questions. In your application or early screening, include at least one question that requires the candidate to engage with something unique to your organization — a recent initiative, a known industry challenge, a hypothetical scenario specific to the role. These are difficult to answer well with a generic prompt.
Move evaluation downstream. If written applications are becoming less reliable as a signal, shift more weight to live interactions earlier in the process. A brief phone screen or a short asynchronous video response can reveal communication skills that a polished resume cannot. This does not mean abandoning resume review, but it does mean treating it as one data point among several rather than the primary gate.
Evaluate the work sample, not the format. For roles that involve writing, analysis, or strategy, consider replacing the traditional cover letter with a short work sample or case response. Give candidates a realistic task and a reasonable time frame. What they produce under those conditions will tell you far more than any application document, regardless of how it was drafted.
The Emerging Skill Signal
There is a broader point here that is easy to miss in the operational details. A candidate who uses AI tools effectively — who can take a powerful but generic technology and produce something specific, relevant, and authentic — is demonstrating a skill that most organizations desperately need.
The future of knowledge work is not about choosing between human effort and AI assistance. It is about combining them well. Candidates who can do that in their job search are showing you, in real time, how they will work once they are on your team.
This reframes the screening challenge. Instead of trying to detect and penalize AI use, the more productive approach is to design evaluation processes that surface the qualities AI alone cannot produce: specificity, judgment, genuine understanding of context, and the ability to adapt general tools to particular problems.
The organizations that figure this out first will not just improve their hiring efficiency. They will build teams that are better equipped for the way work actually operates now.
The applications have changed. The question is whether our evaluation methods will change with them.