Fake candidates, hallucinated jobs: How AI could poison online hiring
Next year will see some kind of embarrassing calamity related to artificial intelligence and hiring.
That’s according to Forrester’s predictions for 2024, which prophesied that heavy use of AI by both candidates and recruiters will lead at least one well-known company to hire a nonexistent candidate, and at least one business to hire a real candidate for a nonexistent job.
“We think that there will be a bit of AI mischief in talent management and recruiting,” says J.P. Gownder, vice president and principal analyst on Forrester’s future of work team. “AI can create all of these incredible, new, magical moments, but it also creates what we call mayhem, which is when things start to go a little haywire.”
How a False Candidate Could Slip Through the Cracks
Gownder envisions two ways this “mischief” and “mayhem” could go down.
The more straightforward scenario involves a candidate who uses AI to respond automatically to job postings, a tool that keeps running even after they’ve accepted a role. Eventually, one of those applications could succeed and lead a company to hire a candidate who isn’t even on the market.
“That’s the boring version,” Gownder says. The more interesting possibility, he says, would involve a candidate using generative AI to cook up résumés and cover letters that bend the truth to maximize their odds of success. He imagines a scenario in which the technology is directed to create the most compelling application possible, leading to fabricated credentials and even a name change to avoid bias.
“Let’s say you have an ethnically marginalized name, and you know employers are more likely to discriminate against you because of your name—which has been proven in many studies—so maybe you have a generative AI cook up similar résumés to yourself, but they’re not actually you,” he says. “It’s not like it’s a complete lie, but it’s not you.”
How a Fake Job Could Get a Real Listing
On the employer side, meanwhile, a heavier dependence on generative AI and other automated tools to write job postings, sift through candidates—and in some cases, even make hiring decisions—could result in an employer posting a job that doesn’t actually exist.
According to the Forrester study, 33% of AI decision-makers say their company is expanding its use of AI in the year ahead, and another 29% say they’re experimenting with the technology.
Gownder says the most likely opportunity for an AI-related hiring mishap is during the handoff between internal human resources software and external recruiting services.
“Somewhere along the line there has been a generative AI system that has tried to pick up a job opening from the first system, and has generated a job that is totally different, or has generated two jobs, or generates a job that doesn’t actually trace back to the original system,” he says. “That’s quite possible, particularly if you’re using some third party for recruiting, which is happening more often.”
Some Messiness Is Inevitable
Gownder adds that with AI utilization expected to grow on both sides of the employment equation, he expects some kind of hallucination, error, or mishap within the next twelve months, and he’s not alone.
“This is still the messy early stages of AI, and we will indeed have to work our way through this kind of messiness to get to the good stuff on the other side,” says Thomas Frey, the executive director of futurist think tank the DaVinci Institute. “We will indeed see some next-level deceptions.”
Frey explains that it took the automobile 120 years to arrive at the technology we use today, adding that car owners in the 1900s often had to travel with a toolbox out of necessity.
He similarly expects there to be a lot of tinkering and fine-tuning ahead with AI, which he believes will lead to better, more effective tools in the future. Frey adds that eventually some AI solutions will be developed explicitly to monitor and verify the activities of others.
“Very soon we will see a number of cross-validation systems serving as the ‘truth police’ for AI, where one AI system will be used to flag all the inconsistencies of another AI system,” he says.
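Frey’s “truth police” idea can be illustrated with a toy cross-check: a second system validates the first system’s output against the original record and flags anything that doesn’t line up. A minimal sketch, with entirely hypothetical field names and records (no real HR system is implied):

```python
# Toy cross-validation pass: a checker compares a generated job posting
# against the source requisition and flags fields that don't match.
# All field names and records here are hypothetical.

def flag_inconsistencies(source: dict, generated: dict) -> list[str]:
    """Return a human-readable flag for every field where the generated
    posting disagrees with the source-of-truth requisition."""
    flags = []
    for key, expected in source.items():
        actual = generated.get(key)
        if actual != expected:
            flags.append(f"{key}: expected {expected!r}, got {actual!r}")
    return flags

# The requisition is the internal source of truth; the posting is what a
# generative system produced for the external job board.
requisition = {"title": "Backend Engineer", "location": "Remote", "req_id": "REQ-7"}
posting = {"title": "Senior Backend Engineer", "location": "Remote", "req_id": "REQ-7"}

print(flag_inconsistencies(requisition, posting))  # flags the mismatched title
```

A real deployment would of course compare free text rather than tidy key-value pairs, which is exactly where a second model, rather than an exact-match check, would come in.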
Until such time, however, Frey says a certain degree of mayhem should be expected.
More of the Same, Just Faster
While the possibility of a major company hiring a fake employee, or a real employee accepting a fake job offer, sounds like the stuff of science fiction, exaggerating on a résumé or job application is hardly a modern invention.
In fact, Indeed’s head of responsible AI, Trey Causey, stresses that the technology is only enabling us humans to do the things we’ve always done, just faster.
“That’s a story as old as time,” he says. “There have been many high-profile cases of people inventing credentials or diplomas, and it’s not much of a stretch to think of someone creating a persona that uses LLMs [large language models] to generate correspondence.”
How to Stay Out of the Headlines
To avoid embarrassment, Causey advises organizations to simply maintain a certain degree of human oversight, especially when it comes to recruitment.
“Any time you see a new technology develop that removes or has the potential to remove human oversight, you should tread carefully, especially when dealing with impactful decisions,” he says. “You certainly don’t want to be in a position where an LLM is writing job descriptions, then they’re posting it to a live job site without being reviewed by a human.”
Causey also recommends looking to industry standards, AI vendors, and third-party resources to get a sense of how to best minimize the risks associated with utilizing AI in recruiting and hiring.
“Looking to those kinds of nonprofit-driven standard-setting practices can be a way to at least get you asking the right questions,” he says. “Ask vendors: How do you vet your technology? What are you thinking with respect to bias? How are you complying with AI-specific laws in your jurisdictions? Those are all questions you can arm yourself with, without hiring specialized talent.”