
How Do Algorithms Score Us? Candidates' Secret Theories

When candidates don't know how AI makes decisions, they develop their own 'folk theories.' How do these assumptions affect hiring quality?

Mei Sullivan

Ever since AI entered hiring processes, candidates have had one question on their minds: "What criteria is this machine using to eliminate me?" Candidates who don't know the technical details develop intuition-based assumptions about how the system works. Known in the research literature as "folk theories," these assumptions lead candidates to view AI as a black box and to pursue strategies, some highly creative and some completely wrong, to crack it open. Recent research shows in striking detail how candidates try to "hack" AI and how these theories affect hiring quality.

Folk theories: Viewing AI as a "superpower"

Even though candidates don't fully understand how AI makes decisions, they form complex mental models about it. Research titled "Experience and Adaptation in AI-mediated Hiring Systems" conducted by Md Nazmus Sakib and colleagues (2018/2024) notes that individuals resort to intuitive reasoning methods, or "folk theories," to explain AI behavior. According to this study, people describe AI sometimes as a simple tool, sometimes as an assistant, and sometimes as a mysterious "superpower" whose behavior is unpredictable.

These theories directly shape candidates' behavior during interviews. For example, when candidates realize the system isn't as advanced as advertised (expectation violation), they begin speculating about how decisions are made, and this uncertainty seriously increases stress levels. Sakib and team (2018/2024) emphasize that when candidates can't find clear guidance, they produce their own theories and this creates emotional tension.

Strategic manipulation: Attempts to mislead the algorithm

When candidates think they've cracked the AI's "ideal employee" profile, they begin tailoring their answers to fit it. The field experiment "Behavioral Measures Improve AI Hiring," conducted by Marie-Pierre Dargnies and colleagues (2025), shows that candidates anticipate the company's expectations and "strategically" adjust their answers to match. Their predictions, however, don't always turn out to be correct.

One of the most interesting findings from the research involves the "patience" variable. According to data from Dargnies and team (2025), candidates present themselves to the AI as more "impatient" than they really are, because they assume impatience will be read as a sign of high motivation and ambition. In reality, the algorithm scores patient candidates higher for long-term productivity. Similarly, candidates report artificially low "neuroticism" (emotional instability) scores and much higher "locus of control" scores than they actually have, in an attempt to impress the AI.

Performance art: Interviewing against silence

One of candidates' biggest folk theories relates to how AI scores body language and accent. Many candidates believe the AI will code even the slightest eye movement as "negative." Sakib and colleagues (2018/2024) note in their research that candidates describe this situation as "performing to silence."

This belief leads to the following behaviors:

  • Accent masking: Research data shows that non-native speakers suppress their natural accent and speak in a flat, robotic tone to avoid being misinterpreted by the AI.
  • Forced smiling: Candidates try to maintain a forced smile throughout the interview, assuming the system measures "positivity."
  • Keyword hunting: Nicole Jurado (2025) notes in her study that candidates stuff their resumes and interview answers with keywords the system "likes."

Candidates say these performative actions emotionally drain them and make them feel "dehumanized." Sakib and team (2018/2024) emphasize that candidates view these processes more as a "hackathon" than an interview.

How transparency eliminates folk theories

The only way to prevent candidates from building strategies on false assumptions is to establish transparent communication. Results from Poenaru and Diaconescu's (2025) research show that transparent and fair use of AI increases candidates' trust in the technology. When candidates know what the system scores (for example: "we only look at word content, not facial expressions"), they focus on their actual performance instead of feeling like they're playing a game.

Md Nazmus Sakib and colleagues (2018/2024) recommend the following design interventions to reduce candidate stress:

  • Transcript editing: Showing the candidate how the AI heard them and granting the right to correct errors.
  • Instant feedback: The AI giving responses like "I understand, that was a great example" eliminates the feeling of "talking to a wall."
  • Guidance: Providing a brief informational video about how the system works before the interview.

The "folk theories" that candidates develop are actually a cry of uncertainty and distrust. As long as companies use AI as a "screening shield," candidates will continue seeking ways to penetrate that shield.

The successful hiring systems of the future won't be those that force candidates to speculate, but those that explain the process with full transparency and offer a humane foundation for interaction. As Brian Jabarian and Luca Henkel's (2026) research emphasizes, when the balance between AI and humans is struck correctly, job offer rates increase. The best candidates are not those who play games with algorithms, but those who fearlessly showcase their true potential in a transparent environment.

References

  • Dargnies, M. P., Hakimov, R., & Kübler, D. (2025). Behavioral Measures Improve AI Hiring: A Field Experiment. Discussion Paper No. 532, CRC TRR 190.
  • Jabarian, B., & Henkel, L. (2026). Voice AI in Firms: A Natural Field Experiment on Automated Job Interviews. Booth School of Business, University of Chicago.
  • Jurado, N. (2025). The effects of artificial intelligence on shaping employer brand perception: insights from entry-level hiring practices. Master Thesis, Universidad Carlos III de Madrid.
  • Poenaru, L. F., & Diaconescu, V. (2025). Bridging Technology and Talent: Gen Z's Take on AI in Recruiting and Hiring. Bucharest University of Economic Studies.
  • Sakib, M. N., Rayasam, N. M., & Dey, S. (2018/2024). Experience and Adaptation in AI-mediated Hiring Systems: A Combined Analysis of Online Discourse and Interface Design. University of Maryland.
  • Gartner. (2026). Gartner Survey Shows Just 26% of Job Applicants Trust AI Will Fairly Evaluate Them.
  • Chopra, F., & Haaland, I. (2024). Conducting Qualitative Interviews with AI. CESifo Working Papers, No. 10666.