"Expectancy Violation" in AI Interviews
Candidates expect a mechanical bot but are surprised by the fluency of LLM-based systems. How does this 'expectancy violation' affect satisfaction?

AI voice interview systems provoke both intense curiosity and a degree of apprehension in candidates. Candidates typically expect a mechanical, limited interaction from a bot, and instead find themselves surprised by the fluency and reasoning capabilities of modern Large Language Model (LLM)-based systems. Yet this technological leap introduces a new dynamic, known as "expectancy violation," that directly affects candidate satisfaction.
The decisive role of technical performance and flow
The most critical factor shaping candidate satisfaction during an interview is the quality and speed of the interaction the AI delivers. In a study by Leybzon and colleagues (2025), stuttering and delays in the system's early versions caused confusion among candidates: when they could not tell whether the AI had heard them, they tended to abandon the interview altogether.
Once improvements reduced latency, however, interview completion rates rose markedly. Leybzon and colleagues (2025) report that when the AI felt more "natural" and "understanding," 86% of candidates described their experience as positive. This suggests that technical reliability is the first prerequisite for meeting candidates' expectation of an "intelligent conversation."
Perfection pressure and the "robotization" feeling
Although objectivity is one of the biggest expectations candidates bring to AI, it can morph into a psychological burden. According to Sunil (2024), candidates feel a "perfection pressure" because they believe the AI will analyze every answer exhaustively. Under this pressure, candidates abandon natural behavior in favor of the keywords they think the system favors, which heightens performance anxiety.
The University of Sussex toolkit (Jaser et al., 2025) highlights that candidates feel forced to adopt "robotic behaviors" in order to satisfy a bot: maintaining a fixed gaze, wearing an artificial smile, and speaking in a monotone. When candidates do not know what the system is scoring, or how, the uncertainty exhausts them both emotionally and cognitively. The result is a tension between the speed the technology offers and the feeling of "dehumanization" it can produce.
Interaction quality: Deep probing and empathy
Whether candidates judge the AI "successful" depends not just on its ability to ask questions, but on how it responds to the answers it receives. Venkanna and colleagues (2025) note that the system's use of adaptive questioning (follow-up questions that evolve dynamically based on the candidate's responses) creates a genuinely realistic interview atmosphere. Candidates value motivational feedback from the AI, such as "I understand, that was a great example," because it diminishes the feeling of talking to a wall.
Candidates also report that feedback the AI derives from vocal tone and facial expressions helps them develop their skills. In the study by Sahani et al. (2025), candidates stated that real-time feedback mechanisms boosted their confidence by 80%. When this level of interaction falls short of expectations, however, the interview feels like nothing more than a "voice-based survey," and satisfaction drops sharply.
The data show that candidates do not expect AI to be "human," but rather to deliver interaction at "human standards." The University of Sussex toolkit study (Jaser et al., 2025) argues that a "glass box" approach, in which candidates are transparently informed about what the system measures, resolves this expectation conflict.
While candidates appreciate the speed and non-judgmental nature of AI at the start of the interview, they still want to know that a human expert will review them at the end. For companies, the path to candidate satisfaction runs not through deploying the technology as a mere screening bot, but through designing it as an interactive assistant that listens to candidates and offers them room to grow.
References
- Jaser, Z., et al. (2025). Artificial Intelligence (AI) in the job interview process: Toolkit for employers, careers advisers and hiring platforms. University of Sussex.
- Leybzon, D. D., et al. (2025). AI Telephone Surveying: Automating Quantitative Data Collection with an AI Interviewer. VKL Research & SSRS.
- Sahani, K. K., et al. (2025). A smart interview simulator using AI avatars and real-time feedback mechanisms. International Journal of Engineering Technologies and Management Research.
- Sahu, A., et al. (2025). AI Interviewer Using Generative AI. ICAAAI 2025 Proceedings.
- Sunil, A. (2024). Exploring Job Applicants' Perspectives on AI-Driven Interviews: The Influence on Stress and Anxiety Levels Due to Perceived Expectations of Perfection. IJAEM.
- Venkanna, G., et al. (2025). AI Interview Simulator: An Intelligent Hiring & Preparation Assistant. ICCSCE 2025 Proceedings.
- Jagtap, S. R., et al. (2025). AI-Driven Real-Time Interview Simulation App with Voice Recognition and Facial Analysis. Indian Journal of Science and Technology.
- Dijkkamp, J. (2019). The recruiter of the future, a qualitative study in AI supported recruitment process. University of Twente.
- Barari, S., et al. (2025). AI-Assisted Conversational Interviewing: Effects on Data Quality and User Experience. NORC at the University of Chicago.