The AI Interviewer: Mastering Quality, Rigor, and Deployment in Mixed-Methods Evaluation

Conference | Online

About the event

This event bridges theory and field practice to explore the lifecycle of AI-mediated interviewing. We first present a framework of eight measures for assessing AI interview quality and methodological risks. We then host a hands-on lab to co-design and deploy a live AI mixed-methods agent. Participants analyze real-time results to see how AI tools can ease data-collection bottlenecks while safeguarding evaluative integrity and trust. Join us and master the craft of AI-enhanced evidence generation.

Sessions

Webinar | Online
June 2, 2026, 15:00–15:45
Artificial intelligence is rapidly entering evaluation practice, yet there is little guidance on how to judge the quality of AI-mediated interviews. This session presents a theory-grounded framework for assessing AI interview performance using eight operational measures derived from evaluation and qualitative research theory. Drawing on an empirical comparison of six LLM-based interviewers across simulated evaluation scenarios, the presentation highlights capability boundaries, methodological risks, and practical safeguards for evaluators considering AI-mediated data collection.

Speakers

Name Title Biography
Ali Safarnejad Evaluation Specialist Ali Safarnejad (PhD) is a Multi-Country Evaluation Specialist with UNICEF covering East Asia and the Pacific. He leads evaluations across humanitarian and development contexts and has a strong interest in improving the efficiency and quality of evaluative evidence through thoughtful use of technology and AI.


Workshop | Online
June 3, 2026, 15:00–16:00
Participants will learn how to design and deploy an AI-powered interview agent for mixed-methods data collection in evaluation. Using insights from successful field pilots in humanitarian and development sector evaluations, this hands-on session will guide participants to jointly develop an interview guide, define the agent’s parameters, and deploy a live AI-facilitated survey. Participants will take the survey themselves and review real-time results, reflecting on methodological implications, ethical considerations, data quality, and practical applications.

Speakers

Name Title Biography
Ali Safarnejad Evaluation Specialist Ali Safarnejad (PhD) is a Multi-Country Evaluation Specialist with UNICEF covering East Asia and the Pacific. He leads evaluations across humanitarian and development contexts and has a strong interest in improving the efficiency and quality of evaluative evidence through thoughtful use of technology and AI.


Topics and themes

Evaluators, evaluation commissioners, evaluation users, VOPEs / evaluation networks, academics, students, civil servants / international organization staff. Annual theme: Evaluation, Evidence and Trust in the AI Era; Innovation in Evaluation.

Event details
