The AI Interviewer: Mastering Quality, Rigor, and Deployment in Mixed-Methods Evaluation

Conference | Online

About the event

This event bridges theory and field practice to explore the lifecycle of AI-mediated interviewing. We first present a framework of eight measures for assessing AI interview quality and methodological risks. Next, we host a hands-on lab to co-design and deploy a live AI mixed-methods agent. Participants analyze real-time results to see how AI tools can resolve data-collection bottlenecks while safeguarding evaluative integrity and trust. Join us to master the craft of AI-enhanced evidence generation.

Session

Webinar | Online
June 2, 2026, 15:00 - 15:45
Artificial intelligence is rapidly entering evaluation practice, yet there is little guidance on how to judge the quality of AI-mediated interviews. This session presents a theory-grounded framework for assessing AI interview performance using eight operational measures derived from evaluation and qualitative research theory. Drawing on an empirical comparison of six LLM-based interviewers across simulated evaluation scenarios, the presentation highlights capability boundaries, methodological risks, and practical safeguards for evaluators considering AI-mediated data collection.

Presenter

Name: Ali Safarnejad
Title: Evaluation Specialist
Biography: Ali Safarnejad (PhD) is a Multi-Country Evaluation Specialist with UNICEF covering East Asia and the Pacific. He leads evaluations across humanitarian and development contexts and has a strong interest in improving the efficiency and quality of evaluative evidence through thoughtful use of technology and AI.

Moderator


Workshop | Online
June 3, 2026, 15:00 - 16:00
Participants will learn how to design and deploy an AI-powered interview agent for mixed-methods data collection in evaluation. Using insights from successful field pilots in humanitarian and development sector evaluations, this hands-on session will guide participants to jointly develop an interview guide, define the agent’s parameters, and deploy a live AI-facilitated survey. Participants will take the survey themselves and review real-time results, reflecting on methodological implications, ethical considerations, data quality, and practical applications.

Presenter

Name: Ali Safarnejad
Title: Evaluation Specialist
Biography: Ali Safarnejad (PhD) is a Multi-Country Evaluation Specialist with UNICEF covering East Asia and the Pacific. He leads evaluations across humanitarian and development contexts and has a strong interest in improving the efficiency and quality of evaluative evidence through thoughtful use of technology and AI.

Moderator


Topics

Evaluators
Evaluation commissioners
Evaluation users
VOPEs / Evaluation networks
Academics
Students
International organization officials / staff
Annual theme: Evaluation, evidence, and trust in the age of AI
Innovation in evaluation
