The AI Interviewer: Mastering Quality, Rigor, and Deployment in Mixed-Methods Evaluation

Conference | Online

About the event

This event bridges theory and field practice to explore the lifecycle of AI-mediated interviewing. We first present a framework of eight measures for assessing AI interview quality and methodological risks. Next, we host a hands-on lab to co-design and deploy a live AI mixed-methods agent. Participants analyze real-time results to see how such tools can ease data-collection bottlenecks while safeguarding evaluative integrity and trust. Join us and master the craft of AI-enhanced evidence generation.

Session

Webinar (in English) | Online
June 2, 2026, 15:00-15:45
Artificial intelligence is rapidly entering evaluation practice, yet there is little guidance on how to judge the quality of AI-mediated interviews. This session presents a theory-grounded framework for assessing AI interview performance using eight operational measures derived from evaluation and qualitative research theory. Drawing on an empirical comparison of six LLM-based interviewers across simulated evaluation scenarios, the presentation highlights capability boundaries, methodological risks, and practical safeguards for evaluators considering AI-mediated data collection.

Speaker

Ali Safarnejad, Multi-Country Evaluation Specialist
Ali Safarnejad (PhD) is a Multi-Country Evaluation Specialist with UNICEF covering East Asia and the Pacific. He leads evaluations across humanitarian and development contexts and has a strong interest in improving the efficiency and quality of evaluative evidence through thoughtful use of technology and AI.


Workshop | Online
June 3, 2026, 15:00-16:00
Participants will learn how to design and deploy an AI-powered interview agent for mixed-methods data collection in evaluation. Using insights from successful field pilots in humanitarian and development sector evaluations, this hands-on session will guide participants to jointly develop an interview guide, define the agent’s parameters, and deploy a live AI-facilitated survey. Participants will take the survey themselves and review real-time results, reflecting on methodological implications, ethical considerations, data quality, and practical applications.

Speaker

Ali Safarnejad, Evaluation Specialist
Ali Safarnejad (PhD) is a Multi-Country Evaluation Specialist with UNICEF covering East Asia and the Pacific. He leads evaluations across humanitarian and development contexts and has a strong interest in improving the efficiency and quality of evaluative evidence through thoughtful use of technology and AI.


Topics and Themes

Evaluators, Evaluation Commissioners, Evaluation Users, VOPEs / Evaluation Networks, Academics, Students, Public Servants / International Organization Staff. Annual theme: Evaluation, Evidence and Trust in the Age of AI; Innovation in Evaluation.
