Coding Evaluation Policies at Scale: A Human-AI Calibration Approach to Content Analysis

Webinar | Online

About the event

This session presents a case study of human-AI calibration for large-scale qualitative coding of evaluation policies across 20 countries spanning Africa, Asia-Pacific, the Americas, and Europe. Presenters will share how iterative collaboration between evaluators and a large language model helped scale policy analysis while keeping human judgment central, and facilitated the development of a consolidated taxonomy for evaluation policy. They will also discuss what responsible AI looks like in practice, specifically where it adds value and where it falls short.

Speakers

Alana Kinarsky, PhD, Research Analyst
Alana Kinarsky is a Research Analyst at the UCLA School of Education and the owner of A.R.K. Consulting, an independent evaluation consultancy. Her research focuses on evaluation policy, the evaluation marketplace, and the intersection of AI and evaluation methodology.

Élyse McCall-Thomas, PhD Candidate
Élyse McCall-Thomas is a PhD candidate at the University of Ottawa. Her research examines evaluation theory, policy, and practice, with a focus on how formal policy frameworks shape organizational and institutional approaches to learning and accountability.

Christina A. Christie, PhD, Wasserman Dean and Professor of Education
Christina A. Christie is the Wasserman Dean of the School of Education and Information Studies at UCLA and a Professor in the Division of Social Research Methodology. Her work focuses on advancing the theories and methods used to facilitate social betterment through evaluation.

Leslie Fierro, PhD, Sydney Duder Professor in Program Evaluation
Leslie Fierro is the Sydney Duder Professor in Program Evaluation at the Max Bell School of Public Policy at McGill University. Her work focuses on strengthening organizational and systems-level capacity for commissioning, planning, implementing, and using evaluation. She is a current AEA Board Member-at-Large.

Topics and themes

Evaluators | Annual theme: Evaluation, Evidence, and Trust in the Age of AI
