Faster Evidence, Harder Questions: AI and the Evolution of Evaluation
Panel discussion | Online
-
Organized by:
Asian Development Bank (ADB)
About the event
Evaluators are increasingly turning to AI-enabled platforms to respond to mounting pressures for speed, scale, and relevance. Tools such as the Asian Development Bank’s (ADB) EVA, United Nations Development Programme’s (UNDP) AIDA, Ministry for Foreign Affairs of Finland’s OpenEval, and similar initiatives across multilateral and bilateral institutions promise rapid access to institutional knowledge and faster synthesis of evidence at precisely the moment when trust in data, analysis, and public institutions is under strain.
This panel session examines what happens when organizations prioritize speed in environments where credibility and rigor cannot be compromised. Drawing on concrete organizational experiences, the discussion explores how AI is reshaping the evaluation landscape: altering workflows, expectations of responsiveness, and the balance between human judgment and machine-assisted analysis.
Rather than showcasing tools in isolation, the session brings together organizations that are independently building AI platforms for evaluation and learning to interrogate common design choices, governance dilemmas, and unintended consequences. Panelists will reflect on where AI has demonstrably improved evaluation practice, such as navigating fragmented evidence bases or supporting time-critical decision making, and where it has exposed new risks, including opacity, bias, and overreliance.
A central focus will be the ethical and institutional implications of these platforms. Panelists will discuss how their organizations are addressing questions of transparency, accountability, and professional standards, and how AI adoption intersects with evaluation capacity development (ECD). The discussion will emphasize that in an AI-augmented world, human judgment, contextual understanding, and methodological rigor are core assets.
The session invites participants to confront a fundamental tension facing the profession: how to harness AI’s efficiencies without eroding the very trust that gives evaluation its value.
Speakers
| Name | Title | Biography |
|---|---|---|
| Maya Vijayaraghavan | Principal Evaluation Specialist, Independent Evaluation Department, ADB | Maya leads work on evaluation methods, data analytics, AI, and capacity development at ADB. She previously served as a Lead Economist at the U.S. Centers for Disease Control and Prevention and as an econometrician at the World Bank. She holds a PhD in Applied Economics from Clemson University. |
| Nea-Mari Heinonan | Lead Evaluation Specialist, Ministry for Foreign Affairs of Finland | Nea-Mari manages strategic evaluations for Finland’s Ministry for Foreign Affairs. With 20 years in international development, she specializes in planning, monitoring, evaluation, and learning. She has piloted AI and data science approaches and is currently pursuing a PhD in social data science. |
| Anna Guerraggio | Chief, Engagement and Communications, Independent Evaluation Office, United Nations Development Programme (UNDP) | Anna has led UN evaluations for 20 years. A behavioral scientist by training, Anna recently pivoted her work to engagement and communication, championing a shift to embed these elements throughout the evaluation lifecycle at the UNDP Independent Evaluation Office. |
Moderators
| Name | Title | Biography |
|---|---|---|
| Kerry Albright | Advisor and Head, Evaluation Knowledge Management Unit, Independent Evaluation Department, ADB | Kerry oversees evaluation capacity development, methods, strategic communications, and knowledge management at ADB. She is a former UNICEF Deputy Director of Evaluation and UK DFID adviser. She holds a Master's degree in Rural Resources and Environmental Policy from the University of London. |