Doing More with Less while Not Losing Trust: How Should Evaluation Standards Evolve in an AI Augmented World?
Roundtable | Online
Organized by:
EvalforEarth
About the Event
AI has the potential to rapidly reshape how evaluations are planned, conducted, synthesised, and communicated, particularly in low-resource and humanitarian contexts, where pressure to "do more with less" is acute. However, it simultaneously introduces significant ethical, methodological, and governance risks that directly challenge the profession's credibility, independence, and trustworthiness.
AI adds the greatest value when it reduces duplication, lowers transaction costs, and frees human expertise for judgment, ethics, and contextual interpretation, not when it substitutes for core evaluative functions or obscures accountability. This creates an urgent need for professional standards to evolve, not by endorsing AI wholesale but by defining where it is appropriate, where it is not, and what competencies and safeguards are required.
Drawing on the speakers' own use cases, this roundtable will explore how evaluation standards, competencies, and institutional frameworks can adapt to ensure AI strengthens rather than undermines ethical practice, methodological rigor, and public trust. It will be particularly relevant for evaluators and commissioners wrestling with how to translate high-level AI ethics and governance principles into concrete evaluation standards and professional practice.
Speakers
| Name | Title | Biography |
|---|---|---|
| Alexandra Priebe, PhD | Evaluation Officer, OEV WFP | Alexandra Priebe is an Evaluation Officer in WFP’s Office of Evaluation, Use Unit, supporting the development and adoption of an AI evidence mining tool. She brings 20+ years’ experience across humanitarian and development contexts. |
| Steven Jonckheere | Senior Evaluation Officer, IOE IFAD | Steven Jonckheere is Senior Evaluation Officer at IFAD’s Independent Office of Evaluation (IOE). With over 20 years of experience in agricultural and rural development across Africa, Asia, Europe, and Latin America, he specializes in social inclusion, gender, and methodological innovation. At IOE, Steven oversees and leads the Office’s work on AI and innovation in evaluation. |
| Andreas Reumann | Head, Independent Evaluation Unit, GCF | TBA |
| Martin Prowse | Evaluation Specialist | TBA |
| Fabrizio Felloni | Deputy Director, Independent Evaluation Office of the GEF | Fabrizio Felloni has been Chief Evaluation Officer and Deputy Director at the Independent Evaluation Office of the Global Environment Facility (Washington DC) since September 2024. He was previously Deputy Director at the Independent Office of Evaluation of IFAD (2016-2024), Lead Evaluation Officer in the same office (2010-2016), and Evaluation Specialist at UNDP (2008-2010) and at IFAD (2001-2007). He has led project, country-level, sub-regional, thematic, and corporate evaluations in over 25 countries across Africa, Asia, Eastern Europe, and Latin America. He holds a Master's Degree in Agricultural Economics from Washington State University and a Master-equivalent degree in Social and Economic Sciences from Bocconi University (Italy). He is the author or co-author of over a dozen publications in peer-reviewed journals. He is fluent in English, French, and Spanish. |
| Anupam Anand, PhD | Senior Evaluation Officer, GEF IEO | Dr. Anupam Anand is a Senior Evaluation Officer at the GEF IEO, where he serves as the program manager for biodiversity evaluations and methods, developing applied approaches that integrate LLMs, geospatial analysis, satellite data, drones and field methods into evaluative practice. With over 17 years of experience in academia, evaluation and international development, he designs and deploys these tools to generate stronger, field-grounded evaluative evidence, bridging technical execution with evaluation design. Previously, he led NASA-funded projects and conducted climate risk assessments for the World Bank. He holds a Ph.D. from the University of Maryland and a postgraduate diploma in environmental law. |
| Thanicha Ruangmas, PhD | Data Scientist, GEF IEO | Dr. Thanicha Ruangmas is a data scientist at the GEF IEO, where she develops LLMs to classify large document corpora. Her work supports implementation issue classification, activity classification, and policy coherence analysis. She holds a Ph.D. from the University of Wisconsin-Madison. |
| Carlos Tarazona | Senior Evaluation Officer, Food and Agriculture Organization | Carlos Tarazona is a Senior Evaluation Officer at FAO with over 20 years of experience in the evaluation of agricultural and rural development programmes. He has led major evaluations and previously worked with the International Atomic Energy Agency. His work focuses on evaluation methods, learning, and strengthening evaluation practice. |
| Zhiqi Xu | Evaluation Analyst, Food and Agriculture Organization | Zhiqi Xu is an Evaluation Analyst at the FAO Office of Evaluation specializing in mixed-methods evaluation and methodological innovation. She promotes data-informed and AI-assisted approaches in evaluation practice. She is currently pursuing a PhD in Development Studies at Erasmus University Rotterdam on policy experimentation and co-production in China's poverty alleviation. |
Moderators
| Name | Title | Biography |
|---|---|---|
| Anoop Sharma | Evaluation and AI Specialist | Anoop Sharma is Evaluation and AI Specialist at IFAD’s Independent Office of Evaluation (IOE). He has more than seven years of UN experience in evaluations, operational assessments, and AI and data driven analysis. At IOE, Anoop is leading the integration of the Office’s AI strategy into evaluation processes. |
| Innocent Chamisa | EvalforEarth CoP Coordinator | International development specialist with over 10 years of experience across food systems, land governance, digital innovation, and evaluation. FAO award recipient for policy coordination and sustainable agriculture. Currently serving as Global Coordinator of EvalforEarth, supporting evaluation for food security, environment, agriculture, and rural development. |