The Role of AI in Building Evaluation Capacity in Resource-Constrained Settings
Panel Discussion | Online
Organized by:
SAMEA
About the Event
Background
Monitoring and evaluation (M&E) has traditionally relied on manual, resource-intensive processes that limit timely analysis, scalability, and adaptive learning. These challenges persist in many NGO and government contexts, where resource constraints and limited technical capacity often reduce M&E to compliance-driven reporting rather than meaningful, learning-oriented practice (UNDP 2023). As demand grows for more responsive and real-time evaluation, artificial intelligence (AI) presents an opportunity to enhance efficiency, accuracy, and accessibility across the M&E cycle. Emerging AI tools can support data collection, transcription, analysis, and reporting, helping to alleviate operational burdens. However, the integration of AI must be guided by strong ethical principles, including transparency, data privacy, and accountability, and should complement rather than replace human expertise (Radanliev 2025). The discussion aims to highlight how AI can bridge the gap between efficiency and resource constraints while preserving the human element, enabling organisations to streamline their operations and optimise available resources.
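To make the operational point concrete, the sketch below illustrates one pattern the panel will touch on: using a general-purpose language model to draft first-pass thematic codes for open-ended survey responses, with a human evaluator reviewing every suggestion. This is a minimal, hypothetical example, not a SAMEA or panellist tool; it assumes access to the OpenAI Python SDK, and the model choice, prompt wording, and `suggest_themes` helper are all illustrative. Any comparable hosted or local model could be substituted.

```python
# Hypothetical sketch: AI-assisted first-pass thematic coding of open-ended
# M&E survey responses. Assumes the OpenAI Python SDK is installed and an
# API key is configured; any comparable LLM service could be swapped in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_themes(responses: list[str]) -> str:
    """Ask a language model to propose draft themes for human review."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist a monitoring-and-evaluation team. Propose 3-5 "
                    "candidate themes for the survey responses below, citing "
                    "the response numbers that support each theme. These are "
                    "drafts for a human evaluator to verify, not findings."
                ),
            },
            {"role": "user", "content": numbered},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    sample = [
        "The training helped, but travel costs made attendance difficult.",
        "Sessions were useful; we need materials in local languages.",
        "Follow-up support after the workshop was missing.",
    ]
    print(suggest_themes(sample))  # drafts only: a human codes the final set
```

Treating the model's output as drafts for human verification reflects the framing above: AI reduces the manual burden of qualitative analysis while evaluators retain responsibility for interpretation and accountability.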
Objective
The main objective of the panel discussion is to explore how AI can enhance evaluation capacity, improve data-driven decision-making, and strengthen monitoring and evaluation systems in resource-constrained settings. The secondary objectives are to: (i) assess how AI tools can enhance monitoring and evaluation by reducing the time and cost of data analysis and reporting; (ii) identify opportunities for integrating AI into government and social-sector evaluation systems; and (iii) analyse the ethical considerations and risks associated with the use of AI in evaluation.
Methodology and Significance of the Presentation
This panel discussion employs a desktop-review methodology to examine how AI can support evaluation capacity in resource-constrained settings. The discussion draws on academic literature and practitioner experience to explore the opportunities and limitations of AI, including bias and unequal access to technology, and presents a set of considerations for integrating AI into evaluation capacity development in a way that is ethical, context-aware, and people-centred. These considerations can strengthen the design of evaluation capacity interventions and support the ethical use of AI in low-resource settings. The panel shares practical insights for evaluators, development practitioners, and policymakers on the future of AI in monitoring and evaluation.
Speakers
| Name | Title | Biography |
|---|---|---|
| Khalalelo Mokoena | Emerging Evaluator | Khalalelo is an Emerging Evaluator at Redflank, one of South Africa’s top consulting firms for 2025. She holds a Master of Philosophy in Innovation and Development from the University of Johannesburg. Her interests include contributing to evidence-based decision-making to drive long-term impact. |
| Zandile Baloyi | Emerging Evaluator | Zandile is an Emerging Evaluator at the National Department of Social Development with a bachelor's degree in Public Administration and a certificate in Outcomes-based Monitoring and Evaluation Implementation. Her interests include programme & policy evaluation and community-based evaluations. |
| Nikita | Emerging Evaluator | Nikita is a Development Studies Honours graduate with two years of experience in the NPO Monitoring and Evaluation Unit of the Social Development Department. She serves as an Emerging Evaluator through the Department’s partnership with the South African Monitoring and Evaluation Association (SAMEA). |
Moderators
| Name | Title | Biography |
|---|---|---|
| Lebo Nchachi | Emerging Evaluator | Lebo is an Emerging Evaluator at Nation Builder, a social impact organisation that connects social investors with nonprofits to drive measurable, sustainable change across communities in Southern Africa. She holds a Master of Philosophy in Programme Evaluation from the University of Cape Town (UCT) and a BSocSci (Hons) in Social Development from UCT. She has been involved in the student success space and is keen to contribute meaningfully to the upliftment and empowerment of communities. Her areas of interest include education, governance and policy, livelihoods and job creation, and humanitarianism. |