AI and the Future of Evaluation: Practice, Judgment, and Systems
Webinar | Online
Organized by:
4th Wheel Social Impact
About the Event
As artificial intelligence becomes increasingly embedded in evaluation, there is growing interest, but also uncertainty, around how it should be used in practice. This three-part series explores the evolving role of AI in evaluation through a structured progression: what is already changing, what must remain human-led, and what the sector needs to build going forward. The first session focuses on practice, showcasing how AI is being used across the evaluation lifecycle and how it is transforming everyday workflows. The second session shifts to judgment, examining the ethical boundaries of AI and the aspects of evaluation, such as context, interpretation, and trust, that must remain human-led. The third session looks ahead to systems, exploring how organisations can move beyond fragmented tool use towards more integrated, AI-enabled evaluation approaches. Together, the series combines practitioner insights, critical reflection, and forward-looking discussion, using a mix of formats including experience-sharing, panel discussion, and an interactive workshop. It aims to give participants both a practical understanding of AI in evaluation and strategic direction on how to engage with it in a responsible, effective, and future-ready manner.
Sessions
Webinar | Online
June 3, 2026
14:00–15:30
As artificial intelligence becomes increasingly embedded in development and evaluation work, many conversations remain either highly conceptual or focused only on tools. This session shifts the discussion from general possibilities to actual evaluation practice by exploring how AI is changing the day-to-day work of evaluators across the full evaluation lifecycle.
Framed around a practical case-based example, the session will walk participants through how an evaluation of education and skill development programmes would previously have been conducted, and how similar work can be approached now using AI-supported methods. Rather than presenting AI as a replacement for evaluators, the session will demonstrate how AI can augment human work across different stages of an evaluation, while also highlighting the continuing importance of professional judgment, contextual understanding, and methodological rigour.
The session will feature a moderated relay-style conversation with practitioners playing different roles in the evaluation workflow. Each speaker will reflect on how their work has changed in specific areas such as designing theories of change and outcome frameworks, developing tools, reviewing or cleaning data, analysing findings, and preparing reports and knowledge products. Through this format, participants will see not only what tools are being used, but also what has changed in terms of speed, quality, synthesis, and decision-making.
Using a practical example from education and skill training evaluations, the panel will illustrate how AI can support:
the development of frameworks, outcomes, and indicators
synthesis of prior evidence and programme documents
improvement of tools and instrument design
data handling, summarisation, and analysis
report writing, presentation building, and communication of findings
The session will also reflect on where AI adds genuine value, where caution is needed, and what this means for the future skill set of evaluators.
Session Objectives
demonstrate how AI can be used across the evaluation lifecycle in practical and grounded ways
show how evaluation workflows are changing, not just which tools are available
provide real examples of how different evaluation roles are using AI in their daily work
compare “before” and “now” approaches to highlight changes in efficiency, quality, and output generation
encourage critical reflection on where human judgment remains essential
Speakers
| Name | Title | Biography |
|---|---|---|
| Ms. Aditi Chatterjee | Senior Consultant | Research, Data, and Monitoring, Evaluation, & Learning (MEL) specialist with 14+ years of development sector experience, most recently as an independent consultant ('solopreneur') working with international development and private sector stakeholders globally. She enables data- and evidence-based decision-making to plan, measure, and improve social impact outcomes in organisations, programmes, and supply chains. Having lived, studied, and worked across India and the UK, she has contributed to stakeholders in both the 'Global South' and 'Global North', especially across India, South Asia, Southeast Asia, Africa, Europe, the UK, and the US. Her clients and funders include the Global Fund to End Modern Slavery (US), Innovations for Poverty Action (US), the US Department of State, the US Department of Labor, British International Investment, FCDO (UK), the Freedom Fund (UK), NORAD (Norway), CIFF (UK), Bond (UK), Coca-Cola India, and UNDP India/Vietnam. She has also worked directly with communities, including living in a Mumbai slum and taking on labour jobs alongside informal workers for immersive research. She specialises in mixed-methods research, data insights, descriptive data analysis, surveys, systems-change and developmental evaluations, complexity-aware monitoring and learning, development of strategic ToCs and MEL systems for organisations, programmes, coalitions, and accelerators, and management of RCTs and quasi-experimental studies. |
Moderators
| Name | Title | Biography |
|---|---|---|
| Ms. Sharon Weir | Co-Founder, 4th Wheel | Co-founder of 4th Wheel Social Impact (2010), she has led 100+ research, evaluation, and communication projects. She specializes in M&E frameworks across education, health, livelihoods, WASH, and women’s empowerment. Her work spans government, NGOs, corporates, and impact investors, with a strong focus on impact measurement systems. She has implemented diverse evaluations using methods such as quasi-experimental designs, SROI, social audits, and participatory approaches. |
Webinar | Online
June 4, 2026
14:00–15:00
As artificial intelligence becomes increasingly integrated into evaluation practice, much of the focus has been on improving efficiency, speed, and scale. However, evaluation is not only a technical process—it is fundamentally human, requiring contextual understanding, ethical judgment, and meaningful engagement with communities.
This session explores the limits of AI in evaluation by asking a critical question: What aspects of evaluation cannot—and should not—be automated?
Positioned as a counterpoint to AI-focused efficiency discussions, this panel will bring together diverse stakeholders from across the evaluation ecosystem to reflect on where human judgment remains essential for maintaining credibility, trust, and ethical integrity.
Potential Speakers / Roles
1. an NGO leader or programme implementer with deep experience in field engagement and community-based work
2. a Monitoring & Evaluation (M&E) specialist or qualitative researcher focused on analysis, interpretation, and learning
3. a CSR leader, donor representative, or policy advisor who uses evaluation evidence for decision-making
4. a representative from a digital data or technology platform (such as SurveyCTO or similar) working at the intersection of AI and data systems
Through this multi-stakeholder perspective, the session will explore key dimensions where human involvement remains critical:
Field engagement and trust-building, which shape the quality and authenticity of data
Contextual interpretation, including cultural, social, and political nuances
Ethical decision-making, including what to report, how to represent findings, and how to minimise harm
Strategic use of evidence, where judgment is required to translate findings into meaningful action
The session will also include an interactive component where participants reflect on differences between AI-generated outputs and human interpretation, highlighting how depth, nuance, and “so what” insights emerge through human engagement. Rather than positioning AI as a replacement for evaluators, the discussion will focus on defining a balanced, responsible approach—where AI supports evaluation processes while human judgment remains central.
Speakers
| Name | Title | Biography |
|---|---|---|
| Ms. Sulagna Choudhari | M&E Specialist/Social Impact Research/Impact Assessment/CSR Reporting/Empowering CSR and NGOs with Utilization-Focused Monitoring and Evaluation/Education and Skills | With 8 years of experience in research studies, evaluation practice, and impact assessments, Sulagna has collaborated with organizations such as UNFPA, UNDP-IICPSD, and UNESCO-IIEP. Currently, as a Senior Advisor at 4th Wheel, she designs and implements impact evaluation projects using diverse methodologies such as social audits, SROI, participatory needs assessments, and pre-post assessments. She excels in conducting comprehensive, utilization-focused evaluations, developing data collection tools, and creating detailed, visualized reports. |
Moderators
| Name | Title | Biography |
|---|---|---|
| Ms. Juhi Jain | Associate Manager - M&E | Trained as a social worker with a specialisation in public health, Juhi has close to 8 years of experience working with non-profits in India, focusing on children, adolescent girls and young women, and the health rights of marginalised communities in urban settings. Over the years, her work has led her to specialise in qualitative research, project management, M&E, and writing. |
Webinar | Online
June 5, 2026
14:00–15:00
AI is increasingly being used in evaluation, but most adoption in the social sector remains fragmented and task-based, focused on isolated uses such as report writing, data summarisation, or presentation development. There is limited clarity on how AI can be integrated into evaluation systems in a structured and meaningful way.
In contrast, sectors such as technology and business have moved beyond using disconnected tools. They have embedded AI within SaaS-based platforms, integrated data systems, automated workflows, and real-time dashboards that support continuous data use and faster decision-making. These approaches enable data to flow across processes rather than remain siloed within reports.
This session focuses on bridging this gap. It will explore how evaluation can move from tool-based use of AI to more integrated, system-level approaches, where data, analysis, and decision-making are more closely connected.
Speakers
| Name | Title | Biography |
|---|---|---|
| Mr. Maulik Chauhan | Founder, Trestle Research; India Lead, SurveyCTO; Ex-UN; Ex-JPAL | Maulik Chauhan is the Founder & Managing Director of Trestle Research, a technology-driven social enterprise created to bridge data, technology, and grassroots impact. Under his leadership, Trestle has partnered with NGOs, CSRs, M&E teams, and researchers to digitize surveys, build interactive dashboards, and strengthen monitoring & evaluation systems. Beyond technology, he has supported organizations in storytelling and data-driven communication with stakeholders, designing SMART questions for KPI tracking, data cleaning, data management, and building stronger capacities to understand and use data effectively, ensuring that decisions are guided by reliable, real-time insights. He currently serves as India Lead (Growth Consultant) for SurveyCTO, supporting its operations in Asia to scale secure, offline-first digital data collection. As a United Nations consultant, he has worked with African countries to digitize CPI data collection and strengthen national statistical systems. |
Moderators
| Name | Title | Biography |
|---|---|---|
| Ms. Jayashri Ramesh Sundaram | Associate Manager - M&E | Communications and research professional with a background in International Relations and Journalism, dedicated to advancing global social impact through evidence, storytelling, and on-ground engagement. She works at the intersection of M&E, field research, and strategic communication, supporting rural development, gender, and livelihood programs across India. Her global academic lens informs her local, grounded approach to development. |