Inclusion and intersectionality in AI-enabled evaluations

Roundtable | Online

About the event

This session, organized by the UN Network for Evaluation Systems Strengthening in Africa (UNNESSA), an interagency network, examines inclusion and intersectionality in evaluation as a question of power, not only participation. It begins by clarifying what “inclusion” and “intersectionality” mean in evaluative practice, then explores how exclusion can be (re)produced at key points in the evaluation cycle: who sets questions and priorities, whose knowledge is treated as credible, which methods and evidence hierarchies are labelled “rigorous,” who validates findings, and how results are communicated and used in the age of AI.

Grounded in practical examples of “invisible” exclusion points, panelists apply intersectionality as a working lens to map who is excluded, how, and why across overlapping axes of identity and power (e.g., gender, race, class, disability, geography, language, migration status, colonial histories). Aligning with gLOCAL 2026’s theme “Evaluation, Evidence, and Trust in the Age of AI”, the session frames exclusion as an ethical challenge that can be intensified or mitigated by AI-enabled tools and workflows, raising critical questions about ethics, accountability, and credibility.
The session closes with a short diagnostic exercise to help participants identify exclusion risks (including AI-related risks), pinpoint remedies, and define concrete redesign actions that shift voice, influence, and decision-making towards those most affected.

Presenters

Jean Providence Nzabonimpa (WFP, Regional Evaluation Officer): Advocate and practitioner of conventional and non-conventional research and evaluation methods in development and humanitarian programmes, with proven expertise in mixed-methods research and evaluation and skilled use of artificial intelligence and software for evidence generation and use.

Cyuma Mbayiha (UN Women, Regional Evaluation Specialist): Leads and quality-assures country and regional portfolio evaluations across West and Central Africa. Specializes in gender-responsive, intersectional, theory-based evaluation in fragile and humanitarian contexts, with an interest in the responsible use of AI for evaluative reasoning, learning, and data ethics.

Bikul Tulachan (UNICEF, Evaluation Specialist): More than 15 years of experience across Africa and Asia, including over a decade at UNICEF. Has managed numerous evaluations, worked with J-PAL on randomized controlled trials in India and Nepal, and trained more than 250 UN professionals in applying AI to evaluation. Has contributed to national evaluation systems and co-authored papers. Holds an MSc and a BSc in Economics from the US.

Souraya Hassan (UNICEF, Regional Evaluation Adviser)

Deqa Moussa (UNDP, Regional Evaluation Adviser): An international development practitioner with extensive experience in programme evaluation, monitoring, and reporting at UN headquarters, regional, and country levels. She is interested in ethics, AI, gender, and political economy analysis.

Malene Nielsen (UNHCR, Regional Evaluation Adviser)

Moderator

Loveena Dookhony (UNFPA, Humanitarian Evaluation Team Lead): Loveena Dookhony is the Team Leader for the Humanitarian Evaluation Team at UNFPA. She has 20 years of experience managing evaluations in both development and humanitarian settings. Before UNFPA, she worked at both headquarters and in the field for the World Bank, WHO, and the SADC Parliamentary Forum.

Topics

Evaluators | Annual theme: Evaluation, Evidence, and Trust in the Age of AI
