Ethics, Standards, and Human Judgment
Roundtable | Online
Organized by:
Compass Advisory Group
About the event
“Bias Mitigation and Responsible AI in Evaluation,” hosted by Compass Advisory Group SA, is a high-level roundtable designed to interrogate the growing intersection between artificial intelligence and evaluation practice, with a sharp focus on fairness, accountability, and ethical integrity. The session explores how bias manifests within AI-driven evaluation systems, whether through data, algorithms, or human oversight, and examines practical strategies to detect, mitigate, and govern these risks in real-world applications across public policy, development programs, and private sector innovation.

This convening is particularly valuable for monitoring and evaluation (M&E) practitioners, policymakers, researchers, data scientists, development agencies, academic institutions, and innovation leaders seeking to responsibly integrate AI into decision-making and impact assessment frameworks. Participants will engage in critical dialogue on emerging standards, governance models, and tools that ensure AI systems uphold transparency, inclusivity, and contextual relevance, especially within African and Global South contexts.

Anticipated key takeaways include actionable approaches to identifying and correcting bias in datasets and models, frameworks for embedding ethical AI principles into evaluation systems, insights into regulatory and compliance trends, and a deeper understanding of how to balance technological advancement with social responsibility. Ultimately, the session aims to equip stakeholders with both the strategic awareness and practical tools needed to design evaluation systems that are not only intelligent but also just, credible, and future-ready.
“Navigating Ethical AI Issues for Evaluators: COMPAS(S) Sentinel & Sentinel Maturity Certification Model,” hosted by Compass Advisory Group SA, is a strategic demonstration that unpacks the ethical complexities introduced by artificial intelligence within monitoring and evaluation systems, while presenting Sentinel as a pioneering governance assurance framework purpose-built to mitigate bias and strengthen accountability in evaluation practice. The session will explore how AI-driven tools, if left unchecked, can embed systemic bias, distort evidence, and compromise the credibility of evaluation outcomes, and will position the COMPAS(S) Sentinel framework as a structured solution that enables organizations to proactively diagnose, manage, and govern these risks through a maturity-based certification model.

Designed for M&E practitioners, policymakers, government departments, development agencies, research institutions, data scientists, and private sector leaders integrating AI into decision-making, the discussion will provide both conceptual clarity and practical pathways for embedding ethical safeguards into evaluation ecosystems. Participants will gain insight into how Sentinel functions as a bias mitigation instrument, offering standardized protocols, transparency mechanisms, and performance benchmarks that align AI use with principles of fairness, inclusivity, and contextual intelligence.

Anticipated key takeaways include a deeper understanding of ethical AI risks in evaluation, a clear view of how the Sentinel Maturity Certification Model can be used to assess and elevate organizational readiness, practical strategies for operationalizing responsible AI governance, and a forward-looking perspective on building resilient, credible, and future-fit evaluation systems. Ultimately, the session positions Sentinel not just as a tool but as a transformative layer of assurance that safeguards the integrity of evaluation in an increasingly automated world.
Session
Roundtable | Online
June 3, 2026
15:00 – 16:30
The session will focus on the ethical implications that AI poses for its users, and for evaluators in particular. Ethically grounded recommendations and solutions embedded in sound practice will be presented.
Zoom
Presenter
| Name | Title | Biography |
|---|---|---|
| Daniel | Mr | Daniel Okoko is a Monitoring & Evaluation specialist and AI systems architect focused on transforming impact measurement through intelligent technologies. With expertise in governance performance systems, ESG compliance frameworks, and predictive analytics, Daniel is pioneering AI-driven evaluation models designed for African institutions. His work integrates machine learning, natural language processing, and automated compliance systems to shift organizations from reactive reporting to proactive decision intelligence. He is the founder of an emerging AI-integrated monitoring platform focused on public sector performance, NGO accountability, and corporate ESG systems. Daniel’s long-term vision is to position Africa at the forefront of intelligent governance and impact intelligence infrastructure. |
Moderator
| Name | Title | Biography |
|---|---|---|
| Daniel | Mr | Daniel Okoko is a Monitoring & Evaluation specialist and AI systems architect focused on transforming impact measurement through intelligent technologies. With expertise in governance performance systems, ESG compliance frameworks, and predictive analytics, Daniel is pioneering AI-driven evaluation models designed for African institutions. His work integrates machine learning, natural language processing, and automated compliance systems to shift organizations from reactive reporting to proactive decision intelligence. He is the founder of an emerging AI-integrated monitoring platform focused on public sector performance, NGO accountability, and corporate ESG systems. Daniel’s long-term vision is to position Africa at the forefront of intelligent governance and impact intelligence infrastructure. |
Demonstration | Online
June 5, 2026
16:00 – 17:30
New tools for navigating ethical implications in evaluation will be demonstrated to the audience. These tools can be adopted institutionally.
Zoom
Presenter
| Name | Title | Biography |
|---|---|---|
| Daniel | Mr | Daniel Okoko is a Monitoring & Evaluation specialist and AI systems architect focused on transforming impact measurement through intelligent technologies. With expertise in governance performance systems, ESG compliance frameworks, and predictive analytics, Daniel is pioneering AI-driven evaluation models designed for African institutions. His work integrates machine learning, natural language processing, and automated compliance systems to shift organizations from reactive reporting to proactive decision intelligence. He is the founder of an emerging AI-integrated monitoring platform focused on public sector performance, NGO accountability, and corporate ESG systems. Daniel’s long-term vision is to position Africa at the forefront of intelligent governance and impact intelligence infrastructure. |
Moderator
| Name | Title | Biography |
|---|---|---|
| Daniel | Mr | Daniel Okoko is a Monitoring & Evaluation specialist and AI systems architect focused on transforming impact measurement through intelligent technologies. With expertise in governance performance systems, ESG compliance frameworks, and predictive analytics, Daniel is pioneering AI-driven evaluation models designed for African institutions. His work integrates machine learning, natural language processing, and automated compliance systems to shift organizations from reactive reporting to proactive decision intelligence. He is the founder of an emerging AI-integrated monitoring platform focused on public sector performance, NGO accountability, and corporate ESG systems. Daniel’s long-term vision is to position Africa at the forefront of intelligent governance and impact intelligence infrastructure. |