Reimagining Evidence Through Storytelling in the Age of AI
Demonstration | Hybrid
Organized by:
Cloneshouse Nigeria Ltd
About the Event
1.0 Background and Rationale
In the evolving landscape of Monitoring, Evaluation, and Learning (MEL), the rapid advancement of artificial intelligence (AI) is reshaping how data is generated, analyzed, and communicated. While AI offers significant opportunities for efficiency, scale, and innovation, it also introduces critical concerns around bias, misinterpretation, and the erosion of trust in evidence.
Within this shifting context, a key challenge extends beyond the production of evidence to its communication and interpretation. Evaluation findings often remain highly technical, inaccessible, or disconnected from the lived realities they are intended to represent. Consequently, stakeholders, including policymakers, practitioners, and communities, may struggle to understand, trust, or act on available evidence.
This exhibition responds to this gap by positioning communication as a critical bridge between data and understanding, which is where Cloneshouse’s EvalBlues Series comes in. EvalBlues is an illustrated informational, educational, and communication resource that captures the everyday realities of professionals in the MEL field through comics, delivered in a playful, relatable, and easy-to-digest format.
With the rigor of fieldwork, the intensity of data structuring and analysis, and the pressure to communicate insights concisely, monitoring and evaluation can be demanding work. EvalBlues offers a moment of relief, bringing you both the comic and the blues. Through creative and visual storytelling, the series seeks to translate complex evaluation concepts into relatable, engaging, and human-centred narratives.
The concept aligns with the sub-theme “Ethics, Standards, and Human Judgment” by interrogating how evidence can be either misrepresented or strengthened depending on how it is framed and communicated, and what this means for trust, transparency, and accountability in evaluation practice. It further connects with “Practical Applications of AI in Evaluation” by exploring the contrasts and intersections between AI-generated and human-generated outputs, particularly in shaping how evidence is interpreted and understood.
Anchored in the 2026 GLOCAL theme “Evaluation, Evidence, and Trust in the Age of AI,” this exhibition underscores the importance of not only producing credible evidence but also ensuring that it is communicated in ways that foster understanding, critical reflection, and informed action.
2.0 Concept Overview
“Reimagining Evidence Through Storytelling in the Age of AI” is a creative, interactive exhibition that uses illustrated storytelling to explore how evidence is generated, interpreted, and trusted in an AI-driven world.
The exhibition moves beyond static infographics to present narrative-driven visual experiences, where participants engage with evaluation concepts through relatable characters, real-life scenarios, and reflective prompts. At the heart of the exhibition is an EvalBlues storytelling approach, a visual narrative style that simplifies complex MEL concepts into accessible, emotionally engaging stories.
2.1 Objectives
The exhibition aims to:
i. Simplify complex evaluation concepts through visual storytelling and relatable narratives
ii. Enhance understanding of AI’s role in evaluation, including its opportunities and risks
iii. Promote critical thinking around data quality, bias, and interpretation
2.2 Exhibition Structure
The exhibition will be organized into four thematic presentations, each illustrating a key dimension of evidence and trust.
Presentation 1: When Evidence Fails
Theme: Bad data equals bad decisions
Description:
This station highlights the consequences of poor-quality data, weak methodologies, or misinterpretation. Through EvalBlues illustrations, participants see how flawed evidence can lead to ineffective or harmful decisions.
Key Message:
Evidence is only as strong as the process that produces it.
Presentation 2: The Illusion of Intelligence
Theme: AI does not equal truth
Description:
This station explores the risks of over-reliance on AI-generated insights. It demonstrates how bias, incomplete datasets, and algorithmic limitations can distort findings.
Interactive Element:
Participants are invited to identify errors or biases in AI-generated outputs.
Key Message:
AI can support evaluation, but it cannot replace critical human judgment.
Presentation 3: Making Sense of Evidence
Theme: Human plus AI collaboration
Description:
This section demonstrates how combining AI tools with human expertise yields more accurate, context-sensitive evaluation outcomes.
Key Message:
Credible evidence emerges from thoughtful interpretation, not automation alone.
Presentation 4: Evidence that Builds Trust
Theme: Good evidence equals real impact
Description:
This final station demonstrates how high-quality, ethically sourced, and well-communicated evidence leads to better decisions and tangible development outcomes.
Key Message:
Trustworthy evidence drives meaningful and sustainable change.
Formats and Design Elements
The exhibition will integrate multiple creative formats to enhance engagement:
2.4 EvalBlues Illustrated Panels
i. Comic-style storytelling boards
ii. Sequential narratives featuring recurring characters
iii. Simplified visual explanations of complex MEL concepts
2.5 Embedded Infographics
i. Data visualizations integrated within illustrations
ii. Contextualized statistics that support each narrative
2.6 AI vs Human Interpretation Displays
i. Side-by-side comparison of:
- AI-generated insights
- Human-refined interpretations
Prompt:
“Which do you trust more, and why?”
2.7 Interactive Reflection Wall
Participants contribute responses to prompts such as:
i. “What makes you trust data?”
ii. “Have you experienced the impact of poor evidence?”
2.8 Live Visual Content Creation
i. On-site development of EvalBlues-style illustrations
ii. Content repurposed for real-time digital engagement
2.9 Event Type and Format
i. Mode: Hybrid (in-person + online)
ii. Technology: Zoom for online participants; live streaming, digital reflection wall, polls, and Q&A.
iii. Interactive Engagement:
- Online participants submit reflections, answer prompts, and vote on AI vs human interpretation.
- Live Q&A sessions allow online participants to interact in real time.
- Physical participants receive small incentives for engagement; online participants receive digital badges or downloadable EvalBlues content.
3.0 Speakers and Moderators
i. Onochie Mokwunye – Director of Strategic Partnerships and Programs; moderator linking exhibition themes to practical MEL applications.
ii. Princess Odey – Cloneshouse Foundation Community Manager; facilitates participant engagement and online interaction.
iii. Kayode Ogunwole – Illustrator, EvalBlues creator; leads live storytelling sessions and demonstrates comic-style translation of MEL concepts.
Rationale: This team ensures a blend of methodological expertise, community engagement, and creative storytelling for an immersive learning experience.
3.1 Target Audience
i. Monitoring, Evaluation, and Learning (MEL) practitioners
ii. Development professionals and NGOs
iii. Policymakers and government stakeholders
iv. Researchers and academics
v. Communication specialists
vi. Youth and emerging professionals in development
3.2 Expected Outcomes
The exhibition is expected to:
i. Increase participant understanding of evidence integrity in the age of AI
ii. Strengthen awareness of the risks of biased or poorly interpreted data
iii. Promote responsible use of AI tools in evaluation
iv. Encourage human-centred approaches to evidence communication
v. Enhance engagement and knowledge retention through visual learning
3.3 Communication and Knowledge-Sharing Strategy
To extend impact beyond the physical exhibition, Cloneshouse will:
i. Convert exhibition panels into digital storytelling content
ii. Share insights via:
- LinkedIn posts
- Twitter threads
iii. Produce a post-event visual summary report
iv. Develop reusable EvalBlues knowledge assets for future learning
3.4 Innovation and Value Addition
This exhibition introduces a novel approach to evaluation communication by:
i. Combining art, storytelling, and MEL practice
ii. Making technical concepts accessible and memorable
iii. Bridging the gap between data producers and data users
iv. Demonstrating practical applications of AI in communication
v. Centering human experience in evidence discourse
3.5 Alignment with GLOCAL Theme
The exhibition directly contributes to the GLOCAL 2026 theme by:
i. Exploring the intersection of AI and evaluation
ii. Addressing trust deficits in evidence systems
iii. Promoting ethical and transparent communication practices
iv. Encouraging critical engagement with data and technology
4.0 Conclusion
“Reimagining Evidence Through Storytelling in the Age of AI” transforms evaluation from abstract data into lived, visual, and relatable narratives. By combining creativity with critical inquiry, the exhibition enables participants to not only understand evidence but to question, interpret, and trust it responsibly.
In an age where AI is reshaping knowledge systems, this initiative ensures that human understanding, ethical judgment, and storytelling remain at the center of evaluation practice.
Speakers
| Name | Title | Biography |
|---|---|---|
| Onochie Mokwunye | Director of Programs and Strategic Partnerships | Onochie is a senior evaluation expert with experience across the health, power, and education sectors in Nigeria and Sub-Saharan Africa. He specializes in M&E, large-scale programme analysis, and food and nutrition assessments, working with donors such as the EU, JICA, and DFID, and with organisations such as Oxfam and FAO. |