Masterclass: AI: The Good, The Bad, and The Ugly - Practical Uses in the Development Sector
Master Class | Online
-
Organized by:
Sambodhi Research and Communications
About the Event
As part of Glocal Evaluation Week 2026, Sambodhi proposes a two- to three-hour masterclass that dissects how artificial intelligence is currently being deployed across the development sector.
There is a lot of noise around AI right now. On one end, it is seen as a tool that can significantly improve efficiency and scale. On the other, there are real concerns around accuracy, bias, data privacy, and the dilution of rigorous thinking. For professionals working in research, monitoring and evaluation, communications, and program implementation, the challenge is not whether to use AI, but how to use it responsibly and effectively.
This session will move the discussion beyond abstraction through case studies of AI use. We will outline the major types of AI applications in the development space, analyze how these applications perform in practice, and identify the risks associated with them.
The masterclass will also introduce participants to specific tools and demonstrate how they can be used across different functions.
Objectives
• Identify the areas where AI can be deployed in development-sector work.
• Examine the challenges this deployment raises, including inclusivity, ethics, and learning gaps.
• Equip participants to use practical AI tools ethically and to decide where each tool is appropriate.
Perspectives on AI and Trust
1. The Good: This will cover areas where AI is already improving workflows, such as data analysis, transcription, content drafting, literature reviews, and knowledge management. The focus will be on efficiency gains and access.
2. The Bad: We will look at common pitfalls, like hallucinations, shallow analysis, over-reliance on generic content, and the risk of weakening methodological rigor.
3. The Ugly: This segment will address deeper concerns around bias, misinformation, data security, and the implications of AI-generated outputs in evaluation and research contexts where credibility is critical.
These three perspectives will frame a discussion of how trust in evidence can be built and maintained.
Practice Lab
The session will end with a hands-on component where participants:
• Identify 2–3 tasks from their own work where AI could be deployed.
• Match these tasks with relevant tools introduced during the session.
• Reflect on potential risks and define basic checks or safeguards.
Speakers
| Name | Title | Biography |
|---|---|---|
| Dr. Anuradha Katyal | Deputy Vice President - Public Health Practice at Sambodhi | Dr. Anuradha Katyal has over 15 years of experience at the intersection of health systems research and data science. Her expertise spans health financing, primary care, urban health, and service delivery. She holds dual master’s degrees in data science from IIIT-Bangalore and Liverpool John Moores University, which prompted her to explore applications of machine learning in health systems and to advance ethical, equity-focused uses of AI in public health. |