The UKES AI Principles in Practice: Scenario-Based Decision Logic for Responsible AI Use
Workshop | Online
-
Organised by:
UK Evaluation Society (UKES)
About the event
Most evaluators sit somewhere between prohibition and permission: experimenting cautiously with AI, uncertain where the lines are, and lacking a shared professional reference point. The UKES AI Guidelines, published in November 2025, were developed to close that gap.
This session takes those guidelines off the page and into practice. Rather than presenting principles as abstract rules, we use three realistic scenarios of escalating complexity to show how the four UKES principles—Transparency and Competence, Human Control, Risk Management, and Quality Assurance—work together when facing real evaluation decisions.
Scenario A: AI-assisted literature screening in a policy evaluation.
Scenario B: Coding sensitive qualitative interviews with domestic violence survivors.
Scenario C: AI-generated conclusions in a high-stakes impact evaluation for a government spending review.
For each scenario, participants apply the UKES decision logic via live polling: Should AI be used? Under what conditions? With what safeguards? The facilitator works through each in plenary, drawing on participant responses to surface where principles create tension and where context shifts the lines.
Participants leave with a reusable decision framework, a scenario workbook, and clearer judgement about when AI use is appropriate—and when it should be avoided.
Speakers
| Name | Title | Biography |
|---|---|---|
| Jonathan Kuhn-Patrick | Evaluator and UKES Board Member | Led development of the UKES AI in Evaluation Guidelines (2025), the first AI guidance published by a national evaluation society. Over 30 years' experience in humanitarian and development evaluation. Designs practical frameworks for responsible AI adoption in evaluation practice. |