AI in Evaluation: The Conversation Commissioners and Consultants Aren't Having
Panel Discussion | Online
Organized by:
UK Evaluation Society (UKES)
About the Event
Commissioners and consultants are navigating AI in evaluation under fundamentally different pressures—and often talking past each other. Commissioners worry about distinguishing genuine evaluative capability from AI proficiency, and struggle to compare bids when disclosure levels vary. Consultants face a transparency penalty: honestly disclosing AI use and the efficiency gains it brings can disadvantage their proposals against competitors who stay silent.
This session brings a government commissioner and an independent consultant into direct dialogue about the tensions neither side has resolved. Drawing on UKES's published AI Guidelines and real procurement dilemmas, we explore: What should appropriate AI disclosure look like? How can commissioners fairly compare bids when pricing reflects undisclosed efficiency gains? Who bears accountability when AI-assisted analysis goes wrong?
Rather than presenting solved problems, this session surfaces the genuinely contested questions and invites participants to share how these play out in their own contexts. Participants leave with practical frameworks for the conversations they need to have—and a clearer view of where professional consensus is still forming.
Speakers
| Name | Title | Biography |
|---|---|---|
| Jonathan Kuhn-Patrick | Evaluator and UKES Board Member | Jonathan leads UKES's AI in Evaluation initiative and developed the first national evaluation society AI guidelines (2025). He has 30+ years of experience in humanitarian and development evaluation and works as an independent consultant exploring how AI changes evaluator-commissioner relationships. |
| Cormac Quinn | Evaluation Adviser, Foreign, Commonwealth and Development Office | Cormac advises on evaluation across FCDO's international development portfolio. As a UKES Trustee, he brings the commissioner perspective: how to assess AI competence, manage disclosure, and maintain quality when procurement frameworks lag behind practice. |