C16: Identify effective explanations – a hands-on course


Tuesday, 24 June 2025, 13:30–17:30 CEST (Central European Summer Time, Sweden)

Helmut Degen and Christof Budnik (short bios below)
Siemens Corporation, USA

Modality

on-site

Room: TBA

Target Audience

  • Researchers / academics
  • Students
  • Professionals
  • Industry
  • Regulatory authorities

Requirements for participants

Participants should bring a laptop with an office software suite and a vector graphics program (e.g., Visio, PowerPoint, …).

Abstract

Artificial intelligence (AI) is increasingly integrated into consumer and business applications. A prominent area within AI, machine learning (ML), brings significant advantages but also introduces a key challenge: uncertainty, from a human perspective, about the correctness of its outcomes. To foster trust in these systems, it is essential to provide clear and traceable explanations that help users understand why specific outcomes were produced.

However, identifying "effective" explanations is not straightforward. In this context, "effective" refers to explanations that are tailored to the needs and goals of a specific user group. Currently, there is no widely accepted framework or methodology for systematically identifying such tailored explanations.

This course introduces a methodology specifically designed to address this gap by helping participants develop user-centric explanations for AI-based systems. The methodology, refined over several years of industrial research, offers a hands-on, step-by-step approach that guides participants through the process of identifying effective explanations for a particular target audience. By the end of the course, participants will have identified the content and structure of the explanations needed in a selected ML-based application to foster the trust of the chosen target user group. We refer to this structured explanation content as a mental model for explanations, which can serve as the foundation for UX design.

We encourage participants to come to the course with their own application and target user group in mind. Given the practical nature of the course, we recommend that participants from the same organization join as a small team to maximize collaboration during the exercises and produce a useful result. Exercises are conducted in groups of 3–4 participants.

Benefits for attendees

  • Learn a structured method for identifying effective explanations, serving as a foundation for UX design.
  • Complete the course with well-defined explanation content and structure (i.e., a mental model for explanations) tailored to an application and target user group of your choice.

Course Content

The objective of the course is for each participant to create a mental model for explanations for a chosen application and target user group. A minimal sketch of what such a mental model might contain follows the step list below.

  • Introduction (Uncertainty premise, reasons for uncertainty)
  • Understanding the explanation model
  • Understanding the method to identify and evaluate effective explanations
  • Step 1: Specify application and target user group
  • Step 2: Identify user goals
  • Step 3: Identify user tasks
  • Step 4: Mark the user tasks that require explanations
  • Step 5: Identify inputs and outputs per user task
  • Step 6: Answer backward-looking explainability questions
  • Step 7: Answer forward-looking explainability questions
  • Step 8: Create the mental model for explanations
  • Step 9: Group presentation and wrap-up
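
To make the target artifact concrete, the sketch below shows one possible way to capture the outputs of Steps 1 through 8 as a structured data object. This is an illustration only, not part of the course materials (the exercises use office and diagramming tools, not code), and all names and fields are assumptions made for this example.

    # Illustrative sketch only (not from the course materials): one possible
    # way to record the intermediate results of Steps 1-8.
    # All class and field names are assumptions made for this example.
    from dataclasses import dataclass, field

    @dataclass
    class UserTask:
        name: str                                                     # Step 3
        inputs: list[str]                                             # Step 5
        outputs: list[str]                                            # Step 5
        needs_explanation: bool = False                               # Step 4
        backward_questions: list[str] = field(default_factory=list)  # Step 6
        forward_questions: list[str] = field(default_factory=list)   # Step 7

    @dataclass
    class MentalModelForExplanations:  # Step 8: the assembled mental model
        application: str               # Step 1
        target_user_group: str         # Step 1
        user_goals: list[str]          # Step 2
        user_tasks: list[UserTask]     # Step 3

    # Hypothetical example for a consumer application:
    model = MentalModelForExplanations(
        application="Smart thermostat",
        target_user_group="Homeowners",
        user_goals=["Keep energy costs low"],
        user_tasks=[UserTask(
            name="Review suggested heating schedule",
            inputs=["Occupancy history", "Weather forecast"],
            outputs=["Suggested schedule"],
            needs_explanation=True,
            backward_questions=["Why was this schedule suggested?"],
            forward_questions=["What happens if I reject the suggestion?"],
        )],
    )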

Hands-on Part

Steps 1 through 9 are hands-on exercises tailored to the application and target user group each participant has selected in advance. For those who have not made a selection upfront, we will provide pre-prepared examples of consumer and business applications to work with.

Bio Sketches of the Course Instructors

Dr. Helmut Degen is a Senior Key Expert for User Experience at Siemens Corporation in Princeton, NJ, USA. His research focuses on trust, value, and efficiency, with a particular emphasis on explainable AI (XAI) and human-computer interaction. Helmut serves as the co-chair of the annual international conference "AI in HCI," which is affiliated with the HCI International conference. He holds a Master of Science (Diplom-Informatiker) from the Karlsruhe Institute of Technology and a PhD in Information Science from Freie Universität Berlin, both in Germany.

Dr. Christof J. Budnik is a Senior Key Expert Engineer for Model-based Testing and Verification of Intelligent Systems at Siemens Corporation, Princeton, NJ, USA. He leads research and business projects in several industrial domains, striving for innovative assurance technologies and tools that solve real-world problems. Before joining Siemens, he was head of software quality at a German company in the smart-card business. Dr. Budnik received his PhD in Electrical Engineering from the University of Paderborn in 2006. He has authored more than sixty publications in international journals and conference proceedings. Dr. Budnik is a regular program committee member of several software engineering conferences and serves as an associate guest editor and reviewer for selected journals.