How can we integrate abstract ethical concerns into an operational AI risk assessment methodology?
During the workshop run by the Responsible AI (RAI) group at Digital Health Lab Day on 3 September in Winterthur, we presented a value-driven risk assessment methodology. Using the dimensions defined in the EU Ethics Guidelines for Trustworthy AI as grounding values, participants analysed a case study based on a real-life mental health app to identify the risks and vulnerabilities associated with the system.
This structured, value-based lens helped participants widen their view of AI-related risks. While initial thoughts focused on privacy and technical robustness, the discussions quickly broadened to include wider societal concerns:
· How can we be sure users are aware of the social influence the system can exert?
· Should a chatbot give recommendations in cases not covered by its training data?
· Who controls and oversees the recommendations I receive from the chatbot?
By linking risks such as over-reliance on AI outputs, the effects of system hallucinations, and biased responses to the specific values represented by the trustworthiness dimensions, participants could identify and locate potential sources of harm in the system. The exercise equipped participants with the means to translate the abstract concept of AI trustworthiness into a formal structure that can be integrated into operational risk management practice. The outcome of the workshop illustrates how a value-driven approach can help identify which mitigation measures best address the vulnerabilities with the highest impact.
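To make this concrete, here is a minimal sketch (not the workshop's actual tooling) of how such a value-driven risk register could be encoded: each risk is tagged with the trustworthiness dimension it threatens and scored for likelihood and impact, so mitigations can be prioritised. All entries, scores, and names are hypothetical illustrations.

```python
from dataclasses import dataclass

# The seven dimensions from the EU Ethics Guidelines for Trustworthy AI.
DIMENSIONS = {
    "human_agency_and_oversight",
    "technical_robustness_and_safety",
    "privacy_and_data_governance",
    "transparency",
    "diversity_non_discrimination_and_fairness",
    "societal_and_environmental_wellbeing",
    "accountability",
}

@dataclass
class Risk:
    description: str
    dimension: str   # the value the risk threatens
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigation: str

    def __post_init__(self):
        if self.dimension not in DIMENSIONS:
            raise ValueError(f"Unknown dimension: {self.dimension}")

    @property
    def severity(self) -> int:
        # Classic risk-matrix score; any consistent ordering rule would do.
        return self.likelihood * self.impact

# Hypothetical entries echoing the risks discussed in the workshop.
register = [
    Risk("Users over-rely on chatbot advice in a crisis",
         "human_agency_and_oversight", 4, 5,
         "Escalate crisis keywords to a human professional"),
    Risk("Model hallucinates advice outside its training data",
         "technical_robustness_and_safety", 3, 5,
         "Constrain answers to vetted content; refuse out-of-scope queries"),
    Risk("Recommendations are biased against some user groups",
         "diversity_non_discrimination_and_fairness", 3, 4,
         "Audit outputs across demographic slices before release"),
]

# Prioritise mitigations by the vulnerabilities with the highest severity.
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"[{risk.severity:>2}] {risk.dimension}: {risk.description}"
          f" -> {risk.mitigation}")
```

The point is not the particular scoring rule but the mapping: once each risk is tied to a named value, the vulnerabilities with the highest impact become an explicit, sortable quantity rather than a gut feeling.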
The feedback was encouraging: participants shared that the session broadened their perspectives and made them reflect on aspects they would not have considered otherwise. The diversity of professional and personal viewpoints enriched the discussions and showed the strength of a value-driven methodology in practice. Importantly, this reflection was complemented by actionable processes that can ultimately enhance the trustworthiness of real-world AI applications.
👉 Our takeaway: risk assessment should begin with the question, “What values do we want to protect when using this system?” A value-driven approach helps identify risks, understand vulnerabilities, and guide mitigation—paving the way for AI systems that are not only technically robust but also aligned with human values.