The achievements of contemporary machine learning (ML) methods highlight the enormous potential of integrating AI systems into various domains of medicine, ranging from the analysis of diagnostic images in radiology and dermatology to increasingly complex applications such as forecasting in intensive care units or the diagnosis of psychiatric disorders. However, despite this potential, many medical professionals remain sceptical toward the integration of machine learning tools into their practice. This scepticism is mostly related to the opacity, or so-called black-box, problem: the difficulty humans face in understanding the reasoning behind the outcomes of ML models and, ultimately, in deciding whether or not to trust them.
Much effort has been dedicated in recent years to overcoming this difficulty, from policy and ethical as well as engineering and design perspectives. Nevertheless, there is still much disagreement among scholars about the actual effectiveness of the various proposed solutions.
The aim of the meeting «Explainable AI in Medicine: A critical appraisal of limitations and insights for future developments» is to bring together experts from fields such as philosophy, bioethics, AI ethics, XAI, and human-computer interaction to discuss whether, how, and to what extent the proposed solutions to the black-box problem are effective in supporting the successful integration and appropriation of AI systems in medical practice.
When: 2–3 November 2023
Where: Lugano, Switzerland
Language: English
More info: Link
Registration: alessandro.facchini@idsia.ch
Organisation: Digital Society Initiative, EPFL, SUPSI & TUM