Workshop
Exploring Machine Learning and Explainable AI in Medical Data Analysis
Organisers
Malika Bendechache
School of Computer Science, University of Galway
Length
2 hours
Description
The processing of medical data, including images, text, and time series, is an area of research that is rapidly expanding and evolving, with numerous exciting developments and applications emerging in recent years. The utilisation of these various types of data is fundamental in healthcare, playing a critical role in diagnosing, monitoring, and treating a wide array of diseases and conditions. Machine Learning (ML) techniques, such as Deep Learning (DL), have demonstrated their effectiveness in analysing medical data, particularly images such as X-rays, CT scans, and MRI scans, leading to improved diagnosis, treatment, and patient outcomes. The integration of ML and DL in healthcare represents a significant advancement in processing and comprehending complex data, uncovering patterns and insights at a scale and speed that would be unattainable by humans alone. This development creates new opportunities for early disease detection, precision medicine, and predictive health management. However, the effectiveness of these technologies depends on their explainability. Explainable Artificial Intelligence (XAI) plays a crucial role in ensuring transparency in AI decision-making, thereby empowering healthcare professionals to confidently utilise insights derived from AI.
Keynote
Explainable AI for Clinical Decision Support Systems
In clinical settings, it is crucial for doctors to understand the reasoning behind an AI model’s decision-making. This need has led to the development of eXplainable AI (XAI) methods, which aim to improve trust in AI models among medical professionals and facilitate their integration into Clinical Decision Support Systems.
Dr. Ayse KELES's keynote will delve into the pivotal role of interpretable and explainable AI in clinical decision support systems. She will explore how these models can enhance diagnostic accuracy and foster trust among healthcare professionals. To illustrate these concepts, she will discuss her work on developing an interpretable AI model for the diagnosis of brain tumours and the interpretation of the model's predictions, carried out as part of her Marie Curie fellowship.
Her insights will be drawn from her extensive research and practical experience in health informatics, providing a comprehensive overview of the current advancements and future directions in this critical field.
Dr. KELES holds Bachelor's, Master's, and PhD degrees in Computer Science. She has over 18 years of invaluable experience at the Ministry of Health in Türkiye, where she served as a Decision Support System expert. During this period, she managed several national projects and played a pivotal role in the national data warehouse of the National Health System (e-pulse), which is recognised as the world's largest and most comprehensive health informatics infrastructure. Dr. KELES recently received a Marie Curie fellowship for her work on developing explainable AI models for the segmentation and classification of paediatric brain tumours, due to start in September 2024 at the University of Galway.
Topics
This special session will bring together experts from both Machine Learning and medical fields to delve into the latest research findings, challenges, and opportunities in this dynamic field. The session will also include works on XAI methods and their integration into AI systems for medical analysis, fostering trust, accountability, and the promotion of responsible, patient-centric AI applications in healthcare.
Category/Keywords:
Medical Data Processing
Machine Learning (ML) in Healthcare
Deep Learning (DL) in Medical Imaging
Explainable Artificial Intelligence (XAI)
XAI for Healthcare
Integrating ML/DL in Healthcare
Patient-Centric AI Applications
Healthcare Data Analytics
Predictive Health Management
Synthetic Medical Image Generation
Precision Medicine
Early Disease Detection
Medical Image Analysis
Time Series Analysis in Healthcare
AI Transparency
Trust in Medical AI Systems
Responsible AI in Healthcare
AI Ethics in Medicine
Clinical Decision Support Systems
Challenges in ML/DL for Healthcare
Opportunities in Medical AI Research
Submission Instructions
Submission to the workshop follows the same procedures as for main conference papers. This workshop accepts both regular and short paper submissions. Please select the specific Special Session name you are interested in under the "additional questions" part of the submission form.
Please note that submission dates are the same as per the main conference schedule.