Author
Andrea Mastropietro
Publication date
2024/1/31
Publisher
Università degli Studi di Roma "La Sapienza"
Abstract
Deep learning has been extensively applied in bioinformatics and chemoinformatics, yielding compelling results. However, neural networks have predominantly been regarded as black boxes: the highly nonlinear functions they learn hinder the interpretability of their internal mechanisms. In the biomedical field, this lack of interpretability is undesirable, as it is imperative for scientists to understand why specific diseases occur or which molecular properties make a compound effective against a particular target protein. Consequently, the opacity of these models undermines trust in their results. To address this issue and make deep learning suitable for bioinformatics and chemoinformatics tasks, there is a pressing need to develop techniques for explainable artificial intelligence (XAI). Such techniques should be capable of measuring the significance of input features for a prediction or of quantifying the strength of their interactions. The ability to provide explanations must be integrated into the biomedical deep learning pipeline, which exploits available data sources to uncover new insights into potentially disease-associated genes, thereby facilitating drug repurposing and the development of new drugs. In line with this objective, this thesis focuses on the development of novel explainability techniques for neural networks and demonstrates their effective application in bioinformatics and medicinal chemistry. The devised models find their place in the pipeline, wherein each component of the protocol generates effective and explainable results. These results span from the …
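To give a concrete sense of what "measuring the significance of input features for a prediction" means, the following is a minimal, illustrative sketch of gradient x input attribution on a tiny logistic model. The weights, bias, and input vector are hypothetical, and this is not the thesis's own method; real XAI techniques target deep networks, but the principle of scoring each feature by its contribution to the output is the same.

```python
import math

def sigmoid(z):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def attribute(weights, bias, x):
    """Gradient x input attribution for y = sigmoid(w . x + b).

    Returns one score per input feature: (dy/dx_i) * x_i. A positive
    score means the feature pushed the prediction up, a negative score
    means it pushed it down, and zero-valued features get zero credit.
    """
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    y = sigmoid(z)
    dy_dz = y * (1.0 - y)  # derivative of the sigmoid at z
    # Chain rule: dy/dx_i = dy/dz * dz/dx_i = dy_dz * w_i
    return [dy_dz * w * xi for w, xi in zip(weights, x)]

# Hypothetical "trained" parameters and a hypothetical input.
weights = [1.5, -2.0, 0.3]
bias = 0.1
x = [0.8, 0.4, 0.0]

scores = attribute(weights, bias, x)
print(scores)
```

Here the first feature receives a positive attribution, the second a negative one, and the third (being zero in the input) receives none, which is exactly the kind of per-feature explanation a biomedical practitioner can inspect.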