Deep multimodal graph-based network for survival prediction from highly multiplexed images and patient variables
Abstract
The spatial architecture of the tumour microenvironment and the phenotypic heterogeneity of tumour cells have been shown to be associated with cancer prognosis and clinical outcomes, including survival. Recent advances in highly multiplexed imaging, including imaging mass cytometry (IMC), capture spatially resolved, high-dimensional maps that quantify dozens of disease-relevant biomarkers at single-cell resolution and contain the potential to inform patient-specific prognosis. Existing automated methods for predicting survival, on the other hand, typically do not leverage spatial phenotype information captured at the single-cell level. Furthermore, there is no end-to-end method designed to leverage the rich information in whole IMC images and all marker channels, and to aggregate this information with clinical data in a complementary manner to predict survival with enhanced accuracy. To that end, we present a deep multimodal graph-based network (DMGN) with two modules: (1) a multimodal graph-based module that adaptively considers relationships between spatial phenotype information in all image regions and all clinical variables, and (2) a clinical embedding module that automatically generates embeddings specialised for each clinical variable to enhance multimodal aggregation. We demonstrate that our modules are consistently effective at improving survival prediction performance on two public breast cancer datasets, and that our new approach can outperform state-of-the-art methods in survival prediction.
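The two modules described above can be illustrated with a minimal numpy sketch. This is a hypothetical simplification, not the paper's implementation: the dimensions, clinical variables, and random weights are placeholders, per-variable projections stand in for the learned clinical embedding module, and a single softmax-attention aggregation over a fully connected graph of image-region and clinical nodes stands in for the multimodal graph-based module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and clinical variables (not from the paper).
n_regions, d = 8, 16                      # image-region nodes, embedding size
clinical = {"age": 57.0, "grade": 2.0, "er_status": 1.0}

# (1) Clinical embedding module: one projection per clinical variable,
# mapping each scalar value into the shared d-dimensional space.
clin_weights = {k: rng.normal(0.0, 0.1, size=d) for k in clinical}
clin_nodes = np.stack([clinical[k] * clin_weights[k] for k in clinical])

# Stand-ins for per-region IMC feature embeddings (all marker channels
# would normally be encoded by a learned image backbone).
region_nodes = rng.normal(size=(n_regions, d))

# (2) Multimodal graph-based module: fully connected graph over all
# region + clinical nodes, aggregated with softmax attention so each
# node adaptively weights every other node.
nodes = np.vstack([region_nodes, clin_nodes])        # (n_nodes, d)
scores = nodes @ nodes.T / np.sqrt(d)                # pairwise affinities
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)              # rows sum to 1
aggregated = attn @ nodes                            # one message-passing step

# Pool to a patient-level vector and map to a scalar risk score
# (a Cox-style survival head would be trained on such a score).
patient_vec = aggregated.mean(axis=0)
w_out = rng.normal(0.0, 0.1, size=d)
risk = float(patient_vec @ w_out)
print(nodes.shape, np.isfinite(risk))
```

In a trained model the projections and attention would be learned end-to-end against a survival objective; the sketch only shows how image-region and clinical nodes can share one graph and be aggregated adaptively.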
Elsevier