Multi-modal brain tumor segmentation via missing modality synthesis and modality-level attention fusion

Z Huang, L Lin, P Cheng, L Peng, X Tang - arXiv preprint arXiv:2203.04586, 2022 - arxiv.org
Multi-modal magnetic resonance (MR) imaging provides great potential for diagnosing and analyzing brain gliomas. In clinical scenarios, common MR sequences such as T1, T2, and FLAIR can be obtained simultaneously in a single scanning process. However, acquiring contrast-enhanced modalities such as T1ce requires additional time, cost, and injection of a contrast agent. As such, it is clinically meaningful to develop a method to synthesize unavailable modalities, which can also serve as additional inputs to downstream tasks (e.g., brain tumor segmentation) to enhance performance. In this work, we propose an end-to-end framework named Modality-Level Attention Fusion Network (MAF-Net), wherein we innovatively conduct patchwise contrastive learning to extract multi-modal latent features and dynamically assign attention weights to fuse the different modalities. Through extensive experiments on BraTS2020, our proposed MAF-Net is found to yield superior T1ce synthesis performance (SSIM of 0.8879 and PSNR of 22.78) and accurate brain tumor segmentation (mean Dice scores of 67.9%, 41.8%, and 88.0% for the tumor core, enhancing tumor, and whole tumor, respectively).
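
To make the modality-level attention fusion idea concrete, below is a minimal PyTorch sketch of one plausible realization: each modality's feature map is pooled into a descriptor, scored by a small shared head, softmax-normalized across modalities, and fused as a weighted sum. The class and parameter names (e.g., ModalityAttentionFusion) are illustrative assumptions, not the paper's actual implementation, which is defined in the arXiv manuscript.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAttentionFusion(nn.Module):
    """Fuse per-modality feature maps with learned modality-level weights (sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        # Shared scoring head: pooled per-modality descriptor -> scalar score
        self.score = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
        )

    def forward(self, feats):
        # feats: list of per-modality feature maps, each of shape (B, C, H, W)
        descriptors = [f.mean(dim=(2, 3)) for f in feats]            # (B, C) each
        scores = torch.cat([self.score(d) for d in descriptors], 1)  # (B, M)
        weights = F.softmax(scores, dim=1)                           # attention over modalities
        fused = sum(w.view(-1, 1, 1, 1) * f
                    for w, f in zip(weights.unbind(dim=1), feats))   # (B, C, H, W)
        return fused, weights
```

Because the weights are computed per sample, the fusion can down-weight a synthesized modality (e.g., a generated T1ce) when its features are less reliable than those of the acquired sequences.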
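The patchwise contrastive learning component can likewise be sketched. The snippet below follows the general PatchNCE recipe of Park et al.'s CUT, where features at the same spatial location in two maps form the positive pair and features at other sampled locations serve as negatives; it is an assumed formulation for illustration, not necessarily MAF-Net's exact loss, and num_patches and temperature are hypothetical hyperparameters.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q: torch.Tensor,
                   feat_k: torch.Tensor,
                   num_patches: int = 256,
                   temperature: float = 0.07) -> torch.Tensor:
    """Patchwise InfoNCE between two (B, C, H, W) feature maps (sketch)."""
    B, C, H, W = feat_q.shape
    q = feat_q.flatten(2).permute(0, 2, 1)  # (B, H*W, C)
    k = feat_k.flatten(2).permute(0, 2, 1)
    # Sample the same random spatial locations from both maps
    idx = torch.randperm(H * W, device=feat_q.device)[:num_patches]
    q = F.normalize(q[:, idx], dim=-1)      # (B, P, C)
    k = F.normalize(k[:, idx], dim=-1)
    # Each query patch scored against all key patches; the diagonal is the positive
    logits = torch.bmm(q, k.transpose(1, 2)) / temperature          # (B, P, P)
    labels = torch.arange(num_patches, device=feat_q.device).expand(B, -1)
    return F.cross_entropy(logits.reshape(-1, num_patches), labels.reshape(-1))
```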