FusionDTA: attention-based feature polymerizer and knowledge distillation for drug-target binding affinity prediction

W Yuan, G Chen, CYC Chen - Briefings in Bioinformatics, 2022 - academic.oup.com
knowledge distillation for DTA tasks as an improvement in training strategy. Knowledge
distillation … knowledge from the teacher model with more parameters. Through transferring …
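The teacher-student transfer this snippet describes is the standard response-based distillation recipe. A minimal PyTorch sketch for a regression target such as binding affinity (hypothetical code, not FusionDTA's actual implementation; alpha is an assumed balancing weight):

    import torch.nn.functional as F

    def distillation_loss(student_pred, teacher_pred, target, alpha=0.5):
        # Fit the ground-truth affinity labels ...
        hard = F.mse_loss(student_pred, target)
        # ... while mimicking the predictions of the larger, frozen teacher.
        soft = F.mse_loss(student_pred, teacher_pred.detach())
        return alpha * hard + (1.0 - alpha) * soft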

Improving drug-target affinity prediction via feature fusion and knowledge distillation

R Lu, J Wang, P Li, Y Li, S Tan, Y Pan… - Briefings in …, 2023 - academic.oup.com
… introduced a novel knowledge-distillation-insights drug-target affinity prediction model with
… and explainable predictions. We benchmarked the model on public affinity prediction and …

Channel-wise knowledge distillation for dense prediction

C Shu, Y Liu, J Gao, Z Yan… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
… on knowledge distillation focus on classification tasks [10, 11, 15, 26, 35, 37, 43]. Our work
here aims to study efficient and effective distillation methods for dense prediction, … affinity map. …
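The channel-wise scheme normalizes each channel's activation map into a distribution over spatial positions and has the student match the teacher channel by channel. A sketch of that idea (not the authors' code; tau is an assumed softening temperature):

    import torch.nn.functional as F

    def channel_wise_kd(student_feat, teacher_feat, tau=4.0):
        # Treat each channel's H*W activation map as a probability distribution.
        n, c, h, w = student_feat.shape
        s = F.log_softmax(student_feat.view(n, c, -1) / tau, dim=-1)
        t = F.softmax(teacher_feat.view(n, c, -1) / tau, dim=-1)
        # KL between teacher and student per channel, averaged over N*C.
        return F.kl_div(s, t, reduction="sum") * tau ** 2 / (n * c)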

Cross-dimension affinity distillation for 3d em neuron segmentation

X Liu, M Cai, Y Chen, Y Zhang, T Shi… - 2024 IEEE/CVF …, 2024 - computer.org
knowledge from a 3D teacher network to the 2D student network by minimizing the affinity
prediction … By incorporating online knowledge distillation, our method further enhances the …
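In affinity-based neuron segmentation, the network predicts for each voxel whether it belongs to the same neuron as its neighbors along z, y and x. A hypothetical sketch of the 3D-to-2D transfer (assumed shapes and channel ordering, not the paper's implementation): the student matches the teacher's in-plane affinity channels slice by slice.

    import torch.nn.functional as F

    def cross_dim_affinity_kd(student_aff2d, teacher_aff3d):
        # Teacher: (N, 3, D, H, W) affinities for the (z, y, x) neighbors.
        # Student: (N*D, 2, H, W) in-plane (y, x) affinities per slice.
        n, _, d, h, w = teacher_aff3d.shape
        t = teacher_aff3d[:, 1:]                      # drop the z channel
        t = t.permute(0, 2, 1, 3, 4).reshape(n * d, 2, h, w)
        return F.mse_loss(student_aff2d, t.detach())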

A self-distillation embedded supervised affinity attention model for few-shot segmentation

Q Zhao, B Liu, S Lyu, H Chen - IEEE Transactions on Cognitive …, 2023 - ieeexplore.ieee.org
affinity attention map to give a prior prediction of the query target. Based on the two modules
mentioned above, we propose the Self-Distillation embedded Affinity … the knowledge distillation …

Channel Affinity Knowledge Distillation for Semantic Segmentation

H Li, Y Zhang, S Tian, P Cheng… - 2023 IEEE 25th …, 2023 - ieeexplore.ieee.org
predicted maps in the spatial domain, but channel distillation may also help to improve
segmentation performance. Additionally, pairwise pixel affinity … a novel Channel Affinity KD (CAKD…
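Reading "channel affinity" as channel-to-channel feature similarity, a distillation term in this family can match per-sample CxC affinity matrices between teacher and student. This is a guess at the general idea, not the CAKD paper's formulation, and it assumes both networks expose the same channel count C:

    import torch.nn.functional as F

    def channel_affinity(feat):
        # (N, C, H, W) -> (N, C, C) cosine similarity between channels.
        n, c = feat.shape[:2]
        f = F.normalize(feat.view(n, c, -1), dim=-1)
        return f @ f.transpose(1, 2)

    def channel_affinity_kd(student_feat, teacher_feat):
        return F.mse_loss(channel_affinity(student_feat),
                          channel_affinity(teacher_feat).detach())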

A knowledge distillation-guided equivariant graph neural network for improving protein interaction site prediction performance

S Chen, Z Tang, L You, CYC Chen - Knowledge-Based Systems, 2024 - Elsevier
… ) and compared their prediction performances without the self-distillation module. …
distillation module by analyzing the prediction performance of Layer_1 with KD (with the self-distillation …
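The "Layer_1 with KD" ablation points to a self-distillation setup in which the network's own final output supervises auxiliary predictions from earlier layers. A generic sketch of such a term (hypothetical; tau is an assumed temperature):

    import torch.nn.functional as F

    def self_distillation_loss(shallow_logits, final_logits, tau=2.0):
        # The final head acts as the teacher; stop its gradient.
        s = F.log_softmax(shallow_logits / tau, dim=-1)
        t = F.softmax(final_logits.detach() / tau, dim=-1)
        return F.kl_div(s, t, reduction="batchmean") * tau ** 2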

Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism

C Guo, S Zhong, X Liu, Q Feng, Y Ma - arXiv preprint arXiv:2405.00739, 2024 - arxiv.org
Affinity value corresponds to a smaller offset between the data distributions. In this paper, Affinity
… of data augmentation on the student’s prediction confidence and model calibration in KD …

Multi-task learning with knowledge distillation for dense prediction

Y Xu, Y Yang, L Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
… task-specific guidance to enable effective knowledge transfer. In Figure 1, we show the …
prediction when using KL divergence (see Figure 1). We propose a novel knowledge distillation …

Inter-region affinity distillation for road marking segmentation

Y Hou, Z Ma, C Liu, TW Hui… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
… We study the problem of distilling knowledge from a large deep … In this work, we explore
a novel knowledge distillation (KD) ap… Apart from model predictions, we also show the deep …
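Inter-region affinity can be read as pairwise similarity between region-pooled features (e.g. one vector per road-marking region), with the student trained to match the teacher's region-to-region similarity matrix. A sketch of that reading (assumed shapes; not the authors' exact formulation):

    import torch.nn.functional as F

    def region_affinity(feat, masks, eps=1e-6):
        # feat: (C, H, W) features; masks: (R, H, W) binary float region masks.
        area = masks.flatten(1).sum(-1, keepdim=True) + eps     # (R, 1)
        pooled = masks.flatten(1) @ feat.flatten(1).t() / area  # (R, C)
        p = F.normalize(pooled, dim=-1)
        return p @ p.t()                                        # (R, R)

    def inter_region_kd(student_feat, teacher_feat, masks):
        return F.mse_loss(region_affinity(student_feat, masks),
                          region_affinity(teacher_feat, masks).detach())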