Towards discriminability and diversity: Batch nuclear-norm maximization under label insufficient situations
The learning of deep networks largely relies on data with human-annotated labels. In
some label-insufficient situations, performance degrades on the decision boundary with …
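The title names a concrete objective: maximizing the nuclear norm of the batch prediction matrix to encourage both prediction discriminability and category diversity. Below is a minimal, hedged PyTorch sketch of such a loss term; `model`, `x_unlabeled`, and the weighting factor are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def batch_nuclear_norm_loss(logits: torch.Tensor) -> torch.Tensor:
    """Negative nuclear norm of the batch prediction matrix (B x C).

    Maximizing the nuclear norm of the softmax outputs encourages
    confident (discriminable) and diverse predictions within the batch.
    """
    probs = F.softmax(logits, dim=1)                        # B x C prediction matrix
    nuclear_norm = torch.linalg.matrix_norm(probs, ord="nuc")
    return -nuclear_norm / probs.shape[0]                   # minimize the negative norm, scaled by batch size

# Illustrative usage alongside a supervised loss on labeled data:
# loss = F.cross_entropy(model(x_labeled), y) + 0.1 * batch_nuclear_norm_loss(model(x_unlabeled))
```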
Cross-domain adaptive clustering for semi-supervised domain adaptation
In semi-supervised domain adaptation, a few labeled samples per class in the target domain
guide features of the remaining target samples to aggregate around them. However, the …
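As a rough illustration of the aggregation idea in the snippet above (a few labeled target samples attracting the remaining target features), the following sketch pulls unlabeled target features toward class prototypes built from the labeled target samples. It approximates the general idea only, not the paper's cross-domain adaptive clustering algorithm; all names are placeholders.

```python
import torch
import torch.nn.functional as F

def prototype_attraction_loss(feat_lab, y_lab, feat_unlab, num_classes, temperature=0.1):
    # Class prototypes from the few labeled target samples (assumes >= 1 labeled sample per class).
    protos = torch.stack([feat_lab[y_lab == c].mean(dim=0) for c in range(num_classes)])
    protos = F.normalize(protos, dim=1)
    feats = F.normalize(feat_unlab, dim=1)
    sim = feats @ protos.t() / temperature      # cosine similarity of each feature to each prototype
    pseudo = sim.argmax(dim=1)                  # nearest prototype acts as a pseudo-assignment
    return F.cross_entropy(sim, pseudo)         # pull each unlabeled feature toward its nearest prototype
```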
ML-LMCL: Mutual learning and large-margin contrastive learning for improving ASR robustness in spoken language understanding
Spoken language understanding (SLU) is a fundamental task in task-oriented dialogue
systems. However, the inevitable errors from automatic speech recognition (ASR) usually …
MCF: Mutual correction framework for semi-supervised medical image segmentation
Semi-supervised learning is a promising method for medical image segmentation under
limited annotation. However, the model cognitive bias impairs the segmentation …
AlignSAM: Aligning Segment Anything Model to open context via reinforcement learning
Powered by massive curated training data, the Segment Anything Model (SAM) has
demonstrated impressive generalization capabilities in open-world scenarios with the …
Towards cross-modality medical image segmentation with online mutual knowledge distillation
The success of deep convolutional neural networks is partially attributed to the massive
amount of annotated training data. However, in practice, medical data annotations are …
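Online mutual knowledge distillation between two networks is commonly implemented as a symmetric KL term between their softened predictions. The sketch below assumes that generic formulation; the `logits_a`/`logits_b` pairing, the temperature, and detaching the peer's output are illustrative choices rather than the paper's exact cross-modality recipe.

```python
import torch.nn.functional as F

def mutual_distillation_loss(logits_a, logits_b, T=2.0):
    """Symmetric KL between the two networks' temperature-softened predictions."""
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    log_p_b = F.log_softmax(logits_b / T, dim=1)
    # Each network learns from the other's (detached) prediction.
    kl_a = F.kl_div(log_p_a, log_p_b.exp().detach(), reduction="batchmean")
    kl_b = F.kl_div(log_p_b, log_p_a.exp().detach(), reduction="batchmean")
    return (T * T) * (kl_a + kl_b)
```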
Adaptive betweenness clustering for semi-supervised domain adaptation
Compared to unsupervised domain adaptation, semi-supervised domain adaptation (SSDA)
aims to significantly improve the classification performance and generalization capability of …
Inter-domain mixup for semi-supervised domain adaptation
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain
distributions, with a small number of target labels available, achieving better classification …
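Inter-domain mixup, in its generic form, interpolates source and target samples together with their (pseudo-)labels to create bridge examples between the two distributions. A minimal sketch under that assumption follows; the Beta parameter and variable names are illustrative, not the paper's configuration.

```python
import numpy as np
import torch.nn.functional as F

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt, num_classes, alpha=0.75):
    lam = float(np.random.beta(alpha, alpha))
    x_mix = lam * x_src + (1.0 - lam) * x_tgt                 # interpolate source and target inputs
    y_mix = lam * F.one_hot(y_src, num_classes).float() \
          + (1.0 - lam) * F.one_hot(y_tgt, num_classes).float()  # soft mixed labels (y_tgt may be pseudo-labels)
    return x_mix, y_mix
```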
Self-ensembling co-training framework for semi-supervised COVID-19 CT segmentation
The coronavirus disease 2019 (COVID-19) has become a severe worldwide health
emergency and is spreading at a rapid rate. Segmentation of COVID lesions from computed …
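"Self-ensembling" typically refers to a mean-teacher style component in which a teacher model is maintained as an exponential moving average (EMA) of the student and supplies consistency targets on unlabeled scans. The sketch below shows only that generic EMA update, not the full co-training framework for COVID-19 CT segmentation.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Keep teacher weights as an exponential moving average of the student's."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)
```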
Semi-supervised learning with pseudo-negative labels for image classification
Semi-supervised learning frameworks usually adopt mutual learning approaches with
multiple submodels that learn from different perspectives. Typically, a high threshold is used to …
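A hedged reading of the pseudo-negative idea: beyond keeping only high-confidence positive pseudo-labels, classes whose predicted probability is very low can be treated as negative labels for an unlabeled sample, and the model is trained to suppress them. The sketch below implements that generic notion; the threshold value is hypothetical.

```python
import torch

def pseudo_negative_loss(logits, neg_threshold=0.05, eps=1e-7):
    probs = torch.softmax(logits, dim=1)
    neg_mask = (probs < neg_threshold).float()          # classes the sample very likely does not belong to
    # Push the probability of each pseudo-negative class toward zero.
    loss = -(neg_mask * torch.log(1.0 - probs + eps)).sum(dim=1)
    return loss.mean()
```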