Fedcon: A contrastive framework for federated semi-supervised learning

Z Long, J Wang, Y Wang, H Xiao, F Ma - arXiv preprint arXiv:2109.04533, 2021 - arxiv.org
Federated Semi-Supervised Learning (FedSSL) has attracted growing attention from both academic and industrial researchers, due to its unique characteristic of co-training machine learning models with isolated yet unlabeled data. Most existing FedSSL methods focus on the classical scenario, i.e., the labeled and unlabeled data are stored at the client side. However, in real-world applications, client users may not provide labels without any incentive. Thus, the scenario with labels at the server side is more practical. Since unlabeled data and labeled data are decoupled, most existing FedSSL approaches may fail to deal with such a scenario. To overcome this problem, in this paper, we propose FedCon, which introduces a new learning paradigm, i.e., contrastive learning, to FedSSL. Experimental results on three datasets show that FedCon achieves the best performance with the contrastive framework compared with state-of-the-art baselines under both IID and Non-IID settings. Besides, ablation studies demonstrate the characteristics of the proposed FedCon framework.
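The abstract does not specify the exact loss used by FedCon; as a rough illustration of the contrastive-learning paradigm it builds on, the sketch below implements a standard NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss over two augmented views of the same client batch. The function name, shapes, and temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss (not FedCon's exact objective).

    z1, z2: embeddings of two augmented views, shape [batch, dim];
    z1[i] and z2[i] form a positive pair, every other pair is a negative.
    """
    batch = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                 # [2B, dim]
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # L2-normalize rows
    sim = (z @ z.T) / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                       # exclude self-similarity
    # positive for row i is row i+B (and vice versa)
    pos = np.concatenate([np.arange(batch, 2 * batch), np.arange(batch)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * batch), pos].mean()
```

In a FedSSL setting of the kind the abstract describes, a loss like this would let clients learn representations from their unlabeled data alone, while the server-side labeled data trains the classifier head.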