Choose a transformer: Fourier or Galerkin
S Cao - Advances in Neural Information Processing Systems, 2021 - proceedings.neurips.cc
In this paper, we apply the self-attention from the state-of-the-art Transformer in "Attention Is
All You Need" for the first time to a data-driven operator learning problem related to partial …
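The paper's central move is dropping the softmax from attention, in two variants: a Fourier-type that layer-normalizes queries and keys, and a Galerkin-type that layer-normalizes keys and values and groups $K^\top V$ first, giving linear rather than quadratic cost in sequence length. A minimal NumPy sketch of that reading; the function names and exact normalization placement are our assumptions, not taken verbatim from the paper:

    import numpy as np

    def layer_norm(x, eps=1e-5):
        # Per-position feature normalization (no learned scale/shift here).
        return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

    def fourier_attention(Q, K, V):
        # Softmax-free variant: normalize Q and K, then (Q K^T) V / n.
        n = Q.shape[0]
        return (layer_norm(Q) @ layer_norm(K).T) @ V / n

    def galerkin_attention(Q, K, V):
        # Softmax-free variant: normalize K and V, then Q (K^T V) / n.
        # Grouping K^T V first costs O(n d^2) instead of O(n^2 d).
        n = Q.shape[0]
        return Q @ (layer_norm(K).T @ layer_norm(V)) / n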
Deep embedded complementary and interactive information for multi-view classification
J Xu, W Li, X Liu, D Zhang, J Liu, J Han - Proceedings of the AAAI Conference on Artificial Intelligence, 2020 - aaai.org
Multi-view classification optimally integrates various features from different views to improve
classification tasks. Though most of the existing works demonstrate promising performance …
Robust federated learning with attack-adaptive aggregation
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor
attacks, even if some existing defense strategies are used. To address this challenge, we …
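The abstract suggests aggregation weights that adapt to the incoming updates rather than being fixed. A generic sketch of score-weighted aggregation, assuming some learned model has already produced one plausibility score per client; the paper's actual attention-based scoring model is its contribution and is not reproduced here:

    import numpy as np

    def aggregate(updates, scores):
        # Softmax-normalize per-client scores so that suspected-malicious
        # (low-scoring) updates contribute little to the global average.
        w = np.exp(scores - np.max(scores))
        w = w / w.sum()
        return np.average(np.stack(updates), axis=0, weights=w)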
Transformers are deep infinite-dimensional non-mercer binary kernel machines
MA Wright, JE Gonzalez - arXiv preprint arXiv:2106.01506, 2021 - arxiv.org
Despite their ubiquity in core AI fields like natural language processing, the mechanics of
deep attention-based neural networks like the Transformer model are not fully understood …
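One way to see the kernel reading the title refers to: a softmax attention weight is a normalized evaluation of the exponential kernel $\exp(q \cdot k_j / \sqrt{d})$, and because queries and keys come from different learned projections the kernel is generally asymmetric, hence non-Mercer. A single-query sketch in our notation:

    import numpy as np

    def attention_as_kernel_smoother(q, K, V):
        # Standard softmax attention for one query, written as a kernel
        # smoother: normalized evaluations of exp(q.k / sqrt(d)) over keys.
        d = q.shape[-1]
        k_eval = np.exp(K @ q / np.sqrt(d))
        w = k_eval / k_eval.sum()
        return w @ V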
Multi-View Deep Gaussian Processes for Supervised Learning
W Dong, S Sun - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 - ieeexplore.ieee.org
Multi-view learning is a widely studied topic in machine learning that considers learning
with multiple views of samples to improve prediction performance. Even though some …
Latent relation shared learning for endometrial cancer diagnosis with incomplete multi-modality medical images
J Li, L Liao, M Jia, Z Chen, X Liu - iScience, 2024 - cell.com
Magnetic resonance imaging (MRI), ultrasound (US), and contrast-enhanced ultrasound
(CEUS) can provide different image data about the uterus, which have been used in the …
Robust graph embedding with noisy link weights
A Okuno, H Shimodaira - The 22nd International Conference on Artificial Intelligence and Statistics, 2019 - proceedings.mlr.press
We propose $\beta$-graph embedding for robustly learning feature vectors from
data vectors and noisy link weights. A newly introduced empirical moment $\beta$-score …
Graph embedding with shifted inner product similarity and its improved approximation capability
We propose shifted inner-product similarity (SIPS), which is a novel yet very simple
extension of the ordinary inner-product similarity (IPS) for neural-network based graph …
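Concretely, SIPS augments the inner product of two node embeddings with learned per-node shift terms; the shifts are what buy the improved approximation capability in the title. A minimal sketch of that reading, with names ours:

    import numpy as np

    def ips(fx, fy):
        # Ordinary inner-product similarity between two embedding vectors.
        return fx @ fy

    def sips(fx, fy, ux, uy):
        # Shifted IPS: the inner product plus learned scalar shifts,
        # one per node, trained jointly with the embeddings.
        return fx @ fy + ux + uy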
Representation learning with weighted inner product for universal approximation of general similarities
We propose $\textit{weighted inner product similarity}$ (WIPS) for neural network-based
graph embedding. In addition to the parameters of neural networks, we optimize the weights …
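WIPS replaces the unit weights of the plain inner product with learned coefficients whose signs are unconstrained, so the model can also fit indefinite similarities that IPS cannot. A sketch under that reading (names ours):

    import numpy as np

    def wips(fx, fy, w):
        # Weighted inner-product similarity: coordinate-wise products of
        # the embeddings scaled by learned weights w (possibly negative).
        return np.sum(w * fx * fy)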
Hyperlink regression via Bregman divergence
A Okuno, H Shimodaira - Neural Networks, 2020 - Elsevier
A collection of $U (\in \mathbb{N})$ data vectors is called a $U$-tuple, and the association strength among
the vectors of a tuple is termed the hyperlink weight, which is assumed to be symmetric with …
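In generic notation (ours, not the paper's exact estimator), hyperlink regression fits a symmetric model $\mu$ of the embedded vectors to the observed weight of each $U$-tuple by minimizing a Bregman divergence $d_\phi$:

$$\min_{f,\mu} \sum_{(i_1,\dots,i_U)} d_\phi\bigl(w_{i_1 \dots i_U},\ \mu(f(x_{i_1}),\dots,f(x_{i_U}))\bigr), \qquad d_\phi(a,b) = \phi(a) - \phi(b) - \phi'(b)(a-b).$$

Choosing $\phi(a) = a^2$ recovers ordinary least squares as a special case.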