A complete survey on generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 all you need?
As ChatGPT goes viral, generative AI (AIGC, aka AI-generated content) has made headlines
everywhere because of its ability to analyze and create text, images, and beyond. With such …
Application of machine learning, deep learning and optimization algorithms in geoengineering and geoscience: Comprehensive review and future challenges
W Zhang, X Gu, L Tang, Y Yin, D Liu, Y Zhang - Gondwana Research, 2022 - Elsevier
Abstract The so-called Fourth Paradigm has witnessed a boom during the past two decades,
with large volumes of observational data becoming available to scientists and engineers …
GraphMAE: Self-supervised masked graph autoencoders
Self-supervised learning (SSL) has been extensively explored in recent years. Particularly,
generative SSL has seen emerging success in natural language processing and other …
Masked autoencoders are scalable vision learners
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners
for computer vision. Our MAE approach is simple: we mask random patches of the input …
Context autoencoder for self-supervised representation learning
We present a novel masked image modeling (MIM) approach, context autoencoder (CAE),
for self-supervised representation pretraining. We pretrain an encoder by making predictions …
Perceiver IO: A general architecture for structured inputs & outputs
A central goal of machine learning is the development of systems that can solve many
problems in as many data domains as possible. Current architectures, however, cannot be …
Self-supervised speech representation learning: A review
Although supervised deep learning has revolutionized speech and audio processing, it has
necessitated the building of specialist models for individual tasks and application scenarios …
A survey on vision transformer
Transformer, first applied to the field of natural language processing, is a type of deep neural
network mainly based on the self-attention mechanism. Thanks to its strong representation …
Radiomics in oncology: a practical guide
Radiomics refers to the extraction of mineable data from medical imaging and has been
applied within oncology to improve diagnosis, prognostication, and clinical decision support …
Intrinsic dimensionality explains the effectiveness of language model fine-tuning
Although pretrained language models can be fine-tuned to produce state-of-the-art results
for a very wide range of language understanding tasks, the dynamics of this process are not …