A comprehensive overview of large language models
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in
natural language processing tasks and beyond. This success of LLMs has led to a large …
Understanding of machine learning with deep learning: architectures, workflow, applications and future directions
MM Taye - Computers, 2023 - mdpi.com
In recent years, deep learning (DL) has been the most popular computational approach in
the field of machine learning (ML), achieving exceptional results on a variety of complex …
A survey of large language models
Language is essentially a complex, intricate system of human expressions governed by
grammatical rules. It poses a significant challenge to develop capable AI algorithms for …
EVA: Exploring the limits of masked visual representation learning at scale
We launch EVA, a vision-centric foundation model to explore the limits of visual
representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained …
Cellpose 2.0: how to train your own model
M Pachitariu, C Stringer - Nature methods, 2022 - nature.com
Pretrained neural network models for biological segmentation can provide good out-of-the-
box results for many image types. However, such models do not allow users to adapt the …
Scaling up GANs for text-to-image synthesis
The recent success of text-to-image synthesis has taken the world by storm and captured the
general public's imagination. From a technical standpoint, it also marked a drastic change in …
BiFormer: Vision transformer with bi-level routing attention
As the core building block of vision transformers, attention is a powerful tool to capture long-
range dependency. However, such power comes at a cost: it incurs a huge computation …
Reproducible scaling laws for contrastive language-image learning
M Cherti, R Beaumont, R Wightman… - Proceedings of the …, 2023 - openaccess.thecvf.com
Scaling up neural networks has led to remarkable performance across a wide range of
tasks. Moreover, performance often follows reliable scaling laws as a function of training set …
Efficient memory management for large language model serving with pagedattention
High throughput serving of large language models (LLMs) requires batching sufficiently
many requests at a time. However, existing systems struggle because the key-value cache …
4D Gaussian splatting for real-time dynamic scene rendering
Representing and rendering dynamic scenes has been an important but challenging task.
In particular, high efficiency is usually hard to guarantee when accurately modeling complex motions …