Scalable deep learning on distributed infrastructures: Challenges, techniques, and tools
R Mayer, HA Jacobsen - ACM Computing Surveys (CSUR), 2020 - dl.acm.org
Deep Learning (DL) has had immense success in the recent past, leading to state-of-the-
art results in various domains, such as image recognition and natural language processing …
Data management in machine learning: Challenges, techniques, and systems
Large-scale data analytics using statistical machine learning (ML), popularly called
advanced analytics, underpins many modern data-driven applications. The data …
Efficient memory management for large language model serving with pagedattention
High throughput serving of large language models (LLMs) requires batching sufficiently
many requests at a time. However, existing systems struggle because the key-value cache …
Orca: A distributed serving system for Transformer-Based generative models
Large-scale Transformer-based models trained for generation tasks (e.g., GPT-3) have
recently attracted huge interest, emphasizing the need for system support for serving models …
AlpaServe: Statistical multiplexing with model parallelism for deep learning serving
Model parallelism is conventionally viewed as a method to scale a single large deep
learning model beyond the memory limits of a single device. In this paper, we demonstrate …
Pond: CXL-based memory pooling systems for cloud platforms
Public cloud providers seek to meet stringent performance requirements and low hardware
cost. A key driver of performance and cost is main memory. Memory pooling promises to …
InfiniGen: Efficient generative inference of large language models with dynamic KV cache management
Transformer-based large language models (LLMs) demonstrate impressive performance
across various natural language processing tasks. Serving LLM inference for generating …
Software engineering for AI-based systems: a survey
AI-based systems are software systems with functionalities enabled by at least one AI
component (e.g., for image recognition, speech recognition, and autonomous driving). AI-based systems …
Ray: A distributed framework for emerging AI applications
The next generation of AI applications will continuously interact with the environment and
learn from these interactions. These applications impose new and demanding systems …
INFaaS: Automated model-less inference serving
Despite existing work in machine learning inference serving, ease-of-use and cost efficiency
remain challenges at large scales. Developers must manually search through thousands of …