MGPUSim: Enabling multi-GPU performance modeling and optimization
The rapidly growing popularity and scale of data-parallel workloads demand a
corresponding increase in raw computational power of Graphics Processing Units (GPUs) …
Combining HW/SW mechanisms to improve NUMA performance of multi-GPU systems
Historically, improvement in GPU performance has been tightly coupled with transistor
scaling. As Moore's Law slows down, performance of single GPUs may ultimately plateau …
The locality descriptor: A holistic cross-layer abstraction to express data locality in GPUs
Exploiting data locality in GPUs is critical to making more efficient use of the existing caches
and the NUMA-based memory hierarchy expected in future GPUs. While modern GPU …
Need for speed: Experiences building a trustworthy system-level GPU simulator
The demands of high-performance computing (HPC) and machine learning (ML) workloads
have resulted in the rapid architectural evolution of GPUs over the last decade. The growing …
Griffin: Hardware-software support for efficient page migration in multi-GPU systems
As transistor scaling becomes increasingly difficult to achieve, scaling the core count
on a single GPU chip has also become extremely challenging. As the volume of data to …
Wire-aware architecture and dataflow for CNN accelerators
S Gudaparthi, S Narayanan… - Proceedings of the …, 2019 - dl.acm.org
In spite of several recent advancements, data movement in modern CNN accelerators
remains a significant bottleneck. Architectures like Eyeriss implement large scratchpads …
SAC: Sharing-aware caching in multi-chip GPUs
S Zhang, M Naderan-Tahan, M Jahre… - Proceedings of the 50th …, 2023 - dl.acm.org
Bandwidth non-uniformity in multi-chip GPUs poses a major design challenge for their last-level
cache (LLC) architecture. Whereas a memory-side LLC caches data from the local …
Architecting waferscale processors-a GPU case study
Increasing communication overheads are already threatening computer system scaling. One
approach to dramatically reduce communication overheads is waferscale processing …
Buddy compression: Enabling larger memory for deep learning and HPC workloads on GPUs
GPUs accelerate high-throughput applications, which require orders-of-magnitude higher
memory bandwidth than traditional CPU-only systems. However, the capacity of such high …
Negative perceptions about the applicability of source-to-source compilers in HPC: A literature review
A source-to-source compiler is a type of translator that accepts the source code of a program
written in a programming language as its input and produces an equivalent source code in …