A large-scale analysis of hundreds of in-memory key-value cache clusters at Twitter
Modern web services use in-memory caching extensively to increase throughput and reduce
latency. There have been several workload analyses of production systems that have fueled …
FIFO queues are all you need for cache eviction
As a cache eviction algorithm, FIFO has a lot of attractive properties, such as simplicity,
speed, scalability, and flash-friendliness. The most prominent criticism of FIFO is its low …
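To illustrate why FIFO is attractive as an eviction policy (a minimal sketch, not the paper's implementation): a FIFO cache needs only insertion-ordered storage, and a hit performs no bookkeeping at all, which is what makes it simple, fast, and scalable.

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal FIFO cache sketch: evict in insertion order, no promotion on hit."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # preserves insertion order

    def get(self, key):
        # Unlike LRU, a hit does not reorder the entry, so reads need no
        # queue manipulation (and, in a concurrent setting, no write lock).
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict the oldest-inserted entry
        self.data[key] = value
```

Note that in this sketch an access does not protect an entry from eviction; that is exactly the miss-ratio weakness the snippet's criticism refers to.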
GL-Cache: Group-level learning for efficient and high-performance caching
Web applications rely heavily on software caches to achieve low-latency, high-throughput
services. To adapt to changing workloads, three types of learned caches (learned evictions) …
Segcache: a memory-efficient and scalable in-memory key-value cache for small objects
Modern web applications heavily rely on in-memory key-value caches to deliver low-latency,
high-throughput services. In-memory caches store small objects of size in the range of 10s to …
FIFO can be Better than LRU: the Power of Lazy Promotion and Quick Demotion
LRU has been the basis of cache eviction algorithms for decades, with a plethora of
innovations on improving LRU's miss ratio and throughput. While it is well-known that FIFO …
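A minimal sketch of the lazy-promotion idea (assumed here to take the form of one-bit FIFO reinsertion, similar to CLOCK): a hit only sets a flag, and all promotion work is deferred to eviction time. The paper's quick-demotion component (a small probationary FIFO) is omitted for brevity.

```python
from collections import deque

class FifoReinsertion:
    """Sketch of lazy promotion: FIFO eviction with one-bit reinsertion.
    Hits never touch the queue; promotion happens only at eviction time."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # keys in (re)insertion order
        self.table = {}        # key -> [value, visited_bit]

    def get(self, key):
        entry = self.table.get(key)
        if entry is None:
            return None
        entry[1] = True        # lazy promotion: just mark, no queue movement
        return entry[0]

    def put(self, key, value):
        if key in self.table:
            self.table[key][0] = value
            return
        while len(self.table) >= self.capacity:
            victim = self.queue.popleft()
            if self.table[victim][1]:          # accessed since insertion:
                self.table[victim][1] = False  # give it one more round
                self.queue.append(victim)
            else:
                del self.table[victim]         # never re-accessed: evict
        self.queue.append(key)
        self.table[key] = [value, False]
```

This keeps FIFO's cheap read path while recovering much of LRU's ability to retain re-accessed objects.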
OSCA: An Online-Model Based Cache Allocation Scheme in Cloud Block Storage Systems
Y Zhang, P Huang, K Zhou, H Wang, J Hu, Y Ji… - 2020 USENIX Annual …, 2020 - usenix.org
We propose an Online-Model based Scheme for Cache Allocation (OSCA) for cache servers
shared among cloud block storage devices. OSCA can find a near-optimal configuration scheme at …
FrozenHot cache: Rethinking cache management for modern hardware
Caching is crucial for accelerating data access and is employed ubiquitously in many parts of
modern computer systems. With increasing core count, and shrinking …
Netco: Cache and I/O management for analytics over disaggregated stores
We consider a common setting where storage is disaggregated from the compute in data-
parallel systems. Colocating caching tiers with the compute machines can reduce load on …
Challenges and opportunities of DNN model execution caching
We explore the opportunities and challenges of model execution caching, a nascent
research area that promises to improve the performance of cloud-based deep inference …
Massive Files Prefetching Model Based on LSTM Neural Network with Cache Transaction Strategy
D Zhu, Y Sun, X Li, R Qu, H Hu, S Dong… - … Materials & Continua, 2020 - cdn.techscience.cn
In distributed storage systems, file access efficiency has an important impact on the real-time
nature of information forensics. As a popular approach to improve file accessing efficiency …