A utility optimization approach to network cache design
In any caching system, the admission and eviction policies determine which contents are
added and removed from a cache when a miss occurs. Usually, these policies are devised …
Double auction mechanism design for video caching in heterogeneous ultra-dense networks
Recently, wireless streaming of on-demand videos of mobile users (MUs) has become the
major form of data traffic over cellular networks. In response, caching popular videos in …
Performance analysis and optimization on scheduling stochastic cloud service requests: a survey
Performance analysis and optimization is a critical task for the successful development of
cloud computing systems and services. Unfortunately, performance analysis and …
Time-to-live caching with network delays: Exact analysis and computable approximations
We consider Time-to-Live (TTL) caches that tag every object in cache with a specific (and
possibly renewable) expiration time. State-of-the-art models for TTL caches assume zero …
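The tagging mechanism these TTL-cache abstracts describe — every cached object carries its own, possibly renewable, expiration time — can be sketched as follows. This is a minimal illustration only, not the model analyzed in the papers; the `TTLCache` class name, the single shared `ttl_seconds` parameter, and the hit-renewal behavior are assumptions for the sake of the sketch.

```python
import time


class TTLCache:
    """Minimal Time-to-Live cache sketch: each object is tagged with
    its own expiration time, which a hit renews (renewable timer)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiration timestamp)

    def put(self, key, value):
        # tag the object with a fresh expiration time on insertion
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # miss: object was never cached or already evicted
        value, expiry = entry
        if time.monotonic() >= expiry:
            del self.store[key]  # timer expired: evict and count a miss
            return None
        # renewable timer: a hit resets the object's expiration clock
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value
```

Note that the zero-delay assumption the abstracts criticize is implicit here: `put` makes the object available instantly, whereas with network delay an object fetched on a miss only arrives after some retrieval time.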
Optimal local storage policy based on stochastic intensities and its large scale behavior
M Carrasco, A Ferragut, F Paganini - arXiv preprint arXiv:2412.00279, 2024 - arxiv.org
In this paper, we analyze the optimal management of local memory systems, using the tools
of stationary point processes. We provide a rigorous setting of the problem, building upon …
Caching or pre-fetching? the role of hazard rates
A Ferragut, M Carrasco… - 2024 60th Annual Allerton …, 2024 - ieeexplore.ieee.org
Local memory systems play a crucial role in today's networks: keeping popular content close
to users improves performance by reducing the latency of fetching an item from a more …
A survey of performance analysis for stochastic requests in cloud environments
S Wang, X Li, L Chen - Chinese Journal of Computers, 2022 - cjc.ict.ac.cn
Abstract: How to select and allocate appropriate resources in a cloud service center for user requests arriving stochastically, so as to optimize certain performance metrics,
is one of the key problems in cloud computing. Mappings from requests to resources in different cloud computing scenarios give rise to different queueing models …
On the Impact of Network Delays on Time-to-Live Caching
We consider Time-to-Live (TTL) caches that tag every object in cache with a specific (and
possibly renewable) expiration time. State-of-the-art models for TTL caches assume zero …
Timer-based pre-fetching for increasing hazard rates
A Ferragut, M Carrasco, F Paganini - ACM SIGMETRICS Performance …, 2024 - dl.acm.org
Caching plays a crucial role in today's networks: keeping popular content close to users
reduces latency. Timer-based caching policies (TTL) have long been used to deal with …
QoS-Aware Caching Resource Allocation
Recently, wireless streaming of on-demand videos of mobile users (MUs) has become the
major form of data traffic over cellular networks. In response, caching popular videos in …