research-article

A New Flexible Multi-flow LRU Cache Management Paradigm for Minimizing Misses

Published: 19 June 2019
Abstract

Least Recently Used (LRU) caching and its variants are used in large-scale data systems to provide high-speed data access for a wide class of applications. Nonetheless, a fundamental question remains open: how should the cache space be organized to serve multiple data flows so that miss probabilities are minimized? Commonly used strategies fall into two designs: pooled LRU (PLRU) caching and separated LRU (SLRU) caching. Neither design solves this problem satisfactorily. PLRU caching is easy to implement and self-adaptive, but often fails to achieve optimal, or even efficient, performance because its set of feasible configurations is limited. SLRU caching can be statically configured to achieve optimal performance for stationary workloads, but it can suffer in a dynamically changing environment and from the cold-start problem. To address these limitations, we propose a new insertion-based pooled LRU paradigm, termed I-PLRU, in which data flows can be inserted at different positions of a pooled cache. This new design achieves the optimal performance of static SLRU while retaining the adaptability of PLRU by virtue of resource sharing. Theoretically, we characterize the asymptotic miss probabilities of I-PLRU and prove that, for any given SLRU design, there always exists an I-PLRU configuration that achieves the same asymptotic miss probability, and vice versa. We then design a policy to minimize the miss probabilities. However, the miss probability minimization problem turns out to be non-convex under the I-PLRU paradigm. Notably, we exploit an equivalence mapping between I-PLRU and SLRU to efficiently find the optimal I-PLRU configuration. We prove that I-PLRU outperforms PLRU and achieves the same miss probability as the optimal SLRU for stationary workloads. From an engineering standpoint, the flexibility of I-PLRU avoids partitioning the memory space, supports dynamic and refined configurations, and alleviates the cold-start problem, potentially yielding better performance than both SLRU and PLRU.
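To make the paradigm concrete, the following is a minimal Python sketch of an insertion-based pooled LRU cache: all flows share one LRU list, hits are promoted to the head as usual, but each flow inserts its missed items at a flow-specific depth. The class name, the flow-indexed `insert_depth` map, and the list-based implementation are illustrative assumptions for exposition, not the paper's implementation or analysis.

```python
class IPLRUCache:
    """Sketch of an insertion-based pooled LRU (I-PLRU) cache.

    All flows share a single LRU list of capacity `capacity`. On a hit,
    the item moves to the head, as in ordinary LRU. On a miss, the new
    item enters at the requesting flow's configured depth rather than
    always at the head, and the tail item is evicted if the cache is full.
    """

    def __init__(self, capacity, insert_depth):
        self.capacity = capacity
        # insert_depth[flow] = position from the head at which that
        # flow's missed items enter the shared list (0 = head, as in PLRU).
        self.insert_depth = insert_depth
        self.items = []  # index 0 = most recent (head); last = eviction end

    def request(self, flow, key):
        """Serve one request; return True on a hit, False on a miss."""
        if key in self.items:
            # Hit: promote to the head.
            self.items.remove(key)
            self.items.insert(0, key)
            return True
        # Miss: insert at the flow's depth (clamped to the current length).
        depth = min(self.insert_depth[flow], len(self.items))
        self.items.insert(depth, key)
        if len(self.items) > self.capacity:
            self.items.pop()  # evict from the tail
        return False
```

Setting every flow's depth to 0 recovers ordinary PLRU; the design question studied in the paper is how to choose these insertion positions so that the pooled cache matches the miss probability of the optimal SLRU partition.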


Published in

Proceedings of the ACM on Measurement and Analysis of Computing Systems, Volume 3, Issue 2 (June 2019), 683 pages
EISSN: 2476-1249
DOI: 10.1145/3341617

Copyright © 2019 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
