DOI: 10.1145/3422575.3422799
MEMSYS Conference Proceedings · Research Article · Public Access

Dynamically Configuring LRU Replacement Policy in Redis

Published: 21 March 2021

ABSTRACT

To reduce the latency of accessing backend servers, today's web services usually deploy in-memory key-value stores in the front end to cache frequently accessed objects. Memcached and Redis are the two most popular key-value cache systems. Because memory is limited, an in-memory key-value store is configured with a fixed amount of memory, i.e., the cache size, and cache replacement is unavoidable when the footprint of accessed objects exceeds that size. Memcached implements the least recently used (LRU) policy. Redis adopts an approximated LRU policy to avoid maintaining LRU list structures: on a replacement, Redis samples K keys (K is a pre-configured parameter), adds them to an eviction pool, and then evicts the least recently used key in the pool. We name this policy approx-K-LRU. We find that approx-K-LRU behaves close to LRU when K is large, but different values of K can yield different miss ratios. On the other hand, the sampling and replacement decision itself incurs an overhead that grows with K. This paper proposes DLRU (Dynamic LRU), which exploits this configurable parameter and sets K dynamically. DLRU uses a low-overhead miniature cache simulator to predict the miss ratios of different K values and a cost model to estimate the resulting performance trade-offs. Our experimental results show that DLRU improves Redis throughput over the recommended default, approx-5-LRU, by up to 32.5% on a set of storage traces.
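To make the sampling idea concrete, the sketch below is a minimal Python model of an approx-K-LRU cache, followed by a miniature-simulator-style comparison that replays one synthetic trace under different K values. All names here (ApproxKLRUCache, last_access, the trace generator, capacities) are illustrative assumptions, not the Redis implementation or the paper's DLRU code; in particular, real Redis keeps a persistent eviction pool across evictions and tracks per-key idle times rather than a global logical clock.

```python
import random

class ApproxKLRUCache:
    """Toy model of Redis-style approximated LRU (approx-K-LRU).

    On each eviction, K resident keys are sampled at random and the
    least recently used key among the sample is evicted. Real Redis
    also reuses a persistent eviction pool across evictions; this
    sketch omits that detail.
    """

    def __init__(self, capacity, k=5):
        self.capacity = capacity
        self.k = k                 # number of keys sampled per eviction
        self.data = {}             # resident keys
        self.last_access = {}      # key -> logical access time
        self.clock = 0
        self.misses = 0

    def access(self, key):
        self.clock += 1
        if key not in self.data:
            self.misses += 1
            if len(self.data) >= self.capacity:
                self._evict()
            self.data[key] = None
        self.last_access[key] = self.clock

    def _evict(self):
        # Sample K resident keys and evict the oldest one in the sample.
        sample = random.sample(list(self.data), min(self.k, len(self.data)))
        victim = min(sample, key=lambda s: self.last_access[s])
        del self.data[victim]
        del self.last_access[victim]

# Miniature-simulator-style comparison: replay the same synthetic,
# skewed trace through caches configured with different K values and
# compare the resulting miss ratios.
if __name__ == "__main__":
    random.seed(42)
    trace = [int(random.paretovariate(1.2)) for _ in range(50_000)]
    for k in (1, 5, 10, 20):
        random.seed(7)  # same sampling randomness for each run
        cache = ApproxKLRUCache(capacity=200, k=k)
        for key in trace:
            cache.access(key)
        print(f"K={k:2d}  miss ratio = {cache.misses / len(trace):.3f}")
```

Larger K makes each eviction decision closer to true LRU (typically lowering the miss ratio) but samples more keys per eviction, which is the miss-ratio-versus-overhead trade-off that DLRU navigates when choosing K.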

