ABSTRACT
To reduce the latency of accessing backend servers, today's web services usually place in-memory key-value stores in the front end to cache frequently accessed objects. Memcached and Redis are two of the most popular key-value cache systems. Because memory is limited, an in-memory key-value store must be configured with a fixed amount of memory, i.e., the cache size, and cache replacement is unavoidable when the footprint of the accessed objects exceeds the cache size. Memcached implements the least recently used (LRU) policy. Redis adopts an approximate LRU policy to avoid maintaining LRU list structures: on a replacement, Redis samples a pre-configured number of keys, K, adds them to an eviction pool, and then evicts the LRU key in the pool. We name this policy approx-K-LRU. We find that approx-K-LRU behaves close to LRU when K is large, yet different values of K can yield different miss ratios. At the same time, the sampling and replacement decision itself incurs an overhead that grows with K. This paper proposes DLRU (Dynamic LRU), which exploits this configurable parameter and sets K dynamically. DLRU uses a low-overhead miniature cache simulator to predict the miss ratios of different values of K and a cost model to estimate the resulting performance trade-offs. Our experimental results show that DLRU improves Redis throughput over the recommended default, approx-5-LRU, by up to 32.5% on a set of storage traces.
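
To make the sampling-based eviction concrete, below is a minimal Python sketch of an approx-K-LRU cache as described in the abstract. It illustrates the policy only and is not the Redis implementation: the class name ApproxKLRUCache, the per-key access timestamps, and the pool_size parameter are our own assumptions.

```python
import random
import time


class ApproxKLRUCache:
    """Minimal sketch of sampling-based approximate LRU (approx-K-LRU).

    On each eviction, sample K keys, merge them into a small eviction
    pool, and evict the candidate with the oldest access time. All
    names and the pool size are illustrative, not Redis internals.
    """

    def __init__(self, capacity, k=5, pool_size=16):
        self.capacity = capacity    # maximum number of cached objects
        self.k = k                  # number of keys sampled per eviction
        self.pool_size = pool_size  # eviction-pool size (illustrative)
        self.store = {}             # key -> value
        self.last_access = {}       # key -> last access timestamp
        self.pool = []              # eviction candidates kept across evictions

    def get(self, key):
        if key in self.store:
            self.last_access[key] = time.monotonic()
            return self.store[key]
        return None                 # cache miss

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = value
        self.last_access[key] = time.monotonic()

    def _evict(self):
        # Sample K keys at random and merge them into the eviction pool,
        # dropping any candidates that have already been evicted.
        sample = random.sample(list(self.store), min(self.k, len(self.store)))
        self.pool = list({*self.pool, *sample} & self.store.keys())
        # Rank candidates by recency (oldest first) and keep the best ones.
        self.pool.sort(key=lambda key: self.last_access[key])
        del self.pool[self.pool_size:]
        # Evict the least recently used candidate in the pool.
        victim = self.pool.pop(0)
        del self.store[victim]
        del self.last_access[victim]
```

With a larger K, the sampled victim is more likely to be the true LRU key, at the cost of more sampling work per eviction; this miss-ratio versus overhead trade-off is what DLRU's miniature simulator and cost model balance when choosing K at run time.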