ABSTRACT
Web applications, databases, and many datacenter services rely on in-memory key-value stores to cache frequently accessed data. In this work, we focus on a commonly used system, memcached, where even small performance improvements can yield large end-to-end speedups in request latency. memcached organizes its memory into slabs that belong to different classes corresponding to object sizes. Many prior works have explored how many slabs each class should be assigned in the face of dynamic workloads, typically moving hundreds of slabs during a reassignment. However, we find that as workloads scale and applications use increasing amounts of memory, the current reassignment mechanism in memcached is inefficient: we measure that a single reassignment can take millions of requests to complete.
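The slab organization described above can be sketched in a few lines. The 96-byte base chunk, 1.25 growth factor, and 1 MB page size below follow memcached's defaults, but the code itself is an illustrative simplification, not memcached's implementation:

```python
# Sketch of how memcached-style slab classes map object sizes to chunks.
# Simplifying assumptions: 1 MB slab pages, a 96-byte minimum chunk,
# and the default 1.25 growth factor between consecutive classes.

SLAB_PAGE_SIZE = 1024 * 1024   # each slab page is 1 MB
MIN_CHUNK = 96                 # smallest chunk size, in bytes
GROWTH_FACTOR = 1.25           # ratio between consecutive class sizes

def build_classes():
    """Return the list of chunk sizes, one per slab class."""
    sizes, size = [], MIN_CHUNK
    while size <= SLAB_PAGE_SIZE // 2:
        sizes.append(size)
        size = int(size * GROWTH_FACTOR)
        size = (size + 7) & ~7     # round up to an 8-byte boundary
    return sizes

def class_for(obj_size, sizes):
    """Pick the smallest class whose chunk fits the object."""
    for i, chunk in enumerate(sizes):
        if obj_size <= chunk:
            return i
    raise ValueError("object too large for any slab class")

sizes = build_classes()
idx = class_for(500, sizes)
# The object occupies one chunk of sizes[idx] bytes; the gap between
# object size and chunk size is internal fragmentation.
```

Because every page in a class is carved into fixed-size chunks, giving a class more memory means giving it whole slab pages, which is exactly what the reassignment mechanism does.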
Motivated by these findings, we introduce a faster slab reassignment mechanism in memcached that requires minimal changes to the existing source code. In our experiments, the time needed to reassign a slab drops by over 99%, allowing workloads to reach their steady-state miss ratio 53% to 75% faster. By arriving at the steady-state miss ratio sooner, we reduce the overall average miss ratio by 3.42% to 11.5%.
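To see why a reassignment is costly, consider a toy model of moving one slab page between classes: every live item on the donor page must be evicted before the page can serve the recipient class. The `SlabClass` and `reassign_page` names are hypothetical, and stock memcached drains the page incrementally rather than all at once, which is why a single reassignment can span many requests:

```python
# Toy model (not memcached's code) of moving one slab page from a donor
# class to a recipient class. The donor page is drained of its items and
# then handed over; evicted items become cache misses that must be refilled.

class SlabClass:
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.pages = []            # each page is a list of live items

def reassign_page(donor, recipient):
    """Evict every item on one donor page, then give the page to recipient."""
    page = donor.pages.pop()
    evicted = len(page)
    page.clear()                   # drop the live items
    recipient.pages.append(page)   # page now holds recipient-sized chunks
    return evicted

small = SlabClass(96)
large = SlabClass(1184)
small.pages.append(["a", "b", "c"])   # one page holding three live items
n = reassign_page(small, large)       # all three items are evicted
```

In this model the eviction cost is proportional to the number of live items on the page; a mechanism that completes the drain faster lets the recipient class start absorbing its working set sooner, which is the effect the steady-state miss-ratio numbers above quantify.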
Faster slab reassignment in memcached