Abstract
Prefetching is a widely used technique in modern data storage systems. We study the most widely used class of prefetching algorithms, known as sequential prefetching. Two problems plague the state-of-the-art sequential prefetching algorithms: (i) cache pollution, which occurs when prefetched data replaces more useful prefetched or demand-paged data, and (ii) prefetch wastage, which happens when prefetched data is evicted from the cache before it can be used.
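To make these two failure modes concrete, the following is a minimal sketch (our own illustration, not code from the paper) of an LRU cache that tags each page as demand-paged or prefetched and counts prefetch wastage, i.e., prefetched pages evicted before they are ever hit:

```python
from collections import OrderedDict

class Cache:
    """Tiny LRU cache that distinguishes demand-paged from prefetched
    pages and counts prefetch wastage. Illustrative only; the names and
    structure are ours, not the paper's."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page -> (kind, used_flag); LRU at front
        self.wasted = 0             # prefetched pages evicted before any hit

    def insert(self, page, prefetched):
        if page in self.pages:
            return
        if len(self.pages) >= self.capacity:
            # Evicting a still-useful page to make room for a prefetch
            # is cache pollution; evicting an unused prefetched page
            # is prefetch wastage, which we count here.
            victim, (kind, used) = self.pages.popitem(last=False)
            if kind == "prefetch" and not used:
                self.wasted += 1
        self.pages[page] = ("prefetch" if prefetched else "demand", False)

    def access(self, page):
        """Return True on a hit; a hit marks a prefetched page as used."""
        if page in self.pages:
            kind, _ = self.pages.pop(page)
            self.pages[page] = (kind, True)  # move to MRU position
            return True
        self.insert(page, prefetched=False)  # demand miss
        return False
```

For example, prefetching three pages into a two-page cache evicts the first prefetched page unused, so `wasted` becomes 1.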
A sequential prefetching algorithm can have a fixed or adaptive degree of prefetch and can be either synchronous (it can prefetch only on a miss) or asynchronous (it can also prefetch on a hit). To capture these distinctions we define four classes of prefetching algorithms: fixed synchronous (FS), fixed asynchronous (FA), adaptive synchronous (AS), and adaptive asynchronous (AA). We find that the relatively unexplored class of AA algorithms is in fact the most promising for sequential prefetching. We provide the first formal analysis of the criteria necessary for optimal throughput when using an AA algorithm in a cache shared by multiple steady sequential streams. We then provide a simple implementation, called AMP (adaptive multistream prefetching), which adapts accordingly, leading to near-optimal performance for any kind of sequential workload and cache size.
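The fixed/adaptive and synchronous/asynchronous distinctions can be sketched for a single sequential stream as follows. The state variables and update rules below are hypothetical illustrations of the adaptive asynchronous class, not AMP's actual policy:

```python
class StreamPrefetcher:
    """Sketch of an adaptive asynchronous prefetcher for one sequential
    stream. The degree of prefetch adapts; a fixed trigger distance
    decides when to prefetch asynchronously on a hit. Illustrative only."""

    def __init__(self, degree=4, trigger=2, max_degree=64):
        self.degree = degree          # pages fetched per prefetch (adaptive)
        self.trigger = trigger        # distance from tail that fires a prefetch
        self.max_degree = max_degree
        self.prefetched_to = 0        # one past the highest page prefetched

    def on_access(self, page):
        """Return the list of pages to prefetch for this access, if any."""
        if page >= self.prefetched_to:
            # Synchronous case: the stream missed. Back off the degree
            # (we prefetched too little or too late) and fetch from the miss.
            self.degree = max(1, self.degree // 2)
            start = page + 1
        elif self.prefetched_to - page <= self.trigger:
            # Asynchronous case: hit on or near the trigger page. Prefetch
            # ahead before the stream runs off the prefetched data, and
            # grow the degree since the stream is consuming it.
            self.degree = min(self.max_degree, self.degree + 1)
            start = self.prefetched_to
        else:
            return []                 # deep inside prefetched data: do nothing
        pages = list(range(start, start + self.degree))
        self.prefetched_to = start + self.degree
        return pages
```

A purely synchronous variant would drop the middle branch, and a fixed variant would never change `self.degree`; the sketch shows why the adaptive asynchronous combination has the most room to tune itself to the stream.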
Our experimental setup consisted of an IBM xSeries 345 dual-processor server running Linux with five SCSI disks. We observe that AMP convincingly outperforms all the contending members of the FA, FS, and AS classes for any number of streams and over all cache sizes. As anecdotal evidence, in an experiment with 100 concurrent sequential streams and varying cache sizes, AMP surpasses the FA, FS, and AS algorithms by 29--172%, 12--24%, and 21--210%, respectively, while outperforming OBL (one-block lookahead) by a factor of 8. Even for complex workloads like SPC1-Read, AMP is consistently the best-performing algorithm. For the SPC2 video-on-demand workload, AMP can sustain at least 25% more streams than the next best algorithm. Furthermore, for a workload consisting of short sequences, where optimality is more elusive, AMP outperforms all the other contenders in overall performance.
Finally, we implemented AMP in a state-of-the-art enterprise storage system, the IBM System Storage DS8000 series. We demonstrated that AMP dramatically improves performance for common sequential and batch processing workloads and delivers up to a twofold increase in sequential read capacity.
Optimal multistream sequential prefetching in a shared cache