
DEFUSE: An Interface for Fast and Correct User Space File System Access

Published: 26 September 2022

Abstract

Traditionally, the only option for developers was to implement file systems (FSs) as drivers within the operating system kernel. However, a growing number of FSs, notably distributed FSs for the cloud, are implemented solely in user space to (i) isolate FS logic, (ii) take advantage of user space libraries, and/or (iii) enable rapid FS prototyping. Common interfaces for implementing FSs in user space exist, but they either do not guarantee POSIX compliance in all cases or suffer considerable performance penalties due to the high number of wait context switches between kernel and user space processes.

We propose DEFUSE, an interface for user space FSs that provides fast accesses while ensuring access correctness and requiring no modifications to applications. DEFUSE achieves significant performance improvements over existing user space FS interfaces thanks to a novel design that drastically reduces the number of wait context switches for FS accesses. Additionally, to ensure access correctness, DEFUSE maintains POSIX compliance for FS accesses through three novel concepts: bypassed file descriptor (FD) lookup, FD stashing, and user space paging. Our evaluation spanning a variety of workloads shows that by reducing the number of wait context switches per workload from as many as 16,000 or even 41,000 with Filesystem in Userspace (FUSE) down to 9 on average, DEFUSE improves performance 2× over existing interfaces for typical workloads and by as much as 10× in certain instances.
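The abstract does not detail how bypassed FD lookup works internally. As a rough illustration of the general idea of serving FS accesses without crossing into the kernel, the sketch below (hypothetical names and constants, not DEFUSE's actual API) reserves a high range of file descriptor numbers for the user space FS, so that an interposed I/O call can check the FD once and dispatch entirely in user space when it belongs to that range:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch (not DEFUSE's implementation): FDs at or above
 * this base are assumed to belong to the user space FS; lower FDs fall
 * through to the kernel as usual. */
#define USERFS_FD_BASE (1 << 20)

static bool is_userfs_fd(int fd) {
    return fd >= USERFS_FD_BASE;
}

/* Dispatch stub for an interposed I/O call: returns true if the access
 * would be served entirely in user space (no wait context switch),
 * false if it must be forwarded to the kernel. */
static bool served_in_user_space(int fd) {
    return is_userfs_fd(fd);
}
```

In a real interposition layer, the user space path would consult the FS's own FD table, while the kernel path would invoke the original libc call (e.g., via `dlsym(RTLD_NEXT, ...)`); the point of the sketch is only that a single range check replaces a kernel crossing for user space FS files.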



Published in

ACM Transactions on Storage, Volume 18, Issue 3
August 2022, 244 pages
ISSN: 1553-3077
EISSN: 1553-3093
DOI: 10.1145/3555792
Editor: Sam H. Noh


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 26 September 2022
• Online AM: 30 August 2022
• Accepted: 25 October 2021
• Revised: 21 July 2021
• Received: 11 February 2021

Published in ACM Transactions on Storage (TOS), Volume 18, Issue 3


          Qualifiers

          • research-article
          • Refereed