Oasis: Controlling Data Migration in Expansion of Object-based Storage Systems

Published: 19 January 2023

Abstract

Object-based storage systems have been widely used for various scenarios such as file storage, block storage, blob (e.g., large videos) storage, and so on, where the data is placed among a large number of object storage devices (OSDs). Data placement is critical for the scalability of decentralized object-based storage systems. The state-of-the-art CRUSH placement method is a decentralized algorithm that deterministically places object replicas onto storage devices without relying on a central directory. While enjoying the benefits of decentralization such as high scalability, robustness, and performance, CRUSH-based storage systems suffer from uncontrolled data migration when expanding the capacity of the storage clusters (i.e., adding new OSDs), which is determined by the nature of CRUSH and will cause significant performance degradation when the expansion is nontrivial.
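The uncontrolled migration described above can be illustrated with a toy placement function. The sketch below is not CRUSH itself (CRUSH's pseudo-random mapping moves only a roughly proportional share of objects per added device, far less than plain modulo hashing), but it shows the underlying issue: when placement is a deterministic function of the object id and the current cluster size, growing the cluster silently remaps objects. All names here are hypothetical.

```python
import hashlib

def osd_for(obj_id: str, num_osds: int) -> int:
    # Toy stand-in for a decentralized placement function:
    # hash the object id and reduce it modulo the OSD count.
    h = int(hashlib.sha1(obj_id.encode()).hexdigest(), 16)
    return h % num_osds

objects = [f"obj-{i}" for i in range(10_000)]

# Placement before and after expanding the cluster from 10 to 12 OSDs.
before = {o: osd_for(o, 10) for o in objects}
after = {o: osd_for(o, 12) for o in objects}

moved = sum(1 for o in objects if before[o] != after[o])
print(f"{moved / len(objects):.0%} of objects would migrate")
```

With modulo placement most objects move on expansion; CRUSH improves on this but, as the abstract notes, still triggers migration whose volume the operator cannot control.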

This article presents MapX, a novel extension to CRUSH that uses an extra time-dimension mapping (from object creation times to cluster expansion times) for controlling data migration after cluster expansions. Each expansion is viewed as a new layer of the CRUSH map, represented by a virtual node beneath the CRUSH root. MapX controls the mapping from objects onto layers by manipulating the timestamps of the intermediate placement groups (PGs). MapX is applicable to a large variety of object-based storage scenarios where object timestamps can be maintained as higher-level metadata. We have applied MapX to the state-of-the-art Ceph-RBD (RADOS Block Device) to implement a migration-controllable, decentralized object-based block store (called Oasis). Oasis extends the RBD metadata structure to maintain and retrieve approximate object creation times (for migration control) at the granularity of expansion layers. Experimental results show that the MapX-based Oasis block store outperforms the CRUSH-based Ceph-RBD (which is busy migrating objects after expansions) by 3.17× to 4.31× in tail latency, and by 76.3% (respectively, 83.8%) in IOPS for reads (respectively, writes).
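The time-dimension mapping described in the abstract can be sketched as follows. This is an illustrative reconstruction, not MapX's actual implementation: the function names, the plain-hash intra-layer placement (standing in for CRUSH), and the data structures are all hypothetical. The key idea it captures is that an object maps to the newest expansion layer that existed at the object's creation time, so existing objects never migrate onto newly added OSDs.

```python
import bisect
import hashlib

def layer_for_object(creation_ts: int, expansion_times: list[int]) -> int:
    """Map an object's creation time to an expansion layer.

    expansion_times is a sorted list of timestamps at which the cluster
    was expanded; layer 0 is the initial cluster. The object belongs to
    the newest layer whose expansion predates the object's creation.
    """
    return bisect.bisect_right(expansion_times, creation_ts)

def place(obj_id: str, creation_ts: int,
          expansion_times: list[int], layer_osds: dict[int, list[str]]) -> str:
    """Pick an OSD within the object's layer (hash stands in for CRUSH)."""
    layer = layer_for_object(creation_ts, expansion_times)
    osds = layer_osds[layer]
    h = int(hashlib.sha1(obj_id.encode()).hexdigest(), 16)
    return osds[h % len(osds)]

# Two expansions, at t=100 and t=200; each layer owns its own OSDs.
layer_osds = {0: ["osd.0", "osd.1"], 1: ["osd.2"], 2: ["osd.3"]}
print(place("vol1/obj7", 150, [100, 200], layer_osds))  # created in layer 1
```

Because placement within each layer depends only on that layer's OSDs, adding a new layer leaves the mapping of all previously created objects unchanged, which is the migration control the abstract describes (the real system additionally supports deliberate remapping by manipulating PG timestamps).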


• Published in

  ACM Transactions on Storage, Volume 19, Issue 1 (February 2023), 259 pages
  ISSN: 1553-3077
  EISSN: 1553-3093
  DOI: 10.1145/3578369


          Publisher

          Association for Computing Machinery, New York, NY, United States

          Publication History

          • Received: 2 March 2021
          • Revised: 1 January 2022
          • Accepted: 24 May 2022
          • Online AM: 19 November 2022
          • Published: 19 January 2023


          Qualifiers

          • research-article
          • Refereed