
Data and Computation Reuse in CNNs Using Memristor TCAMs

Published: 22 December 2022

Abstract

Exploiting computation and data reuse in CNNs is crucial for the design of resource-constrained platforms. In image-recognition applications, the high levels of input locality and redundancy present in CNNs offer a prime opportunity to skip costly arithmetic operations. One promising technique consists of storing the function responses of selected input patterns in offline lookup tables and replacing online computation with search operations, which are highly efficient when implemented with emerging non-volatile memory technologies. In this work, we rethink both the algorithm and the architecture for exploiting locality and reuse by replacing entire convolutions with searches on content-addressable memories. By precomputing convolution results and building compact lookup tables with our novel clustering algorithm, activations can be evaluated in constant time, requiring only a single read of the current input tensor. We then devise a reconfigurable array of processing elements based on memristive ternary content-addressable memories (TCAMs) that efficiently implements this algorithmic solution and meets the flexibility requirements of several CNN architectures. Results show that our design reduces the number of multiplications and memory accesses in proportion to the number of convolutional-layer channels. Average performance reaches 1,172 FPS for AlexNet and 82 FPS for VGG-16, outperforming state-of-the-art works by 13×.
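The lookup-based reuse idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`build_lut`, `lookup_conv`) are hypothetical, and the real design clusters input patterns and performs the match in hardware with a memristive TCAM; here a linear nearest-pattern scan stands in for that single-cycle parallel search.

```python
# Minimal sketch of lookup-based convolution reuse (illustrative only;
# names are hypothetical, and a real design would search a clustered
# table with a memristive TCAM instead of scanning it in software).

def conv_response(patch, kernel):
    """Standard convolution response: dot product of patch and kernel."""
    return sum(p * k for p, k in zip(patch, kernel))

def build_lut(representative_patches, kernel):
    """Offline step: precompute responses for representative patterns
    (cluster centroids), so inference needs no multiplications."""
    return {tuple(p): conv_response(p, kernel) for p in representative_patches}

def lookup_conv(patch, lut):
    """Online step: one search replaces the arithmetic. A TCAM performs
    this match in a single parallel operation; here we emulate it with
    a nearest-pattern scan under L1 distance."""
    best = min(lut, key=lambda c: sum(abs(a - b) for a, b in zip(c, patch)))
    return lut[best]

kernel = [1, 0, -1, 2]
patches = [[1, 2, 3, 4], [0, 1, 0, 1], [2, 2, 2, 2]]
lut = build_lut(patches, kernel)

# A stored pattern returns its exact convolution result (1 - 3 + 8 = 6);
# a nearby unseen pattern maps to the closest centroid's stored response.
assert lookup_conv([1, 2, 3, 4], lut) == 6
assert lookup_conv([1, 2, 3, 5], lut) == 6
```

In the architecture described above, the search would complete in constant time regardless of table size, which is what lets a single lookup replace an entire convolution across the layer's channels.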


        • Published in

          ACM Transactions on Reconfigurable Technology and Systems  Volume 16, Issue 1
          March 2023
          403 pages
          ISSN:1936-7406
          EISSN:1936-7414
          DOI:10.1145/35733111
          • Editor:
          • Deming Chen

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 22 December 2022
          • Online AM: 20 July 2022
          • Accepted: 19 June 2022
          • Revised: 1 May 2022
          • Received: 7 February 2022
Published in TRETS, Volume 16, Issue 1


          Qualifiers

          • research-article
          • Refereed
• Article Metrics

  • Downloads (Last 12 months): 219
  • Downloads (Last 6 weeks): 23
