research-article
WH2D2N2: Distributed AI-enabled OK-ASN Service for Web of Things

Published: 09 May 2023

Abstract

Data-driven ontology and knowledge-presentation modeling for evolving semantic Asian social networks (OK-ASN) is a critical strategy for Web of Things (WoT) services. Meanwhile, Deep Neural Network (DNN)-based OK-ASN services in the WoT are growing rapidly. However, most DNN-based services cannot fully exploit the potential of the WoT because of the heterogeneity of WoT devices. This article therefore proposes a novel framework, the Web-based Heterogeneous Hierarchical Distributed Deep Neural Network (WH2D2N2), which deploys DNNs for OK-ASN services on the WoT while overcoming that heterogeneity. The system architecture and the designed edge-cloud-joint execution scheme exploit heterogeneous devices to make DNN inference ubiquitous, and they output two types of results to meet differing requirements. To make OK-ASN services robust, a global scheduling mechanism dynamically arranges the workflow. Experimental results demonstrate the efficiency of both the execution scheme and the global scheduling within the system.
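The abstract describes an edge-cloud-joint execution scheme that yields two types of results from heterogeneous devices. The paper's actual model partition, exit policy, and scheduling are not reproduced here, so the following is only a minimal NumPy sketch of the general pattern such schemes follow: an edge stage computes intermediate features and a fast approximate result, and the features are offloaded to a cloud stage only when more accuracy is needed. All names (`EdgeStage`, `CloudStage`, `joint_infer`) and the confidence threshold are illustrative assumptions, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

class EdgeStage:
    """Early layers plus a lightweight early-exit classifier (runs on the WoT device)."""
    def __init__(self, d_in, d_hidden, n_classes):
        self.w1 = rng.standard_normal((d_in, d_hidden)) * 0.1
        self.w_exit = rng.standard_normal((d_hidden, n_classes)) * 0.1

    def forward(self, x):
        h = np.maximum(x @ self.w1, 0.0)   # intermediate features (ReLU)
        p = softmax(h @ self.w_exit)       # fast, approximate class probabilities
        return h, p

class CloudStage:
    """Remaining layers (run on the server when a high-accuracy result is required)."""
    def __init__(self, d_hidden, n_classes):
        self.w2 = rng.standard_normal((d_hidden, n_classes)) * 0.1

    def forward(self, h):
        return softmax(h @ self.w2)

def joint_infer(edge, cloud, x, threshold=0.9):
    """Return (label, source): exit on the edge if confident enough, else offload.

    Only the intermediate feature vector crosses the network, not the raw input.
    """
    h, p_edge = edge.forward(x)
    if p_edge.max() >= threshold:
        return int(p_edge.argmax()), "edge"
    p_cloud = cloud.forward(h)
    return int(p_cloud.argmax()), "cloud"
```

The two return paths correspond to the scheme's two result types: a low-latency edge result when its confidence clears the threshold, and a full cloud result otherwise. Tuning the threshold per device is one natural hook for the kind of global scheduling the paper describes.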



• Published in

  ACM Transactions on Asian and Low-Resource Language Information Processing, Volume 22, Issue 5
  May 2023, 653 pages
  ISSN: 2375-4699
  EISSN: 2375-4702
  DOI: 10.1145/3596451

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 9 May 2023
        • Online AM: 15 December 2022
        • Accepted: 8 September 2022
        • Revised: 14 August 2022
        • Received: 8 April 2022
