Research Article

Single-shot Semantic Matching Network for Moment Localization in Videos

Published: 22 July 2021

Abstract

Moment localization in videos using natural language refers to finding the segment of a video that is most relevant to a natural language query. Most existing methods require video segment candidates to be matched against the query, which incurs extra computational cost, and they may fail to locate relevant moments of arbitrary length. To address these issues, we present a lightweight single-shot semantic matching network (SSMN) that avoids the complex computation of matching the query against segment candidates and can, in theory, locate moments of any length. In the proposed SSMN, video features are first uniformly sampled to a fixed number, while query sentence features are generated and enhanced by GloVe embeddings, long short-term memory (LSTM), and soft-attention modules. The video and sentence features are then fed to an enhanced cross-modal attention model to mine the semantic relationships between vision and language. Finally, a score predictor and a location predictor are designed to locate the start and end indexes of the queried moment. We evaluate the proposed method on two benchmark datasets, and the experimental results demonstrate that SSMN outperforms state-of-the-art methods in both precision and efficiency.
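
The pipeline described in the abstract lends itself to a compact single-pass implementation. Below is a minimal PyTorch sketch of that pipeline: uniform sampling of video features to a fixed length, a GloVe+LSTM+soft-attention query encoder, a cross-modal attention step, and score/location predictors. All dimensions, the use of nn.MultiheadAttention as a stand-in for the paper's enhanced cross-modal attention model, and the predictor heads are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SSMNSketch(nn.Module):
    """Single-shot matching of a sentence query against a fixed number of video positions (illustrative sketch)."""
    def __init__(self, video_dim=4096, word_dim=300, hidden=512, num_samples=128):
        super().__init__()
        self.num_samples = num_samples                    # fixed number of uniformly sampled video features
        self.video_proj = nn.Linear(video_dim, hidden)
        self.query_lstm = nn.LSTM(word_dim, hidden, batch_first=True)
        self.word_attn = nn.Linear(hidden, 1)             # soft attention over query words
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.score_head = nn.Linear(hidden, 1)            # per-position relevance score
        self.loc_head = nn.Linear(hidden, 2)              # start/end boundary logits

    def forward(self, video_feats, word_embs):
        # video_feats: (B, T, video_dim) clip features; word_embs: (B, L, word_dim) GloVe vectors
        idx = torch.linspace(0, video_feats.size(1) - 1, self.num_samples).long()
        v = self.video_proj(video_feats[:, idx])          # uniform sampling to a fixed length

        h, _ = self.query_lstm(word_embs)                 # contextual word features
        a = F.softmax(self.word_attn(h), dim=1)           # soft-attention weights over words
        q = (a * h).sum(dim=1, keepdim=True)              # attended sentence feature, (B, 1, hidden)

        # Cross-modal attention: every video position attends over the query words,
        # and the attended sentence feature is broadcast along the time axis.
        attn_out, _ = self.cross_attn(v, h, h)
        fused = v + attn_out + q

        scores = self.score_head(fused).squeeze(-1)       # (B, num_samples) relevance scores
        start_logits, end_logits = self.loc_head(fused).unbind(-1)
        return scores, start_logits, end_logits

# Usage: predict start/end indexes on the sampled timeline for one video-query pair.
model = SSMNSketch()
scores, s, e = model(torch.randn(1, 300, 4096), torch.randn(1, 12, 300))
print(s.argmax(dim=-1).item(), e.argmax(dim=-1).item())

Because the boundaries are predicted directly over the sampled timeline in a single pass, no segment candidates need to be enumerated and matched against the query.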

