research-article

Efficient matchings and mobile augmented reality

Published: 16 October 2012

Abstract

With the fast-growing popularity of smartphones in recent years, augmented reality (AR) on mobile devices has been gaining attention and is in greater demand than ever before. However, the limited processing power of mobile devices is a poor fit for AR applications that require real-time speed. The challenge arises because fast features are usually not robust enough for matching, while robust features such as SIFT or SURF are not computationally efficient. There has always been a tradeoff between robustness and efficiency, and it seems that one must be sacrificed for the other. While this is true for most existing features, researchers have been working on designing new features that are both robust and efficient. In this article, we do not present a completely new feature. Instead, we propose an efficient matching method for robust features. An adaptive scoring scheme and a more distinctive descriptor are also proposed for further performance improvements. In addition, we have developed an outdoor augmented reality system based on the proposed methods. The system demonstrates that not only can it achieve robust matching efficiently, it is also capable of handling large occlusions such as pedestrians and moving vehicles, which is another challenge for many AR applications.
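The article's own matching method is not reproduced on this page. As a point of reference for the robustness/efficiency tradeoff the abstract describes, the standard baseline against which such methods are measured is brute-force nearest-neighbor descriptor matching with Lowe's ratio test (the usual way SIFT/SURF descriptors are matched). The sketch below is illustrative only; the function name and toy descriptors are ours, not the authors'.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force nearest-neighbor matching with Lowe's ratio test.

    desc_a, desc_b: 2D arrays, one descriptor vector per row.
    Returns a list of (i, j) pairs: row i of desc_a matched row j of desc_b.
    Note the O(len(desc_a) * len(desc_b)) cost, which is exactly what makes
    exhaustive matching of robust descriptors expensive on mobile hardware.
    """
    desc_b = np.asarray(desc_b, dtype=float)
    matches = []
    for i, d in enumerate(np.asarray(desc_a, dtype=float)):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up;
        # ambiguous matches are discarded as likely mismatches.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: each query descriptor has one near-duplicate in desc_b.
desc_a = [[1.0, 0.0], [0.0, 1.0]]
desc_b = [[0.0, 1.0], [5.0, 5.0], [1.0, 0.05]]
print(match_descriptors(desc_a, desc_b))  # [(0, 2), (1, 0)]
```

Speed-oriented pipelines typically replace the inner linear scan with an approximate nearest-neighbor index, trading a little recall for a large constant-factor speedup.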

References

  1. Agarwal, S., Snavely, N., Simon, I., Seitz, S., and Szeliski, R. 2009. Building Rome in a day. In Proceedings of the International Conference on Computer Vision (ICCV).
  2. Azad, P., Asfour, T., and Dillmann, R. 2009. Combining Harris interest points and the SIFT descriptor for fast scale-invariant object recognition. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  3. Bay, H., Ess, A., Tuytelaars, T., and Gool, L. V. 2008. Speeded-up robust features (SURF). Comput. Vis. Image Understand. 110, 3, 346--359.
  4. Boykov, Y. and Huttenlocher, D. 2000. Adaptive Bayesian recognition in tracking rigid objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  5. Cham, T. and Rehg, J. 1999. A multiple hypothesis approach to figure tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  6. Chen, Y., Rui, Y., and Huang, T. 2001. JPDAF-based HMM for real-time contour tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  7. Henze, N., Schinke, T., and Boll, S. 2009. What is that? Object recognition from natural features on a mobile phone. In Proceedings of the Workshop on Mobile Interaction with the Real World.
  8. Isard, M. and Blake, A. 1998. Condensation: Conditional density propagation for visual tracking. Int. J. Comput. Vision 29, 1.
  9. Kolsch, M. and Turk, M. 2005. Hand tracking with flocks of features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  10. Li, B. and Chellappa, R. 2000. Simultaneous tracking and verification via sequential posterior estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  11. Lipton, A., Fujiyoshi, H., and Patil, R. 1998. Moving target classification and tracking from real-time video. In Proceedings of the IEEE Workshop on Applications of Computer Vision.
  12. Lowe, D. 1999. Object recognition from local scale-invariant features. In Proceedings of the 7th International Conference on Computer Vision (ICCV).
  13. Rosales, R. and Sclaroff, S. 1999. 3D trajectory recovery for tracking multiple objects and trajectory guided recognition of actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  14. Se, S., Lowe, D., and Little, J. 2001. Vision-based mobile robot localization and mapping using scale-invariant features. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). 2051--2058.
  15. Takacs, G., Chandrasekhar, V., Gelfand, N., Xiong, Y., Chen, W.-C., Bismpigiannis, T., Grzeszczuk, R., Pulli, K., and Girod, B. 2008. Outdoors augmented reality on mobile phone using loxel-based visual feature organization. In Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval (MIR).
  16. Tamimi, H., Andreasson, H., Treptow, A., Duckett, T., and Zell, A. 2005. Localization of mobile robots with omnidirectional vision using particle filter and iterative SIFT. In Proceedings of the European Conference on Mobile Robots (ECMR).
  17. Wagner, D., Reitmayr, G., Mulloni, A., Drummond, T., and Schmalstieg, D. 2008. Pose tracking from natural features on mobile phones. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR).
  18. Wang, L. and Neumann, U. 2009. A robust approach for automatic registration of aerial images with untextured aerial LIDAR data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  19. Wolf, J., Burgard, W., and Burkhardt, H. 2002. Robust vision-based localization for mobile robots using an image retrieval system based on invariant features. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
  20. Zhou, Q. and Neumann, U. 2009. A streaming framework for seamless building reconstruction from large-scale aerial LIDAR data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).


Published in

  ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 8, Issue 3s
  Special section of best papers of ACM Multimedia 2011, and special section on 3D mobile multimedia
  September 2012
  173 pages
  ISSN: 1551-6857
  EISSN: 1551-6865
  DOI: 10.1145/2348816

  Copyright © 2012 ACM

  Publisher

  Association for Computing Machinery, New York, NY, United States

  Publication History

  • Published: 16 October 2012
  • Accepted: 1 May 2012
  • Revised: 1 April 2012
  • Received: 1 January 2012

          Qualifiers

          • research-article
          • Research
          • Refereed
