
An FPGA-based accelerator for LambdaRank in Web search engines

Published: 22 August 2011

Abstract

In modern Web search engines, Neural Network (NN)-based learning-to-rank algorithms are intensively used to improve the quality of search results. LambdaRank is one such algorithm. However, it is hard to accelerate efficiently with computer clusters or GPUs, because: (i) the cost function for the ranking problem is much more complex than that of traditional Back-Propagation (BP) NNs, and (ii) no coarse-grained parallelism exists in the algorithm. This article presents an FPGA-based accelerator that provides high computing performance with low power consumption. A compact deep pipeline is proposed to handle the complex computation in the batch update, and its area scales linearly with the number of hidden nodes in the network. We also carefully design a data format that enables streaming consumption of the training data from the host computer. The accelerator achieves up to 15.3X (with PCIe x4) and 23.9X (with PCIe x8) speedup over a pure software implementation on datasets from a commercial search engine.
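The complexity the abstract refers to comes from LambdaRank's pairwise cost: unlike standard BP, the gradient ("lambda") for each document depends on every other document retrieved for the same query, scaled by the NDCG change a swap would cause. The article's pipeline itself is not reproduced here; the following is only a minimal software sketch of that pairwise lambda-gradient computation, with illustrative names (`lambda_gradients`, `sigma`) that are not from the article.

```python
import math

def dcg(labels):
    # Discounted cumulative gain of a relevance-label sequence.
    return sum((2 ** rel - 1) / math.log2(pos + 2)
               for pos, rel in enumerate(labels))

def lambda_gradients(scores, labels, sigma=1.0):
    """Sketch of LambdaRank's per-query lambda gradients.

    For each document pair (i, j) where i is more relevant than j,
    a RankNet-style pairwise gradient is scaled by |delta NDCG|,
    the NDCG change if the two documents swapped rank positions.
    """
    n = len(scores)
    ideal = dcg(sorted(labels, reverse=True))  # NDCG normalizer
    # Current ranking induced by the model scores.
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    rank = {doc: pos for pos, doc in enumerate(order)}
    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue  # only pairs where i should rank above j
            # |NDCG change| if documents i and j swapped positions.
            gain = abs((2 ** labels[i] - 2 ** labels[j])
                       * (1 / math.log2(rank[i] + 2)
                          - 1 / math.log2(rank[j] + 2))) / ideal
            # Pairwise force pushing the more relevant doc's score up.
            lam = sigma / (1 + math.exp(sigma * (scores[i] - scores[j]))) * gain
            lambdas[i] += lam
            lambdas[j] -= lam
    return lambdas
```

The O(n^2) pair loop per query, coupled with the rank-dependent NDCG term, is why no coarse-grained parallelism survives across documents, and why the article maps the computation onto a deep FPGA pipeline instead.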



Published in

ACM Transactions on Reconfigurable Technology and Systems, Volume 4, Issue 3
August 2011
204 pages
ISSN: 1936-7406
EISSN: 1936-7414
DOI: 10.1145/2000832

      Copyright © 2011 ACM

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 22 August 2011
      • Revised: 1 August 2010
      • Accepted: 1 August 2010
      • Received: 1 April 2010

      Qualifiers

      • research-article
      • Research
      • Refereed
