
Distributed last call optimization for portable parallel logic programming

Published: 1 September 1992

Abstract

A difficult but important problem is the efficient exploitation of AND and OR parallelism in logic programs without making any assumptions about the underlying target machine(s). In earlier papers, we described the design of a binding environment for AND and OR parallel execution of logic programs on shared and nonshared memory machines, and the performance of a compiler (called ROLOG) using this binding environment on a range of MIMD parallel machines.

In this paper, we present an important optimization for portable parallel logic programming, namely distributed last-call optimization, an analog of the tail-recursion optimization for sequential Prolog. This scheme has been implemented in the ROLOG compiler, which runs unchanged on several shared memory and nonshared memory machines. We describe the effect of this optimization on several OR, AND/OR, and AND parallel benchmark programs.
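The paper's scheme operates on Prolog clauses; as a rough, language-neutral illustration (not from the paper), the sequential idea the abstract refers to — last-call (tail-recursion) optimization — replaces a recursive call in tail position with iteration, so one activation frame is reused instead of a stack of them. A minimal sketch, with a hypothetical `length` predicate stood in by Python functions:

```python
# Illustrative sketch only: last-call optimization done by hand.
# 'length' is a stand-in for a Prolog predicate whose clause ends
# in a single recursive last literal.

def length_recursive(xs):
    # Naive version: each call waits for its callee, growing the stack.
    if not xs:
        return 0
    return 1 + length_recursive(xs[1:])

def length_lco(xs):
    # Last-call-optimized version: the pending work is folded into an
    # accumulator and the recursive call becomes a loop, reusing one frame.
    acc = 0
    while xs:
        acc += 1
        xs = xs[1:]
    return acc

print(length_lco(list(range(5))))  # 5
```

The distributed variant described in the paper applies the same frame-reuse idea to processes spawned for the last literal of a clause, rather than to stack frames.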


Reviews

Pierre Jouvelot

AND and OR parallelism has been suggested as a way to make Prolog programs run faster on parallel machines, but naïve implementations are usually disappointing. Sophisticated parallel optimizations are called for to make the available parallelism pragmatically useful. Replacing recursion by iteration is a well-known technique for improving sequential tail-recursive programs; the distributed last-call optimization presented here builds on this idea to implement parallel logic programs efficiently when their clauses have a unique last literal.

The paper introduces the particular Reduce-OR model used by Ramkumar's ROLOG compiler; describes the binding mechanism used for unification (of utmost importance, since these bindings must be made available to concurrent processes, by sharing or copying); digs into the details of distributed last-call optimization (LCO); and presents benchmark results to support the effectiveness of the idea. Speedups of up to 50 percent have been obtained on standard programs. On distributed architectures, these speedups come from significant savings in the number of messages sent during evaluation: answer messages bypass intermediate nodes when a tail literal succeeds.

Distributed LCO is interesting, although its concepts are not really new, as discussed in the related-work section. The paper is generally well written, although some forward references for unusual terms could have been avoided. The description of the algorithm is based on an example, which helps, since its pseudocode is a bit difficult to grasp. The paper should interest researchers concerned with the details of optimizing parallel implementations of logic programs.
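The reviewer's point about answer messages bypassing intermediate nodes can be sketched as follows. This is an illustrative toy, not the paper's actual protocol: each "node" stands for a process solving one clause whose last literal spawns a child, and `reply_to` is a hypothetical reply address passed along with the goal.

```python
# Toy model of distributed last-call optimization's message savings.
# Without LCO, each node interposes itself: the child replies to it,
# and it relays the answer upward. With LCO, the last literal inherits
# the caller's reply-to address, so the final answer skips the chain.

def solve_naive(n, reply_to, messages):
    """Without distributed LCO: every node relays the answer upward."""
    if n == 0:
        messages.append(reply_to)
        return
    my_addr = f"node{n}"
    solve_naive(n - 1, my_addr, messages)  # child replies to this node...
    messages.append(reply_to)              # ...which relays to its parent

def solve_lco(n, reply_to, messages):
    """With distributed LCO: the last literal inherits the reply-to."""
    if n == 0:
        messages.append(reply_to)  # answer goes straight to the root
        return
    solve_lco(n - 1, reply_to, messages)

naive, lco = [], []
solve_naive(4, "root", naive)
solve_lco(4, "root", lco)
print(len(naive), len(lco))  # 5 1
```

For a chain of depth n, the naive scheme sends n + 1 answer messages while the optimized one sends a single message addressed directly to the original caller, which is consistent with the message savings the review attributes to distributed LCO.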


Published in

ACM Letters on Programming Languages and Systems, Volume 1, Issue 3 (Sept. 1992), 104 pages
ISSN: 1057-4514, EISSN: 1557-7384
DOI: 10.1145/151640
Copyright © 1992 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
