
Efficient call graph analysis

Published: 1 September 1992

Abstract

We present an efficient algorithm for computing the procedure call graph, the program representation underlying most interprocedural optimization techniques. The algorithm computes the possible bindings of procedure variables in languages where such variables receive their values only through parameter passing, as in Fortran. We extend the algorithm to accommodate a limited form of assignments to procedure variables. The resulting algorithm can also be used to analyze functional programs that have been converted to continuation-passing style.

We discuss the algorithm in relation to other approaches to call graph analysis. Many less efficient techniques produce essentially the same call graph. A few algorithms are more precise, but depending on language features they may be prohibitively expensive.
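To illustrate the kind of analysis the abstract describes, here is a minimal sketch (not the authors' algorithm or notation) of building a call graph when procedure-valued variables are bound only through parameter passing: call edges and parameter bindings are propagated together until a fixed point is reached. The data layout and names are hypothetical.

```python
from collections import defaultdict

def build_call_graph(procs):
    """procs maps a procedure name to a dict with:
       'formals': its formal parameter names, and
       'calls'  : call sites as (callee, actuals), where the callee and
                  each actual is either a known procedure name or a
                  formal parameter of the calling procedure."""
    bindings = defaultdict(set)   # (proc, formal) -> procedures it may denote
    edges = set()                 # call graph edges (caller, callee)

    def resolve(caller, x):
        # A literal procedure name denotes itself; a formal parameter
        # denotes whatever has been bound to it so far.
        return {x} if x in procs else set(bindings[(caller, x)])

    changed = True
    while changed:                # iterate to a fixed point
        changed = False
        for caller, info in procs.items():
            for callee, actuals in info['calls']:
                for target in resolve(caller, callee):
                    if (caller, target) not in edges:
                        edges.add((caller, target))
                        changed = True
                    # bind procedure-valued actuals to the target's formals
                    for formal, actual in zip(procs[target]['formals'], actuals):
                        new = resolve(caller, actual) - bindings[(target, formal)]
                        if new:
                            bindings[(target, formal)] |= new
                            changed = True
    return edges

# Hypothetical example: MAIN passes F to DRIVER, which calls its parameter P.
procs = {
    'MAIN':   {'formals': [],    'calls': [('DRIVER', ['F'])]},
    'DRIVER': {'formals': ['P'], 'calls': [('P', [])]},
    'F':      {'formals': [],    'calls': []},
}
```

On this example the fixed point yields the edges MAIN→DRIVER and DRIVER→F, resolving the indirect call through the parameter P.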



      Reviews

      Kathleen H. V. Booth

According to the authors, “most compilers optimize procedures as separate units” and do not take into account the possibility of global optimization. The paper considers this question and examines previous attempts to produce a reliable and speedy global optimizer. This brief introduction is followed by a definition of the call graph technique, which leads to a statement of a proposed global optimization algorithm, an example, a proof of correctness, and discussions of implementation, extensions to various languages, and comparisons with previous algorithms. The main thrust of the analysis is directed at FORTRAN-like languages, but, unfortunately, no actual comparison of compile times, code size, and improvement in running times for a real program is provided. Since, in my experience, optimizers tend to produce unexpected results and marginally worthwhile speedups in running time, such information would have made this paper much more useful.


Published in

ACM Letters on Programming Languages and Systems, Volume 1, Issue 3 (Sept. 1992), 104 pages
ISSN: 1057-4514
EISSN: 1557-7384
DOI: 10.1145/151640

Copyright © 1992 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
