Concepts in Interprocedural parallelization analysis in SUIF
Interprocedural optimization
Interprocedural optimization (IPO) is a collection of compiler techniques used in computer programming to improve performance in programs containing many frequently used functions of small or medium length. IPO differs from other compiler optimizations because it analyzes the entire program; other optimizations look at only a single function, or even a single block of code.
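As a minimal sketch of the kind of opportunity a whole-program view exposes (the function names below are illustrative, not taken from the paper): once the optimizer can see both the definition of a callee and its call site, it can inline the call and fold the result to a constant.

    /* square() could live in a different translation unit from its caller.
     * A per-function optimizer sees only an opaque call; an interprocedural
     * optimizer sees both the definition and the call site, so it can inline
     * the call and fold square(4) down to the constant 16. */
    int square(int x) {
        return x * x;
    }

    int main(void) {
        return square(4);   /* with IPO this effectively becomes: return 16; */
    }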
Fortran 95 language features
This is a comprehensive overview of features of the Fortran 95 language, the version supported by almost all existing Fortran compilers. Old features that have been superseded by new ones are not described; few of those historic features are used in modern programs (although most have been retained in the language to maintain backward compatibility). The current standard is known as Fortran 2008, but, as of 2011, features introduced in Fortran 2003 are only now being implemented.
Variable (computer science)
In computer programming, a variable is a storage location and an associated symbolic name that contains some known or unknown quantity of information, a value. The variable name is the usual way to reference the stored value; this separation of name and content allows the name to be used independently of the exact information it represents.
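A minimal C sketch of this separation of name and stored value (the variable name is arbitrary):

    #include <stdio.h>

    int main(void) {
        int count = 3;       /* "count" names a storage location currently holding 3 */
        count = count + 1;   /* the same name now refers to a new value, 4 */
        printf("%d\n", count);
        return 0;
    }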
Automatic parallelization
Automatic parallelization, also called auto parallelization or autoparallelization (or simply parallelization when the automation is clear from context), refers to converting sequential code into multi-threaded or vectorized (or both) code in order to utilize multiple processors simultaneously in a shared-memory multiprocessor machine. The goal of automatic parallelization is to relieve programmers from the tedious and error-prone manual parallelization process.
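As an illustration only (this is not SUIF's actual output), the sketch below contrasts a sequential loop whose iterations are independent with a hand-written OpenMP version of the code an auto-parallelizer might effectively produce; compile with -fopenmp or an equivalent flag.

    #define N 1000000

    /* Sequential loop: every iteration is independent, so a parallelizing
     * compiler is free to distribute iterations across threads. */
    void scale_sequential(const double *a, double *b) {
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i];
    }

    /* Hand-written equivalent of what an auto-parallelizer might emit,
     * expressed here with an OpenMP pragma for illustration. */
    void scale_parallel(const double *a, double *b) {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            b[i] = 2.0 * a[i];
    }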
Data-flow analysis
Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions.
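A small hand-worked instance of reaching definitions (the function and variable names are illustrative):

    /* A definition d of variable x "reaches" a program point p if there is
     * a path from d to p along which x is not redefined. */
    int example(int flag) {
        int x = 1;          /* d1: defines x */
        if (flag)
            x = 2;          /* d2: redefines x, killing d1 on this path */
        return x;           /* both d1 (when flag == 0) and d2 (when flag != 0) reach here */
    }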
Dataflow
Dataflow is a term used in computing, and may have various shades of meaning. It is closely related to message passing.
Shared memory
In computing, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, for example among its multiple threads, is generally not referred to as shared memory.
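A minimal sketch of inter-process shared memory using the POSIX shm_open/mmap interface, one common mechanism among several (the segment name here is made up for the example; link with -lrt on some systems):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";            /* hypothetical segment name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        /* Map the segment; another process that maps the same name sees the
         * same bytes, so data written here is visible to it without copying. */
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(buf, "hello from the writer process");

        munmap(buf, 4096);
        close(fd);
        shm_unlink(name);                          /* clean up after the demo */
        return 0;
    }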