Concepts in "Efficient dynamic heap allocation of scratch-pad memory"
Memory management
Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free that memory for reuse when it is no longer needed. This is critical to any computer system, and several methods have been devised to increase the effectiveness of memory management.
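To make the allocate/free cycle concrete, here is a minimal sketch in C++ using the C library heap; the buffer size is arbitrary and chosen only for illustration.

```cpp
#include <cstdio>
#include <cstdlib>

// A minimal sketch of dynamic allocation: memory is requested at run time,
// used, and then returned to the allocator so it can be reused.
int main() {
    std::size_t n = 1024;
    int *buf = static_cast<int*>(std::malloc(n * sizeof(int)));
    if (buf == nullptr) {
        return 1;  // allocation failed: the heap could not satisfy the request
    }
    for (std::size_t i = 0; i < n; ++i) {
        buf[i] = static_cast<int>(i);
    }
    std::printf("first=%d last=%d\n", buf[0], buf[n - 1]);
    std::free(buf);  // release the block for reuse by later allocations
    return 0;
}
```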
Doug Lea
Doug Lea is a professor of computer science at the State University of New York at Oswego, where he specializes in concurrent programming and the design of concurrent data structures. He was on the Executive Committee of the Java Community Process and chaired JSR 166, which added concurrency utilities to the Java programming language. On October 22, 2010, Doug Lea notified the Java Community Process Executive Committee that he would not stand for reelection.
C dynamic memory allocation
C dynamic memory allocation refers to performing dynamic memory allocation in the C programming language via a group of functions in the C standard library, namely malloc, realloc, calloc, and free. The C++ programming language includes these functions for backwards compatibility; their use in C++ has been largely superseded by the operators new and delete. Many different implementations of the actual memory allocation mechanism used by malloc are available. Their performance varies in both execution time and required memory.
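A short sketch of these functions in use (called from C++ via <cstdlib>); the sizes are arbitrary, and the error-handling pattern shown is one common convention rather than a requirement.

```cpp
#include <cstdio>
#include <cstdlib>

// calloc zero-initializes, realloc resizes an existing block (preserving its
// contents up to the old size), and free returns storage to the allocator.
int main() {
    // calloc: 8 ints, all zero-initialized
    int *a = static_cast<int*>(std::calloc(8, sizeof(int)));
    if (!a) return 1;

    // realloc: grow the block to 16 ints; the first 8 values are preserved
    int *grown = static_cast<int*>(std::realloc(a, 16 * sizeof(int)));
    if (!grown) { std::free(a); return 1; }
    a = grown;

    a[15] = 42;
    std::printf("a[0]=%d a[15]=%d\n", a[0], a[15]);
    std::free(a);  // every successful allocation is paired with one free
    return 0;
}
```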
Allocator (C++)
In C++ computer programming, allocators are an important component of the C++ Standard Library. The standard library provides several data structures, such as list and set, commonly referred to as containers. A common trait among these containers is their ability to change size during the execution of the program. To achieve this, some form of dynamic memory allocation is usually required. Allocators handle all the requests for allocation and deallocation of memory for a given container.
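A minimal sketch of what such an allocator can look like under the C++11 minimal-allocator rules; TracingAllocator is a hypothetical name, and it simply forwards to malloc/free while printing each request so the container's allocation behavior becomes visible.

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>
#include <vector>

// A container such as std::vector routes every allocation and deallocation
// through allocate() and deallocate(). A real special-purpose allocator would
// manage its own arena here instead of forwarding to malloc/free.
template <typename T>
struct TracingAllocator {
    using value_type = T;

    TracingAllocator() = default;
    template <typename U>
    TracingAllocator(const TracingAllocator<U>&) {}

    T* allocate(std::size_t n) {
        std::printf("allocate %zu object(s)\n", n);
        if (void* p = std::malloc(n * sizeof(T))) return static_cast<T*>(p);
        throw std::bad_alloc();
    }
    void deallocate(T* p, std::size_t n) {
        std::printf("deallocate %zu object(s)\n", n);
        std::free(p);
    }
};

// Stateless allocators compare equal: any instance can free another's memory.
template <typename T, typename U>
bool operator==(const TracingAllocator<T>&, const TracingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const TracingAllocator<T>&, const TracingAllocator<U>&) { return false; }

int main() {
    std::vector<int, TracingAllocator<int>> v;
    for (int i = 0; i < 100; ++i) v.push_back(i);  // growth triggers reallocations
    return 0;
}
```

Running this makes the vector's geometric growth visible as a sequence of progressively larger allocate/deallocate pairs, which is exactly the traffic a scratch-pad-aware allocator would need to intercept.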
Memory footprint
Memory footprint refers to the amount of main memory that a program uses or references while running. This includes all sorts of active memory regions: code, static data sections (both initialized and uninitialized), the heap, and all the stacks, plus the memory required to hold any additional data structures, such as symbol tables, constant tables, debugging structures, open files, etc., that the program needs while executing and that will be loaded at least once during the run.
Compile time
In computer science, compile time refers to any of the operations performed by a compiler (the "compile-time operations"), the programming language requirements that must be met by source code for it to be successfully compiled (the "compile-time requirements"), or the properties of the program that can be reasoned about at compile time. The operations performed at compile time usually include syntax analysis, various kinds of semantic analysis, and code generation.
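A small sketch of compile-time evaluation using C++ constexpr and static_assert; the factorial function is an arbitrary example of a property the compiler can check before the program ever runs.

```cpp
#include <cstdio>

// A constexpr function can be executed by the compiler itself.
constexpr unsigned long long factorial(unsigned n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// Checked entirely at compile time; a false condition stops compilation.
static_assert(factorial(10) == 3628800ULL, "evaluated during compilation");

int main() {
    // An array bound must be a compile-time constant, so the compiler
    // evaluates factorial(5) while translating the program, not at run time.
    int scratch[factorial(5)] = {};
    std::printf("array of %zu ints\n", sizeof(scratch) / sizeof(scratch[0]));
    return 0;
}
```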
Shared memory
In computing, shared memory is memory that may be simultaneously accessed by multiple programs, with the intent of providing communication among them or avoiding redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, the programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, for example among its multiple threads, is generally not referred to as shared memory.
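A minimal sketch of one common mechanism, POSIX shared memory; the region name "/demo_region" and the 4 KiB size are arbitrary choices, and a real program would add synchronization between the cooperating processes.

```cpp
#include <cstdio>
#include <cstring>
#include <fcntl.h>      // O_CREAT, O_RDWR
#include <sys/mman.h>   // shm_open, mmap, munmap, shm_unlink
#include <unistd.h>     // ftruncate, close

// Creates a named shared-memory object and maps it into this process's
// address space; a second process calling shm_open with the same name and
// mapping it would see the same bytes. Link with -lrt on some systems.
int main() {
    const char *name = "/demo_region";
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { std::perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) != 0) { std::perror("ftruncate"); return 1; }

    void *p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::strcpy(static_cast<char*>(p), "hello from process A");
    std::printf("wrote: %s\n", static_cast<char*>(p));

    munmap(p, 4096);
    close(fd);
    shm_unlink(name);  // remove the name once no process needs the region
    return 0;
}
```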