===================================================================
RCS file: /home/cvs/OpenXM_contrib2/asir2000/gc/doc/gcdescr.html,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -p -r1.1 -r1.2
--- OpenXM_contrib2/asir2000/gc/doc/gcdescr.html	2002/07/24 08:00:16	1.1
+++ OpenXM_contrib2/asir2000/gc/doc/gcdescr.html	2003/06/24 05:11:38	1.2
@@ -1,7 +1,7 @@
Conservative GC Algorithmic Overview
-Hans-J. Boehm, Silicon Graphics
+Hans-J. Boehm, HP Labs (Much of this was written at SGI)

This is under construction

@@ -96,20 +96,24 @@ typically on the order of the page size.

Large block sizes are rounded up to the next multiple of HBLKSIZE and then allocated by
-GC_allochblk. This uses roughly what Paul Wilson has termed
-a "next fit" algorithm, i.e. first-fit with a rotating pointer.
-The implementation does check for a better fitting immediately
-adjacent block, which gives it somewhat better fragmentation characteristics.
-I'm now convinced it should use a best fit algorithm. The actual
+GC_allochblk. Recent versions of the collector
+use an approximate best fit algorithm by keeping free lists for
+several large block sizes.
+The actual
implementation of GC_allochblk is significantly complicated by black-listing issues (see below).
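To make the new scheme concrete, here is a rough sketch of approximate best fit over per-size free lists. All names and constants (N_HBLK_FLS, the struct hblk layout) are illustrative, not the collector's; the real code in allchblk.c also splits oversized blocks and honors black-listing.

    #include <stddef.h>

    #define HBLKSIZE   4096
    #define N_HBLK_FLS 28          /* number of large-block free lists (assumed) */

    struct hblk { struct hblk *next; size_t nblocks; };
    static struct hblk *free_lists[N_HBLK_FLS + 1];

    /* Map a block count to a free-list index; all larger requests
     * share the last list. */
    static int fl_index(size_t nblocks)
    {
        return nblocks < N_HBLK_FLS ? (int)nblocks : N_HBLK_FLS;
    }

    struct hblk *alloc_large(size_t bytes)
    {
        size_t n = (bytes + HBLKSIZE - 1) / HBLKSIZE;  /* round up to blocks */
        /* Starting the search at the smallest list that can satisfy the
         * request yields an approximate best fit. */
        for (int i = fl_index(n); i <= N_HBLK_FLS; i++) {
            for (struct hblk **p = &free_lists[i]; *p != 0; p = &(*p)->next) {
                if ((*p)->nblocks >= n) {
                    struct hblk *h = *p;
                    *p = h->next;   /* unlink; splitting the tail is omitted */
                    return h;
                }
            }
        }
        return 0;                   /* caller expands the heap or collects */
    }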

-Small blocks are allocated in blocks of size HBLKSIZE.
-Each block is
+Small blocks are allocated in chunks of size HBLKSIZE.
+Each chunk is
dedicated to only one object size and kind. The allocator maintains separate free lists for each size and kind of object.

+Once a large block is split for use in smaller objects, it can only
+be used for objects of that size, unless the collector discovers a completely
+empty chunk. Completely empty chunks are restored to the appropriate
+large block free list.
+
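A sketch of the resulting small-object fast path, with hypothetical names (the real code is spread across the GC_malloc family and GC_allocobj); refill_from_new_chunk stands in for carving a fresh chunk into objects of one size and kind.

    #include <stddef.h>

    #define MAXOBJGRANULES 128
    enum obj_kind { NORMAL, PTRFREE, N_KINDS };   /* simplified "kinds" */

    /* One free list per (kind, size) pair, as described above. */
    static void *obj_free_list[N_KINDS][MAXOBJGRANULES + 1];

    /* Assumed helper: splits a fresh HBLKSIZE chunk into objects of
     * exactly this size and kind, returning the linked list. */
    extern void *refill_from_new_chunk(size_t granules, enum obj_kind k);

    void *alloc_small(size_t granules, enum obj_kind k)
    {
        void **flh = &obj_free_list[k][granules];
        if (*flh == 0)
            *flh = refill_from_new_chunk(granules, k);
        void *result = *flh;
        *flh = *(void **)result;  /* next link lives inside the free object */
        return result;
    }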

In order to avoid allocating blocks for too many distinct object sizes, the collector normally does not directly allocate objects of every possible request size. Instead requests are rounded up to one of a smaller number
@@ -139,27 +143,35 @@ expand the heap. Otherwise, we initiate a garbage col
that the amount of garbage collection work per allocated byte remains constant.
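A minimal sketch of that triggering rule, with invented names; GC_free_space_divisor is the real tunable this constant imitates, though the actual default and adjustments differ.

    #include <stddef.h>

    extern size_t bytes_allocd_since_gc;  /* allocation since the last GC */
    extern size_t heap_size;              /* current heap size in bytes */

    static const size_t FREE_SPACE_DIVISOR = 3;  /* GC_free_space_divisor analog */

    int should_collect(void)
    {
        /* Collecting after a fixed fraction of the heap has been allocated
         * keeps marking/sweeping work per allocated byte roughly constant;
         * otherwise the heap is expanded instead. */
        return bytes_allocd_since_gc > heap_size / FREE_SPACE_DIVISOR;
    }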

-The above is in fat an oversimplification of the real heap expansion
-heuristic, which adjusts slightly for root size and certain kinds of
-fragmentation. In particular, programs with a large root set size and
+The above is in fact an oversimplification of the real heap expansion
+and GC triggering heuristic, which adjusts slightly for root size
+and certain kinds of
+fragmentation. In particular:
+

-(It has been suggested that this should be adjusted so that we favor
+It has been suggested that this should be adjusted so that we favor
expansion if the resulting heap still fits into physical memory. In many cases, that would no doubt help. But it is tricky to do this in a way that remains robust if multiple applications are contending
-for a single pool of physical memory.)
+for a single pool of physical memory.

Mark phase

@@ -204,7 +216,7 @@ changes to
  • MS_NONE indicating that reachable objects are marked.
-The core mark routine GC_mark_from_mark_stack, is called
+The core mark routine GC_mark_from is called
repeatedly by several of the sub-phases when the mark stack starts to fill up. It is also called repeatedly in MS_ROOTS_PUSHED state to empty the mark stack.
@@ -213,6 +225,12 @@ each call, so that it can also be used by the incremen
It is fairly carefully tuned, since it usually consumes a large majority of the garbage collection time.

+The fact that it performs only a small amount of work per call also
+allows it to be used as the core routine of the parallel marker. In that
+case it is normally invoked on thread-private mark stacks instead of the
+global mark stack. More details can be found in
+scale.html.
+
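As a rough illustration of this bounded-work contract (all names here are hypothetical; the real GC_mark_from in mark.c also handles object headers, interior pointers, and black lists):

    #include <stddef.h>

    #define WORK_QUANTUM 4096   /* stack entries examined per call (assumed) */
    #define STACK_SZ     8192

    typedef struct {
        char **start;   /* first candidate pointer location in the region */
        char **end;     /* one past the last */
    } mark_entry;

    static mark_entry mark_stack[STACK_SZ];
    static int mark_sp;

    /* Assumed helpers standing in for header lookups and mark bits. */
    extern int  points_into_heap(char *p);
    extern int  is_marked(char *p);
    extern void set_mark_bit(char *p);
    extern void push_obj_contents(char *p);   /* may push onto mark_stack */

    void mark_step(void)
    {
        int budget = WORK_QUANTUM;
        while (budget-- > 0 && mark_sp > 0) {
            mark_entry e = mark_stack[--mark_sp];
            for (char **p = e.start; p < e.end; p++) {
                char *q = *p;
                if (points_into_heap(q) && !is_marked(q)) {
                    set_mark_bit(q);
                    push_obj_contents(q);
                }
            }
        }
        /* Returning with work left lets callers interleave marking with
         * mutator progress (incremental mode), or run several instances
         * on thread-private stacks (parallel mode). */
    }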

The marker correctly handles mark stack overflows. Whenever the mark stack overflows, the mark state is reset to MS_INVALID. Since there are already marked objects in the heap,
@@ -281,7 +299,8 @@ Unmarked large objects are immediately returned to the
Each small object page is checked to see if all mark bits are clear. If so, the entire page is returned to the large object free list. Small object pages containing some reachable object are queued for later
-sweeping.
+sweeping, unless we determine that the page contains very little free
+space, in which case it is not examined further.
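The per-page decision just described might be sketched as follows, using a simplified header layout; the real logic is GC_reclaim_block in reclaim.c, including the shortcut for nearly full pages.

    #include <stddef.h>

    #define MARK_BITS_WORDS 8
    struct hdr {                      /* simplified per-page block header */
        unsigned long mark_bits[MARK_BITS_WORDS];
        size_t obj_size;              /* size this page is dedicated to */
    };

    extern int count_bits(unsigned long w);   /* popcount helper (assumed) */

    enum sweep_action { FREE_WHOLE_PAGE, QUEUE_FOR_SWEEP, LEAVE_ALONE };

    enum sweep_action sweep_decision(struct hdr *h, size_t page_bytes)
    {
        size_t live = 0;
        for (int i = 0; i < MARK_BITS_WORDS; i++)
            live += count_bits(h->mark_bits[i]);
        if (live == 0)
            return FREE_WHOLE_PAGE;   /* back to the large block free list */
        if (live * h->obj_size > page_bytes - page_bytes / 8)
            return LEAVE_ALONE;       /* little free space; skip sweeping */
        return QUEUE_FOR_SWEEP;       /* reclaim its free objects lazily */
    }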

This initial sweep pass touches only block headers, not the blocks themselves. Thus it does not require significant paging, even
@@ -341,12 +360,16 @@ object itself becomes marked, we have uncovered a
cycle involving the object. This usually results in a warning from the collector. Such objects are not finalized, since it may be unsafe to do so. See the more detailed
- discussion of finalization semantics.
+ discussion of finalization semantics.

Any objects remaining unmarked at the end of this process are added to a queue of objects whose finalizers can be run. Depending on collector configuration, finalizers are dequeued and run either implicitly during allocation calls, or explicitly in response to a user request.
+(Note that the former is unfortunately both the default and not generally safe.
+If finalizers perform synchronization, it may result in deadlocks.
+Nontrivial finalizers generally need to perform synchronization, and
+thus require a different collector configuration.)
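To make the safer configuration concrete, here is a minimal example using the collector's public finalization interface (GC_finalize_on_demand, GC_REGISTER_FINALIZER, GC_invoke_finalizers); error handling is omitted, and the program structure is of course illustrative.

    #include <stdio.h>
    #include <gc.h>

    static void my_finalizer(void *obj, void *client_data)
    {
        (void)client_data;
        /* Safe to take locks here: the application decides when this runs. */
        printf("finalizing %p\n", obj);
    }

    int main(void)
    {
        GC_INIT();
        GC_finalize_on_demand = 1;   /* never run finalizers inside GC_malloc */

        void *obj = GC_MALLOC(16);
        GC_REGISTER_FINALIZER(obj, my_finalizer, NULL, NULL, NULL);
        obj = NULL;                  /* drop the last reference */

        GC_gcollect();               /* queues the finalizer */
        GC_invoke_finalizers();      /* run queued finalizers at a safe point */
        return 0;
    }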

The collector provides a mechanism for replacing the procedure that is used to mark through objects. This is used both to provide support for
@@ -354,13 +377,14 @@ Java-style unordered finalization, and to ignore certa
e.g. those arising from C++ implementations of virtual inheritance.

    Generational Collection and Dirty Bits

-We basically use the parallel and generational GC algorithm described in
-"Mostly Parallel Garbage Collection",
+We basically use the concurrent and generational GC algorithm described in
+"Mostly Parallel Garbage Collection",
by Boehm, Demers, and Shenker.

The most significant modification is that
-the collector always runs in the allocating thread.
-There is no separate garbage collector thread.
+the collector always starts running in the allocating thread.
+There is no separate garbage collector thread. (If parallel GC is
+enabled, helper threads may also be woken up.)
If an allocation attempt either requests a large object, or encounters an empty small object free list, and notices that there is a collection in progress, it immediately performs a small amount of marking work
@@ -389,50 +413,108 @@ cannot be satisfied from small object free lists. When
the set of modified pages is retrieved, and we mark once again from marked objects on those pages, this time with the mutator stopped.
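The allocation-driven marking described above amounts to something like the following sketch (invented names; the collector's real entry point for incremental work is GC_collect_a_little, called from the allocation slow path):

    #include <stddef.h>

    extern int   collection_in_progress(void);  /* assumed predicate */
    extern void  mark_step(void);               /* bounded quantum; see above */
    extern void *allocate(size_t bytes);        /* underlying allocator */

    void *alloc_with_gc_progress(size_t bytes)
    {
        if (collection_in_progress())
            mark_step();             /* a small marking "tax" per allocation */
        return allocate(bytes);
    }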

-We keep track of modified pages using one of three distinct mechanisms:
+We keep track of modified pages using one of several distinct mechanisms:

1. Through explicit mutator cooperation. Currently this requires
-the use of GC_malloc_stubborn.
+the use of GC_malloc_stubborn, and is rarely used.
2.
-By write-protecting physical pages and catching write faults. This is
+(MPROTECT_VDB) By write-protecting physical pages and
+catching write faults. This is
implemented for many Unix-like systems and for win32. It is not possible in a few environments.
3.
-By retrieving dirty bit information from /proc. (Currently only Sun's
+(PROC_VDB) By retrieving dirty bit information from /proc.
+(Currently only Sun's
Solaris supports this. Though this is considerably cleaner, performance may actually be better with mprotect and signals.)
+
4.
+(PCR_VDB) By relying on an external dirty bit implementation, in this
+case the one in Xerox PCR.
+
5.
+(DEFAULT_VDB) By treating all pages as dirty. This is the default if
+none of the other techniques is known to be usable, and
+GC_malloc_stubborn is not used. Practical only for testing, or if
+the vast majority of objects use GC_malloc_stubborn.
    +
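As an illustration of mechanism 2 (MPROTECT_VDB), a heavily simplified sketch follows. Only the POSIX calls are real; the heap bounds, dirty array, and page size are assumptions, and a real handler must also chain to any previously installed SIGSEGV handler.

    #include <signal.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096                /* assumed; real code asks the OS */

    extern char *heap_start, *heap_end;   /* assumed heap bounds */
    extern unsigned char dirty[];         /* one entry per heap page */

    static void write_fault(int sig, siginfo_t *si, void *ctx)
    {
        char *page = (char *)((uintptr_t)si->si_addr
                              & ~(uintptr_t)(PAGE_SIZE - 1));
        (void)sig; (void)ctx;
        if (page >= heap_start && page < heap_end) {
            dirty[(page - heap_start) / PAGE_SIZE] = 1;  /* record the page */
            /* Unprotect so the faulting write can be retried and succeed. */
            mprotect(page, PAGE_SIZE, PROT_READ | PROT_WRITE);
        }
        /* Faults outside the heap must be forwarded to the old handler. */
    }

    /* Called whenever the collector reads and resets the dirty bits. */
    void protect_heap(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = write_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, 0);
        mprotect(heap_start, (size_t)(heap_end - heap_start), PROT_READ);
    }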

    Black-listing

+
+The collector implements black-listing of pages, as described in
+Boehm, ``Space Efficient Conservative Collection'', PLDI '93, also available
+here.
+

+During the mark phase, the collector tracks ``near misses'', i.e. attempts
+to follow a ``pointer'' to just outside the garbage-collected heap, or
+to a currently unallocated page inside the heap. Pages that have been
+the targets of such near misses are likely to be the targets of
+misidentified ``pointers'' in the future. To minimize the future
+damage caused by such misidentifications, they will be allocated only to
+small pointerfree objects.
+

+The collector understands two different kinds of black-listing. A
+page may be black listed for interior pointer references
+(GC_add_to_black_list_stack), if it was the target of a near
+miss from a location that requires interior pointer recognition,
+e.g. the stack, or the heap if GC_all_interior_pointers
+is set. In this case, we also avoid allocating large blocks that include
+this page.
+

+If the near miss came from a source that did not require interior
+pointer recognition, it is black-listed with
+GC_add_to_black_list_normal.
+A page black-listed in this way may appear inside a large object,
+so long as it is not the first page of a large object.
+

+The GC_allochblk routine respects black-listing when assigning
+a block to a particular object kind and size. It occasionally
+drops (i.e. allocates and forgets) blocks that are completely black-listed
+in order to avoid excessively long large block free lists containing
+only unusable blocks. This would otherwise become an issue
+if there is low demand for small pointerfree objects.
+
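The constraints above might be checked by something like this sketch (hypothetical per-page bitmaps; the real GC_allochblk consults header-table flags instead):

    #include <stddef.h>

    /* Assumed black-list bitmaps, one entry per heap page. */
    extern unsigned char normal_bl[];  /* set by GC_add_to_black_list_normal */
    extern unsigned char stack_bl[];   /* set by GC_add_to_black_list_stack */

    /* May the candidate block starting at page index p, npages long,
     * hold objects that contain pointers? Pointer-free objects are
     * exempt from black-listing entirely. */
    int block_usable(size_t p, size_t npages, int pointerfree)
    {
        if (pointerfree)
            return 1;              /* black lists don't constrain these */
        for (size_t i = 0; i < npages; i++) {
            if (stack_bl[p + i])
                return 0;          /* interior-pointer miss: unusable anywhere */
            if (i == 0 && normal_bl[p + i])
                return 0;          /* normal miss: ok except as first page */
        }
        return 1;
    }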

    Thread support

We support several different threading models. Unfortunately Pthreads, the only reasonably well standardized thread model, supports too narrow an interface for conservative garbage collection. There appears to be
-no portable way to allow the collector to coexist with various Pthreads
+no completely portable way to allow the collector to coexist with various Pthreads
implementations. Hence we currently support only a few of the more common Pthreads implementations.

In particular, it is very difficult for the collector to stop all other threads in the system and examine the register contents. This is currently
-accomplished with very different mechanisms for different Pthreads
+accomplished with very different mechanisms for some Pthreads
implementations. The Solaris implementation temporarily disables much of the user-level threads implementation by stopping kernel-level threads
-("lwp"s). The Irix implementation sends signals to individual Pthreads
-and has them wait in the signal handler. The Linux implementation
-is similar in spirit to the Irix one.
+("lwp"s). The Linux/HPUX/OSF1 and Irix implementations send signals to
+individual Pthreads and have them wait in the signal handler.
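In outline, the signal-based scheme looks something like the following sketch. All names are illustrative; the real implementation in linux_threads.c uses a carefully chosen signal and blocks in the handler rather than spinning.

    #include <pthread.h>
    #include <semaphore.h>
    #include <signal.h>

    #define SIG_SUSPEND SIGUSR1        /* assumed choice of suspend signal */

    static sem_t stopped_sem;          /* counts threads that have parked */
    static volatile sig_atomic_t resume_flag;

    static void suspend_handler(int sig)
    {
        (void)sig;
        /* Signal delivery spilled this thread's registers onto its stack,
         * where a conservative stack scan can now find them. */
        sem_post(&stopped_sem);        /* async-signal-safe */
        while (!resume_flag)
            ;   /* real code blocks in sigsuspend() instead of spinning */
    }

    void stop_world_init(void)
    {
        struct sigaction sa;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sa.sa_handler = suspend_handler;
        sigaction(SIG_SUSPEND, &sa, 0);
        sem_init(&stopped_sem, 0, 0);
    }

    void stop_world(pthread_t *threads, int n)
    {
        resume_flag = 0;
        for (int i = 0; i < n; i++)
            pthread_kill(threads[i], SIG_SUSPEND);  /* suspend each thread */
        for (int i = 0; i < n; i++)
            sem_wait(&stopped_sem);    /* wait until every thread is parked */
    }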

-The Irix implementation uses
-only documented Pthreads calls, but relies on extensions to their semantics,
-notably the use of mutexes and condition variables from signal
-handlers. The Linux implementation should be far closer to
-portable, though impirically it is not completely portable.
+The Linux and Irix implementations use
+only documented Pthreads calls, but rely on extensions to their semantics.
+The Linux implementation linux_threads.c relies on only very
+mild extensions to the pthreads semantics, and already supports a large number
+of other Unix-like pthreads implementations. Our goal is to make this the
+only pthread support in the collector.

+(The Irix implementation is separate only for historical reasons and should
+clearly be merged. The current Solaris implementation probably performs
+better in the uniprocessor case, but does not support thread operations in the
+collector. Hence it cannot support the parallel marker.)
+

All implementations must intercept thread creation and a few other thread-specific calls to allow enumeration of threads and location of thread stacks. This is currently
-accomplished with # define's in gc.h, or optionally
+accomplished with # define's in gc.h
+(really gc_pthread_redirects.h), or optionally
by using ld's function call wrapping mechanism under Linux.
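The redirection itself is simple; the following is modeled on gc_pthread_redirects.h. The wrapper names are the collector's real entry points, while the surrounding scaffolding is condensed: a client that includes gc.h with the appropriate threads macro defined gets its pthread calls rerouted to GC wrappers that record each new thread and its stack before running the user's start routine.

    #include <pthread.h>

    int GC_pthread_create(pthread_t *new_thread, const pthread_attr_t *attr,
                          void *(*start_routine)(void *), void *arg);
    int GC_pthread_join(pthread_t thread, void **retval);

    #define pthread_create GC_pthread_create
    #define pthread_join   GC_pthread_join
    /* pthread_detach and pthread_sigmask are redirected similarly. */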

Comments are appreciated. Please send mail to
-boehm@acm.org
+boehm@acm.org or
+Hans.Boehm@hp.com
+

+This is a modified copy of a page written while the author was at SGI.
+The original was here.
+