
src/hotspot/share/gc/g1/g1YoungCollector.cpp


 318       // live objects are missed by the marking process.  Objects
 319       // allocated after the start of concurrent marking don't need to
 320       // be scanned.
 321       //
 322       // * An object must not be reclaimed if it is on the concurrent
 323       // mark stack.  Objects allocated after the start of concurrent
 324       // marking are never pushed on the mark stack.
 325       //
 326       // Nominating only objects allocated after the start of concurrent
 327       // marking is sufficient to meet both constraints.  This may miss
 328       // some objects that satisfy the constraints, but the marking data
 329       // structures don't support efficiently performing the needed
 330       // additional tests or scrubbing of the mark stack.
 331       //
 332       // We handle humongous objects specially, because frequent allocation and
 333       // dropping of large binary blobs is an important use case for eager reclaim,
 334       // and this special handling increases needed headroom.
 335       // It also mitigates G1 allocating humongous objects as old generation
 336       // objects even though they might die quite quickly.
 337       //
 338       // TypeArray objects are allowed to be reclaimed even if allocated before
 339       // the start of concurrent mark.  For this we rely on mark stack insertion
 340       // to exclude is_typeArray() objects, preventing reclamation of an object
 341       // that is on the mark stack.  We also rely on the metadata for
 342       // such objects to be built-in and so guaranteed to be kept live.
 343       //
 344       // Non-typeArrays that were allocated before marking are excluded from
 345       // eager reclaim during marking.  One issue is the problem described
 346       // above with scrubbing the mark stack, but there is also a problem
 347       // that causes these humongous objects to be collected incorrectly:
 348       //
 349       // E.g. if the mutator is running, we may have objects o1 and o2 in the same
 350       // region, where o1 has already been scanned and o2 is only reachable from
 351       // the candidate object h, which is humongous.
 352       //
 353       // If the mutator read the reference to o2 from h and installed it into o1,
 354       // no remembered set entry would be created to keep o2 alive, as o1 and
 355       // o2 are in the same region.  Object h might be reclaimed by the next
 356       // garbage collection.  o1 still has the reference to o2, but since o1 has
 357       // already been scanned, we fail to detect that o2 is still live and reclaim it.
 358       //
 359       // There is another minor problem with non-typeArray regions being the source
 360       // of remembered set entries in other regions' remembered sets.  There are
 361       // two cases: first, the remembered set entry is in a Free region after reclaim.
 362       // We handle this case by ignoring these cards when merging the remembered
 363       // sets.
 364       //
 365       // Second, there may be cases where eagerly reclaimed regions have already
 366       // been reallocated.  This may cause scanning of these outdated remembered set
 367       // entries, which now cover other objects.  But apart from extra work this
 368       // does not cause correctness issues.
 369       // There is no difference between scanning cards covering an effectively
 370       // dead humongous object vs. some other objects in reallocated regions.
 371       //
 372       // TAMSes are only reset after completing the entire mark cycle, during
 373       // bitmap clearing.  It is worthwhile not to wait until then, and to allow
 374       // reclamation outside of actual (concurrent) SATB marking.
 375       // This also applies to the concurrent start pause - we only set
 376       // mark_in_progress() at the end of that GC: no mutator is running that can
 377       // sneakily install a new reference to the potentially reclaimed humongous
 378       // object.
 379       // During the concurrent start pause the situation described above, where we
 380       // miss a reference, cannot happen.  No mutator is modifying the object
 381       // graph to install such an overlooked reference.
 382       //
 383       // After the pause, having reclaimed h, obviously the mutator can't fetch
 384       // the reference from h any more.
 385       if (!obj->is_typeArray()) {
 386         // All regions that were allocated before marking have a TAMS != bottom.
 387         bool allocated_before_mark_start = region->bottom() != _g1h->concurrent_mark()->top_at_mark_start(region);
 388         bool mark_in_progress = _g1h->collector_state()->is_in_marking();
 389 
 390         if (allocated_before_mark_start && mark_in_progress) {
 391           return false;
 392         }
 393       }
 394       return _g1h->is_potential_eager_reclaim_candidate(region);
 395     }
 396 
 397   public:
 398     G1PrepareRegionsClosure(G1CollectedHeap* g1h, G1PrepareEvacuationTask* parent_task) :
 399       _g1h(g1h),
 400       _parent_task(parent_task),
 401       _worker_humongous_total(0),
 402       _worker_humongous_candidates(0),
 403       _humongous_card_set_stats() { }
 404 
 405     ~G1PrepareRegionsClosure() {

 318       // live objects are missed by the marking process.  Objects
 319       // allocated after the start of concurrent marking don't need to
 320       // be scanned.
 321       //
 322       // * An object must not be reclaimed if it is on the concurrent
 323       // mark stack.  Objects allocated after the start of concurrent
 324       // marking are never pushed on the mark stack.
 325       //
 326       // Nominating only objects allocated after the start of concurrent
 327       // marking is sufficient to meet both constraints.  This may miss
 328       // some objects that satisfy the constraints, but the marking data
 329       // structures don't support efficiently performing the needed
 330       // additional tests or scrubbing of the mark stack.
 331       //
 332       // We handle humongous objects specially, because frequent allocation and
 333       // dropping of large binary blobs is an important use case for eager reclaim,
 334       // and this special handling increases needed headroom.
 335       // It also mitigates G1 allocating humongous objects as old generation
 336       // objects even though they might die quite quickly.
 337       //
 338       // Humongous objects without oops (typeArrays, flatArrays without oops in
 339       // their elements) are allowed to be reclaimed even if allocated before
 340       // the start of concurrent mark.  For this we rely on mark stack insertion
 341       // to exclude them, preventing reclamation of an object that is on the
 342       // mark stack.  That code also ensures that the metadata (klass) of such
 343       // objects is kept live.
 344       //
 345       // Other humongous objects that were allocated before marking are excluded
 346       // from eager reclaim during marking.  One issue is the problem described
 347       // above with scrubbing the mark stack, but there is also a problem
 348       // that causes these humongous objects to be collected incorrectly:
 349       //
 350       // E.g. if the mutator is running, we may have objects o1 and o2 in the same
 351       // region, where o1 has already been scanned and o2 is only reachable from
 352       // the candidate object h, which is humongous.
 353       //
 354       // If the mutator read the reference to o2 from h and installed it into o1,
 355       // no remembered set entry would be created to keep o2 alive, as o1 and
 356       // o2 are in the same region.  Object h might be reclaimed by the next
 357       // garbage collection.  o1 still has the reference to o2, but since o1 has
 358       // already been scanned, we fail to detect that o2 is still live and reclaim it.
 359       //
 360       // There is another minor problem with these humongous objects with oops being
 361       // the source of remembered set entries in other regions' remembered sets.
 362       // There are two cases: first, the remembered set entry is in a Free region
 363       // after reclaim.  We handle this case by ignoring these cards when merging
 364       // the remembered sets.
 365       //
 366       // Second, there may be cases where regions previously containing eagerly
 367       // reclaimed objects have already been allocated into again.
 368       // This may cause scanning of these outdated remembered set entries,
 369       // which now cover other objects.  But apart from extra work this does not
 370       // cause correctness issues.
 371       // There is no difference between scanning cards covering an effectively
 372       // dead humongous object vs. some other objects in reallocated regions.
 373       //
 374       // TAMSes are only reset after completing the entire mark cycle, during
 375       // bitmap clearing.  It is worthwhile not to wait until then, and to allow
 376       // reclamation outside of actual (concurrent) SATB marking.
 377       // This also applies to the concurrent start pause - we only set
 378       // mark_in_progress() at the end of that GC: no mutator is running that can
 379       // sneakily install a new reference to the potentially reclaimed humongous
 380       // object.
 381       // During the concurrent start pause the situation described above, where we
 382       // miss a reference, cannot happen.  No mutator is modifying the object
 383       // graph to install such an overlooked reference.
 384       //
 385       // After the pause, having reclaimed h, obviously the mutator can't fetch
 386       // the reference from h any more.
 387       bool marked_immediately = _g1h->can_be_marked_through_immediately(obj);
 388       if (!marked_immediately) {
 389         // All regions that were allocated before marking have a TAMS != bottom.
 390         bool allocated_before_mark_start = region->bottom() != _g1h->concurrent_mark()->top_at_mark_start(region);
 391         bool mark_in_progress = _g1h->collector_state()->is_in_marking();
 392 
 393         if (allocated_before_mark_start && mark_in_progress) {
 394           return false;
 395         }
 396       }
 397       return _g1h->is_potential_eager_reclaim_candidate(region);
 398     }
 399 
 400   public:
 401     G1PrepareRegionsClosure(G1CollectedHeap* g1h, G1PrepareEvacuationTask* parent_task) :
 402       _g1h(g1h),
 403       _parent_task(parent_task),
 404       _worker_humongous_total(0),
 405       _worker_humongous_candidates(0),
 406       _humongous_card_set_stats() { }
 407 
 408     ~G1PrepareRegionsClosure() {