Mirror of https://github.com/golang/go.git (synced 2025-12-08 06:10:04 +00:00)
runtime: make sweep time proportional to in-use spans
Currently sweeping walks the list of all spans, which means the work in
sweeping is proportional to the maximum number of spans ever used. If the
heap was once large but is now small, this causes an amortization failure:
on a small heap, GCs happen frequently, but a full sweep still has to
happen in each GC cycle, which means we spend a lot of time in sweeping.

Fix this by creating a separate list consisting of just the in-use spans
to be swept, so sweeping is proportional to the number of in-use spans
(which is proportional to the live heap).

Specifically, we create two lists: a list of unswept in-use spans and a
list of swept in-use spans. At the start of the sweep cycle, the swept
list becomes the unswept list and the new swept list is empty. Allocating
a new in-use span adds it to the swept list. Sweeping moves spans from
the unswept list to the swept list.

This fixes the amortization problem because a shrinking heap moves spans
off the unswept list without adding them to the swept list, reducing the
time required by the next sweep cycle.

Updates #9265. This fix eliminates almost all of the time spent in
sweepone; however, markrootSpans has essentially the same bug, so now the
test program from this issue spends all of its time in markrootSpans.

No significant effect on other benchmarks.

Change-Id: Ib382e82790aad907da1c127e62b3ab45d7a4ac1e
Reviewed-on: https://go-review.googlesource.com/30535
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
parent 45baff61e3
commit f9497a6747
4 changed files with 173 additions and 8 deletions
@@ -60,6 +60,17 @@ type mheap struct {
 	// mapped. cap(spans) indicates the total reserved memory.
 	spans []*mspan
 
+	// sweepSpans contains two mspan stacks: one of swept in-use
+	// spans, and one of unswept in-use spans. These two trade
+	// roles on each GC cycle. Since the sweepgen increases by 2
+	// on each cycle, this means the swept spans are in
+	// sweepSpans[sweepgen/2%2] and the unswept spans are in
+	// sweepSpans[1-sweepgen/2%2]. Sweeping pops spans from the
+	// unswept stack and pushes spans that are still in-use on the
+	// swept stack. Likewise, allocating an in-use span pushes it
+	// on the swept stack.
+	sweepSpans [2]gcSweepBuf
+
 	_ uint32 // align uint64 fields on 32-bit for atomics
 
 	// Proportional sweep
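The comment in the hunk above describes the index arithmetic that lets the two stacks trade roles without moving any spans. A minimal standalone sketch (plain Go, outside the runtime) of how sweepgen/2%2 alternates each GC cycle:

```go
package main

import "fmt"

func main() {
	// sweepgen increases by 2 on each GC cycle, so sweepgen/2%2
	// alternates between 0 and 1. The stack that was "swept" last
	// cycle automatically becomes "unswept" this cycle, with no
	// copying of spans between lists.
	for sweepgen := uint32(2); sweepgen <= 8; sweepgen += 2 {
		swept := sweepgen / 2 % 2
		unswept := 1 - swept
		fmt.Printf("sweepgen=%d swept=sweepSpans[%d] unswept=sweepSpans[%d]\n",
			sweepgen, swept, unswept)
	}
}
```

The swept index prints as 1, 0, 1, 0 across the four cycles, so each cycle's swept stack is exactly the next cycle's unswept stack.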
@@ -546,6 +557,7 @@ func (h *mheap) alloc_m(npage uintptr, sizeclass int32, large bool) *mspan {
 		// Record span info, because gc needs to be
 		// able to map interior pointer to containing span.
 		atomic.Store(&s.sweepgen, h.sweepgen)
+		h.sweepSpans[h.sweepgen/2%2].push(s) // Add to swept in-use list.
 		s.state = _MSpanInUse
 		s.allocCount = 0
 		s.sizeclass = uint8(sizeclass)
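To show how the scheme from the commit message amortizes, here is a toy simulation of the two-stack discipline. The `span` and `sweepBuf` types are illustrative stand-ins, not the runtime's lock-free gcSweepBuf; only the push/pop discipline matches the commit's description:

```go
package main

import "fmt"

// span is a stand-in for the runtime's mspan.
type span struct {
	id    int
	inUse bool
}

// sweepBuf is a simple stack of spans (the real gcSweepBuf is a
// concurrent data structure).
type sweepBuf []*span

// sweep pops every span off the unswept stack and pushes only the
// spans that are still in use onto the swept stack. Freed spans are
// simply dropped, so a shrinking heap shrinks the next cycle's work.
func sweep(unswept, swept *sweepBuf) {
	for len(*unswept) > 0 {
		s := (*unswept)[len(*unswept)-1]
		*unswept = (*unswept)[:len(*unswept)-1]
		if s.inUse {
			*swept = append(*swept, s)
		}
	}
}

func main() {
	var bufs [2]sweepBuf
	sweepgen := uint32(2)
	sweptIdx := func() uint32 { return sweepgen / 2 % 2 }

	// Allocating an in-use span pushes it onto the swept stack.
	for i := 0; i < 3; i++ {
		bufs[sweptIdx()] = append(bufs[sweptIdx()], &span{id: i, inUse: true})
	}

	// Next GC cycle: sweepgen advances by 2, the stacks trade roles,
	// and the new swept stack starts out empty.
	sweepgen += 2
	bufs[1-sweptIdx()][1].inUse = false // pretend one span was freed

	sweep(&bufs[1-sweptIdx()], &bufs[sweptIdx()])

	// Only the surviving in-use spans carry over to the next cycle.
	fmt.Println("in-use spans to sweep next cycle:", len(bufs[sweptIdx()])) // prints 2
}
```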