runtime: don't hold the heap lock while scavenging

This change modifies the scavenger to no longer hold the heap lock while
actively scavenging pages. To achieve this, the change also does the
following (see the sketch after the list):
* Reverses the locking behavior of the (*pageAlloc).scavenge API so that
  it only acquires the heap lock when necessary.
* Introduces a new lock on the scavenger-related fields in a pageAlloc
  so that access to those fields doesn't require the heap lock. There
  are a few places in the scavenge path, notably reservation, that
  require synchronization. The heap lock is far too heavy-handed for
  this case.
* Changes the scavenger to mark pages that are actively being scavenged
  as allocated, and "frees" them back to the page allocator the usual
  way.
* Lifts the heap-growth scavenging code out of mheap.grow, where the
  heap lock is held, and into allocSpan, just after the lock is
  released. Releasing the lock during mheap.grow is not feasible if we
  want to ensure that allocation always makes progress (post-growth,
  another allocating goroutine could come in and take all that space,
  forcing the goroutine that just grew the heap to grow it again).
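
To make the new flow concrete, here is a minimal, self-contained sketch.
It is not the runtime's code: toyHeap, its methods, and the integer
"page" are invented stand-ins, and sync.Mutex replaces the runtime
mutex.

package main

import "sync"

// toyHeap models the locking structure described above.
type toyHeap struct {
	heapLock sync.Mutex // stand-in for mheapLock
	scavLock sync.Mutex // stand-in for the new scavenger lock
	cursor   int        // hypothetical reservation cursor
}

func (h *toyHeap) markAllocated(page int)        {} // stub
func (h *toyHeap) releaseToOS(page int)          {} // stub: e.g. madvise
func (h *toyHeap) free(page int, scavenged bool) {} // stub

// scavengeOne walks one page through the new flow: reserve under the
// scavenger lock, mark allocated under the heap lock, release to the
// OS with no locks held, then free back through the ordinary path.
func (h *toyHeap) scavengeOne() {
	// Reservation needs only the scavenger lock, not the heap lock.
	h.scavLock.Lock()
	page := h.cursor
	h.cursor++
	h.scavLock.Unlock()

	// Mark the page allocated so it cannot be handed out while we
	// work on it, then drop the heap lock for the slow part.
	h.heapLock.Lock()
	h.markAllocated(page)
	h.heapLock.Unlock()

	h.releaseToOS(page) // slow syscall; runs with no locks held

	// "Free" the page the usual way, flagged as already scavenged.
	h.heapLock.Lock()
	h.free(page, true)
	h.heapLock.Unlock()
}

func main() {
	h := new(toyHeap)
	h.scavengeOne()
}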

This change means that the scavenger now must do more work for each
scavenge, but it is also now much more scalable. Although in theory it is
not ideal that the scavenger always takes the locked paths in the page
allocator, it takes advantage of some properties of the allocator (see the
sketch after the list):
* Most of the time, the scavenger will be working with one page at a
  time. The page allocator's locked path is optimized for this case.
* On the allocation path, it doesn't need to do the find operation at
  all; it can go straight to setting bits for the range and updating the
  summary structure.
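
For the second point, a rough sketch of why allocating at a known
address is cheap: only bit-setting is needed, with no search. The
one-bit-per-page encoding (1 = in-use) and the helper name are invented
for this example; the runtime's encoding differs.

// allocKnownRange marks pages [base, base+npages) as in-use in a toy
// bitmap. A real allocator would also update the radix-tree summaries
// for the affected chunks; that step is omitted here.
func allocKnownRange(bits []uint64, base, npages uint) {
	for i := base; i < base+npages; i++ {
		bits[i/64] |= 1 << (i % 64)
	}
}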

Change-Id: Ie941d5e7c05dcc96476795c63fef74bcafc2a0f1
Reviewed-on: https://go-review.googlesource.com/c/go/+/353974
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>

src/runtime/mpagealloc.go

@@ -226,6 +226,8 @@ type pageAlloc struct {
 	// are currently available. Otherwise one might iterate over unused
 	// ranges.
 	//
+	// Protected by mheapLock.
+	//
 	// TODO(mknyszek): Consider changing the definition of the bitmap
 	// such that 1 means free and 0 means in-use so that summaries and
 	// the bitmaps align better on zero-values.
@@ -261,29 +263,41 @@ type pageAlloc struct {
 	inUse addrRanges
 
 	// scav stores the scavenger state.
-	//
-	// All fields are protected by mheapLock.
 	scav struct {
+		lock mutex
+
 		// inUse is a slice of ranges of address space which have not
 		// yet been looked at by the scavenger.
+		//
+		// Protected by lock.
 		inUse addrRanges
 
 		// gen is the scavenge generation number.
+		//
+		// Protected by lock.
 		gen uint32
 
 		// reservationBytes is how large of a reservation should be made
 		// in bytes of address space for each scavenge iteration.
+		//
+		// Protected by lock.
 		reservationBytes uintptr
 
 		// released is the amount of memory released this generation.
+		//
+		// Updated atomically.
 		released uintptr
 
 		// scavLWM is the lowest (offset) address that the scavenger reached this
 		// scavenge generation.
+		//
+		// Protected by lock.
 		scavLWM offAddr
 
 		// freeHWM is the highest (offset) address of a page that was freed to
 		// the page allocator this scavenge generation.
+		//
+		// Protected by mheapLock.
 		freeHWM offAddr
 	}
 
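
As an aside, the per-field protection rules documented in this hunk can
be modeled outside the runtime roughly as follows. scavState and its
methods are invented, with sync.Mutex and sync/atomic standing in for
the runtime's mutex and atomic helpers.

package main

import (
	"sync"
	"sync/atomic"
)

// scavState mirrors the documented discipline: most scavenger state is
// guarded by a dedicated small lock, while released is updated
// atomically so it can be bumped without holding any lock.
type scavState struct {
	lock     sync.Mutex
	gen      uint32  // protected by lock
	released uintptr // updated atomically
}

func (s *scavState) noteReleased(n uintptr) {
	atomic.AddUintptr(&s.released, n) // no lock required
}

func (s *scavState) nextGen() uint32 {
	s.lock.Lock()
	defer s.lock.Unlock()
	s.gen++
	return s.gen
}

func main() {
	s := new(scavState)
	s.noteReleased(4096)
	_ = s.nextGen()
}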
@@ -864,17 +878,19 @@ Found:
 // Must run on the system stack because p.mheapLock must be held.
 //
 //go:systemstack
-func (p *pageAlloc) free(base, npages uintptr) {
+func (p *pageAlloc) free(base, npages uintptr, scavenged bool) {
 	assertLockHeld(p.mheapLock)
 
 	// If we're freeing pages below the p.searchAddr, update searchAddr.
 	if b := (offAddr{base}); b.lessThan(p.searchAddr) {
 		p.searchAddr = b
 	}
-	// Update the free high watermark for the scavenger.
 	limit := base + npages*pageSize - 1
-	if offLimit := (offAddr{limit}); p.scav.freeHWM.lessThan(offLimit) {
-		p.scav.freeHWM = offLimit
+	if !scavenged {
+		// Update the free high watermark for the scavenger.
+		if offLimit := (offAddr{limit}); p.scav.freeHWM.lessThan(offLimit) {
+			p.scav.freeHWM = offLimit
+		}
 	}
 	if npages == 1 {
 		// Fast path: we're clearing a single bit, and we know exactly
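
With the new scavenged parameter, callers declare whether the pages
being freed were just scavenged, so freeHWM only advances for pages that
may actually be dirty. Hypothetical call sites, for illustration only:

// Ordinary free: the pages may be dirty, so the scavenger's free high
// watermark is raised to cover them.
p.free(base, npages, false)

// The scavenger returning pages it just released to the OS: they are
// already clean, so freeHWM is deliberately left alone.
p.free(base, npages, true)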