runtime: bring back minHeapIdx in scavenge index

The scavenge index currently doesn't guard against overflow, and CL
436395 removed the minHeapIdx optimization, which lets the chunk scan
skip chunks that were never mapped for the heap and exist only because
the chunks' mapped region is rounded out to a page on both ends.

Because the 0'th chunk is never mapped, minHeapIdx effectively prevents
overflow, fixing the iOS breakage.
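
For intuition, here is a minimal, self-contained Go sketch (not the
runtime's actual types or code; every name below is illustrative) of
how a lower bound like minHeapIdx keeps a backwards chunk scan from
wrapping an unsigned index below chunk 0:

package main

import "fmt"

// scavIndex is a toy stand-in for the scavenge index.
// chunks[i] says whether chunk i has pages worth scavenging.
type scavIndex struct {
	chunks     []bool
	minHeapIdx uint // lowest chunk index ever mapped for the heap; always >= 1
	searchAddr uint // highest chunk index worth scanning
}

// find scans backwards from searchAddr looking for work. Because
// minHeapIdx is at least 1 (chunk 0 is never mapped for the heap), the
// unsigned index i never decrements past 0, so it cannot wrap around to
// a huge value and index out of bounds. With a lower bound of 0 the
// i-- at i == 0 would wrap, which is the overflow described above.
func (s *scavIndex) find() (uint, bool) {
	for i := s.searchAddr; i >= s.minHeapIdx; i-- {
		if s.chunks[i] {
			return i, true
		}
	}
	return 0, false
}

func main() {
	s := &scavIndex{
		chunks:     []bool{false, false, false, true, false, true},
		minHeapIdx: 2, // chunks 0 and 1 exist only due to page rounding
		searchAddr: 5,
	}
	if idx, ok := s.find(); ok {
		fmt.Println("scavengable chunk at index", idx)
	}
}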

This change also refactors growth and initialization a little to
decouple them from pageAlloc and share code across platforms.

Change-Id: If7fc3245aa81cf99451bf8468458da31986a9b0a
Reviewed-on: https://go-review.googlesource.com/c/go/+/486695
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
5 changed files with 59 additions and 22 deletions

@@ -322,11 +322,10 @@ func (p *pageAlloc) init(mheapLock *mutex, sysStat *sysMemStat, test bool) {
 	p.mheapLock = mheapLock
 
 	// Initialize the scavenge index.
-	p.scav.index.init()
+	p.summaryMappedReady += p.scav.index.init(test, sysStat)
 
 	// Set if we're in a test.
 	p.test = test
-	p.scav.index.test = test
 }
 
 // tryChunkOf returns the bitmap data for the given chunk.
@@ -363,6 +362,9 @@ func (p *pageAlloc) grow(base, size uintptr) {
 	// We just update a bunch of additional metadata here.
 	p.sysGrow(base, limit)
 
+	// Grow the scavenge index.
+	p.summaryMappedReady += p.scav.index.grow(base, limit, p.sysStat)
+
 	// Update p.start and p.end.
 	// If no growth happened yet, start == 0. This is generally
 	// safe since the zero page is unmapped.
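
The body of scav.index.grow is not shown in this hunk. Purely as a
hedged illustration of the kind of bookkeeping that brings minHeapIdx
back, here is a continuation of the toy scavIndex sketch from the
commit message above; baseChunk and limitChunk are assumed to be the
chunk indices spanned by the newly grown region, and none of these
names are the runtime's actual identifiers (the real grow also returns
how much metadata memory it mapped, which the caller folds into
summaryMappedReady).

// grow records the lowest chunk index ever mapped for the heap, giving
// the backwards scan in find (see the sketch above) a hard floor, and
// extends the toy per-chunk metadata to cover the new chunks.
func (s *scavIndex) grow(baseChunk, limitChunk uint) {
	if s.minHeapIdx == 0 || baseChunk < s.minHeapIdx {
		s.minHeapIdx = baseChunk
	}
	if limitChunk > 0 && limitChunk-1 > s.searchAddr {
		s.searchAddr = limitChunk - 1
	}
	for uint(len(s.chunks)) < limitChunk {
		s.chunks = append(s.chunks, false)
	}
}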