mirror of
https://github.com/golang/go.git
synced 2025-12-08 06:10:04 +00:00
runtime: static lock ranking for the runtime (enabled by GOEXPERIMENT)
I took some of the infrastructure from Austin's lock logging CL https://go-review.googlesource.com/c/go/+/192704 (with deadlock detection from the logs) and developed a setup that gives static lock ranking for runtime locks. Static lock ranking establishes a documented total ordering among locks, and then reports an error if the total order is violated. A violation is reported both when an actual deadlock occurs (a sequence of locks acquired in conflicting orders) and when only one side of a possible deadlock is exercised; lock ordering deadlocks cannot happen as long as the lock ordering is followed. Along the way, I found a deadlock involving the new timer code, which Ian fixed via https://go-review.googlesource.com/c/go/+/207348, as well as two other potential deadlocks.

See the constants at the top of runtime/lockrank.go for the static lock ranking that I ended up with, along with some comments. This is useful documentation of the current intended lock ordering when acquiring multiple locks in the runtime. I also added an array lockPartialOrder[] which shows and enforces the current partial ordering among locks (which is embedded within the total ordering); it is more specific about the dependencies among locks. I don't try to check the ranking within a lock class with multiple locks that can be acquired at the same time (i.e. I don't check the ranking when multiple hchan locks are acquired).

Currently, I am doing a lockInit() call to set the lock rank of most locks. Any lock that is not otherwise initialized is assumed to be a leaf lock (a very high rank), which eliminates the need to do anything for a bunch of locks, including all architecture-dependent locks. For two locks, root.lock and notifyList.lock (both in runtime/sema.go), lock initialization is not as easy to do, so I instead pass the lock rank with the lock calls.

For Windows compilation, I needed to increase the StackGuard size from 896 to 928 because of the new lock-rank checking functions.
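The ranking scheme described above can be sketched outside the runtime as follows. This is a hedged illustration, not the actual code from runtime/lockrank.go: the rank names, the partialOrder table, and the acquire/release helpers are all hypothetical stand-ins, and the real runtime tracks held ranks per M rather than in a global slice.

```go
package main

import "fmt"

// lockRank is a hypothetical rank type mirroring the runtime's
// lockRank; the names and table below are illustrative only.
type lockRank int

const (
	rankSched lockRank = iota
	rankTimers
	rankHchan
	rankLeaf // uninitialized locks default to a leaf (highest) rank
)

// partialOrder[r] lists the ranks that may legally be held when a
// lock of rank r is acquired (analogous in spirit to lockPartialOrder).
var partialOrder = map[lockRank][]lockRank{
	rankSched:  {},
	rankTimers: {rankSched},
	rankHchan:  {rankSched, rankTimers},
	rankLeaf:   {rankSched, rankTimers, rankHchan},
}

// held models the stack of currently held lock ranks (per M in the
// real runtime; a single global here for simplicity).
var held []lockRank

// acquire checks that every lock already held is an allowed
// predecessor of rank r, then records r as held.
func acquire(r lockRank) error {
	allowed := partialOrder[r]
	for _, h := range held {
		ok := false
		for _, a := range allowed {
			if h == a {
				ok = true
				break
			}
		}
		if !ok {
			return fmt.Errorf("lock ordering violation: rank %d held while acquiring rank %d", h, r)
		}
	}
	held = append(held, r)
	return nil
}

// release drops the most recently acquired rank.
func release() { held = held[:len(held)-1] }

func main() {
	must(acquire(rankSched))
	must(acquire(rankTimers)) // sched -> timers respects the partial order
	release()
	release()
	must(acquire(rankTimers))
	if err := acquire(rankSched); err != nil {
		fmt.Println(err) // sched while holding timers violates the order
	}
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

Note that, as the commit message says, a violation is reported even when only one side of a conflicting pair ever runs: holding timers while acquiring sched trips the check regardless of whether the opposite order ever executes in the same run.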
Checking of the static lock ranking is enabled by setting GOEXPERIMENT=staticlockranking before doing a run. To make sure the static lock ranking code has no memory or CPU overhead when not enabled by GOEXPERIMENT, I changed 'go build/install' to define a build tag (with the same name) whenever any experiment has been baked into the toolchain (by checking Expstring()). This allows me to avoid increasing the size of the 'mutex' type when static lock ranking is not enabled.

Fixes #38029

Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This commit is contained in:
parent bfd569fcb0
commit 0a820007e7
29 changed files with 561 additions and 29 deletions
@@ -91,7 +91,7 @@ const (
 	// The stack guard is a pointer this many bytes above the
 	// bottom of the stack.
-	_StackGuard = 896*sys.StackGuardMultiplier + _StackSystem
+	_StackGuard = 928*sys.StackGuardMultiplier + _StackSystem

 	// After a stack split check the SP is allowed to be this
 	// many bytes below the stack guard. This saves an instruction
@@ -161,9 +161,11 @@ func stackinit() {
 	}
 	for i := range stackpool {
 		stackpool[i].item.span.init()
+		lockInit(&stackpool[i].item.mu, lockRankStackpool)
 	}
 	for i := range stackLarge.free {
 		stackLarge.free[i].init()
+		lockInit(&stackLarge.lock, lockRankStackLarge)
 	}
 }
|
@@ -182,6 +184,7 @@ func stacklog2(n uintptr) int {
 func stackpoolalloc(order uint8) gclinkptr {
 	list := &stackpool[order].item.span
 	s := list.first
+	lockWithRankMayAcquire(&mheap_.lock, lockRankMheap)
 	if s == nil {
 		// no free stacks. Allocate another span worth.
 		s = mheap_.allocManual(_StackCacheSize>>_PageShift, &memstats.stacks_inuse)
|
@@ -389,6 +392,8 @@ func stackalloc(n uint32) stack {
 	}
 	unlock(&stackLarge.lock)

+	lockWithRankMayAcquire(&mheap_.lock, lockRankMheap)
+
 	if s == nil {
 		// Allocate a new stack from the heap.
 		s = mheap_.allocManual(npage, &memstats.stacks_inuse)