tracebackdefers was used for scanning/copying deferred functions'
arguments. Now that deferred functions are always argumentless,
it does nothing. Remove it.
Change-Id: I55bedabe5584ea41a12cdb03d55ec9692a5aacd9
Reviewed-on: https://go-review.googlesource.com/c/go/+/325916
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Implement register ABI for reflect.MakeFunc and method Value Call
on ARM64.
Change-Id: I5487febb9ea764af5ccf5d7c94858ab0acec7cac
Reviewed-on: https://go-review.googlesource.com/c/go/+/323936
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
At this point all funcPC references are to ABIInternal functions.
Replace them with the intrinsics.
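A representative before/after, as a hedged sketch (gopanic stands in
for any ABIInternal runtime function referenced this way):

    // before: funcPC silently assumed the target's ABI
    fn := funcPC(gopanic)

    // after: the internal/abi intrinsic makes the required ABI explicit
    fn := abi.FuncPCABIInternal(gopanic)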
Change-Id: I3ba7e485c83017408749b53f92877d3727a75e27
Reviewed-on: https://go-review.googlesource.com/c/go/+/321954
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Currently, for stack objects, the compiler emits metadata that
includes the offset and type descriptor for each object. The type
descriptor symbol has many fields, and it references many other
symbols, e.g. field/element types, equality functions, names.
Observe that what we actually need at runtime is only the GC
metadata needed to scan the object, and the GC metadata are
"leaf" symbols (which don't reference other symbols). Emit
only the GC data instead. This avoids bringing live the type
descriptor, as well as things referenced by it (if it is not
otherwise live).
This reduces binary sizes:
                     old       new
hello (println)  1187776   1133856  (-4.5%)
hello (fmt)      1902448   1844416  (-3.1%)
cmd/compile     22670432  22438576  (-1.0%)
cmd/link         6346272   6225408  (-1.9%)
No significant change in compiler speed.
name      old time/op  new time/op  delta
Template   184ms ± 2%   186ms ± 5%    ~     (p=0.905 n=9+10)
Unicode   78.4ms ± 5%  76.3ms ± 3%  -2.60%  (p=0.009 n=10+10)
GoTypes    1.09s ± 1%   1.08s ± 1%  -0.73%  (p=0.027 n=10+8)
Compiler  85.6ms ± 3%  84.6ms ± 4%    ~     (p=0.143 n=10+10)
SSA        7.23s ± 1%   7.25s ± 1%    ~     (p=0.780 n=10+9)
Flate      116ms ± 5%   115ms ± 6%    ~     (p=0.912 n=10+10)
GoParser   201ms ± 4%   195ms ± 1%    ~     (p=0.089 n=10+10)
Reflect    455ms ± 1%   458ms ± 2%    ~     (p=0.050 n=9+9)
Tar        155ms ± 2%   155ms ± 3%    ~     (p=0.436 n=10+10)
XML        202ms ± 2%   200ms ± 2%    ~     (p=0.053 n=10+9)
Change-Id: I33a7f383d79afba1a482cac6da0cf5b7de9c0ec4
Reviewed-on: https://go-review.googlesource.com/c/go/+/313514
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
For stack frames larger than StackBig, the stack split prologue needs
to guard against potential wraparound. Currently, it carefully
arranges to avoid underflow, but this is complicated and requires a
special check for StackPreempt. StackPreempt is no longer the only
stack poison value, so this check will incorrectly succeed if the
stack bound is poisoned with any other value.
This CL simplifies the logic of the check, reduces its length, and
accounts for any possible poison value by directly checking for
underflow.
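In Go-ish pseudocode (a sketch only; the actual check is machine code
emitted by the compiler), the simplified prologue logic is roughly:

    sub := SP - (framesize - StackSmall)
    if sub > SP {
        // The subtraction wrapped around zero: underflow. The frame
        // cannot fit no matter what the guard holds, so grow.
        goto morestack
    }
    if sub < stackguard {
        // Ordinary bounds check against the (possibly poisoned) guard.
        goto morestack
    }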
Change-Id: I917a313102d6a21895ef7c4b0f304fb84b292c81
Reviewed-on: https://go-review.googlesource.com/c/go/+/307010
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
This change finishes off the register ABI functionality for the
reflect package.
Specifically, it implements a call on a MakeFunc'd value by performing
the reverse process that reflect.Value.Call does, using the same ABI
steps. It implements a call on a method value created by reflect by
translating between the method value's ABI and the method's ABI.
Tests are added for both cases.
For #40724.
Change-Id: I302820b61fc0a8f94c5525a002bc02776aef41af
Reviewed-on: https://go-review.googlesource.com/c/go/+/298670
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Currently, gcTestMoveStackOnNextCall doubles the stack allocation on
each call because stack movement always doubles the stack. That's
rather unfortunate if you're doing a bunch of stack movement tests in
a row that don't actually have to grow the stack because you'll
quickly hit the stack size limit even though you're hardly using any
of the stack.
Fix this by adding a special stack poison value for
gcTestMoveStackOnNextCall that newstack recognizes and inhibits the
allocation doubling.
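A hedged sketch of the mechanism (stackForceMove is the new poison
constant; its exact value is elided here):

    // gcTestMoveStackOnNextCall poisons stackguard0 so that the next
    // function call triggers newstack.
    func gcTestMoveStackOnNextCall() {
        gp := getg()
        gp.stackguard0 = stackForceMove
    }

    // In newstack, roughly:
    oldsize := gp.stack.hi - gp.stack.lo
    newsize := oldsize * 2
    if stackguard0 == stackForceMove {
        // Forced stack movement for debugging: don't double, or
        // repeated tests quickly hit the stack size limit.
        newsize = oldsize
    }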
Change-Id: Iace7055a0f33cb48dc97b8f4b46e45304bee832c
Reviewed-on: https://go-review.googlesource.com/c/go/+/306672
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
See if this clears failure on openbsd-amd64-68 (ahem).
Change-Id: Ifa60ef711a95e5de8ad91433ffa425f75b36c76f
Reviewed-on: https://go-review.googlesource.com/c/go/+/295629
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This is partial plumbing recycled from the original register ABI test
work; these are the parts that translate easily. Some other bits are
deferred until later, when they are ready to be used.
For #40724.
Change-Id: Ica8c55a4526793446189725a2bc3839124feb38f
Reviewed-on: https://go-review.googlesource.com/c/go/+/260539
Trust: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Document what the values in internal/sys mean.
Remove various special cases for arm64 in the code using StackAlign.
Delete Uintreg - it was for GOARCH=amd64p32,
which was specific to GOOS=nacl and has been retired.
This CL is part of a stack of CLs adding windows/arm64
support (#36439), intended to land in the Go 1.17 cycle.
This CL is, however, not windows/arm64-specific.
It is cleanup meant to make the port (and future ports) easier.
Change-Id: I40e8fa07b4e192298b6536b98a72a751951a4383
Reviewed-on: https://go-review.googlesource.com/c/go/+/288795
Trust: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change modifies mheap's span allocation API to have each caller
declare a purpose, defined as a new enum called spanAllocType.
The purpose behind this change is two-fold:
1. Tight control over who gets to allocate heap memory is, generally
speaking, a good thing. Every codepath that allocates heap memory
places additional implicit restrictions on the allocator. A notable
example of a restriction is work bufs coming from heap memory: write
barriers are not allowed in allocation paths because then we could
have a situation where the allocator calls into the allocator.
2. Memory statistic updating is explicit. Instead of passing an opaque
pointer for statistic updating, which places restrictions on how that
statistic may be updated, we use the spanAllocType to determine which
statistic to update and how.
We also take this opportunity to group all the statistic updating code
together, which should make the accounting code a little easier to
follow.
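A hedged sketch of the resulting API shape (illustrative; the
constant set follows the purposes described above):

    type spanAllocType uint8

    const (
        spanAllocHeap          spanAllocType = iota // heap objects
        spanAllocStack                              // goroutine stacks
        spanAllocPtrScalarBits                      // unrolled GC bitmaps
        spanAllocWorkBuf                            // GC work buffers
    )

    // Callers now declare their purpose, e.g.:
    //   s := h.allocManual(npages, spanAllocStack)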
Change-Id: Ic0b0898959ba2a776f67122f0e36c9d7d60e3085
Reviewed-on: https://go-review.googlesource.com/c/go/+/246970
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Currently we don't use sigaltstack on darwin/arm64, as it is not
supported on iOS. However, it is supported on macOS. Use it.
(iOS remains unchanged.)
Change-Id: Icc154c5e2edf2dbdc8ca68741ad9157fc15a72ee
Reviewed-on: https://go-review.googlesource.com/c/go/+/256917
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Introduce GOOS=ios for iOS systems. GOOS=ios matches the "darwin"
build tag, like GOOS=android matches "linux" and GOOS=illumos
matches "solaris". Only ios/arm64 is supported (ios/amd64 is
not).
GOOS=ios and GOOS=darwin remain essentially the same at this
point. They will diverge at a later time, to differentiate macOS
and iOS.
Uses of GOOS=="darwin" are changed to (GOOS=="darwin" || GOOS=="ios"),
except where the code clearly means macOS (e.g. GOOS=="darwin" &&
GOARCH=="amd64"), in which case it remains GOOS=="darwin".
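The rewrite pattern, as a minimal sketch (the helper name is
hypothetical, only to illustrate the condition):

    // applePlatform reports whether we are on macOS or iOS.
    func applePlatform() bool {
        return runtime.GOOS == "darwin" || runtime.GOOS == "ios"
    }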
Updates #38485.
Change-Id: I4faacdc1008f42434599efb3c3ad90763a83b67c
Reviewed-on: https://go-review.googlesource.com/c/go/+/254740
Trust: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently activeStackChans is set before a goroutine blocks on a channel
operation in an unlockf passed to gopark. The trouble is that the
unlockf is called *after* the G's status is changed, and the G's status
is what is used by a concurrent mark worker (calling suspendG) to
determine that a G has successfully been suspended. In this window
between the status change and unlockf, the mark worker could try to
shrink the G's stack, and in particular observe that activeStackChans is
false. This observation will cause the mark worker to *not* synchronize
with concurrent channel operations when it should, and so updating
pointers in the sudog for the blocked goroutine (which may point to the
goroutine's stack) races with channel operations which may also
manipulate the pointer (read it, dereference it, update it, etc.).
Fix the problem by adding a new atomically-updated flag to the g struct
called parkingOnChan, which is non-zero in the race window above. Then,
in isShrinkStackSafe, check if parkingOnChan is zero. The race is
resolved like so:
* Blocking G sets parkingOnChan, then changes status in gopark.
* Mark worker successfully suspends blocking G.
* If the mark worker observes parkingOnChan is non-zero when checking
isShrinkStackSafe, then it's not safe to shrink (we're in the race
window).
* If the mark worker observes parkingOnChan as zero, then because
the mark worker observed the G status change, it can be sure that
gopark's unlockf completed, and gp.activeStackChans will be correct.
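A hedged sketch of the resulting check (parkingOnChan per this CL; the
other two conditions are the preexisting shrink-safety rules):

    func isShrinkStackSafe(gp *g) bool {
        // Not safe in a syscall (pointers into the stack may be held),
        // at an async safe point (no precise pointer maps), or in the
        // gopark race window flagged by parkingOnChan.
        return gp.syscallsp == 0 &&
            !gp.asyncSafePoint &&
            atomic.Load8(&gp.parkingOnChan) == 0
    }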
The risk of this change is low: although it reduces the number of
places where stack shrinking is allowed, the window here is incredibly
small. Essentially, every place where it might have crashed before is
now simply a place where no shrink happens.
This change adds a test, but the race window is so small that it's hard
to trigger without a well-placed sleep in park_m. Also, this change
fixes stackGrowRecursive in proc_test.go to actually allocate a 128-byte
stack frame. It turns out the compiler was destructuring the "pad" field
and only allocating one uint64 on the stack.
Fixes #40641.
Change-Id: I7dfbe7d460f6972b8956116b137bc13bc24464e8
Reviewed-on: https://go-review.googlesource.com/c/go/+/247050
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
In the current implementation, we can observe crashes after calling
debug.SetMaxStack and allocating a stack larger than 4GB since
stackalloc works with 32-bit sizes. To avoid this, we define an upper
limit as the largest feasible point we can grow a stack to and provide a
better error message when we get a stack overflow.
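A hedged sketch of the check in newstack (maxstackceiling being the
new hard upper bound alongside the debug.SetMaxStack-controlled
maxstacksize):

    if newsize > maxstacksize || newsize > maxstackceiling {
        // Report whichever limit is smaller.
        limit := maxstacksize
        if maxstackceiling < limit {
            limit = maxstackceiling
        }
        print("runtime: goroutine stack exceeds ", limit, "-byte limit\n")
        throw("stack overflow")
    }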
Fixes #41228
Change-Id: I55fb0a824f47ed9fb1fcc2445a4dfd57da9ef8d4
Reviewed-on: https://go-review.googlesource.com/c/go/+/255997
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Giovanni Bajo <rasky@develer.com>
Reviewed-by: Keith Randall <khr@golang.org>
That port is gone.
Change-Id: I212d435e290d1890d6cd5531be98bb692650595e
Reviewed-on: https://go-review.googlesource.com/c/go/+/254077
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I think they are no longer experimental. Might as well promote
them to permanent.
Change-Id: Id1259601b3dd2061dd60df86ee48080bfb575d2f
Reviewed-on: https://go-review.googlesource.com/c/go/+/249857
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
reflect.assignTo writes to the target using write barriers. Make sure
that the memory it is writing to is zeroed, so the write barrier does
not read pointers from uninitialized memory.
Fixes #39541
Change-Id: Ia64b2cacc193bffd0c1396bbce1dfb8182d4905b
Reviewed-on: https://go-review.googlesource.com/c/go/+/238760
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I added routines that can acquire/release a particular rank without
acquiring/releasing an associated lock. I added lockRankGscan as a rank
for acquiring/releasing the Gscan bit.
castogscanstatus() and casGtoPreemptScan() are acquires of the Gscan
bit. casfrom_Gscanstatus() is a release of the Gscan bit. casgstatus()
is like an acquire and release of the Gscan bit, since it will wait if
the Gscan bit is currently set.
We have a cycle between hchan and Gscan. The acquisition of Gscan and
then hchan only happens in syncadjustsudogs() when the G is suspended,
so the main normal ordering (get hchan, then get Gscan) can't be
happening. So, I added a new rank lockRankHchanLeaf that is used when
acquiring hchan locks in syncadjustsudogs. This ranking is set so no
other locks can be acquired except other hchan locks.
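A hedged sketch of the acquisition in syncadjustsudogs (illustrative;
each distinct channel lock is taken at the leaf rank):

    var lastc *hchan
    for sg := gp.waiting; sg != nil; sg = sg.waitlink {
        if sg.c != lastc {
            // Leaf rank: after this, only other hchan locks may follow.
            lockWithRank(&sg.c.lock, lockRankHchanLeaf)
        }
        lastc = sg.c
    }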
Fixes #38922
Change-Id: I58ce526a74ba856cb42078f7b9901f2832e1d45c
Reviewed-on: https://go-review.googlesource.com/c/go/+/228417
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
We might as well grow the stack at least as large as we'll need for
the frame that is calling morestack. It doesn't help with the
lots-of-small-frames case, but it may help a bit with the
few-big-frames case.
Update #18138
Change-Id: I1f49c97706a70e20b30433cbec99a7901528ea52
Reviewed-on: https://go-review.googlesource.com/c/go/+/225800
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
A debugger can inject a call at almost any PC, which causes
significant complications with stack scanning and growth. Currently,
the runtime solves this using precise stack maps and register maps at
nearly all PCs, but these extra maps require roughly 5% of the binary.
These extra maps were originally considered worth this space because
they were intended to be used for non-cooperative preemption, but are
now used only for debug call injection.
This CL switches from using precise maps to instead using conservative
frame scanning, much like how non-cooperative preemption works. When a
call is injected, the runtime flushes all potential pointer registers
to the stack, and then treats that frame as well as the interrupted
frame conservatively.
The limitation of conservative frame scanning is that we cannot grow
the goroutine stack. That's acceptable because the previous CL switched to
performing debug calls on a new goroutine, where they are free to grow
the stack.
With this CL, there are no remaining uses of precise register maps
(though we still use the unsafe-point information that's encoded in
the register map PCDATA stream), and stack maps are only used at call
sites.
For #36365.
Change-Id: Ie217b6711f3741ccc437552d8ff88f961a73cee0
Reviewed-on: https://go-review.googlesource.com/c/go/+/229300
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
I took some of the infrastructure from Austin's lock logging CL
https://go-review.googlesource.com/c/go/+/192704 (with deadlock
detection from the logs), and developed a setup to give static lock
ranking for runtime locks.
Static lock ranking establishes a documented total ordering among locks,
and then reports an error if the total order is violated. This can
happen if a deadlock happens (by acquiring a sequence of locks in
different orders), or if just one side of a possible deadlock happens.
Lock ordering deadlocks cannot happen as long as the lock ordering is
followed.
Along the way, I found a deadlock involving the new timer code, which Ian fixed
via https://go-review.googlesource.com/c/go/+/207348, as well as two other
potential deadlocks.
See the constants at the top of runtime/lockrank.go for the static
lock ranking that I ended up with, along with some comments. This is
great documentation of the current intended lock ordering when acquiring
multiple locks in the runtime.
I also added an array lockPartialOrder[] which shows and enforces the
current partial ordering among locks (which is embedded within the total
ordering). This is more specific about the dependencies among locks.
I don't try to check the ranking within a lock class with multiple locks
that can be acquired at the same time (i.e. check the ranking when
multiple hchan locks are acquired).
Currently, I am doing a lockInit() call to set the lock rank of most
locks. Any lock that is not otherwise initialized is assumed to be a
leaf lock (a very high rank lock), so that eliminates the need to do
anything for a bunch of locks (including all architecture-dependent
locks). For two locks, root.lock and notifyList.lock (only in the
runtime/sema.go file), it is not as easy to do lock initialization, so
instead, I am passing the lock rank with the lock calls.
For Windows compilation, I needed to increase the StackGuard size from
896 to 928 because of the new lock-rank checking functions.
Checking of the static lock ranking is enabled by setting
GOEXPERIMENT=staticlockranking before doing a run.
To make sure that the static lock ranking code has no overhead in memory
or CPU when not enabled by GOEXPERIMENT, I changed 'go build/install' so
that it defines a build tag (with the same name) whenever any experiment
has been baked into the toolchain (by checking Expstring()). This allows
me to avoid increasing the size of the 'mutex' type when static lock
ranking is not enabled.
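A hedged sketch of the two initialization styles (rank names
illustrative):

    // Most locks: rank assigned once, at init.
    lockInit(&sched.lock, lockRankSched)

    // The two sema.go locks: rank passed at the call site instead.
    lockWithRank(&root.lock, lockRankRoot)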
Fixes #38029
Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Having an mcache field in both m and p is confusing, so remove it from m.
Always use the mcache field from p. Use a new variable mcache0 during bootstrap.
Change-Id: If2cba9f8bb131d911d512b61fd883a86cf62cc98
Reviewed-on: https://go-review.googlesource.com/c/go/+/205239
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Print the current SP and (old) stack bounds when the stack grows
too large. This helps to identify the problem: whether a large
stack is used, or something else goes wrong.
For #35470.
Change-Id: I34a4064d5c7280978391d835e171b90d06f87222
Reviewed-on: https://go-review.googlesource.com/c/go/+/207351
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
This change renames the "round" function to the more appropriately named
"alignUp" which rounds an integer up to the next multiple of a power of
two.
This change also adds the alignDown function, which is almost like
alignUp but rounds down to the previous multiple of a power of two.
With these two functions, we also replace manual rounding code
with them where we can.
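The two helpers reduce to simple bit-mask operations; a minimal
sketch (a must be a power of 2):

    // alignUp rounds n up to a multiple of a.
    func alignUp(n, a uintptr) uintptr {
        return (n + a - 1) &^ (a - 1)
    }

    // alignDown rounds n down to a multiple of a.
    func alignDown(n, a uintptr) uintptr {
        return n &^ (a - 1)
    }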
Change-Id: Ie1487366280484dcb2662972b01b4f7135f72fec
Reviewed-on: https://go-review.googlesource.com/c/go/+/190618
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
This adds support for pausing a running G by sending a signal to its
M.
The main complication is that we want to target a G, but can only send
a signal to an M. Hence, the protocol we use is to simply mark the G
for preemption (which we already do) and send the M a "wake up and
look around" signal. The signal checks if it's running a G with a
preemption request and stops it if so in the same way that stack check
preemptions stop Gs. Since the preemption may fail (the G could be
moved or the signal could arrive at an unsafe point), we keep a count
of the number of received preemption signals. This lets suspendG detect
if its request failed and should be retried, without an explicit
acknowledgement channel back from the signal handler.
For #10958, #24543.
Change-Id: I3e1538d5ea5200aeb434374abb5d5fdc56107e53
Reviewed-on: https://go-review.googlesource.com/c/go/+/201760
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
When everything is working correctly, any pointer the garbage
collector encounters can only point into a fully initialized heap
span, since the span must have been initialized before that pointer
could escape the heap allocator and become visible to the GC.
However, in various cases, we try to be defensive against bad
pointers. In findObject, this is just a sanity check: we never expect
to find a bad pointer, but programming errors can lead to them. In
spanOfHeap, we don't necessarily trust the pointer and we're trying to
check if it really does point to the heap, though it should always
point to something. Conservative scanning takes this to a new level,
since it can only guess that a word may be a pointer and verify this.
In all of these cases, we have a problem that the span lookup and
check can race with span initialization, since the span becomes
visible to lookups before it's fully initialized.
Furthermore, we're about to start initializing the span without the
heap lock held, which is going to introduce races where accesses were
previously protected by the heap lock.
To address this, this CL makes accesses to mspan.state atomic, and
ensures that the span is fully initialized before setting the state to
mSpanInUse. All loads are now atomic, and in any case where we don't
trust the pointer, it first atomically loads the span state and checks
that it's mSpanInUse, after which it will have synchronized with span
initialization and can safely check the other span fields.
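A hedged sketch of the resulting pattern in spanOfHeap (atomically
load the state before trusting any other span field):

    func spanOfHeap(p uintptr) *mspan {
        s := spanOf(p)
        // We don't trust p, so check the atomically-loaded state
        // first; only then is it safe to read the other fields.
        if s == nil || s.state.get() != mSpanInUse ||
            p < s.base() || p >= s.limit {
            return nil
        }
        return s
    }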
For #10958, #24543, but a good fix in general.
Change-Id: I518b7c63555b02064b98aa5f802c92b758fef853
Reviewed-on: https://go-review.googlesource.com/c/go/+/203286
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
We check whether an M is preemptible in a surprising number of places.
Put it in one function.
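A hedged sketch of the consolidated check (the runtime names it
canPreemptM; body shown roughly):

    // canPreemptM reports whether mp is in a state safe to preempt.
    func canPreemptM(mp *m) bool {
        return mp.locks == 0 && mp.mallocing == 0 &&
            mp.preemptoff == "" && mp.p.ptr().status == _Prunning
    }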
For #10958, #24543.
Change-Id: I305090fdb1ea7f7a55ffe25851c1e35012d0d06c
Reviewed-on: https://go-review.googlesource.com/c/go/+/201439
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
We're about to introduce asynchronous safe points, where we won't have
precise pointer maps for all stack frames. That's okay for scanning
the stack (conservatively), but not for shrinking the stack.
Hence, this CL prepares for this by only shrinking the stack as part
of the stack scan if the goroutine is stopped at a synchronous safe
point. Otherwise, it queues up the stack shrink for the next
synchronous safe point.
We already have one condition under which we can't shrink the stack
for very similar reasons: syscalls. Currently, we just give up on
shrinking the stack if it's in a syscall. But with this mechanism, we
defer that stack shrink until the next synchronous safe point.
For #10958, #24543.
Change-Id: Ifa1dec6f33fdf30f9067be2ce3f7ab8a7f62ce38
Reviewed-on: https://go-review.googlesource.com/c/go/+/201438
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
When we copy a stack of a goroutine blocked in a channel operation, we
have to be very careful because other goroutines may be writing to
that goroutine's stack. To handle this, stack copying acquires the
locks for the channels a goroutine is waiting on.
One complication is that stack growth may happen while a goroutine
holds these locks, in which case stack copying must *not* acquire
these locks because that would self-deadlock.
Currently, stack growth never acquires these locks because stack
growth only happens when a goroutine is running, which means it's
either not blocking on a channel or it's holding the channel locks
already. Stack shrinking always acquires these locks because shrinking
happens asynchronously, so the goroutine is never running, so there
are either no locks or they've been released by the goroutine.
However, we're about to change when stack shrinking can happen, which
is going to break the current rules. Rather than find a new way to
derive whether to acquire these locks or not, this CL simply adds a
flag to the g struct that indicates that stack copying should acquire
channel locks. This flag is set while the goroutine is blocked on a
channel op.
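A hedged sketch of the new flag on the g struct:

    type g struct {
        // ...existing fields...

        // activeStackChans indicates that there are unlocked channels
        // pointing into this goroutine's stack. If true, stack copying
        // needs to acquire channel locks to protect these areas of the
        // stack.
        activeStackChans bool
    }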
For #10958, #24543.
Change-Id: Ia2ac8831b1bfda98d39bb30285e144c4f7eaf9ab
Reviewed-on: https://go-review.googlesource.com/c/go/+/172982
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Currently, the process of suspending a goroutine is tied to stack
scanning. In preparation for non-cooperative preemption, this CL
abstracts this into general purpose suspendG/resumeG functions.
suspendG and resumeG closely follow the existing scang and restartg
functions with one exception: the addition of a _Gpreempted status.
Currently, preemption tasks (stack scanning) are carried out by the
target goroutine if it's in _Grunning. In this new approach, the task
is always carried out by the goroutine that called suspendG. Thus, we
need a reliable way to drive the target goroutine out of _Grunning
until the requesting goroutine is ready to resume it. The new
_Gpreempted state provides the handshake: when a runnable goroutine
responds to a preemption request, it now parks itself and enters
_Gpreempted. The requesting goroutine races to put it in _Gwaiting,
which gives it ownership, but also the responsibility to start it
again.
This CL adds several TODOs about improving the synchronization on the
G status. The existing code already has these problems; we're just
taking note of them.
The next CL will remove the now-dead scang and preemptscan.
For #10958, #24543.
Change-Id: I16dbf87bea9d50399cc86719c156f48e67198f16
Reviewed-on: https://go-review.googlesource.com/c/go/+/201137
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Currently, shrinkstack will not shrink a stack on Windows if
gp.m.libcallsp != 0. In general, we can't shrink stacks in syscalls
because the syscall may hold pointers into the stack, and in principle
this is supposed to be preventing that for libcall-based syscalls
(which are direct syscalls from the runtime). But this test is
actually broken and has been for a long time. That turns out to be
okay because it also appears it's not necessary.
This test is racy. g.m points to whatever M the G was last running on,
even if the G is in a blocked state, and that M could be doing
anything, including making libcalls. Hence, observing that libcallsp
== 0 at one moment in shrinkstack is no guarantee that it won't become
non-zero while we're shrinking the stack, and vice-versa.
It's also weird that this check is only performed on Windows, given
that we now use libcalls on macOS, Solaris, and AIX.
This check was added when stack shrinking was first implemented in CL
69580044. The history of that CL (though not the final version)
suggests this was necessary for libcalls that happened on Go user
stacks, which we never do now because of the limited stack space.
It could also be defending against user stack pointers passed to
libcall system calls from blocked Gs. But the runtime isn't allowed to
keep pointers into the user stack for blocked Gs on any OS, so it's
not clear this would be of any value.
Hence, this check seems to be simply unnecessary.
Rather than simply remove it, this CL makes it defensive. We can't do
anything about blocked Gs, since it doesn't even make sense to look at
their M, but if a G tries to shrink its own stack while in a libcall,
that indicates a bug in the libcall code. This CL makes shrinkstack
panic in this case.
For #10958, #24543, since those are going to rearrange how we decide
that it's safe to shrink a stack.
Change-Id: Ia865e1f6340cff26637f8d513970f9ebb4735c6d
Reviewed-on: https://go-review.googlesource.com/c/go/+/173724
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
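As an illustrative sketch (hypothetical functions a and b; the real
lowering happens inside the compiler), a body like

    func f(cond bool) {
        defer a()
        if cond {
            defer b()
        }
        // ...
    }

is compiled roughly as

    func f(cond bool) {
        var deferBits uint8
        deferBits |= 1 << 0 // reached "defer a()"
        if cond {
            deferBits |= 1 << 1 // reached "defer b()"
        }
        // ...function body...
        // Exit: run reached defers in reverse order.
        if deferBits&(1<<1) != 0 {
            deferBits &^= 1 << 1
            b()
        }
        if deferBits&(1<<0) != 0 {
            deferBits &^= 1 << 0
            a()
        }
    }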
When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.
In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).
I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().
The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).
Cost of defer statement [ go test -run NONE -bench BenchmarkDefer$ runtime ]
With normal (stack-allocated) defers only: 35.4 ns/op
With open-coded defers: 5.6 ns/op
Cost of function call alone (remove defer keyword): 4.4 ns/op
Text size increase (including funcdata) for go binary without/with open-coded defers: 0.09%
The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.
The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:
Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
Without open-coded defers: 62.0 ns/op
With open-coded defers: 255 ns/op
A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:
CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
Without open-coded defers: 443 ns/op
With open-coded defers: 347 ns/op
Updates #14939 (defer performance)
Updates #34481 (design doc)
Change-Id: I63b1a60d1ebf28126f55ee9fd7ecffe9cb23d1ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/202340
Reviewed-by: Austin Clements <austin@google.com>
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.
In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).
I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().
The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).
Cost of defer statement [ go test -run NONE -bench BenchmarkDefer$ runtime ]
With normal (stack-allocated) defers only: 35.4 ns/op
With open-coded defers: 5.6 ns/op
Cost of function call alone (remove defer keyword): 4.4 ns/op
Text size increase (including funcdata) for go cmd without/with open-coded defers: 0.09%
The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.
The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:
Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
Without open-coded defers: 62.0 ns/op
With open-coded defers: 255 ns/op
A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:
CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
Without open-coded defers: 443 ns/op
With open-coded defers: 347 ns/op
Updates #14939 (defer performance)
Updates #34481 (design doc)
Change-Id: I51a389860b9676cfa1b84722f5fb84d3c4ee9e28
Reviewed-on: https://go-review.googlesource.com/c/go/+/190098
Reviewed-by: Austin Clements <austin@google.com>
This reverts CL 180761
Reason for revert: Reinstate the stack-allocated defer CL.
There was nothing wrong with the CL proper, but stack allocation of defers exposed two other issues.
Issue #32477: Fix has been submitted as CL 181258.
Issue #32498: Possible fix is CL 181377 (not submitted yet).
Change-Id: I32b3365d5026600069291b068bbba6cb15295eb3
Reviewed-on: https://go-review.googlesource.com/c/go/+/181378
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This marks all Go symbols called from assembly in other packages with
"go:linkname" directives to ensure they get ABI wrappers.
Now that we have this go:linkname convention, this also removes the
abi0Syms definition in the runtime, which was used to give morestackc
an ABI0 wrapper. Instead, we now just mark morestackc with a
go:linkname directive.
This was tested with buildall.bash in the default configuration, with
-race, and with -gcflags=all=-d=ssa/intrinsics/off. Since I couldn't
test cgo on non-Linux configurations, I manually grepped for runtime
symbols in runtime/cgo.
Updates #31230.
Change-Id: I6c8aa56be2ca6802dfa2bf159e49c411b9071bf1
Reviewed-on: https://go-review.googlesource.com/c/go/+/179862
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
This reverts commit fff4f599fe.
Reason for revert: Seems to still have issues around GC.
Fixes #32452
Change-Id: Ibe7af629f9ad6a3d5312acd7b066123f484da7f0
Reviewed-on: https://go-review.googlesource.com/c/go/+/180761
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
When a defer is executed at most once in a function body,
we can allocate the defer record for it on the stack instead
of on the heap.
This should make defers like this (which are very common) faster.
This optimization applies to 363 out of the 370 static defer sites
in the cmd/go binary.
name     old time/op  new time/op  delta
Defer-4  52.2ns ± 5%  36.2ns ± 3%  -30.70%  (p=0.000 n=10+10)
Fixes #6980
Update #14939
Change-Id: I697109dd7aeef9e97a9eeba2ef65ff53d3ee1004
Reviewed-on: https://go-review.googlesource.com/c/go/+/171758
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently, shrinkstack will free the stack if the goroutine is dead.
There are only two places that call shrinkstack: scanstack, which will
never call it if the goroutine is dead; and markrootFreeGStacks, which
only calls it on dead goroutines.
Clean this up by separating stack freeing out of shrinkstack.
Change-Id: I7d7891e620550c32a2220833923a025704986681
Reviewed-on: https://go-review.googlesource.com/c/go/+/170890
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
When we set an explicit argmap, we may want only a prefix of that
argmap. Argmap is set when the function is reflect.makeFuncStub or
reflect.methodValueCall. In this case, arglen specifies how much of
the args section is actually live. (It could be either all the args +
results, or just the args.)
Fixes #28750
Change-Id: Idf060607f15a298ac591016994e58e22f7f92d83
Reviewed-on: https://go-review.googlesource.com/c/149217
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>