Commit graph

182 commits

Author SHA1 Message Date
Keith Randall
5a0f2a7a7c cmd/compile: remove gc programs from stack frame objects
This is a two-pronged approach. First, try to keep large objects
off the stack frame. Second, if they do manage to appear anyway,
use straight bitmasks instead of gc programs.

It's generally a good idea to keep large objects out of stack frames,
and keeping gc programs off the stack in particular simplifies the
runtime code a bit.

This CL caps the size of most stack objects at 131072 bytes (on 64-bit archs).
There can still be larger objects if they are allocated by a late pass,
like order, or if they are required to be on the stack, like function
arguments. But the size of the bitmasks for these objects isn't a huge
deal, as we already have (probably several) bitmasks for the frame
liveness map itself.
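
A worked example of why the bitmask cost is tolerable (my arithmetic,
not from the CL), assuming one bitmask bit per pointer-sized word:

	const limit = 131072        // stack object size cap on 64-bit archs
	const words = limit / 8     // 16384 pointer-sized words
	const maskBytes = words / 8 // 2048 bytes of bitmask, ~1.6% of the object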

Change-Id: I6d2bed0e9aa9ac7499955562c6154f9264061359
Reviewed-on: https://go-review.googlesource.com/c/go/+/542815
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2024-11-18 18:43:25 +00:00
Russ Cox
7eeb0a188e runtime: reserve 4kB for system stack on windows-386
The failures in #70288 are consistent with and strongly imply
stack corruption during fault handling, and debug prints show
that the Go code run during fault handling is running about
300 bytes above the bottom of the goroutine stack.
That should be okay, but that implies the DLL code that called
Go's handler was running near the bottom of the stack too,
and maybe it called other deeper things before or after the
Go handler and smashed the stack that way.

stackSystem is already 4096 bytes on amd64;
making it match that on 386 makes the flaky failures go away.
It's a little unsatisfying not to be able to say exactly what is
overflowing the stack, but the circumstantial evidence is
very strong that it's Windows.
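
A sketch of the shape of the change (illustrative only, not the
runtime's actual definition; goos is internal/goos, and other OS
terms are omitted):

	// Reserve a fixed 4096 bytes on Windows regardless of pointer
	// size, so windows-386 matches windows-amd64.
	const stackSystem = goos.IsWindows * 4096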

Fixes #70288.

Change-Id: Ife89385873d5e5062a71629dbfee40825edefa49
Reviewed-on: https://go-review.googlesource.com/c/go/+/627375
Reviewed-by: Ian Lance Taylor <iant@google.com>
Auto-Submit: Russ Cox <rsc@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-11-13 01:24:26 +00:00
Michael Anthony Knyszek
e76353d5a9 runtime: allow the tracer to be reentrant
This change allows the tracer to be reentrant by restructuring the
internals such that writing an event is atomic with respect to stack
growth. Essentially, a core set of functions that are involved in
acquiring a trace buffer and writing to it are all marked nosplit.

Stack growth is currently the only hidden place where the tracer may be
accidentally reentrant, preventing the tracer from being used
everywhere. It already lacks write barriers, lacks allocations, and is
non-preemptible. This change thus makes the tracer fully reentrant,
since the only reentry case it needs to handle is stack growth.

Since the invariants needed to attain this are subtle, this change also
extends the debugTraceReentrancy debug mode to check these invariants as
well. Specifically, the invariants are checked by setting the throwsplit
flag.

A side benefit of this change is it simplifies the trace event writing
API a good bit: there's no need to actually thread the event writer
through things, and most callsites look a bit simpler.
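
A compilable toy illustrating the technique (types and names are
mine, not the runtime's): every function on the event-write path is
marked //go:nosplit, so appending an event can never trigger stack
growth mid-write.

	package tracetoy

	type traceBuf struct {
		arr [64 << 10]byte
		pos int
	}

	//go:nosplit
	func (b *traceBuf) putByte(v byte) {
		b.arr[b.pos] = v
		b.pos++
	}

	//go:nosplit
	func (b *traceBuf) varint(v uint64) {
		for ; v >= 0x80; v >>= 7 {
			b.putByte(byte(v) | 0x80)
		}
		b.putByte(byte(v))
	}

	//go:nosplit
	func (b *traceBuf) event(ev byte, args ...uint64) {
		b.putByte(ev)
		for _, a := range args {
			b.varint(a)
		}
	}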

Change-Id: I7c329fb7a6cb936bd363c44cf882ea0a925132f3
Reviewed-on: https://go-review.googlesource.com/c/go/+/587599
Reviewed-by: Austin Clements <austin@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-07-25 14:38:21 +00:00
David Chase
fc5073bc15 runtime,internal: move runtime/internal/sys to internal/runtime/sys
Cleanup and friction reduction

For #65355.

Change-Id: Ia14c9dc584a529a35b97801dd3e95b9acc99a511
Reviewed-on: https://go-review.googlesource.com/c/go/+/600436
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2024-07-23 19:05:35 +00:00
Michael Anthony Knyszek
ca1d2ead5d runtime: skip tracing events that would cause reentrancy
Some of the newly added experimental events have a problem: they
might be emitted during stack growth. This is, to my knowledge, the only
restriction on the tracer, because the tracer otherwise prevents
preemption, avoids allocation, and avoids write barriers. However, the
stack can grow from within the tracer. This leads to
tracing-during-tracing which can result in lost buffers and broken event
streams. (There's a debug mode to get a nice error message, but it's
disabled by default.)

This change resolves the problem by skipping writing out these new
events. This results in the new events sometimes being broken (alloc
without a free, free without an alloc) but for now that's OK. Before the
freeze begins we just want to fix broken tests; tools interpreting these
events will be totally in-house to begin with, and if they have to be a
little bit smarter about missing information, that's OK. In the future
we'll have a more robust fix for this, but it appears that it's going to
require making the tracer fully reentrant. (This is not too hard; either
we force flushing all buffers when going reentrant (which is actually
somewhat subtle with respect to event ordering) or we isolate just
the actual event writing to be atomic with respect to stack growth. Both
are just bigger changes on shared codepaths that are scary to land this
late in the release cycle.)
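
A sketch of the stopgap (all names here are assumed, not the
runtime's; only the early return is the point):

	func traceHeapObjectAlloc(addr, size uintptr) {
		if wouldReenterTracer() {
			// Emitting now (e.g. during stack growth) would
			// re-enter the tracer; drop the event. Consumers must
			// tolerate an alloc without a matching free.
			return
		}
		emitExperimentalEvent(evHeapObjectAlloc, addr, size)
	}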

Fixes #67379.

Change-Id: I46bb7e470e61c64ff54ac5aec5554b828c1ca4be
Reviewed-on: https://go-review.googlesource.com/c/go/+/587597
Reviewed-by: Carlos Amedee <carlos@golang.org>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-05-22 22:31:00 +00:00
Michael Anthony Knyszek
724bab1505 runtime: add traceallocfree GODEBUG for alloc/free events in traces
This change adds expensive alloc/free events to traces, guarded by a
GODEBUG that can be set at run time by mutating the GODEBUG environment
variable. This supersedes the alloc/free trace deleted in a previous CL.

There are two parts to this CL.

The first part is adding a mechanism for exposing experimental events
through the tracer and trace parser. This boils down to a new
ExperimentalEvent event type in the parser API which simply reveals the
raw event data for the event. Each experimental event can also be
associated with "experimental data" which is associated with a
particular generation. This experimental data is just exposed as a bag
of bytes that supplements the experimental events.

In the runtime, this CL organizes experimental events by experiment.
An experiment is defined by a set of experimental events and a single
special batch type. Batches of this special type are exposed through the
parser's API as the aforementioned "experimental data".

The second part of this CL is defining the AllocFree experiment, which
defines 9 new experimental events covering heap object alloc/frees, span
alloc/frees, and goroutine stack alloc/frees. It also generates special
batches that contain a type table: a mapping of IDs to type information.
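
Going by the CL title, enabling the experiment should look like the
usual GODEBUG toggle (my guess at usage, mirroring the
tracefpunwindoff command elsewhere in this log):

$ GODEBUG=traceallocfree=1 go test -trace /dev/null ./runtime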

Change-Id: I965c00e3dcfdf5570f365ff89d0f70d8aeca219c
Reviewed-on: https://go-review.googlesource.com/c/go/+/583377
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-05-08 17:47:01 +00:00
Michael Anthony Knyszek
d6a3d093c3 runtime: take a stack trace during tracing only when we own the stack
Currently, the execution tracer may attempt to take a stack trace of a
goroutine whose stack it does not own, for example a goroutine that is
in _Grunnable or _Gwaiting. This is easily fixed in all cases by simply
moving the emission of GoStop and GoBlock events to before the
casgstatus happens. The goroutine status is what is used to signal stack
ownership, and the GC may shrink a goroutine's stack if it can acquire
the scan bit.

Although this is easily fixed, the interaction here is very subtle,
because stack ownership is only implicit in the goroutine's scan status.
To make this invariant more maintainable and less error-prone in the
future, this change adds a GODEBUG setting that checks, at the point of
taking a stack trace, whether the caller owns the goroutine. This check
is not quite perfect because there's no way for the stack tracing code
to know that the _Gscan bit was acquired by the caller, so for
simplicity it assumes that it was the caller that acquired the scan bit.
In all other cases however, we can check for ownership precisely. At the
very least, this check is sufficient to catch the issue this change is
fixing.

To make sure this debug check doesn't bitrot, it's always enabled during
trace testing. This new mode has already caught a few other issues,
which this change also fixes.

One issue that this debug mode caught was that it's not safe to take a
stack trace of a _Gwaiting goroutine that's being unparked.

Another much bigger issue this debug mode caught was the fact that the
execution tracer could try to take a stack trace of a G that was in
_Gwaiting solely to avoid a deadlock in the GC. The execution tracer
already has a partial list of these cases since they're modeled as the
goroutine just executing as normal in the tracer, but this change takes
the list and makes it more formal. In this specific case, we now prevent
the GC from shrinking the stacks of goroutines in this state if tracing
is enabled. The stack traces from these scenarios are too useful to
discard, but there is indeed a race here between the tracer and any
attempt to shrink the stack by the GC.
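
A sketch of the core reordering (casgstatus and the _G* constants are
real runtime identifiers; the event helper name is assumed):

	// before: ownership is released first, so the event's stack
	// trace may walk a stack the GC is free to shrink:
	//     casgstatus(gp, _Grunning, _Grunnable)
	//     emitGoStopEvent(gp)
	//
	// after: take the stack trace while we still own the stack:
	//     emitGoStopEvent(gp)
	//     casgstatus(gp, _Grunning, _Grunnable)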

Change-Id: I019850dabc8cede202fd6dcc0a4b1f16764209fb
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest,gotip-linux-amd64-longtest-race
Reviewed-on: https://go-review.googlesource.com/c/go/+/573155
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
2024-04-05 20:50:21 +00:00
Andy Pan
4c2b1e0feb runtime: migrate internal/atomic to internal/runtime
For #65355

Change-Id: I65dd090fb99de9b231af2112c5ccb0eb635db2be
Reviewed-on: https://go-review.googlesource.com/c/go/+/560155
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ibrahim Bazoka <ibrahimbazoka729@gmail.com>
Auto-Submit: Emmanuel Odeke <emmanuel@orijtech.com>
2024-03-25 19:53:03 +00:00
Austin Clements
caf9e15fb7 runtime: drop stack-allocated pcvalueCaches
Now that pcvalue keeps its cache on the M, we can drop all of the
stack-allocated pcvalueCaches and stop carefully passing them around
between lots of operations. This significantly simplifies a fair
amount of code and makes several structures smaller.

This series of changes has no statistically significant effect on any
runtime Stack benchmarks.

I also experimented with making the cache larger, now that the impact
is limited to the M struct, but wasn't able to measure any
improvements.

This is a re-roll of CL 515277
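
The flavor of the simplification (signatures approximate, from the
surrounding code rather than quoted from this CL):

	// before: every caller threads a cache through
	//     func pcvalue(f funcInfo, off uint32, targetpc uintptr,
	//             cache *pcvalueCache, strict bool) (int32, uintptr)
	// after: the cache lives on the M, so the parameter disappears
	//     func pcvalue(f funcInfo, off uint32, targetpc uintptr,
	//             strict bool) (int32, uintptr)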

Change-Id: Ia27529302f81c1c92fb9c3a7474739eca80bfca1
Reviewed-on: https://go-review.googlesource.com/c/go/+/520064
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
2023-08-21 21:06:52 +00:00
Austin Clements
5f674f64e4 Revert "runtime: drop stack-allocated pcvalueCaches"
This reverts CL 515277

Change-Id: Ie10378eed4993cb69f4a9b43a38af32b9d743155
Reviewed-on: https://go-review.googlesource.com/c/go/+/516855
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2023-08-07 22:27:50 +00:00
Austin Clements
aca6577196 runtime: drop stack-allocated pcvalueCaches
Now that pcvalue keeps its cache on the M, we can drop all of the
stack-allocated pcvalueCaches and stop carefully passing them around
between lots of operations. This significantly simplifies a fair
amount of code and makes several structures smaller.

This series of changes has no statistically significant effect on any
runtime Stack benchmarks.

I also experimented with making the cache larger, now that the impact
is limited to the M struct, but wasn't able to measure any
improvements.

Change-Id: I4719ebf347c7150a05e887e75a238e23647c20cd
Reviewed-on: https://go-review.googlesource.com/c/go/+/515277
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
2023-08-07 19:31:26 +00:00
Matthew Dempsky
9eb1d5317b runtime: refactor defer processing
This CL refactors gopanic, Goexit, and deferreturn to share a common
state machine for processing pending defers. The new state machine
removes a lot of redundant code and does overall less work.

It should also make it easier to implement further optimizations
(e.g., TODOs added in this CL).

Change-Id: I71d3cc8878a6f951d8633505424a191536c8e6b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/513837
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-07-31 16:52:06 +00:00
Felix Geisendörfer
47e6fd05f7 runtime: remove systemstack logic from adjustframe
Remove the special case that skipped part of the adjustframe logic for
systemstack (aka FuncID_systemstack_switch). It was introduced in 2014
by 9198ed4bd6 but no longer seems to be needed.

Updates #59692

Change-Id: I2368d64f9bb28ced4e7f15c9b15dac7a29194389
Reviewed-on: https://go-review.googlesource.com/c/go/+/489116
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-05-03 14:34:50 +00:00
Felix Geisendörfer
6dca1a29ab runtime: add test for systemstack frame pointer adjustment
Add TestSystemstackFramePointerAdjust as a regression test for CL
489015.

By turning stackPoisonCopy into a var instead of const and introducing
the ShrinkStackAndVerifyFramePointers() helper function, we are able to
trigger the exact combination of events that can crash traceEvent() if
systemstack restores a frame pointer that is pointing into the old
stack.

Updates #59692

Change-Id: I60fc6940638077e3b60a81d923b5f5b4f6d8a44c
Reviewed-on: https://go-review.googlesource.com/c/go/+/489115
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
2023-05-03 14:34:04 +00:00
Felix Geisendörfer
2380d17aea runtime: fix systemstack frame pointer adjustment
Change adjustframe to adjust the frame pointer of systemstack (aka
FuncID_systemstack_switch) before returning early.

Without this fix it is possible for traceEvent() to crash when using
frame pointer unwinding. The issue occurs when a goroutine calls
systemstack in order to call shrinkstack. While returning, systemstack
will restore the unadjusted frame pointer from its frame as part of its
epilogue. If the callee of systemstack then triggers a traceEvent, it
will try to unwind into the old stack. This can lead to a crash if the
memory of the old stack has been reused or freed in the meantime.

The most common situation in which this will manifest is when
gcAssistAlloc() invokes gcAssistAlloc1() on systemstack() and performs a
shrinkstack() followed by a traceGCMarkAssistDone() or Gosched()
triggering traceEvent().
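
Roughly, the fix (a sketch under assumed details; adjustpointer and
frame.varp are real identifiers in the stack-copy code):

	// In adjustframe, before returning early for systemstack_switch
	// frames, also fix up the saved frame pointer so the epilogue
	// restores a pointer into the new stack:
	//     adjustpointer(adjinfo, unsafe.Pointer(frame.varp))
	//     return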

See CL 489115 for a deterministic test case that triggers the issue.
Meanwhile the problem can frequently be observed using the command
below:

$ GODEBUG=tracefpunwindoff=0 ../bin/go test -trace /dev/null -run TestDeferHeapAndStack ./runtime
SIGSEGV: segmentation violation
PC=0x45f977 m=14 sigcode=128

goroutine 0 [idle]:
runtime.fpTracebackPCs(...)
	.../go/src/runtime/trace.go:945
runtime.traceStackID(0xcdab904677a?, {0x7f1584346018, 0x0?, 0x80}, 0x0?)
	.../go/src/runtime/trace.go:917 +0x217 fp=0x7f1565ffab00 sp=0x7f1565ffaab8 pc=0x45f977
runtime.traceEventLocked(0x0?, 0x0?, 0x0?, 0xc00003dbd0, 0x12, 0x0, 0x1, {0x0, 0x0, 0x0})
	.../go/src/runtime/trace.go:760 +0x285 fp=0x7f1565ffab78 sp=0x7f1565ffab00 pc=0x45ef45
runtime.traceEvent(0xf5?, 0x1, {0x0, 0x0, 0x0})
	.../go/src/runtime/trace.go:692 +0xa9 fp=0x7f1565ffabe0 sp=0x7f1565ffab78 pc=0x45ec49
runtime.traceGoPreempt(...)
	.../go/src/runtime/trace.go:1535
runtime.gopreempt_m(0xc000328340?)
	.../go/src/runtime/proc.go:3551 +0x45 fp=0x7f1565ffac20 sp=0x7f1565ffabe0 pc=0x4449a5
runtime.newstack()
	.../go/src/runtime/stack.go:1077 +0x3cb fp=0x7f1565ffadd0 sp=0x7f1565ffac20 pc=0x455feb
runtime.morestack()
	.../go/src/runtime/asm_amd64.s:593 +0x8f fp=0x7f1565ffadd8 sp=0x7f1565ffadd0 pc=0x47644f

goroutine 19 [running]:
runtime.traceEvent(0x2c?, 0xffffffffffffffff, {0x0, 0x0, 0x0})
	.../go/src/runtime/trace.go:669 +0xe8 fp=0xc0006e6c28 sp=0xc0006e6c20 pc=0x45ec88
runtime.traceGCMarkAssistDone(...)
	.../go/src/runtime/trace.go:1497
runtime.gcAssistAlloc(0xc0003281a0)
	.../go/src/runtime/mgcmark.go:517 +0x27d fp=0xc0006e6c88 sp=0xc0006e6c28 pc=0x421a1d
runtime.deductAssistCredit(0x0?)
	.../go/src/runtime/malloc.go:1287 +0x54 fp=0xc0006e6cb0 sp=0xc0006e6c88 pc=0x40fed4
runtime.mallocgc(0x400, 0x7a9420, 0x1)
	.../go/src/runtime/malloc.go:1002 +0xc9 fp=0xc0006e6d18 sp=0xc0006e6cb0 pc=0x40f709
runtime.newobject(0xb3?)
	.../go/src/runtime/malloc.go:1324 +0x25 fp=0xc0006e6d40 sp=0xc0006e6d18 pc=0x40ffc5
runtime_test.deferHeapAndStack(0xb4)
	.../go/src/runtime/stack_test.go:924 +0x165 fp=0xc0006e6e20 sp=0xc0006e6d40 pc=0x75c2a5

Fixes #59692

Co-Authored-By: Cherry Mui <cherryyz@google.com>
Co-Authored-By: Michael Knyszek <mknyszek@google.com>
Co-Authored-By: Nick Ripley <nick.ripley@datadoghq.com>
Change-Id: I1c0c28327fc2fac0b8cfdbaa72e25584331be31e
Reviewed-on: https://go-review.googlesource.com/c/go/+/489015
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
2023-04-30 16:27:56 +00:00
Austin Clements
87272bd1a1 runtime: tidy _Stack* constant naming
For #59670.

Change-Id: I0efa743edc08e48dc8d906803ba45e9f641369db
Reviewed-on: https://go-review.googlesource.com/c/go/+/486977
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
2023-04-21 19:29:00 +00:00
Austin Clements
0f099a4bc5 runtime, cmd: rationalize StackLimit and StackGuard
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.

Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.

This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.

                    before after
stackNosplit           800   800
_StackGuard            928   928
stackNosplit -race    1728  1600
_StackGuard -race     1856  1728
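
The inverted derivation can be reconstructed from the table (my
simplification; per-OS additions are ignored):

	stackNosplit = StackNosplitBase * stackGuardMultiplier
	_StackGuard  = stackNosplit + 128

	// default: 800*1 + 128 = 928;  -race: 800*2 + 128 = 1728.
	// The old scheme scaled the guard instead: 928*2 = 1856,
	// nosplit 1856 - 128 = 1728.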

For #59670.

Change-Id: Ia94094c5e47897e7c088d24b4a5e33f5c2768db5
Reviewed-on: https://go-review.googlesource.com/c/go/+/486976
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-04-21 19:28:56 +00:00
Austin Clements
03ad1f1a34 internal/abi, runtime, cmd: merge StackSmall, StackBig consts into internal/abi
For #59670.

Change-Id: I91448363be2fc678964ce119d85cd5fae34a14da
Reviewed-on: https://go-review.googlesource.com/c/go/+/486975
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
2023-04-21 19:28:53 +00:00
Austin Clements
9754521157 internal/abi, runtime, cmd: merge funcID_* consts into internal/abi
For #59670.

Change-Id: I517e97ea74cf232e5cfbb77b127fa8804f74d84b
Reviewed-on: https://go-review.googlesource.com/c/go/+/485495
Reviewed-by: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
2023-04-21 19:28:44 +00:00
Austin Clements
466e6dae95 Revert "internal/abi, runtime, cmd: merge StackSmall, StackBig consts into internal/abi"
This reverts commit CL 486379.

Submitted out of order and breaks bootstrap.

Change-Id: Ie20a61cc56efc79a365841293ca4e7352b02d86b
Reviewed-on: https://go-review.googlesource.com/c/go/+/486917
TryBot-Bypass: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
2023-04-20 16:19:49 +00:00
Austin Clements
d11ff3f081 Revert "runtime, cmd: rationalize StackLimit and StackGuard"
This reverts commit CL 486380.

Submitted out of order and breaks bootstrap.

Change-Id: I67bd225094b5c9713b97f70feba04d2c99b7da76
Reviewed-on: https://go-review.googlesource.com/c/go/+/486916
Reviewed-by: David Chase <drchase@google.com>
TryBot-Bypass: Austin Clements <austin@google.com>
2023-04-20 16:19:35 +00:00
Austin Clements
df777cfa15 Revert "runtime: tidy _Stack* constant naming"
This reverts commit CL 486381.

Submitted out of order and breaks bootstrap.

Change-Id: Ia472111cb966e884a48f8ee3893b3bf4b4f4f875
Reviewed-on: https://go-review.googlesource.com/c/go/+/486915
Reviewed-by: David Chase <drchase@google.com>
TryBot-Bypass: Austin Clements <austin@google.com>
2023-04-20 16:19:25 +00:00
Austin Clements
ef8e3b7fa4 runtime: tidy _Stack* constant naming
For #59670.

Change-Id: I4476d6f92663e8a825d063d6e6a7fc9a2ac99d4d
Reviewed-on: https://go-review.googlesource.com/c/go/+/486381
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-04-20 16:05:23 +00:00
Austin Clements
921699fe5f runtime, cmd: rationalize StackLimit and StackGuard
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.

Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.

This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.

                    before after
stackNosplit           800   800
_StackGuard            928   928
stackNosplit -race    1728  1600
_StackGuard -race     1856  1728

For #59670.

Change-Id: Ibe20825ebe0076bbd7b0b7501177b16c9dbcb79e
Reviewed-on: https://go-review.googlesource.com/c/go/+/486380
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-04-20 16:05:21 +00:00
Austin Clements
f5f7755da8 internal/abi, runtime, cmd: merge StackSmall, StackBig consts into internal/abi
For #59670.

Change-Id: I04a17079b351b9b4999ca252825373c17afb8a88
Reviewed-on: https://go-review.googlesource.com/c/go/+/486379
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-04-20 16:05:19 +00:00
Cherry Zhang
a41a29ad19 runtime: adjust frame pointer on stack copy on ARM64
The frame pointer is enabled on ARM64. When copying stacks, the
saved frame pointers need to be adjusted.

Updates #39524, #40044.
Fixes #58432.

Change-Id: I73651fdfd1a6cccae26a5ce02e7e86f6c2fb9bf7
Reviewed-on: https://go-review.googlesource.com/c/go/+/241158
Reviewed-by: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-04-18 22:58:13 +00:00
Austin Clements
2d99109cfc runtime: replace all callback uses of gentraceback with unwinder
This is a really nice simplification for all of these call sites.

It also achieves a nice performance improvement for stack copying:

goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
                       │   before    │                after                │
                       │   sec/op    │   sec/op     vs base                │
StackCopyPtr-48          89.25m ± 1%   79.78m ± 1%  -10.62% (p=0.000 n=20)
StackCopy-48             83.48m ± 2%   71.88m ± 1%  -13.90% (p=0.000 n=20)
StackCopyNoCache-48      2.504m ± 2%   2.195m ± 1%  -12.32% (p=0.000 n=20)
StackCopyWithStkobj-48   21.66m ± 1%   21.02m ± 2%   -2.95% (p=0.000 n=20)
geomean                  25.21m        22.68m       -10.04%

Updates #54466.
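
Call sites now take roughly this iterator shape (a sketch based on the
unwinder API this series introduced; the flag varies per site):

	var u unwinder
	for u.init(gp, unwindSilentErrors); u.valid(); u.next() {
		// inspect u.frame (a stkframe) directly instead of
		// supplying a callback to gentraceback
	}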

Change-Id: I31715b7b6efd65726940041d3052bb1c0a1186f3
Reviewed-on: https://go-review.googlesource.com/c/go/+/468297
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2023-03-10 17:59:32 +00:00
Keith Randall
893964b972 runtime,cmd/link: increase stack guard space when building with -race
More stuff to do = more stack needed. Bump up the guard space when
building with the race detector.

Fixes #54291

Change-Id: I701bc8800507921bed568047d35b8f49c26e7df7
Reviewed-on: https://go-review.googlesource.com/c/go/+/451217
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2022-11-18 16:26:25 +00:00
cui fliter
969bea8d59 runtime: fix a few function names on comments
Change-Id: I9ef4898d68dfd06618c0bd8e23f81a1d2c77a836
Signed-off-by: cui fliter <imcusg@gmail.com>
Reviewed-on: https://go-review.googlesource.com/c/go/+/447460
Auto-Submit: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Run-TryBot: Michael Pratt <mpratt@google.com>
2022-11-07 19:48:30 +00:00
Youlin Feng
7ae652b7c0 runtime: replace all uses of CtzXX with TrailingZerosXX
Replace all uses of Ctz64/32/8 with TrailingZeros64/32/8, because they
are identical and keeping both is needless duplication. Also rename the
CtzXX functions in 386 assembly code.
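
For reference, the runtime's TrailingZerosXX helpers mirror math/bits;
a runnable illustration:

	package main

	import (
		"fmt"
		"math/bits"
	)

	func main() {
		// lowest set bit of 0b10100000 is bit 5
		fmt.Println(bits.TrailingZeros64(0b10100000)) // 5
	}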

Change-Id: I19290204858083750f4be589bb0923393950ae6d
Reviewed-on: https://go-review.googlesource.com/c/go/+/438935
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
2022-10-18 18:06:27 +00:00
Austin Clements
35026f3732 runtime: consolidate stkframe and its methods into stkframe.go
The stkframe struct and its methods are strewn across different source
files. Since they actually have a pretty coherent theme at this point,
migrate it all into a new file, stkframe.go. There are no code changes
in this CL.

For #54466, albeit rather indirectly.

Change-Id: Ibe53fc4b1106d131005e1c9d491be838a8f14211
Reviewed-on: https://go-review.googlesource.com/c/go/+/424516
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
2022-09-02 19:08:58 +00:00
Austin Clements
b91e373729 runtime: make getStackMap a method of stkframe
This places getStackMap alongside argBytes and argMapInternal as
another method of stkframe.

For #54466, albeit rather indirectly.

Change-Id: I411dda3605dd7f996983706afcbefddf29a68a85
Reviewed-on: https://go-review.googlesource.com/c/go/+/424515
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-09-02 19:08:56 +00:00
Austin Clements
dbf442b1b2 runtime: replace stkframe.arglen/argmap with methods
Currently, stkframe.arglen and stkframe.argmap are populated by
gentraceback under a particular set of circumstances. But because they
can be constructed from other fields in stkframe, they don't need to
be computed eagerly at all. They're also rather misleading, as they're
only part of computing the actual argument map and most callers should
be using getStackMap, which does the rest of the work.

This CL drops these fields from stkframe. It shifts the functions that
used to compute them, getArgInfoFast and getArgInfo, into
corresponding methods stkframe.argBytes and stkframe.argMapInternal.
argBytes is expected to be used by callers that need to know only the
argument frame size, while argMapInternal is used only by argBytes and
getStackMap.

We also move some of the logic from getStackMap into argMapInternal
because the previous split of responsibilities didn't make much sense.
This lets us return just a bitvector from argMapInternal, rather than
both a bitvector, which carries a size, and an "actually use this
size".

The getArgInfoFast function was inlined before (and inl_test checked
this). We drop that requirement from stkframe.argBytes because the
uses of this have shifted and now it's only called from heap dumping
(which never happens) and conservative stack frame scanning (which
very, very rarely happens).

There will be a few follow-up clean-up CLs.

For #54466. This is a nice clean-up on its own, but it also serves to
remove pointers from the traceback state that would eventually become
troublesome write barriers once we stack-rip gentraceback.
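
The resulting method shapes, per the description above (signatures are
my reading, not copied from the CL):

	// size of the argument frame only
	func (frame *stkframe) argBytes() uintptr
	// pointer bitmap for the args; used by argBytes and getStackMap
	func (frame *stkframe) argMapInternal() (bitvector, bool)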

Change-Id: I107f98ed8e7b00185c081de425bbf24af02a4163
Reviewed-on: https://go-review.googlesource.com/c/go/+/424514
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-09-02 19:08:53 +00:00
Ludi Rehak
f15761b50b runtime: fix formula for computing number of padding bytes
In order to prevent false sharing of cache lines, structs are
padded with some number of bytes. These bytes are unused, serving
only to make the size of the struct a multiple of the size of the
cache line.

The current calculation of how much to pad is an overestimate
when the struct size is already a multiple of the cache line size
without padding. For these cases, no padding is necessary, and
the size of the inner pad field should be 0. The bug is that the
pad field is sized to an entire extra cache line, wasting space.

Here is the current formula that can never return 0:
cpu.CacheLinePadSize - unsafe.Sizeof(myStruct{})%cpu.CacheLinePadSize

This change simply mods that calculation by cpu.CacheLinePadSize,
so that 0 will be returned instead of cpu.CacheLinePadSize.
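
A runnable demonstration of both formulas (the cache line size is
hardcoded to 64 as a stand-in for cpu.CacheLinePadSize):

	package main

	import (
		"fmt"
		"unsafe"
	)

	const lineSize = 64 // stand-in for cpu.CacheLinePadSize

	type inner struct {
		a, b, c, d, e, f, g, h uint64 // 64 bytes: already line-aligned
	}

	// old formula: never 0, so this pads a full extra line
	const oldPad = lineSize - unsafe.Sizeof(inner{})%lineSize

	// fixed formula: one more mod, so exact multiples get 0 padding
	const newPad = (lineSize - unsafe.Sizeof(inner{})%lineSize) % lineSize

	func main() {
		fmt.Println(oldPad, newPad) // 64 0
	}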

Change-Id: I26a2b287171bf47a3b9121873b2722f728381b5e
Reviewed-on: https://go-review.googlesource.com/c/go/+/414214
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Joedian Reid <joedian@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-08-19 16:04:12 +00:00
Cuong Manh Le
a719a78c1b runtime: add and use runtime/internal/sys.NotInHeap
Updates #46731

Change-Id: Ic2208c8bb639aa1e390be0d62e2bd799ecf20654
Reviewed-on: https://go-review.googlesource.com/c/go/+/421878
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2022-08-19 00:29:18 +00:00
Cuong Manh Le
5b0ce94c07 runtime: convert g.parkingOnChan to atomic type
Updates #53821

Change-Id: I54de39b984984fb3c160aba5afacb90131fd47c4
Reviewed-on: https://go-review.googlesource.com/c/go/+/424394
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2022-08-17 16:26:22 +00:00
Keith Randall
016d755213 runtime: measure stack usage; start stacks larger if needed
Measure the average stack size used by goroutines at every GC. When
starting a new goroutine, allocate an initial goroutine stack of that
average size. Intuition is that we'll waste at most 2x in stack space
because only half the goroutines can be below average. In turn, we
avoid some of the early stack growth / copying needed in the average
case.

More details in the design doc at: https://docs.google.com/document/d/1YDlGIdVTPnmUiTAavlZxBI1d9pwGQgZT7IKFKlIXohQ/edit?usp=sharing
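
The sizing rule as a sketch under assumed names (the real policy and
rounding live in the runtime and the design doc):

	// Start new goroutine stacks at the GC-observed average,
	// rounded up to a power of two, never below the usual minimum.
	func startingStackSize(avg, min uintptr) uintptr {
		size := min // 2 KiB today
		for size < avg {
			size *= 2
		}
		return size
	}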

name        old time/op  new time/op  delta
Issue18138  95.3µs ± 0%  67.3µs ±13%  -29.35%  (p=0.000 n=9+10)

Fixes #18138

Change-Id: Iba34d22ed04279da7e718bbd569bbf2734922eaa
Reviewed-on: https://go-review.googlesource.com/c/go/+/345889
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
2022-05-12 22:32:42 +00:00
Cuong Manh Le
4388faf964 runtime: use unsafe.Slice in getStackMap
CL 362934 added open-coding for unsafe.Slice, so using it no longer
negatively impacts performance.
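
The pattern in question, runnable (not the actual getStackMap call
site):

	package main

	import (
		"fmt"
		"unsafe"
	)

	func main() {
		arr := [4]byte{1, 2, 3, 4}
		// pointer + length to slice, replacing manual
		// slice-header construction:
		b := unsafe.Slice(&arr[0], len(arr))
		fmt.Println(b) // [1 2 3 4]
	}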

Updates #48798

Change-Id: Ifbabe8bc1cc4349c5bcd11586a11fc99bcb388b1
Reviewed-on: https://go-review.googlesource.com/c/go/+/404974
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-05-11 04:25:28 +00:00
Russ Cox
19309779ac all: gofmt main repo
[This CL is part of a sequence implementing the proposal #51082.
The design doc is at https://go.dev/s/godocfmt-design.]

Run the updated gofmt, which reformats doc comments,
on the main repository. Vendored files are excluded.

For #51082.

Change-Id: I7332f099b60f716295fb34719c98c04eb1a85407
Reviewed-on: https://go-review.googlesource.com/c/go/+/384268
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2022-04-11 16:34:30 +00:00
Meng Zhuo
bb2b16d15e reflect, runtime: add reflect support for regabi on riscv64
This CL adds the regabi support needed for reflect and reimplements
CL 360994, which was reverted.

Change-Id: I140673f09109dd9f72eff70aad64c0aa00d6857a
Reviewed-on: https://go-review.googlesource.com/c/go/+/396077
Reviewed-by: Cherry Mui <cherryyz@google.com>
Trust: mzh <mzh@golangcn.org>
2022-04-01 01:41:42 +00:00
mzh
ad646b33c9 Revert "reflect, runtime: add reflect support for regabi on riscv64"
This reverts commit 56400fc706.

Reason for revert: this CL requires CL 360296 to be merged

Change-Id: I4c48c4d23b73b6e892cf86cbbc864698ebc5c992
Reviewed-on: https://go-review.googlesource.com/c/go/+/396076
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: mzh <mzh@golangcn.org>
Run-TryBot: mzh <mzh@golangcn.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2022-03-28 01:10:35 +00:00
Meng Zhuo
56400fc706 reflect, runtime: add reflect support for regabi on riscv64
This CL adds regabi support needed for reflect.

Change-Id: Ib78f8c7765f03e3a7b46e8b115bf8870b8076e6a
Reviewed-on: https://go-review.googlesource.com/c/go/+/360994
Trust: mzh <mzh@golangcn.org>
Run-TryBot: mzh <mzh@golangcn.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2022-03-27 12:55:16 +00:00
Russ Cox
2580d0e08d all: gofmt -w -r 'interface{} -> any' src
And then revert the bootstrap cmd directories and certain testdata.
And adjust tests as needed.

Not reverting the changes in std that are bootstrapped,
because some of those changes would appear in API docs,
and we want to use any consistently.
Instead, rewrite 'any' to 'interface{}' in cmd/dist for those directories
when preparing the bootstrap copy.

A few files changed as a result of running gofmt -w
not because of interface{} -> any but because they
hadn't been updated for the new //go:build lines.

Fixes #49884.

Change-Id: Ie8045cba995f65bd79c694ec77a1b3d1fe01bb09
Reviewed-on: https://go-review.googlesource.com/c/go/+/368254
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2021-12-13 18:45:54 +00:00
Austin Clements
71559a6ffd runtime: fix racy stackForceMove check
Currently, newstack loads gp.stackguard0 twice to check for different
poison values. The race window between these two checks can lead to
unintentional stack doubling, and ultimately to stack overflows.

Specifically, newstack checks if stackguard0 is stackPreempt first,
then it checks if it's stackForceMove. If stackguard0 is set to
stackForceMove on entry, but changes to stackPreempt between the two
checks, newstack will incorrectly double the stack allocation.

Fix this by loading stackguard0 exactly once and then checking it
against different poison values.

The effect of this is relatively minor because stackForceMove is only
used by a small number of runtime tests. I found this because
mayMorestackMove uses stackForceMove aggressively, which makes this
failure mode much more likely.
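
The shape of the fix (a sketch; stackguard0, stackPreempt, and
stackForceMove are real identifiers, the locals are mine):

	// load exactly once, then compare against each poison value
	sg0 := atomic.Loaduintptr(&gp.stackguard0)
	preempt := sg0 == stackPreempt
	forceMove := sg0 == stackForceMove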

Change-Id: I1f8b6a6744e45533580a3f45d7030ec2ec65a5fb
Reviewed-on: https://go-review.googlesource.com/c/go/+/361775
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-11-05 20:59:32 +00:00
Austin Clements
35c7234601 runtime: add always-preempt maymorestack hook
This adds a maymorestack hook that forces a preemption at every
possible cooperative preemption point. This would have helped us catch
several recent preemption-related bugs earlier, including #47302,
#47304, and #47441.
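
The hook is selected with the compiler's maymorestack debug flag, so
usage should look something like this (my recollection, not quoted
from the CL):

$ go test -gcflags='all=-d=maymorestack=runtime.mayMoreStackPreempt' runtime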

For #48297.

Change-Id: Ib82c973589c8a7223900e1842913b8591938fb9f
Reviewed-on: https://go-review.googlesource.com/c/go/+/359796
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: David Chase <drchase@google.com>
2021-11-05 00:52:08 +00:00
fanzha02
6f327f7b88 runtime, syscall: add calls to asan functions
Add explicit address sanitizer instrumentation to the runtime and
syscall packages. The compiler does not instrument the runtime
package. It does instrument the syscall package, but we need to add
a couple of cases that it can't see.

Following the implementation of the asan malloc runtime library,
this patch also allocates extra memory as a redzone around the
returned memory region and marks the redzone as unaddressable to
detect overflows and underflows.
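
Conceptually (the layout note is mine; asanpoison/asanunpoison are the
runtime's hooks into the sanitizer runtime):

	// [ left redzone | user allocation | right redzone ]
	//
	// asanunpoison(user, userSize)   // user region is addressable
	// asanpoison(left, redzoneSize)  // touching a redzone reports
	// asanpoison(right, redzoneSize)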

Updates #44853.

Change-Id: I2753d1cc1296935a66bf521e31ce91e35fcdf798
Reviewed-on: https://go-review.googlesource.com/c/go/+/298614
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: fannie zhang <Fannie.Zhang@arm.com>
2021-11-02 05:35:11 +00:00
Michael Anthony Knyszek
9ac1ee2d46 runtime: track the amount of scannable allocated stack for the GC pacer
This change adds two fields to gcControllerState: stackScan, used for
pacing decisions, and scannableStackSize, which directly tracks the
amount of space allocated for inuse stacks that will be scanned.

scannableStackSize is not updated directly, but is instead flushed from
each P once at least an 8 KiB delta has accumulated. This helps reduce
issues with atomics contention for newly created goroutines. Stack
growth paths are largely unaffected.
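
The flush rule, sketched with assumed field names:

	// accumulate on the P; publish to the global pacer state only
	// once the local delta is large enough to matter
	pp.stackScanDelta += delta
	if pp.stackScanDelta >= 8<<10 || pp.stackScanDelta <= -(8<<10) {
		atomic.Xaddint64(&gcController.scannableStackSize, pp.stackScanDelta)
		pp.stackScanDelta = 0
	}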

StackGrowth-48			51.4ns ± 0%	51.4ns ± 0%	~	(p=0.927 n=10+10)
StackGrowthDeep-48		6.14µs ± 3%	6.25µs ± 4%	~	(p=0.090 n=10+9)
CreateGoroutines-48		273ns ± 1%	273ns ± 1%	~	(p=0.676 n=9+10)
CreateGoroutinesParallel-48	65.5ns ± 5%	66.6ns ± 7%	~	(p=0.340 n=9+9)
CreateGoroutinesCapture-48	2.06µs ± 1%	2.07µs ± 4%	~	(p=0.217 n=10+10)
CreateGoroutinesSingle-48	550ns ± 3%	563ns ± 4%	+2.41%	(p=0.034 n=8+10)

For #44167.

Change-Id: Id1800d41d3a6c211b43aeb5681c57c0dc8880daf
Reviewed-on: https://go-review.googlesource.com/c/go/+/309589
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-10-29 18:35:20 +00:00
Josh Bleecher Snyder
990c9c6cab Revert "runtime: use unsafe.Slice in getStackMap"
This reverts commit golang.org/cl/352953.

Reason for revert: unsafe.Slice is considerably slower.
Part of this is extra safety checks (good), but most of it
is the function call overhead. We should consider open-coding it (#48798).

Impact of this change:

name                   old time/op  new time/op  delta
StackCopyWithStkobj-8  12.1ms ± 5%  11.6ms ± 3%  -4.03%  (p=0.009 n=10+8)

Change-Id: Ib2448e3edac25afd8fb55ffbea073b8b11521bde
Reviewed-on: https://go-review.googlesource.com/c/go/+/354090
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2021-10-05 20:35:54 +00:00
Josh Bleecher Snyder
3100dc1a7f cmd/link,runtime: remove relocations from stkobjs
Use an offset from go.func.* instead.
This removes the last relocation from funcdata symbols,
which lets us simplify that code.
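
My reading of the mechanism (a sketch; moduledata and its gofunc base
are real, the record field name is approximate):

	// a stack object record now stores a small offset instead of a
	// relocated pointer; the runtime resolves it against go.func.*:
	gcdata := (*byte)(unsafe.Pointer(datap.gofunc + uintptr(r.gcdataoff)))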

size      before    after     Δ       %
addr2line 3683218   3680706   -2512   -0.068%
api       4951074   4944850   -6224   -0.126%
asm       4744258   4757586   +13328  +0.281%
buildid   2419986   2418546   -1440   -0.060%
cgo       4218306   4197346   -20960  -0.497%
compile   22132066  22076882  -55184  -0.249%
cover     4432834   4411362   -21472  -0.484%
dist      3111202   3091346   -19856  -0.638%
doc       3583602   3563234   -20368  -0.568%
fix       3023922   3020658   -3264   -0.108%
link      6188034   6164642   -23392  -0.378%
nm        3665826   3646818   -19008  -0.519%
objdump   4015234   4012450   -2784   -0.069%
pack      2155010   2153554   -1456   -0.068%
pprof     13044178  13011522  -32656  -0.250%
test2json 2402146   2383906   -18240  -0.759%
trace     9765410   9736514   -28896  -0.296%
vet       6681250   6655058   -26192  -0.392%
total     104217556 103926980 -290576 -0.279%

relocs    before  after   Δ       %
addr2line 25563   25066   -497    -1.944%
api       18409   17176   -1233   -6.698%
asm       18903   18271   -632    -3.343%
buildid   9513    9233    -280    -2.943%
cgo       17103   16222   -881    -5.151%
compile   64825   60421   -4404   -6.794%
cover     19464   18479   -985    -5.061%
dist      10798   10135   -663    -6.140%
doc       13503   12735   -768    -5.688%
fix       11465   10820   -645    -5.626%
link      23214   21849   -1365   -5.880%
nm        25480   24987   -493    -1.935%
objdump   26610   26057   -553    -2.078%
pack      7951    7665    -286    -3.597%
pprof     63964   60761   -3203   -5.008%
test2json 8735    8389    -346    -3.961%
trace     39639   37180   -2459   -6.203%
vet       25970   24044   -1926   -7.416%
total     431108  409489  -21619  -5.015%

Change-Id: I43c26196a008da6d1cb3a782eea2f428778bd569
Reviewed-on: https://go-review.googlesource.com/c/go/+/353138
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-10-05 19:16:08 +00:00
Josh Bleecher Snyder
a9d5ea650b runtime: use unsafe.Slice in getStackMap
It's not less code, but it is clearer code.

Change-Id: I32239baf92487a56900a4edd8a2593014f37d093
Reviewed-on: https://go-review.googlesource.com/c/go/+/352953
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Trust: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2021-09-29 17:50:31 +00:00