Currently, the execution tracer may attempt to take a stack trace of a
goroutine whose stack it does not own, for example a goroutine in
_Grunnable or _Gwaiting. This is easily fixed in all cases by simply
moving the emission of the GoStop and GoBlock events to before the
casgstatus happens. The goroutine status is what signals stack
ownership, and the GC may shrink a goroutine's stack once it can
acquire the scan bit.
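As a minimal sketch of the reordering, with stand-in types and stubs
(only the order of the two calls reflects the actual change):

    package main

    type g struct{ status uint32 }

    const (
        _Grunning uint32 = 2
        _Gwaiting uint32 = 4
    )

    func traceGoBlock(gp *g)                      {} // stand-in: emits GoBlock, may take a stack trace
    func casgstatus(gp *g, oldval, newval uint32) { gp.status = newval }

    func park(gp *g) {
        // Emit the trace event first, while this M still owns gp's stack.
        traceGoBlock(gp)
        // Only then publish the new status; from here on the GC may
        // acquire the scan bit and shrink gp's stack.
        casgstatus(gp, _Grunning, _Gwaiting)
    }

    func main() { park(&g{status: _Grunning}) }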
Although this is easily fixed, the interaction here is very subtle,
because stack ownership is only implicit in the goroutine's scan
status. To make this invariant more maintainable and less error-prone
in the future, this change adds a GODEBUG setting that checks, at the
point of taking a stack trace, whether the caller owns the goroutine.
The check is not quite perfect because the stack tracing code has no
way to know whether it was the caller that acquired the _Gscan bit, so
for simplicity it assumes that it was. In all other cases, however, we
can check ownership precisely. At the very least, this check is
sufficient to catch the issue this change is fixing.
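A rough sketch of what the ownership assertion could look like (all
names here are stand-ins for the real runtime internals):

    package main

    const _Gscan uint32 = 0x1000

    type g struct{ atomicstatus uint32 }

    var traceCheckStackOwnership = true // stands in for the GODEBUG toggle

    func assertStackOwnership(gp, me *g) {
        if !traceCheckStackOwnership || gp == me {
            return // tracing your own stack is always safe
        }
        // Otherwise the scan bit must be held. We can't verify *who*
        // acquired it, so assume it was the caller.
        if gp.atomicstatus&_Gscan == 0 {
            panic("stack trace of goroutine whose stack we do not own")
        }
    }

    func main() {
        assertStackOwnership(&g{atomicstatus: _Gscan}, &g{})
    }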
To make sure this debug check doesn't bitrot, it's always enabled
during trace testing. This new mode has already caught a few other
issues, which this change also fixes.
One issue that this debug mode caught was that it's not safe to take a
stack trace of a _Gwaiting goroutine that's being unparked.
Another, much bigger issue this debug mode caught was that the
execution tracer could try to take a stack trace of a G that was
placed in _Gwaiting solely to avoid a deadlock in the GC. The
execution tracer already has a partial list of these cases, since
they're modeled as the goroutine simply continuing to execute in the
tracer, and this change makes that list more formal. In this specific
case, we now prevent the GC from shrinking the stacks of goroutines in
this state while tracing is enabled. The stack traces from these
scenarios are too useful to discard, but there is indeed a race here
between the tracer and any attempt by the GC to shrink the stack.
Change-Id: I019850dabc8cede202fd6dcc0a4b1f16764209fb
Cq-Include-Trybots: luci.golang.try:gotip-linux-amd64-longtest,gotip-linux-amd64-longtest-race
Reviewed-on: https://go-review.googlesource.com/c/go/+/573155
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Auto-Submit: Michael Knyszek <mknyszek@google.com>
Now that pcvalue keeps its cache on the M, we can drop all of the
stack-allocated pcvalueCaches and stop carefully passing them around
between lots of operations. This significantly simplifies a fair
amount of code and makes several structures smaller.
This series of changes has no statistically significant effect on any
runtime Stack benchmarks.
I also experimented with making the cache larger, now that the impact
is limited to the M struct, but wasn't able to measure any
improvements.
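A sketch of the simplification, with the runtime types reduced to
stand-ins:

    package main

    type pcvalueCache struct{} // small per-M lookup table in the real code

    type m struct{ pcvalueCache pcvalueCache }

    type g struct{ m *m }

    var curg = &g{m: &m{}}

    func getg() *g { return curg }

    // Before, every caller threaded a stack-allocated cache through:
    //
    //     func pcvalue(f funcInfo, off uint32, targetpc uintptr,
    //         cache *pcvalueCache, strict bool) int32
    //
    // After, the cache is found via the M, so the parameter disappears.
    func pcvalue(targetpc uintptr) int32 {
        cache := &getg().m.pcvalueCache
        _ = cache // ... look up targetpc in the per-M cache ...
        return 0
    }

    func main() { _ = pcvalue(0x1234) }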
This is a re-roll of CL 515277.
Change-Id: Ia27529302f81c1c92fb9c3a7474739eca80bfca1
Reviewed-on: https://go-review.googlesource.com/c/go/+/520064
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Now that pcvalue keeps its cache on the M, we can drop all of the
stack-allocated pcvalueCaches and stop carefully passing them around
between lots of operations. This significantly simplifies a fair
amount of code and makes several structures smaller.
This series of changes has no statistically significant effect on any
runtime Stack benchmarks.
I also experimented with making the cache larger, now that the impact
is limited to the M struct, but wasn't able to measure any
improvements.
Change-Id: I4719ebf347c7150a05e887e75a238e23647c20cd
Reviewed-on: https://go-review.googlesource.com/c/go/+/515277
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
This CL refactors gopanic, Goexit, and deferreturn to share a common
state machine for processing pending defers. The new state machine
removes a lot of redundant code and does overall less work.
It should also make it easier to implement further optimizations
(e.g., TODOs added in this CL).
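A toy sketch of the shared loop (the real code centers on a _panic
state struct with a nextDefer method; names here are simplified):

    package main

    import "fmt"

    type deferredCall struct {
        fn   func()
        next *deferredCall
    }

    type panicState struct {
        defers *deferredCall
    }

    // nextDefer pops the next pending defer. It is shared by the
    // panic, Goexit, and deferreturn paths instead of each one
    // duplicating the list walk.
    func (p *panicState) nextDefer() (func(), bool) {
        d := p.defers
        if d == nil {
            return nil, false
        }
        p.defers = d.next
        return d.fn, true
    }

    func (p *panicState) run() {
        for {
            fn, ok := p.nextDefer()
            if !ok {
                return
            }
            fn()
        }
    }

    func main() {
        p := &panicState{defers: &deferredCall{fn: func() { fmt.Println("deferred") }}}
        p.run()
    }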
Change-Id: I71d3cc8878a6f951d8633505424a191536c8e6b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/513837
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Remove the logic that skipped part of adjustframe for systemstack (aka
FuncID_systemstack_switch). This was introduced in 2014 by
9198ed4bd6 but no longer seems to be needed.
Updates #59692
Change-Id: I2368d64f9bb28ced4e7f15c9b15dac7a29194389
Reviewed-on: https://go-review.googlesource.com/c/go/+/489116
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Add TestSystemstackFramePointerAdjust as a regression test for CL
489015.
By turning stackPoisonCopy into a var instead of a const and
introducing the ShrinkStackAndVerifyFramePointers() helper function,
we are able to trigger the exact combination of events that can crash
traceEvent() when systemstack restores a frame pointer that points
into the old stack.
Updates #59692
Change-Id: I60fc6940638077e3b60a81d923b5f5b4f6d8a44c
Reviewed-on: https://go-review.googlesource.com/c/go/+/489115
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
Change adjustframe to adjust the frame pointer of systemstack (aka
FuncID_systemstack_switch) before returning early.
Without this fix it is possible for traceEvent() to crash when using
frame pointer unwinding. The issue occurs when a goroutine calls
systemstack in order to call shrinkstack. While returning, systemstack
will restore the unadjusted frame pointer from its frame as part of its
epilogue. If the callee of systemstack then triggers a traceEvent, it
will try to unwind into the old stack. This can lead to a crash if the
memory of the old stack has been reused or freed in the meantime.
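The shape of the fix, with runtime types abbreviated to stand-ins
(only the control flow is meaningful):

    package main

    import "unsafe"

    type funcID uint8

    const funcID_systemstack_switch funcID = 1

    type stkframe struct {
        funcID  funcID
        savedFP *uintptr // slot on the old stack holding the saved frame pointer
    }

    type adjustinfo struct{ delta uintptr }

    func adjustpointer(adjinfo *adjustinfo, vpp unsafe.Pointer) {
        *(*uintptr)(vpp) += adjinfo.delta // slide the pointer to the new stack
    }

    func adjustframe(frame *stkframe, adjinfo *adjustinfo) {
        if frame.funcID == funcID_systemstack_switch {
            // Adjust the saved frame pointer before the early return;
            // otherwise systemstack's epilogue restores a pointer into
            // the old (possibly freed) stack.
            adjustpointer(adjinfo, unsafe.Pointer(frame.savedFP))
            return
        }
        // ... adjust locals, args, and saved registers as usual ...
    }

    func main() {
        slot := uintptr(0x1000)
        adjustframe(
            &stkframe{funcID: funcID_systemstack_switch, savedFP: &slot},
            &adjustinfo{delta: 0x100},
        )
    }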
The most common situation in which this will manifest is when
gcAssistAlloc() invokes gcAssistAlloc1() on systemstack() and performs
a shrinkstack() followed by a traceGCMarkAssistDone() or Gosched()
triggering traceEvent().
See CL 489115 for a deterministic test case that triggers the issue.
Meanwhile the problem can frequently be observed using the command
below:
$ GODEBUG=tracefpunwindoff=0 ../bin/go test -trace /dev/null -run TestDeferHeapAndStack ./runtime
SIGSEGV: segmentation violation
PC=0x45f977 m=14 sigcode=128
goroutine 0 [idle]:
runtime.fpTracebackPCs(...)
.../go/src/runtime/trace.go:945
runtime.traceStackID(0xcdab904677a?, {0x7f1584346018, 0x0?, 0x80}, 0x0?)
.../go/src/runtime/trace.go:917 +0x217 fp=0x7f1565ffab00 sp=0x7f1565ffaab8 pc=0x45f977
runtime.traceEventLocked(0x0?, 0x0?, 0x0?, 0xc00003dbd0, 0x12, 0x0, 0x1, {0x0, 0x0, 0x0})
.../go/src/runtime/trace.go:760 +0x285 fp=0x7f1565ffab78 sp=0x7f1565ffab00 pc=0x45ef45
runtime.traceEvent(0xf5?, 0x1, {0x0, 0x0, 0x0})
.../go/src/runtime/trace.go:692 +0xa9 fp=0x7f1565ffabe0 sp=0x7f1565ffab78 pc=0x45ec49
runtime.traceGoPreempt(...)
.../go/src/runtime/trace.go:1535
runtime.gopreempt_m(0xc000328340?)
.../go/src/runtime/proc.go:3551 +0x45 fp=0x7f1565ffac20 sp=0x7f1565ffabe0 pc=0x4449a5
runtime.newstack()
.../go/src/runtime/stack.go:1077 +0x3cb fp=0x7f1565ffadd0 sp=0x7f1565ffac20 pc=0x455feb
runtime.morestack()
.../go/src/runtime/asm_amd64.s:593 +0x8f fp=0x7f1565ffadd8 sp=0x7f1565ffadd0 pc=0x47644f
goroutine 19 [running]:
runtime.traceEvent(0x2c?, 0xffffffffffffffff, {0x0, 0x0, 0x0})
.../go/src/runtime/trace.go:669 +0xe8 fp=0xc0006e6c28 sp=0xc0006e6c20 pc=0x45ec88
runtime.traceGCMarkAssistDone(...)
.../go/src/runtime/trace.go:1497
runtime.gcAssistAlloc(0xc0003281a0)
.../go/src/runtime/mgcmark.go:517 +0x27d fp=0xc0006e6c88 sp=0xc0006e6c28 pc=0x421a1d
runtime.deductAssistCredit(0x0?)
.../go/src/runtime/malloc.go:1287 +0x54 fp=0xc0006e6cb0 sp=0xc0006e6c88 pc=0x40fed4
runtime.mallocgc(0x400, 0x7a9420, 0x1)
.../go/src/runtime/malloc.go:1002 +0xc9 fp=0xc0006e6d18 sp=0xc0006e6cb0 pc=0x40f709
runtime.newobject(0xb3?)
.../go/src/runtime/malloc.go:1324 +0x25 fp=0xc0006e6d40 sp=0xc0006e6d18 pc=0x40ffc5
runtime_test.deferHeapAndStack(0xb4)
.../go/src/runtime/stack_test.go:924 +0x165 fp=0xc0006e6e20 sp=0xc0006e6d40 pc=0x75c2a5
Fixes #59692
Co-Authored-By: Cherry Mui <cherryyz@google.com>
Co-Authored-By: Michael Knyszek <mknyszek@google.com>
Co-Authored-By: Nick Ripley <nick.ripley@datadoghq.com>
Change-Id: I1c0c28327fc2fac0b8cfdbaa72e25584331be31e
Reviewed-on: https://go-review.googlesource.com/c/go/+/489015
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.
Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.
This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.
                    before  after
stackNosplit           800    800
_StackGuard            928    928
stackNosplit -race    1728   1600
_StackGuard -race     1856   1728
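A sketch of the inverted relationship; the constants reproduce the
table above, and the +128 guard delta is inferred from that table
rather than quoted from the source:

    package main

    import "fmt"

    const stackNosplitBase = 800 // abi.StackNosplitBase: specifies the NOSPLIT limit directly

    func stackNosplit(stackGuardMultiplier int) int {
        return stackNosplitBase * stackGuardMultiplier
    }

    func stackGuard(stackGuardMultiplier int) int {
        // The guard is now derived from the NOSPLIT limit, not the
        // other way around.
        return stackNosplit(stackGuardMultiplier) + 128
    }

    func main() {
        fmt.Println(stackNosplit(1), stackGuard(1)) // 800 928
        fmt.Println(stackNosplit(2), stackGuard(2)) // 1600 1728 (-race)
    }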
For #59670.
Change-Id: Ia94094c5e47897e7c088d24b4a5e33f5c2768db5
Reviewed-on: https://go-review.googlesource.com/c/go/+/486976
Auto-Submit: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
This reverts commit CL 486379.
Submitted out of order and breaks bootstrap.
Change-Id: Ie20a61cc56efc79a365841293ca4e7352b02d86b
Reviewed-on: https://go-review.googlesource.com/c/go/+/486917
TryBot-Bypass: Austin Clements <austin@google.com>
Reviewed-by: David Chase <drchase@google.com>
This reverts commit CL 486380.
Submitted out of order and breaks bootstrap.
Change-Id: I67bd225094b5c9713b97f70feba04d2c99b7da76
Reviewed-on: https://go-review.googlesource.com/c/go/+/486916
Reviewed-by: David Chase <drchase@google.com>
TryBot-Bypass: Austin Clements <austin@google.com>
This reverts commit CL 486381.
Submitted out of order and breaks bootstrap.
Change-Id: Ia472111cb966e884a48f8ee3893b3bf4b4f4f875
Reviewed-on: https://go-review.googlesource.com/c/go/+/486915
Reviewed-by: David Chase <drchase@google.com>
TryBot-Bypass: Austin Clements <austin@google.com>
The current definitions of StackLimit and StackGuard only indirectly
specify the NOSPLIT stack limit and duplicate a literal constant
(928). Currently, they define the stack guard delta, and from there
compute the NOSPLIT limit.
Rationalize these by defining a new constant, abi.StackNosplitBase,
which consolidates and directly specifies the NOSPLIT stack limit (in
the default case). From this we then compute the stack guard delta,
inverting the relationship between these two constants. While we're
here, we rename StackLimit to StackNosplit to make it clearer what's
being limited.
This change does not affect the values of these constants in the
default configuration. It does slightly change how
StackGuardMultiplier values other than 1 affect the constants, but
this multiplier is a pretty rough heuristic anyway.
                    before  after
stackNosplit           800    800
_StackGuard            928    928
stackNosplit -race    1728   1600
_StackGuard -race     1856   1728
For #59670.
Change-Id: Ibe20825ebe0076bbd7b0b7501177b16c9dbcb79e
Reviewed-on: https://go-review.googlesource.com/c/go/+/486380
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
The frame pointer is enabled on ARM64. When copying stacks, the
saved frame pointers need to be adjusted.
Updates #39524, #40044.
Fixes #58432.
Change-Id: I73651fdfd1a6cccae26a5ce02e7e86f6c2fb9bf7
Reviewed-on: https://go-review.googlesource.com/c/go/+/241158
Reviewed-by: Felix Geisendörfer <felix.geisendoerfer@datadoghq.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
More stuff to do = more stack needed. Bump up the guard space when
building with the race detector.
Fixes #54291
Change-Id: I701bc8800507921bed568047d35b8f49c26e7df7
Reviewed-on: https://go-review.googlesource.com/c/go/+/451217
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Replace all uses of Ctz64/32/8 with TrailingZeros64/32/8, because they
are identical and the Ctz variants merely duplicate them. Also rename
the CtzXX functions in 386 assembly code.
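For reference, math/bits mirrors the runtime-internal helpers and
shows the surviving semantics:

    package main

    import (
        "fmt"
        "math/bits"
    )

    func main() {
        // sys.Ctz64 and sys.TrailingZeros64 computed exactly the same
        // thing, which is why one set can go.
        fmt.Println(bits.TrailingZeros64(0b1000)) // 3
        fmt.Println(bits.TrailingZeros32(1))      // 0
        fmt.Println(bits.TrailingZeros8(0x80))    // 7
    }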
Change-Id: I19290204858083750f4be589bb0923393950ae6d
Reviewed-on: https://go-review.googlesource.com/c/go/+/438935
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Bryan Mills <bcmills@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
The stkframe struct and its methods are strewn across different source
files. Since they now form a pretty coherent theme, migrate them all
into a new file, stkframe.go. There are no code changes in this CL.
For #54466, albeit rather indirectly.
Change-Id: Ibe53fc4b1106d131005e1c9d491be838a8f14211
Reviewed-on: https://go-review.googlesource.com/c/go/+/424516
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
This places getStackMap alongside argBytes and argMapInternal as
another method of stkframe.
For #54466, albeit rather indirectly.
Change-Id: I411dda3605dd7f996983706afcbefddf29a68a85
Reviewed-on: https://go-review.googlesource.com/c/go/+/424515
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Currently, stkframe.arglen and stkframe.argmap are populated by
gentraceback under a particular set of circumstances. But because they
can be constructed from other fields in stkframe, they don't need to
be computed eagerly at all. They're also rather misleading, as they're
only part of computing the actual argument map and most callers should
be using getStackMap, which does the rest of the work.
This CL drops these fields from stkframe. It shifts the functions that
used to compute them, getArgInfoFast and getArgInfo, into
corresponding methods stkframe.argBytes and stkframe.argMapInternal.
argBytes is expected to be used by callers that need to know only the
argument frame size, while argMapInternal is used only by argBytes and
getStackMap.
We also move some of the logic from getStackMap into argMapInternal
because the previous split of responsibilities didn't make much sense.
This lets us return just a bitvector from argMapInternal, rather than
both a bitvector, which carries a size, and an "actually use this
size".
The getArgInfoFast function was inlined before (and inl_test checked
this). We drop that requirement from stkframe.argBytes because the
uses of this have shifted and now it's only called from heap dumping
(which never happens) and conservative stack frame scanning (which
very, very rarely happens).
There will be a few follow-up clean-up CLs.
For #54466. This is a nice clean-up on its own, but it also serves to
remove pointers from the traceback state that would eventually become
troublesome write barriers once we stack-rip gentraceback.
Change-Id: I107f98ed8e7b00185c081de425bbf24af02a4163
Reviewed-on: https://go-review.googlesource.com/c/go/+/424514
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
In order to prevent false sharing of cache lines, structs are
padded with some number of bytes. These bytes are unused, serving
only to make the size of the struct a multiple of the size of the
cache line.
The current calculation of how much to pad is an overestimation,
when the struct size is already a multiple of the cache line size
without padding. For these cases, no padding is necessary, and
the size of the inner pad field should be 0. The bug is that the
pad field is sized to an entire extra cache line, wasting space.
Here is the current formula, which can never return 0:
cpu.CacheLinePadSize - unsafe.Sizeof(myStruct{})%cpu.CacheLinePadSize
This change simply mods that calculation by cpu.CacheLinePadSize,
so that 0 will be returned instead of cpu.CacheLinePadSize.
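The difference is easy to see with a struct that is already
cache-line aligned (64-byte lines assumed for the sketch):

    package main

    import (
        "fmt"
        "unsafe"
    )

    const cacheLinePadSize = 64 // stand-in for cpu.CacheLinePadSize

    type myStruct struct{ a, b, c, d, e, f, g, h uint64 } // 64 bytes: already aligned

    func main() {
        // Old formula: never 0, so an aligned struct gets a whole
        // extra cache line of padding.
        old := cacheLinePadSize - unsafe.Sizeof(myStruct{})%cacheLinePadSize
        // Fixed formula: mod once more, so aligned structs pad by 0.
        fixed := (cacheLinePadSize - unsafe.Sizeof(myStruct{})%cacheLinePadSize) % cacheLinePadSize
        fmt.Println(old, fixed) // 64 0
    }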
Change-Id: I26a2b287171bf47a3b9121873b2722f728381b5e
Reviewed-on: https://go-review.googlesource.com/c/go/+/414214
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Joedian Reid <joedian@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Measure the average stack size used by goroutines at every GC. When
starting a new goroutine, allocate an initial goroutine stack of that
average size. The intuition is that we'll waste at most 2x in stack
space, because only half the goroutines can be below average. In turn,
we avoid some of the early stack growth / copying needed in the
average case.
More details in the design doc at: https://docs.google.com/document/d/1YDlGIdVTPnmUiTAavlZxBI1d9pwGQgZT7IKFKlIXohQ/edit?usp=sharing
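A toy sketch of the policy (the real implementation samples during GC
and rounds to an allocatable stack size class; everything here is
illustrative):

    package main

    var (
        totalStackInUse uint64        // sum of stack sizes of scanned goroutines
        goroutineCount  uint64        // number of goroutines scanned
        startingStack   uint64 = 2048 // the old fixed initial stack size
    )

    // Called at each GC: recompute the average and use it when
    // allocating stacks for new goroutines.
    func updateInitialStackSize() {
        if goroutineCount == 0 {
            return
        }
        avg := totalStackInUse / goroutineCount
        // Round up to a power of two, since stacks come in
        // power-of-two size classes.
        size := uint64(2048)
        for size < avg {
            size *= 2
        }
        startingStack = size
    }

    func main() {
        totalStackInUse, goroutineCount = 10<<20, 1000 // pretend: 10 MiB over 1000 goroutines
        updateInitialStackSize()
        println(startingStack) // 16384
    }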
name old time/op new time/op delta
Issue18138 95.3µs ± 0% 67.3µs ±13% -29.35% (p=0.000 n=9+10)
Fixes #18138
Change-Id: Iba34d22ed04279da7e718bbd569bbf2734922eaa
Reviewed-on: https://go-review.googlesource.com/c/go/+/345889
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
CL 362934 open-coded unsafe.Slice, so using it no longer negatively
impacts performance.
Updates #48798
Change-Id: Ifbabe8bc1cc4349c5bcd11586a11fc99bcb388b1
Reviewed-on: https://go-review.googlesource.com/c/go/+/404974
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
[This CL is part of a sequence implementing the proposal #51082.
The design doc is at https://go.dev/s/godocfmt-design.]
Run the updated gofmt, which reformats doc comments,
on the main repository. Vendored files are excluded.
For #51082.
Change-Id: I7332f099b60f716295fb34719c98c04eb1a85407
Reviewed-on: https://go-review.googlesource.com/c/go/+/384268
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This CL adds the regabi support needed for reflect and reimplements
CL 360994, which was reverted.
Change-Id: I140673f09109dd9f72eff70aad64c0aa00d6857a
Reviewed-on: https://go-review.googlesource.com/c/go/+/396077
Reviewed-by: Cherry Mui <cherryyz@google.com>
Trust: mzh <mzh@golangcn.org>
And then revert the bootstrap cmd directories and certain testdata.
And adjust tests as needed.
Not reverting the changes in std that are bootstrapped,
because some of those changes would appear in API docs,
and we want to use any consistently.
Instead, rewrite 'any' to 'interface{}' in cmd/dist for those directories
when preparing the bootstrap copy.
A few files changed as a result of running gofmt -w
not because of interface{} -> any but because they
hadn't been updated for the new //go:build lines.
Fixes #49884.
Change-Id: Ie8045cba995f65bd79c694ec77a1b3d1fe01bb09
Reviewed-on: https://go-review.googlesource.com/c/go/+/368254
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Currently, newstack loads gp.stackguard0 twice to check for different
poison values. The race window between these two checks can lead to
unintentional stack doubling, and ultimately to stack overflows.
Specifically, newstack checks if stackguard0 is stackPreempt first,
then it checks if it's stackForceMove. If stackguard0 is set to
stackForceMove on entry, but changes to stackPreempt between the two
checks, newstack will incorrectly double the stack allocation.
Fix this by loading stackguard0 exactly once and then checking it
against different poison values.
The effect of this is relatively minor because stackForceMove is only
used by a small number of runtime tests. I found this because
mayMoreStackMove uses stackForceMove aggressively, which makes this
failure mode much more likely.
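A sketch of the fixed pattern (poison values here are illustrative;
the real ones live in runtime/stack.go):

    package main

    import "sync/atomic"

    const (
        stackPreempt   = ^uintptr(0)
        stackForceMove = ^uintptr(0) - 1
    )

    type g struct{ stackguard0 uintptr }

    func checkGuard(gp *g) (preempt, forceMove bool) {
        // Load exactly once. The racy version reloaded gp.stackguard0
        // for the second comparison and could observe two different
        // values across the two loads.
        sg0 := atomic.LoadUintptr(&gp.stackguard0)
        return sg0 == stackPreempt, sg0 == stackForceMove
    }

    func main() {
        p, m := checkGuard(&g{stackguard0: stackForceMove})
        println(p, m) // false true
    }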
Change-Id: I1f8b6a6744e45533580a3f45d7030ec2ec65a5fb
Reviewed-on: https://go-review.googlesource.com/c/go/+/361775
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
This adds a maymorestack hook that forces a preemption at every
possible cooperative preemption point. This would have helped us catch
several recent preemption-related bugs earlier, including #47302,
#47304, and #47441.
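The rough shape of such a hook, with runtime internals reduced to
stubs (the real hook is selected at build time via a compiler debug
flag along the lines of
-gcflags=all=-d=maymorestack=runtime.mayMoreStackPreempt; treat the
spelling as approximate):

    package main

    const stackPreempt = ^uintptr(0) // illustrative poison value

    type m struct {
        locks      int
        preemptoff string
    }

    type g struct {
        m           *m
        stackguard0 uintptr
    }

    var curg = &g{m: &m{}}

    func getg() *g { return curg }

    // Called from every function prologue that has a morestack check:
    // request a preemption unless it would be unsafe here.
    func mayMoreStackPreempt() {
        gp := getg()
        if gp.m.locks == 0 && gp.m.preemptoff == "" {
            gp.stackguard0 = stackPreempt
        }
    }

    func main() { mayMoreStackPreempt() }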
For #48297.
Change-Id: Ib82c973589c8a7223900e1842913b8591938fb9f
Reviewed-on: https://go-review.googlesource.com/c/go/+/359796
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: David Chase <drchase@google.com>
Add explicit address sanitizer instrumentation to the runtime and
syscall packages. The compiler does not instrument the runtime
package. It does instrument the syscall package, but we need to add
a couple of cases that it can't see.
Following the implementation of asan's malloc runtime library, this
patch also allocates extra memory as a redzone around each returned
memory region and marks the redzone as unaddressable in order to
detect overflows and underflows.
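A toy sketch of the redzone layout (sizes and names are illustrative;
the real code calls asan's poison/unpoison hooks):

    package main

    import "fmt"

    const redzoneSize = 16

    // layoutWithRedzone computes the total allocation for a request:
    // the user bytes are followed by an unaddressable redzone, so
    // accesses just past the region trip asan.
    func layoutWithRedzone(userSize uintptr) (total, userEnd uintptr) {
        total = userSize + redzoneSize
        userEnd = userSize
        // asanpoison(base+userEnd, redzoneSize) // mark the tail unaddressable
        return total, userEnd
    }

    func main() {
        total, end := layoutWithRedzone(24)
        fmt.Println(total, end) // 40 24
    }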
Updates #44853.
Change-Id: I2753d1cc1296935a66bf521e31ce91e35fcdf798
Reviewed-on: https://go-review.googlesource.com/c/go/+/298614
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: fannie zhang <Fannie.Zhang@arm.com>
This change adds two fields to gcControllerState: stackScan, used for
pacing decisions, and scannableStackSize, which directly tracks the
amount of space allocated for in-use stacks that will be scanned.
scannableStackSize is not updated directly, but is instead flushed
from each P once at least an 8 KiB delta has accumulated. This helps
reduce issues with atomics contention for newly created goroutines.
Stack growth paths are largely unaffected.
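A sketch of the per-P accumulation (field and type names approximate;
the real state lives on p and gcControllerState):

    package main

    import "sync/atomic"

    const flushDelta = 8 << 10 // flush once at least 8 KiB has accumulated

    var globalScannableStackSize atomic.Int64 // gcController-side total

    type p struct {
        scannableStackSizeDelta int64 // P-local, no synchronization needed
    }

    // Called on stack allocation/free paths with the size change.
    func (pp *p) addScannableStack(delta int64) {
        pp.scannableStackSizeDelta += delta
        if d := pp.scannableStackSizeDelta; d >= flushDelta || d <= -flushDelta {
            globalScannableStackSize.Add(d) // one atomic per ~8 KiB, not per goroutine
            pp.scannableStackSizeDelta = 0
        }
    }

    func main() {
        pp := &p{}
        for i := 0; i < 5; i++ {
            pp.addScannableStack(2 << 10) // five 2 KiB stacks
        }
        println(globalScannableStackSize.Load()) // 8192 flushed; 2048 still P-local
    }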
StackGrowth-48 51.4ns ± 0% 51.4ns ± 0% ~ (p=0.927 n=10+10)
StackGrowthDeep-48 6.14µs ± 3% 6.25µs ± 4% ~ (p=0.090 n=10+9)
CreateGoroutines-48 273ns ± 1% 273ns ± 1% ~ (p=0.676 n=9+10)
CreateGoroutinesParallel-48 65.5ns ± 5% 66.6ns ± 7% ~ (p=0.340 n=9+9)
CreateGoroutinesCapture-48 2.06µs ± 1% 2.07µs ± 4% ~ (p=0.217 n=10+10)
CreateGoroutinesSingle-48 550ns ± 3% 563ns ± 4% +2.41% (p=0.034 n=8+10)
For #44167.
Change-Id: Id1800d41d3a6c211b43aeb5681c57c0dc8880daf
Reviewed-on: https://go-review.googlesource.com/c/go/+/309589
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
This reverts commit golang.org/cl/352953.
Reason for revert: unsafe.Slice is considerably slower.
Part of this is extra safety checks (good), but most of it
is the function call overhead. We should consider open-coding it (#48798).
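For illustration, the two spellings this revert toggles between (the
array bound in the open-coded form is arbitrary):

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        buf := [4]uint64{1, 2, 3, 4}
        ptr := &buf[0]

        // Reverted form (CL 352953): one call, with bounds and
        // overflow checks.
        s1 := unsafe.Slice(ptr, 3)

        // Restored form: build the slice header by hand, no call
        // overhead.
        s2 := (*[1 << 20]uint64)(unsafe.Pointer(ptr))[:3:3]

        fmt.Println(s1[2], s2[2]) // 3 3
    }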
Impact of this change:
name old time/op new time/op delta
StackCopyWithStkobj-8 12.1ms ± 5% 11.6ms ± 3% -4.03% (p=0.009 n=10+8)
Change-Id: Ib2448e3edac25afd8fb55ffbea073b8b11521bde
Reviewed-on: https://go-review.googlesource.com/c/go/+/354090
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This adds the regabi support needed for reflect, including:
- implementation of makeFuncStub and methodValueCall for reflect
- implementations of archFloat32FromReg and archFloat32ToReg, needed
  for PPC64 due to differences in the way float32 values are
  represented in registers compared to other platforms
- changes needed in stack.go due to the functions changed above
Change-Id: Ida40d831370e39b91711ccb9616492b7fad3debf
Reviewed-on: https://go-review.googlesource.com/c/go/+/352429
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Lynn Boger <laboger@linux.vnet.ibm.com>
A subsequent change will alter the semantics of _func.entry.
To make that change obvious and clear, change _func.entry to a method,
and rename the field to _func.entryPC.
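The mechanical shape of the change (unrelated fields elided):

    package main

    type _func struct {
        entryPC uintptr // was the field "entry"
        // ...
    }

    // entry returns the function's entry PC; the subsequent change
    // gives this method semantics beyond returning the field.
    func (f *_func) entry() uintptr { return f.entryPC }

    func main() { _ = (&_func{entryPC: 0x1000}).entry() }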
Change-Id: I05d66b54d06c5956d4537b0729ddf4290c3e2635
Reviewed-on: https://go-review.googlesource.com/c/go/+/351460
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
The current stack growing implementation looks incorrect.
Specifically, the line runtime/stack.go#L1068 never gets executed,
which causes many unnecessary copystack calls.
This PR attempts to correct the implementation. As I'm not familiar
with the code, the fix is just a guess.
Change-Id: I0bea1148175fad34f74f19d455c240c94d3cb78b
GitHub-Last-Rev: 57205f91fe
GitHub-Pull-Request: golang/go#47010
Reviewed-on: https://go-review.googlesource.com/c/go/+/332229
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Dmitri Shuralyov <dmitshur@golang.org>