Currently, stkframe.arglen and stkframe.argmap are populated by
gentraceback under a particular set of circumstances. But because they
can be constructed from other fields in stkframe, they don't need to
be computed eagerly at all. They're also rather misleading, as they're
only part of computing the actual argument map and most callers should
be using getStackMap, which does the rest of the work.
This CL drops these fields from stkframe. It shifts the functions that
used to compute them, getArgInfoFast and getArgInfo, into
corresponding methods stkframe.argBytes and stkframe.argMapInternal.
argBytes is expected to be used by callers that need to know only the
argument frame size, while argMapInternal is used only by argBytes and
getStackMap.
We also move some of the logic from getStackMap into argMapInternal
because the previous split of responsibilities didn't make much sense.
This lets us return just a bitvector from argMapInternal, rather than
both a bitvector (which already carries a size) and a separate
"actually use this size" indication.
The getArgInfoFast function was inlined before (and inl_test checked
this). We drop that requirement from stkframe.argBytes because its
uses have shifted: it is now called only from heap dumping (which
never happens) and conservative stack frame scanning (which very,
very rarely happens).
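A sketch of the new argBytes shape, assuming the names in this CL
(the real code in runtime/stkframe.go carries more detail):

// argBytes returns the argument frame size for a call to frame.fn,
// deferring to argMapInternal when it is not known statically.
func (frame *stkframe) argBytes() uintptr {
	if frame.fn.args != abi.ArgsSizeUnknown {
		return uintptr(frame.fn.args)
	}
	argMap, _ := frame.argMapInternal()
	return uintptr(argMap.n) * goarch.PtrSize
}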
There will be a few follow-up clean-up CLs.
For #54466. This is a nice clean-up on its own, but it also serves to
remove pointers from the traceback state that would eventually become
troublesome write barriers once we stack-rip gentraceback.
Change-Id: I107f98ed8e7b00185c081de425bbf24af02a4163
Reviewed-on: https://go-review.googlesource.com/c/go/+/424514
Run-TryBot: Austin Clements <austin@google.com>
Auto-Submit: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
In order to prevent false sharing of cache lines, structs are
padded with some number of bytes. These bytes are unused, serving
only to make the size of the struct a multiple of the size of the
cache line.
The current calculation of how much to pad is an overestimate when
the struct size is already a multiple of the cache line size without
padding. For these cases, no padding is necessary, and the size of
the inner pad field should be 0. The bug is that the pad field is
sized to a whole extra cache line, wasting space.
Here is the current formula that can never return 0:
cpu.CacheLinePadSize - unsafe.Sizeof(myStruct{})%cpu.CacheLinePadSize
This change simply mods that calculation by cpu.CacheLinePadSize,
so that 0 will be returned instead of cpu.CacheLinePadSize.
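As a standalone demonstration of the fix (cacheLinePadSize stands in
for cpu.CacheLinePadSize; the struct is illustrative):

package main

import (
	"fmt"
	"unsafe"
)

const cacheLinePadSize = 64 // stand-in for cpu.CacheLinePadSize

type myStruct struct {
	a, b, c, d int64
	e, f, g, h int64 // 64 bytes: already a multiple of the line size
}

func main() {
	sz := unsafe.Sizeof(myStruct{})
	oldPad := cacheLinePadSize - sz%cacheLinePadSize
	newPad := (cacheLinePadSize - sz%cacheLinePadSize) % cacheLinePadSize
	fmt.Println(oldPad, newPad) // prints "64 0": the old formula wastes a line
}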
Change-Id: I26a2b287171bf47a3b9121873b2722f728381b5e
Reviewed-on: https://go-review.googlesource.com/c/go/+/414214
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Joedian Reid <joedian@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Measure the average stack size used by goroutines at every GC. When
starting a new goroutine, allocate an initial goroutine stack of that
average size. The intuition is that we'll waste at most 2x in stack space
because only half the goroutines can be below average. In turn, we
avoid some of the early stack growth / copying needed in the average
case.
More details in the design doc at: https://docs.google.com/document/d/1YDlGIdVTPnmUiTAavlZxBI1d9pwGQgZT7IKFKlIXohQ/edit?usp=sharing
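A rough sketch of the heuristic with illustrative names (the real
counters live in the runtime's GC state and feed the allocation in
malg):

const fixedStack = 2048 // stand-in for the runtime's default stack size

// initialStackSize rounds the observed average up to a power of two,
// since stacks are allocated in power-of-two sizes.
func initialStackSize(scannedBytes, scannedStacks uint64) uint64 {
	if scannedStacks == 0 {
		return fixedStack
	}
	avg := scannedBytes / scannedStacks
	size := uint64(fixedStack)
	for size < avg {
		size *= 2
	}
	return size
}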
name old time/op new time/op delta
Issue18138 95.3µs ± 0% 67.3µs ±13% -29.35% (p=0.000 n=9+10)
Fixes #18138
Change-Id: Iba34d22ed04279da7e718bbd569bbf2734922eaa
Reviewed-on: https://go-review.googlesource.com/c/go/+/345889
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
CL 362934 added open-coding for unsafe.Slice, so using it no longer
negatively impacts performance.
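For illustration, the pattern this makes cheap again (a standalone
example, not the runtime's actual call site):

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	arr := [4]int{1, 2, 3, 4}
	// unsafe.Slice builds a []int from a pointer and a length; with
	// CL 362934 the compiler open-codes it, avoiding call overhead.
	s := unsafe.Slice(&arr[0], len(arr))
	fmt.Println(s) // [1 2 3 4]
}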
Updates #48798
Change-Id: Ifbabe8bc1cc4349c5bcd11586a11fc99bcb388b1
Reviewed-on: https://go-review.googlesource.com/c/go/+/404974
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
[This CL is part of a sequence implementing the proposal #51082.
The design doc is at https://go.dev/s/godocfmt-design.]
Run the updated gofmt, which reformats doc comments,
on the main repository. Vendored files are excluded.
For #51082.
Change-Id: I7332f099b60f716295fb34719c98c04eb1a85407
Reviewed-on: https://go-review.googlesource.com/c/go/+/384268
Reviewed-by: Jonathan Amsterdam <jba@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This CL adds the regabi support needed for reflect and reimplements
CL 360994, which was reverted.
Change-Id: I140673f09109dd9f72eff70aad64c0aa00d6857a
Reviewed-on: https://go-review.googlesource.com/c/go/+/396077
Reviewed-by: Cherry Mui <cherryyz@google.com>
Trust: mzh <mzh@golangcn.org>
And then revert the bootstrap cmd directories and certain testdata.
And adjust tests as needed.
Not reverting the changes in std that are bootstrapped,
because some of those changes would appear in API docs,
and we want to use any consistently.
Instead, rewrite 'any' to 'interface{}' in cmd/dist for those directories
when preparing the bootstrap copy.
A few files changed as a result of running gofmt -w,
not because of interface{} -> any but because they
hadn't been updated for the new //go:build lines.
Fixes #49884.
Change-Id: Ie8045cba995f65bd79c694ec77a1b3d1fe01bb09
Reviewed-on: https://go-review.googlesource.com/c/go/+/368254
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Currently, newstack loads gp.stackguard0 twice to check for different
poison values. The race window between these two checks can lead to
unintentional stack doubling, and ultimately to stack overflows.
Specifically, newstack checks if stackguard0 is stackPreempt first,
then it checks if it's stackForceMove. If stackguard0 is set to
stackForceMove on entry, but changes to stackPreempt between the two
checks, newstack will incorrectly double the stack allocation.
Fix this by loading stackguard0 exactly once and then checking it
against different poison values.
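A sketch of the fix, assuming the runtime-internal atomic package and
the field names above:

// Read stackguard0 exactly once, then test the single snapshot
// against each poison value.
stackguard0 := atomic.Loaduintptr(&gp.stackguard0)
preempt := stackguard0 == stackPreempt
forceMove := stackguard0 == stackForceMove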
The effect of this is relatively minor because stackForceMove is only
used by a small number of runtime tests. I found this because
mayMorestackMove uses stackForceMove aggressively, which makes this
failure mode much more likely.
Change-Id: I1f8b6a6744e45533580a3f45d7030ec2ec65a5fb
Reviewed-on: https://go-review.googlesource.com/c/go/+/361775
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
This adds a maymorestack hook that forces a preemption at every
possible cooperative preemption point. This would have helped us catch
several recent preemption-related bugs earlier, including #47302,
#47304, and #47441.
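A sketch of what such a hook can look like, assuming the helper and
constant names in runtime/proc.go (the real hook is enabled via a
compiler debug flag and differs in detail):

func mayMoreStackPreempt() {
	gp := getg()
	if gp == gp.m.g0 || gp == gp.m.gsignal {
		return // never preempt on a system stack
	}
	if gp.stackguard0 < stackPoisonMin {
		// Poison the guard so the next prologue check forces a
		// preemption at that cooperative point.
		gp.stackguard0 = stackPreempt
	}
}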
For #48297.
Change-Id: Ib82c973589c8a7223900e1842913b8591938fb9f
Reviewed-on: https://go-review.googlesource.com/c/go/+/359796
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: David Chase <drchase@google.com>
Add explicit address sanitizer instrumentation to the runtime and
syscall packages. The compiler does not instrument the runtime
package. It does instrument the syscall package, but we need to add
a couple of cases that it can't see.
Following the implementation of the asan malloc runtime library,
this patch also allocates extra memory as a redzone around each
returned memory region and marks the redzone as unaddressable to
detect overflows and underflows.
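A sketch of the redzone idea only; asanAlloc, sysAlloc, and asanPoison
are hypothetical helpers here, and the sizes are illustrative:

func asanAlloc(size uintptr) unsafe.Pointer {
	const redzone = 16
	p := sysAlloc(size + 2*redzone) // raw block, both redzones included
	asanPoison(p, redzone)          // left redzone: unaddressable
	asanPoison(unsafe.Add(p, redzone+size), redzone) // right redzone
	return unsafe.Add(p, redzone)   // caller sees only the middle region
}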
Updates #44853.
Change-Id: I2753d1cc1296935a66bf521e31ce91e35fcdf798
Reviewed-on: https://go-review.googlesource.com/c/go/+/298614
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Trust: fannie zhang <Fannie.Zhang@arm.com>
This change adds two fields to gcControllerState: stackScan, used for
pacing decisions, and scannableStackSize, which directly tracks the
amount of space allocated for inuse stacks that will be scanned.
scannableStackSize is not updated directly, but is instead flushed from
each P when at least an 8 KiB delta has accumulated. This helps reduce
issues with atomics contention for newly created goroutines. Stack
growth paths are largely unaffected.
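A sketch of the flush-on-delta idea; the field and helper names here
are illustrative rather than the exact ones in this CL:

const stackDeltaFlush = 8 << 10 // 8 KiB

func (pp *p) addScannableStack(delta int64) {
	pp.scannableStackSizeDelta += delta
	d := pp.scannableStackSizeDelta
	if d >= stackDeltaFlush || d <= -stackDeltaFlush {
		atomic.Xadd64(&gcController.scannableStackSize, d)
		pp.scannableStackSizeDelta = 0
	}
}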
StackGrowth-48 51.4ns ± 0% 51.4ns ± 0% ~ (p=0.927 n=10+10)
StackGrowthDeep-48 6.14µs ± 3% 6.25µs ± 4% ~ (p=0.090 n=10+9)
CreateGoroutines-48 273ns ± 1% 273ns ± 1% ~ (p=0.676 n=9+10)
CreateGoroutinesParallel-48 65.5ns ± 5% 66.6ns ± 7% ~ (p=0.340 n=9+9)
CreateGoroutinesCapture-48 2.06µs ± 1% 2.07µs ± 4% ~ (p=0.217 n=10+10)
CreateGoroutinesSingle-48 550ns ± 3% 563ns ± 4% +2.41% (p=0.034 n=8+10)
For #44167.
Change-Id: Id1800d41d3a6c211b43aeb5681c57c0dc8880daf
Reviewed-on: https://go-review.googlesource.com/c/go/+/309589
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
This reverts commit golang.org/cl/352953.
Reason for revert: unsafe.Slice is considerably slower.
Part of this is extra safety checks (good), but most of it
is the function call overhead. We should consider open-coding it (#48798).
Impact of this change:
name old time/op new time/op delta
StackCopyWithStkobj-8 12.1ms ± 5% 11.6ms ± 3% -4.03% (p=0.009 n=10+8)
Change-Id: Ib2448e3edac25afd8fb55ffbea073b8b11521bde
Reviewed-on: https://go-review.googlesource.com/c/go/+/354090
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This adds the regabi support needed for reflect including:
- implementation of the makeFuncSub and methodValueCall for
reflect
- implementations of archFloat32FromReg and archFloat32ToReg
needed for PPC64 due to differences in the way float32 values are
represented in registers as compared to other platforms
- change needed to stack.go due to the functions that are
changed above
Change-Id: Ida40d831370e39b91711ccb9616492b7fad3debf
Reviewed-on: https://go-review.googlesource.com/c/go/+/352429
Run-TryBot: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Lynn Boger <laboger@linux.vnet.ibm.com>
A subsequent change will alter the semantics of _func.entry.
To make that change obvious and clear, change _func.entry to a method,
and rename the field to _func.entryPC.
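A sketch of the accessor introduced here (the subsequent CL changes
what it computes):

func (f *_func) entry() uintptr {
	return f.entryPC
}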
Change-Id: I05d66b54d06c5956d4537b0729ddf4290c3e2635
Reviewed-on: https://go-review.googlesource.com/c/go/+/351460
Trust: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
The current stack growing implementation does not look right.
Specifically, the line runtime/stack.go#L1068 never gets executed,
which causes many unnecessary copystack calls.
This PR tries to correct the implementation.
As I'm not familiar with the code, the fix is just a guess.
Change-Id: I0bea1148175fad34f74f19d455c240c94d3cb78b
GitHub-Last-Rev: 57205f91fe
GitHub-Pull-Request: golang/go#47010
Reviewed-on: https://go-review.googlesource.com/c/go/+/332229
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Dmitri Shuralyov <dmitshur@golang.org>
tracebackdefers is used for scanning/copying deferred functions'
arguments. Now that deferred functions are always argumentless,
it does nothing. Remove.
Change-Id: I55bedabe5584ea41a12cdb03d55ec9692a5aacd9
Reviewed-on: https://go-review.googlesource.com/c/go/+/325916
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Implement register ABI for reflect.MakeFunc and method Value Call
on ARM64.
Change-Id: I5487febb9ea764af5ccf5d7c94858ab0acec7cac
Reviewed-on: https://go-review.googlesource.com/c/go/+/323936
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
At this point all funcPC references are ABIInternal functions.
Replace with the intrinsics.
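For illustration, the shape of each call-site rewrite (gogo is just
an example of an ABIInternal runtime function):

// before:
//	pc := funcPC(gogo)
// after:
pc := abi.FuncPCABIInternal(gogo) // intrinsic, resolved at compile time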
Change-Id: I3ba7e485c83017408749b53f92877d3727a75e27
Reviewed-on: https://go-review.googlesource.com/c/go/+/321954
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Currently, for stack objects, the compiler emits metadata that
includes the offset and type descriptor for each object. The type
descriptor symbol has many fields, and it references many other
symbols, e.g. field/element types, equality functions, names.
Observe that what we actually need at runtime is only the GC
metadata needed to scan the object, and the GC metadata is a
"leaf" symbol (it references no other symbols). Emit
only the GC data instead. This avoids bringing live the type
descriptor as well as things referenced by it (if it is not
otherwise live).
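A sketch of the runtime-side record after this change, assuming the
layout in runtime/stkframe.go around this CL:

type stackObjectRecord struct {
	off       int32  // offset of the object within the frame
	size      int32  // size of the object
	_ptrdata  int32  // ptrdata; negative means gcdata is a GC program
	gcdataoff uint32 // offset of gcdata from moduledata.rodata
}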
This reduces binary sizes:
old new
hello (println) 1187776 1133856 (-4.5%)
hello (fmt) 1902448 1844416 (-3.1%)
cmd/compile 22670432 22438576 (-1.0%)
cmd/link 6346272 6225408 (-1.9%)
No significant change in compiler speed.
name old time/op new time/op delta
Template 184ms ± 2% 186ms ± 5% ~ (p=0.905 n=9+10)
Unicode 78.4ms ± 5% 76.3ms ± 3% -2.60% (p=0.009 n=10+10)
GoTypes 1.09s ± 1% 1.08s ± 1% -0.73% (p=0.027 n=10+8)
Compiler 85.6ms ± 3% 84.6ms ± 4% ~ (p=0.143 n=10+10)
SSA 7.23s ± 1% 7.25s ± 1% ~ (p=0.780 n=10+9)
Flate 116ms ± 5% 115ms ± 6% ~ (p=0.912 n=10+10)
GoParser 201ms ± 4% 195ms ± 1% ~ (p=0.089 n=10+10)
Reflect 455ms ± 1% 458ms ± 2% ~ (p=0.050 n=9+9)
Tar 155ms ± 2% 155ms ± 3% ~ (p=0.436 n=10+10)
XML 202ms ± 2% 200ms ± 2% ~ (p=0.053 n=10+9)
Change-Id: I33a7f383d79afba1a482cac6da0cf5b7de9c0ec4
Reviewed-on: https://go-review.googlesource.com/c/go/+/313514
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
For stack frames larger than StackBig, the stack split prologue needs
to guard against potential wraparound. Currently, it carefully
arranges to avoid underflow, but this is complicated and requires a
special check for StackPreempt. StackPreempt is no longer the only
stack poison value, so this check will incorrectly succeed if the
stack bound is poisoned with any other value.
This CL simplifies the logic of the check, reduces its length, and
accounts for any possible poison value by directly checking for
underflow.
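In Go-flavored pseudocode (the real check is emitted as assembly by
the compiler's prologue pass; names here are illustrative):

// needGrow reports whether a frame larger than StackBig must call
// morestack, checking for pointer underflow directly.
func needGrow(sp, stackguard0, framesize, stackSmall uintptr) bool {
	offset := framesize - stackSmall
	if sp < offset {
		return true // sp-offset would wrap around: grow
	}
	// Poison values are huge, so a poisoned guard also fails this
	// comparison and routes through morestack.
	return sp-offset < stackguard0
}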
Change-Id: I917a313102d6a21895ef7c4b0f304fb84b292c81
Reviewed-on: https://go-review.googlesource.com/c/go/+/307010
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
This change finishes off the register ABI functionality for the
reflect package.
Specifically, it implements a call on a MakeFunc'd value by performing
the reverse process that reflect.Value.Call does, using the same ABI
steps. It implements a call on a method value created by reflect by
translating between the method value's ABI to the method's ABI.
Tests are added for both cases.
For #40724.
Change-Id: I302820b61fc0a8f94c5525a002bc02776aef41af
Reviewed-on: https://go-review.googlesource.com/c/go/+/298670
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Currently, gcTestMoveStackOnNextCall doubles the stack allocation on
each call because stack movement always doubles the stack. That's
rather unfortunate if you're doing a bunch of stack movement tests in
a row that don't actually have to grow the stack because you'll
quickly hit the stack size limit even though you're hardly using any
of the stack.
Fix this by adding a special stack poison value for
gcTestMoveStackOnNextCall that newstack recognizes and inhibits the
allocation doubling.
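A sketch of the newstack side, assuming a stackForceMove poison
constant as introduced here:

func newStackSize(oldsize, stackguard0 uintptr) uintptr {
	if stackguard0 == stackForceMove {
		// Forced movement: copy to a new stack of the same size
		// instead of doubling.
		return oldsize
	}
	return oldsize * 2
}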
Change-Id: Iace7055a0f33cb48dc97b8f4b46e45304bee832c
Reviewed-on: https://go-review.googlesource.com/c/go/+/306672
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
See if this clears failure on openbsd-amd64-68 (ahem).
Change-Id: Ifa60ef711a95e5de8ad91433ffa425f75b36c76f
Reviewed-on: https://go-review.googlesource.com/c/go/+/295629
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This is partial plumbing recycled from the original register ABI test work;
these are the parts that translate easily. Some other bits are deferred till
later when they are ready to be used.
For #40724.
Change-Id: Ica8c55a4526793446189725a2bc3839124feb38f
Reviewed-on: https://go-review.googlesource.com/c/go/+/260539
Trust: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Document what the values in internal/sys mean.
Remove various special cases for arm64 in the code using StackAlign.
Delete Uintreg - it was for GOARCH=amd64p32,
which was specific to GOOS=nacl and has been retired.
This CL is part of a stack of CLs adding windows/arm64
support (#36439), intended to land in the Go 1.17 cycle.
This CL is, however, not windows/arm64-specific.
It is cleanup meant to make the port (and future ports) easier.
Change-Id: I40e8fa07b4e192298b6536b98a72a751951a4383
Reviewed-on: https://go-review.googlesource.com/c/go/+/288795
Trust: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This change modifies mheap's span allocation API to have each caller
declare a purpose, defined as a new enum called spanAllocType.
The purpose behind this change is two-fold:
1. Tight control over who gets to allocate heap memory is, generally
speaking, a good thing. Every codepath that allocates heap memory
places additional implicit restrictions on the allocator. A notable
example of a restriction is work bufs coming from heap memory: write
barriers are not allowed in allocation paths because then we could
have a situation where the allocator calls into the allocator.
2. Memory statistic updating is explicit. Instead of passing an opaque
pointer for statistic updating, which places restrictions on how that
statistic may be updated, we use the spanAllocType to determine which
statistic to update and how.
We also take this opportunity to group all the statistic updating code
together, which should make the accounting code a little easier to
follow.
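A sketch of the enum and a helper on it, assuming the names in this
CL:

type spanAllocType uint8

const (
	spanAllocHeap          spanAllocType = iota // user heap objects
	spanAllocStack                              // goroutine stacks
	spanAllocPtrScalarBits                      // unrolled GC bitmaps
	spanAllocWorkBuf                            // GC work buffers
)

// manual reports whether the caller updates memory statistics by
// hand rather than through the heap-allocation paths.
func (typ spanAllocType) manual() bool {
	return typ != spanAllocHeap
}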
Change-Id: Ic0b0898959ba2a776f67122f0e36c9d7d60e3085
Reviewed-on: https://go-review.googlesource.com/c/go/+/246970
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Currently we don't use sigaltstack on darwin/arm64, as it is not
supported on iOS. However, it is supported on macOS. Use it.
(iOS remains unchanged.)
Change-Id: Icc154c5e2edf2dbdc8ca68741ad9157fc15a72ee
Reviewed-on: https://go-review.googlesource.com/c/go/+/256917
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Introduce GOOS=ios for iOS systems. GOOS=ios matches the "darwin"
build tag, like GOOS=android matches "linux" and GOOS=illumos
matches "solaris". Only ios/arm64 is supported (ios/amd64 is
not).
GOOS=ios and GOOS=darwin remain essentially the same at this
point. They will diverge at later time, to differentiate macOS
and iOS.
Uses of GOOS=="darwin" are changed to (GOOS=="darwin" || GOOS=="ios"),
except where it clearly means macOS (e.g. GOOS=="darwin" && GOARCH=="amd64"),
in which case it remains GOOS=="darwin".
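An illustrative helper showing the rewritten condition (not code from
this CL):

func isAppleOS(goos string) bool {
	// GOOS=ios matches the "darwin" build tag, but explicit string
	// comparisons must now check both values.
	return goos == "darwin" || goos == "ios"
}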
Updates #38485.
Change-Id: I4faacdc1008f42434599efb3c3ad90763a83b67c
Reviewed-on: https://go-review.googlesource.com/c/go/+/254740
Trust: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Currently activeStackChans is set before a goroutine blocks on a channel
operation in an unlockf passed to gopark. The trouble is that the
unlockf is called *after* the G's status is changed, and the G's status
is what is used by a concurrent mark worker (calling suspendG) to
determine that a G has successfully been suspended. In this window
between the status change and unlockf, the mark worker could try to
shrink the G's stack, and in particular observe that activeStackChans is
false. This observation will cause the mark worker to *not* synchronize
with concurrent channel operations when it should, and so updating
pointers in the sudog for the blocked goroutine (which may point to the
goroutine's stack) races with channel operations which may also
manipulate the pointer (read it, dereference it, update it, etc.).
Fix the problem by adding a new atomically-updated flag to the g struct
called parkingOnChan, which is non-zero in the race window above. Then,
in isShrinkStackSafe, check if parkingOnChan is zero. The race is
resolved like so:
* Blocking G sets parkingOnChan, then changes status in gopark.
* Mark worker successfully suspends blocking G.
* If the mark worker observes parkingOnChan is non-zero when checking
isShrinkStackSafe, then it's not safe to shrink (we're in the race
window).
* If the mark worker observes parkingOnChan as zero, then because
the mark worker observed the G status change, it can be sure that
gopark's unlockf completed, and gp.activeStackChans will be correct.
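A sketch of both sides of the protocol; the real code is in
runtime/chan.go and runtime/stack.go and differs in detail:

func chanparkcommit(gp *g, chanLock unsafe.Pointer) bool {
	// Record that unlockf has run; stack shrinking may now trust
	// gp.activeStackChans.
	gp.activeStackChans = true
	atomic.Store8(&gp.parkingOnChan, 0) // close the race window
	unlock((*mutex)(chanLock))
	return true
}

func isShrinkStackSafe(gp *g) bool {
	return gp.syscallsp == 0 && !gp.asyncSafePoint &&
		atomic.Load8(&gp.parkingOnChan) == 0
}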
The risk of this change is low, since although it reduces the number of
places that stack shrinking is allowed, the window here is incredibly
small. Essentially, every place where it might have crashed before now
simply skips the shrink.
This change adds a test, but the race window is so small that it's hard
to trigger without a well-placed sleep in park_m. Also, this change
fixes stackGrowRecursive in proc_test.go to actually allocate a 128-byte
stack frame. It turns out the compiler was destructuring the "pad" field
and only allocating one uint64 on the stack.
Fixes #40641.
Change-Id: I7dfbe7d460f6972b8956116b137bc13bc24464e8
Reviewed-on: https://go-review.googlesource.com/c/go/+/247050
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
In the current implementation, we can observe crashes after calling
debug.SetMaxStack and allocating a stack larger than 4GB since
stackalloc works with 32-bit sizes. To avoid this, we define an upper
limit as the largest feasible point we can grow a stack to and provide a
better error message when we get a stack overflow.
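A sketch of the bounds check in newstack, with an assumed
illustrative ceiling (the real constant accounts for stackalloc's
32-bit size arithmetic):

const maxstackceiling = 4 << 30 // assumed value for illustration

func checkStackLimit(newsize uintptr) {
	if newsize > maxstacksize || newsize > maxstackceiling {
		// Report whichever limit was actually hit, then crash.
		print("runtime: goroutine stack exceeds limit\n")
		throw("stack overflow")
	}
}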
Fixes #41228
Change-Id: I55fb0a824f47ed9fb1fcc2445a4dfd57da9ef8d4
Reviewed-on: https://go-review.googlesource.com/c/go/+/255997
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Giovanni Bajo <rasky@develer.com>
Reviewed-by: Keith Randall <khr@golang.org>
That port is gone.
Change-Id: I212d435e290d1890d6cd5531be98bb692650595e
Reviewed-on: https://go-review.googlesource.com/c/go/+/254077
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I think they are no longer experimental. Might as well promote
them to permanent.
Change-Id: Id1259601b3dd2061dd60df86ee48080bfb575d2f
Reviewed-on: https://go-review.googlesource.com/c/go/+/249857
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
reflect.assignTo writes to the target using write barriers. Make sure
that the memory it is writing to is zeroed, so the write barrier does
not read pointers from uninitialized memory.
Fixes #39541
Change-Id: Ia64b2cacc193bffd0c1396bbce1dfb8182d4905b
Reviewed-on: https://go-review.googlesource.com/c/go/+/238760
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
I added routines that can acquire/release a particular rank without
acquiring/releasing an associated lock. I added lockRankGscan as a rank
for acquiring/releasing the Gscan bit.
castogscanstatus() and casGtoPreemptScan() are acquires of the Gscan
bit. casfrom_Gscanstatus() is a release of the Gscan bit. casgstatus()
is like an acquire and release of the Gscan bit, since it will wait if
the Gscan bit is currently set.
We have a cycle between hchan and Gscan. The acquisition of Gscan and
then hchan only happens in syncadjustsudogs() when the G is suspended,
so the normal ordering (get hchan, then get Gscan) can't be
happening. So, I added a new rank lockRankHchanLeaf that is used when
acquiring hchan locks in syncadjustsudogs. This ranking is set so no
other locks can be acquired except other hchan locks.
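A sketch of the rank-only helpers (in the real runtime they are
no-ops unless static lock ranking is enabled):

// acquireLockRank records a rank acquisition without taking any
// mutex, so ordering is checked for the Gscan bit like a real lock.
func acquireLockRank(rank lockRank) {
	// check rank against the locks/ranks this M already holds (elided)
}

func releaseLockRank(rank lockRank) {
	// remove rank from the held set (elided)
}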
Fixes #38922
Change-Id: I58ce526a74ba856cb42078f7b9901f2832e1d45c
Reviewed-on: https://go-review.googlesource.com/c/go/+/228417
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
We might as well grow the stack at least as large as we'll need for
the frame that is calling morestack. It doesn't help with the
lots-of-small-frames case, but it may help a bit with the
few-big-frames case.
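A sketch of the sizing logic in newstack, assuming funcMaxSPDelta as
the per-function bound on stack use:

newsize := oldsize * 2
if f := findfunc(gp.sched.pc); f.valid() {
	// Grow until the frame that triggered morestack is sure to fit.
	needed := uintptr(funcMaxSPDelta(f)) + _StackGuard
	for newsize-used < needed { // used: bytes already in use (elided)
		newsize *= 2
	}
}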
Updates #18138
Change-Id: I1f49c97706a70e20b30433cbec99a7901528ea52
Reviewed-on: https://go-review.googlesource.com/c/go/+/225800
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
A debugger can inject a call at almost any PC, which causes
significant complications with stack scanning and growth. Currently,
the runtime solves this using precise stack maps and register maps at
nearly all PCs, but these extra maps require roughly 5% of the binary.
These extra maps were originally considered worth this space because
they were intended to be used for non-cooperative preemption, but are
now used only for debug call injection.
This CL switches from using precise maps to instead using conservative
frame scanning, much like how non-cooperative preemption works. When a
call is injected, the runtime flushes all potential pointer registers
to the stack, and then treats that frame as well as the interrupted
frame conservatively.
The limitation of conservative frame scanning is that we cannot grow
the goroutine stack. That's acceptable because the previous CL switched to
performing debug calls on a new goroutine, where they are free to grow
the stack.
With this CL, there are no remaining uses of precise register maps
(though we still use the unsafe-point information that's encoded in
the register map PCDATA stream), and stack maps are only used at call
sites.
For #36365.
Change-Id: Ie217b6711f3741ccc437552d8ff88f961a73cee0
Reviewed-on: https://go-review.googlesource.com/c/go/+/229300
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>