CL 106735 changed to the new softfloat support on GOARM=5.
ARM assembly code that uses FP instructions and is not guarded on GOARM,
if any, will break. The easiest fix is probably to fall back to a Go
implementation on GOARM=5, like:
	MOVB	runtime·goarm(SB), R11
	CMP	$5, R11
	BEQ	arm5
	... FP instructions ...
	RET
arm5:
	CALL or JMP to Go implementation
Change-Id: I52fc76fac9c854ebe7c6c856c365fba35d3f560a
Reviewed-on: https://go-review.googlesource.com/107475
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The implementation of runtime.abort on arm64 currently branches to
address 0, which results in a signal from PC 0, rather than from
runtime.abort, so the runtime fails to recognize it as an abort.
Fix runtime.abort on arm64 to read from address 0, as other
architectures do, and recognize this in the signal handler.
Should fix the linux/arm64 build.
Change-Id: I960ab630daaeadc9190287604d4d8337b1ea3853
Reviewed-on: https://go-review.googlesource.com/99895
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Currently, throw may grow the stack, which means whenever we call it
from a context where it's not safe to grow the stack, we first have to
switch to the system stack. This is pretty easy to get wrong.
Fix this by making throw switch to the system stack so it doesn't grow
the stack and is hence safe to call without a system stack switch at
the call site.
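For illustration, a minimal standalone sketch of the new shape, with a stand-in systemstack that simply runs its argument (the real runtime switches to the g0 stack); the point is that all of the printing work happens inside that call, so callers no longer switch stacks themselves:
```go
package main

import "os"

// systemstack stands in for the runtime's stack switch; here it just runs fn,
// but in the runtime it executes fn on the system (g0) stack.
func systemstack(fn func()) { fn() }

// throw does all of its printing inside the systemstack call, so it is safe
// to call without a stack switch at the call site.
func throw(s string) {
	systemstack(func() {
		print("fatal error: ", s, "\n")
	})
	os.Exit(2) // the real runtime crashes via its panic machinery instead
}

func main() {
	throw("example")
}
```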
The only thing this complicates is badsystemstack itself, which would
now go into an infinite loop before printing anything (previously it
would also go into an infinite loop, but would at least print the
error first). Fix this by making badsystemstack do a direct write and
then crash hard.
Change-Id: Ic5b4a610df265e47962dcfa341cabac03c31c049
Reviewed-on: https://go-review.googlesource.com/93659
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Currently parts of unrecoverable panic handling (notably, printing
panic messages) can happen on the user stack. This may grow the stack,
which is generally fine, but if we're handling a runtime panic, it's
better to do as little as possible in case the runtime is in an
inconsistent state.
Hence, this commit rearranges the handling of unrecoverable panics so
that it's done entirely on the system stack.
This is mostly a matter of shuffling code a bit so everything can move
into a systemstack block. The one slight subtlety is in the "panic
during panic" case, where we now depend on startpanic_m's caller to
print the stack rather than startpanic_m itself. To make this work,
startpanic_m now returns a boolean indicating that the caller should
avoid trying to print any panic messages and get right to the stack
trace. Since the caller is already in a position to do this, this
actually simplifies things a little.
Change-Id: Id72febe8c0a9fb31d9369b600a1816d65a49bfed
Reviewed-on: https://go-review.googlesource.com/93658
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
They have either already been called by preprintpanics, or they cannot
be called safely because of the various conditions checked at the
start of gopanic.
Fixes #24059
Change-Id: I4a6233d12c9f7aaaee72f343257ea108bae79241
Reviewed-on: https://go-review.googlesource.com/96755
Reviewed-by: Austin Clements <austin@google.com>
Currently, if a sigpanic call is injected into C code, it's possible
for preparePanic to leave the stack in a state where traceback can't
unwind correctly past the sigpanic.
Specifically, shouldPushPanic sniffs the stack to decide where to put
the PC from the signal context. In the cgo case, it will find that
!findfunc(pc).valid() because pc is in C code, and then it will check
if the top of the stack looks like a Go PC. However, this stack slot
is just in a C frame, so it could be uninitialized and contain
anything, including what looks like a valid Go PC. For example, in
https://build.golang.org/log/c601a18e2af24794e6c0899e05dddbb08caefc17,
it sees 1c02c23a <runtime.newproc1+682>. When this condition is met,
it skips putting the signal PC on the stack at all. As a result, when
we later unwind from the sigpanic, we'll "successfully" but
incorrectly unwind to whatever PC was in this uninitialized slot and
go who knows where from there.
Fix this by making shouldPushPanic assume that the signal PC is always
usable if we're running C code, so we always make it appear like
sigpanic's caller.
This lets us be pickier again about unexpected return PCs in
gentraceback.
Updates #23640.
Change-Id: I1e8ade24b031bd905d48e92d5e60c982e8edf160
Reviewed-on: https://go-review.googlesource.com/91137
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This logic is duplicated in all of the preparePanic functions. Pull it
out into one architecture-independent function.
Change-Id: I7ef4e78e3eda0b7be1a480fb5245fc7424fb2b4e
Reviewed-on: https://go-review.googlesource.com/91255
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently, startpanic_m (which prepares for an unrecoverable panic)
goes out of its way to make it possible to allocate during panic
handling by allocating an mcache if there isn't one.
However, this is both potentially dangerous and unnecessary.
Allocating an mcache is a generally complex thing to do in an already
precarious situation. Specifically, it requires obtaining the heap
lock, and there's evidence that this may be able to deadlock (#23360).
However, it's also unnecessary because we never allocate from the
unrecoverable panic path.
This didn't use to be the case. The call to allocmcache was introduced
long ago, in CL 7388043, where it was in preparation for separating Ms
and Ps and potentially running an M without an mcache. At the time,
after calling startpanic, the runtime could call String and Error
methods on panicked values, which could do anything including
allocating. That was generally unsafe even at the time, and CL 19792
fixed this by pre-printing panic messages before calling startpanic.
As a result, we now no longer allocate after calling startpanic.
This CL not only removes the allocmcache call, but goes a step further
to explicitly disallow any allocation during unrecoverable panic
handling, even in situations where it might be safe. This way, if
panic handling ever does an allocation that would be unsafe in unusual
circumstances, we'll know even if it happens during normal
circumstances.
This would help with debugging #23360, since the deadlock in
allocmcache is currently masking the real failure.
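As an illustration of the guard this adds, a sketch with a global flag standing in for the state startpanic_m records (the real runtime tracks this per M, not with a global, and its helper names differ):
```go
package main

import "fmt"

// panicking stands in for the state recorded when an unrecoverable panic
// starts; the real runtime tracks this per M rather than globally.
var panicking bool

// alloc sketches the guard on the allocation path: once an unrecoverable
// panic has started, any allocation is itself treated as a fatal error, so an
// unsafe allocation is caught even when it would have happened to work.
func alloc(size int) []byte {
	if panicking {
		panic("allocation during unrecoverable panic")
	}
	return make([]byte, size)
}

func main() {
	fmt.Println(len(alloc(16))) // allowed
	panicking = true
	// alloc(16) // would now be refused
}
```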
Beyond all.bash, I manually tested this change by adding panics at
various points in early runtime init, signal handling, and the
scheduler to check unusual panic situations.
Change-Id: I85df21e2b4b20c6faf1f13fae266c9339eebc061
Reviewed-on: https://go-review.googlesource.com/88835
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently, if a _SigPanic signal arrives in a throwsplit context,
nothing is stopping the runtime from injecting a call to sigpanic that
may attempt to grow the stack. This will fail and, in turn, mask the
real problem.
Fix this by checking for throwsplit in the signal handler itself
before injecting the sigpanic call.
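A rough sketch of the added check, with a simplified goroutine record standing in for the runtime's g (names and the fatal path are illustrative only):
```go
package main

import "fmt"

// g is a simplified stand-in for the runtime's goroutine record; throwsplit
// means "this goroutine must not grow its stack right now".
type g struct{ throwsplit bool }

// handleSigpanic sketches the new rule in the signal handler: only inject the
// sigpanic call when the goroutine may grow its stack; otherwise report the
// signal as fatal here, instead of letting a stack-split failure inside
// sigpanic mask the real problem.
func handleSigpanic(gp *g) {
	if gp.throwsplit {
		fmt.Println("fatal: signal received in throwsplit context")
		return
	}
	fmt.Println("injecting sigpanic call")
}

func main() {
	handleSigpanic(&g{throwsplit: false})
	handleSigpanic(&g{throwsplit: true})
}
```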
Updates #21431, where this problem is likely masking the real problem.
Change-Id: I64b61ff08e8c4d6f6c0fb01315d7d5e66bf1d3e2
Reviewed-on: https://go-review.googlesource.com/87595
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Use the singular form of panic and remove the unnecessary
'however' when comparing Goexit's behavior to 'a panic',
as well as what happens for deferred recovers with Goexit.
Change-Id: I3116df3336fa135198f6a39cf93dbb88a0e2f46e
Reviewed-on: https://go-review.googlesource.com/79755
Reviewed-by: Rob Pike <r@golang.org>
We're about to start tracking nowritebarrierrec through systemstack
calls, which will reveal write barriers in startpanic_m prohibited by
various callers.
We actually can allow write barriers here because the write barrier is
a no-op when we're panicking. Let the compiler know.
Updates #22384.
For #22460.
Change-Id: Ifb3a38d3dd9a4125c278c3680f8648f987a5b0b8
Reviewed-on: https://go-review.googlesource.com/72770
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
Now that getcallerpc is a compiler intrinsic on x86 and non-x86
platforms don't need the argument, we can drop it.
Sadly, this doesn't let us remove any dummy arguments since all of
those cases also use getcallersp, which still takes the argument
pointer, but this is at least an improvement.
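At the user level, runtime.Caller offers the same argument-free shape; a small runnable analogue (getcallerpc itself is internal to the runtime and not importable):
```go
package main

import (
	"fmt"
	"runtime"
)

// callerPC is a user-level analogue of the runtime's getcallerpc intrinsic:
// it reports the caller's PC and needs no argument pointer.
func callerPC() uintptr {
	pc, _, _, _ := runtime.Caller(1)
	return pc
}

func main() {
	fmt.Printf("caller pc: %#x\n", callerPC())
}
```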
Change-Id: I9c34a41cf2c18cba57f59938390bf9491efb22d2
Reviewed-on: https://go-review.googlesource.com/65474
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
For a trivial benchmark with a do-nothing cgo call:
name old time/op new time/op delta
Call-4 64.5ns ± 7% 63.0ns ± 6% -2.25% (p=0.027 n=20+16)
Because Windows uses the cgocall mechanism to make system calls,
and passes arguments in a struct held in the m,
we need to do the lockOSThread/unlockOSThread in that code.
Because deferreturn was getting a nosplit stack overflow error,
change it to avoid calling typedmemmove.
Updates #21827.
Change-Id: I9b1d61434c44faeb29805b46b409c812c9acadc2
Reviewed-on: https://go-review.googlesource.com/64070
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Went mainly for the ones that make no sense, such as the ones
mid-sentence or after commas.
Change-Id: Ie245d2c19cc7428a06295635cf6a9482ade25ff0
Reviewed-on: https://go-review.googlesource.com/57293
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Found with mvdan.cc/unindent. Prioritized the ones with the biggest wins
for now.
Change-Id: I2b032e45cdd559fc9ed5b1ee4c4de42c4c92e07b
Reviewed-on: https://go-review.googlesource.com/56470
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Try to avoid a race between the main goroutine exiting and a panic
occurring. Don't try too hard, to avoid hanging.
Updates #3934
Fixes #20018
Change-Id: I57a02b6d795d2a61f1cadd137ce097145280ece7
Reviewed-on: https://go-review.googlesource.com/41052
Reviewed-by: Austin Clements <austin@google.com>
Now that we don't rescan stacks, stack barriers are unnecessary. This
removes all of the code and structures supporting them as well as
tests that were specifically for stack barriers.
Updates #17503.
Change-Id: Ia29221730e0f2bbe7beab4fa757f31a032d9690c
Reviewed-on: https://go-review.googlesource.com/36620
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This commit fixes two bizarrely related bugs:
1. The signatures for the call* functions were wrong, indicating that
they had only two pointer arguments instead of three. We didn't notice
because the call* functions are defined by a macro expansion, which go
vet doesn't see.
2. deferArgs on a defer object with a zero-sized frame returned a
pointer just past the end of the allocated object, which is illegal in
Go (and can cause the "sweep increased allocation count" crashes).
In a fascinating twist, these two bugs canceled each other out, which
is why I'm fixing them together. The pointer returned by deferArgs is
used in only two ways: as an argument to memmove and as an argument to
reflectcall. memmove is NOSPLIT, so the argument was unobservable.
reflectcall immediately tail calls one of the call* functions, which
are not NOSPLIT, but the deferArgs pointer just happened to be the
third argument that was accidentally marked as a scalar. Hence, when
the garbage collector scanned the stack, it didn't see the bad
pointer as a pointer.
I believe this was all ultimately benign. In principle, stack growth
during the reflectcall could fail to update the args pointer, but it
never points to the stack, so it never needs to be updated. Also in
principle, the garbage collector could fail to mark the args object
because of the incorrect call* signatures, but in all calls to
reflectcall (including the ones spelled "call" in the reflect package)
the args object is kept live by the calling stack.
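One way to avoid the past-the-end pointer, sketched with a simplified defer record (hypothetical names; the real fix lives in the runtime's deferArgs):
```go
package main

import (
	"fmt"
	"unsafe"
)

// deferRecord is a simplified stand-in for the runtime's _defer; the deferred
// call's arguments are stored in memory immediately after the record.
type deferRecord struct {
	siz uintptr // size of the argument frame in bytes
}

// args returns a pointer to the argument area. For a zero-sized frame there is
// nothing after the record, so return nil rather than a pointer one past the
// end of the allocation, which the garbage collector may misinterpret.
func (d *deferRecord) args() unsafe.Pointer {
	if d.siz == 0 {
		return nil
	}
	return unsafe.Add(unsafe.Pointer(d), unsafe.Sizeof(*d))
}

func main() {
	fmt.Println((&deferRecord{siz: 0}).args()) // nil: nothing follows a zero-sized frame
}
```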
Change-Id: Ic932c79d5f4382be23118fdd9dba9688e9169e28
Reviewed-on: https://go-review.googlesource.com/31654
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The panic leaves the lock in an unusable state.
Trying to panic while leaving the lock in a usable state makes the lock significantly
less efficient and scalable (see early CL patch sets and discussion).
Instead, use runtime.throw, which will crash the program directly.
In general throw is reserved for when the runtime detects truly
serious, unrecoverable problems. This problem is certainly serious,
and, without a significant performance hit, is unrecoverable.
Fixes #13879.
Change-Id: I41920d9e2317270c6f909957d195bd8b68177f8d
Reviewed-on: https://go-review.googlesource.com/31359
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This optimizes deferproc and deferreturn in various ways.
The most important optimization is that it more carefully arranges to
prevent preemption or stack growth. Currently we do this by switching
to the system stack on every deferproc and every deferreturn. While we
need to be on the system stack for the slow path of allocating and
freeing defers, in the common case we can fit in the nosplit stack.
Hence, this change pushes the system stack switch down into the slow
paths and makes everything now exposed to the user stack nosplit. This
also eliminates the need for various acquirem/releasem pairs, since we
are now preventing preemption by preventing stack split checks.
As another smaller optimization, we special case the common cases of
zero-sized and pointer-sized defer frames to respectively skip the
copy and perform the copy in line instead of calling memmove.
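A sketch of the copy special-casing, with plain copy standing in for the runtime's memmove (names and layout are illustrative, not the runtime's):
```go
package main

import (
	"fmt"
	"unsafe"
)

// copyArgs special-cases the common frame sizes: nothing to copy for a
// zero-sized frame, a single inline word copy for a pointer-sized frame, and
// a bulk copy (memmove in the runtime) for everything else.
func copyArgs(dst, src unsafe.Pointer, siz uintptr) {
	switch siz {
	case 0:
		// nothing to do
	case unsafe.Sizeof(uintptr(0)):
		*(*uintptr)(dst) = *(*uintptr)(src)
	default:
		copy(unsafe.Slice((*byte)(dst), siz), unsafe.Slice((*byte)(src), siz))
	}
}

func main() {
	var src, dst uintptr = 42, 0
	copyArgs(unsafe.Pointer(&dst), unsafe.Pointer(&src), unsafe.Sizeof(src))
	fmt.Println(dst) // 42
}
```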
This speeds up the runtime defer benchmark by 42%:
name old time/op new time/op delta
Defer-4 75.1ns ± 1% 43.3ns ± 1% -42.31% (p=0.000 n=8+10)
In reality, this speeds up defer by about 2.2X. The two benchmarks
below compare a Lock/defer Unlock pair (DeferLock) with a Lock/Unlock
pair (NoDeferLock). NoDeferLock establishes a baseline cost, so these
two benchmarks together show that this change reduces the overhead of
defer from 61.4ns to 27.9ns.
name old time/op new time/op delta
DeferLock-4 77.4ns ± 1% 43.9ns ± 1% -43.31% (p=0.000 n=10+10)
NoDeferLock-4 16.0ns ± 0% 15.9ns ± 0% -0.39% (p=0.000 n=9+8)
This also shaves 34ns off cgo calls:
name old time/op new time/op delta
CgoNoop-4 122ns ± 1% 88.3ns ± 1% -27.72% (p=0.000 n=8+9)
Updates #14939, #16051.
Change-Id: I2baa0dea378b7e4efebbee8fca919a97d5e15f38
Reviewed-on: https://go-review.googlesource.com/29656
Reviewed-by: Keith Randall <khr@golang.org>
Adds a small function, signame, that infers a signal name
from the signal table, otherwise falling back to hex(sig)
as before. No signal table is present on Windows, so it will
always print the hex value there.
Sample code and new result:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	defer func() {
		if err := recover(); err != nil {
			fmt.Printf("err=%v\n", err)
		}
	}()
	ticker := time.Tick(1e9)
	for {
		<-ticker
	}
}
```
```shell
$ go run main.go &
$ kill -11 <pid>
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xb01dfacedebac1e pc=0xc71db]
...
```
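A standalone sketch of the lookup (a toy table, not the runtime's per-OS tables): find the signal in a name table, otherwise fall back to the hex value, which is also all that can happen on Windows:
```go
package main

import "fmt"

// sigtable is a toy stand-in for the runtime's per-OS signal table.
var sigtable = map[uint32]string{
	2:  "SIGINT: interrupt",
	11: "SIGSEGV: segmentation violation",
}

// signame returns a symbolic name for sig when the table has one and falls
// back to hex(sig) otherwise.
func signame(sig uint32) string {
	if name, ok := sigtable[sig]; ok {
		return name
	}
	return fmt.Sprintf("%#x", sig)
}

func main() {
	fmt.Println(signame(11)) // SIGSEGV: segmentation violation
	fmt.Println(signame(42)) // 0x2a
}
```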
Fixes #13969
Change-Id: Ie6be312eb766661f1cea9afec352b73270f27f9d
Reviewed-on: https://go-review.googlesource.com/22753
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
mallocgc can calculate noscan itself. The only remaining
flag argument is needzero, so we just make that a boolean arg.
Fixes #15379
Change-Id: I839a70790b2a0c9dbcee2600052bfbd6c8148e20
Reviewed-on: https://go-review.googlesource.com/22290
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Also fix compiler-invoked panics to avoid a confusing "malloc deadlock"
crash if they are invoked while executing the runtime.
Fixes #14599.
Change-Id: I89436abcbf3587901909abbdca1973301654a76e
Reviewed-on: https://go-review.googlesource.com/20219
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This change breaks out most of the atomics functions in the runtime
into package runtime/internal/atomic. It adds some basic support
in the toolchain for runtime packages, and also modifies linux/arm
atomics to remove the dependency on the runtime's mutex. The mutexes
have been replaced with spinlocks.
all trybots are happy!
In addition to the trybots, I've tested on the darwin/arm64 builder,
on the darwin/arm builder, and on a ppc64le machine.
Change-Id: I6698c8e3cf3834f55ce5824059f44d00dc8e3c2f
Reviewed-on: https://go-review.googlesource.com/14204
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Abandon (but still support) the old numbering system.
GOTRACEBACK=none is old 0
GOTRACEBACK=single is the new behavior
GOTRACEBACK=all is old 1
GOTRACEBACK=system is old 2
GOTRACEBACK=crash is unchanged
See doc comment change in runtime1.go for details.
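The same levels can also be selected from inside a program via runtime/debug.SetTraceback; a small runnable example (the extra goroutine only exists so that "all" has something beyond the current goroutine to show):
```go
package main

import (
	"runtime/debug"
	"time"
)

func main() {
	// Equivalent to running with GOTRACEBACK=all: an unrecovered panic
	// prints the stacks of all goroutines, not just the current one.
	debug.SetTraceback("all")

	go func() { select {} }() // a second goroutine to appear in the dump
	time.Sleep(10 * time.Millisecond)

	panic("boom")
}
```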
Filed #13107 to decide whether to change default back to GOTRACEBACK=all for Go 1.6 release.
If you run into programs where printing only the current goroutine omits
needed information, please add details in a comment on that issue.
Fixes #12366.
Change-Id: I82ca8b99b5d86dceb3f7102d38d2659d45dbe0db
Reviewed-on: https://go-review.googlesource.com/16512
Reviewed-by: Austin Clements <austin@google.com>
It appears this was made possible by commit 89f185f; before that, g was
not dereferenced above.
Change-Id: I70bc571d924b36351392fd4c13d681e938cfb573
Reviewed-on: https://go-review.googlesource.com/16033
Reviewed-by: Andrew Gerrand <adg@golang.org>
Shared libraries on ppc64le will require a larger minimum stack frame (because
the ABI mandates that the TOC pointer is available at 24(R1)). So to prepare
for this, make a constant for the fixed part of a stack and use that where
necessary.
Change-Id: I447949f4d725003bb82e7d2cf7991c1bca5aa887
Reviewed-on: https://go-review.googlesource.com/15523
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
A TODO to merge is removed from panic1.go.
The rest is appended to panic.go.
Updates #12952
Change-Id: Ied4382a455abc20bc2938e34d031802e6b4baf8b
Reviewed-on: https://go-review.googlesource.com/15905
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
This isn't C anymore. No binary change to pkg/linux_amd64/runtime.a.
Change-Id: I24d66b0f5ac888f432b874aac684b1395e7c8345
Reviewed-on: https://go-review.googlesource.com/15903
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Currently, runtime.Goexit() calls goexit()—the goroutine exit stub—to
terminate the goroutine. This *mostly* works, but can cause a
"leftover stack barriers" panic if the following happens:
1. Goroutine A has a reasonably large stack.
2. The garbage collector scan phase runs and installs stack barriers
in A's stack. The top-most stack barrier happens to fall at address X.
3. Goroutine A unwinds the stack far enough to be a candidate for
stack shrinking, but not past X.
4. Goroutine A calls runtime.Goexit(), which calls goexit(), which
calls goexit1().
5. The garbage collector enters mark termination.
6. Goroutine A is preempted right at the prologue of goexit1() and
performs a stack shrink, which calls gentraceback.
gentraceback stops as soon as it sees goexit on the stack, which is
only two frames up at this point, even though there may really be many
frames above it. More to the point, the stack barrier at X is above
the goexit frame, so gentraceback never sees that stack barrier. At
the end of gentraceback, it checks that it saw all of the stack
barriers and panics because it didn't see the one at X.
The fix is simple: call goexit1, which actually implements the process
of exiting a goroutine, rather than goexit, the exit stub.
To make sure this doesn't happen again in the future, we also add an
argument to the stub prototype of goexit so you really, really have to
want to call it in order to call it. We were able to reliably
reproduce the above sequence with a fair amount of awful code inserted
at the right places in the runtime, but chose to change the goexit
prototype to ensure this wouldn't happen again rather than pollute the
runtime with ugly testing code.
Change-Id: Ifb6fb53087e09a252baddadc36eebf954468f2a8
Reviewed-on: https://go-review.googlesource.com/13323
Reviewed-by: Russ Cox <rsc@golang.org>
These were found by grepping the comments from the go code and feeding
the output to aspell.
Change-Id: Id734d6c8d1938ec3c36bd94a4dbbad577e3ad395
Reviewed-on: https://go-review.googlesource.com/10941
Reviewed-by: Aamir Khan <syst3m.w0rm@gmail.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This CL revises CL 7504 to use explicit uintptr types for the
struct fields that are going to be updated sometimes without
write barriers. The result is that the fields are now updated *always*
without write barriers.
This approach has two important properties:
1) Now the GC never looks at the field, so if the missing reference
could cause a problem, it will do so all the time, not just when the
write barrier is missed at just the right moment.
2) Now a write barrier never happens for the field, avoiding the
(correct) detection of inconsistent write barriers when GODEBUG=wbshadow=1.
Change-Id: Iebd3962c727c0046495cc08914a8dc0808460e0e
Reviewed-on: https://go-review.googlesource.com/9019
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
I asked for this in CL 3742 and it was ignored.
Change-Id: I30ad05f87c7d9eccb11df7e19288e3ed2c7e2e3f
Reviewed-on: https://go-review.googlesource.com/6930
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
The unbounded list-based defer pool can grow infinitely.
This can happen if a goroutine routinely allocates a defer,
then blocks on one P, and is then unblocked, scheduled, and
frees the defer on another P.
The scenario was reported on golang-nuts list.
We've been here several times. Any unbounded local caches
are bad and grow to infinite size. This change introduces
central defer pool; local pools become fixed-size
with the only purpose of amortizing accesses to the
central pool.
Freedefer now executes on the system stack so that it does not
consume nosplit stack space.
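A toy sketch of the structure (a single local cache and one-at-a-time transfers; the real runtime keeps a fixed-size pool per P and moves defers to and from the central pool in batches):
```go
package main

import (
	"fmt"
	"sync"
)

type deferRec struct{ fn func() }

// Central pool, shared by everyone, protected by a lock.
var (
	centralMu   sync.Mutex
	centralPool []*deferRec
)

// Fixed-size local cache: it only amortizes trips to the central pool and can
// never grow without bound.
var (
	local  [32]*deferRec
	localN int
)

func getDefer() *deferRec {
	if localN > 0 {
		localN--
		d := local[localN]
		local[localN] = nil
		return d
	}
	centralMu.Lock()
	defer centralMu.Unlock()
	if n := len(centralPool); n > 0 {
		d := centralPool[n-1]
		centralPool = centralPool[:n-1]
		return d
	}
	return new(deferRec)
}

func putDefer(d *deferRec) {
	if localN < len(local) {
		local[localN] = d
		localN++
		return
	}
	centralMu.Lock()
	centralPool = append(centralPool, d)
	centralMu.Unlock()
}

func main() {
	d := getDefer()
	putDefer(d)
	fmt.Println(getDefer() == d) // true: reused from the local cache
}
```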
Change-Id: I1a27695838409259d1586a0adfa9f92bccf7ceba
Reviewed-on: https://go-review.googlesource.com/3967
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
The unbounded list-based defer pool can grow infinitely.
This can happen if a goroutine routinely allocates a defer,
then blocks on one P, and is then unblocked, scheduled, and
frees the defer on another P.
The scenario was reported on golang-nuts list.
We've been here several times. Any unbounded local caches
are bad and grow to infinite size. This change introduces
central defer pool; local pools become fixed-size
with the only purpose of amortizing accesses to the
central pool.
Change-Id: Iadcfb113ccecf912e1b64afc07926f0de9de2248
Reviewed-on: https://go-review.googlesource.com/3741
Reviewed-by: Keith Randall <khr@golang.org>
m.gcing has become overloaded to mean "don't preempt this g" in
general. Once the garbage collector is preemptible, the one thing it
*won't* mean is that we're in the garbage collector.
So, rename gcing to "preemptoff" and make it a string giving a reason
that preemption is disabled. gcing was never set to anything but 0 or
1, so we don't have to worry about there being a stack of reasons.
Change-Id: I4337c29e8e942e7aa4f106fc29597e1b5de4ef46
Reviewed-on: https://go-review.googlesource.com/3660
Reviewed-by: Russ Cox <rsc@golang.org>
Use typedmemmove, typedslicecopy, and adjust reflect.call
to execute the necessary write barriers.
Found with GODEBUG=wbshadow=2 mode.
Eventually that will run automatically, but right now
it still detects other missing write barriers.
Change-Id: Iec5b5b0c1be5589295e28e5228e37f1a92e07742
Reviewed-on: https://go-review.googlesource.com/2312
Reviewed-by: Keith Randall <khr@golang.org>
This is the detection code. It works well enough that I know of
a handful of missing write barriers. However, those are subtle
enough that I'll address them in separate followup CLs.
GODEBUG=wbshadow=1 checks for a write that bypassed the
write barrier at the next write barrier of the same word.
If a bug can be detected in this mode it is typically easy to
understand, since the crash says quite clearly what kind of
word has missed a write barrier.
GODEBUG=wbshadow=2 adds a check of the write barrier
shadow copy during garbage collection. Bugs detected at
garbage collection can be difficult to understand, because
there is no context for what the found word means.
Typically you have to reproduce the problem with allocfreetrace=1
in order to understand the type of the badly updated word.
Change-Id: If863837308e7c50d96b5bdc7d65af4969bf53a6e
Reviewed-on: https://go-review.googlesource.com/2061
Reviewed-by: Austin Clements <austin@google.com>
They are no longer needed now that C is gone.
goatoi -> atoi
gofuncname/funcname -> funcname/cfuncname
goroundupsize -> already existing roundupsize
Change-Id: I278bc33d279e1fdc5e8a2a04e961c4c1573b28c7
Reviewed-on: https://go-review.googlesource.com/2154
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Rename "gothrow" to "throw" now that the C version of "throw"
is no longer needed.
This change is purely mechanical except in panic.go where the
old version of "throw" has been deleted.
sed -i "" 's/[[:<:]]gothrow[[:>:]]/throw/g' runtime/*.go
Change-Id: Icf0752299c35958b92870a97111c67bcd9159dc3
Reviewed-on: https://go-review.googlesource.com/2150
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Dave Cheney <dave@cheney.net>
Calls to newproc/deferproc used to push & pop two extra arguments,
the argument size and the function to call. Now, we allocate space
for those arguments in the outargs section so we don't have to
modify the SP.
Defers now use the stack pointer (instead of the argument pointer)
to identify which frame they are associated with.
A followon CL might simplify funcspdelta and some of the stack
walking code.
Fixes issue #8641
Change-Id: I835ec2f42f0392c5dec7cb0fe6bba6f2aed1dad8
Reviewed-on: https://go-review.googlesource.com/1601
Reviewed-by: Russ Cox <rsc@golang.org>