golang.org/issue/17594 was caused by additab being called more than once for
an itab. golang.org/cl/32131 fixed that by making the itabs local symbols,
but that in turn causes golang.org/issue/18252 because now there are
multiple itab symbols in a process for a given (type,interface) pair and
different code paths can end up referring to different itabs, which breaks
lots of reflection stuff. So this makes itabs global again and just takes
care to only call additab once for each itab.
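In sketch form, the idea is an idempotent registration (a self-contained
illustration; addOnce and registered are made-up names, not the runtime's):

    package main

    // itab stands in for the runtime's itab type.
    type itab struct{ name string }

    // registered stands in for the runtime's itab hash table.
    var registered = make(map[*itab]bool)

    // addOnce: with itabs global again, more than one code path may try
    // to register the same itab, so registration must be idempotent.
    func addOnce(m *itab) {
        if registered[m] {
            return // already added; a second additab would corrupt the table
        }
        registered[m] = true
        // ... the real additab links m into the itab hash table here ...
    }

    func main() {
        m := &itab{name: "(*T, I)"}
        addOnce(m)
        addOnce(m)               // no-op
        println(len(registered)) // 1
    }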
Fixes #18252
Change-Id: I781a193e2f8dd80af145a3a971f6a25537f633ea
Reviewed-on: https://go-review.googlesource.com/34173
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
It takes me several minutes every time I want to find where the linker
writes out the _func structures. Add some comments to make this
easier.
Change-Id: Ic75ce2786ca4b25726babe3c4fe9cd30c85c34e2
Reviewed-on: https://go-review.googlesource.com/34390
Reviewed-by: Ian Lance Taylor <iant@golang.org>
gobuf.ctxt is set to nil from many places in assembly code and these
assignments require write barriers with the hybrid barrier.
Conveniently, in most of these places ctxt should already be nil, in
which case we don't need the barrier. This commit changes these places
to assert that ctxt is already nil.
gogo is more complicated, since ctxt may not already be nil. For gogo,
we manually perform the write barrier if ctxt is not nil.
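A Go-level sketch of the gogo case (the real code is assembly;
writebarrierptrPrewrite is a stub standing in for the runtime's
pre-write barrier):

    package sketch

    import "unsafe"

    // gobuf, abridged: ctxt is the only pointer field the barrier cares about.
    type gobuf struct {
        sp, pc uintptr
        ctxt   unsafe.Pointer
    }

    // Stub for the runtime's pre-write barrier.
    func writebarrierptrPrewrite(dst *unsafe.Pointer, src unsafe.Pointer) {}

    func gogoSketch(buf *gobuf) {
        if buf.ctxt != nil {
            // Rare path: record the old value for the hybrid barrier
            // before clearing buf.ctxt.
            writebarrierptrPrewrite(&buf.ctxt, nil)
            buf.ctxt = nil
        }
        // ... restore SP and PC from buf and resume ...
    }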
Updates #17503.
Change-Id: I9d75e27c75a1b7f8b715ad112fc5d45ffa856d30
Reviewed-on: https://go-review.googlesource.com/31764
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Currently this contains no write barriers because it's writing nil
pointers, but with the hybrid barrier, even these will produce write
barriers. However, since these are *gs and *ms, they don't need write
barriers, so we can simply eliminate them.
Updates #17503.
Change-Id: Ib188a60492c5cfb352814bf9b2bcb2941fb7d6c0
Reviewed-on: https://go-review.googlesource.com/31570
Reviewed-by: Rick Hudson <rlh@golang.org>
runtime.SetMutexProfileFraction(n int) will capture 1/n-th of stack
traces of goroutines holding contended mutexes if n > 0. From runtime/pprof,
pprof.Lookup("mutex").WriteTo writes the accumulated
stack traces to w (in essentially the same format that blocking
profiling uses).
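A minimal end-to-end usage example (the fraction of 5 and the workload
are arbitrary; output will vary from run to run):

    package main

    import (
        "os"
        "runtime"
        "runtime/pprof"
        "sync"
    )

    func main() {
        runtime.SetMutexProfileFraction(5) // sample ~1 in 5 contention events

        var mu sync.Mutex
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock()
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()

        // debug=1 writes a human-readable form.
        pprof.Lookup("mutex").WriteTo(os.Stdout, 1)
    }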
Change-Id: Ie0b54fa4226853d99aa42c14cb529ae586a8335a
Reviewed-on: https://go-review.googlesource.com/29650
Reviewed-by: Austin Clements <austin@google.com>
The current padding in the 'p' struct is hardcoded at 64 bytes. It should be the
cache line size. On ppc64x, the hardcoded value only happens to work because
sys.CacheLineSize is wrongly set to 64 bytes. This change fixes that by making
the padding equal to the cache line size, and fixes the cache line size for
ppc64/ppc64le to 128 bytes.
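The shape of the fix, roughly (a sketch; the real code sizes the pad
with sys.CacheLineSize):

    // Pad the per-P structure to a full cache line so that Ps owned by
    // different cores do not false-share.
    const cacheLineSize = 128 // ppc64/ppc64le; 64 on amd64, for example

    type p struct {
        // ... scheduler fields ...
        pad [cacheLineSize]byte
    }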
Fixes #16477
Change-Id: Ib7ec5195685116eb11ba312a064f41920373d4a3
Reviewed-on: https://go-review.googlesource.com/25370
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Minux Ma <minux@golang.org>
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The type sigtabtt was introduced by an automated tool in
https://golang.org/cl/167550043. It was the Go version of the C type
SigTab. However, when the C code using SigTab was converted to Go in
https://golang.org/cl/168500044 it was rewritten to use a different Go
type, sigTabT, rather than sigtabtt (the difference being that sigTabT
uses string where sigtabtt uses *int8 from the C type char*). So this is
just a dreg from the conversion that was never actually used.
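The two shapes side by side, abridged:

    // The unused C-derived dreg (removed by this CL):
    type sigtabtt struct {
        flags int32
        name  *int8 // direct translation of the C char*
    }

    // What the converted code actually uses:
    type sigTabT struct {
        flags int32
        name  string
    }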
Change-Id: I2ec6eb4b25613bf5e5ad1dbba1f4b5ff20f80f55
Reviewed-on: https://go-review.googlesource.com/27691
Run-TryBot: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This has a minor performance cost, but far less than is being gained by SSA.
As an experiment, enable it during the Go 1.7 beta.
Having frame pointers on by default makes Linux's perf, Intel VTune,
and other profilers much more useful, because it lets them gather a
stack trace efficiently on profiling events.
(It doesn't help us that much, since when we walk the stack we usually
need to look up PC-specific information as well.)
Fixes#15840.
Change-Id: I4efd38412a0de4a9c87b1b6e5d11c301e63f1a2a
Reviewed-on: https://go-review.googlesource.com/23451
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Race runtime also needs local malloc caches and currently uses
a mix of per-OS-thread and per-goroutine caches. This leads to
increased memory consumption. But more importantly, the cache of
synchronization objects is per-goroutine and we don't always
have goroutine context when freeing memory in GC. As a result,
synchronization object descriptors leak (more precisely, they
can be reused if another synchronization object is recreated
at the same address, but it does not always help). For example,
the added BenchmarkSyncLeak has effectively runaway memory
consumption (based on a real long running server).
This change updates race runtime with support for per-P contexts.
BenchmarkSyncLeak now stabilizes at ~1GB memory consumption.
Long term, this will allow us to remove race runtime dependency
on glibc (as malloc is the main cornerstone).
I've also implemented a different scheme to pass P context to
race runtime: scheduler notified race runtime about association
between G and P by calling procwire(g, p)/procunwire(g, p).
But it turned out to be very messy as we have lots of places
where the association changes (e.g. syscalls). So I dropped it
in favor of the current scheme: race runtime asks scheduler
about the current P.
Fixes #14533
Change-Id: Iad10d2f816a44affae1b9fed446b3580eafd8c69
Reviewed-on: https://go-review.googlesource.com/19970
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Add support for the context function set by runtime.SetCgoTraceback.
The context function was added in CL 17761, without support.
This CL is the support.
This CL has not been tested with real C code, as a working context
function for C code requires unwind support that does not seem to exist.
I wanted to get the CL out before the freeze.
I apologize for the length of this CL. It's mostly plumbing, but
unfortunately the plumbing is processor-specific.
Change-Id: I8ce11a0de9b3dafcc29efd2649d776e93bff0e90
Reviewed-on: https://go-review.googlesource.com/22508
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently all free Gs are added to one list. Split this into two
lists: one for free Gs with cached stacks and one for Gs without
cached stacks.
This lets us preferentially allocate Gs that already have a stack, but
more importantly, it sets us up to free cached G stacks concurrently.
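An illustrative sketch of the split, using slices in place of the
runtime's intrusive lists (gfget is a real runtime name; the rest is
made up for the illustration):

    type g struct{ stack []byte }

    var (
        gfreeStack   []*g // free Gs that kept a cached stack (preferred)
        gfreeNoStack []*g // free Gs whose stacks were returned
    )

    func gfget() *g {
        if n := len(gfreeStack); n > 0 {
            gp := gfreeStack[n-1]
            gfreeStack = gfreeStack[:n-1]
            return gp // stack already cached: no allocation needed
        }
        if n := len(gfreeNoStack); n > 0 {
            gp := gfreeNoStack[n-1]
            gfreeNoStack = gfreeNoStack[:n-1]
            gp.stack = make([]byte, 2048) // must allocate a fresh stack
            return gp
        }
        return nil
    }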
Change-Id: Idbe486f708997e1c9d166662995283f02d1eeb3c
Reviewed-on: https://go-review.googlesource.com/20664
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the tracer uses a global sequencer, which introduces a
significant slowdown on parallel machines (up to 10x).
Replace the global sequencer with a per-goroutine sequencer.
If we assign per-goroutine sequence numbers to only 3 types
of events (start, unblock and syscall exit), it is enough to
restore consistent partial ordering of all events. Even these
events don't need sequence numbers all the time (if goroutine
starts on the same P where it was unblocked, then start does
not need sequence number).
The burden of restoring the order is put on the trace parser.
Details of the algorithm are described in the comments.
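A sketch of the start-event case described above (all names are
illustrative; the real encoding lives in the tracer and its parser):

    // Each goroutine carries its own sequence counter; only the three
    // ordering-critical event types ever record it.
    type traceG struct {
        seq   uint64
        lastP int32 // P that emitted this goroutine's previous event
    }

    func emitGoStart(gt *traceG, p int32, emit func(seq *uint64)) {
        if gt.lastP == p {
            emit(nil) // started on the P that unblocked it: order implied
        } else {
            gt.seq++
            emit(&gt.seq) // cross-P: the parser needs seq to restore order
        }
        gt.lastP = p
    }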
On http benchmark with GOMAXPROCS=48:
no tracing: 5026 ns/op
tracing: 27803 ns/op (+453%)
with this change: 6369 ns/op (+26%, mostly for traceback)
Also trace size is reduced by ~22%. Average event size before: 4.63
bytes/event, after: 3.62 bytes/event.
Besides running the trace tests, I've also tested with manually broken
cputicks (random skew for each event, per-P skew, and episodic random skew).
In all cases the broken timestamps were detected and there were no test failures.
Change-Id: I078bde421ccc386a66f6c2051ab207bcd5613efa
Reviewed-on: https://go-review.googlesource.com/21512
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
These comments were left behind after runtime.h was converted
from C to Go. I examined the original code and tried to move these
comments to the places that make the most sense.
Change-Id: I8769d60234c0113d682f9de3bd8d6c34c450c188
Reviewed-on: https://go-review.googlesource.com/21969
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The new function runtime.SetCgoTraceback may be used to register stack
traceback and symbolizer functions, written in C, to do a stack
traceback from cgo code.
There is a sample implementation of runtime.SetCgoTraceback at
github.com/ianlancetaylor/cgosymbolizer. Just importing that package is
sufficient to get symbolic C backtraces.
Currently only supported on linux/amd64.
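Usage really is just the import (assuming linux/amd64 with cgo enabled):

    package main

    // The blank import's init function registers C traceback and
    // symbolizer functions via runtime.SetCgoTraceback, so C frames in
    // crash tracebacks and profiles become symbolic.
    import _ "github.com/ianlancetaylor/cgosymbolizer"

    func main() {
        // ... any cgo-using program; no further setup is needed ...
    }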
Change-Id: If96ee2eb41c6c7379d407b9561b87557bfe47341
Reviewed-on: https://go-review.googlesource.com/17761
Reviewed-by: Austin Clements <austin@google.com>
Only use REP;MOVSB if:
1) The CPUID flag says it is fast, and
2) The pointers are unaligned
Otherwise, use REP;MOVSQ.
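The same decision restated as Go pseudocode (the real code is amd64
assembly in memmove; hasERMS stands for the CPUID "Enhanced REP
MOVSB/STOSB" bit):

    // useRepMovsb reports whether memmove should copy with REP;MOVSB.
    func useRepMovsb(dst, src uintptr, hasERMS bool) bool {
        unaligned := dst%8 != 0 || src%8 != 0
        // MOVSB only pays off when the CPU says it is fast and MOVSQ
        // would need unaligned fix-up code anyway.
        return hasERMS && unaligned
    }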
Update #14630
Change-Id: I946b28b87880c08e5eed1ce2945016466c89db66
Reviewed-on: https://go-review.googlesource.com/21300
Reviewed-by: Nigel Tao <nigeltao@golang.org>
See #14874
This change tells the compiler to emit itab and itablink symbols in
situations where they could be useful; however the compiled code does
not actually make use of the new symbols yet.
Change-Id: I0db3e6ec0cb1f3b7cebd4c60229e4a48372fe586
Reviewed-on: https://go-review.googlesource.com/20888
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: Michel Lespinasse <walken@google.com>
Currently shrinkstack is only safe during STW because it adjusts
channel-related stack pointers and moves send/receive stack slots
without synchronizing with the channel code. Make it safe to use when
the world isn't stopped by:
1) Locking all channels the G is blocked on while adjusting the sudogs
and copying the area of the stack that may contain send/receive
slots.
2) For any stack frames that may contain send/receive slots, using an
atomic CAS to adjust pointers to prevent races between adjusting a
pointer in a receive slot and a concurrent send writing to that
receive slot.
In principle, the synchronization could be finer-grained. For example,
we considered synchronizing around the sudogs, which would allow
channel operations involving other Gs to continue if the G being
shrunk was far enough down the send/receive queue. However, using the
channel lock means no additional locks are necessary in the channel
code. Furthermore, the stack shrinking code holds the channel lock for
a very short time (much less than the time required to shrink the
stack).
This does not yet make stack shrinking concurrent; it merely makes
doing so safe.
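An illustrative reduction of point (2) above (the real logic lives in
the stack copying code; names here are made up):

    package sketch

    import (
        "sync/atomic"
        "unsafe"
    )

    // adjustSlot retargets a receive-slot pointer from the old stack to
    // the new one. If the CAS fails, a concurrent send already wrote to
    // the slot, and the shrink must treat that value as the live one.
    func adjustSlot(slot *unsafe.Pointer, oldp, newp unsafe.Pointer) bool {
        return atomic.CompareAndSwapPointer(slot, oldp, newp)
    }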
This has negligible effect on the go1 and garbage benchmarks.
For #12967.
Change-Id: Ia49df3a8a7be4b36e365aac4155a2416b94b988c
Reviewed-on: https://go-review.googlesource.com/20042
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Currently sudog.elem is never accessed concurrently, so in several
cases we drop the channel lock just before reading/writing the
sent/received value from/to sudog.elem. However, concurrent stack
shrinking is going to have to adjust sudog.elem to point to the new
stack, which means it needs a way to synchronize with accesses to
sudog.elem. Hence, add sudog.elem to the fields protected by
hchan.lock and scoot the unlocks down past the uses of sudog.elem.
While we're here, better document the channel synchronization rules.
For #12967.
Change-Id: I3ad0ca71f0a74b0716c261aef21b2f7f13f74917
Reviewed-on: https://go-review.googlesource.com/20040
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Currently the g.waiting list created by a select is in poll order.
However, nothing depends on this, and we're going to need access to
the channel lock order in other places shortly, so modify select to
put the waiting list in channel lock order.
For #12967.
Change-Id: If0d38816216ecbb37a36624d9b25dd96e0a775ec
Reviewed-on: https://go-review.googlesource.com/20037
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
Given a G, there's currently no way to find the channel it's blocking
on. We'll need this information to fix a (probably theoretical) bug in
select and to implement concurrent stack shrinking, so record the
channel in the sudog.
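The struct change, abridged (surrounding runtime types stubbed out):

    package sketch

    import "unsafe"

    type g struct{ /* ... */ }
    type hchan struct{ /* ... */ }

    // sudog represents a g in a wait list, such as a channel's sendq.
    type sudog struct {
        g    *g
        elem unsafe.Pointer // data element (may point into a stack)
        c    *hchan         // channel this sudog is blocked on: the new field
    }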
For #12967.
Change-Id: If8fb63a140f1d07175818824d08c0ebeec2bdf66
Reviewed-on: https://go-review.googlesource.com/20035
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Change Cond implementation to use a notification list such that waiters
can first register for a notification, release the lock, then actually
wait. Signalers never have to park anymore.
This is intended to address an issue in the previous implementation
where Broadcast could fail to signal all waiters.
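The shape of the new Wait under this scheme, abridged (the
runtime_notifyList* calls are the runtime-provided hooks):

    func (c *Cond) Wait() {
        t := runtime_notifyListAdd(&c.notify) // 1. register for a notification
        c.L.Unlock()                          // 2. release the lock
        runtime_notifyListWait(&c.notify, t)  // 3. only now actually wait
        c.L.Lock()
    }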
Results of the existing benchmark are below.
benchmark            iterations     original          new    diff
BenchmarkCond1-48       2000000    745 ns/op    755 ns/op   +1.3%
BenchmarkCond2-48       1000000   1545 ns/op   1532 ns/op   -0.8%
BenchmarkCond4-48        300000   3833 ns/op   3896 ns/op   +1.6%
BenchmarkCond8-48        200000  10049 ns/op  10257 ns/op   +2.1%
BenchmarkCond16-48       100000  21123 ns/op  21236 ns/op   +0.5%
BenchmarkCond32-48        30000  40393 ns/op  41097 ns/op   +1.7%
Fixes #14064
Change-Id: I083466d61593a791a034df61f5305adfb8f1c7f9
Reviewed-on: https://go-review.googlesource.com/18892
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Caleb Spare <cespare@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
In particular, write down the rules for stack ownership because the
details of this are about to get very important with concurrent stack
shrinking. (Interestingly, the details don't actually change, but
anything that's currently skating on thin ice is likely to fall
through.)
For #12967.
Change-Id: I561e2610e864295e9faba07717a934aabefcaab9
Reviewed-on: https://go-review.googlesource.com/20034
Reviewed-by: Rick Hudson <rlh@golang.org>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
_Genqueue and _Gscanenqueue were introduced as part of the GC quiesce
code. The quiesce code was removed by 197aa9e, but these states and
some associated code stuck around. Remove them.
Change-Id: I69df81881602d4a431556513dac2959668d27c20
Reviewed-on: https://go-review.googlesource.com/19638
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
It's not needed on other OSes.
Change-Id: Ia6b13510585392a7062374806527d33876beba2a
Reviewed-on: https://go-review.googlesource.com/19818
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Also eliminates per-maptype hiter and hmap types, since they're not
really needed anyway. Update packages reflect and runtime
accordingly.
Reduces golang.org/x/tools/cmd/godoc's text segment by ~170kB:
        text     data     bss       dec     hex  filename
    13085702   140640  151520  13377862  cc2146  godoc.before
    12915382   140640  151520  13207542  c987f6  godoc.after
Updates #6853.
Change-Id: I948b2bc1f22d477c1756204996b4e3e1fb568d81
Reviewed-on: https://go-review.googlesource.com/16610
Reviewed-by: Keith Randall <khr@golang.org>
Currently p.gcBgMarkWorker is a *g. Change it to a guintptr. This
eliminates a write barrier during the subtle mark worker parking dance
(which isn't known to be causing problems, but may).
Change-Id: Ibf12c05ac910820448059e69a68e5b882c993ed8
Reviewed-on: https://go-review.googlesource.com/18970
Run-TryBot: Austin Clements <austin@google.com>
Reviewed-by: Rick Hudson <rlh@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
[Repeat of CL 18343 with build fixes.]
Before, NumGoroutine counted system goroutines and Stack (usually) didn't show them,
which was inconsistent and confusing.
To resolve which way they should be consistent, it seems like
    package main

    import "runtime"

    func main() { println(runtime.NumGoroutine()) }
should print 1 regardless of internal runtime details. Make it so.
Fixes #11706
Change-Id: If26749fec06aa0ff84311f7941b88d140552e81d
Reviewed-on: https://go-review.googlesource.com/18432
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
Before, NumGoroutine counted system goroutines and Stack (usually) didn't show them,
which was inconsistent and confusing.
To resolve which way they should be consistent, it seems like
    package main

    import "runtime"

    func main() { println(runtime.NumGoroutine()) }
should print 1 regardless of internal runtime details. Make it so.
Fixes #11706
Change-Id: I6bfe26a901de517728192cfb26a5568c4ef4fe47
Reviewed-on: https://go-review.googlesource.com/18343
Reviewed-by: Austin Clements <austin@google.com>
When calling a Go function on a C thread, if the C thread already has an
alternate signal stack, use that signal stack instead of installing a
new one.
Update #9896.
Change-Id: I62aa3a6a4a1dc4040fca050757299c8e6736987c
Reviewed-on: https://go-review.googlesource.com/18108
Reviewed-by: Russ Cox <rsc@golang.org>
I've already turned away one attempt to remove this field.
As the comment above the struct says, many tools know the layout.
The field cannot simply be removed.
It was one thing to remove the field's name, but the TODO should
not have been added.
Change-Id: If40eacf0eb35835082055e129e2b88333a0731b9
Reviewed-on: https://go-review.googlesource.com/17741
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Currently we wake up new worker threads whenever we pass
through the scheduler with nmspinning==0. This leads to
lots of unnecessary thread wakeups.
Instead let only spinning threads wake up new spinning threads.
For the following program:
    package main

    import "runtime"

    func main() {
        for i := 0; i < 1e7; i++ {
            runtime.Gosched()
        }
    }
Before:
    $ time ./test
    real    0m4.278s
    user    0m7.634s
    sys     0m1.423s

    $ strace -c ./test
    % time     seconds  usecs/call     calls    errors  syscall
     99.93    9.314936           3   2685009     17536  futex
After:
    $ time ./test
    real    0m1.200s
    user    0m1.181s
    sys     0m0.024s

    $ strace -c ./test
    % time     seconds  usecs/call     calls    errors  syscall
      3.11    0.000049          25         2            futex
Fixes #13527
Change-Id: Ia1f5bf8a896dcc25d8b04beb1f4317aa9ff16f74
Reviewed-on: https://go-review.googlesource.com/17540
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Dmitry Vyukov <dvyukov@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Replace the cross-platform but unsafe [4]uintptr type with an
OS-specific type, sigset. Most OSes already define sigset, and this
change defines a suitable sigset for the OSes that don't (darwin,
openbsd). The OSes that don't use m.sigmask (windows, plan9, nacl)
now define sigset as the empty type, struct{}.
The gain is strongly typed access to m.sigmask, saving a dynamic
size sanity check and unsafe.Pointer casting. Also, some storage is
saved for each M, since [4]uintptr was conservative for most OSes.
The cost is that OSes that don't need m.sigmask have to define sigset.
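Concretely, each OS picks its own definition; for example (linux shown,
the no-op OSes in the comment):

    // linux: mirrors the kernel's sigset_t.
    type sigset [2]uint32

    // windows, plan9, and nacl don't use m.sigmask, so they define:
    //     type sigset struct{}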
Completes ./all.bash with GOOS=linux on amd64.
Completes ./make.bash with GOOSes openbsd, android, plan9, windows,
darwin, solaris, netbsd, freebsd, and dragonfly, all on amd64.
With GOOS=nacl, ./make.bash failed with a seemingly unrelated error.
[Replay of CL 16942 by Elias Naur.]
Change-Id: I98f144d626033ae5318576115ed635415ac71b2c
Reviewed-on: https://go-review.googlesource.com/17033
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
These are methods that are "obviously" going to get inlined -- until you build
with -l, when they can trigger a stack split at a bad time.
Fixes #11482
Change-Id: Ia065c385978a2e7fe9f587811991d088c4d68325
Reviewed-on: https://go-review.googlesource.com/17165
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
Replace the cross-platform but unsafe [4]uintptr type with an
OS-specific type, sigset. Most OSes already define sigset, and this
change defines a suitable sigset for the OSes that don't (darwin,
openbsd). The OSes that don't use m.sigmask (windows, plan9, nacl)
now define sigset as the empty type, struct{}.
The gain is strongly typed access to m.sigmask, saving a dynamic
size sanity check and unsafe.Pointer casting. Also, some storage is
saved for each M, since [4]uintptr was conservative for most OSes.
The cost is that OSes that don't need m.sigmask have to define sigset.
Completes ./all.bash with GOOS=linux on amd64.
Completes ./make.bash with GOOSes openbsd, android, plan9, windows,
darwin, solaris, netbsd, freebsd, and dragonfly, all on amd64.
With GOOS=nacl, ./make.bash failed with a seemingly unrelated error.
R=go1.7
Change-Id: Ib460379f063eb83d393e1c5efe7333a643c1595e
Reviewed-on: https://go-review.googlesource.com/16942
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
golang.org/cl/16346 changed the runtime on linux/386 to invoke the vsyscall
helper via a PIC sequence (CALL 0x10(GS)) when dynamically linking. But it's
actually quite easy to make that code sequence work all the time, so do that,
and remove the ugly machinery that passed the buildmode from the go tool to the
assembly.
This means enlarging m.tls so that we can safely access 0x10(GS) (GS is set to
&m.tls + 4, so 0x10(GS) accesses m_tls[5]).
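The offset arithmetic, spelled out (4-byte slots on 386):

    GS       = &m.tls[0] + 4    // i.e. GS points at m.tls[1]
    0x10(GS) = &m.tls[1] + 16   // four 4-byte slots further along
             = &m.tls[5]

So m.tls needs at least six entries for 0x10(GS) to stay in bounds.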
Change-Id: I1345c34029b149cb5f25320bf19a3cdd73a056fa
Reviewed-on: https://go-review.googlesource.com/16796
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Allows removing fields that aren't relevant to a particular OS or
changing their types to match the underlying OS system calls they'll
be used for.
Change-Id: I5cea89ee77b4e7b985bff41337e561887c3272ff
Reviewed-on: https://go-review.googlesource.com/16176
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
runtime/internal/sys will hold system-, architecture- and config-
specific constants.
Updates #11647
Change-Id: I6db29c312556087a42e8d2bdd9af40d157c56b54
Reviewed-on: https://go-review.googlesource.com/16817
Reviewed-by: Russ Cox <rsc@golang.org>
The tracing code is currently called from contexts such as sysmon and
the scheduler where write barriers are not allowed. Unfortunately,
while the common paths through the tracing code do not have write
barriers, many of the less common paths dealing with buffer overflow
and recycling do.
This change replaces all *traceBufs with traceBufPtrs. In the style of
guintptr, etc., the GC does not trace traceBufPtrs and write barriers
do not apply when these pointers are written. Since traceBufs are
allocated from non-GC'd memory and manually managed, this is always
safe.
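The pattern, abridged (the same trick as guintptr: an integer type the
GC never traces):

    package sketch

    import "unsafe"

    type traceBuf struct{ /* ... */ }

    // traceBufPtr is a *traceBuf hidden from the GC. Writing one needs
    // no write barrier; safe only because traceBufs are manually managed.
    type traceBufPtr uintptr

    func (tp traceBufPtr) ptr() *traceBuf   { return (*traceBuf)(unsafe.Pointer(tp)) }
    func (tp *traceBufPtr) set(b *traceBuf) { *tp = traceBufPtr(unsafe.Pointer(b)) }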
Updates #10600.
Change-Id: I52b992d36d1b634ebd855c8cde27947ec14f59ba
Reviewed-on: https://go-review.googlesource.com/16812
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
This change breaks out most of the atomics functions in the runtime
into package runtime/internal/atomic. It adds some basic support
in the toolchain for runtime packages, and also modifies linux/arm
atomics to remove the dependency on the runtime's mutex. The mutexes
have been replaced with spinlocks.
all trybots are happy!
In addition to the trybots, I've tested on the darwin/arm64 builder,
on the darwin/arm builder, and on a ppc64le machine.
Change-Id: I6698c8e3cf3834f55ce5824059f44d00dc8e3c2f
Reviewed-on: https://go-review.googlesource.com/14204
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
This eliminates many write barriers in the scheduler code that are
unnecessary and will interfere with upcoming changes where the garbage
collector will have to invoke run queue functions in contexts that
must not have write barriers.
Change-Id: I702d0ac99cfd00ffff406e7362917db6a43e7e55
Reviewed-on: https://go-review.googlesource.com/16556
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>