Commit graph

149 commits

Author SHA1 Message Date
Michael Anthony Knyszek
6d85891b29 [dev.typeparams] runtime: replace uses of runtime/internal/sys.PtrSize with internal/goarch.PtrSize [generated]
[git-generate]
cd src/runtime/internal/math
gofmt -w -r "sys.PtrSize -> goarch.PtrSize" .
goimports -w *.go
cd ../..
gofmt -w -r "sys.PtrSize -> goarch.PtrSize" .
goimports -w *.go

Change-Id: I43491cdd54d2e06d4d04152b3d213851b7d6d423
Reviewed-on: https://go-review.googlesource.com/c/go/+/328337
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2021-06-17 18:54:48 +00:00
Cherry Mui
3298c749ac [dev.typeparams] runtime: undo go'd closure argument workaround
In CL 298669 we added defer/go wrapping, and, as closures are not
allowed to escape when compiling the runtime, we worked around that
restriction by rewriting go'd closures into argument-less,
non-capturing closures; such a closure is not a real closure and so
does not need to escape.

The previous CL removes that restriction, so we can now undo the
workaround.

Updates #40724.

Change-Id: Ic7bf129da4aee7b7fdb7157414eca943a6a27264
Reviewed-on: https://go-review.googlesource.com/c/go/+/325110
Trust: Cherry Mui <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
2021-06-04 17:00:32 +00:00
Matthew Dempsky
5c1e119d48 [dev.typeparams] all: merge master (f22ec51) into dev.typeparams
Merge List:

+ 2021-05-25 f22ec51deb doc: add Go 1.17 release note about inlining functions with closures
+ 2021-05-25 8b462d7567 cmd/go: add a -compat flag to 'go mod tidy'
+ 2021-05-24 c89f1224a5 net: verify results from Lookup* are valid domain names
+ 2021-05-24 08a8fa9c47 misc/wasm: ensure correct stack pointer in catch clauses
+ 2021-05-24 32b73ae180 cmd/go: align checks of module path during initialization.
+ 2021-05-24 15d9d4a009 cmd/go: add tests illustrating what happens when Go 1.16 is used in a Go 1.17 main module
+ 2021-05-24 873401df5b cmd/compile: ensure equal functions don't do unaligned loads
+ 2021-05-24 b83610699a cmd/compile: record regabi status in DW_AT_producer
+ 2021-05-24 a22e317220 cmd/compile: always include underlying type for map types
+ 2021-05-24 4356e7e85f runtime: account for spill slots in Windows callback compilation
+ 2021-05-24 52d7033ff6 cmd/go/internal/modload: set the default GoVersion in a single location
+ 2021-05-24 05819bc104 cmd/go/internal/modcmd: factor out a type for flags whose arguments are Go versions
+ 2021-05-22 cca23a7373 cmd/compile: revert CL/316890
+ 2021-05-21 f87194cbd7 doc/go1.17: document changes to net/http package
+ 2021-05-21 217f5dd496 doc: document additional atomic.Value methods
+ 2021-05-21 3c656445f1 cmd/go: in TestScript/mod_replace, download an explicit module path
+ 2021-05-21 76b2d6afed os: document that StartProcess puts files into blocking mode
+ 2021-05-21 e4d7525c3e cmd/dist: display first class port status in json output
+ 2021-05-21 4fb10b2118 cmd/go: in 'go mod download' without args, don't save module zip sums
+ 2021-05-21 4fda54ce3f doc/go1.17: document database/sql changes for Go 1.17
+ 2021-05-21 8876b9bd6a doc/go1.17: document io/fs changes for Go 1.17
+ 2021-05-21 5fee772c87 doc/go1.17: document archive/zip changes for Go 1.17
+ 2021-05-21 3148694f60 cmd/go: remove warning from module deprecation notice printing
+ 2021-05-21 7e63c8b765 runtime: wait for Go runtime to initialize in Windows signal test
+ 2021-05-21 831573cd21 io/fs: added an example for io/fs.WalkDir
+ 2021-05-20 baa934d26d cmd: go get golang.org/x/tools/analysis@49064d23 && go mod vendor
+ 2021-05-20 7c692cc7ea doc/go1.17: document changes to os package
+ 2021-05-20 ce9a3b79d5 crypto/x509: add new FreeBSD 12.2+ trusted certificate folder
+ 2021-05-20 f8be906d74 test: re-enable test on riscv64 now that it supports external linking
+ 2021-05-20 def5360541 doc/go1.17: add release notes for OpenBSD ports
+ 2021-05-20 ef1f52cc38 doc/go1.17: add release note for windows/arm64 port
+ 2021-05-20 bb7495a46d doc/go1.17: document new math constants
+ 2021-05-20 f07e4dae3c syscall: document NewCallback and NewCallbackCDecl limitations
+ 2021-05-20 a8d85918b6 misc/cgo/testplugin: skip TestIssue25756pie on darwin/arm64 builder
+ 2021-05-19 6c1c055d1e cmd/internal/moddeps: use filepath.SkipDir only on directories
+ 2021-05-19 658b5e66ec net: return nil UDPAddr from ReadFromUDP
+ 2021-05-19 15a374d5c1 test: check portable error message on issue46234.go
+ 2021-05-18 eeadce2d87 go/build/constraint: fix parsing of "// +build" (with no args)
+ 2021-05-18 6d2ef2ef2a cmd/compile: don't emit inltree for closure within body of inlined func
+ 2021-05-18 048cb4ceee crypto/x509: remove duplicate import

Change-Id: Ib0442e3555493805f2aa1df26dfd6898df989a37
2021-05-25 15:37:20 -07:00
Keith Randall
a22e317220 cmd/compile: always include underlying type for map types
This is a different fix for #37716.

Should help make the fix for #46283 easier, since we will no longer
need to keep compiler-generated hash functions and the runtime
hash function in sync.

Change-Id: I84cb93144e425dcd03afc552b5fbd0f2d2cc6d39
Reviewed-on: https://go-review.googlesource.com/c/go/+/322150
Trust: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2021-05-24 17:43:50 +00:00
Cherry Mui
626e89c261 [dev.typeparams] runtime: replace funcPC with internal/abi.FuncPCABIInternal
At this point all funcPC references are ABIInternal functions.
Replace with the intrinsics.

Change-Id: I3ba7e485c83017408749b53f92877d3727a75e27
Reviewed-on: https://go-review.googlesource.com/c/go/+/321954
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2021-05-21 22:40:36 +00:00
Keith Randall
2c05ba4ae0 runtime: top align tinyallocs in race mode
Top align allocations in tinyalloc buckets when in race mode.
This will make checkptr checks more reliable, because any code
that modifies a pointer past the end of the object will trigger
a checkptr error.

No test, because we need -race for this to actually kick in.  We could
add it to the race detector tests, but the race detector tests are all
geared towards race detector reports, not checkptr reports. Mucking
with parsing reports is more than a test is worth.
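
As a hypothetical illustration (not from this CL) of the kind of code
checkptr reports, assuming a build with -race, which enables checkptr:

	package main

	import "unsafe"

	var sink *[8]byte

	func main() {
		x := new([8]byte) // escapes: a heap-allocated, pointer-free, tiny object
		sink = x
		// Deriving a pointer past the end of the object; with -race
		// (checkptr enabled) this arithmetic is reported, and top
		// alignment makes the result reliably land outside the object.
		p := unsafe.Pointer(uintptr(unsafe.Pointer(x)) + unsafe.Sizeof(*x) + 1)
		_ = p
	}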

Fixes #38872

Change-Id: Ie56f0fbd1a9385539f6631fd1ac40c3de5600154
Reviewed-on: https://go-review.googlesource.com/c/go/+/315029
Trust: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
2021-04-29 17:39:31 +00:00
Michael Anthony Knyszek
0b9ca4d907 runtime/metrics: add tiny allocs metric
Currently tiny allocations are not represented directly in either
MemStats or runtime/metrics; they appear only indirectly in MemStats
via Mallocs. Add them to runtime/metrics by first merging
memstats.tinyallocs into consistentHeapStats (just for simplicity; it's
monotonic, so metrics would still be self-consistent if we just read it
atomically) and then adding /gc/heap/tiny/allocs:objects to the list of
supported metrics.
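
As a minimal sketch (illustrative, not part of this CL), the new metric
can be read through the public runtime/metrics API like so:

	package main

	import (
		"fmt"
		"runtime/metrics"
	)

	func main() {
		// Sample the tiny-allocation counter added by this CL.
		s := []metrics.Sample{{Name: "/gc/heap/tiny/allocs:objects"}}
		metrics.Read(s)
		if s[0].Value.Kind() == metrics.KindUint64 {
			fmt.Println("tiny allocs so far:", s[0].Value.Uint64())
		}
	}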

Change-Id: Ie478006ab942a3e877b4a79065ffa43569722f3d
Reviewed-on: https://go-review.googlesource.com/c/go/+/312909
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-04-27 13:59:22 +00:00
Michael Anthony Knyszek
44dd06670f runtime: support register ABI Go functions from Windows callbacks
This change modifies the system that allows Go functions to be set as
callbacks in various Windows systems to support the new register ABI.

For #40724.

Change-Id: Ie067f9e8a76c96d56177d7aa88f89cbe7223e12e
Reviewed-on: https://go-review.googlesource.com/c/go/+/300113
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
2021-03-31 20:09:03 +00:00
Austin Clements
4e1bf8ed38 runtime: add GC testing helpers for regabi signature fuzzer
This CL adds a set of helper functions for testing GC interactions.
These are intended for use in the regabi signature fuzzer, but are
generally useful for GC tests, so we make them generally available to
runtime tests.

These provide:

1. An easy way to force stack movement, for testing stack copying.

2. A simple and robust way to check the reachability of a set of
pointers.

3. A way to check what general category of memory a pointer points to,
mostly so tests can make sure they're testing what they mean to.

For #40724, but generally useful.

Change-Id: I15d33ccb3f5a792c0472a19c2cc9a8b4a9356a66
Reviewed-on: https://go-review.googlesource.com/c/go/+/305330
Trust: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
2021-03-29 21:50:16 +00:00
Than McIntosh
769d4b68ef cmd/compile: wrap/desugar defer calls for register abi
Adds code to the compiler's "order" phase to rewrite go and defer
statements to always be argument-less. E.g.

 defer f(x,y)       =>     x1, y1 := x, y
			   defer func() { f(x1, y1) }()

This transformation is not beneficial on its own, but it helps
simplify runtime defer handling for the new register ABI (when
invoking deferred functions on the panic path, the runtime doesn't
need to manage the complexity of determining which args to pass in
register vs memory).

This feature is currently enabled by default if GOEXPERIMENT=regabi or
GOEXPERIMENT=regabidefer is in effect.

Included in this CL are some workarounds in the runtime to ensure that
"go" statement targets in the runtime are argument-less already (since
wrapping them can potentially introduce heap-allocated closures, which
are currently not allowed). The expectation is that these workarounds
will be temporary, and can go away once we either A) change the rules
about heap-allocated closures, or B) implement some other scheme for
handling go statements.

Change-Id: I01060d79a6b140c6f0838d6e6813f807ccdca319
Reviewed-on: https://go-review.googlesource.com/c/go/+/298669
Trust: Than McIntosh <thanm@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
2021-03-23 23:08:19 +00:00
Michael Anthony Knyszek
bd6aeca968 runtime: prepare arenas for use incrementally
This change moves the call of sysMap from (*mheap).sysAlloc into
(*mheap).grow, so we only sysMap what we're going to use in the near
future (thanks to the curArena mechanism). The purpose of this change is
to better support systems with strict overcommit rules which generally
accept reserved memory but not prepared memory (see malloc.go for exact
descriptions of these states).

This move requires changing linearAlloc to only optionally map memory.
In one case, with mheap.heapArenaAlloc, we do want it to map memory. But
now in the other case, with mheap.arena, we don't, because we want grow
to take care of it.

The risk with this change is that we may make more syscalls than before
on systems with 64 MiB arenas, but because heap growth is relatively
rare this is unlikely to be a noticeable issue. We also bound the
number of syscalls made by only extending curArena (and thus mapping)
by pallocChunkPages*pageSize, which is 4 MiB.

Fixes #42612.

Change-Id: I736df696afe78ddb1a747a896caa0db8726027e5
Reviewed-on: https://go-review.googlesource.com/c/go/+/270537
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2021-03-15 20:20:51 +00:00
Michael Anthony Knyszek
f009b5b226 runtime: support register ABI for finalizers
This change modifies runfinq to properly pass arguments to finalizers in
registers via reflectcall.

For #40724.

Change-Id: I414c0eff466ef315a0eb10507994e598dd29ccb2
Reviewed-on: https://go-review.googlesource.com/c/go/+/300112
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2021-03-11 17:26:22 +00:00
Michael Pratt
c41bf9ee81 runtime: check partial lock ranking order
To ease readability we typically keep the partial order lists sorted by
rank. This isn't required for correctness, it just makes it (slightly)
easier to read the lists.

Currently we must notice out-of-order entries during code review, which
is an error-prone process.

Add a test to enforce ordering, and fix the errors that have crept in.
Most of the existing errors were misordered lockRankHchan or
lockRankPollDesc.

While we're here, I've moved the correctness check that the partial
ordering satisfies the total ordering from init to a test case. This
will allow us to catch these errors without even running
staticlockranking.
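
As a hedged sketch (names and shapes are illustrative, not the
runtime's actual test code), such an ordering check amounts to:

	package main

	import "fmt"

	// checkPartialOrder verifies that, for each rank, the list of
	// allowed lower-rank locks is sorted and strictly below the rank.
	func checkPartialOrder(partial [][]int) []string {
		var errs []string
		for rank, deps := range partial {
			for i, d := range deps {
				if d >= rank {
					errs = append(errs, fmt.Sprintf("rank %d lists rank %d, which is not lower", rank, d))
				}
				if i > 0 && deps[i-1] >= d {
					errs = append(errs, fmt.Sprintf("rank %d: list out of order at entry %d", rank, i))
				}
			}
		}
		return errs
	}

	func main() {
		partial := [][]int{1: {0}, 2: {1, 0}} // rank 2's list is unsorted
		for _, e := range checkPartialOrder(partial) {
			fmt.Println(e)
		}
	}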

Change-Id: I9c11abe49ea26c556439822bb6a3183129600c3b
Reviewed-on: https://go-review.googlesource.com/c/go/+/300171
Trust: Michael Pratt <mpratt@google.com>
Trust: Dan Scales <danscales@google.com>
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Dan Scales <danscales@google.com>
2021-03-10 19:07:29 +00:00
Russ Cox
8ac23a1f15 runtime: document, clean up internal/sys
Document what the values in internal/sys mean.

Remove various special cases for arm64 in the code using StackAlign.

Delete Uintreg - it was for GOARCH=amd64p32,
which was specific to GOOS=nacl and has been retired.

This CL is part of a stack adding windows/arm64
support (#36439), intended to land in the Go 1.17 cycle.
This CL is, however, not windows/arm64-specific.
It is cleanup meant to make the port (and future ports) easier.

Change-Id: I40e8fa07b4e192298b6536b98a72a751951a4383
Reviewed-on: https://go-review.googlesource.com/c/go/+/288795
Trust: Russ Cox <rsc@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2021-02-19 00:01:38 +00:00
Michael Anthony Knyszek
b116404444 runtime: shift timeHistogram buckets and allow negative durations
Today, timeHistogram, when copied, has the wrong set of counts for the
bucket that should represent (-inf, 0), when in fact it contains [0, 1).
In essence, the buckets are all shifted over by one from where they're
supposed to be.

But this also means that the existence of the overflow bucket is wrong:
the top bucket is supposed to extend to infinity, and what we're really
missing is an underflow bucket to represent the range (-inf, 0).

We could just always zero this bucket and continue ignoring negative
durations, but that likely isn't prudent.

timeHistogram is intended to be used with differences in nanotime, but
depending on how a platform is implemented (or due to a bug in that
platform) it's possible to get a negative duration without having done
anything wrong. We should just be resilient to that and be able to
detect it.

So this change removes the overflow bucket and replaces it with an
underflow bucket, and timeHistogram no longer panics when faced with a
negative duration.
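
As a hedged, self-contained sketch of the recording path described
above (the bucket math here is illustrative and differs from the
runtime's actual implementation):

	package main

	import "sync/atomic"

	// timeHist has an underflow counter for (-inf, 0) plus regular
	// buckets; the top regular bucket extends to +inf.
	type timeHist struct {
		underflow uint64
		counts    [4]uint64
	}

	func (h *timeHist) record(d int64) {
		if d < 0 {
			// Negative durations land here instead of panicking.
			atomic.AddUint64(&h.underflow, 1)
			return
		}
		b := 0
		for d >= 10 && b < len(h.counts)-1 { // crude log-scale bucketing
			d /= 10
			b++
		}
		atomic.AddUint64(&h.counts[b], 1)
	}

	func main() {
		var h timeHist
		h.record(-5) // tolerated: counted in the underflow bucket
		h.record(123)
	}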

Fixes #43328.
Fixes #43329.

Change-Id: If336425d7d080fd37bf071e18746800e22d38108
Reviewed-on: https://go-review.googlesource.com/c/go/+/279468
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-12-23 17:31:18 +00:00
Michael Pratt
9393b5bae5 runtime: add heap lock assertions
Some functions that required holding the heap lock _or_ world stop have
been simplified to simply requiring the heap lock. This is conceptually
simpler and taking the heap lock during world stop is guaranteed to not
contend. This was only done on functions already called on the
systemstack to avoid too many extra systemstack calls in GC.

Updates #40677

Change-Id: I15aa1dadcdd1a81aac3d2a9ecad6e7d0377befdc
Reviewed-on: https://go-review.googlesource.com/c/go/+/250262
Run-TryBot: Michael Pratt <mpratt@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Trust: Michael Pratt <mpratt@google.com>
2020-10-30 20:21:14 +00:00
Michael Anthony Knyszek
36c5edd8d9 runtime: add timeHistogram type
This change adds a concurrent HDR time histogram to the runtime with
tests. It also adds a function to generate boundaries for use by the
metrics package.

For #37112.

Change-Id: Ifbef8ddce8e3a965a0dcd58ccd4915c282ae2098
Reviewed-on: https://go-review.googlesource.com/c/go/+/247046
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 21:47:49 +00:00
Michael Anthony Knyszek
b08dfbaa43 runtime,runtime/metrics: add memory metrics
This change adds support for a variety of runtime memory metrics and
contains the base implementation of Read for the runtime/metrics
package, which lives in the runtime.

It also adds testing infrastructure for the metrics package, and a bunch
of format and documentation tests.

For #37112.

Change-Id: I16a2c4781eeeb2de0abcb045c15105f1210e2d8a
Reviewed-on: https://go-review.googlesource.com/c/go/+/247041
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
2020-10-26 18:29:05 +00:00
Michael Anthony Knyszek
79781e8dd3 runtime: move malloc stats into consistentHeapStats
This change moves the mcache-local malloc stats into the
consistentHeapStats structure so the malloc stats can be managed
consistently with the memory stats. The one exception here is
tinyAllocs, for which moving it into the global stats would incur
several atomic writes on the fast path; microbenchmarks on just one CPU
core have shown a 50% loss in throughput. Since the tiny allocation
count isn't exposed anyway and is always blindly added to both allocs
and frees, let it stay inconsistent and flush the tiny allocation count
every so often.

Change-Id: I2a4b75f209c0e659b9c0db081a3287bf227c10ca
Reviewed-on: https://go-review.googlesource.com/c/go/+/247039
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 18:28:56 +00:00
Michael Anthony Knyszek
8ebc58452a runtime: delineate which memstats are system stats with a type
This change modifies the type of several mstats fields to be a new type:
sysMemStat. This type has the same structure as the fields used to have.

The purpose of this change is to make it very clear which stats may be
used in various functions for accounting (usually the platform-specific
sys* functions, but there are others). Currently there's an implicit
understanding that the *uint64 value passed to these functions is some
kind of statistic whose value is atomically managed. This understanding
isn't inherently problematic, but we're about to change how some stats
(which currently use mSysStatInc and mSysStatDec) work, so we want to
make it very clear what the various requirements are around "sysStat".

This change also removes mSysStatInc and mSysStatDec in favor of a
method on sysMemStat. Note that those two functions were originally
written the way they were because atomic 64-bit adds required a valid G
on ARM, but this hasn't been the case for a very long time (since
golang.org/cl/14204, but even before then it wasn't clear if mutexes
required a valid G anymore). Today we implement 64-bit adds on ARM with
a spinlock table.
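
As a hedged sketch of the sysMemStat idea (the runtime's real type and
method differ in detail), the distinct type makes accounting call sites
explicit while keeping the same underlying representation:

	package main

	import "sync/atomic"

	// sysMemStat is a distinct type whose only mutation path is an
	// atomic add, so it can't be confused with an ordinary uint64.
	type sysMemStat uint64

	func (s *sysMemStat) add(n int64) {
		// Negative n wraps around in two's complement, the standard
		// trick for atomic subtraction on a uint64.
		atomic.AddUint64((*uint64)(s), uint64(n))
	}

	func main() {
		var stat sysMemStat
		stat.add(4096)  // e.g. after a sysAlloc
		stat.add(-4096) // e.g. after a sysFree
		println(uint64(stat))
	}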

Change-Id: I4e9b37cf14afc2ae20cf736e874eb0064af086d7
Reviewed-on: https://go-review.googlesource.com/c/go/+/246971
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 18:09:41 +00:00
Michael Anthony Knyszek
c863849800 runtime: rename mcache fields to match Go style
This change renames a bunch of malloc statistics stored in the mcache
that are all named with the "local_" prefix. It also renames largeAlloc
to allocLarge to prevent a naming conflict, and next_sample because it
would be the last mcache field with the old C naming style.

Change-Id: I29695cb83b397a435ede7e9ad5c3c9be72767ea3
Reviewed-on: https://go-review.googlesource.com/c/go/+/246969
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 17:26:48 +00:00
Michael Anthony Knyszek
cca3d1e553 runtime: don't flush local_tinyallocs
This change makes local_tinyallocs work like the rest of the malloc
stats: instead of being flushed, the mcache-local value becomes the
source of truth.

Change-Id: I3e6cb5f1b3d086e432ce7d456895511a48e3617a
Reviewed-on: https://go-review.googlesource.com/c/go/+/246967
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 17:26:30 +00:00
Michael Anthony Knyszek
42019613df runtime: make distributed/local malloc stats the source-of-truth
This change makes it so that various local malloc stats (excluding
heap_scan and local_tinyallocs) are no longer written first to mheap
fields but are instead accessed directly from each mcache.

This change is part of a move toward having stats be distributed, and
cleaning up some old code related to the stats.

Note that because there's no central source-of-truth, when an mcache
dies, it must donate its stats to another mcache. It's always safe to
donate to the mcache for the 0th P, so do that.

Change-Id: I2556093dbc27357cb9621c9b97671f3c00aa1173
Reviewed-on: https://go-review.googlesource.com/c/go/+/246964
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-26 17:26:08 +00:00
Michael Anthony Knyszek
64dc25b2db runtime: add tests for addrRanges.add
Change-Id: I249deb482df74068b0538e9d773b9a87bc5a6df3
Reviewed-on: https://go-review.googlesource.com/c/go/+/242681
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-23 23:01:52 +00:00
Michael Anthony Knyszek
e01a1c01f8 runtime: add tests for addrRanges.findSucc
This change adds a test suite for addrRanges.findSucc so we can change
the implementation more safely.

For #40191.

Change-Id: I14a834b6d54836cbc676eb0edb292ba6176705cc
Reviewed-on: https://go-review.googlesource.com/c/go/+/242678
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-10-23 22:33:17 +00:00
Michael Anthony Knyszek
b4a06b2089 runtime: define the AddrRange used for testing in terms of addrRange
Currently the AddrRange used for testing is defined separately from
addrRange in the runtime, making it difficult to test it as well as
addrRanges. Redefine AddrRange in terms of addrRange instead.

For #40191.

Change-Id: I3aa5b8df3e4c9a3c494b46ab802dd574b2488141
Reviewed-on: https://go-review.googlesource.com/c/go/+/242677
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2020-10-22 15:25:33 +00:00
Michael Anthony Knyszek
10dfb1dd3d runtime: actually fix locking in BenchmarkMSpanCountAlloc
I just submitted CL 255297 which mostly fixed this problem, but totally
forgot to actually acquire/release the heap lock. Oops.

Updates #41391.

Change-Id: I45b42f20a9fc765c4de52476db3654d4bfe9feb3
Reviewed-on: https://go-review.googlesource.com/c/go/+/255298
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
2020-09-16 17:36:58 +00:00
Michael Anthony Knyszek
2ae2a94857 runtime: fix leak and locking in BenchmarkMSpanCountAlloc
CL 249917 made the mspan in MSpanCountAlloc no longer stack-allocated
(for good reason), but then allocated an mspan on each call and did not
free it, resulting in a leak. That allocation was also not protected by
the heap lock, which could lead to data corruption of mheap fields and
the spanalloc.

To fix this, export some functions to allocate/free dummy mspans from
spanalloc (with proper locking) and allocate just one up-front for the
benchmark, freeing it at the end. Then, update MSpanCountAlloc to accept
a dummy mspan.

Note that we need to allocate the dummy mspan up-front otherwise we
measure things like heap locking and fixalloc performance instead of
what we actually want to measure: how fast we can do a popcount on the
mark bits.

Fixes #41391.

Change-Id: If6629a6ec1ece639c7fb78532045837a8c872c04
Reviewed-on: https://go-review.googlesource.com/c/go/+/255297
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Michael Knyszek <mknyszek@google.com>
2020-09-16 17:07:38 +00:00
Michael Anthony Knyszek
34835df048 runtime: fix ReadMemStatsSlow's and CheckScavengedBits' chunk iteration
Both ReadMemStatsSlow and CheckScavengedBits iterate over the page
allocator's chunks but don't actually check whether they exist. During
development the chunks index became sparse, so a nonexistent chunk is
now a real possibility. If the runtime tests' heap is sparse, we might
end up segfaulting in either of these functions, though this will
generally be very rare.

The pattern here to return nil for a nonexistent chunk is also useful
elsewhere, so this change introduces tryChunkOf which won't throw, but
might return nil. It also updates the documentation of chunkOf.
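
As a hedged, self-contained sketch of the pattern (the index shape and
names are illustrative, not the runtime's actual code):

	package main

	type chunk struct{ bits [8]uint64 }

	// sparseChunks is a two-level index whose second level is
	// allocated lazily, mirroring how the chunks index is sparse.
	type sparseChunks struct {
		l1 [64]*[64]chunk
	}

	// tryChunkOf returns the chunk for index ci, or nil if the heap
	// never grew into that region, instead of faulting or throwing.
	func (s *sparseChunks) tryChunkOf(ci int) *chunk {
		l2 := s.l1[ci/64]
		if l2 == nil {
			return nil
		}
		return &l2[ci%64]
	}

	func main() {
		var s sparseChunks
		if s.tryChunkOf(100) == nil {
			println("chunk 100 not mapped; caller skips it")
		}
	}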

Fixes #41296.

Change-Id: Id5ae0ca3234480de1724fdf2e3677eeedcf76fa0
Reviewed-on: https://go-review.googlesource.com/c/go/+/253777
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2020-09-09 17:48:56 +00:00
Keith Randall
d9a6bdf7ef cmd/compile: don't allow go:notinheap on the heap or stack
Right now we just prevent such types from being on the heap. This CL
makes it so they cannot appear on the stack either. The distinction
between heap and stack is pretty vague at the language level (e.g. it
is affected by -N), and we don't need the flexibility anyway.

Once go:notinheap types cannot be in either place, we don't need to
consider pointers to such types to be pointers, at least according to
the garbage collector and stack copying. (This is the big win of this
CL, in my opinion.)

The distinction between HasPointers and HasHeapPointer no longer
exists. There is only HasPointers.

This CL is cleanup before possible use of go:notinheap to fix #40954.

Updates #13386

Change-Id: Ibd895aadf001c0385078a6d4809c3f374991231a
Reviewed-on: https://go-review.googlesource.com/c/go/+/249917
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2020-08-25 01:46:05 +00:00
Austin Clements
92bda33d27 runtime: revert signal stack mlocking
Go 1.14 included a (rather awful) workaround for a Linux kernel bug
that corrupted vector registers on x86 CPUs during signal delivery
(https://bugzilla.kernel.org/show_bug.cgi?id=205663). This bug was
introduced in Linux 5.2 and fixed in 5.3.15, 5.4.2 and all 5.5 and
later kernels. The fix was also back-ported by major distros. This
workaround was necessary, but had unfortunate downsides, including
causing Go programs to exceed the mlock ulimit in many configurations
(#37436).

We're reasonably confident that by the Go 1.16 release, the number of
systems running affected kernels will be vanishingly small. Hence,
this CL removes this workaround.

This effectively reverts CLs 209597 (version parser), 209899 (mlock
top of signal stack), 210299 (better failure message), 223121 (soft
mlock failure handling), and 244059 (special-case patched Ubuntu
kernels). The one thing we keep is the osArchInit function. It's empty
everywhere now, but is a reasonable hook to have.

Updates #35326, #35777 (the original register corruption bugs).
Updates #40184 (request to revert in 1.15).
Fixes #35979.

Change-Id: Ie213270837095576f1f3ef46bf3de187dc486c50
Reviewed-on: https://go-review.googlesource.com/c/go/+/246200
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2020-08-13 02:17:17 +00:00
Michael Anthony Knyszek
796786cd0c runtime: make maxOffAddr reflect the actual address space upper bound
Currently maxOffAddr is defined in terms of the whole 64-bit address
space, assuming that it's all supported, by using ^uintptr(0) as the
maximal address in the offset space. In reality, the maximal address in
the offset space is (1<<heapAddrBits)-1 because we don't have more than
that actually available to us on a given platform.

On most platforms this is fine, because arenaBaseOffset is just
connecting two segments of address space, but on AIX we use it as an
actual offset for the starting address of the available address space,
which is limited. This means using ^uintptr(0) as the maximal address in
the offset address space causes wrap-around, especially when we just
want to represent a range approximately like [addr, infinity), which
today we do by using maxOffAddr.

To fix this, we define maxOffAddr more appropriately, in terms of
(1<<heapAddrBits)-1.

This change also redefines arenaBaseOffset: rather than the negation of
the virtual address that corresponds to offset zero, it is now that
virtual address directly. This matches the existing documentation more
closely and makes the logic around arenaBaseOffset decidedly simpler,
especially when reasoning about its use on AIX.

Fixes #38966.

Change-Id: I1336e5036a39de846f64cc2d253e8536dee57611
Reviewed-on: https://go-review.googlesource.com/c/go/+/233497
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
2020-05-14 16:20:19 +00:00
Michael Anthony Knyszek
d69509ff99 runtime: make addrRange[s] operate on offset addresses
Currently addrRange and addrRanges operate on real addresses. That is,
the addresses they manipulate don't include arenaBaseOffset. When added
to an address, arenaBaseOffset makes the address space appear contiguous
on platforms where the address space is segmented. While this is
generally OK because even those platforms which have a segmented address
space usually don't give addresses in a different segment, today it
causes a mismatch between the scavenger and the rest of the page
allocator. The scavenger scavenges from the highest addresses first, but
only via real address, whereas the page allocator allocates memory in
offset address order.

So this change makes addrRange and addrRanges, i.e. what the scavenger
operates on, use offset addresses. However, lots of the page allocator
relies on an addrRange containing real addresses.

To make this transition less error-prone, this change introduces a new
type, offAddr, whose purpose is to make offset addresses a distinct
type, so any attempt to trivially mix real and offset addresses will
trigger a compilation error.

This change doesn't attempt to use offAddr in all of the runtime; a
follow-up change will look for and catch remaining uses of an offset
address which doesn't use the type.
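
As a hedged sketch of the distinct-type trick (an illustrative base
offset and far fewer methods than the runtime's real offAddr):

	package main

	// arenaBaseOffset here is illustrative; the real value is
	// platform-specific.
	const arenaBaseOffset = 1 << 40

	// offAddr wraps an offset address in its own struct type, so real
	// uintptr addresses can't be mixed in without an explicit
	// conversion; accidental mixing becomes a compile error.
	type offAddr struct{ a uintptr }

	func offAddrOf(addr uintptr) offAddr { return offAddr{addr + arenaBaseOffset} }

	func (l offAddr) addr() uintptr { return l.a - arenaBaseOffset }

	func (l offAddr) lessThan(m offAddr) bool { return l.a < m.a }

	func main() {
		a, b := offAddrOf(0x1000), offAddrOf(0x2000)
		println(a.lessThan(b)) // comparisons happen in offset space
		println(a.addr())      // back to a real address
	}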

Updates #35788.

Change-Id: I991d891ac8ace8339ca180daafdf6b261a4d43d1
Reviewed-on: https://go-review.googlesource.com/c/go/+/230717
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2020-05-08 16:31:00 +00:00
Michael Anthony Knyszek
55ec5182d7 runtime: remove scavAddr in favor of address ranges
This change removes the concept of s.scavAddr in favor of explicitly
reserving and unreserving address ranges. s.scavAddr has several
raciness problems that can cause the scavenger to miss updates or to
move the address back unnecessarily, forcing future scavenge calls to
iterate over already-searched address space.

This change achieves this by replacing scavAddr with a second addrRanges
which is cloned from s.inUse at the end of each sweep phase. Ranges from
this second addrRanges are then reserved by scavengers (with the
reservation size proportional to the heap size) who are then able to
safely iterate over those ranges without worry of another scavenger
coming in.

Fixes #35788.

Change-Id: Ief01ae170384174875118742f6c26b2a41cbb66d
Reviewed-on: https://go-review.googlesource.com/c/go/+/208378
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2020-05-08 16:24:40 +00:00
Aaron Patterson
94e61ab94d runtime/runtime2: pack the sudog struct
This commit moves the isSelect bool below the ticket uint32. Because of
alignment padding, the boolean was consuming 8 bytes of the struct and
the uint32 was also consuming 8 bytes, so packing isSelect below the
uint32 saves 8 bytes. This reduces the sudog struct from 96 bytes to 88
bytes.
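
The effect, illustrated with hypothetical fields rather than the real
sudog layout (sizes are for a 64-bit platform):

	package main

	import (
		"fmt"
		"unsafe"
	)

	type before struct {
		isSelect bool    // 1 byte + 7 bytes padding (next is 8-aligned)
		next     *before // 8 bytes
		ticket   uint32  // 4 bytes + 4 bytes padding (parent is 8-aligned)
		parent   *before // 8 bytes
	}

	type after struct {
		next     *after // 8 bytes
		ticket   uint32 // 4 bytes
		isSelect bool   // fills ticket's former padding (3 bytes remain)
		parent   *after // 8 bytes
	}

	func main() {
		fmt.Println(unsafe.Sizeof(before{}), unsafe.Sizeof(after{})) // 32 24
	}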

Change-Id: If555cdaf2f5eaa125e2590fc4d113dbc99750738
GitHub-Last-Rev: d63b4e086b
GitHub-Pull-Request: golang/go#36552
Reviewed-on: https://go-review.googlesource.com/c/go/+/214677
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2020-05-07 04:05:18 +00:00
Dan Scales
0a820007e7 runtime: static lock ranking for the runtime (enabled by GOEXPERIMENT)
I took some of the infrastructure from Austin's lock logging CR
https://go-review.googlesource.com/c/go/+/192704 (with deadlock
detection from the logs), and developed a setup to give static lock
ranking for runtime locks.

Static lock ranking establishes a documented total ordering among locks,
and then reports an error if the total order is violated. This can
happen if a deadlock happens (by acquiring a sequence of locks in
different orders), or if just one side of a possible deadlock happens.
Lock ordering deadlocks cannot happen as long as the lock ordering is
followed.

Along the way, I found a deadlock involving the new timer code, which Ian fixed
via https://go-review.googlesource.com/c/go/+/207348, as well as two other
potential deadlocks.

See the constants at the top of runtime/lockrank.go to show the static
lock ranking that I ended up with, along with some comments. This is
great documentation of the current intended lock ordering when acquiring
multiple locks in the runtime.

I also added an array lockPartialOrder[] which shows and enforces the
current partial ordering among locks (which is embedded within the total
ordering). This is more specific about the dependencies among locks.

I don't try to check the ranking within a lock class with multiple locks
that can be acquired at the same time (i.e. check the ranking when
multiple hchan locks are acquired).

Currently, I am doing a lockInit() call to set the lock rank of most
locks. Any lock that is not otherwise initialized is assumed to be a
leaf lock (a very high rank lock), so that eliminates the need to do
anything for a bunch of locks (including all architecture-dependent
locks). For two locks, root.lock and notifyList.lock (only in the
runtime/sema.go file), it is not as easy to do lock initialization, so
instead, I am passing the lock rank with the lock calls.

For Windows compilation, I needed to increase the StackGuard size from
896 to 928 because of the new lock-rank checking functions.

Checking of the static lock ranking is enabled by setting
GOEXPERIMENT=staticlockranking before doing a run.

To make sure that the static lock ranking code has no overhead in memory
or CPU when not enabled by GOEXPERIMENT, I changed 'go build/install' so
that it defines a build tag (with the same name) whenever any experiment
has been baked into the toolchain (by checking Expstring()). This allows
me to avoid increasing the size of the 'mutex' type when static lock
ranking is not enabled.

Fixes #38029

Change-Id: I154217ff307c47051f8dae9c2a03b53081acd83a
Reviewed-on: https://go-review.googlesource.com/c/go/+/207619
Reviewed-by: Dan Scales <danscales@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Dan Scales <danscales@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2020-04-07 21:51:03 +00:00
Michael Anthony Knyszek
7af6a31b48 runtime: add countAlloc benchmark
This change adds a small microbenchmark for (*mspan).countAlloc, which
we're about to replace. Admittedly this isn't a critical piece of code,
but the benchmark was useful in understanding the performance change.

Change-Id: Iea93c00f571ee95534a42f2ef2ab026b382242b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/224438
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2020-03-23 16:55:37 +00:00
Keith Randall
2b8e60d464 runtime: make typehash match compiler generated hashes exactly
If typehash (used by reflect) does not match the built-in map's hash,
then problems occur. If a map is built using reflect, and then
assigned to a variable of map type, the hash function can change. That
causes very bad things.

This issue is rare. MapOf consults a cache of all types that occur in
the binary before making a new one. To make a true new map type (with
a hash function derived from typehash) that map type must not occur in
the binary anywhere. But to cause the bug, we need a variable of that
type in order to assign to it. The only way to make that work is to
use a named map type for the variable, so it is distinct from the
unnamed version that MapOf looks for.

Fixes #37716

Change-Id: I3537bfceca8cbfa1af84202f432f3c06953fe0ed
Reviewed-on: https://go-review.googlesource.com/c/go/+/222357
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-03-10 16:26:59 +00:00
Michael Anthony Knyszek
e7f9e17b79 runtime: ensure that searchAddr always refers to inUse memory
This change formalizes an assumption made by the page allocator, which
is that (*pageAlloc).searchAddr should never refer to memory that is not
represented by (*pageAlloc).inUse. The portion of address space covered
by (*pageAlloc).inUse reflects the parts of the summary arrays which are
guaranteed to mapped, and so looking at any summary which is not
reflected there may cause a segfault.

In fact, this can happen today. This change thus also removes a
micro-optimization which is the only case which may cause
(*pageAlloc).searchAddr to point outside of any region covered by
(*pageAlloc).inUse, and adds a test verifying that the current segfault
can no longer occur.

Change-Id: I98b534f0ffba8656d3bd6d782f6fc22549ddf1c2
Reviewed-on: https://go-review.googlesource.com/c/go/+/216697
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-01-28 22:08:43 +00:00
Cherry Zhang
d6bf2d7b83 runtime: test memmove writes pointers atomically
In the previous CL we ensured that memmove writes pointers
atomically, so the concurrent GC won't observe a partially
updated pointer. This CL adds a test.

Change-Id: Icd1124bf3a15ef25bac20c7fb8933f1a642d897c
Reviewed-on: https://go-review.googlesource.com/c/go/+/212627
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2020-01-06 18:32:21 +00:00
Michael Anthony Knyszek
1b1fbb3192 runtime: use inUse ranges to map in summary memory only as needed
Prior to this change, if the heap was very discontiguous (such as in
TestArenaCollision) it's possible we could map a large amount of memory
as R/W and commit it. We would use only the start and end to track what
should be mapped, and we would extend that mapping as needed to
accommodate a potentially fragmented address space.

After this change, we only map exactly the part of the summary arrays
that we need by using the inUse ranges from the previous change. This
reduces the GCSys footprint of TestArenaCollision from 300 MiB to 18
MiB.

Because summaries are no longer mapped contiguously, this means the
scavenger can no longer iterate directly. This change also updates the
scavenger to borrow ranges out of inUse and iterate over only the
parts of the heap which are actually currently in use. This is both an
optimization and necessary for correctness.

Fixes #35514.

Change-Id: I96bf0c73ed0d2d89a00202ece7b9d089a53bac90
Reviewed-on: https://go-review.googlesource.com/c/go/+/207758
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-12-11 19:51:34 +00:00
Michael Anthony Knyszek
9d78e75a0a runtime: track ranges of address space which are owned by the heap
This change adds a new inUse field to the allocator which tracks ranges
of addresses that are owned by the heap. It is updated on each heap
growth.

These ranges are tracked in an array which is kept sorted. In practice
this array shouldn't exceed its initial allocation except in rare cases
and thus should be small (ideally exactly 1 element in size).

In a hypothetical worst-case scenario wherein we have a 1 TiB heap and 4
MiB arenas (note that the address ranges will never be at a smaller
granularity than an arena, since arenas are always allocated
contiguously), inUse would use at most 4 MiB of memory if the heap
mappings were completely discontiguous (highly unlikely) with an
additional 2 MiB leaked from previous allocations. Furthermore, the
copies that are done to keep the inUse array sorted will copy at most 4
MiB of memory in such a scenario, which, assuming a conservative copying
rate of 5 GiB/s, amounts to about 800µs.

However, note that in practice:
1) Most 64-bit platforms have 64 MiB arenas.
2) The copies should incur little-to-no page faults, meaning a copy rate
   closer to 25-50 GiB/s is expected.
3) Go heaps are almost always mostly contiguous.

Updates #35514.

Change-Id: I3ad07f1c2b5b9340acf59ecc3b9ae09e884814fe
Reviewed-on: https://go-review.googlesource.com/c/go/+/207757
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2019-12-11 19:37:19 +00:00
Austin Clements
fa3a121a79 runtime: add a simple version number parser
This will be used to parse the Linux kernel versions, but this code is
generic and can be tested on its own.
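
As a hedged sketch of such a parser (not the runtime's actual code): it
reads up to three dot-separated decimal fields from the front of the
string and ignores any suffix, as kernel release strings like
"5.4.2-generic" require:

	package main

	import "fmt"

	func parseRelease(rel string) (major, minor, patch int, ok bool) {
		// next reads a decimal field starting at index i.
		next := func(i int) (int, int, bool) {
			n, start := 0, i
			for i < len(rel) && rel[i] >= '0' && rel[i] <= '9' {
				n = n*10 + int(rel[i]-'0')
				i++
			}
			return n, i, i > start
		}
		var i int
		if major, i, ok = next(0); !ok {
			return
		}
		if i < len(rel) && rel[i] == '.' {
			minor, i, _ = next(i + 1)
		}
		if i < len(rel) && rel[i] == '.' {
			patch, _, _ = next(i + 1)
		}
		return major, minor, patch, true
	}

	func main() {
		fmt.Println(parseRelease("5.4.2-generic")) // 5 4 2 true
	}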

For #35777.

Change-Id: If1df48d07250e5855dde45bc9d57c66f777b9fb4
Reviewed-on: https://go-review.googlesource.com/c/go/+/209597
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-12-05 01:48:12 +00:00
Michael Anthony Knyszek
acf3ff2e8a runtime: convert page allocator bitmap to sparse array
Currently the page allocator bitmap is implemented as a single giant
memory mapping which is reserved at init time and committed as needed.
This causes problems on systems that don't handle large uncommitted
mappings well, or institute low virtual address space defaults as a
memory limiting mechanism.

This change modifies the implementation of the page allocator bitmap
away from a directly-mapped set of bytes to a sparse array in the same
vein as mheap.arenas. This will hurt performance a little, but the biggest
gains are from the lockless allocation possible with the page allocator,
so the impact of this extra layer of indirection should be minimal.

In fact, this is exactly what we see:
    https://perf.golang.org/search?q=upload:20191125.5

This reduces the amount of mapped (PROT_NONE) memory needed on systems
with 48-bit address spaces to ~600 MiB down from almost 9 GiB. The bulk
of this remaining memory is used by the summaries.

Go processes with 32-bit address spaces now always commit to 128 KiB of
memory for the bitmap. Previously it would only commit the pages in the
bitmap which represented the range of addresses (lowest address to
highest address, even if there are unused regions in that range) used by
the heap.

Updates #35568.
Updates #35451.

Change-Id: I0ff10380156568642b80c366001eefd0a4e6c762
Reviewed-on: https://go-review.googlesource.com/c/go/+/207497
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-12-03 17:35:06 +00:00
Clément Chigot
5042317d69 runtime: add arenaBaseOffset on aix/ppc64
On AIX, addresses returned by mmap are between 0x0a00000000000000
and 0x0affffffffffffff. The previous solution to handle these large
addresses was to increase the arena size up to 60-bit addresses,
cf. CL 138736.

However, with the new page allocator, the 60-bit heap addresses were
causing huge memory allocations, especially by (s *pageAlloc).init. The
mmap and munmap syscalls dealing with these allocations were reducing
the performance of every Go program.

In order to avoid these allocations, arenaBaseOffset is set to
0x0a00000000000000 and heap addresses fit in 48 bits, as on other
operating systems.

Updates #35451

Change-Id: Ice916b8578f76703428ec12a82024147a7592bc0
Reviewed-on: https://go-review.googlesource.com/c/go/+/206841
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
2019-11-16 00:02:02 +00:00
Michael Anthony Knyszek
e6fb39aa68 runtime: make the test addresses for pageAlloc smaller on 32-bit
This change makes the test addresses start at 1 GiB instead of 2 GiB to
support mips and mipsle, which only have 31-bit address spaces.

It also changes some tests to use smaller offsets for the chunk index to
avoid jumping too far ahead in the address space to support 31-bit
address spaces. The tests don't require such large jumps for what
they're testing anyway.

Updates #35112.
Fixes #35440.

Change-Id: Ic68ff2b0a1f10ef37ac00d4bb5b910ddcdc76f2e
Reviewed-on: https://go-review.googlesource.com/c/go/+/205938
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2019-11-10 04:23:22 +00:00
Rhys Hiltner
7148478f1b sync: yield to the waiter when unlocking a starving mutex
When we have already assigned the semaphore ticket to a specific
waiter, we want to get the waiter running as fast as possible since
no other G waiting on the semaphore can acquire it optimistically.

The net effect is that, when a sync.Mutex is contended, the code in
the critical section guarded by the Mutex gets a priority boost.

Fixes #33747

The original work was done in CL 200577 by Carlo Alberto Ferraris. The
change was reverted in CL 205817 because it broke the linux-arm64-packet
and solaris-amd64-oraclerel builders.

Change-Id: I76d79b1d63fd206ed1c57fe6900cb7ae9e4d46cb
Reviewed-on: https://go-review.googlesource.com/c/go/+/206180
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2019-11-09 19:31:32 +00:00
David Chase
11da2b227a runtime: copy some functions from math/bits to runtime/internal/sys
CL 201765 activated calls from the runtime to functions in math/bits.
When coverage and race detection were simultaneously enabled,
this caused a crash when the covered+race-checked code in
math/bits was called from the runtime before there was even a P.

PS Win for gdlv in helping sort this out.

TODO - next CL intrinsifies the new functions in
runtime/internal/sys

TODO/Would-be-nice - Ctz64 and TrailingZeros64 are the same
function; 386.s is intrinsified; clean all that up.

Fixes #35461.
Updates #35112.

Change-Id: I750a54dba493130ad3e68a06530ede7687d41e1d
Reviewed-on: https://go-review.googlesource.com/c/go/+/206199
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2019-11-08 23:22:06 +00:00
Michael Anthony Knyszek
a2cd2bd55d runtime: add per-p page allocation cache
This change adds a per-p free page cache which the page allocator may
allocate out of without a lock. The change also introduces a completely
lockless page allocator fast path.

Although the cache contains at most 64 pages (and usually less), the
vast majority (85%+) of page allocations are exactly 1 page in size.
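
As a hedged sketch of the idea (illustrative; the runtime's real
pageCache also tracks scavenged pages and refills from the page
allocator under lock):

	package main

	import "math/bits"

	const pageSize = 8192

	// pageCache owns a 64-page aligned region; one bit per free page,
	// so allocating a single page is a couple of lock-free bit ops.
	type pageCache struct {
		base  uintptr // base address of the region
		cache uint64  // 1 bits mark free pages
	}

	// allocOne returns the address of one free page, or 0 if the
	// cache is empty and must be refilled.
	func (c *pageCache) allocOne() uintptr {
		if c.cache == 0 {
			return 0
		}
		i := uintptr(bits.TrailingZeros64(c.cache))
		c.cache &^= 1 << i // mark the page allocated
		return c.base + i*pageSize
	}

	func main() {
		c := pageCache{base: 0x100000, cache: ^uint64(0)}
		println(c.allocOne(), c.allocOne()) // two single-page allocations
	}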

Updates #35112.

Change-Id: I170bf0a9375873e7e3230845eb1df7e5cf741b78
Reviewed-on: https://go-review.googlesource.com/c/go/+/195701
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 18:00:54 +00:00
Michael Anthony Knyszek
81640ea38d runtime: add page cache and tests
This change adds a page cache structure which owns a chunk of free pages
at a given base address. It also adds code to allocate to this cache
from the page allocator. Finally, it adds tests for both.

Notably this change does not yet integrate the code into the runtime,
just into runtime tests.

Updates #35112.

Change-Id: Ibe121498d5c3be40390fab58a3816295601670df
Reviewed-on: https://go-review.googlesource.com/c/go/+/196643
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 18:00:45 +00:00