Commit graph

27 commits

Author SHA1 Message Date
Michael Anthony Knyszek
7af6a31b48 runtime: add countAlloc benchmark
This change adds a small microbenchmark for (*mspan).countAlloc, which
we're about to replace. Admittedly this isn't a critical piece of code,
but the benchmark was useful in understanding the performance change.

Change-Id: Iea93c00f571ee95534a42f2ef2ab026b382242b3
Reviewed-on: https://go-review.googlesource.com/c/go/+/224438
Run-TryBot: Michael Knyszek <mknyszek@google.com>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2020-03-23 16:55:37 +00:00
Michael Anthony Knyszek
7cfb814b0a runtime: add ReadMemStats latency benchmark
This change adds a benchmark to the runtime which measures ReadMemStats
latencies. It generates allocations with lots of pointers to keep the GC
busy while hitting ReadMemStats and measuring the time it takes to
complete.

Updates #19812.

Change-Id: I7a76aaf497ba5324d3c7a7b3df32461b3e6c3ac8
Reviewed-on: https://go-review.googlesource.com/c/go/+/220177
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
2020-03-18 17:59:19 +00:00
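
The measurement pattern described above can be sketched roughly as follows; this is an illustrative stand-in, not the actual runtime benchmark, and the type, names, and constants are placeholders:

  package bench_test

  import (
    "runtime"
    "testing"
    "time"
  )

  type node struct {
    next *node
    pad  [4]*node // extra pointer fields give the GC more to scan
  }

  func BenchmarkReadMemStatsLatency(b *testing.B) {
    // Churn pointer-heavy allocations in the background so a GC cycle is
    // likely to be in progress while ReadMemStats is called.
    stop := make(chan struct{})
    go func() {
      var list *node
      count := 0
      for {
        select {
        case <-stop:
          return
        default:
          n := &node{next: list}
          n.pad[0] = list // one more pointer per object, just to add scan work
          list = n
          if count++; count%(1<<15) == 0 {
            list = nil // drop the chain periodically so the heap stays bounded
          }
        }
      }
    }()

    var ms runtime.MemStats
    var worst time.Duration
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
      start := time.Now()
      runtime.ReadMemStats(&ms)
      if d := time.Since(start); d > worst {
        worst = d
      }
    }
    b.StopTimer()
    close(stop)
    b.ReportMetric(float64(worst.Nanoseconds()), "worst-ns")
  }
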
Michael Anthony Knyszek
33dfd3529b runtime: remove old page allocator
This change removes the old page allocator from the runtime.

Updates #35112.

Change-Id: Ib20e1c030f869b6318cd6f4288a9befdbae1b771
Reviewed-on: https://go-review.googlesource.com/c/go/+/195700
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-08 00:07:43 +00:00
Michael Anthony Knyszek
689f6f77f0 runtime: integrate new page allocator into runtime
This change integrates all the bits and pieces of the new page allocator
into the runtime, behind a global constant.

Updates #35112.

Change-Id: I6696bde7bab098a498ab37ed2a2caad2a05d30ec
Reviewed-on: https://go-review.googlesource.com/c/go/+/201764
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-11-07 20:14:02 +00:00
Michael Knyszek
aae0b5b0b2 runtime: use hard heap goal if we've done more scan work than expected
This change makes it so that if we've already done more scan work than
expected in the steady state (that is, 50% of heap_scan for GOGC=100),
then we fall back on the hard heap goal instead of continuing to assume
the expected case.

In some cases it's possible that we're already doing more scan work than
expected, and if GC assists come in just at that window where we notice
it, they might accumulate way too much assist credit, causing undue heap
growth if GOMAXPROCS=1 (since the fractional background worker isn't
guaranteed to fire). This case seems awfully specific, and that's
because it's exactly the case for TestGcSys, which has been flaky for
some time as a result.

Fixes #28574, #27636, and #27156.

Change-Id: I771f42bed34739dbb1b84ad82cfe247f70836031
Reviewed-on: https://go-review.googlesource.com/c/go/+/184097
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2019-09-04 21:52:18 +00:00
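
For intuition, the decision described in the message can be sketched as below; the function and parameter names are assumptions, not the runtime's pacer code:

  package sketch

  // assistGoal picks which heap goal GC assists are paced against. In the
  // steady state with GOGC=100, roughly half of heap_scan is expected to be
  // scanned by this point; once actual scan work exceeds that, the
  // steady-state assumption no longer holds, so pace against the hard goal.
  // (Illustrative only; names and structure are assumptions.)
  func assistGoal(scanWorkDone, heapScan, softGoal, hardGoal uint64) uint64 {
    expectedScanWork := heapScan / 2 // steady-state expectation for GOGC=100
    if scanWorkDone > expectedScanWork {
      return hardGoal
    }
    return softGoal
  }
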
Michael Anthony Knyszek
f4a5ae5594 runtime: track the number of free unscavenged huge pages
This change tracks the number of potentially free, unscavenged huge
pages, which will be used to inform the rate at which scavenging should
occur.

For #30333.

Change-Id: I47663e5ffb64cac44ffa10db158486783f707479
Reviewed-on: https://go-review.googlesource.com/c/go/+/170860
Run-TryBot: Michael Knyszek <mknyszek@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2019-05-06 20:59:20 +00:00
Austin Clements
9108ae7751 runtime: eliminate mark 2 and fix mark termination race
The mark 2 phase was originally introduced as a way to reduce the
chance of entering STW mark termination while there was still marking
work to do. It works by flushing and disabling all local work caches
so that all enqueued work becomes immediately globally visible.
However, mark 2 is not only slow (disabling caches makes marking and
the write barrier both much more expensive) but also imperfect. There
is still a rare but possible race (~once per all.bash) that can cause
GC to enter mark termination while there is still marking work. This
race is detailed at
https://github.com/golang/proposal/blob/master/design/17503-eliminate-rescan.md#appendix-mark-completion-race
The effect of this is that mark termination must still cope with the
possibility that there may be work remaining after a concurrent mark
phase. Dealing with this increases STW pause time and increases the
complexity of mark termination.

Furthermore, a similar but far more likely race can cause early
transition from mark 1 to mark 2. This is unfortunate because the cost
of mark 2 then causes performance instability.

This CL fixes this by replacing mark 2 with a distributed termination
detection algorithm. This algorithm is correct, so it eliminates the
mark termination race, and doesn't require disabling local caches. It
ensures that there are no grey objects upon entering mark termination.
With this change, we're one step closer to eliminating marking from
mark termination entirely (it's still used by STW GC and checkmarks
mode).

This CL does not eliminate the gcBlackenPromptly global flag, though
it is always set to false now. It will be removed in a cleanup CL.

This led to only minor variations in the go1 benchmarks
(https://perf.golang.org/search?q=upload:20180909.1) and compilebench
benchmarks (https://perf.golang.org/search?q=upload:20180910.2).

This significantly improves performance of the garbage benchmark, with
no impact on STW times:

name                        old time/op    new time/op   delta
Garbage/benchmem-MB=64-12    2.21ms ± 1%   2.05ms ± 1%   -7.38% (p=0.000 n=18+19)
Garbage/benchmem-MB=1024-12  2.30ms ±16%   2.20ms ± 7%   -4.51% (p=0.001 n=20+20)

name                        old STW-ns/GC  new STW-ns/GC  delta
Garbage/benchmem-MB=64-12      138k ±44%     141k ±23%     ~    (p=0.309 n=19+20)
Garbage/benchmem-MB=1024-12    159k ±25%     178k ±98%     ~    (p=0.798 n=16+18)

name                        old STW-ns/op  new STW-ns/op                delta
Garbage/benchmem-MB=64-12     4.42k ±44%    4.24k ±23%     ~    (p=0.531 n=19+20)
Garbage/benchmem-MB=1024-12     591 ±24%      636 ±111%    ~    (p=0.309 n=16+18)

(https://perf.golang.org/search?q=upload:20180910.1)

Updates #26903.
Updates #17503.

Change-Id: Icbd1e12b7a12a76f423c9bf033b13cb363e4cd19
Reviewed-on: https://go-review.googlesource.com/c/134318
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2018-10-02 20:35:20 +00:00
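
The distributed termination detection mentioned above can be illustrated generically as follows; this is not the runtime's implementation, and it assumes workers are paused (as the runtime arranges by preemption) while the check runs:

  package sketch

  import "sync"

  // workPool models per-worker local queues plus a shared global queue.
  // Workers push to their local queue for speed; the termination check
  // flushes every local queue and only declares marking done if nothing
  // was flushed and the global queue is empty, so late-arriving work
  // forces another round instead of being lost.
  type workPool struct {
    mu     sync.Mutex
    global []func()
  }

  func (p *workPool) flush(local *[]func()) bool {
    if len(*local) == 0 {
      return false
    }
    p.mu.Lock()
    p.global = append(p.global, *local...)
    p.mu.Unlock()
    *local = (*local)[:0]
    return true
  }

  // markDone reports whether marking may terminate. Callers must keep
  // marking and retry later if it returns false.
  func (p *workPool) markDone(locals []*[]func()) bool {
    produced := false
    for _, l := range locals {
      if p.flush(l) {
        produced = true
      }
    }
    p.mu.Lock()
    empty := len(p.global) == 0
    p.mu.Unlock()
    return !produced && empty
  }
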
erifan01
0ef42f4dd6 runtime: skip TestGcSys on arm64
This failure occurs randomly on arm64.

13:10:32 --- FAIL: TestGcSys (0.06s)
13:10:32 gc_test.go:30: expected "OK\n", but got "using too much memory: 71401472 bytes\n"
13:10:32 FAIL

Updates #27636

Change-Id: Ifd4cfce167d8054dc6f037bd34368d63c7f68ed4
Reviewed-on: https://go-review.googlesource.com/135155
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2018-09-13 17:55:15 +00:00
Martin Möhrmann
28fbf5b831 runtime: skip TestGcSys on Windows
This is causing failures on TryBots and BuildBots:
--- FAIL: TestGcSys (0.06s)
    gc_test.go:27: expected "OK\n", but got "using too much memory: 39882752 bytes\n"
FAIL

Updates #27156

Change-Id: I418bbec89002574cd583c97422e433f042c07492
Reviewed-on: https://go-review.googlesource.com/130875
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-08-22 20:50:43 +00:00
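
Both skips above follow the usual pattern of gating a test on GOOS or GOARCH; a minimal sketch with a placeholder test body:

  package gc_test

  import (
    "runtime"
    "testing"
  )

  func TestGcSysLike(t *testing.T) {
    if runtime.GOOS == "windows" {
      t.Skip("skipping on windows; see golang.org/issue/27156")
    }
    if runtime.GOARCH == "arm64" {
      t.Skip("skipping on arm64; see golang.org/issue/27636")
    }
    // ... actual test body ...
  }
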
Josh Bleecher Snyder
dda4591c8c runtime: add BenchmarkScanStack
There are many possible stack scanning benchmarks,
but this one is at least a start.

CPU profiling shows about 75% of CPU time in func scanstack.

Change-Id: I906b0493966f2165c1920636c4e057d16d6447e0
Reviewed-on: https://go-review.googlesource.com/105535
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2018-05-01 13:33:01 +00:00
Richard Musiol
e3c684777a all: skip unsupported tests for js/wasm
The general policy for the current state of js/wasm is that it only
has to support tests that are also supported by nacl.

The test nilptr3.go makes assumptions about which nil checks can be
removed. Since WebAssembly does not signal on reading a null pointer,
all nil checks have to be explicit.

Updates #18892

Change-Id: I06a687860b8d22ae26b1c391499c0f5183e4c485
Reviewed-on: https://go-review.googlesource.com/110096
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-04-30 19:39:18 +00:00
mingrammer
fceaa2e242 runtime: rename the TestGcHashmapIndirection to TestGcMapIndirection
There was still the word 'Hashmap' in gc_test.go, so I renamed it to just 'Map'.

Previous renaming commit: https://golang.org/cl/90336

Change-Id: I5b0e5c2229d1c30937c7216247f4533effb81ce7
Reviewed-on: https://go-review.googlesource.com/96675
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2018-02-23 16:48:01 +00:00
Austin Clements
f96b95bcd1 runtime: benchmark for bulk write barriers
This adds a benchmark of typedslicecopy and its bulk write barriers.

For #22460.

Change-Id: I439ca3b130bb22944468095f8f18b464e5bb43ca
Reviewed-on: https://go-review.googlesource.com/74051
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-10-30 18:12:49 +00:00
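
Copying between slices of pointers goes through typedslicecopy and therefore exercises the bulk write barrier; a minimal benchmark sketch along those lines (names and sizes are illustrative):

  package bench_test

  import "testing"

  func BenchmarkBulkWriteBarrier(b *testing.B) {
    const n = 1024
    src := make([]*int, n)
    dst := make([]*int, n)
    for i := range src {
      src[i] = new(int)
    }
    b.SetBytes(n * 8) // bytes of pointers copied per op; 8 assumes a 64-bit platform
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
      copy(dst, src) // pointer-typed copy, compiled to a typedslicecopy call
    }
  }
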
Austin Clements
1e8ab99b37 runtime: add benchmark for write barriers
For #22460.

Change-Id: I798f26d45bbe1efd16b632e201413cb26cb3e6c7
Reviewed-on: https://go-review.googlesource.com/73811
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-10-30 18:12:41 +00:00
Russ Cox
3a35e0253c runtime: deflake TestPeriodicGC
It was only waiting 0.1 seconds for the two GCs it wanted.
Let it wait 1 second.

Change-Id: Ib3cdc8127cbf95694a9f173643c02529a85063af
Reviewed-on: https://go-review.googlesource.com/68151
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2017-10-04 16:57:26 +00:00
Austin Clements
0744c21b98 runtime: make runtime.GC() trigger GC even if GOGC=off
Currently, the priority of checks in (gcTrigger).test() puts the
gcpercent<0 test above gcTriggerCycle, which is used for runtime.GC().
This is an unintentional change from 1.8 and before, where
runtime.GC() triggered a GC even if GOGC=off.

Fix this by rearranging the priority so the gcTriggerCycle test
executes even if gcpercent < 0.

Fixes #22023.

Change-Id: I109328d7b643b6824eb9d79061a9e775f0149575
Reviewed-on: https://go-review.googlesource.com/65994
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-09-26 21:55:05 +00:00
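
A small program illustrating the behavior this restores: with automatic collection disabled via debug.SetGCPercent(-1) (the equivalent of GOGC=off), an explicit runtime.GC() still forces a full cycle:

  package main

  import (
    "fmt"
    "runtime"
    "runtime/debug"
  )

  func main() {
    debug.SetGCPercent(-1) // same effect as GOGC=off: no automatic cycles
    var before, after runtime.MemStats
    runtime.ReadMemStats(&before)
    runtime.GC() // must still run a full collection (issue #22023)
    runtime.ReadMemStats(&after)
    fmt.Println("forced GC cycles:", after.NumGC-before.NumGC) // expect 1
  }
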
Austin Clements
4a7cf960c3 runtime: make ReadMemStats STW for < 25µs
Currently ReadMemStats stops the world for ~1.7 ms/GB of heap because
it collects statistics from every single span. For large heaps, this
can be quite costly. This is particularly unfortunate because many
production infrastructures call this function regularly to collect and
report statistics.

Fix this by tracking the necessary cumulative statistics in the
mcaches. ReadMemStats still has to stop the world to stabilize these
statistics, but there are only O(GOMAXPROCS) mcaches to collect
statistics from, so this pause is only 25µs even at GOMAXPROCS=100.

Fixes #13613.

Change-Id: I3c0a4e14833f4760dab675efc1916e73b4c0032a
Reviewed-on: https://go-review.googlesource.com/34937
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2017-03-04 02:56:37 +00:00
Keith Randall
688995d1e9 cmd/compile: do more type conversion inline
The code to do the conversion is smaller than the
call to the runtime.
The 1-result asserts need to call panic if they fail, but that
code is out of line.

The only conversions left in the runtime are those which
might allocate and those which might need to generate an itab.

Given the following types:
  type E interface{}
  type I interface { foo() }
  type I2 interface { foo(); bar() }
  type Big [10]int
  func (b Big) foo() { ... }

This CL inlines the following conversions:

was assertE2T
  var e E = ...
  b := e.(Big)
was assertE2T2
  var e E = ...
  b, ok := e.(Big)
was assertI2T
  var i I = ...
  b := i.(Big)
was assertI2T2
  var i I = ...
  b, ok := i.(Big)
was assertI2E
  var i I = ...
  e := i.(E)
was assertI2E2
  var i I = ...
  e, ok := i.(E)

These are the remaining runtime calls:

convT2E:
  var b Big = ...
  var e E = b
convT2I:
  var b Big = ...
  var i I = b
convI2I:
  var i2 I2 = ...
  var i I = i2
assertE2I:
  var e E = ...
  i := e.(I)
assertE2I2:
  var e E = ...
  i, ok := e.(I)
assertI2I:
  var i I = ...
  i2 := i.(I2)
assertI2I2:
  var i I = ...
  i2, ok := i.(I2)

Fixes #17405
Fixes #8422

Change-Id: Ida2367bf8ce3cd2c6bb599a1814f1d275afabe21
Reviewed-on: https://go-review.googlesource.com/32313
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: David Chase <drchase@google.com>
2016-11-02 21:33:03 +00:00
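
The conversion forms listed above, written out as one runnable snippet using the message's example types (the method body and printed values are placeholders):

  package main

  import "fmt"

  type E interface{}
  type I interface{ foo() }
  type I2 interface {
    foo()
    bar()
  }
  type Big [10]int

  func (b Big) foo() {}

  func main() {
    var b Big
    var e E = b        // convT2E: still a runtime call
    var i I = b        // convT2I: still a runtime call
    b1 := e.(Big)      // assertE2T: now inlined
    b2, ok1 := i.(Big) // assertI2T2: now inlined
    e2, ok2 := i.(E)   // assertI2E2: now inlined
    i2, ok3 := i.(I2)  // assertI2I2: still a runtime call (Big lacks bar(), so ok3 is false)
    fmt.Println(b1, b2, e2, i2, ok1, ok2, ok3)
  }
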
Austin Clements
61f56e925e runtime: fix pagesInUse accounting
When we grow the heap, we create a temporary "in use" span for the
memory acquired from the OS and then free that span to link it into
the heap. Hence, we (1) increase pagesInUse when we make the temporary
span so that (2) freeing the span will correctly decrease it.

However, currently step (1) increases pagesInUse by the number of
pages requested from the heap, while step (2) decreases it by the
number of pages requested from the OS (the size of the temporary
span). These aren't necessarily the same, since we round up the number
of pages we request from the OS, so steps 1 and 2 don't necessarily
cancel out like they're supposed to. Over time, this can add up and
cause pagesInUse to underflow and wrap around to 2^64. The garbage
collector computes the sweep ratio from this, so if this happens, the
sweep ratio becomes effectively infinite, causing the first allocation
on each P in a sweep cycle to sweep the entire heap. This makes
sweeping effectively STW.

Fix this by increasing pagesInUse in step 1 by the number of pages
requested from the OS, so that the two steps correctly cancel out. We
add a test that checks that the running total matches the actual state
of the heap.

Fixes #15022. For 1.6.x.

Change-Id: Iefd9d6abe37d0d447cbdbdf9941662e4f18eeffc
Reviewed-on: https://go-review.googlesource.com/21280
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2016-04-04 15:33:26 +00:00
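
A tiny illustration of the bookkeeping mismatch; the page counts and the rounding helper are made up for the example:

  package main

  import "fmt"

  // roundUpToOSGrain stands in for the rounding the heap applies when asking
  // the OS for memory (the grain size here is hypothetical).
  func roundUpToOSGrain(pages uintptr) uintptr {
    const grain = 128 // pages
    return (pages + grain - 1) / grain * grain
  }

  func main() {
    var pagesInUse uintptr
    requested := uintptr(100)              // pages the heap actually asked for
    granted := roundUpToOSGrain(requested) // pages obtained from the OS: 128

    pagesInUse += requested // step (1), buggy: adds the requested count
    pagesInUse -= granted   // step (2): freeing the temporary span subtracts the granted count

    // Repeated growth drives pagesInUse below zero; the unsigned counter
    // wraps around, which is what blew up the sweep ratio.
    fmt.Println(pagesInUse) // prints a value near 2^64 on 64-bit platforms
  }
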
Russ Cox
8d5ff2e182 runtime: move test programs out of source code, coalesce
Now there are just three programs to compile instead of many,
and repeated tests can reuse the compilation result instead of
rebuilding it.

Combined, these changes reduce the time spent testing runtime
during all.bash on my laptop from about 60 to about 30 seconds.
(All.bash itself runs in 5½ minutes.)

For #10571.

Change-Id: Ie2c1798b847f1a635a860d11dcdab14375319ae9
Reviewed-on: https://go-review.googlesource.com/18085
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Austin Clements <austin@google.com>
2015-12-29 21:16:59 +00:00
Todd Neal
e3e0122ae2 test: use go:noinline consistently
Replace various implementations of inlining prevention with
"go:noinline".

Change-Id: Iac90895c3a62d6f4b7a6c72e11e165d15a0abfa4
Reviewed-on: https://go-review.googlesource.com/16510
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Todd Neal <todd@tneal.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-11-03 02:01:34 +00:00
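
A minimal sketch of the directive in use; the function here is a placeholder:

  package main

  import "fmt"

  //go:noinline
  func add(a, b int) int {
    // The directive above keeps the compiler from inlining this function,
    // replacing older tricks such as writing results through a package-level
    // sink or hiding calls behind unpredictable control flow.
    return a + b
  }

  func main() {
    fmt.Println(add(1, 2))
  }
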
Austin Clements
e01be84149 runtime: test that periodic GC works
We've broken periodic GC a few times without noticing because there's
no test for it, partly because you have to wait two minutes to see if
it happens. This exposes control of the periodic GC timeout to runtime
tests and adds a test that cranks it down to zero and sleeps for a bit
to make sure periodic GCs happen.

Change-Id: I3ec44e967e99f4eda752f85c329eebd18b87709e
Reviewed-on: https://go-review.googlesource.com/13169
Reviewed-by: Rick Hudson <rlh@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
2015-09-30 19:24:07 +00:00
Austin Clements
05a3b1fce5 cmd/compile: fix uninitialized memory in compare of interface value
A comparison of the form l == r where l is an interface and r is
concrete performs a type assertion on l to convert it to r's type.
However, the compiler fails to zero the temporary where the result of
the type assertion is written, so if the type is a pointer type and a
stack scan occurs while in the type assertion, it may see an invalid
pointer on the stack.

Fix this by zeroing the temporary. This is equivalent to the fix for
type switches from c4092ac.

Fixes #12253.

Change-Id: Iaf205d456b856c056b317b4e888ce892f0c555b9
Reviewed-on: https://go-review.googlesource.com/13872
Reviewed-by: Russ Cox <rsc@golang.org>
2015-08-25 14:37:08 +00:00
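
The shape of the comparison in question, with placeholder types: an interface operand compared against a concrete value, which the compiler lowers to a type assertion on the interface followed by a concrete comparison:

  package main

  import "fmt"

  type pair struct{ a, b *int }

  func main() {
    x := 1
    var l interface{} = pair{&x, &x} // interface operand
    r := pair{&x, &x}                // concrete operand

    // l == r is compiled roughly as: tmp, ok := l.(pair); ok && tmp == r.
    // The bug was that tmp's slot was not zeroed first, so a stack scan
    // during the assertion could see stale, invalid pointers in it.
    fmt.Println(l == r) // true
  }
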
Russ Cox
c4092ac398 cmd/compile: fix uninitialized memory during type switch assertE2I2
Fixes arm64 builder crash.

The bug is possible on all architectures; you just have to get lucky
and hit a preemption or a stack growth on entry to assertE2I2.
The test stacks the deck.

Change-Id: I8419da909b06249b1ad15830cbb64e386b6aa5f6
Reviewed-on: https://go-review.googlesource.com/12890
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
2015-07-30 05:21:56 +00:00
Russ Cox
30aacd4ce2 runtime: add Node128, Node130 benchmarks
Change-Id: I815a7ceeea48cc652b3c8568967665af39b02834
Reviewed-on: https://go-review.googlesource.com/10045
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-05-14 20:21:34 +00:00
Russ Cox
7d9e16abc6 runtime: add benchmark of heapBitsSetType
There was an old benchmark that measured this indirectly
via allocation, but I don't understand how to factor out the
allocation cost when interpreting the numbers.

Replace with a benchmark that only calls heapBitsSetType,
that does not allocate. This was not possible when the
benchmark was first written, because heapBitsSetType had
not been factored out of mallocgc.

Change-Id: I30f0f02362efab3465a50769398be859832e6640
Reviewed-on: https://go-review.googlesource.com/9701
Reviewed-by: Austin Clements <austin@google.com>
2015-05-11 14:40:27 +00:00
Russ Cox
c007ce824d build: move package sources from src/pkg to src
Preparation was in CL 134570043.
This CL contains only the effect of 'hg mv src/pkg/* src'.
For more about the move, see golang.org/s/go14nopkg.
2014-09-08 00:08:51 -04:00
Renamed from src/pkg/runtime/gc_test.go