Commit graph

64 commits

Keith Randall
77871cc664 runtime: no need to protect key/value increments against end of bucket
After the key and value arrays, we have an overflow pointer.
So there's no way a past-the-end key or value pointer could point
past the end of the containing bucket.

So we don't need this additional protection.

Update #21459
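
For reference, a self-contained sketch of the layout this relies on, using a concrete key/value type (the real bmap declares only the tophash array; the compiler lays out the keys, values, and the trailing overflow pointer after it):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Conceptual bucket layout for a map[int64]int64. In the runtime only
    // tophash is declared in Go; keys, vals, and overflow are appended by
    // the compiler, with the overflow pointer always last.
    type bucket struct {
        tophash  [8]uint8
        keys     [8]int64
        vals     [8]int64
        overflow unsafe.Pointer
    }

    func main() {
        var b bucket
        // One past the end of the vals array is exactly &b.overflow,
        // still inside the bucket, so a past-the-end key/value pointer
        // can never point past the containing bucket.
        fmt.Println(unsafe.Offsetof(b.vals)+unsafe.Sizeof(b.vals) == unsafe.Offsetof(b.overflow)) // true
    }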

Change-Id: I7726140033b06b187f7a7d566b3af8cdcaeab0b0
Reviewed-on: https://go-review.googlesource.com/56772
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Avelino <t@avelino.xxx>
2017-08-18 14:52:25 +00:00
Martin Möhrmann
b6296426a0 runtime: avoid zeroing hmap fields in makemap twice
Stack allocated hmap structs are explicitly zeroed before being
passed by pointer to makemap.

Heap allocated hmap structs are created with newobject
which also zeroes on allocation.

Therefore, setting the hmap fields to 0 or nil is redundant
since they will have been zeroed when hmap was allocated.

Change-Id: I5fc55b75e9dc5ba69f5e3588d6c746f53b45ba66
Reviewed-on: https://go-review.googlesource.com/56291
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-17 20:10:23 +00:00
Josh Bleecher Snyder
445717530c runtime: refactor out tophash calculation
No functional changes; tophash is inlined.
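
The factored-out helper reads roughly as follows (a sketch paraphrased from hashmap.go of this era; ptrSize stands in for sys.PtrSize, and minTopHash is the first value not reserved for special cell states):

    package main

    import "fmt"

    const (
        ptrSize    = 8 // sys.PtrSize on 64-bit systems
        minTopHash = 4 // smaller values are reserved for empty/evacuated cell markers
    )

    // tophash extracts the top byte of a hash and bumps it out of the
    // range reserved for special bucket-cell states.
    func tophash(hash uintptr) uint8 {
        top := uint8(hash >> (ptrSize*8 - 8))
        if top < minTopHash {
            top += minTopHash
        }
        return top
    }

    func main() {
        fmt.Println(tophash(uintptr(1) << 60)) // 16 on 64-bit
    }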

Change-Id: Ic8ce95b3622eafbddcfbc97f8c630ab8c5bfe7ad
Reviewed-on: https://go-review.googlesource.com/55233
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-15 00:20:06 +00:00
Josh Bleecher Snyder
02ad116bf1 runtime: unify cases in mapiternext
The preceding cleanup made it clear that two cases
(have golden data, unreachable key) are handled identically.
Simplify the control flow to reflect that.

Simplifies the code and generates shorter machine code.

Change-Id: Id612e0da6679813e855506f47222c58ea6497d70
Reviewed-on: https://go-review.googlesource.com/55093
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-15 00:19:36 +00:00
Josh Bleecher Snyder
c50a9718a6 runtime: mask a bounded slice access in hashmap evacuate
Shaves a few instructions off.

Change-Id: I39f1b01ae7e770d632d5e77a6aa4b5a1f123b41a
Reviewed-on: https://go-review.googlesource.com/55090
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-15 00:19:22 +00:00
Josh Bleecher Snyder
77a9cb9b4c runtime: refactor evacuate x/y handling
This change unifies the x and y cases.

It shrinks evacuate's machine code by ~25% and its stack size by ~15%.

It also eliminates a critical branch.
Whether an entry should go to x or y is designed to be unpredictable.
As a result, half of the branch predictions for useX were wrong.
Mispredicting that branch can easily incur an expensive cache miss.
Switching to an xy array allows elimination of that branch,
which in turn reduces cache misses.
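
In sketch form, the unification replaces two independent destination variables, and the branch that picked between them, with a small indexable pair (names follow the evacDst type this CL introduces; details paraphrased):

    package main

    import "unsafe"

    type bmap struct{} // stand-in for the runtime's bucket type

    // evacDst is an evacuation destination: the bucket currently being
    // filled, the next free slot in it, and pointers to that slot's
    // key/value storage.
    type evacDst struct {
        b    *bmap
        i    int
        k, v unsafe.Pointer
    }

    // pickDst sketches the branch-free selection: the per-entry decision is
    // reduced to an index into xy, so the copy path after it no longer
    // contains a data-dependent (and hence ~50% mispredicted) branch on
    // "use x or y".
    func pickDst(xy *[2]evacDst, hash, newbit uintptr) *evacDst {
        var useY uintptr
        if hash&newbit != 0 { // effectively a random bit of the hash
            useY = 1
        }
        return &xy[useY]
    }

    func main() {
        var xy [2]evacDst
        _ = pickDst(&xy, 0xa, 0x2)
    }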

Change-Id: Ie9cef53744b96c724c377ac0985b487fc50b49b1
Reviewed-on: https://go-review.googlesource.com/54653
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-14 23:51:14 +00:00
Josh Bleecher Snyder
589fc314af runtime: calculate k only once in mapiternext
Make the calculation of k and v a bit lazier.
None of the following code cares about indirect-vs-direct k,
and it happens on all code paths, so check t.indirectkey earlier.

Simplifies the code and reduces both machine code and stack size.

Change-Id: I5ea4c0772848d7a4b15383baedb9a1f7feb47201
Reviewed-on: https://go-review.googlesource.com/55092
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-14 23:32:56 +00:00
Josh Bleecher Snyder
733567a186 runtime: use integer math for hashmap overLoadFactor
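
A sketch of what the integer-only check looks like (constants correspond to the 6.5 entries-per-bucket load factor; names paraphrased from hashmap.go):

    package main

    import "fmt"

    const (
        bucketCnt     = 8  // entries per bucket
        loadFactorNum = 13 // loadFactorNum/loadFactorDen == 6.5
        loadFactorDen = 2
    )

    // overLoadFactor reports whether count elements spread over 2^B buckets
    // exceed the load factor, using only integer arithmetic (the uint64
    // conversion also keeps the math safe on 32-bit platforms).
    func overLoadFactor(count int, B uint8) bool {
        return count > bucketCnt &&
            uint64(count) > loadFactorNum*((uint64(1)<<B)/loadFactorDen)
    }

    func main() {
        fmt.Println(overLoadFactor(13, 1)) // false: exactly 6.5 per bucket
        fmt.Println(overLoadFactor(14, 1)) // true: 7 per bucket
    }
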
Change-Id: I92cf39a05e738a03d956779d7a1ab1ef8074b2ab
Reviewed-on: https://go-review.googlesource.com/54655
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-14 23:31:22 +00:00
Martin Möhrmann
248a7c7c42 runtime: replace some uses of newarray with newobject for maps
This avoids the never-triggered capacity checks in newarray.

Change-Id: Ib72b204adcb9e3fd3ab963defe0cd40e22d5d492
Reviewed-on: https://go-review.googlesource.com/54731
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-08-14 21:25:02 +00:00
Josh Bleecher Snyder
e0789d734d runtime: remove indentation in mapiternext
Invert the condition and continue, to remove indentation.

Change-Id: Id62a5d9abc9a4df1193bcf15f95f70f2c2e2abac
Reviewed-on: https://go-review.googlesource.com/55091
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2017-08-14 00:52:16 +00:00
Josh Bleecher Snyder
f5804ce4f3 runtime: simplify hashmap tooManyOverflowBuckets
This generates better code.

Masking B in the return statement should be unnecessary,
but the compiler is understandably not yet clever enough to see that.

Someday, it'd also be nice for the compiler to generate
a CMOV for the saturation if statement.
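
The simplified function reads roughly like this (paraphrased; the clamp on B is the saturation mentioned above, and B&15 is the mask that should be unnecessary):

    package main

    import "fmt"

    // tooManyOverflowBuckets reports whether noverflow overflow buckets is
    // too many for a map with 2^B regular buckets. "Too many" means roughly
    // as many overflow buckets as regular buckets, saturated at 2^15 so the
    // comparison stays meaningful for the 16-bit counter.
    func tooManyOverflowBuckets(noverflow uint16, B uint8) bool {
        if B > 15 {
            B = 15
        }
        // The compiler can't (yet) prove B <= 15 here, so mask to get a short shift.
        return noverflow >= uint16(1)<<(B&15)
    }

    func main() {
        fmt.Println(tooManyOverflowBuckets(8, 3)) // true: as many overflow as regular buckets
        fmt.Println(tooManyOverflowBuckets(7, 3)) // false
    }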

Change-Id: Ie1c157b21f5212610da1f3c7823a93816b3b61b9
Reviewed-on: https://go-review.googlesource.com/54656
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2017-08-14 00:51:48 +00:00
Josh Bleecher Snyder
aca92f352d runtime: CSE some function arguments in evacuate
Shrinks evacuate's machine code a little.

Change-Id: I08874c92abdc7e621bc0737e22f2a6be31542cab
Reviewed-on: https://go-review.googlesource.com/54652
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2017-08-14 00:51:14 +00:00
Josh Bleecher Snyder
a6136ded32 runtime: remove indentation in evacuate
Combine conditions into a single if statement.
This is more readable.

It should generate identical machine code, but it doesn't.
The new code is shorter.

Change-Id: I9bf52f8f288b0df97a2b9b4e4183f6ca74175e8a
Reviewed-on: https://go-review.googlesource.com/54651
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2017-08-14 00:51:02 +00:00
Filip Gruszczynski
f9531448b8 runtime: don't panic for bad size hint in hashmap
Because the hint parameter is supposed to be treated
purely as a hint, if it doesn't meet the requirements
we disregard it and continue as if there was no hint
at all.

Fixes #19926
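
In makemap terms the behaviour amounts to something like the following sketch (the exact bound checked in the CL differs; maxElems here is a stand-in for the largest allocatable bucket array):

    // sanitizeHint sketches the new policy: an out-of-range or otherwise
    // unusable size hint is silently dropped rather than being a reason to panic.
    func sanitizeHint(hint int64, maxElems int64) int64 {
        if hint < 0 || hint > maxElems {
            return 0 // behave as if no hint had been given
        }
        return hint
    }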

Change-Id: I86e7f99472fad6b99ba4e2fd33e4a9e55d55115e
Reviewed-on: https://go-review.googlesource.com/40854
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-05-02 20:51:39 +00:00
Josh Bleecher Snyder
b666f2860b runtime: use 64 bit calculation in overLoadFactor
overLoadFactor used a uintptr for its calculations.
When the number of potential buckets was large,
perhaps due to a coding error or corrupt/malicious user input
leading to a very large map size hint,
this led to overflow on 32 bit systems.
This overflow resulted in an infinite loop.

Prevent it by always using a 64 bit calculation.

Updates #20195

Change-Id: Iaabc710773cd5da6754f43b913478cc5562d89a2
Reviewed-on: https://go-review.googlesource.com/42185
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-05-01 17:35:57 +00:00
Josh Bleecher Snyder
94e44a9c8e runtime: preallocate some overflow buckets
When allocating a non-small array of buckets for a map,
also preallocate some overflow buckets.

The estimate of the number of overflow buckets
is based on a simulation of putting mid=(low+high)/2 elements
into a map, where low is the minimum number of elements
needed to reach this value of b (according to overLoadFactor),
and high is the maximum number of elements possible
to put in this value of b (according to overLoadFactor).
This estimate is surprisingly reliable and accurate.

The number of overflow buckets needed is quadratic,
for a fixed value of b.
Using this mid estimate means that we will allocate slightly too many
overflow buckets when the actual number of elements is near low,
and significantly too few overflow buckets
when the actual number of elements is near high.

The mechanism introduced in this CL can be re-used for
other overflow bucket optimizations.

For example, given an initial size hint,
we could estimate quite precisely the number of overflow buckets.
This is #19931.

We could also change from "non-nil means end-of-list"
to "pointer-to-hmap.buckets means end-of-list",
and then create a linked list of reusable overflow buckets
when they are freed by map growth.
That is #19992.

We could also use a similar mechanism to do bulk allocation
of overflow buckets.
All these uses can co-exist with only the one additional pointer
in mapextra, given a little care.
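
The allocation side, sketched (paraphrased from the makeBucketArray helper this CL adds; roundupsize is stubbed out, and in the real code the extra buckets are carved out of the same backing array as the regular buckets):

    // roundupsize stands in for the runtime's size-class rounding.
    func roundupsize(size uintptr) uintptr { return size }

    // bucketCount sketches how many buckets get allocated for 2^b regular
    // buckets: for b >= 4 ("non-small"), add roughly 2^(b-4) preallocated
    // overflow buckets, then grow into whatever the size class rounds up to.
    func bucketCount(bucketSize uintptr, b uint8) uintptr {
        base := uintptr(1) << b
        nbuckets := base
        if b >= 4 {
            nbuckets += uintptr(1) << (b - 4)
            sz := bucketSize * nbuckets
            if up := roundupsize(sz); up != sz {
                nbuckets = up / bucketSize
            }
        }
        return nbuckets
    }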

name                  old time/op    new time/op    delta
MapPopulate/1-8         60.1ns ± 2%    60.3ns ± 2%     ~     (p=0.278 n=19+20)
MapPopulate/10-8         577ns ± 1%     578ns ± 1%     ~     (p=0.140 n=20+20)
MapPopulate/100-8       8.06µs ± 1%    8.19µs ± 1%   +1.67%  (p=0.000 n=20+20)
MapPopulate/1000-8       104µs ± 1%     104µs ± 1%     ~     (p=0.317 n=20+20)
MapPopulate/10000-8      891µs ± 1%     888µs ± 1%     ~     (p=0.101 n=19+20)
MapPopulate/100000-8    8.61ms ± 1%    8.58ms ± 0%   -0.34%  (p=0.009 n=20+17)

name                  old alloc/op   new alloc/op   delta
MapPopulate/1-8          0.00B          0.00B          ~     (all equal)
MapPopulate/10-8          179B ± 0%      179B ± 0%     ~     (all equal)
MapPopulate/100-8       3.33kB ± 0%    3.38kB ± 0%   +1.48%  (p=0.000 n=20+16)
MapPopulate/1000-8      55.5kB ± 0%    53.4kB ± 0%   -3.84%  (p=0.000 n=19+20)
MapPopulate/10000-8      432kB ± 0%     428kB ± 0%   -1.06%  (p=0.000 n=19+20)
MapPopulate/100000-8    3.65MB ± 0%    3.62MB ± 0%   -0.70%  (p=0.000 n=20+20)

name                  old allocs/op  new allocs/op  delta
MapPopulate/1-8           0.00           0.00          ~     (all equal)
MapPopulate/10-8          1.00 ± 0%      1.00 ± 0%     ~     (all equal)
MapPopulate/100-8         18.0 ± 0%      17.0 ± 0%   -5.56%  (p=0.000 n=20+20)
MapPopulate/1000-8        96.0 ± 0%      72.6 ± 1%  -24.38%  (p=0.000 n=20+20)
MapPopulate/10000-8        625 ± 0%       319 ± 0%  -48.86%  (p=0.000 n=20+20)
MapPopulate/100000-8     6.23k ± 0%     4.00k ± 0%  -35.79%  (p=0.000 n=20+20)

Change-Id: I01f41cb1374bdb99ccedbc00d04fb9ae43daa204
Reviewed-on: https://go-review.googlesource.com/40979
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-04-19 13:47:28 +00:00
Josh Bleecher Snyder
17d497feaa runtime: add bmap.setoverflow
bmap already has an overflow (getter) method.
Add a setoverflow (setter) method, for readability.

Updates #19931
Updates #19992
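
The getter/setter pair, sketched as free functions (the real methods live on bmap and take the bucket size from the map type; the unsafe arithmetic is paraphrased):

    package main

    import "unsafe"

    const ptrSize = unsafe.Sizeof(uintptr(0))

    type bmap struct{} // stand-in; the real bucket size comes from the map type

    // overflow reads the overflow pointer stored in the last word of a bucket.
    func overflow(b unsafe.Pointer, bucketSize uintptr) *bmap {
        return *(**bmap)(unsafe.Pointer(uintptr(b) + bucketSize - ptrSize))
    }

    // setoverflow is the matching setter this CL adds.
    func setoverflow(b unsafe.Pointer, bucketSize uintptr, ovf *bmap) {
        *(**bmap)(unsafe.Pointer(uintptr(b) + bucketSize - ptrSize)) = ovf
    }

    func main() {
        var fake [4]uintptr // pretend bucket whose last word holds the overflow pointer
        ovf := &bmap{}
        setoverflow(unsafe.Pointer(&fake[0]), unsafe.Sizeof(fake), ovf)
        println(overflow(unsafe.Pointer(&fake[0]), unsafe.Sizeof(fake)) == ovf) // true
    }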

Change-Id: I00b3d94037c0d75508a7ebd51085c5c3857fb764
Reviewed-on: https://go-review.googlesource.com/40977
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-04-19 13:43:01 +00:00
Josh Bleecher Snyder
a41b1d5052 runtime: convert hmap.overflow into hmap.extra
Any change to how we allocate overflow buckets
will require some extra hmap storage,
but we don't want hmap to grow,
particularly as small maps usually don't need overflow buckets.

This CL converts the existing hmap overflow field,
which is usually used for pointer-free maps,
into a generic extra field.

This extra field can be used to hold data that is optional.
If it is valuable enough to have special
handling of overflow buckets, which are medium-sized,
it is valuable enough to pay an extra alloc and two extra words for.

Adding fields to extra would entail adding overhead to pointer-free maps;
any mapextra fields added would need to be weighed against that.
This CL is just rearrangement, though.

Updates #19931
Updates #19992
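
The rearranged layout, sketched (field shapes follow hashmap.go around this change; comments paraphrased, and bmap is a stand-in for the real bucket type):

    package main

    import "unsafe"

    type bmap struct{} // stand-in for the runtime bucket type

    // mapextra holds fields that are not present on all maps.
    type mapextra struct {
        // For maps whose keys and values contain no pointers, the bucket
        // type is marked pointer-free so the GC can skip it; the overflow
        // buckets must then be kept alive here instead. overflow[0] is for
        // hmap.buckets, overflow[1] for hmap.oldbuckets.
        overflow [2]*[]*bmap
    }

    // hmap keeps only one extra word: a pointer to the optional fields.
    type hmap struct {
        count     int
        flags     uint8
        B         uint8
        noverflow uint16
        hash0     uint32

        buckets    unsafe.Pointer
        oldbuckets unsafe.Pointer
        nevacuate  uintptr

        extra *mapextra // allocated lazily, only when something needs it
    }

    func main() {
        println(unsafe.Sizeof(hmap{})) // 48 on 64-bit
    }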

Change-Id: If8537a206905b9d4dc6cd9d886184ece671b3f80
Reviewed-on: https://go-review.googlesource.com/40976
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-04-19 13:42:04 +00:00
Josh Bleecher Snyder
619af17205 runtime: refactor hmap setoverflow into newoverflow
This simplifies the code, as well as providing
a single place to modify to change the
allocation of new overflow buckets.

Updates #19931
Updates #19992

Change-Id: I77070619f5c8fe449bbc35278278bca5eda780f2
Reviewed-on: https://go-review.googlesource.com/40975
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-04-19 13:41:44 +00:00
Filip Gruszczyński
6c5a819a5e reflect: add MakeMapWithSize for creating maps with size hint
Providing a size hint when creating a map allows us to avoid re-allocating
the underlying data structure if we know how many elements are going to
be inserted. This can be used, for example, when decoding maps in
gob.

Fixes #19599
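
Usage looks like this (reflect.MakeMapWithSize is the new exported entry point; the map still grows normally if the hint turns out to be too small):

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        // Build a map[string]int sized up front for ~1000 entries, so the
        // runtime can allocate a suitable bucket array once instead of
        // growing it repeatedly while the map is filled (e.g. while decoding).
        t := reflect.MapOf(reflect.TypeOf(""), reflect.TypeOf(0))
        m := reflect.MakeMapWithSize(t, 1000)

        m.SetMapIndex(reflect.ValueOf("answer"), reflect.ValueOf(42))
        fmt.Println(m.MapIndex(reflect.ValueOf("answer"))) // 42
    }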

Change-Id: I108035fec29391215d2261a73eaed1310b46bab1
Reviewed-on: https://go-review.googlesource.com/38335
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2017-04-04 20:01:43 +00:00
Josh Bleecher Snyder
04fc887761 runtime: delay marking maps as writing until after first alg call
Fixes #19359

Change-Id: I196b47cf0471915b6dc63785e8542aa1876ff695
Reviewed-on: https://go-review.googlesource.com/37665
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-03-02 17:38:30 +00:00
Josh Bleecher Snyder
064e44f218 runtime: evacuate old map buckets more consistently
During map growth, buckets are evacuated in two ways.
When a value is altered, its containing bucket is evacuated.
Also, an evacuation mark is maintained and advanced on every map write
while the map is growing.
Prior to this CL, the evacuation mark was always incremented,
even if the next bucket to be evacuated had already been evacuated.
This CL changes evacuation mark advancement to skip previously
evacuated buckets. This has the effect of making map evacuation both
more aggressive and more consistent.

Aggressive map evacuation is good. While the map is growing,
map accesses must check two buckets, which may be far apart in memory.
Map growth also delays garbage collection.
And if map evacuation is not aggressive enough, there is a risk that
a populate-once read-many map may be stuck permanently in map growth.
This CL does not eliminate that possibility, but it shrinks the window.
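
A self-contained model of the new advancement rule (the runtime checks each bucket's tophash state in place; an evacuated slice stands in for that check here):

    package main

    // advanceMark models the change: once the bucket at the mark is
    // evacuated, the mark also skips over any later buckets that were
    // already evacuated out of order by writes, instead of advancing by
    // exactly one.
    func advanceMark(nevacuate, justEvacuated, newbit uintptr, evacuated []bool) uintptr {
        if justEvacuated != nevacuate {
            return nevacuate
        }
        nevacuate++
        for nevacuate != newbit && evacuated[nevacuate] {
            nevacuate++
        }
        return nevacuate
    }

    func main() {
        evac := []bool{true, true, true, false, false, false, false, false}
        // Buckets 1 and 2 were already evacuated by direct writes, so after
        // evacuating bucket 0 the mark jumps straight to 3.
        println(advanceMark(0, 0, 8, evac)) // 3
    }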

There is minimal impact on map benchmarks:

name                         old time/op    new time/op    delta
MapPop100-8                    12.4µs ±11%    12.4µs ± 7%    ~     (p=0.798 n=15+15)
MapPop1000-8                    240µs ± 8%     235µs ± 8%    ~     (p=0.217 n=15+14)
MapPop10000-8                  4.49ms ±10%    4.51ms ±15%    ~     (p=1.000 n=15+13)
MegMap-8                       11.9ns ± 2%    11.8ns ± 0%  -1.01%  (p=0.000 n=15+11)
MegOneMap-8                    9.30ns ± 1%    9.29ns ± 1%    ~     (p=0.955 n=14+14)
MegEqMap-8                     31.9µs ± 5%    31.9µs ± 3%    ~     (p=0.935 n=15+15)
MegEmptyMap-8                  2.41ns ± 2%    2.41ns ± 0%    ~     (p=0.594 n=12+14)
SmallStrMap-8                  12.8ns ± 1%    12.7ns ± 1%    ~     (p=0.569 n=14+13)
MapStringKeysEight_16-8        13.6ns ± 1%    13.7ns ± 2%    ~     (p=0.100 n=13+15)
MapStringKeysEight_32-8        12.1ns ± 1%    12.1ns ± 2%    ~     (p=0.340 n=15+15)
MapStringKeysEight_64-8        12.1ns ± 1%    12.1ns ± 2%    ~     (p=0.582 n=15+14)
MapStringKeysEight_1M-8        12.0ns ± 1%    12.1ns ± 1%    ~     (p=0.267 n=15+14)
IntMap-8                       7.96ns ± 1%    7.97ns ± 2%    ~     (p=0.991 n=15+13)
RepeatedLookupStrMapKey32-8    15.8ns ± 2%    15.8ns ± 1%    ~     (p=0.393 n=15+14)
RepeatedLookupStrMapKey1M-8    35.3µs ± 2%    35.3µs ± 1%    ~     (p=0.815 n=15+15)
NewEmptyMap-8                  36.0ns ± 4%    36.4ns ± 7%    ~     (p=0.270 n=15+15)
NewSmallMap-8                  85.5ns ± 1%    85.6ns ± 1%    ~     (p=0.674 n=14+15)
MapIter-8                      89.9ns ± 6%    90.8ns ± 6%    ~     (p=0.467 n=15+15)
MapIterEmpty-8                 10.0ns ±22%    10.0ns ±25%    ~     (p=0.846 n=15+15)
SameLengthMap-8                4.18ns ± 1%    4.17ns ± 1%    ~     (p=0.653 n=15+14)
BigKeyMap-8                    20.2ns ± 1%    20.1ns ± 1%  -0.82%  (p=0.002 n=15+15)
BigValMap-8                    22.5ns ± 8%    22.3ns ± 6%    ~     (p=0.615 n=15+15)
SmallKeyMap-8                  15.3ns ± 1%    15.3ns ± 1%    ~     (p=0.754 n=15+14)
ComplexAlgMap-8                58.4ns ± 1%    58.7ns ± 1%  +0.52%  (p=0.000 n=14+15)

There is a tiny but detectable difference in the compiler:

name       old time/op      new time/op      delta
Template        218ms ± 5%       219ms ± 4%    ~     (p=0.094 n=98+98)
Unicode        93.6ms ± 5%      93.6ms ± 4%    ~     (p=0.910 n=94+95)
GoTypes         596ms ± 5%       598ms ± 6%    ~     (p=0.533 n=98+100)
Compiler        2.72s ± 3%       2.72s ± 4%    ~     (p=0.238 n=100+99)
SSA             4.11s ± 3%       4.11s ± 3%    ~     (p=0.864 n=99+98)
Flate           129ms ± 6%       129ms ± 4%    ~     (p=0.522 n=98+96)
GoParser        151ms ± 4%       151ms ± 4%  -0.48%  (p=0.017 n=96+96)
Reflect         379ms ± 3%       376ms ± 4%  -0.57%  (p=0.011 n=99+99)
Tar             112ms ± 5%       112ms ± 6%    ~     (p=0.688 n=93+95)
XML             214ms ± 4%       214ms ± 5%    ~     (p=0.968 n=100+99)
StdCmd          16.2s ± 2%       16.2s ± 2%  -0.26%  (p=0.048 n=99+99)

name       old user-ns/op   new user-ns/op   delta
Template   252user-ms ± 4%  250user-ms ± 4%  -0.63%  (p=0.020 n=98+97)
Unicode    113user-ms ± 7%  114user-ms ± 5%    ~     (p=0.057 n=97+94)
GoTypes    776user-ms ± 5%  777user-ms ± 5%    ~     (p=0.375 n=97+96)
Compiler   3.61user-s ± 3%  3.60user-s ± 3%    ~     (p=0.445 n=98+93)
SSA        5.84user-s ± 6%  5.85user-s ± 5%    ~     (p=0.542 n=100+95)
Flate      154user-ms ± 5%  154user-ms ± 5%    ~     (p=0.699 n=99+99)
GoParser   184user-ms ± 6%  183user-ms ± 4%    ~     (p=0.557 n=98+95)
Reflect    461user-ms ± 5%  462user-ms ± 4%    ~     (p=0.853 n=97+99)
Tar        130user-ms ± 5%  129user-ms ± 6%    ~     (p=0.567 n=93+100)
XML        257user-ms ± 6%  258user-ms ± 6%    ~     (p=0.205 n=99+100)

Change-Id: Id92dd54a152904069aac415e6aaaab5c67f5f476
Reviewed-on: https://go-review.googlesource.com/37011
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2017-02-28 20:42:50 +00:00
Austin Clements
87e48c5afd runtime, cmd/compile: rename memclr -> memclrNoHeapPointers
Since barrier-less memclr is only safe in very narrow circumstances,
this commit renames memclr to avoid accidentally calling memclr on
typed memory. This can cause subtle, non-deterministic bugs, so it's
worth some effort to prevent. In the near term, this will also prevent
bugs creeping in from any concurrent CLs that add calls to memclr; if
this happens, whichever patch hits master second will fail to compile.

This also adds the other new memclr variants to the compiler's
builtin.go to minimize the churn on that binary blob. We'll use these
in future commits.

Updates #17503.

Change-Id: I00eead049f5bd35ca107ea525966831f3d1ed9ca
Reviewed-on: https://go-review.googlesource.com/31369
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-28 18:20:33 +00:00
Austin Clements
aa581f5157 runtime: use typedmemclr for typed memory
The hybrid barrier requires distinguishing typed and untyped memory
even when zeroing because the *current* contents of the memory matters
even when overwriting.

This commit introduces runtime.typedmemclr and runtime.memclrHasPointers
as typed memory clearing functions parallel to runtime.typedmemmove.
Currently these simply call memclr, but with the hybrid barrier we'll
need to shade any pointers we're overwriting. These will provide us
with the necessary hooks to do so.

Updates #17503.

Change-Id: I74478619f8907825898092aaa204d6e4690f27e6
Reviewed-on: https://go-review.googlesource.com/31366
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Rick Hudson <rlh@golang.org>
2016-10-28 18:20:04 +00:00
Keith Randall
442de98c14 cmd/compile,runtime: redo how map assignments work
To compile:
  m[k] = v
instead of:
  mapassign(maptype, m, &k, &v)
do:
  *mapassign(maptype, m, &k) = v

mapassign returns a pointer to the value slot in the map.  It is just
like mapaccess except that it will allocate a new slot if k is not
already present in the map.

This makes map accesses faster but potentially larger (codewise).

It is faster because the write into the map is done when the compiler
knows the concrete type, so it can be done with a few store
instructions instead of calling typedmemmove.  We also potentially
avoid stack temporaries to hold v.

The code can be larger when the map has pointers in its value type,
since there is a write barrier call in addition to the mapassign call.
That makes the code at the callsite a bit bigger (go binary is 0.3%
bigger).

This CL is in preparation for doing operations like m[k] += v with
only a single runtime call.  That will roughly double the speed of
such operations.

Update #17133
Update #5147

Change-Id: Ia435f032090a2ed905dac9234e693972fe8c2dc5
Reviewed-on: https://go-review.googlesource.com/30815
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-10-12 20:41:23 +00:00
Austin Clements
bf776a988b runtime: document bmap.tophash
In particular, it wasn't obvious that some values are special (unless
you also found those special values), so document that it isn't
necessarily a hash value.

Change-Id: Iff292822b44408239e26cd882dc07be6df2c1d38
Reviewed-on: https://go-review.googlesource.com/30143
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-10-03 22:00:06 +00:00
Josh Bleecher Snyder
9980b70cb4 runtime: limit the number of map overflow buckets
Consider repeatedly adding many items to a map
and then deleting them all, as in #16070. The map
itself doesn't need to grow above the high water
mark of number of items. However, due to random
collisions, the map can accumulate overflow
buckets.

Prior to this CL, those overflow buckets were
never removed, which led to a slow memory leak.

The problem with removing overflow buckets is
iterators. The obvious approach is to repack
keys and values and eliminate unused overflow
buckets. However, keys, values, and overflow
buckets cannot be manipulated without disrupting
iterators.

This CL takes a different approach, which is to
reuse the existing map growth mechanism,
which is well established, well tested, and
safe in the presence of iterators.
When a map has accumulated enough overflow buckets
we trigger map growth, but grow into a map of the
same size as before. The old overflow buckets will
be left behind for garbage collection.

For the code in #16070, instead of climbing
(very slowly) forever, memory usage now cycles
between 264mb and 483mb every 15 minutes or so.

To avoid increasing the size of maps,
the overflow bucket counter is only 16 bits.
For large maps, the counter is incremented
stochastically.

Fixes #16070
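
The stochastic counter mentioned in the last paragraph works roughly like this (math/rand stands in for the runtime's fastrand; otherwise paraphrased from incrnoverflow):

    package main

    import "math/rand"

    // incrnoverflow keeps an approximate count of overflow buckets in a
    // 16-bit field: small maps (B < 16) count exactly; larger maps increment
    // with probability 1/2^(B-15), so reaching ~2^15 still means "about as
    // many overflow buckets as regular buckets".
    func incrnoverflow(noverflow *uint16, B uint8) {
        if B < 16 {
            *noverflow++
            return
        }
        mask := uint32(1)<<(B-15) - 1 // e.g. B == 18: mask == 7, increment 1 time in 8
        if rand.Uint32()&mask == 0 {
            *noverflow++
        }
    }

    func main() {
        var n uint16
        for i := 0; i < 1<<20; i++ {
            incrnoverflow(&n, 20) // increments with probability 1/32
        }
        println(n) // ~32768 on average
    }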

Change-Id: If551d77613ec6836907efca58bda3deee304297e
Reviewed-on: https://go-review.googlesource.com/25049
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-09-13 17:53:32 +00:00
Josh Bleecher Snyder
2b74de3ed9 runtime: rename fastrand1 to fastrand
Change-Id: I37706ff0a3486827c5b072c95ad890ea87ede847
Reviewed-on: https://go-review.googlesource.com/28210
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-08-30 23:59:21 +00:00
Keith Randall
e492d9f018 runtime: fix map iterator concurrent map check
We should check whether there is a concurrent writer at the
start of every mapiternext, not just in mapaccessK (which is
only called during certain map growth situations).

Tests turned off by default because they are inherently flaky.

Fixes #16278

Change-Id: I8b72cab1b8c59d1923bec6fa3eabc932e4e91542
Reviewed-on: https://go-review.googlesource.com/24749
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
2016-08-16 21:52:44 +00:00
Martin Möhrmann
7e460e70d9 runtime: use type int to specify size for newarray
Consistently use type int for the size argument of
runtime.newarray, runtime.reflect_unsafe_NewArray
and reflect.unsafe_NewArray.

Change-Id: Ic77bf2dde216c92ca8c49462f8eedc0385b6314e
Reviewed-on: https://go-review.googlesource.com/22311
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Martin Möhrmann <martisch@uos.de>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-04-21 04:15:14 +00:00
Keith Randall
60fd32a47f cmd/compile: change the way we handle large map values
mapaccess{1,2} returns a pointer to the value.  When the key
is not in the map, it returns a pointer to zeroed memory.
Currently, for large map values we have a complicated scheme which
dynamically allocates zeroed memory for this purpose.  It is ugly
code and requires an atomic.Load in a bunch of places we'd rather
not have it.

Switch to a scheme where callsites of mapaccess{1,2} which expect
large return values pass in a pointer to zeroed memory that
mapaccess can return if the key is not found.  This avoids the
atomic.Load on all map accesses with a few extra instructions only
for the large value accesses, plus a bit of bss space.

There was a time (1.4 & 1.5?) where we did something like this but
all the tricks to make the right size zero value were done by the
linker.  That scheme broke in the presence of dynamic linking.
The scheme in this CL works even when dynamic linking.

Fixes #12337
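
The new scheme, sketched with minimal stand-ins for the runtime types (the _fat entry point and the shared zeroVal buffer follow the runtime; details paraphrased):

    package main

    import "unsafe"

    type maptype struct{} // stand-ins for the real runtime types
    type hmap struct{}

    const maxZero = 1024

    var zeroVal [maxZero]byte // shared zero buffer returned for small-value misses

    func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
        // Stub: the real lookup returns &zeroVal[0] when the key is missing.
        return unsafe.Pointer(&zeroVal[0])
    }

    // mapaccess1_fat is what the compiler calls for large value types: the
    // caller passes a pointer to a zeroed global of the value's size, and
    // the runtime substitutes it on a miss, so no dynamic allocation or
    // atomic.Load is needed.
    func mapaccess1_fat(t *maptype, h *hmap, key, zero unsafe.Pointer) unsafe.Pointer {
        v := mapaccess1(t, h, key)
        if v == unsafe.Pointer(&zeroVal[0]) {
            return zero
        }
        return v
    }

    func main() {
        var bigZero [4096]byte // stands in for the compiler-emitted zero symbol in bss
        p := mapaccess1_fat(nil, nil, nil, unsafe.Pointer(&bigZero))
        println(p == unsafe.Pointer(&bigZero)) // true: missing key yields the caller's zero memory
    }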

Change-Id: Ic2d0319944af33bbb59785938d9ab80958d1b4b1
Reviewed-on: https://go-review.googlesource.com/22221
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
2016-04-20 21:15:31 +00:00
Austin Clements
4721ea6abc runtime/internal/atomic: rename Storep1 to StorepNoWB
Make it clear that the point of this function is to store a pointer
*without* a write barrier.

sed -i -e 's/Storep1/StorepNoWB/' $(git grep -l Storep1)

Updates #15270.

Change-Id: Ifad7e17815e51a738070655fe3b178afdadaecf6
Reviewed-on: https://go-review.googlesource.com/21994
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Michael Matloob <matloob@golang.org>
2016-04-13 19:17:25 +00:00
Josh Bleecher Snyder
974c201f74 runtime: avoid unnecessary map iteration write barrier
Update #14921

Change-Id: I5c5816d0193757bf7465b1e09c27ca06897df4bf
Reviewed-on: https://go-review.googlesource.com/21814
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-04-10 20:01:47 +00:00
Emmanuel Odeke
e4f1d9cf2e runtime: make execution error panic values implement the Error interface
Make execution panics implement Error as
mandated by https://golang.org/ref/spec#Run_time_panics,
instead of panicking with plain strings.

Fixes #14965

Change-Id: I7827f898b9b9c08af541db922cc24fa0800ff18a
Reviewed-on: https://go-review.googlesource.com/21214
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-04-10 01:16:30 +00:00
Matthew Dempsky
a03bdc3e6b runtime: eliminate unnecessary type conversions
Automated refactoring produced using github.com/mdempsky/unconvert.

Change-Id: Iacf871a4f221ef17f48999a464ab2858b2bbaa90
Reviewed-on: https://go-review.googlesource.com/20071
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-07 20:53:27 +00:00
Brad Fitzpatrick
5fea2ccc77 all: single space after period.
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.

This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:

$ perl -i -npe 's,^(\s*// .+[a-z]\.)  +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.)  +([A-Z])')
$ go test go/doc -update

Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-02 00:13:47 +00:00
Matthew Dempsky
9877900c8c Revert "cmd/compile: move hiter, hmap, and scase definitions into builtin.go"
This reverts commit f28bbb776a.

Change-Id: I82fb81dcff3ddcaefef72949f1ef3a41bcd22301
Reviewed-on: https://go-review.googlesource.com/19849
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-02-23 19:42:52 +00:00
Matthew Dempsky
f28bbb776a cmd/compile: move hiter, hmap, and scase definitions into builtin.go
Also eliminates per-maptype hiter and hmap types, since they're not
really needed anyway.  Update packages reflect and runtime
accordingly.

Reduces golang.org/x/tools/cmd/godoc's text segment by ~170kB:

   text	   data	    bss	    dec	    hex	filename
13085702	 140640	 151520	13377862	 cc2146	godoc.before
12915382	 140640	 151520	13207542	 c987f6	godoc.after

Updates #6853.

Change-Id: I948b2bc1f22d477c1756204996b4e3e1fb568d81
Reviewed-on: https://go-review.googlesource.com/16610
Reviewed-by: Keith Randall <khr@golang.org>
2016-02-22 07:42:37 +00:00
Russ Cox
50c5042047 runtime: best-effort detection of concurrent misuse of maps
If reports like #13062 are really concurrent misuse of maps,
we can detect that, at least some of the time, with a cheap check.

There is an extra pair of memory writes for writing to a map,
but to the same cache line as h.count, which is often being modified anyway,
and there is an extra memory read for reading from a map,
but to the same cache line as h.count, which is always being read anyway.
So the check should be basically invisible and may help reduce the
number of "mysterious runtime crash due to map misuse" reports.
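
A simplified model of the check (the real flag is the hashWriting bit in hmap.flags, set by writers for the duration of the write and tested by readers, writers, and iterators; throw is modelled with panic):

    package main

    const hashWriting = 4 // flag bit: some goroutine is writing to the map

    type hmapModel struct {
        count int // first field, sharing a cache line with flags
        flags uint8
    }

    func mapassignModel(h *hmapModel) {
        if h.flags&hashWriting != 0 {
            panic("concurrent map writes") // the runtime uses throw: unrecoverable
        }
        h.flags |= hashWriting
        h.count++ // ... the actual write happens here ...
        h.flags &^= hashWriting
    }

    func mapaccessModel(h *hmapModel) {
        if h.flags&hashWriting != 0 {
            panic("concurrent map read and map write")
        }
        // ... the actual read happens here ...
    }

    func main() {
        h := &hmapModel{}
        mapassignModel(h)
        mapaccessModel(h)
        println(h.count) // 1
    }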

Change-Id: I0e71b0d92eaa3b7bef48bf41b0f5ab790092487e
Reviewed-on: https://go-review.googlesource.com/17501
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-12-07 20:19:52 +00:00
Michael Matloob
432cb66f16 runtime: break out system-specific constants into package sys
runtime/internal/sys will hold system-, architecture- and config-
specific constants.

Updates #11647

Change-Id: I6db29c312556087a42e8d2bdd9af40d157c56b54
Reviewed-on: https://go-review.googlesource.com/16817
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-12 17:04:45 +00:00
Michael Matloob
67faca7d9c runtime: break atomics out into package runtime/internal/atomic
This change breaks out most of the atomics functions in the runtime
into package runtime/internal/atomic. It adds some basic support
in the toolchain for runtime packages, and also modifies linux/arm
atomics to remove the dependency on the runtime's mutex. The mutexes
have been replaced with spinlocks.

all trybots are happy!
In addition to the trybots, I've tested on the darwin/arm64 builder,
on the darwin/arm builder, and on a ppc64le machine.

Change-Id: I6698c8e3cf3834f55ce5824059f44d00dc8e3c2f
Reviewed-on: https://go-review.googlesource.com/14204
Run-TryBot: Michael Matloob <matloob@golang.org>
Reviewed-by: Russ Cox <rsc@golang.org>
2015-11-10 17:38:04 +00:00
Ian Lance Taylor
73f329f472 runtime, syscall: add calls to msan functions
Add explicit memory sanitizer instrumentation to the runtime and syscall
packages.  The compiler does not instrument the runtime package.  It
does instrument the syscall package, but we need to add a couple of
cases that it can't see.

Change-Id: I2d66073f713fe67e33a6720460d2bb8f72f31394
Reviewed-on: https://go-review.googlesource.com/16164
Reviewed-by: David Crawshaw <crawshaw@golang.org>
2015-10-21 19:17:46 +00:00
Keith Randall
00c638d243 runtime: on map update, don't overwrite key if we don't need to.
Keep track of which types of keys need an update and which don't.

Strings need an update because the new key might pin a smaller backing store.
Floats need an update because it might be +0/-0.
Interfaces need an update because they may contain strings or floats.

Fixes #11088
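
A simplified model of the per-type decision (the real flag is computed by the compiler and stored in the map type; kinds are modelled with strings here):

    // needKeyUpdate models which key types require the stored key to be
    // overwritten when an existing entry is updated.
    func needKeyUpdate(kind string) bool {
        switch kind {
        case "string":
            return true // equal strings may differ in backing store; the new one may pin less memory
        case "float32", "float64", "complex64", "complex128":
            return true // +0 and -0 compare equal but have different bits
        case "interface":
            return true // may dynamically contain strings or floats
        default:
            return false // e.g. integers: equal keys are bit-for-bit identical
        }
    }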

Change-Id: I9ade53c1dfb3c1a2870d68d07201bc8128e9f217
Reviewed-on: https://go-review.googlesource.com/10843
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-09 21:06:49 +00:00
Michael Hudson-Doyle
38519e69d0 cmd/compile, runtime: stop returning t.zero on hashmap miss
Previously t.zero always pointed to runtime.zerovalue. Change the hashmap code
to always return a runtime pointer directly, and change that pointer to point
to a larger buffer if one is needed.

(It might be better to only copy from the pointer returned by the mapaccess
functions when the value type is small enough and have the compiler insert
explicit zeroing for larger value types, but I tried and failed to do this).

This removes all uses of the zero field of the type data; the field itself can
be removed in a separate change.

Fixes #11491

Change-Id: I5b81752ff4067d74a5a281c41e88f151bae0171e
Reviewed-on: https://go-review.googlesource.com/13784
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-08-26 00:03:21 +00:00
Russ Cox
c5dff7282e cmd/compile, runtime: fix placement of map bucket overflow pointer on nacl
On most systems, a pointer is the worst case alignment, so adding
a pointer field at the end of a struct guarantees there will be no
padding added after that field (to satisfy overall struct alignment
due to some more-aligned field also present).

In the runtime, the map implementation needs a quick way to
get to the overflow pointer, which is last in the bucket struct,
so it uses size - sizeof(pointer) as the offset.

NaCl/amd64p32 is the exception, as always.
The worst case alignment is 64 bits but pointers are 32 bits.
There's a long history that is not worth going into, but when
we moved the overflow pointer to the end of the struct,
we didn't get the padding computation right.
The compiler computed the regular struct size and then
on amd64p32 added another 32-bit field.
And the runtime assumed it could step back two 32-bit fields
(one 64-bit register size) to get to the overflow pointer.
But in fact if the struct needed 64-bit alignment, the computation
of the regular struct size would have added a 32-bit pad already,
and then the code unconditionally added a second 32-bit pad.
This placed the overflow pointer three words from the end, not two.
The last two were padding, and since the runtime was consistent
about using the second-to-last word as the overflow pointer,
no harm done in the sense of overwriting useful memory.
But writing the overflow pointer to a non-pointer word of memory
means that the GC can't see the overflow blocks, so it will
collect them prematurely. Then bad things happen.

Correct all this in a few steps:

1. Add an explicit check at the end of the bucket layout in the
compiler that the overflow field is last in the struct, never
followed by padding.

2. When padding is needed on nacl (not always, just when needed),
insert it before the overflow pointer, to preserve the "last in the struct"
property.

3. Let the compiler have the final word on the width of the struct,
by inserting an explicit padding field instead of overwriting the
results of the width computation it does.

4. For the same reason (tell the truth to the compiler), set the type
of the overflow field when we're trying to pretend it's not a pointer
(in this case the runtime maintains a list of the overflow blocks
elsewhere).

5. Make the runtime use "last in the struct" as its location algorithm.

This fixes TestTraceStress on nacl/amd64p32.
The 'bad map state' and 'invalid free list' failures no longer occur.

Fixes #11838.

Change-Id: If918887f8f252d988db0a35159944d2b36512f92
Reviewed-on: https://go-review.googlesource.com/12971
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-31 18:49:32 +00:00
Russ Cox
d820d5f3ab runtime: make mapzero not crash on arm
Change-Id: I40e8a4a2e62253233b66f6a2e61e222437292c31
Reviewed-on: https://go-review.googlesource.com/10151
Reviewed-by: Minux Ma <minux@golang.org>
2015-05-15 20:14:41 +00:00
Russ Cox
7e26a2d9a8 runtime: allocate map element zero values for reflect-created types on demand
Preallocating them in reflect means that
(1) if you say _ = PtrTo(ArrayOf(1000000000, reflect.TypeOf(byte(0)))), you just allocated 1GB of data
(2) if you say it again, that's *another* GB of data.

The only use of t.zero in the runtime is for map elements.
Delay the allocation until the creation of a map with that element type,
and share the zeros.

The one downside of the shared zero is that it's not garbage collected,
but it's also never written, so the OS should be able to handle it fairly
efficiently.

Change-Id: I56b098a091abf3ac0945de28ebef9a6c08e76614
Reviewed-on: https://go-review.googlesource.com/10111
Reviewed-by: Keith Randall <khr@golang.org>
2015-05-15 13:56:40 +00:00
Austin Clements
822a24b602 runtime: remove checkgc code from hashmap
Currently hashmap is riddled with code that attempts to force a GC on
the next allocation if checkgc is set. This no longer works as
originally intended with the concurrent collector, and is apparently
no longer used anyway.

Remove checkgc.

Change-Id: Ia6c17c405fa8821dc2e6af28d506c1133ab1ca0c
Reviewed-on: https://go-review.googlesource.com/8355
Reviewed-by: Keith Randall <khr@golang.org>
2015-04-02 15:28:56 +00:00
Keith Randall
cd5b144d98 runtime,reflect,cmd/internal/gc: Fix comments referring to .c/.h files
Everything has moved to Go, but comments still refer to .c/.h files.
Fix all of those up, at least for these three directories.

Fixes #10138

Change-Id: Ie5efe89b247841e0b3f82aac5256b2c606ef67dc
Reviewed-on: https://go-review.googlesource.com/7431
Reviewed-by: Russ Cox <rsc@golang.org>
2015-03-11 20:19:43 +00:00
Dmitry Vyukov
52dadc1f31 cmd/gc: fix noscan maps
Change 85e7bee introduced a bug:
it marks map buckets as noscan when key and val do not contain pointers.
However, buckets with large/outline key or val do contain pointers.

This change takes key/val size into consideration when
marking buckets as noscan.

Change-Id: I7172a0df482657be39faa59e2579dd9f209cb54d
Reviewed-on: https://go-review.googlesource.com/4901
Reviewed-by: Keith Randall <khr@golang.org>
2015-02-15 08:52:14 +00:00