Commit graph

52 commits

Author SHA1 Message Date
David Crawshaw
69285a8b46 reflect: recognize unnamed directional channels
go test github.com/onsi/gomega/gbytes now passes at tip, and tests have
been added to the reflect package.

Fixes #14645
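
A minimal sketch (not the CL's actual test) of what is being exercised: an
unnamed directional channel type built with ChanOf must resolve to the same
runtime type descriptor the compiler produced for the source-level type.

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        var c <-chan int // unnamed receive-only channel type
        t := reflect.TypeOf(c)

        // Building the same directional channel type dynamically should
        // find the compiler-generated type rather than a duplicate.
        u := reflect.ChanOf(reflect.RecvDir, reflect.TypeOf(0))

        fmt.Println(t == u, t.ChanDir()) // true <-chan
    }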

Change-Id: I16216c1a86211a1103d913237fe6bca5000cf885
Reviewed-on: https://go-review.googlesource.com/20221
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-03-04 20:34:30 +00:00
Brad Fitzpatrick
5fea2ccc77 all: single space after period.
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedent.

This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:

$ perl -i -npe 's,^(\s*// .+[a-z]\.)  +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.)  +([A-Z])')
$ go test go/doc -update

Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-03-02 00:13:47 +00:00
David Crawshaw
0231f5420f cmd/compile: remove uncommonType.name
Reduces binary size of cmd/go by 0.5%.
For #6853.

Change-Id: I5a4b814049580ab5098ad252d979f80b70d8a5f9
Reviewed-on: https://go-review.googlesource.com/19694
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-02-26 12:02:39 +00:00
David Crawshaw
30f93f0994 cmd/compile: remove rtype.ptrToThis
Simplifies some code as ptrToThis was unreliable under dynamic
linking. Now the same type lookup is used regardless of execution
mode.

A synthetic relocation, R_USETYPE, is introduced to make sure the
linker includes *T on use of T, if *T is carrying methods.

Changes the heap dump format. Anything reading the format needs to
look at the last bool of a type of an interface value to determine
if the type should be the pointer-to type.

Reduces binary size of cmd/go by 0.2%.
For #6853.

Change-Id: I79fcb19a97402bdb0193f3c7f6d94ddf061ee7b2
Reviewed-on: https://go-review.googlesource.com/19695
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-02-25 17:47:42 +00:00
David Crawshaw
a858931200 cmd/compile: embed type string header in rtype
Reduces binary size of cmd/go by 1%.

For #6853.

Change-Id: I6f2992a4dd3699db1b532ab08683e82741b9c2e4
Reviewed-on: https://go-review.googlesource.com/19692
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2016-02-24 17:12:15 +00:00
Matthew Dempsky
9877900c8c Revert "cmd/compile: move hiter, hmap, and scase definitions into builtin.go"
This reverts commit f28bbb776a.

Change-Id: I82fb81dcff3ddcaefef72949f1ef3a41bcd22301
Reviewed-on: https://go-review.googlesource.com/19849
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2016-02-23 19:42:52 +00:00
Matthew Dempsky
f28bbb776a cmd/compile: move hiter, hmap, and scase definitions into builtin.go
Also eliminates per-maptype hiter and hmap types, since they're not
really needed anyway.  Update packages reflect and runtime
accordingly.

Reduces golang.org/x/tools/cmd/godoc's text segment by ~170kB:

   text	   data	    bss	    dec	    hex	filename
13085702	 140640	 151520	13377862	 cc2146	godoc.before
12915382	 140640	 151520	13207542	 c987f6	godoc.after

Updates #6853.

Change-Id: I948b2bc1f22d477c1756204996b4e3e1fb568d81
Reviewed-on: https://go-review.googlesource.com/16610
Reviewed-by: Keith Randall <khr@golang.org>
2016-02-22 07:42:37 +00:00
kargakis
e243d242d7 reflect: Comment fix
Change-Id: I86cdd5c1d7b6f76d3474d180e75ea0c732241080
Reviewed-on: https://go-review.googlesource.com/16309
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-10-27 15:46:29 +00:00
Marcel van Lohuizen
adf9b30e55 reflect: adjust access to unexported embedded structs
This CL changes reflect to allow access to exported fields and
methods in unexported embedded structs, for gccgo now and for gc
once it has been adjusted to disallow access to embedded unexported structs.

Addresses #12367, #7363, #11007, and #7247.
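
An illustrative sketch (type and field names assumed, not taken from the CL)
of the access now permitted: the embedded struct type is unexported, but the
promoted field itself is exported.

    package main

    import (
        "fmt"
        "reflect"
    )

    type inner struct{ Exported int } // unexported embedded struct type

    type Outer struct {
        inner
    }

    func main() {
        v := reflect.ValueOf(Outer{inner{42}})
        // The exported field is reachable even though the path to it
        // goes through an unexported embedded struct.
        fmt.Println(v.FieldByName("Exported").Interface()) // 42
    }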

Change-Id: If80536eab35abcd25300d8ddc2d27d5c42d7e78e
Reviewed-on: https://go-review.googlesource.com/14010
Reviewed-by: Russ Cox <rsc@golang.org>
2015-10-26 10:14:38 +00:00
Keith Randall
00c638d243 runtime: on map update, don't overwrite key if we don't need to.
Keep track of which types of keys need an update and which don't.

Strings need an update because the new key might pin a smaller backing store.
Floats need an update because it might be +0/-0.
Interfaces need an update because they may contain strings or floats.

Fixes #11088
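
A small illustration (not from the CL) of the float case: +0 and -0 compare
equal as keys, so an assignment to an existing key must also update the stored
key so its bit pattern matches the most recent assignment.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        m := map[float64]string{}
        m[0.0] = "+0"
        negZero := math.Copysign(0, -1)
        m[negZero] = "-0" // same key (0 == -0): value replaced, stored key updated

        for k := range m {
            fmt.Println(math.Signbit(k), m[k]) // true -0: the key picked up the new sign bit
        }
    }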

Change-Id: I9ade53c1dfb3c1a2870d68d07201bc8128e9f217
Reviewed-on: https://go-review.googlesource.com/10843
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2015-09-09 21:06:49 +00:00
Michael Hudson-Doyle
af78482d6b cmd/compile, cmd/link, reflect, runtime: remove type.zero field
No longer used after previous hashmap change.

Change-Id: I558470f872281e84a78406132df4e391d077b833
Reviewed-on: https://go-review.googlesource.com/13785
Run-TryBot: Michael Hudson-Doyle <michael.hudson@canonical.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-08-26 00:28:17 +00:00
Michael Hudson-Doyle
38519e69d0 cmd/compile, runtime: stop returning t.zero on hashmap miss
Previously t.zero always pointed to runtime.zerovalue. Change the hashmap code
to always return a runtime pointer directly, and change that pointer to point
to a larger buffer if one is needed.

(It might be better to only copy from the pointer returned by the mapaccess
functions when the value type is small enough and have the compiler insert
explicit zeroing for larger value types, but I tried and failed to do this).

This removes all uses of the zero field of the type data; the field itself can
be removed in a separate change.

Fixes #11491

Change-Id: I5b81752ff4067d74a5a281c41e88f151bae0171e
Reviewed-on: https://go-review.googlesource.com/13784
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-08-26 00:03:21 +00:00
Russ Cox
c5dff7282e cmd/compile, runtime: fix placement of map bucket overflow pointer on nacl
On most systems, a pointer is the worst case alignment, so adding
a pointer field at the end of a struct guarantees there will be no
padding added after that field (to satisfy overall struct alignment
due to some more-aligned field also present).

In the runtime, the map implementation needs a quick way to
get to the overflow pointer, which is last in the bucket struct,
so it uses size - sizeof(pointer) as the offset.

NaCl/amd64p32 is the exception, as always.
The worst case alignment is 64 bits but pointers are 32 bits.
There's a long history that is not worth going into, but when
we moved the overflow pointer to the end of the struct,
we didn't get the padding computation right.
The compiler computed the regular struct size and then
on amd64p32 added another 32-bit field.
And the runtime assumed it could step back two 32-bit fields
(one 64-bit register size) to get to the overflow pointer.
But in fact if the struct needed 64-bit alignment, the computation
of the regular struct size would have added a 32-bit pad already,
and then the code unconditionally added a second 32-bit pad.
This placed the overflow pointer three words from the end, not two.
The last two were padding, and since the runtime was consistent
about using the second-to-last word as the overflow pointer,
no harm done in the sense of overwriting useful memory.
But writing the overflow pointer to a non-pointer word of memory
means that the GC can't see the overflow blocks, so it will
collect them prematurely. Then bad things happen.

Correct all this in a few steps:

1. Add an explicit check at the end of the bucket layout in the
compiler that the overflow field is last in the struct, never
followed by padding.

2. When padding is needed on nacl (not always, just when needed),
insert it before the overflow pointer, to preserve the "last in the struct"
property.

3. Let the compiler have the final word on the width of the struct,
by inserting an explicit padding field instead of overwriting the
results of the width computation it does.

4. For the same reason (tell the truth to the compiler), set the type
of the overflow field when we're trying to pretend it's not a pointer
(in this case the runtime maintains a list of the overflow blocks
elsewhere).

5. Make the runtime use "last in the struct" as its location algorithm.

This fixes TestTraceStress on nacl/amd64p32.
The 'bad map state' and 'invalid free list' failures no longer occur.

Fixes #11838.
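
A sketch (field names assumed) of the two addressing schemes the text
contrasts. On most platforms the two prints agree; on amd64p32 a naive layout
like this one picks up trailing padding and they diverge, which is exactly the
mismatch the CL removes for the compiler-generated map buckets.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type bucket struct {
        tophash  [8]uint8
        data     [16]uint64 // stand-in for the inlined keys and values
        overflow unsafe.Pointer
    }

    func main() {
        fmt.Println(unsafe.Offsetof(bucket{}.overflow))                  // "last in the struct"
        fmt.Println(unsafe.Sizeof(bucket{}) - unsafe.Sizeof(uintptr(0))) // "size - sizeof(pointer)"
    }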

Change-Id: If918887f8f252d988db0a35159944d2b36512f92
Reviewed-on: https://go-review.googlesource.com/12971
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
2015-07-31 18:49:32 +00:00
Brad Fitzpatrick
2ae77376f7 all: link to https instead of http
The one in misc/makerelease/makerelease.go is particularly bad and
probably warrants rotating our keys.

I didn't update old weekly notes, and reverted some changes involving
test code for now, since we're late in the Go 1.5 freeze. Otherwise,
the rest are all auto-generated changes, and all manually reviewed.

Change-Id: Ia2753576ab5d64826a167d259f48a2f50508792d
Reviewed-on: https://go-review.googlesource.com/12048
Reviewed-by: Rob Pike <r@golang.org>
2015-07-11 14:36:33 +00:00
Aaron Jacobs
8628688304 Fix several out of date references to 4g/5g/6g/8g/9g.
Change-Id: Ifb8e4e13c7778a7c0113190051415e096f5db94f
Reviewed-on: https://go-review.googlesource.com/11390
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Andrew Gerrand <adg@golang.org>
2015-06-26 03:38:21 +00:00
Russ Cox
80ec711755 runtime: use type-based write barrier for remote stack write during chansend
A send on an unbuffered channel to a blocked receiver is the only
case in the runtime where one goroutine writes directly to the stack
of another. The garbage collector assumes that if a goroutine is
blocked, its stack contains no new pointers since the last time it ran.
The send on an unbuffered channel violates this, so it needs an
explicit write barrier. It has an explicit write barrier, but not one that
can handle a write to another stack. Use one that can (based on type bitmap
instead of heap bitmap).

To make this work, raise the limit for type bitmaps so that they are
used for all types up to 64 kB in size (256 bytes of bitmap).
(The runtime already imposes a limit of 64 kB for a channel element size.)

I have been unable to reproduce this problem in a simple test program.

Could help #11035.
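
For orientation only (this illustrates the scenario, not a reproduction of the
GC bug): a send on an unbuffered channel to a receiver that is already blocked
copies the value, here a pointer, straight into the receiving goroutine's
stack frame.

    package main

    func main() {
        ch := make(chan *int) // unbuffered, element type contains a pointer
        done := make(chan struct{})
        go func() {
            p := <-ch // if this receive blocks first, the sender writes into this frame
            _ = p
            close(done)
        }()
        x := 42
        ch <- &x
        <-done
    }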

Change-Id: I06ad994032d8cff3438c9b3eaa8d853915128af5
Reviewed-on: https://go-review.googlesource.com/10815
Reviewed-by: Austin Clements <austin@google.com>
2015-06-15 16:50:30 +00:00
Russ Cox
d36cc02795 reflect: make PtrTo(FuncOf(...)) not crash
Change-Id: Ie67e295bf327126dfdc75b73979fe33fbcb79ad9
Reviewed-on: https://go-review.googlesource.com/10150
Reviewed-by: Austin Clements <austin@google.com>
2015-05-16 00:51:05 +00:00
Russ Cox
512f75e8df runtime: replace GC programs with simpler encoding, faster decoder
Small types record the location of pointers in their memory layout
by using a simple bitmap. In Go 1.4 the bitmap held 4-bit entries,
and in Go 1.5 the bitmap holds 1-bit entries, but in both cases using
a bitmap for a large type containing arrays does not make sense:
if someone refers to the type [1<<28]*byte in a program in such
a way that the type information makes it into the binary, it would be
a waste of space to write a 128 MB (for 4-bit entries) or even 32 MB
(for 1-bit entries) bitmap full of 1s into the binary or even to keep
one in memory during the execution of the program.

For large types containing arrays, it is much more compact to describe
the locations of pointers using a notation that can express repetition
than to lay out a bitmap of pointers. Go 1.4 included such a notation,
called "GC programs", but it was complex, required recursion during
decoding, and was generally slow. Dmitriy measured the execution of
these programs writing directly to the heap bitmap as being 7x slower
than copying from a preunrolled 4-bit mask (and frankly that code was
not terribly fast either). For some tests, unrollgcprog1 was seen costing
as much as 3x more than the rest of malloc combined.

This CL introduces a different form for the GC programs. They use a
simple Lempel-Ziv-style encoding of the 1-bit pointer information,
in which the only operations are (1) emit the following n bits
and (2) repeat the last n bits c more times. This encoding can be
generated directly from the Go type information (using repetition
only for arrays or large runs of non-pointer data) and it can be decoded
very efficiently. In particular the decoding requires little state and
no recursion, so that the entire decoding can run without any memory
accesses other than the reads of the encoding and the writes of the
decoded form to the heap bitmap. For recursive types like arrays of
arrays of arrays, the inner instructions are only executed once, not
n times, so that large repetitions run at full speed. (In contrast, large
repetitions in the old programs repeated the individual bit-level layout
of the inner data over and over.) The result is as much as 25x faster
decoding compared to the old form.
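
A toy decoder (opcodes and framing invented here for illustration; the real
runtime format differs) showing why decoding needs almost no state: the only
operations are emitting literal bits and replaying the last n bits c more times.

    package main

    import "fmt"

    type inst struct {
        lit  []byte // literal 0/1 bits to emit; nil means "repeat"
        n, c int    // repeat: replay the last n emitted bits c more times
    }

    func decode(prog []inst) []byte {
        var out []byte
        for _, in := range prog {
            if in.lit != nil {
                out = append(out, in.lit...)
                continue
            }
            tail := append([]byte(nil), out[len(out)-in.n:]...)
            for i := 0; i < in.c; i++ {
                out = append(out, tail...)
            }
        }
        return out
    }

    func main() {
        // "10" followed by three more copies of the last two bits: 10101010.
        fmt.Println(decode([]inst{{lit: []byte{1, 0}}, {n: 2, c: 3}}))
    }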

Because the old decoder was so slow, Go 1.4 had three (or so) cases
for how to set the heap bitmap bits for an allocation of a given type:

(1) If the type had an even number of words up to 32 words, then
the 4-bit pointer mask for the type fit in no more than 16 bytes;
store the 4-bit pointer mask directly in the binary and copy from it.

(1b) If the type had an odd number of words up to 15 words, then
the 4-bit pointer mask for the type, doubled to end on a byte boundary,
fit in no more than 16 bytes; store that doubled mask directly in the
binary and copy from it.

(2) If the type had an even number of words up to 128 words,
or an odd number of words up to 63 words (again due to doubling),
then the 4-bit pointer mask would fit in a 64-byte unrolled mask.
Store a GC program in the binary, but leave space in the BSS for
the unrolled mask. Execute the GC program to construct the mask the
first time it is needed, and thereafter copy from the mask.

(3) Otherwise, store a GC program and execute it to write directly to
the heap bitmap each time an object of that type is allocated.
(This is the case that was 7x slower than the other two.)

Because the new pointer masks store 1-bit entries instead of 4-bit
entries and because using the decoder no longer carries a significant
overhead, after this CL (that is, for Go 1.5) there are only two cases:

(1) If the type is 128 words or less (no condition about odd or even),
store the 1-bit pointer mask directly in the binary and use it to
initialize the heap bitmap during malloc. (Implemented in CL 9702.)

(2) There is no case 2 anymore.

(3) Otherwise, store a GC program and execute it to write directly to
the heap bitmap each time an object of that type is allocated.

Executing the GC program directly into the heap bitmap (case (3) above)
was disabled for the Go 1.5 dev cycle, both to avoid needing to use
GC programs for typedmemmove and to avoid updating that code as
the heap bitmap format changed. Typedmemmove no longer uses this
type information; as of CL 9886 it uses the heap bitmap directly.
Now that the heap bitmap format is stable, we reintroduce GC programs
and their space savings.

Benchmarks for heapBitsSetType, before this CL vs this CL:

name                    old mean               new mean              delta
SetTypePtr              7.59ns × (0.99,1.02)   5.16ns × (1.00,1.00)  -32.05% (p=0.000)
SetTypePtr8             21.0ns × (0.98,1.05)   21.4ns × (1.00,1.00)     ~    (p=0.179)
SetTypePtr16            24.1ns × (0.99,1.01)   24.6ns × (1.00,1.00)   +2.41% (p=0.001)
SetTypePtr32            31.2ns × (0.99,1.01)   32.4ns × (0.99,1.02)   +3.72% (p=0.001)
SetTypePtr64            45.2ns × (1.00,1.00)   47.2ns × (1.00,1.00)   +4.42% (p=0.000)
SetTypePtr126           75.8ns × (0.99,1.01)   79.1ns × (1.00,1.00)   +4.25% (p=0.000)
SetTypePtr128           74.3ns × (0.99,1.01)   77.6ns × (1.00,1.01)   +4.55% (p=0.000)
SetTypePtrSlice          726ns × (1.00,1.01)    712ns × (1.00,1.00)   -1.95% (p=0.001)
SetTypeNode1            20.0ns × (0.99,1.01)   20.7ns × (1.00,1.00)   +3.71% (p=0.000)
SetTypeNode1Slice        112ns × (1.00,1.00)    113ns × (0.99,1.00)     ~    (p=0.070)
SetTypeNode8            23.9ns × (1.00,1.00)   24.7ns × (1.00,1.01)   +3.18% (p=0.000)
SetTypeNode8Slice        294ns × (0.99,1.02)    287ns × (0.99,1.01)   -2.38% (p=0.015)
SetTypeNode64           52.8ns × (0.99,1.03)   51.8ns × (0.99,1.01)     ~    (p=0.069)
SetTypeNode64Slice      1.13µs × (0.99,1.05)   1.14µs × (0.99,1.00)     ~    (p=0.767)
SetTypeNode64Dead       36.0ns × (1.00,1.01)   32.5ns × (0.99,1.00)   -9.67% (p=0.000)
SetTypeNode64DeadSlice  1.43µs × (0.99,1.01)   1.40µs × (1.00,1.00)   -2.39% (p=0.001)
SetTypeNode124          75.7ns × (1.00,1.01)   79.0ns × (1.00,1.00)   +4.44% (p=0.000)
SetTypeNode124Slice     1.94µs × (1.00,1.01)   2.04µs × (0.99,1.01)   +4.98% (p=0.000)
SetTypeNode126          75.4ns × (1.00,1.01)   77.7ns × (0.99,1.01)   +3.11% (p=0.000)
SetTypeNode126Slice     1.95µs × (0.99,1.01)   2.03µs × (1.00,1.00)   +3.74% (p=0.000)
SetTypeNode128          85.4ns × (0.99,1.01)  122.0ns × (1.00,1.00)  +42.89% (p=0.000)
SetTypeNode128Slice     2.20µs × (1.00,1.01)   2.36µs × (0.98,1.02)   +7.48% (p=0.001)
SetTypeNode130          83.3ns × (1.00,1.00)  123.0ns × (1.00,1.00)  +47.61% (p=0.000)
SetTypeNode130Slice     2.30µs × (0.99,1.01)   2.40µs × (0.98,1.01)   +4.37% (p=0.000)
SetTypeNode1024          498ns × (1.00,1.00)    537ns × (1.00,1.00)   +7.96% (p=0.000)
SetTypeNode1024Slice    15.5µs × (0.99,1.01)   17.8µs × (1.00,1.00)  +15.27% (p=0.000)

The above compares always using a cached pointer mask (and the
corresponding waste of memory) against using the programs directly.
Some slowdown is expected, in exchange for having a better general algorithm.
The GC programs kick in for SetTypeNode128, SetTypeNode130, SetTypeNode1024,
along with the slice variants of those.
It is possible that the cutoff of 128 words (bits) should be raised
in a followup CL, but even with this low cutoff the GC programs are
faster than Go 1.4's "fast path" non-GC program case.

Benchmarks for heapBitsSetType, Go 1.4 vs this CL:

name                    old mean              new mean              delta
SetTypePtr              6.89ns × (1.00,1.00)  5.17ns × (1.00,1.00)  -25.02% (p=0.000)
SetTypePtr8             25.8ns × (0.97,1.05)  21.5ns × (1.00,1.00)  -16.70% (p=0.000)
SetTypePtr16            39.8ns × (0.97,1.02)  24.7ns × (0.99,1.01)  -37.81% (p=0.000)
SetTypePtr32            68.8ns × (0.98,1.01)  32.2ns × (1.00,1.01)  -53.18% (p=0.000)
SetTypePtr64             130ns × (1.00,1.00)    47ns × (1.00,1.00)  -63.67% (p=0.000)
SetTypePtr126            241ns × (0.99,1.01)    79ns × (1.00,1.01)  -67.25% (p=0.000)
SetTypePtr128           2.07µs × (1.00,1.00)  0.08µs × (1.00,1.00)  -96.27% (p=0.000)
SetTypePtrSlice         1.05µs × (0.99,1.01)  0.72µs × (0.99,1.02)  -31.70% (p=0.000)
SetTypeNode1            16.0ns × (0.99,1.01)  20.8ns × (0.99,1.03)  +29.91% (p=0.000)
SetTypeNode1Slice        184ns × (0.99,1.01)   112ns × (0.99,1.01)  -39.26% (p=0.000)
SetTypeNode8            29.5ns × (0.97,1.02)  24.6ns × (1.00,1.00)  -16.50% (p=0.000)
SetTypeNode8Slice        624ns × (0.98,1.02)   285ns × (1.00,1.00)  -54.31% (p=0.000)
SetTypeNode64            135ns × (0.96,1.08)    52ns × (0.99,1.02)  -61.32% (p=0.000)
SetTypeNode64Slice      3.83µs × (1.00,1.00)  1.14µs × (0.99,1.01)  -70.16% (p=0.000)
SetTypeNode64Dead        134ns × (0.99,1.01)    32ns × (1.00,1.01)  -75.74% (p=0.000)
SetTypeNode64DeadSlice  3.83µs × (0.99,1.00)  1.40µs × (1.00,1.01)  -63.42% (p=0.000)
SetTypeNode124           240ns × (0.99,1.01)    79ns × (1.00,1.01)  -67.05% (p=0.000)
SetTypeNode124Slice     7.27µs × (1.00,1.00)  2.04µs × (1.00,1.00)  -71.95% (p=0.000)
SetTypeNode126          2.06µs × (0.99,1.01)  0.08µs × (0.99,1.01)  -96.23% (p=0.000)
SetTypeNode126Slice     64.4µs × (1.00,1.00)   2.0µs × (1.00,1.00)  -96.85% (p=0.000)
SetTypeNode128          2.09µs × (1.00,1.01)  0.12µs × (1.00,1.00)  -94.15% (p=0.000)
SetTypeNode128Slice     65.4µs × (1.00,1.00)   2.4µs × (0.99,1.03)  -96.39% (p=0.000)
SetTypeNode130          2.11µs × (1.00,1.00)  0.12µs × (1.00,1.00)  -94.18% (p=0.000)
SetTypeNode130Slice     66.3µs × (1.00,1.00)   2.4µs × (0.97,1.08)  -96.34% (p=0.000)
SetTypeNode1024         16.0µs × (1.00,1.01)   0.5µs × (1.00,1.00)  -96.65% (p=0.000)
SetTypeNode1024Slice     512µs × (1.00,1.00)    18µs × (0.98,1.04)  -96.45% (p=0.000)

SetTypeNode124 uses a 124 data + 2 ptr = 126-word allocation.
Both Go 1.4 and this CL are using pointer bitmaps for this case,
so that's an overall 3x speedup for using pointer bitmaps.

SetTypeNode128 uses a 128 data + 2 ptr = 130-word allocation.
Both Go 1.4 and this CL are running the GC program for this case,
so that's an overall 17x speedup when using GC programs (and
I've seen >20x on other systems).

Comparing Go 1.4's SetTypeNode124 (pointer bitmap) against
this CL's SetTypeNode128 (GC program), the slow path in the
code in this CL is 2x faster than the fast path in Go 1.4.

The Go 1 benchmarks are basically unaffected compared to just before this CL.

Go 1 benchmarks, before this CL vs this CL:

name                   old mean              new mean              delta
BinaryTree17            5.87s × (0.97,1.04)   5.91s × (0.96,1.04)    ~    (p=0.306)
Fannkuch11              4.38s × (1.00,1.00)   4.37s × (1.00,1.01)  -0.22% (p=0.006)
FmtFprintfEmpty        90.7ns × (0.97,1.10)  89.3ns × (0.96,1.09)    ~    (p=0.280)
FmtFprintfString        282ns × (0.98,1.04)   287ns × (0.98,1.07)  +1.72% (p=0.039)
FmtFprintfInt           269ns × (0.99,1.03)   282ns × (0.97,1.04)  +4.87% (p=0.000)
FmtFprintfIntInt        478ns × (0.99,1.02)   481ns × (0.99,1.02)  +0.61% (p=0.048)
FmtFprintfPrefixedInt   399ns × (0.98,1.03)   400ns × (0.98,1.05)    ~    (p=0.533)
FmtFprintfFloat         563ns × (0.99,1.01)   570ns × (1.00,1.01)  +1.37% (p=0.000)
FmtManyArgs            1.89µs × (0.99,1.01)  1.92µs × (0.99,1.02)  +1.88% (p=0.000)
GobDecode              15.2ms × (0.99,1.01)  15.2ms × (0.98,1.05)    ~    (p=0.609)
GobEncode              11.6ms × (0.98,1.03)  11.9ms × (0.98,1.04)  +2.17% (p=0.000)
Gzip                    648ms × (0.99,1.01)   648ms × (1.00,1.01)    ~    (p=0.835)
Gunzip                  142ms × (1.00,1.00)   143ms × (1.00,1.01)    ~    (p=0.169)
HTTPClientServer       90.5µs × (0.98,1.03)  91.5µs × (0.98,1.04)  +1.04% (p=0.045)
JSONEncode             31.5ms × (0.98,1.03)  31.4ms × (0.98,1.03)    ~    (p=0.549)
JSONDecode              111ms × (0.99,1.01)   107ms × (0.99,1.01)  -3.21% (p=0.000)
Mandelbrot200          6.01ms × (1.00,1.00)  6.01ms × (1.00,1.00)    ~    (p=0.878)
GoParse                6.54ms × (0.99,1.02)  6.61ms × (0.99,1.03)  +1.08% (p=0.004)
RegexpMatchEasy0_32     160ns × (1.00,1.01)   161ns × (1.00,1.00)  +0.40% (p=0.000)
RegexpMatchEasy0_1K     560ns × (0.99,1.01)   559ns × (0.99,1.01)    ~    (p=0.088)
RegexpMatchEasy1_32     138ns × (0.99,1.01)   138ns × (1.00,1.00)    ~    (p=0.380)
RegexpMatchEasy1_1K     877ns × (1.00,1.00)   878ns × (1.00,1.00)    ~    (p=0.157)
RegexpMatchMedium_32    251ns × (0.99,1.00)   251ns × (1.00,1.01)  +0.28% (p=0.021)
RegexpMatchMedium_1K   72.6µs × (1.00,1.00)  72.6µs × (1.00,1.00)    ~    (p=0.539)
RegexpMatchHard_32     3.84µs × (1.00,1.00)  3.84µs × (1.00,1.00)    ~    (p=0.378)
RegexpMatchHard_1K      117µs × (1.00,1.00)   117µs × (1.00,1.00)    ~    (p=0.067)
Revcomp                 904ms × (0.99,1.02)   904ms × (0.99,1.01)    ~    (p=0.943)
Template                125ms × (0.99,1.02)   127ms × (0.99,1.01)  +1.79% (p=0.000)
TimeParse               627ns × (0.99,1.01)   622ns × (0.99,1.01)  -0.88% (p=0.000)
TimeFormat              655ns × (0.99,1.02)   655ns × (0.99,1.02)    ~    (p=0.976)

For the record, Go 1 benchmarks, Go 1.4 vs this CL:

name                   old mean              new mean              delta
BinaryTree17            4.61s × (0.97,1.05)   5.91s × (0.98,1.03)  +28.35% (p=0.000)
Fannkuch11              4.40s × (0.99,1.03)   4.41s × (0.99,1.01)     ~    (p=0.212)
FmtFprintfEmpty         102ns × (0.99,1.01)    84ns × (0.99,1.02)  -18.38% (p=0.000)
FmtFprintfString        302ns × (0.98,1.01)   303ns × (0.99,1.02)     ~    (p=0.203)
FmtFprintfInt           313ns × (0.97,1.05)   270ns × (0.99,1.01)  -13.69% (p=0.000)
FmtFprintfIntInt        524ns × (0.98,1.02)   477ns × (0.99,1.00)   -8.87% (p=0.000)
FmtFprintfPrefixedInt   424ns × (0.98,1.02)   386ns × (0.99,1.01)   -8.96% (p=0.000)
FmtFprintfFloat         652ns × (0.98,1.02)   594ns × (0.97,1.05)   -8.97% (p=0.000)
FmtManyArgs            2.13µs × (0.99,1.02)  1.94µs × (0.99,1.01)   -8.92% (p=0.000)
GobDecode              17.1ms × (0.99,1.02)  14.9ms × (0.98,1.03)  -13.07% (p=0.000)
GobEncode              13.5ms × (0.98,1.03)  11.5ms × (0.98,1.03)  -15.25% (p=0.000)
Gzip                    656ms × (0.99,1.02)   647ms × (0.99,1.01)   -1.29% (p=0.000)
Gunzip                  143ms × (0.99,1.02)   144ms × (0.99,1.01)     ~    (p=0.204)
HTTPClientServer       88.2µs × (0.98,1.02)  90.8µs × (0.98,1.01)   +2.93% (p=0.000)
JSONEncode             32.2ms × (0.98,1.02)  30.9ms × (0.97,1.04)   -4.06% (p=0.001)
JSONDecode              121ms × (0.98,1.02)   110ms × (0.98,1.05)   -8.95% (p=0.000)
Mandelbrot200          6.06ms × (0.99,1.01)  6.11ms × (0.98,1.04)     ~    (p=0.184)
GoParse                6.76ms × (0.97,1.04)  6.58ms × (0.98,1.05)   -2.63% (p=0.003)
RegexpMatchEasy0_32     195ns × (1.00,1.01)   155ns × (0.99,1.01)  -20.43% (p=0.000)
RegexpMatchEasy0_1K     479ns × (0.98,1.03)   535ns × (0.99,1.02)  +11.59% (p=0.000)
RegexpMatchEasy1_32     169ns × (0.99,1.02)   131ns × (0.99,1.03)  -22.44% (p=0.000)
RegexpMatchEasy1_1K    1.53µs × (0.99,1.01)  0.87µs × (0.99,1.02)  -43.07% (p=0.000)
RegexpMatchMedium_32    334ns × (0.99,1.01)   242ns × (0.99,1.01)  -27.53% (p=0.000)
RegexpMatchMedium_1K    125µs × (1.00,1.01)    72µs × (0.99,1.03)  -42.53% (p=0.000)
RegexpMatchHard_32     6.03µs × (0.99,1.01)  3.79µs × (0.99,1.01)  -37.12% (p=0.000)
RegexpMatchHard_1K      189µs × (0.99,1.02)   115µs × (0.99,1.01)  -39.20% (p=0.000)
Revcomp                 935ms × (0.96,1.03)   926ms × (0.98,1.02)     ~    (p=0.083)
Template                146ms × (0.97,1.05)   119ms × (0.99,1.01)  -18.37% (p=0.000)
TimeParse               660ns × (0.99,1.01)   624ns × (0.99,1.02)   -5.43% (p=0.000)
TimeFormat              670ns × (0.98,1.02)   710ns × (1.00,1.01)   +5.97% (p=0.000)

This CL is a bit larger than I would like, but the compiler, linker, runtime,
and package reflect all need to be in sync about the format of these programs,
so there is no easy way to split this into independent changes (at least
while keeping the build working at each change).

Fixes #9625.
Fixes #10524.

Change-Id: I9e3e20d6097099d0f8532d1cb5b1af528804989a
Reviewed-on: https://go-review.googlesource.com/9888
Reviewed-by: Austin Clements <austin@google.com>
Run-TryBot: Russ Cox <rsc@golang.org>
2015-05-16 00:38:17 +00:00
Russ Cox
7e26a2d9a8 runtime: allocate map element zero values for reflect-created types on demand
Preallocating them in reflect means that
(1) if you say _ = PtrTo(ArrayOf(1000000000, reflect.TypeOf(byte(0)))), you just allocated 1GB of data
(2) if you say it again, that's *another* GB of data.

The only use of t.zero in the runtime is for map elements.
Delay the allocation until the creation of a map with that element type,
and share the zeros.

The one downside of the shared zero is that it's not garbage collected,
but it's also never written, so the OS should be able to handle it fairly
efficiently.
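
A minimal sketch of the idea, with assumed names (this is not the runtime's
code): one shared, never-written buffer that grows to the largest zero value
requested so far.

    package zerodemo

    import "unsafe"

    // zeroBuf is only ever read; a map miss for any element type can hand
    // back a pointer into it instead of a per-type preallocated zero value.
    var zeroBuf = make([]byte, 1024)

    func zeroPtr(size uintptr) unsafe.Pointer {
        if size > uintptr(len(zeroBuf)) {
            zeroBuf = make([]byte, size) // grow on demand, as the CL does at map creation
        }
        return unsafe.Pointer(&zeroBuf[0])
    }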

Change-Id: I56b098a091abf3ac0945de28ebef9a6c08e76614
Reviewed-on: https://go-review.googlesource.com/10111
Reviewed-by: Keith Randall <khr@golang.org>
2015-05-15 13:56:40 +00:00
Russ Cox
6d8a147bef runtime: use 1-bit pointer bitmaps in type representation
The type information in reflect.Type and the GC programs is now
1 bit per word, down from 2 bits.

The in-memory unrolled type bitmap representation are now
1 bit per word, down from 4 bits.

The conversion from the unrolled (now 1-bit) bitmap to the
heap bitmap (still 4-bit) is not optimized. A followup CL will
work on that, after the heap bitmap has been converted to 2-bit.

The typeDead optimization, in which a special value denotes
that there are no more pointers anywhere in the object, is lost
in this CL. A followup CL will bring it back in the final form of
heapBitsSetType.

Change-Id: If61e67950c16a293b0b516a6fd9a1c755b6d5549
Reviewed-on: https://go-review.googlesource.com/9702
Reviewed-by: Austin Clements <austin@google.com>
2015-05-11 14:43:33 +00:00
Russ Cox
ceefebd795 runtime: rename ptrsize to ptrdata
I forgot there is already a ptrSize constant.
Rename field to avoid some confusion.

Change-Id: I098fdcc8afc947d6c02c41c6e6de24624cc1c8ff
Reviewed-on: https://go-review.googlesource.com/9700
Reviewed-by: Austin Clements <austin@google.com>
2015-05-05 19:27:47 +00:00
Austin Clements
98a9d36837 runtime: add pointer size to type structure
This adds a field to the runtime type structure that records the size
of the prefix of objects of that type containing pointers. Any data
after this offset is scalar data.

This is necessary for shrinking the type bitmaps to 1 bit and will
help the garbage collector efficiently estimate the amount of heap
that needs to be scanned.
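
A hedged illustration of what the new field records (numbers assume a 64-bit
target): for a type whose only pointer sits in the first word, the scannable
prefix is one word, no matter how much scalar data follows.

    package main

    import (
        "fmt"
        "unsafe"
    )

    type T struct {
        p *int
        x [4]int64 // scalar tail: never needs scanning
    }

    func main() {
        fmt.Println(unsafe.Sizeof(T{}))     // 40: total size
        fmt.Println(unsafe.Offsetof(T{}.x)) // 8: everything from here on is pointer-free
    }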

Change-Id: I1318d79e6360dca0ac980245016c562e61f52ff5
Reviewed-on: https://go-review.googlesource.com/9691
Reviewed-by: Russ Cox <rsc@golang.org>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-05-04 20:17:48 +00:00
Sebastien Binet
918fdae348 reflect: implement ArrayOf
This change exposes reflect.ArrayOf to create new reflect.Type array
types at runtime, when given a reflect.Type element.

- reflect: implement ArrayOf
- reflect: tests for ArrayOf
- runtime: document that typeAlg is used by reflect and must be kept in
  synchronized

Fixes #5996.
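
A short usage sketch of the new API:

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        t := reflect.ArrayOf(8, reflect.TypeOf(uint32(0)))
        fmt.Println(t, t.Len(), t.Size()) // [8]uint32 8 32

        v := reflect.New(t).Elem() // an addressable, settable [8]uint32
        v.Index(3).SetUint(7)
        fmt.Println(v.Index(3).Uint()) // 7
    }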

Change-Id: I5d07213364ca915c25612deea390507c19461758
Reviewed-on: https://go-review.googlesource.com/4111
Reviewed-by: Keith Randall <khr@golang.org>
2015-04-21 15:21:09 +00:00
Michael Hudson-Doyle
2f0828ef7c reflect, cmd/internal/gc: look for pointer types by string before synthesizing
The ptrto field of the type data cannot be relied on when dynamically linking: a
type T may be defined in a module that makes no use of pointers to that type,
but another module can contain a package that imports the first one and does use
*T pointers.  The second module will end up defining type data for *T and a
type.*T symbol pointing at it. It's important that calling .PtrTo() on the
reflect.Type for T returns this type data and not some synthesized object, so we
need reflect to be able to find it!

Fortunately, the reflect package already has a mechanism for doing this sort of
thing: ChanOf/MapOf/etc look for pre-existing type data by name.  So this change
just extends PtrTo() to consult this too, and changes the compiler to include
pointer types in the data consulted when compiling for dynamic linking.
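
The invariant this protects, in user-visible terms (a sketch, not the CL's
test): PtrTo must hand back the very same type descriptor the compiler emitted
for *T.

    package main

    import (
        "fmt"
        "reflect"
    )

    type T struct{ X int }

    func main() {
        t := reflect.TypeOf(T{})
        // Must be true even when T and *T are used from different modules
        // under -linkshared, which is why reflect consults the type data
        // registered by name before synthesizing a new *T.
        fmt.Println(reflect.PtrTo(t) == reflect.TypeOf(&T{})) // true
    }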

Change-Id: I3773c066fd0679a62e9fc52a84bf64f1d67662b7
Reviewed-on: https://go-review.googlesource.com/8232
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
2015-04-16 16:11:07 +00:00
Dave Day
e1c1fa2919 reflect: add FuncOf function
This also involves adding functions to typelinks along with a minor
change to ensure they are sorted correctly.
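
A short usage sketch of the new API, combined with MakeFunc:

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        t := reflect.FuncOf(
            []reflect.Type{reflect.TypeOf("")}, // in: (string)
            []reflect.Type{reflect.TypeOf(0)},  // out: (int)
            false,                              // not variadic
        )
        fmt.Println(t) // func(string) int

        f := reflect.MakeFunc(t, func(args []reflect.Value) []reflect.Value {
            return []reflect.Value{reflect.ValueOf(len(args[0].String()))}
        })
        fmt.Println(f.Call([]reflect.Value{reflect.ValueOf("hello")})[0].Int()) // 5
    }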

Change-Id: I054a79b6498a634cbccce17579f52c299733c2cf
Reviewed-on: https://go-review.googlesource.com/1996
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-04-16 01:38:50 +00:00
Matthew Dempsky
eced964c2d reflect: document reflect.TypeOf((*Foo)(nil)).Elem() idiom
See also golang-dev discussion:
https://groups.google.com/d/msg/golang-dev/Nk9gnTINlTg/SV8rBt-2__kJ
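
The documented idiom, for reference: it is the standard way to obtain the
reflect.Type of an interface type, since passing an interface value to TypeOf
reports the dynamic type instead.

    package main

    import (
        "fmt"
        "io"
        "os"
        "reflect"
    )

    func main() {
        writerType := reflect.TypeOf((*io.Writer)(nil)).Elem()
        fmt.Println(writerType)                                        // io.Writer
        fmt.Println(reflect.TypeOf(os.Stdout).Implements(writerType)) // true
    }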

Change-Id: I49edd98d73400c1757b6085dec86752de569c01a
Reviewed-on: https://go-review.googlesource.com/8923
Reviewed-by: Rob Pike <r@golang.org>
2015-04-14 17:19:36 +00:00
Michael Hudson-Doyle
e1366f94ee reflect, runtime: check equality, not identity, for method names
When dynamically linking Go code, it is no longer safe to assume that
strings that end up in method names are identical if they are equal.

The performance impact seems to be noise:

benchmark                    old ns/op     new ns/op     delta
BenchmarkAssertI2E2          13.3          13.1          -1.50%
BenchmarkAssertE2I           23.5          23.2          -1.28%
BenchmarkAssertE2E2Blank     0.83          0.82          -1.20%
BenchmarkConvT2ISmall        60.7          60.1          -0.99%
BenchmarkAssertI2T           10.2          10.1          -0.98%
BenchmarkAssertE2T           10.2          10.3          +0.98%
BenchmarkConvT2ESmall        56.7          57.2          +0.88%
BenchmarkConvT2ILarge        59.4          58.9          -0.84%
BenchmarkConvI2E             13.0          12.9          -0.77%
BenchmarkAssertI2E           13.4          13.3          -0.75%
BenchmarkConvT2IUintptr      57.9          58.3          +0.69%
BenchmarkConvT2ELarge        55.9          55.6          -0.54%
BenchmarkAssertI2I           23.8          23.7          -0.42%
BenchmarkConvT2EUintptr      55.4          55.5          +0.18%
BenchmarkAssertE2E           6.12          6.11          -0.16%
BenchmarkAssertE2E2          14.4          14.4          +0.00%
BenchmarkAssertE2T2          10.0          10.0          +0.00%
BenchmarkAssertE2T2Blank     0.83          0.83          +0.00%
BenchmarkAssertE2TLarge      10.7          10.7          +0.00%
BenchmarkAssertI2E2Blank     0.83          0.83          +0.00%
BenchmarkConvI2I             23.4          23.4          +0.00%

Change-Id: I0b3dfc314215a4d4e09eec6b42c1e3ebce33eb56
Reviewed-on: https://go-review.googlesource.com/8239
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-04-11 17:35:44 +00:00
Michael Hudson-Doyle
fae4a128cb runtime, reflect: support multiple moduledata objects
This changes all the places that consult the moduledata to consult a
linked list of moduledata objects, as will be necessary for
-linkshared to work.

Obviously, as there is as yet no way of adding moduledata objects to
this list, all this change achieves right now is wasting a few
instructions here and there.

Change-Id: I397af7f60d0849b76aaccedf72238fe664867051
Reviewed-on: https://go-review.googlesource.com/8231
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2015-04-10 04:51:42 +00:00
Josh Bleecher Snyder
2adc4e8927 all: use "reports whether" in place of "returns true if(f)"
Comment changes only.

Change-Id: I56848814564c4aa0988b451df18bebdfc88d6d94
Reviewed-on: https://go-review.googlesource.com/7721
Reviewed-by: Rob Pike <r@golang.org>
2015-03-18 15:14:06 +00:00
Keith Randall
cd5b144d98 runtime,reflect,cmd/internal/gc: Fix comments referring to .c/.h files
Everything has moved to Go, but comments still refer to .c/.h files.
Fix all of those up, at least for these three directories.

Fixes #10138

Change-Id: Ie5efe89b247841e0b3f82aac5256b2c606ef67dc
Reviewed-on: https://go-review.googlesource.com/7431
Reviewed-by: Russ Cox <rsc@golang.org>
2015-03-11 20:19:43 +00:00
Dmitry Vyukov
52dadc1f31 cmd/gc: fix noscan maps
Change 85e7bee introduced a bug:
it marks map buckets as noscan when key and val do not contain pointers.
However, buckets with large/outline key or val do contain pointers.

This change takes key/val size into consideration when
marking buckets as noscan.
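
The case the fix covers, as a sketch (the exact inline-size limit is an
implementation detail and is only assumed here): the key type contains no
pointers, but it is too large to be stored inline, so the bucket holds a
pointer to an out-of-line copy and must still be scanned.

    package main

    type bigKey [200]byte // pointer-free, but larger than the bucket's inline key slot

    func main() {
        m := map[bigKey]int{}
        m[bigKey{1}] = 1
        _ = m
    }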

Change-Id: I7172a0df482657be39faa59e2579dd9f209cb54d
Reviewed-on: https://go-review.googlesource.com/4901
Reviewed-by: Keith Randall <khr@golang.org>
2015-02-15 08:52:14 +00:00
Nigel Tao
6ea3adc3ba reflect: for struct tags, reject control chars (including tabs) in keys,
and empty keys. Also reject malformed (quoted) values.

See also https://go-review.googlesource.com/3952

Change-Id: Ice6de33b09f9904b28e410a680a90aa6c8c76fed
Reviewed-on: https://go-review.googlesource.com/3953
Reviewed-by: Rob Pike <r@golang.org>
2015-02-06 02:27:27 +00:00
Dmitry Vyukov
67f8a81316 reflect: cache call frames
Call frame allocations can account for a significant portion
of all allocations in a program if the call is executed
in an inner loop (e.g. to process every line in a log).
On the other hand, the allocation is easy to remove
using sync.Pool since the allocation is strictly scoped.
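
The shape of the optimization, sketched outside reflect (names here are
illustrative, not reflect's internals): because the frame's lifetime is
bounded by the call, it can be taken from and returned to a sync.Pool.

    package callframe

    import "sync"

    var framePool = sync.Pool{
        New: func() interface{} { return new([512]byte) },
    }

    // withFrame lends a pooled buffer to f for the duration of the call.
    func withFrame(f func(frame *[512]byte)) {
        frame := framePool.Get().(*[512]byte)
        defer framePool.Put(frame)
        f(frame)
    }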

benchmark           old ns/op     new ns/op     delta
BenchmarkCall       634           338           -46.69%
BenchmarkCall-4     496           167           -66.33%

benchmark           old allocs     new allocs     delta
BenchmarkCall       1              0              -100.00%
BenchmarkCall-4     1              0              -100.00%

Update #7818

Change-Id: Icf60cce0a9be82e6171f0c0bd80dee2393db54a7
Reviewed-on: https://go-review.googlesource.com/1954
Reviewed-by: Keith Randall <khr@golang.org>
2015-01-28 08:40:26 +00:00
Dmitry Vyukov
85e7bee19f runtime: do not scan maps when k/v do not contain pointers
Currently we scan maps even if k/v does not contain pointers.
This is required because overflow buckets are hanging off the main table.
This change introduces a separate array that contains pointers to all
overflow buckets and keeps them alive. Buckets themselves are marked
as containing no pointers and are not scanned by GC (if k/v does not
contain pointers).

This brings maps in line with slices and chans -- GC does not scan
their contents if elements do not contain pointers.

Currently scanning of a map[int]int with 2e8 entries (~8GB heap)
takes ~8 seconds. With this change scanning takes negligible time.

Update #9477.

Change-Id: Id8a04066a53d2f743474cad406afb9f30f00eaae
Reviewed-on: https://go-review.googlesource.com/3288
Reviewed-by: Keith Randall <khr@golang.org>
2015-01-27 17:47:49 +00:00
Keith Randall
6f07ac2f28 cmd/gc: pad structs which end in zero-sized fields
For a non-zero-sized struct with a final zero-sized field,
add a byte to the size (before rounding to alignment).  This
change ensures that taking the address of the zero-sized field
will not incorrectly leak the following object in memory.

reflect.funcLayout also needs this treatment.

Fixes #9401
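
What the padding buys, as a small demonstration (sizes assume a 64-bit target):

    package main

    import (
        "fmt"
        "unsafe"
    )

    type T struct {
        a int64
        b struct{} // zero-sized final field
    }

    func main() {
        var t T
        // Without the extra byte, &t.b would equal &t plus Sizeof(t) and could
        // point at the next object in memory, keeping it alive by accident.
        fmt.Println(unsafe.Offsetof(t.b), unsafe.Sizeof(t)) // 8 16
    }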

Change-Id: I1dc503dc5af4ca22c8f8c048fb7b4541cc957e0f
Reviewed-on: https://go-review.googlesource.com/2452
Reviewed-by: Russ Cox <rsc@golang.org>
2015-01-08 21:05:10 +00:00
Michael Fraenkel
48d63035ce reflect: set dir when creating a channel via ChanOf
Fixes #9135

Change-Id: I4d0e4eb52a3d64262f107eb7eae4096a6e47ac08
Reviewed-on: https://go-review.googlesource.com/2238
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2015-01-04 19:42:14 +00:00
Keith Randall
b1f29b2d44 runtime: get rid of goalg, no longer needed
The goalg function was a holdover from when we had algorithm
tables in both C and Go.  It is no longer needed.

Change-Id: Ia0c1af35bef3497a899f22084a1a7b42daae72a0
Reviewed-on: https://go-review.googlesource.com/2099
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2014-12-28 18:42:39 +00:00
Keith Randall
d11f411181 reflect: add kindNoPointers if a function layout has no pointers.
malloc checks kindNoPointers and if it is not set and the object
is one pointer in size, it assumes it contains a pointer.  So we
must set kindNoPointers correctly; it isn't just a hint.

Fixes #9425

Change-Id: Ia43da23cc3298d6e3d6dbdf66d32e9678f0aedcf
Reviewed-on: https://go-review.googlesource.com/2055
Reviewed-by: Russ Cox <rsc@golang.org>
2014-12-23 03:18:29 +00:00
Keith Randall
fbc56cf050 runtime: hashmap: move overflow pointer to end of bucket
Pointers to zero-sized values may end up pointing to the next
object in memory, and possibly off the end of a span.  This
can cause memory leaks and/or confuse the garbage collector.

By putting the overflow pointer at the end of the bucket, we
make sure that pointers to any zero-sized keys or values don't
accidentally point to the next object in memory.

fixes #9384

Change-Id: I5d434df176984cb0210b4d0195dd106d6eb28f73
Reviewed-on: https://go-review.googlesource.com/1869
Reviewed-by: Russ Cox <rsc@golang.org>
2014-12-22 22:25:48 +00:00
Brad Fitzpatrick
895e48cac5 reflect: remove a double negative, use the rtype.pointers method for clarity
Change-Id: Ia24e9f0da1622cededa17b2c54ff9e4bb80cf946
Reviewed-on: https://go-review.googlesource.com/1633
Reviewed-by: Minux Ma <minux@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2014-12-16 21:17:07 +00:00
Keith Randall
df1739c77d runtime: if key type is reflexive, don't call equal(k, k)
Most types are reflexive (k == k for all k of type t), so don't
bother calling equal(k, k) when the key type is reflexive.
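
The classic non-reflexive case, for context: NaN != NaN, which is why float
keys still need the equal(k, k) check that reflexive key types can now skip.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        m := map[float64]int{}
        nan := math.NaN()
        m[nan] = 1
        m[nan] = 2 // not an update: the NaN key never compares equal to the stored one
        fmt.Println(len(m)) // 2
    }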

Change-Id: Ia716b4198b8b298687843b94b878dbc5e8fc2c65
Reviewed-on: https://go-review.googlesource.com/1480
Reviewed-by: Russ Cox <rsc@golang.org>
2014-12-15 21:43:49 +00:00
Russ Cox
829b286f2c [dev.cc] all: merge default (8d42099cdc23) into dev.cc
TBR=austin
CC=golang-codereviews
https://golang.org/cl/178700044
2014-12-05 11:18:10 -05:00
Keith Randall
7c1e33033d reflect: Fix reflect.funcLayout. The GC bitmap has two bits per
pointer, not one.

Fixes #9179

LGTM=iant, rsc
R=golang-codereviews, iant, rsc
CC=golang-codereviews
https://golang.org/cl/182160043
2014-12-01 07:52:09 -08:00
Russ Cox
33e910296e [dev.cc] reflect: interfaces contain only pointers
[This CL is part of the removal of C code from package runtime.
See golang.org/s/dev.cc for an overview.]

Adjustments for changes made in CL 169360043.
This change is already present in the dev.garbage branch.

LGTM=r
R=r
CC=austin, golang-codereviews, iant, khr
https://golang.org/cl/167520044
2014-11-11 01:23:01 -05:00
Russ Cox
0d81b72e1b reflect: a few microoptimizations
Replace i < 0 || i >= x with uint(i) >= uint(x).
Shorten a few other code sequences.
Move the kind bits to the bottom of the flag word, to avoid shifts.
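
The first of those rewrites, spelled out (a generic sketch, not reflect's
code): a single unsigned comparison covers both the negative and the too-large
case, because a negative i wraps around to a huge uint value.

    package main

    import "fmt"

    func get(xs []int, i int) (int, bool) {
        if uint(i) >= uint(len(xs)) { // replaces i < 0 || i >= len(xs)
            return 0, false
        }
        return xs[i], true
    }

    func main() {
        fmt.Println(get([]int{10, 20, 30}, -1)) // 0 false
        fmt.Println(get([]int{10, 20, 30}, 2))  // 30 true
    }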

LGTM=r
R=r, bradfitz
CC=golang-codereviews
https://golang.org/cl/159020043
2014-10-17 12:54:31 -04:00
Damien Neil
4e1d196543 reflect: fix struct size calculation to include terminal padding
LGTM=r
R=r
CC=golang-codereviews
https://golang.org/cl/160920045
2014-10-16 13:58:32 -07:00
Russ Cox
a1616d4a32 reflect: shorten value to 3 words
scalar is no longer needed, now that
interfaces always hold pointers.

Comparing best of 5 with TurboBoost turned off,
on a 2012 Retina MacBook Pro Core i5.
Still not completely confident in these numbers,
but the gob and template improvements seem real.

benchmark                       old ns/op   new ns/op   delta
BenchmarkBinaryTree17           3819892491  3803008185  -0.44%
BenchmarkFannkuch11             3623876405  3611776426  -0.33%
BenchmarkFmtFprintfEmpty        119         118         -0.84%
BenchmarkFmtFprintfString       294         292         -0.68%
BenchmarkFmtFprintfInt          310         304         -1.94%
BenchmarkFmtFprintfIntInt       513         507         -1.17%
BenchmarkFmtFprintfPrefixedInt  427         426         -0.23%
BenchmarkFmtFprintfFloat        562         554         -1.42%
BenchmarkFmtManyArgs            1873        1832        -2.19%
BenchmarkGobDecode              15824504    14746565    -6.81%
BenchmarkGobEncode              14347378    14208743    -0.97%
BenchmarkGzip                   537229271   537973492   +0.14%
BenchmarkGunzip                 134996775   135406149   +0.30%
BenchmarkHTTPClientServer       119065      116937      -1.79%
BenchmarkJSONEncode             29134359    28928099    -0.71%
BenchmarkJSONDecode             106867289   105770161   -1.03%
BenchmarkMandelbrot200          5798475     5791433     -0.12%
BenchmarkGoParse                5299169     5379201     +1.51%
BenchmarkRegexpMatchEasy0_32    195         195         +0.00%
BenchmarkRegexpMatchEasy0_1K    477         477         +0.00%
BenchmarkRegexpMatchEasy1_32    170         170         +0.00%
BenchmarkRegexpMatchEasy1_1K    1412        1397        -1.06%
BenchmarkRegexpMatchMedium_32   336         337         +0.30%
BenchmarkRegexpMatchMedium_1K   109025      108977      -0.04%
BenchmarkRegexpMatchHard_32     5854        5856        +0.03%
BenchmarkRegexpMatchHard_1K     184914      184748      -0.09%
BenchmarkRevcomp                829233526   836598734   +0.89%
BenchmarkTemplate               142055312   137016166   -3.55%
BenchmarkTimeParse              598         597         -0.17%
BenchmarkTimeFormat             564         568         +0.71%

Fixes #7425.

LGTM=r
R=golang-codereviews, r
CC=golang-codereviews, iant, khr
https://golang.org/cl/158890043
2014-10-15 14:24:18 -04:00
Ian Lance Taylor
3cf9acccae reflect: generate unrolled GC bitmask directly
The code for a generated type is already generating an
unrolled GC bitmask. Rather than unrolling the source
type bitmasks and copying them, just generate the required
bitmask directly.  Don't mark it as an unrolled GC program,
since there is no need to do so.

Fixes #8917.

LGTM=rsc
R=dvyukov, rsc
CC=golang-codereviews
https://golang.org/cl/156930044
2014-10-13 10:01:34 -07:00
Russ Cox
18172c42ff runtime: remove type-punning for Type.gc[0], gc[1]
Depending on flags&KindGCProg,
gc[0] and gc[1] are either pointers or inlined bitmap bits.
That's not compatible with a precise garbage collector:
it needs to be always pointers or never pointers.

Change the inlined bitmap case to store a pointer to an
out-of-line bitmap in gc[0]. The out-of-line bitmaps are
dedup'ed, so that for example all pointer types share the
same out-of-line bitmap.

Fixes #8864.

LGTM=r
R=golang-codereviews, dvyukov, r
CC=golang-codereviews, iant, khr, rlh
https://golang.org/cl/155820043
2014-10-07 11:06:51 -04:00
Russ Cox
a325f4f2b3 reflect: add Type.Comparable
Like most of the Type methods, the definition of Comparable
is what the Go spec says it is.

Fixes #7911.
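
A quick tour of the new method, mirroring the spec's comparability rules:

    package main

    import (
        "fmt"
        "reflect"
    )

    func main() {
        fmt.Println(reflect.TypeOf(0).Comparable())                 // true
        fmt.Println(reflect.TypeOf([2]string{}).Comparable())       // true: array of comparable elements
        fmt.Println(reflect.TypeOf([]int(nil)).Comparable())        // false: slices never compare
        fmt.Println(reflect.TypeOf(map[int]int(nil)).Comparable())  // false
        fmt.Println(reflect.TypeOf(func() {}).Comparable())         // false
    }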

LGTM=gri
R=gri, r
CC=golang-codereviews
https://golang.org/cl/144020043
2014-09-16 17:40:10 -04:00