This optimization is not sound if A, B, or C
might overlap with each other.
Thanks to Michael Munday for pointing this
out during the review of CL 143479.
This reduces the number of times this optimization
triggers during make.bash from 386 to 74.
This is unfortunate, but I don't see an obvious way around it,
short of souping up the disjointness analysis.
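For context, the optimization being restricted is the double-Move
collapse from CL 114797; a simplified sketch (not the exact rule text):
(Move [n] A B (Move [n] B C mem)) && A, B, C pairwise disjoint -> (Move [n] A C mem)
Without the disjointness condition, an overlapping pair can leave the
collapsed copy reading bytes that the original two-step sequence would
already have overwritten (or vice versa).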
name old object-bytes new object-bytes delta
Template 507kB ± 0% 507kB ± 0% +0.13% (p=0.008 n=5+5)
Unicode 225kB ± 0% 225kB ± 0% ~ (all equal)
GoTypes 1.85MB ± 0% 1.85MB ± 0% +0.02% (p=0.008 n=5+5)
Flate 328kB ± 0% 328kB ± 0% ~ (all equal)
GoParser 402kB ± 0% 402kB ± 0% ~ (all equal)
Reflect 1.41MB ± 0% 1.41MB ± 0% ~ (all equal)
Tar 457kB ± 0% 458kB ± 0% +0.20% (p=0.008 n=5+5)
XML 600kB ± 0% 601kB ± 0% +0.03% (p=0.008 n=5+5)
Change-Id: Ida408cb627145ba9faf473a78606f050c2f3f51c
Reviewed-on: https://go-review.googlesource.com/c/145208
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Michael Munday <mike.munday@ibm.com>
During development and debugging, I often want to
write noteRule(fmt.Sprintf(...)), and end up
manually adding the import to the generated code.
Let's just make it always available instead.
Change-Id: I1e2d47c98ba056e1b5da42e35fb6ad26f1d9cc3d
Reviewed-on: https://go-review.googlesource.com/c/145207
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
The goal of this change is to move work from walk to SSA,
and simplify things along the way.
This is hard to accomplish cleanly with small incremental changes,
so this large commit message aims to provide a roadmap to the diff.
High level description:
Prior to this change, walk was responsible for constructing (most of) the stack for function calls.
ascompatte gathered variadic arguments into a slice.
It also rewrote n.List from a list of arguments to a list of assignments to stack slots.
ascompatte was called multiple times to handle the receiver in a method call.
reorder1 then introduced temporaries into n.List as needed to avoid smashing the stack.
adjustargs then made extra stack space for go/defer args as needed.
Node to SSA construction evaluated all the statements in n.List,
and issued the function call, assuming that the stack was correctly constructed.
Intrinsic calls had to dig around inside n.List to extract the arguments,
since intrinsics don't use the stack to make function calls.
This change moves stack construction to the SSA construction phase.
ascompatte, now called walkParams, does all the work that ascompatte and reorder1 did.
It handles variadic arguments, inserts the method receiver if needed, and allocates temporaries.
It does not, however, make any assignments to stack slots.
Instead, it moves the function arguments to n.Rlist, leaving assignments to temporaries in n.List.
(It would be better to use Ninit instead of List; future work.)
During SSA construction, after doing all the temporary assignments in n.List,
the function arguments are assigned to stack slots by
constructing the appropriate SSA Value, using (*state).storeArg.
SSA construction also now handles adjustments for go/defer args.
This change also simplifies intrinsic calls, since we no longer need to undo walk's work.
Along the way, we simplify nodarg by pushing the fp==1 case to its callers, where it fits nicely.
Generated code differences:
The old approach applied a couple of optimizations along the way:
f(g()) was rewritten to do a block copy of function results to
function arguments, and reorder1 avoided introducing the final
"save the stack" temporary in n.List.
The f(g()) block copy optimization never actually triggered;
the order pass rewrote away g(), so that optimization has been removed.
SSA optimizations mostly obviated the need for reorder1's optimization of avoiding the final temporary.
The exception was when the temporary's type was not SSA-able;
in that case, we got a Move into an autotmp and then an immediate Move onto the stack,
with the autotmp never read or used again.
This change introduces a new rewrite rule to detect such pointless double Moves
and collapse them into a single Move.
This is actually more powerful than the original optimization,
since the original optimization relied on the imprecise Node.HasCall calculation.
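A minimal Go sketch of that shape (it assumes an [8]int is big enough
that the compiler treats it as not SSA-able, so its copies are Move ops):
package p

type T [8]int

//go:noinline
func g() T { return T{} }

//go:noinline
func f(T) {}

func h() {
        // Before the new rule: g's result was Moved into an autotmp and then
        // immediately Moved into f's argument slot, with the autotmp dead.
        // The rule collapses the two Moves into one.
        f(g())
}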
The other significant difference in the generated code is that the stack is now constructed
completely in SP-offset order. Prior to this change, the stack was constructed somewhat
haphazardly: first the final argument that Node.HasCall deemed to require a temporary,
then other arguments, then the method receiver, then the defer/go args.
SP-offset is probably a good default order. See future work.
There are a few minor object file size changes as a result of this change.
I investigated some regressions in early versions of this change.
One regression (in archive/tar) was the addition of a single CMPQ instruction,
which would be eliminated were this TODO from flagalloc to be done:
// TODO: Remove original instructions if they are never used.
One regression (in text/template) was an ADDQconstmodify that is now
a regular MOVQLoad+ADDQconst+MOVQStore, due to an unlucky change
in the order in which arguments are written. The argument write
order can also now be luckier, so this appears to be a wash.
All in all, though there will be minor winners and losers,
this change appears to be performance neutral.
Future work:
Move loading the result of function calls to SSA construction; eliminate OINDREGSP.
Consider pushing stack construction deeper into SSA world, perhaps in an arch-specific pass.
Among other benefits, this would make it easier to transition to a new calling convention.
This would require rethinking the handling of stack conflicts and is non-trivial.
Figure out some clean way to indicate that stack construction Stores/Moves
do not alias each other, so that subsequent passes may do things like
CSE+tighten shared stack setup, do DSE using non-first Stores, etc.
This would allow us to eliminate the minor text/template regression.
Possibly make assignments to stack slots not treated as statements by DWARF.
Compiler benchmarks:
name old time/op new time/op delta
Template 182ms ± 2% 179ms ± 2% -1.69% (p=0.000 n=47+48)
Unicode 86.3ms ± 5% 85.1ms ± 4% -1.36% (p=0.001 n=50+50)
GoTypes 646ms ± 1% 642ms ± 1% -0.63% (p=0.000 n=49+48)
Compiler 2.89s ± 1% 2.86s ± 2% -1.36% (p=0.000 n=48+50)
SSA 8.47s ± 1% 8.37s ± 2% -1.22% (p=0.000 n=47+50)
Flate 122ms ± 2% 121ms ± 2% -0.66% (p=0.000 n=47+45)
GoParser 147ms ± 2% 146ms ± 2% -0.53% (p=0.006 n=46+49)
Reflect 406ms ± 2% 403ms ± 2% -0.76% (p=0.000 n=48+43)
Tar 162ms ± 3% 162ms ± 4% ~ (p=0.191 n=46+50)
XML 223ms ± 2% 222ms ± 2% -0.37% (p=0.031 n=45+49)
[Geo mean] 382ms 378ms -0.89%
name old user-time/op new user-time/op delta
Template 219ms ± 3% 216ms ± 3% -1.56% (p=0.000 n=50+48)
Unicode 109ms ± 6% 109ms ± 5% ~ (p=0.190 n=50+49)
GoTypes 836ms ± 2% 828ms ± 2% -0.96% (p=0.000 n=49+48)
Compiler 3.87s ± 2% 3.80s ± 1% -1.81% (p=0.000 n=49+46)
SSA 12.0s ± 1% 11.8s ± 1% -2.01% (p=0.000 n=48+50)
Flate 142ms ± 3% 141ms ± 3% -0.85% (p=0.003 n=50+48)
GoParser 178ms ± 4% 175ms ± 4% -1.66% (p=0.000 n=48+46)
Reflect 520ms ± 2% 512ms ± 2% -1.44% (p=0.000 n=45+48)
Tar 200ms ± 3% 198ms ± 4% -0.61% (p=0.037 n=47+50)
XML 277ms ± 3% 275ms ± 3% -0.85% (p=0.000 n=49+48)
[Geo mean] 482ms 476ms -1.23%
name old alloc/op new alloc/op delta
Template 36.1MB ± 0% 35.3MB ± 0% -2.18% (p=0.008 n=5+5)
Unicode 29.8MB ± 0% 29.3MB ± 0% -1.58% (p=0.008 n=5+5)
GoTypes 125MB ± 0% 123MB ± 0% -2.13% (p=0.008 n=5+5)
Compiler 531MB ± 0% 513MB ± 0% -3.40% (p=0.008 n=5+5)
SSA 2.00GB ± 0% 1.93GB ± 0% -3.34% (p=0.008 n=5+5)
Flate 24.5MB ± 0% 24.3MB ± 0% -1.18% (p=0.008 n=5+5)
GoParser 29.4MB ± 0% 28.7MB ± 0% -2.34% (p=0.008 n=5+5)
Reflect 87.1MB ± 0% 86.0MB ± 0% -1.33% (p=0.008 n=5+5)
Tar 35.3MB ± 0% 34.8MB ± 0% -1.44% (p=0.008 n=5+5)
XML 47.9MB ± 0% 47.1MB ± 0% -1.86% (p=0.008 n=5+5)
[Geo mean] 82.8MB 81.1MB -2.08%
name old allocs/op new allocs/op delta
Template 352k ± 0% 347k ± 0% -1.32% (p=0.008 n=5+5)
Unicode 342k ± 0% 339k ± 0% -0.66% (p=0.008 n=5+5)
GoTypes 1.29M ± 0% 1.27M ± 0% -1.30% (p=0.008 n=5+5)
Compiler 4.98M ± 0% 4.87M ± 0% -2.14% (p=0.008 n=5+5)
SSA 15.7M ± 0% 15.2M ± 0% -2.86% (p=0.008 n=5+5)
Flate 233k ± 0% 231k ± 0% -0.83% (p=0.008 n=5+5)
GoParser 296k ± 0% 291k ± 0% -1.54% (p=0.016 n=5+4)
Reflect 1.05M ± 0% 1.04M ± 0% -0.65% (p=0.008 n=5+5)
Tar 343k ± 0% 339k ± 0% -0.97% (p=0.008 n=5+5)
XML 432k ± 0% 426k ± 0% -1.19% (p=0.008 n=5+5)
[Geo mean] 815k 804k -1.35%
name old object-bytes new object-bytes delta
Template 505kB ± 0% 505kB ± 0% -0.01% (p=0.008 n=5+5)
Unicode 224kB ± 0% 224kB ± 0% ~ (all equal)
GoTypes 1.82MB ± 0% 1.83MB ± 0% +0.06% (p=0.008 n=5+5)
Flate 324kB ± 0% 324kB ± 0% +0.00% (p=0.008 n=5+5)
GoParser 402kB ± 0% 402kB ± 0% +0.04% (p=0.008 n=5+5)
Reflect 1.39MB ± 0% 1.39MB ± 0% -0.01% (p=0.008 n=5+5)
Tar 449kB ± 0% 449kB ± 0% -0.02% (p=0.008 n=5+5)
XML 598kB ± 0% 597kB ± 0% -0.05% (p=0.008 n=5+5)
Change-Id: Ifc9d5c1bd01f90171414b8fb18ffe2290d271143
Reviewed-on: https://go-review.googlesource.com/c/114797
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Do []byte(string) conversions more efficiently when the string
is a constant. Instead of calling stringtobyteslice, allocate
just the space we need and encode the initialization directly.
[]byte("foo") rewrites to the following pseudocode:
var s [3]byte // on heap or stack, depending on whether b escapes
s = *(*[3]byte)(&"foo"[0]) // initialize s from the string
b = s[:]
which generates this assembly:
0x001d 00029 (tmp1.go:9) LEAQ type.[3]uint8(SB), AX
0x0024 00036 (tmp1.go:9) MOVQ AX, (SP)
0x0028 00040 (tmp1.go:9) CALL runtime.newobject(SB)
0x002d 00045 (tmp1.go:9) MOVQ 8(SP), AX
0x0032 00050 (tmp1.go:9) MOVBLZX go.string."foo"+2(SB), CX
0x0039 00057 (tmp1.go:9) MOVWLZX go.string."foo"(SB), DX
0x0040 00064 (tmp1.go:9) MOVW DX, (AX)
0x0043 00067 (tmp1.go:9) MOVB CL, 2(AX)
// Then the slice is b = {AX, 3, 3}
The generated code is still not optimal, as it still loads/stores
from read-only memory instead of using constant stores. Next CL...
Updates #26498
Fixes #10170
Change-Id: I4b990b19f9a308f60c8f4f148934acffefe0a5bd
Reviewed-on: https://go-review.googlesource.com/c/140698
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
In some optimization rules the type of generated OffPtr was
incorrectly set to the type of the pointee, instead of the
pointer. When the OffPtr value is spilled, this may generate
a spill of the wrong type, e.g. a floating point spill of an
integer (pointer) value. On Wasm, this leads to invalid
bytecode.
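Sketched in rule-value notation (illustrative, not the actual rules touched):
(OffPtr <typ.Float64>    [8] ptr)  // wrong: typed as the float64 pointee
(OffPtr <typ.Float64Ptr> [8] ptr)  // right: an OffPtr is itself a pointer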
Fixes #27961.
Change-Id: I5d464847eb900ed90794105c0013a1a7330756cc
Reviewed-on: https://go-review.googlesource.com/c/139257
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Richard Musiol <neelance@gmail.com>
That optimization is not valid if x == -0.
The test is a bit tricky because 0 == -0. We distinguish
0 from -0 using 1/0 == +Inf, 1/-0 == -Inf.
This has been a bug since CL 24790 in Go 1.8. Probably doesn't
warrant a backport.
Fixes #27718
Note: the optimization x-0 -> x is actually valid.
But it's probably best to take it out, so as to not confuse readers.
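A small Go sketch of the test's trick (IEEE semantics; math.IsInf is
from the standard library):
// x + 0 == x fails exactly at x == -0, since (-0) + 0 yields +0
// under round-to-nearest. x - 0 == x holds for all x, so that rule is valid.
func isNegZero(x float64) bool {
        return x == 0 && math.IsInf(1/x, -1) // 1/-0 == -Inf
}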
Change-Id: I99f16a93b45f7406ec8053c2dc759a13eba035fa
Reviewed-on: https://go-review.googlesource.com/135701
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Use the new custom truncation/extension code when storing float32
values in, or extracting them from, AuxInts, to avoid the value being
changed by the host platform's floating point conversion instructions
(e.g. sNaN -> qNaN).
Updates #27516.
Change-Id: Id39650f1431ef74af088c895cf4738ea5fa87974
Reviewed-on: https://go-review.googlesource.com/134855
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Signalling NaNs were being converted to quiet NaNs during constant
propagation through integer <-> float store-to-load forwarding.
This occurs because we store float32 constants as float64
values and CPU hardware 'quietens' NaNs during conversion between
the two.
Eventually we want to move to using float32 values to store float32
constants; however, that will be a big change since both the compiler
and the assembler expect float64 values. So for now this is a small
change that fixes the immediate issue.
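A small illustration of the quietening effect (a sketch; exact
behaviour depends on the host CPU's conversion instructions):
sn := math.Float32frombits(0x7f800001) // signalling NaN: quiet bit (0x00400000) clear
w := float64(sn)                       // hardware widening typically sets the quiet bit
_ = math.Float32bits(float32(w))       // often 0x7fc00001, no longer the sNaN pattern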
Fixes #27193.
Change-Id: Iac54bd8c13abe26f9396712bc71f9b396f842724
Reviewed-on: https://go-review.googlesource.com/132956
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
A signed right shift preceding an unsigned right shift by register
width-1 (which extracts the sign bit) is superfluous.
trigger counts during ./make.bash
0 (Rsh8U (Rsh8 x _) 7 ) -> (Rsh8U x 7 )
0 (Rsh16U (Rsh16 x _) 15 ) -> (Rsh16U x 15)
2 (Rsh32U (Rsh32 x _) 31 ) -> (Rsh32U x 31)
251 (Rsh64U (Rsh64 x _) 63 ) -> (Rsh64U x 63)
Changes the instructions generated on AMD64 for x / 2 where
x is a signed integer from:
MOVQ AX, CX
SARQ $63, AX
SHRQ $63, AX
ADDQ CX, AX
SARQ $1, AX
to:
MOVQ AX, CX
SHRQ $63, AX
ADDQ CX, AX
SARQ $1, AX
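The Go source producing the sequences above is simply (illustrative):
func div2(x int64) int64 {
        return x / 2 // signed division: needs the sign-bit adjustment sequence
}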
Change-Id: I86321ae8fc9dc24b8fa9eb80aa5c7299eff8c9dc
Reviewed-on: https://go-review.googlesource.com/115956
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
When compiling with -race, we insert calls to racefuncenter
into every function. Add a rule that removes them in leaf functions
without instrumented loads/stores.
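For example, a leaf function with no instrumented memory operations no
longer pays for the runtime calls (illustrative):
// Under -race, this previously called racefuncenter/racefuncexit;
// after this CL those calls are gone (leaf, no instrumented loads/stores).
func add(a, b int) int {
        return a + b
}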
Shaves ~30kb from "-race" version of go tool:
file difference:
go_old 15626192
go_new 15597520 [-28672 bytes]
section differences:
global text (code) = -24513 bytes (-0.358598%)
read-only data = -5849 bytes (-0.167064%)
Total difference -30362 bytes (-0.097928%)
Fixes #24662
Change-Id: Ia63bf1827f4cf2c25e3e28dcd097c150994ade0a
Reviewed-on: https://go-review.googlesource.com/121235
Run-TryBot: Ilya Tocar <ilya.tocar@intel.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Lack of a well-defined order between VarDef and related
address operations sometimes causes problems with store order
and write barrier transformations; glitches in the order are
made irreparable (by later optimizations) if the two parts of
the glitch straddle a split in the original block caused by
insertion of a write barrier diamond.
Fix this by creating a LocalAddr for addresses of locals
(which is what VarDef matters for) that takes a memory input
to help make the order explicit. Addr is modified to be legal
only for an SB operand, so there is no overlap between
Addr and LocalAddr uses (there may be some downstream
cleanup from this).
Changes to generic.rules and rewrite.go ensure that codegen
tests continue to pass; CSE of LocalAddr is impaired, and I'm
not quite sure of the cost.
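Sketch of the resulting op split (shapes illustrative):
(Addr      {sym} SB)      // addresses of globals; SB operand only
(LocalAddr {sym} SP mem)  // addresses of locals; the memory arg makes the order with VarDef explicit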
Fixes #26105.
Change-Id: Id4192b4440aa4e9d7ba54a465c456df9b530b515
Reviewed-on: https://go-review.googlesource.com/122483
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Move ops can be faster than memmove calls because the number of bytes
to be moved is fixed and they don't incur the overhead of a call.
This change allows memmove to be converted into a Move op when the
arguments are disjoint.
The optimization is only enabled on s390x at the moment, however
other architectures may also benefit from it in the future. The
memmove inlining rule triggers an extra 12 times when compiling the
standard library. It will most likely make more of a difference as the
disjoint function is improved over time (to recognize fresh heap
allocations for example).
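The rule has roughly this shape (simplified; the real rule also checks
the size and that the call's result is otherwise unused):
(StaticCall {sym} (Store _ (Const64 [sz]) (Store _ src (Store {t} _ dst mem))))
  && isSameSym(sym, "runtime.memmove") && disjoint(dst, sz, src, sz)
  -> (Move {t} [sz] dst src mem)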
Change-Id: I9af570dcfff28257b8e59e0ff584a46d8e248310
Reviewed-on: https://go-review.googlesource.com/110064
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ilya Tocar <ilya.tocar@intel.com>
Add a missing rule to fold out Convert ops on 32-bit architectures.
This comes up with expressions like
unsafe.Pointer(uintptr(p) + x)
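The added fold is presumably the 32-bit analogue of the existing
64-bit rule (sketch):
(Convert (Add32 (Convert ptr mem) off) mem) -> (Add32 ptr off)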
Change-Id: I429e968e5d1a3e13a386ddd29a08ebb6d7dd938a
Reviewed-on: https://go-review.googlesource.com/112158
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Avoid using values that do not dominate the block the Zero op is in.
Should fix the SSA check builder.
The additional OffPtr ops inserted by these rules should always be
optimized away when the Load is replaced with a const zero.
Fixes #25288.
Change-Id: I4163b58e60364f77c8a206ba084073a58ca6320a
Reviewed-on: https://go-review.googlesource.com/112136
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This was an artifact from when we had a separate ssa.Type interface to
break the circular dependency between packages ssa and gc. It's no longer
needed now that package ssa directly uses package types.
Change-Id: I6a93e5d79082815f7f0eb89507381969cc6cb403
Reviewed-on: https://go-review.googlesource.com/109137
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
If both inputs are constant offsets from the same pointer then we
can evaluate NeqPtr and EqPtr at compile time. Triggers a few times
during all.bash. Removes a conditional branch in the following
code:
copy(x[1:], x[:])
This branch was recently added as an optimization in CL 94596. We
now skip the memmove if the pointers are equal. However, in the
above code we know at compile time that they are never equal.
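A sketch of the constant-offset folds (simplified):
(EqPtr  (OffPtr [x] p) (OffPtr [y] p)) -> (ConstBool [b2i(x == y)])
(NeqPtr (OffPtr [x] p) (OffPtr [y] p)) -> (ConstBool [b2i(x != y)])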
Also, when the offset is variable, check if the offset is zero
rather than if the pointers are equal. For example:
copy(x[a:], x[:])
This would now skip the copy if a == 0, rather than if x + a == x.
Finally I've also added a rule to make IsNonNil true for pointers
to values on the stack. The nil check elimination pass will catch
these anyway, but eliminating them here might eliminate branches
earlier.
Change-Id: If72f436fef0a96ad0f4e296d3a1f8b6c3e712085
Reviewed-on: https://go-review.googlesource.com/106635
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
CL 38338 introduced SSA rules to optimize two types of pointer equality
tests: a pointer compared with itself, and comparison of addresses taken
of two symbols which may have the same base. This patch adds rules to
apply the same optimization to pointer inequality tests, which also ensures
that two pointers to zero-width types cannot be both equal and unequal
at the same time.
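Sketch of the added inequality rules, mirroring the existing equality
ones (simplified):
(NeqPtr x x) -> (ConstBool [0])
(NeqPtr (Addr {a} x) (Addr {b} x)) -> (ConstBool [b2i(a != b)])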
Fixes #24503.
Change-Id: Ic828aeb86ae2e680caf66c35f4c247674768a9ba
Reviewed-on: https://go-review.googlesource.com/102275
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This commit allows architectures to disable optimizations that need the
Avg* and Hmul* operations.
WebAssembly has no such operations, so using them as an optimization
but then having to emulate them with multiple instructions makes no
sense, especially since the WebAssembly compiler may do the same
optimizations internally.
Updates #18892
Change-Id: If57b59e3235482a9e0ec334a7312b3e3b5fc2b61
Reviewed-on: https://go-review.googlesource.com/103256
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
The first word of an interface is a pointer, but for the purposes
of GC we don't need to treat it as such.
1. If it is a non-empty interface, the pointer points to an itab
which is always in persistentalloc space.
2. If it is an empty interface, the pointer points to a _type.
a. If it is a compile-time-allocated type, it points into
the read-only data section.
b. If it is a reflect-allocated type, it points into the Go heap.
Reflect is responsible for keeping a reference to
the underlying type so it won't be GCd.
If we ever have a moving GC, we need to change this for 2b (as
well as scan itabs to update their itab._type fields).
Write barriers on the first word of interfaces have already been removed.
Change-Id: I643e91d7ac4de980ac2717436eff94097c65d959
Reviewed-on: https://go-review.googlesource.com/97518
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Spotted while working on #18943; it triggers once during bootstrap.
Change-Id: Ia4330ccc6395627c233a8eb4dcc0e3e2a770bea7
Reviewed-on: https://go-review.googlesource.com/94764
Reviewed-by: Keith Randall <khr@golang.org>
In addition to looking nicer to the eye, this allows rules to be
reformatted and indented without causing spurious changes to the
generated file, making it easier to spot functional changes.
After this CL, all CLs that will aggregate rules through
the new "|" functionality should cause no changes to the
generated files.
Change-Id: Icec283585ba8d7b91c79d76513c1d83dca4b30aa
Reviewed-on: https://go-review.googlesource.com/95216
Run-TryBot: Giovanni Bajo <rasky@develer.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Constant fold Not of boolean constants.
Noticed while working on #23504.
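The fold itself (sketch):
(Not (ConstBool [c])) -> (ConstBool [1-c])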
Change-Id: I965705154ee7348a1a159fad4e029b922d3171b3
Reviewed-on: https://go-review.googlesource.com/88956
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
Recurse into structs/arrays of one element when
assigning names.
Test incorporated into existing end-to-end debugger test,
hand-verified that it fails without this CL.
Fixes #19868
Revives CL 40010
Old-Change-Id: I0266e58af975fb64cfa17922be383b70f0a7ea96
Change-Id: I122ac2375931477769ec8d763607c1ec42d78a7f
Reviewed-on: https://go-review.googlesource.com/71731
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
This CL optimizes the assembly generated when dividing the result
of len() or cap() by a power-of-2 constant:
func lenDiv(s []int) int {
        return len(s) / 16
}
amd64 assembly before the CL:
MOVQ "".s+16(SP), AX
MOVQ AX, CX
SARQ $63, AX
SHRQ $60, AX
ADDQ CX, AX
SARQ $4, AX
MOVQ AX, "".~r1+32(SP)
RET
amd64 assembly after the CL:
MOVQ "".s+16(SP), AX
SHRQ $4, AX
MOVQ AX, "".~r1+32(SP)
RET
The CL relies on the fact that the results of len() and cap()
cannot be negative.
Trigger stats for the added SSA rules on linux/amd64 when running
make.bash:
46 Div64
12 Mod64
The added SSA rules may trigger in more cases in the future,
once SSA values are annotated with information about their
lower bounds.
For instance:
func f(i int16) int16 {
        if i < 3 {
                return -1
        }
        // Lower bound of i is 3 here -> i is non-negative,
        // so unsigned arithmetic may be used here.
        return i % 16
}
Change-Id: I8bc6be5a03e71157ced533c01416451ff6f1a7f0
Reviewed-on: https://go-review.googlesource.com/65530
Reviewed-by: Keith Randall <khr@golang.org>
Add generic rules to propagate floating point constants through
comparisons and integer conversions. These new rules seldom trigger
in the standard library, so there is no performance change; however,
I think they are worth adding anyway for completeness.
Change-Id: I9db5222746508a2996f1cafb72f4e0cf2541de07
Reviewed-on: https://go-review.googlesource.com/63795
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This CL adds generic SSA rules to propagate constants through raw bits
conversions between floats and integers. This allows constants to
propagate through some math functions. For example, math.Copysign(0, -1)
is now constant folded to a load of -0.0.
Requires a fix to the ARM assembler which loaded -0.0 as +0.0.
Change-Id: I52649a4691077c7414f19d17bb599a6743c23ac2
Reviewed-on: https://go-review.googlesource.com/62250
Run-TryBot: Michael Munday <mike.munday@ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Do the same thing as CL 55143 to reduce IMULs.
Change-Id: I1bd38f618058e3cd74fac181f003610ea13f2294
Reviewed-on: https://go-review.googlesource.com/56252
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
For code like the following (where x escapes):
x := []int{1}
We're currently generating a nil check. The line above is really 3 operations:
t := new([1]int)
t[0] = 1
x := t[:]
We remove the nil check for t[0] = 1, but not for t[:].
Our current nil check removal rule is too strict about the possible
memory arguments of the nil check. Unlike zeroing or storing to the
result of runtime.newobject, the result of runtime.newobject is
never nil, even after other stores have happened in the meantime.
Change-Id: I95fad4e3a59c27effdb37c43ea215e18f30b1e5f
Reviewed-on: https://go-review.googlesource.com/58711
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Normally 64-bit div/mod is turned into runtime calls on 32-bit
architectures, but the front end leaves power-of-two constant divisions
alone and hopes the SSA backend turns them into a shift or AND. The SSA rule is
(Mod64u <t> n (Const64 [c])) && isPowerOfTwo(c) -> (And64 n (Const64 <t> [c-1]))
But isPowerOfTwo returns true only for positive int64 values, which
leaves 1<<63 unhandled. Add a special case for 1<<63.
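The special case, sketched (1<<63 is -1<<63 when viewed as a signed AuxInt):
(Mod64u <t> n (Const64 [-1<<63])) -> (And64 n (Const64 <t> [1<<63-1]))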
Fixes #21517.
Change-Id: I02d27dc7177d4af0ee8d7f5533714edecddf8c95
Reviewed-on: https://go-review.googlesource.com/56890
Reviewed-by: Keith Randall <khr@golang.org>
There are a few cases where this can be useful. Apart from the obvious
(and silly)
100*n + 200*n
where we generate one IMUL instead of two, consider:
15*n + 31*n
Currently, the compiler strength-reduces both imuls, generating:
0x0000 00000 MOVQ "".n+8(SP), AX
0x0005 00005 MOVQ AX, CX
0x0008 00008 SHLQ $4, AX
0x000c 00012 SUBQ CX, AX
0x000f 00015 MOVQ CX, DX
0x0012 00018 SHLQ $5, CX
0x0016 00022 SUBQ DX, CX
0x0019 00025 ADDQ CX, AX
0x001c 00028 MOVQ AX, "".~r1+16(SP)
0x0021 00033 RET
But combining the imuls is both faster and shorter:
0x0000 00000 MOVQ "".n+8(SP), AX
0x0005 00005 IMULQ $46, AX
0x0009 00009 MOVQ AX, "".~r1+16(SP)
0x000e 00014 RET
even without strength-reduction.
Moreover, consider:
5*n + 7*(n+1) + 11*(n+2)
We already have a rule that rewrites 7*(n+1) into 7*n+7, so the
generated code (without imuls merging) looks like this:
0x0000 00000 MOVQ "".n+8(SP), AX
0x0005 00005 LEAQ (AX)(AX*4), CX
0x0009 00009 MOVQ AX, DX
0x000c 00012 NEGQ AX
0x000f 00015 LEAQ (AX)(DX*8), AX
0x0013 00019 ADDQ CX, AX
0x0016 00022 LEAQ (DX)(CX*2), CX
0x001a 00026 LEAQ 29(AX)(CX*1), AX
0x001f 00031 MOVQ AX, "".~r1+16(SP)
But with imuls merging, the 5n, 7n and 11n factors get merged, and the
generated code looks like this:
0x0000 00000 MOVQ "".n+8(SP), AX
0x0005 00005 IMULQ $23, AX
0x0009 00009 ADDQ $29, AX
0x000d 00013 MOVQ AX, "".~r1+16(SP)
0x0012 00018 RET
Which is both faster and shorter; that's also exactly the code that
clang and the Intel C compiler generate for the above expression.
Change-Id: Ib4d5503f05d2f2efe31a1be14e2fe6cac33730a9
Reviewed-on: https://go-review.googlesource.com/55143
Reviewed-by: Keith Randall <khr@golang.org>
Add support for generating TBZ/TBNZ instructions.
The bit-test-and-branch pattern shows up in a number of
important places, including the runtime (gc bitmaps).
Before this change, there were 3 TB[N]?Z instructions in the Go tool,
all of which were in hand-written assembly. After this change, there
are 285. Also, the go1 benchmark binary gets about 4.5kB smaller.
Fixes #21361
Change-Id: I170c138b852754b9b8df149966ca5e62e6dfa771
Reviewed-on: https://go-review.googlesource.com/54470
Run-TryBot: Philip Hofer <phofer@umich.edu>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
These rules trigger a few times during make.bash.
When we eliminate boundedness checks from walk.go
we'll rely on them more heavily.
Updates #19692
Change-Id: I268c36ae2f1401c68dd685b15f2d30f5d6971176
Reviewed-on: https://go-review.googlesource.com/43775
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Noticed while looking at #20356.
Cuts 160k (1%) off of the cmd/compile binary.
Change-Id: If2397bc6971d6be9be6975048adecb0b5efa6d66
Reviewed-on: https://go-review.googlesource.com/43501
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Eliminates stores of values that have just been loaded from the same
location. Handles the common case where there are up to 3 intermediate
stores to non-overlapping struct fields.
For example the loads and stores of x.a, x.b and x.d in the following
function are now removed:
type T struct {
        a, b, c, d int
}

func f(x *T) {
        y := *x
        y.c += 8
        *x = y
}
Before this CL (s390x):
TEXT "".f(SB)
MOVD "".x(R15), R5
MOVD (R5), R1
MOVD 8(R5), R2
MOVD 16(R5), R0
MOVD 24(R5), R4
ADD $8, R0, R3
STMG R1, R4, (R5)
RET
After this CL (s390x):
TEXT "".f(SB)
MOVD "".x(R15), R1
MOVD 16(R1), R0
ADD $8, R0, R0
MOVD R0, 16(R1)
RET
In total these rules are triggered ~5091 times during all.bash,
which is broken down as:
Intermediate stores | Triggered
--------------------+----------
0 | 1434
1 | 2508
2 | 888
3 | 261
--------------------+----------
Change-Id: Ia4721ae40146aceec1fdd3e65b0e9283770bfba5
Reviewed-on: https://go-review.googlesource.com/38793
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
When package ssa was created, Type was in package gc.
To avoid circular dependencies, we used an interface (ssa.Type)
to represent type information in SSA.
In the Go 1.9 cycle, gri extricated the Type type from package gc.
As a result, we can now use it in package ssa.
Now, instead of package types depending on package ssa,
it is the other way.
This is a more sensible dependency tree,
and helps compiler performance a bit.
Though this is a big CL, most of the changes are
mechanical and uninteresting.
Interesting bits:
* Add new singleton globals to package types for the special
SSA types Memory, Void, Invalid, Flags, and Int128.
* Add two new Types, TSSA for the special types,
and TTUPLE, for SSA tuple types.
ssa.MakeTuple is now types.NewTuple.
* Move type comparison result constants CMPlt, CMPeq, and CMPgt
to package types.
* We had picked the name "types" in our rules for the handy
list of types provided by ssa.Config. That conflicted with
the types package name, so change it to "typ".
* Update the type comparison routine to handle tuples and special
types inline.
* Teach gc/fmt.go how to print special types.
* We can now eliminate ElemTypes in favor of just Elem,
and probably also some other duplicated Type methods
designed to return ssa.Type instead of *types.Type.
* The ssa tests were using their own dummy types,
and they were not particularly careful about types in general.
Of necessity, this CL switches them to use *types.Type;
it does not make them more type-accurate.
Unfortunately, using types.Type means initializing a bit
of the types universe.
This is prime for refactoring and improvement.
This shrinks ssa.Value; it now fits in a smaller size class
on 64-bit systems. This doesn't have a giant impact,
though, since most Values are preallocated in a chunk.
name old alloc/op new alloc/op delta
Template 37.9MB ± 0% 37.7MB ± 0% -0.57% (p=0.000 n=10+8)
Unicode 28.9MB ± 0% 28.7MB ± 0% -0.52% (p=0.000 n=10+10)
GoTypes 110MB ± 0% 109MB ± 0% -0.88% (p=0.000 n=10+10)
Flate 24.7MB ± 0% 24.6MB ± 0% -0.66% (p=0.000 n=10+10)
GoParser 31.1MB ± 0% 30.9MB ± 0% -0.61% (p=0.000 n=10+9)
Reflect 73.9MB ± 0% 73.4MB ± 0% -0.62% (p=0.000 n=10+8)
Tar 25.8MB ± 0% 25.6MB ± 0% -0.77% (p=0.000 n=9+10)
XML 41.2MB ± 0% 40.9MB ± 0% -0.80% (p=0.000 n=10+10)
[Geo mean] 40.5MB 40.3MB -0.68%
name old allocs/op new allocs/op delta
Template 385k ± 0% 386k ± 0% ~ (p=0.356 n=10+9)
Unicode 343k ± 1% 344k ± 0% ~ (p=0.481 n=10+10)
GoTypes 1.16M ± 0% 1.16M ± 0% -0.16% (p=0.004 n=10+10)
Flate 238k ± 1% 238k ± 1% ~ (p=0.853 n=10+10)
GoParser 320k ± 0% 320k ± 0% ~ (p=0.720 n=10+9)
Reflect 957k ± 0% 957k ± 0% ~ (p=0.460 n=10+8)
Tar 252k ± 0% 252k ± 0% ~ (p=0.133 n=9+10)
XML 400k ± 0% 400k ± 0% ~ (p=0.796 n=10+10)
[Geo mean] 428k 428k -0.01%
Removing all the interface calls helps non-trivially with CPU, though.
name old time/op new time/op delta
Template 178ms ± 4% 173ms ± 3% -2.90% (p=0.000 n=94+96)
Unicode 85.0ms ± 4% 83.9ms ± 4% -1.23% (p=0.000 n=96+96)
GoTypes 543ms ± 3% 528ms ± 3% -2.73% (p=0.000 n=98+96)
Flate 116ms ± 3% 113ms ± 4% -2.34% (p=0.000 n=96+99)
GoParser 144ms ± 3% 140ms ± 4% -2.80% (p=0.000 n=99+97)
Reflect 344ms ± 3% 334ms ± 4% -3.02% (p=0.000 n=100+99)
Tar 106ms ± 5% 103ms ± 4% -3.30% (p=0.000 n=98+94)
XML 198ms ± 5% 192ms ± 4% -2.88% (p=0.000 n=92+95)
[Geo mean] 178ms 173ms -2.65%
name old user-time/op new user-time/op delta
Template 229ms ± 5% 224ms ± 5% -2.36% (p=0.000 n=95+99)
Unicode 107ms ± 6% 106ms ± 5% -1.13% (p=0.001 n=93+95)
GoTypes 696ms ± 4% 679ms ± 4% -2.45% (p=0.000 n=97+99)
Flate 137ms ± 4% 134ms ± 5% -2.66% (p=0.000 n=99+96)
GoParser 176ms ± 5% 172ms ± 8% -2.27% (p=0.000 n=98+100)
Reflect 430ms ± 6% 411ms ± 5% -4.46% (p=0.000 n=100+92)
Tar 128ms ±13% 123ms ±13% -4.21% (p=0.000 n=100+100)
XML 239ms ± 6% 233ms ± 6% -2.50% (p=0.000 n=95+97)
[Geo mean] 220ms 213ms -2.76%
Change-Id: I15c7d6268347f8358e75066dfdbd77db24e8d0c1
Reviewed-on: https://go-review.googlesource.com/42145
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
These collectively fire a few hundred times during make.bash,
mostly rewriting XOR SETNE -> SETEQ.
Fixes #17905.
Change-Id: Ic5eb241ee93ed67099da3de11f59e4df9fab64a3
Reviewed-on: https://go-review.googlesource.com/42491
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Noticed while adding to the bitset implementation
in cmd/compile/internal/gc.
The (Com (Const)) optimizations were already present
in the AMD64 lowered optimizations.
They trigger 118, 44, 262, and 108 times
respectively for int sizes 8, 16, 32, and 64
in a run of make.bash.
The (Or (And)) optimization is new.
It triggers 3 times for int size 8
and once for int size 64 during make.bash,
in packages internal/poll, reflect,
encoding/asn1, and go/types,
so there is a bit of natural test coverage.
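Spelled out for the 8-bit case (sketch; the other widths are analogous):
(Com8 (Const8 [c])) -> (Const8 [^c])
(Or8  x (And8 x y)) -> x  // absorption: x | (x & y) == x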
Change-Id: I44072864ff88831d5ec7dce37c516d29df056e98
Reviewed-on: https://go-review.googlesource.com/41758
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Type.Size and Type.Alignment are for the front end:
They calculate size and alignment if needed.
Type.MustSize and Type.MustAlignment are for the back end:
They call Fatal if size and alignment are not already calculated.
Most uses are of MustSize and MustAlignment,
but that's because the back end is newer,
and this API was added to support it.
This CL was mostly generated with sed and selective reversion.
The only mildly interesting bit is the change of the ssa.Type interface
and the supporting ssa dummy types.
Follow-up to review feedback on CL 41970.
Passes toolstash-check.
Change-Id: I0d9b9505e57453dae8fb6a236a07a7a02abd459e
Reviewed-on: https://go-review.googlesource.com/42016
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This makes the cmd/compile/internal/ssa package
compile much faster, and has no impact
on the speed of the compiler.
The chunk size was selected empirically: at chunk size 10,
the object file was smaller than at chunk size 5 or 20.
name old time/op new time/op delta
SSA 7.33s ± 5% 5.64s ± 1% -23.10% (p=0.000 n=10+10)
name old user-time/op new user-time/op delta
SSA 9.70s ± 1% 8.04s ± 2% -17.17% (p=0.000 n=9+10)
name old obj-bytes new obj-bytes delta
SSA 9.82M ± 0% 8.28M ± 0% -15.67% (p=0.000 n=10+10)
Change-Id: Iab472905da3f0e82f3db2c93d06e2759abc9dd44
Reviewed-on: https://go-review.googlesource.com/41296
Reviewed-by: Keith Randall <khr@golang.org>
They are left over from the days before
we had BlockKindFirst and swapSuccessors.
Change-Id: I9259d53ac2821ca4d5de5dd520ca4b78f52ecad4
Reviewed-on: https://go-review.googlesource.com/41206
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Now only cmd/asm and cmd/compile depend on cmd/internal/obj. Changing
the assembler backends no longer requires reinstalling cmd/link or
cmd/addr2line.
There's also now one canonical definition of the object file format in
cmd/internal/objabi/doc.go, with a warning to update all three
implementations.
objabi is still something of a grab bag of unrelated code (e.g., flag
and environment variable handling probably belong in a separate "tool"
package), but this is still progress.
Fixes#15165.
Fixes#20026.
Change-Id: Ic4b92fac7d0d35438e0d20c9579aad4085c5534c
Reviewed-on: https://go-review.googlesource.com/40972
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
Note that this is a redo of an undo of the original buggy CL 38666.
We have lots of rewrite rules that vary only in the fact that
we have 2 versions for the 2 different orderings of various
commuting ops. For example:
(ADDL x (MOVLconst [c])) -> (ADDLconst [c] x)
(ADDL (MOVLconst [c]) x) -> (ADDLconst [c] x)
It can get unwieldy quickly, especially when there is more than
one commuting op in a rule.
Our existing "fix" for this problem is to have rules that
canonicalize the operations first. For example:
(Eq64 x (Const64 <t> [c])) && x.Op != OpConst64 -> (Eq64 (Const64 <t> [c]) x)
Subsequent rules can then assume if there is a constant arg to Eq64,
it will be the first one. This fix kinda works, but it is fragile and
only works when we remember to include the required extra rules.
The fundamental problem is that the rule matcher doesn't
know anything about commuting ops. This CL fixes that fact.
We already have information about which ops commute. (The register
allocator takes advantage of commutativity.) The rule generator now
automatically generates multiple rules for a single source rule when
there are commutative ops in the rule. We can now drop all of our
almost-duplicate source-level rules and the canonicalization rules.
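For example, only one of the two ADDL rules above now needs to be
written; the generator expands it to match both argument orders
(illustrative):
(ADDL x (MOVLconst [c])) -> (ADDLconst [c] x)  // swapped form matched automatically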
I have some CLs in progress that will be a lot less verbose when
the rule generator handles commutativity for me.
I had to reorganize the load-combining rules a bit. The 8-way OR rules
generated 128 different reorderings, which was causing the generator
to put too much code in the rewrite*.go files (the big ones were going
from 25K lines to 132K lines). Instead I reorganized the rules to
combine pairs of loads at a time. The generated rule files are now
actually a bit (5%) smaller.
Make.bash times are ~unchanged.
Compiler benchmarks are not observably different. Probably because
we don't spend much compiler time in rule matching anyway.
I've also done a pass over all of our ops adding commutative markings
for ops which hadn't had them previously.
Fixes #18292
Change-Id: Ic1c0e43fbf579539f459971625f69690c9ab8805
Reviewed-on: https://go-review.googlesource.com/38801
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>