// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ssa

import (
	"cmd/compile/internal/base"
	"cmd/compile/internal/logopt"
	"cmd/compile/internal/reflectdata"
	"cmd/compile/internal/rttype"
	"cmd/compile/internal/types"
	"cmd/internal/obj"
	"cmd/internal/obj/s390x"
	"cmd/internal/objabi"
	"cmd/internal/src"
	"encoding/binary"
	"fmt"
	"internal/buildcfg"
	"io"
	"math"
	"math/bits"
	"os"
	"path/filepath"
	"strings"
)
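
// deadValueChoice controls whether applyRewrite removes values that become
// dead (unused) as a side effect of applying rewrite rules.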
type deadValueChoice bool

const (
	leaveDeadValues  deadValueChoice = false
	removeDeadValues                 = true

	repZeroThreshold = 1408 // size beyond which we use REP STOS for zeroing
	repMoveThreshold = 1408 // size beyond which we use REP MOVS for copying
)
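
// applyRewrite repeatedly applies the block rewrite function rb and the value
// rewrite function rv to f until neither reports any further change.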
// deadcode indicates whether rewrite should try to remove any values that become dead.
func applyRewrite(f *Func, rb blockRewriter, rv valueRewriter, deadcode deadValueChoice) {
	// repeat rewrites until we find no more rewrites
	pendingLines := f.cachedLineStarts // Holds statement boundaries that need to be moved to a new value/block
	pendingLines.clear()
	debug := f.pass.debug
	if debug > 1 {
		fmt.Printf("%s: rewriting for %s\n", f.pass.name, f.Name)
	}
	// If the number of rewrite iterations reaches itersLimit, we will
	// at that point turn on cycle detection. Instead of a fixed limit,
	// size the limit according to func size to allow for cases such
	// as the one in issue #66773.
	itersLimit := f.NumBlocks()
	if itersLimit < 20 {
		itersLimit = 20
	}
	var iters int
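	// states, once cycle detection is enabled below, records a hash of the
	// function after each iteration so that repeated states can be detected.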
	var states map[string]bool
	for {
		change := false
		deadChange := false
		for _, b := range f.Blocks {
			var b0 *Block
			if debug > 1 {
				b0 = new(Block)
				*b0 = *b
				b0.Succs = append([]Edge{}, b.Succs...) // make a new copy, not aliasing
			}
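			// Replace any control values that are copies with the value they copy.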
			for i, c := range b.ControlValues() {
				for c.Op == OpCopy {
					c = c.Args[0]
					b.ReplaceControl(i, c)
				}
			}
			if rb(b) {
				change = true
				if debug > 1 {
					fmt.Printf("rewriting %s -> %s\n", b0.LongString(), b.LongString())
				}
			}
			for j, v := range b.Values {
				var v0 *Value
				if debug > 1 {
					v0 = new(Value)
					*v0 = *v
					v0.Args = append([]*Value{}, v.Args...) // make a new copy, not aliasing
				}
				if v.Uses == 0 && v.removeable() {
					if v.Op != OpInvalid && deadcode == removeDeadValues {
						// Reset any values that are now unused, so that we decrement
						// the use count of all of their arguments.
						// Not quite a deadcode pass, because it does not handle cycles.
						// But it should help Uses==1 rules to fire.
						v.reset(OpInvalid)
						deadChange = true
					}
					// No point rewriting values which aren't used.
					continue
				}

				vchange := phielimValue(v)
				if vchange && debug > 1 {
					fmt.Printf("rewriting %s -> %s\n", v0.LongString(), v.LongString())
				}

				// Eliminate copy inputs.
				// If any copy input becomes unused, mark it
				// as invalid and discard its argument. Repeat
				// recursively on the discarded argument.
				// This phase helps remove phantom "dead copy" uses
				// of a value so that a x.Uses==1 rule condition
				// fires reliably.
				for i, a := range v.Args {
					if a.Op != OpCopy {
						continue
					}
					aa := copySource(a)
					v.SetArg(i, aa)
					// If a, a copy, has a line boundary indicator, attempt to find a new value
					// to hold it. The first candidate is the value that will replace a (aa),
					// if it shares the same block and line and is eligible.
					// The second option is v, which has a as an input. Because aa is earlier in
					// the data flow, it is the better choice.
					if a.Pos.IsStmt() == src.PosIsStmt {
						if aa.Block == a.Block && aa.Pos.Line() == a.Pos.Line() && aa.Pos.IsStmt() != src.PosNotStmt {
							aa.Pos = aa.Pos.WithIsStmt()
						} else if v.Block == a.Block && v.Pos.Line() == a.Pos.Line() && v.Pos.IsStmt() != src.PosNotStmt {
							v.Pos = v.Pos.WithIsStmt()
						} else {
							// Record the lost line and look for a new home after all rewrites are complete.
							// TODO: it's possible (in FOR loops, in particular) for statement boundaries for the same
							// line to appear in more than one block, but only one block is stored, so if both end
							// up here, then one will be lost.
							pendingLines.set(a.Pos, int32(a.Block.ID))
						}
						a.Pos = a.Pos.WithNotStmt()
					}
					vchange = true
					for a.Uses == 0 {
						b := a.Args[0]
						a.reset(OpInvalid)
						a = b
					}
				}
				if vchange && debug > 1 {
					fmt.Printf("rewriting %s -> %s\n", v0.LongString(), v.LongString())
				}

				// apply rewrite function
				if rv(v) {
					vchange = true
					// If value changed to a poor choice for a statement boundary, move the boundary
					if v.Pos.IsStmt() == src.PosIsStmt {
						if k := nextGoodStatementIndex(v, j, b); k != j {
							v.Pos = v.Pos.WithNotStmt()
							b.Values[k].Pos = b.Values[k].Pos.WithIsStmt()
						}
					}
				}

				change = change || vchange
				if vchange && debug > 1 {
					fmt.Printf("rewriting %s -> %s\n", v0.LongString(), v.LongString())
				}
			}
		}
		if !change && !deadChange {
			break
		}
		iters++
		if (iters > itersLimit || debug >= 2) && change {
			// We've done a suspiciously large number of rewrites (or we're in debug mode).
			// As of Sep 2021, 90% of rewrites complete in 4 iterations or fewer
			// and the maximum value encountered during make.bash is 12.
			// Start checking for cycles. (This is too expensive to do routinely.)
			// Note: we avoid this path for deadChange-only iterations, to fix #51639.
			if states == nil {
				states = make(map[string]bool)
			}
			h := f.rewriteHash()
			if _, ok := states[h]; ok {
				// We've found a cycle.
				// To diagnose it, set debug to 2 and start again,
				// so that we'll print all rules applied until we complete another cycle.
				// If debug is already >= 2, we've already done that, so it's time to crash.
				if debug < 2 {
					debug = 2
					states = make(map[string]bool)
				} else {
					f.Fatalf("rewrite cycle detected")
				}
			}
			states[h] = true
		}
	}
	// remove clobbered values
	for _, b := range f.Blocks {
		j := 0
		for i, v := range b.Values {
			vl := v.Pos
			if v.Op == OpInvalid {
				if v.Pos.IsStmt() == src.PosIsStmt {
					pendingLines.set(vl, int32(b.ID))
				}
				f.freeValue(v)
				continue
			}
			if v.Pos.IsStmt() != src.PosNotStmt && !notStmtBoundary(v.Op) {
				if pl, ok := pendingLines.get(vl); ok && pl == int32(b.ID) {
					pendingLines.remove(vl)
					v.Pos = v.Pos.WithIsStmt()
				}
			}
			if i != j {
				b.Values[j] = v
			}
			j++
		}
		if pl, ok := pendingLines.get(b.Pos); ok && pl == int32(b.ID) {
			b.Pos = b.Pos.WithIsStmt()
			pendingLines.remove(b.Pos)
		}
		b.truncateValues(j)
	}
}

// Common functions called from rewriting rules
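
// is64BitFloat reports whether t is a 64-bit floating point type.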
func is64BitFloat(t *types.Type) bool {
	return t.Size() == 8 && t.IsFloat()
}
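
// is32BitFloat reports whether t is a 32-bit floating point type.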
func is32BitFloat(t *types.Type) bool {
	return t.Size() == 4 && t.IsFloat()
}
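
// is64BitInt reports whether t is a 64-bit integer type.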
func is64BitInt(t *types.Type) bool {
	return t.Size() == 8 && t.IsInteger()
}
func is32BitInt(t *types.Type) bool {
|
2017-04-28 00:19:49 +00:00
|
|
|
return t.Size() == 4 && t.IsInteger()
|
2015-04-15 15:51:25 -07:00
|
|
|
}
|
|
|
|
|
|
2017-04-28 14:12:28 -07:00
|
|
|
func is16BitInt(t *types.Type) bool {
|
2017-04-28 00:19:49 +00:00
|
|
|
return t.Size() == 2 && t.IsInteger()
|
2015-06-14 11:38:46 -07:00
|
|
|
}
|
|
|
|
|
|
2017-04-28 14:12:28 -07:00
|
|
|
func is8BitInt(t *types.Type) bool {
|
2017-04-28 00:19:49 +00:00
|
|
|
return t.Size() == 1 && t.IsInteger()
|
2015-06-14 11:38:46 -07:00
|
|
|
}
|
|
|
|
|
|
2017-04-28 14:12:28 -07:00
|
|
|
func isPtr(t *types.Type) bool {
|
2016-03-28 10:55:44 -07:00
|
|
|
return t.IsPtrShaped()
|
2015-03-23 17:02:11 -07:00
|
|
|
}
|
|
|
|
|
|
2025-04-01 18:43:38 +03:00
|
|
|
func copyCompatibleType(t1, t2 *types.Type) bool {
|
|
|
|
|
if t1.Size() != t2.Size() {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
if t1.IsInteger() {
|
|
|
|
|
return t2.IsInteger()
|
|
|
|
|
}
|
|
|
|
|
if isPtr(t1) {
|
|
|
|
|
return isPtr(t2)
|
|
|
|
|
}
|
|
|
|
|
return t1.Compare(t2) == types.CMPeq
|
|
|
|
|
}
|
|
|
|
|
|
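// Editor's illustration (not part of the original file): with hypothetical
// *types.Type values tI64 and tU64 (both 8-byte integers) and tF64 (an
// 8-byte float),
//
//	copyCompatibleType(tI64, tU64) // true: same size, both integer
//	copyCompatibleType(tI64, tF64) // false: an integer only matches another integer
//
// Types that are neither integers nor pointer-shaped fall back to the full
// t1.Compare(t2) == types.CMPeq check.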
2016-03-01 23:21:55 +00:00
|
|
|
// mergeSym merges two symbolic offsets. There is no real merging of
|
2015-08-23 21:14:25 -07:00
|
|
|
// offsets; we just pick the non-nil one.
|
2020-10-28 10:10:55 +01:00
|
|
|
func mergeSym(x, y Sym) Sym {
|
2015-06-19 21:02:28 -07:00
|
|
|
if x == nil {
|
|
|
|
|
return y
|
|
|
|
|
}
|
|
|
|
|
if y == nil {
|
|
|
|
|
return x
|
|
|
|
|
}
|
2020-10-28 10:10:55 +01:00
|
|
|
panic(fmt.Sprintf("mergeSym with two non-nil syms %v %v", x, y))
|
2015-06-19 21:02:28 -07:00
|
|
|
}
|
2020-04-19 10:45:04 +02:00
|
|
|
|
2020-10-28 10:10:55 +01:00
|
|
|
func canMergeSym(x, y Sym) bool {
|
2015-08-23 21:14:25 -07:00
|
|
|
return x == nil || y == nil
|
|
|
|
|
}
|
2020-04-19 10:45:04 +02:00
|
|
|
|
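// Editor's illustration (not part of the original file): rewrite rules first
// guard with canMergeSym and then combine with mergeSym, so the panic branch
// is unreachable in practice. With a hypothetical non-nil Sym s:
//
//	canMergeSym(nil, s) // true
//	mergeSym(nil, s)    // s
//	canMergeSym(s, s)   // false: two non-nil syms cannot be merged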
2018-10-26 10:52:59 -07:00
|
|
|
// canMergeLoadClobber reports whether the load can be merged into target without
|
2016-09-14 10:42:14 -04:00
|
|
|
// invalidating the schedule.
|
2017-03-18 11:16:30 -07:00
|
|
|
// It also checks that the other non-load argument x is something we
|
2018-10-26 10:52:59 -07:00
|
|
|
// are ok with clobbering.
|
|
|
|
|
func canMergeLoadClobber(target, load, x *Value) bool {
|
2017-03-18 11:16:30 -07:00
|
|
|
// The register containing x is going to get clobbered.
|
|
|
|
|
// Don't merge if we still need the value of x.
|
|
|
|
|
// We don't have liveness information here, but we can
|
|
|
|
|
// approximate x dying with:
|
|
|
|
|
// 1) target is x's only use.
|
|
|
|
|
// 2) target is not in a deeper loop than x.
|
2025-05-14 16:00:25 -07:00
|
|
|
switch {
|
|
|
|
|
case x.Uses == 2 && x.Op == OpPhi && len(x.Args) == 2 && (x.Args[0] == target || x.Args[1] == target) && target.Uses == 1:
|
|
|
|
|
// This is a simple detector to determine that x is probably
|
|
|
|
|
// not live after target. (It does not need to be perfect,
|
|
|
|
|
// regalloc will issue a reg-reg move to save it if we are wrong.)
|
|
|
|
|
// We have:
|
|
|
|
|
// x = Phi(?, target)
|
|
|
|
|
// target = Op(load, x)
|
|
|
|
|
// Because target has only one use as a Phi argument, we can schedule it
|
|
|
|
|
// very late. Hopefully, later than the other use of x. (The other use died
|
|
|
|
|
// between x and target, or exists on another branch entirely).
|
|
|
|
|
case x.Uses > 1:
|
2017-03-18 11:16:30 -07:00
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
loopnest := x.Block.Func.loopnest()
|
|
|
|
|
if loopnest.depth(target.Block.ID) > loopnest.depth(x.Block.ID) {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2018-10-26 10:52:59 -07:00
|
|
|
return canMergeLoad(target, load)
|
|
|
|
|
}
|
|
|
|
|
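// Illustrative rule shape (editor's sketch, not quoted from the rule files):
// canMergeLoadClobber is used as a rule condition when folding a load into a
// two-operand instruction that overwrites one of its inputs, roughly
//
//	(ADD x l:(Load ptr mem)) && canMergeLoadClobber(v, l, x) && clobber(l)
//	  => (ADDload x ptr mem)
//
// where the op names are placeholders for an architecture-specific pattern.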
|
|
|
|
|
// canMergeLoad reports whether the load can be merged into target without
|
|
|
|
|
// invalidating the schedule.
|
|
|
|
|
func canMergeLoad(target, load *Value) bool {
|
|
|
|
|
if target.Block.ID != load.Block.ID {
|
|
|
|
|
// If the load is in a different block do not merge it.
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// We can't merge the load into the target if the load
|
|
|
|
|
// has more than one use.
|
|
|
|
|
if load.Uses != 1 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2017-03-18 11:16:30 -07:00
|
|
|
|
2017-03-03 13:44:18 -08:00
|
|
|
mem := load.MemoryArg()
|
2016-09-14 10:42:14 -04:00
|
|
|
|
|
|
|
|
// We need the load's memory arg to still be alive at target. That
|
|
|
|
|
// can't be the case if one of target's args depends on a memory
|
|
|
|
|
// state that is a successor of load's memory arg.
|
|
|
|
|
//
|
|
|
|
|
// For example, it would be invalid to merge load into target in
|
|
|
|
|
// the following situation because newmem has killed oldmem
|
|
|
|
|
// before target is reached:
|
|
|
|
|
// load = read ... oldmem
|
|
|
|
|
// newmem = write ... oldmem
|
|
|
|
|
// arg0 = read ... newmem
|
|
|
|
|
// target = add arg0 load
|
|
|
|
|
//
|
|
|
|
|
// If the argument comes from a different block then we can exclude
|
|
|
|
|
// it immediately because it must dominate load (which is in the
|
|
|
|
|
// same block as target).
|
|
|
|
|
var args []*Value
|
|
|
|
|
for _, a := range target.Args {
|
|
|
|
|
if a != load && a.Block.ID == target.Block.ID {
|
|
|
|
|
args = append(args, a)
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// memPreds contains memory states known to be predecessors of load's
|
|
|
|
|
// memory state. It is lazily initialized.
|
|
|
|
|
var memPreds map[*Value]bool
|
|
|
|
|
for i := 0; len(args) > 0; i++ {
|
|
|
|
|
const limit = 100
|
|
|
|
|
if i >= limit {
|
|
|
|
|
// Give up if we have done a lot of iterations.
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
v := args[len(args)-1]
|
|
|
|
|
args = args[:len(args)-1]
|
|
|
|
|
if target.Block.ID != v.Block.ID {
|
|
|
|
|
// Since target and load are in the same block
|
|
|
|
|
// we can stop searching when we leave the block.
|
2018-09-18 01:22:59 +03:00
|
|
|
continue
|
2016-09-14 10:42:14 -04:00
|
|
|
}
|
|
|
|
|
if v.Op == OpPhi {
|
|
|
|
|
// A Phi implies we have reached the top of the block.
|
2017-06-06 15:25:29 -07:00
|
|
|
// The memory phi, if it exists, is always
|
|
|
|
|
// the first logical store in the block.
|
2018-09-18 01:22:59 +03:00
|
|
|
continue
|
2016-09-14 10:42:14 -04:00
|
|
|
}
|
|
|
|
|
if v.Type.IsTuple() && v.Type.FieldType(1).IsMemory() {
|
|
|
|
|
// We could handle this situation; however, it is likely
|
|
|
|
|
// to be very rare.
|
|
|
|
|
return false
|
|
|
|
|
}
|
2018-12-11 16:12:57 -08:00
|
|
|
if v.Op.SymEffect()&SymAddr != 0 {
|
|
|
|
|
// This case prevents an operation that calculates the
|
|
|
|
|
// address of a local variable from being forced to schedule
|
|
|
|
|
// before its corresponding VarDef.
|
|
|
|
|
// See issue 28445.
|
|
|
|
|
// v1 = LOAD ...
|
|
|
|
|
// v2 = VARDEF
|
|
|
|
|
// v3 = LEAQ
|
|
|
|
|
// v4 = CMPQ v1 v3
|
|
|
|
|
// We don't want to combine the CMPQ with the load, because
|
|
|
|
|
// that would force the CMPQ to schedule before the VARDEF, which
|
|
|
|
|
// in turn requires the LEAQ to schedule before the VARDEF.
|
|
|
|
|
return false
|
|
|
|
|
}
|
2016-09-14 10:42:14 -04:00
|
|
|
if v.Type.IsMemory() {
|
|
|
|
|
if memPreds == nil {
|
|
|
|
|
// Initialise a map containing memory states
|
|
|
|
|
// known to be predecessors of load's memory
|
|
|
|
|
// state.
|
|
|
|
|
memPreds = make(map[*Value]bool)
|
|
|
|
|
m := mem
|
|
|
|
|
const limit = 50
|
|
|
|
|
for i := 0; i < limit; i++ {
|
|
|
|
|
if m.Op == OpPhi {
|
2017-06-06 15:25:29 -07:00
|
|
|
// The memory phi, if it exists, is always
|
|
|
|
|
// the first logical store in the block.
|
2016-09-14 10:42:14 -04:00
|
|
|
break
|
|
|
|
|
}
|
|
|
|
|
if m.Block.ID != target.Block.ID {
|
|
|
|
|
break
|
|
|
|
|
}
|
|
|
|
|
if !m.Type.IsMemory() {
|
|
|
|
|
break
|
|
|
|
|
}
|
|
|
|
|
memPreds[m] = true
|
|
|
|
|
if len(m.Args) == 0 {
|
|
|
|
|
break
|
|
|
|
|
}
|
2017-03-03 13:44:18 -08:00
|
|
|
m = m.MemoryArg()
|
2016-09-14 10:42:14 -04:00
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// We can merge if v is a predecessor of mem.
|
|
|
|
|
//
|
|
|
|
|
// For example, we can merge load into target in the
|
|
|
|
|
// following scenario:
|
|
|
|
|
// x = read ... v
|
|
|
|
|
// mem = write ... v
|
|
|
|
|
// load = read ... mem
|
|
|
|
|
// target = add x load
|
|
|
|
|
if memPreds[v] {
|
2018-09-18 01:22:59 +03:00
|
|
|
continue
|
2016-09-14 10:42:14 -04:00
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
if len(v.Args) > 0 && v.Args[len(v.Args)-1] == mem {
|
|
|
|
|
// If v takes mem as an input then we know mem
|
|
|
|
|
// is valid at this point.
|
2018-09-18 01:22:59 +03:00
|
|
|
continue
|
2016-09-14 10:42:14 -04:00
|
|
|
}
|
|
|
|
|
for _, a := range v.Args {
|
|
|
|
|
if target.Block.ID == a.Block.ID {
|
|
|
|
|
args = append(args, a)
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
2017-03-18 11:16:30 -07:00
|
|
|
|
2016-09-14 10:42:14 -04:00
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2025-03-29 19:49:25 +01:00
|
|
|
// isSameCall reports whether aux describes a call to the function with the given name.
|
|
|
|
|
func isSameCall(aux Aux, name string) bool {
|
|
|
|
|
fn := aux.(*AuxCall).Fn
|
2020-08-12 23:47:57 -04:00
|
|
|
return fn != nil && fn.String() == name
|
2016-08-26 15:41:51 -04:00
|
|
|
}
|
|
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// canLoadUnaligned reports whether the architecture supports unaligned load operations.
|
2021-06-16 16:25:57 +00:00
|
|
|
func canLoadUnaligned(c *Config) bool {
|
|
|
|
|
return c.ctxt.Arch.Alignment == 1
|
|
|
|
|
}
|
|
|
|
|
|
2022-11-14 20:13:10 +08:00
|
|
|
// nlzX returns the number of leading zeros.
|
2020-04-20 03:26:53 +07:00
|
|
|
func nlz64(x int64) int { return bits.LeadingZeros64(uint64(x)) }
|
|
|
|
|
func nlz32(x int32) int { return bits.LeadingZeros32(uint32(x)) }
|
|
|
|
|
func nlz16(x int16) int { return bits.LeadingZeros16(uint16(x)) }
|
|
|
|
|
func nlz8(x int8) int { return bits.LeadingZeros8(uint8(x)) }
|
2016-02-11 20:43:15 -06:00
|
|
|
|
2020-04-10 21:38:49 -07:00
|
|
|
// ntzX returns the number of trailing zeros.
|
|
|
|
|
func ntz64(x int64) int { return bits.TrailingZeros64(uint64(x)) }
|
|
|
|
|
func ntz32(x int32) int { return bits.TrailingZeros32(uint32(x)) }
|
|
|
|
|
func ntz16(x int16) int { return bits.TrailingZeros16(uint16(x)) }
|
|
|
|
|
func ntz8(x int8) int { return bits.TrailingZeros8(uint8(x)) }
|
2016-02-11 20:43:15 -06:00
|
|
|
|
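// A few concrete values for the helpers above (editor's illustration):
//
//	nlz64(1) == 63 // only bit 0 is set
//	nlz8(1)  == 7
//	ntz64(8) == 3  // 8 == 1<<3
//	ntz32(0) == 32 // no set bits at all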
2025-08-26 22:12:29 +01:00
|
|
|
// oneBit reports whether x contains exactly one set bit.
|
|
|
|
|
func oneBit[T int8 | int16 | int32 | int64](x T) bool {
|
|
|
|
|
return x&(x-1) == 0 && x != 0
|
|
|
|
|
}
|
2017-08-09 05:01:26 +00:00
|
|
|
|
2016-02-11 20:43:15 -06:00
|
|
|
// nto returns the number of trailing ones.
|
|
|
|
|
func nto(x int64) int64 {
|
2020-04-22 00:52:19 +07:00
|
|
|
return int64(ntz64(^x))
|
2016-02-11 20:43:15 -06:00
|
|
|
}
|
|
|
|
|
|
2020-04-11 10:32:21 -07:00
|
|
|
// logX returns the logarithm of n base 2.
|
|
|
|
|
// n must be a positive power of 2 (isPowerOfTwo returns true).
|
2025-07-29 23:39:08 +07:00
|
|
|
func log8(n int8) int64 { return log8u(uint8(n)) }
|
|
|
|
|
func log16(n int16) int64 { return log16u(uint16(n)) }
|
|
|
|
|
func log32(n int32) int64 { return log32u(uint32(n)) }
|
|
|
|
|
func log64(n int64) int64 { return log64u(uint64(n)) }
|
2020-04-11 10:32:21 -07:00
|
|
|
|
2025-07-23 18:48:18 +07:00
|
|
|
// logXu returns the logarithm of n base 2.
|
|
|
|
|
// n must be a power of 2 (isUnsignedPowerOfTwo returns true).
|
|
|
|
|
func log8u(n uint8) int64 { return int64(bits.Len8(n)) - 1 }
|
|
|
|
|
func log16u(n uint16) int64 { return int64(bits.Len16(n)) - 1 }
|
|
|
|
|
func log32u(n uint32) int64 { return int64(bits.Len32(n)) - 1 }
|
|
|
|
|
func log64u(n uint64) int64 { return int64(bits.Len64(n)) - 1 }
|
|
|
|
|
|
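// Worked example for the log helpers above (editor's illustration):
// bits.Len64(8) == 4, so log64(8) == 3, matching 8 == 1<<3. Likewise:
//
//	log32(1)  == 0
//	log64(64) == 6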
2022-11-14 20:13:10 +08:00
|
|
|
// isPowerOfTwo reports whether n is a power of 2.
|
2024-09-19 10:06:55 -07:00
|
|
|
func isPowerOfTwo[T int8 | int16 | int32 | int64](n T) bool {
|
2020-04-11 10:32:21 -07:00
|
|
|
return n > 0 && n&(n-1) == 0
|
|
|
|
|
}
|
2015-07-25 12:53:58 -05:00
|
|
|
|
2025-07-23 18:48:18 +07:00
|
|
|
// isUnsignedPowerOfTwo reports whether n is an unsigned power of 2.
|
|
|
|
|
func isUnsignedPowerOfTwo[T uint8 | uint16 | uint32 | uint64](n T) bool {
|
|
|
|
|
return n != 0 && n&(n-1) == 0
|
|
|
|
|
}
|
|
|
|
|
|
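// The n&(n-1) trick above clears the lowest set bit, so the result is zero
// exactly when at most one bit is set (editor's illustration):
//
//	isPowerOfTwo(int64(8))                // true:  8&7 == 0
//	isPowerOfTwo(int64(6))                // false: 6&5 == 4
//	isPowerOfTwo(int64(-8))               // false: the signed version requires n > 0
//	isUnsignedPowerOfTwo(uint64(1) << 63) // true: accepted by the unsigned version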
2015-07-25 12:53:58 -05:00
|
|
|
// is32Bit reports whether n can be represented as a signed 32 bit integer.
|
|
|
|
|
func is32Bit(n int64) bool {
|
|
|
|
|
return n == int64(int32(n))
|
|
|
|
|
}
|
2015-09-03 18:24:22 -05:00
|
|
|
|
2016-07-06 13:32:52 -07:00
|
|
|
// is16Bit reports whether n can be represented as a signed 16 bit integer.
|
|
|
|
|
func is16Bit(n int64) bool {
|
|
|
|
|
return n == int64(int16(n))
|
|
|
|
|
}
|
|
|
|
|
|
2019-09-17 07:29:31 -07:00
|
|
|
// is8Bit reports whether n can be represented as a signed 8 bit integer.
|
|
|
|
|
func is8Bit(n int64) bool {
|
|
|
|
|
return n == int64(int8(n))
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// isU8Bit reports whether n can be represented as an unsigned 8 bit integer.
|
|
|
|
|
func isU8Bit(n int64) bool {
|
|
|
|
|
return n == int64(uint8(n))
|
|
|
|
|
}
|
|
|
|
|
|
2025-05-14 14:35:41 +08:00
|
|
|
// is12Bit reports whether n can be represented as a signed 12 bit integer.
|
|
|
|
|
func is12Bit(n int64) bool {
|
|
|
|
|
return -(1<<11) <= n && n < (1<<11)
|
|
|
|
|
}
|
|
|
|
|
|
2017-04-30 14:25:57 -04:00
|
|
|
// isU12Bit reports whether n can be represented as an unsigned 12 bit integer.
|
|
|
|
|
func isU12Bit(n int64) bool {
|
|
|
|
|
return 0 <= n && n < (1<<12)
|
|
|
|
|
}
|
|
|
|
|
|
2016-10-05 13:21:09 -07:00
|
|
|
// isU16Bit reports whether n can be represented as an unsigned 16 bit integer.
|
2016-09-26 10:06:10 -07:00
|
|
|
func isU16Bit(n int64) bool {
|
|
|
|
|
return n == int64(uint16(n))
|
|
|
|
|
}
|
|
|
|
|
|
2016-10-05 13:21:09 -07:00
|
|
|
// isU32Bit reports whether n can be represented as an unsigned 32 bit integer.
|
|
|
|
|
func isU32Bit(n int64) bool {
|
|
|
|
|
return n == int64(uint32(n))
|
|
|
|
|
}
|
|
|
|
|
|
2016-09-12 14:50:10 -04:00
|
|
|
// is20Bit reports whether n can be represented as a signed 20 bit integer.
|
|
|
|
|
func is20Bit(n int64) bool {
|
|
|
|
|
return -(1<<19) <= n && n < (1<<19)
|
|
|
|
|
}
|
|
|
|
|
|
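// Concrete boundary values for the range helpers above (editor's illustration):
//
//	is32Bit(1<<31 - 1) // true
//	is32Bit(1 << 31)   // false: does not fit in int32
//	isU12Bit(4095)     // true
//	isU12Bit(4096)     // false
//	is20Bit(-1 << 19)  // true: lowest representable signed 20-bit value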
2015-09-03 18:24:22 -05:00
|
|
|
// b2i translates a boolean value to 0 or 1 for assigning to auxInt.
|
|
|
|
|
func b2i(b bool) int64 {
|
|
|
|
|
if b {
|
|
|
|
|
return 1
|
|
|
|
|
}
|
|
|
|
|
return 0
|
|
|
|
|
}
|
2015-09-04 06:33:56 -05:00
|
|
|
|
2020-06-15 14:43:02 -07:00
|
|
|
// b2i32 translates a boolean value to 0 or 1.
|
|
|
|
|
func b2i32(b bool) int32 {
|
|
|
|
|
if b {
|
|
|
|
|
return 1
|
|
|
|
|
}
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
|
2024-11-11 12:21:14 -08:00
|
|
|
func canMulStrengthReduce(config *Config, x int64) bool {
|
|
|
|
|
_, ok := config.mulRecipes[x]
|
|
|
|
|
return ok
|
|
|
|
|
}
|
|
|
|
|
func canMulStrengthReduce32(config *Config, x int32) bool {
|
|
|
|
|
_, ok := config.mulRecipes[int64(x)]
|
|
|
|
|
return ok
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// mulStrengthReduce returns v*x evaluated at the location
|
|
|
|
|
// (block and source position) of m.
|
|
|
|
|
// canMulStrengthReduce must have returned true.
|
|
|
|
|
func mulStrengthReduce(m *Value, v *Value, x int64) *Value {
|
|
|
|
|
return v.Block.Func.Config.mulRecipes[x].build(m, v)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// mulStrengthReduce32 returns v*x evaluated at the location
|
|
|
|
|
// (block and source position) of m.
|
|
|
|
|
// canMulStrengthReduce32 must have returned true.
|
|
|
|
|
// The upper 32 bits of m might be set to junk.
|
|
|
|
|
func mulStrengthReduce32(m *Value, v *Value, x int32) *Value {
|
|
|
|
|
return v.Block.Func.Config.mulRecipes[int64(x)].build(m, v)
|
|
|
|
|
}
|
|
|
|
|
|
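// Illustrative rule shape (editor's sketch, not quoted from the rule files):
// a generic rule can replace a multiply by a constant c with a cheaper recipe
// only when the per-config table has one, roughly
//
//	(Mul64 x (Const64 [c])) && canMulStrengthReduce(config, c)
//	  => {mulStrengthReduce(v, x, c)}
//
// where v is the multiply being rewritten and the recipe is built at v's
// block and source position.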
2018-04-26 20:56:03 -07:00
|
|
|
// shiftIsBounded reports whether (left/right) shift Value v is known to be bounded.
|
|
|
|
|
// A shift is bounded if it is shifting by less than the width of the shifted value.
|
|
|
|
|
func shiftIsBounded(v *Value) bool {
|
2018-04-29 17:40:47 -07:00
|
|
|
return v.AuxInt != 0
|
2018-04-26 20:56:03 -07:00
|
|
|
}
|
|
|
|
|
|
2020-12-30 12:05:57 -05:00
|
|
|
// canonLessThan returns whether x is "ordered" less than y, for purposes of normalizing
|
|
|
|
|
// generated code as much as possible.
|
|
|
|
|
func canonLessThan(x, y *Value) bool {
|
|
|
|
|
if x.Op != y.Op {
|
|
|
|
|
return x.Op < y.Op
|
|
|
|
|
}
|
|
|
|
|
if !x.Pos.SameFileAndLine(y.Pos) {
|
|
|
|
|
return x.Pos.Before(y.Pos)
|
|
|
|
|
}
|
|
|
|
|
return x.ID < y.ID
|
|
|
|
|
}
|
|
|
|
|
|
2018-09-03 12:14:31 +01:00
|
|
|
// truncate64Fto32F converts a float64 value to a float32 preserving the bit pattern
|
|
|
|
|
// of the mantissa. It will panic if the truncation results in lost information.
|
|
|
|
|
func truncate64Fto32F(f float64) float32 {
|
|
|
|
|
if !isExactFloat32(f) {
|
|
|
|
|
panic("truncate64Fto32F: truncation is not exact")
|
|
|
|
|
}
|
|
|
|
|
if !math.IsNaN(f) {
|
|
|
|
|
return float32(f)
|
|
|
|
|
}
|
|
|
|
|
// NaN bit patterns aren't necessarily preserved across conversion
|
|
|
|
|
// instructions so we need to do the conversion manually.
|
|
|
|
|
b := math.Float64bits(f)
|
|
|
|
|
m := b & ((1 << 52) - 1) // mantissa (a.k.a. significand)
|
|
|
|
|
// | sign | exponent | mantissa |
|
|
|
|
|
r := uint32(((b >> 32) & (1 << 31)) | 0x7f800000 | (m >> (52 - 23)))
|
|
|
|
|
return math.Float32frombits(r)
|
|
|
|
|
}
|
|
|
|
|
|
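// Worked example for the NaN path above (editor's illustration): for the
// quiet NaN float64 bit pattern 0x7ff8000000000000,
//
//	m = 0x0008000000000000      // low 52 bits (mantissa)
//	r = 0x00000000 |            // sign bit (here 0)
//	    0x7f800000 |            // float32 exponent all ones
//	    0x00400000              // top 23 mantissa bits (m >> 29)
//	  = 0x7fc00000              // the standard float32 quiet NaN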
2020-01-23 22:18:30 -08:00
|
|
|
// DivisionNeedsFixUp reports whether the division needs fix-up code.
|
|
|
|
|
func DivisionNeedsFixUp(v *Value) bool {
|
2018-08-06 19:50:38 +10:00
|
|
|
return v.AuxInt == 0
|
|
|
|
|
}
|
|
|
|
|
|
2018-09-12 12:16:50 +01:00
|
|
|
// auxTo32F decodes a float32 from the AuxInt value provided.
|
|
|
|
|
func auxTo32F(i int64) float32 {
|
|
|
|
|
return truncate64Fto32F(math.Float64frombits(uint64(i)))
|
|
|
|
|
}
|
|
|
|
|
|
2020-04-10 21:38:49 -07:00
|
|
|
func auxIntToBool(i int64) bool {
|
|
|
|
|
if i == 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
func auxIntToInt8(i int64) int8 {
|
|
|
|
|
return int8(i)
|
|
|
|
|
}
|
|
|
|
|
func auxIntToInt16(i int64) int16 {
|
|
|
|
|
return int16(i)
|
|
|
|
|
}
|
|
|
|
|
func auxIntToInt32(i int64) int32 {
|
|
|
|
|
return int32(i)
|
|
|
|
|
}
|
|
|
|
|
func auxIntToInt64(i int64) int64 {
|
|
|
|
|
return i
|
|
|
|
|
}
|
2020-04-16 11:40:09 +01:00
|
|
|
func auxIntToUint8(i int64) uint8 {
|
|
|
|
|
return uint8(i)
|
|
|
|
|
}
|
2020-04-10 21:38:49 -07:00
|
|
|
func auxIntToFloat32(i int64) float32 {
|
|
|
|
|
return float32(math.Float64frombits(uint64(i)))
|
|
|
|
|
}
|
|
|
|
|
func auxIntToFloat64(i int64) float64 {
|
|
|
|
|
return math.Float64frombits(uint64(i))
|
|
|
|
|
}
|
|
|
|
|
func auxIntToValAndOff(i int64) ValAndOff {
|
|
|
|
|
return ValAndOff(i)
|
|
|
|
|
}
|
2020-05-14 17:01:11 +08:00
|
|
|
func auxIntToArm64BitField(i int64) arm64BitField {
|
|
|
|
|
return arm64BitField(i)
|
|
|
|
|
}
|
2025-08-21 17:41:13 +03:00
|
|
|
func auxIntToArm64ConditionalParams(i int64) arm64ConditionalParams {
|
|
|
|
|
var params arm64ConditionalParams
|
|
|
|
|
params.cond = Op(i & 0xffff)
|
|
|
|
|
i >>= 16
|
|
|
|
|
params.nzcv = uint8(i & 0x0f)
|
|
|
|
|
i >>= 4
|
|
|
|
|
params.constValue = uint8(i & 0x1f)
|
|
|
|
|
i >>= 5
|
|
|
|
|
params.ind = i == 1
|
|
|
|
|
return params
|
|
|
|
|
}
|
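// Packed AuxInt layout implied by the decoder above and by the matching
// encoder arm64ConditionalParamsToAuxInt further down (editor's summary):
//
//	bits  0-15  cond (an Op value)
//	bits 16-19  nzcv flags
//	bits 20-24  constValue
//	bit     25  ind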
2020-06-15 14:43:02 -07:00
|
|
|
func auxIntToFlagConstant(x int64) flagConstant {
|
|
|
|
|
return flagConstant(x)
|
|
|
|
|
}
|
2020-04-10 21:38:49 -07:00
|
|
|
|
2020-08-27 17:34:59 +08:00
|
|
|
func auxIntToOp(cc int64) Op {
|
|
|
|
|
return Op(cc)
|
|
|
|
|
}
|
|
|
|
|
|
2020-04-10 21:38:49 -07:00
|
|
|
func boolToAuxInt(b bool) int64 {
|
|
|
|
|
if b {
|
|
|
|
|
return 1
|
|
|
|
|
}
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
func int8ToAuxInt(i int8) int64 {
|
|
|
|
|
return int64(i)
|
|
|
|
|
}
|
|
|
|
|
func int16ToAuxInt(i int16) int64 {
|
|
|
|
|
return int64(i)
|
|
|
|
|
}
|
|
|
|
|
func int32ToAuxInt(i int32) int64 {
|
|
|
|
|
return int64(i)
|
|
|
|
|
}
|
|
|
|
|
func int64ToAuxInt(i int64) int64 {
|
|
|
|
|
return int64(i)
|
|
|
|
|
}
|
2020-04-16 11:40:09 +01:00
|
|
|
func uint8ToAuxInt(i uint8) int64 {
|
|
|
|
|
return int64(int8(i))
|
|
|
|
|
}
|
2020-04-10 21:38:49 -07:00
|
|
|
func float32ToAuxInt(f float32) int64 {
|
|
|
|
|
return int64(math.Float64bits(float64(f)))
|
|
|
|
|
}
|
|
|
|
|
func float64ToAuxInt(f float64) int64 {
|
|
|
|
|
return int64(math.Float64bits(f))
|
|
|
|
|
}
|
2020-04-11 19:51:09 -07:00
|
|
|
func valAndOffToAuxInt(v ValAndOff) int64 {
|
2020-04-10 21:38:49 -07:00
|
|
|
return int64(v)
|
|
|
|
|
}
|
2020-05-14 17:01:11 +08:00
|
|
|
func arm64BitFieldToAuxInt(v arm64BitField) int64 {
|
|
|
|
|
return int64(v)
|
|
|
|
|
}
|
2025-08-21 17:41:13 +03:00
|
|
|
func arm64ConditionalParamsToAuxInt(v arm64ConditionalParams) int64 {
|
|
|
|
|
if v.cond&^0xffff != 0 {
|
|
|
|
|
panic("condition value exceeds 16 bits")
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
var i int64
|
|
|
|
|
if v.ind {
|
|
|
|
|
i = 1 << 25
|
|
|
|
|
}
|
|
|
|
|
i |= int64(v.constValue) << 20
|
|
|
|
|
i |= int64(v.nzcv) << 16
|
|
|
|
|
i |= int64(v.cond)
|
|
|
|
|
return i
|
|
|
|
|
}
|
2020-06-15 14:43:02 -07:00
|
|
|
func flagConstantToAuxInt(x flagConstant) int64 {
|
|
|
|
|
return int64(x)
|
|
|
|
|
}
|
2020-04-10 21:38:49 -07:00
|
|
|
|
2020-08-27 17:34:59 +08:00
|
|
|
func opToAuxInt(o Op) int64 {
|
|
|
|
|
return int64(o)
|
|
|
|
|
}
|
|
|
|
|
|
2020-12-07 17:15:44 -08:00
|
|
|
// Aux is an interface to hold miscellaneous data in Blocks and Values.
|
|
|
|
|
type Aux interface {
|
|
|
|
|
CanBeAnSSAAux()
|
2020-04-11 19:51:09 -07:00
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
|
2023-01-10 08:36:00 +01:00
|
|
|
// for now only used to mark moves that need to avoid clobbering flags
|
|
|
|
|
type auxMark bool
|
|
|
|
|
|
|
|
|
|
func (auxMark) CanBeAnSSAAux() {}
|
|
|
|
|
|
|
|
|
|
var AuxMark auxMark
|
|
|
|
|
|
2020-12-07 17:15:44 -08:00
|
|
|
// stringAux wraps string values for use in Aux.
|
|
|
|
|
type stringAux string
|
|
|
|
|
|
|
|
|
|
func (stringAux) CanBeAnSSAAux() {}
|
|
|
|
|
|
|
|
|
|
func auxToString(i Aux) string {
|
|
|
|
|
return string(i.(stringAux))
|
|
|
|
|
}
|
|
|
|
|
func auxToSym(i Aux) Sym {
|
2020-04-11 19:51:09 -07:00
|
|
|
// TODO: kind of a hack - allows nil interface through
|
|
|
|
|
s, _ := i.(Sym)
|
|
|
|
|
return s
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func auxToType(i Aux) *types.Type {
|
2020-04-12 17:11:25 -07:00
|
|
|
return i.(*types.Type)
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func auxToCall(i Aux) *AuxCall {
|
2020-06-12 13:48:26 -04:00
|
|
|
return i.(*AuxCall)
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func auxToS390xCCMask(i Aux) s390x.CCMask {
|
2020-04-16 11:40:09 +01:00
|
|
|
return i.(s390x.CCMask)
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func auxToS390xRotateParams(i Aux) s390x.RotateParams {
|
2020-04-16 11:40:09 +01:00
|
|
|
return i.(s390x.RotateParams)
|
|
|
|
|
}
|
2020-04-11 19:51:09 -07:00
|
|
|
|
2020-12-07 17:15:44 -08:00
|
|
|
func StringToAux(s string) Aux {
|
|
|
|
|
return stringAux(s)
|
2020-04-11 19:51:09 -07:00
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func symToAux(s Sym) Aux {
|
2020-04-11 19:51:09 -07:00
|
|
|
return s
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func callToAux(s *AuxCall) Aux {
|
2020-06-12 13:48:26 -04:00
|
|
|
return s
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func typeToAux(t *types.Type) Aux {
|
2020-04-12 17:11:25 -07:00
|
|
|
return t
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func s390xCCMaskToAux(c s390x.CCMask) Aux {
|
2020-04-16 11:40:09 +01:00
|
|
|
return c
|
|
|
|
|
}
|
2020-12-07 17:15:44 -08:00
|
|
|
func s390xRotateParamsToAux(r s390x.RotateParams) Aux {
|
2020-04-16 11:40:09 +01:00
|
|
|
return r
|
|
|
|
|
}
|
2020-05-12 19:47:23 +08:00
|
|
|
|
2018-11-02 15:18:43 +00:00
|
|
|
// uaddOvf reports whether unsigned a+b would overflow.
|
2016-02-03 06:21:24 -05:00
|
|
|
func uaddOvf(a, b int64) bool {
|
|
|
|
|
return uint64(a)+uint64(b) < uint64(a)
|
|
|
|
|
}
|
|
|
|
|
|
2020-08-07 22:46:43 -04:00
|
|
|
func devirtLECall(v *Value, sym *obj.LSym) *Value {
|
|
|
|
|
v.Op = OpStaticLECall
|
2021-02-04 16:42:35 -05:00
|
|
|
auxcall := v.Aux.(*AuxCall)
|
|
|
|
|
auxcall.Fn = sym
|
2021-10-25 01:02:12 +07:00
|
|
|
// Remove first arg
|
|
|
|
|
v.Args[0].Uses--
|
|
|
|
|
copy(v.Args[0:], v.Args[1:])
|
|
|
|
|
v.Args[len(v.Args)-1] = nil // aid GC
|
|
|
|
|
v.Args = v.Args[:len(v.Args)-1]
|
2023-05-02 17:37:00 +00:00
|
|
|
if f := v.Block.Func; f.pass.debug > 0 {
|
|
|
|
|
f.Warnl(v.Pos, "de-virtualizing call")
|
|
|
|
|
}
|
2020-08-07 22:46:43 -04:00
|
|
|
return v
|
|
|
|
|
}
|
|
|
|
|
|
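// Illustrative before/after for devirtLECall (editor's sketch; op and aux
// names are schematic):
//
//	before: v = InterLECall {auxcall} codeptr a1 a2 mem
//	after:  v = StaticLECall {auxcall, Fn: sym} a1 a2 mem
//
// The code-pointer argument is dropped, its use count decremented, and the
// call now targets sym directly.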
2016-02-13 17:37:19 -06:00
|
|
|
// isSamePtr reports whether p1 and p2 point to the same address.
|
|
|
|
|
func isSamePtr(p1, p2 *Value) bool {
|
2016-02-24 12:58:47 -08:00
|
|
|
if p1 == p2 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
2016-03-04 18:55:09 -08:00
|
|
|
if p1.Op != p2.Op {
|
2025-04-01 18:43:38 +03:00
|
|
|
for p1.Op == OpOffPtr && p1.AuxInt == 0 {
|
|
|
|
|
p1 = p1.Args[0]
|
|
|
|
|
}
|
|
|
|
|
for p2.Op == OpOffPtr && p2.AuxInt == 0 {
|
|
|
|
|
p2 = p2.Args[0]
|
|
|
|
|
}
|
|
|
|
|
if p1 == p2 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
if p1.Op != p2.Op {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2016-03-04 18:55:09 -08:00
|
|
|
}
|
|
|
|
|
switch p1.Op {
|
|
|
|
|
case OpOffPtr:
|
|
|
|
|
return p1.AuxInt == p2.AuxInt && isSamePtr(p1.Args[0], p2.Args[0])
|
2018-07-03 11:34:38 -04:00
|
|
|
case OpAddr, OpLocalAddr:
|
2022-11-21 22:22:36 -08:00
|
|
|
return p1.Aux == p2.Aux
|
2016-03-04 18:55:09 -08:00
|
|
|
case OpAddPtr:
|
|
|
|
|
return p1.Args[1] == p2.Args[1] && isSamePtr(p1.Args[0], p2.Args[0])
|
|
|
|
|
}
|
|
|
|
|
return false
|
2016-02-13 17:37:19 -06:00
|
|
|
}
|
|
|
|
|
|
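// Concrete cases for isSamePtr, in rule-style notation (editor's illustration):
//
//	isSamePtr(p, p)                         // true
//	isSamePtr(OffPtr [0] p, p)              // true: zero offsets are peeled off
//	isSamePtr(OffPtr [8] p, OffPtr [8] p')  // true if p and p' are the same pointer
//	isSamePtr(LocalAddr {a}, LocalAddr {b}) // false for distinct variables a and b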
2018-10-28 12:01:11 -07:00
|
|
|
func isStackPtr(v *Value) bool {
|
|
|
|
|
for v.Op == OpOffPtr || v.Op == OpAddPtr {
|
|
|
|
|
v = v.Args[0]
|
|
|
|
|
}
|
|
|
|
|
return v.Op == OpSP || v.Op == OpLocalAddr
|
|
|
|
|
}
|
|
|
|
|
|
2018-04-11 22:47:24 +01:00
|
|
|
// disjoint reports whether the memory region specified by [p1:p1+n1)
|
|
|
|
|
// does not overlap with [p2:p2+n2).
|
|
|
|
|
// A return value of false does not imply the regions overlap.
|
|
|
|
|
func disjoint(p1 *Value, n1 int64, p2 *Value, n2 int64) bool {
|
|
|
|
|
if n1 == 0 || n2 == 0 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
if p1 == p2 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
baseAndOffset := func(ptr *Value) (base *Value, offset int64) {
|
|
|
|
|
base, offset = ptr, 0
|
2018-10-28 11:19:33 -07:00
|
|
|
for base.Op == OpOffPtr {
|
2018-04-11 22:47:24 +01:00
|
|
|
offset += base.AuxInt
|
|
|
|
|
base = base.Args[0]
|
|
|
|
|
}
|
2023-10-25 13:35:13 -07:00
|
|
|
if opcodeTable[base.Op].nilCheck {
|
|
|
|
|
base = base.Args[0]
|
|
|
|
|
}
|
2018-04-11 22:47:24 +01:00
|
|
|
return base, offset
|
|
|
|
|
}
|
2024-11-27 20:47:58 +03:00
|
|
|
|
|
|
|
|
// Run types-based analysis
|
|
|
|
|
if disjointTypes(p1.Type, p2.Type) {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2018-04-11 22:47:24 +01:00
|
|
|
p1, off1 := baseAndOffset(p1)
|
|
|
|
|
p2, off2 := baseAndOffset(p2)
|
|
|
|
|
if isSamePtr(p1, p2) {
|
|
|
|
|
return !overlap(off1, n1, off2, n2)
|
|
|
|
|
}
|
|
|
|
|
// p1 and p2 are not the same, so if they are both OpAddrs then
|
|
|
|
|
// they point to different variables.
|
|
|
|
|
// If one pointer is on the stack and the other is an argument
|
|
|
|
|
// then they can't overlap.
|
|
|
|
|
switch p1.Op {
|
2018-07-03 11:34:38 -04:00
|
|
|
case OpAddr, OpLocalAddr:
|
|
|
|
|
if p2.Op == OpAddr || p2.Op == OpLocalAddr || p2.Op == OpSP {
|
2018-04-11 22:47:24 +01:00
|
|
|
return true
|
|
|
|
|
}
|
2021-04-11 14:33:28 -04:00
|
|
|
return (p2.Op == OpArg || p2.Op == OpArgIntReg) && p1.Args[0].Op == OpSP
|
|
|
|
|
case OpArg, OpArgIntReg:
|
2018-07-03 11:34:38 -04:00
|
|
|
if p2.Op == OpSP || p2.Op == OpLocalAddr {
|
2018-04-11 22:47:24 +01:00
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
case OpSP:
|
2021-04-11 14:33:28 -04:00
|
|
|
return p2.Op == OpAddr || p2.Op == OpLocalAddr || p2.Op == OpArg || p2.Op == OpArgIntReg || p2.Op == OpSP
|
2018-04-11 22:47:24 +01:00
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
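// Worked example for the base+offset reasoning above (editor's illustration):
// for two OffPtrs off the same base b,
//
//	p1 = OffPtr [0] b, n1 = 8 // covers [0, 8)
//	p2 = OffPtr [8] b, n2 = 8 // covers [8, 16)
//
// isSamePtr on the bases holds and the ranges do not overlap, so disjoint
// reports true.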
2024-11-27 20:47:58 +03:00
|
|
|
// disjointTypes reports whether a memory region pointed to by a pointer of type
|
|
|
|
|
// t1 does not overlap with a memory region pointed to by a pointer of type t2 --
|
|
|
|
|
// based on type aliasing rules.
|
|
|
|
|
func disjointTypes(t1 *types.Type, t2 *types.Type) bool {
|
|
|
|
|
// Unsafe pointer can alias with anything.
|
|
|
|
|
if t1.IsUnsafePtr() || t2.IsUnsafePtr() {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
if !t1.IsPtr() || !t2.IsPtr() {
|
|
|
|
|
panic("disjointTypes: one of arguments is not a pointer")
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
t1 = t1.Elem()
|
|
|
|
|
t2 = t2.Elem()
|
|
|
|
|
|
|
|
|
|
// Not-in-heap types are not supported -- they are rare and unimportant; also,
|
|
|
|
|
// type.HasPointers check doesn't work for them correctly.
|
|
|
|
|
if t1.NotInHeap() || t2.NotInHeap() {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
isPtrShaped := func(t *types.Type) bool { return int(t.Size()) == types.PtrSize && t.HasPointers() }
|
|
|
|
|
|
|
|
|
|
// Pointers and non-pointers are disjoint (https://pkg.go.dev/unsafe#Pointer).
|
|
|
|
|
if (isPtrShaped(t1) && !t2.HasPointers()) ||
|
|
|
|
|
(isPtrShaped(t2) && !t1.HasPointers()) {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// moveSize returns the number of bytes an aligned MOV instruction moves.
|
2016-07-22 06:41:14 -04:00
|
|
|
func moveSize(align int64, c *Config) int64 {
|
|
|
|
|
switch {
|
2017-04-21 18:44:34 -07:00
|
|
|
case align%8 == 0 && c.PtrSize == 8:
|
2016-07-22 06:41:14 -04:00
|
|
|
return 8
|
|
|
|
|
case align%4 == 0:
|
|
|
|
|
return 4
|
|
|
|
|
case align%2 == 0:
|
|
|
|
|
return 2
|
|
|
|
|
}
|
|
|
|
|
return 1
|
|
|
|
|
}
|
|
|
|
|
|
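// Concrete values for moveSize on a 64-bit config c (editor's illustration):
//
//	moveSize(16, c) == 8 // 8-byte moves for 8-byte-aligned data
//	moveSize(6, c)  == 2 // alignment 6 is only 2-byte aligned
//	moveSize(3, c)  == 1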
2016-03-28 21:45:33 -07:00
|
|
|
// mergePoint finds a block among a's blocks which dominates b and is itself
|
|
|
|
|
// dominated by all of a's blocks. Returns nil if it can't find one.
|
|
|
|
|
// Might return nil even if one does exist.
|
|
|
|
|
func mergePoint(b *Block, a ...*Value) *Block {
|
|
|
|
|
// Walk backward from b looking for one of the a's blocks.
|
|
|
|
|
|
|
|
|
|
// Max distance
|
|
|
|
|
d := 100
|
|
|
|
|
|
|
|
|
|
for d > 0 {
|
|
|
|
|
for _, x := range a {
|
|
|
|
|
if b == x.Block {
|
|
|
|
|
goto found
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
if len(b.Preds) > 1 {
|
|
|
|
|
// Don't know which way to go back. Abort.
|
|
|
|
|
return nil
|
|
|
|
|
}
|
2016-04-28 16:52:47 -07:00
|
|
|
b = b.Preds[0].b
|
2016-03-28 21:45:33 -07:00
|
|
|
d--
|
|
|
|
|
}
|
|
|
|
|
return nil // too far away
|
|
|
|
|
found:
|
|
|
|
|
// At this point, r is the first value in a that we find by walking backwards.
|
|
|
|
|
// If we return anything, r will be it.
|
|
|
|
|
r := b
|
|
|
|
|
|
|
|
|
|
// Keep going, counting the other a's that we find. They must all dominate r.
|
|
|
|
|
na := 0
|
|
|
|
|
for d > 0 {
|
|
|
|
|
for _, x := range a {
|
|
|
|
|
if b == x.Block {
|
|
|
|
|
na++
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
if na == len(a) {
|
|
|
|
|
// Found all of a in a backwards walk. We can return r.
|
|
|
|
|
return r
|
|
|
|
|
}
|
|
|
|
|
if len(b.Preds) > 1 {
|
|
|
|
|
return nil
|
|
|
|
|
}
|
2016-04-28 16:52:47 -07:00
|
|
|
b = b.Preds[0].b
|
2016-03-28 21:45:33 -07:00
|
|
|
d--
|
|
|
|
|
|
|
|
|
|
}
|
|
|
|
|
return nil // too far away
|
|
|
|
|
}
|
2016-04-20 15:02:48 -07:00
|
|
|
|
2020-02-24 17:30:44 -08:00
|
|
|
// clobber invalidates values. Returns true.
|
2016-04-20 15:02:48 -07:00
|
|
|
// clobber is used by rewrite rules to:
|
2022-02-03 14:12:08 -05:00
|
|
|
//
|
|
|
|
|
// A) make sure the values are really dead and never used again.
|
|
|
|
|
// B) decrement use counts of the values' args.
|
2020-02-24 17:30:44 -08:00
|
|
|
func clobber(vv ...*Value) bool {
|
|
|
|
|
for _, v := range vv {
|
|
|
|
|
v.reset(OpInvalid)
|
|
|
|
|
// Note: leave v.Block intact. The Block field is used after clobber.
|
|
|
|
|
}
|
2016-04-20 15:02:48 -07:00
|
|
|
return true
|
|
|
|
|
}
|
2016-05-24 15:43:25 -07:00
|
|
|
|
2025-01-06 15:28:22 -08:00
|
|
|
// resetCopy resets v to be a copy of arg.
|
|
|
|
|
// Always returns true.
|
|
|
|
|
func resetCopy(v *Value, arg *Value) bool {
|
|
|
|
|
v.reset(OpCopy)
|
|
|
|
|
v.AddArg(arg)
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2018-02-15 14:49:03 -05:00
|
|
|
// clobberIfDead resets v when use count is 1. Returns true.
|
|
|
|
|
// clobberIfDead is used by rewrite rules to decrement
|
|
|
|
|
// use counts of v's args when v is dead and never used.
|
|
|
|
|
func clobberIfDead(v *Value) bool {
|
|
|
|
|
if v.Uses == 1 {
|
|
|
|
|
v.reset(OpInvalid)
|
|
|
|
|
}
|
|
|
|
|
// Note: leave v.Block intact. The Block field is used after clobberIfDead.
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2016-09-16 15:02:47 -07:00
|
|
|
// noteRule is an easy way to track if a rule is matched when writing
|
|
|
|
|
// new ones. Make the rule of interest also conditional on
|
2022-02-03 14:12:08 -05:00
|
|
|
//
|
|
|
|
|
// noteRule("note to self: rule of interest matched")
|
|
|
|
|
//
|
2016-09-16 15:02:47 -07:00
|
|
|
// and that message will print when the rule matches.
|
|
|
|
|
func noteRule(s string) bool {
|
2016-10-25 05:45:52 -07:00
|
|
|
fmt.Println(s)
|
2016-09-16 15:02:47 -07:00
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2019-03-11 19:22:49 -07:00
|
|
|
// countRule increments Func.ruleMatches[key].
|
|
|
|
|
// If Func.ruleMatches is non-nil at the end
|
|
|
|
|
// of compilation, it will be printed to stdout.
|
|
|
|
|
// This is intended to make it easier to find which functions
|
|
|
|
|
// contain lots of rule matches when developing new rules.
|
|
|
|
|
func countRule(v *Value, key string) bool {
|
|
|
|
|
f := v.Block.Func
|
|
|
|
|
if f.ruleMatches == nil {
|
|
|
|
|
f.ruleMatches = make(map[string]int)
|
|
|
|
|
}
|
|
|
|
|
f.ruleMatches[key]++
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2018-05-28 14:47:35 -07:00
|
|
|
// warnRule generates compiler debug output with string s when
|
|
|
|
|
// v is not in autogenerated code, cond is true and the rule has fired.
|
2016-09-28 10:20:24 -04:00
|
|
|
func warnRule(cond bool, v *Value, s string) bool {
|
2018-05-28 14:47:35 -07:00
|
|
|
if pos := v.Pos; pos.Line() > 1 && cond {
|
|
|
|
|
v.Block.Func.Warnl(pos, s)
|
2016-09-28 10:20:24 -04:00
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// for a pseudo-op like (LessThan x), extract x.
|
2017-08-13 22:36:47 +00:00
|
|
|
func flagArg(v *Value) *Value {
|
|
|
|
|
if len(v.Args) != 1 || !v.Args[0].Type.IsFlags() {
|
|
|
|
|
return nil
|
|
|
|
|
}
|
|
|
|
|
return v.Args[0]
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// arm64Negate finds the complement to an ARM64 condition code,
|
cmd/compile: fix wrong complement for arm64 floating-point comparisons
Consider the following example,
func test(a, b float64, x uint64) uint64 {
if a < b {
x = 0
}
return x
}
func main() {
fmt.Println(test(1, math.NaN(), 123))
}
The output is 0, but the expectation is 123.
This is because the rewrite rule
(CSEL [cc] (MOVDconst [0]) y flag) => (CSEL0 [arm64Negate(cc)] y flag)
converts
FCMP NaN, 1
CSEL MI, 0, 123, R0 // if 1 < NaN then R0 = 0 else R0 = 123
to
FCMP NaN, 1
CSEL GE, 123, 0, R0 // if 1 >= NaN then R0 = 123 else R0 = 0
But both 1 < NaN and 1 >= NaN are false. So the output is 0, not 123.
The root cause is arm64Negate not handle negation of floating comparison
correctly. According to the ARM manual, the meaning of MI, GE, and PL
are
MI: Less than
GE: Greater than or equal to
PL: Greater than, equal to, or unordered
Because NaN cannot be compared with other numbers, the result of such
comparison is unordered. So when NaN is involved, unlike integer, the
result of !(a < b) is not a >= b, it is a >= b || a is NaN || b is NaN.
This is exactly what PL means. We add NotLessThanF to represent PL. Then
the negation of LessThanF is NotLessThanF rather than GreaterEqualF. The
same reason for the other floating comparison operations.
Fixes #43619
Change-Id: Ia511b0027ad067436bace9fbfd261dbeaae01bcd
Reviewed-on: https://go-review.googlesource.com/c/go/+/283572
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Keith Randall <khr@golang.org>
2021-01-08 10:20:34 +08:00
|
|
|
// for example !Equal -> NotEqual or !LessThan -> GreaterEqual
|
2017-08-13 22:36:47 +00:00
|
|
|
//
|
2021-01-08 10:20:34 +08:00
|
|
|
// For floating point, it's more subtle because NaN is unordered. We do
|
|
|
|
|
// !LessThanF -> NotLessThanF, the latter takes care of NaNs.
|
2017-08-13 22:36:47 +00:00
|
|
|
func arm64Negate(op Op) Op {
|
|
|
|
|
switch op {
|
|
|
|
|
case OpARM64LessThan:
|
|
|
|
|
return OpARM64GreaterEqual
|
|
|
|
|
case OpARM64LessThanU:
|
|
|
|
|
return OpARM64GreaterEqualU
|
|
|
|
|
case OpARM64GreaterThan:
|
|
|
|
|
return OpARM64LessEqual
|
|
|
|
|
case OpARM64GreaterThanU:
|
|
|
|
|
return OpARM64LessEqualU
|
|
|
|
|
case OpARM64LessEqual:
|
|
|
|
|
return OpARM64GreaterThan
|
|
|
|
|
case OpARM64LessEqualU:
|
|
|
|
|
return OpARM64GreaterThanU
|
|
|
|
|
case OpARM64GreaterEqual:
|
|
|
|
|
return OpARM64LessThan
|
|
|
|
|
case OpARM64GreaterEqualU:
|
|
|
|
|
return OpARM64LessThanU
|
|
|
|
|
case OpARM64Equal:
|
|
|
|
|
return OpARM64NotEqual
|
|
|
|
|
case OpARM64NotEqual:
|
|
|
|
|
return OpARM64Equal
|
2019-03-11 03:51:06 +00:00
|
|
|
case OpARM64LessThanF:
|
2021-01-08 10:20:34 +08:00
|
|
|
return OpARM64NotLessThanF
|
|
|
|
|
case OpARM64NotLessThanF:
|
|
|
|
|
return OpARM64LessThanF
|
2019-03-11 03:51:06 +00:00
|
|
|
case OpARM64LessEqualF:
|
2021-01-08 10:20:34 +08:00
|
|
|
return OpARM64NotLessEqualF
|
|
|
|
|
case OpARM64NotLessEqualF:
|
|
|
|
|
return OpARM64LessEqualF
|
|
|
|
|
case OpARM64GreaterThanF:
|
|
|
|
|
return OpARM64NotGreaterThanF
|
|
|
|
|
case OpARM64NotGreaterThanF:
|
2019-03-11 03:51:06 +00:00
|
|
|
return OpARM64GreaterThanF
|
|
|
|
|
case OpARM64GreaterEqualF:
|
2021-01-08 10:20:34 +08:00
|
|
|
return OpARM64NotGreaterEqualF
|
|
|
|
|
case OpARM64NotGreaterEqualF:
|
|
|
|
|
return OpARM64GreaterEqualF
|
2017-08-13 22:36:47 +00:00
|
|
|
default:
|
|
|
|
|
panic("unreachable")
|
|
|
|
|
}
|
|
|
|
|
}
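To see why the floating-point cases above return the unordered-aware "Not" variants instead of the plain complements, consider how NaN behaves at the Go level. The sketch below is illustrative only (the function name is invented and it is not part of the compiler); it uses only the math and fmt packages this file already imports.
// negateDemo shows that !(a < b) is not the same as a >= b once NaN is
// involved: both comparisons are false for an unordered pair, so the
// correct complement of LessThanF is NotLessThanF (ARM64 condition PL),
// which is also true in the unordered case.
func negateDemo() {
	a, b := 1.0, math.NaN()
	fmt.Println(a < b)    // false (unordered)
	fmt.Println(a >= b)   // false too, so GE is not the complement of LT
	fmt.Println(!(a < b)) // true, matching NotLessThanF / PL
}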
|
|
|
|
|
|
|
|
|
|
// arm64Invert evaluates (InvertFlags op), which
|
|
|
|
|
// is the same as altering the condition codes such
|
|
|
|
|
// that the same result would be produced if the arguments
|
|
|
|
|
// to the flag-generating instruction were reversed, e.g.
|
|
|
|
|
// (InvertFlags (CMP x y)) -> (CMP y x)
|
|
|
|
|
func arm64Invert(op Op) Op {
|
|
|
|
|
switch op {
|
|
|
|
|
case OpARM64LessThan:
|
|
|
|
|
return OpARM64GreaterThan
|
|
|
|
|
case OpARM64LessThanU:
|
|
|
|
|
return OpARM64GreaterThanU
|
|
|
|
|
case OpARM64GreaterThan:
|
|
|
|
|
return OpARM64LessThan
|
|
|
|
|
case OpARM64GreaterThanU:
|
|
|
|
|
return OpARM64LessThanU
|
|
|
|
|
case OpARM64LessEqual:
|
|
|
|
|
return OpARM64GreaterEqual
|
|
|
|
|
case OpARM64LessEqualU:
|
|
|
|
|
return OpARM64GreaterEqualU
|
|
|
|
|
case OpARM64GreaterEqual:
|
|
|
|
|
return OpARM64LessEqual
|
|
|
|
|
case OpARM64GreaterEqualU:
|
|
|
|
|
return OpARM64LessEqualU
|
|
|
|
|
case OpARM64Equal, OpARM64NotEqual:
|
|
|
|
|
return op
|
2019-03-11 03:51:06 +00:00
|
|
|
case OpARM64LessThanF:
|
|
|
|
|
return OpARM64GreaterThanF
|
|
|
|
|
case OpARM64GreaterThanF:
|
|
|
|
|
return OpARM64LessThanF
|
|
|
|
|
case OpARM64LessEqualF:
|
|
|
|
|
return OpARM64GreaterEqualF
|
|
|
|
|
case OpARM64GreaterEqualF:
|
|
|
|
|
return OpARM64LessEqualF
|
2021-01-08 10:20:34 +08:00
|
|
|
case OpARM64NotLessThanF:
|
|
|
|
|
return OpARM64NotGreaterThanF
|
|
|
|
|
case OpARM64NotGreaterThanF:
|
|
|
|
|
return OpARM64NotLessThanF
|
|
|
|
|
case OpARM64NotLessEqualF:
|
|
|
|
|
return OpARM64NotGreaterEqualF
|
|
|
|
|
case OpARM64NotGreaterEqualF:
|
|
|
|
|
return OpARM64NotLessEqualF
|
2017-08-13 22:36:47 +00:00
|
|
|
default:
|
|
|
|
|
panic("unreachable")
|
|
|
|
|
}
|
|
|
|
|
}
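A note on negate versus invert, illustrated with a hypothetical sketch (not part of the compiler): arm64Negate flips the outcome for the same operand order, while arm64Invert keeps the outcome but assumes the comparison operands have been swapped.
// invertDemo: for integers, x < y negates to x >= y (same operands,
// opposite result) and inverts to y > x (swapped operands, same result).
func invertDemo(x, y int64) {
	fmt.Println(x < y, y > x)     // always equal: LessThan inverts to GreaterThan
	fmt.Println(x < y, !(x >= y)) // always equal: LessThan negates to GreaterEqual
}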
|
|
|
|
|
|
|
|
|
|
// evaluate an ARM64 op against a flags value
|
|
|
|
|
// that is potentially constant; return 1 for true,
|
|
|
|
|
// -1 for false, and 0 for not constant.
|
2020-05-12 19:47:23 +08:00
|
|
|
func ccARM64Eval(op Op, flags *Value) int {
|
2017-08-13 22:36:47 +00:00
|
|
|
fop := flags.Op
|
2020-06-15 22:52:56 -07:00
|
|
|
if fop == OpARM64InvertFlags {
|
2017-08-13 22:36:47 +00:00
|
|
|
return -ccARM64Eval(op, flags.Args[0])
|
2020-06-15 22:52:56 -07:00
|
|
|
}
|
|
|
|
|
if fop != OpARM64FlagConstant {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
fc := flagConstant(flags.AuxInt)
|
|
|
|
|
b2i := func(b bool) int {
|
|
|
|
|
if b {
|
2017-08-13 22:36:47 +00:00
|
|
|
return 1
|
|
|
|
|
}
|
2020-06-15 22:52:56 -07:00
|
|
|
return -1
|
2017-08-13 22:36:47 +00:00
|
|
|
}
|
2020-06-15 22:52:56 -07:00
|
|
|
switch op {
|
|
|
|
|
case OpARM64Equal:
|
|
|
|
|
return b2i(fc.eq())
|
|
|
|
|
case OpARM64NotEqual:
|
|
|
|
|
return b2i(fc.ne())
|
|
|
|
|
case OpARM64LessThan:
|
|
|
|
|
return b2i(fc.lt())
|
|
|
|
|
case OpARM64LessThanU:
|
|
|
|
|
return b2i(fc.ult())
|
|
|
|
|
case OpARM64GreaterThan:
|
|
|
|
|
return b2i(fc.gt())
|
|
|
|
|
case OpARM64GreaterThanU:
|
|
|
|
|
return b2i(fc.ugt())
|
|
|
|
|
case OpARM64LessEqual:
|
|
|
|
|
return b2i(fc.le())
|
|
|
|
|
case OpARM64LessEqualU:
|
|
|
|
|
return b2i(fc.ule())
|
|
|
|
|
case OpARM64GreaterEqual:
|
|
|
|
|
return b2i(fc.ge())
|
|
|
|
|
case OpARM64GreaterEqualU:
|
|
|
|
|
return b2i(fc.uge())
|
|
|
|
|
}
|
|
|
|
|
return 0
|
2017-08-13 22:36:47 +00:00
|
|
|
}
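As a hedged illustration of how the tri-state result above might be consumed (the helper below is hypothetical and not used by the rewrite rules):
// ccARM64EvalDemo interprets ccARM64Eval's result: 1 means the condition
// is statically true, -1 statically false, 0 unknown because the flags
// value is not a constant.
func ccARM64EvalDemo(op Op, flags *Value) string {
	switch ccARM64Eval(op, flags) {
	case 1:
		return "always true"
	case -1:
		return "always false"
	default:
		return "not constant"
	}
}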
|
|
|
|
|
|
2016-05-24 15:43:25 -07:00
|
|
|
// logRule logs the use of the rule s. This will only be enabled if
|
2022-10-19 21:24:52 -07:00
|
|
|
// rewrite rules were generated with the -log option, see _gen/rulegen.go.
|
2016-05-24 15:43:25 -07:00
|
|
|
func logRule(s string) {
|
|
|
|
|
if ruleFile == nil {
|
|
|
|
|
// Open a log file to write log to. We open in append
|
|
|
|
|
// mode because all.bash runs the compiler lots of times,
|
|
|
|
|
// and we want the concatenation of all of those logs.
|
|
|
|
|
// This means, of course, that users need to rm the old log
|
|
|
|
|
// to get fresh data.
|
|
|
|
|
// TODO: all.bash runs compilers in parallel. Need to synchronize logging somehow?
|
|
|
|
|
w, err := os.OpenFile(filepath.Join(os.Getenv("GOROOT"), "src", "rulelog"),
|
|
|
|
|
os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
|
|
|
|
|
if err != nil {
|
|
|
|
|
panic(err)
|
|
|
|
|
}
|
|
|
|
|
ruleFile = w
|
|
|
|
|
}
|
2019-05-10 16:31:56 -07:00
|
|
|
_, err := fmt.Fprintln(ruleFile, s)
|
2016-05-24 15:43:25 -07:00
|
|
|
if err != nil {
|
|
|
|
|
panic(err)
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2017-04-22 18:59:11 -07:00
|
|
|
var ruleFile io.Writer
|
2016-12-08 16:17:20 -08:00
|
|
|
|
2017-02-03 16:18:01 -05:00
|
|
|
func isConstZero(v *Value) bool {
|
|
|
|
|
switch v.Op {
|
|
|
|
|
case OpConstNil:
|
|
|
|
|
return true
|
|
|
|
|
case OpConst64, OpConst32, OpConst16, OpConst8, OpConstBool, OpConst32F, OpConst64F:
|
|
|
|
|
return v.AuxInt == 0
|
2024-08-13 09:01:05 -07:00
|
|
|
case OpStringMake, OpIMake, OpComplexMake:
|
|
|
|
|
return isConstZero(v.Args[0]) && isConstZero(v.Args[1])
|
|
|
|
|
case OpSliceMake:
|
|
|
|
|
return isConstZero(v.Args[0]) && isConstZero(v.Args[1]) && isConstZero(v.Args[2])
|
|
|
|
|
case OpStringPtr, OpStringLen, OpSlicePtr, OpSliceLen, OpSliceCap, OpITab, OpIData, OpComplexReal, OpComplexImag:
|
|
|
|
|
return isConstZero(v.Args[0])
|
2017-02-03 16:18:01 -05:00
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
2017-04-03 10:17:48 -07:00
|
|
|
|
|
|
|
|
// reciprocalExact64 reports whether 1/c is exactly representable.
|
|
|
|
|
func reciprocalExact64(c float64) bool {
|
|
|
|
|
b := math.Float64bits(c)
|
|
|
|
|
man := b & (1<<52 - 1)
|
|
|
|
|
if man != 0 {
|
|
|
|
|
return false // not a power of 2, denormal, or NaN
|
|
|
|
|
}
|
|
|
|
|
exp := b >> 52 & (1<<11 - 1)
|
|
|
|
|
// exponent bias is 0x3ff. So taking the reciprocal of a number
|
|
|
|
|
// changes the exponent to 0x7fe-exp.
|
|
|
|
|
switch exp {
|
|
|
|
|
case 0:
|
|
|
|
|
return false // ±0
|
|
|
|
|
case 0x7ff:
|
|
|
|
|
return false // ±inf
|
|
|
|
|
case 0x7fe:
|
|
|
|
|
return false // exponent is not representable
|
|
|
|
|
default:
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// reciprocalExact32 reports whether 1/c is exactly representable.
|
|
|
|
|
func reciprocalExact32(c float32) bool {
|
|
|
|
|
b := math.Float32bits(c)
|
|
|
|
|
man := b & (1<<23 - 1)
|
|
|
|
|
if man != 0 {
|
|
|
|
|
return false // not a power of 2, denormal, or NaN
|
|
|
|
|
}
|
|
|
|
|
exp := b >> 23 & (1<<8 - 1)
|
|
|
|
|
// exponent bias is 0x7f. So taking the reciprocal of a number
|
|
|
|
|
// changes the exponent to 0xfe-exp.
|
|
|
|
|
switch exp {
|
|
|
|
|
case 0:
|
|
|
|
|
return false // ±0
|
|
|
|
|
case 0xff:
|
|
|
|
|
return false // ±inf
|
|
|
|
|
case 0xfe:
|
|
|
|
|
return false // exponent is not representable
|
|
|
|
|
default:
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
}
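A small sketch of what these predicates accept, illustrative only (the function name is invented): the x/c => x*(1/c) strength reduction is allowed only when c is a power of two whose reciprocal's exponent is still representable.
// reciprocalExactDemo: only in-range powers of two pass the check.
func reciprocalExactDemo() {
	fmt.Println(reciprocalExact64(4.0)) // true: 1/4 = 0.25 is exact
	fmt.Println(reciprocalExact64(3.0)) // false: 1/3 is not exactly representable
	fmt.Println(reciprocalExact64(0.0)) // false: ±0 is rejected
}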
|
2017-04-25 10:53:10 +00:00
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// check if an immediate can be directly encoded into an ARM instruction.
|
2017-04-25 10:53:10 +00:00
|
|
|
func isARMImmRot(v uint32) bool {
|
|
|
|
|
for i := 0; i < 16; i++ {
|
|
|
|
|
if v&^0xff == 0 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
v = v<<2 | v>>30
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return false
|
|
|
|
|
}
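An ARM data-processing immediate is an 8-bit value rotated by an even amount, which is what the 16 even rotations above probe for. A brief illustrative sketch (hypothetical name, not part of the compiler):
// isARMImmRotDemo: a constant is encodable iff some even rotation places
// all of its set bits inside a single byte.
func isARMImmRotDemo() {
	fmt.Println(isARMImmRot(0xff))       // true: already fits in 8 bits
	fmt.Println(isARMImmRot(0xff000000)) // true: 0xff rotated into the top byte
	fmt.Println(isARMImmRot(0x101))      // false: set bits 8 apart never fit one byte
}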
|
cmd/compile: add generic rules to eliminate some unnecessary stores
Eliminates stores of values that have just been loaded from the same
location. Handles the common case where there are up to 3 intermediate
stores to non-overlapping struct fields.
For example the loads and stores of x.a, x.b and x.d in the following
function are now removed:
type T struct {
a, b, c, d int
}
func f(x *T) {
y := *x
y.c += 8
*x = y
}
Before this CL (s390x):
TEXT "".f(SB)
MOVD "".x(R15), R5
MOVD (R5), R1
MOVD 8(R5), R2
MOVD 16(R5), R0
MOVD 24(R5), R4
ADD $8, R0, R3
STMG R1, R4, (R5)
RET
After this CL (s390x):
TEXT "".f(SB)
MOVD "".x(R15), R1
MOVD 16(R1), R0
ADD $8, R0, R0
MOVD R0, 16(R1)
RET
In total these rules are triggered ~5091 times during all.bash,
which is broken down as:
Intermediate stores | Triggered
--------------------+----------
0 | 1434
1 | 2508
2 | 888
3 | 261
--------------------+----------
Change-Id: Ia4721ae40146aceec1fdd3e65b0e9283770bfba5
Reviewed-on: https://go-review.googlesource.com/38793
Run-TryBot: Michael Munday <munday@ca.ibm.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2017-03-29 16:37:12 -04:00
|
|
|
|
|
|
|
|
// overlap reports whether the ranges described by the given offset and
|
|
|
|
|
// size pairs overlap.
|
|
|
|
|
func overlap(offset1, size1, offset2, size2 int64) bool {
|
|
|
|
|
if offset1 >= offset2 && offset2+size2 > offset1 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
if offset2 >= offset1 && offset1+size1 > offset2 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
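For illustration, the overlap test reports true exactly when one range starts inside the other; the sketch below is hypothetical and only demonstrates the semantics.
// overlapDemo: [0,8) and [8,16) are disjoint, [0,8) and [4,12) are not.
func overlapDemo() {
	fmt.Println(overlap(0, 8, 8, 8)) // false: ranges only touch at offset 8
	fmt.Println(overlap(0, 8, 4, 8)) // true: both contain [4,8)
}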
|
2017-08-23 11:08:56 -05:00
|
|
|
|
|
|
|
|
// check if the value zeroes out the upper 32 bits of a 64-bit register.
|
|
|
|
|
// depth limits the recursion depth. In AMD64.rules 3 is used as the limit,
|
|
|
|
|
// because it catches the same number of cases as 4.
|
|
|
|
|
func zeroUpper32Bits(x *Value, depth int) bool {
|
2024-06-27 20:45:22 -07:00
|
|
|
if x.Type.IsSigned() && x.Type.Size() < 8 {
|
|
|
|
|
// If the value is signed, it might get re-sign-extended
|
|
|
|
|
// during spill and restore. See issue 68227.
|
|
|
|
|
return false
|
|
|
|
|
}
|
2017-08-23 11:08:56 -05:00
|
|
|
switch x.Op {
|
|
|
|
|
case OpAMD64MOVLconst, OpAMD64MOVLload, OpAMD64MOVLQZX, OpAMD64MOVLloadidx1,
|
|
|
|
|
OpAMD64MOVWload, OpAMD64MOVWloadidx1, OpAMD64MOVBload, OpAMD64MOVBloadidx1,
|
2018-05-08 09:11:00 -07:00
|
|
|
OpAMD64MOVLloadidx4, OpAMD64ADDLload, OpAMD64SUBLload, OpAMD64ANDLload,
|
|
|
|
|
OpAMD64ORLload, OpAMD64XORLload, OpAMD64CVTTSD2SL,
|
2017-08-23 11:08:56 -05:00
|
|
|
OpAMD64ADDL, OpAMD64ADDLconst, OpAMD64SUBL, OpAMD64SUBLconst,
|
|
|
|
|
OpAMD64ANDL, OpAMD64ANDLconst, OpAMD64ORL, OpAMD64ORLconst,
|
2020-03-29 14:21:12 +02:00
|
|
|
OpAMD64XORL, OpAMD64XORLconst, OpAMD64NEGL, OpAMD64NOTL,
|
|
|
|
|
OpAMD64SHRL, OpAMD64SHRLconst, OpAMD64SARL, OpAMD64SARLconst,
|
|
|
|
|
OpAMD64SHLL, OpAMD64SHLLconst:
|
2017-08-23 11:08:56 -05:00
|
|
|
return true
|
2022-08-18 01:31:57 +00:00
|
|
|
case OpARM64REV16W, OpARM64REVW, OpARM64RBITW, OpARM64CLZW, OpARM64EXTRWconst,
|
|
|
|
|
OpARM64MULW, OpARM64MNEGW, OpARM64UDIVW, OpARM64DIVW, OpARM64UMODW,
|
|
|
|
|
OpARM64MADDW, OpARM64MSUBW, OpARM64RORW, OpARM64RORWconst:
|
|
|
|
|
return true
|
cmd/compile: fix sign/zero-extension removal
When an opcode generates a known high bit state (typically, a sub-word
operation that zeros the high bits), we can remove any subsequent
extension operation that would be a no-op.
x = (OP ...)
y = (ZeroExt32to64 x)
If OP zeros the high 32 bits, then we can replace y with x, as the
zero extension doesn't do anything.
However, x in this situation normally has a sub-word-sized type. The
semantics of values in registers is typically that the high bits
beyond the value's type size are junk. So although the opcode
generating x *currently* zeros the high bits, after x is rewritten to
another opcode it may not - rewrites of sub-word-sized values can
trash the high bits.
To fix, move the extension-removing rules to late lower. That ensures
that their arguments won't be rewritten to change their high bits.
I am also worried about spilling and restoring. Spilling and restoring
doesn't preserve the high bits, but instead sets them to a known value
(often 0, but in some cases it could be sign-extended). I am unable
to come up with a case that would cause a problem here, so leaving for
another time.
Fixes #66066
Change-Id: I3b5c091b3b3278ccbb7f11beda8b56f4b6d3fde7
Reviewed-on: https://go-review.googlesource.com/c/go/+/568616
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2024-03-02 19:22:07 -08:00
|
|
|
case OpArg: // note: but not ArgIntReg
|
2024-03-12 12:56:03 -07:00
|
|
|
// amd64 always loads args from the stack unsigned.
|
|
|
|
|
// most other architectures load them sign/zero extended based on the type.
|
2024-06-27 20:45:22 -07:00
|
|
|
return x.Type.Size() == 4 && x.Block.Func.Config.arch == "amd64"
|
2018-02-26 14:45:58 -06:00
|
|
|
case OpPhi, OpSelect0, OpSelect1:
|
2017-08-23 11:08:56 -05:00
|
|
|
// Phis can use each other as arguments; instead of tracking visited values,
|
|
|
|
|
// just limit recursion depth.
|
|
|
|
|
if depth <= 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
for i := range x.Args {
|
|
|
|
|
if !zeroUpper32Bits(x.Args[i], depth-1) {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
2017-08-09 14:00:38 -05:00
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// zeroUpper48Bits is similar to zeroUpper32Bits, but for upper 48 bits.
|
2018-05-31 16:38:18 -05:00
|
|
|
func zeroUpper48Bits(x *Value, depth int) bool {
|
2024-06-27 20:45:22 -07:00
|
|
|
if x.Type.IsSigned() && x.Type.Size() < 8 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2018-05-31 16:38:18 -05:00
|
|
|
switch x.Op {
|
|
|
|
|
case OpAMD64MOVWQZX, OpAMD64MOVWload, OpAMD64MOVWloadidx1, OpAMD64MOVWloadidx2:
|
|
|
|
|
return true
|
2024-03-02 19:22:07 -08:00
|
|
|
case OpArg: // note: but not ArgIntReg
|
2024-06-27 20:45:22 -07:00
|
|
|
return x.Type.Size() == 2 && x.Block.Func.Config.arch == "amd64"
|
2018-05-31 16:38:18 -05:00
|
|
|
case OpPhi, OpSelect0, OpSelect1:
|
|
|
|
|
// Phis can use each other as arguments; instead of tracking visited values,
|
|
|
|
|
// just limit recursion depth.
|
|
|
|
|
if depth <= 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
for i := range x.Args {
|
|
|
|
|
if !zeroUpper48Bits(x.Args[i], depth-1) {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// zeroUpper56Bits is similar to zeroUpper32Bits, but for upper 56 bits.
|
2018-05-31 16:38:18 -05:00
|
|
|
func zeroUpper56Bits(x *Value, depth int) bool {
|
2024-06-27 20:45:22 -07:00
|
|
|
if x.Type.IsSigned() && x.Type.Size() < 8 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2018-05-31 16:38:18 -05:00
|
|
|
switch x.Op {
|
|
|
|
|
case OpAMD64MOVBQZX, OpAMD64MOVBload, OpAMD64MOVBloadidx1:
|
|
|
|
|
return true
|
2024-03-02 19:22:07 -08:00
|
|
|
case OpArg: // note: but not ArgIntReg
|
2024-06-27 20:45:22 -07:00
|
|
|
return x.Type.Size() == 1 && x.Block.Func.Config.arch == "amd64"
|
2018-05-31 16:38:18 -05:00
|
|
|
case OpPhi, OpSelect0, OpSelect1:
|
|
|
|
|
// Phis can use each other as arguments; instead of tracking visited values,
|
|
|
|
|
// just limit recursion depth.
|
|
|
|
|
if depth <= 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
for i := range x.Args {
|
|
|
|
|
if !zeroUpper56Bits(x.Args[i], depth-1) {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
2023-01-31 14:27:30 -06:00
|
|
|
func isInlinableMemclr(c *Config, sz int64) bool {
|
2023-03-22 17:45:07 +07:00
|
|
|
if sz < 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2022-11-30 15:31:26 +01:00
|
|
|
// TODO: expand this check to allow other architectures
|
|
|
|
|
// see CL 454255 and issue 56997
|
2023-01-31 14:27:30 -06:00
|
|
|
switch c.arch {
|
|
|
|
|
case "amd64", "arm64":
|
|
|
|
|
return true
|
2024-10-11 11:08:43 +08:00
|
|
|
case "ppc64le", "ppc64", "loong64":
|
2023-01-31 14:27:30 -06:00
|
|
|
return sz < 512
|
|
|
|
|
}
|
|
|
|
|
return false
|
2022-11-30 15:31:26 +01:00
|
|
|
}
|
|
|
|
|
|
2018-04-29 15:12:50 +01:00
|
|
|
// isInlinableMemmove reports whether the given arch performs a Move of the given size
|
|
|
|
|
// faster than memmove. It will only return true if replacing the memmove with a Move is
|
2022-08-22 10:26:50 -07:00
|
|
|
// safe, either because Move will do all of its loads before any of its stores, or
|
|
|
|
|
// because the arguments are known to be disjoint.
|
2018-04-29 15:12:50 +01:00
|
|
|
// This is used as a check for replacing memmove with Move ops.
|
|
|
|
|
func isInlinableMemmove(dst, src *Value, sz int64, c *Config) bool {
|
|
|
|
|
// It is always safe to convert memmove into Move when its arguments are disjoint.
|
|
|
|
|
// Move ops may or may not be faster for large sizes depending on how the platform
|
|
|
|
|
// lowers them, so we only perform this optimization on platforms that we know to
|
|
|
|
|
// have fast Move ops.
|
2017-08-09 14:00:38 -05:00
|
|
|
switch c.arch {
|
2019-10-10 16:16:54 +00:00
|
|
|
case "amd64":
|
2018-05-09 15:49:22 -05:00
|
|
|
return sz <= 16 || (sz < 1024 && disjoint(dst, sz, src, sz))
|
2025-04-24 14:34:10 -07:00
|
|
|
case "arm64":
|
|
|
|
|
return sz <= 64 || (sz <= 1024 && disjoint(dst, sz, src, sz))
|
|
|
|
|
case "386":
|
2017-08-09 14:00:38 -05:00
|
|
|
return sz <= 8
|
2020-03-30 15:23:19 -04:00
|
|
|
case "s390x", "ppc64", "ppc64le":
|
2018-04-29 15:12:50 +01:00
|
|
|
return sz <= 8 || disjoint(dst, sz, src, sz)
|
2021-11-25 10:26:47 +08:00
|
|
|
case "arm", "loong64", "mips", "mips64", "mipsle", "mips64le":
|
2017-08-09 14:00:38 -05:00
|
|
|
return sz <= 4
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
2022-08-22 10:26:50 -07:00
|
|
|
func IsInlinableMemmove(dst, src *Value, sz int64, c *Config) bool {
|
|
|
|
|
return isInlinableMemmove(dst, src, sz, c)
|
|
|
|
|
}
|
cmd/compile/internal/ssa: add patterns for arm64 bitfield opcodes
Add patterns to match common idioms for EXTR, BFI, BFXIL, SBFIZ, SBFX,
UBFIZ and UBFX opcodes.
go1 benchmarks results on Amberwing:
name old time/op new time/op delta
FmtManyArgs 786ns ± 2% 714ns ± 1% -9.20% (p=0.000 n=10+10)
Gzip 437ms ± 0% 402ms ± 0% -7.99% (p=0.000 n=10+10)
FmtFprintfIntInt 196ns ± 0% 182ns ± 0% -7.28% (p=0.000 n=10+9)
FmtFprintfPrefixedInt 207ns ± 0% 199ns ± 0% -3.86% (p=0.000 n=10+10)
FmtFprintfFloat 324ns ± 0% 316ns ± 0% -2.47% (p=0.000 n=10+8)
FmtFprintfInt 119ns ± 0% 117ns ± 0% -1.68% (p=0.000 n=10+9)
GobDecode 12.8ms ± 2% 12.6ms ± 1% -1.62% (p=0.002 n=10+10)
JSONDecode 94.4ms ± 1% 93.4ms ± 0% -1.10% (p=0.000 n=10+10)
RegexpMatchEasy0_32 247ns ± 0% 245ns ± 0% -0.65% (p=0.000 n=10+10)
RegexpMatchMedium_32 314ns ± 0% 312ns ± 0% -0.64% (p=0.000 n=10+10)
RegexpMatchEasy0_1K 541ns ± 0% 538ns ± 0% -0.55% (p=0.000 n=10+9)
TimeParse 450ns ± 1% 448ns ± 1% -0.42% (p=0.035 n=9+9)
RegexpMatchEasy1_32 244ns ± 0% 243ns ± 0% -0.41% (p=0.000 n=10+10)
GoParse 6.03ms ± 0% 6.00ms ± 0% -0.40% (p=0.002 n=10+10)
RegexpMatchEasy1_1K 779ns ± 0% 777ns ± 0% -0.26% (p=0.000 n=10+10)
RegexpMatchHard_32 2.75µs ± 0% 2.74µs ± 1% -0.06% (p=0.026 n=9+9)
BinaryTree17 11.7s ± 0% 11.6s ± 0% ~ (p=0.089 n=10+10)
HTTPClientServer 89.1µs ± 1% 89.5µs ± 2% ~ (p=0.436 n=10+10)
RegexpMatchHard_1K 78.9µs ± 0% 79.5µs ± 2% ~ (p=0.469 n=10+10)
FmtFprintfEmpty 58.5ns ± 0% 58.5ns ± 0% ~ (all equal)
GobEncode 12.0ms ± 1% 12.1ms ± 0% ~ (p=0.075 n=10+10)
Revcomp 669ms ± 0% 668ms ± 0% ~ (p=0.091 n=7+9)
Mandelbrot200 5.35ms ± 0% 5.36ms ± 0% +0.07% (p=0.000 n=9+9)
RegexpMatchMedium_1K 52.1µs ± 0% 52.1µs ± 0% +0.10% (p=0.000 n=9+9)
Fannkuch11 3.25s ± 0% 3.26s ± 0% +0.36% (p=0.000 n=9+10)
FmtFprintfString 114ns ± 1% 115ns ± 0% +0.52% (p=0.011 n=10+10)
JSONEncode 20.2ms ± 0% 20.3ms ± 0% +0.65% (p=0.000 n=10+10)
Template 91.3ms ± 0% 92.3ms ± 0% +1.08% (p=0.000 n=10+10)
TimeFormat 484ns ± 0% 495ns ± 1% +2.30% (p=0.000 n=9+10)
There are some opportunities to improve this change further by adding
patterns to match the "extended register" versions of ADD/SUB/CMP, but I
think that should be evaluated on its own. The regressions in Template
and TimeFormat would likely be recovered by this, as they seem to be due
to generating:
ubfiz x0, x0, #3, #8
add x1, x2, x0
instead of
add x1, x2, x0, lsl #3
Change-Id: I5644a8d70ac7a98e784a377a2b76ab47a3415a4b
Reviewed-on: https://go-review.googlesource.com/88355
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
2018-02-21 16:15:39 -05:00
|
|
|
|
2019-11-02 23:57:11 -04:00
|
|
|
// logLargeCopy logs the occurrence of a large copy.
|
|
|
|
|
// The best place to do this is in the rewrite rules where the size of the move is easy to find.
|
|
|
|
|
// "Large" is arbitrarily chosen to be 128 bytes; this may change.
|
|
|
|
|
func logLargeCopy(v *Value, s int64) bool {
|
|
|
|
|
if s < 128 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
if logopt.Enabled() {
|
|
|
|
|
logopt.LogOpt(v.Pos, "copy", "lower", v.Block.Func.Name, fmt.Sprintf("%d bytes", s))
|
|
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
}
|
2022-08-22 10:26:50 -07:00
|
|
|
func LogLargeCopy(funcName string, pos src.XPos, s int64) {
|
|
|
|
|
if s < 128 {
|
|
|
|
|
return
|
|
|
|
|
}
|
|
|
|
|
if logopt.Enabled() {
|
|
|
|
|
logopt.LogOpt(pos, "copy", "lower", funcName, fmt.Sprintf("%d bytes", s))
|
|
|
|
|
}
|
|
|
|
|
}
|
2019-11-02 23:57:11 -04:00
|
|
|
|
2019-03-09 21:58:16 -07:00
|
|
|
// hasSmallRotate reports whether the architecture has rotate instructions
|
|
|
|
|
// for sizes < 32-bit. This is used to decide whether to promote some rotations.
|
|
|
|
|
func hasSmallRotate(c *Config) bool {
|
|
|
|
|
switch c.arch {
|
2019-10-10 16:16:54 +00:00
|
|
|
case "amd64", "386":
|
2019-03-09 21:58:16 -07:00
|
|
|
return true
|
|
|
|
|
default:
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
|
2023-03-17 15:22:31 -05:00
|
|
|
func supportsPPC64PCRel() bool {
|
|
|
|
|
// PCRel is currently supported for >= power10, linux only
|
|
|
|
|
// Internal and external linking support this on ppc64le; internal linking only on ppc64.
|
|
|
|
|
return buildcfg.GOPPC64 >= 10 && buildcfg.GOOS == "linux"
|
|
|
|
|
}
|
|
|
|
|
|
2020-08-31 09:43:40 -04:00
|
|
|
func newPPC64ShiftAuxInt(sh, mb, me, sz int64) int32 {
|
|
|
|
|
if sh < 0 || sh >= sz {
|
|
|
|
|
panic("PPC64 shift arg sh out of range")
|
|
|
|
|
}
|
|
|
|
|
if mb < 0 || mb >= sz {
|
|
|
|
|
panic("PPC64 shift arg mb out of range")
|
|
|
|
|
}
|
|
|
|
|
if me < 0 || me >= sz {
|
|
|
|
|
panic("PPC64 shift arg me out of range")
|
|
|
|
|
}
|
|
|
|
|
return int32(sh<<16 | mb<<8 | me)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func GetPPC64Shiftsh(auxint int64) int64 {
|
|
|
|
|
return int64(int8(auxint >> 16))
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func GetPPC64Shiftmb(auxint int64) int64 {
|
|
|
|
|
return int64(int8(auxint >> 8))
|
|
|
|
|
}
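A worked round trip of the auxInt packing above, as an illustrative sketch (the helper name is invented): sh, mb and me each occupy one byte, and the Get helpers recover them.
// ppc64ShiftAuxIntDemo packs sh=3, mb=10, me=60 for a 64-bit shift and
// unpacks the two fields defined in this file.
func ppc64ShiftAuxIntDemo() {
	aux := int64(newPPC64ShiftAuxInt(3, 10, 60, 64))
	fmt.Println(GetPPC64Shiftsh(aux)) // 3
	fmt.Println(GetPPC64Shiftmb(aux)) // 10
}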
|
|
|
|
|
|
2020-10-23 12:12:34 -05:00
|
|
|
// Test if this value can be encoded as a mask for an rlwinm-like
|
|
|
|
|
// operation. Masks can also extend from the msb and wrap to
|
|
|
|
|
// the lsb too. That is, the valid masks are 32 bit strings
|
|
|
|
|
// of the form: 0..01..10..0 or 1..10..01..1 or 1...1
|
2025-06-04 08:51:11 -05:00
|
|
|
//
|
|
|
|
|
// Note: This ignores the upper 32 bits of the input. When a
|
|
|
|
|
// zero-extended result is desired (e.g. a 64-bit result), the
|
|
|
|
|
// user must verify the upper 32 bits are 0 and the mask is
|
|
|
|
|
// contiguous (that is, non-wrapping).
|
2020-10-23 12:12:34 -05:00
|
|
|
func isPPC64WordRotateMask(v64 int64) bool {
|
|
|
|
|
// Isolate the rightmost 1 bit (0 if there is none) and add it to v.
|
|
|
|
|
v := uint32(v64)
|
|
|
|
|
vp := (v & -v) + v
|
|
|
|
|
// Likewise, for the wrapping case.
|
|
|
|
|
vn := ^v
|
|
|
|
|
vpn := (vn & -vn) + vn
|
|
|
|
|
return (v&vp == 0 || vn&vpn == 0) && v != 0
|
|
|
|
|
}
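A few concrete masks, shown in an illustrative sketch (hypothetical helper): a valid rlwinm mask is a single run of ones in the low 32 bits, and the run may wrap from the msb to the lsb.
// wordRotateMaskDemo: contiguous and wrapping runs are accepted,
// masks with two separate runs are not.
func wordRotateMaskDemo() {
	fmt.Println(isPPC64WordRotateMask(0x00ff0000)) // true: one contiguous run
	fmt.Println(isPPC64WordRotateMask(0xf000000f)) // true: run wraps around
	fmt.Println(isPPC64WordRotateMask(0x0ff00ff0)) // false: two separate runs
}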
|
|
|
|
|
|
2025-06-04 08:51:11 -05:00
|
|
|
// Test if this mask is a valid, contiguous bitmask which can be
|
|
|
|
|
// represented by a RLWNM mask and also clears the upper 32 bits
|
|
|
|
|
// of the register.
|
|
|
|
|
func isPPC64WordRotateMaskNonWrapping(v64 int64) bool {
|
|
|
|
|
// Isolate the rightmost 1 bit (0 if there is none) and add it to v.
|
|
|
|
|
v := uint32(v64)
|
|
|
|
|
vp := (v & -v) + v
|
|
|
|
|
return (v&vp == 0) && v != 0 && uint64(uint32(v64)) == uint64(v64)
|
|
|
|
|
}
|
|
|
|
|
|
2021-03-13 11:25:15 +00:00
|
|
|
// Compress mask and shift into single value of the form
|
2020-10-23 12:12:34 -05:00
|
|
|
// me | mb<<8 | rotate<<16 | nbits<<24 where me and mb can
|
|
|
|
|
// be used to regenerate the input mask.
|
|
|
|
|
func encodePPC64RotateMask(rotate, mask, nbits int64) int64 {
|
|
|
|
|
var mb, me, mbn, men int
|
|
|
|
|
|
|
|
|
|
// Determine boundaries and then decode them
|
|
|
|
|
if mask == 0 || ^mask == 0 || rotate >= nbits {
|
2023-09-18 11:29:20 -05:00
|
|
|
panic(fmt.Sprintf("invalid PPC64 rotate mask: %x %d %d", uint64(mask), rotate, nbits))
|
2020-10-23 12:12:34 -05:00
|
|
|
} else if nbits == 32 {
|
|
|
|
|
mb = bits.LeadingZeros32(uint32(mask))
|
|
|
|
|
me = 32 - bits.TrailingZeros32(uint32(mask))
|
|
|
|
|
mbn = bits.LeadingZeros32(^uint32(mask))
|
|
|
|
|
men = 32 - bits.TrailingZeros32(^uint32(mask))
|
|
|
|
|
} else {
|
|
|
|
|
mb = bits.LeadingZeros64(uint64(mask))
|
|
|
|
|
me = 64 - bits.TrailingZeros64(uint64(mask))
|
|
|
|
|
mbn = bits.LeadingZeros64(^uint64(mask))
|
|
|
|
|
men = 64 - bits.TrailingZeros64(^uint64(mask))
|
|
|
|
|
}
|
|
|
|
|
// Check for a wrapping mask (e.g. bits at 0 and 63)
|
|
|
|
|
if mb == 0 && me == int(nbits) {
|
|
|
|
|
// swap the inverted values
|
|
|
|
|
mb, me = men, mbn
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return int64(me) | int64(mb<<8) | int64(rotate<<16) | int64(nbits<<24)
|
|
|
|
|
}
|
|
|
|
|
|
2023-06-27 17:17:33 -05:00
|
|
|
// Merge (RLDICL [encoded] (SRDconst [s] x)) into (RLDICL [new_encoded] x)
|
|
|
|
|
// SRDconst on PPC64 is an extended mnemonic of RLDICL. If the input to an
|
|
|
|
|
// RLDICL is an SRDconst, and the RLDICL does not rotate its value, the two
|
|
|
|
|
// operations can be combined. This function assumes the two opcodes can
|
|
|
|
|
// be merged, and returns an encoded rotate+mask value of the combined RLDICL.
|
|
|
|
|
func mergePPC64RLDICLandSRDconst(encoded, s int64) int64 {
|
|
|
|
|
mb := s
|
|
|
|
|
r := 64 - s
|
|
|
|
|
// A larger mb is a smaller mask.
|
|
|
|
|
if (encoded>>8)&0xFF < mb {
|
|
|
|
|
encoded = (encoded &^ 0xFF00) | mb<<8
|
|
|
|
|
}
|
|
|
|
|
// The rotate is expected to be 0.
|
|
|
|
|
if (encoded & 0xFF0000) != 0 {
|
|
|
|
|
panic("non-zero rotate")
|
|
|
|
|
}
|
|
|
|
|
return encoded | r<<16
|
|
|
|
|
}
|
|
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// DecodePPC64RotateMask is the inverse operation of encodePPC64RotateMask. The values returned as
|
2020-10-23 12:12:34 -05:00
|
|
|
// mb and me satisfy the POWER ISA definition of MASK(x,y) where MASK(mb,me) = mask.
|
|
|
|
|
func DecodePPC64RotateMask(sauxint int64) (rotate, mb, me int64, mask uint64) {
|
|
|
|
|
auxint := uint64(sauxint)
|
|
|
|
|
rotate = int64((auxint >> 16) & 0xFF)
|
|
|
|
|
mb = int64((auxint >> 8) & 0xFF)
|
|
|
|
|
me = int64((auxint >> 0) & 0xFF)
|
|
|
|
|
nbits := int64((auxint >> 24) & 0xFF)
|
|
|
|
|
mask = ((1 << uint(nbits-mb)) - 1) ^ ((1 << uint(nbits-me)) - 1)
|
|
|
|
|
if mb > me {
|
|
|
|
|
mask = ^mask
|
|
|
|
|
}
|
|
|
|
|
if nbits == 32 {
|
|
|
|
|
mask = uint64(uint32(mask))
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Fixup ME to match ISA definition. The second argument to MASK(..,me)
|
|
|
|
|
// is inclusive.
|
|
|
|
|
me = (me - 1) & (nbits - 1)
|
|
|
|
|
return
|
|
|
|
|
}
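One encode/decode round trip of the packed rotate+mask value, as an illustrative sketch (values chosen arbitrarily, helper name invented):
// rotateMaskRoundTripDemo encodes rotate=8, mask=0x00ff0000 for a 32-bit
// op and decodes it again; the decoded me is inclusive and counts from
// the msb (ISA numbering), so the run comes back as mb=8, me=15.
func rotateMaskRoundTripDemo() {
	aux := encodePPC64RotateMask(8, 0x00ff0000, 32)
	r, mb, me, mask := DecodePPC64RotateMask(aux)
	fmt.Println(r, mb, me, mask) // 8 8 15 16711680
}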
|
|
|
|
|
|
2020-11-16 09:40:45 -05:00
|
|
|
// This verifies that the mask is a set of
|
|
|
|
|
// consecutive bits including the least
|
|
|
|
|
// significant bit.
|
2020-08-31 09:43:40 -04:00
|
|
|
func isPPC64ValidShiftMask(v int64) bool {
|
2020-11-16 09:40:45 -05:00
|
|
|
if (v != 0) && ((v+1)&v) == 0 {
|
2020-08-31 09:43:40 -04:00
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func getPPC64ShiftMaskLength(v int64) int64 {
|
|
|
|
|
return int64(bits.Len64(uint64(v)))
|
|
|
|
|
}
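For illustration (hypothetical helper): the shift-mask check accepts only a run of ones that includes bit 0, and the length helper returns the number of bits in that run.
// shiftMaskDemo: 0x00ff is a valid shift mask, 0x0ff0 is not because it
// does not include the least significant bit.
func shiftMaskDemo() {
	fmt.Println(isPPC64ValidShiftMask(0x00ff), getPPC64ShiftMaskLength(0x00ff)) // true 8
	fmt.Println(isPPC64ValidShiftMask(0x0ff0))                                  // false
}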
|
|
|
|
|
|
2020-10-23 12:12:34 -05:00
|
|
|
// Decompose a shift right into an equivalent rotate/mask,
|
|
|
|
|
// and return mask & m.
|
|
|
|
|
func mergePPC64RShiftMask(m, s, nbits int64) int64 {
|
|
|
|
|
smask := uint64((1<<uint(nbits))-1) >> uint(s)
|
|
|
|
|
return m & int64(smask)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Combine (ANDconst [m] (SRWconst [s])) into (RLWINM [y]) or return 0
|
|
|
|
|
func mergePPC64AndSrwi(m, s int64) int64 {
|
|
|
|
|
mask := mergePPC64RShiftMask(m, s, 32)
|
|
|
|
|
if !isPPC64WordRotateMask(mask) {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
2021-04-15 13:41:01 -05:00
|
|
|
return encodePPC64RotateMask((32-s)&31, mask, 32)
|
2020-10-23 12:12:34 -05:00
|
|
|
}
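A worked example of the fold above, as an illustrative sketch (helper name invented): shifting right by 8 and masking with 0xff becomes a single rlwinm that rotates left by 24 and keeps the low byte.
// andSrwiDemo: (ANDconst [0xff] (SRWconst [8] x)) merges into one
// rotate-and-mask with rotate 24 and mask 0xff.
func andSrwiDemo() {
	aux := mergePPC64AndSrwi(0xff, 8)
	r, _, _, mask := DecodePPC64RotateMask(aux)
	fmt.Println(r, mask) // 24 255
}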
|
|
|
|
|
|
2024-10-24 09:08:47 -05:00
|
|
|
// Combine (ANDconst [m] (SRDconst [s])) into (RLWINM [y]) or return 0
|
|
|
|
|
func mergePPC64AndSrdi(m, s int64) int64 {
|
|
|
|
|
mask := mergePPC64RShiftMask(m, s, 64)
|
|
|
|
|
|
|
|
|
|
// Verify the rotate and mask result only uses the lower 32 bits.
|
|
|
|
|
rv := bits.RotateLeft64(0xFFFFFFFF00000000, -int(s))
|
|
|
|
|
if rv&uint64(mask) != 0 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
2025-06-04 08:51:11 -05:00
|
|
|
if !isPPC64WordRotateMaskNonWrapping(mask) {
|
2024-10-24 09:08:47 -05:00
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask((32-s)&31, mask, 32)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Combine (ANDconst [m] (SLDconst [s])) into (RLWINM [y]) or return 0
|
|
|
|
|
func mergePPC64AndSldi(m, s int64) int64 {
|
|
|
|
|
mask := -1 << s & m
|
|
|
|
|
|
|
|
|
|
// Verify the rotate and mask result only uses the lower 32 bits.
|
|
|
|
|
rv := bits.RotateLeft64(0xFFFFFFFF00000000, int(s))
|
|
|
|
|
if rv&uint64(mask) != 0 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
2025-06-04 08:51:11 -05:00
|
|
|
if !isPPC64WordRotateMaskNonWrapping(mask) {
|
2024-10-24 09:08:47 -05:00
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask(s&31, mask, 32)
|
|
|
|
|
}
|
|
|
|
|
|
2024-04-26 09:26:52 -05:00
|
|
|
// Test if a word shift right feeding into a CLRLSLDI can be merged into RLWINM.
|
2020-10-23 12:12:34 -05:00
|
|
|
// Return the encoded RLWINM constant, or 0 if they cannot be merged.
|
|
|
|
|
func mergePPC64ClrlsldiSrw(sld, srw int64) int64 {
|
|
|
|
|
mask_1 := uint64(0xFFFFFFFF >> uint(srw))
|
2023-06-13 23:01:11 +00:00
|
|
|
// for CLRLSLDI, it's more convenient to think of it as masking the left bits, then rotating left.
|
2020-10-23 12:12:34 -05:00
|
|
|
mask_2 := uint64(0xFFFFFFFFFFFFFFFF) >> uint(GetPPC64Shiftmb(int64(sld)))
|
|
|
|
|
|
|
|
|
|
// Rewrite mask to apply after the final left shift.
|
|
|
|
|
mask_3 := (mask_1 & mask_2) << uint(GetPPC64Shiftsh(sld))
|
|
|
|
|
|
|
|
|
|
r_1 := 32 - srw
|
|
|
|
|
r_2 := GetPPC64Shiftsh(sld)
|
|
|
|
|
r_3 := (r_1 + r_2) & 31 // This can wrap.
|
|
|
|
|
|
|
|
|
|
if uint64(uint32(mask_3)) != mask_3 || mask_3 == 0 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask(int64(r_3), int64(mask_3), 32)
|
|
|
|
|
}
|
|
|
|
|
|
2024-04-26 09:26:52 -05:00
|
|
|
// Test if a doubleword shift right feeding into a CLRLSLDI can be merged into RLWINM.
|
|
|
|
|
// Return the encoded RLWINM constant, or 0 if they cannot be merged.
|
|
|
|
|
func mergePPC64ClrlsldiSrd(sld, srd int64) int64 {
|
|
|
|
|
mask_1 := uint64(0xFFFFFFFFFFFFFFFF) >> uint(srd)
|
|
|
|
|
// for CLRLSLDI, it's more convenient to think of it as masking the left bits, then rotating left.
|
|
|
|
|
mask_2 := uint64(0xFFFFFFFFFFFFFFFF) >> uint(GetPPC64Shiftmb(int64(sld)))
|
|
|
|
|
|
|
|
|
|
// Rewrite mask to apply after the final left shift.
|
|
|
|
|
mask_3 := (mask_1 & mask_2) << uint(GetPPC64Shiftsh(sld))
|
|
|
|
|
|
|
|
|
|
r_1 := 64 - srd
|
|
|
|
|
r_2 := GetPPC64Shiftsh(sld)
|
|
|
|
|
r_3 := (r_1 + r_2) & 63 // This can wrap.
|
|
|
|
|
|
|
|
|
|
if uint64(uint32(mask_3)) != mask_3 || mask_3 == 0 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
// This combine only works when selecting and shifting the lower 32 bits.
|
|
|
|
|
v1 := bits.RotateLeft64(0xFFFFFFFF00000000, int(r_3))
|
|
|
|
|
if v1&mask_3 != 0 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
2024-05-20 14:44:21 -05:00
|
|
|
return encodePPC64RotateMask(int64(r_3&31), int64(mask_3), 32)
|
2024-04-26 09:26:52 -05:00
|
|
|
}
|
|
|
|
|
|
2020-10-23 12:12:34 -05:00
|
|
|
// Test if a RLWINM feeding into a CLRLSLDI can be merged into RLWINM. Return
|
|
|
|
|
// the encoded RLWINM constant, or 0 if they cannot be merged.
|
|
|
|
|
func mergePPC64ClrlsldiRlwinm(sld int32, rlw int64) int64 {
|
|
|
|
|
r_1, _, _, mask_1 := DecodePPC64RotateMask(rlw)
|
2023-06-13 23:01:11 +00:00
|
|
|
// for CLRLSLDI, it's more convenient to think of it as masking the left bits, then rotating left.
|
2020-10-23 12:12:34 -05:00
|
|
|
mask_2 := uint64(0xFFFFFFFFFFFFFFFF) >> uint(GetPPC64Shiftmb(int64(sld)))
|
|
|
|
|
|
|
|
|
|
// combine the masks, and adjust for the final left shift.
|
|
|
|
|
mask_3 := (mask_1 & mask_2) << uint(GetPPC64Shiftsh(int64(sld)))
|
|
|
|
|
r_2 := GetPPC64Shiftsh(int64(sld))
|
|
|
|
|
r_3 := (r_1 + r_2) & 31 // This can wrap.
|
|
|
|
|
|
|
|
|
|
// Verify the result is still a valid bitmask of <= 32 bits.
|
|
|
|
|
if !isPPC64WordRotateMask(int64(mask_3)) || uint64(uint32(mask_3)) != mask_3 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask(r_3, int64(mask_3), 32)
|
|
|
|
|
}
|
|
|
|
|
|
2024-05-01 15:03:34 -05:00
|
|
|
// Test if RLWINM feeding into an ANDconst can be merged. Return the encoded RLWINM constant,
|
|
|
|
|
// or 0 if they cannot be merged.
|
|
|
|
|
func mergePPC64AndRlwinm(mask uint32, rlw int64) int64 {
|
|
|
|
|
r, _, _, mask_rlw := DecodePPC64RotateMask(rlw)
|
|
|
|
|
mask_out := (mask_rlw & uint64(mask))
|
|
|
|
|
|
|
|
|
|
// Verify the result is still a valid bitmask of <= 32 bits.
|
|
|
|
|
if !isPPC64WordRotateMask(int64(mask_out)) {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask(r, int64(mask_out), 32)
|
|
|
|
|
}
|
|
|
|
|
|
2024-06-05 16:12:49 -05:00
|
|
|
// Test if RLWINM opcode rlw clears the upper 32 bits of the
|
|
|
|
|
// result. Return rlw if it does, 0 otherwise.
|
|
|
|
|
func mergePPC64MovwzregRlwinm(rlw int64) int64 {
|
|
|
|
|
_, mb, me, _ := DecodePPC64RotateMask(rlw)
|
|
|
|
|
if mb > me {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return rlw
|
|
|
|
|
}
|
|
|
|
|
|
2024-05-01 15:03:34 -05:00
|
|
|
// Test if an AND feeding into a RLWINM can be merged. Return the encoded RLWINM constant,
|
|
|
|
|
// or 0 if they cannot be merged.
|
|
|
|
|
func mergePPC64RlwinmAnd(rlw int64, mask uint32) int64 {
|
|
|
|
|
r, _, _, mask_rlw := DecodePPC64RotateMask(rlw)
|
|
|
|
|
|
|
|
|
|
// Rotate the input mask, combine with the rlwnm mask, and test if it is still a valid rlwinm mask.
|
|
|
|
|
r_mask := bits.RotateLeft32(mask, int(r))
|
|
|
|
|
|
|
|
|
|
mask_out := (mask_rlw & uint64(r_mask))
|
|
|
|
|
|
|
|
|
|
// Verify the result is still a valid bitmask of <= 32 bits.
|
|
|
|
|
if !isPPC64WordRotateMask(int64(mask_out)) {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask(r, int64(mask_out), 32)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Test if RLWINM feeding into SLDconst can be merged. Return the encoded RLWINM constant,
|
|
|
|
|
// or 0 if they cannot be merged.
|
|
|
|
|
func mergePPC64SldiRlwinm(sldi, rlw int64) int64 {
|
|
|
|
|
r_1, mb, me, mask_1 := DecodePPC64RotateMask(rlw)
|
|
|
|
|
if mb > me || mb < sldi {
|
|
|
|
|
// Wrapping masks cannot be merged as the upper 32 bits are effectively undefined in this case.
|
|
|
|
|
// Likewise, if mb is less than the shift amount, it cannot be merged.
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
// combine the masks, and adjust for the final left shift.
|
|
|
|
|
mask_3 := mask_1 << sldi
|
|
|
|
|
r_3 := (r_1 + sldi) & 31 // This can wrap.
|
|
|
|
|
|
|
|
|
|
// Verify the result is still a valid bitmask of <= 32 bits.
|
|
|
|
|
if uint64(uint32(mask_3)) != mask_3 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return encodePPC64RotateMask(r_3, int64(mask_3), 32)
|
|
|
|
|
}
|
|
|
|
|
|
2020-10-23 12:12:34 -05:00
|
|
|
// Compute the encoded RLWINM constant from combining (SLDconst [sld] (SRWconst [srw] x)),
|
|
|
|
|
// or return 0 if they cannot be combined.
|
|
|
|
|
func mergePPC64SldiSrw(sld, srw int64) int64 {
|
|
|
|
|
if sld > srw || srw >= 32 {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
mask_r := uint32(0xFFFFFFFF) >> uint(srw)
|
|
|
|
|
mask_l := uint32(0xFFFFFFFF) >> uint(sld)
|
|
|
|
|
mask := (mask_r & mask_l) << uint(sld)
|
|
|
|
|
return encodePPC64RotateMask((32-srw+sld)&31, int64(mask), 32)
|
|
|
|
|
}
|
|
|
|
|
|
cmd/compile/internal/ssa: on PPC64, merge (CMPconst [0] (op ...)) more aggressively
Generate the CC version of many opcodes whose result is compared against
signed 0. The approach taken here works even if the opcode result is used in
multiple places too.
Add support for ADD, ADDconst, ANDN, SUB, NEG, CNTLZD, NOR conversions
to their CC opcode variant. These are the most commonly used variants.
Also, do not set clobberFlags of CNTLZD and CNTLZW, they do not clobber
flags.
This results in about 1% smaller text sections in kubernetes binaries,
and no regressions in the crypto benchmarks.
Change-Id: I9e0381944869c3774106bf348dead5ecb96dffda
Reviewed-on: https://go-review.googlesource.com/c/go/+/538636
Run-TryBot: Paul Murphy <murp@ibm.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Jayanth Krishnamurthy <jayanth.krishnamurthy@ibm.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
2023-10-24 16:04:42 -05:00
|
|
|
// Convert a PPC64 opcode from the Op to OpCC form. This converts (op x y)
|
|
|
|
|
// to (Select0 (opCC x y)) without having to explicitly fixup every user
|
|
|
|
|
// of op.
|
|
|
|
|
//
|
|
|
|
|
// E.g. consider the case:
|
|
|
|
|
// a = (ADD x y)
|
|
|
|
|
// b = (CMPconst [0] a)
|
|
|
|
|
// c = (OR a z)
|
|
|
|
|
//
|
|
|
|
|
// A rule like (CMPconst [0] (ADD x y)) => (CMPconst [0] (Select0 (ADDCC x y)))
|
|
|
|
|
// would produce:
|
|
|
|
|
// a = (ADD x y)
|
|
|
|
|
// a' = (ADDCC x y)
|
|
|
|
|
// a” = (Select0 a')
|
|
|
|
|
// b = (CMPconst [0] a”)
|
|
|
|
|
// c = (OR a z)
|
|
|
|
|
//
|
|
|
|
|
// which makes it impossible to rewrite the second user. Instead the result
|
|
|
|
|
// of this conversion is:
|
|
|
|
|
// a' = (ADDCC x y)
|
|
|
|
|
// a = (Select0 a')
|
|
|
|
|
// b = (CMPconst [0] a)
|
|
|
|
|
// c = (OR a z)
|
|
|
|
|
//
|
|
|
|
|
// Which makes it trivial to rewrite b using a lowering rule.
|
|
|
|
|
func convertPPC64OpToOpCC(op *Value) *Value {
|
|
|
|
|
ccOpMap := map[Op]Op{
|
|
|
|
|
OpPPC64ADD: OpPPC64ADDCC,
|
|
|
|
|
OpPPC64ADDconst: OpPPC64ADDCCconst,
|
|
|
|
|
OpPPC64AND: OpPPC64ANDCC,
|
|
|
|
|
OpPPC64ANDN: OpPPC64ANDNCC,
|
2024-03-27 16:03:11 -05:00
|
|
|
OpPPC64ANDconst: OpPPC64ANDCCconst,
|
2023-10-24 16:04:42 -05:00
|
|
|
OpPPC64CNTLZD: OpPPC64CNTLZDCC,
|
2024-03-27 16:03:11 -05:00
|
|
|
OpPPC64MULHDU: OpPPC64MULHDUCC,
|
|
|
|
|
OpPPC64NEG: OpPPC64NEGCC,
|
|
|
|
|
OpPPC64NOR: OpPPC64NORCC,
|
2023-10-24 16:04:42 -05:00
|
|
|
OpPPC64OR: OpPPC64ORCC,
|
2024-05-02 15:08:30 -05:00
|
|
|
OpPPC64RLDICL: OpPPC64RLDICLCC,
|
2023-10-24 16:04:42 -05:00
|
|
|
OpPPC64SUB: OpPPC64SUBCC,
|
|
|
|
|
OpPPC64XOR: OpPPC64XORCC,
|
|
|
|
|
}
|
|
|
|
|
b := op.Block
|
|
|
|
|
opCC := b.NewValue0I(op.Pos, ccOpMap[op.Op], types.NewTuple(op.Type, types.TypeFlags), op.AuxInt)
|
|
|
|
|
opCC.AddArgs(op.Args...)
|
|
|
|
|
op.reset(OpSelect0)
|
|
|
|
|
op.AddArgs(opCC)
|
|
|
|
|
return op
|
|
|
|
|
}
|
|
|
|
|
|
2024-05-02 15:08:30 -05:00
|
|
|
// Try converting a RLDICL to ANDCC. If successful, return the mask, otherwise 0.
|
|
|
|
|
func convertPPC64RldiclAndccconst(sauxint int64) int64 {
|
|
|
|
|
r, _, _, mask := DecodePPC64RotateMask(sauxint)
|
|
|
|
|
if r != 0 || mask&0xFFFF != mask {
|
|
|
|
|
return 0
|
|
|
|
|
}
|
|
|
|
|
return int64(mask)
|
|
|
|
|
}
|
|
|
|
|
|
2020-10-23 12:12:34 -05:00
|
|
|
// Convenience function to rotate a 32 bit constant value by another constant.
|
|
|
|
|
func rotateLeft32(v, rotate int64) int64 {
|
|
|
|
|
return int64(bits.RotateLeft32(uint32(v), int(rotate)))
|
|
|
|
|
}
|
|
|
|
|
|
2021-09-19 13:51:37 -07:00
|
|
|
func rotateRight64(v, rotate int64) int64 {
|
|
|
|
|
return int64(bits.RotateLeft64(uint64(v), int(-rotate)))
|
|
|
|
|
}
|
|
|
|
|
|
2019-02-11 09:40:02 +00:00
|
|
|
// encodes the lsb and width for arm(64) bitfield ops into the expected auxInt format.
|
2020-05-14 17:01:11 +08:00
|
|
|
func armBFAuxInt(lsb, width int64) arm64BitField {
|
2018-02-21 16:15:39 -05:00
|
|
|
if lsb < 0 || lsb > 63 {
|
2019-02-11 09:40:02 +00:00
|
|
|
panic("ARM(64) bit field lsb constant out of range")
|
2018-02-21 16:15:39 -05:00
|
|
|
}
|
cmd/compile: simplify arm64 bitfield optimizations
In some rewrite rules for arm64 bitfield optimizations, the
bitfield lsb value and the bitfield width value are related
to datasize. Some of the rules use datasize directly to check
that the bitfield lsb value is valid and to get the bitfield
width value, while others call the isARM64BFMask() and
arm64BFWidth() functions. For consistency, this patch changes
them all to use datasize.
Besides, this patch sorts the codegen test cases.
Running the "toolstash-check -all" command finds one code
inconsistency, shown below.
new: src/math/fma.go:104 BEQ 247
master: src/math/fma.go:104 BEQ 248
The above inconsistency is due to this patch changing the range of the
field lsb value in the "UBFIZ" optimization rules from "lc+(32|16|8)<64" to
"lc<64", so that the following code is generated as "UBFIZ". The logic
of the changed code is still correct.
The code of src/math/fma.go:160:
const uvinf = 0x7FF0000000000000
func FMA(a, b uint32) float64 {
ps := a+b
return Float64frombits(uint64(ps)<<63 | uvinf)
}
The new assembly code:
TEXT "".FMA(SB), LEAF|NOFRAME|ABIInternal, $0-16
MOVWU "".a(FP), R0
MOVWU "".b+4(FP), R1
ADD R1, R0, R0
UBFIZ $63, R0, $1, R0
ORR $9218868437227405312, R0, R0
MOVD R0, "".~r2+8(FP)
RET (R30)
The master assembly code:
TEXT "".FMA(SB), LEAF|NOFRAME|ABIInternal, $0-16
MOVWU "".a(FP), R0
MOVWU "".b+4(FP), R1
ADD R1, R0, R0
MOVWU R0, R0
LSL $63, R0, R0
ORR $9218868437227405312, R0, R0
MOVD R0, "".~r2+8(FP)
RET (R30)
Change-Id: I9061104adfdfd3384d0525327ae1e5c8b0df5c35
Reviewed-on: https://go-review.googlesource.com/c/go/+/265038
Trust: fannie zhang <Fannie.Zhang@arm.com>
Run-TryBot: fannie zhang <Fannie.Zhang@arm.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2020-10-21 18:51:42 +08:00
|
|
|
if width < 1 || lsb+width > 64 {
|
2019-02-11 09:40:02 +00:00
|
|
|
panic("ARM(64) bit field width constant out of range")
|
2018-02-21 16:15:39 -05:00
|
|
|
}
|
2020-05-14 17:01:11 +08:00
|
|
|
return arm64BitField(width | lsb<<8)
|
2018-02-21 16:15:39 -05:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// returns the lsb part of the auxInt field of arm64 bitfield ops.
|
2024-08-05 11:22:07 -07:00
|
|
|
func (bfc arm64BitField) lsb() int64 {
|
2018-02-21 16:15:39 -05:00
|
|
|
return int64(uint64(bfc) >> 8)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// returns the width part of the auxInt field of arm64 bitfield ops.
|
2024-08-05 11:22:07 -07:00
|
|
|
func (bfc arm64BitField) width() int64 {
|
2020-05-14 17:01:11 +08:00
|
|
|
return int64(bfc) & 0xff
|
2018-02-21 16:15:39 -05:00
|
|
|
}
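As a side note on the encoding used above: auxInt packs the two bitfield parameters into one integer, with the width in the low 8 bits and the lsb in the bits above it. A minimal standalone sketch of the same packing (using a plain int64 in place of the compiler's arm64BitField type, so it runs outside the compiler) follows:

package main

import "fmt"

// pack mirrors arm64BitField(width | lsb<<8): width in the low 8 bits,
// lsb in the bits above it.
func pack(lsb, width int64) int64 { return width | lsb<<8 }

// lsb and width mirror the accessors above.
func lsb(bfc int64) int64   { return int64(uint64(bfc) >> 8) }
func width(bfc int64) int64 { return bfc & 0xff }

func main() {
	bfc := pack(3, 8)                 // e.g. a UBFIZ that inserts an 8-bit field at bit 3
	fmt.Println(lsb(bfc), width(bfc)) // prints: 3 8
}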
|
|
|
|
|
|
|
|
|
|
// checks if mask >> rshift applied at lsb is a valid arm64 bitfield op mask.
|
|
|
|
|
func isARM64BFMask(lsb, mask, rshift int64) bool {
|
|
|
|
|
shiftedMask := int64(uint64(mask) >> uint64(rshift))
|
2024-09-19 10:06:55 -07:00
|
|
|
return shiftedMask != 0 && isPowerOfTwo(shiftedMask+1) && nto(shiftedMask)+lsb < 64
|
2018-02-21 16:15:39 -05:00
|
|
|
}
|
|
|
|
|
|
2022-11-11 19:22:35 +08:00
|
|
|
// returns the bitfield width of mask >> rshift for arm64 bitfield ops.
|
2018-02-21 16:15:39 -05:00
|
|
|
func arm64BFWidth(mask, rshift int64) int64 {
|
|
|
|
|
shiftedMask := int64(uint64(mask) >> uint64(rshift))
|
|
|
|
|
if shiftedMask == 0 {
|
|
|
|
|
panic("ARM64 BF mask is zero")
|
|
|
|
|
}
|
|
|
|
|
return nto(shiftedMask)
|
|
|
|
|
}
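The test above accepts a mask only if, after the right shift, it is a nonzero contiguous run of ones starting at bit 0 (so shiftedMask+1 is a power of two) that still fits below bit 64 once placed at lsb; the width is then just the length of that run. Below is a self-contained sketch of the same check, with hand-rolled stand-ins for the compiler's isPowerOfTwo and nto helpers (the names are reused here only for readability):

package main

import (
	"fmt"
	"math/bits"
)

// nto returns the number of trailing ones in x.
func nto(x int64) int64 { return int64(bits.TrailingZeros64(^uint64(x))) }

// isPowerOfTwo reports whether x is a positive power of two.
func isPowerOfTwo(x int64) bool { return x > 0 && x&(x-1) == 0 }

// isBFMask mirrors isARM64BFMask: mask>>rshift must be a low run of ones
// that, placed at lsb, still fits in 64 bits.
func isBFMask(lsb, mask, rshift int64) bool {
	shiftedMask := int64(uint64(mask) >> uint64(rshift))
	return shiftedMask != 0 && isPowerOfTwo(shiftedMask+1) && nto(shiftedMask)+lsb < 64
}

func main() {
	fmt.Println(isBFMask(3, 0xff, 0))  // true: 8 ones placed at bit 3
	fmt.Println(isBFMask(60, 0xff, 0)) // false: 8 ones would spill past bit 63
	fmt.Println(isBFMask(0, 0xf0, 4))  // true: 0xf0>>4 = 0xf
	fmt.Println(isBFMask(0, 0xa, 0))   // false: 0b1010 is not contiguous
}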
|
2018-04-11 22:47:24 +01:00
|
|
|
|
2025-08-21 17:41:13 +03:00
|
|
|
// encodes condition code and NZCV flags into auxint.
|
|
|
|
|
func arm64ConditionalParamsAuxInt(cond Op, nzcv uint8) arm64ConditionalParams {
|
|
|
|
|
if cond < OpARM64Equal || cond > OpARM64GreaterEqualU {
|
|
|
|
|
panic("Wrong conditional operation")
|
|
|
|
|
}
|
|
|
|
|
if nzcv&0x0f != nzcv {
|
|
|
|
|
panic("Wrong value of NZCV flag")
|
|
|
|
|
}
|
|
|
|
|
return arm64ConditionalParams{cond, nzcv, 0, false}
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// encodes condition code, NZCV flags and constant value into auxint.
|
|
|
|
|
func arm64ConditionalParamsAuxIntWithValue(cond Op, nzcv uint8, value uint8) arm64ConditionalParams {
|
|
|
|
|
if value&0x1f != value {
|
|
|
|
|
panic("Wrong value of constant")
|
|
|
|
|
}
|
|
|
|
|
params := arm64ConditionalParamsAuxInt(cond, nzcv)
|
|
|
|
|
params.constValue = value
|
|
|
|
|
params.ind = true
|
|
|
|
|
return params
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// extracts condition code from auxint.
|
|
|
|
|
func (condParams arm64ConditionalParams) Cond() Op {
|
|
|
|
|
return condParams.cond
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// extracts NZCV flags from auxint.
|
|
|
|
|
func (condParams arm64ConditionalParams) Nzcv() int64 {
|
|
|
|
|
return int64(condParams.nzcv)
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// extracts constant value from auxint if present.
|
|
|
|
|
func (condParams arm64ConditionalParams) ConstValue() (int64, bool) {
|
|
|
|
|
return int64(condParams.constValue), condParams.ind
|
|
|
|
|
}
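The panics above just enforce the field widths the encoding has room for: the NZCV flags must fit in 4 bits and the optional constant in 5 bits. A rough standalone sketch of the same validation, with a string condition name and plain uint8 fields standing in for the compiler's Op and arm64ConditionalParams types:

package main

import "fmt"

// condParams is a simplified stand-in for arm64ConditionalParams.
type condParams struct {
	cond       string
	nzcv       uint8 // 4-bit flag set
	constValue uint8 // 5-bit immediate, meaningful only when ind is true
	ind        bool
}

// newCondParams mirrors the first constructor above: nzcv must fit in 4 bits.
func newCondParams(cond string, nzcv uint8) (condParams, error) {
	if nzcv&0x0f != nzcv {
		return condParams{}, fmt.Errorf("nzcv %#x does not fit in 4 bits", nzcv)
	}
	return condParams{cond: cond, nzcv: nzcv}, nil
}

// withValue mirrors the WithValue variant: the constant must fit in 5 bits.
func (p condParams) withValue(value uint8) (condParams, error) {
	if value&0x1f != value {
		return condParams{}, fmt.Errorf("constant %d does not fit in 5 bits", value)
	}
	p.constValue = value
	p.ind = true
	return p, nil
}

func main() {
	p, _ := newCondParams("Equal", 0b0100) // Z flag set
	p, _ = p.withValue(17)
	fmt.Println(p) // {Equal 4 17 true}
	if _, err := newCondParams("Equal", 0x10); err != nil {
		fmt.Println(err) // 0x10 needs 5 bits
	}
}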
|
|
|
|
|
|
2018-04-11 22:47:24 +01:00
|
|
|
// registerizable reports whether t is a primitive type that fits in
|
|
|
|
|
// a register. It assumes float64 values will always fit into registers
|
|
|
|
|
// even if that isn't strictly true.
|
2020-04-23 23:08:59 -07:00
|
|
|
func registerizable(b *Block, typ *types.Type) bool {
|
2022-05-25 12:22:22 +02:00
|
|
|
if typ.IsPtrShaped() || typ.IsFloat() || typ.IsBoolean() {
|
2018-04-11 22:47:24 +01:00
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
if typ.IsInteger() {
|
|
|
|
|
return typ.Size() <= b.Func.Config.RegSize
|
|
|
|
|
}
|
|
|
|
|
return false
|
|
|
|
|
}
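Restated, the rule above is: pointer-shaped, float, and boolean values always count as registerizable, integers only when they are no wider than the target register, and everything else does not. A simplified standalone sketch, with an explicit kind/size pair standing in for *types.Type and the block's config:

package main

import "fmt"

type kind int

const (
	kindPtr kind = iota
	kindFloat
	kindBool
	kindInt
	kindStruct
)

// registerizableSketch mirrors the predicate above for a given register size.
func registerizableSketch(k kind, size, regSize int64) bool {
	switch k {
	case kindPtr, kindFloat, kindBool:
		return true
	case kindInt:
		return size <= regSize
	}
	return false
}

func main() {
	fmt.Println(registerizableSketch(kindInt, 8, 8))     // true on a 64-bit target
	fmt.Println(registerizableSketch(kindInt, 8, 4))     // false on a 32-bit target
	fmt.Println(registerizableSketch(kindStruct, 16, 8)) // false: not a primitive
}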
|
2018-06-27 11:40:24 -05:00
|
|
|
|
|
|
|
|
// needRaceCleanup reports whether this call to racefuncenter/exit isn't needed.
|
2020-06-12 13:48:26 -04:00
|
|
|
func needRaceCleanup(sym *AuxCall, v *Value) bool {
|
2018-06-27 11:40:24 -05:00
|
|
|
f := v.Block.Func
|
|
|
|
|
if !f.Config.Race {
|
|
|
|
|
return false
|
|
|
|
|
}
|
2021-03-01 19:23:42 -05:00
|
|
|
if !isSameCall(sym, "runtime.racefuncenter") && !isSameCall(sym, "runtime.racefuncexit") {
|
2018-06-27 11:40:24 -05:00
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
for _, b := range f.Blocks {
|
|
|
|
|
for _, v := range b.Values {
|
2018-12-28 12:43:48 -08:00
|
|
|
switch v.Op {
|
2021-02-04 16:42:35 -05:00
|
|
|
case OpStaticCall, OpStaticLECall:
|
2021-03-01 19:23:42 -05:00
|
|
|
// A check for racefuncenter will encounter racefuncexit and vice versa.
|
2018-06-27 11:40:24 -05:00
|
|
|
// Allow calls to panic*
|
2020-06-12 13:48:26 -04:00
|
|
|
s := v.Aux.(*AuxCall).Fn.String()
|
2019-04-03 13:16:58 -07:00
|
|
|
switch s {
|
2021-03-01 19:23:42 -05:00
|
|
|
case "runtime.racefuncenter", "runtime.racefuncexit",
|
2019-04-03 13:16:58 -07:00
|
|
|
"runtime.panicdivide", "runtime.panicwrap",
|
|
|
|
|
"runtime.panicshift":
|
|
|
|
|
continue
|
2018-06-27 11:40:24 -05:00
|
|
|
}
|
2019-04-03 13:16:58 -07:00
|
|
|
// If we encountered any call, we need to keep racefunc*,
|
|
|
|
|
// for accurate stacktraces.
|
|
|
|
|
return false
|
|
|
|
|
case OpPanicBounds, OpPanicExtend:
|
|
|
|
|
// Note: these are panic generators that are ok (like the static calls above).
|
2021-02-04 16:42:35 -05:00
|
|
|
case OpClosureCall, OpInterCall, OpClosureLECall, OpInterLECall:
|
2018-12-28 12:43:48 -08:00
|
|
|
// We must keep the race functions if there are any other call types.
|
|
|
|
|
return false
|
2018-06-27 11:40:24 -05:00
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
2020-06-12 13:48:26 -04:00
|
|
|
if isSameCall(sym, "runtime.racefuncenter") {
|
2021-02-04 16:42:35 -05:00
|
|
|
// TODO REGISTER ABI this needs to be cleaned up.
|
2020-08-11 13:19:57 -07:00
|
|
|
// If we're removing racefuncenter, remove its argument as well.
|
|
|
|
|
if v.Args[0].Op != OpStore {
|
2021-02-04 16:42:35 -05:00
|
|
|
if v.Op == OpStaticLECall {
|
|
|
|
|
// there is no store, yet.
|
|
|
|
|
return true
|
|
|
|
|
}
|
2020-08-11 13:19:57 -07:00
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
mem := v.Args[0].Args[2]
|
|
|
|
|
v.Args[0].reset(OpCopy)
|
|
|
|
|
v.Args[0].AddArg(mem)
|
|
|
|
|
}
|
2018-06-27 11:40:24 -05:00
|
|
|
return true
|
|
|
|
|
}
|
2018-10-09 22:55:36 -07:00
|
|
|
|
|
|
|
|
// symIsRO reports whether sym is a read-only global.
|
2025-03-29 19:49:25 +01:00
|
|
|
func symIsRO(sym Sym) bool {
|
2018-10-09 22:55:36 -07:00
|
|
|
lsym := sym.(*obj.LSym)
|
|
|
|
|
return lsym.Type == objabi.SRODATA && len(lsym.R) == 0
|
|
|
|
|
}
|
|
|
|
|
|
2020-04-23 13:28:14 -07:00
|
|
|
// symIsROZero reports whether sym is a read-only global whose data contains all zeros.
|
|
|
|
|
func symIsROZero(sym Sym) bool {
|
|
|
|
|
lsym := sym.(*obj.LSym)
|
|
|
|
|
if lsym.Type != objabi.SRODATA || len(lsym.R) != 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
for _, b := range lsym.P {
|
|
|
|
|
if b != 0 {
|
|
|
|
|
return false
|
|
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
2025-09-05 13:08:21 -07:00
|
|
|
// isFixedLoad returns true if the load can be resolved to fixed address or constant,
|
|
|
|
|
// and can be rewritten by rewriteFixedLoad.
|
|
|
|
|
func isFixedLoad(v *Value, sym Sym, off int64) bool {
|
2023-05-02 17:37:00 +00:00
|
|
|
lsym := sym.(*obj.LSym)
|
2025-09-05 13:08:21 -07:00
|
|
|
if (v.Type.IsPtrShaped() || v.Type.IsUintptr()) && lsym.Type == objabi.SRODATA {
|
|
|
|
|
for _, r := range lsym.R {
|
|
|
|
|
if (r.Type == objabi.R_ADDR || r.Type == objabi.R_WEAKADDR) && int64(r.Off) == off && r.Add == 0 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
2023-05-02 17:37:00 +00:00
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
return false
|
2023-05-02 17:37:00 +00:00
|
|
|
}
|
|
|
|
|
|
2025-09-05 13:08:21 -07:00
|
|
|
if strings.HasPrefix(lsym.Name, "type:") {
|
|
|
|
|
// Type symbols do not contain information about their fields, unlike the cases above.
|
|
|
|
|
// Hand-implement field accesses.
|
|
|
|
|
// TODO: can this be replaced with reflectdata.writeType and just use the code above?
|
|
|
|
|
|
|
|
|
|
t := (*lsym.Extra).(*obj.TypeInfo).Type.(*types.Type)
|
|
|
|
|
|
|
|
|
|
for _, f := range rttype.Type.Fields() {
|
|
|
|
|
if f.Offset == off && copyCompatibleType(v.Type, f.Type) {
|
|
|
|
|
switch f.Sym.Name {
|
2025-09-05 13:36:47 -07:00
|
|
|
case "Size_", "PtrBytes", "Hash", "Kind_":
|
2025-09-01 18:31:29 +08:00
|
|
|
return true
|
2025-09-05 13:08:21 -07:00
|
|
|
default:
|
|
|
|
|
// fmt.Println("unknown field", f.Sym.Name)
|
|
|
|
|
return false
|
2025-09-01 18:31:29 +08:00
|
|
|
}
|
|
|
|
|
}
|
|
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
|
|
|
|
|
if t.IsPtr() && off == rttype.PtrType.OffsetOf("Elem") {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
return false
|
2025-09-01 18:31:29 +08:00
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
|
2025-09-01 18:31:29 +08:00
|
|
|
return false
|
|
|
|
|
}
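The loop above resolves a load by matching its offset against the field offsets of the runtime type descriptor and only accepts the handful of fields it knows how to materialize as constants. The sketch below shows the same offset-matching pattern on a hypothetical, simplified descriptor layout; the typeHeader struct and its field offsets are illustrative stand-ins, not the real internal/abi layout:

package main

import (
	"fmt"
	"unsafe"
)

// typeHeader is a hypothetical, simplified stand-in for the runtime type
// descriptor; the real field set and offsets live in internal/abi.
type typeHeader struct {
	Size_    uintptr
	PtrBytes uintptr
	Hash     uint32
	Kind_    uint8
}

// fixedFieldAt mirrors the matching loop above: given a load offset into the
// descriptor, report which known field (if any) it reads.
func fixedFieldAt(off uintptr) (string, bool) {
	fields := map[string]uintptr{
		"Size_":    unsafe.Offsetof(typeHeader{}.Size_),
		"PtrBytes": unsafe.Offsetof(typeHeader{}.PtrBytes),
		"Hash":     unsafe.Offsetof(typeHeader{}.Hash),
		"Kind_":    unsafe.Offsetof(typeHeader{}.Kind_),
	}
	for name, fieldOff := range fields {
		if fieldOff == off {
			return name, true
		}
	}
	return "", false
}

func main() {
	fmt.Println(fixedFieldAt(unsafe.Offsetof(typeHeader{}.Hash))) // Hash true
	fmt.Println(fixedFieldAt(3))                                  // "" false
}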
|
2025-09-05 13:08:21 -07:00
|
|
|
|
|
|
|
|
// rewriteFixedLoad rewrites a load to a fixed address or constant, if isFixedLoad returns true.
|
|
|
|
|
func rewriteFixedLoad(v *Value, sym Sym, sb *Value, off int64) *Value {
|
|
|
|
|
b := v.Block
|
|
|
|
|
f := b.Func
|
|
|
|
|
|
2025-09-01 18:31:29 +08:00
|
|
|
lsym := sym.(*obj.LSym)
|
2025-09-05 13:08:21 -07:00
|
|
|
if (v.Type.IsPtrShaped() || v.Type.IsUintptr()) && lsym.Type == objabi.SRODATA {
|
|
|
|
|
for _, r := range lsym.R {
|
|
|
|
|
if (r.Type == objabi.R_ADDR || r.Type == objabi.R_WEAKADDR) && int64(r.Off) == off && r.Add == 0 {
|
|
|
|
|
if strings.HasPrefix(r.Sym.Name, "type:") {
|
|
|
|
|
// In case we're loading a type out of a dictionary, we need to record
|
|
|
|
|
// that the containing function might put that type in an interface.
|
|
|
|
|
// That information is currently recorded in relocations in the dictionary,
|
|
|
|
|
// but if we perform this load at compile time then the dictionary
|
|
|
|
|
// might be dead.
|
|
|
|
|
reflectdata.MarkTypeSymUsedInInterface(r.Sym, f.fe.Func().Linksym())
|
|
|
|
|
} else if strings.HasPrefix(r.Sym.Name, "go:itab") {
|
|
|
|
|
// Same, but if we're using an itab we need to record that the
|
|
|
|
|
// itab._type might be put in an interface.
|
|
|
|
|
reflectdata.MarkTypeSymUsedInInterface(r.Sym, f.fe.Func().Linksym())
|
2025-09-01 18:31:29 +08:00
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
v.reset(OpAddr)
|
|
|
|
|
v.Aux = symToAux(r.Sym)
|
|
|
|
|
v.AddArg(sb)
|
|
|
|
|
return v
|
2025-09-01 18:31:29 +08:00
|
|
|
}
|
|
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
base.Fatalf("fixedLoad data not known for %s:%d", sym, off)
|
2025-09-01 18:31:29 +08:00
|
|
|
}
|
|
|
|
|
|
2025-09-05 13:08:21 -07:00
|
|
|
if strings.HasPrefix(lsym.Name, "type:") {
|
|
|
|
|
// Type symbols do not contain information about their fields, unlike the cases above.
|
|
|
|
|
// Hand-implement field accesses.
|
|
|
|
|
// TODO: can this be replaced with reflectdata.writeType and just use the code above?
|
|
|
|
|
|
|
|
|
|
t := (*lsym.Extra).(*obj.TypeInfo).Type.(*types.Type)
|
|
|
|
|
|
2025-09-05 13:36:47 -07:00
|
|
|
ptrSizedOpConst := OpConst64
|
|
|
|
|
if f.Config.PtrSize == 4 {
|
|
|
|
|
ptrSizedOpConst = OpConst32
|
|
|
|
|
}
|
|
|
|
|
|
2025-09-05 13:08:21 -07:00
|
|
|
for _, f := range rttype.Type.Fields() {
|
|
|
|
|
if f.Offset == off && copyCompatibleType(v.Type, f.Type) {
|
|
|
|
|
switch f.Sym.Name {
|
2025-09-05 13:36:47 -07:00
|
|
|
case "Size_":
|
|
|
|
|
v.reset(ptrSizedOpConst)
|
|
|
|
|
v.AuxInt = int64(t.Size())
|
|
|
|
|
return v
|
|
|
|
|
case "PtrBytes":
|
|
|
|
|
v.reset(ptrSizedOpConst)
|
|
|
|
|
v.AuxInt = int64(types.PtrDataSize(t))
|
|
|
|
|
return v
|
2025-09-05 13:08:21 -07:00
|
|
|
case "Hash":
|
|
|
|
|
v.reset(OpConst32)
|
|
|
|
|
v.AuxInt = int64(types.TypeHash(t))
|
|
|
|
|
return v
|
2025-09-05 13:36:47 -07:00
|
|
|
case "Kind_":
|
|
|
|
|
v.reset(OpConst8)
|
|
|
|
|
v.AuxInt = int64(reflectdata.ABIKindOfType(t))
|
|
|
|
|
return v
|
2025-09-05 13:08:21 -07:00
|
|
|
default:
|
|
|
|
|
base.Fatalf("unknown field %s for fixedLoad of %s at offset %d", f.Sym.Name, lsym.Name, off)
|
|
|
|
|
}
|
2023-05-02 17:37:00 +00:00
|
|
|
}
|
|
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
|
|
|
|
|
if t.IsPtr() && off == rttype.PtrType.OffsetOf("Elem") {
|
|
|
|
|
elemSym := reflectdata.TypeLinksym(t.Elem())
|
|
|
|
|
reflectdata.MarkTypeSymUsedInInterface(elemSym, f.fe.Func().Linksym())
|
|
|
|
|
v.reset(OpAddr)
|
|
|
|
|
v.Aux = symToAux(elemSym)
|
|
|
|
|
v.AddArg(sb)
|
|
|
|
|
return v
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
base.Fatalf("fixedLoad data not known for %s:%d", sym, off)
|
2023-05-02 17:37:00 +00:00
|
|
|
}
|
2025-09-05 13:08:21 -07:00
|
|
|
|
|
|
|
|
base.Fatalf("fixedLoad data not known for %s:%d", sym, off)
|
2023-05-02 17:37:00 +00:00
|
|
|
return nil
|
|
|
|
|
}
|
|
|
|
|
|
2018-10-09 22:55:36 -07:00
|
|
|
// read8 reads one byte from the read-only global sym at offset off.
|
2025-03-29 19:49:25 +01:00
|
|
|
func read8(sym Sym, off int64) uint8 {
|
2018-10-09 22:55:36 -07:00
|
|
|
lsym := sym.(*obj.LSym)
|
2019-02-15 15:01:29 -05:00
|
|
|
if off >= int64(len(lsym.P)) || off < 0 {
|
2018-12-13 09:31:21 -08:00
|
|
|
// Invalid index into the global sym.
|
|
|
|
|
// This can happen in dead code, so we don't want to panic.
|
|
|
|
|
// Just return any value, it will eventually get ignored.
|
|
|
|
|
// See issue 29215.
|
|
|
|
|
return 0
|
|
|
|
|
}
|
2018-10-09 22:55:36 -07:00
|
|
|
return lsym.P[off]
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// read16 reads two bytes from the read-only global sym at offset off.
|
2025-03-29 19:49:25 +01:00
|
|
|
func read16(sym Sym, off int64, byteorder binary.ByteOrder) uint16 {
|
2018-10-09 22:55:36 -07:00
|
|
|
lsym := sym.(*obj.LSym)
|
cmd/compile: mark Lsyms as readonly earlier
The SSA backend has rules to read the contents of readonly Lsyms.
However, this rule was failing to trigger for many readonly Lsyms.
This is because the readonly attribute that was set on the Node.Name
was not propagated to its Lsym until the dump globals phase, after SSA runs.
To work around this phase ordering problem, introduce Node.SetReadonly,
which sets Node.Name.Readonly and also configures the Lsym
enough that SSA can use it.
This change also fixes a latent problem in the rewrite rule function,
namely that reads past the end of lsym.P were treated as entirely zero,
instead of merely requiring padding with trailing zeros.
This change also adds an amd64 rule needed to fully optimize
the results of this change. It would be better not to need this,
but the zero extension that should handle this for us
gets optimized away too soon (see #36897 for a similar problem).
I have not investigated whether other platforms also need new
rules to take full advantage of the new optimizations.
Compiled code for (interface{})(true) on amd64 goes from:
LEAQ type.bool(SB), AX
MOVBLZX ""..stmp_0(SB), BX
LEAQ runtime.staticbytes(SB), CX
ADDQ CX, BX
to
LEAQ type.bool(SB), AX
LEAQ runtime.staticbytes+1(SB), BX
Prior to this change, the readonly symbol rewrite rules
fired a total of 884 times during make.bash.
Afterwards they fire 1807 times.
file before after Δ %
cgo 4827832 4823736 -4096 -0.085%
compile 24907768 24895656 -12112 -0.049%
fix 3376952 3368760 -8192 -0.243%
pprof 14751700 14747604 -4096 -0.028%
total 120343528 120315032 -28496 -0.024%
Change-Id: I59ea52138276c37840f69e30fb109fd376d579ec
Reviewed-on: https://go-review.googlesource.com/c/go/+/220499
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
2020-02-16 17:00:52 -08:00
|
|
|
// lsym.P is written lazily.
|
|
|
|
|
// Bytes requested after the end of lsym.P are 0.
|
|
|
|
|
var src []byte
|
|
|
|
|
if 0 <= off && off < int64(len(lsym.P)) {
|
|
|
|
|
src = lsym.P[off:]
|
2018-10-09 22:55:36 -07:00
|
|
|
}
|
2020-02-16 17:00:52 -08:00
|
|
|
buf := make([]byte, 2)
|
|
|
|
|
copy(buf, src)
|
|
|
|
|
return byteorder.Uint16(buf)
|
2018-10-09 22:55:36 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// read32 reads four bytes from the read-only global sym at offset off.
|
2025-03-29 19:49:25 +01:00
|
|
|
func read32(sym Sym, off int64, byteorder binary.ByteOrder) uint32 {
|
2018-10-09 22:55:36 -07:00
|
|
|
lsym := sym.(*obj.LSym)
|
2020-02-16 17:00:52 -08:00
|
|
|
var src []byte
|
|
|
|
|
if 0 <= off && off < int64(len(lsym.P)) {
|
|
|
|
|
src = lsym.P[off:]
|
2018-10-09 22:55:36 -07:00
|
|
|
}
|
2020-02-16 17:00:52 -08:00
|
|
|
buf := make([]byte, 4)
|
|
|
|
|
copy(buf, src)
|
|
|
|
|
return byteorder.Uint32(buf)
|
2018-10-09 22:55:36 -07:00
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// read64 reads eight bytes from the read-only global sym at offset off.
|
2025-03-29 19:49:25 +01:00
|
|
|
func read64(sym Sym, off int64, byteorder binary.ByteOrder) uint64 {
|
2018-10-09 22:55:36 -07:00
|
|
|
lsym := sym.(*obj.LSym)
|
2020-02-16 17:00:52 -08:00
|
|
|
var src []byte
|
|
|
|
|
if 0 <= off && off < int64(len(lsym.P)) {
|
|
|
|
|
src = lsym.P[off:]
|
2018-10-09 22:55:36 -07:00
|
|
|
}
|
2020-02-16 17:00:52 -08:00
|
|
|
buf := make([]byte, 8)
|
|
|
|
|
copy(buf, src)
|
|
|
|
|
return byteorder.Uint64(buf)
|
2018-10-09 22:55:36 -07:00
|
|
|
}
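All of the readN helpers share one pattern: copy whatever bytes of lsym.P exist at the offset into a zero-initialized buffer of the full width and decode that, so reads that run past the end of the lazily written data see trailing zero bytes rather than panicking. A small standalone sketch of read32's behavior on a short byte slice:

package main

import (
	"encoding/binary"
	"fmt"
)

// read32Sketch mirrors read32: bytes past the end of data read as zero.
func read32Sketch(data []byte, off int64, byteorder binary.ByteOrder) uint32 {
	var src []byte
	if 0 <= off && off < int64(len(data)) {
		src = data[off:]
	}
	buf := make([]byte, 4)
	copy(buf, src) // copies at most len(src) bytes; the rest stay zero
	return byteorder.Uint32(buf)
}

func main() {
	data := []byte{0x01, 0x02, 0x03} // only 3 bytes of data
	fmt.Printf("%#x\n", read32Sketch(data, 0, binary.LittleEndian)) // 0x30201
	fmt.Printf("%#x\n", read32Sketch(data, 2, binary.LittleEndian)) // 0x3
	fmt.Printf("%#x\n", read32Sketch(data, 8, binary.LittleEndian)) // 0x0
}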
|
2020-01-30 10:17:01 -08:00
|
|
|
|
cmd/compile: convert 386 port to use addressing modes pass (take 2)
Retrying CL 222782, with a fix that will hopefully stop the random crashing.
The issue with the previous CL is that it does pointer arithmetic
in a way that may briefly generate an out-of-bounds pointer. If an
interrupt happens to occur in that state, the referenced object may
be collected incorrectly.
Suppose there was code that did s[x+c]. The previous CL had a rule
to the effect of ptr + (x + c) -> c + (ptr + x). But ptr+x is not
guaranteed to point to the same object as ptr. In contrast,
ptr+(x+c) is guaranteed to point to the same object as ptr, because
we would have already checked that x+c is in bounds.
For example, strconv.trim used to have this code:
MOVZX -0x1(BX)(DX*1), BP
CMPL $0x30, AL
After CL 222782, it had this code:
LEAL 0(BX)(DX*1), BP
CMPB $0x30, -0x1(BP)
An interrupt between those last two instructions could see BP pointing
outside the backing store of the slice involved.
It's really hard to actually demonstrate a bug. First, you need to
have an interrupt occur at exactly the right time. Then, there must
be no other pointers to the object in question. Since the interrupted
frame will be scanned conservatively, there can't even be a dead
pointer in another register or on the stack. (In the example above,
a bug can't happen because BX still holds the original pointer.)
Then, the object in question needs to be collected (or at least
scanned?) before the interrupted code continues.
This CL needs to handle load combining somewhat differently than CL 222782
because of the new restriction on arithmetic. That's the only real
difference (other than removing the bad rules) from that old CL.
This bug is also present in the amd64 rewrite rules, and we haven't
seen any crashing as a result. I will fix up that code similarly to
this one in a separate CL.
Update #37881
Change-Id: I5f0d584d9bef4696bfe89a61ef0a27c8d507329f
Reviewed-on: https://go-review.googlesource.com/c/go/+/225798
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-03-24 13:39:44 -07:00
|
|
|
// sequentialAddresses reports true if it can prove that x + n == y
|
|
|
|
|
func sequentialAddresses(x, y *Value, n int64) bool {
|
2022-08-05 14:01:57 +00:00
|
|
|
if x == y && n == 0 {
|
|
|
|
|
return true
|
|
|
|
|
}
|
2020-03-24 13:39:44 -07:00
|
|
|
if x.Op == Op386ADDL && y.Op == Op386LEAL1 && y.AuxInt == n && y.Aux == nil &&
|
|
|
|
|
(x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
|
|
|
|
|
x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
|
|
|
|
|
return true
|
|
|
|
|
}
|
|
|
|
|
if x.Op == Op386LEAL1 && y.Op == Op386LEAL1 && y.AuxInt == x.AuxInt+n && x.Aux == y.Aux &&
|
|
|
|
|
(x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
|
|
|
|
|
x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
|
|
|
|
|
return true
|
|
|
|
|
}
|
2020-03-27 22:03:33 -07:00
|
|
|
if x.Op == OpAMD64ADDQ && y.Op == OpAMD64LEAQ1 && y.AuxInt == n && y.Aux == nil &&
|
|
|
|
|
(x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
|
|
|
|
|
x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
|
2020-01-30 10:17:01 -08:00
|
|
|
return true
|
|
|
|
|
}
|
2020-03-27 22:03:33 -07:00
|
|
|
if x.Op == OpAMD64LEAQ1 && y.Op == OpAMD64LEAQ1 && y.AuxInt == x.AuxInt+n && x.Aux == y.Aux &&
|
|
|
|
|
(x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
|
|
|
|
|
x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
|
2020-01-30 10:17:01 -08:00
|
|
|
return true
|
|
|
|
|
}
|
2020-03-27 22:03:33 -07:00
|
|
|
return false
|
2020-01-30 10:17:01 -08:00
|
|
|
}
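Every case above follows the same shape: two addresses are sequential when they use the same base and index operands (in either order) and their constant displacements differ by exactly n. A very simplified standalone model of that rule, with a toy addr struct standing in for the SSA values:

package main

import "fmt"

// addr is a toy model of a LEA-style address: base + index + disp.
type addr struct {
	base, index string
	disp        int64
}

// sequential reports whether y == x + n, assuming both addresses use the
// same base/index pair (in either order), mirroring the rule above.
func sequential(x, y addr, n int64) bool {
	sameOperands := x.base == y.base && x.index == y.index ||
		x.base == y.index && x.index == y.base
	return sameOperands && y.disp == x.disp+n
}

func main() {
	x := addr{base: "p", index: "i", disp: 0}
	y := addr{base: "i", index: "p", disp: 8}
	fmt.Println(sequential(x, y, 8)) // true: same operands, 8 bytes apart
	fmt.Println(sequential(x, y, 4)) // false
}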
|
2020-06-15 14:43:02 -07:00
|
|
|
|
|
|
|
|
// flagConstant represents the result of a compile-time comparison.
|
|
|
|
|
// The sense of these flags does not necessarily represent the hardware's notion
|
|
|
|
|
// of a flags register - these are just a compile-time construct.
|
|
|
|
|
// We happen to match the semantics to those of arm/arm64.
|
|
|
|
|
// Note that these semantics differ from x86: the carry flag has the opposite
|
|
|
|
|
// sense on a subtraction!
|
2022-02-03 14:12:08 -05:00
|
|
|
//
|
|
|
|
|
// On amd64, C=1 represents a borrow, e.g. SBB on amd64 does x - y - C.
|
|
|
|
|
// On arm64, C=0 represents a borrow, e.g. SBC on arm64 does x - y - ^C.
|
|
|
|
|
// (because it does x + ^y + C).
|
|
|
|
|
//
|
2020-06-15 14:43:02 -07:00
|
|
|
// See https://en.wikipedia.org/wiki/Carry_flag#Vs._borrow_flag
|
|
|
|
|
type flagConstant uint8
|
|
|
|
|
|
|
|
|
|
// N reports whether the result of an operation is negative (high bit set).
|
|
|
|
|
func (fc flagConstant) N() bool {
|
|
|
|
|
return fc&1 != 0
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Z reports whether the result of an operation is 0.
|
|
|
|
|
func (fc flagConstant) Z() bool {
|
|
|
|
|
return fc&2 != 0
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// C reports whether an unsigned add overflowed (carry), or an
|
|
|
|
|
// unsigned subtract did not underflow (borrow).
|
|
|
|
|
func (fc flagConstant) C() bool {
|
|
|
|
|
return fc&4 != 0
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// V reports whether a signed operation overflowed or underflowed.
|
|
|
|
|
func (fc flagConstant) V() bool {
|
|
|
|
|
return fc&8 != 0
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func (fc flagConstant) eq() bool {
|
|
|
|
|
return fc.Z()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) ne() bool {
|
|
|
|
|
return !fc.Z()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) lt() bool {
|
|
|
|
|
return fc.N() != fc.V()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) le() bool {
|
|
|
|
|
return fc.Z() || fc.lt()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) gt() bool {
|
|
|
|
|
return !fc.Z() && fc.ge()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) ge() bool {
|
|
|
|
|
return fc.N() == fc.V()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) ult() bool {
|
|
|
|
|
return !fc.C()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) ule() bool {
|
|
|
|
|
return fc.Z() || fc.ult()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) ugt() bool {
|
|
|
|
|
return !fc.Z() && fc.uge()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) uge() bool {
|
|
|
|
|
return fc.C()
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func (fc flagConstant) ltNoov() bool {
|
|
|
|
|
return fc.lt() && !fc.V()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) leNoov() bool {
|
|
|
|
|
return fc.le() && !fc.V()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) gtNoov() bool {
|
|
|
|
|
return fc.gt() && !fc.V()
|
|
|
|
|
}
|
|
|
|
|
func (fc flagConstant) geNoov() bool {
|
|
|
|
|
return fc.ge() && !fc.V()
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func (fc flagConstant) String() string {
|
|
|
|
|
return fmt.Sprintf("N=%v,Z=%v,C=%v,V=%v", fc.N(), fc.Z(), fc.C(), fc.V())
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
type flagConstantBuilder struct {
|
|
|
|
|
N bool
|
|
|
|
|
Z bool
|
|
|
|
|
C bool
|
|
|
|
|
V bool
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
func (fcs flagConstantBuilder) encode() flagConstant {
|
|
|
|
|
var fc flagConstant
|
|
|
|
|
if fcs.N {
|
|
|
|
|
fc |= 1
|
|
|
|
|
}
|
|
|
|
|
if fcs.Z {
|
|
|
|
|
fc |= 2
|
|
|
|
|
}
|
|
|
|
|
if fcs.C {
|
|
|
|
|
fc |= 4
|
|
|
|
|
}
|
|
|
|
|
if fcs.V {
|
|
|
|
|
fc |= 8
|
|
|
|
|
}
|
|
|
|
|
return fc
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// Note: addFlags(x,y) != subFlags(x,-y) in some situations:
|
|
|
|
|
// - the results of the C flag are different
|
|
|
|
|
// - the results of the V flag when y==minint are different
|
|
|
|
|
|
|
|
|
|
// addFlags64 returns the flags that would be set from computing x+y.
|
|
|
|
|
func addFlags64(x, y int64) flagConstant {
|
|
|
|
|
var fcb flagConstantBuilder
|
|
|
|
|
fcb.Z = x+y == 0
|
|
|
|
|
fcb.N = x+y < 0
|
|
|
|
|
fcb.C = uint64(x+y) < uint64(x)
|
|
|
|
|
fcb.V = x >= 0 && y >= 0 && x+y < 0 || x < 0 && y < 0 && x+y >= 0
|
|
|
|
|
return fcb.encode()
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// subFlags64 returns the flags that would be set from computing x-y.
|
|
|
|
|
func subFlags64(x, y int64) flagConstant {
|
|
|
|
|
var fcb flagConstantBuilder
|
|
|
|
|
fcb.Z = x-y == 0
|
|
|
|
|
fcb.N = x-y < 0
|
|
|
|
|
fcb.C = uint64(y) <= uint64(x) // This code follows the arm carry flag model.
|
|
|
|
|
fcb.V = x >= 0 && y < 0 && x-y < 0 || x < 0 && y >= 0 && x-y >= 0
|
|
|
|
|
return fcb.encode()
|
|
|
|
|
}
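Under the arm-style convention used here, a subtraction sets C exactly when no borrow occurs (uint64(y) <= uint64(x)), and the signed comparison x < y corresponds to N != V. The standalone sketch below recomputes subFlags64-style flags for two inputs and derives the signed and unsigned less-than results from them:

package main

import "fmt"

type flags struct{ N, Z, C, V bool }

// subFlags mirrors subFlags64: the flags of x-y under the arm carry model.
func subFlags(x, y int64) flags {
	return flags{
		N: x-y < 0,
		Z: x-y == 0,
		C: uint64(y) <= uint64(x), // no borrow occurred
		V: x >= 0 && y < 0 && x-y < 0 || x < 0 && y >= 0 && x-y >= 0,
	}
}

// lt and ult mirror the flagConstant helpers above.
func (f flags) lt() bool  { return f.N != f.V } // signed x < y
func (f flags) ult() bool { return !f.C }       // unsigned x < y

func main() {
	f := subFlags(5, 7)
	fmt.Println(f.lt(), f.ult()) // true true: 5 < 7 both signed and unsigned
	f = subFlags(-1, 1)
	fmt.Println(f.lt(), f.ult()) // true false: -1 < 1 signed, but huge unsigned
}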
|
|
|
|
|
|
|
|
|
|
// addFlags32 returns the flags that would be set from computing x+y.
|
|
|
|
|
func addFlags32(x, y int32) flagConstant {
|
|
|
|
|
var fcb flagConstantBuilder
|
|
|
|
|
fcb.Z = x+y == 0
|
|
|
|
|
fcb.N = x+y < 0
|
|
|
|
|
fcb.C = uint32(x+y) < uint32(x)
|
|
|
|
|
fcb.V = x >= 0 && y >= 0 && x+y < 0 || x < 0 && y < 0 && x+y >= 0
|
|
|
|
|
return fcb.encode()
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// subFlags32 returns the flags that would be set from computing x-y.
|
|
|
|
|
func subFlags32(x, y int32) flagConstant {
|
|
|
|
|
var fcb flagConstantBuilder
|
|
|
|
|
fcb.Z = x-y == 0
|
|
|
|
|
fcb.N = x-y < 0
|
|
|
|
|
fcb.C = uint32(y) <= uint32(x) // This code follows the arm carry flag model.
|
|
|
|
|
fcb.V = x >= 0 && y < 0 && x-y < 0 || x < 0 && y >= 0 && x-y >= 0
|
|
|
|
|
return fcb.encode()
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// logicFlags64 returns flags set to the sign/zeroness of x.
|
|
|
|
|
// C and V are set to false.
|
|
|
|
|
func logicFlags64(x int64) flagConstant {
|
|
|
|
|
var fcb flagConstantBuilder
|
|
|
|
|
fcb.Z = x == 0
|
|
|
|
|
fcb.N = x < 0
|
|
|
|
|
return fcb.encode()
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
// logicFlags32 returns flags set to the sign/zeroness of x.
|
|
|
|
|
// C and V are set to false.
|
|
|
|
|
func logicFlags32(x int32) flagConstant {
|
|
|
|
|
var fcb flagConstantBuilder
|
|
|
|
|
fcb.Z = x == 0
|
|
|
|
|
fcb.N = x < 0
|
|
|
|
|
return fcb.encode()
|
|
|
|
|
}
|
2021-10-04 12:17:46 -07:00
func makeJumpTableSym(b *Block) *obj.LSym {
	s := base.Ctxt.Lookup(fmt.Sprintf("%s.jump%d", b.Func.fe.Func().LSym.Name, b.ID))
	// The jump table symbol is accessed only from the function symbol.
	s.Set(obj.AttrStatic, true)
	return s
}

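// For illustration only (the function symbol name "p.Foo" is hypothetical):
// a jump table attached to block 3 of that function is looked up as
// "p.Foo.jump3", so every table gets a distinct, function-local symbol:
//
//	fmt.Sprintf("%s.jump%d", "p.Foo", 3) // "p.Foo.jump3"
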
// canRotate reports whether the architecture supports
// rotates of integer registers with the given number of bits.
func canRotate(c *Config, bits int64) bool {
	if bits > c.PtrSize*8 {
		// Don't rewrite to rotates bigger than the machine word.
		return false
	}
	switch c.arch {
	case "386", "amd64", "arm64", "loong64", "riscv64":
		return true
	case "arm", "s390x", "ppc64", "ppc64le", "wasm":
		return bits >= 32
	default:
		return false
	}
}

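// canRotate is intended to be used as a condition when deciding whether to
// combine shifts into a rotate. A minimal sketch of such a check, with the
// width (32) chosen arbitrarily for illustration:
//
//	if canRotate(v.Block.Func.Config, 32) {
//		// safe to rewrite an OR-of-shifts pattern into a 32-bit rotate
//	}
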
cmd/compile: add late lower pass for last rules to run
Optimization rules usually have corresponding priorities: some need to
run first, some next, and some last, which produces the best code.
Currently our optimization rules have no priority, so this CL adds a
late lower pass that runs the rules that need to run last, such as
splitting unreasonable constant folding. This pass can be seen as a
second round of the lower pass.
For example:
func foo(a, b uint64) uint64 {
d := a+0x1234568
d1 := b+0x1234568
return d&d1
}
The code generated by the master branch:
0x0004 00004 ADD $19088744, R0, R2 // movz+movk+add
0x0010 00016 ADD $19088744, R1, R1 // movz+movk+add
0x001c 00028 AND R1, R2, R0
This is because the current constant folding optimization rules do not
take into account the range of constants, causing the constant to be
loaded repeatedly. This CL splits such unreasonable constant folding
in the late lower pass. With this CL the generated code:
0x0004 00004 MOVD $19088744, R2 // movz+movk
0x000c 00012 ADD R0, R2, R3
0x0010 00016 ADD R1, R2, R1
0x0014 00020 AND R1, R3, R0
This CL also adds constant folding optimization for ADDS instruction.
In addition, in order not to introduce codegen regressions, an
optimization rule is added to change the addition of a negative number
into a subtraction of a positive number.
go1 benchmarks:
name old time/op new time/op delta
BinaryTree17-8 1.22s ± 1% 1.24s ± 0% +1.56% (p=0.008 n=5+5)
Fannkuch11-8 1.54s ± 0% 1.53s ± 0% -0.69% (p=0.016 n=4+5)
FmtFprintfEmpty-8 14.1ns ± 0% 14.1ns ± 0% ~ (p=0.079 n=4+5)
FmtFprintfString-8 26.0ns ± 0% 26.1ns ± 0% +0.23% (p=0.008 n=5+5)
FmtFprintfInt-8 32.3ns ± 0% 32.9ns ± 1% +1.72% (p=0.008 n=5+5)
FmtFprintfIntInt-8 54.5ns ± 0% 55.5ns ± 0% +1.83% (p=0.008 n=5+5)
FmtFprintfPrefixedInt-8 61.5ns ± 0% 62.0ns ± 0% +0.93% (p=0.008 n=5+5)
FmtFprintfFloat-8 72.0ns ± 0% 73.6ns ± 0% +2.24% (p=0.008 n=5+5)
FmtManyArgs-8 221ns ± 0% 224ns ± 0% +1.22% (p=0.008 n=5+5)
GobDecode-8 1.91ms ± 0% 1.93ms ± 0% +0.98% (p=0.008 n=5+5)
GobEncode-8 1.40ms ± 1% 1.39ms ± 0% -0.79% (p=0.032 n=5+5)
Gzip-8 115ms ± 0% 117ms ± 1% +1.17% (p=0.008 n=5+5)
Gunzip-8 19.4ms ± 1% 19.3ms ± 0% -0.71% (p=0.016 n=5+4)
HTTPClientServer-8 27.0µs ± 0% 27.3µs ± 0% +0.80% (p=0.008 n=5+5)
JSONEncode-8 3.36ms ± 1% 3.33ms ± 0% ~ (p=0.056 n=5+5)
JSONDecode-8 17.5ms ± 2% 17.8ms ± 0% +1.71% (p=0.016 n=5+4)
Mandelbrot200-8 2.29ms ± 0% 2.29ms ± 0% ~ (p=0.151 n=5+5)
GoParse-8 1.35ms ± 1% 1.36ms ± 1% ~ (p=0.056 n=5+5)
RegexpMatchEasy0_32-8 24.5ns ± 0% 24.5ns ± 0% ~ (p=0.444 n=4+5)
RegexpMatchEasy0_1K-8 131ns ±11% 118ns ± 6% ~ (p=0.056 n=5+5)
RegexpMatchEasy1_32-8 22.9ns ± 0% 22.9ns ± 0% ~ (p=0.905 n=4+5)
RegexpMatchEasy1_1K-8 126ns ± 0% 127ns ± 0% ~ (p=0.063 n=4+5)
RegexpMatchMedium_32-8 486ns ± 5% 483ns ± 0% ~ (p=0.381 n=5+4)
RegexpMatchMedium_1K-8 15.4µs ± 1% 15.5µs ± 0% ~ (p=0.151 n=5+5)
RegexpMatchHard_32-8 687ns ± 0% 686ns ± 0% ~ (p=0.103 n=5+5)
RegexpMatchHard_1K-8 20.7µs ± 0% 20.7µs ± 1% ~ (p=0.151 n=5+5)
Revcomp-8 175ms ± 2% 176ms ± 3% ~ (p=1.000 n=5+5)
Template-8 20.4ms ± 6% 20.1ms ± 2% ~ (p=0.151 n=5+5)
TimeParse-8 112ns ± 0% 113ns ± 0% +0.97% (p=0.016 n=5+4)
TimeFormat-8 156ns ± 0% 145ns ± 0% -7.14% (p=0.029 n=4+4)
Change-Id: I3ced26e89041f873ac989586514ccc5ee09f13da
Reviewed-on: https://go-review.googlesource.com/c/go/+/425134
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Eric Fang <eric.fang@arm.com>
2022-08-17 10:01:17 +00:00
// isARM64bitcon reports whether a constant can be encoded into a logical instruction.
func isARM64bitcon(x uint64) bool {
	if x == 1<<64-1 || x == 0 {
		return false
	}
	// determine the period and sign-extend a unit to 64 bits
	switch {
	case x != x>>32|x<<32:
		// period is 64
		// nothing to do
	case x != x>>16|x<<48:
		// period is 32
		x = uint64(int64(int32(x)))
	case x != x>>8|x<<56:
		// period is 16
		x = uint64(int64(int16(x)))
	case x != x>>4|x<<60:
		// period is 8
		x = uint64(int64(int8(x)))
	default:
		// period is 4 or 2, always true
		// 0001, 0010, 0100, 1000 -- 0001 rotate
		// 0011, 0110, 1100, 1001 -- 0011 rotate
		// 0111, 1011, 1101, 1110 -- 0111 rotate
		// 0101, 1010 -- 01 rotate, repeat
		return true
	}
	return sequenceOfOnes(x) || sequenceOfOnes(^x)
}
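// Worked example (illustrative): x = 0x00ff00ff00ff00ff repeats every 16
// bits, so the switch above takes the "period is 16" case and sign-extends
// one 16-bit unit, leaving x = 0x00ff. That is a contiguous run of ones, so
// sequenceOfOnes(x) is true and the constant is encodable. By contrast,
// 0x1234 has scattered bits that no rotation turns into a single run:
//
//	isARM64bitcon(0x00ff00ff00ff00ff) // true
//	isARM64bitcon(0x1234)             // false
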
// sequenceOfOnes tests whether a constant is a sequence of ones in binary, with leading and trailing zeros.
func sequenceOfOnes(x uint64) bool {
	y := x & -x // lowest set bit of x. x is good iff x+y is a power of 2
	y += x
	return (y-1)&y == 0
}
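// Worked example (illustrative): for x = 0b0111000, x & -x isolates the
// lowest set bit (0b0001000); adding it to x carries through the run of
// ones, giving 0b1000000, a power of two, so the test passes. For
// x = 0b0101000 the ones are not contiguous, x+y = 0b0110000 is not a
// power of two, and the test fails:
//
//	sequenceOfOnes(0b0111000) // true
//	sequenceOfOnes(0b0101000) // false
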
// isARM64addcon reports whether v can be encoded as the immediate value in an ADD or SUB instruction.
func isARM64addcon(v int64) bool {
	/* uimm12 or uimm24? */
	if v < 0 {
		return false
	}
	if (v & 0xFFF) == 0 {
		v >>= 12
	}
	return v <= 0xFFF
}
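// Examples (illustrative): the encodable immediates are 12-bit values,
// optionally shifted left by 12 bits, so:
//
//	isARM64addcon(0xFFF)    // true: fits in 12 bits
//	isARM64addcon(0xABC000) // true: low 12 bits are zero, 0xABC fits after >>12
//	isARM64addcon(0x1001)   // false: needs 13 significant bits and is not shiftable
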
// setPos sets the position of v to pos, then returns true.
// Useful for setting the result of a rewrite's position to
// something other than the default.
func setPos(v *Value, pos src.XPos) bool {
	v.Pos = pos
	return true
}

// isNonNegative reports whether v is known to be greater or equal to zero.
// Note that this is pretty simplistic. The prove pass generates more detailed
// nonnegative information about values.
func isNonNegative(v *Value) bool {
	if !v.Type.IsInteger() {
		v.Fatalf("isNonNegative bad type: %v", v.Type)
	}
	// TODO: return true if !v.Type.IsSigned()
	// SSA isn't type-safe enough to do that now (issue 37753).
	// The checks below depend only on the pattern of bits.

	switch v.Op {
	case OpConst64:
		return v.AuxInt >= 0

	case OpConst32:
		return int32(v.AuxInt) >= 0

	case OpConst16:
		return int16(v.AuxInt) >= 0

	case OpConst8:
		return int8(v.AuxInt) >= 0

	case OpStringLen, OpSliceLen, OpSliceCap,
		OpZeroExt8to64, OpZeroExt16to64, OpZeroExt32to64,
		OpZeroExt8to32, OpZeroExt16to32, OpZeroExt8to16,
		OpCtz64, OpCtz32, OpCtz16, OpCtz8,
		OpCtz64NonZero, OpCtz32NonZero, OpCtz16NonZero, OpCtz8NonZero,
		OpBitLen64, OpBitLen32, OpBitLen16, OpBitLen8:
		return true

	case OpRsh64Ux64, OpRsh32Ux64:
		by := v.Args[1]
		return by.Op == OpConst64 && by.AuxInt > 0

	case OpRsh64x64, OpRsh32x64, OpRsh8x64, OpRsh16x64, OpRsh32x32, OpRsh64x32,
		OpSignExt32to64, OpSignExt16to64, OpSignExt8to64, OpSignExt16to32, OpSignExt8to32:
		return isNonNegative(v.Args[0])

	case OpAnd64, OpAnd32, OpAnd16, OpAnd8:
		return isNonNegative(v.Args[0]) || isNonNegative(v.Args[1])

	case OpMod64, OpMod32, OpMod16, OpMod8,
		OpDiv64, OpDiv32, OpDiv16, OpDiv8,
		OpOr64, OpOr32, OpOr16, OpOr8,
		OpXor64, OpXor32, OpXor16, OpXor8:
		return isNonNegative(v.Args[0]) && isNonNegative(v.Args[1])

		// We could handle OpPhi here, but the improvements from doing
		// so are very minor, and it is neither simple nor cheap.
	}
	return false
}
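// isNonNegative is the kind of predicate a rewrite rule would consult before
// applying a transformation that is only sound for nonnegative operands. A
// hypothetical guard, written out in plain Go:
//
//	if isNonNegative(x) {
//		// x/8 can be lowered to x>>3 without the usual
//		// negative-dividend fixup.
//	}
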
func rewriteStructLoad(v *Value) *Value {
	b := v.Block
	ptr := v.Args[0]
	mem := v.Args[1]

	t := v.Type
	args := make([]*Value, t.NumFields())
	for i := range args {
		ft := t.FieldType(i)
		addr := b.NewValue1I(v.Pos, OpOffPtr, ft.PtrTo(), t.FieldOff(i), ptr)
		args[i] = b.NewValue2(v.Pos, OpLoad, ft, addr, mem)
	}

	v.reset(OpStructMake)
	v.AddArgs(args...)
	return v
}

func rewriteStructStore(v *Value) *Value {
	b := v.Block
	dst := v.Args[0]
	x := v.Args[1]
	if x.Op != OpStructMake {
		base.Fatalf("invalid struct store: %v", x)
	}
	mem := v.Args[2]

	t := x.Type
	for i, arg := range x.Args {
		ft := t.FieldType(i)

		addr := b.NewValue1I(v.Pos, OpOffPtr, ft.PtrTo(), t.FieldOff(i), dst)
		mem = b.NewValue3A(v.Pos, OpStore, types.TypeMem, typeToAux(ft), addr, arg, mem)
	}

	return mem
}
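// Illustrative decomposition (the field offsets 0 and 8 assume a struct of
// two int64s; real offsets come from t.FieldOff): a whole-struct load
//
//	(Load <struct{ a, b int64 }> ptr mem)
//
// is rewritten by rewriteStructLoad into
//
//	(StructMake
//		(Load <int64> (OffPtr <*int64> [0] ptr) mem)
//		(Load <int64> (OffPtr <*int64> [8] ptr) mem))
//
// and rewriteStructStore performs the inverse, emitting one Store per field
// and threading the memory value through them.
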
// isDirectType reports whether v represents a type
// (a *runtime._type) whose value is stored directly in an
// interface (i.e., is pointer or pointer-like).
func isDirectType(v *Value) bool {
	return isDirectType1(v)
}

// v is a type
func isDirectType1(v *Value) bool {
	switch v.Op {
	case OpITab:
		return isDirectType2(v.Args[0])
	case OpAddr:
		lsym := v.Aux.(*obj.LSym)
		if lsym.Extra == nil {
			return false
		}
		if ti, ok := (*lsym.Extra).(*obj.TypeInfo); ok {
			return types.IsDirectIface(ti.Type.(*types.Type))
		}
	}
	return false
}

// v is an empty interface
func isDirectType2(v *Value) bool {
	switch v.Op {
	case OpIMake:
		return isDirectType1(v.Args[0])
	}
	return false
}

// isDirectIface reports whether v represents an itab
// (a *runtime._itab) for a type whose value is stored directly
// in an interface (i.e., is pointer or pointer-like).
func isDirectIface(v *Value) bool {
	return isDirectIface1(v, 9)
}

// v is an itab
func isDirectIface1(v *Value, depth int) bool {
	if depth == 0 {
		return false
	}
	switch v.Op {
	case OpITab:
		return isDirectIface2(v.Args[0], depth-1)
	case OpAddr:
		lsym := v.Aux.(*obj.LSym)
		if lsym.Extra == nil {
			return false
		}
		if ii, ok := (*lsym.Extra).(*obj.ItabInfo); ok {
			return types.IsDirectIface(ii.Type.(*types.Type))
		}
	case OpConstNil:
		// We can treat this as direct, because if the itab is
		// nil, the data field must be nil also.
		return true
	}
	return false
}

// v is an interface
func isDirectIface2(v *Value, depth int) bool {
	if depth == 0 {
		return false
	}
	switch v.Op {
	case OpIMake:
		return isDirectIface1(v.Args[0], depth-1)
	case OpPhi:
		for _, a := range v.Args {
			if !isDirectIface2(a, depth-1) {
				return false
			}
		}
		return true
	}
	return false
}
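// Illustrative shape (hypothetical SSA, names for exposition only): given
//
//	(ITab (IMake (Addr {itab symbol for (*T, io.Reader)} sb) data))
//
// isDirectIface walks ITab -> IMake -> Addr, finds the ItabInfo attached to
// the itab symbol, and answers via types.IsDirectIface for the concrete type
// *T, which is pointer-shaped and therefore stored directly. The fixed depth
// of 9 just bounds the recursion through nested IMake/Phi chains.
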
func bitsAdd64(x, y, carry int64) (r struct{ sum, carry int64 }) {
	s, c := bits.Add64(uint64(x), uint64(y), uint64(carry))
	r.sum, r.carry = int64(s), int64(c)
	return
}

func bitsMulU64(x, y int64) (r struct{ hi, lo int64 }) {
	hi, lo := bits.Mul64(uint64(x), uint64(y))
	r.hi, r.lo = int64(hi), int64(lo)
	return
}

func bitsMulU32(x, y int32) (r struct{ hi, lo int32 }) {
	hi, lo := bits.Mul32(uint32(x), uint32(y))
	r.hi, r.lo = int32(hi), int32(lo)
	return
}
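// These helpers let rewrite rules constant-fold carrying adds and widening
// multiplies; the signed parameter and result types only reinterpret the
// bits. Worked examples (illustrative):
//
//	bitsAdd64(-1, 1, 0)      // sum = 0, carry = 1 (the unsigned add wraps)
//	bitsMulU64(1<<32, 3)     // hi = 0, lo = 3<<32
//	bitsMulU32(1<<16, 1<<16) // hi = 1, lo = 0 (2^32 split into halves)
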
// flagify rewrites v which is (X ...) to (Select0 (Xflags ...)).
func flagify(v *Value) bool {
	var flagVersion Op
	switch v.Op {
	case OpAMD64ADDQconst:
		flagVersion = OpAMD64ADDQconstflags
	case OpAMD64ADDLconst:
		flagVersion = OpAMD64ADDLconstflags
	default:
		base.Fatalf("can't flagify op %s", v.Op)
	}
	inner := v.copyInto(v.Block)
	inner.Op = flagVersion
	inner.Type = types.NewTuple(v.Type, types.TypeFlags)
	v.reset(OpSelect0)
	v.AddArg(inner)
	return true
}
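// Illustrative rewrite shape: flagify turns
//
//	v = (ADDQconst [c] x)
//
// into
//
//	inner = (ADDQconstflags [c] x) // tuple: (result, flags)
//	v     = (Select0 inner)        // same result value as before
//
// so a later rule can pick up the flags half via Select1 instead of emitting
// a separate compare.
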
cmd/compile,runtime: remember idx+len for bounds check failure with less code
Currently we must put the index and length into specific registers so
we can call into the runtime to report a bounds check failure.
So a typical bounds check call is something like:
MOVD R3, R0
MOVD R7, R1
CALL runtime.panicIndex
or, if for instance the index is constant,
MOVD $7, R0
MOVD R9, R1
CALL runtime.panicIndex
Sometimes the MOVD can be avoided, if the value happens to be in the
right register already. But that's not terribly common, and doesn't
work at all for constants.
Let's get rid of those MOVD instructions. They pollute the instruction
cache and are almost never executed.
Instead, we'll encode in a PCDATA table where the runtime should find
the index and length. The table encodes, for each index and length,
whether it is a constant or in a register, and which register or
constant it is.
That way, we can avoid all those useless MOVDs. Instead, we can figure
out the index and length at runtime. This makes the bounds panic path
slower, but that's a good tradeoff.
We can encode registers 0-15 and constants 0-31. Anything outside that
range still needs to use an explicit instruction.
This CL is the foundation; follow-on CLs will move each architecture
to the new strategy.
Change-Id: I705c511e546e6aac59fed922a8eaed4585e96820
Reviewed-on: https://go-review.googlesource.com/c/go/+/682396
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2025-06-18 14:50:23 -07:00
// PanicBoundsC contains a constant for a bounds failure.
type PanicBoundsC struct {
	C int64
}

// PanicBoundsCC contains 2 constants for a bounds failure.
type PanicBoundsCC struct {
	Cx int64
	Cy int64
}

func (p PanicBoundsC) CanBeAnSSAAux() {
}
func (p PanicBoundsCC) CanBeAnSSAAux() {
}

func auxToPanicBoundsC(i Aux) PanicBoundsC {
	return i.(PanicBoundsC)
}
func auxToPanicBoundsCC(i Aux) PanicBoundsCC {
	return i.(PanicBoundsCC)
}

func panicBoundsCToAux(p PanicBoundsC) Aux {
	return p
}
func panicBoundsCCToAux(p PanicBoundsCC) Aux {
	return p
}
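// Illustrative round trip (the constant 7 is hypothetical): when a bounds
// failure involves a known-constant index or length, the rewrite records it
// in one of the aux types above and the runtime later recovers it via the
// PCDATA scheme described in the CL message:
//
//	aux := panicBoundsCToAux(PanicBoundsC{C: 7}) // index known to be 7
//	c := auxToPanicBoundsC(aux).C                // round-trips to 7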