Introduce a new aux type, "ARM64ConditionalParams", for CCMP instructions; it contains the condition code, the NZCV flags, and a constant along with an indicator of whether the constant form is used.
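As a rough sketch (the field names and types below are hypothetical, not the compiler's actual definition), the aux value carries something like:

package ssa

// Hypothetical layout; the real field names and types may differ.
type ARM64ConditionalParams struct {
    cond     int8  // condition code under which the compare executes
    nzcv     uint8 // NZCV flags to produce when the condition does not hold
    constant int64 // immediate operand for the constant form of CCMP
    isConst  bool  // whether this CCMP uses the immediate (constant) form
}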
Updates #71268
Change-Id: I322a6cb7077c9a2c4415893c5eb7ff7692d5a2de
Reviewed-on: https://go-review.googlesource.com/c/go/+/698037
Reviewed-by: Mark Freeman <markfreeman@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
With the previous CL in place, we can now pretty easily optimize a few
more loads from abi.Type. I've done Size_, PtrBytes, and Kind_, which
are easily calculated.
Among std/cmd, this rule fires a number of times:
75 abi.Type field Kind_
50 abi.PtrType field Elem
14 abi.Type field Hash
4 abi.Type field Size_
2 abi.Type field PtrBytes
The other ones that show up when compiling std/cmd are TFlag and GCData,
but these are not trivially calculated. Doing TFlag would probably be a
decent help given it's often used in things like switches where
statically knowing the kind could eliminate a bunch of dead code.
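As a hedged illustration (not code from this CL), this is the kind of source that benefits when Kind_ is known statically:

package p

import "reflect"

// Once instantiated with a concrete T (and inlined), the Kind() load can
// fold to a constant and the dead switch arms can be removed.
func describe[T any]() string {
    switch reflect.TypeFor[T]().Kind() {
    case reflect.Slice:
        return "slice"
    case reflect.Map:
        return "map"
    default:
        return "other"
    }
}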
Change-Id: Ic7fd2113fa7479af914d06916edbca60cc71819f
Reviewed-on: https://go-review.googlesource.com/c/go/+/701298
Reviewed-by: Mark Freeman <markfreeman@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Many CLs have worked with this bit of code, extending the cases more and
more for various fixed addresses and constants. But it's getting
duplicative, and the current setup doesn't make it very clear that
something like isFixed32 _only_ works for a specific element within the
type data.
This CL rewrites these rules (pun unintended) into a single set of
rewrite rules with shared logic, which stops hardcoding offsets and type
compatibility checks.
This should open the door to optimizing further type:... field loads,
most of which can be computed entirely statically but are not yet,
outside of Hash and Elem.
Passes toolstash -cmp.
Change-Id: I754138ce1785c6036eada9ed53f0ce2ad2a58b63
Reviewed-on: https://go-review.googlesource.com/c/go/+/701297
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Mark Freeman <markfreeman@google.com>
Reviewed-by: Florian Lehner <lehner.florian86@gmail.com>
CL 700336 let the compiler see into the abi.PtrType.Elem field,
but forgot the MarkTypeSymUsedInInterface call needed to ensure that
the symbol is marked as referenced.
I am not sure how to write a test for this, but I noticed it when
working on further optimizations: after I "fixed" this issue,
toolstash -cmp confusingly failed, with diffs like:
@@ -70582,6 +70582,7 @@ reflect.groupAndSlotOf<1> STEXT size=696 args=0x20 locals=0x1e0 funcid=0x0 align
rel 3+0 t=R_USEIFACE type:*reflect.rtype<0>+0
rel 3+0 t=R_USEIFACE type:*reflect.rtype<0>+0
rel 3+0 t=R_USEIFACE type:*uint64<0>+0
+ rel 3+0 t=R_USEIFACE type:uint64<0>+0
rel 71+0 t=R_CALLIND +0
rel 92+4 t=R_PCREL go:itab.*reflect.rtype,reflect.Type<0>+0
rel 114+4 t=R_CALL reflect.(*rtype).ptrTo<1>+0
Updates #75203
Change-Id: Ib8de8a32aeb8a7ea6fcf5d728a2e4944ef227ab2
Reviewed-on: https://go-review.googlesource.com/c/go/+/701296
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Keith Randall <khr@google.com>
This CL makes the generated code for reflect.TypeFor as simple as an
intrinsic function.
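For example (an illustrative use, not a test from this CL):

package p

import "reflect"

func typeOfInt() reflect.Type {
    // With this CL the generated code is essentially a single load of the
    // static type descriptor for int, as if TypeFor were an intrinsic.
    return reflect.TypeFor[int]()
}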
Fixes #75203
Change-Id: I7bb48787101f07e77ab5c583292e834c28a028d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/700336
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Allows rewrite rules using oneBit to be made more compact.
Change-Id: I986715f77db5b548759d809fe668e1893048f25c
Reviewed-on: https://go-review.googlesource.com/c/go/+/699295
Reviewed-by: Youlin Feng <fengyoulin@live.com>
Commit-Queue: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This reverts commit 4e182db5fc (CL 695196),
which is itself a revert of
ec9e1176c3 (CL 678620).
So this CL is exactly the same as CL 678620, but with a regalloc fix
(CL 696035) submitted first.
Change-Id: I743ab32fa3aa6ef3e1b2b6751a2ef4519139057c
Reviewed-on: https://go-review.googlesource.com/c/go/+/696016
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This reverts commit ec9e1176c3 (CL 678620).
Reason for revert: causing regalloc to get into an infinite loop
Change-Id: Ie53c58c6126804af6d6883ea4acdcfb632a172bd
Reviewed-on: https://go-review.googlesource.com/c/go/+/695196
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
This reverts commit cd55f86b8d (CL 681937)
Reason for revert: still causing compiler failures on Google test code
Change-Id: I5cd482fd607fd060a523257082d48821b5f965d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/695016
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Fixes #74076
Change-Id: Icc67b3d4e342f329584433bd1250c56ae8f5a73d
Reviewed-on: https://go-review.googlesource.com/c/go/+/690635
Reviewed-by: Alan Donovan <adonovan@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Commit-Queue: Alan Donovan <adonovan@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Alan Donovan <adonovan@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
If the struct is a bunch of 0-sized fields and one pointer field.
Fixes #74092
Change-Id: I87c5d162c8c9fdba812420d7f9d21de97295b62c
Reviewed-on: https://go-review.googlesource.com/c/go/+/681937
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
And use the generic version instead.
While at it, also correct the corresponding rules to use the logXu
variants instead of logX, following discussion in CL 689815.
Change-Id: Iba85d14ff0e26d45a126764e7bd5702586358d23
Reviewed-on: https://go-review.googlesource.com/c/go/+/692917
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
By calling isUnsignedPowerOfTwo instead of duplicating the same checks.
Change-Id: I1e29d3b7eda1bc8773fcd25728d8f508ae633ac9
Reviewed-on: https://go-review.googlesource.com/c/go/+/692916
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Since these functions cast the input to uint64, the result is always
non-negative. The condition should be changed to a comparison with zero,
making it clearer to the reader and opening room for future
simplification using the generic isUnsignedPowerOfTwo function.
This change is separated out so it's easier to bisect if any problems
come up.
Change-Id: Ibec28c2590f4c52caa36384b710d526459725e49
Reviewed-on: https://go-review.googlesource.com/c/go/+/692915
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Just using isUnsignedPowerOfTwo and log32u is enough.
Change-Id: I93d49ab71c6245d05f6507adbcb9ef2a696e75d6
Reviewed-on: https://go-review.googlesource.com/c/go/+/691476
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
By calling logXu instead of duplicating the same ones.
Change-Id: Ide7a3ce072a6abafe1979f0158000457d90645c3
Reviewed-on: https://go-review.googlesource.com/c/go/+/691475
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
The previous algorithm was incorrect, as it reused the dominatedByCall
slice without resetting it. It also used the depth fields even though
they were not yet calculated.
Also, clean up a lot of the loop detector code that we never use.
Always compute depths. It is cheap.
Update #71868
Not really sure how to test this. As it is just an advisory bit,
nothing really goes wrong when the result is incorrect.
Change-Id: Ic0ae87a4d3576554831252d88b05b058ca68af41
Reviewed-on: https://go-review.googlesource.com/c/go/+/680775
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
Currently we must put the index and length into specific registers so
we can call into the runtime to report a bounds check failure.
So a typical bounds check call is something like:
MOVD R3, R0
MOVD R7, R1
CALL runtime.panicIndex
or, if for instance the index is constant,
MOVD $7, R0
MOVD R9, R1
CALL runtime.panicIndex
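As a rough source-level illustration (hypothetical example, not from this CL), a constant-index access like the following ends in such a call when the check fails:

package p

func get(s []int) int {
    // If len(s) <= 7, the compiler branches to a call of runtime.panicIndex,
    // passing the constant index 7 and the slice length.
    return s[7]
}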
Sometimes the MOVD can be avoided, if the value happens to be in the
right register already. But that's not terribly common, and doesn't
work at all for constants.
Let's get rid of those MOVD instructions. They pollute the instruction
cache and are almost never executed.
Instead, we'll encode in a PCDATA table where the runtime should find
the index and length. The table encodes, for each index and length,
whether it is a constant or in a register, and which register or
constant it is.
That way, we can avoid all those useless MOVDs. Instead, we can figure
out the index and length at runtime. This makes the bounds panic path
slower, but that's a good tradeoff.
We can encode registers 0-15 and constants 0-31. Anything outside that
range still needs to use an explicit instruction.
This CL is the foundation; follow-on CLs will move each architecture
to the new strategy.
Change-Id: I705c511e546e6aac59fed922a8eaed4585e96820
Reviewed-on: https://go-review.googlesource.com/c/go/+/682396
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Change-Id: I89719b94de74a32402d02309515dffc4989484db
Reviewed-on: https://go-review.googlesource.com/c/go/+/681575
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@google.com>
CL 622236 forgot to check that the mask was also a 32 bit rotate mask. Add
a modified version of isPPC64WordRotateMask which validates that the mask
is contiguous and fits inside a uint32.
I don't think this is possible when merging SRDconst; the first check
should always reject such combines. But, be extra careful and do it there
too.
Fixes #73153
Change-Id: Ie95f74ec5e7d89dc761511126db814f886a7a435
Reviewed-on: https://go-review.googlesource.com/c/go/+/679775
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Jayanth Krishnamurthy <jayanth.krishnamurthy@ibm.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
x += *p
We want to do this with a single load+add operation on amd64.
The tricky part is that we don't want to combine if there are
other uses of x after this instruction.
Implement a simple detector that seems to capture a common situation -
x += *p is in a loop, and the other use of x is after loop exit.
In that case, it does not hurt to do the load+add combo.
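A hedged sketch of the shape the detector looks for (illustrative, not from this CL's tests):

package p

func sum(ps []*int64) int64 {
    var x int64
    for _, p := range ps {
        x += *p // candidate for a fused load+add inside the loop
    }
    return x // the only other use of x is here, after loop exit
}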
Change-Id: I466174cce212e78bde83f908cc1f2752b560c49c
Reviewed-on: https://go-review.googlesource.com/c/go/+/672957
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
OpMove is faster for small moves of fixed size.
For safety, we have to rewrite the Move rewrite rules a bit so that
all the loads are done before any stores happen.
Also use an 8-byte move instead of a 16-byte move if the tail is
at most 8 bytes.
Change-Id: I7f6c7496ac6d5eb2e0706fd59ca4b5d797c51101
Reviewed-on: https://go-review.googlesource.com/c/go/+/672997
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
So we can avoid using a TEST where it isn't needed.
Currently only implemented for ADD{Q,L}const.
Change-Id: Ia9c4c69bb6033051a45cfd3d191376c7cec9d423
Reviewed-on: https://go-review.googlesource.com/c/go/+/669875
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@golang.org>
Use an automatic algorithm to generate strength reduction code.
You give it all the linear combination (a*x+b*y) instructions in your
architecture, and it figures out the rest.
Just amd64 and arm64 for now.
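As a hedged illustration of the idea (not the generator's actual output), a constant multiply can be rebuilt from linear combinations:

package p

func times10(x int64) int64 {
    // 5*x is one a*x+b*y-style instruction (e.g. LEA on amd64, ADD with a
    // shifted operand on arm64); doubling it gives 10*x with no multiply.
    y := x + 4*x
    return y + y
}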
Fixes #67575
Change-Id: I35c69382bebb1d2abf4bb4e7c43fd8548c6c59a1
Reviewed-on: https://go-review.googlesource.com/c/go/+/626998
Reviewed-by: Jakub Ciolek <jakub@ciolek.dev>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
The full 64x64->128 multiply comes up when using bits.Mul64.
The 64x64->64+overflow multiply comes up in unsafe.Slice when using
a constant length.
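Illustrative source for both shapes (hedged example, not from this CL):

package p

import (
    "math/bits"
    "unsafe"
)

func demo(x, y uint64, p *int64) ([]int64, uint64, uint64) {
    hi, lo := bits.Mul64(x, y) // full 64x64->128 multiply
    s := unsafe.Slice(p, 16)   // constant length: size check uses a 64x64->64 multiply with overflow
    return s, hi, lo
}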
Change-Id: I298515162ca07d804b2d699d03bc957ca30a4ebc
Reviewed-on: https://go-review.googlesource.com/c/go/+/667175
Reviewed-by: Junyang Shao <shaojunyang@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Improve the compiler's store-to-load forwarding optimization by relaxing the
type comparison condition. Instead of requiring exact type equality (CMPeq),
we now use copyCompatibleType which allows forwarding between compatible
types where safe.
Fix several size comparison bugs in the nested store patterns. Previously,
we were comparing the size of the outer store with the load type,
rather than comparing with the size of the actual store being forwarded
from.
Skip OpConvert in dead store elimination to help get rid of dead stores such
as zeroing slices. OpConvert, like OpInlMark, doesn't really use the memory.
This optimization is particularly beneficial for code that creates slices with
computed pointers, such as the runtime's heapBitsSlice function, where
intermediate calculations were previously causing the compiler to miss
store-to-load forwarding opportunities.
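A hedged sketch of the kind of pattern involved (heapBitsSlice itself lives in the runtime; this stand-in only illustrates the shape):

package p

import "unsafe"

// Building a slice from a computed pointer stores the header fields and then
// reloads them; with copyCompatibleType the forwarding no longer requires the
// stored and loaded pointer types to be exactly equal.
func bitsSlice(base unsafe.Pointer, off uintptr, n int) []uintptr {
    p := (*uintptr)(unsafe.Add(base, off))
    return unsafe.Slice(p, n)
}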
Local sweet run result on an x86_64 laptop:
│ Orig.res │ Hopt.res │
│ sec/op │ sec/op vs base │
BiogoIgor-8 5.303 ± 1% 5.322 ± 1% ~ (p=0.190 n=10)
BiogoKrishna-8 7.894 ± 1% 7.828 ± 2% ~ (p=0.190 n=10)
BleveIndexBatch100-8 2.257 ± 1% 2.248 ± 2% ~ (p=0.529 n=10)
EtcdPut-8 30.12m ± 1% 30.03m ± 1% ~ (p=0.796 n=10)
EtcdSTM-8 127.1m ± 1% 126.2m ± 0% -0.74% (p=0.023 n=10)
GoBuildKubelet-8 52.21 ± 0% 52.05 ± 1% ~ (p=0.063 n=10)
GoBuildKubeletLink-8 4.342 ± 1% 4.305 ± 0% -0.85% (p=0.000 n=10)
GoBuildIstioctl-8 43.33 ± 0% 43.24 ± 0% -0.22% (p=0.015 n=10)
GoBuildIstioctlLink-8 4.604 ± 1% 4.598 ± 0% ~ (p=0.063 n=10)
GoBuildFrontend-8 15.33 ± 0% 15.29 ± 0% ~ (p=0.143 n=10)
GoBuildFrontendLink-8 740.0m ± 1% 737.7m ± 1% ~ (p=0.912 n=10)
GopherLuaKNucleotide-8 9.590 ± 1% 9.656 ± 1% ~ (p=0.165 n=10)
MarkdownRenderXHTML-8 96.97m ± 1% 97.26m ± 2% ~ (p=0.105 n=10)
Tile38QueryLoad-8 335.9µ ± 1% 335.6µ ± 1% ~ (p=0.481 n=10)
geomean 1.336 1.333 -0.22%
Change-Id: I031552623e6d5a3b1b5be8325e6314706e45534f
Reviewed-on: https://go-review.googlesource.com/c/go/+/662075
Reviewed-by: Carlos Amedee <carlos@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Carlos Amedee <carlos@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Change-Id: I0a3ce2e823697eee5bb5e7d5ea0ef025132c0689
Reviewed-on: https://go-review.googlesource.com/c/go/+/661655
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Make ssa.disjoint call ssa.disjointTypes to disambiguate Values based on
their types. Only one type-based rule is employed: a Type can't alias
with a pointer (https://pkg.go.dev/unsafe#Pointer).
Fixes #70488
Change-Id: I5a7e75292c2b6b5a01fb9048e3e2360e31dbcdd9
Reviewed-on: https://go-review.googlesource.com/c/go/+/632176
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@google.com>
If we call slicebytetostring immediately (with no intervening writes)
before calling map access or delete functions with the resulting
string as the key, then we can just use the ptr/len of the
slicebytetostring argument as the key. This avoids an allocation.
Fixes #44898
Update #71132
There's old code in cmd/compile/internal/walk/order.go that handles
some of these cases.
1. m[string(b)]
2. s := string(b); m[s]
3. m[[2]string{string(b1),string(b2)}]
The old code handled cases 1&3. The new code handles cases 1&2.
We'll leave the old code around to keep 3 working, although it seems
not terribly common.
Case 2 happens particularly after inlining, so it is pretty common.
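Hedged example of case 2 (illustrative, not from this CL's tests):

package p

func lookup(m map[string]int, b []byte) int {
    s := string(b) // no intervening writes to b before the map access
    return m[s]    // the lookup can use b's ptr/len directly; no allocation
}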
Change-Id: I8913226ca79d2c65f4e2bd69a38ac8c976a57e43
Reviewed-on: https://go-review.googlesource.com/c/go/+/640656
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
We can just use == if the interface is direct.
Fixes #70738
Change-Id: Ia9a644791a370fec969c04c42d28a9b58f16911f
Reviewed-on: https://go-review.googlesource.com/c/go/+/635435
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
CL 621357 introduced new generic lowering rules which caused
several shift-related codegen test failures.
Add new rules to fix the test regressions, and clean up tests
which are changed but not regressed. Some CLRLSLDI tests are
removed as they no longer test the CLRLSLDI rules.
Fixes #70003
Change-Id: I1ecc5a7e63ab709a4a0cebf11fa078d5cf164034
Reviewed-on: https://go-review.googlesource.com/c/go/+/622236
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
The SSA backend currently only handles structs with up to 4 fields. Thus,
there are different operations corresponding to the number of fields of
the struct.
This CL generalizes these with just one OpStructMake, allowing struct
types with an arbitrary number of fields.
However, ssa.MaxStruct is still kept as-is; a future CL will
increase this value to optimize large structs.
Updates #24416
Change-Id: I192ffbea881186693584476b5639394e79be45c5
Reviewed-on: https://go-review.googlesource.com/c/go/+/611075
Reviewed-by: Keith Randall <khr@golang.org>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
Change-Id: I50a19bd971176598bf8e4ef86ec98f008abe245c
Reviewed-on: https://go-review.googlesource.com/c/go/+/615198
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Change-Id: I097b53e9f13de6ff6eb18ae2261842b097f26390
Reviewed-on: https://go-review.googlesource.com/c/go/+/615197
Auto-Submit: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
They are already methods on an arm64-specific type, so they don't
need to have arm64-specific names.
Change-Id: I2be29907f9892891d88d52cced043ca248aa4e08
Reviewed-on: https://go-review.googlesource.com/c/go/+/615196
Auto-Submit: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Now that we're bootstrapping from a toolchain that has min/max builtins.
Update #64751
Change-Id: I63eedf3cca00f56f62ca092949cb2dc61db03361
Reviewed-on: https://go-review.googlesource.com/c/go/+/610355
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This can be done efficiently with few instructions.
This also adds MULHDUCC for further codegen improvement.
Change-Id: I06320ba4383a679341b911a237a360ef07b19168
Reviewed-on: https://go-review.googlesource.com/c/go/+/605975
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Archana Ravindar <aravinda@redhat.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
After newobject, we don't need to write zeroes to initialize the
object. It has already been zeroed by the allocator.
This is already handled in most cases, but because we run builtin
decomposition after the opt pass, we don't handle cases where the zero
of a compound builtin is being written. Improve the zero detector to
handle those cases.
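A minimal sketch of the kind of case described (assuming a slice field; the actual report may involve a different compound type):

package p

type T struct {
    s []int
    n int
}

func newT() *T {
    t := new(T) // runtime.newobject already returns zeroed memory
    t.s = nil   // writes the zero of a compound builtin (a slice); can be elided
    return t
}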
Fixes #68845
Change-Id: If3dde2e304a05e5a6a6723565191d5444b334bcc
Reviewed-on: https://go-review.googlesource.com/c/go/+/605255
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Auto-Submit: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>
Handles a lot more cases where constant ranges can eliminate
various (mostly bounds failure) paths.
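A hedged example of a bounds-failure path that constant-range reasoning can remove:

package p

func firstOctet(s []byte, i int) byte {
    i &= 7 // the prove pass learns i is in [0, 8)
    if len(s) >= 8 {
        return s[i] // the bounds-failure path can be eliminated
    }
    return 0
}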
Fixes #66826
Fixes #66692
Fixes #48213
Update #57959
TODO: remove constant logic from poset code, no longer needed.
Change-Id: Id196436fcd8a0c84c7d59c04f93bd92e26a0fd7e
Reviewed-on: https://go-review.googlesource.com/c/go/+/599096
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Knyszek <mknyszek@google.com>
v = ... compute some value, which zeros top 32 bits ...
w = zero-extend v
We want to remove the zero-extension operation, as it doesn't do anything.
But if v is typed as a signed value, and it gets spilled/restored, it
might be re-sign-extended upon restore. So the zero-extend isn't actually
a NOP when there might be calls or other reasons to spill in between v and w.
Fixes#68227
Change-Id: I3b30b8e56c7d70deac1fb09d2becc7395acbadf8
Reviewed-on: https://go-review.googlesource.com/c/go/+/595675
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Joedian Reid <joedian@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
RLWINM does not clear the upper 32 bits of the target register if
the mask wraps around (e.g. 0xF000000F). Don't elide MOVWZreg for
such masks. All other usage clears the upper 32 bits.
Fixes #67844.
Change-Id: I11b89f1da9ae077624369bfe2bf25e9b7c9b79bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/590896
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
This allows more effective conversion of rotate and mask opcodes
into their CC equivalents, while simplifying the first lowering
pass.
This was removed before the latelower pass was introduced to fold
more cases of compare against zero. Add ANDconst to push the
conversion of ANDconst to ANDCCconst into latelower with the other
CC opcodes.
This also requires introducing RLDICLCC to prevent regressions
when ANDconst is converted to RLDICL then to RLDICLCC and back
to ANDCCconst when possible.
Change-Id: I9e5f9c99fbefa334db18c6c152c5f967f3ff2590
Reviewed-on: https://go-review.googlesource.com/c/go/+/586160
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Carlos Amedee <carlos@golang.org>