We can't coalesce a non-WB store with a subsequent Move, as the
result of the store might be the source of the move.
There's a simple codegen test. Not sure how we might do a real test,
as all the repros I've come up with are very expensive and unreliable.
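A sketch of the hazard, in SSA-flavored pseudocode (names illustrative):

	Store p, x, mem        // non-WB store to *p
	Move  q, p, sz, mem'   // WB move whose source includes *p

If the Store were coalesced into the Move's write-barrier group, the
copy could observe the memory at p before the store has taken effect.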
Fixes #71228
Change-Id: If18bf181a266b9b90964e2591cd2e61a7168371c
Reviewed-on: https://go-review.googlesource.com/c/go/+/642197
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: David Chase <drchase@google.com>
Change-Id: I0858568d225daba1c318842dc0c9b5e652dff612
Reviewed-on: https://go-review.googlesource.com/c/go/+/526519
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
This lets us combine more write barriers, getting rid of some of the
test+branch and gcWriteBarrier* calls.
With the new write barriers, it's easy to add a few non-pointer writes
to the set of values written.
We allow up to 2 non-pointer writes between pointer writes. This is enough
for, for example, adjacent slice fields.
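As a hedged illustration (type and field names are made up):

	type T struct {
		a, b []byte // adjacent slice headers: ptr, len, cap, ptr, len, cap
	}

	func set(t *T, x, y []byte) {
		// Between the store of t.a's data pointer and the store of
		// t.b's data pointer sit exactly two non-pointer words
		// (t.a's len and cap), so all six stores can now run under
		// a single write-barrier check.
		t.a = x
		t.b = y
	}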
Fixes #62126
Change-Id: I872d0fa9cc4eb855e270ffc0223b39fde1723c4b
Reviewed-on: https://go-review.googlesource.com/c/go/+/521498
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Make more use of RecvParams and RecvParamsResults helper methods.
Also, correct misuse of Go spec terminology ("return" is a statement;
"result" is the class of parameters that appear in a function type).
Change-Id: I94807a747c494c9daa5441da7d9e3aea77aae33b
Reviewed-on: https://go-review.googlesource.com/c/go/+/521395
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
When the write barrier does several pointer writes under one
write barrier flag check, the line numbers aren't really correct.
The writes inside the write barrier have a confusing set of positions.
The loads of the old values are given the line number of the
corresponding store instruction, but the stores into the write buffer
are given the line number of the first store. Instead, give the
buffer stores the line numbers of their corresponding stores as well.
The writes at the merge point, which are the original writes and the
only ones that happen when the barrier is off, are currently all given
the line number of the first write. Instead give them their original
line number.
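A small illustration (line numbers hypothetical):

	p.x = a // line 10
	p.y = b // line 11

With this change, the buffer stores for p.y are attributed to line 11
rather than line 10, and the real stores at the merge point keep their
original lines 10 and 11.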
Change-Id: Id64820b707f45f07b0978f8d03c97900fdc4bc0b
Reviewed-on: https://go-review.googlesource.com/c/go/+/521499
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
User code is unlikely to be correct, but don't crash the compiler
when the offset of a pointer in an object is not a multiple of the
pointer size.
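A hedged sketch of the kind of invalid code involved (not necessarily
the original repro):

	import "unsafe"

	var buf [16]byte
	var p unsafe.Pointer

	func f() {
		// Store a pointer at offset 1, which is not a multiple of
		// the pointer size. Invalid Go; the point is only that the
		// compiler must not crash on it.
		*(*unsafe.Pointer)(unsafe.Pointer(&buf[1])) = p
	}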
Fixes #61187
Change-Id: Ie56bfcb38556c5dd6f702ae4ec1d4534c6acd420
Reviewed-on: https://go-review.googlesource.com/c/go/+/508555
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
ssagen.ssafn already holds the ir.Func, and ssa.Frontend.SetWBPos and
ssa.Frontend.Lsym are simple wrappers around parts of the ir.Func.
Expose the ir.Func through ssa.Frontend, allowing us to remove these
wrapper methods and allowing future access to additional features of the
ir.Func if needed.
While we're here, drop ssa.Frontend.Line, which is unused.
For #58298.
Change-Id: I30c4cbd2743e9ad991d8c6b388484a7d1e95f3ae
Reviewed-on: https://go-review.googlesource.com/c/go/+/484215
Auto-Submit: Michael Pratt <mpratt@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Michael Pratt <mpratt@google.com>
Have the write barrier call return a pointer to a buffer into which
the generated code records pointers that need write barrier treatment.
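Schematically, the generated code becomes something like (a sketch;
gcWriteBarrier2 stands for the buffer-returning call):

	if writeBarrier.enabled {
		buf := gcWriteBarrier2() // returns space for two pointers
		buf[0] = val             // new value being written
		buf[1] = *slot           // old value being overwritten
	}
	*slot = val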
Change-Id: I7871764298e0aa1513de417010c8d46b296b199e
Reviewed-on: https://go-review.googlesource.com/c/go/+/447781
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Bypass: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Previously, the write barrier calls themselves did the actual
writes to memory. Instead, move those writes out to a common location
that both the wb-enabled and wb-disabled code paths share.
This enables us to optimize the write barrier path without having
to worry about performing the actual writes.
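Schematically (a sketch):

	// Before: the write happened inside the call.
	if writeBarrier.enabled {
		gcWriteBarrier(slot, val) // performs the store itself
	} else {
		*slot = val
	}

	// After: both paths share one store.
	if writeBarrier.enabled {
		// write-barrier bookkeeping only
	}
	*slot = val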
Change-Id: Ia71ab651908ec124cc33141afb52e4ca19733ac6
Reviewed-on: https://go-review.googlesource.com/c/go/+/447780
Reviewed-by: Michael Knyszek <mknyszek@google.com>
TryBot-Bypass: Keith Randall <khr@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Move this knob from a binary-startup thing to a build-time thing.
This will enable follow-on optimizations to the write barrier.
Change-Id: Ic3323348621c76a7dc390c09ff55016b19c43018
Reviewed-on: https://go-review.googlesource.com/c/go/+/447778
Reviewed-by: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Instead of keeping track of in which blocks write barriers complete,
introduce a new op that marks the exact memory state where the
write barrier completes.
For future use. This allows us to move some of the write barrier code
to between the start of the merging block and the WBend marker.
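Schematically (a sketch):

	if writeBarrier.enabled {
		// write-barrier bookkeeping
	}
	// merge block:
	mem = WBend(mem) // exact memory state where the barrier completes
	// original stores follow, threaded through mem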
Change-Id: If3809b260292667d91bf0ee18d7b4d0eb1e929f0
Reviewed-on: https://go-review.googlesource.com/c/go/+/447777
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
This exposes mightBeHeapPointer and mightContainHeapPointer which
I plan to use in future CLs.
Change-Id: Ice4ae3b33127936868fff6cc045d8703d0b1a79a
Reviewed-on: https://go-review.googlesource.com/c/go/+/447776
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
As part of IsNewObject, we need to go from the SelectN[0] use of
a call to the SelectN[1] use of a call. The current code does this
by just looking through the block. If the block is very large,
this ends up taking quadratic time.
Instead, prepopulate a map from call -> SelectN[1] user of that call.
That lets us find the SelectN[1] user in constant time.
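A sketch of the prepass (simplified; Value/Op names as in package ssa):

	select1 := make(map[*Value]*Value) // call -> SelectN[1] user
	for _, b := range f.Blocks {
		for _, v := range b.Values {
			if v.Op == OpSelectN && v.AuxInt == 1 {
				select1[v.Args[0]] = v
			}
		}
	}
	// Finding the SelectN[1] user of a call is now O(1).
	mem := select1[call]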
Fixes #57657
Change-Id: Ie2e0b660af5c080314f4f17ba2838510a1147f9e
Reviewed-on: https://go-review.googlesource.com/c/go/+/461080
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Change-Id: I69065f8adf101fdb28682c55997f503013a50e29
Reviewed-on: https://go-review.googlesource.com/c/go/+/449757
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Joedian Reid <joedian@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Joedian Reid <joedian@golang.org>
Run-TryBot: Ian Lance Taylor <iant@google.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
We kind of have this mechanism already; this CL normalizes it and
uses it in a bunch of places. Previously, various places cached
slices only for the duration of a single function compilation. Now
we can reuse slices across a whole compiler run.
Use a sync.Pool of powers-of-two sizes. This lets us use not
too much memory, and avoid holding onto memory we're no longer
using when a GC happens.
There are a few different types we need, so generate the code for them.
Generics would be useful here, but we can't use generics in the
compiler because of bootstrapping.
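The allocator's shape, as a hedged sketch (the real code is generated,
has one variant per type, and avoids the interface allocation in Put):

	import (
		"math/bits"
		"sync"
	)

	var valueSlicePool [27]sync.Pool // one pool per power-of-two cap

	func allocValueSlice(n int) []*Value { // n > 0
		lg := bits.Len(uint(n - 1)) // smallest lg with 1<<lg >= n
		if s := valueSlicePool[lg].Get(); s != nil {
			return s.([]*Value)[:n]
		}
		return make([]*Value, n, 1<<lg)
	}

	func freeValueSlice(s []*Value) {
		for i := range s {
			s[i] = nil // don't retain pointers across reuse
		}
		lg := bits.Len(uint(cap(s)) - 1)
		valueSlicePool[lg].Put(s[:0])
	}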
Change-Id: I6cf37e7b7b2e802882aaa723a0b29770511ccd82
Reviewed-on: https://go-review.googlesource.com/c/go/+/444820
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
Add a new "coverage counter" classification for variables to be used
for storing code coverage counter values (somewhat in the same way
that we identify fuzzer counters). Tagging such variables allows us to
aggregate them in the linker, and to treat updates specially.
Updates #51430.
Change-Id: Ib49fb05736ffece98bcc2f7a7c37e991b7f67bbb
Reviewed-on: https://go-review.googlesource.com/c/go/+/401235
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
With the introduction of stack objects, VARKILL information is
no longer needed.
With stack objects, an object is dead when there are no more static
references to it, and the stack scanner can't find any live pointers
to it. VARKILL information isn't used to establish live ranges for
address-taken variables any more. In effect, the last static reference
*is* the VARKILL, and there's an additional dynamic liveness check
during stack scanning.
Next CL will actually rip out the VARKILL opcodes.
Change-Id: I030a2ab867445cf4e0e69397911f8a2e2f0ed07b
Reviewed-on: https://go-review.googlesource.com/c/go/+/419234
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
By using libFuzzer’s 8-bit counters instead of extra counters, the
coverage instrumentation in libFuzzer mode is improved in three ways:
1- 8-bit counters are supported on all platforms, including macOS and
Windows, with all relevant versions of libFuzzer, whereas extra
counters are a Linux-only feature that only recently received
support on Windows.
2- Newly covered blocks are now properly reported as new coverage by
libFuzzer, not only as new features.
3- The NeverZero strategy is used to ensure that coverage counters
never become 0 again after having been positive once. This resolves
issues encountered when fuzzing loops with iteration counts that
are multiples of 256 (e.g., larger powers of two).
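In effect, the counter update becomes (one common formulation, as a
sketch):

	c := counters[i]
	c++
	if c == 0 { // wrapped past 255
		c = 1 // NeverZero: stay sticky-positive
	}
	counters[i] = c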
Change-Id: I9021210d7fbffd07c891ad08750402ee91cb3df5
GitHub-Last-Rev: 9057e4b21d
GitHub-Pull-Request: golang/go#51318
Reviewed-on: https://go-review.googlesource.com/c/go/+/387334
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Since CL 391014, cmd/compile now requires the -p flag to be set by
the build system. This CL changes it to initialize LocalPkg.Path to the
provided path, rather than relying on writing out `"".` into object
files and expecting cmd/link to substitute them.
However, this actually involved a rather long tail of fixes. Many have
already been submitted, but a few notable ones that have to land
simultaneously with changing LocalPkg:
1. When compiling package runtime, there are really two "runtime"
packages: types.LocalPkg (the source package itself) and
ir.Pkgs.Runtime (the compiler's internal representation, for synthetic
references). Previously, these ended up creating separate link
symbols (`"".xxx` and `runtime.xxx`, respectively), but now they both
end up as `runtime.xxx`, which causes lsym collisions (notably
inittask and funcsyms).
2. test/codegen tests need to be updated to expect symbols to be named
`command-line-arguments.xxx` rather than `"".foo`.
3. The issue20014 test case is sensitive to the sort order of field
tracking symbols. In particular, the local package now sorts to its
natural place in the list, rather than to the front.
Thanks to David Chase for helping track down all of the fixes needed
for this CL.
Updates #51734.
Change-Id: Iba3041cf7ad967d18c6e17922fa06ba11798b565
Reviewed-on: https://go-review.googlesource.com/c/go/+/393715
Reviewed-by: David Chase <drchase@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
A run of lines that are indented with any number of spaces or tabs
format as a <pre> block. This commit fixes various doc comments
that format badly according to that (standard) rule.
For example, consider:
// - List item.
//   Second line.
// - Another item.
Because the - lines are unindented, this is actually two paragraphs
separated by a one-line <pre> block. This CL rewrites it to:
//  - List item.
//    Second line.
//  - Another item.
Today, that will format as a single <pre> block.
In a future release, we hope to format it as a bulleted list.
Various other minor fixes as well, all in preparation for reformatting.
For #51082.
Change-Id: I95cf06040d4186830e571cd50148be3bf8daf189
Reviewed-on: https://go-review.googlesource.com/c/go/+/384257
Trust: Russ Cox <rsc@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Don't add race detector instrumentation to the fuzzing counters,
allowing usage of -race without immediately triggering the
detector. Also fixes a minor race in contextReader.Read.
Fixes #48307
Change-Id: Idb2cfeaa4283f8a74473b4bac6cd68eed577e943
Reviewed-on: https://go-review.googlesource.com/c/go/+/351453
Trust: Roland Shoemaker <roland@golang.org>
Trust: Katie Hockman <katie@golang.org>
Run-TryBot: Roland Shoemaker <roland@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Katie Hockman <katie@golang.org>
For certain types of method wrappers we used to generate a tail
call. That was disabled in CL 307234 when register ABI is used,
because with the current IR it was difficult to generate a tail
call with the arguments in the right places. The problem was that
the IR does not contain a CALL-like node with arguments; instead,
it contains an OAS node that adjusts the receiver, then an
OTAILCALL node that just contains the target, but no argument
(with the assumption that the OAS node will put the adjusted
receiver in the right place). With register ABI, putting
arguments in registers is done in SSA. The assignment (OAS)
doesn't put the receiver in a register.
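For context, a hedged example of such a wrapper:

	type T struct{ x int }

	func (t T) M() int { return t.x }

	type S struct{ T }

	// synthesized wrapper, roughly:
	//	func (s S) M() int { return s.T.M() }
	// i.e. adjust the receiver, then (ideally) tail-call T.M.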
This CL changes the IR of a tail call to take an actual OCALL
node. Specifically, a tail call is represented as
OTAILCALL (OCALL target args...)
This way, the call target and args are connected through the OCALL
node. So the call can be analyzed in SSA and the args can be passed
in the right places.
(Alternatively, we could have OTAILCALL node directly take the
target and the args, without the OCALL node. Using an OCALL node is
convenient as there are existing code that processes OCALL nodes
which do not need to be changed. Also, a tail call is similar to
ORETURN (OCALL target args...), except it doesn't preserve the
frame. I did the former but I'm open to change.)
The SSA representation is similar. Previously, the IR lowers to
a Store the receiver then a BlockRetJmp which jumps to the target
(without putting the arg in register). Now we use a TailCall op,
which takes the target and the args. The call expansion pass and
the register allocator handle TailCall pretty much like a
StaticCall, doing the right ABI analysis and putting the args
in the right places. (Args other than the receiver are already in
the right places. For register args it generates no code for them.
For stack args it currently generates a self copy; I'll work on
optimizing that out.) BlockRetJmp is still used, signaling it is a
tail call. The actual call is made in the TailCall op so
BlockRetJmp generates no code (we could use BlockExit if we like).
This slightly reduces binary size:
	          old       new
	cmd/go    14003088  13953936
	cmd/link   6275552   6271456
Change-Id: I2d16d8d419fe1f17554916d317427383e17e27f0
Reviewed-on: https://go-review.googlesource.com/c/go/+/350145
Trust: Cherry Mui <cherryyz@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: David Chase <drchase@google.com>
Currently, write barriers aren't emitted for global addresses, but
they are emitted for addresses that are offsets from global addresses.
This CL changes IsGlobalAddr to recognize offsets of global addresses
as globals too, removing write barriers for staticuint64s-based
addresses. The logic added is the same as used in IsStackAddr.
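A simplified sketch of the recognizer after this change:

	func IsGlobalAddr(v *Value) bool {
		for v.Op == OpOffPtr {
			v = v.Args[0] // look through offsets, e.g. &staticuint64s[x]
		}
		return v.Op == OpAddr && v.Args[0].Op == OpSB
	}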
Updates #37612
Change-Id: I537579f85b9ad02987d94f3ee0b4508b90097959
Reviewed-on: https://go-review.googlesource.com/c/go/+/342129
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Trust: Michael Knyszek <mknyszek@google.com>
Run-TryBot: Cherry Mui <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
If we are assigning a global address to an object that is
immediately returned from runtime.newobject, we omit the write
barrier because we know that both the source (static address) and
the destination (zeroed memory) do not need to be tracked by the
GC. Currently, the code that matches runtime.newobject's result
is specific to ABI0 layout. Update the code to work with register
ABI as well.
Change-Id: I7ab0833c6f745329271881ee4169956928a3a948
Reviewed-on: https://go-review.googlesource.com/c/go/+/308709
Trust: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
When the write barrier pass emits typedmemmove/typedmemclr calls,
even if the arguments are in registers, we still need to leave space
for the spill slots. Count that space. Otherwise, when the callee
spills arguments it may clobber locals on the caller's frame.
Change-Id: I5326943427feaf66cab7658a5bef55b3baf5d345
Reviewed-on: https://go-review.googlesource.com/c/go/+/307824
Trust: Cherry Zhang <cherryyz@google.com>
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: David Chase <drchase@google.com>
Old code was always ABI0; new code tracks the default.
This may cause some write barrier removals to fail to fire.
Updates #40724.
Change-Id: I656bdd5511c5bd6ee6e021999e30d842a6b9f0a5
Reviewed-on: https://go-review.googlesource.com/c/go/+/305671
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Remove more now-redundant code, methods, and types
associated with the transition to register ABI.
Repaired some broken comments.
Tested on link-register architectures (arm64, ppc64le)
Updates #40724.
Change-Id: Ie8433f6d38ec4a1d9705f22dcb596f267d81f203
Reviewed-on: https://go-review.googlesource.com/c/go/+/304189
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
More cleanup to remove unnecessary parts of AuxCall.
Passed testing on arm64 (a link-register architecture)
in addition to amd64 so very likely okay.
(Gratuitously updated the commit message to see if it will work
correctly this time.)
Updates #40724
Change-Id: Iaece952ceb5066149a5d32aaa14b36755f26bb8e
Reviewed-on: https://go-review.googlesource.com/c/go/+/303433
Trust: David Chase <drchase@google.com>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
StaticLECall (multiple value in +mem, multiple value result +mem) ->
StaticCall (multiple register value in +mem,
multiple register-sized-value result +mem) ->
ARCH CallStatic (multiple register value in +mem,
multiple register-sized-value result +mem)
But the architecture-dependent stuff is indifferent to whether
it is mem->mem or (mem)->(mem) until Prog generation.
Deal with OpSelectN -> Prog in ssagen/ssa.go and others, as they
appear.
For #40724.
Change-Id: I1d0436f6371054f1881862641d8e7e418e4a6a16
Reviewed-on: https://go-review.googlesource.com/c/go/+/293391
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Plumb abi information into ssa/ssagen for plain calls
and plain functions (not methods). Does not extend all the
way through the compiler (yet).
One test is disabled because the plumbing extends far enough to break it.
Normalized all the compiler's register args TODOs to
// TODO(register args) ...
For #40724.
Change-Id: I0173a4579f032ac3c9db3aef1749d40da5ea01ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/293389
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Thanks to package reorganizing, we can remove types.TypeLinkSym by
simply having its only callers use reflectdata.TypeLinksym directly.
Passes toolstash -cmp.
Change-Id: I5bc5dbb6bf0664af43ae5130cfe1f19bd23b2bfe
Reviewed-on: https://go-review.googlesource.com/c/go/+/280644
Trust: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
This change incorporates the decision that it should be possible to
run call expansion relatively late in the optimization chain, so that
(1) calls themselves can be exposed to useful optimizations
(2) the effect of selectors on aggregates is seen at the rewrite,
so that assignment of parts into registers is less complicated
(at least I hope it works that way).
That means that selectors feeding into SelectN need to be processed,
and Make* feeding into call parameters need to be processed.
This does however require that call expansion run before decompose
builtins.
This doesn't yet handle rewrites of strings, slices, interfaces,
and complex numbers.
Passes run.bash and race.bash
Change-Id: I71ff23d3c491043beb30e926949970c4f63ef1a4
Reviewed-on: https://go-review.googlesource.com/c/go/+/245133
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Still needs to generate the calls that will need lowering.
Change-Id: Ifd4e510193441a5e27c462c1f1d704f07bf6dec3
Reviewed-on: https://go-review.googlesource.com/c/go/+/242359
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Extend use of AuxCall
Change-Id: I68b6d9bad09506532e1415fd70d44cf6c15b4b93
Reviewed-on: https://go-review.googlesource.com/c/go/+/239081
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
This is prerequisite to moving call expansion later into SSA,
and probably a good idea anyway. Passes tests.
This is the first minimal CL that does a 1-for-1 substitution
of *ssa.AuxCall for *obj.LSym. Next step (next CL) is to make
this change for all calls so that additional information can
be stored in AuxCall.
Change-Id: Ia3a7715648fd9fb1a176850767a726e6f5b959eb
Reviewed-on: https://go-review.googlesource.com/c/go/+/237680
Trust: David Chase <drchase@google.com>
Run-TryBot: David Chase <drchase@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
Right now we just prevent such types from being on the heap. This CL
makes it so they cannot appear on the stack either. The distinction
between heap and stack is pretty vague at the language level (e.g. it
is affected by -N), and we don't need the flexibility anyway.
Once go:notinheap types cannot be in either place, we don't need to
consider pointers to such types to be pointers, at least according to
the garbage collector and stack copying. (This is the big win of this
CL, in my opinion.)
The distinction between HasPointers and HasHeapPointer no longer
exists. There is only HasPointers.
This CL is cleanup before possible use of go:notinheap to fix #40954.
Update #13386
Change-Id: Ibd895aadf001c0385078a6d4809c3f374991231a
Reviewed-on: https://go-review.googlesource.com/c/go/+/249917
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
CL 220499 started marking readonly syms as SRODATA earlier,
so we can use that in the writebarrier pass now.
Passes toolstash-check.
Change-Id: Ic4d49714b8bffbe03c8e9a75ca96df4475bae732
Reviewed-on: https://go-review.googlesource.com/c/go/+/230559
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
This:
* Simplifies and shortens the generated code for rewrite rules.
* Shrinks cmd/compile by 86k (0.4%) and makes it easier to compile.
* Removes the stmt boundary code wrangling from Value.reset,
in favor of doing it in the one place where it actually does some work,
namely the writebarrier pass. (This was ascertained by inspecting the
code for cases in which notStmtBoundary values were generated.)
Passes toolstash-check -all.
Change-Id: I25671d4c4bbd772f235195d11da090878ea2cc07
Reviewed-on: https://go-review.googlesource.com/c/go/+/221421
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: David Chase <drchase@google.com>
We're allowed to remove a write barrier when both the old
value in memory and the new value we're writing are not heap pointers.
Improve both those checks a little bit.
A pointer is known to not be a heap pointer if it is read from
read-only memory. This sometimes happens for loads of pointers
from string constants in read-only memory.
Do a better job of tracking which parts of memory are known to be
zero. Before we just kept track of a range of offsets in the most
recently allocated object. For code that initializes the new object's
fields in a nonstandard order, that tracking is imprecise. Instead,
keep a bit map of the first 64 words of that object, so we can track
precisely what we know to be zeroed.
The new scheme is only precise up to the first 512 bytes of the object.
After that, we'll use write barriers unnecessarily. Hopefully most
initializers of large objects will use typedmemmove, which does only one
write barrier check for the whole initialization.
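Schematically (a sketch; mask is a hypothetical helper):

	var zeroes uint64 // bit i set => word i of the object still zero

	// every store into the object clears the bits it overwrites:
	zeroes &^= mask(off, size)

	// a pointer write needs no barrier when the old slot is known
	// zero and the new value is not a heap pointer:
	if off/ptrSize < 64 && zeroes&(1<<(off/ptrSize)) != 0 && !mightBeHeapPointer(v) {
		// omit the write barrier
	}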
Fixes #34723
Update #21561
Change-Id: Idf6e1b7d525042fb67961302d4fc6f941393cac8
Reviewed-on: https://go-review.googlesource.com/c/go/+/199558
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
We used to be more aggressive about constant folding in the frontend,
handling expressions that the Go spec does not consider constant;
e.g., "(*int)(unsafe.Pointer(uintptr(200)))". However, that led to a
lot of subtle Go spec conformance issues, so we've since abandoned
that effort (CL 151320), leaving SSA to handle these cases instead.
As such, the only time we now end up with pointer-typed OLITERALs is
when "nil" is implicitly converted to a pointer-typed variable.
Instead of representing these OLITERALs with a CTINT of 0, we can
just use CTNIL.
Saves a few bytes of memory and lines of code.
Change-Id: Ibc5c756b992fdc89c3bdaf4fda3aa352e8e2b101
Reviewed-on: https://go-review.googlesource.com/c/go/+/193437
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
The compiler can crash if the compiled code tries to
unconditionally read from a nil pointer. This should cause
the generated binary to panic, not the compiler.
Fixes #33438
Change-Id: Ic8fa89646d6968e2cc4e27da0ad9286662f8bc49
Reviewed-on: https://go-review.googlesource.com/c/go/+/188760
Reviewed-by: Austin Clements <austin@google.com>
It is possible for a "volatile" value (one that can be clobbered
by preparing the args of a call) to be used in multiple write barrier
calls. We used to copy the volatile value right before each call.
But this doesn't work if the value is used a second time, after
the first call where it is already clobbered. Copy it before
emitting any call.
Fixes #30977.
Change-Id: Iedcc91ad848d5ded547bf37a8359c125d32e994c
Reviewed-on: https://go-review.googlesource.com/c/go/+/168677
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
When initializing a new object, we're often writing
1) to a location that doesn't have a pointer to a heap object
2) a pointer that doesn't point to a heap object
When both those conditions are true, we can avoid the write barrier.
This CL detects case 1 by looking for writes to known-zeroed
locations. The results of runtime.newobject are zeroed, and we
perform a simple tracking of which parts of that object are written so
we can determine what part remains zero at each write.
This CL detects case 2 by looking for addresses of globals (including
the types and itabs which are used in interfaces) and for nil pointers.
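A hedged example of code that now needs no write barriers:

	var g1, g2 int

	func f() *[2]*int {
		p := new([2]*int) // runtime.newobject: memory starts zeroed
		p[0] = &g1        // old value known zero; &g1 is not a heap pointer
		p[1] = &g2        // likewise: no barrier needed
		return p
	}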
Makes cmd/go 0.3% smaller. Some particular cases, like the slice
literal in #29573, can get much smaller.
TODO: we can remove actual zero writes also with this mechanism.
Update #29573
Change-Id: Ie74a3533775ea88da0495ba02458391e5db26cb9
Reviewed-on: https://go-review.googlesource.com/c/go/+/156363
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
CL 142497 renamed "statictmp_N" to ".stmp_N", but missed an instance
that was added by CL 151498 (submitted in the window between when CL
142497 was reviewed/tested and when it was later rebased/merged).
Change-Id: I597ee59dfa40821c7af2881b47e06f84a8140ec8
Reviewed-on: https://go-review.googlesource.com/c/163877
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>