Currently, the order pass desugars map assignment operations like
m[k] op= r
into
m[k] = m[k] op r
which in turn is transformed during walk into:
tmp := *mapaccess(m, k)
tmp = tmp op r
*mapassign(m, k) = tmp
However, this is suboptimal, as we could instead produce just:
*mapassign(m, k) op= r
One complication, though, is that if "r == 0", then "m[k] /= r" and
"m[k] %= r" will panic, and they need to do so *before* calling
mapassign; otherwise we may insert a new zero-value element into the map.
It would be spec compliant to just emit the "r != 0" check before
calling mapassign (see #23735), but currently these checks aren't
generated until SSA construction. For now, it's simpler to continue
desugaring /= and %= into two map indexing operations.
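For illustration, a minimal sketch (not from the CL) of the user-visible
invariant that keeping the two-operation desugaring for /= and %= preserves:

    package main

    func main() {
        m := map[string]int{}
        r := 0

        defer func() {
            recover() // "integer divide by zero"
            // The failed statement must not have inserted "k" as a side
            // effect: the panic has to happen before mapassign runs.
            if _, ok := m["k"]; ok {
                panic(`m["k"] was inserted despite the panic`)
            }
            println("ok:", len(m) == 0)
        }()

        m["k"] /= r
    }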
Fixes #23661.
Change-Id: I46e3739d9adef10e92b46fdd78b88d5aabe68952
Reviewed-on: https://go-review.googlesource.com/91557
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Add helper methods that validate n.Op and convert to/from the
appropriate type.
Notably, there was a lot of code in walk.go that thought setting
Etype=1 on an OADDR node affected escape analysis.
Passes toolstash-check.
TBR=marvin
Change-Id: Ieae7c67225c1459c9719f9e6a748a25b975cf758
Reviewed-on: https://go-review.googlesource.com/99535
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Instead of creating a new &nodfp expression for every recover() call,
or a new nodpc variable for every function instrumented by the race
detector, this CL introduces two new uintptr-typed pseudo-variables
callerSP and callerPC. These pseudo-variables act just like calls to
the runtime's getcallersp() and getcallerpc() functions.
For consistency, change runtime.gorecover's builtin stub's parameter
type from "*int32" to "uintptr".
Passes toolstash-check, but toolstash-check -race fails because of
register allocator changes.
Change-Id: I985d644653de2dac8b7b03a28829ad04dfd4f358
Reviewed-on: https://go-review.googlesource.com/99416
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
We already require expressions to have been typechecked before
reaching walk. Moreover, all untyped expressions should have been
converted to their default type by walk.
However, in practice, we've been somewhat sloppy and inconsistent
about ensuring this. In particular, a lot of AST rewrites ended up
leaving untyped bool expressions scattered around. These are likely
harmless in practice, but they seem worth cleaning up.
The two most common cases addressed by this CL are:
1) When generating OIF and OFOR nodes, we would often typecheck the
conditional expression, but not apply defaultlit to force it to the
expression's default type.
2) When rewriting string comparisons into more fundamental primitives,
we were simply overwriting r.Type with the desired type, which didn't
propagate the type to nested subexpressions. These are fixed by
utilizing finishcompare, which correctly handles this (and is already
used by other comparison lowering rewrites).
Lastly, walkexpr is extended to assert that it's not called on untyped
expressions.
Fixes #23834.
Change-Id: Icbd29648a293555e4015d3b06a95a24ccbd3f790
Reviewed-on: https://go-review.googlesource.com/98337
Reviewed-by: Robert Griesemer <gri@golang.org>
There were only two large classes of use for these variables:
1) Testing "funcdepth != 0" or "funcdepth > 0", which is equivalent to
checking "Curfn != nil".
2) In oldname, detecting whether a closure variable has been created
for the current function, which can be handled by instead testing
"n.Name.Curfn != Curfn".
Lastly, merge funcstart into funchdr, since it's only called once, and
it better matches up with funcbody now.
Passes toolstash-check.
Change-Id: I8fe159a9d37ef7debc4cd310354cea22a8b23394
Reviewed-on: https://go-review.googlesource.com/99076
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Previously, for slow map key types (i.e., any type other than a 32-bit
or 64-bit plain memory type), we would rewrite
defer delete(m, k)
into
ktmp := k
defer delete(m, &ktmp)
However, if the defer statement was inside a loop, we would end up
reusing the same ktmp value for all of the deferred deletes.
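An illustrative reproducer (hypothetical; the key is an array type so the
slow-key path applies):

    package main

    func main() {
        m := map[[2]string]int{
            {"a", "x"}: 1,
            {"b", "y"}: 2,
            {"c", "z"}: 3,
        }
        func() {
            for k := range m {
                // With a single ktmp shared across iterations, every
                // deferred delete would see the last key written to ktmp,
                // so only one entry would actually be removed.
                defer delete(m, k)
            }
        }()
        println(len(m)) // must print 0
    }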
We already rewrite
defer print(x, y, z)
into
defer func(a1, a2, a3) {
print(a1, a2, a3)
}(x, y, z)
This CL generalizes this rewrite to also apply for slow map deletes.
This could be extended to apply even more generally to other builtins,
but as discussed on #24259, there are cases where we must *not* do
this (e.g., "defer recover()"). However, if we elect to do this more
generally, this CL should still make that easier.
Lastly, while here, fix a few issues in wrapCall (née walkprintfunc):
1) lookupN appends the generation number to the symbol anyway, so "%d"
was being literally included in the generated function names.
2) walkstmt will be called when the function is compiled later anyway,
so no need to do it now.
Fixes #24259.
Change-Id: I70286867c64c69c18e9552f69e3f4154a0fc8b04
Reviewed-on: https://go-review.googlesource.com/99017
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
When recursively calling walkexpr, r.Type is still the untyped value.
It then sometimes recursively calls finishcompare, which complains that
you can't compare the resulting expression to that untyped value.
Updates #23834.
Change-Id: I6b7acd3970ceaff8da9216bfa0ae24aca5dee828
Reviewed-on: https://go-review.googlesource.com/97856
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Previously, finishcompare just used SetTypecheck, but this didn't
recursively update any untyped bool typed subexpressions. This CL
changes it to call typecheck, which correctly handles this.
Also cleaned up some outdated logic-simplification code.
Updates #23834
Change-Id: Ic7f92d2a77c2eb74024ee97815205371761c1c90
Reviewed-on: https://go-review.googlesource.com/97035
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Rename all map implementation and test files to use "map"
as a file name prefix instead of "hashmap" for the implementation
and "map" for the test file names.
Change-Id: I7b317c1f7a660b95c6d1f1a185866f2839e69446
Reviewed-on: https://go-review.googlesource.com/90336
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
When loading multiple elements of an array into a single register,
make sure we treat them as unsigned. When treated as signed, the
upper bits might all be set, causing the shift-or combo to clobber
the values higher in the register.
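A hypothetical illustration of the failure mode (not the testcase from the
issue): if the sign-extended first element floods the upper bits of the
combined register on both sides, a real difference in a later element can
be masked.

    package main

    func main() {
        a := [4]int8{-1, 0, 0, 0}
        b := [4]int8{-1, 1, 0, 0}
        // If the elements are merged into one register with sign-extending
        // loads, the -1 in element 0 sets all the upper bits and clobbers
        // the merged copies of elements 1-3, so the comparison could
        // wrongly report the arrays as equal.
        println(a == b) // must print false
    }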
Fixes #23719.
Change-Id: Ic87da03e9bd0fe2c60bb214b99f846e4e9446052
Reviewed-on: https://go-review.googlesource.com/92335
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ilya Tocar <ilya.tocar@intel.com>
Found a few functions in cmd/compile that aren't used.
Change-Id: I53957dae6f1a645feb8b95383f0f050964b4f7d4
Reviewed-on: https://go-review.googlesource.com/79975
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
The signatures of the mapassign_fast* routines need to distinguish
the pointerness of their key argument. If the affected routines
suspend partway through, the object pointed to by the key might
get garbage collected because the key is typed as a uint{32,64}.
This is not a problem for mapaccess or mapdelete because the key
in those situations does not live beyond the call involved. If the
object referenced by the key is garbage collected prematurely, the
code still works fine. Even if that object is subsequently reallocated,
it can't be written to the map in time to affect the lookup/delete.
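A hedged sketch of the kind of code at risk (illustrative only; the real
hazard requires the fast-path assignment to be suspended mid-call while a
GC runs):

    package main

    import "runtime"

    func main() {
        m := map[*int]string{}
        for i := 0; i < 1000; i++ {
            k := new(int)
            *k = i
            // On 64-bit platforms this assignment uses a fast-path
            // mapassign routine. If that routine's signature types the key
            // as a plain uint64, the GC cannot see the pointer while the
            // routine is suspended, and the freshly allocated *int could
            // be collected (or its memory reused) before it lands in the map.
            m[k] = "v"
            runtime.GC()
        }
        println(len(m))
    }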
Fixes #22781
Change-Id: I0bbbc5e9883d5ce702faf4e655348be1191ee439
Reviewed-on: https://go-review.googlesource.com/79018
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Austin Clements <austin@google.com>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
The linker will refuse to work on objects larger than
2e9 bytes (see issue #9862 for why).
With this change, the compiler gives a useful error
message explaining this, instead of leaving it to the
linker to give a cryptic message later.
Fixes #1700.
Change-Id: I3933ce08ef846721ece7405bdba81dff644cb004
Reviewed-on: https://go-review.googlesource.com/74330
Reviewed-by: Robert Griesemer <gri@golang.org>
Run-TryBot: Robert Griesemer <gri@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
These are likely from the time when gc was written in C. There is no
need for any of these to be passed pointers, as the previous values are
not kept in any way, and the pointers are never nil. Other functions
were left untouched because they did fall into one of those cases where
passing a pointer is useful.
While at it, also turn some 0/1 integers into booleans.
Passes toolstash -cmp on std cmd.
Change-Id: Id3a9c9e84ef89536c4dc69a7fdbacd0fd7a76a9b
Reviewed-on: https://go-review.googlesource.com/72990
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Handle make(map[any]any) and make(map[any]any, hint), where
hint <= BUCKETSIZE, specially to allow for faster map initialization
and to improve binary size by using runtime calls with fewer arguments.
When hint is smaller than or equal to BUCKETSIZE, overLoadFactor(hint, 0)
is false and no buckets would be allocated by makemap:
* If hmap needs to be allocated on the stack then only hmap's hash0
field needs to be initialized and no call to makemap is needed.
* If hmap needs to be allocated on the heap then a new special
makehmap function will allocate hmap and initialize hmap's
hash0 field.
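A hedged sketch of the code shape this targets (BUCKETSIZE is the runtime's
per-bucket capacity, 8 elements at the time of this change):

    package main

    func main() {
        // The hint fits in a single bucket, so no buckets are needed up
        // front. If m's header does not escape, only hash0 is initialized
        // and no runtime call is emitted; if it escapes, the smaller
        // header-only allocation described above is used instead of makemap.
        m := make(map[string]int, 4)
        m["a"] = 1
        println(len(m))
    }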
Reduces the size of the godoc binary by ~36kb.
AMD64
name old time/op new time/op delta
NewEmptyMap 16.6ns ± 2% 5.5ns ± 2% -66.72% (p=0.000 n=10+10)
NewSmallMap 64.8ns ± 1% 56.5ns ± 1% -12.75% (p=0.000 n=9+10)
Updates #6853
Change-Id: I624e90da6775afaa061178e95db8aca674f44e9b
Reviewed-on: https://go-review.googlesource.com/61190
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Currently copy and append for types containing only scalars and
notinheap pointers still get compiled to have write barriers, even
though those write barriers are unnecessary. Fix these to use
HasHeapPointer instead of just Haspointer so that they elide write
barriers when possible.
This fixes the unnecessary write barrier in runtime.recordspan when it
grows the h.allspans slice. This is important because recordspan gets
called (*very* indirectly) from (*gcWork).tryGet, which is
go:nowritebarrierrec. Unfortunately, the compiler's analysis has no
hope of seeing this because it goes through the indirect call
fixalloc.first, but I saw it happen.
Change-Id: Ieba3abc555a45f573705eab780debcfe5c4f5dd1
Reviewed-on: https://go-review.googlesource.com/73413
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Most write barrier calls are inserted by SSA, but copy and append are
lowered to runtime.typedslicecopy during walk. Fix these to set
Func.WBPos and emit the "write barrier" warning, as done for the write
barriers inserted by SSA. As part of this, we refactor setting WBPos
and emitting this warning into the frontend so it can be shared by
both walk and SSA.
Change-Id: I5fe9997d9bdb55e03e01dd58aee28908c35f606b
Reviewed-on: https://go-review.googlesource.com/73411
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
* replace a copy of IsMethod with a call to it.
* use a few more switches where they simplify the code.
* prefer composite literals over "n := new(...); n.x = y; ...".
* use defers to get rid of three goto labels.
* rewrite updateHasCall into two funcs to remove gotos.
Passes toolstash-check on std cmd.
Change-Id: Icb5442a89a87319ef4b640bbc5faebf41b193ef1
Reviewed-on: https://go-review.googlesource.com/72070
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Use them to replace if/else chains with at least three comparisons,
where the code becomes clearly simpler.
Passes toolstash -cmp on std cmd.
Change-Id: Ic98aa3905944ddcab5aef5f9d9ba376853263d94
Reviewed-on: https://go-review.googlesource.com/70934
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Run-TryBot: Emmanuel Odeke <emm.odeke@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The write barrier insertion has moved to the SSA backend's
writebarrier pass. There is still a needwritebarrier function
left in the frontend. This function is used in two places:
- fncall, which is called in ascompatet, which is called when
walking OAS2FUNC. For OAS2FUNC, the order pass has already
created temporaries, and there is no write barrier for the
assignments of these temporaries.
- updateHasCall, which updates the HasCall flag of a node. The
HasCall flag is then used in
- fncall, mentioned above.
- ascompatet. As mentioned above, this is an assignment to
a temporary, no write barrier.
- reorder1, which is always called with a list produced by
ascompatte, which is a list of assignments to stack, which
have no write barrier.
- vmatch1, which is called in oaslit with r.Op as OSTRUCTLIT,
OARRAYLIT, OSLICELIT, or OMAPLIT. There is no write barrier
in those literals.
Therefore, the needwritebarrier function is unnecessary. This
CL removes it.
Passes "toolstash -cmp" on std cmd.
Updates #17583.
Change-Id: I4b87ba8363d6583e4282a9e607a9ec8ce3ab124a
Reviewed-on: https://go-review.googlesource.com/43640
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reduce the scope of some variables. Also remove vars that were simply the index or
the value in a range statement. While at it, remove a var that was
exactly the length of a slice.
Also replaced 'bad' with a clearer boolean 'errored', and renamed a
single-character variable that needed an explanatory comment to a
self-explanatory name.
And removed a few unnecessary Index calls within loops.
Passes toolstash -cmp on std cmd.
Change-Id: I26eee5f04e8f7e5418e43e25dca34f89cca5c80a
Reviewed-on: https://go-review.googlesource.com/70930
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Rework the logic to remove these gotos. They were the low-hanging fruit,
with labels that were used only once and logic that was fairly
straightforward.
Change-Id: I02a01c59c247b8b2972d8d73ff23f96f271de038
Reviewed-on: https://go-review.googlesource.com/63410
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Previously, we used OXFALL vs OFALL to distinguish fallthrough
statements that had been validated. Because in the Node AST we flatten
statement blocks, OXCASE and OXFALL needed to keep track of their
block scopes for this purpose.
Now that we have an AST that keeps these separate, we can just perform
the validation earlier.
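For reference, a small example of what the validation enforces
(illustrative, not from the CL): a fallthrough must be the final statement
of a non-final case.

    package main

    func describe(n int) {
        switch {
        case n == 0:
            println("zero")
            fallthrough // legal: final statement of a non-final case
        case n >= 0:
            println("non-negative")
        default:
            println("negative")
        }
        // Moving the fallthrough before the println, or into the final
        // case, must be rejected; with the OFALL/OXFALL bookkeeping gone,
        // that check now happens while the block structure is still
        // available rather than after statement lists are flattened.
    }

    func main() {
        describe(0)
    }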
Passes toolstash-check.
Fixes #14540.
Change-Id: I8421eaba16c2b3b72c9c5483b5cf20b14261385e
Reviewed-on: https://go-review.googlesource.com/61130
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
As per golint's suggestions.
Change-Id: Ie0c6ad9aa5dc69966a279562a341c7b095c47ede
Reviewed-on: https://go-review.googlesource.com/64192
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
These likely date from when the compiler was written in C, or were added
by an automated tool. Either way, they're unnecessary now. Clean up the
code.
Change-Id: I56de2c7bb60ebab8c500803a8b6586bdf4bf75c7
Reviewed-on: https://go-review.googlesource.com/62951
Run-TryBot: Daniel Martí <mvdan@mvdan.cc>
Reviewed-by: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
runtime.makemap will allocate map buckets on the heap for hints larger
than the number of elements a single map bucket can hold.
Do not allocate any map bucket on the stack if it is known at compile time
that hint is larger than the number of elements one map bucket can hold.
This avoids zeroing and reserving memory on the stack that will not be used.
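A hedged sketch of the case this covers:

    package main

    func main() {
        // The hint is known at compile time to exceed a single bucket's
        // capacity, so makemap will allocate buckets on the heap anyway;
        // reserving and zeroing a bucket on the stack here would be
        // wasted work, and is now skipped.
        m := make(map[int]int, 100)
        for i := 0; i < 100; i++ {
            m[i] = i
        }
        println(len(m))
    }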
Change-Id: I1a5ab853fb16f6a18d67674a77701bf0cf29b550
Reviewed-on: https://go-review.googlesource.com/60450
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This makes the name of the function that constructs the map bucket type
consistent with the runtime's naming and the existing hmap function.
Change-Id: If4d8b4a54c92ab914d4adcb96022b48d8b5db631
Reviewed-on: https://go-review.googlesource.com/59915
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
mkcall is used to construct calls to builtin functions.
Instead of silently ignoring any additional arguments to mkcall,
abort compilation with an error.
This protects against accidentally supplying too many arguments to mkcall
when compiler changes are made.
Change appendslice and copyany to construct calls to
slicestringcopy and slicecopy explicitly instead of
relying on the old behavior as a feature.
Change-Id: I3cfe815a57d454a129e3c08aac824f6107779a42
Reviewed-on: https://go-review.googlesource.com/57770
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Where possible, generate calls at compile time to the runtime makemap
variant that takes an int hint argument, instead of the variant that
takes an int64 hint argument.
This eliminates converting the hint argument to int64 on platforms
where int64 values do not fit into an argument of type int.
A similar optimization for makeslice was introduced in CL
golang.org/cl/27851.
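A hedged sketch of the code shape that benefits on 32-bit platforms:

    package main

    func main() {
        n := 16 // an int-typed hint
        // On a platform such as 386, the hint no longer has to be widened
        // to int64 before the makemap call: the variant taking an int hint
        // is called directly when the compiler can prove the hint fits in
        // an int.
        m := make(map[string]int, n)
        m["a"] = 1
        println(len(m))
    }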
386:
name old time/op new time/op delta
NewEmptyMap 53.5ns ± 5% 41.9ns ± 5% -21.56% (p=0.000 n=10+10)
NewSmallMap 182ns ± 1% 165ns ± 1% -8.92% (p=0.000 n=10+10)
Change-Id: Ibd2b4c57b36f171b173bf7a0602b3a59771e6e44
Reviewed-on: https://go-review.googlesource.com/55142
Reviewed-by: Keith Randall <khr@golang.org>
eqstring is only called for strings with equal lengths.
Instead of pushing a pointer and length for each argument string
on the stack, we can omit pushing one of the lengths.
Changing eqstring's signature to eqstring(*uint8, *uint8, int) bool
to implement the above optimization would make it very similar to the
existing memequal(*any, *any, uintptr) bool function.
Since string lengths are non-negative we can avoid code redundancy
and use memequal instead of eqstring with an optimized signature.
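A rough, hand-written analogue of the resulting lowering (memequalStr here
is a stand-in for runtime.memequal, not the real routine):

    package main

    // stringsEqual roughly mirrors what the compiler emits for a == b: a
    // length check guarding a memory comparison that takes a single length,
    // since the lengths are already known to be equal (and non-negative, so
    // the conversion to an unsigned size is safe).
    func stringsEqual(a, b string) bool {
        if len(a) != len(b) {
            return false
        }
        return memequalStr(a, b, uintptr(len(a)))
    }

    // memequalStr stands in for runtime.memequal in this sketch.
    func memequalStr(a, b string, n uintptr) bool {
        for i := uintptr(0); i < n; i++ {
            if a[i] != b[i] {
                return false
            }
        }
        return true
    }

    func main() {
        println(stringsEqual("go", "go"), stringsEqual("go", "og"))
    }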
go command binary size reduced by 4128 bytes on amd64.
name old time/op new time/op delta
CompareStringEqual 6.03ns ± 1% 5.71ns ± 1% -5.23% (p=0.000 n=19+18)
CompareStringIdentical 2.88ns ± 1% 3.22ns ± 7% +11.86% (p=0.000 n=20+20)
CompareStringSameLength 4.31ns ± 1% 4.01ns ± 1% -7.17% (p=0.000 n=19+19)
CompareStringDifferentLength 0.29ns ± 2% 0.29ns ± 2% ~ (p=1.000 n=20+20)
CompareStringBigUnaligned 64.3µs ± 2% 64.1µs ± 3% ~ (p=0.164 n=20+19)
CompareStringBig 61.9µs ± 1% 61.6µs ± 2% -0.46% (p=0.033 n=20+19)
Change-Id: Ice15f3b937c981f0d3bc8479a9ea0d10658ac8df
Reviewed-on: https://go-review.googlesource.com/53650
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This reduces the code footprint of code like:
println("foo=", foo, "bar=", bar)
which is fairly common in the runtime.
Prior to this change, this makes function calls to print each of:
"foo=", " ", foo, " ", "bar=", " ", bar, "\n"
After this change, this prints:
"foo= ", foo, " bar= ", bar, "\n"
This shrinks the hello world binary by 0.4%.
More importantly, this improves the instruction
density of important runtime routines.
Change-Id: I8971bdf5382fbaaf4a82bad4442f9da07c28d395
Reviewed-on: https://go-review.googlesource.com/55098
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Rather than emitting spaces and newlines for println
as we walk the expression, construct it all up front.
This enables further optimizations.
This requires using printstring instead of print in
the implementation of printsp and printnl,
on pain of infinite recursion.
That's ok; it's more efficient anyway, and just as simple.
While we're here, do it for other print routines as well.
Change-Id: I61d7df143810e00710c4d4d948d904007a7fd190
Reviewed-on: https://go-review.googlesource.com/55097
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>