Currently, "x &^ y" gets rewriten into "x & ^y" during walk. It adds
unnecessary complexity to other parts, which must aware about this.
Instead, we can just implement "&^" in the conversion to SSA, so "&^"
can be handled like other binary operators.
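For illustration (not part of the CL), the identity the old walk
rewrite relied on, in plain Go:
    package main
    import "fmt"
    func main() {
        x, y := uint8(0b1011), uint8(0b0110)
        fmt.Printf("%04b\n", x&^y)   // 1001: x with y's set bits cleared
        fmt.Printf("%04b\n", x&(^y)) // 1001: the equivalent form walk produced
    }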
However, this CL does not pass toolstash-check: implementing "&^" in
the conversion to SSA appears to cause a register allocation change.
With the parent:
    obj: 00212 (.../src/runtime/complex.go:47) MOVQ X0, AX
    obj: 00213 (.../src/runtime/complex.go:47) BTRQ $63, AX
    obj: 00214 (.../src/runtime/complex.go:47) MOVQ "".n(SP), CX
    obj: 00215 (.../src/runtime/complex.go:47) MOVQ $-9223372036854775808, DX
    obj: 00216 (.../src/runtime/complex.go:47) ANDQ DX, CX
    obj: 00217 (.../src/runtime/complex.go:47) ORQ AX, CX
With this CL:
    obj: 00212 (.../src/runtime/complex.go:47) MOVQ X0, AX
    obj: 00213 (.../src/runtime/complex.go:47) BTRQ $63, AX
    obj: 00214 (.../src/runtime/complex.go:47) MOVQ $-9223372036854775808, CX
    obj: 00215 (.../src/runtime/complex.go:47) MOVQ "".n(SP), DX
    obj: 00216 (.../src/runtime/complex.go:47) ANDQ CX, DX
    obj: 00217 (.../src/runtime/complex.go:47) ORQ AX, DX
Change-Id: I80acf8496a91be4804fb7ef3df04c19baae2754c
Reviewed-on: https://go-review.googlesource.com/c/go/+/264660
Trust: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
gc debug flags are currently stored in a 256-element array, which is
then addressed using the ASCII numeric value of the flag itself (a quirk
inherited from the old C compiler). It is also a little wasteful,
since we only define 16 flags, and the other 240 array elements are
always empty.
This change makes Debug a struct, which also provides static checking
that we're not referencing flags that do not exist.
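A rough before/after sketch (the field set here is illustrative, not
the actual list of flags):
    // Before: a sparse 256-element array, indexed by ASCII value:
    //     var Debug [256]int
    //     Debug['m']++
    // After (sketch):
    var Debug struct {
        m, l, w int
    }
    // Referencing a flag that does not exist, e.g. Debug.q, is now a
    // compile-time error instead of silently reading array slot 'q'.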
Change-Id: I2f0dfef2529325514b3398cf78635543cdf48fe0
Reviewed-on: https://go-review.googlesource.com/c/go/+/263539
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Before generating the wrapper function, turn any f(a, b, []T{c, d, e}...)
calls back into f(a, b, c, d, e). This allows the existing code for
recognizing and specially handling unsafe.Pointer->uintptr conversions
to correctly handle variadic arguments too.
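A sketch of the affected pattern (hypothetical names):
    package p
    import "unsafe"
    //go:uintptrescapes
    func pin(args ...uintptr) {}
    func use(a, b unsafe.Pointer) {
        // Internally this call becomes pin([]uintptr{...}...); undoing
        // that packing lets the unsafe.Pointer->uintptr special case
        // see each argument individually.
        pin(uintptr(a), uintptr(b))
    }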
Fixes #41460.
Change-Id: I0a1255abdd1bd5dafd3e89547aedd4aec878394c
Reviewed-on: https://go-review.googlesource.com/c/go/+/263297
Trust: Matthew Dempsky <mdempsky@google.com>
Trust: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
The Node type has shortcuts to access bool and int Values:
    func (n *Node) Int64() int64   // for n.Val().U.(*Mpint).Int64()
    func (n *Node) Bool() bool     // for n.Val().U.(bool)
I was convinced we didn't have one for string literal nodes, until I
noticed that we do: it's just called strlit, it's not a method, and
it's later in the file:
    func strlit(n *Node) string
This change, for consistency:
- Renames strlit to StringVal and makes it a *Node method
- Renames Bool and Int64 to BoolVal and Int64Val
- Moves StringVal near the other two
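The resulting accessor presumably mirrors the other two (sketch,
following the pattern above):
    func (n *Node) StringVal() string {
        return n.Val().U.(string)
    }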
Change-Id: I18e635384c35eb3a238fd52b1ccd322b1a74d733
Reviewed-on: https://go-review.googlesource.com/c/go/+/261361
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
"aliased" is the function responsible for detecting whether we can
turn "a, b = x, y" into just "a = x; b = y", or we need to pre-compute
y and save it in a temporary variable because it might depend on a.
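For example (illustrative):
    // With p == &a, the parallel assignment
    //     a, b = 1, *p
    // must be compiled as
    //     tmp := *p // value of a before the update
    //     a = 1
    //     b = tmp
    // because *p aliases a; without aliasing, plain "a = 1; b = *p" is fine.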
It currently has two issues:
1. It suboptimally treats assignments to blank as writes to heap
memory. Users generally won't write "_, b = x, y" directly, but it
comes up a lot in generated code within the compiler.
This CL changes it to ignore blank assignments.
2. When deciding whether the assigned variable might be referenced by
pointers, it mistakenly checks Class() and Name.Addrtaken() on "n"
(the *value* expression being assigned) rather than "a" (the
destination expression).
It doesn't appear to result in correctness issues (i.e.,
incorrectly reporting no aliasing when there is potential aliasing),
due to all the (overly conservative) rewrite passes before code
reaches here. But it generates unnecessary code and could have
correctness issues if we improve those other passes to be more
aggressive.
This CL fixes the misuse of "n" for "a" by renaming the variables
to "r" and "l", respectively, to make their meaning clearer.
Improving these two cases shaves 4.6kB of text from cmd/go, and 93kB
from k8s.io/kubernetes/cmd/kubelet:
text data bss dec hex filename
9732136 290072 231552 10253760 9c75c0 go.before
9727542 290072 231552 10249166 9c63ce go.after
97977637 1007051 301344 99286032 5eafc10 kubelet.before
97884549 1007051 301344 99192944 5e99070 kubelet.after
While here, this CL also collapses "memwrite" and "varwrite" into a
single variable. Logically, they're detecting the same thing: are we
assigning to a memory location that a pointer might alias. There's no
need for two variables.
Updates #6853.
Updates #23017.
Change-Id: I5a307b8e20bcd2196e85c55eb025d3f01e303008
Reviewed-on: https://go-review.googlesource.com/c/go/+/261677
Trust: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This change renames mustHeapAlloc to heapAllocReason, and changes it
to return the reason why the argument must escape, so we don't have to
re-deduce it in its callers just to print the escape reason. It also
embeds the isSmallMakeSlice body in heapAllocReason, since the former was
only used by the latter, and deletes isSmallMakeSlice.
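The new shape is roughly (sketch; only the signature is implied by the
description above):
    // func mustHeapAlloc(n *Node) bool      // before
    // func heapAllocReason(n *Node) string  // after; "" means the
    //                                       // argument need not escape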
An outdated TODO to remove smallintconst, which the TODO claimed was
only used in one place, was also removed, since grepping shows we
currently call smallintconst in 11 different places.
Change-Id: I0bd11bf29b92c4126f5bb455877ff73217d5a155
Reviewed-on: https://go-review.googlesource.com/c/go/+/258678
Run-TryBot: Alberto Donizetti <alb.donizetti@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Alberto Donizetti <alb.donizetti@gmail.com>
Trust: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently, in the linker's deadcode pass, when an interface type
is live, the linker thinks all its methods are live, and uses
them to match methods on concrete types. The interface method may
never be used, though.
This CL changes it to keep only the used interface methods when
matching concrete type methods. To do that, when an interface
method is used, the compiler generates a marker relocation. The
linker uses these marker relocations to mark used interface
methods, and matches against only those.
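To illustrate (example, not from the CL):
    package main
    type I interface{ M(); N() }
    type T struct{}
    func (T) M() {}
    func (T) N() {}
    var i I = T{}
    func main() { i.M() } // I.N is never called, so T.N can now be pruned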
binary size before after
cmd/compile 18887400 18812200
cmd/go 13470652 13470492
Change-Id: I3cfd9df4a53783330ba87735853f2a0ec3c42802
Reviewed-on: https://go-review.googlesource.com/c/go/+/256798
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
The linker prunes methods that are not directly reachable if the
receiver type is never converted to interface. Currently, this
"never" is too strong: it is invalidated even if the interface
conversion is in an unreachable function. This CL improves it by
only considering interface conversions in reachable code. To do
that, we introduce a marker relocation R_USEIFACE, which marks
the target symbol as UsedInIface if the source symbol is reached.
binary size before after
cmd/compile 18897528 18887400
cmd/go 13607372 13470652
Change-Id: I66c6b69eeff9ae02d84d2e6f2bc7f1b29dd53910
Reviewed-on: https://go-review.googlesource.com/c/go/+/256797
Trust: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
The revised test now checks that unsafe-uintptr handling works
correctly for variadic uintptr parameters too, and the CL corrects
the code so that it compiles again.
The pointers are still not kept alive properly. That will be fixed by
a followup CL. But this CL at least allows programs not affected by
that to build again.
Updates #24991.
Updates #41460.
Change-Id: If4c39167b6055e602213fb7522c4f527c43ebda9
Reviewed-on: https://go-review.googlesource.com/c/go/+/255877
Trust: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Go Bot <gobot@golang.org>
Update #40954
Change-Id: Ifaab7349631ccb12fc892882bbdf7f0ebf3d845f
Reviewed-on: https://go-review.googlesource.com/c/go/+/251158
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Keith Randall <khr@golang.org>
Use a common runtime slicecopy function to copy strings or slices
into slices. This deduplicates similar code previously used in
reflect.slicecopy and runtime.stringslicecopy.
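The unified helper plausibly has a ptr/len shape like (sketch;
parameter names assumed, not taken from the CL):
    // slicecopy copies min(toLen, fromLen) elements of width bytes
    // from fromPtr to toPtr and reports the number of elements copied:
    //     func slicecopy(toPtr unsafe.Pointer, toLen int,
    //         fromPtr unsafe.Pointer, fromLen int, width uintptr) int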
Change-Id: I09572ff0647a9e12bb5c6989689ce1c43f16b7f1
Reviewed-on: https://go-review.googlesource.com/c/go/+/254658
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Go Bot <gobot@golang.org>
Trust: Martin Möhrmann <moehrmann@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
CL 254397 attached OVARLIVE nodes to the Nbody of OCALLxxx nodes.
The NeedsWrapper flag is now redundant with the n.Nbody.Len() > 0
condition, so use that condition instead and remove the flag.
Passes toolstash-check.
Change-Id: Iebc3e674d3c0040a876ca4be05025943d2b4fb31
Reviewed-on: https://go-review.googlesource.com/c/go/+/254398
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Currently, the statement:
    go g(uintptr(f()))
gets rewritten into:
    tmp := f()
    newproc(8, g, uintptr(tmp))
    runtime.KeepAlive(tmp)
which doesn't guarantee that tmp is still alive by the time the g call
is scheduled to run.
This CL fixes the issue by wrapping the g call in a closure:
    go func(p unsafe.Pointer) {
        g(uintptr(p))
    }(f())
which is then rewritten into:
    tmp := f()
    go func(p unsafe.Pointer) {
        g(uintptr(p))
        runtime.KeepAlive(p)
    }(tmp)
    runtime.KeepAlive(tmp) // superfluous, but harmless
so the unsafe.Pointer p is kept alive until the g call runs.
Updates #24491
Change-Id: Ic10821251cbb1b0073daec92b82a866c6ebaf567
Reviewed-on: https://go-review.googlesource.com/c/go/+/253457
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The primary responsibility of declare() is to associate a symbol (Sym)
with a declaration (Node), so "oldname" will work. Function literals
are anonymous, so their symbols do not need to be declared.
Passes toolstash-check.
Change-Id: I739b1054e3953e85fbd74a99148b9cfd7e5a57eb
Reviewed-on: https://go-review.googlesource.com/c/go/+/249078
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Right now we just prevent such types from being on the heap. This CL
makes it so they cannot appear on the stack either. The distinction
between heap and stack is pretty vague at the language level (e.g. it
is affected by -N), and we don't need the flexibility anyway.
Once go:notinheap types cannot be in either place, we don't need to
consider pointers to such types to be pointers, at least according to
the garbage collector and stack copying. (This is the big win of this
CL, in my opinion.)
The distinction between HasPointers and HasHeapPointer no longer
exists. There is only HasPointers.
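A hypothetical example of an affected type:
    package p
    //go:notinheap
    type offHeap struct {
        next *offHeap
    }
    // A value of type offHeap can now live neither on the Go heap nor
    // on a goroutine stack, so the GC and the stack copier may treat
    // *offHeap as a non-pointer.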
This CL is cleanup before possible use of go:notinheap to fix #40954.
Update #13386
Change-Id: Ibd895aadf001c0385078a6d4809c3f374991231a
Reviewed-on: https://go-review.googlesource.com/c/go/+/249917
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
More ergonomic that way. Also change Haspointers to HasPointers
while we are here.
Change-Id: I45bedc294c1a8c2bd01dc14bd04615ae77555375
Reviewed-on: https://go-review.googlesource.com/c/go/+/249959
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Emmanuel Odeke <emm.odeke@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
checkptr has code to recognize &^ expressions, but it didn't take into
account that "p &^ x" gets rewritten to "p & ^x" during walk, which
resulted in false positive diagnostics.
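A typical expression this affects (illustrative):
    package p
    import "unsafe"
    func alignDown(buf *[16]byte) unsafe.Pointer {
        // Round a pointer down to an 8-byte boundary; under -d=checkptr
        // this arithmetic is instrumented, and the post-walk "p & ^x"
        // form was no longer recognized as the original "p &^ x":
        return unsafe.Pointer(uintptr(unsafe.Pointer(&buf[8])) &^ 7)
    }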
This CL changes walkexpr to mark OANDNOT expressions with Implicit
when they're rewritten to OAND, so that walkCheckPtrArithmetic can
still recognize them later.
It would be slightly more idiomatic to instead mark the OBITNOT
expression as Implicit (as it's a compiler-generated Node), but the
OBITNOT expression might get constant folded. It's not worth the extra
complexity/subtlety of relying on n.Right.Orig, so we set Implicit on
the OAND node instead.
To atone for this transgression, I add documentation for nodeImplicit.
Fixes #40917.
Change-Id: I386304171ad299c530e151e5924f179e9a5fd5b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/249477
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
For now, we only do this for symbols without relocations.
Mark static temps "local", as they are not referenced across DSO
boundaries. And deduplicating a local symbol and a non-local
symbol can be problematic.
Change-Id: I0a3dc4138aaeea7fd4f326998f32ab6305da8e4b
Reviewed-on: https://go-review.googlesource.com/c/go/+/243141
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
Currently, a method of a reachable type is live if it matches a
method of a reachable interface. In fact, we only need to retain
the method if the type is actually converted to an interface. If
the type is never converted to an interface, there is no way to
call the method through an interface method call (but the type
descriptor could still be used, e.g. in calling
runtime.newobject).
A type can be used in an interface in two ways:
- directly converted to interface. (Any interface counts, as it
is possible to convert one interface to another.)
- obtained by reflection from a related type (e.g. obtaining an
interface of T from []T).
For the former, we let the compiler emit a marker on the type
descriptor symbol when it is converted to an interface. In the
linker, we only need to check methods of marked types.
For the latter, when the linker visits a marked type, it needs to
visit all its "child" types as marked (i.e. potentially could be
converted to interface).
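An example of the second case (illustrative):
    package main
    import "reflect"
    type T struct{}
    func (T) M() {}
    func main() {
        var s interface{} = []T{{}}
        // T is never directly converted to an interface, but reflection
        // can still produce one from the []T, so T's methods must be
        // considered potentially live once []T's descriptor is marked:
        _ = reflect.ValueOf(s).Index(0).Interface()
    }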
This reduces binary size:
cmd/compile 18792016 18706096 (-0.5%)
cmd/go 14120572 13398948 (-5.1%)
Change-Id: I4465c7eeabf575f4dc84017214c610fa05ae31fd
Reviewed-on: https://go-review.googlesource.com/c/go/+/237298
Run-TryBot: Cherry Zhang <cherryyz@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Reviewed-by: Jeremy Faller <jeremy@golang.org>
match:
    m = make([]T, x); copy(m, s)
for pointer-free T and x == len(s), rewrite to:
    m = mallocgc(x*elemsize(T), nil, false); memmove(&m, &s, x*elemsize(T))
otherwise rewrite to:
    m = makeslicecopy([]T, x, s)
This avoids memclear and shading of pointers in the newly created slice
before the copy.
With this CL "s" is only be allowed to bev a variable and not a more
complex expression. This restriction could be lifted in future versions
of this optimization when it can be proven that "s" is not referencing "m".
Triggers 450 times during make.bash..
Reduces go binary size by ~8 kbyte.
name old time/op new time/op delta
MakeSliceCopy/mallocmove/Byte 71.1ns ± 1% 65.8ns ± 0% -7.49% (p=0.000 n=10+9)
MakeSliceCopy/mallocmove/Int 71.2ns ± 1% 66.0ns ± 0% -7.27% (p=0.000 n=10+8)
MakeSliceCopy/mallocmove/Ptr 104ns ± 4% 99ns ± 1% -5.13% (p=0.000 n=10+10)
MakeSliceCopy/makecopy/Byte 70.3ns ± 0% 68.0ns ± 0% -3.22% (p=0.000 n=10+9)
MakeSliceCopy/makecopy/Int 70.3ns ± 0% 68.5ns ± 1% -2.59% (p=0.000 n=9+10)
MakeSliceCopy/makecopy/Ptr 102ns ± 0% 99ns ± 1% -2.97% (p=0.000 n=9+9)
MakeSliceCopy/nilappend/Byte 75.4ns ± 0% 74.9ns ± 2% -0.63% (p=0.015 n=9+9)
MakeSliceCopy/nilappend/Int 75.6ns ± 0% 76.4ns ± 3% ~ (p=0.245 n=9+10)
MakeSliceCopy/nilappend/Ptr 107ns ± 0% 108ns ± 1% +0.93% (p=0.005 n=9+10)
Fixes #26252
Change-Id: Iec553dd1fef6ded16197216a472351c8799a8e71
Reviewed-on: https://go-review.googlesource.com/c/go/+/146719
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Martin Möhrmann <moehrmann@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Refactor out creating the two Nodes needed to check interface equality.
Preliminary work for other optimizations.
Passes toolstash-check.
Change-Id: Id6b39e8e78f07289193423d0ef905d70826acf89
Reviewed-on: https://go-review.googlesource.com/c/go/+/230206
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Refactor out creating the two Nodes needed to check string equality.
Preliminary work for other optimizations.
Passes toolstash-check.
Change-Id: I72e824dac904e579b8ba9a3669a94fa1471112d2
Reviewed-on: https://go-review.googlesource.com/c/go/+/230204
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
This CL uses fixVariadicCall before escape analyzing function calls.
This has a number of benefits, though also some minor obstacles:
Most notably, it allows us to remove ODDDARG along with the logic
involved in setting it up, manipulating EscHoles, and later copying
its escape analysis flags to the actual slice argument. Instead, we
uniformly handle all variadic calls the same way. (E.g., issue31573.go
is updated because now f() and f(nil...) are handled identically.)
It also allows us to simplify handling of builtins and generic
function calls. Previously handling of calls was hairy enough to
require multiple dispatches on n.Op, whereas now the logic is uniform
enough that we can easily handle it with a single dispatch.
The downside is handling //go:uintptrescapes is now somewhat clumsy.
(It used to be clumsy, but it still is, too.) The proper fix here is
probably to stop using escape analysis tags for //go:uintptrescapes
and unsafe-uintptr, and have an earlier pass responsible for them.
Finally, note that while we now call fixVariadicCall in Escape, we
still have to call it in Order, because we don't (yet) run Escape on
all compiler-generated functions. In particular, the generated "init"
function for initializing package-level variables can contain calls to
variadic functions and isn't escape analyzed.
Passes toolstash-check -race.
Change-Id: I4cdb92a393ac487910aeee58a5cb8c1500eef881
Reviewed-on: https://go-review.googlesource.com/c/go/+/229759
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
This CL moves fixVariadicCall from mid-Walk of function calls to
early-Order, in preparation for moving it even earlier in the future.
Notably, rewriting variadic calls this early introduces two
compilation output changes:
1. Previously, Order visited the ODDDARG before the rest of the
arguments list, whereas the natural time to visit it is at the end of
the list (as we visit arguments left-to-right, and the ... argument is
the rightmost one). Changing this ordering permutes the autotmp
allocation order, which in turn permutes autotmp naming and stack
offsets.
2. Previously, Walk separately walked all of the variadic arguments
before walking the entire slice literal, whereas the more natural
thing to do is just walk the entire slice literal. This triggers
slightly different code paths for composite literal construction in
some cases.
Neither of these has semantic impact. They simply mean we're now
compiling f(a,b,c) the same way as we were already compiling
f([]T{a,b,c}...).
Change-Id: I40ccc5725697a116370111ebe746b2639562fe87
Reviewed-on: https://go-review.googlesource.com/c/go/+/229601
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
In mid-Walk, we rewrite calls to variadic functions to use explicit
slice literals; e.g., rewriting f(a,b,c) into f([]T{a,b,c}...).
However, it would be useful to do that rewrite much earlier in the
compiler, so that other compiler passes can be simplified.
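Concretely, with fmt.Println as an illustration:
    fmt.Println("a", "b", "c")
    // is compiled as if written:
    fmt.Println([]interface{}{"a", "b", "c"}...)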
This CL refactors the rewrite logic into a new fixVariadicCall
function, which subsequent CLs can more easily move into earlier
compiler passes.
Passes toolstash-check -race.
Change-Id: I408e655f2d3aa00446a2e6accf8765abc3b16a8a
Reviewed-on: https://go-review.googlesource.com/c/go/+/229486
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Node.NonNil and Node.Bounded were a bit muddled. This led to #38496.
This change clarifies and documents them.
It also corrects one misuse.
However, since ssa conversion doesn't make full use of the bounded hint,
this correction doesn't change any generated code.
The next change will fix that.
Passes toolstash-check.
Change-Id: I2bcd487a0a4aef5d7f6090e653974fce0dce3b8e
Reviewed-on: https://go-review.googlesource.com/c/go/+/228787
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
reflect.Type.Method (and MethodByName) can be used to obtain a
reference to a method via reflection. The linker needs to know
if reflect.Type.Method is called, and retain all exported methods
accordingly. This is handled by the compiler, which marks the
caller of reflect.Type.Method with REFLECTMETHOD attribute. The
current code failed to handle the reflect package itself, so the
method wrapper reflect.Type.Method is not marked. This CL fixes
it.
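The pattern that forces this conservatism (illustrative):
    package p
    import "reflect"
    func callByName(v interface{}, name string) {
        m, ok := reflect.TypeOf(v).MethodByName(name)
        if !ok {
            return
        }
        // Any exported method of v's dynamic type may be invoked here,
        // so the linker must retain all of them once this call is
        // reachable:
        m.Func.Call([]reflect.Value{reflect.ValueOf(v)})
    }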
Fixes #38515.
Change-Id: I12904d23eda664cf1794bc3676152f3218fb762b
Reviewed-on: https://go-review.googlesource.com/c/go/+/228880
Run-TryBot: Cherry Zhang <cherryyz@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
The call does nothing when applied to an OLSH node.
It would be unnecessary anyway, since we're shifting by a small constant.
Passes toolstash-check.
Change-Id: If858711f1704f44637fa0f6a4c66cbaad6db24b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/228699
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
This lets us provide a better position in its use in swt.go.
Change-Id: I7c0da6bd0adea81acfc9a591e6a01b241a5e0942
Reviewed-on: https://go-review.googlesource.com/c/go/+/228219
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
ascompatee does not generate 'x = x' assignments during return, so we
no longer have to use samelist to check for that disguised special
return case. While at it, also remove samelist, as this was the only
place it was used.
Passes toolstash-check.
Change-Id: I41c7b077d562aadb5916a61e2ab6229bae3cdef4
Reviewed-on: https://go-review.googlesource.com/c/go/+/227807
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
CL 226737 optimizes the len check when making a slice. The comment
that cap is constrained to [0, 2^31) is not quite accurate: the bound
is 2^31 or 2^63, depending on whether the system is 32-bit or 64-bit.
Change-Id: I6f54e41827ffe4d0b67a44975da3ce07b2fabbad
Reviewed-on: https://go-review.googlesource.com/c/go/+/227803
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
For maps with a hint larger than BUCKETSIZE, makemap ignores the
pre-allocated bucket and allocates buckets itself. So do not allocate
a bucket in this case, saving us the cost of zeroing the bucket and
assigning it.
name old time/op new time/op delta
NewEmptyMap-12 3.89ns ± 4% 3.88ns ± 2% ~ (p=0.939 n=19+20)
NewSmallMap-12 23.3ns ± 3% 23.1ns ± 2% ~ (p=0.307 n=18+17)
NewEmptyMapHintLessThan8-12 6.43ns ± 3% 6.31ns ± 2% -1.72% (p=0.000 n=19+18)
NewEmptyMapHintGreaterThan8-12 159ns ± 2% 150ns ± 1% -5.79% (p=0.000 n=20+18)
Benchmark run with commit ab7c174 reverted, see #38314.
Fixes #20184
Change-Id: Ic021f57454c3a0dd50601d73bbd77b8faf8d93b6
Reviewed-on: https://go-review.googlesource.com/c/go/+/227458
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
Some runtime calls accept a slice, but only use ptr and len.
This change modifies most such routines to accept only ptr and len.
After this change, the only runtime calls that accept an unnecessary
cap arg are concatstrings and slicerunetostring.
Neither is particularly common, and both are complicated to modify.
Negligible compiler performance impact. Shrinks binaries a little.
There are only a few regressions; the one I investigated was
due to register allocation fluctuation.
Passes 'go test -race std cmd', modulo #38265 and #38266.
Wow, does that take a long time to run.
Updates #36890
file before after Δ %
compile 19655024 19655152 +128 +0.001%
cover 5244840 5236648 -8192 -0.156%
dist 3662376 3658280 -4096 -0.112%
link 6680056 6675960 -4096 -0.061%
pprof 14789844 14777556 -12288 -0.083%
test2json 2824744 2820648 -4096 -0.145%
trace 11647876 11639684 -8192 -0.070%
vet 8260472 8256376 -4096 -0.050%
total 115163736 115118808 -44928 -0.039%
Change-Id: Idb29fa6a81d6a82bfd3b65740b98cf3275ca0a78
Reviewed-on: https://go-review.googlesource.com/c/go/+/227163
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
In CL 226278, we did:
    if len < 0 { panicmakeslicelen() }
    if len > cap { panicmakeslicecap() }
Since cap is constrained to [0, 2^31), it is safe to do:
    if uint64(len) > cap {
        if len < 0 { panicmakeslicelen() }
        panicmakeslicecap()
    }
saving us a comparison in the common case when len is within range.
Passes toolstash-check.
Change-Id: I0ebd52914ccde4cbb45f16c9e020b0c8f42e0663
Reviewed-on: https://go-review.googlesource.com/c/go/+/226737
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
If the slice cap is not set, it will be equal to the slice len, so
isSmallMakeSlice only needs to check whether the cap is constant.
While at it, also add a test to make sure panicmakeslicecap is called
when a make-slice call has an invalid non-constant len.
For this benchmark:
    func BenchmarkMakeSliceNonConstantLen(b *testing.B) {
        len := 1
        for i := 0; i < b.N; i++ {
            s := make([]int, len, 2)
            _ = s
        }
    }
Results compared with the parent commit:
name old time/op new time/op delta
MakeSliceNonConstantLen-12 18.4ns ± 1% 0.2ns ± 2% -98.66% (p=0.008 n=5+5)
Fixes #37975
Change-Id: I4bc926361bc2ffeab4cfaa888ef0a30cbc3b80e8
Reviewed-on: https://go-review.googlesource.com/c/go/+/226278
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
There are still two places in src/runtime/string.go that use
staticbytes, so we cannot delete it just yet.
There is a new codegen test to verify that the index calculation
is constant-folded, at least on amd64. ppc64, mips[64] and s390x
cannot currently do that.
There is also a new runtime benchmark to ensure that this does not
slow down performance (tested against parent commit):
name old time/op new time/op delta
ConvT2EByteSized/bool-4 1.07ns ± 1% 1.07ns ± 1% ~ (p=0.060 n=14+15)
ConvT2EByteSized/uint8-4 1.06ns ± 1% 1.07ns ± 1% ~ (p=0.095 n=14+15)
Updates #37612
Change-Id: I5ec30738edaa48cda78dfab4a78e24a32fa7fd6a
Reviewed-on: https://go-review.googlesource.com/c/go/+/221957
Run-TryBot: Ian Lance Taylor <iant@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Josh Bleecher Snyder <josharian@gmail.com>
The SSA backend has rules to read the contents of readonly Lsyms.
However, this rule was failing to trigger for many readonly Lsyms.
This is because the readonly attribute that was set on the Node.Name
was not propagated to its Lsym until the dump globals phase, after SSA runs.
To work around this phase ordering problem, introduce Node.SetReadonly,
which sets Node.Name.Readonly and also configures the Lsym
enough that SSA can use it.
This change also fixes a latent problem in the rewrite rule function,
namely that reads past the end of lsym.P were treated as entirely zero,
instead of merely requiring padding with trailing zeros.
This change also adds an amd64 rule needed to fully optimize
the results of this change. It would be better not to need this,
but the zero extension that should handle this for us
gets optimized away too soon (see #36897 for a similar problem).
I have not investigated whether other platforms also need new
rules to take full advantage of the new optimizations.
Compiled code for (interface{})(true) on amd64 goes from:
    LEAQ type.bool(SB), AX
    MOVBLZX ""..stmp_0(SB), BX
    LEAQ runtime.staticbytes(SB), CX
    ADDQ CX, BX
to:
    LEAQ type.bool(SB), AX
    LEAQ runtime.staticbytes+1(SB), BX
Prior to this change, the readonly symbol rewrite rules
fired a total of 884 times during make.bash.
Afterwards they fire 1807 times.
file before after Δ %
cgo 4827832 4823736 -4096 -0.085%
compile 24907768 24895656 -12112 -0.049%
fix 3376952 3368760 -8192 -0.243%
pprof 14751700 14747604 -4096 -0.028%
total 120343528 120315032 -28496 -0.024%
Change-Id: I59ea52138276c37840f69e30fb109fd376d579ec
Reviewed-on: https://go-review.googlesource.com/c/go/+/220499
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
This CL extends cmd/compile's experimental libFuzzer support with
calls to __sanitizer_cov_trace_{,const_}cmp{1,2,4,8}. This allows much
more efficient fuzzing of comparisons.
Only supports amd64 and arm64 for now.
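Conceptually (sketch; the hooks are libFuzzer's C ABI, shown here only
as comments):
    // Go source:
    //     if x == 0xdeadbeef { crash() }
    // The instrumented code reports the operands before branching,
    // roughly:
    //     __sanitizer_cov_trace_const_cmp4(0xdeadbeef, x)
    //     if x == 0xdeadbeef { crash() }
    // letting the fuzzer observe the constant and steer mutations
    // toward matching it.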
Updates #14565.
Change-Id: Ibf82a8d9658f2bc50d955bdb1ae26723a3f0584d
Reviewed-on: https://go-review.googlesource.com/c/go/+/203887
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
n.Opt() is used in walkCheckPtrArithmetic to prevent infinite loops.
This works today only because n.Opt() is not otherwise used for
OCONVNOP nodes during walk. If that ever changes, reusing it is no
longer safe. So hard-fail in that case, so that the author of the new
change notices and must also update the usage of n.Opt() inside
walkCheckPtrArithmetic.
Change-Id: Ic7094baa1759c647fc10e82457c19026099a0d47
Reviewed-on: https://go-review.googlesource.com/c/go/+/202497
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
"declared and not used" is technically correct, but might confuse
the user. Switching "and" to "but" will hopefully create the
contrast for the users: they did one thing (declaration), but
not the other --- actually using the variable.
This new message is still not ideal (specifically, declared is not
entirely precise here), but at least it matches the other parsers
and is one step in the right direction.
Change-Id: I725c7c663535f9ab9725c4b0bf35b4fa74b0eb20
Reviewed-on: https://go-review.googlesource.com/c/go/+/203282
Run-TryBot: Minux Ma <minux@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Robert Griesemer <gri@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Generate inline code at defer time to save the args of defer calls to unique
(autotmp) stack slots, and generate inline code at exit time to check which defer
calls were made and make the associated function/method/interface calls. We
remember that a particular defer statement was reached by storing in the deferBits
variable (always stored on the stack). At exit time, we check the bits of the
deferBits variable to determine which defer function calls to make (in reverse
order). These low-cost defers are only used for functions where no defers
appear in loops. In addition, we don't do these low-cost defers if there are too
many defer statements or too many exits in a function (to limit code increase).
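A sketch of the lowering for a single conditional defer (generated
code, all names illustrative):
    deferBits := uint8(0) // always stack-allocated
    var arg int           // autotmp stack slot for the defer's argument
    if cond {
        arg = x             // save the args at the defer statement
        deferBits |= 1 << 0 // record that defer 0 was reached
    }
    // ... rest of the function ...
    // inline exit code, checking bits in reverse defer order:
    if deferBits&(1<<0) != 0 {
        f(arg)
    }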
When a function uses open-coded defers, we produce extra
FUNCDATA_OpenCodedDeferInfo information that specifies the number of defers, and
for each defer, the stack slots where the closure and associated args have been
stored. The funcdata also includes the location of the deferBits variable.
Therefore, for panics, we can use this funcdata to determine exactly which defers
are active, and call the appropriate functions/methods/closures with the correct
arguments for each active defer.
In order to unwind the stack correctly after a recover(), we need to add an extra
code segment to functions with open-coded defers that simply calls deferreturn()
and returns. This segment is not reachable by the normal function, but is returned
to by the runtime during recovery. We set the liveness information of this
deferreturn() to be the same as the liveness at the first function call during the
last defer exit code (so all return values and all stack slots needed by the defer
calls will be live).
I needed to increase the stackguard constant from 880 to 896, because of a small
amount of new code in deferreturn().
The -N flag disables open-coded defers. '-d defer' prints out the kind of defer
being used at each defer statement (heap-allocated, stack-allocated, or
open-coded).
Cost of defer statement [ go test -run NONE -bench BenchmarkDefer$ runtime ]
With normal (stack-allocated) defers only: 35.4 ns/op
With open-coded defers: 5.6 ns/op
Cost of function call alone (remove defer keyword): 4.4 ns/op
Text size increase (including funcdata) for go binary without/with open-coded defers: 0.09%
The average size increase (including funcdata) for only the functions that use
open-coded defers is 1.1%.
The cost of a panic followed by a recover got noticeably slower, since panic
processing now requires a scan of the stack for open-coded defer frames. This scan
is required, even if no frames are using open-coded defers:
Cost of panic and recover [ go test -run NONE -bench BenchmarkPanicRecover runtime ]
Without open-coded defers: 62.0 ns/op
With open-coded defers: 255 ns/op
A CGO Go-to-C-to-Go benchmark got noticeably faster because of open-coded defers:
CGO Go-to-C-to-Go benchmark [cd misc/cgo/test; go test -run NONE -bench BenchmarkCGoCallback ]
Without open-coded defers: 443 ns/op
With open-coded defers: 347 ns/op
Updates #14939 (defer performance)
Updates #34481 (design doc)
Change-Id: I63b1a60d1ebf28126f55ee9fd7ecffe9cb23d1ff
Reviewed-on: https://go-review.googlesource.com/c/go/+/202340
Reviewed-by: Austin Clements <austin@google.com>
A common idiom for turning an unsafe.Pointer into a slice is to write:
    s := (*[Big]T)(ptr)[:n:m]
This technically violates Go's unsafe pointer rules (rule #1 says T2
can't be bigger than T1), but it's fairly common and not too difficult
to recognize, so might as well allow it for now so we can make
progress on #34972.
This should be revisited if #19367 is accepted.
Updates #22218.
Updates #34972.
Change-Id: Id824e2461904e770910b6e728b4234041d2cc8bc
Reviewed-on: https://go-review.googlesource.com/c/go/+/201839
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>