// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package gc

import (
	"cmd/compile/internal/types"
	"cmd/internal/obj"
	"fmt"
	"sort"
)

// AlgKind describes the kind of algorithms used for comparing and
// hashing a Type.
type AlgKind int

//go:generate stringer -type AlgKind -trimprefix A

const (
	// These values are known by runtime.
	ANOEQ AlgKind = iota
	AMEM0
	AMEM8
	AMEM16
	AMEM32
	AMEM64
	AMEM128
	ASTRING
	AINTER
	ANILINTER
	AFLOAT32
	AFLOAT64
	ACPLX64
	ACPLX128

	// Type can be compared/hashed as regular memory.
	AMEM AlgKind = 100

	// Type needs special comparison/hashing functions.
	ASPECIAL AlgKind = -1
)

// IsComparable reports whether t is a comparable type.
func IsComparable(t *types.Type) bool {
	a, _ := algtype1(t)
	return a != ANOEQ
}

// IsRegularMemory reports whether t can be compared/hashed as regular memory.
func IsRegularMemory(t *types.Type) bool {
	a, _ := algtype1(t)
	return a == AMEM
}

// IncomparableField returns an incomparable Field of struct Type t, if any.
func IncomparableField(t *types.Type) *types.Field {
	for _, f := range t.FieldSlice() {
		if !IsComparable(f.Type) {
			return f
		}
	}
	return nil
}

// EqCanPanic reports whether == on type t could panic (has an interface somewhere).
// t must be comparable.
func EqCanPanic(t *types.Type) bool {
	switch t.Etype {
	default:
		return false
	case TINTER:
		return true
	case TARRAY:
		return EqCanPanic(t.Elem())
	case TSTRUCT:
		for _, f := range t.FieldSlice() {
			if !f.Sym.IsBlank() && EqCanPanic(f.Type) {
				return true
			}
		}
		return false
	}
}
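The panic EqCanPanic accounts for is observable from ordinary Go: comparing interface values whose dynamic type is not comparable panics at run time. A small self-contained demo (eqPanics is an illustrative helper, not compiler code):

```go
package main

import "fmt"

// eqPanics reports whether evaluating a == b panics.
func eqPanics(a, b interface{}) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	_ = a == b
	return false
}

func main() {
	fmt.Println(eqPanics(1, 2))               // false: ints compare fine
	fmt.Println(eqPanics([]int{1}, []int{1})) // true: slice dynamic type panics
	type s struct{ f func() }
	fmt.Println(eqPanics(s{}, s{})) // true: struct with a func field panics
}
```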

// algtype is like algtype1, except it returns the fixed-width AMEMxx variants
// instead of the general AMEM kind when possible.
func algtype(t *types.Type) AlgKind {
	a, _ := algtype1(t)
	if a == AMEM {
		switch t.Width {
		case 0:
			return AMEM0
		case 1:
			return AMEM8
		case 2:
			return AMEM16
		case 4:
			return AMEM32
		case 8:
			return AMEM64
		case 16:
			return AMEM128
		}
	}

	return a
}

// algtype1 returns the AlgKind used for comparing and hashing Type t.
// If it returns ANOEQ, it also returns the component type of t that
// makes it incomparable.
func algtype1(t *types.Type) (AlgKind, *types.Type) {
	if t.Broke() {
		return AMEM, nil
	}
	if t.Noalg() {
		return ANOEQ, t
	}

	switch t.Etype {
	case TANY, TFORW:
		// will be defined later.
		return ANOEQ, t

	case TINT8, TUINT8, TINT16, TUINT16,
		TINT32, TUINT32, TINT64, TUINT64,
		TINT, TUINT, TUINTPTR,
		TBOOL, TPTR,
		TCHAN, TUNSAFEPTR:
		return AMEM, nil

	case TFUNC, TMAP:
		return ANOEQ, t

	case TFLOAT32:
		return AFLOAT32, nil

	case TFLOAT64:
		return AFLOAT64, nil

	case TCOMPLEX64:
		return ACPLX64, nil

	case TCOMPLEX128:
		return ACPLX128, nil

	case TSTRING:
		return ASTRING, nil

	case TINTER:
		if t.IsEmptyInterface() {
			return ANILINTER, nil
		}
		return AINTER, nil

	case TSLICE:
		return ANOEQ, t

	case TARRAY:
		a, bad := algtype1(t.Elem())
		switch a {
		case AMEM:
			return AMEM, nil
		case ANOEQ:
			return ANOEQ, bad
		}

		switch t.NumElem() {
		case 0:
			// We checked above that the element type is comparable.
			return AMEM, nil
		case 1:
			// Single-element array is same as its lone element.
			return a, nil
		}

		return ASPECIAL, nil

	case TSTRUCT:
		fields := t.FieldSlice()

		// One-field struct is same as that one field alone.
		if len(fields) == 1 && !fields[0].Sym.IsBlank() {
			return algtype1(fields[0].Type)
		}

		ret := AMEM
		for i, f := range fields {
			// All fields must be comparable.
			a, bad := algtype1(f.Type)
			if a == ANOEQ {
				return ANOEQ, bad
			}

			// Blank fields, padded fields, fields with non-memory
			// equality need special compare.
			if a != AMEM || f.Sym.IsBlank() || ispaddedfield(t, i) {
				ret = ASPECIAL
			}
		}

		return ret, nil
	}

	Fatalf("algtype1: unexpected type %v", t)
	return 0, nil
}

// genhash returns a symbol which is the closure used to compute
// the hash of a value of type t.
// Note: the generated function must match runtime.typehash exactly.
func genhash(t *types.Type) *obj.LSym {
	switch algtype(t) {
	default:
		// genhash is only called for types that have equality
		Fatalf("genhash %v", t)
	case AMEM0:
		return sysClosure("memhash0")
	case AMEM8:
		return sysClosure("memhash8")
	case AMEM16:
		return sysClosure("memhash16")
	case AMEM32:
		return sysClosure("memhash32")
	case AMEM64:
		return sysClosure("memhash64")
	case AMEM128:
		return sysClosure("memhash128")
	case ASTRING:
		return sysClosure("strhash")
	case AINTER:
		return sysClosure("interhash")
	case ANILINTER:
		return sysClosure("nilinterhash")
	case AFLOAT32:
		return sysClosure("f32hash")
	case AFLOAT64:
		return sysClosure("f64hash")
	case ACPLX64:
		return sysClosure("c64hash")
	case ACPLX128:
		return sysClosure("c128hash")
	case AMEM:
		// For other sizes of plain memory, we build a closure
		// that calls memhash_varlen. The size of the memory is
		// encoded in the first slot of the closure.
		closure := typeLookup(fmt.Sprintf(".hashfunc%d", t.Width)).Linksym()
		if len(closure.P) > 0 { // already generated
			return closure
		}
		if memhashvarlen == nil {
			memhashvarlen = sysfunc("memhash_varlen")
		}
		ot := 0
		ot = dsymptr(closure, ot, memhashvarlen, 0)
		ot = duintptr(closure, ot, uint64(t.Width)) // size encoded in closure
		ggloblsym(closure, int32(ot), obj.DUPOK|obj.RODATA)
		return closure
	case ASPECIAL:
		break
	}

	closure := typesymprefix(".hashfunc", t).Linksym()
	if len(closure.P) > 0 { // already generated
		return closure
	}

	// Generate hash functions for subtypes.
	// There are cases where we might not use these hashes,
	// but in that case they will get dead-code eliminated.
	// (And the closure generated by genhash will also get
	// dead-code eliminated, as we call the subtype hashers
	// directly.)
	switch t.Etype {
	case types.TARRAY:
		genhash(t.Elem())
	case types.TSTRUCT:
		for _, f := range t.FieldSlice() {
			genhash(f.Type)
		}
	}

	sym := typesymprefix(".hash", t)
2016-02-26 14:56:31 -08:00
|
|
|
if Debug['r'] != 0 {
|
cmd/compile,runtime: generate hash functions only for types which are map keys
Right now we generate hash functions for all types, just in case they
are used as map keys. That's a lot of wasted effort and binary size
for types which will never be used as a map key. Instead, generate
hash functions only for types that we know are map keys.
Just doing that is a bit too simple, since maps with an interface type
as a key might have to hash any concrete key type that implements that
interface. So for that case, implement hashing of such types at
runtime (instead of with generated code). It will be slower, but only
for maps with interface types as keys, and maybe only a bit slower as
the aeshash time probably dominates the dispatch time.
Reorg where we keep the equals and hash functions. Move the hash function
from the key type to the map type, saving a field in every non-map type.
That leaves only one function in the alg structure, so get rid of that and
just keep the equal function in the type descriptor itself.
cmd/go now has 10 generated hash functions, instead of 504. Makes
cmd/go 1.0% smaller. Update #6853.
Speed on non-interface keys is unchanged. Speed on interface keys
is ~20% slower:
name old time/op new time/op delta
MapInterfaceString-8 23.0ns ±21% 27.6ns ±14% +20.01% (p=0.002 n=10+10)
MapInterfacePtr-8 19.4ns ±16% 23.7ns ± 7% +22.48% (p=0.000 n=10+8)
Change-Id: I7c2e42292a46b5d4e288aaec4029bdbb01089263
Reviewed-on: https://go-review.googlesource.com/c/go/+/191198
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Martin Möhrmann <moehrmann@google.com>
2019-08-06 15:22:51 -07:00
|
|
|
fmt.Printf("genhash %v %v %v\n", closure, sym, t)
|
2016-02-26 14:56:31 -08:00
|
|
|
}
|
|
|
|
|
2017-03-28 13:52:14 -07:00
|
|
|
lineno = autogeneratedPos // less confusing than end of input
|
2016-02-26 14:56:31 -08:00
|
|
|
dclcontext = PEXTERN
|
|
|
|
|
|
|
|
// func sym(p *T, h uintptr) uintptr
|
2016-09-16 11:00:54 +10:00
|
|
|
tfn := nod(OTFUNC, nil, nil)
|
2018-04-18 23:22:26 -07:00
|
|
|
tfn.List.Set2(
|
|
|
|
namedfield("p", types.NewPtr(t)),
|
|
|
|
namedfield("h", types.Types[TUINTPTR]),
|
|
|
|
)
|
|
|
|
tfn.Rlist.Set1(anonfield(types.Types[TUINTPTR]))
|
2016-02-26 14:56:31 -08:00
|
|
|
|
2017-04-10 13:03:14 -07:00
|
|
|
fn := dclfunc(sym, tfn)
|
2018-04-18 23:22:26 -07:00
|
|
|
np := asNode(tfn.Type.Params().Field(0).Nname)
|
|
|
|
nh := asNode(tfn.Type.Params().Field(1).Nname)
|
2016-02-26 14:56:31 -08:00
|
|
|
|
|
|
|
switch t.Etype {
|
cmd/compile: factor out Pkg, Sym, and Type into package types
- created new package cmd/compile/internal/types
- moved Pkg, Sym, Type to new package
- to break cycles, for now we need the (ugly) types/utils.go
file which contains a handful of functions that must be installed
early by the gc frontend
- to break cycles, for now we need two functions to convert between
*gc.Node and *types.Node (the latter is a dummy type)
- adjusted the gc's code to use the new package and the conversion
functions as needed
- made several Pkg, Sym, and Type methods functions as needed
- renamed constructors typ, typPtr, typArray, etc. to types.New,
types.NewPtr, types.NewArray, etc.
Passes toolstash-check -all.
Change-Id: I8adfa5e85c731645d0a7fd2030375ed6ebf54b72
Reviewed-on: https://go-review.googlesource.com/39855
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2017-04-04 17:54:02 -07:00
|
|
|
case types.TARRAY:
|
2016-02-26 14:56:31 -08:00
|
|
|
// An array of pure memory would be handled by the
|
|
|
|
// standard algorithm, so the element type must not be
|
|
|
|
// pure memory.
|
2016-03-30 10:57:47 -07:00
|
|
|
hashel := hashfor(t.Elem())
|
2016-02-26 14:56:31 -08:00
|
|
|
|
2018-11-18 08:34:38 -08:00
|
|
|
n := nod(ORANGE, nil, nod(ODEREF, np, nil))
|
2016-09-15 15:45:10 +10:00
|
|
|
ni := newname(lookup("i"))
|
cmd/compile: factor out Pkg, Sym, and Type into package types
- created new package cmd/compile/internal/types
- moved Pkg, Sym, Type to new package
- to break cycles, for now we need the (ugly) types/utils.go
file which contains a handful of functions that must be installed
early by the gc frontend
- to break cycles, for now we need two functions to convert between
*gc.Node and *types.Node (the latter is a dummy type)
- adjusted the gc's code to use the new package and the conversion
functions as needed
- made several Pkg, Sym, and Type methods functions as needed
- renamed constructors typ, typPtr, typArray, etc. to types.New,
types.NewPtr, types.NewArray, etc.
Passes toolstash-check -all.
Change-Id: I8adfa5e85c731645d0a7fd2030375ed6ebf54b72
Reviewed-on: https://go-review.googlesource.com/39855
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2017-04-04 17:54:02 -07:00
|
|
|
ni.Type = types.Types[TINT]
|
2016-03-10 10:13:42 -08:00
|
|
|
n.List.Set1(ni)
|
2017-02-27 19:56:38 +02:00
|
|
|
n.SetColas(true)
|
2016-03-25 15:34:55 -07:00
|
|
|
colasdefn(n.List.Slice(), n)
|
2016-03-08 15:10:26 -08:00
|
|
|
ni = n.List.First()
|
2016-02-26 14:56:31 -08:00
|
|
|
|
|
|
|
// h = hashel(&p[i], h)
|
2016-09-16 11:00:54 +10:00
|
|
|
call := nod(OCALL, hashel, nil)
|
2016-02-26 14:56:31 -08:00
|
|
|
|
2016-09-16 11:00:54 +10:00
|
|
|
nx := nod(OINDEX, np, ni)
|
2017-02-27 19:56:38 +02:00
|
|
|
nx.SetBounded(true)
|
2016-09-16 11:00:54 +10:00
|
|
|
na := nod(OADDR, nx, nil)
|
2016-03-08 15:10:26 -08:00
|
|
|
call.List.Append(na)
|
|
|
|
call.List.Append(nh)
|
2016-09-16 11:00:54 +10:00
|
|
|
n.Nbody.Append(nod(OAS, nh, call))
|
2016-02-26 14:56:31 -08:00
|
|
|
|
2016-02-27 14:31:33 -08:00
|
|
|
fn.Nbody.Append(n)
|
2016-02-26 14:56:31 -08:00
|
|
|
|
cmd/compile: factor out Pkg, Sym, and Type into package types
- created new package cmd/compile/internal/types
- moved Pkg, Sym, Type to new package
- to break cycles, for now we need the (ugly) types/utils.go
file which contains a handful of functions that must be installed
early by the gc frontend
- to break cycles, for now we need two functions to convert between
*gc.Node and *types.Node (the latter is a dummy type)
- adjusted the gc's code to use the new package and the conversion
functions as needed
- made several Pkg, Sym, and Type methods functions as needed
- renamed constructors typ, typPtr, typArray, etc. to types.New,
types.NewPtr, types.NewArray, etc.
Passes toolstash-check -all.
Change-Id: I8adfa5e85c731645d0a7fd2030375ed6ebf54b72
Reviewed-on: https://go-review.googlesource.com/39855
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2017-04-04 17:54:02 -07:00
|
|
|
case types.TSTRUCT:
|
2016-03-08 03:40:50 -08:00
|
|
|
// Walk the struct using memhash for runs of AMEM
|
|
|
|
// and calling specific hash functions for the others.
|
2016-03-10 20:07:00 -08:00
|
|
|
for i, fields := 0, t.FieldSlice(); i < len(fields); {
|
|
|
|
f := fields[i]
|
|
|
|
|
2016-03-08 03:40:50 -08:00
|
|
|
// Skip blank fields.
|
2017-04-21 07:51:41 -07:00
|
|
|
if f.Sym.IsBlank() {
|
2016-03-10 20:07:00 -08:00
|
|
|
i++
|
2016-03-08 03:40:50 -08:00
|
|
|
continue
|
|
|
|
}
			// Hash non-memory fields with appropriate hash function.
			if !IsRegularMemory(f.Type) {
				hashel := hashfor(f.Type)
				call := nod(OCALL, hashel, nil)
				nx := nodSym(OXDOT, np, f.Sym) // TODO: fields from other packages?
				na := nod(OADDR, nx, nil)
				call.List.Append(na)
				call.List.Append(nh)
				fn.Nbody.Append(nod(OAS, nh, call))
				i++
				continue
			}
			// Otherwise, hash a maximal length run of raw memory.
			size, next := memrun(t, i)

			// h = hashel(&p.first, size, h)
			hashel := hashmem(f.Type)
			call := nod(OCALL, hashel, nil)
			nx := nodSym(OXDOT, np, f.Sym) // TODO: fields from other packages?
			na := nod(OADDR, nx, nil)
			call.List.Append(na)
			call.List.Append(nh)
			call.List.Append(nodintconst(size))
			fn.Nbody.Append(nod(OAS, nh, call))

			i = next
		}
	}

	r := nod(ORETURN, nil, nil)
	r.List.Append(nh)
	fn.Nbody.Append(r)

	if Debug['r'] != 0 {
		dumplist("genhash body", fn.Nbody)
	}

	funcbody()

	fn.Func.SetDupok(true)
	fn = typecheck(fn, ctxStmt)

	Curfn = fn
	typecheckslice(fn.Nbody.Slice(), ctxStmt)
	Curfn = nil

	if debug_dclstack != 0 {
		testdclstack()
	}

	fn.Func.SetNilCheckDisabled(true)
	funccompile(fn)

	// Build closure. It doesn't close over any variables, so
	// it contains just the function pointer.
	dsymptr(closure, 0, sym.Linksym(), 0)
	ggloblsym(closure, int32(Widthptr), obj.DUPOK|obj.RODATA)

	return closure
}

func hashfor(t *types.Type) *Node {
	var sym *types.Sym

	switch a, _ := algtype1(t); a {
	case AMEM:
		Fatalf("hashfor with AMEM type")
	case AINTER:
		sym = Runtimepkg.Lookup("interhash")
	case ANILINTER:
		sym = Runtimepkg.Lookup("nilinterhash")
	case ASTRING:
		sym = Runtimepkg.Lookup("strhash")
	case AFLOAT32:
		sym = Runtimepkg.Lookup("f32hash")
	case AFLOAT64:
		sym = Runtimepkg.Lookup("f64hash")
	case ACPLX64:
		sym = Runtimepkg.Lookup("c64hash")
	case ACPLX128:
		sym = Runtimepkg.Lookup("c128hash")
	default:
		// Note: the caller of hashfor ensured that this symbol
		// exists and has a body by calling genhash for t.
		sym = typesymprefix(".hash", t)
	}

	n := newname(sym)
	n.SetClass(PFUNC)
	n.Sym.SetFunc(true)
	n.Type = functype(nil, []*Node{
		anonfield(types.NewPtr(t)),
		anonfield(types.Types[TUINTPTR]),
	}, []*Node{
		anonfield(types.Types[TUINTPTR]),
	})
	return n
}

// sysClosure returns a closure which will call the
// given runtime function (with no closed-over variables).
func sysClosure(name string) *obj.LSym {
	s := sysvar(name + "·f")
	if len(s.P) == 0 {
		f := sysfunc(name)
		dsymptr(s, 0, f, 0)
		ggloblsym(s, int32(Widthptr), obj.DUPOK|obj.RODATA)
	}
	return s
}

// geneq returns a symbol which is the closure used to compute
// equality for two objects of type t.
func geneq(t *types.Type) *obj.LSym {
	switch algtype(t) {
	case ANOEQ:
		// The runtime will panic if it tries to compare
		// a type with a nil equality function.
		return nil
	case AMEM0:
		return sysClosure("memequal0")
	case AMEM8:
		return sysClosure("memequal8")
	case AMEM16:
		return sysClosure("memequal16")
	case AMEM32:
		return sysClosure("memequal32")
	case AMEM64:
		return sysClosure("memequal64")
	case AMEM128:
		return sysClosure("memequal128")
	case ASTRING:
		return sysClosure("strequal")
	case AINTER:
		return sysClosure("interequal")
	case ANILINTER:
		return sysClosure("nilinterequal")
	case AFLOAT32:
		return sysClosure("f32equal")
	case AFLOAT64:
		return sysClosure("f64equal")
	case ACPLX64:
		return sysClosure("c64equal")
	case ACPLX128:
		return sysClosure("c128equal")
	case AMEM:
		// make equality closure. The size of the type
		// is encoded in the closure.
		closure := typeLookup(fmt.Sprintf(".eqfunc%d", t.Width)).Linksym()
		if len(closure.P) != 0 {
			return closure
		}
		if memequalvarlen == nil {
			memequalvarlen = sysvar("memequal_varlen") // asm func
		}
		ot := 0
		ot = dsymptr(closure, ot, memequalvarlen, 0)
		ot = duintptr(closure, ot, uint64(t.Width))
		ggloblsym(closure, int32(ot), obj.DUPOK|obj.RODATA)
		return closure
	case ASPECIAL:
		break
	}

	closure := typesymprefix(".eqfunc", t).Linksym()
	if len(closure.P) > 0 { // already generated
		return closure
	}
	sym := typesymprefix(".eq", t)
	if Debug['r'] != 0 {
		fmt.Printf("geneq %v\n", t)
	}

	// Autogenerate code for equality of structs and arrays.

	lineno = autogeneratedPos // less confusing than end of input
	dclcontext = PEXTERN

	// func sym(p, q *T) bool
	tfn := nod(OTFUNC, nil, nil)
	tfn.List.Set2(
		namedfield("p", types.NewPtr(t)),
		namedfield("q", types.NewPtr(t)),
	)
	tfn.Rlist.Set1(namedfield("r", types.Types[TBOOL]))

	fn := dclfunc(sym, tfn)
	np := asNode(tfn.Type.Params().Field(0).Nname)
	nq := asNode(tfn.Type.Params().Field(1).Nname)

	// We reach here only for types that have equality but
	// cannot be handled by the standard algorithms,
	// so t must be either an array or a struct.
	switch t.Etype {
	default:
		Fatalf("geneq %v", t)

	case TARRAY:
		nelem := t.NumElem()

		// checkAll generates code to check the equality of all array elements.
		// If unroll is greater than nelem, checkAll generates:
		//
		// if eq(p[0], q[0]) && eq(p[1], q[1]) && ... {
		// } else {
		//   return
		// }
		//
		// And so on.
		//
		// Otherwise it generates:
		//
		// for i := 0; i < nelem; i++ {
		//   if eq(p[i], q[i]) {
		//   } else {
		//     return
		//   }
		// }
		//
		// TODO(josharian): consider doing some loop unrolling
		// for larger nelem as well, processing a few elements at a time in a loop.
		checkAll := func(unroll int64, eq func(pi, qi *Node) *Node) {
			// checkIdx generates a node to check for equality at index i.
			checkIdx := func(i *Node) *Node {
				// pi := p[i]
				pi := nod(OINDEX, np, i)
				pi.SetBounded(true)
				pi.Type = t.Elem()
				// qi := q[i]
				qi := nod(OINDEX, nq, i)
				qi.SetBounded(true)
				qi.Type = t.Elem()
				return eq(pi, qi)
			}

			if nelem <= unroll {
				// Generate a series of checks.
				var cond *Node
				for i := int64(0); i < nelem; i++ {
					c := nodintconst(i)
					check := checkIdx(c)
					if cond == nil {
						cond = check
						continue
					}
					cond = nod(OANDAND, cond, check)
				}
				nif := nod(OIF, cond, nil)
				nif.Rlist.Append(nod(ORETURN, nil, nil))
				fn.Nbody.Append(nif)
				return
			}

			// Generate a for loop.
			// for i := 0; i < nelem; i++
			i := temp(types.Types[TINT])
			init := nod(OAS, i, nodintconst(0))
			cond := nod(OLT, i, nodintconst(nelem))
			post := nod(OAS, i, nod(OADD, i, nodintconst(1)))
			loop := nod(OFOR, cond, post)
			loop.Ninit.Append(init)

			// if eq(pi, qi) {} else { return }
			check := checkIdx(i)
			nif := nod(OIF, check, nil)
			nif.Rlist.Append(nod(ORETURN, nil, nil))
			loop.Nbody.Append(nif)
			fn.Nbody.Append(loop)
		}

		switch t.Elem().Etype {
		case TSTRING:
			// Do two loops. First, check that all the lengths match (cheap).
			// Second, check that all the contents match (expensive).
			// TODO: when the array size is small, unroll the length match checks.
			checkAll(3, func(pi, qi *Node) *Node {
|
2020-04-25 15:14:35 -07:00
|
|
|
// Compare lengths.
|
|
|
|
eqlen, _ := eqstring(pi, qi)
|
|
|
|
return eqlen
|
|
|
|
})
|
cmd/compile: eliminate some array equality alg loops
type T [3]string
Prior to this change, we generated this equality alg for T:
func eqT(p, q *T) (r bool) {
for i := range *p {
if len(p[i]) == len(q[i]) {
} else {
return
}
}
for j := range *p {
if runtime.memeq(p[j].ptr, q[j].ptr, len(p[j])) {
} else {
return
}
}
return true
}
That first loop can be profitably eliminated;
it's cheaper to spell out 3 length equality checks.
We now generate:
func eqT(p, q *T) (r bool) {
if len(p[0]) == len(q[0]) &&
len(p[1]) == len(q[1]) &&
len(p[2]) == len(q[2]) {
} else {
return
}
for i := 0; i < len(p); i++ {
if runtime.memeq(p[j].ptr, q[j].ptr, len(p[j])) {
} else {
return
}
}
return true
}
We now also eliminate loops for small float arrays as well,
and for any array of size 1.
These cutoffs were selected to minimize code size on amd64
at this moment, for lack of a more compelling methodology.
Any smallish number would do.
The switch from range loops to plain for loops allowed me
to use a temp instead of a named var, which eliminated
a pointless argument to checkAll.
The code to construct them is also a bit clearer, in my opinion.
Change-Id: I1bdd8ee4a2739d00806e66b17a4e76b46e71231a
Reviewed-on: https://go-review.googlesource.com/c/go/+/230210
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
			checkAll(1, func(pi, qi *Node) *Node {
				// Compare contents.
				_, eqmem := eqstring(pi, qi)
				return eqmem
			})
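The two passes above (all lengths first, then all contents) mirror generated code of roughly this shape for `type T [3]string`. This is a hand-written sketch of the semantics, not actual compiler output; the direct string comparison in the second loop stands in for the `runtime.memequal` call on the string data pointers:

```go
package main

import "fmt"

type T [3]string

// eqT sketches the comparison the compiler generates for [3]string:
// first compare all lengths (cheap), then compare contents.
func eqT(p, q *T) bool {
	// Pass 1: compare lengths (eqlen conditions).
	for i := range p {
		if len(p[i]) != len(q[i]) {
			return false
		}
	}
	// Pass 2: compare contents (eqmem conditions; lengths already match).
	for i := range p {
		if p[i] != q[i] {
			return false
		}
	}
	return true
}

func main() {
	a := T{"go", "compiler", "alg"}
	b := T{"go", "compiler", "alg"}
	c := T{"go", "compiler", "ALG"}
	fmt.Println(eqT(&a, &b), eqT(&a, &c))
}
```

Running all the length checks before any content checks is safe here because neither pass can panic, so the two groups are freely reorderable.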

		case TFLOAT32, TFLOAT64:
			checkAll(2, func(pi, qi *Node) *Node {
				// p[i] == q[i]
				return nod(OEQ, pi, qi)
			})
		// TODO: pick apart structs, do them piecemeal too
		default:
			checkAll(1, func(pi, qi *Node) *Node {
				// p[i] == q[i]
				return nod(OEQ, pi, qi)
			})
		}

		// return true
		ret := nod(ORETURN, nil, nil)
		ret.List.Append(nodbool(true))
		fn.Nbody.Append(ret)

	case TSTRUCT:
cmd/compile: make runtime calls last in eq algs
type T struct {
f float64
a [64]uint64
g float64
}
Prior to this change, the generated equality algorithm for T was:
func eqT(p, q *T) bool {
return p.f == q.f && runtime.memequal(p.a, q.a, 512) && p.g == q.g
}
In handwritten code, we would normally put the cheapest checks first.
This change takes a step in that direction. We now generate:
func eqT(p, q *T) bool {
return p.f == q.f && p.g == q.g && runtime.memequal(p.a, q.a, 512)
}
For most types, this also generates considerably shorter code. Examples:
runtime
.eq."".mstats 406 -> 391 (-3.69%)
.eq.""._func 114 -> 101 (-11.40%)
.eq."".itab 115 -> 102 (-11.30%)
.eq."".scase 125 -> 116 (-7.20%)
.eq."".traceStack 119 -> 102 (-14.29%)
.eq."".gcControllerState 169 -> 161 (-4.73%)
.eq."".sweepdata 121 -> 112 (-7.44%)
However, for types in which we make unwise choices about inlining
memory-only comparisons (#38494), this generates longer code.
Example:
cmd/internal/obj
.eq."".objWriter 211 -> 214 (+1.42%)
.eq."".Addr 185 -> 187 (+1.08%)
Fortunately, such cases are not common.
Change-Id: I47a27da93c1f88ec71fa350c192f36b29548a217
Reviewed-on: https://go-review.googlesource.com/c/go/+/230203
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
		// Build a list of conditions to satisfy.
		// The conditions are a list-of-lists. Conditions are reorderable
		// within each inner list. The outer lists must be evaluated in order.
		var conds [][]*Node
		conds = append(conds, []*Node{})
		and := func(n *Node) {
			i := len(conds) - 1
			conds[i] = append(conds[i], n)
		}

		// Walk the struct using memequal for runs of AMEM
		// and calling specific equality tests for the others.
		for i, fields := 0, t.FieldSlice(); i < len(fields); {
			f := fields[i]

			// Skip blank-named fields.
			if f.Sym.IsBlank() {
				i++
				continue
			}

			// Compare non-memory fields with field equality.
			if !IsRegularMemory(f.Type) {
				if EqCanPanic(f.Type) {
					// Enforce ordering by starting a new set of reorderable conditions.
					conds = append(conds, []*Node{})
				}
				p := nodSym(OXDOT, np, f.Sym)
				q := nodSym(OXDOT, nq, f.Sym)
				switch {
				case f.Type.IsString():
					eqlen, eqmem := eqstring(p, q)
					and(eqlen)
					and(eqmem)
				default:
					and(nod(OEQ, p, q))
				}
				if EqCanPanic(f.Type) {
					// Also enforce ordering after something that can panic.
					conds = append(conds, []*Node{})
				}
				i++
				continue
			}

			// Find maximal length run of memory-only fields.
			size, next := memrun(t, i)

			// TODO(rsc): All the calls to newname are wrong for
			// cross-package unexported fields.
			if s := fields[i:next]; len(s) <= 2 {
				// Two or fewer fields: use plain field equality.
				for _, f := range s {
cmd/compile: change ODOT and friends to use Sym, not Right
The Node type ODOT and its variants all represent a selector, with a
simple name to the right of the dot. Before this change this was
represented by using an ONAME Node in the Right field. This ONAME node
served no useful purpose. This CL changes these Node types to store the
symbol in the Sym field instead, thus not requiring allocating a Node
for each selector.
When compiling x/tools/go/types this CL eliminates nearly 5000 calls to
newname and reduces the total number of Nodes allocated by about 6.6%.
It seems to cut compilation time by 1 to 2 percent.
Getting this right was somewhat subtle, and I added two dubious changes
to produce the exact same output as before. One is to ishairy in
inl.go: the ONAME node increased the cost of ODOT and friends by 1, and
I retained that, although really ODOT is not more expensive than any
other node. The other is to varexpr in walk.go: because the ONAME in
the Right field of an ODOT has no class, varexpr would always return
false for an ODOT, although in fact for some ODOT's it seemingly ought
to return true; I added an && false for now. I will send separate CLs,
that will break toolstash -cmp, to clean these up.
This CL passes toolstash -cmp.
Change-Id: I4af8a10cc59078c436130ce472f25abc3a9b2f80
Reviewed-on: https://go-review.googlesource.com/20890
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
2016-03-18 16:52:30 -07:00
					and(eqfield(np, nq, f.Sym))
				}
			} else {
				// More than two fields: use memequal.
				and(eqmem(np, nq, f.Sym, size))
			}
			i = next
		}

		// Sort conditions to put runtime calls last.
		// Preserve the rest of the ordering.
		var flatConds []*Node
		for _, c := range conds {
			isCall := func(n *Node) bool {
				return n.Op == OCALL || n.Op == OCALLFUNC
			}
			sort.SliceStable(c, func(i, j int) bool {
				return !isCall(c[i]) && isCall(c[j])
			})
			flatConds = append(flatConds, c...)
		}

		var cond *Node
		if len(flatConds) == 0 {
			cond = nodbool(true)
		} else {
			cond = flatConds[0]
			for _, c := range flatConds[1:] {
				cond = nod(OANDAND, cond, c)
			}
		}

		ret := nod(ORETURN, nil, nil)
		ret.List.Append(cond)
		fn.Nbody.Append(ret)
	}

	if Debug['r'] != 0 {
		dumplist("geneq body", fn.Nbody)
	}

	funcbody()

	fn.Func.SetDupok(true)
	fn = typecheck(fn, ctxStmt)

	Curfn = fn
	typecheckslice(fn.Nbody.Slice(), ctxStmt)
	Curfn = nil

	if debug_dclstack != 0 {
		testdclstack()
	}

	// Disable checknils while compiling this code.
	// We are comparing a struct or an array,
	// neither of which can be nil, and our comparisons
	// are shallow.
	fn.Func.SetNilCheckDisabled(true)
	funccompile(fn)

	// Generate a closure which points at the function we just generated.
	dsymptr(closure, 0, sym.Linksym(), 0)
	ggloblsym(closure, int32(Widthptr), obj.DUPOK|obj.RODATA)
	return closure
}

// eqfield returns the node
// 	p.field == q.field
func eqfield(p *Node, q *Node, field *types.Sym) *Node {
	nx := nodSym(OXDOT, p, field)
	ny := nodSym(OXDOT, q, field)
	ne := nod(OEQ, nx, ny)
	return ne
}

// eqstring returns the nodes
// 	len(s) == len(t)
// and
// 	memequal(s.ptr, t.ptr, len(s))
// which can be used to construct string equality comparison.
// eqlen must be evaluated before eqmem, and shortcircuiting is required.
func eqstring(s, t *Node) (eqlen, eqmem *Node) {
	s = conv(s, types.Types[TSTRING])
	t = conv(t, types.Types[TSTRING])
	sptr := nod(OSPTR, s, nil)
	tptr := nod(OSPTR, t, nil)
	slen := conv(nod(OLEN, s, nil), types.Types[TUINTPTR])
	tlen := conv(nod(OLEN, t, nil), types.Types[TUINTPTR])

	fn := syslook("memequal")
	fn = substArgTypes(fn, types.Types[TUINT8], types.Types[TUINT8])
	call := nod(OCALL, fn, nil)
	call.List.Append(sptr, tptr, slen.copy())
	call = typecheck(call, ctxExpr|ctxMultiOK)

	cmp := nod(OEQ, slen, tlen)
	cmp = typecheck(cmp, ctxExpr)
	cmp.Type = types.Types[TBOOL]
	return cmp, call
}
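eqstring returns the two halves separately so the caller controls how they are combined; joined with `&&` they amount to the following check. This is a sketch of the generated semantics only, with an explicit byte loop standing in for the `runtime.memequal` call:

```go
package main

import "fmt"

// eqStr sketches what the compiler emits for a string comparison:
// eqlen (len(s) == len(t)) must be evaluated first and must
// short-circuit, because memequal reads len(s) bytes from both operands.
func eqStr(s, t string) bool {
	if len(s) != len(t) { // eqlen
		return false
	}
	for i := 0; i < len(s); i++ { // eqmem: memequal(s.ptr, t.ptr, len(s))
		if s[i] != t[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(eqStr("geneq", "geneq"), eqStr("geneq", "genhash"))
}
```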

// eqinterface returns the nodes
// 	s.tab == t.tab (or s.typ == t.typ, as appropriate)
// and
// 	ifaceeq(s.tab, s.data, t.data) (or efaceeq(s.typ, s.data, t.data), as appropriate)
// which can be used to construct interface equality comparison.
// eqtab must be evaluated before eqdata, and shortcircuiting is required.
func eqinterface(s, t *Node) (eqtab, eqdata *Node) {
	if !types.Identical(s.Type, t.Type) {
		Fatalf("eqinterface %v %v", s.Type, t.Type)
	}
	// func ifaceeq(tab *uintptr, x, y unsafe.Pointer) (ret bool)
	// func efaceeq(typ *uintptr, x, y unsafe.Pointer) (ret bool)
	var fn *Node
	if s.Type.IsEmptyInterface() {
		fn = syslook("efaceeq")
	} else {
		fn = syslook("ifaceeq")
	}

	stab := nod(OITAB, s, nil)
	ttab := nod(OITAB, t, nil)
	sdata := nod(OIDATA, s, nil)
	tdata := nod(OIDATA, t, nil)
	sdata.Type = types.Types[TUNSAFEPTR]
	tdata.Type = types.Types[TUNSAFEPTR]
	sdata.SetTypecheck(1)
	tdata.SetTypecheck(1)

	call := nod(OCALL, fn, nil)
	call.List.Append(stab, sdata, tdata)
	call = typecheck(call, ctxExpr|ctxMultiOK)

	cmp := nod(OEQ, stab, ttab)
	cmp = typecheck(cmp, ctxExpr)
	cmp.Type = types.Types[TBOOL]
	return cmp, call
}

// eqmem returns the node
// 	memequal(&p.field, &q.field [, size])
func eqmem(p *Node, q *Node, field *types.Sym, size int64) *Node {
	nx := nod(OADDR, nodSym(OXDOT, p, field), nil)
	ny := nod(OADDR, nodSym(OXDOT, q, field), nil)
	nx = typecheck(nx, ctxExpr)
	ny = typecheck(ny, ctxExpr)

	fn, needsize := eqmemfunc(size, nx.Type.Elem())
	call := nod(OCALL, fn, nil)
	call.List.Append(nx)
	call.List.Append(ny)
	if needsize {
		call.List.Append(nodintconst(size))
	}

	return call
}
func eqmemfunc(size int64, t *types.Type) (fn *Node, needsize bool) {
	switch size {
	default:
		fn = syslook("memequal")
		needsize = true
	case 1, 2, 4, 8, 16:
		buf := fmt.Sprintf("memequal%d", int(size)*8)
		fn = syslook(buf)
	}
cmd/compile: reduce use of **Node parameters
Escape analysis has a hard time with tree-like
structures (see #13493 and #14858).
This is unlikely to change.
As a result, when invoking a function that accepts
a **Node parameter, we usually allocate a *Node
on the heap. This happens a whole lot.
This CL changes functions from taking a **Node
to acting more like append: It both modifies
the input and returns a replacement for it.
Because of the cascading nature of escape analysis,
in order to get the benefits, I had to modify
almost all such functions. The remaining functions
are in racewalk and the backend. I would be happy
to update them as well in a separate CL.
This CL was created by manually updating the
function signatures and the directly impacted
bits of code. The callsites were then automatically
updated using a bespoke script:
https://gist.github.com/josharian/046b1be7aceae244de39
For ease of reviewing and future understanding,
this CL is also broken down into four CLs,
mailed separately, which show the manual
and the automated changes separately.
They are CLs 20990, 20991, 20992, and 20993.
Passes toolstash -cmp.
name old time/op new time/op delta
Template 335ms ± 5% 324ms ± 5% -3.35% (p=0.000 n=23+24)
Unicode 176ms ± 9% 165ms ± 6% -6.12% (p=0.000 n=23+24)
GoTypes 1.10s ± 4% 1.07s ± 2% -2.77% (p=0.000 n=24+24)
Compiler 5.31s ± 3% 5.15s ± 3% -2.95% (p=0.000 n=24+24)
MakeBash 41.6s ± 1% 41.7s ± 2% ~ (p=0.586 n=23+23)
name old alloc/op new alloc/op delta
Template 63.3MB ± 0% 62.4MB ± 0% -1.36% (p=0.000 n=25+23)
Unicode 42.4MB ± 0% 41.6MB ± 0% -1.99% (p=0.000 n=24+25)
GoTypes 220MB ± 0% 217MB ± 0% -1.11% (p=0.000 n=25+25)
Compiler 994MB ± 0% 973MB ± 0% -2.08% (p=0.000 n=24+25)
name old allocs/op new allocs/op delta
Template 681k ± 0% 574k ± 0% -15.71% (p=0.000 n=24+25)
Unicode 518k ± 0% 413k ± 0% -20.34% (p=0.000 n=25+24)
GoTypes 2.08M ± 0% 1.78M ± 0% -14.62% (p=0.000 n=25+25)
Compiler 9.26M ± 0% 7.64M ± 0% -17.48% (p=0.000 n=25+25)
name old text-bytes new text-bytes delta
HelloSize 578k ± 0% 578k ± 0% ~ (all samples are equal)
CmdGoSize 6.46M ± 0% 6.46M ± 0% ~ (all samples are equal)
name old data-bytes new data-bytes delta
HelloSize 128k ± 0% 128k ± 0% ~ (all samples are equal)
CmdGoSize 281k ± 0% 281k ± 0% ~ (all samples are equal)
name old exe-bytes new exe-bytes delta
HelloSize 921k ± 0% 921k ± 0% ~ (all samples are equal)
CmdGoSize 9.86M ± 0% 9.86M ± 0% ~ (all samples are equal)
Change-Id: I277d95bd56d51c166ef7f560647aeaa092f3f475
Reviewed-on: https://go-review.googlesource.com/20959
Reviewed-by: Dave Cheney <dave@cheney.net>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
2016-03-20 08:03:31 -07:00
	fn = substArgTypes(fn, t, t)
	return fn, needsize
}

// memrun finds runs of struct fields for which memory-only algs are appropriate.
// t is the parent struct type, and start is the field index at which to start the run.
// size is the length in bytes of the memory included in the run.
// next is the index just after the end of the memory run.
|
|
|
func memrun(t *types.Type, start int) (size int64, next int) {
|
2016-03-08 03:40:50 -08:00
|
|
|
next = start
|
2016-02-26 14:56:31 -08:00
|
|
|
for {
|
2016-03-10 20:07:00 -08:00
|
|
|
next++
|
2016-03-28 10:35:13 -07:00
|
|
|
if next == t.NumFields() {
|
2016-02-26 14:56:31 -08:00
|
|
|
break
|
|
|
|
}
|
2016-03-08 03:40:50 -08:00
|
|
|
// Stop run after a padded field.
|
2016-03-28 10:35:13 -07:00
|
|
|
if ispaddedfield(t, next-1) {
|
2016-03-08 03:40:50 -08:00
|
|
|
break
|
2016-02-26 14:56:31 -08:00
|
|
|
}
|
2016-03-08 03:40:50 -08:00
|
|
|
// Also, stop before a blank or non-memory field.
|
2017-04-21 07:51:41 -07:00
|
|
|
if f := t.Field(next); f.Sym.IsBlank() || !IsRegularMemory(f.Type) {
|
2016-02-26 14:56:31 -08:00
|
|
|
break
|
|
|
|
}
|
|
|
|
}
|
2016-03-28 10:35:13 -07:00
|
|
|
return t.Field(next-1).End() - t.Field(start).Offset, next
|
2016-02-26 14:56:31 -08:00
|
|
|
}
|
|
|
|
|
2016-03-10 20:07:00 -08:00
|
|
|
// ispaddedfield reports whether the i'th field of struct type t is followed
|
2016-03-28 10:35:13 -07:00
|
|
|
// by padding.
|
cmd/compile: factor out Pkg, Sym, and Type into package types
- created new package cmd/compile/internal/types
- moved Pkg, Sym, Type to new package
- to break cycles, for now we need the (ugly) types/utils.go
file which contains a handful of functions that must be installed
early by the gc frontend
- to break cycles, for now we need two functions to convert between
*gc.Node and *types.Node (the latter is a dummy type)
- adjusted the gc's code to use the new package and the conversion
functions as needed
- made several Pkg, Sym, and Type methods functions as needed
- renamed constructors typ, typPtr, typArray, etc. to types.New,
types.NewPtr, types.NewArray, etc.
Passes toolstash-check -all.
Change-Id: I8adfa5e85c731645d0a7fd2030375ed6ebf54b72
Reviewed-on: https://go-review.googlesource.com/39855
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2017-04-04 17:54:02 -07:00
|
|
|
func ispaddedfield(t *types.Type, i int) bool {
|
2016-03-30 14:56:08 -07:00
|
|
|
if !t.IsStruct() {
|
2016-03-08 03:40:50 -08:00
|
|
|
Fatalf("ispaddedfield called non-struct %v", t)
|
|
|
|
}
|
|
|
|
end := t.Width
|
2016-03-28 10:35:13 -07:00
|
|
|
if i+1 < t.NumFields() {
|
|
|
|
end = t.Field(i + 1).Offset
|
2016-02-26 14:56:31 -08:00
|
|
|
}
|
2016-03-28 10:35:13 -07:00
|
|
|
return t.Field(i).End() != end
|
2016-02-26 14:56:31 -08:00
|
|
|
}
|