[dev.link] all: merge branch 'master' into dev.link

The only conflict is a modify-deletion conflict in
cmd/link/internal/ld/link.go, where the old error reporter is
deleted in the new linker. Ported to
cmd/link/internal/ld/errors.go.

Change-Id: I5c78f398ea95bc1d7e6579c84dd8252c9f2196b7
Commit: 6b6eb23041
Author: Cherry Zhang
Date: 2020-04-02 14:00:59 -04:00
113 changed files with 5279 additions and 6832 deletions


@@ -552,9 +552,7 @@ $ ./all.bash
 </pre>
 <p>
-(To build under Windows use <code>all.bat</code>; this also requires
-setting the environment variable <code>GOROOT_BOOTSTRAP</code> to the
-directory holding the Go tree for the bootstrap compiler.)
+(To build under Windows use <code>all.bat</code>)
 </p>
 <p>
@@ -1008,7 +1006,7 @@ followed by <code>run.bash</code>.
 <li>
 In this section, we'll call the directory into which you cloned the Go repository <code>$GODIR</code>.
-The <code>go</code> tool built by <code>$GODIR/make.bash</code> will be installed
+The <code>go</code> tool built by <code>$GODIR/src/make.bash</code> will be installed
 in <code>$GODIR/bin/go</code> and you
 can invoke it to test your code.
 For instance, if you


@@ -43,6 +43,18 @@ TODO
 <h3 id="go-command">Go command</h3>
+<p><!-- golang.org/issue/37367 -->
+The <code>GOPROXY</code> environment variable now supports skipping proxies
+that return errors. Proxy URLs may now be separated with either commas
+(<code>,</code>) or pipe characters (<code>|</code>). If a proxy URL is
+followed by a comma, the <code>go</code> command will only try the next proxy
+in the list after a 404 or 410 HTTP response. If a proxy URL is followed by a
+pipe character, the <code>go</code> command will try the next proxy in the
+list after any error. Note that the default value of <code>GOPROXY</code>
+remains <code>https://proxy.golang.org,direct</code>, which does not fall
+back to <code>direct</code> in case of errors.
+</p>
 <p>
 TODO
 </p>
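The comma/pipe fallback rule described above can be sketched in a few lines of Go. This is an illustration only: `parseGOPROXY` and `proxyEntry` are hypothetical names for this sketch, not the go command's internals; the real logic lives in `cmd/go/internal/modfetch`.

```go
package main

import (
	"fmt"
	"strings"
)

// proxyEntry records one GOPROXY list element and how the go command may
// fall back past it: after a comma, only on HTTP 404/410; after a pipe,
// on any error.
type proxyEntry struct {
	url           string
	fallbackOnAny bool // true if this entry was followed by '|'
}

// parseGOPROXY splits a GOPROXY value on ',' and '|', remembering which
// separator followed each entry.
func parseGOPROXY(list string) []proxyEntry {
	var entries []proxyEntry
	for list != "" {
		i := strings.IndexAny(list, ",|")
		if i < 0 {
			entries = append(entries, proxyEntry{url: list})
			break
		}
		entries = append(entries, proxyEntry{
			url:           list[:i],
			fallbackOnAny: list[i] == '|',
		})
		list = list[i+1:]
	}
	return entries
}

func main() {
	// The default value: errors from proxy.golang.org do NOT fall back to direct.
	for _, e := range parseGOPROXY("https://proxy.golang.org,direct") {
		fmt.Printf("%s fallbackOnAnyError=%v\n", e.url, e.fallbackOnAny)
	}
}
```

Under this reading, `GOPROXY=https://corp|https://proxy.golang.org,direct` would try the public proxy after any corporate-proxy error, but fall through to `direct` only on 404/410.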


@@ -106,23 +106,17 @@ Go does not support CentOS 6 on these systems.
 </div>
-<h2 id="go14">Install Go compiler binaries</h2>
+<h2 id="go14">Install Go compiler binaries for bootstrap</h2>
 <p>
 The Go toolchain is written in Go. To build it, you need a Go compiler installed.
-The scripts that do the initial build of the tools look for an existing Go tool
-chain in <code>$GOROOT_BOOTSTRAP</code>.
-If unset, the default value of <code>GOROOT_BOOTSTRAP</code>
-is <code>$HOME/go1.4</code>.
-</p>
-
-<p>
-There are many options for the bootstrap toolchain.
-After obtaining one, set <code>GOROOT_BOOTSTRAP</code> to the
-directory containing the unpacked tree.
-For example, <code>$GOROOT_BOOTSTRAP/bin/go</code> should be
-the <code>go</code> command binary for the bootstrap toolchain.
-</p>
+The scripts that do the initial build of the tools look for a "go" command
+in <code>$PATH</code>, so as long as you have Go installed in your
+system and configured in your <code>$PATH</code>, you are ready to build Go
+from source.
+Or if you prefer you can set <code>$GOROOT_BOOTSTRAP</code> to the
+root of a Go installation to use to build the new Go toolchain;
+<code>$GOROOT_BOOTSTRAP/bin/go</code> should be the go command to use.</p>
 <h3 id="bootstrapFromBinaryRelease">Bootstrap toolchain from binary release</h3>


@@ -17,7 +17,7 @@
 <p>
 <a href="/dl/" target="_blank">Official binary
 distributions</a> are available for the FreeBSD (release 10-STABLE and above),
-Linux, macOS (10.10 and above), and Windows operating systems and
+Linux, macOS (10.11 and above), and Windows operating systems and
 the 32-bit (<code>386</code>) and 64-bit (<code>amd64</code>) x86 processor
 architectures.
 </p>
@@ -49,7 +49,7 @@ If your OS or architecture is not on the list, you may be able to
 <tr><td colspan="3"><hr></td></tr>
 <tr><td>FreeBSD 10.3 or later</td> <td>amd64, 386</td> <td>Debian GNU/kFreeBSD not supported</td></tr>
 <tr valign='top'><td>Linux 2.6.23 or later with glibc</td> <td>amd64, 386, arm, arm64,<br>s390x, ppc64le</td> <td>CentOS/RHEL 5.x not supported.<br>Install from source for other libc.</td></tr>
-<tr><td>macOS 10.10 or later</td> <td>amd64</td> <td>use the clang or gcc<sup>&#8224;</sup> that comes with Xcode<sup>&#8225;</sup> for <code>cgo</code> support</td></tr>
+<tr><td>macOS 10.11 or later</td> <td>amd64</td> <td>use the clang or gcc<sup>&#8224;</sup> that comes with Xcode<sup>&#8225;</sup> for <code>cgo</code> support</td></tr>
 <tr valign='top'><td>Windows 7, Server 2008R2 or later</td> <td>amd64, 386</td> <td>use MinGW (<code>386</code>) or MinGW-W64 (<code>amd64</code>) gcc<sup>&#8224;</sup>.<br>No need for cygwin or msys.</td></tr>
 </table>


@@ -0,0 +1,33 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build ignore
+
+package main
+
+/*
+typedef struct A A;
+
+typedef struct {
+	struct A *next;
+	struct A **prev;
+} N;
+
+struct A
+{
+	N n;
+};
+
+typedef struct B
+{
+	A* a;
+} B;
+*/
+import "C"
+
+type N C.N
+type A C.A
+type B C.B


@@ -0,0 +1,23 @@
+// Copyright 2020 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build ignore
+
+package main
+
+/*
+struct tt {
+	long long a;
+	long long b;
+};
+
+struct s {
+	struct tt ts[3];
+};
+*/
+import "C"
+
+type TT C.struct_tt
+type S C.struct_s


@@ -11,5 +11,13 @@ var v2 = v1.L
 // Test that P, Q, and R all point to byte.
 var v3 = Issue8478{P: (*byte)(nil), Q: (**byte)(nil), R: (***byte)(nil)}

+// Test that N, A and B are fully defined
+var v4 = N{}
+var v5 = A{}
+var v6 = B{}
+
+// Test that S is fully defined
+var v7 = S{}
+
 func main() {
 }


@@ -21,6 +21,8 @@ var filePrefixes = []string{
 	"anonunion",
 	"issue8478",
 	"fieldtypedef",
+	"issue37479",
+	"issue37621",
 }

 func TestGoDefs(t *testing.T) {


@@ -88,7 +88,8 @@ func jumpX86(word string) bool {
 func jumpRISCV(word string) bool {
 	switch word {
-	case "BEQ", "BNE", "BLT", "BGE", "BLTU", "BGEU", "CALL", "JAL", "JALR", "JMP":
+	case "BEQ", "BEQZ", "BGE", "BGEU", "BGEZ", "BGT", "BGTU", "BGTZ", "BLE", "BLEU", "BLEZ",
+		"BLT", "BLTU", "BLTZ", "BNE", "BNEZ", "CALL", "JAL", "JALR", "JMP":
 		return true
 	}
 	return false


@@ -330,6 +330,19 @@ start:
 	CALL	asmtest(SB)	// 970f0000
 	JMP	asmtest(SB)	// 970f0000

+	// Branch pseudo-instructions
+	BEQZ	X5, start	// BEQZ	X5, 2	// e38a02c2
+	BGEZ	X5, start	// BGEZ	X5, 2	// e3d802c2
+	BGT	X5, X6, start	// BGT	X5, X6, 2	// e3c662c2
+	BGTU	X5, X6, start	// BGTU	X5, X6, 2	// e3e462c2
+	BGTZ	X5, start	// BGTZ	X5, 2	// e34250c2
+	BLE	X5, X6, start	// BLE	X5, X6, 2	// e3d062c2
+	BLEU	X5, X6, start	// BLEU	X5, X6, 2	// e3fe62c0
+	BLEZ	X5, start	// BLEZ	X5, 2	// e35c50c0
+	BLTZ	X5, start	// BLTZ	X5, 2	// e3ca02c0
+	BNEZ	X5, start	// BNEZ	X5, 2	// e39802c0
+
 	// Set pseudo-instructions
 	SEQZ	X15, X15	// 93b71700
 	SNEZ	X15, X15	// b337f000


@@ -2243,7 +2243,7 @@ func (c *typeConv) loadType(dtype dwarf.Type, pos token.Pos, parent string) *Typ
 			// Translate to zero-length array instead.
 			count = 0
 		}
-		sub := c.loadType(dt.Type, pos, key)
+		sub := c.Type(dt.Type, pos)
 		t.Align = sub.Align
 		t.Go = &ast.ArrayType{
 			Len: c.intExpr(count),
@@ -2388,7 +2388,7 @@ func (c *typeConv) loadType(dtype dwarf.Type, pos token.Pos, parent string) *Typ
 		c.ptrs[key] = append(c.ptrs[key], t)
 	case *dwarf.QualType:
-		t1 := c.loadType(dt.Type, pos, key)
+		t1 := c.Type(dt.Type, pos)
 		t.Size = t1.Size
 		t.Align = t1.Align
 		t.Go = t1.Go
@@ -2472,7 +2472,13 @@ func (c *typeConv) loadType(dtype dwarf.Type, pos token.Pos, parent string) *Typ
 		}
 		name := c.Ident("_Ctype_" + dt.Name)
 		goIdent[name.Name] = name
-		sub := c.loadType(dt.Type, pos, key)
+		akey := ""
+		if c.anonymousStructTypedef(dt) {
+			// only load type recursively for typedefs of anonymous
+			// structs, see issues 37479 and 37621.
+			akey = key
+		}
+		sub := c.loadType(dt.Type, pos, akey)
 		if c.badPointerTypedef(dt) {
 			// Treat this typedef as a uintptr.
 			s := *sub
@@ -2993,6 +2999,13 @@ func fieldPrefix(fld []*ast.Field) string {
 	return prefix
 }

+// anonymousStructTypedef reports whether dt is a C typedef for an anonymous
+// struct.
+func (c *typeConv) anonymousStructTypedef(dt *dwarf.TypedefType) bool {
+	st, ok := dt.Type.(*dwarf.StructType)
+	return ok && st.StructName == ""
+}
+
 // badPointerTypedef reports whether t is a C typedef that should not be considered a pointer in Go.
 // A typedef is bad if C code sometimes stores non-pointers in this type.
 // TODO: Currently our best solution is to find these manually and list them as


@@ -681,6 +681,19 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
 		gc.AddAux2(&p.From, v, sc.Off())
 		p.To.Type = obj.TYPE_CONST
 		p.To.Offset = sc.Val()
+	case ssa.OpAMD64CMPQloadidx8, ssa.OpAMD64CMPQloadidx1, ssa.OpAMD64CMPLloadidx4, ssa.OpAMD64CMPLloadidx1, ssa.OpAMD64CMPWloadidx2, ssa.OpAMD64CMPWloadidx1, ssa.OpAMD64CMPBloadidx1:
+		p := s.Prog(v.Op.Asm())
+		memIdx(&p.From, v)
+		gc.AddAux(&p.From, v)
+		p.To.Type = obj.TYPE_REG
+		p.To.Reg = v.Args[2].Reg()
+	case ssa.OpAMD64CMPQconstloadidx8, ssa.OpAMD64CMPQconstloadidx1, ssa.OpAMD64CMPLconstloadidx4, ssa.OpAMD64CMPLconstloadidx1, ssa.OpAMD64CMPWconstloadidx2, ssa.OpAMD64CMPWconstloadidx1, ssa.OpAMD64CMPBconstloadidx1:
+		sc := v.AuxValAndOff()
+		p := s.Prog(v.Op.Asm())
+		memIdx(&p.From, v)
+		gc.AddAux2(&p.From, v, sc.Off())
+		p.To.Type = obj.TYPE_CONST
+		p.To.Offset = sc.Val()
 	case ssa.OpAMD64MOVLconst, ssa.OpAMD64MOVQconst:
 		x := v.Reg()
@@ -947,7 +960,8 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
 		p := s.Prog(obj.ACALL)
 		p.To.Type = obj.TYPE_MEM
 		p.To.Name = obj.NAME_EXTERN
-		p.To.Sym = v.Aux.(*obj.LSym)
+		// arg0 is in DI. Set sym to match where regalloc put arg1.
+		p.To.Sym = gc.GCWriteBarrierReg[v.Args[1].Reg()]
 	case ssa.OpAMD64LoweredPanicBoundsA, ssa.OpAMD64LoweredPanicBoundsB, ssa.OpAMD64LoweredPanicBoundsC:
 		p := s.Prog(obj.ACALL)


@@ -13,6 +13,7 @@ var runtimeDecls = [...]struct {
 	{"panicdivide", funcTag, 5},
 	{"panicshift", funcTag, 5},
 	{"panicmakeslicelen", funcTag, 5},
+	{"panicmakeslicecap", funcTag, 5},
 	{"throwinit", funcTag, 5},
 	{"panicwrap", funcTag, 5},
 	{"gopanic", funcTag, 7},


@@ -18,6 +18,7 @@ func newobject(typ *byte) *any
 func panicdivide()
 func panicshift()
 func panicmakeslicelen()
+func panicmakeslicecap()
 func throwinit()
 func panicwrap()


@@ -334,3 +334,6 @@ var (
 	WasmTruncU,
 	SigPanic *obj.LSym
 )
+
+// GCWriteBarrierReg maps from registers to gcWriteBarrier implementation LSyms.
+var GCWriteBarrierReg map[int16]*obj.LSym


@@ -575,6 +575,12 @@ func inlnode(n *Node, maxCost int32) *Node {
 	// so escape analysis can avoid more heapmoves.
 	case OCLOSURE:
 		return n
+	case OCALLMETH:
+		// Prevent inlining some reflect.Value methods when using checkptr,
+		// even when package reflect was compiled without it (#35073).
+		if s := n.Left.Sym; Debug_checkptr != 0 && s.Pkg.Path == "reflect" && (s.Name == "Value.UnsafeAddr" || s.Name == "Value.Pointer") {
+			return n
+		}
 	}

 	lno := setlineno(n)


@@ -16,6 +16,7 @@ import (
 	"cmd/compile/internal/ssa"
 	"cmd/compile/internal/types"
 	"cmd/internal/obj"
+	"cmd/internal/obj/x86"
 	"cmd/internal/objabi"
 	"cmd/internal/src"
 	"cmd/internal/sys"
@@ -104,6 +105,20 @@ func initssaconfig() {
 	writeBarrier = sysvar("writeBarrier") // struct { bool; ... }
 	zerobaseSym = sysvar("zerobase")

+	// asm funcs with special ABI
+	if thearch.LinkArch.Name == "amd64" {
+		GCWriteBarrierReg = map[int16]*obj.LSym{
+			x86.REG_AX: sysvar("gcWriteBarrier"),
+			x86.REG_CX: sysvar("gcWriteBarrierCX"),
+			x86.REG_DX: sysvar("gcWriteBarrierDX"),
+			x86.REG_BX: sysvar("gcWriteBarrierBX"),
+			x86.REG_BP: sysvar("gcWriteBarrierBP"),
+			x86.REG_SI: sysvar("gcWriteBarrierSI"),
+			x86.REG_R8: sysvar("gcWriteBarrierR8"),
+			x86.REG_R9: sysvar("gcWriteBarrierR9"),
+		}
+	}
+
 	if thearch.LinkArch.Family == sys.Wasm {
 		BoundsCheckFunc[ssa.BoundsIndex] = sysvar("goPanicIndex")
 		BoundsCheckFunc[ssa.BoundsIndexU] = sysvar("goPanicIndexU")


@@ -542,7 +542,7 @@ func methtype(t *types.Type) *types.Type {
 // Is type src assignment compatible to type dst?
 // If so, return op code to use in conversion.
 // If not, return OXXX.
-func assignop(src *types.Type, dst *types.Type, why *string) Op {
+func assignop(src, dst *types.Type, why *string) Op {
 	if why != nil {
 		*why = ""
 	}
@@ -665,7 +665,8 @@ func assignop(src, dst *types.Type, why *string) Op {
 // Can we convert a value of type src to a value of type dst?
 // If so, return op code to use in conversion (maybe OCONVNOP).
 // If not, return OXXX.
-func convertop(src *types.Type, dst *types.Type, why *string) Op {
+// srcConstant indicates whether the value of type src is a constant.
+func convertop(srcConstant bool, src, dst *types.Type, why *string) Op {
 	if why != nil {
 		*why = ""
 	}
@@ -741,6 +742,13 @@ func convertop(srcConstant bool, src, dst *types.Type, why *string) Op {
 		return OCONV
 	}

+	// Special case for constant conversions: any numeric
+	// conversion is potentially okay. We'll validate further
+	// within evconst. See #38117.
+	if srcConstant && (src.IsInteger() || src.IsFloat() || src.IsComplex()) && (dst.IsInteger() || dst.IsFloat() || dst.IsComplex()) {
+		return OCONV
+	}
+
 	// 6. src is an integer or has type []byte or []rune
 	// and dst is a string type.
 	if src.IsInteger() && dst.IsString() {
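The constant special case above corresponds to ordinary source-level conversions where the operand is an untyped constant: convertop accepts any numeric-to-numeric form and the value itself is validated afterwards (in evconst). A minimal illustration in plain Go, not compiler code:

```go
package main

import "fmt"

// A constant operand may be converted between any numeric kinds as long
// as its value is representable in the destination type; the form is
// accepted first and the value is checked later.
const big = 1 << 20

func main() {
	f := float32(big)  // integer constant -> floating-point type
	c := complex128(2) // integer constant -> complex type
	i := int64(1e6)    // floating-point constant with integral value -> integer type
	fmt.Println(f, c, i)
}
```

A conversion whose constant value does not fit (say, `int8(big)`) still fails, but now with a value error rather than being rejected by the conversion-form check.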


@@ -1634,7 +1634,7 @@ func typecheck1(n *Node, top int) (res *Node) {
 			return n
 		}
 		var why string
-		n.Op = convertop(t, n.Type, &why)
+		n.Op = convertop(n.Left.Op == OLITERAL, t, n.Type, &why)
 		if n.Op == 0 {
 			if !n.Diag() && !n.Type.Broke() && !n.Left.Diag() {
 				yyerror("cannot convert %L to type %v%s", n.Left, n.Type, why)


@@ -354,14 +354,13 @@ func isSmallMakeSlice(n *Node) bool {
 	if n.Op != OMAKESLICE {
 		return false
 	}
-	l := n.Left
 	r := n.Right
 	if r == nil {
-		r = l
+		r = n.Left
 	}
 	t := n.Type

-	return smallintconst(l) && smallintconst(r) && (t.Elem().Width == 0 || r.Int64() < maxImplicitStackVarSize/t.Elem().Width)
+	return smallintconst(r) && (t.Elem().Width == 0 || r.Int64() < maxImplicitStackVarSize/t.Elem().Width)
 }

 // walk the whole tree of the body of an
@@ -1338,6 +1337,20 @@ opswitch:
 			if i < 0 {
 				Fatalf("walkexpr: invalid index %v", r)
 			}
+
+			// cap is constrained to [0,2^31), so it's safe to do:
+			//
+			//	if uint64(len) > cap {
+			//		if len < 0 { panicmakeslicelen() }
+			//		panicmakeslicecap()
+			//	}
+			nif := nod(OIF, nod(OGT, conv(l, types.Types[TUINT64]), nodintconst(i)), nil)
+			niflen := nod(OIF, nod(OLT, l, nodintconst(0)), nil)
+			niflen.Nbody.Set1(mkcall("panicmakeslicelen", nil, init))
+			nif.Nbody.Append(niflen, mkcall("panicmakeslicecap", nil, init))
+			nif = typecheck(nif, ctxStmt)
+			init.Append(nif)
+
 			t = types.NewArray(t.Elem(), i) // [r]T
 			var_ := temp(t)
 			a := nod(OAS, var_, nil) // zero temp
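At the source level, the check this hunk emits corresponds to a make call with a constant cap and a non-constant len: when len exceeds cap at run time, the program panics (via panicmakeslicecap in this code path) instead of silently misbehaving. A minimal sketch:

```go
package main

import "fmt"

func main() {
	defer func() {
		// Expect a runtime error along the lines of "makeslice: cap out of range".
		fmt.Println("recovered:", recover())
	}()
	n := 10
	_ = make([]byte, n, 4) // variable len (10) exceeds constant cap (4): run-time panic
}
```

With constants on both sides (`make([]byte, 10, 4)`) the compiler rejects the call outright; the runtime check only matters when len is computed at run time.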


@@ -11,8 +11,8 @@ func addressingModes(f *Func) {
 	default:
 		// Most architectures can't do this.
 		return
-	case "amd64":
-		// TODO: 386, s390x?
+	case "amd64", "386":
+		// TODO: s390x?
 	}

 	var tmp []*Value
@@ -21,7 +21,17 @@ func addressingModes(f *Func) {
 			if !combineFirst[v.Op] {
 				continue
 			}
-			p := v.Args[0] // All matched operations have the pointer in arg[0].
+			// All matched operations have the pointer in arg[0].
+			// All results have the pointer in arg[0] and the index in arg[1].
+			// *Except* for operations which update a register,
+			// which are marked with resultInArg0. Those have
+			// the pointer in arg[1], and the corresponding result op
+			// has the pointer in arg[1] and the index in arg[2].
+			ptrIndex := 0
+			if opcodeTable[v.Op].resultInArg0 {
+				ptrIndex = 1
+			}
+			p := v.Args[ptrIndex]
 			c, ok := combine[[2]Op{v.Op, p.Op}]
 			if !ok {
 				continue
@@ -71,10 +81,11 @@ func addressingModes(f *Func) {
 				f.Fatalf("unknown aux combining for %s and %s\n", v.Op, p.Op)
 			}

 			// Combine the operations.
-			tmp = append(tmp[:0], v.Args[1:]...)
+			tmp = append(tmp[:0], v.Args[:ptrIndex]...)
+			tmp = append(tmp, p.Args...)
+			tmp = append(tmp, v.Args[ptrIndex+1:]...)
 			v.resetArgs()
 			v.Op = c
-			v.AddArgs(p.Args...)
 			v.AddArgs(tmp...)
 		}
 	}
@@ -97,6 +108,7 @@ func init() {
 // x.Args[0].Args + x.Args[1:]
 // Additionally, the Aux/AuxInt from x.Args[0] is merged into x.
 var combine = map[[2]Op]Op{
+	// amd64
 	[2]Op{OpAMD64MOVBload, OpAMD64ADDQ}: OpAMD64MOVBloadidx1,
 	[2]Op{OpAMD64MOVWload, OpAMD64ADDQ}: OpAMD64MOVWloadidx1,
 	[2]Op{OpAMD64MOVLload, OpAMD64ADDQ}: OpAMD64MOVLloadidx1,
@@ -150,5 +162,90 @@ var combine = map[[2]Op]Op{
 	[2]Op{OpAMD64MOVQstoreconst, OpAMD64LEAQ1}: OpAMD64MOVQstoreconstidx1,
 	[2]Op{OpAMD64MOVQstoreconst, OpAMD64LEAQ8}: OpAMD64MOVQstoreconstidx8,

-	// TODO: 386
+	[2]Op{OpAMD64CMPBload, OpAMD64ADDQ}: OpAMD64CMPBloadidx1,
+	[2]Op{OpAMD64CMPWload, OpAMD64ADDQ}: OpAMD64CMPWloadidx1,
+	[2]Op{OpAMD64CMPLload, OpAMD64ADDQ}: OpAMD64CMPLloadidx1,
+	[2]Op{OpAMD64CMPQload, OpAMD64ADDQ}: OpAMD64CMPQloadidx1,
+
+	[2]Op{OpAMD64CMPBload, OpAMD64LEAQ1}: OpAMD64CMPBloadidx1,
+	[2]Op{OpAMD64CMPWload, OpAMD64LEAQ1}: OpAMD64CMPWloadidx1,
+	[2]Op{OpAMD64CMPWload, OpAMD64LEAQ2}: OpAMD64CMPWloadidx2,
+	[2]Op{OpAMD64CMPLload, OpAMD64LEAQ1}: OpAMD64CMPLloadidx1,
+	[2]Op{OpAMD64CMPLload, OpAMD64LEAQ4}: OpAMD64CMPLloadidx4,
+	[2]Op{OpAMD64CMPQload, OpAMD64LEAQ1}: OpAMD64CMPQloadidx1,
+	[2]Op{OpAMD64CMPQload, OpAMD64LEAQ8}: OpAMD64CMPQloadidx8,
+
+	[2]Op{OpAMD64CMPBconstload, OpAMD64ADDQ}: OpAMD64CMPBconstloadidx1,
+	[2]Op{OpAMD64CMPWconstload, OpAMD64ADDQ}: OpAMD64CMPWconstloadidx1,
+	[2]Op{OpAMD64CMPLconstload, OpAMD64ADDQ}: OpAMD64CMPLconstloadidx1,
+	[2]Op{OpAMD64CMPQconstload, OpAMD64ADDQ}: OpAMD64CMPQconstloadidx1,
+
+	[2]Op{OpAMD64CMPBconstload, OpAMD64LEAQ1}: OpAMD64CMPBconstloadidx1,
+	[2]Op{OpAMD64CMPWconstload, OpAMD64LEAQ1}: OpAMD64CMPWconstloadidx1,
+	[2]Op{OpAMD64CMPWconstload, OpAMD64LEAQ2}: OpAMD64CMPWconstloadidx2,
+	[2]Op{OpAMD64CMPLconstload, OpAMD64LEAQ1}: OpAMD64CMPLconstloadidx1,
+	[2]Op{OpAMD64CMPLconstload, OpAMD64LEAQ4}: OpAMD64CMPLconstloadidx4,
+	[2]Op{OpAMD64CMPQconstload, OpAMD64LEAQ1}: OpAMD64CMPQconstloadidx1,
+	[2]Op{OpAMD64CMPQconstload, OpAMD64LEAQ8}: OpAMD64CMPQconstloadidx8,
+
+	// 386
+	[2]Op{Op386MOVBload, Op386ADDL}: Op386MOVBloadidx1,
+	[2]Op{Op386MOVWload, Op386ADDL}: Op386MOVWloadidx1,
+	[2]Op{Op386MOVLload, Op386ADDL}: Op386MOVLloadidx1,
+	[2]Op{Op386MOVSSload, Op386ADDL}: Op386MOVSSloadidx1,
+	[2]Op{Op386MOVSDload, Op386ADDL}: Op386MOVSDloadidx1,
+
+	[2]Op{Op386MOVBstore, Op386ADDL}: Op386MOVBstoreidx1,
+	[2]Op{Op386MOVWstore, Op386ADDL}: Op386MOVWstoreidx1,
+	[2]Op{Op386MOVLstore, Op386ADDL}: Op386MOVLstoreidx1,
+	[2]Op{Op386MOVSSstore, Op386ADDL}: Op386MOVSSstoreidx1,
+	[2]Op{Op386MOVSDstore, Op386ADDL}: Op386MOVSDstoreidx1,
+
+	[2]Op{Op386MOVBstoreconst, Op386ADDL}: Op386MOVBstoreconstidx1,
+	[2]Op{Op386MOVWstoreconst, Op386ADDL}: Op386MOVWstoreconstidx1,
+	[2]Op{Op386MOVLstoreconst, Op386ADDL}: Op386MOVLstoreconstidx1,
+
+	[2]Op{Op386MOVBload, Op386LEAL1}: Op386MOVBloadidx1,
+	[2]Op{Op386MOVWload, Op386LEAL1}: Op386MOVWloadidx1,
+	[2]Op{Op386MOVWload, Op386LEAL2}: Op386MOVWloadidx2,
+	[2]Op{Op386MOVLload, Op386LEAL1}: Op386MOVLloadidx1,
+	[2]Op{Op386MOVLload, Op386LEAL4}: Op386MOVLloadidx4,
+	[2]Op{Op386MOVSSload, Op386LEAL1}: Op386MOVSSloadidx1,
+	[2]Op{Op386MOVSSload, Op386LEAL4}: Op386MOVSSloadidx4,
+	[2]Op{Op386MOVSDload, Op386LEAL1}: Op386MOVSDloadidx1,
+	[2]Op{Op386MOVSDload, Op386LEAL8}: Op386MOVSDloadidx8,
+
+	[2]Op{Op386MOVBstore, Op386LEAL1}: Op386MOVBstoreidx1,
+	[2]Op{Op386MOVWstore, Op386LEAL1}: Op386MOVWstoreidx1,
+	[2]Op{Op386MOVWstore, Op386LEAL2}: Op386MOVWstoreidx2,
+	[2]Op{Op386MOVLstore, Op386LEAL1}: Op386MOVLstoreidx1,
+	[2]Op{Op386MOVLstore, Op386LEAL4}: Op386MOVLstoreidx4,
+	[2]Op{Op386MOVSSstore, Op386LEAL1}: Op386MOVSSstoreidx1,
+	[2]Op{Op386MOVSSstore, Op386LEAL4}: Op386MOVSSstoreidx4,
+	[2]Op{Op386MOVSDstore, Op386LEAL1}: Op386MOVSDstoreidx1,
+	[2]Op{Op386MOVSDstore, Op386LEAL8}: Op386MOVSDstoreidx8,
+
+	[2]Op{Op386MOVBstoreconst, Op386LEAL1}: Op386MOVBstoreconstidx1,
+	[2]Op{Op386MOVWstoreconst, Op386LEAL1}: Op386MOVWstoreconstidx1,
+	[2]Op{Op386MOVWstoreconst, Op386LEAL2}: Op386MOVWstoreconstidx2,
+	[2]Op{Op386MOVLstoreconst, Op386LEAL1}: Op386MOVLstoreconstidx1,
+	[2]Op{Op386MOVLstoreconst, Op386LEAL4}: Op386MOVLstoreconstidx4,
+
+	[2]Op{Op386ADDLload, Op386LEAL4}: Op386ADDLloadidx4,
+	[2]Op{Op386SUBLload, Op386LEAL4}: Op386SUBLloadidx4,
+	[2]Op{Op386MULLload, Op386LEAL4}: Op386MULLloadidx4,
+	[2]Op{Op386ANDLload, Op386LEAL4}: Op386ANDLloadidx4,
+	[2]Op{Op386ORLload, Op386LEAL4}: Op386ORLloadidx4,
+	[2]Op{Op386XORLload, Op386LEAL4}: Op386XORLloadidx4,
+
+	[2]Op{Op386ADDLmodify, Op386LEAL4}: Op386ADDLmodifyidx4,
+	[2]Op{Op386SUBLmodify, Op386LEAL4}: Op386SUBLmodifyidx4,
+	[2]Op{Op386ANDLmodify, Op386LEAL4}: Op386ANDLmodifyidx4,
+	[2]Op{Op386ORLmodify, Op386LEAL4}: Op386ORLmodifyidx4,
+	[2]Op{Op386XORLmodify, Op386LEAL4}: Op386XORLmodifyidx4,
+
+	[2]Op{Op386ADDLconstmodify, Op386LEAL4}: Op386ADDLconstmodifyidx4,
+	[2]Op{Op386ANDLconstmodify, Op386LEAL4}: Op386ANDLconstmodifyidx4,
+	[2]Op{Op386ORLconstmodify, Op386LEAL4}: Op386ORLconstmodifyidx4,
+	[2]Op{Op386XORLconstmodify, Op386LEAL4}: Op386XORLconstmodifyidx4,
 }


@ -588,10 +588,6 @@
(MOVWLSX x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWLSXload <v.Type> [off] {sym} ptr mem) (MOVWLSX x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWLSXload <v.Type> [off] {sym} ptr mem)
(MOVWLZX x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload <v.Type> [off] {sym} ptr mem) (MOVWLZX x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload <v.Type> [off] {sym} ptr mem)
(MOVBLZX x:(MOVBloadidx1 [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx1 <v.Type> [off] {sym} ptr idx mem)
(MOVWLZX x:(MOVWloadidx1 [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx1 <v.Type> [off] {sym} ptr idx mem)
(MOVWLZX x:(MOVWloadidx2 [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx2 <v.Type> [off] {sym} ptr idx mem)
// replace load from same location as preceding store with zero/sign extension (or copy in case of full width) // replace load from same location as preceding store with zero/sign extension (or copy in case of full width)
(MOVBload [off] {sym} ptr (MOVBstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> (MOVBLZX x) (MOVBload [off] {sym} ptr (MOVBstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> (MOVBLZX x)
(MOVWload [off] {sym} ptr (MOVWstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> (MOVWLZX x) (MOVWload [off] {sym} ptr (MOVWstore [off2] {sym2} ptr2 x _)) && sym == sym2 && off == off2 && isSamePtr(ptr, ptr2) -> (MOVWLZX x)
@ -611,34 +607,22 @@
// fold constants into memory operations // fold constants into memory operations
// Note that this is not always a good idea because if not all the uses of // Note that this is not always a good idea because if not all the uses of
// the ADDQconst get eliminated, we still have to compute the ADDQconst and we now // the ADDLconst get eliminated, we still have to compute the ADDLconst and we now
// have potentially two live values (ptr and (ADDQconst [off] ptr)) instead of one. // have potentially two live values (ptr and (ADDLconst [off] ptr)) instead of one.
// Nevertheless, let's do it! // Nevertheless, let's do it!
(MOV(L|W|B|SS|SD)load [off1] {sym} (ADDLconst [off2] ptr) mem) && is32Bit(off1+off2) -> (MOV(L|W|B|SS|SD)load [off1+off2] {sym} ptr mem) (MOV(L|W|B|SS|SD)load [off1] {sym} (ADDLconst [off2] ptr) mem) && is32Bit(off1+off2) -> (MOV(L|W|B|SS|SD)load [off1+off2] {sym} ptr mem)
(MOV(L|W|B|SS|SD)store [off1] {sym} (ADDLconst [off2] ptr) val mem) && is32Bit(off1+off2) -> (MOV(L|W|B|SS|SD)store [off1+off2] {sym} ptr val mem) (MOV(L|W|B|SS|SD)store [off1] {sym} (ADDLconst [off2] ptr) val mem) && is32Bit(off1+off2) -> (MOV(L|W|B|SS|SD)store [off1+off2] {sym} ptr val mem)
((ADD|SUB|MUL|AND|OR|XOR)Lload [off1] {sym} val (ADDLconst [off2] base) mem) && is32Bit(off1+off2) -> ((ADD|SUB|MUL|AND|OR|XOR)Lload [off1] {sym} val (ADDLconst [off2] base) mem) && is32Bit(off1+off2) ->
((ADD|SUB|MUL|AND|OR|XOR)Lload [off1+off2] {sym} val base mem) ((ADD|SUB|MUL|AND|OR|XOR)Lload [off1+off2] {sym} val base mem)
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1] {sym} val (ADDLconst [off2] base) idx mem) && is32Bit(off1+off2) ->
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1+off2] {sym} val base idx mem)
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1] {sym} val base (ADDLconst [off2] idx) mem) && is32Bit(off1+off2*4) ->
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1+off2*4] {sym} val base idx mem)
((ADD|SUB|MUL|DIV)SSload [off1] {sym} val (ADDLconst [off2] base) mem) && is32Bit(off1+off2) -> ((ADD|SUB|MUL|DIV)SSload [off1] {sym} val (ADDLconst [off2] base) mem) && is32Bit(off1+off2) ->
((ADD|SUB|MUL|DIV)SSload [off1+off2] {sym} val base mem) ((ADD|SUB|MUL|DIV)SSload [off1+off2] {sym} val base mem)
((ADD|SUB|MUL|DIV)SDload [off1] {sym} val (ADDLconst [off2] base) mem) && is32Bit(off1+off2) -> ((ADD|SUB|MUL|DIV)SDload [off1] {sym} val (ADDLconst [off2] base) mem) && is32Bit(off1+off2) ->
((ADD|SUB|MUL|DIV)SDload [off1+off2] {sym} val base mem) ((ADD|SUB|MUL|DIV)SDload [off1+off2] {sym} val base mem)
((ADD|SUB|AND|OR|XOR)Lmodify [off1] {sym} (ADDLconst [off2] base) val mem) && is32Bit(off1+off2) -> ((ADD|SUB|AND|OR|XOR)Lmodify [off1] {sym} (ADDLconst [off2] base) val mem) && is32Bit(off1+off2) ->
((ADD|SUB|AND|OR|XOR)Lmodify [off1+off2] {sym} base val mem) ((ADD|SUB|AND|OR|XOR)Lmodify [off1+off2] {sym} base val mem)
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off1] {sym} (ADDLconst [off2] base) idx val mem) && is32Bit(off1+off2) ->
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off1+off2] {sym} base idx val mem)
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off1] {sym} base (ADDLconst [off2] idx) val mem) && is32Bit(off1+off2*4) ->
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off1+off2*4] {sym} base idx val mem)
((ADD|AND|OR|XOR)Lconstmodify [valoff1] {sym} (ADDLconst [off2] base) mem) && ValAndOff(valoff1).canAdd(off2) -> ((ADD|AND|OR|XOR)Lconstmodify [valoff1] {sym} (ADDLconst [off2] base) mem) && ValAndOff(valoff1).canAdd(off2) ->
((ADD|AND|OR|XOR)Lconstmodify [ValAndOff(valoff1).add(off2)] {sym} base mem) ((ADD|AND|OR|XOR)Lconstmodify [ValAndOff(valoff1).add(off2)] {sym} base mem)
((ADD|AND|OR|XOR)Lconstmodifyidx4 [valoff1] {sym} (ADDLconst [off2] base) idx mem) && ValAndOff(valoff1).canAdd(off2) ->
((ADD|AND|OR|XOR)Lconstmodifyidx4 [ValAndOff(valoff1).add(off2)] {sym} base idx mem)
((ADD|AND|OR|XOR)Lconstmodifyidx4 [valoff1] {sym} base (ADDLconst [off2] idx) mem) && ValAndOff(valoff1).canAdd(off2*4) ->
((ADD|AND|OR|XOR)Lconstmodifyidx4 [ValAndOff(valoff1).add(off2*4)] {sym} base idx mem)
// Fold constants into stores. // Fold constants into stores.
(MOVLstore [off] {sym} ptr (MOVLconst [c]) mem) && validOff(off) -> (MOVLstore [off] {sym} ptr (MOVLconst [c]) mem) && validOff(off) ->
@ -652,7 +636,7 @@
(MOV(L|W|B)storeconst [sc] {s} (ADDLconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) -> (MOV(L|W|B)storeconst [sc] {s} (ADDLconst [off] ptr) mem) && ValAndOff(sc).canAdd(off) ->
(MOV(L|W|B)storeconst [ValAndOff(sc).add(off)] {s} ptr mem) (MOV(L|W|B)storeconst [ValAndOff(sc).add(off)] {s} ptr mem)
// We need to fold LEAQ into the MOVx ops so that the live variable analysis knows // We need to fold LEAL into the MOVx ops so that the live variable analysis knows
// what variables are being read/written by the ops.
// Note: we turn off this merging for operations on globals when building
// position-independent code (when Flag_shared is set).
@@ -672,31 +656,9 @@
&& (ptr.Op != OpSB || !config.ctxt.Flag_shared) ->
(MOV(L|W|B)storeconst [ValAndOff(sc).add(off)] {mergeSym(sym1, sym2)} ptr mem)
// generating indexed loads and stores
(MOV(B|W|L|SS|SD)load [off1] {sym1} (LEAL1 [off2] {sym2} ptr idx) mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOV(B|W|L|SS|SD)loadidx1 [off1+off2] {mergeSym(sym1,sym2)} ptr idx mem)
(MOVWload [off1] {sym1} (LEAL2 [off2] {sym2} ptr idx) mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOVWloadidx2 [off1+off2] {mergeSym(sym1,sym2)} ptr idx mem)
(MOV(L|SS)load [off1] {sym1} (LEAL4 [off2] {sym2} ptr idx) mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOV(L|SS)loadidx4 [off1+off2] {mergeSym(sym1,sym2)} ptr idx mem)
(MOVSDload [off1] {sym1} (LEAL8 [off2] {sym2} ptr idx) mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOVSDloadidx8 [off1+off2] {mergeSym(sym1,sym2)} ptr idx mem)
(MOV(B|W|L|SS|SD)store [off1] {sym1} (LEAL1 [off2] {sym2} ptr idx) val mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOV(B|W|L|SS|SD)storeidx1 [off1+off2] {mergeSym(sym1,sym2)} ptr idx val mem)
(MOVWstore [off1] {sym1} (LEAL2 [off2] {sym2} ptr idx) val mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOVWstoreidx2 [off1+off2] {mergeSym(sym1,sym2)} ptr idx val mem)
(MOV(L|SS)store [off1] {sym1} (LEAL4 [off2] {sym2} ptr idx) val mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOV(L|SS)storeidx4 [off1+off2] {mergeSym(sym1,sym2)} ptr idx val mem)
(MOVSDstore [off1] {sym1} (LEAL8 [off2] {sym2} ptr idx) val mem) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(MOVSDstoreidx8 [off1+off2] {mergeSym(sym1,sym2)} ptr idx val mem)
((ADD|SUB|MUL|AND|OR|XOR)Lload [off1] {sym1} val (LEAL [off2] {sym2} base) mem)
&& is32Bit(off1+off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|SUB|MUL|AND|OR|XOR)Lload [off1+off2] {mergeSym(sym1,sym2)} val base mem)
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1] {sym1} val (LEAL [off2] {sym2} base) idx mem)
&& is32Bit(off1+off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1+off2] {mergeSym(sym1,sym2)} val base idx mem)
((ADD|SUB|MUL|DIV)SSload [off1] {sym1} val (LEAL [off2] {sym2} base) mem)
&& is32Bit(off1+off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|SUB|MUL|DIV)SSload [off1+off2] {mergeSym(sym1,sym2)} val base mem)
@@ -706,97 +668,20 @@
((ADD|SUB|AND|OR|XOR)Lmodify [off1] {sym1} (LEAL [off2] {sym2} base) val mem)
&& is32Bit(off1+off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|SUB|AND|OR|XOR)Lmodify [off1+off2] {mergeSym(sym1,sym2)} base val mem)
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off1] {sym1} (LEAL [off2] {sym2} base) idx val mem)
&& is32Bit(off1+off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off1+off2] {mergeSym(sym1,sym2)} base idx val mem)
((ADD|AND|OR|XOR)Lconstmodify [valoff1] {sym1} (LEAL [off2] {sym2} base) mem)
&& ValAndOff(valoff1).canAdd(off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|AND|OR|XOR)Lconstmodify [ValAndOff(valoff1).add(off2)] {mergeSym(sym1,sym2)} base mem)
((ADD|AND|OR|XOR)Lconstmodifyidx4 [valoff1] {sym1} (LEAL [off2] {sym2} base) idx mem)
&& ValAndOff(valoff1).canAdd(off2) && canMergeSym(sym1, sym2) && (base.Op != OpSB || !config.ctxt.Flag_shared) ->
((ADD|AND|OR|XOR)Lconstmodifyidx4 [ValAndOff(valoff1).add(off2)] {mergeSym(sym1,sym2)} base idx mem)
(MOV(B|W|L|SS|SD)load [off] {sym} (ADDL ptr idx) mem) && ptr.Op != OpSB -> (MOV(B|W|L|SS|SD)loadidx1 [off] {sym} ptr idx mem)
(MOV(B|W|L|SS|SD)store [off] {sym} (ADDL ptr idx) val mem) && ptr.Op != OpSB -> (MOV(B|W|L|SS|SD)storeidx1 [off] {sym} ptr idx val mem)
(MOV(B|W|L)storeconst [x] {sym1} (LEAL1 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
(MOV(B|W|L)storeconstidx1 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
(MOVWstoreconst [x] {sym1} (LEAL2 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
(MOVWstoreconstidx2 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
(MOVLstoreconst [x] {sym1} (LEAL4 [off] {sym2} ptr idx) mem) && canMergeSym(sym1, sym2) ->
(MOVLstoreconstidx4 [ValAndOff(x).add(off)] {mergeSym(sym1,sym2)} ptr idx mem)
(MOV(B|W|L)storeconst [x] {sym} (ADDL ptr idx) mem) -> (MOV(B|W|L)storeconstidx1 [x] {sym} ptr idx mem)
// combine SHLL into indexed loads and stores
(MOVWloadidx1 [c] {sym} ptr (SHLLconst [1] idx) mem) -> (MOVWloadidx2 [c] {sym} ptr idx mem)
(MOVLloadidx1 [c] {sym} ptr (SHLLconst [2] idx) mem) -> (MOVLloadidx4 [c] {sym} ptr idx mem)
(MOVWstoreidx1 [c] {sym} ptr (SHLLconst [1] idx) val mem) -> (MOVWstoreidx2 [c] {sym} ptr idx val mem)
(MOVLstoreidx1 [c] {sym} ptr (SHLLconst [2] idx) val mem) -> (MOVLstoreidx4 [c] {sym} ptr idx val mem)
(MOVWstoreconstidx1 [c] {sym} ptr (SHLLconst [1] idx) mem) -> (MOVWstoreconstidx2 [c] {sym} ptr idx mem)
(MOVLstoreconstidx1 [c] {sym} ptr (SHLLconst [2] idx) mem) -> (MOVLstoreconstidx4 [c] {sym} ptr idx mem)
// combine ADDL into indexed loads and stores
(MOV(B|W|L|SS|SD)loadidx1 [c] {sym} (ADDLconst [d] ptr) idx mem) -> (MOV(B|W|L|SS|SD)loadidx1 [int64(int32(c+d))] {sym} ptr idx mem)
(MOVWloadidx2 [c] {sym} (ADDLconst [d] ptr) idx mem) -> (MOVWloadidx2 [int64(int32(c+d))] {sym} ptr idx mem)
(MOV(L|SS)loadidx4 [c] {sym} (ADDLconst [d] ptr) idx mem) -> (MOV(L|SS)loadidx4 [int64(int32(c+d))] {sym} ptr idx mem)
(MOVSDloadidx8 [c] {sym} (ADDLconst [d] ptr) idx mem) -> (MOVSDloadidx8 [int64(int32(c+d))] {sym} ptr idx mem)
(MOV(B|W|L|SS|SD)storeidx1 [c] {sym} (ADDLconst [d] ptr) idx val mem) -> (MOV(B|W|L|SS|SD)storeidx1 [int64(int32(c+d))] {sym} ptr idx val mem)
(MOVWstoreidx2 [c] {sym} (ADDLconst [d] ptr) idx val mem) -> (MOVWstoreidx2 [int64(int32(c+d))] {sym} ptr idx val mem)
(MOV(L|SS)storeidx4 [c] {sym} (ADDLconst [d] ptr) idx val mem) -> (MOV(L|SS)storeidx4 [int64(int32(c+d))] {sym} ptr idx val mem)
(MOVSDstoreidx8 [c] {sym} (ADDLconst [d] ptr) idx val mem) -> (MOVSDstoreidx8 [int64(int32(c+d))] {sym} ptr idx val mem)
(MOV(B|W|L|SS|SD)loadidx1 [c] {sym} ptr (ADDLconst [d] idx) mem) -> (MOV(B|W|L|SS|SD)loadidx1 [int64(int32(c+d))] {sym} ptr idx mem)
(MOVWloadidx2 [c] {sym} ptr (ADDLconst [d] idx) mem) -> (MOVWloadidx2 [int64(int32(c+2*d))] {sym} ptr idx mem)
(MOV(L|SS)loadidx4 [c] {sym} ptr (ADDLconst [d] idx) mem) -> (MOV(L|SS)loadidx4 [int64(int32(c+4*d))] {sym} ptr idx mem)
(MOVSDloadidx8 [c] {sym} ptr (ADDLconst [d] idx) mem) -> (MOVSDloadidx8 [int64(int32(c+8*d))] {sym} ptr idx mem)
(MOV(B|W|L|SS|SD)storeidx1 [c] {sym} ptr (ADDLconst [d] idx) val mem) -> (MOV(B|W|L|SS|SD)storeidx1 [int64(int32(c+d))] {sym} ptr idx val mem)
(MOVWstoreidx2 [c] {sym} ptr (ADDLconst [d] idx) val mem) -> (MOVWstoreidx2 [int64(int32(c+2*d))] {sym} ptr idx val mem)
(MOV(L|SS)storeidx4 [c] {sym} ptr (ADDLconst [d] idx) val mem) -> (MOV(L|SS)storeidx4 [int64(int32(c+4*d))] {sym} ptr idx val mem)
(MOVSDstoreidx8 [c] {sym} ptr (ADDLconst [d] idx) val mem) -> (MOVSDstoreidx8 [int64(int32(c+8*d))] {sym} ptr idx val mem)
// Merge load/store to op
((ADD|AND|OR|XOR|SUB|MUL)L x l:(MOVLload [off] {sym} ptr mem)) && canMergeLoadClobber(v, l, x) && clobber(l) -> ((ADD|AND|OR|XOR|SUB|MUL)Lload x [off] {sym} ptr mem)
((ADD|AND|OR|XOR|SUB|MUL)L x l:(MOVLloadidx4 [off] {sym} ptr idx mem)) && canMergeLoadClobber(v, l, x) && clobber(l) ->
((ADD|AND|OR|XOR|SUB|MUL)Lloadidx4 x [off] {sym} ptr idx mem)
((ADD|SUB|MUL|AND|OR|XOR)Lload [off1] {sym1} val (LEAL4 [off2] {sym2} ptr idx) mem)
&& is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
((ADD|SUB|MUL|AND|OR|XOR)Lloadidx4 [off1+off2] {mergeSym(sym1,sym2)} val ptr idx mem)
((ADD|SUB|MUL|DIV)SD x l:(MOVSDload [off] {sym} ptr mem)) && canMergeLoadClobber(v, l, x) && !config.use387 && clobber(l) -> ((ADD|SUB|MUL|DIV)SDload x [off] {sym} ptr mem)
((ADD|SUB|MUL|DIV)SS x l:(MOVSSload [off] {sym} ptr mem)) && canMergeLoadClobber(v, l, x) && !config.use387 && clobber(l) -> ((ADD|SUB|MUL|DIV)SSload x [off] {sym} ptr mem)
(MOVLstore {sym} [off] ptr y:((ADD|AND|OR|XOR)Lload x [off] {sym} ptr mem) mem) && y.Uses==1 && clobber(y) -> ((ADD|AND|OR|XOR)Lmodify [off] {sym} ptr x mem)
(MOVLstore {sym} [off] ptr y:((ADD|SUB|AND|OR|XOR)L l:(MOVLload [off] {sym} ptr mem) x) mem) && y.Uses==1 && l.Uses==1 && clobber(y, l) ->
((ADD|SUB|AND|OR|XOR)Lmodify [off] {sym} ptr x mem)
(MOVLstoreidx4 {sym} [off] ptr idx y:((ADD|AND|OR|XOR)Lloadidx4 x [off] {sym} ptr idx mem) mem) && y.Uses==1 && clobber(y) ->
((ADD|AND|OR|XOR)Lmodifyidx4 [off] {sym} ptr idx x mem)
(MOVLstoreidx4 {sym} [off] ptr idx y:((ADD|SUB|AND|OR|XOR)L l:(MOVLloadidx4 [off] {sym} ptr idx mem) x) mem) && y.Uses==1 && l.Uses==1 && clobber(y, l) ->
((ADD|SUB|AND|OR|XOR)Lmodifyidx4 [off] {sym} ptr idx x mem)
(MOVLstore {sym} [off] ptr y:((ADD|AND|OR|XOR)Lconst [c] l:(MOVLload [off] {sym} ptr mem)) mem)
&& y.Uses==1 && l.Uses==1 && clobber(y, l) && validValAndOff(c,off) ->
((ADD|AND|OR|XOR)Lconstmodify [makeValAndOff(c,off)] {sym} ptr mem)
(MOVLstoreidx4 {sym} [off] ptr idx y:((ADD|AND|OR|XOR)Lconst [c] l:(MOVLloadidx4 [off] {sym} ptr idx mem)) mem)
&& y.Uses==1 && l.Uses==1 && clobber(y, l) && validValAndOff(c,off) ->
((ADD|AND|OR|XOR)Lconstmodifyidx4 [makeValAndOff(c,off)] {sym} ptr idx mem)
((ADD|AND|OR|XOR)Lmodifyidx4 [off] {sym} ptr idx (MOVLconst [c]) mem) && validValAndOff(c,off) ->
((ADD|AND|OR|XOR)Lconstmodifyidx4 [makeValAndOff(c,off)] {sym} ptr idx mem)
(SUBLmodifyidx4 [off] {sym} ptr idx (MOVLconst [c]) mem) && validValAndOff(-c,off) ->
(ADDLconstmodifyidx4 [makeValAndOff(-c,off)] {sym} ptr idx mem)
(MOV(B|W|L)storeconstidx1 [x] {sym} (ADDLconst [c] ptr) idx mem) ->
(MOV(B|W|L)storeconstidx1 [ValAndOff(x).add(c)] {sym} ptr idx mem)
(MOVWstoreconstidx2 [x] {sym} (ADDLconst [c] ptr) idx mem) ->
(MOVWstoreconstidx2 [ValAndOff(x).add(c)] {sym} ptr idx mem)
(MOVLstoreconstidx4 [x] {sym} (ADDLconst [c] ptr) idx mem) ->
(MOVLstoreconstidx4 [ValAndOff(x).add(c)] {sym} ptr idx mem)
(MOV(B|W|L)storeconstidx1 [x] {sym} ptr (ADDLconst [c] idx) mem) ->
(MOV(B|W|L)storeconstidx1 [ValAndOff(x).add(c)] {sym} ptr idx mem)
(MOVWstoreconstidx2 [x] {sym} ptr (ADDLconst [c] idx) mem) ->
(MOVWstoreconstidx2 [ValAndOff(x).add(2*c)] {sym} ptr idx mem)
(MOVLstoreconstidx4 [x] {sym} ptr (ADDLconst [c] idx) mem) ->
(MOVLstoreconstidx4 [ValAndOff(x).add(4*c)] {sym} ptr idx mem)
// fold LEALs together
(LEAL [off1] {sym1} (LEAL [off2] {sym2} x)) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
@@ -826,6 +711,16 @@
(LEAL [off1] {sym1} (LEAL8 [off2] {sym2} x y)) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(LEAL8 [off1+off2] {mergeSym(sym1,sym2)} x y)
// LEAL[1248] into LEAL[1248]. Only some such merges are possible.
(LEAL1 [off1] {sym1} x (LEAL1 [off2] {sym2} y y)) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(LEAL2 [off1+off2] {mergeSym(sym1, sym2)} x y)
(LEAL1 [off1] {sym1} x (LEAL1 [off2] {sym2} x y)) && is32Bit(off1+off2) && canMergeSym(sym1, sym2) ->
(LEAL2 [off1+off2] {mergeSym(sym1, sym2)} y x)
(LEAL2 [off1] {sym} x (LEAL1 [off2] {nil} y y)) && is32Bit(off1+2*off2) ->
(LEAL4 [off1+2*off2] {sym} x y)
(LEAL4 [off1] {sym} x (LEAL1 [off2] {nil} y y)) && is32Bit(off1+4*off2) ->
(LEAL8 [off1+4*off2] {sym} x y)
// Absorb InvertFlags into branches.
(LT (InvertFlags cmp) yes no) -> (GT cmp yes no)
(GT (InvertFlags cmp) yes no) -> (LT cmp yes no)
@@ -1039,6 +934,9 @@
// TEST %reg,%reg is shorter than CMP
(CMP(L|W|B)const x [0]) -> (TEST(L|W|B) x x)
// Convert LEAL1 back to ADDL if we can
(LEAL1 [0] {nil} x y) -> (ADDL x y)
// Combining byte loads into larger (unaligned) loads.
// There are many ways these combinations could occur. This is
// designed to match the way encoding/binary.LittleEndian does it.
@@ -1052,6 +950,16 @@
&& clobber(x0, x1, s0)
-> @mergePoint(b,x0,x1) (MOVWload [i0] {s} p mem)
(ORL x0:(MOVBload [i] {s} p0 mem)
s0:(SHLLconst [8] x1:(MOVBload [i] {s} p1 mem)))
&& x0.Uses == 1
&& x1.Uses == 1
&& s0.Uses == 1
&& sequentialAddresses(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, s0)
-> @mergePoint(b,x0,x1) (MOVWload [i] {s} p0 mem)
(ORL o0:(ORL
x0:(MOVWload [i0] {s} p mem)
s0:(SHLLconst [16] x1:(MOVBload [i2] {s} p mem)))
@@ -1068,31 +976,21 @@
&& clobber(x0, x1, x2, s0, s1, o0)
-> @mergePoint(b,x0,x1,x2) (MOVLload [i0] {s} p mem)
(ORL x0:(MOVBloadidx1 [i0] {s} p idx mem)
s0:(SHLLconst [8] x1:(MOVBloadidx1 [i1] {s} p idx mem)))
&& i1==i0+1
&& x0.Uses == 1
&& x1.Uses == 1
&& s0.Uses == 1
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, s0)
-> @mergePoint(b,x0,x1) (MOVWloadidx1 <v.Type> [i0] {s} p idx mem)
(ORL o0:(ORL
x0:(MOVWloadidx1 [i0] {s} p idx mem) x0:(MOVWload [i] {s} p0 mem)
s0:(SHLLconst [16] x1:(MOVBloadidx1 [i2] {s} p idx mem))) s0:(SHLLconst [16] x1:(MOVBload [i] {s} p1 mem)))
s1:(SHLLconst [24] x2:(MOVBloadidx1 [i3] {s} p idx mem))) s1:(SHLLconst [24] x2:(MOVBload [i] {s} p2 mem)))
&& i2 == i0+2
&& i3 == i0+3
&& x0.Uses == 1
&& x1.Uses == 1
&& x2.Uses == 1
&& s0.Uses == 1
&& s1.Uses == 1
&& o0.Uses == 1
&& sequentialAddresses(p0, p1, 2)
&& sequentialAddresses(p1, p2, 1)
&& mergePoint(b,x0,x1,x2) != nil
&& clobber(x0, x1, x2, s0, s1, o0)
-> @mergePoint(b,x0,x1,x2) (MOVLloadidx1 <v.Type> [i0] {s} p idx mem) -> @mergePoint(b,x0,x1,x2) (MOVLload [i] {s} p0 mem)
// Combine constant stores into larger (unaligned) stores.
(MOVBstoreconst [c] {s} p x:(MOVBstoreconst [a] {s} p mem))
@@ -1105,6 +1003,20 @@
&& ValAndOff(a).Off() + 1 == ValAndOff(c).Off()
&& clobber(x)
-> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p mem)
(MOVBstoreconst [c] {s} p1 x:(MOVBstoreconst [a] {s} p0 mem))
&& x.Uses == 1
&& ValAndOff(a).Off() == ValAndOff(c).Off()
&& sequentialAddresses(p0, p1, 1)
&& clobber(x)
-> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p0 mem)
(MOVBstoreconst [a] {s} p0 x:(MOVBstoreconst [c] {s} p1 mem))
&& x.Uses == 1
&& ValAndOff(a).Off() == ValAndOff(c).Off()
&& sequentialAddresses(p0, p1, 1)
&& clobber(x)
-> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p0 mem)
(MOVWstoreconst [c] {s} p x:(MOVWstoreconst [a] {s} p mem))
&& x.Uses == 1
&& ValAndOff(a).Off() + 2 == ValAndOff(c).Off()
@@ -1116,22 +1028,18 @@
&& clobber(x)
-> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p mem)
(MOVBstoreconstidx1 [c] {s} p i x:(MOVBstoreconstidx1 [a] {s} p i mem)) (MOVWstoreconst [c] {s} p1 x:(MOVWstoreconst [a] {s} p0 mem))
&& x.Uses == 1
&& ValAndOff(a).Off() + 1 == ValAndOff(c).Off() && ValAndOff(a).Off() == ValAndOff(c).Off()
&& sequentialAddresses(p0, p1, 2)
&& clobber(x)
-> (MOVWstoreconstidx1 [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p i mem) -> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p0 mem)
(MOVWstoreconstidx1 [c] {s} p i x:(MOVWstoreconstidx1 [a] {s} p i mem)) (MOVWstoreconst [a] {s} p0 x:(MOVWstoreconst [c] {s} p1 mem))
&& x.Uses == 1
&& ValAndOff(a).Off() + 2 == ValAndOff(c).Off() && ValAndOff(a).Off() == ValAndOff(c).Off()
&& sequentialAddresses(p0, p1, 2)
&& clobber(x)
-> (MOVLstoreconstidx1 [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p i mem) -> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p0 mem)
(MOVWstoreconstidx2 [c] {s} p i x:(MOVWstoreconstidx2 [a] {s} p i mem))
&& x.Uses == 1
&& ValAndOff(a).Off() + 2 == ValAndOff(c).Off()
&& clobber(x)
-> (MOVLstoreconstidx1 [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p (SHLLconst <i.Type> [1] i) mem)
// Combine stores into larger (unaligned) stores.
(MOVBstore [i] {s} p (SHR(W|L)const [8] w) x:(MOVBstore [i-1] {s} p w mem))
@@ -1146,6 +1054,23 @@
&& x.Uses == 1
&& clobber(x)
-> (MOVWstore [i-1] {s} p w0 mem)
(MOVBstore [i] {s} p1 (SHR(W|L)const [8] w) x:(MOVBstore [i] {s} p0 w mem))
&& x.Uses == 1
&& sequentialAddresses(p0, p1, 1)
&& clobber(x)
-> (MOVWstore [i] {s} p0 w mem)
(MOVBstore [i] {s} p0 w x:(MOVBstore {s} [i] p1 (SHR(W|L)const [8] w) mem))
&& x.Uses == 1
&& sequentialAddresses(p0, p1, 1)
&& clobber(x)
-> (MOVWstore [i] {s} p0 w mem)
(MOVBstore [i] {s} p1 (SHRLconst [j] w) x:(MOVBstore [i] {s} p0 w0:(SHRLconst [j-8] w) mem))
&& x.Uses == 1
&& sequentialAddresses(p0, p1, 1)
&& clobber(x)
-> (MOVWstore [i] {s} p0 w0 mem)
(MOVWstore [i] {s} p (SHRLconst [16] w) x:(MOVWstore [i-2] {s} p w mem))
&& x.Uses == 1
&& clobber(x)
@@ -1155,35 +1080,16 @@
&& clobber(x)
-> (MOVLstore [i-2] {s} p w0 mem)
(MOVBstoreidx1 [i] {s} p idx (SHR(L|W)const [8] w) x:(MOVBstoreidx1 [i-1] {s} p idx w mem)) (MOVWstore [i] {s} p1 (SHRLconst [16] w) x:(MOVWstore [i] {s} p0 w mem))
&& x.Uses == 1
&& sequentialAddresses(p0, p1, 2)
&& clobber(x)
-> (MOVWstoreidx1 [i-1] {s} p idx w mem) -> (MOVLstore [i] {s} p0 w mem)
(MOVBstoreidx1 [i] {s} p idx w x:(MOVBstoreidx1 [i+1] {s} p idx (SHR(L|W)const [8] w) mem)) (MOVWstore [i] {s} p1 (SHRLconst [j] w) x:(MOVWstore [i] {s} p0 w0:(SHRLconst [j-16] w) mem))
&& x.Uses == 1
&& sequentialAddresses(p0, p1, 2)
&& clobber(x)
-> (MOVWstoreidx1 [i] {s} p idx w mem) -> (MOVLstore [i] {s} p0 w0 mem)
(MOVBstoreidx1 [i] {s} p idx (SHRLconst [j] w) x:(MOVBstoreidx1 [i-1] {s} p idx w0:(SHRLconst [j-8] w) mem))
&& x.Uses == 1
&& clobber(x)
-> (MOVWstoreidx1 [i-1] {s} p idx w0 mem)
(MOVWstoreidx1 [i] {s} p idx (SHRLconst [16] w) x:(MOVWstoreidx1 [i-2] {s} p idx w mem))
&& x.Uses == 1
&& clobber(x)
-> (MOVLstoreidx1 [i-2] {s} p idx w mem)
(MOVWstoreidx1 [i] {s} p idx (SHRLconst [j] w) x:(MOVWstoreidx1 [i-2] {s} p idx w0:(SHRLconst [j-16] w) mem))
&& x.Uses == 1
&& clobber(x)
-> (MOVLstoreidx1 [i-2] {s} p idx w0 mem)
(MOVWstoreidx2 [i] {s} p idx (SHRLconst [16] w) x:(MOVWstoreidx2 [i-2] {s} p idx w mem))
&& x.Uses == 1
&& clobber(x)
-> (MOVLstoreidx1 [i-2] {s} p (SHLLconst <idx.Type> [1] idx) w mem)
(MOVWstoreidx2 [i] {s} p idx (SHRLconst [j] w) x:(MOVWstoreidx2 [i-2] {s} p idx w0:(SHRLconst [j-16] w) mem))
&& x.Uses == 1
&& clobber(x)
-> (MOVLstoreidx1 [i-2] {s} p (SHLLconst <idx.Type> [1] idx) w0 mem)
// For PIC, break floating-point constant loading into two instructions so we have
// a register to use for holding the address of the constant pool entry.


@@ -1491,65 +1491,70 @@
// Little-endian loads
(ORL x0:(MOVBload [i0] {s} p0 mem) (OR(L|Q) x0:(MOVBload [i0] {s} p mem)
sh:(SHLLconst [8] x1:(MOVBload [i1] {s} p1 mem))) sh:(SHL(L|Q)const [8] x1:(MOVBload [i1] {s} p mem)))
&& i1 == i0+1
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (MOVWload [i0] {s} p0 mem) -> @mergePoint(b,x0,x1) (MOVWload [i0] {s} p mem)
(ORQ x0:(MOVBload [i0] {s} p0 mem) (OR(L|Q) x0:(MOVBload [i] {s} p0 mem)
sh:(SHLQconst [8] x1:(MOVBload [i1] {s} p1 mem))) sh:(SHL(L|Q)const [8] x1:(MOVBload [i] {s} p1 mem)))
&& i1 == i0+1
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1) && sequentialAddresses(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (MOVWload [i0] {s} p0 mem) -> @mergePoint(b,x0,x1) (MOVWload [i] {s} p0 mem)
(ORL x0:(MOVWload [i0] {s} p0 mem) (OR(L|Q) x0:(MOVWload [i0] {s} p mem)
sh:(SHLLconst [16] x1:(MOVWload [i1] {s} p1 mem))) sh:(SHL(L|Q)const [16] x1:(MOVWload [i1] {s} p mem)))
&& i1 == i0+2
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (MOVLload [i0] {s} p0 mem) -> @mergePoint(b,x0,x1) (MOVLload [i0] {s} p mem)
(ORQ x0:(MOVWload [i0] {s} p0 mem) (OR(L|Q) x0:(MOVWload [i] {s} p0 mem)
sh:(SHLQconst [16] x1:(MOVWload [i1] {s} p1 mem))) sh:(SHL(L|Q)const [16] x1:(MOVWload [i] {s} p1 mem)))
&& i1 == i0+2
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1) && sequentialAddresses(p0, p1, 2)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (MOVLload [i0] {s} p0 mem) -> @mergePoint(b,x0,x1) (MOVLload [i] {s} p0 mem)
(ORQ x0:(MOVLload [i0] {s} p0 mem) (ORQ x0:(MOVLload [i0] {s} p mem)
sh:(SHLQconst [32] x1:(MOVLload [i1] {s} p1 mem))) sh:(SHLQconst [32] x1:(MOVLload [i1] {s} p mem)))
&& i1 == i0+4
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (MOVQload [i0] {s} p0 mem) -> @mergePoint(b,x0,x1) (MOVQload [i0] {s} p mem)
(ORL (ORQ x0:(MOVLload [i] {s} p0 mem)
s1:(SHLLconst [j1] x1:(MOVBload [i1] {s} p0 mem)) sh:(SHLQconst [32] x1:(MOVLload [i] {s} p1 mem)))
or:(ORL && x0.Uses == 1
s0:(SHLLconst [j0] x0:(MOVBload [i0] {s} p1 mem)) && x1.Uses == 1
&& sh.Uses == 1
&& sequentialAddresses(p0, p1, 4)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (MOVQload [i] {s} p0 mem)
(OR(L|Q)
s1:(SHL(L|Q)const [j1] x1:(MOVBload [i1] {s} p mem))
or:(OR(L|Q)
s0:(SHL(L|Q)const [j0] x0:(MOVBload [i0] {s} p mem))
y))
&& i1 == i0+1
&& j1 == j0+8
@@ -1559,17 +1564,15 @@
&& s0.Uses == 1
&& s1.Uses == 1
&& or.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1,y) != nil
&& clobber(x0, x1, s0, s1, or)
-> @mergePoint(b,x0,x1,y) (ORL <v.Type> (SHLLconst <v.Type> [j0] (MOVWload [i0] {s} p0 mem)) y) -> @mergePoint(b,x0,x1,y) (OR(L|Q) <v.Type> (SHL(L|Q)const <v.Type> [j0] (MOVWload [i0] {s} p mem)) y)
(ORQ (OR(L|Q)
s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p0 mem)) s1:(SHL(L|Q)const [j1] x1:(MOVBload [i] {s} p1 mem))
or:(ORQ or:(OR(L|Q)
s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p1 mem)) s0:(SHL(L|Q)const [j0] x0:(MOVBload [i] {s} p0 mem))
y))
&& i1 == i0+1
&& j1 == j0+8
&& j0 % 16 == 0
&& x0.Uses == 1
@@ -1577,15 +1580,15 @@
&& s0.Uses == 1
&& s1.Uses == 1
&& or.Uses == 1
&& same(p0, p1, 1) && sequentialAddresses(p0, p1, 1)
&& mergePoint(b,x0,x1,y) != nil
&& clobber(x0, x1, s0, s1, or)
-> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVWload [i0] {s} p0 mem)) y) -> @mergePoint(b,x0,x1,y) (OR(L|Q) <v.Type> (SHL(L|Q)const <v.Type> [j0] (MOVWload [i] {s} p0 mem)) y)
(ORQ
s1:(SHLQconst [j1] x1:(MOVWload [i1] {s} p0 mem)) s1:(SHLQconst [j1] x1:(MOVWload [i1] {s} p mem))
or:(ORQ
s0:(SHLQconst [j0] x0:(MOVWload [i0] {s} p1 mem)) s0:(SHLQconst [j0] x0:(MOVWload [i0] {s} p mem))
y))
&& i1 == i0+2
&& j1 == j0+16
@@ -1595,105 +1598,107 @@
&& s0.Uses == 1
&& s1.Uses == 1
&& or.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1,y) != nil
&& clobber(x0, x1, s0, s1, or)
-> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVLload [i0] {s} p0 mem)) y) -> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVLload [i0] {s} p mem)) y)
// Little-endian indexed loads

// Move constants offsets from LEAQx up into load. This lets the above combining
// rules discover indexed load-combining instances.
(MOV(B|W|L|Q)load [i0] {s0} l:(LEAQ1 [i1] {s1} x y) mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)load [i0+i1] {s0} (LEAQ1 <l.Type> [0] {s1} x y) mem)
(MOV(B|W|L|Q)load [i0] {s0} l:(LEAQ2 [i1] {s1} x y) mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)load [i0+i1] {s0} (LEAQ2 <l.Type> [0] {s1} x y) mem)
(MOV(B|W|L|Q)load [i0] {s0} l:(LEAQ4 [i1] {s1} x y) mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)load [i0+i1] {s0} (LEAQ4 <l.Type> [0] {s1} x y) mem)
(MOV(B|W|L|Q)load [i0] {s0} l:(LEAQ8 [i1] {s1} x y) mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)load [i0+i1] {s0} (LEAQ8 <l.Type> [0] {s1} x y) mem)
(MOV(B|W|L|Q)store [i0] {s0} l:(LEAQ1 [i1] {s1} x y) val mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)store [i0+i1] {s0} (LEAQ1 <l.Type> [0] {s1} x y) val mem)
(MOV(B|W|L|Q)store [i0] {s0} l:(LEAQ2 [i1] {s1} x y) val mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)store [i0+i1] {s0} (LEAQ2 <l.Type> [0] {s1} x y) val mem)
(MOV(B|W|L|Q)store [i0] {s0} l:(LEAQ4 [i1] {s1} x y) val mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)store [i0+i1] {s0} (LEAQ4 <l.Type> [0] {s1} x y) val mem)
(MOV(B|W|L|Q)store [i0] {s0} l:(LEAQ8 [i1] {s1} x y) val mem) && i1 != 0 && is32Bit(i0+i1)
-> (MOV(B|W|L|Q)store [i0+i1] {s0} (LEAQ8 <l.Type> [0] {s1} x y) val mem)
(ORQ
s1:(SHLQconst [j1] x1:(MOVWload [i] {s} p1 mem))
or:(ORQ
s0:(SHLQconst [j0] x0:(MOVWload [i] {s} p0 mem))
y))
&& j1 == j0+16
&& j0 % 32 == 0
&& x0.Uses == 1
&& x1.Uses == 1
&& s0.Uses == 1
&& s1.Uses == 1
&& or.Uses == 1
&& sequentialAddresses(p0, p1, 2)
&& mergePoint(b,x0,x1,y) != nil
&& clobber(x0, x1, s0, s1, or)
-> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVLload [i] {s} p0 mem)) y)
// Big-endian loads
(ORL (OR(L|Q)
x1:(MOVBload [i1] {s} p0 mem) x1:(MOVBload [i1] {s} p mem)
sh:(SHLLconst [8] x0:(MOVBload [i0] {s} p1 mem))) sh:(SHL(L|Q)const [8] x0:(MOVBload [i0] {s} p mem)))
&& i1 == i0+1
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWload [i0] {s} p0 mem)) -> @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWload [i0] {s} p mem))
(ORQ (OR(L|Q)
x1:(MOVBload [i1] {s} p0 mem) x1:(MOVBload [i] {s} p1 mem)
sh:(SHLQconst [8] x0:(MOVBload [i0] {s} p1 mem))) sh:(SHL(L|Q)const [8] x0:(MOVBload [i] {s} p0 mem)))
&& i1 == i0+1
&& x0.Uses == 1
&& x1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1) && sequentialAddresses(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, sh)
-> @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWload [i0] {s} p0 mem)) -> @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWload [i] {s} p0 mem))
(ORL (OR(L|Q)
r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p0 mem)) r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p mem))
sh:(SHLLconst [16] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p1 mem)))) sh:(SHL(L|Q)const [16] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem))))
&& i1 == i0+2
&& x0.Uses == 1
&& x1.Uses == 1
&& r0.Uses == 1
&& r1.Uses == 1
&& sh.Uses == 1
&& same(p0, p1, 1)
&& mergePoint(b,x0,x1) != nil && mergePoint(b,x0,x1) != nil
&& clobber(x0, x1, r0, r1, sh) && clobber(x0, x1, r0, r1, sh)
-> @mergePoint(b,x0,x1) (BSWAPL <v.Type> (MOVLload [i0] {s} p0 mem)) -> @mergePoint(b,x0,x1) (BSWAPL <v.Type> (MOVLload [i0] {s} p mem))
-(ORQ
-    r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p0 mem))
-    sh:(SHLQconst [16] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p1 mem))))
-  && i1 == i0+2
+(OR(L|Q)
+    r1:(ROLWconst [8] x1:(MOVWload [i] {s} p1 mem))
+    sh:(SHL(L|Q)const [16] r0:(ROLWconst [8] x0:(MOVWload [i] {s} p0 mem))))
   && x0.Uses == 1
   && x1.Uses == 1
   && r0.Uses == 1
   && r1.Uses == 1
   && sh.Uses == 1
-  && same(p0, p1, 1)
+  && sequentialAddresses(p0, p1, 2)
   && mergePoint(b,x0,x1) != nil
   && clobber(x0, x1, r0, r1, sh)
-  -> @mergePoint(b,x0,x1) (BSWAPL <v.Type> (MOVLload [i0] {s} p0 mem))
+  -> @mergePoint(b,x0,x1) (BSWAPL <v.Type> (MOVLload [i] {s} p0 mem))
 (ORQ
-    r1:(BSWAPL x1:(MOVLload [i1] {s} p0 mem))
-    sh:(SHLQconst [32] r0:(BSWAPL x0:(MOVLload [i0] {s} p1 mem))))
+    r1:(BSWAPL x1:(MOVLload [i1] {s} p mem))
+    sh:(SHLQconst [32] r0:(BSWAPL x0:(MOVLload [i0] {s} p mem))))
   && i1 == i0+4
   && x0.Uses == 1
   && x1.Uses == 1
   && r0.Uses == 1
   && r1.Uses == 1
   && sh.Uses == 1
-  && same(p0, p1, 1)
   && mergePoint(b,x0,x1) != nil
   && clobber(x0, x1, r0, r1, sh)
-  -> @mergePoint(b,x0,x1) (BSWAPQ <v.Type> (MOVQload [i0] {s} p0 mem))
+  -> @mergePoint(b,x0,x1) (BSWAPQ <v.Type> (MOVQload [i0] {s} p mem))
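These BSWAP rules collapse a full big-endian load assembled from shifted byte loads into one wide load plus a single byte-swap instruction. A hedged sketch of the Go-level pattern (this shift-and-OR chain is essentially what `encoding/binary`'s `BigEndian.Uint64` does; the function name here is illustrative):

```go
package main

import "fmt"

// load64be reads 8 bytes as a big-endian uint64. The shift-and-OR
// chain below is the kind of pattern the BSWAPQ rule above collapses
// into a single 64-bit load plus one BSWAPQ instruction on AMD64.
func load64be(b []byte) uint64 {
	_ = b[7] // single bounds check up front
	return uint64(b[7]) | uint64(b[6])<<8 | uint64(b[5])<<16 | uint64(b[4])<<24 |
		uint64(b[3])<<32 | uint64(b[2])<<40 | uint64(b[1])<<48 | uint64(b[0])<<56
}

func main() {
	fmt.Printf("%#016x\n", load64be([]byte{1, 2, 3, 4, 5, 6, 7, 8}))
}
```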
-(ORL
-    s0:(SHLLconst [j0] x0:(MOVBload [i0] {s} p0 mem))
-    or:(ORL
-        s1:(SHLLconst [j1] x1:(MOVBload [i1] {s} p1 mem))
+(ORQ
+    r1:(BSWAPL x1:(MOVLload [i] {s} p1 mem))
+    sh:(SHLQconst [32] r0:(BSWAPL x0:(MOVLload [i] {s} p0 mem))))
+  && x0.Uses == 1
+  && x1.Uses == 1
+  && r0.Uses == 1
+  && r1.Uses == 1
+  && sh.Uses == 1
+  && sequentialAddresses(p0, p1, 4)
+  && mergePoint(b,x0,x1) != nil
+  && clobber(x0, x1, r0, r1, sh)
+  -> @mergePoint(b,x0,x1) (BSWAPQ <v.Type> (MOVQload [i] {s} p0 mem))
+(OR(L|Q)
+    s0:(SHL(L|Q)const [j0] x0:(MOVBload [i0] {s} p mem))
+    or:(OR(L|Q)
+        s1:(SHL(L|Q)const [j1] x1:(MOVBload [i1] {s} p mem))
         y))
   && i1 == i0+1
   && j1 == j0-8
@ -1703,17 +1708,15 @@
   && s0.Uses == 1
   && s1.Uses == 1
   && or.Uses == 1
-  && same(p0, p1, 1)
   && mergePoint(b,x0,x1,y) != nil
   && clobber(x0, x1, s0, s1, or)
-  -> @mergePoint(b,x0,x1,y) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p0 mem))) y)
+  -> @mergePoint(b,x0,x1,y) (OR(L|Q) <v.Type> (SHL(L|Q)const <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
-(ORQ
-    s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p0 mem))
-    or:(ORQ
-        s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p1 mem))
-        y))
-  && i1 == i0+1
+(OR(L|Q)
+    s0:(SHL(L|Q)const [j0] x0:(MOVBload [i] {s} p0 mem))
+    or:(OR(L|Q)
+        s1:(SHL(L|Q)const [j1] x1:(MOVBload [i] {s} p1 mem))
+        y))
   && j1 == j0-8
   && j1 % 16 == 0
   && x0.Uses == 1
@ -1721,15 +1724,15 @@
   && s0.Uses == 1
   && s1.Uses == 1
   && or.Uses == 1
-  && same(p0, p1, 1)
+  && sequentialAddresses(p0, p1, 1)
   && mergePoint(b,x0,x1,y) != nil
   && clobber(x0, x1, s0, s1, or)
-  -> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p0 mem))) y)
+  -> @mergePoint(b,x0,x1,y) (OR(L|Q) <v.Type> (SHL(L|Q)const <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i] {s} p0 mem))) y)
 (ORQ
-    s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p0 mem)))
+    s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem)))
     or:(ORQ
-        s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p1 mem)))
+        s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p mem)))
         y))
   && i1 == i0+2
   && j1 == j0-16
@ -1741,41 +1744,73 @@
   && s0.Uses == 1
   && s1.Uses == 1
   && or.Uses == 1
-  && same(p0, p1, 1)
   && mergePoint(b,x0,x1,y) != nil
   && clobber(x0, x1, r0, r1, s0, s1, or)
-  -> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p0 mem))) y)
+  -> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
+(ORQ
+    s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i] {s} p0 mem)))
+    or:(ORQ
+        s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i] {s} p1 mem)))
+        y))
+  && j1 == j0-16
+  && j1 % 32 == 0
+  && x0.Uses == 1
+  && x1.Uses == 1
+  && r0.Uses == 1
+  && r1.Uses == 1
+  && s0.Uses == 1
+  && s1.Uses == 1
+  && or.Uses == 1
+  && sequentialAddresses(p0, p1, 2)
+  && mergePoint(b,x0,x1,y) != nil
+  && clobber(x0, x1, r0, r1, s0, s1, or)
+  -> @mergePoint(b,x0,x1,y) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i] {s} p0 mem))) y)
 // Combine 2 byte stores + shift into rolw 8 + word store
-(MOVBstore [i] {s} p1 w
-  x0:(MOVBstore [i-1] {s} p0 (SHRWconst [8] w) mem))
+(MOVBstore [i] {s} p w
+  x0:(MOVBstore [i-1] {s} p (SHRWconst [8] w) mem))
   && x0.Uses == 1
-  && same(p0, p1, 1)
   && clobber(x0)
-  -> (MOVWstore [i-1] {s} p0 (ROLWconst <w.Type> [8] w) mem)
+  -> (MOVWstore [i-1] {s} p (ROLWconst <w.Type> [8] w) mem)
+(MOVBstore [i] {s} p1 w
+  x0:(MOVBstore [i] {s} p0 (SHRWconst [8] w) mem))
+  && x0.Uses == 1
+  && sequentialAddresses(p0, p1, 1)
+  && clobber(x0)
+  -> (MOVWstore [i] {s} p0 (ROLWconst <w.Type> [8] w) mem)
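The store-side counterpart of the load rules: two adjacent byte stores of a shifted value become a `ROLW $8` plus one word store. A sketch of the Go pattern that matches (function name is illustrative):

```go
package main

import "fmt"

// put16be stores v in big-endian byte order. The two byte stores plus
// shift match the rule above, which combines them into ROLW $8 and a
// single MOVW store on AMD64.
func put16be(b []byte, v uint16) {
	b[0] = byte(v >> 8)
	b[1] = byte(v)
}

func main() {
	buf := make([]byte, 2)
	put16be(buf, 0x1234)
	fmt.Println(buf) // [18 52]
}
```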
 // Combine stores + shifts into bswap and larger (unaligned) stores
-(MOVBstore [i] {s} p3 w
-  x2:(MOVBstore [i-1] {s} p2 (SHRLconst [8] w)
-  x1:(MOVBstore [i-2] {s} p1 (SHRLconst [16] w)
-  x0:(MOVBstore [i-3] {s} p0 (SHRLconst [24] w) mem))))
+(MOVBstore [i] {s} p w
+  x2:(MOVBstore [i-1] {s} p (SHRLconst [8] w)
+  x1:(MOVBstore [i-2] {s} p (SHRLconst [16] w)
+  x0:(MOVBstore [i-3] {s} p (SHRLconst [24] w) mem))))
   && x0.Uses == 1
   && x1.Uses == 1
   && x2.Uses == 1
-  && same(p0, p1, 1)
-  && same(p1, p2, 1)
-  && same(p2, p3, 1)
   && clobber(x0, x1, x2)
-  -> (MOVLstore [i-3] {s} p0 (BSWAPL <w.Type> w) mem)
+  -> (MOVLstore [i-3] {s} p (BSWAPL <w.Type> w) mem)
+(MOVBstore [i] {s} p3 w
+  x2:(MOVBstore [i] {s} p2 (SHRLconst [8] w)
+  x1:(MOVBstore [i] {s} p1 (SHRLconst [16] w)
+  x0:(MOVBstore [i] {s} p0 (SHRLconst [24] w) mem))))
+  && x0.Uses == 1
+  && x1.Uses == 1
+  && x2.Uses == 1
+  && sequentialAddresses(p0, p1, 1)
+  && sequentialAddresses(p1, p2, 1)
+  && sequentialAddresses(p2, p3, 1)
+  && clobber(x0, x1, x2)
+  -> (MOVLstore [i] {s} p0 (BSWAPL <w.Type> w) mem)
-(MOVBstore [i] {s} p7 w
-  x6:(MOVBstore [i-1] {s} p6 (SHRQconst [8] w)
-  x5:(MOVBstore [i-2] {s} p5 (SHRQconst [16] w)
-  x4:(MOVBstore [i-3] {s} p4 (SHRQconst [24] w)
-  x3:(MOVBstore [i-4] {s} p3 (SHRQconst [32] w)
-  x2:(MOVBstore [i-5] {s} p2 (SHRQconst [40] w)
-  x1:(MOVBstore [i-6] {s} p1 (SHRQconst [48] w)
-  x0:(MOVBstore [i-7] {s} p0 (SHRQconst [56] w) mem))))))))
+(MOVBstore [i] {s} p w
+  x6:(MOVBstore [i-1] {s} p (SHRQconst [8] w)
+  x5:(MOVBstore [i-2] {s} p (SHRQconst [16] w)
+  x4:(MOVBstore [i-3] {s} p (SHRQconst [24] w)
+  x3:(MOVBstore [i-4] {s} p (SHRQconst [32] w)
+  x2:(MOVBstore [i-5] {s} p (SHRQconst [40] w)
+  x1:(MOVBstore [i-6] {s} p (SHRQconst [48] w)
+  x0:(MOVBstore [i-7] {s} p (SHRQconst [56] w) mem))))))))
   && x0.Uses == 1
   && x1.Uses == 1
   && x2.Uses == 1
@ -1783,99 +1818,139 @@
   && x4.Uses == 1
   && x5.Uses == 1
   && x6.Uses == 1
-  && same(p0, p1, 1)
-  && same(p1, p2, 1)
-  && same(p2, p3, 1)
-  && same(p3, p4, 1)
-  && same(p4, p5, 1)
-  && same(p5, p6, 1)
-  && same(p6, p7, 1)
   && clobber(x0, x1, x2, x3, x4, x5, x6)
-  -> (MOVQstore [i-7] {s} p0 (BSWAPQ <w.Type> w) mem)
+  -> (MOVQstore [i-7] {s} p (BSWAPQ <w.Type> w) mem)
+(MOVBstore [i] {s} p7 w
+  x6:(MOVBstore [i] {s} p6 (SHRQconst [8] w)
+  x5:(MOVBstore [i] {s} p5 (SHRQconst [16] w)
+  x4:(MOVBstore [i] {s} p4 (SHRQconst [24] w)
+  x3:(MOVBstore [i] {s} p3 (SHRQconst [32] w)
+  x2:(MOVBstore [i] {s} p2 (SHRQconst [40] w)
+  x1:(MOVBstore [i] {s} p1 (SHRQconst [48] w)
+  x0:(MOVBstore [i] {s} p0 (SHRQconst [56] w) mem))))))))
+  && x0.Uses == 1
+  && x1.Uses == 1
+  && x2.Uses == 1
+  && x3.Uses == 1
+  && x4.Uses == 1
+  && x5.Uses == 1
+  && x6.Uses == 1
+  && sequentialAddresses(p0, p1, 1)
+  && sequentialAddresses(p1, p2, 1)
+  && sequentialAddresses(p2, p3, 1)
+  && sequentialAddresses(p3, p4, 1)
+  && sequentialAddresses(p4, p5, 1)
+  && sequentialAddresses(p5, p6, 1)
+  && sequentialAddresses(p6, p7, 1)
+  && clobber(x0, x1, x2, x3, x4, x5, x6)
+  -> (MOVQstore [i] {s} p0 (BSWAPQ <w.Type> w) mem)
 // Combine constant stores into larger (unaligned) stores.
-(MOVBstoreconst [c] {s} p1 x:(MOVBstoreconst [a] {s} p0 mem))
+(MOVBstoreconst [c] {s} p x:(MOVBstoreconst [a] {s} p mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(a).Off() + 1 == ValAndOff(c).Off()
   && clobber(x)
-  -> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p0 mem)
+  -> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p mem)
-(MOVBstoreconst [a] {s} p1 x:(MOVBstoreconst [c] {s} p0 mem))
+(MOVBstoreconst [a] {s} p x:(MOVBstoreconst [c] {s} p mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(a).Off() + 1 == ValAndOff(c).Off()
   && clobber(x)
-  -> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p0 mem)
+  -> (MOVWstoreconst [makeValAndOff(ValAndOff(a).Val()&0xff | ValAndOff(c).Val()<<8, ValAndOff(a).Off())] {s} p mem)
-(MOVWstoreconst [c] {s} p1 x:(MOVWstoreconst [a] {s} p0 mem))
+(MOVWstoreconst [c] {s} p x:(MOVWstoreconst [a] {s} p mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(a).Off() + 2 == ValAndOff(c).Off()
   && clobber(x)
-  -> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p0 mem)
+  -> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p mem)
-(MOVWstoreconst [a] {s} p1 x:(MOVWstoreconst [c] {s} p0 mem))
+(MOVWstoreconst [a] {s} p x:(MOVWstoreconst [c] {s} p mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(a).Off() + 2 == ValAndOff(c).Off()
   && clobber(x)
-  -> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p0 mem)
+  -> (MOVLstoreconst [makeValAndOff(ValAndOff(a).Val()&0xffff | ValAndOff(c).Val()<<16, ValAndOff(a).Off())] {s} p mem)
-(MOVLstoreconst [c] {s} p1 x:(MOVLstoreconst [a] {s} p0 mem))
+(MOVLstoreconst [c] {s} p x:(MOVLstoreconst [a] {s} p mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(a).Off() + 4 == ValAndOff(c).Off()
   && clobber(x)
-  -> (MOVQstore [ValAndOff(a).Off()] {s} p0 (MOVQconst [ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32]) mem)
+  -> (MOVQstore [ValAndOff(a).Off()] {s} p (MOVQconst [ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32]) mem)
-(MOVLstoreconst [a] {s} p1 x:(MOVLstoreconst [c] {s} p0 mem))
+(MOVLstoreconst [a] {s} p x:(MOVLstoreconst [c] {s} p mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(a).Off() + 4 == ValAndOff(c).Off()
   && clobber(x)
-  -> (MOVQstore [ValAndOff(a).Off()] {s} p0 (MOVQconst [ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32]) mem)
+  -> (MOVQstore [ValAndOff(a).Off()] {s} p (MOVQconst [ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32]) mem)
-(MOVQstoreconst [c] {s} p1 x:(MOVQstoreconst [c2] {s} p0 mem))
+(MOVQstoreconst [c] {s} p x:(MOVQstoreconst [c2] {s} p mem))
   && config.useSSE
   && x.Uses == 1
-  && same(p0, p1, 1)
   && ValAndOff(c2).Off() + 8 == ValAndOff(c).Off()
   && ValAndOff(c).Val() == 0
   && ValAndOff(c2).Val() == 0
   && clobber(x)
-  -> (MOVOstore [ValAndOff(c2).Off()] {s} p0 (MOVOconst [0]) mem)
+  -> (MOVOstore [ValAndOff(c2).Off()] {s} p (MOVOconst [0]) mem)
-// Combine stores into larger (unaligned) stores.
+// Combine stores into larger (unaligned) stores. Little endian.
-(MOVBstore [i] {s} p1 (SHR(W|L|Q)const [8] w) x:(MOVBstore [i-1] {s} p0 w mem))
+(MOVBstore [i] {s} p (SHR(W|L|Q)const [8] w) x:(MOVBstore [i-1] {s} p w mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && clobber(x)
-  -> (MOVWstore [i-1] {s} p0 w mem)
+  -> (MOVWstore [i-1] {s} p w mem)
-(MOVBstore [i] {s} p1 w x:(MOVBstore [i+1] {s} p0 (SHR(W|L|Q)const [8] w) mem))
+(MOVBstore [i] {s} p w x:(MOVBstore [i+1] {s} p (SHR(W|L|Q)const [8] w) mem))
   && x.Uses == 1
-  && same(p0, p1, 1)
   && clobber(x)
-  -> (MOVWstore [i] {s} p0 w mem)
+  -> (MOVWstore [i] {s} p w mem)
+(MOVBstore [i] {s} p (SHR(L|Q)const [j] w) x:(MOVBstore [i-1] {s} p w0:(SHR(L|Q)const [j-8] w) mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> (MOVWstore [i-1] {s} p w0 mem)
+(MOVBstore [i] {s} p1 (SHR(W|L|Q)const [8] w) x:(MOVBstore [i] {s} p0 w mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 1)
+  && clobber(x)
+  -> (MOVWstore [i] {s} p0 w mem)
-(MOVBstore [i] {s} p1 (SHR(L|Q)const [j] w) x:(MOVBstore [i-1] {s} p0 w0:(SHR(L|Q)const [j-8] w) mem))
-  && x.Uses == 1
-  && same(p0, p1, 1)
-  && clobber(x)
-  -> (MOVWstore [i-1] {s} p0 w0 mem)
+(MOVBstore [i] {s} p0 w x:(MOVBstore [i] {s} p1 (SHR(W|L|Q)const [8] w) mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 1)
+  && clobber(x)
+  -> (MOVWstore [i] {s} p0 w mem)
-(MOVWstore [i] {s} p1 (SHR(L|Q)const [16] w) x:(MOVWstore [i-2] {s} p0 w mem))
-  && x.Uses == 1
-  && same(p0, p1, 1)
-  && clobber(x)
-  -> (MOVLstore [i-2] {s} p0 w mem)
+(MOVBstore [i] {s} p1 (SHR(L|Q)const [j] w) x:(MOVBstore [i] {s} p0 w0:(SHR(L|Q)const [j-8] w) mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 1)
+  && clobber(x)
+  -> (MOVWstore [i] {s} p0 w0 mem)
-(MOVWstore [i] {s} p1 (SHR(L|Q)const [j] w) x:(MOVWstore [i-2] {s} p0 w0:(SHR(L|Q)const [j-16] w) mem))
-  && x.Uses == 1
-  && same(p0, p1, 1)
-  && clobber(x)
-  -> (MOVLstore [i-2] {s} p0 w0 mem)
+(MOVWstore [i] {s} p (SHR(L|Q)const [16] w) x:(MOVWstore [i-2] {s} p w mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> (MOVLstore [i-2] {s} p w mem)
-(MOVLstore [i] {s} p1 (SHRQconst [32] w) x:(MOVLstore [i-4] {s} p0 w mem))
-  && x.Uses == 1
-  && same(p0, p1, 1)
-  && clobber(x)
-  -> (MOVQstore [i-4] {s} p0 w mem)
+(MOVWstore [i] {s} p (SHR(L|Q)const [j] w) x:(MOVWstore [i-2] {s} p w0:(SHR(L|Q)const [j-16] w) mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> (MOVLstore [i-2] {s} p w0 mem)
-(MOVLstore [i] {s} p1 (SHRQconst [j] w) x:(MOVLstore [i-4] {s} p0 w0:(SHRQconst [j-32] w) mem))
-  && x.Uses == 1
-  && same(p0, p1, 1)
-  && clobber(x)
-  -> (MOVQstore [i-4] {s} p0 w0 mem)
+(MOVWstore [i] {s} p1 (SHR(L|Q)const [16] w) x:(MOVWstore [i] {s} p0 w mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 2)
+  && clobber(x)
+  -> (MOVLstore [i] {s} p0 w mem)
+(MOVWstore [i] {s} p1 (SHR(L|Q)const [j] w) x:(MOVWstore [i] {s} p0 w0:(SHR(L|Q)const [j-16] w) mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 2)
+  && clobber(x)
+  -> (MOVLstore [i] {s} p0 w0 mem)
+(MOVLstore [i] {s} p (SHRQconst [32] w) x:(MOVLstore [i-4] {s} p w mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> (MOVQstore [i-4] {s} p w mem)
+(MOVLstore [i] {s} p (SHRQconst [j] w) x:(MOVLstore [i-4] {s} p w0:(SHRQconst [j-32] w) mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> (MOVQstore [i-4] {s} p w0 mem)
+(MOVLstore [i] {s} p1 (SHRQconst [32] w) x:(MOVLstore [i] {s} p0 w mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 4)
+  && clobber(x)
+  -> (MOVQstore [i] {s} p0 w mem)
+(MOVLstore [i] {s} p1 (SHRQconst [j] w) x:(MOVLstore [i] {s} p0 w0:(SHRQconst [j-32] w) mem))
+  && x.Uses == 1
+  && sequentialAddresses(p0, p1, 4)
+  && clobber(x)
+  -> (MOVQstore [i] {s} p0 w0 mem)
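The little-endian variants above need no byte swap: adjacent byte stores of successive shifts of the same value fuse directly into one wider store. A sketch of the matching Go pattern (function name is illustrative):

```go
package main

import "fmt"

// put32le stores v in little-endian byte order. The four byte stores
// with shifts match the little-endian combining rules above and can
// become a single unaligned MOVL store on AMD64, with no byte swap.
func put32le(b []byte, v uint32) {
	b[0] = byte(v)
	b[1] = byte(v >> 8)
	b[2] = byte(v >> 16)
	b[3] = byte(v >> 24)
}

func main() {
	buf := make([]byte, 4)
	put32le(buf, 0x04030201)
	fmt.Println(buf) // [1 2 3 4]
}
```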
(MOVBstore [i] {s} p (MOVBstore [i] {s} p
x1:(MOVBload [j] {s2} p2 mem) x1:(MOVBload [j] {s2} p2 mem)

@ -127,6 +127,7 @@ func init() {
 	gp1flags     = regInfo{inputs: []regMask{gpsp}}
 	gp0flagsLoad = regInfo{inputs: []regMask{gpspsb, 0}}
 	gp1flagsLoad = regInfo{inputs: []regMask{gpspsb, gpsp, 0}}
+	gp2flagsLoad = regInfo{inputs: []regMask{gpspsb, gpsp, gpsp, 0}}
 	flagsgp      = regInfo{inputs: nil, outputs: gponly}
 	gp11flags    = regInfo{inputs: []regMask{gp}, outputs: []regMask{gp, 0}}
@ -299,6 +300,24 @@ func init() {
 	{name: "CMPWconstload", argLength: 2, reg: gp0flagsLoad, asm: "CMPW", aux: "SymValAndOff", typ: "Flags", symEffect: "Read", faultOnNilArg0: true},
 	{name: "CMPBconstload", argLength: 2, reg: gp0flagsLoad, asm: "CMPB", aux: "SymValAndOff", typ: "Flags", symEffect: "Read", faultOnNilArg0: true},
+	// compare *(arg0+N*arg1+auxint+aux) to arg2 (in that order). arg3=mem.
+	{name: "CMPQloadidx8", argLength: 4, reg: gp2flagsLoad, asm: "CMPQ", scale: 8, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPQloadidx1", argLength: 4, reg: gp2flagsLoad, asm: "CMPQ", scale: 1, commutative: true, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPLloadidx4", argLength: 4, reg: gp2flagsLoad, asm: "CMPL", scale: 4, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPLloadidx1", argLength: 4, reg: gp2flagsLoad, asm: "CMPL", scale: 1, commutative: true, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPWloadidx2", argLength: 4, reg: gp2flagsLoad, asm: "CMPW", scale: 2, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPWloadidx1", argLength: 4, reg: gp2flagsLoad, asm: "CMPW", scale: 1, commutative: true, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPBloadidx1", argLength: 4, reg: gp2flagsLoad, asm: "CMPB", scale: 1, commutative: true, aux: "SymOff", typ: "Flags", symEffect: "Read"},
+	// compare *(arg0+N*arg1+ValAndOff(AuxInt).Off()+aux) to ValAndOff(AuxInt).Val() (in that order). arg2=mem.
+	{name: "CMPQconstloadidx8", argLength: 3, reg: gp1flagsLoad, asm: "CMPQ", scale: 8, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPQconstloadidx1", argLength: 3, reg: gp1flagsLoad, asm: "CMPQ", scale: 1, commutative: true, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPLconstloadidx4", argLength: 3, reg: gp1flagsLoad, asm: "CMPL", scale: 4, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPLconstloadidx1", argLength: 3, reg: gp1flagsLoad, asm: "CMPL", scale: 1, commutative: true, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPWconstloadidx2", argLength: 3, reg: gp1flagsLoad, asm: "CMPW", scale: 2, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPWconstloadidx1", argLength: 3, reg: gp1flagsLoad, asm: "CMPW", scale: 1, commutative: true, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
+	{name: "CMPBconstloadidx1", argLength: 3, reg: gp1flagsLoad, asm: "CMPB", scale: 1, commutative: true, aux: "SymValAndOff", typ: "Flags", symEffect: "Read"},
 	{name: "UCOMISS", argLength: 2, reg: fp2flags, asm: "UCOMISS", typ: "Flags"}, // arg0 compare to arg1, f32
 	{name: "UCOMISD", argLength: 2, reg: fp2flags, asm: "UCOMISD", typ: "Flags"}, // arg0 compare to arg1, f64
@ -717,7 +736,7 @@ func init() {
 	{name: "LoweredNilCheck", argLength: 2, reg: regInfo{inputs: []regMask{gpsp}}, clobberFlags: true, nilCheck: true, faultOnNilArg0: true},
 	// LoweredWB invokes runtime.gcWriteBarrier. arg0=destptr, arg1=srcptr, arg2=mem, aux=runtime.gcWriteBarrier
 	// It saves all GP registers if necessary, but may clobber others.
-	{name: "LoweredWB", argLength: 3, reg: regInfo{inputs: []regMask{buildReg("DI"), ax}, clobbers: callerSave &^ gp}, clobberFlags: true, aux: "Sym", symEffect: "None"},
+	{name: "LoweredWB", argLength: 3, reg: regInfo{inputs: []regMask{buildReg("DI"), buildReg("AX CX DX BX BP SI R8 R9")}, clobbers: callerSave &^ gp}, clobberFlags: true, aux: "Sym", symEffect: "None"},
 	// There are three of these functions so that they can have three different register inputs.
 	// When we check 0 <= c <= cap (A), then 0 <= b <= c (B), then 0 <= a <= b (C), we want the

@ -14,3 +14,13 @@
 (CMP(Q|L|W|B)load {sym} [off] ptr x mem) -> (CMP(Q|L|W|B) (MOV(Q|L|W|B)load {sym} [off] ptr mem) x)
 (CMP(Q|L|W|B)constload {sym} [vo] ptr mem) -> (CMP(Q|L|W|B)const (MOV(Q|L|W|B)load {sym} [offOnly(vo)] ptr mem) [valOnly(vo)])
+(CMP(Q|L|W|B)loadidx1 {sym} [off] ptr idx x mem) -> (CMP(Q|L|W|B) (MOV(Q|L|W|B)loadidx1 {sym} [off] ptr idx mem) x)
+(CMPQloadidx8 {sym} [off] ptr idx x mem) -> (CMPQ (MOVQloadidx8 {sym} [off] ptr idx mem) x)
+(CMPLloadidx4 {sym} [off] ptr idx x mem) -> (CMPL (MOVLloadidx4 {sym} [off] ptr idx mem) x)
+(CMPWloadidx2 {sym} [off] ptr idx x mem) -> (CMPW (MOVWloadidx2 {sym} [off] ptr idx mem) x)
+(CMP(Q|L|W|B)constloadidx1 {sym} [vo] ptr idx mem) -> (CMP(Q|L|W|B)const (MOV(Q|L|W|B)loadidx1 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
+(CMPQconstloadidx8 {sym} [vo] ptr idx mem) -> (CMPQconst (MOVQloadidx8 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
+(CMPLconstloadidx4 {sym} [vo] ptr idx mem) -> (CMPLconst (MOVLloadidx4 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
+(CMPWconstloadidx2 {sym} [vo] ptr idx mem) -> (CMPWconst (MOVWloadidx2 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
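The new indexed compare-with-load ops let a comparison against an indexed memory operand fold the load into the CMP instruction itself. A hedged Go sketch of code that can benefit (the function is illustrative; whether the fold fires depends on the surrounding code):

```go
package main

import "fmt"

// contains scans for x. With ops like CMPQloadidx8 added above, the
// a[i] == x comparison can fold the scaled indexed load directly into
// a CMPQ instruction instead of loading into a register first.
func contains(a []int64, x int64) bool {
	for i := range a {
		if a[i] == x {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(contains([]int64{10, 20, 30}, 20)) // true
}
```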

@ -630,6 +630,7 @@ func init() {
 	asm:            "STMG",
 	faultOnNilArg0: true,
 	symEffect:      "Write",
+	clobberFlags:   true, // TODO(mundaym): currently uses AGFI to handle large offsets
 },
 {
 	name: "STMG3",
@ -640,6 +641,7 @@ func init() {
 	asm:            "STMG",
 	faultOnNilArg0: true,
 	symEffect:      "Write",
+	clobberFlags:   true, // TODO(mundaym): currently uses AGFI to handle large offsets
 },
 {
 	name: "STMG4",
@ -657,6 +659,7 @@ func init() {
 	asm:            "STMG",
 	faultOnNilArg0: true,
 	symEffect:      "Write",
+	clobberFlags:   true, // TODO(mundaym): currently uses AGFI to handle large offsets
 },
 {
 	name: "STM2",
@ -667,6 +670,7 @@ func init() {
 	asm:            "STMY",
 	faultOnNilArg0: true,
 	symEffect:      "Write",
+	clobberFlags:   true, // TODO(mundaym): currently uses AGFI to handle large offsets
 },
 {
 	name: "STM3",
@ -677,6 +681,7 @@ func init() {
 	asm:            "STMY",
 	faultOnNilArg0: true,
 	symEffect:      "Write",
+	clobberFlags:   true, // TODO(mundaym): currently uses AGFI to handle large offsets
 },
 {
 	name: "STM4",
@ -694,6 +699,7 @@ func init() {
 	asm:            "STMY",
 	faultOnNilArg0: true,
 	symEffect:      "Write",
+	clobberFlags:   true, // TODO(mundaym): currently uses AGFI to handle large offsets
 },
 // large move

@ -137,6 +137,16 @@
 (Xor32 (Const32 [c]) (Const32 [d])) -> (Const32 [int64(int32(c^d))])
 (Xor64 (Const64 [c]) (Const64 [d])) -> (Const64 [c^d])
+(Ctz64 (Const64 [c])) && config.PtrSize == 4 -> (Const32 [ntz(c)])
+(Ctz32 (Const32 [c])) && config.PtrSize == 4 -> (Const32 [ntz32(c)])
+(Ctz16 (Const16 [c])) && config.PtrSize == 4 -> (Const32 [ntz16(c)])
+(Ctz8 (Const8 [c])) && config.PtrSize == 4 -> (Const32 [ntz8(c)])
+(Ctz64 (Const64 [c])) && config.PtrSize == 8 -> (Const64 [ntz(c)])
+(Ctz32 (Const32 [c])) && config.PtrSize == 8 -> (Const64 [ntz32(c)])
+(Ctz16 (Const16 [c])) && config.PtrSize == 8 -> (Const64 [ntz16(c)])
+(Ctz8 (Const8 [c])) && config.PtrSize == 8 -> (Const64 [ntz8(c)])
 (Div8 (Const8 [c]) (Const8 [d])) && d != 0 -> (Const8 [int64(int8(c)/int8(d))])
 (Div16 (Const16 [c]) (Const16 [d])) && d != 0 -> (Const16 [int64(int16(c)/int16(d))])
 (Div32 (Const32 [c]) (Const32 [d])) && d != 0 -> (Const32 [int64(int32(c)/int32(d))])
@ -917,7 +927,7 @@
 (If (ConstBool [c]) yes no) && c == 0 -> (First no yes)
 // Get rid of Convert ops for pointer arithmetic on unsafe.Pointer.
-(Convert (Add(64|32) (Convert ptr mem) off) mem) -> (Add(64|32) ptr off)
+(Convert (Add(64|32) (Convert ptr mem) off) mem) -> (AddPtr ptr off)
 (Convert (Convert ptr mem) mem) -> ptr
 // strength reduction of divide by a constant.
@ -1780,6 +1790,10 @@
 // is constant, which pushes constants to the outside
 // of the expression. At that point, any constant-folding
 // opportunities should be obvious.
+// Note: don't include AddPtr here! In order to maintain the
+// invariant that pointers must stay within the pointed-to object,
+// we can't pull part of a pointer computation above the AddPtr.
+// See issue 37881.
 // x + (C + z) -> C + (x + z)
 (Add64 (Add64 i:(Const64 <t>) z) x) && (z.Op != OpConst64 && x.Op != OpConst64) -> (Add64 i (Add64 <t> z x))

@ -715,6 +715,11 @@ func (w *bodyBase) add(node Statement) {
 // declared reports if the body contains a Declare with the given name.
 func (w *bodyBase) declared(name string) bool {
+	if name == "nil" {
+		// Treat "nil" as having already been declared.
+		// This lets us use nil to match an aux field.
+		return true
+	}
 	for _, s := range w.list {
 		if decl, ok := s.(*Declare); ok && decl.name == name {
 			return true

@ -19,9 +19,12 @@ import (
 type HTMLWriter struct {
 	Logger
 	w    io.WriteCloser
 	path string
 	dot  *dotWriter
+	prevHash      []byte
+	pendingPhases []string
+	pendingTitles []string
 }

 func NewHTMLWriter(path string, logger Logger, funcname, cfgMask string) *HTMLWriter {
@ -88,27 +91,22 @@ th, td {
 td > h2 {
 	cursor: pointer;
 	font-size: 120%;
+	margin: 5px 0px 5px 0px;
 }
 td.collapsed {
 	font-size: 12px;
 	width: 12px;
 	border: 1px solid white;
-	padding: 0;
+	padding: 2px;
 	cursor: pointer;
 	background: #fafafa;
 }
 td.collapsed div {
-	-moz-transform: rotate(-90.0deg); /* FF3.5+ */
-	-o-transform: rotate(-90.0deg); /* Opera 10.5 */
-	-webkit-transform: rotate(-90.0deg); /* Saf3.1+, Chrome */
-	filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=0.083); /* IE6,IE7 */
-	-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=0.083)"; /* IE8 */
-	margin-top: 10.3em;
-	margin-left: -10em;
-	margin-right: -10em;
-	text-align: right;
+	/* TODO: Flip the direction of the phase's title 90 degrees on a collapsed column. */
+	writing-mode: vertical-lr;
+	white-space: pre;
 }
 code, pre, .lines, .ast {
@ -263,6 +261,14 @@ body.darkmode table, th {
 	border: 1px solid gray;
 }
+body.darkmode text {
+	fill: white;
+}
+body.darkmode svg polygon:first-child {
+	fill: rgb(21, 21, 21);
+}
 .highlight-aquamarine { background-color: aquamarine; color: black; }
 .highlight-coral { background-color: coral; color: black; }
 .highlight-lightpink { background-color: lightpink; color: black; }
@ -304,7 +310,7 @@ body.darkmode table, th {
 	color: gray;
 }
-.outline-blue { outline: blue solid 2px; }
+.outline-blue { outline: #2893ff solid 2px; }
 .outline-red { outline: red solid 2px; }
 .outline-blueviolet { outline: blueviolet solid 2px; }
 .outline-darkolivegreen { outline: darkolivegreen solid 2px; }
@ -316,7 +322,7 @@ body.darkmode table, th {
 .outline-maroon { outline: maroon solid 2px; }
 .outline-black { outline: black solid 2px; }
-ellipse.outline-blue { stroke-width: 2px; stroke: blue; }
+ellipse.outline-blue { stroke-width: 2px; stroke: #2893ff; }
 ellipse.outline-red { stroke-width: 2px; stroke: red; }
 ellipse.outline-blueviolet { stroke-width: 2px; stroke: blueviolet; }
 ellipse.outline-darkolivegreen { stroke-width: 2px; stroke: darkolivegreen; }
@ -473,7 +479,7 @@ window.onload = function() {
 	"deadcode",
 	"opt",
 	"lower",
-	"late deadcode",
+	"late-deadcode",
 	"regalloc",
 	"genssa",
 ];
@ -495,15 +501,34 @@ window.onload = function() {
 }
 // Go through all columns and collapse needed phases.
-var td = document.getElementsByTagName("td");
-for (var i = 0; i < td.length; i++) {
-	var id = td[i].id;
-	var phase = id.substr(0, id.length-4);
-	var show = expandedDefault.indexOf(phase) !== -1
+const td = document.getElementsByTagName("td");
+for (let i = 0; i < td.length; i++) {
+	const id = td[i].id;
+	const phase = id.substr(0, id.length-4);
+	let show = expandedDefault.indexOf(phase) !== -1
+	// If show == false, check to see if this is a combined column (multiple phases).
+	// If combined, check each of the phases to see if they are in our expandedDefaults.
+	// If any are found, that entire combined column gets shown.
+	if (!show) {
+		const combined = phase.split('--+--');
+		const len = combined.length;
+		if (len > 1) {
+			for (let i = 0; i < len; i++) {
+				if (expandedDefault.indexOf(combined[i]) !== -1) {
+					show = true;
+					break;
+				}
+			}
+		}
+	}
 	if (id.endsWith("-exp")) {
-		var h2 = td[i].getElementsByTagName("h2");
-		if (h2 && h2[0]) {
-			h2[0].addEventListener('click', toggler(phase));
+		const h2Els = td[i].getElementsByTagName("h2");
+		const len = h2Els.length;
+		if (len > 0) {
+			for (let i = 0; i < len; i++) {
+				h2Els[i].addEventListener('click', toggler(phase));
+			}
 		}
 	} else {
 		td[i].addEventListener('click', toggler(phase));
@ -642,12 +667,35 @@ function makeDraggable(event) {
 function toggleDarkMode() {
 	document.body.classList.toggle('darkmode');
+	// Collect all of the "collapsed" elements and apply dark mode on each collapsed column
 	const collapsedEls = document.getElementsByClassName('collapsed');
 	const len = collapsedEls.length;
 	for (let i = 0; i < len; i++) {
 		collapsedEls[i].classList.toggle('darkmode');
 	}
+	// Collect and spread the appropriate elements from all of the svgs on the page into one array
const svgParts = [
...document.querySelectorAll('path'),
...document.querySelectorAll('ellipse'),
...document.querySelectorAll('polygon'),
];
// Iterate over the svgParts specifically looking for white and black fill/stroke to be toggled.
// The verbose conditional is intentional here so that we do not mutate any svg path, ellipse, or polygon that is of any color other than white or black.
svgParts.forEach(el => {
if (el.attributes.stroke.value === 'white') {
el.attributes.stroke.value = 'black';
} else if (el.attributes.stroke.value === 'black') {
el.attributes.stroke.value = 'white';
}
if (el.attributes.fill.value === 'white') {
el.attributes.fill.value = 'black';
} else if (el.attributes.fill.value === 'black') {
el.attributes.fill.value = 'white';
}
});
} }
</script> </script>
@ -707,8 +755,16 @@ func (w *HTMLWriter) WriteFunc(phase, title string, f *Func) {
if w == nil { if w == nil {
return // avoid generating HTML just to discard it return // avoid generating HTML just to discard it
} }
//w.WriteColumn(phase, title, "", f.HTML()) hash := hashFunc(f)
w.WriteColumn(phase, title, "", f.HTML(phase, w.dot)) w.pendingPhases = append(w.pendingPhases, phase)
w.pendingTitles = append(w.pendingTitles, title)
if !bytes.Equal(hash, w.prevHash) {
phases := strings.Join(w.pendingPhases, " + ")
w.WriteMultiTitleColumn(phases, w.pendingTitles, fmt.Sprintf("hash-%x", hash), f.HTML(phase, w.dot))
w.pendingPhases = w.pendingPhases[:0]
w.pendingTitles = w.pendingTitles[:0]
}
w.prevHash = hash
} }
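The rewritten `WriteFunc` above collapses consecutive phases whose printed output is identical by hashing the function and deferring the column until the hash changes. A standalone sketch of that dedup-by-hash pattern (the `collapseIdentical` helper and its inputs are illustrative, not the compiler's API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// collapseIdentical groups consecutive phase names whose rendered
// output hashes to the same value, mirroring how the SSA HTML writer
// merges columns for phases that did not change the function.
func collapseIdentical(phases []string, render func(string) string) [][]string {
	var groups [][]string
	var prev [sha256.Size]byte
	for i, p := range phases {
		h := sha256.Sum256([]byte(render(p)))
		if i > 0 && h == prev {
			// Same output as the previous phase: fold into its group.
			groups[len(groups)-1] = append(groups[len(groups)-1], p)
		} else {
			groups = append(groups, []string{p})
		}
		prev = h
	}
	return groups
}

func main() {
	out := map[string]string{"opt": "A", "deadcode": "A", "lower": "B"}
	g := collapseIdentical([]string{"opt", "deadcode", "lower"},
		func(p string) string { return out[p] })
	fmt.Println(len(g)) // opt and deadcode collapse into one group
}
```

The real writer additionally joins the pending phase names with " + " for the combined column title, which is what the `phase.split('--+--')` logic in the page script later unpacks.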
// FuncLines contains source code for a function to be displayed // FuncLines contains source code for a function to be displayed
@ -822,6 +878,10 @@ func (w *HTMLWriter) WriteAST(phase string, buf *bytes.Buffer) {
// WriteColumn writes raw HTML in a column headed by title. // WriteColumn writes raw HTML in a column headed by title.
// It is intended for pre- and post-compilation log output. // It is intended for pre- and post-compilation log output.
func (w *HTMLWriter) WriteColumn(phase, title, class, html string) { func (w *HTMLWriter) WriteColumn(phase, title, class, html string) {
w.WriteMultiTitleColumn(phase, []string{title}, class, html)
}
func (w *HTMLWriter) WriteMultiTitleColumn(phase string, titles []string, class, html string) {
if w == nil { if w == nil {
return return
} }
@ -834,9 +894,11 @@ func (w *HTMLWriter) WriteColumn(phase, title, class, html string) {
} else { } else {
w.Printf("<td id=\"%v-exp\" class=\"%v\">", id, class) w.Printf("<td id=\"%v-exp\" class=\"%v\">", id, class)
} }
w.WriteString("<h2>" + title + "</h2>") for _, title := range titles {
w.WriteString("<h2>" + title + "</h2>")
}
w.WriteString(html) w.WriteString(html)
w.WriteString("</td>") w.WriteString("</td>\n")
} }
func (w *HTMLWriter) Printf(msg string, v ...interface{}) { func (w *HTMLWriter) Printf(msg string, v ...interface{}) {
@ -1016,7 +1078,7 @@ func (d *dotWriter) writeFuncSVG(w io.Writer, phase string, f *Func) {
arrow = "dotvee" arrow = "dotvee"
layoutDrawn[s.b.ID] = true layoutDrawn[s.b.ID] = true
} else if isBackEdge(b.ID, s.b.ID) { } else if isBackEdge(b.ID, s.b.ID) {
color = "blue" color = "#2893ff"
} }
fmt.Fprintf(pipe, `%v -> %v [label=" %d ",style="%s",color="%s",arrowhead="%s"];`, b, s.b, i, style, color, arrow) fmt.Fprintf(pipe, `%v -> %v [label=" %d ",style="%s",color="%s",arrowhead="%s"];`, b, s.b, i, style, color, arrow)
} }

View file

@ -602,6 +602,20 @@ const (
OpAMD64CMPLconstload OpAMD64CMPLconstload
OpAMD64CMPWconstload OpAMD64CMPWconstload
OpAMD64CMPBconstload OpAMD64CMPBconstload
OpAMD64CMPQloadidx8
OpAMD64CMPQloadidx1
OpAMD64CMPLloadidx4
OpAMD64CMPLloadidx1
OpAMD64CMPWloadidx2
OpAMD64CMPWloadidx1
OpAMD64CMPBloadidx1
OpAMD64CMPQconstloadidx8
OpAMD64CMPQconstloadidx1
OpAMD64CMPLconstloadidx4
OpAMD64CMPLconstloadidx1
OpAMD64CMPWconstloadidx2
OpAMD64CMPWconstloadidx1
OpAMD64CMPBconstloadidx1
OpAMD64UCOMISS OpAMD64UCOMISS
OpAMD64UCOMISD OpAMD64UCOMISD
OpAMD64BTL OpAMD64BTL
@ -7534,6 +7548,217 @@ var opcodeTable = [...]opInfo{
}, },
}, },
}, },
{
name: "CMPQloadidx8",
auxType: auxSymOff,
argLen: 4,
symEffect: SymRead,
asm: x86.ACMPQ,
scale: 8,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPQloadidx1",
auxType: auxSymOff,
argLen: 4,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPQ,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPLloadidx4",
auxType: auxSymOff,
argLen: 4,
symEffect: SymRead,
asm: x86.ACMPL,
scale: 4,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPLloadidx1",
auxType: auxSymOff,
argLen: 4,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPL,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPWloadidx2",
auxType: auxSymOff,
argLen: 4,
symEffect: SymRead,
asm: x86.ACMPW,
scale: 2,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPWloadidx1",
auxType: auxSymOff,
argLen: 4,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPW,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPBloadidx1",
auxType: auxSymOff,
argLen: 4,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPB,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{2, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPQconstloadidx8",
auxType: auxSymValAndOff,
argLen: 3,
symEffect: SymRead,
asm: x86.ACMPQ,
scale: 8,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPQconstloadidx1",
auxType: auxSymValAndOff,
argLen: 3,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPQ,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPLconstloadidx4",
auxType: auxSymValAndOff,
argLen: 3,
symEffect: SymRead,
asm: x86.ACMPL,
scale: 4,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPLconstloadidx1",
auxType: auxSymValAndOff,
argLen: 3,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPL,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPWconstloadidx2",
auxType: auxSymValAndOff,
argLen: 3,
symEffect: SymRead,
asm: x86.ACMPW,
scale: 2,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPWconstloadidx1",
auxType: auxSymValAndOff,
argLen: 3,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPW,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{
name: "CMPBconstloadidx1",
auxType: auxSymValAndOff,
argLen: 3,
commutative: true,
symEffect: SymRead,
asm: x86.ACMPB,
scale: 1,
reg: regInfo{
inputs: []inputInfo{
{1, 65535}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15
{0, 4295032831}, // AX CX DX BX SP BP SI DI R8 R9 R10 R11 R12 R13 R14 R15 SB
},
},
},
{ {
name: "UCOMISS", name: "UCOMISS",
argLen: 2, argLen: 2,
@ -11420,7 +11645,7 @@ var opcodeTable = [...]opInfo{
reg: regInfo{ reg: regInfo{
inputs: []inputInfo{ inputs: []inputInfo{
{0, 128}, // DI {0, 128}, // DI
{1, 1}, // AX {1, 879}, // AX CX DX BX BP SI R8 R9
}, },
clobbers: 4294901760, // X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 clobbers: 4294901760, // X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15
}, },
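The register masks in these `inputInfo` entries are bitmasks over the register numbering the comments spell out (bit 0 = AX, bit 1 = CX, and so on), so 879 = 1+2+4+8+32+64+256+512 selects exactly AX CX DX BX BP SI R8 R9, and 65535 is all sixteen general-purpose registers. A small decoder sketch, assuming that bit order:

```go
package main

import "fmt"

// amd64 general-purpose register names in mask-bit order, as used by
// the opcodeTable comments (bit 0 = AX, bit 1 = CX, ...).
var regNames = []string{
	"AX", "CX", "DX", "BX", "SP", "BP", "SI", "DI",
	"R8", "R9", "R10", "R11", "R12", "R13", "R14", "R15",
}

// decodeMask expands a register bitmask into the register names it
// permits, matching the inline comments in the generated table.
func decodeMask(mask uint64) []string {
	var out []string
	for i, name := range regNames {
		if mask&(1<<uint(i)) != 0 {
			out = append(out, name)
		}
	}
	return out
}

func main() {
	fmt.Println(decodeMask(879)) // the new DUFFZERO input-1 mask
}
```

The 4295032831 masks additionally set bit 32, which is SB (the static base pseudo-register) and falls outside the sixteen GP names above.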
@ -29885,6 +30110,7 @@ var opcodeTable = [...]opInfo{
name: "STMG2", name: "STMG2",
auxType: auxSymOff, auxType: auxSymOff,
argLen: 4, argLen: 4,
clobberFlags: true,
faultOnNilArg0: true, faultOnNilArg0: true,
symEffect: SymWrite, symEffect: SymWrite,
asm: s390x.ASTMG, asm: s390x.ASTMG,
@ -29900,6 +30126,7 @@ var opcodeTable = [...]opInfo{
name: "STMG3", name: "STMG3",
auxType: auxSymOff, auxType: auxSymOff,
argLen: 5, argLen: 5,
clobberFlags: true,
faultOnNilArg0: true, faultOnNilArg0: true,
symEffect: SymWrite, symEffect: SymWrite,
asm: s390x.ASTMG, asm: s390x.ASTMG,
@ -29916,6 +30143,7 @@ var opcodeTable = [...]opInfo{
name: "STMG4", name: "STMG4",
auxType: auxSymOff, auxType: auxSymOff,
argLen: 6, argLen: 6,
clobberFlags: true,
faultOnNilArg0: true, faultOnNilArg0: true,
symEffect: SymWrite, symEffect: SymWrite,
asm: s390x.ASTMG, asm: s390x.ASTMG,
@ -29933,6 +30161,7 @@ var opcodeTable = [...]opInfo{
name: "STM2", name: "STM2",
auxType: auxSymOff, auxType: auxSymOff,
argLen: 4, argLen: 4,
clobberFlags: true,
faultOnNilArg0: true, faultOnNilArg0: true,
symEffect: SymWrite, symEffect: SymWrite,
asm: s390x.ASTMY, asm: s390x.ASTMY,
@ -29948,6 +30177,7 @@ var opcodeTable = [...]opInfo{
name: "STM3", name: "STM3",
auxType: auxSymOff, auxType: auxSymOff,
argLen: 5, argLen: 5,
clobberFlags: true,
faultOnNilArg0: true, faultOnNilArg0: true,
symEffect: SymWrite, symEffect: SymWrite,
asm: s390x.ASTMY, asm: s390x.ASTMY,
@ -29964,6 +30194,7 @@ var opcodeTable = [...]opInfo{
name: "STM4", name: "STM4",
auxType: auxSymOff, auxType: auxSymOff,
argLen: 6, argLen: 6,
clobberFlags: true,
faultOnNilArg0: true, faultOnNilArg0: true,
symEffect: SymWrite, symEffect: SymWrite,
asm: s390x.ASTMY, asm: s390x.ASTMY,

View file

@ -6,6 +6,7 @@ package ssa
import ( import (
"bytes" "bytes"
"crypto/sha256"
"fmt" "fmt"
"io" "io"
) )
@ -14,6 +15,13 @@ func printFunc(f *Func) {
f.Logf("%s", f) f.Logf("%s", f)
} }
func hashFunc(f *Func) []byte {
h := sha256.New()
p := stringFuncPrinter{w: h}
fprintFunc(p, f)
return h.Sum(nil)
}
func (f *Func) String() string { func (f *Func) String() string {
var buf bytes.Buffer var buf bytes.Buffer
p := stringFuncPrinter{w: &buf} p := stringFuncPrinter{w: &buf}

View file

@ -347,9 +347,10 @@ func nlz(x int64) int64 {
} }
// ntz returns the number of trailing zeros. // ntz returns the number of trailing zeros.
func ntz(x int64) int64 { func ntz(x int64) int64 { return int64(bits.TrailingZeros64(uint64(x))) }
return int64(bits.TrailingZeros64(uint64(x))) func ntz32(x int64) int64 { return int64(bits.TrailingZeros32(uint32(x))) }
} func ntz16(x int64) int64 { return int64(bits.TrailingZeros16(uint16(x))) }
func ntz8(x int64) int64 { return int64(bits.TrailingZeros8(uint8(x))) }
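The width-specific helpers above truncate to the operand width before counting, so the count of a zero operand saturates at that width rather than at 64. A quick standalone check of that behavior (a sketch mirroring the helper, not the compiler source itself):

```go
package main

import (
	"fmt"
	"math/bits"
)

// ntz16 mirrors the rewrite helper: truncate to 16 bits, then count
// trailing zeros, so any value that is 0 mod 2^16 reports 16.
func ntz16(x int64) int64 { return int64(bits.TrailingZeros16(uint16(x))) }

func main() {
	fmt.Println(ntz16(8))       // 0b1000 -> 3
	fmt.Println(ntz16(0x10000)) // truncates to 0 -> 16
}
```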
func oneBit(x int64) bool { func oneBit(x int64) bool {
return bits.OnesCount64(uint64(x)) == 1 return bits.OnesCount64(uint64(x)) == 1
@ -990,7 +991,9 @@ func zeroUpper32Bits(x *Value, depth int) bool {
OpAMD64ORLload, OpAMD64XORLload, OpAMD64CVTTSD2SL, OpAMD64ORLload, OpAMD64XORLload, OpAMD64CVTTSD2SL,
OpAMD64ADDL, OpAMD64ADDLconst, OpAMD64SUBL, OpAMD64SUBLconst, OpAMD64ADDL, OpAMD64ADDLconst, OpAMD64SUBL, OpAMD64SUBLconst,
OpAMD64ANDL, OpAMD64ANDLconst, OpAMD64ORL, OpAMD64ORLconst, OpAMD64ANDL, OpAMD64ANDLconst, OpAMD64ORL, OpAMD64ORLconst,
OpAMD64XORL, OpAMD64XORLconst, OpAMD64NEGL, OpAMD64NOTL: OpAMD64XORL, OpAMD64XORLconst, OpAMD64NEGL, OpAMD64NOTL,
OpAMD64SHRL, OpAMD64SHRLconst, OpAMD64SARL, OpAMD64SARLconst,
OpAMD64SHLL, OpAMD64SHLLconst:
return true return true
case OpArg: case OpArg:
return x.Type.Width == 4 return x.Type.Width == 4
@ -1248,42 +1251,27 @@ func read64(sym interface{}, off int64, byteorder binary.ByteOrder) uint64 {
return byteorder.Uint64(buf) return byteorder.Uint64(buf)
} }
// same reports whether x and y are the same value. // sequentialAddresses reports true if it can prove that x + n == y
// It checks to a maximum depth of d, so it may report func sequentialAddresses(x, y *Value, n int64) bool {
// a false negative. if x.Op == Op386ADDL && y.Op == Op386LEAL1 && y.AuxInt == n && y.Aux == nil &&
func same(x, y *Value, depth int) bool { (x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
if x == y { x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
return true return true
} }
if depth <= 0 { if x.Op == Op386LEAL1 && y.Op == Op386LEAL1 && y.AuxInt == x.AuxInt+n && x.Aux == y.Aux &&
return false (x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
} x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
if x.Op != y.Op || x.Aux != y.Aux || x.AuxInt != y.AuxInt {
return false
}
if len(x.Args) != len(y.Args) {
return false
}
if opcodeTable[x.Op].commutative {
// Check exchanged ordering first.
for i, a := range x.Args {
j := i
if j < 2 {
j ^= 1
}
b := y.Args[j]
if !same(a, b, depth-1) {
goto checkNormalOrder
}
}
return true return true
checkNormalOrder:
} }
for i, a := range x.Args { if x.Op == OpAMD64ADDQ && y.Op == OpAMD64LEAQ1 && y.AuxInt == n && y.Aux == nil &&
b := y.Args[i] (x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
if !same(a, b, depth-1) { x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
return false return true
}
} }
return true if x.Op == OpAMD64LEAQ1 && y.Op == OpAMD64LEAQ1 && y.AuxInt == x.AuxInt+n && x.Aux == y.Aux &&
(x.Args[0] == y.Args[0] && x.Args[1] == y.Args[1] ||
x.Args[0] == y.Args[1] && x.Args[1] == y.Args[0]) {
return true
}
return false
} }
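Each clause of `sequentialAddresses` checks both argument orders because the underlying ADD/LEA1 ops are commutative, and for the LEA/LEA pair it also requires the displacements to differ by exactly n. The core check, sketched in isolation with an illustrative `expr` type standing in for the SSA values:

```go
package main

import "fmt"

// expr models a two-operand address computation a+b+disp, standing in
// for the ADDQ/LEAQ1 values the real helper inspects.
type expr struct {
	a, b string
	disp int64
}

// sequential reports whether y computes x's address plus n, accepting
// either operand order since the underlying ops are commutative.
func sequential(x, y expr, n int64) bool {
	sameArgs := (x.a == y.a && x.b == y.b) || (x.a == y.b && x.b == y.a)
	return sameArgs && y.disp == x.disp+n
}

func main() {
	x := expr{"ptr", "idx", 0}
	y := expr{"idx", "ptr", 8} // commuted operands, displacement +8
	fmt.Println(sequential(x, y, 8))
}
```

This replaces the deleted recursive `same` helper with a direct structural test, trading generality for a check that cannot report false positives on distinct values.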

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -7,20 +7,48 @@ func rewriteValueAMD64splitload(v *Value) bool {
switch v.Op { switch v.Op {
case OpAMD64CMPBconstload: case OpAMD64CMPBconstload:
return rewriteValueAMD64splitload_OpAMD64CMPBconstload(v) return rewriteValueAMD64splitload_OpAMD64CMPBconstload(v)
case OpAMD64CMPBconstloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPBconstloadidx1(v)
case OpAMD64CMPBload: case OpAMD64CMPBload:
return rewriteValueAMD64splitload_OpAMD64CMPBload(v) return rewriteValueAMD64splitload_OpAMD64CMPBload(v)
case OpAMD64CMPBloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPBloadidx1(v)
case OpAMD64CMPLconstload: case OpAMD64CMPLconstload:
return rewriteValueAMD64splitload_OpAMD64CMPLconstload(v) return rewriteValueAMD64splitload_OpAMD64CMPLconstload(v)
case OpAMD64CMPLconstloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPLconstloadidx1(v)
case OpAMD64CMPLconstloadidx4:
return rewriteValueAMD64splitload_OpAMD64CMPLconstloadidx4(v)
case OpAMD64CMPLload: case OpAMD64CMPLload:
return rewriteValueAMD64splitload_OpAMD64CMPLload(v) return rewriteValueAMD64splitload_OpAMD64CMPLload(v)
case OpAMD64CMPLloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPLloadidx1(v)
case OpAMD64CMPLloadidx4:
return rewriteValueAMD64splitload_OpAMD64CMPLloadidx4(v)
case OpAMD64CMPQconstload: case OpAMD64CMPQconstload:
return rewriteValueAMD64splitload_OpAMD64CMPQconstload(v) return rewriteValueAMD64splitload_OpAMD64CMPQconstload(v)
case OpAMD64CMPQconstloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPQconstloadidx1(v)
case OpAMD64CMPQconstloadidx8:
return rewriteValueAMD64splitload_OpAMD64CMPQconstloadidx8(v)
case OpAMD64CMPQload: case OpAMD64CMPQload:
return rewriteValueAMD64splitload_OpAMD64CMPQload(v) return rewriteValueAMD64splitload_OpAMD64CMPQload(v)
case OpAMD64CMPQloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPQloadidx1(v)
case OpAMD64CMPQloadidx8:
return rewriteValueAMD64splitload_OpAMD64CMPQloadidx8(v)
case OpAMD64CMPWconstload: case OpAMD64CMPWconstload:
return rewriteValueAMD64splitload_OpAMD64CMPWconstload(v) return rewriteValueAMD64splitload_OpAMD64CMPWconstload(v)
case OpAMD64CMPWconstloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPWconstloadidx1(v)
case OpAMD64CMPWconstloadidx2:
return rewriteValueAMD64splitload_OpAMD64CMPWconstloadidx2(v)
case OpAMD64CMPWload: case OpAMD64CMPWload:
return rewriteValueAMD64splitload_OpAMD64CMPWload(v) return rewriteValueAMD64splitload_OpAMD64CMPWload(v)
case OpAMD64CMPWloadidx1:
return rewriteValueAMD64splitload_OpAMD64CMPWloadidx1(v)
case OpAMD64CMPWloadidx2:
return rewriteValueAMD64splitload_OpAMD64CMPWloadidx2(v)
} }
return false return false
} }
@ -46,6 +74,30 @@ func rewriteValueAMD64splitload_OpAMD64CMPBconstload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPBconstloadidx1(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPBconstloadidx1 {sym} [vo] ptr idx mem)
// result: (CMPBconst (MOVBloadidx1 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPBconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVBloadidx1, typ.UInt8)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPBload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPBload(v *Value) bool {
v_2 := v.Args[2] v_2 := v.Args[2]
v_1 := v.Args[1] v_1 := v.Args[1]
@ -69,6 +121,31 @@ func rewriteValueAMD64splitload_OpAMD64CMPBload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPBloadidx1(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPBloadidx1 {sym} [off] ptr idx x mem)
// result: (CMPB (MOVBloadidx1 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPB)
v0 := b.NewValue0(v.Pos, OpAMD64MOVBloadidx1, typ.UInt8)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPLconstload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPLconstload(v *Value) bool {
v_1 := v.Args[1] v_1 := v.Args[1]
v_0 := v.Args[0] v_0 := v.Args[0]
@ -91,6 +168,54 @@ func rewriteValueAMD64splitload_OpAMD64CMPLconstload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPLconstloadidx1(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPLconstloadidx1 {sym} [vo] ptr idx mem)
// result: (CMPLconst (MOVLloadidx1 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPLconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPLconstloadidx4(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPLconstloadidx4 {sym} [vo] ptr idx mem)
// result: (CMPLconst (MOVLloadidx4 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPLconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx4, typ.UInt32)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPLload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPLload(v *Value) bool {
v_2 := v.Args[2] v_2 := v.Args[2]
v_1 := v.Args[1] v_1 := v.Args[1]
@ -114,6 +239,56 @@ func rewriteValueAMD64splitload_OpAMD64CMPLload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPLloadidx1(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPLloadidx1 {sym} [off] ptr idx x mem)
// result: (CMPL (MOVLloadidx1 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPL)
v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPLloadidx4(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPLloadidx4 {sym} [off] ptr idx x mem)
// result: (CMPL (MOVLloadidx4 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPL)
v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx4, typ.UInt32)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPQconstload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPQconstload(v *Value) bool {
v_1 := v.Args[1] v_1 := v.Args[1]
v_0 := v.Args[0] v_0 := v.Args[0]
@ -136,6 +311,54 @@ func rewriteValueAMD64splitload_OpAMD64CMPQconstload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPQconstloadidx1(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPQconstloadidx1 {sym} [vo] ptr idx mem)
// result: (CMPQconst (MOVQloadidx1 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPQconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPQconstloadidx8(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPQconstloadidx8 {sym} [vo] ptr idx mem)
// result: (CMPQconst (MOVQloadidx8 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPQconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx8, typ.UInt64)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPQload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPQload(v *Value) bool {
v_2 := v.Args[2] v_2 := v.Args[2]
v_1 := v.Args[1] v_1 := v.Args[1]
@ -159,6 +382,56 @@ func rewriteValueAMD64splitload_OpAMD64CMPQload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPQloadidx1(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPQloadidx1 {sym} [off] ptr idx x mem)
// result: (CMPQ (MOVQloadidx1 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPQ)
v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPQloadidx8(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPQloadidx8 {sym} [off] ptr idx x mem)
// result: (CMPQ (MOVQloadidx8 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPQ)
v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx8, typ.UInt64)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPWconstload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPWconstload(v *Value) bool {
v_1 := v.Args[1] v_1 := v.Args[1]
v_0 := v.Args[0] v_0 := v.Args[0]
@ -181,6 +454,54 @@ func rewriteValueAMD64splitload_OpAMD64CMPWconstload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPWconstloadidx1(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPWconstloadidx1 {sym} [vo] ptr idx mem)
// result: (CMPWconst (MOVWloadidx1 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPWconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPWconstloadidx2(v *Value) bool {
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPWconstloadidx2 {sym} [vo] ptr idx mem)
// result: (CMPWconst (MOVWloadidx2 {sym} [offOnly(vo)] ptr idx mem) [valOnly(vo)])
for {
vo := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
mem := v_2
v.reset(OpAMD64CMPWconst)
v.AuxInt = valOnly(vo)
v0 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx2, typ.UInt16)
v0.AuxInt = offOnly(vo)
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg(v0)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPWload(v *Value) bool { func rewriteValueAMD64splitload_OpAMD64CMPWload(v *Value) bool {
v_2 := v.Args[2] v_2 := v.Args[2]
v_1 := v.Args[1] v_1 := v.Args[1]
@ -204,6 +525,56 @@ func rewriteValueAMD64splitload_OpAMD64CMPWload(v *Value) bool {
return true return true
} }
} }
func rewriteValueAMD64splitload_OpAMD64CMPWloadidx1(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPWloadidx1 {sym} [off] ptr idx x mem)
// result: (CMPW (MOVWloadidx1 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPW)
v0 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteValueAMD64splitload_OpAMD64CMPWloadidx2(v *Value) bool {
v_3 := v.Args[3]
v_2 := v.Args[2]
v_1 := v.Args[1]
v_0 := v.Args[0]
b := v.Block
typ := &b.Func.Config.Types
// match: (CMPWloadidx2 {sym} [off] ptr idx x mem)
// result: (CMPW (MOVWloadidx2 {sym} [off] ptr idx mem) x)
for {
off := v.AuxInt
sym := v.Aux
ptr := v_0
idx := v_1
x := v_2
mem := v_3
v.reset(OpAMD64CMPW)
v0 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx2, typ.UInt16)
v0.AuxInt = off
v0.Aux = sym
v0.AddArg3(ptr, idx, mem)
v.AddArg2(v0, x)
return true
}
}
func rewriteBlockAMD64splitload(b *Block) bool { func rewriteBlockAMD64splitload(b *Block) bool {
switch b.Kind { switch b.Kind {
} }
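Every `constloadidx` rule above unpacks its `auxSymValAndOff` payload with `valOnly`/`offOnly`. In the SSA backend that payload packs the comparison constant and the load offset into a single int64 — value in the high 32 bits, offset in the low 32. A hedged sketch of that packing convention (names match the helpers the rules call, but this is not the compiler's own code):

```go
package main

import "fmt"

// makeValAndOff packs a 32-bit value and offset into one int64, the
// shape the splitload rules later unpack with valOnly/offOnly.
// (Sketch of the cmd/compile convention, not the compiler source.)
func makeValAndOff(val, off int32) int64 {
	return int64(val)<<32 | int64(uint32(off))
}

// valOnly recovers the value from the high 32 bits.
func valOnly(vo int64) int64 { return int64(int32(vo >> 32)) }

// offOnly recovers the (sign-extended) offset from the low 32 bits.
func offOnly(vo int64) int64 { return int64(int32(vo)) }

func main() {
	vo := makeValAndOff(7, -8)
	fmt.Println(valOnly(vo), offOnly(vo)) // round-trips both halves
}
```

Splitting is what lets a fused compare-with-indexed-load be lowered back into a plain MOVx indexed load feeding a CMPx when the fused form cannot be used.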

View file

@ -50,6 +50,14 @@ func rewriteValuegeneric(v *Value) bool {
return rewriteValuegeneric_OpConstString(v) return rewriteValuegeneric_OpConstString(v)
case OpConvert: case OpConvert:
return rewriteValuegeneric_OpConvert(v) return rewriteValuegeneric_OpConvert(v)
case OpCtz16:
return rewriteValuegeneric_OpCtz16(v)
case OpCtz32:
return rewriteValuegeneric_OpCtz32(v)
case OpCtz64:
return rewriteValuegeneric_OpCtz64(v)
case OpCtz8:
return rewriteValuegeneric_OpCtz8(v)
case OpCvt32Fto32: case OpCvt32Fto32:
return rewriteValuegeneric_OpCvt32Fto32(v) return rewriteValuegeneric_OpCvt32Fto32(v)
case OpCvt32Fto64: case OpCvt32Fto64:
@ -3983,7 +3991,7 @@ func rewriteValuegeneric_OpConvert(v *Value) bool {
v_1 := v.Args[1] v_1 := v.Args[1]
v_0 := v.Args[0] v_0 := v.Args[0]
// match: (Convert (Add64 (Convert ptr mem) off) mem) // match: (Convert (Add64 (Convert ptr mem) off) mem)
// result: (Add64 ptr off) // result: (AddPtr ptr off)
for { for {
if v_0.Op != OpAdd64 { if v_0.Op != OpAdd64 {
break break
@ -4001,14 +4009,14 @@ func rewriteValuegeneric_OpConvert(v *Value) bool {
if mem != v_1 { if mem != v_1 {
continue continue
} }
v.reset(OpAdd64) v.reset(OpAddPtr)
v.AddArg2(ptr, off) v.AddArg2(ptr, off)
return true return true
} }
break break
} }
// match: (Convert (Add32 (Convert ptr mem) off) mem) // match: (Convert (Add32 (Convert ptr mem) off) mem)
// result: (Add32 ptr off) // result: (AddPtr ptr off)
for { for {
if v_0.Op != OpAdd32 { if v_0.Op != OpAdd32 {
break break
@ -4026,7 +4034,7 @@ func rewriteValuegeneric_OpConvert(v *Value) bool {
if mem != v_1 { if mem != v_1 {
continue continue
} }
v.reset(OpAdd32) v.reset(OpAddPtr)
v.AddArg2(ptr, off) v.AddArg2(ptr, off)
return true return true
} }
@ -4048,6 +4056,150 @@ func rewriteValuegeneric_OpConvert(v *Value) bool {
} }
return false return false
} }
func rewriteValuegeneric_OpCtz16(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
config := b.Func.Config
// match: (Ctz16 (Const16 [c]))
// cond: config.PtrSize == 4
// result: (Const32 [ntz16(c)])
for {
if v_0.Op != OpConst16 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 4) {
break
}
v.reset(OpConst32)
v.AuxInt = ntz16(c)
return true
}
// match: (Ctz16 (Const16 [c]))
// cond: config.PtrSize == 8
// result: (Const64 [ntz16(c)])
for {
if v_0.Op != OpConst16 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 8) {
break
}
v.reset(OpConst64)
v.AuxInt = ntz16(c)
return true
}
return false
}
func rewriteValuegeneric_OpCtz32(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
config := b.Func.Config
// match: (Ctz32 (Const32 [c]))
// cond: config.PtrSize == 4
// result: (Const32 [ntz32(c)])
for {
if v_0.Op != OpConst32 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 4) {
break
}
v.reset(OpConst32)
v.AuxInt = ntz32(c)
return true
}
// match: (Ctz32 (Const32 [c]))
// cond: config.PtrSize == 8
// result: (Const64 [ntz32(c)])
for {
if v_0.Op != OpConst32 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 8) {
break
}
v.reset(OpConst64)
v.AuxInt = ntz32(c)
return true
}
return false
}
func rewriteValuegeneric_OpCtz64(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
config := b.Func.Config
// match: (Ctz64 (Const64 [c]))
// cond: config.PtrSize == 4
// result: (Const32 [ntz(c)])
for {
if v_0.Op != OpConst64 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 4) {
break
}
v.reset(OpConst32)
v.AuxInt = ntz(c)
return true
}
// match: (Ctz64 (Const64 [c]))
// cond: config.PtrSize == 8
// result: (Const64 [ntz(c)])
for {
if v_0.Op != OpConst64 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 8) {
break
}
v.reset(OpConst64)
v.AuxInt = ntz(c)
return true
}
return false
}
func rewriteValuegeneric_OpCtz8(v *Value) bool {
v_0 := v.Args[0]
b := v.Block
config := b.Func.Config
// match: (Ctz8 (Const8 [c]))
// cond: config.PtrSize == 4
// result: (Const32 [ntz8(c)])
for {
if v_0.Op != OpConst8 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 4) {
break
}
v.reset(OpConst32)
v.AuxInt = ntz8(c)
return true
}
// match: (Ctz8 (Const8 [c]))
// cond: config.PtrSize == 8
// result: (Const64 [ntz8(c)])
for {
if v_0.Op != OpConst8 {
break
}
c := v_0.AuxInt
if !(config.PtrSize == 8) {
break
}
v.reset(OpConst64)
v.AuxInt = ntz8(c)
return true
}
return false
}
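The ntz/ntz8/ntz16/ntz32 helpers used by these rewrites compute the number of trailing zero bits of a constant, so a Ctz applied to a constant folds to a constant of the target's pointer-sized int type. A standalone sketch of equivalent helpers using the standard library (the names mirror the rewrite rules but are illustrative, not the compiler's actual implementation):

```go
package main

import (
	"fmt"
	"math/bits"
)

// ntz16 counts trailing zero bits in the low 16 bits of c.
// bits.TrailingZeros16(0) is 16, matching Ctz16 of zero.
func ntz16(c int64) int64 { return int64(bits.TrailingZeros16(uint16(c))) }

// ntz64 counts trailing zero bits of the full 64-bit constant.
// bits.TrailingZeros64(0) is 64.
func ntz64(c int64) int64 { return int64(bits.TrailingZeros64(uint64(c))) }

func main() {
	fmt.Println(ntz16(0x10)) // the lowest set bit is bit 4
	fmt.Println(ntz64(0))    // no set bits at all
}
```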
func rewriteValuegeneric_OpCvt32Fto32(v *Value) bool {
v_0 := v.Args[0]
// match: (Cvt32Fto32 (Const32F [c]))
@@ -2694,15 +2694,15 @@
// Go module mirror run by Google and fall back to a direct connection
// if the proxy reports that it does not have the module (HTTP error 404 or 410).
// See https://proxy.golang.org/privacy for the service's privacy policy.
-// If GOPROXY is set to the string "direct", downloads use a direct connection
-// to source control servers. Setting GOPROXY to "off" disallows downloading
-// modules from any source. Otherwise, GOPROXY is expected to be a comma-separated
-// list of the URLs of module proxies, in which case the go command will fetch
-// modules from those proxies. For each request, the go command tries each proxy
-// in sequence, only moving to the next if the current proxy returns a 404 or 410
-// HTTP response. The string "direct" may appear in the proxy list,
-// to cause a direct connection to be attempted at that point in the search.
-// Any proxies listed after "direct" are never consulted.
+//
+// If GOPROXY is set to the string "direct", downloads use a direct connection to
+// source control servers. Setting GOPROXY to "off" disallows downloading modules
+// from any source. Otherwise, GOPROXY is expected to be a list of module proxy URLs
+// separated by either comma (,) or pipe (|) characters, which control error
+// fallback behavior. For each request, the go command tries each proxy in
+// sequence. If there is an error, the go command will try the next proxy in the
+// list if the error is a 404 or 410 HTTP response or if the current proxy is
+// followed by a pipe character, indicating it is safe to fall back on any error.
//
// The GOPRIVATE and GONOPROXY environment variables allow bypassing
// the proxy for selected modules. See 'go help module-private' for details.
@@ -2662,7 +2662,7 @@ func TestBadCommandLines(t *testing.T) {
tg.tempFile("src/-x/x.go", "package x\n")
tg.setenv("GOPATH", tg.path("."))
tg.runFail("build", "--", "-x")
-tg.grepStderr("invalid input directory name \"-x\"", "did not reject -x directory")
+tg.grepStderr("invalid import path \"-x\"", "did not reject -x import path")
tg.tempFile("src/-x/y/y.go", "package y\n")
tg.setenv("GOPATH", tg.path("."))
@@ -318,16 +318,16 @@ func (p *Package) copyBuild(pp *build.Package) {
// A PackageError describes an error loading information about a package.
type PackageError struct {
ImportStack []string // shortest path from package named on command line to this one
Pos string // position of error
Err error // the error itself
IsImportCycle bool // the error is an import cycle
Hard bool // whether the error is soft or hard; soft errors are ignored in some places
+alwaysPrintStack bool // whether to always print the ImportStack
}
func (p *PackageError) Error() string {
-// Import cycles deserve special treatment.
-if p.Pos != "" && !p.IsImportCycle {
+if p.Pos != "" && (len(p.ImportStack) == 0 || !p.alwaysPrintStack) {
// Omit import stack. The full path to the file where the error
// is the most important thing.
return p.Pos + ": " + p.Err.Error()
@@ -339,15 +339,14 @@ func (p *PackageError) Error() string {
// last path on the stack, we don't omit the path. An error like
// "package A imports B: error loading C caused by B" would not be clearer
// if "imports B" were omitted.
-stack := p.ImportStack
-var ierr ImportPathError
-if len(stack) > 0 && errors.As(p.Err, &ierr) && ierr.ImportPath() == stack[len(stack)-1] {
-stack = stack[:len(stack)-1]
-}
-if len(stack) == 0 {
+if len(p.ImportStack) == 0 {
return p.Err.Error()
}
-return "package " + strings.Join(stack, "\n\timports ") + ": " + p.Err.Error()
+var optpos string
+if p.Pos != "" {
+optpos = "\n\t" + p.Pos
+}
+return "package " + strings.Join(p.ImportStack, "\n\timports ") + optpos + ": " + p.Err.Error()
}
func (p *PackageError) Unwrap() error { return p.Err }
@@ -549,9 +548,6 @@ func loadImport(pre *preload, path, srcDir string, parent *Package, stk *ImportS
panic("LoadImport called with empty package path")
}
-stk.Push(path)
-defer stk.Pop()
var parentPath, parentRoot string
parentIsStd := false
if parent != nil {
@@ -564,6 +560,11 @@ func loadImport(pre *preload, path, srcDir string, parent *Package, stk *ImportS
pre.preloadImports(bp.Imports, bp)
}
if bp == nil {
+if importErr, ok := err.(ImportPathError); !ok || importErr.ImportPath() != path {
+// Only add path to the error's import stack if it's not already present on the error.
+stk.Push(path)
+defer stk.Pop()
+}
return &Package{
PackagePublic: PackagePublic{
ImportPath: path,
@@ -578,7 +579,9 @@ func loadImport(pre *preload, path, srcDir string, parent *Package, stk *ImportS
importPath := bp.ImportPath
p := packageCache[importPath]
if p != nil {
+stk.Push(path)
p = reusePackage(p, stk)
+stk.Pop()
} else {
p = new(Package)
p.Internal.Local = build.IsLocalImport(path)
@@ -588,8 +591,11 @@ func loadImport(pre *preload, path, srcDir string, parent *Package, stk *ImportS
// Load package.
// loadPackageData may return bp != nil even if an error occurs,
// in order to return partial information.
-p.load(stk, bp, err)
-if p.Error != nil && p.Error.Pos == "" {
+p.load(path, stk, bp, err)
+// Add position information unless this is a NoGoError or an ImportCycle error.
+// Import cycles deserve special treatment.
+var g *build.NoGoError
+if p.Error != nil && p.Error.Pos == "" && !errors.As(err, &g) && !p.Error.IsImportCycle {
p = setErrorPos(p, importPos)
}
@@ -608,7 +614,7 @@ func loadImport(pre *preload, path, srcDir string, parent *Package, stk *ImportS
return setErrorPos(perr, importPos)
}
if mode&ResolveImport != 0 {
-if perr := disallowVendor(srcDir, path, p, stk); perr != p {
+if perr := disallowVendor(srcDir, path, parentPath, p, stk); perr != p {
return setErrorPos(perr, importPos)
}
}
@@ -1246,7 +1252,7 @@ func disallowInternal(srcDir string, importer *Package, importerPath string, p *
// as if it were generated into the testing directory tree
// (it's actually in a temporary directory outside any Go tree).
// This cleans up a former kludge in passing functionality to the testing package.
-if strings.HasPrefix(p.ImportPath, "testing/internal") && len(*stk) >= 2 && (*stk)[len(*stk)-2] == "testmain" {
+if str.HasPathPrefix(p.ImportPath, "testing/internal") && importerPath == "testmain" {
return p
}
@@ -1262,11 +1268,10 @@ func disallowInternal(srcDir string, importer *Package, importerPath string, p *
return p
}
-// The stack includes p.ImportPath.
-// If that's the only thing on the stack, we started
+// importerPath is empty: we started
// with a name given on the command line, not an
// import. Anything listed on the command line is fine.
-if len(*stk) == 1 {
+if importerPath == "" {
return p
}
@@ -1315,8 +1320,9 @@ func disallowInternal(srcDir string, importer *Package, importerPath string, p *
// Internal is present, and srcDir is outside parent's tree. Not allowed.
perr := *p
perr.Error = &PackageError{
-ImportStack: stk.Copy(),
-Err: ImportErrorf(p.ImportPath, "use of internal package "+p.ImportPath+" not allowed"),
+alwaysPrintStack: true,
+ImportStack: stk.Copy(),
+Err: ImportErrorf(p.ImportPath, "use of internal package "+p.ImportPath+" not allowed"),
}
perr.Incomplete = true
return &perr
@@ -1344,16 +1350,15 @@ func findInternal(path string) (index int, ok bool) {
// disallowVendor checks that srcDir is allowed to import p as path.
// If the import is allowed, disallowVendor returns the original package p.
// If not, it returns a new package containing just an appropriate error.
-func disallowVendor(srcDir string, path string, p *Package, stk *ImportStack) *Package {
-// The stack includes p.ImportPath.
-// If that's the only thing on the stack, we started
+func disallowVendor(srcDir string, path string, importerPath string, p *Package, stk *ImportStack) *Package {
+// If the importerPath is empty, we started
// with a name given on the command line, not an
// import. Anything listed on the command line is fine.
-if len(*stk) == 1 {
+if importerPath == "" {
return p
}
-if perr := disallowVendorVisibility(srcDir, p, stk); perr != p {
+if perr := disallowVendorVisibility(srcDir, p, importerPath, stk); perr != p {
return perr
}
@@ -1376,12 +1381,12 @@ func disallowVendor(srcDir string, path string, p *Package, stk *ImportStack) *P
// is not subject to the rules, only subdirectories of vendor.
// This allows people to have packages and commands named vendor,
// for maximal compatibility with existing source trees.
-func disallowVendorVisibility(srcDir string, p *Package, stk *ImportStack) *Package {
-// The stack includes p.ImportPath.
-// If that's the only thing on the stack, we started
+func disallowVendorVisibility(srcDir string, p *Package, importerPath string, stk *ImportStack) *Package {
+// The stack does not include p.ImportPath.
+// If there's nothing on the stack, we started
// with a name given on the command line, not an
// import. Anything listed on the command line is fine.
-if len(*stk) == 1 {
+if importerPath == "" {
return p
}
@@ -1525,7 +1530,8 @@ func (p *Package) DefaultExecName() string {
// load populates p using information from bp, err, which should
// be the result of calling build.Context.Import.
-func (p *Package) load(stk *ImportStack, bp *build.Package, err error) {
+// stk contains the import stack, not including path itself.
+func (p *Package) load(path string, stk *ImportStack, bp *build.Package, err error) {
p.copyBuild(bp)
// The localPrefix is the path we interpret ./ imports relative to.
@@ -1548,7 +1554,16 @@ func (p *Package) load(stk *ImportStack, bp *build.Package, err error) {
if err != nil {
p.Incomplete = true
+// Report path in error stack unless err is an ImportPathError with path already set.
+pushed := false
+if e, ok := err.(ImportPathError); !ok || e.ImportPath() != path {
+stk.Push(path)
+pushed = true // Remember to pop after setError.
+}
setError(base.ExpandScanner(p.rewordError(err)))
+if pushed {
+stk.Pop()
+}
if _, isScanErr := err.(scanner.ErrorList); !isScanErr {
return
}
@@ -1675,6 +1690,23 @@ func (p *Package) load(stk *ImportStack, bp *build.Package, err error) {
}
}
+
+// Check for case-insensitive collisions of import paths.
+fold := str.ToFold(p.ImportPath)
+if other := foldPath[fold]; other == "" {
+foldPath[fold] = p.ImportPath
+} else if other != p.ImportPath {
+setError(ImportErrorf(p.ImportPath, "case-insensitive import collision: %q and %q", p.ImportPath, other))
+return
+}
+
+if !SafeArg(p.ImportPath) {
+setError(ImportErrorf(p.ImportPath, "invalid import path %q", p.ImportPath))
+return
+}
+stk.Push(path)
+defer stk.Pop()
// Check for case-insensitive collision of input files.
// To avoid problems on case-insensitive files, we reject any package
// where two different input files have equal names under a case-insensitive
@@ -1703,10 +1735,6 @@ func (p *Package) load(stk *ImportStack, bp *build.Package, err error) {
setError(fmt.Errorf("invalid input directory name %q", name))
return
}
-if !SafeArg(p.ImportPath) {
-setError(ImportErrorf(p.ImportPath, "invalid import path %q", p.ImportPath))
-return
-}
// Build list of imported packages and full dependency list.
imports := make([]*Package, 0, len(p.Imports))
@@ -1770,15 +1798,6 @@ func (p *Package) load(stk *ImportStack, bp *build.Package, err error) {
return
}
-// Check for case-insensitive collisions of import paths.
-fold := str.ToFold(p.ImportPath)
-if other := foldPath[fold]; other == "" {
-foldPath[fold] = p.ImportPath
-} else if other != p.ImportPath {
-setError(ImportErrorf(p.ImportPath, "case-insensitive import collision: %q and %q", p.ImportPath, other))
-return
-}
if cfg.ModulesEnabled && p.Error == nil {
mainPath := p.ImportPath
if p.Internal.CmdlineFiles {
@@ -2266,9 +2285,7 @@ func GoFilesPackage(gofiles []string) *Package {
pkg := new(Package)
pkg.Internal.Local = true
pkg.Internal.CmdlineFiles = true
-stk.Push("main")
-pkg.load(&stk, bp, err)
-stk.Pop()
+pkg.load("command-line-arguments", &stk, bp, err)
pkg.Internal.LocalPrefix = dirToImportPath(dir)
pkg.ImportPath = "command-line-arguments"
pkg.Target = ""
@@ -56,7 +56,6 @@ func TestPackagesFor(p *Package, cover *TestCover) (pmain, ptest, pxtest *Packag
}
if len(p1.DepsErrors) > 0 {
perr := p1.DepsErrors[0]
-perr.Pos = "" // show full import stack
err = perr
break
}
@@ -101,27 +101,51 @@ cached module versions with GOPROXY=https://example.com/proxy.
var proxyOnce struct {
sync.Once
-list []string
+list []proxySpec
err error
}
-func proxyURLs() ([]string, error) {
+type proxySpec struct {
+// url is the proxy URL or one of "off", "direct", "noproxy".
+url string
+// fallBackOnError is true if a request should be attempted on the next proxy
+// in the list after any error from this proxy. If fallBackOnError is false,
+// the request will only be attempted on the next proxy if the error is
+// equivalent to os.ErrNotFound, which is true for 404 and 410 responses.
+fallBackOnError bool
+}
+
+func proxyList() ([]proxySpec, error) {
proxyOnce.Do(func() {
if cfg.GONOPROXY != "" && cfg.GOPROXY != "direct" {
-proxyOnce.list = append(proxyOnce.list, "noproxy")
+proxyOnce.list = append(proxyOnce.list, proxySpec{url: "noproxy"})
}
-for _, proxyURL := range strings.Split(cfg.GOPROXY, ",") {
-proxyURL = strings.TrimSpace(proxyURL)
-if proxyURL == "" {
+
+goproxy := cfg.GOPROXY
+for goproxy != "" {
+var url string
+fallBackOnError := false
+if i := strings.IndexAny(goproxy, ",|"); i >= 0 {
+url = goproxy[:i]
+fallBackOnError = goproxy[i] == '|'
+goproxy = goproxy[i+1:]
+} else {
+url = goproxy
+goproxy = ""
+}
+
+url = strings.TrimSpace(url)
+if url == "" {
continue
}
-if proxyURL == "off" {
+if url == "off" {
// "off" always fails hard, so can stop walking list.
-proxyOnce.list = append(proxyOnce.list, "off")
+proxyOnce.list = append(proxyOnce.list, proxySpec{url: "off"})
break
}
-if proxyURL == "direct" {
-proxyOnce.list = append(proxyOnce.list, "direct")
+if url == "direct" {
+proxyOnce.list = append(proxyOnce.list, proxySpec{url: "direct"})
// For now, "direct" is the end of the line. We may decide to add some
// sort of fallback behavior for them in the future, so ignore
// subsequent entries for forward-compatibility.
@@ -131,18 +155,21 @@ func proxyURLs() ([]string, error) {
// Single-word tokens are reserved for built-in behaviors, and anything
// containing the string ":/" or matching an absolute file path must be a
// complete URL. For all other paths, implicitly add "https://".
-if strings.ContainsAny(proxyURL, ".:/") && !strings.Contains(proxyURL, ":/") && !filepath.IsAbs(proxyURL) && !path.IsAbs(proxyURL) {
-proxyURL = "https://" + proxyURL
+if strings.ContainsAny(url, ".:/") && !strings.Contains(url, ":/") && !filepath.IsAbs(url) && !path.IsAbs(url) {
+url = "https://" + url
}
// Check that newProxyRepo accepts the URL.
// It won't do anything with the path.
-_, err := newProxyRepo(proxyURL, "golang.org/x/text")
-if err != nil {
+if _, err := newProxyRepo(url, "golang.org/x/text"); err != nil {
proxyOnce.err = err
return
}
-proxyOnce.list = append(proxyOnce.list, proxyURL)
+
+proxyOnce.list = append(proxyOnce.list, proxySpec{
+url: url,
+fallBackOnError: fallBackOnError,
+})
}
})
@@ -150,15 +177,16 @@ func proxyURLs() ([]string, error) {
}
// TryProxies iterates f over each configured proxy (including "noproxy" and
-// "direct" if applicable) until f returns an error that is not
-// equivalent to os.ErrNotExist.
+// "direct" if applicable) until f returns no error or until f returns an
+// error that is not equivalent to os.ErrNotExist on a proxy configured
+// not to fall back on errors.
//
// TryProxies then returns that final error.
//
// If GOPROXY is set to "off", TryProxies invokes f once with the argument
// "off".
func TryProxies(f func(proxy string) error) error {
-proxies, err := proxyURLs()
+proxies, err := proxyList()
if err != nil {
return err
}
@@ -166,28 +194,39 @@ func TryProxies(f func(proxy string) error) error {
return f("off")
}
-var lastAttemptErr error
+// We try to report the most helpful error to the user. "direct" and "noproxy"
+// errors are best, followed by proxy errors other than ErrNotExist, followed
+// by ErrNotExist. Note that errProxyOff, errNoproxy, and errUseProxy are
+// equivalent to ErrNotExist.
+const (
+notExistRank = iota
+proxyRank
+directRank
+)
+var bestErr error
+bestErrRank := notExistRank
for _, proxy := range proxies {
-err = f(proxy)
-if !errors.Is(err, os.ErrNotExist) {
-lastAttemptErr = err
-break
+err := f(proxy.url)
+if err == nil {
+return nil
}
+isNotExistErr := errors.Is(err, os.ErrNotExist)
+if proxy.url == "direct" || proxy.url == "noproxy" {
+bestErr = err
+bestErrRank = directRank
+} else if bestErrRank <= proxyRank && !isNotExistErr {
+bestErr = err
+bestErrRank = proxyRank
+} else if bestErrRank == notExistRank {
+bestErr = err
+}
-// The error indicates that the module does not exist.
-// In general we prefer to report the last such error,
-// because it indicates the error that occurs after all other
-// options have been exhausted.
-//
-// However, for modules in the NOPROXY list, the most useful error occurs
-// first (with proxy set to "noproxy"), and the subsequent errors are all
-// errNoProxy (which is not particularly helpful). Do not overwrite a more
-// useful error with errNoproxy.
-if lastAttemptErr == nil || !errors.Is(err, errNoproxy) {
-lastAttemptErr = err
+if !proxy.fallBackOnError && !isNotExistErr {
+break
}
}
-return lastAttemptErr
+return bestErr
}
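The comma/pipe splitting that proxyList performs can be exercised on its own. A minimal sketch of the same scanning loop, separated from the go command's configuration and validation (the `entry` type and `splitGOPROXY` name are illustrative, not part of the real package):

```go
package main

import (
	"fmt"
	"strings"
)

// entry records one GOPROXY element and whether the separator that
// followed it was '|' (fall back on any error) rather than ','
// (fall back only on a 404/410 response).
type entry struct {
	url             string
	fallBackOnError bool
}

// splitGOPROXY scans the GOPROXY value left to right, cutting at the
// first ',' or '|' each time and recording which separator was seen.
func splitGOPROXY(goproxy string) []entry {
	var list []entry
	for goproxy != "" {
		var url string
		fallBackOnError := false
		if i := strings.IndexAny(goproxy, ",|"); i >= 0 {
			url = goproxy[:i]
			fallBackOnError = goproxy[i] == '|'
			goproxy = goproxy[i+1:]
		} else {
			url = goproxy
			goproxy = ""
		}
		if url = strings.TrimSpace(url); url == "" {
			continue // skip empty elements, as proxyList does
		}
		list = append(list, entry{url, fallBackOnError})
	}
	return list
}

func main() {
	for _, e := range splitGOPROXY("https://proxy.example|https://mirror.example,direct") {
		fmt.Println(e.url, e.fallBackOnError)
	}
}
```

Note that the flag belongs to the proxy *before* the separator: a proxy followed by '|' may be skipped on any error, while the last element (followed by nothing) never falls back.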
type proxyRepo struct {
@@ -26,6 +26,7 @@ import (
"cmd/go/internal/lockedfile"
"cmd/go/internal/str"
"cmd/go/internal/web"
"golang.org/x/mod/module"
"golang.org/x/mod/sumdb"
"golang.org/x/mod/sumdb/note"
@@ -146,49 +147,50 @@ func (c *dbClient) initBase() {
}
// Try proxies in turn until we find out how to connect to this database.
-urls, err := proxyURLs()
-if err != nil {
-c.baseErr = err
-return
-}
-for _, proxyURL := range urls {
-if proxyURL == "noproxy" {
-continue
-}
-if proxyURL == "direct" || proxyURL == "off" {
-break
-}
-proxy, err := url.Parse(proxyURL)
-if err != nil {
-c.baseErr = err
-return
-}
-// Quoting https://golang.org/design/25530-sumdb#proxying-a-checksum-database:
-//
-// Before accessing any checksum database URL using a proxy,
-// the proxy client should first fetch <proxyURL>/sumdb/<sumdb-name>/supported.
-// If that request returns a successful (HTTP 200) response, then the proxy supports
-// proxying checksum database requests. In that case, the client should use
-// the proxied access method only, never falling back to a direct connection to the database.
-// If the /sumdb/<sumdb-name>/supported check fails with a “not found” (HTTP 404)
-// or “gone” (HTTP 410) response, the proxy is unwilling to proxy the checksum database,
-// and the client should connect directly to the database.
-// Any other response is treated as the database being unavailable.
-_, err = web.GetBytes(web.Join(proxy, "sumdb/"+c.name+"/supported"))
-if err == nil {
-// Success! This proxy will help us.
-c.base = web.Join(proxy, "sumdb/"+c.name)
-return
-}
-// If the proxy serves a non-404/410, give up.
-if !errors.Is(err, os.ErrNotExist) {
-c.baseErr = err
-return
-}
-}
-// No proxies, or all proxies said 404, or we reached an explicit "direct".
-c.base = c.direct
+//
+// Before accessing any checksum database URL using a proxy, the proxy
+// client should first fetch <proxyURL>/sumdb/<sumdb-name>/supported.
+//
+// If that request returns a successful (HTTP 200) response, then the proxy
+// supports proxying checksum database requests. In that case, the client
+// should use the proxied access method only, never falling back to a direct
+// connection to the database.
+//
+// If the /sumdb/<sumdb-name>/supported check fails with a “not found” (HTTP
+// 404) or “gone” (HTTP 410) response, or if the proxy is configured to fall
+// back on errors, the client will try the next proxy. If there are no
+// proxies left or if the proxy is "direct" or "off", the client should
+// connect directly to that database.
+//
+// Any other response is treated as the database being unavailable.
+//
+// See https://golang.org/design/25530-sumdb#proxying-a-checksum-database.
+err := TryProxies(func(proxy string) error {
+switch proxy {
+case "noproxy":
+return errUseProxy
+case "direct", "off":
+return errProxyOff
+default:
+proxyURL, err := url.Parse(proxy)
+if err != nil {
+return err
+}
+if _, err := web.GetBytes(web.Join(proxyURL, "sumdb/"+c.name+"/supported")); err != nil {
+return err
+}
+// Success! This proxy will help us.
+c.base = web.Join(proxyURL, "sumdb/"+c.name)
+return nil
+}
+})
+if errors.Is(err, os.ErrNotExist) {
+// No proxies, or all proxies failed (with 404, 410, or were allowed
+// to fall back), or we reached an explicit "direct" or "off".
+c.base = c.direct
+} else if err != nil {
+c.baseErr = err
+}
}
@@ -363,15 +363,15 @@ variable (see 'go help env'). The default setting for GOPROXY is
Go module mirror run by Google and fall back to a direct connection
if the proxy reports that it does not have the module (HTTP error 404 or 410).
See https://proxy.golang.org/privacy for the service's privacy policy.
-If GOPROXY is set to the string "direct", downloads use a direct connection
-to source control servers. Setting GOPROXY to "off" disallows downloading
-modules from any source. Otherwise, GOPROXY is expected to be a comma-separated
-list of the URLs of module proxies, in which case the go command will fetch
-modules from those proxies. For each request, the go command tries each proxy
-in sequence, only moving to the next if the current proxy returns a 404 or 410
-HTTP response. The string "direct" may appear in the proxy list,
-to cause a direct connection to be attempted at that point in the search.
-Any proxies listed after "direct" are never consulted.
+
+If GOPROXY is set to the string "direct", downloads use a direct connection to
+source control servers. Setting GOPROXY to "off" disallows downloading modules
+from any source. Otherwise, GOPROXY is expected to be a list of module proxy URLs
+separated by either comma (,) or pipe (|) characters, which control error
+fallback behavior. For each request, the go command tries each proxy in
+sequence. If there is an error, the go command will try the next proxy in the
+list if the error is a 404 or 410 HTTP response or if the current proxy is
+followed by a pipe character, indicating it is safe to fall back on any error.
The GOPRIVATE and GONOPROXY environment variables allow bypassing
the proxy for selected modules. See 'go help module-private' for details.
@@ -10,7 +10,7 @@ go list -e -f {{.Error}} ./empty
stdout 'no Go files in \$WORK[/\\]empty'
go list -e -f {{.Error}} ./exclude
-stdout 'package example.com/m/exclude: build constraints exclude all Go files in \$WORK[/\\]exclude'
+stdout 'build constraints exclude all Go files in \$WORK[/\\]exclude'
go list -e -f {{.Error}} ./missing
stdout 'stat '$WORK'[/\\]missing: directory not found'
@@ -10,17 +10,25 @@ stderr '404 Not Found'
env GOPROXY=$proxy/404,$proxy/410,$proxy
go get rsc.io/quote@v1.1.0
-# get should not walk past other 4xx errors.
+# get should not walk past other 4xx errors if proxies are separated with ','.
env GOPROXY=$proxy/403,$proxy
! go get rsc.io/quote@v1.2.0
stderr 'reading.*/403/rsc.io/.*: 403 Forbidden'
-# get should not walk past non-4xx errors.
+# get should not walk past non-4xx errors if proxies are separated with ','.
env GOPROXY=$proxy/500,$proxy
! go get rsc.io/quote@v1.3.0
stderr 'reading.*/500/rsc.io/.*: 500 Internal Server Error'
-# get should return the final 404/410 if that's all we have.
+# get should walk past other 4xx errors if proxies are separated with '|'.
+env GOPROXY=$proxy/403|https://0.0.0.0|$proxy
+go get rsc.io/quote@v1.2.0
+
+# get should walk past non-4xx errors if proxies are separated with '|'.
+env GOPROXY=$proxy/500|https://0.0.0.0|$proxy
+go get rsc.io/quote@v1.3.0
+
+# get should return the final error if that's all we have.
env GOPROXY=$proxy/404,$proxy/410
! go get rsc.io/quote@v1.4.0
stderr 'reading.*/410/rsc.io/.*: 410 Gone'


@ -46,5 +46,22 @@ stderr '503 Service Unavailable'
rm $GOPATH/pkg/mod/cache/download/sumdb rm $GOPATH/pkg/mod/cache/download/sumdb
rm go.sum rm go.sum
# the error from the last attempted proxy should be returned.
cp go.mod.orig go.mod
env GOSUMDB=$sumdb
env GOPROXY=$proxy/sumdb-404,$proxy/sumdb-503
! go get -d rsc.io/fortune@v1.0.0
stderr '503 Service Unavailable'
rm $GOPATH/pkg/mod/cache/download/sumdb
rm go.sum
# if proxies are separated with '|', fallback is allowed on any error.
cp go.mod.orig go.mod
env GOSUMDB=$sumdb
env GOPROXY=$proxy/sumdb-503|https://0.0.0.0|$proxy
go get -d rsc.io/fortune@v1.0.0
rm $GOPATH/pkg/mod/cache/download/sumdb
rm go.sum
-- go.mod.orig -- -- go.mod.orig --
module m module m


@ -1,6 +1,9 @@
! go test testdep/p1 ! go test testdep/p1
stderr 'package testdep/p1 \(test\)\n\timports testdep/p2\n\timports testdep/p3: build constraints exclude all Go files ' # check for full import stack stderr 'package testdep/p1 \(test\)\n\timports testdep/p2\n\timports testdep/p3: build constraints exclude all Go files ' # check for full import stack
! go vet testdep/p1
stderr 'package testdep/p1 \(test\)\n\timports testdep/p2\n\timports testdep/p3: build constraints exclude all Go files ' # check for full import stack
-- testdep/p1/p1.go -- -- testdep/p1/p1.go --
package p1 package p1
-- testdep/p1/p1_test.go -- -- testdep/p1/p1_test.go --


@ -3,28 +3,28 @@ env GO111MODULE=off
# Issue 36173. Verify that "go vet" prints line numbers on load errors. # Issue 36173. Verify that "go vet" prints line numbers on load errors.
! go vet a/a.go ! go vet a/a.go
stderr '^a[/\\]a.go:5:3: use of internal package' stderr '^package command-line-arguments\n\ta[/\\]a.go:5:3: use of internal package'
! go vet a/a_test.go ! go vet a/a_test.go
stderr '^package command-line-arguments \(test\): use of internal package' # BUG stderr '^package command-line-arguments \(test\)\n\ta[/\\]a_test.go:4:3: use of internal package'
! go vet a ! go vet a
stderr '^a[/\\]a.go:5:3: use of internal package' stderr '^package a\n\ta[/\\]a.go:5:3: use of internal package'
go vet b/b.go go vet b/b.go
! stderr 'use of internal package' ! stderr 'use of internal package'
! go vet b/b_test.go ! go vet b/b_test.go
stderr '^package command-line-arguments \(test\): use of internal package' # BUG stderr '^package command-line-arguments \(test\)\n\tb[/\\]b_test.go:4:3: use of internal package'
! go vet depends-on-a/depends-on-a.go ! go vet depends-on-a/depends-on-a.go
stderr '^a[/\\]a.go:5:3: use of internal package' stderr '^package command-line-arguments\n\timports a\n\ta[/\\]a.go:5:3: use of internal package'
! go vet depends-on-a/depends-on-a_test.go ! go vet depends-on-a/depends-on-a_test.go
stderr '^package command-line-arguments \(test\)\n\timports a: use of internal package a/x/internal/y not allowed$' # BUG stderr '^package command-line-arguments \(test\)\n\timports a\n\ta[/\\]a.go:5:3: use of internal package a/x/internal/y not allowed'
! go vet depends-on-a ! go vet depends-on-a
stderr '^a[/\\]a.go:5:3: use of internal package' stderr '^package depends-on-a\n\timports a\n\ta[/\\]a.go:5:3: use of internal package'
-- a/a.go -- -- a/a.go --
// A package with bad imports in both src and test // A package with bad imports in both src and test


@ -226,6 +226,16 @@ var Anames = []string{
"HFENCEGVMA", "HFENCEGVMA",
"HFENCEVVMA", "HFENCEVVMA",
"WORD", "WORD",
"BEQZ",
"BGEZ",
"BGT",
"BGTU",
"BGTZ",
"BLE",
"BLEU",
"BLEZ",
"BLTZ",
"BNEZ",
"FNEGD", "FNEGD",
"FNEGS", "FNEGS",
"FNED", "FNED",


@ -12,6 +12,7 @@ import (
"os" "os"
"os/exec" "os/exec"
"path/filepath" "path/filepath"
"runtime"
"testing" "testing"
) )
@ -131,3 +132,20 @@ TEXT _stub(SB),$0-0
t.Errorf("%v\n%s", err, out) t.Errorf("%v\n%s", err, out)
} }
} }
func TestBranch(t *testing.T) {
if testing.Short() {
t.Skip("Skipping in short mode")
}
if runtime.GOARCH != "riscv64" {
t.Skip("Requires riscv64 to run")
}
testenv.MustHaveGoBuild(t)
cmd := exec.Command(testenv.GoToolPath(t), "test")
cmd.Dir = "testdata/testbranch"
if out, err := testenv.CleanCmdEnv(cmd).CombinedOutput(); err != nil {
t.Errorf("Branch test failed: %v\n%s", err, out)
}
}


@ -576,6 +576,16 @@ const (
// Pseudo-instructions. These get translated by the assembler into other // Pseudo-instructions. These get translated by the assembler into other
// instructions, based on their operands. // instructions, based on their operands.
ABEQZ
ABGEZ
ABGT
ABGTU
ABGTZ
ABLE
ABLEU
ABLEZ
ABLTZ
ABNEZ
AFNEGD AFNEGD
AFNEGS AFNEGS
AFNED AFNED


@ -406,20 +406,40 @@ func rewriteMOV(ctxt *obj.Link, newprog obj.ProgAlloc, p *obj.Prog) {
} }
// InvertBranch inverts the condition of a conditional branch. // InvertBranch inverts the condition of a conditional branch.
func InvertBranch(i obj.As) obj.As { func InvertBranch(as obj.As) obj.As {
switch i { switch as {
case ABEQ: case ABEQ:
return ABNE return ABNE
case ABNE: case ABEQZ:
return ABEQ return ABNEZ
case ABLT:
return ABGE
case ABGE: case ABGE:
return ABLT return ABLT
case ABLTU:
return ABGEU
case ABGEU: case ABGEU:
return ABLTU return ABLTU
case ABGEZ:
return ABLTZ
case ABGT:
return ABLE
case ABGTU:
return ABLEU
case ABGTZ:
return ABLEZ
case ABLE:
return ABGT
case ABLEU:
return ABGTU
case ABLEZ:
return ABGTZ
case ABLT:
return ABGE
case ABLTU:
return ABGEU
case ABLTZ:
return ABGEZ
case ABNE:
return ABEQ
case ABNEZ:
return ABEQZ
default: default:
panic("InvertBranch: not a branch") panic("InvertBranch: not a branch")
} }
@ -860,7 +880,7 @@ func preprocess(ctxt *obj.Link, cursym *obj.LSym, newprog obj.ProgAlloc) {
for p := cursym.Func.Text; p != nil; p = p.Link { for p := cursym.Func.Text; p != nil; p = p.Link {
switch p.As { switch p.As {
case ABEQ, ABNE, ABLT, ABGE, ABLTU, ABGEU: case ABEQ, ABEQZ, ABGE, ABGEU, ABGEZ, ABGT, ABGTU, ABGTZ, ABLE, ABLEU, ABLEZ, ABLT, ABLTU, ABLTZ, ABNE, ABNEZ:
if p.To.Type != obj.TYPE_BRANCH { if p.To.Type != obj.TYPE_BRANCH {
panic("assemble: instruction with branch-like opcode lacks destination") panic("assemble: instruction with branch-like opcode lacks destination")
} }
@ -917,7 +937,7 @@ func preprocess(ctxt *obj.Link, cursym *obj.LSym, newprog obj.ProgAlloc) {
// instructions will break everything--don't do it! // instructions will break everything--don't do it!
for p := cursym.Func.Text; p != nil; p = p.Link { for p := cursym.Func.Text; p != nil; p = p.Link {
switch p.As { switch p.As {
case AJAL, ABEQ, ABNE, ABLT, ABLTU, ABGE, ABGEU: case ABEQ, ABEQZ, ABGE, ABGEU, ABGEZ, ABGT, ABGTU, ABGTZ, ABLE, ABLEU, ABLEZ, ABLT, ABLTU, ABLTZ, ABNE, ABNEZ, AJAL:
switch p.To.Type { switch p.To.Type {
case obj.TYPE_BRANCH: case obj.TYPE_BRANCH:
p.To.Type, p.To.Offset = obj.TYPE_CONST, p.Pcond.Pc-p.Pc p.To.Type, p.To.Offset = obj.TYPE_CONST, p.Pcond.Pc-p.Pc
@ -1778,7 +1798,29 @@ func instructionsForProg(p *obj.Prog) []*instruction {
ins.rd, ins.rs2 = uint32(p.From.Reg), obj.REG_NONE ins.rd, ins.rs2 = uint32(p.From.Reg), obj.REG_NONE
ins.imm = p.To.Offset ins.imm = p.To.Offset
case ABEQ, ABNE, ABLT, ABGE, ABLTU, ABGEU: case ABEQ, ABEQZ, ABGE, ABGEU, ABGEZ, ABGT, ABGTU, ABGTZ, ABLE, ABLEU, ABLEZ, ABLT, ABLTU, ABLTZ, ABNE, ABNEZ:
switch ins.as {
case ABEQZ:
ins.as, ins.rs1, ins.rs2 = ABEQ, REG_ZERO, uint32(p.From.Reg)
case ABGEZ:
ins.as, ins.rs1, ins.rs2 = ABGE, REG_ZERO, uint32(p.From.Reg)
case ABGT:
ins.as, ins.rs1, ins.rs2 = ABLT, uint32(p.Reg), uint32(p.From.Reg)
case ABGTU:
ins.as, ins.rs1, ins.rs2 = ABLTU, uint32(p.Reg), uint32(p.From.Reg)
case ABGTZ:
ins.as, ins.rs1, ins.rs2 = ABLT, uint32(p.From.Reg), REG_ZERO
case ABLE:
ins.as, ins.rs1, ins.rs2 = ABGE, uint32(p.Reg), uint32(p.From.Reg)
case ABLEU:
ins.as, ins.rs1, ins.rs2 = ABGEU, uint32(p.Reg), uint32(p.From.Reg)
case ABLEZ:
ins.as, ins.rs1, ins.rs2 = ABGE, uint32(p.From.Reg), REG_ZERO
case ABLTZ:
ins.as, ins.rs1, ins.rs2 = ABLT, REG_ZERO, uint32(p.From.Reg)
case ABNEZ:
ins.as, ins.rs1, ins.rs2 = ABNE, REG_ZERO, uint32(p.From.Reg)
}
ins.imm = p.To.Offset ins.imm = p.To.Offset
case ALW, ALWU, ALH, ALHU, ALB, ALBU, ALD, AFLW, AFLD: case ALW, ALWU, ALH, ALHU, ALB, ALBU, ALD, AFLW, AFLD:


@ -0,0 +1,94 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build riscv64
package testbranch
import (
"testing"
)
func testBEQZ(a int64) (r bool)
func testBGEZ(a int64) (r bool)
func testBGT(a, b int64) (r bool)
func testBGTU(a, b int64) (r bool)
func testBGTZ(a int64) (r bool)
func testBLE(a, b int64) (r bool)
func testBLEU(a, b int64) (r bool)
func testBLEZ(a int64) (r bool)
func testBLTZ(a int64) (r bool)
func testBNEZ(a int64) (r bool)
func TestBranchCondition(t *testing.T) {
tests := []struct{
ins string
a int64
b int64
fn func(a, b int64) bool
want bool
}{
{"BGT", 0, 1, testBGT, true},
{"BGT", 0, 0, testBGT, false},
{"BGT", 0, -1, testBGT, false},
{"BGT", -1, 0, testBGT, true},
{"BGT", 1, 0, testBGT, false},
{"BGTU", 0, 1, testBGTU, true},
{"BGTU", 0, -1, testBGTU, true},
{"BGTU", -1, 0, testBGTU, false},
{"BGTU", 1, 0, testBGTU, false},
{"BLE", 0, 1, testBLE, false},
{"BLE", 0, -1, testBLE, true},
{"BLE", 0, 0, testBLE, true},
{"BLE", -1, 0, testBLE, false},
{"BLE", 1, 0, testBLE, true},
{"BLEU", 0, 1, testBLEU, false},
{"BLEU", 0, -1, testBLEU, false},
{"BLEU", 0, 0, testBLEU, true},
{"BLEU", -1, 0, testBLEU, true},
{"BLEU", 1, 0, testBLEU, true},
}
for _, test := range tests {
t.Run(test.ins, func(t *testing.T) {
if got := test.fn(test.a, test.b); got != test.want {
t.Errorf("%v %v, %v = %v, want %v", test.ins, test.a, test.b, got, test.want)
}
})
}
}
func TestBranchZero(t *testing.T) {
tests := []struct{
ins string
a int64
fn func(a int64) bool
want bool
}{
{"BEQZ", -1, testBEQZ, false},
{"BEQZ", 0, testBEQZ, true},
{"BEQZ", 1, testBEQZ, false},
{"BGEZ", -1, testBGEZ, false},
{"BGEZ", 0, testBGEZ, true},
{"BGEZ", 1, testBGEZ, true},
{"BGTZ", -1, testBGTZ, false},
{"BGTZ", 0, testBGTZ, false},
{"BGTZ", 1, testBGTZ, true},
{"BLEZ", -1, testBLEZ, true},
{"BLEZ", 0, testBLEZ, true},
{"BLEZ", 1, testBLEZ, false},
{"BLTZ", -1, testBLTZ, true},
{"BLTZ", 0, testBLTZ, false},
{"BLTZ", 1, testBLTZ, false},
{"BNEZ", -1, testBNEZ, true},
{"BNEZ", 0, testBNEZ, false},
{"BNEZ", 1, testBNEZ, true},
}
for _, test := range tests {
t.Run(test.ins, func(t *testing.T) {
if got := test.fn(test.a); got != test.want {
t.Errorf("%v %v = %v, want %v", test.ins, test.a, got, test.want)
}
})
}
}


@ -0,0 +1,111 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build riscv64
#include "textflag.h"
// func testBEQZ(a int64) (r bool)
TEXT ·testBEQZ(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV $1, X6
BEQZ X5, b
MOV $0, X6
b:
MOV X6, r+8(FP)
RET
// func testBGEZ(a int64) (r bool)
TEXT ·testBGEZ(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV $1, X6
BGEZ X5, b
MOV $0, X6
b:
MOV X6, r+8(FP)
RET
// func testBGT(a, b int64) (r bool)
TEXT ·testBGT(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV b+8(FP), X6
MOV $1, X7
BGT X5, X6, b
MOV $0, X7
b:
MOV X7, r+16(FP)
RET
// func testBGTU(a, b int64) (r bool)
TEXT ·testBGTU(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV b+8(FP), X6
MOV $1, X7
BGTU X5, X6, b
MOV $0, X7
b:
MOV X7, r+16(FP)
RET
// func testBGTZ(a int64) (r bool)
TEXT ·testBGTZ(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV $1, X6
BGTZ X5, b
MOV $0, X6
b:
MOV X6, r+8(FP)
RET
// func testBLE(a, b int64) (r bool)
TEXT ·testBLE(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV b+8(FP), X6
MOV $1, X7
BLE X5, X6, b
MOV $0, X7
b:
MOV X7, r+16(FP)
RET
// func testBLEU(a, b int64) (r bool)
TEXT ·testBLEU(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV b+8(FP), X6
MOV $1, X7
BLEU X5, X6, b
MOV $0, X7
b:
MOV X7, r+16(FP)
RET
// func testBLEZ(a int64) (r bool)
TEXT ·testBLEZ(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV $1, X6
BLEZ X5, b
MOV $0, X6
b:
MOV X6, r+8(FP)
RET
// func testBLTZ(a int64) (r bool)
TEXT ·testBLTZ(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV $1, X6
BLTZ X5, b
MOV $0, X6
b:
MOV X6, r+8(FP)
RET
// func testBNEZ(a int64) (r bool)
TEXT ·testBNEZ(SB),NOSPLIT,$0-0
MOV a+0(FP), X5
MOV $1, X6
BNEZ X5, b
MOV $0, X6
b:
MOV X6, r+8(FP)
RET


@ -187,7 +187,7 @@ func (fc *FileCache) Line(filename string, line int) ([]byte, error) {
// If filter is non-nil, the disassembly only includes functions with names matching filter. // If filter is non-nil, the disassembly only includes functions with names matching filter.
// If printCode is true, the disassembly includes corresponding source lines. // If printCode is true, the disassembly includes corresponding source lines.
// The disassembly only includes functions that overlap the range [start, end). // The disassembly only includes functions that overlap the range [start, end).
func (d *Disasm) Print(w io.Writer, filter *regexp.Regexp, start, end uint64, printCode bool) { func (d *Disasm) Print(w io.Writer, filter *regexp.Regexp, start, end uint64, printCode bool, gnuAsm bool) {
if start < d.textStart { if start < d.textStart {
start = d.textStart start = d.textStart
} }
@ -229,7 +229,7 @@ func (d *Disasm) Print(w io.Writer, filter *regexp.Regexp, start, end uint64, pr
var lastFile string var lastFile string
var lastLine int var lastLine int
d.Decode(symStart, symEnd, relocs, func(pc, size uint64, file string, line int, text string) { d.Decode(symStart, symEnd, relocs, gnuAsm, func(pc, size uint64, file string, line int, text string) {
i := pc - d.textStart i := pc - d.textStart
if printCode { if printCode {
@ -266,7 +266,7 @@ func (d *Disasm) Print(w io.Writer, filter *regexp.Regexp, start, end uint64, pr
} }
// Decode disassembles the text segment range [start, end), calling f for each instruction. // Decode disassembles the text segment range [start, end), calling f for each instruction.
func (d *Disasm) Decode(start, end uint64, relocs []Reloc, f func(pc, size uint64, file string, line int, text string)) { func (d *Disasm) Decode(start, end uint64, relocs []Reloc, gnuAsm bool, f func(pc, size uint64, file string, line int, text string)) {
if start < d.textStart { if start < d.textStart {
start = d.textStart start = d.textStart
} }
@ -277,7 +277,7 @@ func (d *Disasm) Decode(start, end uint64, relocs []Reloc, f func(pc, size uint6
lookup := d.lookup lookup := d.lookup
for pc := start; pc < end; { for pc := start; pc < end; {
i := pc - d.textStart i := pc - d.textStart
text, size := d.disasm(code[i:], pc, lookup, d.byteOrder) text, size := d.disasm(code[i:], pc, lookup, d.byteOrder, gnuAsm)
file, line, _ := d.pcln.PCToLine(pc) file, line, _ := d.pcln.PCToLine(pc)
sep := "\t" sep := "\t"
for len(relocs) > 0 && relocs[0].Addr < i+uint64(size) { for len(relocs) > 0 && relocs[0].Addr < i+uint64(size) {
@ -291,17 +291,17 @@ func (d *Disasm) Decode(start, end uint64, relocs []Reloc, f func(pc, size uint6
} }
type lookupFunc = func(addr uint64) (sym string, base uint64) type lookupFunc = func(addr uint64) (sym string, base uint64)
type disasmFunc func(code []byte, pc uint64, lookup lookupFunc, ord binary.ByteOrder) (text string, size int) type disasmFunc func(code []byte, pc uint64, lookup lookupFunc, ord binary.ByteOrder, _ bool) (text string, size int)
func disasm_386(code []byte, pc uint64, lookup lookupFunc, _ binary.ByteOrder) (string, int) { func disasm_386(code []byte, pc uint64, lookup lookupFunc, _ binary.ByteOrder, gnuAsm bool) (string, int) {
return disasm_x86(code, pc, lookup, 32) return disasm_x86(code, pc, lookup, 32, gnuAsm)
} }
func disasm_amd64(code []byte, pc uint64, lookup lookupFunc, _ binary.ByteOrder) (string, int) { func disasm_amd64(code []byte, pc uint64, lookup lookupFunc, _ binary.ByteOrder, gnuAsm bool) (string, int) {
return disasm_x86(code, pc, lookup, 64) return disasm_x86(code, pc, lookup, 64, gnuAsm)
} }
func disasm_x86(code []byte, pc uint64, lookup lookupFunc, arch int) (string, int) { func disasm_x86(code []byte, pc uint64, lookup lookupFunc, arch int, gnuAsm bool) (string, int) {
inst, err := x86asm.Decode(code, arch) inst, err := x86asm.Decode(code, arch)
var text string var text string
size := inst.Len size := inst.Len
@ -309,7 +309,11 @@ func disasm_x86(code []byte, pc uint64, lookup lookupFunc, arch int) (string, in
size = 1 size = 1
text = "?" text = "?"
} else { } else {
text = x86asm.GoSyntax(inst, pc, lookup) if gnuAsm {
text = fmt.Sprintf("%-36s // %s", x86asm.GoSyntax(inst, pc, lookup), x86asm.GNUSyntax(inst, pc, nil))
} else {
text = x86asm.GoSyntax(inst, pc, lookup)
}
} }
return text, size return text, size
} }
@ -334,31 +338,35 @@ func (r textReader) ReadAt(data []byte, off int64) (n int, err error) {
return return
} }
func disasm_arm(code []byte, pc uint64, lookup lookupFunc, _ binary.ByteOrder) (string, int) { func disasm_arm(code []byte, pc uint64, lookup lookupFunc, _ binary.ByteOrder, gnuAsm bool) (string, int) {
inst, err := armasm.Decode(code, armasm.ModeARM) inst, err := armasm.Decode(code, armasm.ModeARM)
var text string var text string
size := inst.Len size := inst.Len
if err != nil || size == 0 || inst.Op == 0 { if err != nil || size == 0 || inst.Op == 0 {
size = 4 size = 4
text = "?" text = "?"
} else if gnuAsm {
text = fmt.Sprintf("%-36s // %s", armasm.GoSyntax(inst, pc, lookup, textReader{code, pc}), armasm.GNUSyntax(inst))
} else { } else {
text = armasm.GoSyntax(inst, pc, lookup, textReader{code, pc}) text = armasm.GoSyntax(inst, pc, lookup, textReader{code, pc})
} }
return text, size return text, size
} }
func disasm_arm64(code []byte, pc uint64, lookup lookupFunc, byteOrder binary.ByteOrder) (string, int) { func disasm_arm64(code []byte, pc uint64, lookup lookupFunc, byteOrder binary.ByteOrder, gnuAsm bool) (string, int) {
inst, err := arm64asm.Decode(code) inst, err := arm64asm.Decode(code)
var text string var text string
if err != nil || inst.Op == 0 { if err != nil || inst.Op == 0 {
text = "?" text = "?"
} else if gnuAsm {
text = fmt.Sprintf("%-36s // %s", arm64asm.GoSyntax(inst, pc, lookup, textReader{code, pc}), arm64asm.GNUSyntax(inst))
} else { } else {
text = arm64asm.GoSyntax(inst, pc, lookup, textReader{code, pc}) text = arm64asm.GoSyntax(inst, pc, lookup, textReader{code, pc})
} }
return text, 4 return text, 4
} }
func disasm_ppc64(code []byte, pc uint64, lookup lookupFunc, byteOrder binary.ByteOrder) (string, int) { func disasm_ppc64(code []byte, pc uint64, lookup lookupFunc, byteOrder binary.ByteOrder, gnuAsm bool) (string, int) {
inst, err := ppc64asm.Decode(code, byteOrder) inst, err := ppc64asm.Decode(code, byteOrder)
var text string var text string
size := inst.Len size := inst.Len
@ -366,7 +374,11 @@ func disasm_ppc64(code []byte, pc uint64, lookup lookupFunc, byteOrder binary.By
size = 4 size = 4
text = "?" text = "?"
} else { } else {
text = ppc64asm.GoSyntax(inst, pc, lookup) if gnuAsm {
text = fmt.Sprintf("%-36s // %s", ppc64asm.GoSyntax(inst, pc, lookup), ppc64asm.GNUSyntax(inst, pc))
} else {
text = ppc64asm.GoSyntax(inst, pc, lookup)
}
} }
return text, size return text, size
} }


@ -49,7 +49,7 @@ func (reporter *ErrorReporter) errorUnresolved(s *sym.Symbol, r *sym.Reloc) {
if v == -1 { if v == -1 {
continue continue
} }
if rs := reporter.lookup(r.Sym.Name, v); rs != nil && rs.Type != sym.Sxxx { if rs := reporter.lookup(r.Sym.Name, v); rs != nil && rs.Type != sym.Sxxx && rs.Type != sym.SXREF {
haveABI = abi haveABI = abi
} }
} }


@ -172,6 +172,93 @@ main.x: relocation target main.zero not defined
} }
} }
func TestIssue33979(t *testing.T) {
testenv.MustHaveGoBuild(t)
testenv.MustHaveCGO(t)
// Skip test on platforms that do not support cgo internal linking.
switch runtime.GOARCH {
case "mips", "mipsle", "mips64", "mips64le":
t.Skipf("Skipping on %s/%s", runtime.GOOS, runtime.GOARCH)
}
if runtime.GOOS == "aix" {
t.Skipf("Skipping on %s/%s", runtime.GOOS, runtime.GOARCH)
}
tmpdir, err := ioutil.TempDir("", "unresolved-")
if err != nil {
t.Fatalf("failed to create temp dir: %v", err)
}
defer os.RemoveAll(tmpdir)
write := func(name, content string) {
err := ioutil.WriteFile(filepath.Join(tmpdir, name), []byte(content), 0666)
if err != nil {
t.Fatal(err)
}
}
run := func(name string, args ...string) string {
cmd := exec.Command(name, args...)
cmd.Dir = tmpdir
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf("'go %s' failed: %v, output: %s", strings.Join(args, " "), err, out)
}
return string(out)
}
runGo := func(args ...string) string {
return run(testenv.GoToolPath(t), args...)
}
// Test object with undefined reference that was not generated
// by Go, resulting in an SXREF symbol being loaded during linking.
// Because of issue #33979, the SXREF symbol would be found during
// error reporting, resulting in confusing error messages.
write("main.go", `package main
func main() {
x()
}
func x()
`)
// The following assembly must work on all architectures.
write("x.s", `
TEXT ·x(SB),0,$0
CALL foo(SB)
RET
`)
write("x.c", `
void undefined();
void foo() {
undefined();
}
`)
cc := strings.TrimSpace(runGo("env", "CC"))
cflags := strings.Fields(runGo("env", "GOGCCFLAGS"))
// Compile, assemble and pack the Go and C code.
runGo("tool", "asm", "-gensymabis", "-o", "symabis", "x.s")
runGo("tool", "compile", "-symabis", "symabis", "-p", "main", "-o", "x1.o", "main.go")
runGo("tool", "asm", "-o", "x2.o", "x.s")
run(cc, append(cflags, "-c", "-o", "x3.o", "x.c")...)
runGo("tool", "pack", "c", "x.a", "x1.o", "x2.o", "x3.o")
// Now attempt to link using the internal linker.
cmd := exec.Command(testenv.GoToolPath(t), "tool", "link", "-linkmode=internal", "x.a")
cmd.Dir = tmpdir
out, err := cmd.CombinedOutput()
if err == nil {
t.Fatalf("expected link to fail, but it succeeded")
}
re := regexp.MustCompile(`(?m)^main\(.*text\): relocation target undefined not defined$`)
if !re.Match(out) {
t.Fatalf("got:\n%q\nwant:\n%s", out, re)
}
}
func TestBuildForTvOS(t *testing.T) { func TestBuildForTvOS(t *testing.T) {
testenv.MustHaveCGO(t) testenv.MustHaveCGO(t)
testenv.MustHaveGoBuild(t) testenv.MustHaveGoBuild(t)


@ -43,12 +43,13 @@ import (
"cmd/internal/objfile" "cmd/internal/objfile"
) )
var printCode = flag.Bool("S", false, "print go code alongside assembly") var printCode = flag.Bool("S", false, "print Go code alongside assembly")
var symregexp = flag.String("s", "", "only dump symbols matching this regexp") var symregexp = flag.String("s", "", "only dump symbols matching this regexp")
var gnuAsm = flag.Bool("gnu", false, "print GNU assembly next to Go assembly (where supported)")
var symRE *regexp.Regexp var symRE *regexp.Regexp
func usage() { func usage() {
fmt.Fprintf(os.Stderr, "usage: go tool objdump [-S] [-s symregexp] binary [start end]\n\n") fmt.Fprintf(os.Stderr, "usage: go tool objdump [-S] [-gnu] [-s symregexp] binary [start end]\n\n")
flag.PrintDefaults() flag.PrintDefaults()
os.Exit(2) os.Exit(2)
} }
@ -87,7 +88,7 @@ func main() {
usage() usage()
case 1: case 1:
// disassembly of entire object // disassembly of entire object
dis.Print(os.Stdout, symRE, 0, ^uint64(0), *printCode) dis.Print(os.Stdout, symRE, 0, ^uint64(0), *printCode, *gnuAsm)
case 3: case 3:
// disassembly of PC range // disassembly of PC range
@ -99,6 +100,6 @@ func main() {
if err != nil { if err != nil {
log.Fatalf("invalid end PC: %v", err) log.Fatalf("invalid end PC: %v", err)
} }
dis.Print(os.Stdout, symRE, start, end, *printCode) dis.Print(os.Stdout, symRE, start, end, *printCode, *gnuAsm)
} }
} }


@ -58,24 +58,54 @@ func buildObjdump() error {
return nil return nil
} }
var x86Need = []string{ var x86Need = []string{ // for both 386 and AMD64
"JMP main.main(SB)", "JMP main.main(SB)",
"CALL main.Println(SB)", "CALL main.Println(SB)",
"RET", "RET",
} }
var amd64GnuNeed = []string{
"movq",
"callq",
"cmpb",
}
var i386GnuNeed = []string{
"mov",
"call",
"cmp",
}
var armNeed = []string{ var armNeed = []string{
"B main.main(SB)", "B main.main(SB)",
"BL main.Println(SB)", "BL main.Println(SB)",
"RET", "RET",
} }
var arm64Need = []string{
"JMP main.main(SB)",
"CALL main.Println(SB)",
"RET",
}
var armGnuNeed = []string{ // for both ARM and ARM64
"ldr",
"bl",
"cmp",
}
var ppcNeed = []string{ var ppcNeed = []string{
"BR main.main(SB)", "BR main.main(SB)",
"CALL main.Println(SB)", "CALL main.Println(SB)",
"RET", "RET",
} }
var ppcGnuNeed = []string{
"mflr",
"lbz",
"cmpw",
}
var target = flag.String("target", "", "test disassembly of `goos/goarch` binary") var target = flag.String("target", "", "test disassembly of `goos/goarch` binary")
// objdump is fully cross platform: it can handle binaries // objdump is fully cross platform: it can handle binaries
@ -87,7 +117,7 @@ var target = flag.String("target", "", "test disassembly of `goos/goarch` binary
// binary for the current system (only) and test that objdump // binary for the current system (only) and test that objdump
// can handle that one. // can handle that one.
func testDisasm(t *testing.T, printCode bool, flags ...string) { func testDisasm(t *testing.T, printCode bool, printGnuAsm bool, flags ...string) {
t.Parallel() t.Parallel()
goarch := runtime.GOARCH goarch := runtime.GOARCH
if *target != "" { if *target != "" {
@ -102,7 +132,7 @@ func testDisasm(t *testing.T, printCode bool, flags ...string) {
goarch = f[1] goarch = f[1]
} }
hash := md5.Sum([]byte(fmt.Sprintf("%v-%v", flags, printCode))) hash := md5.Sum([]byte(fmt.Sprintf("%v-%v-%v", flags, printCode, printGnuAsm)))
hello := filepath.Join(tmp, fmt.Sprintf("hello-%x.exe", hash)) hello := filepath.Join(tmp, fmt.Sprintf("hello-%x.exe", hash))
args := []string{"build", "-o", hello} args := []string{"build", "-o", hello}
args = append(args, flags...) args = append(args, flags...)
@ -129,10 +159,24 @@ func testDisasm(t *testing.T, printCode bool, flags ...string) {
need = append(need, x86Need...) need = append(need, x86Need...)
case "arm": case "arm":
need = append(need, armNeed...) need = append(need, armNeed...)
case "arm64":
need = append(need, arm64Need...)
case "ppc64", "ppc64le": case "ppc64", "ppc64le":
need = append(need, ppcNeed...) need = append(need, ppcNeed...)
} }
if printGnuAsm {
switch goarch {
case "amd64":
need = append(need, amd64GnuNeed...)
case "386":
need = append(need, i386GnuNeed...)
case "arm", "arm64":
need = append(need, armGnuNeed...)
case "ppc64", "ppc64le":
need = append(need, ppcGnuNeed...)
}
}
args = []string{ args = []string{
"-s", "main.main", "-s", "main.main",
hello, hello,
@ -142,6 +186,9 @@ func testDisasm(t *testing.T, printCode bool, flags ...string) {
args = append([]string{"-S"}, args...) args = append([]string{"-S"}, args...)
} }
if printGnuAsm {
args = append([]string{"-gnu"}, args...)
}
cmd = exec.Command(exe, args...) cmd = exec.Command(exe, args...)
cmd.Dir = "testdata" // "Bad line" bug #36683 is sensitive to being run in the source directory cmd.Dir = "testdata" // "Bad line" bug #36683 is sensitive to being run in the source directory
out, err = cmd.CombinedOutput() out, err = cmd.CombinedOutput()
@ -180,7 +227,7 @@ func TestDisasm(t *testing.T) {
case "s390x": case "s390x":
t.Skipf("skipping on %s, issue 15255", runtime.GOARCH) t.Skipf("skipping on %s, issue 15255", runtime.GOARCH)
} }
testDisasm(t, false) testDisasm(t, false, false)
} }
func TestDisasmCode(t *testing.T) { func TestDisasmCode(t *testing.T) {
@ -188,7 +235,15 @@ func TestDisasmCode(t *testing.T) {
case "mips", "mipsle", "mips64", "mips64le", "riscv64", "s390x": case "mips", "mipsle", "mips64", "mips64le", "riscv64", "s390x":
t.Skipf("skipping on %s, issue 19160", runtime.GOARCH) t.Skipf("skipping on %s, issue 19160", runtime.GOARCH)
} }
testDisasm(t, true) testDisasm(t, true, false)
}
func TestDisasmGnuAsm(t *testing.T) {
switch runtime.GOARCH {
case "mips", "mipsle", "mips64", "mips64le", "riscv64", "s390x":
t.Skipf("skipping on %s, issue 19160", runtime.GOARCH)
}
testDisasm(t, false, true)
} }
func TestDisasmExtld(t *testing.T) { func TestDisasmExtld(t *testing.T) {
@ -209,7 +264,7 @@ func TestDisasmExtld(t *testing.T) {
if !build.Default.CgoEnabled { if !build.Default.CgoEnabled {
t.Skip("skipping because cgo is not enabled") t.Skip("skipping because cgo is not enabled")
} }
testDisasm(t, false, "-ldflags=-linkmode=external") testDisasm(t, false, false, "-ldflags=-linkmode=external")
} }
func TestDisasmGoobj(t *testing.T) { func TestDisasmGoobj(t *testing.T) {


@ -177,7 +177,7 @@ func (t *objTool) Disasm(file string, start, end uint64) ([]driver.Inst, error)
return nil, err return nil, err
} }
var asm []driver.Inst var asm []driver.Inst
d.Decode(start, end, nil, func(pc, size uint64, file string, line int, text string) { d.Decode(start, end, nil, false, func(pc, size uint64, file string, line int, text string) {
asm = append(asm, driver.Inst{Addr: pc, File: file, Line: line, Text: text}) asm = append(asm, driver.Inst{Addr: pc, File: file, Line: line, Text: text})
}) })
return asm, nil return asm, nil


@ -27,6 +27,7 @@ type testingT interface {
Log(args ...interface{}) Log(args ...interface{})
Logf(format string, args ...interface{}) Logf(format string, args ...interface{})
Name() string Name() string
Parallel()
Skip(args ...interface{}) Skip(args ...interface{})
SkipNow() SkipNow()
Skipf(format string, args ...interface{}) Skipf(format string, args ...interface{})
@ -284,6 +285,8 @@ func testDeadline(c Context, name string, t testingT) {
} }
func XTestDeadline(t testingT) { func XTestDeadline(t testingT) {
t.Parallel()
c, _ := WithDeadline(Background(), time.Now().Add(shortDuration)) c, _ := WithDeadline(Background(), time.Now().Add(shortDuration))
if got, prefix := fmt.Sprint(c), "context.Background.WithDeadline("; !strings.HasPrefix(got, prefix) { if got, prefix := fmt.Sprint(c), "context.Background.WithDeadline("; !strings.HasPrefix(got, prefix) {
t.Errorf("c.String() = %q want prefix %q", got, prefix) t.Errorf("c.String() = %q want prefix %q", got, prefix)
@ -307,6 +310,8 @@ func XTestDeadline(t testingT) {
} }
func XTestTimeout(t testingT) { func XTestTimeout(t testingT) {
t.Parallel()
c, _ := WithTimeout(Background(), shortDuration) c, _ := WithTimeout(Background(), shortDuration)
if got, prefix := fmt.Sprint(c), "context.Background.WithDeadline("; !strings.HasPrefix(got, prefix) { if got, prefix := fmt.Sprint(c), "context.Background.WithDeadline("; !strings.HasPrefix(got, prefix) {
t.Errorf("c.String() = %q want prefix %q", got, prefix) t.Errorf("c.String() = %q want prefix %q", got, prefix)
@ -417,9 +422,9 @@ func XTestAllocs(t testingT, testingShort func() bool, testingAllocsPerRun func(
gccgoLimit: 3, gccgoLimit: 3,
}, },
{ {
desc: "WithTimeout(bg, 15*time.Millisecond)", desc: "WithTimeout(bg, 1*time.Nanosecond)",
f: func() { f: func() {
c, _ := WithTimeout(bg, 15*time.Millisecond) c, _ := WithTimeout(bg, 1*time.Nanosecond)
<-c.Done() <-c.Done()
}, },
limit: 12, limit: 12,
@ -545,7 +550,9 @@ func XTestLayersTimeout(t testingT) {
} }
func testLayers(t testingT, seed int64, testTimeout bool) { func testLayers(t testingT, seed int64, testTimeout bool) {
rand.Seed(seed) t.Parallel()
r := rand.New(rand.NewSource(seed))
errorf := func(format string, a ...interface{}) { errorf := func(format string, a ...interface{}) {
t.Errorf(fmt.Sprintf("seed=%d: %s", seed, format), a...) t.Errorf(fmt.Sprintf("seed=%d: %s", seed, format), a...)
} }
@ -560,7 +567,7 @@ func testLayers(t testingT, seed int64, testTimeout bool) {
ctx = Background() ctx = Background()
) )
for i := 0; i < minLayers || numTimers == 0 || len(cancels) == 0 || len(vals) == 0; i++ { for i := 0; i < minLayers || numTimers == 0 || len(cancels) == 0 || len(vals) == 0; i++ {
switch rand.Intn(3) { switch r.Intn(3) {
case 0: case 0:
v := new(value) v := new(value)
ctx = WithValue(ctx, v, v) ctx = WithValue(ctx, v, v)
@ -587,10 +594,12 @@ func testLayers(t testingT, seed int64, testTimeout bool) {
} }
} }
} }
select { if !testTimeout {
case <-ctx.Done(): select {
errorf("ctx should not be canceled yet") case <-ctx.Done():
default: errorf("ctx should not be canceled yet")
default:
}
} }
if s, prefix := fmt.Sprint(ctx), "context.Background."; !strings.HasPrefix(s, prefix) { if s, prefix := fmt.Sprint(ctx), "context.Background."; !strings.HasPrefix(s, prefix) {
t.Errorf("ctx.String() = %q want prefix %q", s, prefix) t.Errorf("ctx.String() = %q want prefix %q", s, prefix)
@ -608,7 +617,7 @@ func testLayers(t testingT, seed int64, testTimeout bool) {
} }
checkValues("after timeout") checkValues("after timeout")
} else { } else {
cancel := cancels[rand.Intn(len(cancels))] cancel := cancels[r.Intn(len(cancels))]
cancel() cancel()
select { select {
case <-ctx.Done(): case <-ctx.Done():

View file

@ -10,6 +10,8 @@ import (
"time" "time"
) )
const shortDuration = 1 * time.Millisecond // a reasonable duration to block in an example
// This example demonstrates the use of a cancelable context to prevent a // This example demonstrates the use of a cancelable context to prevent a
// goroutine leak. By the end of the example function, the goroutine started // goroutine leak. By the end of the example function, the goroutine started
// by gen will return without leaking. // by gen will return without leaking.
@ -55,7 +57,7 @@ func ExampleWithCancel() {
// This example passes a context with an arbitrary deadline to tell a blocking // This example passes a context with an arbitrary deadline to tell a blocking
// function that it should abandon its work as soon as it gets to it. // function that it should abandon its work as soon as it gets to it.
func ExampleWithDeadline() { func ExampleWithDeadline() {
d := time.Now().Add(50 * time.Millisecond) d := time.Now().Add(shortDuration)
ctx, cancel := context.WithDeadline(context.Background(), d) ctx, cancel := context.WithDeadline(context.Background(), d)
// Even though ctx will be expired, it is good practice to call its // Even though ctx will be expired, it is good practice to call its
@ -79,7 +81,7 @@ func ExampleWithDeadline() {
func ExampleWithTimeout() { func ExampleWithTimeout() {
// Pass a context with a timeout to tell a blocking function that it // Pass a context with a timeout to tell a blocking function that it
// should abandon its work after the timeout elapses. // should abandon its work after the timeout elapses.
ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond) ctx, cancel := context.WithTimeout(context.Background(), shortDuration)
defer cancel() defer cancel()
select { select {

View file

@ -277,6 +277,13 @@ func VerifyPKCS1v15(pub *PublicKey, hash crypto.Hash, hashed []byte, sig []byte)
return ErrVerification return ErrVerification
} }
// RFC 8017 Section 8.2.2: If the length of the signature S is not k
// octets (where k is the length in octets of the RSA modulus n), output
// "invalid signature" and stop.
if k != len(sig) {
return ErrVerification
}
c := new(big.Int).SetBytes(sig) c := new(big.Int).SetBytes(sig)
m := encrypt(new(big.Int), pub, c) m := encrypt(new(big.Int), pub, c)
em := leftPad(m.Bytes(), k) em := leftPad(m.Bytes(), k)

View file

@ -9,6 +9,7 @@ import (
"crypto" "crypto"
"crypto/rand" "crypto/rand"
"crypto/sha1" "crypto/sha1"
"crypto/sha256"
"encoding/base64" "encoding/base64"
"encoding/hex" "encoding/hex"
"io" "io"
@ -296,3 +297,20 @@ var rsaPrivateKey = &PrivateKey{
fromBase10("94560208308847015747498523884063394671606671904944666360068158221458669711639"), fromBase10("94560208308847015747498523884063394671606671904944666360068158221458669711639"),
}, },
} }
func TestShortPKCS1v15Signature(t *testing.T) {
pub := &PublicKey{
E: 65537,
N: fromBase10("8272693557323587081220342447407965471608219912416565371060697606400726784709760494166080686904546560026343451112103559482851304715739629410219358933351333"),
}
sig, err := hex.DecodeString("193a310d0dcf64094c6e3a00c8219b80ded70535473acff72c08e1222974bb24a93a535b1dc4c59fc0e65775df7ba2007dd20e9193f4c4025a18a7070aee93")
if err != nil {
t.Fatalf("failed to decode signature: %s", err)
}
h := sha256.Sum256([]byte("hello"))
err = VerifyPKCS1v15(pub, crypto.SHA256, h[:], sig)
if err == nil {
t.Fatal("VerifyPKCS1v15 accepted a truncated signature")
}
}

View file

@ -15,63 +15,75 @@ const (
) )
const ( const (
alertCloseNotify alert = 0 alertCloseNotify alert = 0
alertUnexpectedMessage alert = 10 alertUnexpectedMessage alert = 10
alertBadRecordMAC alert = 20 alertBadRecordMAC alert = 20
alertDecryptionFailed alert = 21 alertDecryptionFailed alert = 21
alertRecordOverflow alert = 22 alertRecordOverflow alert = 22
alertDecompressionFailure alert = 30 alertDecompressionFailure alert = 30
alertHandshakeFailure alert = 40 alertHandshakeFailure alert = 40
alertBadCertificate alert = 42 alertBadCertificate alert = 42
alertUnsupportedCertificate alert = 43 alertUnsupportedCertificate alert = 43
alertCertificateRevoked alert = 44 alertCertificateRevoked alert = 44
alertCertificateExpired alert = 45 alertCertificateExpired alert = 45
alertCertificateUnknown alert = 46 alertCertificateUnknown alert = 46
alertIllegalParameter alert = 47 alertIllegalParameter alert = 47
alertUnknownCA alert = 48 alertUnknownCA alert = 48
alertAccessDenied alert = 49 alertAccessDenied alert = 49
alertDecodeError alert = 50 alertDecodeError alert = 50
alertDecryptError alert = 51 alertDecryptError alert = 51
alertProtocolVersion alert = 70 alertExportRestriction alert = 60
alertInsufficientSecurity alert = 71 alertProtocolVersion alert = 70
alertInternalError alert = 80 alertInsufficientSecurity alert = 71
alertInappropriateFallback alert = 86 alertInternalError alert = 80
alertUserCanceled alert = 90 alertInappropriateFallback alert = 86
alertNoRenegotiation alert = 100 alertUserCanceled alert = 90
alertMissingExtension alert = 109 alertNoRenegotiation alert = 100
alertUnsupportedExtension alert = 110 alertMissingExtension alert = 109
alertUnrecognizedName alert = 112 alertUnsupportedExtension alert = 110
alertNoApplicationProtocol alert = 120 alertCertificateUnobtainable alert = 111
alertUnrecognizedName alert = 112
alertBadCertificateStatusResponse alert = 113
alertBadCertificateHashValue alert = 114
alertUnknownPSKIdentity alert = 115
alertCertificateRequired alert = 116
alertNoApplicationProtocol alert = 120
) )
var alertText = map[alert]string{ var alertText = map[alert]string{
alertCloseNotify: "close notify", alertCloseNotify: "close notify",
alertUnexpectedMessage: "unexpected message", alertUnexpectedMessage: "unexpected message",
alertBadRecordMAC: "bad record MAC", alertBadRecordMAC: "bad record MAC",
alertDecryptionFailed: "decryption failed", alertDecryptionFailed: "decryption failed",
alertRecordOverflow: "record overflow", alertRecordOverflow: "record overflow",
alertDecompressionFailure: "decompression failure", alertDecompressionFailure: "decompression failure",
alertHandshakeFailure: "handshake failure", alertHandshakeFailure: "handshake failure",
alertBadCertificate: "bad certificate", alertBadCertificate: "bad certificate",
alertUnsupportedCertificate: "unsupported certificate", alertUnsupportedCertificate: "unsupported certificate",
alertCertificateRevoked: "revoked certificate", alertCertificateRevoked: "revoked certificate",
alertCertificateExpired: "expired certificate", alertCertificateExpired: "expired certificate",
alertCertificateUnknown: "unknown certificate", alertCertificateUnknown: "unknown certificate",
alertIllegalParameter: "illegal parameter", alertIllegalParameter: "illegal parameter",
alertUnknownCA: "unknown certificate authority", alertUnknownCA: "unknown certificate authority",
alertAccessDenied: "access denied", alertAccessDenied: "access denied",
alertDecodeError: "error decoding message", alertDecodeError: "error decoding message",
alertDecryptError: "error decrypting message", alertDecryptError: "error decrypting message",
alertProtocolVersion: "protocol version not supported", alertExportRestriction: "export restriction",
alertInsufficientSecurity: "insufficient security level", alertProtocolVersion: "protocol version not supported",
alertInternalError: "internal error", alertInsufficientSecurity: "insufficient security level",
alertInappropriateFallback: "inappropriate fallback", alertInternalError: "internal error",
alertUserCanceled: "user canceled", alertInappropriateFallback: "inappropriate fallback",
alertNoRenegotiation: "no renegotiation", alertUserCanceled: "user canceled",
alertMissingExtension: "missing extension", alertNoRenegotiation: "no renegotiation",
alertUnsupportedExtension: "unsupported extension", alertMissingExtension: "missing extension",
alertUnrecognizedName: "unrecognized name", alertUnsupportedExtension: "unsupported extension",
alertNoApplicationProtocol: "no application protocol", alertCertificateUnobtainable: "certificate unobtainable",
alertUnrecognizedName: "unrecognized name",
alertBadCertificateStatusResponse: "bad certificate status response",
alertBadCertificateHashValue: "bad certificate hash value",
alertUnknownPSKIdentity: "unknown PSK identity",
alertCertificateRequired: "certificate required",
alertNoApplicationProtocol: "no application protocol",
} }
func (e alert) String() string { func (e alert) String() string {

View file

@ -1806,7 +1806,7 @@ func TestMD5(t *testing.T) {
} }
} }
// certMissingRSANULL contains an RSA public key where the AlgorithmIdentifer // certMissingRSANULL contains an RSA public key where the AlgorithmIdentifier
// parameters are omitted rather than being an ASN.1 NULL. // parameters are omitted rather than being an ASN.1 NULL.
const certMissingRSANULL = ` const certMissingRSANULL = `
-----BEGIN CERTIFICATE----- -----BEGIN CERTIFICATE-----

View file

@ -261,15 +261,15 @@ type SessionResetter interface {
ResetSession(ctx context.Context) error ResetSession(ctx context.Context) error
} }
// ConnectionValidator may be implemented by Conn to allow drivers to // Validator may be implemented by Conn to allow drivers to
// signal if a connection is valid or if it should be discarded. // signal if a connection is valid or if it should be discarded.
// //
// If implemented, drivers may return the underlying error from queries, // If implemented, drivers may return the underlying error from queries,
// even if the connection should be discarded by the connection pool. // even if the connection should be discarded by the connection pool.
type ConnectionValidator interface { type Validator interface {
// ValidConnection is called prior to placing the connection into the // IsValid is called prior to placing the connection into the
// connection pool. The connection will be discarded if false is returned. // connection pool. The connection will be discarded if false is returned.
ValidConnection() bool IsValid() bool
} }
// Result is the result of a query execution. // Result is the result of a query execution.

View file

@ -396,9 +396,9 @@ func (c *fakeConn) ResetSession(ctx context.Context) error {
return nil return nil
} }
var _ driver.ConnectionValidator = (*fakeConn)(nil) var _ driver.Validator = (*fakeConn)(nil)
func (c *fakeConn) ValidConnection() bool { func (c *fakeConn) IsValid() bool {
return !c.isBad() return !c.isBad()
} }

View file

@ -512,8 +512,8 @@ func (dc *driverConn) validateConnection(needsReset bool) bool {
if needsReset { if needsReset {
dc.needReset = true dc.needReset = true
} }
if cv, ok := dc.ci.(driver.ConnectionValidator); ok { if cv, ok := dc.ci.(driver.Validator); ok {
return cv.ValidConnection() return cv.IsValid()
} }
return true return true
} }

View file

@ -1543,6 +1543,37 @@ func TestConnTx(t *testing.T) {
} }
} }
// TestConnIsValid verifies that a database connection that should be discarded,
// is actually discarded and does not re-enter the connection pool.
// If the IsValid method from *fakeConn is removed, this test will fail.
func TestConnIsValid(t *testing.T) {
db := newTestDB(t, "people")
defer closeDB(t, db)
db.SetMaxOpenConns(1)
ctx := context.Background()
c, err := db.Conn(ctx)
if err != nil {
t.Fatal(err)
}
err = c.Raw(func(raw interface{}) error {
dc := raw.(*fakeConn)
dc.stickyBad = true
return nil
})
if err != nil {
t.Fatal(err)
}
c.Close()
if len(db.freeConn) > 0 && db.freeConn[0].ci.(*fakeConn).stickyBad {
t.Fatal("bad connection returned to pool; expected bad connection to be discarded")
}
}
// Tests fix for issue 2542, that we release a lock when querying on // Tests fix for issue 2542, that we release a lock when querying on
// a closed connection. // a closed connection.
func TestIssue2542Deadlock(t *testing.T) { func TestIssue2542Deadlock(t *testing.T) {

View file

@ -326,7 +326,7 @@ func (p *parser) parseConstValue(pkg *types.Package) (val constant.Value, typ ty
if p.tok == '$' { if p.tok == '$' {
p.next() p.next()
if p.tok != scanner.Ident { if p.tok != scanner.Ident {
p.errorf("expected identifer after '$', got %s (%q)", scanner.TokenString(p.tok), p.lit) p.errorf("expected identifier after '$', got %s (%q)", scanner.TokenString(p.tok), p.lit)
} }
} }

View file

@ -107,15 +107,24 @@ func (pd *pollDesc) pollable() bool {
return pd.runtimeCtx != 0 return pd.runtimeCtx != 0
} }
// Error values returned by runtime_pollReset and runtime_pollWait.
// These must match the values in runtime/netpoll.go.
const (
pollNoError = 0
pollErrClosing = 1
pollErrTimeout = 2
pollErrNotPollable = 3
)
func convertErr(res int, isFile bool) error { func convertErr(res int, isFile bool) error {
switch res { switch res {
case 0: case pollNoError:
return nil return nil
case 1: case pollErrClosing:
return errClosing(isFile) return errClosing(isFile)
case 2: case pollErrTimeout:
return ErrTimeout return ErrTimeout
case 3: case pollErrNotPollable:
return ErrNotPollable return ErrNotPollable
} }
println("unreachable: ", res) println("unreachable: ", res)
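The `internal/poll` change above replaces magic result codes with named constants that must stay in sync with `runtime/netpoll.go`. The pattern, sketched generically (the error values and messages here are illustrative, not the real `internal/poll` ones):

```go
package main

import (
	"errors"
	"fmt"
)

// Named result codes; in the real code these must match runtime/netpoll.go.
const (
	pollNoError        = 0
	pollErrClosing     = 1
	pollErrTimeout     = 2
	pollErrNotPollable = 3
)

var (
	errClosing     = errors.New("use of closed connection")
	errTimeout     = errors.New("i/o timeout")
	errNotPollable = errors.New("not pollable")
)

// convertErr maps a runtime poll result code to an error value.
func convertErr(res int) error {
	switch res {
	case pollNoError:
		return nil
	case pollErrClosing:
		return errClosing
	case pollErrTimeout:
		return errTimeout
	case pollErrNotPollable:
		return errNotPollable
	}
	return fmt.Errorf("unreachable: %d", res)
}

func main() {
	fmt.Println(convertErr(pollNoError), convertErr(pollErrTimeout))
}
```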

View file

@ -999,7 +999,7 @@ func (fd *FD) ReadMsg(p []byte, oob []byte) (int, int, int, syscall.Sockaddr, er
o := &fd.rop o := &fd.rop
o.InitMsg(p, oob) o.InitMsg(p, oob)
o.rsa = new(syscall.RawSockaddrAny) o.rsa = new(syscall.RawSockaddrAny)
o.msg.Name = o.rsa o.msg.Name = (syscall.Pointer)(unsafe.Pointer(o.rsa))
o.msg.Namelen = int32(unsafe.Sizeof(*o.rsa)) o.msg.Namelen = int32(unsafe.Sizeof(*o.rsa))
n, err := execIO(o, func(o *operation) error { n, err := execIO(o, func(o *operation) error {
return windows.WSARecvMsg(o.fd.Sysfd, &o.msg, &o.qty, &o.o, nil) return windows.WSARecvMsg(o.fd.Sysfd, &o.msg, &o.qty, &o.o, nil)
@ -1030,7 +1030,7 @@ func (fd *FD) WriteMsg(p []byte, oob []byte, sa syscall.Sockaddr) (int, int, err
if err != nil { if err != nil {
return 0, 0, err return 0, 0, err
} }
o.msg.Name = (*syscall.RawSockaddrAny)(rsa) o.msg.Name = (syscall.Pointer)(rsa)
o.msg.Namelen = len o.msg.Namelen = len
} }
n, err := execIO(o, func(o *operation) error { n, err := execIO(o, func(o *operation) error {

View file

@ -176,7 +176,7 @@ var sendRecvMsgFunc struct {
} }
type WSAMsg struct { type WSAMsg struct {
Name *syscall.RawSockaddrAny Name syscall.Pointer
Namelen int32 Namelen int32
Buffers *syscall.WSABuf Buffers *syscall.WSABuf
BufferCount uint32 BufferCount uint32

View file

@ -269,7 +269,20 @@ func send(ireq *Request, rt RoundTripper, deadline time.Time) (resp *Response, d
return nil, didTimeout, fmt.Errorf("http: RoundTripper implementation (%T) returned a nil *Response with a nil error", rt) return nil, didTimeout, fmt.Errorf("http: RoundTripper implementation (%T) returned a nil *Response with a nil error", rt)
} }
if resp.Body == nil { if resp.Body == nil {
return nil, didTimeout, fmt.Errorf("http: RoundTripper implementation (%T) returned a *Response with a nil Body", rt) // The documentation on the Body field says “The http Client and Transport
// guarantee that Body is always non-nil, even on responses without a body
// or responses with a zero-length body.” Unfortunately, we didn't document
// that same constraint for arbitrary RoundTripper implementations, and
// RoundTripper implementations in the wild (mostly in tests) assume that
// they can use a nil Body to mean an empty one (similar to Request.Body).
// (See https://golang.org/issue/38095.)
//
// If the ContentLength allows the Body to be empty, fill in an empty one
// here to ensure that it is non-nil.
if resp.ContentLength > 0 && req.Method != "HEAD" {
return nil, didTimeout, fmt.Errorf("http: RoundTripper implementation (%T) returned a *Response with content length %d but a nil Body", rt, resp.ContentLength)
}
resp.Body = ioutil.NopCloser(strings.NewReader(""))
} }
if !deadline.IsZero() { if !deadline.IsZero() {
resp.Body = &cancelTimerBody{ resp.Body = &cancelTimerBody{

View file

@ -1991,3 +1991,38 @@ func testClientDoCanceledVsTimeout(t *testing.T, h2 bool) {
}) })
} }
} }
type nilBodyRoundTripper struct{}
func (nilBodyRoundTripper) RoundTrip(req *Request) (*Response, error) {
return &Response{
StatusCode: StatusOK,
Status: StatusText(StatusOK),
Body: nil,
Request: req,
}, nil
}
func TestClientPopulatesNilResponseBody(t *testing.T) {
c := &Client{Transport: nilBodyRoundTripper{}}
resp, err := c.Get("http://localhost/anything")
if err != nil {
t.Fatalf("Client.Get rejected Response with nil Body: %v", err)
}
if resp.Body == nil {
t.Fatalf("Client failed to provide a non-nil Body as documented")
}
defer func() {
if err := resp.Body.Close(); err != nil {
t.Fatalf("error from Close on substitute Response.Body: %v", err)
}
}()
if b, err := ioutil.ReadAll(resp.Body); err != nil {
t.Errorf("read error from substitute Response.Body: %v", err)
} else if len(b) != 0 {
t.Errorf("substitute Response.Body was unexpectedly non-empty: %q", b)
}
}

View file

@ -157,7 +157,7 @@ func (t *Transport) RoundTrip(req *Request) (*Response, error) {
}) })
defer success.Release() defer success.Release()
failure := js.FuncOf(func(this js.Value, args []js.Value) interface{} { failure := js.FuncOf(func(this js.Value, args []js.Value) interface{} {
err := fmt.Errorf("net/http: fetch() failed: %s", args[0].String()) err := fmt.Errorf("net/http: fetch() failed: %s", args[0].Get("message").String())
select { select {
case errCh <- err: case errCh <- err:
case <-req.Context().Done(): case <-req.Context().Done():

View file

@ -1057,16 +1057,13 @@ func TestIdentityResponse(t *testing.T) {
t.Fatalf("error writing: %v", err) t.Fatalf("error writing: %v", err)
} }
// The ReadAll will hang for a failing test, so use a Timer to // The ReadAll will hang for a failing test.
// fail explicitly. got, _ := ioutil.ReadAll(conn)
goTimeout(t, 2*time.Second, func() { expectedSuffix := "\r\n\r\ntoo short"
got, _ := ioutil.ReadAll(conn) if !strings.HasSuffix(string(got), expectedSuffix) {
expectedSuffix := "\r\n\r\ntoo short" t.Errorf("Expected output to end with %q; got response body %q",
if !strings.HasSuffix(string(got), expectedSuffix) { expectedSuffix, string(got))
t.Errorf("Expected output to end with %q; got response body %q", }
expectedSuffix, string(got))
}
})
} }
func testTCPConnectionCloses(t *testing.T, req string, h Handler) { func testTCPConnectionCloses(t *testing.T, req string, h Handler) {
@ -1438,13 +1435,13 @@ func TestTLSHandshakeTimeout(t *testing.T) {
t.Fatalf("Dial: %v", err) t.Fatalf("Dial: %v", err)
} }
defer conn.Close() defer conn.Close()
goTimeout(t, 10*time.Second, func() {
var buf [1]byte var buf [1]byte
n, err := conn.Read(buf[:]) n, err := conn.Read(buf[:])
if err == nil || n != 0 { if err == nil || n != 0 {
t.Errorf("Read = %d, %v; want an error and no bytes", n, err) t.Errorf("Read = %d, %v; want an error and no bytes", n, err)
} }
})
select { select {
case v := <-errc: case v := <-errc:
if !strings.Contains(v, "timeout") && !strings.Contains(v, "TLS handshake") { if !strings.Contains(v, "timeout") && !strings.Contains(v, "TLS handshake") {
@ -1479,30 +1476,29 @@ func TestTLSServer(t *testing.T) {
t.Fatalf("Dial: %v", err) t.Fatalf("Dial: %v", err)
} }
defer idleConn.Close() defer idleConn.Close()
goTimeout(t, 10*time.Second, func() {
if !strings.HasPrefix(ts.URL, "https://") { if !strings.HasPrefix(ts.URL, "https://") {
t.Errorf("expected test TLS server to start with https://, got %q", ts.URL) t.Errorf("expected test TLS server to start with https://, got %q", ts.URL)
return return
} }
client := ts.Client() client := ts.Client()
res, err := client.Get(ts.URL) res, err := client.Get(ts.URL)
if err != nil { if err != nil {
t.Error(err) t.Error(err)
return return
} }
if res == nil { if res == nil {
t.Errorf("got nil Response") t.Errorf("got nil Response")
return return
} }
defer res.Body.Close() defer res.Body.Close()
if res.Header.Get("X-TLS-Set") != "true" { if res.Header.Get("X-TLS-Set") != "true" {
t.Errorf("expected X-TLS-Set response header") t.Errorf("expected X-TLS-Set response header")
return return
} }
if res.Header.Get("X-TLS-HandshakeComplete") != "true" { if res.Header.Get("X-TLS-HandshakeComplete") != "true" {
t.Errorf("expected X-TLS-HandshakeComplete header") t.Errorf("expected X-TLS-HandshakeComplete header")
} }
})
} }
func TestServeTLS(t *testing.T) { func TestServeTLS(t *testing.T) {
@ -3629,21 +3625,6 @@ func TestHeaderToWire(t *testing.T) {
} }
} }
// goTimeout runs f, failing t if f takes more than ns to complete.
func goTimeout(t *testing.T, d time.Duration, f func()) {
ch := make(chan bool, 2)
timer := time.AfterFunc(d, func() {
t.Errorf("Timeout expired after %v", d)
ch <- true
})
defer timer.Stop()
go func() {
defer func() { ch <- true }()
f()
}()
<-ch
}
type errorListener struct { type errorListener struct {
errs []error errs []error
} }

View file

@ -79,6 +79,13 @@ func helperCommandContext(t *testing.T, ctx context.Context, s ...string) (cmd *
} else { } else {
cmd = exec.Command(os.Args[0], cs...) cmd = exec.Command(os.Args[0], cs...)
} }
// Temporary code to try to resolve #25628.
// TODO(iant): Remove this when we no longer need it.
if runtime.GOARCH == "386" && runtime.GOOS == "linux" && testenv.Builder() != "" && len(s) == 1 && s[0] == "read3" && ctx == nil {
cmd = exec.Command("/usr/bin/strace", append([]string{"-f", os.Args[0]}, cs...)...)
}
cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1") cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")
return cmd return cmd
} }

View file

@ -122,12 +122,6 @@ func Notify(c chan<- os.Signal, sig ...os.Signal) {
panic("os/signal: Notify using nil channel") panic("os/signal: Notify using nil channel")
} }
watchSignalLoopOnce.Do(func() {
if watchSignalLoop != nil {
go watchSignalLoop()
}
})
handlers.Lock() handlers.Lock()
defer handlers.Unlock() defer handlers.Unlock()
@ -148,6 +142,14 @@ func Notify(c chan<- os.Signal, sig ...os.Signal) {
h.set(n) h.set(n)
if handlers.ref[n] == 0 { if handlers.ref[n] == 0 {
enableSignal(n) enableSignal(n)
// The runtime requires that we enable a
// signal before starting the watcher.
watchSignalLoopOnce.Do(func() {
if watchSignalLoop != nil {
go watchSignalLoop()
}
})
} }
handlers.ref[n]++ handlers.ref[n]++
} }

View file

@ -11,7 +11,7 @@ import (
var sigtab = make(map[os.Signal]int) var sigtab = make(map[os.Signal]int)
// In sig.s; jumps to runtime. // Defined by the runtime package.
func signal_disable(uint32) func signal_disable(uint32)
func signal_enable(uint32) func signal_enable(uint32)
func signal_ignore(uint32) func signal_ignore(uint32)
@ -19,8 +19,6 @@ func signal_ignored(uint32) bool
func signal_recv() string func signal_recv() string
func init() { func init() {
signal_enable(0) // first call - initialize
watchSignalLoop = loop watchSignalLoop = loop
} }

View file

@ -22,21 +22,10 @@ import (
"time" "time"
) )
var testDeadline time.Time // settleTime is an upper bound on how long we expect signals to take to be
// delivered. Lower values make the test faster, but also flakier — especially
func TestMain(m *testing.M) { // on heavily loaded systems.
flag.Parse() const settleTime = 100 * time.Millisecond
// TODO(golang.org/issue/28135): Remove this setup and use t.Deadline instead.
timeoutFlag := flag.Lookup("test.timeout")
if timeoutFlag != nil {
if d := timeoutFlag.Value.(flag.Getter).Get().(time.Duration); d != 0 {
testDeadline = time.Now().Add(d)
}
}
os.Exit(m.Run())
}
func waitSig(t *testing.T, c <-chan os.Signal, sig os.Signal) { func waitSig(t *testing.T, c <-chan os.Signal, sig os.Signal) {
waitSig1(t, c, sig, false) waitSig1(t, c, sig, false)
@ -48,27 +37,45 @@ func waitSigAll(t *testing.T, c <-chan os.Signal, sig os.Signal) {
func waitSig1(t *testing.T, c <-chan os.Signal, sig os.Signal, all bool) { func waitSig1(t *testing.T, c <-chan os.Signal, sig os.Signal, all bool) {
// Sleep multiple times to give the kernel more tries to // Sleep multiple times to give the kernel more tries to
// deliver the signal. // deliver the signal.
for i := 0; i < 10; i++ { start := time.Now()
timer := time.NewTimer(settleTime / 10)
defer timer.Stop()
// If the caller notified for all signals on c, filter out SIGURG,
// which is used for runtime preemption and can come at unpredictable times.
// General user code should filter out all unexpected signals instead of just
// SIGURG, but since os/signal is tightly coupled to the runtime it seems
// appropriate to be stricter here.
for time.Since(start) < settleTime {
select { select {
case s := <-c: case s := <-c:
// If the caller notified for all signals on if s == sig {
// c, filter out SIGURG, which is used for return
// runtime preemption and can come at
// unpredictable times.
if all && s == syscall.SIGURG {
continue
} }
if s != sig { if !all || s != syscall.SIGURG {
t.Fatalf("signal was %v, want %v", s, sig) t.Fatalf("signal was %v, want %v", s, sig)
} }
return case <-timer.C:
timer.Reset(settleTime / 10)
case <-time.After(100 * time.Millisecond):
} }
} }
t.Fatalf("timeout waiting for %v", sig) t.Fatalf("timeout waiting for %v", sig)
} }
// quiesce waits until we can be reasonably confident that all pending signals
// have been delivered by the OS.
func quiesce() {
// The kernel will deliver a signal as a thread returns
// from a syscall. If the only active thread is sleeping,
// and the system is busy, the kernel may not get around
// to waking up a thread to catch the signal.
// We try splitting up the sleep to give the kernel
// many chances to deliver the signal.
start := time.Now()
for time.Since(start) < settleTime {
time.Sleep(settleTime / 10)
}
}
// Test that basic signal handling works. // Test that basic signal handling works.
func TestSignal(t *testing.T) { func TestSignal(t *testing.T) {
// Ask for SIGHUP // Ask for SIGHUP
@ -112,50 +119,39 @@ func TestStress(t *testing.T) {
dur = 100 * time.Millisecond dur = 100 * time.Millisecond
} }
defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(4)) defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(4))
done := make(chan bool)
finished := make(chan bool) sig := make(chan os.Signal, 1)
Notify(sig, syscall.SIGUSR1)
go func() { go func() {
sig := make(chan os.Signal, 1) stop := time.After(dur)
Notify(sig, syscall.SIGUSR1)
defer Stop(sig)
Loop:
for { for {
select { select {
case <-sig: case <-stop:
case <-done: // Allow enough time for all signals to be delivered before we stop
break Loop // listening for them.
} quiesce()
} Stop(sig)
finished <- true // According to its documentation, “[w]hen Stop returns, it is
}() // guaranteed that c will receive no more signals.” So we can safely
go func() { // close sig here: if there is a send-after-close race here, that is a
Loop: // bug in Stop and we would like to detect it.
for { close(sig)
select { return
case <-done:
break Loop
default: default:
syscall.Kill(syscall.Getpid(), syscall.SIGUSR1) syscall.Kill(syscall.Getpid(), syscall.SIGUSR1)
runtime.Gosched() runtime.Gosched()
} }
} }
finished <- true
}() }()
time.Sleep(dur)
close(done) for range sig {
<-finished // Receive signals until the sender closes sig.
<-finished }
// When run with 'go test -cpu=1,2,4' SIGUSR1 from this test can slip
// into subsequent TestSignal() causing failure.
// Sleep for a while to reduce the possibility of the failure.
time.Sleep(10 * time.Millisecond)
} }
func testCancel(t *testing.T, ignore bool) { func testCancel(t *testing.T, ignore bool) {
// Send SIGWINCH. By default this signal should be ignored.
syscall.Kill(syscall.Getpid(), syscall.SIGWINCH)
time.Sleep(100 * time.Millisecond)
// Ask to be notified on c1 when a SIGWINCH is received. // Ask to be notified on c1 when a SIGWINCH is received.
c1 := make(chan os.Signal, 1) c1 := make(chan os.Signal, 1)
Notify(c1, syscall.SIGWINCH) Notify(c1, syscall.SIGWINCH)
@ -175,25 +171,16 @@ func testCancel(t *testing.T, ignore bool) {
waitSig(t, c2, syscall.SIGHUP) waitSig(t, c2, syscall.SIGHUP)
// Ignore, or reset the signal handlers for, SIGWINCH and SIGHUP. // Ignore, or reset the signal handlers for, SIGWINCH and SIGHUP.
// Either way, this should undo both calls to Notify above.
if ignore { if ignore {
Ignore(syscall.SIGWINCH, syscall.SIGHUP) Ignore(syscall.SIGWINCH, syscall.SIGHUP)
// Don't bother deferring a call to Reset: it is documented to undo Notify,
// but its documentation says nothing about Ignore, and (as of the time of
// writing) it empirically does not undo an Ignore.
} else { } else {
Reset(syscall.SIGWINCH, syscall.SIGHUP) Reset(syscall.SIGWINCH, syscall.SIGHUP)
} }
// At this point we do not expect any further signals on c1.
// However, it is just barely possible that the initial SIGWINCH
// at the start of this function was delivered after we called
// Notify on c1. In that case the waitSig for SIGWINCH may have
// picked up that initial SIGWINCH, and the second SIGWINCH may
// then have been delivered on the channel. This sequence of events
// may have caused issue 15661.
// So, read any possible signal from the channel now.
select {
case <-c1:
default:
}
// Send this process a SIGWINCH. It should be ignored. // Send this process a SIGWINCH. It should be ignored.
syscall.Kill(syscall.Getpid(), syscall.SIGWINCH) syscall.Kill(syscall.Getpid(), syscall.SIGWINCH)
@ -202,22 +189,28 @@ func testCancel(t *testing.T, ignore bool) {
syscall.Kill(syscall.Getpid(), syscall.SIGHUP) syscall.Kill(syscall.Getpid(), syscall.SIGHUP)
} }
quiesce()
select { select {
case s := <-c1: case s := <-c1:
t.Fatalf("unexpected signal %v", s) t.Errorf("unexpected signal %v", s)
case <-time.After(100 * time.Millisecond): default:
// nothing to read - good // nothing to read - good
} }
select { select {
case s := <-c2: case s := <-c2:
t.Fatalf("unexpected signal %v", s) t.Errorf("unexpected signal %v", s)
case <-time.After(100 * time.Millisecond): default:
// nothing to read - good // nothing to read - good
} }
// Reset the signal handlers for all signals. // One or both of the signals may have been blocked for this process
Reset() // by the calling process.
// Discard any queued signals now to avoid interfering with other tests.
Notify(c1, syscall.SIGWINCH)
Notify(c2, syscall.SIGHUP)
quiesce()
} }
// Test that Reset cancels registration for listed signals on all channels. // Test that Reset cancels registration for listed signals on all channels.
@ -289,7 +282,10 @@ func TestDetectNohup(t *testing.T) {
} }
} }
var sendUncaughtSighup = flag.Int("send_uncaught_sighup", 0, "send uncaught SIGHUP during TestStop") var (
sendUncaughtSighup = flag.Int("send_uncaught_sighup", 0, "send uncaught SIGHUP during TestStop")
dieFromSighup = flag.Bool("die_from_sighup", false, "wait to die from uncaught SIGHUP")
)
// Test that Stop cancels the channel's registrations. // Test that Stop cancels the channel's registrations.
func TestStop(t *testing.T) { func TestStop(t *testing.T) {
@ -300,59 +296,74 @@ func TestStop(t *testing.T) {
} }
for _, sig := range sigs { for _, sig := range sigs {
// Send the signal. sig := sig
// If it's SIGWINCH, we should not see it. t.Run(fmt.Sprint(sig), func(t *testing.T) {
// If it's SIGHUP, maybe we'll die. Let the flag tell us what to do. // When calling Notify with a specific signal,
if sig == syscall.SIGWINCH || (sig == syscall.SIGHUP && *sendUncaughtSighup == 1) { // independent signals should not interfere with each other,
// and we end up needing to wait for signals to quiesce a lot.
// Test the three different signals concurrently.
t.Parallel()
// If the signal is not ignored, send the signal before registering a
// channel to verify the behavior of the default Go handler.
// If it's SIGWINCH or SIGUSR1 we should not see it.
// If it's SIGHUP, maybe we'll die. Let the flag tell us what to do.
mayHaveBlockedSignal := false
if !Ignored(sig) && (sig != syscall.SIGHUP || *sendUncaughtSighup == 1) {
syscall.Kill(syscall.Getpid(), sig)
quiesce()
// We don't know whether sig is blocked for this process; see
// https://golang.org/issue/38165. Assume that it could be.
mayHaveBlockedSignal = true
}
// Ask for signal
c := make(chan os.Signal, 1)
Notify(c, sig)
// Send this process the signal again.
syscall.Kill(syscall.Getpid(), sig) syscall.Kill(syscall.Getpid(), sig)
} waitSig(t, c, sig)
// The kernel will deliver a signal as a thread returns if mayHaveBlockedSignal {
// from a syscall. If the only active thread is sleeping, // We may have received a queued initial signal in addition to the one
// and the system is busy, the kernel may not get around // that we sent after Notify. If so, waitSig may have observed that
// to waking up a thread to catch the signal. // initial signal instead of the second one, and we may need to wait for
// We try splitting up the sleep to give the kernel // the second signal to clear. Do that now.
// another chance to deliver the signal. quiesce()
time.Sleep(50 * time.Millisecond) select {
time.Sleep(50 * time.Millisecond) case <-c:
default:
}
}
// Ask for signal // Stop watching for the signal and send it again.
c := make(chan os.Signal, 1) // If it's SIGHUP, maybe we'll die. Let the flag tell us what to do.
Notify(c, sig) Stop(c)
defer Stop(c) if sig != syscall.SIGHUP || *sendUncaughtSighup == 2 {
syscall.Kill(syscall.Getpid(), sig)
quiesce()
// Send this process that signal select {
syscall.Kill(syscall.Getpid(), sig) case s := <-c:
waitSig(t, c, sig) t.Errorf("unexpected signal %v", s)
default:
// nothing to read - good
}
Stop(c) // If we're going to receive a signal, it has almost certainly been
time.Sleep(50 * time.Millisecond) // received by now. However, it may have been blocked for this process —
select { // we don't know. Explicitly unblock it and wait for it to clear now.
case s := <-c: Notify(c, sig)
t.Fatalf("unexpected signal %v", s) quiesce()
case <-time.After(50 * time.Millisecond): Stop(c)
// nothing to read - good }
} })
// Send the signal.
// If it's SIGWINCH, we should not see it.
// If it's SIGHUP, maybe we'll die. Let the flag tell us what to do.
if sig != syscall.SIGHUP || *sendUncaughtSighup == 2 {
syscall.Kill(syscall.Getpid(), sig)
}
time.Sleep(50 * time.Millisecond)
select {
case s := <-c:
t.Fatalf("unexpected signal %v", s)
case <-time.After(50 * time.Millisecond):
// nothing to read - good
}
} }
} }
// Test that when run under nohup, an uncaught SIGHUP does not kill the program, // Test that when run under nohup, an uncaught SIGHUP does not kill the program.
// but a
func TestNohup(t *testing.T) { func TestNohup(t *testing.T) {
// Ugly: ask for SIGHUP so that child will not have no-hup set // Ugly: ask for SIGHUP so that child will not have no-hup set
// even if test is running under nohup environment. // even if test is running under nohup environment.
@@ -371,12 +382,38 @@ func TestNohup(t *testing.T) {
// //
// Both should fail without nohup and succeed with nohup. // Both should fail without nohup and succeed with nohup.
for i := 1; i <= 2; i++ { var subTimeout time.Duration
out, err := exec.Command(os.Args[0], "-test.run=TestStop", "-send_uncaught_sighup="+strconv.Itoa(i)).CombinedOutput()
if err == nil { var wg sync.WaitGroup
t.Fatalf("ran test with -send_uncaught_sighup=%d and it succeeded: expected failure.\nOutput:\n%s", i, out) wg.Add(2)
} if deadline, ok := t.Deadline(); ok {
subTimeout = time.Until(deadline)
subTimeout -= subTimeout / 10 // Leave 10% headroom for propagating output.
} }
for i := 1; i <= 2; i++ {
i := i
go t.Run(fmt.Sprintf("uncaught-%d", i), func(t *testing.T) {
defer wg.Done()
args := []string{
"-test.v",
"-test.run=TestStop",
"-send_uncaught_sighup=" + strconv.Itoa(i),
"-die_from_sighup",
}
if subTimeout != 0 {
args = append(args, fmt.Sprintf("-test.timeout=%v", subTimeout))
}
out, err := exec.Command(os.Args[0], args...).CombinedOutput()
if err == nil {
t.Errorf("ran test with -send_uncaught_sighup=%d and it succeeded: expected failure.\nOutput:\n%s", i, out)
} else {
t.Logf("test with -send_uncaught_sighup=%d failed as expected.\nError: %v\nOutput:\n%s", i, err, out)
}
})
}
wg.Wait()
Stop(c) Stop(c)
@@ -387,21 +424,46 @@ func TestNohup(t *testing.T) {
} }
// Again, this time with nohup, assuming we can find it. // Again, this time with nohup, assuming we can find it.
_, err := os.Stat("/usr/bin/nohup") _, err := exec.LookPath("nohup")
if err != nil { if err != nil {
t.Skip("cannot find nohup; skipping second half of test") t.Skip("cannot find nohup; skipping second half of test")
} }
for i := 1; i <= 2; i++ { wg.Add(2)
os.Remove("nohup.out") if deadline, ok := t.Deadline(); ok {
out, err := exec.Command("/usr/bin/nohup", os.Args[0], "-test.run=TestStop", "-send_uncaught_sighup="+strconv.Itoa(i)).CombinedOutput() subTimeout = time.Until(deadline)
subTimeout -= subTimeout / 10 // Leave 10% headroom for propagating output.
data, _ := ioutil.ReadFile("nohup.out")
os.Remove("nohup.out")
if err != nil {
t.Fatalf("ran test with -send_uncaught_sighup=%d under nohup and it failed: expected success.\nError: %v\nOutput:\n%s%s", i, err, out, data)
}
} }
for i := 1; i <= 2; i++ {
i := i
go t.Run(fmt.Sprintf("nohup-%d", i), func(t *testing.T) {
defer wg.Done()
// POSIX specifies that nohup writes to a file named nohup.out if standard
// output is a terminal. However, for an exec.Command, standard output is
// not a terminal — so we don't need to read or remove that file (and,
// indeed, cannot even create it if the current user is unable to write to
// GOROOT/src, such as when GOROOT is installed and owned by root).
args := []string{
os.Args[0],
"-test.v",
"-test.run=TestStop",
"-send_uncaught_sighup=" + strconv.Itoa(i),
}
if subTimeout != 0 {
args = append(args, fmt.Sprintf("-test.timeout=%v", subTimeout))
}
out, err := exec.Command("nohup", args...).CombinedOutput()
if err != nil {
t.Errorf("ran test with -send_uncaught_sighup=%d under nohup and it failed: expected success.\nError: %v\nOutput:\n%s", i, err, out)
} else {
t.Logf("ran test with -send_uncaught_sighup=%d under nohup.\nOutput:\n%s", i, out)
}
})
}
wg.Wait()
} }
// Test that SIGCONT works (issue 8953). // Test that SIGCONT works (issue 8953).
@@ -416,7 +478,7 @@ func TestSIGCONT(t *testing.T) {
// Test race between stopping and receiving a signal (issue 14571). // Test race between stopping and receiving a signal (issue 14571).
func TestAtomicStop(t *testing.T) { func TestAtomicStop(t *testing.T) {
if os.Getenv("GO_TEST_ATOMIC_STOP") != "" { if os.Getenv("GO_TEST_ATOMIC_STOP") != "" {
atomicStopTestProgram() atomicStopTestProgram(t)
t.Fatal("atomicStopTestProgram returned") t.Fatal("atomicStopTestProgram returned")
} }
@@ -438,8 +500,8 @@ func TestAtomicStop(t *testing.T) {
const execs = 10 const execs = 10
for i := 0; i < execs; i++ { for i := 0; i < execs; i++ {
timeout := "0" timeout := "0"
if !testDeadline.IsZero() { if deadline, ok := t.Deadline(); ok {
timeout = testDeadline.Sub(time.Now()).String() timeout = time.Until(deadline).String()
} }
cmd := exec.Command(os.Args[0], "-test.run=TestAtomicStop", "-test.timeout="+timeout) cmd := exec.Command(os.Args[0], "-test.run=TestAtomicStop", "-test.timeout="+timeout)
cmd.Env = append(os.Environ(), "GO_TEST_ATOMIC_STOP=1") cmd.Env = append(os.Environ(), "GO_TEST_ATOMIC_STOP=1")
@@ -478,7 +540,7 @@ func TestAtomicStop(t *testing.T) {
// atomicStopTestProgram is run in a subprocess by TestAtomicStop. // atomicStopTestProgram is run in a subprocess by TestAtomicStop.
// It tries to trigger a signal delivery race. This function should // It tries to trigger a signal delivery race. This function should
// either catch a signal or die from it. // either catch a signal or die from it.
func atomicStopTestProgram() { func atomicStopTestProgram(t *testing.T) {
// This test won't work if SIGINT is ignored here. // This test won't work if SIGINT is ignored here.
if Ignored(syscall.SIGINT) { if Ignored(syscall.SIGINT) {
fmt.Println("SIGINT is ignored") fmt.Println("SIGINT is ignored")
@@ -488,10 +550,10 @@ func atomicStopTestProgram() {
const tries = 10 const tries = 10
timeout := 2 * time.Second timeout := 2 * time.Second
if !testDeadline.IsZero() { if deadline, ok := t.Deadline(); ok {
// Give each try an equal slice of the deadline, with one slice to spare for // Give each try an equal slice of the deadline, with one slice to spare for
// cleanup. // cleanup.
timeout = testDeadline.Sub(time.Now()) / (tries + 1) timeout = time.Until(deadline) / (tries + 1)
} }
pid := syscall.Getpid() pid := syscall.Getpid()
@@ -541,43 +603,45 @@ func TestTime(t *testing.T) {
dur = 100 * time.Millisecond dur = 100 * time.Millisecond
} }
defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(4)) defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(4))
done := make(chan bool)
finished := make(chan bool) sig := make(chan os.Signal, 1)
Notify(sig, syscall.SIGUSR1)
stop := make(chan struct{})
go func() { go func() {
sig := make(chan os.Signal, 1)
Notify(sig, syscall.SIGUSR1)
defer Stop(sig)
Loop:
for { for {
select { select {
case <-sig: case <-stop:
case <-done: // Allow enough time for all signals to be delivered before we stop
break Loop // listening for them.
} quiesce()
} Stop(sig)
finished <- true // According to its documentation, “[w]hen Stop returns, it is
}() // guaranteed that c will receive no more signals.” So we can safely
go func() { // close sig here: if there is a send-after-close race, that is a bug in
Loop: // Stop and we would like to detect it.
for { close(sig)
select { return
case <-done:
break Loop
default: default:
syscall.Kill(syscall.Getpid(), syscall.SIGUSR1) syscall.Kill(syscall.Getpid(), syscall.SIGUSR1)
runtime.Gosched() runtime.Gosched()
} }
} }
finished <- true
}() }()
done := make(chan struct{})
go func() {
for range sig {
// Receive signals until the sender closes sig.
}
close(done)
}()
t0 := time.Now() t0 := time.Now()
for t1 := t0; t1.Sub(t0) < dur; t1 = time.Now() { for t1 := t0; t1.Sub(t0) < dur; t1 = time.Now() {
} // hammering on getting time } // hammering on getting time
close(done)
<-finished close(stop)
<-finished <-done
// When run with 'go test -cpu=1,2,4' SIGUSR1 from this test can slip
// into subsequent TestSignal() causing failure.
// Sleep for a while to reduce the possibility of the failure.
time.Sleep(10 * time.Millisecond)
} }
View file
@@ -25,8 +25,6 @@ func loop() {
} }
func init() { func init() {
signal_enable(0) // first call - initialize
watchSignalLoop = loop watchSignalLoop = loop
} }
View file
@@ -4163,6 +4163,37 @@ func TestConvert(t *testing.T) {
} }
} }
var gFloat32 float32
func TestConvertNaNs(t *testing.T) {
const snan uint32 = 0x7f800001
// Test to see if a store followed by a load of a signaling NaN
// maintains the signaling bit. The only platform known to fail
// this test is 386,GO386=387. The real test below will always fail
// if the platform can't even store+load a float without mucking
// with the bits.
gFloat32 = math.Float32frombits(snan)
runtime.Gosched() // make sure we don't optimize the store/load away
r := math.Float32bits(gFloat32)
if r != snan {
// This should only happen on 386,GO386=387. We have no way to
// test for 387, so we just make sure we're at least on 386.
if runtime.GOARCH != "386" {
t.Errorf("store/load of sNaN not faithful")
}
t.Skip("skipping test, float store+load not faithful")
}
type myFloat32 float32
x := V(myFloat32(math.Float32frombits(snan)))
y := x.Convert(TypeOf(float32(0)))
z := y.Interface().(float32)
if got := math.Float32bits(z); got != snan {
t.Errorf("signaling nan conversion got %x, want %x", got, snan)
}
}
type ComparableStruct struct { type ComparableStruct struct {
X int X int
} }
View file
@@ -2541,6 +2541,14 @@ func makeFloat(f flag, v float64, t Type) Value {
return Value{typ, ptr, f | flagIndir | flag(typ.Kind())} return Value{typ, ptr, f | flagIndir | flag(typ.Kind())}
} }
// makeFloat returns a Value of type t equal to v, where t is a float32 type.
func makeFloat32(f flag, v float32, t Type) Value {
typ := t.common()
ptr := unsafe_New(typ)
*(*float32)(ptr) = v
return Value{typ, ptr, f | flagIndir | flag(typ.Kind())}
}
// makeComplex returns a Value of type t equal to v (possibly truncated to complex64), // makeComplex returns a Value of type t equal to v (possibly truncated to complex64),
// where t is a complex64 or complex128 type. // where t is a complex64 or complex128 type.
func makeComplex(f flag, v complex128, t Type) Value { func makeComplex(f flag, v complex128, t Type) Value {
@@ -2613,6 +2621,12 @@ func cvtUintFloat(v Value, t Type) Value {
// convertOp: floatXX -> floatXX // convertOp: floatXX -> floatXX
func cvtFloat(v Value, t Type) Value { func cvtFloat(v Value, t Type) Value {
if v.Type().Kind() == Float32 && t.Kind() == Float32 {
// Don't do any conversion if both types have underlying type float32.
// This avoids converting to float64 and back, which will
// convert a signaling NaN to a quiet NaN. See issue 36400.
return makeFloat32(v.flag.ro(), *(*float32)(v.ptr), t)
}
return makeFloat(v.flag.ro(), v.Float(), t) return makeFloat(v.flag.ro(), v.Float(), t)
} }
View file
@@ -1475,6 +1475,55 @@ flush:
MOVQ 96(SP), R15 MOVQ 96(SP), R15
JMP ret JMP ret
// gcWriteBarrierCX is gcWriteBarrier, but with args in DI and CX.
TEXT runtime·gcWriteBarrierCX(SB),NOSPLIT,$0
XCHGQ CX, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ CX, AX
RET
// gcWriteBarrierDX is gcWriteBarrier, but with args in DI and DX.
TEXT runtime·gcWriteBarrierDX(SB),NOSPLIT,$0
XCHGQ DX, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ DX, AX
RET
// gcWriteBarrierBX is gcWriteBarrier, but with args in DI and BX.
TEXT runtime·gcWriteBarrierBX(SB),NOSPLIT,$0
XCHGQ BX, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ BX, AX
RET
// gcWriteBarrierBP is gcWriteBarrier, but with args in DI and BP.
TEXT runtime·gcWriteBarrierBP(SB),NOSPLIT,$0
XCHGQ BP, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ BP, AX
RET
// gcWriteBarrierSI is gcWriteBarrier, but with args in DI and SI.
TEXT runtime·gcWriteBarrierSI(SB),NOSPLIT,$0
XCHGQ SI, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ SI, AX
RET
// gcWriteBarrierR8 is gcWriteBarrier, but with args in DI and R8.
TEXT runtime·gcWriteBarrierR8(SB),NOSPLIT,$0
XCHGQ R8, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ R8, AX
RET
// gcWriteBarrierR9 is gcWriteBarrier, but with args in DI and R9.
TEXT runtime·gcWriteBarrierR9(SB),NOSPLIT,$0
XCHGQ R9, AX
CALL runtime·gcWriteBarrier(SB)
XCHGQ R9, AX
RET
DATA debugCallFrameTooLarge<>+0x00(SB)/20, $"call frame too large" DATA debugCallFrameTooLarge<>+0x00(SB)/20, $"call frame too large"
GLOBL debugCallFrameTooLarge<>(SB), RODATA, $20 // Size duplicated below GLOBL debugCallFrameTooLarge<>(SB), RODATA, $20 // Size duplicated below
View file
@@ -137,7 +137,5 @@ TEXT runtime·duffzero(SB), NOSPLIT|NOFRAME, $0-0
MOVDU R0, 8(R3) MOVDU R0, 8(R3)
RET RET
// TODO: Implement runtime·duffcopy. TEXT runtime·duffcopy(SB), NOSPLIT|NOFRAME, $0-0
TEXT runtime·duffcopy(SB),NOSPLIT|NOFRAME,$0-0 UNDEF
MOVD R0, 0(R0)
RET
View file
@@ -66,7 +66,7 @@ const (
bucketCnt = 1 << bucketCntBits bucketCnt = 1 << bucketCntBits
// Maximum average load of a bucket that triggers growth is 6.5. // Maximum average load of a bucket that triggers growth is 6.5.
// Represent as loadFactorNum/loadFactDen, to allow integer math. // Represent as loadFactorNum/loadFactorDen, to allow integer math.
loadFactorNum = 13 loadFactorNum = 13
loadFactorDen = 2 loadFactorDen = 2
View file
@@ -194,7 +194,9 @@ func zeroPPC64x(w io.Writer) {
} }
func copyPPC64x(w io.Writer) { func copyPPC64x(w io.Writer) {
fmt.Fprintln(w, "// TODO: Implement runtime·duffcopy.") // duffcopy is not used on PPC64.
fmt.Fprintln(w, "TEXT runtime·duffcopy(SB), NOSPLIT|NOFRAME, $0-0")
fmt.Fprintln(w, "\tUNDEF")
} }
func tagsMIPS64x(w io.Writer) { func tagsMIPS64x(w io.Writer) {
View file
@@ -724,7 +724,7 @@ nextLevel:
// is what the final level represents. // is what the final level represents.
ci := chunkIdx(i) ci := chunkIdx(i)
j, searchIdx := s.chunkOf(ci).find(npages, 0) j, searchIdx := s.chunkOf(ci).find(npages, 0)
if j < 0 { if j == ^uint(0) {
// We couldn't find any space in this chunk despite the summaries telling // We couldn't find any space in this chunk despite the summaries telling
// us it should be there. There's likely a bug, so dump some state and throw. // us it should be there. There's likely a bug, so dump some state and throw.
sum := s.summary[len(s.summary)-1][i] sum := s.summary[len(s.summary)-1][i]
@@ -766,7 +766,7 @@ func (s *pageAlloc) alloc(npages uintptr) (addr uintptr, scav uintptr) {
i := chunkIndex(s.searchAddr) i := chunkIndex(s.searchAddr)
if max := s.summary[len(s.summary)-1][i].max(); max >= uint(npages) { if max := s.summary[len(s.summary)-1][i].max(); max >= uint(npages) {
j, searchIdx := s.chunkOf(i).find(npages, chunkPageIndex(s.searchAddr)) j, searchIdx := s.chunkOf(i).find(npages, chunkPageIndex(s.searchAddr))
if j < 0 { if j == ^uint(0) {
print("runtime: max = ", max, ", npages = ", npages, "\n") print("runtime: max = ", max, ", npages = ", npages, "\n")
print("runtime: searchIdx = ", chunkPageIndex(s.searchAddr), ", s.searchAddr = ", hex(s.searchAddr), "\n") print("runtime: searchIdx = ", chunkPageIndex(s.searchAddr), ", s.searchAddr = ", hex(s.searchAddr), "\n")
throw("bad summary data") throw("bad summary data")
View file
@@ -115,7 +115,7 @@ func (s *pageAlloc) allocToCache() pageCache {
// Fast path: there's free pages at or near the searchAddr address. // Fast path: there's free pages at or near the searchAddr address.
chunk := s.chunkOf(ci) chunk := s.chunkOf(ci)
j, _ := chunk.find(1, chunkPageIndex(s.searchAddr)) j, _ := chunk.find(1, chunkPageIndex(s.searchAddr))
if j < 0 { if j == ^uint(0) {
throw("bad summary data") throw("bad summary data")
} }
c = pageCache{ c = pageCache{
View file
@@ -33,6 +33,15 @@ import (
// func netpollIsPollDescriptor(fd uintptr) bool // func netpollIsPollDescriptor(fd uintptr) bool
// Reports whether fd is a file descriptor used by the poller. // Reports whether fd is a file descriptor used by the poller.
// Error codes returned by runtime_pollReset and runtime_pollWait.
// These must match the values in internal/poll/fd_poll_runtime.go.
const (
pollNoError = 0 // no error
pollErrClosing = 1 // descriptor is closed
pollErrTimeout = 2 // I/O timeout
pollErrNotPollable = 3 // general error polling descriptor
)
// pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer // pollDesc contains 2 binary semaphores, rg and wg, to park reader and writer
// goroutines respectively. The semaphore can be in the following states: // goroutines respectively. The semaphore can be in the following states:
// pdReady - io readiness notification is pending; // pdReady - io readiness notification is pending;
@@ -176,40 +185,47 @@ func (c *pollCache) free(pd *pollDesc) {
unlock(&c.lock) unlock(&c.lock)
} }
// poll_runtime_pollReset, which is internal/poll.runtime_pollReset,
// prepares a descriptor for polling in mode, which is 'r' or 'w'.
// This returns an error code; the codes are defined above.
//go:linkname poll_runtime_pollReset internal/poll.runtime_pollReset //go:linkname poll_runtime_pollReset internal/poll.runtime_pollReset
func poll_runtime_pollReset(pd *pollDesc, mode int) int { func poll_runtime_pollReset(pd *pollDesc, mode int) int {
err := netpollcheckerr(pd, int32(mode)) errcode := netpollcheckerr(pd, int32(mode))
if err != 0 { if errcode != pollNoError {
return err return errcode
} }
if mode == 'r' { if mode == 'r' {
pd.rg = 0 pd.rg = 0
} else if mode == 'w' { } else if mode == 'w' {
pd.wg = 0 pd.wg = 0
} }
return 0 return pollNoError
} }
// poll_runtime_pollWait, which is internal/poll.runtime_pollWait,
// waits for a descriptor to be ready for reading or writing,
// according to mode, which is 'r' or 'w'.
// This returns an error code; the codes are defined above.
//go:linkname poll_runtime_pollWait internal/poll.runtime_pollWait //go:linkname poll_runtime_pollWait internal/poll.runtime_pollWait
func poll_runtime_pollWait(pd *pollDesc, mode int) int { func poll_runtime_pollWait(pd *pollDesc, mode int) int {
err := netpollcheckerr(pd, int32(mode)) errcode := netpollcheckerr(pd, int32(mode))
if err != 0 { if errcode != pollNoError {
return err return errcode
} }
// As for now only Solaris, illumos, and AIX use level-triggered IO. // As for now only Solaris, illumos, and AIX use level-triggered IO.
if GOOS == "solaris" || GOOS == "illumos" || GOOS == "aix" { if GOOS == "solaris" || GOOS == "illumos" || GOOS == "aix" {
netpollarm(pd, mode) netpollarm(pd, mode)
} }
for !netpollblock(pd, int32(mode), false) { for !netpollblock(pd, int32(mode), false) {
err = netpollcheckerr(pd, int32(mode)) errcode = netpollcheckerr(pd, int32(mode))
if err != 0 { if errcode != pollNoError {
return err return errcode
} }
// Can happen if timeout has fired and unblocked us, // Can happen if timeout has fired and unblocked us,
// but before we had a chance to run, timeout has been reset. // but before we had a chance to run, timeout has been reset.
// Pretend it has not happened and retry. // Pretend it has not happened and retry.
} }
return 0 return pollNoError
} }
//go:linkname poll_runtime_pollWaitCanceled internal/poll.runtime_pollWaitCanceled //go:linkname poll_runtime_pollWaitCanceled internal/poll.runtime_pollWaitCanceled
@@ -359,18 +375,18 @@ func netpollready(toRun *gList, pd *pollDesc, mode int32) {
func netpollcheckerr(pd *pollDesc, mode int32) int { func netpollcheckerr(pd *pollDesc, mode int32) int {
if pd.closing { if pd.closing {
return 1 // ErrFileClosing or ErrNetClosing return pollErrClosing
} }
if (mode == 'r' && pd.rd < 0) || (mode == 'w' && pd.wd < 0) { if (mode == 'r' && pd.rd < 0) || (mode == 'w' && pd.wd < 0) {
return 2 // ErrTimeout return pollErrTimeout
} }
// Report an event scanning error only on a read event. // Report an event scanning error only on a read event.
// An error on a write event will be captured in a subsequent // An error on a write event will be captured in a subsequent
// write call that is able to report a more specific error. // write call that is able to report a more specific error.
if mode == 'r' && pd.everr { if mode == 'r' && pd.everr {
return 3 // ErrNotPollable return pollErrNotPollable
} }
return 0 return pollNoError
} }
func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool { func netpollblockcommit(gp *g, gpp unsafe.Pointer) bool {
@@ -447,7 +463,7 @@ func netpollunblock(pd *pollDesc, mode int32, ioready bool) *g {
new = pdReady new = pdReady
} }
if atomic.Casuintptr(gpp, old, new) { if atomic.Casuintptr(gpp, old, new) {
if old == pdReady || old == pdWait { if old == pdWait {
old = 0 old = 0
} }
return (*g)(unsafe.Pointer(old)) return (*g)(unsafe.Pointer(old))
View file
@@ -4,7 +4,10 @@
package runtime package runtime
import "unsafe" import (
"runtime/internal/atomic"
"unsafe"
)
// This is based on the former libgo/runtime/netpoll_select.c implementation // This is based on the former libgo/runtime/netpoll_select.c implementation
// except that it uses poll instead of select and is written in Go. // except that it uses poll instead of select and is written in Go.
@@ -41,6 +44,8 @@ var (
rdwake int32 rdwake int32
wrwake int32 wrwake int32
pendingUpdates int32 pendingUpdates int32
netpollWakeSig uintptr // used to avoid duplicate calls of netpollBreak
) )
func netpollinit() { func netpollinit() {
@@ -130,7 +135,10 @@ func netpollarm(pd *pollDesc, mode int) {
// netpollBreak interrupts a poll. // netpollBreak interrupts a poll.
func netpollBreak() { func netpollBreak() {
netpollwakeup() if atomic.Casuintptr(&netpollWakeSig, 0, 1) {
b := [1]byte{0}
write(uintptr(wrwake), unsafe.Pointer(&b[0]), 1)
}
} }
// netpoll checks for ready network connections. // netpoll checks for ready network connections.
@@ -184,6 +192,7 @@ retry:
var b [1]byte var b [1]byte
for read(rdwake, unsafe.Pointer(&b[0]), 1) == 1 { for read(rdwake, unsafe.Pointer(&b[0]), 1) == 1 {
} }
atomic.Storeuintptr(&netpollWakeSig, 0)
} }
// Still look at the other fds even if the mode may have // Still look at the other fds even if the mode may have
// changed, as netpollBreak might have been called. // changed, as netpollBreak might have been called.
View file
@@ -1171,6 +1171,18 @@ func TestTryAdd(t *testing.T) {
{Value: []int64{10, 10 * period}, Location: []*profile.Location{{ID: 1}, {ID: 1}}}, {Value: []int64{10, 10 * period}, Location: []*profile.Location{{ID: 1}, {ID: 1}}},
{Value: []int64{20, 20 * period}, Location: []*profile.Location{{ID: 1}}}, {Value: []int64{20, 20 * period}, Location: []*profile.Location{{ID: 1}}},
}, },
}, {
name: "bug38096",
input: []uint64{
3, 0, 500, // hz = 500. Must match the period.
// count (data[2]) == 0 && len(stk) == 1 is an overflow
// entry. The "stk" entry is actually the count.
4, 0, 0, 4242,
},
wantLocs: [][]string{{"runtime/pprof.lostProfileEvent"}},
wantSamples: []*profile.Sample{
{Value: []int64{4242, 4242 * period}, Location: []*profile.Location{{ID: 1}}},
},
}, { }, {
// If a function is called recursively then it must not be // If a function is called recursively then it must not be
// inlined in the caller. // inlined in the caller.
View file
@@ -322,7 +322,10 @@ func (b *profileBuilder) addCPUData(data []uint64, tags []unsafe.Pointer) error
// overflow record // overflow record
count = uint64(stk[0]) count = uint64(stk[0])
stk = []uint64{ stk = []uint64{
uint64(funcPC(lostProfileEvent)), // gentraceback guarantees that PCs in the
// stack can be unconditionally decremented and
// still be valid, so we must do the same.
uint64(funcPC(lostProfileEvent)+1),
} }
} }
b.m.lookup(stk, tag).count += int64(count) b.m.lookup(stk, tag).count += int64(count)
View file
@@ -41,6 +41,8 @@ func checkGdbEnvironment(t *testing.T) {
if testing.Short() { if testing.Short() {
t.Skip("skipping gdb tests on AIX; see https://golang.org/issue/35710") t.Skip("skipping gdb tests on AIX; see https://golang.org/issue/35710")
} }
case "plan9":
t.Skip("there is no gdb on Plan 9")
} }
if final := os.Getenv("GOROOT_FINAL"); final != "" && runtime.GOROOT() != final { if final := os.Getenv("GOROOT_FINAL"); final != "" && runtime.GOROOT() != final {
t.Skip("gdb test can fail with GOROOT_FINAL pending") t.Skip("gdb test can fail with GOROOT_FINAL pending")
View file
@@ -192,16 +192,13 @@ func signalWaitUntilIdle() {
//go:linkname signal_enable os/signal.signal_enable //go:linkname signal_enable os/signal.signal_enable
func signal_enable(s uint32) { func signal_enable(s uint32) {
if !sig.inuse { if !sig.inuse {
// The first call to signal_enable is for us // This is the first call to signal_enable. Initialize.
// to use for initialization. It does not pass
// signal information in m.
sig.inuse = true // enable reception of signals; cannot disable sig.inuse = true // enable reception of signals; cannot disable
if GOOS == "darwin" { if GOOS == "darwin" {
sigNoteSetup(&sig.note) sigNoteSetup(&sig.note)
return } else {
noteclear(&sig.note)
} }
noteclear(&sig.note)
return
} }
if s >= uint32(len(sig.wanted)*32) { if s >= uint32(len(sig.wanted)*32) {
View file
@@ -134,12 +134,9 @@ func signalWaitUntilIdle() {
//go:linkname signal_enable os/signal.signal_enable //go:linkname signal_enable os/signal.signal_enable
func signal_enable(s uint32) { func signal_enable(s uint32) {
if !sig.inuse { if !sig.inuse {
// The first call to signal_enable is for us // This is the first call to signal_enable. Initialize.
// to use for initialization. It does not pass
// signal information in m.
sig.inuse = true // enable reception of signals; cannot disable sig.inuse = true // enable reception of signals; cannot disable
noteclear(&sig.note) noteclear(&sig.note)
return
} }
} }
View file
@@ -4,6 +4,15 @@
package runtime package runtime
// Called from compiled code; declared for vet; do NOT call from Go.
func gcWriteBarrierCX()
func gcWriteBarrierDX()
func gcWriteBarrierBX()
func gcWriteBarrierBP()
func gcWriteBarrierSI()
func gcWriteBarrierR8()
func gcWriteBarrierR9()
// stackcheck checks that SP is in range [g->stack.lo, g->stack.hi). // stackcheck checks that SP is in range [g->stack.lo, g->stack.hi).
func stackcheck() func stackcheck()
View file
@@ -828,7 +828,7 @@ func makeCutsetFunc(cutset string) func(rune) bool {
// Trim returns a slice of the string s with all leading and // Trim returns a slice of the string s with all leading and
// trailing Unicode code points contained in cutset removed. // trailing Unicode code points contained in cutset removed.
func Trim(s string, cutset string) string { func Trim(s, cutset string) string {
if s == "" || cutset == "" { if s == "" || cutset == "" {
return s return s
} }
@@ -839,7 +839,7 @@ func Trim(s string, cutset string) string {
// Unicode code points contained in cutset removed. // Unicode code points contained in cutset removed.
// //
// To remove a prefix, use TrimPrefix instead. // To remove a prefix, use TrimPrefix instead.
func TrimLeft(s string, cutset string) string { func TrimLeft(s, cutset string) string {
if s == "" || cutset == "" { if s == "" || cutset == "" {
return s return s
} }
@@ -850,7 +850,7 @@ func TrimLeft(s string, cutset string) string {
// Unicode code points contained in cutset removed. // Unicode code points contained in cutset removed.
// //
// To remove a suffix, use TrimSuffix instead. // To remove a suffix, use TrimSuffix instead.
func TrimRight(s string, cutset string) string { func TrimRight(s, cutset string) string {
if s == "" || cutset == "" { if s == "" || cutset == "" {
return s return s
} }
Some files were not shown because too many files have changed in this diff.