The SB register (R28) is introduced for accessing external addresses with
shorter instruction sequences. It is loaded at entry points. External data
within 2 GB of SB can be accessed this way.
cmd/internal/obj: relocation R_ADDRMIPS is split into two relocations,
R_ADDRMIPS and R_ADDRMIPSU, handling the low 16 bits and the "upper" 16
bits of external addresses, respectively, since the instructions may not
be adjacent. It might be better if relocation Variant could be used.
cmd/link/internal/mips64: support new relocations.
cmd/compile/internal/mips64: reserve SB register.
runtime: initialize SB register at entry points.
Change-Id: I5f34868f88c5a9698c042a8a1f12f76806c187b9
Reviewed-on: https://go-review.googlesource.com/19802
Reviewed-by: Minux Ma <minux@golang.org>
This CL introduces the typeOff type and a lookup method of the same
name that can turn a typeOff offset into an *rtype.
In a typical Go binary (built with buildmode=exe, pie, c-archive, or
c-shared), there is one moduledata and all typeOff values are offsets
relative to firstmoduledata.types. This makes computing the pointer
cheap in typical programs.
With buildmode=shared (and one day, buildmode=plugin) there are
multiple modules whose relative offset is determined at runtime.
We identify a type in the general case by the pair of the original
*rtype that references it and its typeOff value. We determine
the module from the original pointer, and then use the typeOff from
there to compute the final *rtype.
To ensure there is only one *rtype representing each type, the
runtime initializes a typemap for each module, using any identical
type from an earlier module when resolving that offset. This means
that types computed from an offset match the type mapped by the
pointer dynamic relocations.
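For illustration, a rough sketch of the lookup (not the real runtime code;
the types, fields, and the moduleOf helper below are simplified stand-ins):

package sketch

import "unsafe"

type rtype struct{}

type typeOff int32

type module struct {
	types   uintptr            // base address of this module's type data
	typemap map[typeOff]*rtype // canonical *rtype for offsets resolved earlier
}

// moduleOf stands in for locating the module containing the referencing type.
func moduleOf(from *rtype) *module { return &module{} }

// typeAt turns a (referencing *rtype, typeOff) pair into a canonical *rtype.
func typeAt(from *rtype, off typeOff) *rtype {
	md := moduleOf(from)
	if t, ok := md.typemap[off]; ok {
		// An identical type from an earlier module was already chosen
		// as the canonical *rtype for this offset.
		return t
	}
	return (*rtype)(unsafe.Pointer(md.types + uintptr(off)))
}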
A series of followup CLs will replace other *rtype values with typeOff
(and name/*string with nameOff).
For types created at runtime by reflect, type offsets are treated as
global IDs and reference into a reflect offset map kept by the runtime.
darwin/amd64:
cmd/go: -57KB (0.6%)
jujud: -557KB (0.8%)
linux/amd64 PIE:
cmd/go: -361KB (3.0%)
jujud: -3.5MB (4.2%)
For #6853.
Change-Id: Icf096fd884a0a0cb9f280f46f7a26c70a9006c96
Reviewed-on: https://go-review.googlesource.com/21285
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Only splits into separate files, no other changes.
Change-Id: Icc0da2c5f18e03e9ed7c0043bd7c790f741900f2
Reviewed-on: https://go-review.googlesource.com/21804
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This is the first in a series of CLs to replace the use of pointers
in binary read-only data with offsets.
In standard Go binaries these CLs have a small effect, shrinking
8-byte pointers to 4-bytes. In position-independent code, it also
saves the dynamic relocation for the pointer. This has a significant
effect on the binary size when building as PIE, c-archive, or
c-shared.
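A rough illustration of the shape of the change (the type and field names
below are invented, not the real runtime declarations):

package sketch

// Before: read-only data holds a pointer, 8 bytes on 64-bit systems, and
// PIE/c-archive/c-shared builds need a dynamic relocation to fill it in.
type withPointer struct {
	elem *byte
}

// After: a 4-byte offset from a known base (the module's read-only data),
// resolved at use, with no dynamic relocation.
type withOffset struct {
	elem int32
}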
darwin/amd64:
cmd/go: -12KB (0.1%)
jujud: -82KB (0.1%)
linux/amd64 PIE:
cmd/go: -86KB (0.7%)
jujud: -569KB (0.7%)
For #6853.
Change-Id: Iad5625bbeba58dabfd4d334dbee3fcbfe04b2dcf
Reviewed-on: https://go-review.googlesource.com/21284
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
bio.BufReader was never used.
bio.BufWriter was used to wrap an existing io.Writer, but the bio.Writer
returned would not be seekable, so replace all occurrences with
bufio.Writer instead.
Change-Id: I9c6779e35c63178aa4e104c17bb5bb8b52de0359
Reviewed-on: https://go-review.googlesource.com/21722
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Replace the bidirectional bio.Buf type with a pair of unidirectional
buffered seekable Reader and Writers.
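The resulting API has roughly this shape (a sketch inferred from the
description above; not necessarily the exact declarations):

package bio

import (
	"bufio"
	"os"
)

// Reader is a seekable buffered reader of an underlying file.
type Reader struct {
	f *os.File
	*bufio.Reader
}

// Writer is a seekable buffered writer of an underlying file.
type Writer struct {
	f *os.File
	*bufio.Writer
}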
Change-Id: I86664a06f93c94595dc67c2cbd21356feb6680ef
Reviewed-on: https://go-review.googlesource.com/21720
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
API could still be made more Go-ey.
Updates #15165.
Change-Id: I514ffceffa43c293ae5d7e5f1e9193fda0098865
Reviewed-on: https://go-review.googlesource.com/21644
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Information about CPU architectures (e.g., name, family, byte
ordering, pointer and register size) is currently redundantly
scattered around the source tree. Instead consolidate the basic
information into a single new package cmd/internal/sys.
Also, introduce new sys.I386, sys.AMD64, etc. names for the constants
'8', '6', etc. and replace most uses of the latter. The notable
exceptions are a couple of error messages that still refer to the old
char-based toolchain names, and the reltype function in cmd/link.
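The consolidated information has roughly this shape (a sketch; exact names
and fields may differ):

package sys

import "encoding/binary"

type ArchFamily int

const (
	AMD64 ArchFamily = iota
	ARM
	I386
	PPC64
	// ...
)

// Arch holds the per-architecture facts that used to be scattered
// around the source tree.
type Arch struct {
	Name      string
	Family    ArchFamily
	ByteOrder binary.ByteOrder
	PtrSize   int
	RegSize   int
}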
Passes toolstash/buildall.
Change-Id: I8a6f0cbd49577ec1672a98addebc45f767e36461
Reviewed-on: https://go-review.googlesource.com/21623
Reviewed-by: Michael Hudson-Doyle <michael.hudson@canonical.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This updates dwarf.go to generate debug information as symbols
instead of directly writing to the output file. This should make
it easier to move generation of some of the debug info into the compiler.
Change-Id: Id2358988bfb689865ab4d68f82716f0676336df4
Reviewed-on: https://go-review.googlesource.com/20679
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
We create appropriate ELF files automatically based on GOOS. There's
no point in supporting the -H elf flag, particularly since we need to emit
different flavors of ELF depending on GOOS anyway.
If that weren't reason enough, -H elf appears to be broken since at
least Go 1.4. At least I wasn't able to find a way to make use of it.
As best I can tell digging through commit history, -H elf is just an
artifact leftover from Plan 9's 6l linker.
Change-Id: I7393caaadbc60107bbd6bc99b976a4f4fe6b5451
Reviewed-on: https://go-review.googlesource.com/21343
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
See #14874
This change tells the linker to find all the itablink symbols and
collect them, so that moduledata can have a slice of all
compiler-generated itabs.
The logic is shamelessly adapted from what is done with typelink symbols.
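Conceptually, moduledata grows a slice of itabs next to the existing
typelinks slice (a simplified sketch; types reduced to stubs):

package sketch

type _type struct{}
type itab struct{}

type moduledata struct {
	typelinks []*_type // existing: compiler-generated type descriptors
	itablinks []*itab  // new: compiler-generated itabs collected by the linker
}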
Change-Id: Ie93b59acf0fcba908a876d506afbf796f222dbac
Reviewed-on: https://go-review.googlesource.com/20889
Reviewed-by: Keith Randall <khr@golang.org>
Adds a new R_PCRELDBL relocation for 2-byte aligned relative
relocations on s390x. Should be removed once #14218 is
implemented.
Change-Id: I79dd2d8e746ba8cbc26c570faccfdd691e8161e8
Reviewed-on: https://go-review.googlesource.com/20941
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
This CL addresses a long-standing CL by rsc by pushing the use of
Link.Windows down to its two users.
Link.Windows was always initialised with the value of runtime.GOOS, so
this does not affect cross compilation.
Change-Id: Ibbae068f8b5aad06336909691f094384caf12352
Reviewed-on: https://go-review.googlesource.com/20869
Run-TryBot: Dave Cheney <dave@cheney.net>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
For every string constant the compiler was creating two Syms and two
Nodes. It would never refer to them again, but would keep them alive
in gostringpkg. This changes the code to just use obj.LSyms instead.
When compiling x/tools/go/types, this yields about a 15% reduction in
the number of calls to newname and a 3% reduction in the total number of
Node objects. Unfortunately I couldn't see any change in compile time,
but reducing memory usage is desirable anyhow.
Passes toolstash -cmp.
Change-Id: I24f1cb1e6cff0a3afba4ca66f7166874917a036b
Reviewed-on: https://go-review.googlesource.com/20792
Reviewed-by: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
c2go translated writing and advancing a pointer using slices.
Switch to something more idiomatic.
It is also more efficient, but not enough to matter.
Change-Id: I67709632ac53253615a35365824ae97bbe5458d5
Reviewed-on: https://go-review.googlesource.com/20767
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Use a local variable instead.
Passes toolstash -cmp.
Change-Id: I9623a40ff0d568f11afd1279b6aaa1c33eda644c
Reviewed-on: https://go-review.googlesource.com/20730
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Symbols in the object file currently refer to each other using symbol name
and version. Referring to the same symbol many times in an object file takes
up space and causes redundant map lookups. Instead write out a list of unique
symbol references and have symbols refer to each other using indexes into this
list.
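Conceptually (the encoding below is invented for illustration, not the
actual object file format):

package sketch

// A unique symbol reference, written once in a table at the start of the
// symbol data.
type symRef struct {
	name    string
	version int
}

// Symbols then refer to each other by index into refs instead of repeating
// (name, version) at every reference.
type objFile struct {
	refs []symRef
}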
Credit to Michael Hudson-Doyle for kicking this off.
Reduces pkg/linux_amd64 size by 30% from 61MB to 43MB
name       old s/op   new s/op   delta
LinkCmdGo  0.74 ± 3%  0.63 ± 4%  -15.22%  (p=0.000 n=20+20)
LinkJuju   6.38 ± 6%  5.73 ± 6%  -10.16%  (p=0.000 n=20+19)
Change-Id: I7e101a0c80b8e673a3ba688295e6f80ea04e1cfb
Reviewed-on: https://go-review.googlesource.com/20099
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Just recognize "DATA" as a special pseudo op word in the assembler
directly.
Change-Id: I508e111fd71f561efa600ad69567a7089a57adb2
Reviewed-on: https://go-review.googlesource.com/20648
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Rob Pike <r@golang.org>
The only remaining place that generated ADATA Progs was the assembler.
Stop, and delete some now-dead code.
Passes toolstash -cmp.
Change-Id: I26578ff1b4868e98562b44f69d909c083e96f8d5
Reviewed-on: https://go-review.googlesource.com/20646
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Ian Lance Taylor <iant@golang.org>
In addition to reflect.Value.Call, exported methods can be invoked
by the Func value in the reflect.Method struct. This CL has the
compiler track what functions get access to a legitimate reflect.Method
struct by looking for interface calls to either of:
Method(int) reflect.Method
MethodByName(string) (reflect.Method, bool)
This is a little overly conservative. If a user implements a type
with one of these methods without using the underlying calls on
reflect.Type, the linker will assume the worst and include all
exported methods. But it's cheap.
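For example, code like the following (calling Method through the
reflect.Type interface) is what makes the linker keep all exported
methods of reachable types:

package main

import (
	"fmt"
	"reflect"
)

func dumpMethods(t reflect.Type) {
	for i := 0; i < t.NumMethod(); i++ {
		// Method(int) reflect.Method hands out Func values, so the linker
		// must assume any exported method may be called.
		m := t.Method(i)
		fmt.Println(m.Name, m.Type)
	}
}

func main() {
	dumpMethods(reflect.TypeOf(struct{}{}))
}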
No change to any of the binary sizes reported in cl/20483.
For #14740
Change-Id: Ie17786395d0453ce0384d8b240ecb043b7726137
Reviewed-on: https://go-review.googlesource.com/20489
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Today the linker keeps all methods of reachable types. This is
necessary if a program uses reflect.Value.Call. But while use of
reflection is widespread in Go for encoders and decoders, using
it to call a method is rare.
This CL looks for the use of reflect.Value.Call in a program, and
if it is absent, adopts a (reasonably conservative) method pruning
strategy as part of dead code elimination. Any method that is
directly called is kept, and any method that matches a used
interface's method signature is kept.
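For illustration (names invented; assuming reflect.Value.Call is not
otherwise linked into the program):

package main

import "fmt"

type T struct{}

// Kept: matches the method signature of fmt.Stringer, a used interface.
func (T) String() string { return "T" }

// Kept: called directly from main.
func (T) Direct() {}

// Candidate for pruning: exported, never called, and matches no used
// interface's method signature.
func (T) Unused() {}

func main() {
	var t T
	t.Direct()
	fmt.Println(t)
}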
Whether or not a method body is kept is determined by the relocation
from its receiver's *rtype to its *rtype. A small change in the
compiler marks these relocations as R_METHOD so they can be easily
collected and manipulated by the linker.
As a bonus, this technique removes the text segment of methods that
have been inlined. Looking at the output of building cmd/objdump with
-ldflags=-v=2 shows that inlined methods like
runtime.(*traceAllocBlockPtr).ptr are removed from the program.
Relatively little work is necessary to do this. Linking two
examples, jujud and cmd/objdump, shows no more than +2% link time.
Binaries that do not use reflect.Value.Call drop 4 - 20% in size:
addr2line: -793KB (18%)
asm: -346KB (8%)
cgo: -490KB (10%)
compile: -564KB (4%)
dist: -736KB (17%)
fix: -404KB (12%)
link: -328KB (7%)
nm: -827KB (19%)
objdump: -712KB (16%)
pack: -327KB (14%)
yacc: -350KB (10%)
Binaries that do use reflect.Value.Call see a modest size decrease
of 2 - 6% thanks to pruning of unexported methods:
api: -151KB (3%)
cover: -222KB (4%)
doc: -106KB (2.5%)
pprof: -314KB (3%)
trace: -357KB (4%)
vet: -187KB (2.7%)
jujud: -4.4MB (5.8%)
cmd/go: -384KB (3.4%)
The trivial Hello example program goes from 2MB to 1.68MB:
package main

import "fmt"

func main() {
	fmt.Println("Hello, 世界")
}
Method pruning also helps when building small binaries with
"-ldflags=-s -w". The above program goes from 1.43MB to 1.2MB.
Unfortunately the linker can only tell if reflect.Value.Call has been
statically linked, not if it is dynamically used. And while use is
rare, it is linked into a very common standard library package,
text/template. The result is that programs like cmd/go, which don't use
reflect.Value.Call, see limited benefit from this CL. If binary size
is important enough, it may be possible to address this in future work.
For #6853.
Change-Id: Iabe90e210e813b08c3f8fd605f841f0458973396
Reviewed-on: https://go-review.googlesource.com/20483
Reviewed-by: Russ Cox <rsc@golang.org>
Deleting the string merging pass makes the linker 30-35% faster
but makes jujud (using the github.com/davecheney/benchjuju snapshot) 2.5% larger.
Two optimizations bring the space overhead down to 0.6%.
First, change the default alignment for string data to 1 byte.
(It was previously defaulting to larger amounts, usually pointer width.)
Second, write out the type string for T (usually a bigger expression) as "*T"[1:],
so that the type strings for T and *T share storage.
Combined, these obtain the bulk of the benefit of string merging
at essentially no cost. The remaining benefit from string merging
is not worth the excessive cost, so delete it.
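The storage sharing relies on the fact that slicing a Go string reuses its
backing bytes, for example:

package main

import "fmt"

func main() {
	ptrName := "*mypkg.T" // one copy of the bytes in the binary
	name := ptrName[1:]   // "mypkg.T", sharing the same backing storage
	fmt.Println(ptrName, name)
}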
As penance for making the jujud binary 0.6% larger,
the next CL in this sequence trims the reflect functype
information enough to make the jujud binary overall 0.75% smaller
(that is, that CL has a net -1.35% effect).
For #6853.
Fixes #14648.
Change-Id: I3fdd74c85410930c36bb66160ca4174ed540fc6e
Reviewed-on: https://go-review.googlesource.com/20334
Reviewed-by: David Crawshaw <crawshaw@golang.org>
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
Run-TryBot: Russ Cox <rsc@golang.org>
No immediate reduction in the size of Addr.
Passes toolstash -cmp.
Change-Id: I78ea4c6e181b6e571ce70a5f1ae8158844eb197d
Reviewed-on: https://go-review.googlesource.com/20276
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
The tree's pretty inconsistent about single space vs double space
after a period in documentation. Make it consistently a single space,
per earlier decisions. This means contributors won't be confused by
misleading precedence.
This CL doesn't use go/doc to parse. It only addresses // comments.
It was generated with:
$ perl -i -npe 's,^(\s*// .+[a-z]\.) +([A-Z]),$1 $2,' $(git grep -l -E '^\s*//(.+\.) +([A-Z])')
$ go test go/doc -update
Change-Id: Iccdb99c37c797ef1f804a94b22ba5ee4b500c4f7
Reviewed-on: https://go-review.googlesource.com/20022
Reviewed-by: Rob Pike <r@golang.org>
Reviewed-by: Dave Day <djd@golang.org>
Run-TryBot: Brad Fitzpatrick <bradfitz@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Simplifies some code as ptrToThis was unreliable under dynamic
linking. Now the same type lookup is used regardless of execution
mode.
A synthetic relocation, R_USETYPE, is introduced to make sure the
linker includes *T on use of T, if *T is carrying methods.
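For example (illustrative), in a program like:

package main

type T struct{}

// M is carried by *T, not by T.
func (*T) M() {}

// The program only mentions T; R_USETYPE makes sure the linker also keeps
// *T, whose type information carries M.
var v T

func main() { _ = v }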
Changes the heap dump format. Anything reading the format needs to
look at the last bool of a type of an interface value to determine
if the type should be the pointer-to type.
Reduces binary size of cmd/go by 0.2%.
For #6853.
Change-Id: I79fcb19a97402bdb0193f3c7f6d94ddf061ee7b2
Reviewed-on: https://go-review.googlesource.com/19695
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: David Crawshaw <crawshaw@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
When compiling with -N, make sure we don't drop every instruction from
a block, even blocks which would otherwise be empty.
This helps keep line numbers around for debugging, particularly
for break and continue statements (which often compile
down to nothing).
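For example (illustrative), with -N the continue below should keep its line
number even though it generates no code of its own:

package main

func count() int {
	n := 0
	for i := 0; i < 10; i++ {
		if i%2 == 0 {
			continue // compiles down to nothing
		}
		n++
	}
	return n
}

func main() { _ = count() }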
Fixes #14379.
Change-Id: I33722c4f0dcd502f146fa48af262ba3a477c959a
Reviewed-on: https://go-review.googlesource.com/19854
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Minux Ma <minux@golang.org>
Semi-regular merge from tip to dev.ssa.
Two fixes:
1) Mark selectgo as not returning. This caused problems
because there are no VARKILL ops on the selectgo path,
causing things to be marked live that shouldn't be.
2) Tell the amd64 assembler that addressing modes like
name(SP)(AX*4) are ok.
Change-Id: I9ca81c76391b1a65cc47edc8610c70ff1a621913
The x86 backend automatically rewrites MOV $0, AX to
XOR AX, AX. That rewrite isn't ok when the flags register
is live across the MOV. Keep track of which moves care
about preserving flags, then disable this rewrite for them.
On x86, Prog.Mark was being used to hold the length of the
instruction. We already store that in Prog.Isize, so no
need to store it in Prog.Mark also. This frees up Prog.Mark
to hold a bitmask on x86 just like all the other architectures.
Update #12405
Change-Id: Ibad8a8f41fc6222bec1e4904221887d3cc3ca029
Reviewed-on: https://go-review.googlesource.com/18861
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Russ Cox <rsc@golang.org>
Add test for assembly errors, to verify fix.
Make sure invalid instruction errors are printed just once
(was printing them once per span iteration, so typically twice).
Fixes #13282.
Change-Id: Id5f66f80a80b3bc4832e00084b0a91f1afec7f8f
Reviewed-on: https://go-review.googlesource.com/18858
Reviewed-by: Rob Pike <r@golang.org>
This will allow the compiler to crunch Prog lists down to code as each
function is compiled, instead of waiting until the end, which should
reduce the working set of the compiler. But not until Go 1.7.
This also makes it easier to write some machine code output tests
for the assembler, which is why it's being done now.
For #13822.
Change-Id: I0811123bc6e5717cebb8948f9cea18e1b9baf6f7
Reviewed-on: https://go-review.googlesource.com/18311
Reviewed-by: Ian Lance Taylor <iant@golang.org>
Consider this code:
func f(*int)

func g() {
	p := new(int)
	f(p)
}
where f is an assembly function.
In general liveness analysis assumes that during the call to f, p is dead
in this frame. If f has retained p, p will be found alive in f's frame and keep
the new(int) from being garbage collected. This is all correct and works.
We use the Go func declaration for f to give the assembly function
liveness information (the arguments are assumed live for the entire call).
Now consider this code:
func h1() {
	p := new(int)
	syscall.Syscall(1, 2, 3, uintptr(unsafe.Pointer(p)))
}
Here syscall.Syscall is taking the place of f, but because its arguments
are uintptr, the liveness analysis and the garbage collector ignore them.
Since p is no longer live in h1 once the call starts, if the garbage collector
scans the stack while the system call is blocked, it will find no reference
to the new(int) and reclaim it. If the kernel is going to write to *p once
the call finishes, reclaiming the memory is a mistake.
We can't change the arguments or the liveness information for
syscall.Syscall itself, both for compatibility and because sometimes the
arguments really are integers, and the garbage collector will get quite upset
if it finds an integer where it expects a pointer. The problem is that
these arguments are fundamentally untyped.
The solution we have taken in the syscall package's wrappers in past
releases is to insert a call to a dummy function named "use", to make
it look like the argument is live during the call to syscall.Syscall:
func h2() {
	p := new(int)
	syscall.Syscall(1, 2, 3, uintptr(unsafe.Pointer(p)))
	use(unsafe.Pointer(p))
}
Keeping p alive during the call means that if the garbage collector
scans the stack during the system call now, it will find the reference to p.
Unfortunately, this approach is not available to users outside syscall,
because 'use' is unexported, and people also have to realize they need
to use it and do so. There is much existing code using syscall.Syscall
without a 'use'-like function. That code will fail very occasionally in
mysterious ways (see #13372).
This CL fixes all that existing code by making the compiler do the right
thing automatically, without any code modifications. That is, it takes h1
above, which is incorrect code today, and makes it correct code.
Specifically, if the compiler sees a foreign func definition (one
without a body) that has uintptr arguments, it marks those arguments
as "unsafe uintptrs". If it later sees the function being called
with uintptr(unsafe.Pointer(x)) as an argument, it arranges to mark x
as having escaped, and it makes sure to hold x in a live temporary
variable until the call returns, so that the garbage collector cannot
reclaim whatever heap memory x points to.
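For example (illustrative; mysyscall and h3 are made-up names, and an
assembly implementation of mysyscall is assumed to exist in the package):

package sketch

import "unsafe"

// mysyscall has no Go body, so the compiler marks its uintptr arguments
// as "unsafe uintptrs".
func mysyscall(trap, arg uintptr) uintptr

func h3() {
	p := new(int)
	// Because the argument has the form uintptr(unsafe.Pointer(p)), the
	// compiler keeps p alive in a temporary until mysyscall returns.
	mysyscall(1, uintptr(unsafe.Pointer(p)))
}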
For now I am leaving the explicit calls to use in package syscall,
but they can be removed early in a future cycle (likely Go 1.7).
The rule has no effect on escape analysis, only on liveness analysis.
Fixes #13372.
Change-Id: I2addb83f70d08db08c64d394f9d06ff0a063c500
Reviewed-on: https://go-review.googlesource.com/18584
Reviewed-by: Ian Lance Taylor <iant@golang.org>
This requires changing the TLS access code to match the patterns documented in
the ABI documentation, or the system linker will "optimize" it into ridiculousness.
With this change, -buildmode=pie works, although as it is tested in testshared,
the tests are not run yet.
Change-Id: I1efa6687af0a5b8db3385b10f6542a49056b2eb3
Reviewed-on: https://go-review.googlesource.com/15971
Reviewed-by: Russ Cox <rsc@golang.org>
The PowerPC ISA does not have a PC-relative load instruction, which poses
obvious challenges when generating position-independent code. The way the ELFv2
ABI addresses this is to specify that r2 holds a per-"module" (shared
library or executable) TOC pointer. Maintaining this pointer requires
cooperation between codegen and the system linker:
* Non-leaf functions leave space on the stack at r1+24 to save the TOC pointer.
* A call to a function that *might* have to go via a PLT stub must be followed
by a nop instruction that the system linker can replace with "ld r2, 24(r1)"
to restore the TOC pointer (only when dynamically linking Go code).
* When calling a function via a function pointer, the address of the function
must be in r12, and the first couple of instructions (the "global entry
point") of the called function use this to derive the address of the TOC
for the module it is in.
* When calling a function that is implemented in the same module, the system
linker adjusts the call to skip over the instructions mentioned above (the
"local entry point"), assuming that r2 is already correctly set.
So this changeset adds the global entry point instructions, sets the metadata so
the system linker knows where the local entry point is, inserts code to save the
TOC pointer at 24(r1), adds a nop after any call not known to be local and copes
with the odd non-local code transfer in the runtime (e.g. the stuff around
jmpdefer). It does not actually compile PIC yet.
Change-Id: I7522e22bdfd2f891745a900c60254fe9e372c854
Reviewed-on: https://go-review.googlesource.com/15967
Reviewed-by: Russ Cox <rsc@golang.org>
The larger stack frames cause the nosplit stack to overflow, so the next
change increases the stackguard.
Change-Id: Ib2b4f24f0649eb1d13e3a58d265f13d1b6cc9bf9
Reviewed-on: https://go-review.googlesource.com/15964
Reviewed-by: Russ Cox <rsc@golang.org>
MIPS64 has 32 general-purpose 64-bit integer registers (R0-R31) and 32
64-bit floating-point registers (F0-F31). Instructions are fixed-width,
32 bits wide, and all take the standard 1-, 2-, or 3-operand forms.
MIPS64-specific relocations are added. For this reason, the test data of
cmd/newlink is regenerated.
No other changes are made to portable structures.
Branch delay slots are currently filled with NOP instructions. The function
for instruction scheduling (trying to fill the delay slot with a useful
instruction) is implemented but disabled for now.
Change-Id: Ic364999c7a33245260c1381fc26a2fa8972d38b3
Reviewed-on: https://go-review.googlesource.com/14442
Reviewed-by: Minux Ma <minux@golang.org>