cmd/compile: stack allocate backing stores during append
We can already stack allocate the backing store during append if the
resulting backing store doesn't escape. See CL 664299.
This CL enables us to often stack allocate the backing store during
append *even if* the result escapes. Typically, for code like:
func f(n int) []int {
	var r []int
	for i := range n {
		r = append(r, i)
	}
	return r
}
the backing store for r escapes, but only by returning it.
Could we operate with r on the stack for most of its lifetime,
and only move it to the heap at the return point?
The current implementation of append will need to do an allocation
each time it calls growslice. This will happen on the 1st, 2nd, 4th,
8th, etc. append calls. The allocations done by all but the
last growslice call will then immediately be garbage.
We'd like to avoid doing some of those intermediate allocations
if possible. We rewrite the above code by introducing a move2heap
operation:
func f(n int) []int {
	var r []int
	for i := range n {
		r = append(r, i)
	}
	r = move2heap(r)
	return r
}
The move2heap runtime function does the following:
move2heap(r):
	If r is already backed by heap storage, return r.
	Otherwise, copy r to the heap and return the copy.
Now we can treat the backing store of r allocated at the
append site as not escaping. Previous stack allocation
optimizations now apply, which can use a fixed-size
stack-allocated backing store for r when appending.
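In user-level terms, the rewritten function behaves roughly like the
following hand-written version (a sketch only; the actual transformation
happens inside the compiler, and the 32-element buffer size is just an
assumption for illustration):

func fManual(n int) []int {
	var buf [32]int // fixed-size backing store that can live on the stack
	r := buf[:0]
	for i := range n {
		r = append(r, i) // moves to the heap via growslice only once n > 32
	}
	// move2heap: if r still points into buf, copy it to the heap before returning.
	if n <= 32 {
		heapCopy := make([]int, len(r))
		copy(heapCopy, r)
		r = heapCopy
	}
	return r
}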
See the description in cmd/compile/internal/slice/slice.go
for how we ensure that this optimization is safe.
Change-Id: I81f36e58bade2241d07f67967d8d547fff5302b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/707755
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2025-09-12 14:43:19 -07:00
// asmcheck
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and allocating up to ~3x less memory with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is to automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
	var s []int
	for _, x := range input {
		s = append(s, g(x)) // s cannot be aliased here
		if h(x) {
			s = append(s, x) // s cannot be aliased here
		}
	}
	return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
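Conceptually, the NoAlias variant is the ordinary grow followed by an
immediate free of the now-dead old backing store. A rough sketch (not
the actual runtime source; freegc's exact signature from CL 673695 is
an assumption here):

func growsliceNoAlias(oldPtr unsafe.Pointer, newLen, oldCap, num int, et *_type) slice {
	s := growslice(oldPtr, newLen, oldCap, num, et) // allocate new storage and copy as usual
	if oldPtr != nil {
		freegc(oldPtr) // old storage is provably unreachable; reclaim it immediately
	}
	return s
}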
The high-level approach is to step through the IR starting from a
slice declaration and look for any operations that either alias the
slice or might do so, treating any IR construct we don't
specifically handle as a potential alias (and therefore
conservatively falling back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
	var s []int
	for i := range 10 {
		s = append(s, i) // aliased due to next line
		alias = s
	}
}
For nested loops, we analyze the nesting appropriately so that,
for example, this append is still proven non-aliased in the inner
loop even though the slice is aliased in the outer loop:
func f3() {
	for range 10 {
		var s []int
		for i := range 10 {
			s = append(s, i) // append using non-aliased slice
		}
		alias = s
	}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
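For a flavor of what disqualifies an append, copying the slice header
anywhere the analysis can see is enough, since the copy could still
reach the old backing store after a grow. A small example in the
spirit of test/escape_alias.go (the comments are informal, not the
exact compiler diagnostics):

func f4() []int {
	var s []int
	s = append(s, 1) // provably unaliased so far: growsliceNoAlias may be used
	t := s           // copying the slice header aliases the backing store
	_ = t
	s = append(s, 2) // must use ordinary growslice; t may still reach the old memory
	return s
}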
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases, for example to 5, we see a
~20% speed improvement and a ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
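The benchmark is roughly of the following shape (the names, the exact
element type, and the sink variable are assumptions; the real benchmark
lives in the runtime package and uses the usual testing import):

type elem64 struct{ a, b, c, d, e, f, g, h int64 } // 64-byte element

var sink []elem64

func benchmarkAppend64Bytes(b *testing.B, count int) {
	for range b.N {
		var s []elem64
		for i := 0; i < count; i++ {
			s = append(s, elem64{})
		}
		sink = s // keep the result live so the backing store escapes
	}
}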
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to be on
the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. But once the second growth
happens, the older heap memory is automatically freed by freegc,
and we start to see memory reductions and speed improvements:
a tiny gain (close to a wash, or perhaps noise) by the second
growth before count=10, building up to ~2x faster with ~68% fewer
allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and gets
runtime.freegc working automatically with iterators such as those
used by slices.Collect, which then becomes able to automatically
free the intermediate memory from its repeated appends (earlier in
this work, that required a temporary hand edit to the slices package).
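For reference, slices.Collect builds its result with repeated appends,
roughly like this simplified version of the standard library code, which
is why inlining lets the alias analysis see the appends and freegc
reclaim the intermediate backing stores:

func collect[E any](seq iter.Seq[E]) []E {
	var s []E
	for v := range seq {
		s = append(s, v) // unaliased appends; old backing stores can be freed eagerly
	}
	return s
}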
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
//go:build !goexperiment.runtimefreegc
// Copyright 2025 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package codegen
func Append1(n int) []int {
	var r []int
	for i := range n {
		// amd64:`.*growslice\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoCapNoScan\b`
cmd/compile: stack allocate backing stores during append
We can already stack allocate the backing store during append if the
resulting backing store doesn't escape. See CL 664299.
This CL enables us to often stack allocate the backing store during
append *even if* the result escapes. Typically, for code like:
func f(n int) []int {
var r []int
for i := range n {
r = append(r, i)
}
return r
}
the backing store for r escapes, but only by returning it.
Could we operate with r on the stack for most of its lifeime,
and only move it to the heap at the return point?
The current implementation of append will need to do an allocation
each time it calls growslice. This will happen on the 1st, 2nd, 4th,
8th, etc. append calls. The allocations done by all but the
last growslice call will then immediately be garbage.
We'd like to avoid doing some of those intermediate allocations
if possible. We rewrite the above code by introducing a move2heap
operation:
func f(n int) []int {
var r []int
for i := range n {
r = append(r, i)
}
r = move2heap(r)
return r
}
Using the move2heap runtime function, which does:
move2heap(r):
If r is already backed by heap storage, return r.
Otherwise, copy r to the heap and return the copy.
Now we can treat the backing store of r allocated at the
append site as not escaping. Previous stack allocation
optimizations now apply, which can use a fixed-size
stack-allocated backing store for r when appending.
See the description in cmd/compile/internal/slice/slice.go
for how we ensure that this optimization is safe.
Change-Id: I81f36e58bade2241d07f67967d8d547fff5302b8
Reviewed-on: https://go-review.googlesource.com/c/go/+/707755
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2025-09-12 14:43:19 -07:00
|
|
|
return r
|
|
|
|
|
}
func Append2(n int) (r []int) {
	for i := range n {
		// amd64:`.*growslice\b`
		r = append(r, i)
	}
|
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and up to factor of ~3 memory reduction with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is that we step through the IR starting
from a slice declaration and look for any operations that either alias
the slice or might do so. Any IR construct we don't specifically handle
is treated as a potential alias, so we conservatively fall back to
treating the slice as aliased whenever we encounter something we don't
understand.
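The following toy sketch is not the compiler's IR and not the actual
(*aliasAnalysis).analyze code; it only illustrates that conservative
shape, where a small whitelist of understood uses is allowed and
everything else defaults to "may alias":

package main

import "fmt"

// use describes one IR-level use of the tracked slice variable.
type use struct {
    op string // e.g. "append-to-self", "index-write", "store-elsewhere"
}

// possiblyAliased reports whether the slice must be treated as aliased.
// Only a small whitelist of uses is understood; anything else falls
// back to "aliased", which costs us only the optimization, never
// correctness.
func possiblyAliased(uses []use) bool {
    for _, u := range uses {
        switch u.op {
        case "append-to-self", "index-write", "len", "cap":
            // understood uses that cannot create a second reference
        default:
            return true // unknown construct: assume it may alias
        }
    }
    return false
}

func main() {
    clean := []use{{"append-to-self"}, {"len"}}
    dirty := []use{{"append-to-self"}, {"store-elsewhere"}}
    fmt.Println(possiblyAliased(clean), possiblyAliased(dirty)) // false true
}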
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that, for
example, this append is still proven non-aliased in the inner loop even
though the slice is aliased in the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases, for example to 5, we are at
a ~20% speed improvement and a ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM whose CPU gcc reports as cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
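For orientation, a benchmark of roughly this shape reproduces the
pattern being measured; the names, element layout, and use of a sink
are assumptions, and the actual benchmark in the runtime package is not
reproduced here:

package runtime_test

import "testing"

// elem64 is 64 bytes, matching the element size described above.
type elem64 struct{ a, b, c, d, e, f, g, h int64 }

var sink64 []elem64

func benchmarkAppend64Bytes(b *testing.B, count int) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        var s []elem64
        for j := 0; j < count; j++ {
            s = append(s, elem64{a: int64(j)})
        }
        sink64 = s // keep the final slice live so it escapes, as in the target pattern
    }
}

func BenchmarkAppend64BytesCount5(b *testing.B) { benchmarkAppend64Bytes(b, 5) }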
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25, which allows some <= 32 byte dynamically-sized slices to live
on the stack, so this CL is neutral for <= 32 bytes. Once the 5th
append occurs at count=5, a grow happens via the runtime and
heap-allocates as normal, but freegc does not yet have anything to
free, so we see a small ~1.4ns penalty reported there. Once the second
growth happens, the old heap memory is automatically freed by freegc,
and we start to see memory reductions and speed improvements: first a
tiny speed improvement (close to a wash, or maybe noise) at the second
growth before count=10, building up to ~2x faster with ~68% fewer
allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and gets
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which then becomes able to automatically free the
intermediate memory from its repeated appends. (Earlier in this work,
that required a temporary hand edit to the slices package.)
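For context, caller code along these lines is the kind that benefits
once that lands; the example is illustrative and not part of this CL:

package main

import (
    "fmt"
    "iter"
    "slices"
)

// firstN yields 0..n-1. slices.Collect appends each yielded value into
// a growing slice, so the intermediate backing arrays it discards on
// each growth are exactly what the freegc-based path can reclaim eagerly.
func firstN(n int) iter.Seq[int] {
    return func(yield func(int) bool) {
        for i := 0; i < n; i++ {
            if !yield(i) {
                return
            }
        }
    }
}

func main() {
    s := slices.Collect(firstN(1000))
    fmt.Println(len(s), cap(s))
}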
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
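Concretely (illustrative code, not taken from this CL), an append loop
over a pointer-free element type like the first function below is
eligible for the NoAlias path, while one whose element type contains
pointers, like the second, still takes the ordinary growslice path for
now:

package main

func eligible(n int) []int {
    var s []int // element type contains no pointers
    for i := 0; i < n; i++ {
        s = append(s, i)
    }
    return s
}

func notYetEligible(n int) []*int {
    var s []*int // element type contains pointers
    for i := 0; i < n; i++ {
        v := i
        s = append(s, &v)
    }
    return s
}

func main() {
    _ = eligible(3)
    _ = notYetEligible(3)
}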
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
// amd64:`.*moveSliceNoCapNoScan\b`
return
}
func Append3(n int) (r []int) {
for i := range n {
// amd64:`.*growslice\b`
r = append(r, i)
}
// amd64:`.*moveSliceNoCapNoScan\b`
return r
}
func Append4(n int) []int {
var r []int
for i := range n {
// amd64:`.*growsliceBuf\b`
r = append(r, i)
}
println(cap(r))
|
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and up to factor of ~3 memory reduction with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is we step through the IR starting
from a slice declaration and look for any operations that either
alias the slice or might do so, and treat any IR construct we
don't specifically handle as a potential alias (and therefore
conservatively fall back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that
for example this append is still proven as non-aliased in the
inner loop even though it aliased for the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases for example to 5, we are at
a ~20% speed improvement and ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to be on
the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. Once the second growth happens,
the older heap memory is automatically freed by freegc, and we start
to see benefits in both memory and speed: a tiny speed improvement
(close to a wash, or maybe noise) around the second growth before
count=10, building up to ~2x faster with ~68% fewer allocated bytes
reported.
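To make those growth points concrete, here is a small illustrative
function; the capacities noted in the comments are the typical amd64
values implied by the table above, and the exact growth points can vary
by element size and Go version:
func buildInts(n int) []int {
	var s []int
	for i := 0; i < n; i++ {
		// i = 0..3:  the elements fit in the <=32-byte stack buffer from
		//            CL 664299, so append makes no runtime call.
		// i = 4:     first growslice call; the backing array moves to the
		//            heap (64 bytes) and there is nothing to free yet.
		// i = 8, 16, ...: later growslice calls; with growsliceNoAlias the
		//            previous heap backing array is freed after the copy.
		s = append(s, i)
	}
	return s
}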
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and gets
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which can then automatically free the intermediate
memory from its repeated appends (something that earlier in this work
required a temporary hand edit to the slices package).
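For example, a straightforward slices.Collect call such as the
following builds its result through repeated internal appends, and
those intermediate backing stores are what the follow-up CL lets
freegc reclaim (illustrative usage only; the helper name is made up):
import "slices"
// collectRange yields 0..n-1 through an iterator; slices.Collect builds the
// result with repeated internal appends, so the intermediate backing stores
// are candidates for automatic freeing.
func collectRange(n int) []int {
	return slices.Collect(func(yield func(int) bool) {
		for i := range n {
			if !yield(i) {
				return
			}
		}
	})
}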
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
	// amd64:`.*moveSliceNoScan\b`
	return r
}

func Append5(n int) []int {
	var r []int
	for i := range n {
		// amd64:`.*growsliceBuf\b`
		r = append(r, i)
	}
	useSlice(r)
	// amd64:`.*moveSliceNoScan\b`
	return r
}

func Append5b(n int) []int {
	var r []int
	useSlice(r)
	for i := range n {
		// amd64:`.*growsliceBuf\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoScan\b`
	return r
}

func Append6(n int) []*int {
	var r []*int
	for i := range n {
		// amd64:`.*growslice\b`
		r = append(r, &i)
	}
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and up to factor of ~3 memory reduction with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is we step through the IR starting
from a slice declaration and look for any operations that either
alias the slice or might do so, and treat any IR construct we
don't specifically handle as a potential alias (and therefore
conservatively fall back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that
for example this append is still proven as non-aliased in the
inner loop even though it aliased for the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases for example to 5, we are at
a ~20% speed improvement and ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to be on
the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. But once the second growth
happens, the older heap memory is now automatically freed by freegc,
so we start to see some benefit in memory reductions and speed
improvements, starting at a tiny speed improvement (close to a wash,
or maybe noise) by the second growth before count=10, and building up to
~2x faster with ~68% fewer allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
that there are some opportunities here.
The immediately following CL 712422 tackles inlining and is able to get
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which becomes able to automatically free the
intermediate memory from its repeated appends (earlier in this work,
that required a temporary hand edit to the slices package).
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
	// amd64:`.*moveSliceNoCap\b`
	return r
}

func Append7(n int) []*int {
	var r []*int
	for i := range n {
		// amd64:`.*growsliceBuf\b`
		r = append(r, new(int))
	}
	println(cap(r))
	// amd64:`.*moveSlice\b`
	return r
}

func Append8(n int, p *[]int) {
	var r []int
	for i := range n {
		// amd64:`.*growslice\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoCapNoScan\b`
	*p = r
}
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and with up to a factor of ~3 reduction in allocated memory
with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is we step through the IR starting
from a slice declaration and look for any operations that either
alias the slice or might do so, and treat any IR construct we
don't specifically handle as a potential alias (and therefore
conservatively fall back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that
for example this append is still proven as non-aliased in the
inner loop even though it aliased for the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases for example to 5, we are at
a ~20% speed improvement and ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to be on
the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. But once the second growth
happens, the older heap memory is now automatically freed by freegc,
so we start to see some benefit in memory reductions and speed
improvements, starting at a tiny speed improvement (close to a wash,
or maybe noise) by the second growth before count=10, and building up to
~2x faster with ~68% fewer allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and is able to get
runtime.freegc working automatically with iterators such as used by
slices.Collect, which becomes able to automatically free the
intermediate memory from its repeated appends (which earlier
in this work required a temporary hand edit to the slices package).
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
|
|
|
func Append8b(n int, p *[]int) {
var r []int
// amd64:`.*moveSliceNoCapNoScan\b`
*p = r
for i := range n {
// amd64:`.*growslice\b`
r = append(r, i)
}
}
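// Append9 builds r in a loop and returns it: the directives below expect a
// growslice call at the in-loop append and a moveSliceNoCapNoScan call at the
// return, presumably the point where the backing store is moved to the heap.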
func Append9(n int) []int {
var r []int
for i := range n {
// amd64:`.*growslice\b`
r = append(r, i)
}
println(len(r))
// amd64:`.*moveSliceNoCapNoScan\b`
return r
}
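// Append10 appears to follow the same pattern as Append9, with an element read
// (r[3]) after the loop.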
func Append10(n int) []int {
var r []int
for i := range n {
// amd64:`.*growslice\b`
r = append(r, i)
}
println(r[3])
|
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and up to factor of ~3 memory reduction with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is we step through the IR starting
from a slice declaration and look for any operations that either
alias the slice or might do so, and treat any IR construct we
don't specifically handle as a potential alias (and therefore
conservatively fall back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that
for example this append is still proven as non-aliased in the
inner loop even though it aliased for the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases for example to 5, we are at
a ~20% speed improvement and ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
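For reference, a minimal sketch of what such a benchmark could look
like (the real benchmark in the runtime package may be structured
differently; the element type, counts, and sink variable here are
assumptions, and it needs the testing and fmt imports):
type elem64 struct {
	a, b, c, d, e, f, g, h int64 // 8 x 8 bytes = 64 bytes per element
}
var sink []elem64
func BenchmarkAppend64Bytes(b *testing.B) {
	for _, count := range []int{1, 2, 5, 10, 100, 400} {
		b.Run(fmt.Sprintf("count=%d", count), func(b *testing.B) {
			b.ReportAllocs()
			for i := 0; i < b.N; i++ {
				var s []elem64
				for j := 0; j < count; j++ {
					s = append(s, elem64{}) // intermediate backing arrays can be freed eagerly
				}
				sink = s // keep the final slice alive so the work is not optimized away
			}
		})
	}
}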
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25, which allows some <= 32 byte dynamically-sized slices to live
on the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. Once the second growth happens,
the older heap memory is automatically freed by freegc, and we start
to see benefits in both memory and speed: a tiny improvement (close to
a wash, or maybe noise) by the second growth before count=10, building
up to ~2x faster with ~68% fewer allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and gets
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which then automatically frees the intermediate
memory from its repeated appends (something that earlier in this work
required a temporary hand edit to the slices package).
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
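Concretely (illustrative only, not an exhaustive list), that means
slices like the first one below are eligible, while slices whose
element type contains pointers still use plain growslice for now:
var ints []int    // no pointers in the element type: growsliceNoAlias applies
var ptrs []*int   // element type contains a pointer: waits on CL 698515
var strs []string // string headers contain a pointer, so this also waits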
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
// amd64:`.*moveSliceNoCapNoScan\b`
return r
}
func Append11(n int) []int {
var r []int
for i := range n {
// amd64:`.*growsliceBuf\b`
r = append(r, i)
}
r = r[3:5]
// amd64:`.*moveSliceNoScan\b`
return r
}
func Append12(n int) []int {
var r []int
r = nil
for i := range n {
// amd64:`.*growslice\b`
r = append(r, i)
}
|
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and up to factor of ~3 memory reduction with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is we step through the IR starting
from a slice declaration and look for any operations that either
alias the slice or might do so, and treat any IR construct we
don't specifically handle as a potential alias (and therefore
conservatively fall back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that
for example this append is still proven as non-aliased in the
inner loop even though it aliased for the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases for example to 5, we are at
a ~20% speed improvement and ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to be on
the stack, so this CL is neutral for <= 32 bytes. Once the 5th append
occurs at count=5, a grow happens via the runtime and heap allocates
as normal, but freegc does not yet have anything to free, so we see
a small ~1.4ns penalty reported there. But once the second growth
happens, the older heap memory is now automatically freed by freegc,
so we start to see some benefit in memory reductions and speed
improvements, starting at a tiny speed improvement (close to a wash,
or maybe noise) by the second growth before count=10, and building up to
~2x faster with ~68% fewer allocated bytes reported.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and gets
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which then becomes able to automatically free the
intermediate memory from its repeated appends (earlier in this work,
that required a temporary hand edit to the slices package).
For now, we use the NoAlias version only for element types without
pointers, while waiting on additional runtime support in CL 698515.
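(As an illustrative sketch of that iterator case, not code from this
CL: slices.Collect is essentially a loop of appends over a sequence,
so once inlining exposes those appends, the intermediate backing
arrays discarded on each growth become candidates for the same
automatic free. The collectEvens helper below is hypothetical.)
// Hypothetical example; assumes "iter" and "slices" are imported.
// slices.Collect appends each yielded value into a growing slice, so
// the intermediate backing arrays discarded on growth are the kind of
// garbage runtime.freegc can reclaim eagerly (int has no pointers, so
// the pointer-free restriction above is satisfied).
func collectEvens(n int) []int {
	var seq iter.Seq[int] = func(yield func(int) bool) {
		for i := 0; i < n; i += 2 {
			if !yield(i) {
				return
			}
		}
	}
	return slices.Collect(seq) // final slice escapes; intermediates may be freed
}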
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
	// amd64:`.*moveSliceNoCapNoScan\b`
	return r
}
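// Append13 zeroes r via a multiple assignment before the loop; per the
// asmcheck directives below, the in-loop append is still expected to
// call growslice, and the escaping result is still expected to be
// moved to the heap (moveSliceNoCapNoScan) at the return.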
func Append13(n int) []int {
	var r []int
	r, r = nil, nil
	for i := range n {
		// amd64:`.*growslice\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoCapNoScan\b`
	return r
}
func Append14(n int) []int {
	var r []int
	r = []int{3, 4, 5}
	for i := range n {
		// amd64:`.*growsliceBuf\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoScan\b`
	return r
}
func Append15(n int) []int {
	r := []int{3, 4, 5}
	for i := range n {
		// amd64:`.*growsliceBuf\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoScan\b`
	return r
}
func Append16(r []int, n int) []int {
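	// Expected codegen per the asmcheck patterns below: r's backing
	// store comes from the caller here, so the appends use the plain
	// growslice path, and the result is moved via moveSliceNoCapNoScan
	// at the return.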
	for i := range n {
		// amd64:`.*growslice\b`
		r = append(r, i)
	}
	// amd64:`.*moveSliceNoCapNoScan\b`
	return r
}
func Append17(n int) []int {
	var r []int
	for i := range n {
cmd/compile: introduce alias analysis and automatically free non-aliased memory after growslice
This CL is part of a set of CLs that attempt to reduce how much work the
GC must do. See the design in https://go.dev/design/74299-runtime-freegc
This CL updates the compiler to examine append calls to prove
whether or not the slice is aliased.
If proven unaliased, the compiler automatically inserts a call to a new
runtime function introduced with this CL, runtime.growsliceNoAlias,
which frees the old backing memory immediately after slice growth is
complete and the old storage is logically dead.
Two append benchmarks below show promising results, executing up to
~2x faster and up to factor of ~3 memory reduction with this CL.
The approach works with multiple append calls for the same slice,
including inside loops, and the final slice memory can be escaping,
such as in a classic pattern of returning a slice from a function
after the slice is built. (The final slice memory is never freed with
this CL, though we have other work that tackles that.)
An example target for this CL is we automatically free the
intermediate memory for the appends in the loop in this function:
func f1(input []int) []int {
var s []int
for _, x := range input {
s = append(s, g(x)) // s cannot be aliased here
if h(x) {
s = append(s, x) // s cannot be aliased here
}
}
return s // slice escapes at end
}
In this case, the compiler and the runtime collaborate so that
the heap allocated backing memory for s is automatically freed after
a successful grow. (For the first grow, there is nothing to free,
but for the second and subsequent growths, the old heap memory is
freed automatically.)
The new runtime.growsliceNoAlias is primarily implemented
by calling runtime.freegc, which we introduced in CL 673695.
The high-level approach here is we step through the IR starting
from a slice declaration and look for any operations that either
alias the slice or might do so, and treat any IR construct we
don't specifically handle as a potential alias (and therefore
conservatively fall back to treating the slice as aliased when
encountering something not understood).
For loops, some additional care is required. We arrange the analysis
so that an alias in the body of a loop causes all the appends in that
same loop body to be marked aliased, even if the aliasing occurs after
the append in the IR:
func f2() {
var s []int
for i := range 10 {
s = append(s, i) // aliased due to next line
alias = s
}
}
For nested loops, we analyse the nesting appropriately so that
for example this append is still proven as non-aliased in the
inner loop even though it aliased for the outer loop:
func f3() {
for range 10 {
var s []int
for i := range 10 {
s = append(s, i) // append using non-aliased slice
}
alias = s
}
}
A good starting point is the beginning of the test/escape_alias.go file,
which starts with ~10 introductory examples with brief comments that
attempt to illustrate the high-level approach.
For more details, see the new .../internal/escape/alias.go file,
especially the (*aliasAnalysis).analyze method.
In the first benchmark, an append in a loop builds up a slice from
nothing, where the slice elements are each 64 bytes. In the table below,
'count' is the number of appends. With 1 append, there is no opportunity
for this CL to free memory. Once there are 2 appends, the growth from
1 element to 2 elements means the compiler-inserted growsliceNoAlias
frees the 1-element array, and we see a ~33% reduction in memory use
and a small reported speed improvement.
As the number of appends increases for example to 5, we are at
a ~20% speed improvement and ~45% memory reduction, and so on until
we reach ~40% faster and ~50% less memory allocated at the end of
the table.
There can be variation in the reported numbers based on -randlayout, so
this table is for 30 different values of -randlayout with a total
n=150. (Even so, there is still some variation, so we probably should
not read too much into small changes.) This is with GOAMD64=v3 on
a VM that gcc reports is cascadelake.
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
Append64Bytes/count=1-4 31.09n ± 2% 31.69n ± 1% +1.95% (n=150)
Append64Bytes/count=2-4 73.31n ± 1% 70.27n ± 0% -4.15% (n=150)
Append64Bytes/count=3-4 142.7n ± 1% 124.6n ± 1% -12.68% (n=150)
Append64Bytes/count=4-4 149.6n ± 1% 127.7n ± 0% -14.64% (n=150)
Append64Bytes/count=5-4 277.1n ± 1% 213.6n ± 0% -22.90% (n=150)
Append64Bytes/count=6-4 280.7n ± 1% 216.5n ± 1% -22.87% (n=150)
Append64Bytes/count=10-4 544.3n ± 1% 386.6n ± 0% -28.97% (n=150)
Append64Bytes/count=20-4 1058.5n ± 1% 715.6n ± 1% -32.39% (n=150)
Append64Bytes/count=50-4 2.121µ ± 1% 1.404µ ± 1% -33.83% (n=150)
Append64Bytes/count=100-4 4.152µ ± 1% 2.736µ ± 1% -34.11% (n=150)
Append64Bytes/count=200-4 7.753µ ± 1% 4.882µ ± 1% -37.03% (n=150)
Append64Bytes/count=400-4 15.163µ ± 2% 9.273µ ± 1% -38.84% (n=150)
geomean 601.8n 455.0n -24.39%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
Append64Bytes/count=1-4 64.00 ± 0% 64.00 ± 0% ~ (n=150)
Append64Bytes/count=2-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
Append64Bytes/count=3-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=4-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
Append64Bytes/count=5-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=6-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
Append64Bytes/count=10-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
Append64Bytes/count=20-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
Append64Bytes/count=50-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
Append64Bytes/count=100-4 15.938Ki ± 0% 8.021Ki ± 0% -49.67% (n=150)
Append64Bytes/count=200-4 31.94Ki ± 0% 16.08Ki ± 0% -49.64% (n=150)
Append64Bytes/count=400-4 63.94Ki ± 0% 32.33Ki ± 0% -49.44% (n=150)
geomean 1.991Ki 1.124Ki -43.54%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
Append64Bytes/count=1-4 1.000 ± 0% 1.000 ± 0% ~ (n=150)
Append64Bytes/count=2-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
Append64Bytes/count=3-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=4-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
Append64Bytes/count=5-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=6-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
Append64Bytes/count=10-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
Append64Bytes/count=20-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
Append64Bytes/count=50-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
Append64Bytes/count=100-4 8.000 ± 0% 1.000 ± 0% -87.50% (n=150)
Append64Bytes/count=200-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
Append64Bytes/count=400-4 10.000 ± 0% 1.000 ± 0% -90.00% (n=150)
geomean 4.331 1.000 -76.91%
The second benchmark is similar, but instead uses an 8-byte integer
for the slice element. The first 4 appends in the loop never call into
the runtime, thanks to the excellent CL 664299 introduced by Keith in
Go 1.25 that allows some <= 32 byte dynamically-sized slices to live
on the stack, so this CL is neutral for <= 32 bytes. Once the 5th
append occurs at count=5, a grow happens via the runtime and
heap-allocates as normal, but freegc does not yet have anything to
free, so we see a small ~1.4ns penalty reported there. Once the second
growth happens, though, the older heap memory is automatically freed
by freegc, so we start to see some benefit in memory reduction and
speed, beginning with a tiny speed improvement (close to a wash, or
maybe noise) after the second growth before count=10, and building up
to ~2x faster with ~68% fewer allocated bytes reported.
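For the small counts, the shape being measured is roughly this
(hypothetical code mirroring the description above, not the benchmark
source):

func appendInts(count int) int {
	var s []int
	for i := 0; i < count; i++ {
		// count <= 4: the whole 32-byte backing store can live on the
		// stack (CL 664299), so append never calls into the runtime.
		// count >= 5: the 5th append grows into the heap; from the
		// second heap growth on, the previous store is freed by freegc.
		s = append(s, i)
	}
	total := 0
	for _, v := range s {
		total += v // use s without letting it escape
	}
	return total
}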
goos: linux
goarch: amd64
pkg: runtime
cpu: Intel(R) Xeon(R) CPU @ 2.80GHz
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ sec/op │ sec/op vs base │
AppendInt/count=1-4 2.978n ± 0% 2.969n ± 0% -0.30% (p=0.000 n=150)
AppendInt/count=4-4 4.292n ± 3% 4.163n ± 3% ~ (p=0.528 n=150)
AppendInt/count=5-4 33.50n ± 0% 34.93n ± 0% +4.25% (p=0.000 n=150)
AppendInt/count=10-4 76.21n ± 1% 75.67n ± 0% -0.72% (p=0.000 n=150)
AppendInt/count=20-4 150.6n ± 1% 133.0n ± 0% -11.65% (n=150)
AppendInt/count=50-4 284.1n ± 1% 225.6n ± 0% -20.59% (n=150)
AppendInt/count=100-4 544.2n ± 1% 392.4n ± 1% -27.89% (n=150)
AppendInt/count=200-4 1051.5n ± 1% 702.3n ± 0% -33.21% (n=150)
AppendInt/count=400-4 2.041µ ± 1% 1.312µ ± 1% -35.70% (n=150)
AppendInt/count=1000-4 5.224µ ± 2% 2.851µ ± 1% -45.43% (n=150)
AppendInt/count=2000-4 11.770µ ± 1% 6.010µ ± 1% -48.94% (n=150)
AppendInt/count=3000-4 17.747µ ± 2% 8.264µ ± 1% -53.44% (n=150)
geomean 331.8n 246.4n -25.72%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ B/op │ B/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 64.00 ± 0% 64.00 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 192.0 ± 0% 128.0 ± 0% -33.33% (n=150)
AppendInt/count=20-4 448.0 ± 0% 256.0 ± 0% -42.86% (n=150)
AppendInt/count=50-4 960.0 ± 0% 512.0 ± 0% -46.67% (n=150)
AppendInt/count=100-4 1.938Ki ± 0% 1.000Ki ± 0% -48.39% (n=150)
AppendInt/count=200-4 3.938Ki ± 0% 2.001Ki ± 0% -49.18% (n=150)
AppendInt/count=400-4 7.938Ki ± 0% 4.005Ki ± 0% -49.54% (n=150)
AppendInt/count=1000-4 24.56Ki ± 0% 10.05Ki ± 0% -59.07% (n=150)
AppendInt/count=2000-4 58.56Ki ± 0% 20.31Ki ± 0% -65.32% (n=150)
AppendInt/count=3000-4 85.19Ki ± 0% 27.30Ki ± 0% -67.95% (n=150)
geomean ² -42.81%
│ old-1bb1f2bf0c │ freegc-8ba7421-ps16 │
│ allocs/op │ allocs/op vs base │
AppendInt/count=1-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=4-4 0.000 ± 0% 0.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=5-4 1.000 ± 0% 1.000 ± 0% ~ (p=1.000 n=150)
AppendInt/count=10-4 2.000 ± 0% 1.000 ± 0% -50.00% (n=150)
AppendInt/count=20-4 3.000 ± 0% 1.000 ± 0% -66.67% (n=150)
AppendInt/count=50-4 4.000 ± 0% 1.000 ± 0% -75.00% (n=150)
AppendInt/count=100-4 5.000 ± 0% 1.000 ± 0% -80.00% (n=150)
AppendInt/count=200-4 6.000 ± 0% 1.000 ± 0% -83.33% (n=150)
AppendInt/count=400-4 7.000 ± 0% 1.000 ± 0% -85.71% (n=150)
AppendInt/count=1000-4 9.000 ± 0% 1.000 ± 0% -88.89% (n=150)
AppendInt/count=2000-4 11.000 ± 0% 1.000 ± 0% -90.91% (n=150)
AppendInt/count=3000-4 12.000 ± 0% 1.000 ± 0% -91.67% (n=150)
geomean ² -72.76% ²
Of course, these are just microbenchmarks, but they likely indicate
there are some opportunities here.
The immediately following CL 712422 tackles inlining and is able to get
runtime.freegc working automatically with iterators such as those used
by slices.Collect, which then becomes able to automatically free the
intermediate memory from its repeated appends (earlier in this work,
that required a temporary hand edit to the slices package).
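For context, slices.Collect is exactly this pattern: it grows a slice
by repeated appends while draining an iterator and then returns it, so
every intermediate backing store becomes garbage at the next growth.
A small usage example (standard library API; the automatic freeing is
what the follow-up CL enables, nothing changes in this code):

import (
	"fmt"
	"iter"
	"slices"
)

// firstN yields 0..n-1; slices.Collect appends each yielded value to a
// slice it keeps growing.
func firstN(n int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for i := 0; i < n; i++ {
			if !yield(i) {
				return
			}
		}
	}
}

func main() {
	fmt.Println(slices.Collect(firstN(10)))
}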
For now, we only use the NoAlias version for element types without
pointers while waiting on additional runtime support in CL 698515.
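As an illustration (hypothetical code, not from the CL), an append
loop over a pointer-free element type such as int can use the NoAlias
path today, while one whose element type contains pointers, such as
string, cannot yet:

// Eligible today: int contains no pointers.
func buildInts(n int) []int {
	var s []int
	for i := 0; i < n; i++ {
		s = append(s, i)
	}
	return s
}

// Not yet eligible: string contains a pointer, so this still takes the
// ordinary growslice path until the follow-up runtime support lands.
func buildStrings(n int) []string {
	var s []string
	for i := 0; i < n; i++ {
		s = append(s, "x")
	}
	return s
}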
Updates #74299
Change-Id: I1b9d286aa97c170dcc2e203ec0f8ca72d84e8221
Reviewed-on: https://go-review.googlesource.com/c/go/+/710015
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2025-11-25 09:57:50 -05:00
		// amd64:`.*growslice\b`
		r = append(r, i)
	}
	for i, x := range r {
		println(i, x)
	}
	// amd64:`.*moveSliceNoCapNoScan\b`
	return r
}

// Append18: r is published through *p inside the loop, so every
// iteration can observe it; the asmcheck directives expect the plain
// growslice call and no moveSliceNoCapNoScan move at the store.
func Append18(n int, p *[]int) {
	var r []int
	for i := range n {
		// amd64:-`.*moveSliceNoCapNoScan`
		*p = r
		// amd64:`.*growslice`
		r = append(r, i)
	}
}

// Append19: r escapes only via the store to p[j] after the inner
// append loop, so the directives expect growslice for the appends and
// a moveSliceNoCapNoScan move at the store.
func Append19(n int, p [][]int) {
	for j := range p {
		var r []int
		for i := range n {
			// amd64:`.*growslice`
			r = append(r, i)
		}
		// amd64:`.*moveSliceNoCapNoScan`
		p[j] = r
	}
}

// Append20: a single-element append whose result is stored to p[j];
// the directives expect growslice and no moveSliceNoCapNoScan move.
func Append20(n int, p [][]int) {
	for j := range p {
		var r []int
		// amd64:`.*growslice`
		r = append(r, 0)
		// amd64:-`.*moveSliceNoCapNoScan`
		p[j] = r
	}
}

//go:noinline
func useSlice(s []int) {
}