I’ve also been thinking more about unnecessary allocations in my Go code and how to avoid them by declaring the length of a slice up front.

Normally, I’d write something like this:

var s []int
for _, val := range otherSlice {
    s = append(s, val)
}

Since I don’t give s any initial capacity, if otherSlice is large, the array backing s will repeatedly run out of room as values are appended; each time that happens, append allocates a new, larger array and copies all of the existing values into it before adding the new one.

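To make that growth visible, here’s a quick standalone sketch (separate from the benchmarks below) that prints the slice’s length and capacity every time append has to grow the backing array:

// capgrowth.go
package main

import "fmt"

func main() {
    var s []int
    lastCap := cap(s)
    for i := 0; i < 1000000; i++ {
        s = append(s, i)
        if cap(s) != lastCap {
            // A capacity change means append allocated a new backing
            // array and copied the existing elements into it.
            fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
            lastCap = cap(s)
        }
    }
}

Roughly speaking, the capacity doubles while the slice is small and grows by a smaller factor once it gets large, so the copies become less frequent, but each one gets more expensive.
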
So nowadays, I would write this like so:

s := make([]int, len(otherSlice))
for i, val := range otherSlice {
    s[i] = val
}

I was curious how much of an impact these sorts of unnecessary allocations have, so I decided to benchmark the two approaches.

Here are my functions:

// allocbench.go
package allocbench

func buildBad(n int) []int {
    var s []int
    for i := 0; i < n; i++ {
        s = append(s, i)
    }

    return s
}

func buildGood(n int) []int {
    s := make([]int, n)
    for i := 0; i < n; i++ {
        s[i] = i
    }

    return s
}

And here are my benchmark tests:

// allocbench_test.go
package allocbench

import (
    "testing"
)

func BenchmarkBuildBad10(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildBad(10)
    }
}

func BenchmarkBuildBad100(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildBad(100)
    }
}

func BenchmarkBuildBad1000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildBad(1000)
    }
}

func BenchmarkBuildBad10000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildBad(10000)
    }
}

func BenchmarkBuildBad100000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildBad(100000)
    }
}

func BenchmarkBuildBad1000000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildBad(1000000)
    }
}

func BenchmarkBuildGood10(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildGood(10)
    }
}

func BenchmarkBuildGood100(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildGood(100)
    }
}

func BenchmarkBuildGood1000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildGood(1000)
    }
}

func BenchmarkBuildGood10000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildGood(10000)
    }
}

func BenchmarkBuildGood100000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildGood(100000)
    }
}

func BenchmarkBuildGood1000000(b *testing.B) {
    for n := 0; n < b.N; n++ {
        buildGood(1000000)
    }
}
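
As an aside, these twelve near-identical functions could be collapsed into one table-driven benchmark using b.Run sub-benchmarks. This is just a sketch of that alternative (it needs "fmt" added to the imports, and the names go test reports would change to the BenchmarkBuild/bad-10 style, so it wouldn’t match the output below):

func BenchmarkBuild(b *testing.B) {
    for _, size := range []int{10, 100, 1000, 10000, 100000, 1000000} {
        b.Run(fmt.Sprintf("bad-%d", size), func(b *testing.B) {
            for n := 0; n < b.N; n++ {
                buildBad(size)
            }
        })
        b.Run(fmt.Sprintf("good-%d", size), func(b *testing.B) {
            for n := 0; n < b.N; n++ {
                buildGood(size)
            }
        })
    }
}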

I tested them like so:

$ go test -bench=.
goos: darwin
goarch: arm64
pkg: allocbenchtest
BenchmarkBuildBad10-10              11132912            98.84 ns/op
BenchmarkBuildBad100-10              4043415           296.2 ns/op
BenchmarkBuildBad1000-10              531997          2217 ns/op
BenchmarkBuildBad10000-10              46240         25841 ns/op
BenchmarkBuildBad100000-10              4632        247356 ns/op
BenchmarkBuildBad1000000-10             4702        247128 ns/op // <--
BenchmarkBuildGood10-10             57219716            20.24 ns/op
BenchmarkBuildGood100-10            10907318           109.3 ns/op
BenchmarkBuildGood1000-10            1252094           966.6 ns/op
BenchmarkBuildGood10000-10            172495          6768 ns/op
BenchmarkBuildGood100000-10            19878         60465 ns/op
BenchmarkBuildGood1000000-10           19930         59969 ns/op // <--
PASS
ok      allocbenchtest  17.475s

So at the largest sizes, the “bad” function took roughly 250,000 ns per operation, while the “good” function took roughly 60,000 ns, about 4x faster.
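
If I wanted to confirm how much allocation is actually going on, go test’s -benchmem flag adds allocated bytes and allocation counts per operation to each result line; I haven’t included that output here, but it’s a one-flag change:

$ go test -bench=. -benchmem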

I’m not sure whether this speedup comes entirely from the “good” function never having to re-allocate its backing array. It’s possible that the per-call overhead of append() in the “bad” function is also slowing things down.
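
One way to separate those two effects would be a third variant that still uses append() but gives the slice its full capacity up front, so the backing array never has to grow. This is just a sketch I haven’t benchmarked, and buildAppendPrealloc is a name I’m making up:

// buildAppendPrealloc still appends, but the slice starts with length 0
// and capacity n, so append never reallocates the backing array.
func buildAppendPrealloc(n int) []int {
    s := make([]int, 0, n)
    for i := 0; i < n; i++ {
        s = append(s, i)
    }

    return s
}

If it came out close to buildGood, that would suggest the reallocations and copies account for most of the gap; if it stayed closer to buildBad, the append() calls themselves would be the bigger cost. It would also keep the append-style loop, which is most of what I like about the “bad” version’s readability.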

Regardless, this was a helpful exercise to get concrete performance numbers for the two implementations.

I still wonder whether the readability trade-off is worth it in most functions, though. If the function’s input (i.e. n) is small, the difference between the two is negligible, and I find the “bad” approach easier to read.