
Fast Multi-Platform Builds on GitHub

2026-02-08

If you want to build multi-architecture Docker containers in GitHub Actions, the standard recommendation you’ll find online is to install BuildX and QEMU. The downside of this approach is that QEMU emulation is roughly 10x slower than building on native hardware: my simple Hello World project went from 30 seconds to 3 minutes.

QEMU Workflow YAML
name: "Build: QEMU"

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
    - name: checkout
      uses: actions/checkout@v6

    - name: Set up QEMU
      uses: docker/setup-qemu-action@v3

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Log in to the Container registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Extract metadata (tags, labels) for Docker
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ghcr.io/nabsul/gh/hello
        tags: type=sha,prefix=qemu-

    - name: Build and push Docker image
      uses: docker/build-push-action@v6
      with:
        context: .
        platforms: linux/amd64,linux/arm64
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}

In this post, I will show you several ways to speed up your builds. The options you have are:

  • Switching to runners that have cross-platform remote builds pre-configured
  • Building single-architecture images in a matrix and merging them manually
  • Using GitHub Actions instances as remote builders with Tailscale
  • Using a Kubernetes cluster for remote builds
  • Setting up your own machines for remote builds

Switching to a different Runner Provider

This is the simplest solution, but it requires signing up for a new service and will cost some money. Several alternative runner providers come preconfigured with BuildX and native-hardware remote builders.

One such provider that I tried is Namespace.so. Signing up was fast and easy, and switching to them in my workflows only required changing the runs-on field in my YAML.

Namespace.so Workflow YAML
name: "Build: Namespace.so"

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: namespace-profile-default
    permissions:
      contents: read
      packages: write
    steps:
    - name: checkout
      uses: actions/checkout@v6

    - name: Log in to the Container registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Extract metadata (tags, labels) for Docker
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ghcr.io/nabsul/gh/hello
        tags: type=sha,prefix=namespace-

    - name: Build and push Docker image
      uses: docker/build-push-action@v6
      with:
        context: .
        platforms: linux/amd64,linux/arm64
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}

Building Single Images and Merging

This option is probably the simplest way to get cross-platform builds without leaving GitHub. You build a separate image for each architecture you want, then merge them into a single multi-arch manifest with a buildx command like:

docker buildx imagetools create -t nabsul/myproject:v1.0.0 nabsul/myproject:v1.0.0-amd64 nabsul/myproject:v1.0.0-arm64

In this example, I use a matrix of jobs to reduce duplicate YAML, and then a merge job to create the final image.
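Once the merged tag is pushed, it is worth checking that the manifest list really contains both architectures. A quick sanity check, reusing the hypothetical nabsul/myproject tag from above:

```shell
# Inspect the merged tag: the output should list one manifest
# per platform (linux/amd64 and linux/arm64)
docker buildx imagetools inspect nabsul/myproject:v1.0.0
```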

Matrix Workflow YAML
name: "Build: Matrix"

on:
  workflow_dispatch:

jobs:
  build:
    runs-on: ${{ matrix.runner }}
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
            suffix: amd64
          - platform: linux/arm64
            runner: ubuntu-24.04-arm
            suffix: arm64

    permissions:
      contents: read
      packages: write

    steps:
    - name: checkout
      uses: actions/checkout@v6

    - name: Log in to the Container registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Extract metadata (tags, labels) for Docker
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ghcr.io/nabsul/gh/hello
        tags: type=sha,prefix=matrix-,suffix=-${{ matrix.suffix }}

    - name: Build and push Docker image
      uses: docker/build-push-action@v6
      with:
        context: .
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}

  merge:
    runs-on: ubuntu-latest
    needs: build
    permissions:
      contents: read
      packages: write

    steps:
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Log in to the Container registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Extract metadata (tags, labels) for Docker
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ghcr.io/nabsul/gh/hello
        tags: type=sha,prefix=matrix-

    - name: Create and push multi-arch manifest
      run: |
        docker buildx imagetools create \
          -t ${{ steps.meta.outputs.tags }} \
          ${{ steps.meta.outputs.tags }}-amd64 \
          ${{ steps.meta.outputs.tags }}-arm64

GitHub Remote Builders with Tailscale

Honestly, this is cool in a nerdy way, but I wouldn’t recommend it for production. For each hardware architecture, I spin up a job that starts a buildkitd server and joins my tailnet under a pre-determined hostname, which lets the build job find the machine. The final step of each builder job is printf "HTTP/1.1 200 OK\r\nContent-Length: 16\r\n\r\nShutting down..." | nc -l -p 8080, which simply blocks until something hits port 8080 and then exits, ending the job.

The main build job joins the tailnet, registers the two machines as remote BuildX builders, and uses them to do the cross-platform build. Once the build is done, I curl each builder’s port 8080 to make those jobs exit.

Like I said, this is a pretty cool setup, but there’s just so much that can go wrong. If the curl fails, the builder jobs hang until they time out, and you have to worry about tailnet configuration and security on top of your CI.
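To make the shutdown step a little less fragile, the listener can compute the Content-Length instead of hardcoding it, and cap the wait so a failed curl can’t leave the job hanging forever. A sketch of a hardened version (the 30-minute cap is an arbitrary choice of mine):

```shell
# Build the one-shot HTTP response; computing Content-Length from the
# body avoids a mismatch if the message ever changes (16 bytes here).
BODY="Shutting down..."
RESPONSE=$(printf 'HTTP/1.1 200 OK\r\nContent-Length: %s\r\n\r\n%s' "${#BODY}" "$BODY")

# In the workflow step you would then serve it with a capped wait,
# so the job ends on its own even if the kill curl never arrives:
#   printf '%s' "$RESPONSE" | timeout 1800 nc -l -p 8080
printf 'prepared a %s-byte response\n' "${#RESPONSE}"
```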

Tailscale Workflow YAML
name: "Build: Tailscale"

on:
  workflow_dispatch:

jobs:

  builders:
    runs-on: ${{ matrix.runner }}
    permissions:
      contents: read
      packages: write
    strategy:
      matrix:
        include:
          - host: amd64
            runner: ubuntu-latest
          - host: arm64
            runner: ubuntu-24.04-arm
    steps:
    - name: Log in to the Container registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: start buildkit
      run: docker run -d --name buildkit --privileged -p 1234:1234 moby/buildkit:latest --addr tcp://0.0.0.0:1234

    - name: Tailscale
      uses: tailscale/github-action@v4
      with:
        oauth-client-id: ${{ secrets.TS_CLIENT_ID }}
        oauth-secret: ${{ secrets.TS_CLIENT_SECRET }}
        tags: tag:gh-action
        version: latest
        hostname: actions-${{ matrix.host }}

    - name: wait for end signal
      run: |
        # Listen on 8080, and exit as soon as a request is received
        printf "HTTP/1.1 200 OK\r\nContent-Length: 16\r\n\r\nShutting down..." | nc -l -p 8080

  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
    - name: checkout
      uses: actions/checkout@v6

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3

    - name: Log in to the Container registry
      uses: docker/login-action@v3
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}

    - name: Extract metadata (tags, labels) for Docker
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ghcr.io/nabsul/gh/hello
        tags: type=sha,prefix=tailscale-

    - name: Tailscale
      uses: tailscale/github-action@v4
      with:
        oauth-client-id: ${{ secrets.TS_CLIENT_ID }}
        oauth-secret: ${{ secrets.TS_CLIENT_SECRET }}
        tags: tag:gh-action
        version: latest
        hostname: actions-builder
        ping: actions-amd64,actions-arm64

    - name: Setup Buildx Remote TCP
      run: |
        docker buildx create --name remote-build --use    --driver remote tcp://actions-amd64:1234
        docker buildx create --name remote-build --append --driver remote tcp://actions-arm64:1234

    - name: Build and push Docker image
      uses: docker/build-push-action@v6
      with:
        context: .
        platforms: linux/amd64,linux/arm64
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}

    - name: shut down builders
      if: ${{ always() }}
      run: |
        curl -v http://actions-amd64:8080/kill || true
        curl -v http://actions-arm64:8080/kill || true

Kubernetes Remote Builders

If you happen to have a Kubernetes cluster that has both Intel and ARM nodes in it, you can use them as remote builders. In this example, I create a temporary namespace for each build, run the builds there, and then clean up afterwards.

Overall this is not a bad option if you already have a Kubernetes cluster being used for other purposes. But you probably don’t want to create a cluster just for your builds.
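Before kicking off the build, it helps to confirm that both kubernetes-driver nodes actually came up. A quick check, assuming the same remote-build builder and namespace naming as in the workflow (GITHUB_SHA is the commit SHA that Actions exposes to every step):

```shell
# List builders; remote-build should show one node per platform
docker buildx ls

# The kubernetes driver runs buildkitd as pods in the per-build
# namespace, so you can also watch them come up directly
kubectl get pods -n "buildx-$GITHUB_SHA" -o wide
```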

Kubernetes Workflow YAML
name: "Build: Kubernetes Remote"

on:
  workflow_dispatch:

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Setup Kubernetes Buildx Remote
        run: |
          mkdir -p /tmp/buildx-config
          echo "${{ secrets.KUBECONFIG_CONTENT }}" | base64 -d > /tmp/buildx-config/kubeconfig
          export KUBECONFIG=/tmp/buildx-config/kubeconfig
          kubectl create namespace buildx-${{ github.sha }} || true
          kubectl label namespace buildx-${{ github.sha }} pod-security.kubernetes.io/enforce=privileged --overwrite
          docker buildx create --name remote-build --use --driver kubernetes --driver-opt namespace=buildx-${{ github.sha }} --driver-opt "nodeselector=kubernetes.io/arch=amd64" --platform linux/amd64
          docker buildx create --name remote-build --append --driver kubernetes --driver-opt namespace=buildx-${{ github.sha }} --driver-opt "nodeselector=kubernetes.io/arch=arm64" --platform linux/arm64
          docker buildx inspect --bootstrap

      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/nabsul/gh/hello
          tags: type=sha,prefix=k8s-

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

      - name: Remove Docker Buildx Remote
        if: ${{ always() }}
        run: |
          export KUBECONFIG=/tmp/buildx-config/kubeconfig
          kubectl delete namespace buildx-${{ github.sha }}

TCP Remote Builders

You can also just run individual VMs of your different hardware types and use them for remote builds. In this example, I created one ARM and one Intel VM and secured them with TLS certs. I then configured BuildX to use those remote runners for the build. You could also leverage Tailscale for this and avoid the need for TLS certificates.
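The workflow below only covers the client side; each VM also needs a buildkitd instance listening on TCP with TLS enabled. A sketch of what that could look like on the VM (the cert paths are my assumptions; the server certs must be issued by the same CA as the workflow’s CA_CERT secret):

```shell
# Run buildkitd on the VM, listening on port 1234 with mutual TLS
docker run -d --name buildkitd --privileged \
  -p 1234:1234 \
  -v /etc/buildkit/certs:/certs:ro \
  moby/buildkit:latest \
  --addr tcp://0.0.0.0:1234 \
  --tlscacert /certs/ca.crt \
  --tlscert /certs/server.crt \
  --tlskey /certs/server.key
```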

TCP Remote Builders Workflow YAML
name: "Build: Remote TCP"

on:
  workflow_dispatch:

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Setup Buildx Remote TCP
        run: |
          mkdir -p /tmp/buildx-certs
          echo "${{ secrets.CA_CERT }}" > /tmp/buildx-certs/ca.crt
          echo "${{ secrets.CLIENT_CERT }}" > /tmp/buildx-certs/client.crt
          echo "${{ secrets.CLIENT_KEY }}" > /tmp/buildx-certs/client.key
          docker buildx create --name remote-build --use    --driver remote --driver-opt cacert=/tmp/buildx-certs/ca.crt --driver-opt cert=/tmp/buildx-certs/client.crt --driver-opt key=/tmp/buildx-certs/client.key --platform linux/amd64 tcp://20.51.113.181:1234
          docker buildx create --name remote-build --append --driver remote --driver-opt cacert=/tmp/buildx-certs/ca.crt --driver-opt cert=/tmp/buildx-certs/client.crt --driver-opt key=/tmp/buildx-certs/client.key --platform linux/arm64 tcp://20.120.181.102:1234

      - name: Log in to the Container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/nabsul/gh/hello
          tags: type=sha,prefix=tcp-

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

Conclusion

So there you have it: several options for faster multi-architecture builds on GitHub Actions. Personally, I currently lean towards Namespace.so, simply because it only costs me about $2 a month and I’m lazy. If Namespace started to get expensive, I would probably go with the separate-builds-and-merge pattern. And if my builds were getting expensive on GitHub itself, I might look into setting up builders at home and doing remote builds over Tailscale.