Contributing guidelines
- I've read the contributing guidelines and wholeheartedly agree
I've found a bug and checked that ...
- ... the documentation does not mention anything about my problem
- ... there are no open or closed issues that are related to my problem
Description
Hi, I'm running docker buildx bake with around 40 targets.
I've created a multi-node builder, but I've noticed that all targets are being built on the same node.
Is there a way to spread the load across the nodes?
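For reference, the builder was created along these lines (a minimal sketch rather than my exact command; replicas is the kubernetes driver-opt documented for running multiple BuildKit pods, and the name matches the docker buildx ls output below):

# Sketch: create a multi-node builder backed by two BuildKit pods.
# "replicas" is a documented kubernetes driver-opt; the value 2 matches
# the two nodes shown under "Builders list" below.
docker buildx create \
  --name kube \
  --driver kubernetes \
  --driver-opt replicas=2 \
  --bootstrap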
Expected behaviour
Expected the targets to be spread across the builder's pods
Actual behaviour
All of the targets are being built on the same pod
Buildx version
github.com/docker/buildx v0.25.0 Homebrew
Docker info
Client: Docker Engine - Community
 Version:    28.1.1
 Context:    desktop-linux
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.25.0
    Path:     /opt/homebrew/lib/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.35.1-desktop.1
    Path:     /Users/morerel/.docker/cli-plugins/docker-compose

Server:
 Containers: 27
  Running: 0
  Paused: 0
  Stopped: 27
 Images: 125
 Server Version: 28.1.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 CDI spec directories:
  /etc/cdi
  /var/run/cdi
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05044ec0a9a75232cad458027ca83437aae3f4da
 runc version: v1.2.5-0-g59923ef
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.10.14-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: aarch64
 CPUs: 11
 Total Memory: 7.654GiB
 Name: docker-desktop
 ID: b7c74a23-00e4-4a33-b9c8-7db46ef250cf
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Labels:
  com.docker.desktop.address=unix:///Users/morerel/Library/Containers/com.docker.docker/Data/docker-cli.sock
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  ::1/128
  127.0.0.0/8
 Live Restore Enabled: false
Builders list
docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS BUILDKIT PLATFORMS
kube* kubernetes
\_ kube0 \_ kubernetes:///kube?deployment=buildkit-9ece06e2-06e4-4ca5-9075-c44c7cc27c6a-202c1&kubeconfig= running v0.23.2 linux/amd64 (+4), linux/386
\_ kube0 \_ kubernetes:///kube?deployment=buildkit-9ece06e2-06e4-4ca5-9075-c44c7cc27c6a-202c1&kubeconfig= running v0.23.2 linux/amd64 (+4), linux/386
default docker
\_ default \_ default running v0.21.0 linux/amd64 (+2), linux/arm64, linux/arm (+2), linux/ppc64le, (3 more)
desktop-linux docker
\_ desktop-linux \_ desktop-linux running v0.21.0 linux/amd64 (+2), linux/arm64, linux/arm (+2), linux/ppc64le, (3 more)
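For what it's worth, both kube0 entries point at the same deployment. To confirm how many BuildKit pods are actually running behind the builder, I can check directly with kubectl (a sketch; the namespace placeholder and the app label selector are assumptions inferred from the deployment name above):

# Sketch: list the BuildKit pods backing the builder.
# <namespace> is a placeholder; the app label is guessed from the
# deployment name shown in the buildx ls output above.
kubectl get pods -n <namespace> -l app=buildkit-9ece06e2-06e4-4ca5-9075-c44c7cc27c6a-202c1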
Configuration
// docker-bake.hcl
group "default" {
  targets = ["imageA", "imageB", "imageC", "imageD"]
}

target "imageA" {
  context    = "imageA-dir"
  dockerfile = "Dockerfile"
  tags       = ["image:latest"]
}

target "imageB" {
  context    = "imageB-dir"
  dockerfile = "Dockerfile"
  tags       = ["image:latest"]
}

target "imageC" {
  context    = "imageC-dir"
  dockerfile = "Dockerfile"
  tags       = ["image:latest"]
}

target "imageD" {
  context    = "imageD-dir"
  dockerfile = "Dockerfile"
  tags       = ["image:latest"]
}
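With that config, the build is kicked off with a plain bake invocation against the kube builder (a sketch; the real file has around 40 targets rather than the four shown here):

# Sketch: build every target in the "default" group on the kube builder.
docker buildx bake --builder kube default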
Build logs
No response
Additional info
No response