After starting the VM I got no errors, and this is the output:
❯ colima ls
PROFILE        STATUS     ARCH       CPUS    MEMORY    DISK     RUNTIME       ADDRESS
k8s-aarch64    Running    aarch64    4       8GiB      60GiB    docker+k3s    192.168.106.4
But limactl is not reporting anything:
❯ limactl ls
WARN[0000] No instance found. Run `limactl create` to create an instance.
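(Side note on the empty `limactl ls`: Colima keeps its Lima data under its own directory rather than the default `~/.lima`, so a bare `limactl` invocation finds no instances. A sketch of how to point `limactl` at Colima's instances; the `~/.colima/_lima` path is the default location and an assumption about this install:

```shell
# Colima stores its Lima instances under ~/.colima/_lima (default
# location -- an assumption), not ~/.lima, so plain `limactl ls`
# reports nothing. Overriding LIMA_HOME should make the VM show up:
colima_lima_home="$HOME/.colima/_lima"
if command -v limactl >/dev/null 2>&1; then
  LIMA_HOME="$colima_lima_home" limactl list
else
  # limactl not installed here; show the command that would be run
  echo "would run: LIMA_HOME=$colima_lima_home limactl list"
fi
```

This suggests the empty `limactl ls` output is expected behaviour rather than part of the bug.)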
And after switching to the colima kubernetes context:
❯ kubectx
Switched to context "colima-k8s-aarch64".
I cannot get any response from the kubernetes cluster:
❯ kubectl get pods
E0313 16:17:51.870777 26410 memcache.go:265] couldn't get current server API group list: Get "https://192.168.106.4:6443/api?timeout=32s": dial tcp 192.168.106.4:6443: i/o timeout
although docker is reporting containers running:
❯ docker context list
NAME                   DESCRIPTION                                DOCKER ENDPOINT                                            ERROR
colima-k8s-aarch64 *   colima [profile=k8s-aarch64]               unix:///Users/mmesnjak/.colima/k8s-aarch64/docker.sock
default                Current DOCKER_HOST based configuration    unix:///var/run/docker.sock
desktop-linux          Docker Desktop                             unix:///Users/mmesnjak/.docker/run/docker.sock
❯ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
db9127e2b7fe 8e22bf689cda "/metrics-server --c…" About an hour ago Up About an hour k8s_metrics-server_metrics-server-67c658944b-9thph_kube-system_2323d259-800e-4b16-bc1f-474051430dcd_3
b6e9a73a01ec 10ada9a7f8ab "local-path-provisio…" About an hour ago Up About an hour k8s_local-path-provisioner_local-path-provisioner-84db5d44d9-s47g5_kube-system_58fcfe41-44db-46d3-a0d0-7c204516d9ba_3
3311b251cf06 97e04611ad43 "/coredns -conf /etc…" About an hour ago Up About an hour k8s_coredns_coredns-6799fbcd5-htvcp_kube-system_04063124-e17d-4f80-8e43-431ab7e5293d_2
c1fb0354375f rancher/mirrored-pause:3.6 "/pause" About an hour ago Up About an hour k8s_POD_local-path-provisioner-84db5d44d9-s47g5_kube-system_58fcfe41-44db-46d3-a0d0-7c204516d9ba_2
f509fe34eb50 rancher/mirrored-pause:3.6 "/pause" About an hour ago Up About an hour k8s_POD_coredns-6799fbcd5-htvcp_kube-system_04063124-e17d-4f80-8e43-431ab7e5293d_2
63a7218c2ee4 rancher/mirrored-pause:3.6 "/pause" About an hour ago Up About an hour k8s_POD_metrics-server-67c658944b-9thph_kube-system_2323d259-800e-4b16-bc1f-474051430dcd_2
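Two quick checks that could narrow this down (a diagnostic sketch, not part of the original report): whether the API server port is reachable from the host at all, and whether the cluster answers from inside the VM, bypassing the host network path. The address and port come from the `colima ls` and `kubectl` output above.

```shell
# 1) Is the k3s API server port reachable from the host?
addr="192.168.106.4"
port=6443
if command -v nc >/dev/null 2>&1 && nc -z -w 3 "$addr" "$port" 2>/dev/null; then
  status="reachable"
else
  status="unreachable from this host"
fi
echo "API server $addr:$port -> $status"

# 2) Does the cluster answer from inside the VM? (k3s ships its own
#    kubectl; running it via colima ssh skips the host network path.)
if command -v colima >/dev/null 2>&1; then
  colima ssh -p k8s-aarch64 -- sudo kubectl get nodes
fi
```

If the port is open but `kubectl` on the host still times out, the problem is more likely a stale kubeconfig than VM networking; if it is closed, the host cannot reach the VM's reachable address at all.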
Version
❯ colima version && limactl --version && qemu-img --version
colima version 0.6.8
git commit: 9b0809d0ed9ad3ff1e57c405f27324e6298ca04f
limactl version 0.20.2
qemu-img version 8.2.1
Copyright (c) 2003-2023 Fabrice Bellard and the QEMU Project developers
Description
I installed Colima on my Mac M1 Max machine (macOS Sonoma 14.4 (23E214)) using brew:
brew install colima
After that, I started the aarch64 profile:
colima start -p k8s-aarch64 --edit
and changed these properties in the profile yaml file:
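(The edited values themselves did not make it into the report. Judging from the `colima ls` and `colima status` output elsewhere in the issue, the profile presumably ended up with settings along these lines; this is a reconstruction, not the actual diff:

```yaml
# Reconstructed from the reported colima ls / colima status output --
# not the reporter's actual edits.
cpu: 4
memory: 8
disk: 60
vmType: vz          # "macOS Virtualization.Framework" in colima status
mountType: virtiofs
kubernetes:
  enabled: true
```
)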
Operating System
macOS Sonoma 14.4 (23E214), Apple M1 Max
Output of colima status
❯ colima status -p k8s-aarch64
INFO[0000] colima [profile=k8s-aarch64] is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] address: 192.168.106.4
INFO[0000] socket: unix:///Users/mmesnjak/.colima/k8s-aarch64/docker.sock
INFO[0000] kubernetes: enabled
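(A further diagnostic sketch, not from the original report: the server URL stored in the active kubeconfig context should match the address colima reports, `https://192.168.106.4:6443`. A stale entry from a previous VM start would produce exactly this i/o timeout.

```shell
# Print the API server URL of the currently selected kubeconfig
# context and compare it with the address colima reports.
if command -v kubectl >/dev/null 2>&1; then
  server="$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')"
else
  server="(kubectl not on PATH)"
fi
echo "kubeconfig server: $server"
```
)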
Reproduction Steps
Expected behaviour
I expect to be able to connect to the Kubernetes cluster from the host machine's terminal.
Additional context
No response