---
title: liquid-metal
author: weaveworks
date: 2022-05-17
---
- Overview of Liquid Metal components
- Creating a management CAPI cluster in a local kind cluster
- Creating a workload bare metal cluster via CAPMVM (Cluster API Provider MicroVM)
- Static, not dynamic, placement
- Cluster API (CAPI)
- Equinix machines, configured to run Liquid Metal components
- Will be hosts for MicroVMs running Kubernetes nodes
- 2 in this example
- Any number can be used
- Flintlock: the service which creates and manages MicroVMs

```sh
systemctl status flintlockd.service
```
- MicroVMs need a kernel and an operating system
- Liquid Metal ships these as container images
- Containerd pulls, stores and mounts the base images used by the MicroVMs
- Containerd is the same container runtime that Docker uses internally

```sh
systemctl status containerd.service
```
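To see which base images flintlock's containerd has pulled, you can list them with containerd's `ctr` CLI. The socket path and namespace below are assumptions for illustration; match them to your flintlockd configuration.

```sh
# List images pulled by the containerd instance flintlock uses.
# Socket path and namespace are assumptions -- adjust to match
# your flintlockd configuration.
sudo ctr \
  --address /run/containerd-dev/containerd.sock \
  --namespace flintlock \
  images ls
```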
- Firecracker: the tool used by flintlockd to launch lightweight virtual machines
- A binary which is executed for each MicroVM, running each one as a host process
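Because each MicroVM is just a host process, a plain process listing is enough to see them once MicroVMs are running later in the walkthrough:

```sh
# Each running MicroVM appears as its own firecracker process.
pgrep -a firecracker
```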
- Basic yet practical networking setup
- Aim to do as much as possible in private networks
- All hosts connected to a VLAN with a private subnet
- MicroVMs are assigned a private address from that subnet by a DHCP server
- NAT and filter rules give the MicroVMs the egress they need (sketched below)
- The eventual cluster endpoint will be private: an address from the private subnet
- A Tailscale VPN is configured to route traffic from the VLAN subnet to our local workstation(s)
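As a rough illustration of the NAT and VPN pieces, assuming a 192.168.10.0/24 VLAN subnet (inferred from the control plane VIP used later) and an uplink interface named `bond0`:

```sh
# Hypothetical egress NAT for the MicroVM subnet; the subnet and
# uplink interface name are assumptions for illustration.
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o bond0 -j MASQUERADE

# Advertise the VLAN subnet into the tailnet so local
# workstations can reach the (private) cluster endpoint.
tailscale up --advertise-routes=192.168.10.0/24
```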
- CAPI workload clusters coordinated via a management cluster
- Management clusters can be run anywhere
- kind is a good option
- It runs a small Kubernetes cluster in Docker containers

```sh
kind create cluster
```
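By default kind names the cluster `kind`, so we can confirm it is up with:

```sh
kubectl cluster-info --context kind-kind
```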
- Liquid Metal ships a custom CAPI infrastructure provider
- Cluster API Provider MicroVM (capmvm)
- Works with CAPI to create MicroVMs on our bare metal hosts
- CAPI will then schedule Kubernetes nodes on those MicroVMs
- `clusterctl` installs the CAPI controllers in the management cluster
- A config file instructs `clusterctl` where to find the custom provider image and how to run it
```sh
export CAPMVM_VERSION=v0.5.1

mkdir -p ~/.cluster-api
cat << EOF >> ~/.cluster-api/clusterctl.yaml
providers:
  - name: "microvm"
    url: "https://github.com/weaveworks-liquidmetal/cluster-api-provider-microvm/releases/$CAPMVM_VERSION/infrastructure-components.yaml"
    type: "InfrastructureProvider"
EOF
```
- Enable the ClusterResourceSet feature flag in CAPI
- So we can use Cilium as our microvm cluster's CNI (see the sketch below)

```sh
export EXP_CLUSTER_RESOURCE_SET=true
```
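For context, the cilium flavor we use later leans on this feature: a ClusterResourceSet applies a bundle of resources (here, the Cilium manifests) to any workload cluster matching its selector. A rough sketch of the shape, with illustrative names rather than the ones the flavor actually generates:

```yaml
# Sketch only: names and labels are illustrative, not the ones
# the cilium flavor generates.
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: cilium-addon
spec:
  clusterSelector:
    matchLabels:
      cni: cilium
  resources:
    - kind: ConfigMap
      name: cilium-manifests
```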
Now we can initialise the local cluster as a management cluster.
```sh
clusterctl init --infrastructure microvm
```
- Export some simple configuration.
```sh
export CLUSTER_NAME=mvm-test
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=5
```
- Configure the control plane endpoint address
- Here, a private address from my Equinix hosts' VLAN

```sh
export CONTROL_PLANE_VIP="192.168.10.25"
```
- Use `clusterctl` to generate a manifest definition for the microvm cluster
```sh
clusterctl generate cluster \
  --infrastructure microvm:$CAPMVM_VERSION \
  --flavor cilium \
  $CLUSTER_NAME > cluster.yaml
```
- `cluster.yaml` is a Kubernetes object manifest
- It instructs the CAPI controllers and the CAPMVM controller how to build the cluster
- We can see instructions for how to deploy control plane and worker nodes
- Generated template is a "getting started" point
- For clusters across multiple hosts, we will need to edit the manifest
- Add the endpoints for each of our Equinix hosts
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: MicrovmCluster
metadata:
  name: mvm-test
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 192.168.10.25
    port: 6443
  placement:
    staticPool:
      hosts:
        - controlplaneAllowed: true
          endpoint: 147.75.80.43:9090
        - controlplaneAllowed: true
          endpoint: 147.75.80.45:9090
```
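Each `endpoint` above is a flintlockd gRPC address. If your flintlockd build has gRPC reflection enabled (an assumption here, not something this walkthrough configures), a quick reachability check from a machine on the network could look like:

```sh
# Assumes grpcurl is installed and flintlockd serves reflection.
grpcurl -plaintext 147.75.80.43:9090 list
```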
We can now use `kubectl` to apply our manifest and create our cluster.

```sh
kubectl apply -f cluster.yaml
```
(If you have access to your MicroVM hosts, you can watch the MicroVMs coming up by looking at the `/var/lib/flintlock/vm/<namespace>/<name>` directory.)
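From the management cluster we can also watch CAPI do its work:

```sh
# Watch the cluster and its machines reconcile.
kubectl get clusters,machines
clusterctl describe cluster $CLUSTER_NAME
```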
- The microvm cluster's kubeconfig can be found in a secret
- Named `<cluster-name>-kubeconfig`
- This is created almost immediately by CAPI
- The cluster may not be ready yet, but the config is
- We can extract and decode this config with `jq` and `base64 -d`
```sh
kubectl get secret $CLUSTER_NAME-kubeconfig -o json | \
  jq -r .data.value | \
  base64 -d > config.yaml
```
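Alternatively, `clusterctl` can fetch the same kubeconfig directly:

```sh
clusterctl get kubeconfig $CLUSTER_NAME > config.yaml
```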
- Use this config to watch the nodes come up
```sh
kubectl --kubeconfig config.yaml get nodes
```
- And inspect the pods
```sh
kubectl --kubeconfig config.yaml get pods -A
```