Create a Liquid Metal cluster
We will use clusterctl again to generate a manifest for our workload cluster.
Configure
First, we need to configure some options:
export CLUSTER_NAME=lm-demo
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=10
This will result in a cluster with a single control-plane node and 10 worker nodes. You may change these values to whatever you wish.
CAPMVM will use kube-vip to assign a virtual IP to our Liquid Metal cluster.
This IP must be from outside the range set as the DHCP pool in the network-hub device. For us this is 192.168.10.25.
export CONTROL_PLANE_VIP="192.168.10.25"
In this demo we are setting the cluster API to be only privately accessible via our VPN. While this is great for security, it means that if we, say, wanted to use kube-vip to also provide Load Balancers for services, those services would also only be accessible via the VPN.
Your options if you do not want this behaviour are to either acquire a public IPv4 address for your cluster endpoint, or to use another tool to expose your services, such as ingress-nginx.
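If you go the ingress-nginx route, a minimal install sketch (assuming Helm is available and using the upstream chart repository and default release name; adjust to your own setup) would be:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace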
Generate
Now we can use clusterctl to generate a cluster manifest:
clusterctl generate cluster -i microvm:$CAPMVM_VERSION -f cilium $CLUSTER_NAME > cluster.yaml
We need to edit the file to add the addresses of the flintlockd servers. These will have been printed in the outputs under microvm_host_ips after the Terraform apply completed.
These are configured on the MicrovmCluster spec at spec.placement.staticPool.hosts. Add one entry under hosts for each device created as a MicroVM host, as in the sketch below. While you are there, you can also add some sshPublicKeys if you like.
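As a rough sketch of the edit (the host addresses and port below are placeholders: substitute your own microvm_host_ips values, and check your generated cluster.yaml for the exact API version and field layout), the MicrovmCluster section ends up looking something like:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: MicrovmCluster
metadata:
  name: lm-demo
spec:
  placement:
    staticPool:
      hosts:
      # one entry per MicroVM host; 9090 is assumed here as the flintlockd gRPC port
      - endpoint: "192.168.10.3:9090"
      - endpoint: "192.168.10.4:9090"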
Once you have made those changes, save and close the file.
Apply
Once you are happy with the manifest, use kubectl to apply it to your management cluster:
kubectl apply -f cluster.yaml
Use
After a moment, you can fetch the MicroVM workload cluster's kubeconfig from your management cluster. This kubeconfig is written to a secret by CAPI:
kubectl get secret $CLUSTER_NAME-kubeconfig -o json | jq -r .data.value | base64 -d > config.yaml
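Alternatively (assuming a reasonably recent clusterctl), the same kubeconfig can be retrieved with:
clusterctl get kubeconfig $CLUSTER_NAME > config.yaml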
With that kubeconfig we can target the Liquid Metal cluster with kubectl:
kubectl --kubeconfig config.yaml get nodes
This may not return anything for a few moments; we need to wait for the MicroVMs to start and for the cluster control plane to be bootstrapped.
Prepend the command with watch and eventually (<=5m) you will see the errors stop and the cluster come up.
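For example:
watch kubectl --kubeconfig config.yaml get nodes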
An expected error for the first 2-3 minutes is:
Unable to connect to the server: dial tcp 192.168.10.25:6443: connect: no route to host
If your cluster does not start within 10 mins, consult the troubleshooting pages.
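A useful first step before digging into logs (a sketch; both commands run against the management cluster) is to check the state of the CAPI objects:
clusterctl describe cluster $CLUSTER_NAME
kubectl get machines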
Where are my logs?
Both CAPMVM and CAPI logs can be found by querying the management cluster.
We recommend using k9s to view your management cluster.
To see the CAPMVM controller logs, look for the pod called capmvm-controller-manager-XXXXX in the capmvm-system namespace. In those logs you will be able to see the controller reconcile MicrovmMachine types and connect to the given flintlock host(s) to create MicroVMs.
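If you prefer plain kubectl over k9s, something like the following should stream those logs (this assumes the controller runs as the usual capmvm-controller-manager Deployment with a container named manager):
kubectl logs -n capmvm-system deploy/capmvm-controller-manager -c manager -f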
Various CAPI controllers are also running:
- The logs of capi-controller-manager-XXXX in capi-system will show you the overall orchestration of the workload cluster.
- The logs of capi-kubeadm-control-plane-controller-manager-XXXX in capi-kubeadm-control-plane-system will show the bootstrapping of the first created MicroVM as a control-plane node.
- The logs of capi-kubeadm-bootstrap-controller-manager-XXXX in capi-kubeadm-bootstrap-system will show the bootstrapping of all subsequent MicroVMs as worker nodes.
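Again assuming the standard Deployment names used by CAPI, you can tail these with kubectl directly, for example:
kubectl logs -n capi-system deploy/capi-controller-manager
kubectl logs -n capi-kubeadm-control-plane-system deploy/capi-kubeadm-control-plane-controller-manager
kubectl logs -n capi-kubeadm-bootstrap-system deploy/capi-kubeadm-bootstrap-controller-manager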