# Vagrant & Ansible Kubernetes Cluster
This project automates the setup of a high-availability (HA) Kubernetes cluster on a local machine using Vagrant for VM management and Ansible for provisioning.
The final environment consists of:

- 3 Control Plane Nodes: Providing a resilient control plane.
- 2 Worker Nodes: For deploying applications.
- Networking: All nodes are connected to the host machine via libvirt's default network (`192.168.122.0/24`).
- Provisioning: The cluster is bootstrapped using `kubeadm` and uses Calico for the CNI.
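For reference, here is a rough sketch of how the inventory might lay out this topology. The group names and the `.102`/`.103` addresses are assumptions; only `192.168.122.101` (the first control plane node) and `.111`/`.112` (the workers) appear later in this README, so treat the project's actual `inventory.ini` as authoritative:

```ini
# Illustrative sketch only -- group names and the .102/.103 addresses
# are assumptions; the project's inventory.ini is authoritative.
[control_plane]
k8s-cp-1 ansible_host=192.168.122.101
k8s-cp-2 ansible_host=192.168.122.102
k8s-cp-3 ansible_host=192.168.122.103

[workers]
k8s-worker-1 ansible_host=192.168.122.111
k8s-worker-2 ansible_host=192.168.122.112
```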
## Prerequisites
Before you begin, ensure you have the following software installed on your host machine:
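- [Vagrant](https://www.vagrantup.com/) with the `vagrant-libvirt` plugin
- libvirt and KVM (the nodes attach to libvirt's default network)
- [Ansible](https://www.ansible.com/)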
## Project Structure
Your project directory should look like this:
```
.
├── Vagrantfile     # Defines the virtual machines for Vagrant
├── ansible.cfg     # Configuration for Ansible
├── cluster.yml     # Ansible playbook to deploy Kubernetes
├── inventory.ini   # Ansible inventory defining the cluster nodes
└── README.md       # This file
```
## Setup Instructions
Follow these steps to build and provision the entire cluster from scratch.
### Step 1: Customize Configuration (Optional)
The project is configured to work out of the box for the user `pkhamre`. If your setup is different, you may need to adjust the following files (rough sketches of these settings follow the list):

- `Vagrantfile`:
  - `USERNAME`: Change this if you want to create a different user on the VMs.
  - `PUBLIC_KEY_PATH`: Update this to the path of the SSH public key you want to grant access with.
- `ansible.cfg`:
  - `remote_user`: Ensure this matches `USERNAME` from the `Vagrantfile`.
  - `private_key_file`: Ensure this points to the private key corresponding to the public key specified in the `Vagrantfile`.
- `inventory.ini`:
  - The IP addresses are hardcoded to match the `Vagrantfile`. If you change the IPs there, you must update them here as well.
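For orientation, here is a hedged sketch of what those settings might look like. All values are examples rather than the project's actual defaults; only the setting names `USERNAME`, `PUBLIC_KEY_PATH`, `remote_user`, and `private_key_file` come from the description above:

```ruby
# Vagrantfile (hypothetical excerpt) -- example values only
USERNAME        = "pkhamre"
PUBLIC_KEY_PATH = "~/.ssh/id_ed25519.pub"
```

```ini
# ansible.cfg -- minimal sketch; inventory and host_key_checking are
# common settings assumed here, not confirmed by this README
[defaults]
inventory = inventory.ini
remote_user = pkhamre
private_key_file = ~/.ssh/id_ed25519
host_key_checking = False
```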
### Step 2: Create the Virtual Machines
With the configuration set, use Vagrant to create the five virtual machines defined in the `Vagrantfile`:

```sh
vagrant up
```
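After the command completes, you can confirm that all five machines are running:

```sh
vagrant status
```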
### Step 3: Deploy Kubernetes with Ansible
Once the VMs are running, execute the Ansible playbook. Ansible will connect to each machine and provision a complete Kubernetes cluster:

```sh
ansible-playbook cluster.yml
```
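If the playbook cannot reach the machines, verify SSH connectivity first with Ansible's ad-hoc ping module:

```sh
ansible all -m ping
```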
### Step 4: Verify the Cluster and Deploy an Example Application
After the playbook completes, you can access the cluster and verify its status.
1. SSH into the first control plane node:

   ```sh
   ssh pkhamre@192.168.122.101
   ```

2. Check the node status. The `kubectl` command-line tool is pre-configured for your user, and all three control plane nodes should have the `control-plane` role:

   ```sh
   kubectl get nodes
   ```

   Expected output:

   ```
   NAME           STATUS   ROLES           AGE   VERSION
   k8s-cp-1       Ready    control-plane   10m   v1.33.2
   k8s-cp-2       Ready    control-plane   8m    v1.33.2
   k8s-cp-3       Ready    control-plane   8m    v1.33.2
   k8s-worker-1   Ready    <none>          7m    v1.33.2
   k8s-worker-2   Ready    <none>          7m    v1.33.2
   ```
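You can also check that the system pods, including the Calico components, are running. Depending on how Calico was installed, its pods may live in `kube-system` or in a dedicated namespace such as `calico-system`:

```sh
kubectl get pods -n kube-system
```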
### Deploying a Test Application (Nginx)
Let's deploy a simple Nginx application to confirm that the worker nodes can run workloads and be exposed to the network.
1. Create an Nginx deployment with two replicas. These pods will be scheduled on your worker nodes:

   ```sh
   kubectl create deployment nginx-test --image=nginx --replicas=2
   ```

2. Expose the deployment with a `NodePort` service. This makes the application accessible on a specific port on each of the worker nodes:

   ```sh
   kubectl expose deployment nginx-test --type=NodePort --port=80
   ```

3. Find the assigned port. Kubernetes automatically assigns a high-numbered port (from the default NodePort range, 30000-32767) to the service:

   ```sh
   kubectl get service nginx-test
   ```

   Look for the port mapping in the `PORT(S)` column; it will look like `80:3xxxx/TCP`:

   ```
   NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
   nginx-test   NodePort   10.106.53.188   <none>        80:31234/TCP   25s
   ```

4. Access Nginx in your browser. You can now reach the Nginx welcome page from your host machine using the IP of any worker node and the assigned port (e.g. `31234` from the example above):

   ```
   http://192.168.122.111:31234
   http://192.168.122.112:31234
   ```
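You can also test from the host's command line instead of a browser:

```sh
curl http://192.168.122.111:31234
```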
### Cleaning Up the Example Application
Once you have finished testing, you can remove the Nginx service and deployment.
1. Delete the service:

   ```sh
   kubectl delete service nginx-test
   ```

2. Delete the deployment:

   ```sh
   kubectl delete deployment nginx-test
   ```
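Alternatively, both resources can be removed with a single command:

```sh
kubectl delete service,deployment nginx-test
```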
## Cleanup
To tear down the entire cluster and delete all virtual machines and associated resources, run the following command from the project directory:
```sh
vagrant destroy -f
```