# Vagrant & Ansible Kubernetes Cluster
This project automates the setup of a high-availability (HA) Kubernetes cluster on a local machine using Vagrant for VM management and Ansible for provisioning.
The final environment consists of:
* **3 Control Plane Nodes**: Providing a resilient control plane.
* **2 Worker Nodes**: For deploying applications.
* **Networking**: All nodes are connected to the host machine via libvirt's default network (`192.168.122.0/24`).
* **Provisioning**: The cluster is bootstrapped using `kubeadm` and uses Calico for the CNI.
## Prerequisites
Before you begin, ensure you have the following software installed on your host machine (a version check is shown after the list):
* [Vagrant](https://www.vagrantup.com/downloads)
* A Vagrant provider, such as [vagrant-libvirt](https://github.com/vagrant-libvirt/vagrant-libvirt) (this project's networking assumes the libvirt provider).
* [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) (version 2.10 or newer).
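With these in place, you can confirm the host tooling from a terminal (the plugin check only matters if you use the libvirt provider):
```bash
vagrant --version     # e.g. Vagrant 2.4.x
ansible --version     # the first line reports the core version (needs 2.10+)
vagrant plugin list   # should include vagrant-libvirt when using that provider
```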
## Project Structure
Your project directory should look like this:
```
.
├── Vagrantfile     # Defines the virtual machines for Vagrant
├── ansible.cfg     # Configuration for Ansible
├── cluster.yml     # Ansible playbook to deploy Kubernetes
├── inventory.ini   # Ansible inventory defining the cluster nodes
└── README.md       # This file
```
## Setup Instructions
Follow these steps to build and provision the entire cluster from scratch.
### Step 1: Customize Configuration (Optional)
The project is configured to work out of the box for the user `pkhamre`. If your setup is different, you may need to adjust the following files (a consistency check is sketched after this list):
1. **`Vagrantfile`**:
   * `USERNAME`: Change this if you want to create a different user on the VMs.
   * `PUBLIC_KEY_PATH`: Update this to the path of the SSH public key you want to grant access with.
2. **`ansible.cfg`**:
   * `remote_user`: Ensure this matches the `USERNAME` from the `Vagrantfile`.
   * `private_key_file`: Ensure this points to the SSH private key corresponding to the public key specified in the `Vagrantfile`.
3. **`inventory.ini`**:
   * The IP addresses are hardcoded to match the `Vagrantfile`. If you change the IPs in the `Vagrantfile`, you must update them here as well.
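As a final sanity check, you can confirm that the three files agree with each other before booting anything. This is a minimal sketch; the patterns match the setting names listed above:
```bash
# The user and key settings must be consistent across these files.
grep -E 'USERNAME|PUBLIC_KEY_PATH' Vagrantfile
grep -E 'remote_user|private_key_file' ansible.cfg
# The node IPs in the inventory must match the Vagrantfile.
grep -E '192\.168\.122\.' inventory.ini
```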
### Step 2: Create the Virtual Machines
With the configuration set, use Vagrant to create the five virtual machines defined in the `Vagrantfile`. This command will download the base OS image (if not already cached) and boot the VMs.
```bash
vagrant up
```
This will create the following VMs with static IPs on the `192.168.122.0/24` network (a connectivity check follows the list):
* `k8s-cp-1` (192.168.122.101)
* `k8s-cp-2` (192.168.122.102)
* `k8s-cp-3` (192.168.122.103)
* `k8s-worker-1` (192.168.122.111)
* `k8s-worker-2` (192.168.122.112)
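Before provisioning, it is worth verifying that the VMs are up and that Ansible can reach them over SSH. The `ping` module only checks connectivity and a usable Python interpreter; if your `ansible.cfg` does not already point at the inventory, add `-i inventory.ini`:
```bash
vagrant status        # every machine should report "running (libvirt)"
ansible all -m ping   # every node should answer with "pong"
```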
### Step 3: Deploy Kubernetes with Ansible
Once the VMs are running, execute the Ansible playbook. Ansible will connect to each machine, install `containerd` and Kubernetes components, and bootstrap the cluster using `kubeadm`.
```bash
ansible-playbook cluster.yml
```
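If you want to validate the playbook before it touches the VMs, Ansible can check its syntax and preview the tasks without executing anything:
```bash
ansible-playbook cluster.yml --syntax-check
ansible-playbook cluster.yml --list-tasks
```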
The playbook will:
1. Install prerequisites on all nodes.
2. Initialize the first control plane node (`k8s-cp-1`).
3. Install the Calico CNI for pod networking.
4. Join the remaining control plane nodes.
5. Join the worker nodes.
### Step 4: Verify the Cluster
After the playbook completes, you can access the cluster and verify its status.
1. SSH into the first control plane node:
   ```bash
   ssh pkhamre@192.168.122.101
   ```
2. Check the status of all nodes. The `kubectl` command-line tool is pre-configured for your user:
   ```bash
   kubectl get nodes -o wide
   ```
You should see all five nodes in the `Ready` state; it may take a minute or two after the playbook finishes for every node to report as ready.
```
NAME           STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
k8s-cp-1       Ready    control-plane   5m12s   v1.30.3   192.168.122.101   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
k8s-cp-2       Ready    control-plane   4m2s    v1.30.3   192.168.122.102   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
k8s-cp-3       Ready    control-plane   3m56s   v1.30.3   192.168.122.103   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
k8s-worker-1   Ready    <none>          2m45s   v1.30.3   192.168.122.111   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
k8s-worker-2   Ready    <none>          2m40s   v1.30.3   192.168.122.112   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
```
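To confirm the CNI came up cleanly, you can also inspect the system pods. Depending on how the playbook installs Calico, its pods appear in `kube-system` (manifest install) or `calico-system` (operator install):
```bash
# All pods should reach Running/Completed within a few minutes.
kubectl get pods -A -o wide
```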
Congratulations! Your Kubernetes cluster is now ready.
## Cleanup
To tear down the cluster and delete all virtual machines and associated resources, run the following command from the project directory:
```bash
vagrant destroy -f
```
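If you also want to reclaim the disk space used by the cached base image, remove the box as well. `<box-name>` is a placeholder; substitute the name your `Vagrantfile` references:
```bash
vagrant box list                # find the exact box name
vagrant box remove <box-name>   # placeholder: use the name from the list above
```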