# Vagrant & Ansible Kubernetes Cluster

This project automates the setup of a high-availability (HA) Kubernetes cluster on a local machine, using Vagrant for VM management and Ansible for provisioning.

The final environment consists of:

* **3 Control Plane Nodes**: Providing a resilient control plane.
* **2 Worker Nodes**: For deploying applications.
* **Networking**: All nodes are connected to the host machine via libvirt's default network (`192.168.122.0/24`).
* **Provisioning**: The cluster is bootstrapped with `kubeadm` and uses Calico as the CNI.

## Prerequisites

Before you begin, ensure you have the following software installed on your host machine:

* [Vagrant](https://www.vagrantup.com/downloads)
* A Vagrant provider, such as [libvirt](https://github.com/vagrant-libvirt/vagrant-libvirt)
* [Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) (version 2.10 or newer)
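
You can quickly confirm that the tools are available before proceeding (the plugin check assumes you installed libvirt support as a Vagrant plugin):

```bash
vagrant --version
ansible --version
vagrant plugin list   # should include vagrant-libvirt
```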

## Project Structure

Your project directory should look like this:

```
.
├── Vagrantfile     # Defines the virtual machines for Vagrant
├── ansible.cfg     # Configuration for Ansible
├── cluster.yml     # Ansible playbook to deploy Kubernetes
├── inventory.ini   # Ansible inventory defining the cluster nodes
└── README.md       # This file
```

## Setup Instructions

Follow these steps to build and provision the entire cluster from scratch.

### Step 1: Customize Configuration (Optional)

The project is configured to work out of the box for the user `pkhamre`. If your setup differs, adjust the following files:

1. **`Vagrantfile`**:
   * `USERNAME`: Change this if you want to create a different user on the VMs.
   * `PUBLIC_KEY_PATH`: Update this to the path of the SSH public key you want to grant access with (a matching key pair can be generated as shown below).
2. **`ansible.cfg`**:
   * `remote_user`: Ensure this matches the `USERNAME` from the `Vagrantfile`.
   * `private_key_file`: Ensure this points to the private key corresponding to the public key specified in the `Vagrantfile`.
3. **`inventory.ini`**:
   * The IP addresses are hardcoded to match the `Vagrantfile`. If you change the IPs in the `Vagrantfile`, you must update them here as well.
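
If you would rather use a dedicated key pair for the cluster than reuse an existing one, it can be generated like this (the path `~/.ssh/k8s-cluster` is just an example; point `PUBLIC_KEY_PATH` at the `.pub` file and `private_key_file` at the private key):

```bash
# Generates ~/.ssh/k8s-cluster (private) and ~/.ssh/k8s-cluster.pub (public)
ssh-keygen -t ed25519 -f ~/.ssh/k8s-cluster -N "" -C "k8s-cluster"
```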

### Step 2: Create the Virtual Machines

With the configuration set, use Vagrant to create the five virtual machines defined in the `Vagrantfile`.

```bash
vagrant up
```
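
Once the command finishes, you can confirm that all five machines are up (with the libvirt provider, each should be reported as `running`):

```bash
vagrant status
```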

### Step 3: Deploy Kubernetes with Ansible

Once the VMs are running, execute the Ansible playbook. Ansible will connect to each machine and provision a complete Kubernetes cluster.

```bash
ansible-playbook cluster.yml
```
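
If the playbook fails to reach a node, Ansible's built-in `ping` module is a quick way to isolate SSH or inventory problems before re-running it (this uses `inventory.ini` and the credentials from `ansible.cfg`):

```bash
ansible all -m ping
```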

### Step 4: Verify Cluster and Deploy an Example Application

After the playbook completes, you can access the cluster and verify its status.

1. **SSH into the first control plane node**:

   ```bash
   ssh pkhamre@192.168.122.101
   ```

2. **Check the node status**: The `kubectl` command-line tool is pre-configured for your user. All three control plane nodes should have the `control-plane` role.

   ```bash
   kubectl get nodes
   ```

   *Expected output:*

   ```
   NAME           STATUS   ROLES           AGE   VERSION
   k8s-cp-1       Ready    control-plane   10m   v1.33.2
   k8s-cp-2       Ready    control-plane   8m    v1.33.2
   k8s-cp-3       Ready    control-plane   8m    v1.33.2
   k8s-worker-1   Ready    <none>          7m    v1.33.2
   k8s-worker-2   Ready    <none>          7m    v1.33.2
   ```
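
Beyond node status, it is worth confirming that the system pods are healthy. Exact names and namespaces depend on the Kubernetes and Calico versions the playbook installs (Calico pods may live in `kube-system` or a dedicated `calico-system` namespace), but all `calico-*` pods should reach `Running`:

```bash
kubectl get pods -A
```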

### Deploying a Test Application (Nginx)

Let's deploy a simple Nginx application to confirm that the worker nodes can run workloads and be exposed to the network.

1. **Create an Nginx deployment** with two replicas. These pods will be scheduled on your worker nodes (a way to confirm this is sketched after the list).

   ```bash
   kubectl create deployment nginx-test --image=nginx --replicas=2
   ```

2. **Expose the deployment** with a `NodePort` service. This makes the application accessible on a specific port on each of the worker nodes.

   ```bash
   kubectl expose deployment nginx-test --type=NodePort --port=80
   ```

3. **Find the assigned port**. Kubernetes automatically assigns a high-numbered port to the NodePort service.

   ```bash
   kubectl get service nginx-test
   ```

   *Look for the port mapping in the `PORT(S)` column. It will look like `80:3xxxx/TCP`.*

   ```
   NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
   nginx-test   NodePort   10.106.53.188   <none>        80:31234/TCP   25s
   ```

4. **Access Nginx in your browser**. You can now reach the Nginx welcome page from your host machine's browser using the IP of **any worker node** and the assigned port (e.g., `31234` from the example above):

   * `http://192.168.122.111:31234`
   * `http://192.168.122.112:31234`
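
As mentioned in step 1, here is a minimal sketch for verifying the deployment end-to-end from the control plane node. It assumes the default `app=nginx-test` label that `kubectl create deployment` applies, and reads the assigned NodePort from the service rather than hardcoding the `3xxxx` value:

```bash
# Confirm the pods landed on the worker nodes
kubectl get pods -o wide -l app=nginx-test

# Read the assigned NodePort and fetch the welcome page from a worker
NODE_PORT=$(kubectl get service nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://192.168.122.111:${NODE_PORT}"
```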

### Cleaning Up the Example Application

Once you have finished testing, you can remove the Nginx service and deployment.

1. **Delete the service**:

   ```bash
   kubectl delete service nginx-test
   ```

2. **Delete the deployment**:

   ```bash
   kubectl delete deployment nginx-test
   ```
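
Alternatively, both objects can be removed with a single command:

```bash
kubectl delete service,deployment nginx-test
```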

## Cleanup

To tear down the entire cluster and delete all virtual machines and associated resources, run the following command from the project directory:

```bash
vagrant destroy -f
```
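
If you use the libvirt provider, you can double-check that no domains were left behind (`virsh` ships with libvirt; the domain names are derived from your project directory):

```bash
virsh list --all
```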