Added kubeconfig to gitignore and improved documentation.
.gitignore (vendored): 4 additions
@@ -59,3 +59,7 @@ __pycache__/
*.pem
id_rsa
id_ed25519
+
+# --- Kubeconfig ---
+# Never commit secrets
+.kubeconfig
README.md: 85 changes
@@ -50,64 +50,89 @@ The project is configured to work out-of-the-box for user `pkhamre`. If your set

### Step 2: Create the Virtual Machines

-With the configuration set, use Vagrant to create the five virtual machines defined in the `Vagrantfile`. This command will download the base OS image (if not already cached) and boot the VMs.
+With the configuration set, use Vagrant to create the five virtual machines defined in the `Vagrantfile`.

```bash
vagrant up
```

This will create the following VMs with static IPs on the `192.168.122.0/24` network:
* `k8s-cp-1` (192.168.122.101)
* `k8s-cp-2` (192.168.122.102)
* `k8s-cp-3` (192.168.122.103)
* `k8s-worker-1` (192.168.122.111)
* `k8s-worker-2` (192.168.122.112)

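Before running Ansible it can be worth confirming that all five machines actually booted. Both commands below are stock Vagrant and host tooling, assuming nothing beyond the names and IPs listed above:

```bash
# Each of the five VMs should report 'running'
vagrant status

# Spot-check one control plane node and one worker over the private network
ping -c 1 192.168.122.101
ping -c 1 192.168.122.111
```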
### Step 3: Deploy Kubernetes with Ansible

-Once the VMs are running, execute the Ansible playbook. Ansible will connect to each machine, install `containerd` and Kubernetes components, and bootstrap the cluster using `kubeadm`.
+Once the VMs are running, execute the Ansible playbook. Ansible will connect to each machine and provision a complete Kubernetes cluster.

```bash
ansible-playbook cluster.yml
```

The playbook will:
1. Install prerequisites on all nodes.
2. Initialize the first control plane node (`k8s-cp-1`).
3. Install the Calico CNI for pod networking.
4. Join the remaining control plane nodes.
5. Join the worker nodes.

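Two inexpensive pre-flight checks can catch problems before a long playbook run. This assumes the project's Ansible inventory resolves all five hosts (the inventory file itself is not part of this diff):

```bash
# Parse the playbook without executing anything
ansible-playbook cluster.yml --syntax-check

# Verify Ansible can reach and authenticate to every node
ansible all -m ping
```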
-### Step 4: Verify the Cluster
+### Step 4: Verify Cluster and Deploy an Example Application

After the playbook completes, you can access the cluster and verify its status.

-1. SSH into the first control plane node:
+1. **SSH into the first control plane node**:
   ```bash
   ssh pkhamre@192.168.122.101
   ```

-2. Check the status of all nodes. The `kubectl` command-line tool is pre-configured for your user.
+2. **Check the node status**: The `kubectl` command-line tool is pre-configured for your user. All three control plane nodes should have the `control-plane` role.
   ```bash
-   kubectl get nodes -o wide
+   kubectl get nodes
   ```
+   *Expected Output:*
+   ```
+   NAME           STATUS   ROLES           AGE   VERSION
+   k8s-cp-1       Ready    control-plane   10m   v1.33.2
+   k8s-cp-2       Ready    control-plane   8m    v1.33.2
+   k8s-cp-3       Ready    control-plane   8m    v1.33.2
+   k8s-worker-1   Ready    <none>          7m    v1.33.2
+   k8s-worker-2   Ready    <none>          7m    v1.33.2
+   ```

-```
-NAME           STATUS   ROLES           AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
-k8s-cp-1       Ready    control-plane   5m12s   v1.30.3   192.168.122.101   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
-k8s-cp-2       Ready    control-plane   4m2s    v1.30.3   192.168.122.102   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
-k8s-cp-3       Ready    control-plane   3m56s   v1.30.3   192.168.122.103   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
-k8s-worker-1   Ready    <none>          2m45s   v1.30.3   192.168.122.111   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
-k8s-worker-2   Ready    <none>          2m40s   v1.30.3   192.168.122.112   <none>        Ubuntu 24.04 LTS   6.8.0-31-generic   containerd://1.7.13
-```

You should see all 5 nodes in the `Ready` state. It may take a minute for all nodes to report as ready after the playbook finishes.

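Rather than re-running `kubectl get nodes` by hand while waiting, the check can be scripted; `kubectl wait` is standard kubectl and needs no project-specific setup:

```bash
# Block (up to five minutes) until every node reports Ready
kubectl wait --for=condition=Ready node --all --timeout=300s
```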
+### Deploying a Test Application (Nginx)
+
+Let's deploy a simple Nginx application to confirm that the worker nodes can run workloads and be exposed to the network.
+
+1. **Create an Nginx deployment** with two replicas. These pods will be scheduled on your worker nodes.
+   ```bash
+   kubectl create deployment nginx-test --image=nginx --replicas=2
+   ```
+
+2. **Expose the deployment** with a `NodePort` service. This makes the application accessible on a specific port on each of the worker nodes.
+   ```bash
+   kubectl expose deployment nginx-test --type=NodePort --port=80
+   ```
+
+3. **Find the assigned port**. Kubernetes automatically assigns a high-numbered port for the NodePort service.
+   ```bash
+   kubectl get service nginx-test
+   ```
+   *Look for the port mapping in the `PORT(S)` column. It will look like `80:3xxxx/TCP`.*
+   ```
+   NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
+   nginx-test   NodePort   10.106.53.188   <none>        80:31234/TCP   25s
+   ```
+
+4. **Access Nginx in your browser**. You can now access the Nginx welcome page from your host machine's browser using the IP of **any worker node** and the assigned port (e.g., `31234` from the example above).
+
+   * `http://192.168.122.111:31234`
+   * `http://192.168.122.112:31234`
+
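The four steps above can also be checked non-interactively from `k8s-cp-1`. A short sketch relying only on standard `kubectl` behavior (`kubectl create deployment` labels its pods `app=nginx-test`; the port lookup mirrors step 3):

```bash
# Wait for both replicas to become available
kubectl rollout status deployment/nginx-test

# Confirm the pods were scheduled on the worker nodes
kubectl get pods -l app=nginx-test -o wide

# Extract the assigned NodePort and fetch the welcome page
NODE_PORT=$(kubectl get service nginx-test -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://192.168.122.111:${NODE_PORT}" | grep -i "welcome to nginx"
```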
+### Cleaning Up the Example Application
+
+Once you have finished testing, you can remove the Nginx service and deployment.
+
+1. **Delete the service**:
+   ```bash
+   kubectl delete service nginx-test
+   ```
+
+2. **Delete the deployment**:
+   ```bash
+   kubectl delete deployment nginx-test
+   ```

-Congratulations! Your Kubernetes cluster is now ready.

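Equivalently, both objects can be removed in a single call; this is plain `kubectl` syntax, nothing project-specific:

```bash
kubectl delete service,deployment nginx-test
```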
## Cleanup

-To tear down the cluster and delete all virtual machines and associated resources, run the following command from the project directory:
+To tear down the entire cluster and delete all virtual machines and associated resources, run the following command from the project directory:

```bash
vagrant destroy -f
```
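Since `vagrant destroy -f` removes the VMs entirely, a full rebuild is just the commands from this README chained together, shown here only as a convenience:

```bash
vagrant destroy -f && vagrant up && ansible-playbook cluster.yml
```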