Using k3s for your VPS
Kubernetes has become the de facto standard for container orchestration, but its complexity and resource demands can be overwhelming for small projects or limited-resource environments. Enter k3s, a lightweight Kubernetes distribution that is perfect for running on a Virtual Private Server (VPS). In this tutorial, we'll walk you through the steps to set up k3s on your VPS, enabling you to deploy and manage containerized applications with ease.
Prerequisites
Before we start, ensure you have the following:
- A VPS with at least 1GB RAM and 1 CPU (though more is recommended for production workloads).
- A Linux-based operating system (e.g., Ubuntu 20.04 or later).
- SSH access to your VPS.
- Basic knowledge of Linux command line operations.
Step 1: Update Your VPS
First, you need to ensure your VPS is up-to-date. Connect to your VPS via SSH and run the following commands:
sudo apt update
sudo apt upgrade -y
These commands refresh the package lists and upgrade any outdated packages.
Step 2: Install k3s
The easiest way to install k3s is by using the official installation script provided by Rancher Labs:
curl -sfL https://get.k3s.io | sh -
This script will download and install k3s along with all necessary dependencies. By default, k3s will run as a systemd service on your VPS.
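The install script also honors a few environment variables, which is useful when you want reproducible setups. As a sketch, `INSTALL_K3S_VERSION` pins a specific release (the version string below is only an example; check the k3s releases page for current versions):

```shell
# Pin the k3s version instead of taking whatever is latest.
# v1.28.5+k3s1 is an example value, not a recommendation.
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.28.5+k3s1" sh -
```

Pinning a version makes it easier to rebuild the same server later or add agent nodes running an identical release.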
Step 3: Verify the Installation
Once k3s is installed, verify that the service is running correctly:
sudo systemctl status k3s
You should see a message indicating that k3s is active and running. Next, verify the Kubernetes cluster status:
sudo k3s kubectl get nodes
This command should list your VPS as a node in the cluster, with its status showing as "Ready."
Step 4: Configure kubectl Access
To manage your k3s cluster from your local machine, you'll need to copy the kubeconfig file from your VPS. Run the following command on your VPS to display its contents:
sudo cat /etc/rancher/k3s/k3s.yaml
Copy the contents of this file to your local machine and save it as k3s-config.yaml. You can then set the KUBECONFIG environment variable on your local machine to use this configuration:
export KUBECONFIG=/path/to/k3s-config.yaml
With this setup, you can use kubectl from your local machine to interact with your k3s cluster.
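One catch: the generated k3s.yaml points its server address at 127.0.0.1, which only works on the VPS itself. Before using the file locally, rewrite that address to your VPS's public IP. A minimal sketch, using 203.0.113.10 as a placeholder IP:

```shell
# k3s writes "server: https://127.0.0.1:6443" into its kubeconfig;
# point it at the VPS instead (203.0.113.10 is a placeholder - use your IP).
sed -i.bak 's/127\.0\.0\.1/203.0.113.10/' k3s-config.yaml
```

The `.bak` suffix keeps a backup of the original file in case you need to revert. Also make sure port 6443 is reachable through your VPS firewall.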
Step 5: Deploy a Sample Application
Let's deploy a simple Nginx server to verify that your k3s cluster is functioning correctly. Create a new file named nginx-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Apply this deployment to your cluster:
kubectl apply -f nginx-deployment.yaml
Then, expose the Nginx deployment using a service:
kubectl expose deployment nginx-deployment --type=NodePort --port=80
Find the NodePort assigned to the Nginx service:
kubectl get services
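If you'd rather script this than read the table by eye, kubectl's jsonpath output can pull the assigned port directly. A sketch, assuming the service is named nginx-deployment as created by the expose command above:

```shell
# Extract the NodePort assigned to the nginx-deployment service.
NODE_PORT=$(kubectl get service nginx-deployment \
  -o jsonpath='{.spec.ports[0].nodePort}')
echo "Nginx should be reachable on port ${NODE_PORT}"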
Access your Nginx server by navigating to http://<your-vps-ip>:<node-port> in a web browser.
Conclusion
Congratulations! You've successfully set up k3s on your VPS and deployed a sample application. This lightweight Kubernetes distribution is perfect for small-scale projects and development environments. With k3s, you have the power of Kubernetes at your fingertips without the overhead, enabling you to focus on building and deploying your applications efficiently.
Feel free to explore further by deploying more complex applications and integrating additional Kubernetes tools. Happy containerizing!