K3s on OpenStack VMs¶
Introduction¶
K3s is a lightweight Kubernetes distribution, well suited to edge environments, IoT, and simple deployments. This approach deploys K3s manually on OpenStack VMs with Terraform and cloud-init, giving more control than Magnum.
Prerequisites¶
- Terraform with the OpenStack provider (provider configuration sketched below)
- An existing OpenStack network
- An Ubuntu 22.04 image or similar
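How the provider authenticates is up to you; a minimal sketch, assuming a clouds.yaml entry named "my-openstack" (the name is an assumption to adapt). Exporting OS_CLOUD instead of setting cloud should work as well.
# providers.tf (minimal sketch)
provider "openstack" {
  cloud = "my-openstack" # hypothetical entry in ~/.config/openstack/clouds.yaml
}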
Key concepts¶
K3s architecture¶
graph TB
subgraph k3s[K3s Cluster]
server[K3s Server<br/>VM<br/>API Server + Controller<br/>+ Scheduler + etcd<br/>Single binary]
agent1[K3s Agent 1<br/>VM<br/>kubelet + kube-proxy<br/>containerd]
agent2[K3s Agent 2<br/>VM<br/>kubelet + kube-proxy<br/>containerd]
agent3[K3s Agent 3<br/>VM<br/>kubelet + kube-proxy<br/>containerd]
end
lb[Load Balancer<br/>Octavia<br/>API + Ingress]
ceph[Ceph Storage<br/>via Longhorn ou CSI]
lb -->|6443 API| server
lb -->|80/443 Ingress| agent1
agent1 -->|Join K3S_TOKEN| server
agent2 -->|Join| server
agent3 -->|Join| server
agent1 -->|Storage| ceph
K3s vs standard Kubernetes¶
| Aspect | K3s | Standard Kubernetes |
|---|---|---|
| Binary | Single binary (~50 MB) | Multiple binaries |
| etcd | Embedded SQLite/etcd | Separate etcd cluster |
| Minimum RAM | 512 MB | 2 GB+ |
| Ingress / storage | Traefik + local-path-provisioner included | To be configured |
| Installation | 1 command | kubeadm/kubespray |
| HA | Simple (embedded etcd) | Complex |
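The "1 command" in the table is the upstream install script, which is exactly what the cloud-init templates further down run:
# Manual single-node install, for reference
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes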
Deployment with Terraform¶
# k3s/main.tf
terraform {
required_providers {
openstack = {
source = "terraform-provider-openstack/openstack"
version = "~> 1.54"
}
}
}
variable "cluster_name" {
default = "k3s-cluster"
}
variable "server_count" {
default = 1 # 1 or 3 for HA
}
variable "agent_count" {
default = 3
}
variable "server_flavor" {
default = "m1.medium"
}
variable "agent_flavor" {
default = "m1.small"
}
variable "image_name" {
default = "ubuntu-22.04"
}
variable "network_id" {
description = "ID of the existing network"
}
variable "external_network" {
default = "external"
}
# Generate the K3s token
resource "random_password" "k3s_token" {
length = 64
special = false
}
# Security group
resource "openstack_networking_secgroup_v2" "k3s" {
name = "${var.cluster_name}-sg"
description = "Security group for K3s cluster"
}
resource "openstack_networking_secgroup_rule_v2" "ssh" {
security_group_id = openstack_networking_secgroup_v2.k3s.id
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 22
port_range_max = 22
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "k3s_api" {
security_group_id = openstack_networking_secgroup_v2.k3s.id
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 6443
port_range_max = 6443
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "http" {
security_group_id = openstack_networking_secgroup_v2.k3s.id
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 80
port_range_max = 80
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "https" {
security_group_id = openstack_networking_secgroup_v2.k3s.id
direction = "ingress"
ethertype = "IPv4"
protocol = "tcp"
port_range_min = 443
port_range_max = 443
remote_ip_prefix = "0.0.0.0/0"
}
resource "openstack_networking_secgroup_rule_v2" "k3s_internal" {
security_group_id = openstack_networking_secgroup_v2.k3s.id
direction = "ingress"
ethertype = "IPv4"
remote_group_id = openstack_networking_secgroup_v2.k3s.id
}
# Keypair
resource "openstack_compute_keypair_v2" "k3s" {
name = "${var.cluster_name}-keypair"
public_key = file("~/.ssh/id_rsa.pub")
}
# K3s server (primary, runs cluster-init)
resource "openstack_compute_instance_v2" "server" {
  name            = "${var.cluster_name}-server-0"
  flavor_name     = var.server_flavor
  image_name      = var.image_name
  key_pair        = openstack_compute_keypair_v2.k3s.name
  security_groups = [openstack_networking_secgroup_v2.k3s.name]
  network {
    uuid = var.network_id
  }
  user_data = templatefile("${path.module}/cloud-init-server.yaml", {
    k3s_token    = random_password.k3s_token.result
    cluster_init = true
    server_ip    = ""
    floating_ip  = openstack_networking_floatingip_v2.server.address
    node_name    = "${var.cluster_name}-server-0"
  })
}
# Additional servers for HA (separate resource: Terraform does not allow a resource block to reference itself)
resource "openstack_compute_instance_v2" "server_ha" {
  count           = var.server_count - 1
  name            = "${var.cluster_name}-server-${count.index + 1}"
  flavor_name     = var.server_flavor
  image_name      = var.image_name
  key_pair        = openstack_compute_keypair_v2.k3s.name
  security_groups = [openstack_networking_secgroup_v2.k3s.name]
  network {
    uuid = var.network_id
  }
  user_data = templatefile("${path.module}/cloud-init-server.yaml", {
    k3s_token    = random_password.k3s_token.result
    cluster_init = false
    server_ip    = openstack_compute_instance_v2.server.access_ip_v4
    floating_ip  = openstack_networking_floatingip_v2.server.address
    node_name    = "${var.cluster_name}-server-${count.index + 1}"
  })
}
# K3s Agents
resource "openstack_compute_instance_v2" "agent" {
count = var.agent_count
name = "${var.cluster_name}-agent-${count.index}"
flavor_name = var.agent_flavor
image_name = var.image_name
key_pair = openstack_compute_keypair_v2.k3s.name
security_groups = [openstack_networking_secgroup_v2.k3s.name]
depends_on = [openstack_compute_instance_v2.server]
network {
uuid = var.network_id
}
user_data = templatefile("${path.module}/cloud-init-agent.yaml", {
k3s_token = random_password.k3s_token.result
server_ip = openstack_compute_instance_v2.server.access_ip_v4
node_name = "${var.cluster_name}-agent-${count.index}"
})
}
# Floating IP for the server
resource "openstack_networking_floatingip_v2" "server" {
pool = var.external_network
}
resource "openstack_compute_floatingip_associate_v2" "server" {
floating_ip = openstack_networking_floatingip_v2.server.address
instance_id = openstack_compute_instance_v2.server.id
}
# Outputs
output "server_ip" {
value = openstack_networking_floatingip_v2.server.address
}
output "k3s_token" {
value = random_password.k3s_token.result
sensitive = true
}
output "kubeconfig_command" {
value = "ssh ubuntu@${openstack_networking_floatingip_v2.server.address} 'sudo cat /etc/rancher/k3s/k3s.yaml' | sed 's/127.0.0.1/${openstack_networking_floatingip_v2.server.address}/g' > ~/.kube/k3s-config"
}
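The plan above only gives the agents their root disk. If you intend to install Longhorn later (see below), attaching a dedicated Cinder volume per agent is a common choice; a sketch, where the 50 GB size is an arbitrary assumption:
# Optional: one data volume per agent for Longhorn
resource "openstack_blockstorage_volume_v3" "agent_data" {
  count = var.agent_count
  name  = "${var.cluster_name}-agent-${count.index}-data"
  size  = 50 # GB, adjust to your workloads
}
resource "openstack_compute_volume_attach_v2" "agent_data" {
  count       = var.agent_count
  instance_id = openstack_compute_instance_v2.agent[count.index].id
  volume_id   = openstack_blockstorage_volume_v3.agent_data[count.index].id
}
The volume still has to be formatted and mounted (for example on /var/lib/longhorn, Longhorn's default data path) from cloud-init or by hand.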
Cloud-init Server¶
# cloud-init-server.yaml
#cloud-config
package_update: true
package_upgrade: true
packages:
- curl
write_files:
- path: /etc/rancher/k3s/config.yaml
content: |
token: ${k3s_token}
tls-san:
- ${node_name}
- ${floating_ip} # the floating IP must be in the cert SANs for the exported kubeconfig to work
disable:
- traefik # Optional: disables the bundled Traefik (remove these two lines to keep the default Ingress)
# cluster-init: true # for HA with embedded etcd; passed as a flag by runcmd below
runcmd:
- |
%{ if cluster_init }
curl -sfL https://get.k3s.io | sh -s - server \
--config /etc/rancher/k3s/config.yaml \
--cluster-init
%{ else }
curl -sfL https://get.k3s.io | sh -s - server \
--config /etc/rancher/k3s/config.yaml \
--server https://${server_ip}:6443
%{ endif }
- until kubectl get nodes; do sleep 5; done
- echo "K3s server ready"
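Once cloud-init has finished, the server can be checked over SSH using the standard K3s paths:
# On the server VM
sudo systemctl status k3s                        # service installed by get.k3s.io
sudo k3s kubectl get nodes                       # the server node should become Ready
sudo cat /var/lib/rancher/k3s/server/node-token  # full join token derived from the K3S_TOKEN above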
Cloud-init Agent¶
# cloud-init-agent.yaml
#cloud-config
package_update: true
packages:
- curl
runcmd:
- |
# Wait until the K3s server API answers
until curl -ks -o /dev/null https://${server_ip}:6443; do
echo "Waiting for K3s server..."
sleep 10
done
# Install the K3s agent
curl -sfL https://get.k3s.io | K3S_URL=https://${server_ip}:6443 K3S_TOKEN=${k3s_token} sh -
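To confirm that the agents joined, run the following from the server (or with the kubeconfig retrieved below); the worker label is purely cosmetic, and the node name shown assumes the default cluster_name:
kubectl get nodes -o wide
# Optional: label the agents as workers for readability
kubectl label node k3s-cluster-agent-0 node-role.kubernetes.io/worker=worker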
HA configuration (3 servers)¶
# For an HA cluster with 3 servers, raise the default (or set server_count = 3 in terraform.tfvars)
variable "server_count" {
default = 3
}
# With the Terraform configuration above, the first server (resource "server") runs --cluster-init
# and the additional servers (resource "server_ha") join it with --server
graph TB
subgraph ha[K3s HA Cluster]
s1[Server 1<br/>cluster-init]
s2[Server 2<br/>join]
s3[Server 3<br/>join]
s1 <-->|etcd sync| s2
s2 <-->|etcd sync| s3
s3 <-->|etcd sync| s1
end
lb[LB<br/>VIP]
a1[Agent 1]
a2[Agent 2]
a3[Agent 3]
lb --> s1
lb --> s2
lb --> s3
a1 -->|join| lb
a2 -->|join| lb
a3 -->|join| lb
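The diagram above puts a load balancer in front of the three servers. One way to build it with Octavia, as a sketch (var.subnet_id and the resource names are assumptions; agents and kubectl would then target the VIP instead of a single server IP, and the VIP should be added to tls-san):
# Octavia load balancer for the K3s API (sketch)
resource "openstack_lb_loadbalancer_v2" "k3s_api" {
  name          = "${var.cluster_name}-api-lb"
  vip_subnet_id = var.subnet_id
}
resource "openstack_lb_listener_v2" "k3s_api" {
  protocol        = "TCP"
  protocol_port   = 6443
  loadbalancer_id = openstack_lb_loadbalancer_v2.k3s_api.id
}
resource "openstack_lb_pool_v2" "k3s_api" {
  protocol    = "TCP"
  lb_method   = "ROUND_ROBIN"
  listener_id = openstack_lb_listener_v2.k3s_api.id
}
resource "openstack_lb_member_v2" "k3s_api_primary" {
  pool_id       = openstack_lb_pool_v2.k3s_api.id
  address       = openstack_compute_instance_v2.server.access_ip_v4
  protocol_port = 6443
  subnet_id     = var.subnet_id
}
resource "openstack_lb_member_v2" "k3s_api_ha" {
  count         = var.server_count - 1
  pool_id       = openstack_lb_pool_v2.k3s_api.id
  address       = openstack_compute_instance_v2.server_ha[count.index].access_ip_v4
  protocol_port = 6443
  subnet_id     = var.subnet_id
}
A floating IP can then be attached to the load balancer's vip_port_id so the API is reachable from outside.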
Retrieving the kubeconfig¶
# After the Terraform deployment
SERVER_IP=$(terraform output -raw server_ip)
# Retrieve the kubeconfig
ssh ubuntu@$SERVER_IP 'sudo cat /etc/rancher/k3s/k3s.yaml' | \
sed "s/127.0.0.1/$SERVER_IP/g" > ~/.kube/k3s-config
export KUBECONFIG=~/.kube/k3s-config
kubectl get nodes
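The file can also be merged into the default kubeconfig instead of juggling KUBECONFIG; note that K3s names its context "default", which may collide with an existing one:
# Merge the K3s kubeconfig into ~/.kube/config (back it up first)
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:~/.kube/k3s-config kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
kubectl config get-contexts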
Practical examples¶
Full deployment¶
# Clone the configurations
cd terraform/k3s
# Configure the variables
cat > terraform.tfvars <<EOF
cluster_name = "k3s-prod"
server_count = 1
agent_count = 3
server_flavor = "m1.medium"
agent_flavor = "m1.small"
network_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
external_network = "external"
EOF
# Deploy
terraform init
terraform apply
# Retrieve the kubeconfig
eval "$(terraform output -raw kubeconfig_command)"
export KUBECONFIG=~/.kube/k3s-config
# Verify
kubectl get nodes
kubectl get pods -A
Installing Longhorn for storage¶
# Longhorn for PersistentVolumes
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.5.3/deploy/longhorn.yaml
# Wait for the deployment
kubectl -n longhorn-system get pods -w
# Make Longhorn the default StorageClass
kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
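Two details worth checking: Longhorn requires open-iscsi on every node, and K3s already marks its bundled local-path StorageClass as default, so the patch above leaves two defaults. A sketch to clean that up and validate with a throwaway PVC (the PVC name is illustrative):
# Unset local-path as default so only Longhorn remains default
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
kubectl get storageclass
# Quick validation with a 1Gi PVC
kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc longhorn-test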
Testing the cluster¶
# Deploy a test application
kubectl create deployment nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# Check
kubectl get pods -o wide
kubectl get svc
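Since Traefik is disabled in the server config above, port 80 on the nodes is free for the ServiceLB (Klipper) bundled with K3s, and the security group already allows it, so the service can be reached through the server's floating IP:
# ServiceLB publishes the LoadBalancer port on every node
SERVER_IP=$(terraform output -raw server_ip)
curl -I http://$SERVER_IP   # expect an HTTP 200 from nginx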
Resources¶
Checkpoint¶
- K3s server deployed and reachable
- Agents joined to the cluster
- kubeconfig retrieved and working
- All nodes in Ready state
- Storage configured (Longhorn or other)
- Test application deployed