# Cinder CSI for Kubernetes

## Introduction

The Cinder CSI (Container Storage Interface) driver lets Kubernetes consume OpenStack Cinder volumes (backed by Ceph in this setup) as PersistentVolumes, providing dynamically provisioned persistent storage for workloads.
## Prerequisites

- A working Kubernetes cluster (Magnum, K3s, or RKE2)
- A reachable OpenStack Cinder API
- OpenStack credentials
## Learning points
### Cinder CSI architecture

```mermaid
graph TB
    subgraph k8s[Kubernetes Cluster]
        controller[CSI Controller<br/>Deployment<br/>cinder-csi-controllerplugin<br/>- Provisioner<br/>- Attacher<br/>- Snapshotter]
        node[CSI Node Plugin<br/>DaemonSet<br/>cinder-csi-nodeplugin<br/>- Node driver registrar<br/>- Volume mount]
        sc[StorageClass<br/>K8s Resource<br/>Defines the Cinder backend]
        pvc[PVC<br/>K8s Resource<br/>Volume request]
        pv[PV<br/>K8s Resource<br/>Provisioned volume]
        pod[Pod<br/>Uses the volume]
    end
    subgraph openstack[OpenStack]
        cinder[Cinder API<br/>Volume service]
        ceph[Ceph Backend<br/>Storage]
    end
    pvc -->|Uses| sc
    controller -->|Create volume API| cinder
    cinder -->|Provision RBD| ceph
    controller -->|Create| pv
    node -->|Mount| pv
    pod -->|Claims| pvc
```
### Installing Cinder CSI

```shell
# Create the namespace
kubectl create namespace cinder-csi

# Create the secret holding the OpenStack credentials
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cloud-config
  namespace: cinder-csi
stringData:
  cloud.conf: |
    [Global]
    auth-url=https://10.0.0.10:5000/v3
    username=admin
    password=<password>
    region=RegionOne
    tenant-name=admin
    domain-name=Default
    [BlockStorage]
    bs-version=v3
EOF

# Apply the CSI manifests
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-controllerplugin.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/master/manifests/cinder-csi-plugin/cinder-csi-nodeplugin.yaml
```
### Configuration via Helm

```shell
# Add the chart repository
helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
helm repo update

# Create values.yaml
cat > cinder-csi-values.yaml <<EOF
secret:
  enabled: true
  create: true
  name: cloud-config
  data:
    cloud.conf: |
      [Global]
      auth-url=https://10.0.0.10:5000/v3
      username=admin
      password=<password>
      region=RegionOne
      tenant-name=admin
      domain-name=Default
      [BlockStorage]
      bs-version=v3
storageClass:
  enabled: true
  delete:
    isDefault: true
    allowVolumeExpansion: true
EOF

# Install the chart
helm install cinder-csi cpo/openstack-cinder-csi \
  -n cinder-csi --create-namespace \
  -f cinder-csi-values.yaml
```
### StorageClass

```yaml
# storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
  # Cinder volume type (optional)
  type: "standard" # or "ssd", depending on your Cinder backends
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```
### StorageClass with Ceph SSD

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-ssd
provisioner: cinder.csi.openstack.org
parameters:
  type: "ssd" # Cinder volume type mapped to a Ceph SSD pool
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard # TRIM support
```
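Since `cinder-ssd` is not the default class, a claim has to select it explicitly via `storageClassName`; a minimal sketch (the claim name `fast-data` is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-data             # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-ssd # select the SSD class instead of the default
  resources:
    requests:
      storage: 20Gi
```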
### Using a PVC

```yaml
# pvc-example.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-csi
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
```

```shell
kubectl apply -f pvc-example.yaml

# Check on the Kubernetes side
kubectl get pvc
kubectl get pv
kubectl describe pvc my-pvc

# Check on the Cinder side
openstack volume list
```
### Provisioning sequence

```mermaid
sequenceDiagram
    actor User
    participant K as kubectl
    participant API as API Server
    participant CSI as CSI Controller
    participant Cinder as Cinder API
    participant Ceph
    participant Node as CSI Node
    participant Pod
    User->>K: kubectl apply pvc.yaml
    K->>API: Create PVC
    API->>CSI: Provision request
    CSI->>Cinder: POST /volumes
    Cinder->>Ceph: Create RBD image
    Ceph-->>Cinder: Image created
    Cinder-->>CSI: Volume ID
    CSI->>API: Create PV
    API->>API: Bind PVC to PV
    User->>K: kubectl apply pod.yaml
    K->>API: Create Pod
    API->>API: Schedule Pod
    API->>Node: Mount volume
    Node->>Cinder: Attach volume
    Cinder-->>Node: Device path
    Node->>Node: Format & Mount
    Node-->>API: Volume mounted
    API->>Pod: Start container
    Pod->>Pod: /data mounted
```
### Volume snapshots

```shell
# Install the snapshot CRDs and controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
```

```yaml
# VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: cinder-snapclass
driver: cinder.csi.openstack.org
deletionPolicy: Delete
---
# VolumeSnapshot
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: cinder-snapclass
  source:
    persistentVolumeClaimName: my-pvc
```
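A snapshot can also seed a new volume: a PVC that references it through the standard `dataSource` field is provisioned from the snapshot content. A minimal sketch (the claim name `restored-pvc` is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc           # illustrative name
spec:
  storageClassName: cinder-csi
  dataSource:
    name: my-snapshot          # the VolumeSnapshot created above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # at least the size of the snapshot's source PVC
```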
### Volume expansion

```shell
# Increase the PVC size (requires allowVolumeExpansion: true on the StorageClass)
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Check
kubectl get pvc my-pvc -o yaml | grep -A5 status
```
## Practical examples

### Verifying the CSI driver

```shell
# CSI pods
kubectl get pods -n cinder-csi

# Controller logs
kubectl logs -n cinder-csi -l app=cinder-csi-controllerplugin -c cinder-csi-plugin

# Node plugin logs
kubectl logs -n cinder-csi -l app=cinder-csi-nodeplugin -c cinder-csi-plugin

# Registered CSI drivers
kubectl get csidrivers

# CSI nodes
kubectl get csinodes
```
### StatefulSet with volumes

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          env:
            - name: POSTGRES_PASSWORD
              value: "secret" # demo only; use a Secret in production
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: cinder-csi
        resources:
          requests:
            storage: 50Gi
```
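The StatefulSet's `serviceName: postgres` expects a matching headless Service, which is not shown above; a minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None   # headless: gives each replica a stable DNS name
  selector:
    app: postgres
  ports:
    - port: 5432
```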
### Troubleshooting

```shell
# Volume not attached
kubectl describe pvc <pvc-name>
kubectl describe pv <pv-name>

# Check on the Cinder side
openstack volume list
openstack volume show <volume-id>

# Detailed logs
kubectl logs -n cinder-csi deployment/cinder-csi-controllerplugin --all-containers
```
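When a volume is stuck in the attach phase, two additional views can help; a sketch (the event grep pattern is just one convenient filter):

```shell
# VolumeAttachment objects track CSI attach/detach operations
kubectl get volumeattachments

# Recent events usually name the failing step (provision, attach, mount)
kubectl get events -A --sort-by=.lastTimestamp | grep -iE 'volume|cinder'
```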
## Resources
## Checkpoint

- Cinder CSI installed (controller + node plugins)
- StorageClass configured and set as default
- A PVC creates a volume in Cinder
- A pod successfully uses the volume
- The volume is visible in OpenStack
- Volume expansion tested
- Snapshots working (optional)