
Commit 5fe5e1b

update readme
1 parent 35b3f46 commit 5fe5e1b

4 files changed: +65 -4 lines changed

README.md

Lines changed: 62 additions & 1 deletion
@@ -89,7 +89,7 @@ disk offerings to Kubernetes storage classes.

 > **Note:** The VolumeSnapshot CRDs (CustomResourceDefinitions) of version 8.3.0 are installed in this deployment. If you use a different version, please ensure compatibility with your Kubernetes cluster and CSI sidecars.

-// TODO: Ask Wei / Rohit - should we have the crds locally or manually install it from:
+// TODO: Should we have the crds locally or manually install it from:

 ```
 kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
 ```
@@ -121,6 +121,67 @@ To build the container images:

```
make container
```

## Volume Snapshots

For volume snapshots to be created, the following configurations need to be applied:

```
kubectl apply -f 00-snapshot-crds.yaml        # Installs the VolumeSnapshotClass, VolumeSnapshotContent and VolumeSnapshot CRDs
kubectl apply -f volume-snapshot-class.yaml   # Defines a VolumeSnapshotClass for the CloudStack CSI driver
```

Once the CRDs are installed, a snapshot can be taken by applying:

```
kubectl apply -f ./examples/k8s/snapshot/snapshot.yaml
```

In order to take a snapshot of a volume, `persistentVolumeClaimName` should be set to the name of the PVC that is bound to the volume whose snapshot is to be taken.
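For reference, a minimal `VolumeSnapshot` manifest could look like the sketch below. The snapshot and PVC names are illustrative assumptions; only the `cloudstack-snapshot` class name comes from this commit's `volume-snapshot-class.yaml`.

```
# Sketch only; names other than the snapshot class are placeholders
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot                       # hypothetical snapshot name
spec:
  volumeSnapshotClassName: cloudstack-snapshot
  source:
    persistentVolumeClaimName: my-pvc     # the PVC bound to the volume to snapshot
```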

You can check the volume snapshots in CloudStack to verify that the snapshot was created successfully. If there was an issue for any reason, it can be investigated by checking the logs of the cloudstack-csi-controller pod's containers: cloudstack-csi-controller, csi-snapshotter and snapshot-controller.

```
kubectl logs -f <cloudstack-csi-controller pod_name> -n kube-system  # defaults to tailing logs of the cloudstack-csi-controller container
kubectl logs -f <cloudstack-csi-controller pod_name> -n kube-system -c csi-snapshotter
kubectl logs -f <cloudstack-csi-controller pod_name> -n kube-system -c snapshot-controller
```

To restore a volume snapshot:

1. Restore a snapshot and use it in a pod:
   * Create a PVC from the snapshot (for example, `./examples/k8s/snapshot/pvc-from-snapshot.yaml`) and apply the configuration:

     ```
     kubectl apply -f ./examples/k8s/snapshot/pvc-from-snapshot.yaml
     ```

   * Create a pod that uses the restored PVC (example pod config: `./examples/k8s/snapshot/restore-pod.yaml`):

     ```
     kubectl apply -f ./examples/k8s/snapshot/restore-pod.yaml
     ```

2. To restore a snapshot when using a deployment, update the deployment to point to the restored PVC:

   ```
   spec:
     volumes:
       - name: app-volume
         persistentVolumeClaim:
           claimName: pvc-from-snapshot
   ```

### What happens when you restore a volume from a snapshot

* The CSI external-provisioner (a container in the cloudstack-csi-controller pod) sees the new PVC and notices that it references a snapshot.
* The CSI driver's `CreateVolume` method is called with a `VolumeContentSource` that contains the snapshot ID.
* The CSI driver creates a new volume from the snapshot (using CloudStack's createVolume API).
* The new volume is now available as a PV (PersistentVolume) and is bound to the new PVC.
* The volume is NOT attached to any node just by restoring from a snapshot; it is attached only when a Pod that uses the new PVC is scheduled onto a node.
* The CSI driver's `ControllerPublishVolume` and `NodePublishVolume` methods are then called to attach and mount the volume to the node where the Pod is running.
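The trigger for this flow is the `dataSource` field on the restored PVC. A minimal sketch, assuming a hypothetical snapshot name, storage class, and size (the PVC name and access mode match this commit's `pvc-from-snapshot.yaml`; the rest is illustrative):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-from-snapshot
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloudstack-custom   # assumption; use your cluster's storage class
  resources:
    requests:
      storage: 8Gi                      # assumption; should match the snapshot's source volume size
  dataSource:                           # this reference makes CreateVolume receive a VolumeContentSource
    name: my-snapshot                   # hypothetical VolumeSnapshot name
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```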

Hence, to debug any issues during the restore of a snapshot, check the logs of the cloudstack-csi-controller and external-provisioner containers:

```
kubectl logs -f <cloudstack-csi-controller pod_name> -n kube-system  # defaults to tailing logs of the cloudstack-csi-controller container
kubectl logs -f <cloudstack-csi-controller pod_name> -n kube-system -c external-provisioner
```
## See also

- [CloudStack Kubernetes Provider](https://github.com/apache/cloudstack-kubernetes-provider) - Kubernetes Cloud Controller Manager for Apache CloudStack

deploy/k8s/volume-snapshot-class.yaml

Lines changed: 1 addition & 1 deletion
@@ -3,4 +3,4 @@ kind: VolumeSnapshotClass
 metadata:
   name: cloudstack-snapshot
 driver: csi.cloudstack.apache.org
-deletionPolicy: Delete
+deletionPolicy: Delete # Deleting the VolumeSnapshot object in Kubernetes will delete the snapshot in CloudStack; use the Retain policy if the snapshot should not be deleted from CloudStack

examples/k8s/snapshot/pvc-from-snapshot.yaml

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
-  name: snapshot-pvc-1
+  name: pvc-from-snapshot
 spec:
   accessModes:
     - ReadWriteOnce

examples/k8s/snapshot/restore-pod.yaml

Lines changed: 1 addition & 1 deletion
@@ -13,4 +13,4 @@ spec:
   volumes:
     - name: data
       persistentVolumeClaim:
-        claimName: snapshot-pvc-1
+        claimName: pvc-from-snapshot
