
Commit 61d15b2

Merge pull request #139 from apecloud/support/add-pg-with-etcd-example
chore: add pg with etcd example and update redis restore example
2 parents ccb9e74 + 69457c3 commit 61d15b2


3 files changed (+127 additions, -23 deletions)

Lines changed: 71 additions & 0 deletions
---
title: FAQs
description: FAQs of PostgreSQL
keywords: [KubeBlocks, PostgreSQL, Kubernetes Operator]
sidebar_position: 9
sidebar_label: FAQs
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# PostgreSQL FAQs

## 1. Use ETCD as Patroni DCS

KubeBlocks PostgreSQL uses the Kubernetes API itself as the DCS (Distributed Configuration Store) by default. When the control plane is under extremely high load, however, this can lead to unexpected demotion of the primary replica, so using etcd as the DCS is recommended in such extreme cases.

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: pg-cluster-etcd
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: postgresql
  topology: replication
  componentSpecs:
    - name: postgresql
      serviceVersion: "16.4.0"
      env:
        - name: DCS_ENABLE_KUBERNETES_API # unset this env if you use etcd or ZooKeeper; defaults to empty
        - name: ETCD3_HOST
          value: 'etcd-cluster-etcd-headless.demo.svc.cluster.local:2379' # the endpoint of your etcd cluster
        # - name: ZOOKEEPER_HOSTS
        #   value: 'myzk-zookeeper-0.myzk-zookeeper-headless.demo.svc.cluster.local:2181' # the endpoint of your ZooKeeper cluster
      replicas: 2
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
```
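Once the manifest above is tailored to your environment, it can be applied and verified like any other KubeBlocks cluster. A minimal sketch, assuming the manifest is saved as `pg-cluster-etcd.yaml` and the `demo` namespace and the referenced etcd cluster already exist:

```bash
# Create the PostgreSQL cluster that uses etcd as the Patroni DCS
kubectl apply -f pg-cluster-etcd.yaml

# Watch until the cluster reports a Running status
kubectl get cluster pg-cluster-etcd -n demo -w
```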

The key fields are:
- `DCS_ENABLE_KUBERNETES_API`: unset this env to use etcd or ZooKeeper as the DCS
- `ETCD3_HOST`: the endpoint of the etcd cluster

You can also use ZooKeeper as the DCS by unsetting `DCS_ENABLE_KUBERNETES_API` and setting `ZOOKEEPER_HOSTS` to the endpoint of your ZooKeeper cluster, as sketched below.
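For illustration, a minimal sketch of the component `env` for the ZooKeeper variant, reusing the placeholder address from the commented lines above (adjust it to your own ZooKeeper service):

```yaml
      env:
        - name: DCS_ENABLE_KUBERNETES_API # declared without a value, so the Kubernetes API is not used as DCS
        - name: ZOOKEEPER_HOSTS
          value: 'myzk-zookeeper-0.myzk-zookeeper-headless.demo.svc.cluster.local:2181'
```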

KubeBlocks provides etcd and ZooKeeper Addons in the `kubeblocks-addons` repository. Refer to the following examples for more details:
- https://github.com/apecloud/kubeblocks-addons/tree/main/examples/etcd
- https://github.com/apecloud/kubeblocks-addons/tree/main/examples/zookeeper

To inspect the DCS data, you can shell into one of the etcd containers and query it with etcdctl:

```bash
etcdctl get /service --prefix
```
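The same query can also be run from outside the pod with `kubectl exec`. A sketch assuming the etcd cluster created from the Addon example is named `etcd-cluster` and runs in the `demo` namespace (the pod name is hypothetical, substitute your own):

```bash
# Query the Patroni keys stored under /service in etcd
kubectl exec -n demo etcd-cluster-etcd-0 -- etcdctl get /service --prefix
```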

docs/en/preview/kubeblocks-for-redis/05-backup-restore/06-restore-with-pitr.mdx

Lines changed: 28 additions & 10 deletions
````diff
@@ -87,7 +87,7 @@ Apply this YAML configuration:
 apiVersion: apps.kubeblocks.io/v1
 kind: Cluster
 metadata:
-  name: pg-restore-pitr
+  name: redis-restore-pitr
   namespace: demo
   annotations:
     # NOTE: replace <CONTINUOUS_BACKUP_NAME> with the continuous backup name
@@ -99,16 +99,34 @@ spec:
   topology: replication
   componentSpecs:
     - name: redis
-      serviceVersion: "14.7.2"
-      disableExporter: true
-      replicas: 1
+      serviceVersion: "7.2.4"
+      disableExporter: false
+      replicas: 2
       resources:
         limits:
-          cpu: "0.5"
-          memory: "0.5Gi"
+          cpu: '0.5'
+          memory: 0.5Gi
         requests:
-          cpu: "0.5"
-          memory: "0.5Gi"
+          cpu: '0.5'
+          memory: 0.5Gi
+      volumeClaimTemplates:
+        - name: data
+          spec:
+            storageClassName: ""
+            accessModes:
+              - ReadWriteOnce
+            resources:
+              requests:
+                storage: 20Gi
+    - name: redis-sentinel
+      replicas: 3
+      resources:
+        limits:
+          cpu: '0.5'
+          memory: 0.5Gi
+        requests:
+          cpu: '0.5'
+          memory: 0.5Gi
       volumeClaimTemplates:
         - name: data
           spec:
@@ -142,7 +160,7 @@ metadata:
   name: redis-replication-restore
   namespace: demo
 spec:
-  clusterName: redis-replication-restore
+  clusterName: redis-restore-pitr
   force: false
   restore:
     backupName: <CONTINUOUS_BACKUP_NAME>
@@ -167,7 +185,7 @@ To remove all created resources, delete the Redis cluster along with its namespace:
 
 ```bash
 kubectl delete cluster redis-replication -n demo
-kubectl delete cluster redis-replication-restore -n demo
+kubectl delete cluster redis-restore-pitr -n demo
 kubectl delete ns demo
 ```
 
````
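The restore manifests above keep the `<CONTINUOUS_BACKUP_NAME>` placeholder. A simple way to look it up is to list the backups of the source cluster, a sketch assuming the source cluster is `redis-replication` in the `demo` namespace:

```bash
# List backups in the namespace and pick the continuous (PITR) backup taken from the source cluster
kubectl get backup -n demo
```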

docs/en/release-1_0/kubeblocks-for-redis/05-backup-restore/06-restore-with-pitr.mdx

Lines changed: 28 additions & 13 deletions
````diff
@@ -87,7 +87,7 @@ Apply this YAML configuration:
 apiVersion: apps.kubeblocks.io/v1
 kind: Cluster
 metadata:
-  name: pg-restore-pitr
+  name: redis-restore-pitr
   namespace: demo
   annotations:
     # NOTE: replace <CONTINUOUS_BACKUP_NAME> with the continuous backup name
@@ -99,19 +99,34 @@ spec:
   topology: replication
   componentSpecs:
     - name: redis
-      serviceVersion: "14.7.2"
-      disableExporter: true
-      labels:
-        # NOTE: update the label accordingly
-        apps.kubeblocks.postgres.patroni/scope: pg-restore-pitr-redis
-      replicas: 1
+      serviceVersion: "7.2.4"
+      disableExporter: false
+      replicas: 2
       resources:
         limits:
-          cpu: "0.5"
-          memory: "0.5Gi"
+          cpu: '0.5'
+          memory: 0.5Gi
         requests:
-          cpu: "0.5"
-          memory: "0.5Gi"
+          cpu: '0.5'
+          memory: 0.5Gi
+      volumeClaimTemplates:
+        - name: data
+          spec:
+            storageClassName: ""
+            accessModes:
+              - ReadWriteOnce
+            resources:
+              requests:
+                storage: 20Gi
+    - name: redis-sentinel
+      replicas: 3
+      resources:
+        limits:
+          cpu: '0.5'
+          memory: 0.5Gi
+        requests:
+          cpu: '0.5'
+          memory: 0.5Gi
       volumeClaimTemplates:
         - name: data
           spec:
@@ -145,7 +160,7 @@ metadata:
   name: redis-replication-restore
   namespace: demo
 spec:
-  clusterName: redis-replication-restore
+  clusterName: redis-restore-pitr
   force: false
   restore:
     backupName: <CONTINUOUS_BACKUP_NAME>
@@ -170,7 +185,7 @@ To remove all created resources, delete the Redis cluster along with its namespace:
 
 ```bash
 kubectl delete cluster redis-replication -n demo
-kubectl delete cluster redis-replication-restore -n demo
+kubectl delete cluster redis-restore-pitr -n demo
 kubectl delete ns demo
 ```
 
````
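After applying either version of the restore request, you can track its progress and confirm the restored cluster. A sketch assuming the restore object shown above is an OpsRequest (as its fields suggest) in the `demo` namespace:

```bash
# Track the restore request until it completes
kubectl get opsrequest redis-replication-restore -n demo -w

# Confirm the restored cluster is up
kubectl get cluster redis-restore-pitr -n demo
```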
