Clone & Migrate Data Between Kubernetes Clusters with Velero
Nowadays, Kubernetes is a popular platform for hosting scalable applications and databases such as Redis, Elasticsearch, MySQL, and PostgreSQL.
Helm charts are already available for the databases mentioned above. These charts let you deploy database services on Kubernetes in a secure and reliable manner, following current best practices.
However, there might be situations where you need to migrate the data stored in these database deployments to other clusters. For example, you might want to transfer a copy of your data to a separate cluster.
Assumptions and prerequisites
This guide makes the following assumptions:
- You have two Kubernetes (K8s) clusters, a source cluster and a destination cluster, running on the same cloud provider. This scenario uses the Google Kubernetes Engine (GKE) service from Google Cloud Platform (GCP), but you can use any cloud provider supported by Velero.
- See the Velero documentation to learn which platforms Velero supports.
- You have the kubectl CLI and the Helm package manager installed and configured to work with both of your Kubernetes clusters.
If you are already running Elasticsearch and Redis database servers, you can skip directly to Step 3.
Step 1: Deploy Redis on the source cluster and add data to it
Follow the steps below:
- Add the Bitnami chart repository to Helm:
helm repo add bitnami https://charts.bitnami.com/bitnami
- Deploy Redis on the source cluster:
helm install my-release bitnami/redis

- Get the Redis database password by running the following command:
export REDIS_PASSWORD=$(kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
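The command above reads the `redis-password` key from the Secret created by the chart; Kubernetes stores Secret values base64-encoded, which is why the pipeline ends in `base64 --decode`. A minimal sketch of that decoding step, using a made-up password value:

```shell
# 's3cr3t-pass' is a made-up example; in the real Secret the encoded
# form is what kubectl's jsonpath expression returns.
ENCODED=$(printf '%s' 's3cr3t-pass' | base64)
# Decoding recovers the original password, exactly as in the command above.
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$DECODED"   # → s3cr3t-pass
```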
- Create a Redis client pod to connect to the Redis master:
kubectl run --namespace default my-release-redis-client --rm --tty -i --restart='Never' \
  --env REDIS_PASSWORD=$REDIS_PASSWORD \
  --image docker.io/bitnami/redis:6.0.10-debian-10-r19 -- bash
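Once inside the client pod's shell, you can load some sample data so there is something to verify after the migration. A minimal sketch, with made-up keys and assuming the Bitnami chart's default master service name `my-release-redis-master` (adjust to match your release):

```shell
# Write a few sample commands to a file; the key names and values
# here are made-up examples for illustration only.
cat > /tmp/sample-data.redis <<'EOF'
SET customer:1 "Alice"
SET customer:2 "Bob"
EOF

# From inside the client pod, pipe the commands to the Redis master
# (the service host name is assumed from the chart's defaults):
# redis-cli -h my-release-redis-master -a "$REDIS_PASSWORD" < /tmp/sample-data.redis
```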