Kubernetes Scale Up/Down Replica set

Create a ReplicaSet

Create a ReplicaSet with the manifest YAML file below.

cat myrs.yml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

To create a ReplicaSet in Kubernetes, we use the kubectl apply command.

master $ kubectl apply -f myrs.yml

replicaset.apps/frontend created
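
You can also check the ReplicaSet itself to confirm that the desired and current pod counts match. The output shown here is indicative; exact values depend on your cluster and kubectl version.

master $ kubectl get rs frontend

NAME       DESIRED   CURRENT   READY   AGE
frontend   3         3         3       13s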

List the Pods

List the pods with the kubectl get pods command.

master $ kubectl get pods

NAME             READY   STATUS    RESTARTS   AGE
frontend-98xkj   1/1     Running   0          13s
frontend-cttxp   1/1     Running   0          13s
frontend-xqb9z   1/1     Running   0          13s
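
Because the ReplicaSet selects its pods by the tier=frontend label, you can also list exactly the pods it manages by using a label selector. This returns the same pods as above, just filtered by label:

master $ kubectl get pods -l tier=frontend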

Kubernetes Scale Up Replica set

We can scale up the pods under a ReplicaSet using the kubectl scale command.

master $ kubectl scale rs frontend --replicas 10

replicaset.extensions/frontend scaled

master $ kubectl get pods

NAME             READY   STATUS    RESTARTS   AGE
frontend-4jb2x   1/1     Running   0          23s
frontend-98xkj   1/1     Running   0          6m30s
frontend-bpk2l   1/1     Running   0          23s
frontend-cttxp   1/1     Running   0          6m30s
frontend-f8b6d   1/1     Running   0          23s
frontend-gkkq6   1/1     Running   0          23s
frontend-glgkg   1/1     Running   0          23s
frontend-mxh72   1/1     Running   0          23s
frontend-wk48q   1/1     Running   0          23s
frontend-xqb9z   1/1     Running   0          6m30s

Here we scaled up the ReplicaSet to 10 pods.
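
As an alternative to the imperative scale command, you can scale declaratively: change replicas: 3 to replicas: 10 in myrs.yml and re-apply the file. This keeps the manifest as the source of truth for the desired pod count.

master $ kubectl apply -f myrs.yml

replicaset.apps/frontend configured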

Kubernetes Scale Down Replica set

To scale down the pods under the ReplicaSet, we can use the same scale command, but this time with a reduced number of replicas.

master $ kubectl scale rs frontend --replicas 2

replicaset.extensions/frontend scaled

master $ kubectl get pods

NAME             READY   STATUS        RESTARTS   AGE
frontend-4jb2x   1/1     Terminating   0          38s
frontend-98xkj   0/1     Terminating   0          6m45s
frontend-bpk2l   1/1     Terminating   0          38s
frontend-cttxp   1/1     Running       0          6m45s
frontend-f8b6d   1/1     Terminating   0          38s
frontend-gkkq6   1/1     Terminating   0          38s
frontend-glgkg   0/1     Terminating   0          38s
frontend-mxh72   0/1     Terminating   0          38s
frontend-wk48q   1/1     Terminating   0          38s
frontend-xqb9z   1/1     Running       0          6m45s

master $ kubectl get pods

NAME             READY   STATUS        RESTARTS   AGE
frontend-4jb2x   0/1     Terminating   0          41s
frontend-98xkj   0/1     Terminating   0          6m48s
frontend-bpk2l   0/1     Terminating   0          41s
frontend-cttxp   1/1     Running       0          6m48s
frontend-gkkq6   0/1     Terminating   0          41s
frontend-wk48q   0/1     Terminating   0          41s
frontend-xqb9z   1/1     Running       0          6m48s

master $ kubectl get pods

NAME             READY   STATUS    RESTARTS   AGE
frontend-cttxp   1/1     Running   0          23m
frontend-xqb9z   1/1     Running   0          23m

Finally, you can see that only two pods are running.
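
Instead of running kubectl get pods repeatedly as above, you can watch the pods terminate in real time with the --watch flag (press Ctrl+C to stop watching):

master $ kubectl get pods --watch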

Verify the owner reference of pods

You can also verify the owner reference of these pods. Using the command below, we can see how the pod was created and which controller is managing it.

master $ kubectl get pods frontend-cttxp -o yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-09-19T05:11:57Z"
  generateName: frontend-
  labels:
    tier: frontend
  name: frontend-cttxp
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: frontend
    uid: 00e595d8-da9c-11e9-826f-0242ac110052
  resourceVersion: "5705"
  selfLink: /api/v1/namespaces/default/pods/frontend-cttxp
  uid: 00e70999-da9c-11e9-826f-0242ac110052
spec:
  containers:
  - image: gcr.io/google_samples/gb-frontend:v3
    imagePullPolicy: IfNotPresent
    name: php-redis
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-xmgnh
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-xmgnh
    secret:
      defaultMode: 420
      secretName: default-token-xmgnh
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-19T05:11:57Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-09-19T05:11:59Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-09-19T05:11:59Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-09-19T05:11:57Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://f19f538bf188164b7e5335b4cb5c85e15cbaa08ef3995e21244b4cf08ae65dab
    image: gcr.io/google_samples/gb-frontend:v3
    imageID: docker-pullable://gcr.io/google_samples/gb-frontend@sha256:50b22839aaf6a18586d6751e8963cf684c27b9873ca926df22cdf88ed4452615
    lastState: {}
    name: php-redis
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-19T05:11:59Z"
  hostIP: 172.17.0.87
  phase: Running
  podIP: 10.44.0.1
  qosClass: BestEffort
  startTime: "2019-09-19T05:11:57Z"

In the above output, you can see that the owner reference of the pod is the ReplicaSet named frontend.
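
If you only need the owner reference rather than the full pod YAML, a jsonpath query keeps the output short. A minimal example, using the pod name frontend-cttxp from the listing above (substitute one of your own pod names):

master $ kubectl get pod frontend-cttxp -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'

ReplicaSet/frontend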
