Kubernetes emptyDir Example - emptyDir Volume

As the name implies, a Kubernetes emptyDir volume is an empty directory inside the Pod: it is created when the Pod is created and deleted when the Pod is deleted. An emptyDir lasts for the life of the Pod, so it survives container crashes and restarts, but if the Pod itself is deleted you lose all the data in the emptyDir. Initially the volume is empty, and the containers in the Pod can read and write data in it. The storage persists as long as the node is running; if the node goes down, the contents of the emptyDir are erased. By default an emptyDir uses the node's storage (disk or SSD), but it can also be backed by RAM for higher performance. In this blog post I will explain in detail what a Kubernetes emptyDir volume is, with examples.

master $ cat mypod.yml

apiVersion: v1
kind: Pod
metadata:
  name: myemptypod
spec:
  containers:
  - image: nginx
    name: mycontainer
    volumeMounts:
    - mountPath: /demo
      name: my-volume
  volumes:
  - name: my-volume
    emptyDir: {}

Here you can see in the YAML that we created one emptyDir volume named my-volume, and in the volumeMounts section we mount that volume into the container at /demo. After the Pod is created you will see an empty directory called /demo inside the container.

Let's verify:

master $ kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
myemptypod   1/1     Running   0          8s

master $ kubectl exec -it myemptypod bash

root@myemptypod:/# ls
bin   demo  etc   lib    media  opt   root  sbin  sys  usr
boot  dev   home  lib64  mnt    proc  run   srv   tmp  var

root@myemptypod:/# ls demo/
root@myemptypod:/#

Here you can see there are no files in the /demo directory; it is an empty directory.
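
To confirm that the container can actually write to the emptyDir volume, you can create a file under /demo and read it back. A quick sketch; the file name demo.txt is just an example:

root@myemptypod:/# echo "hello emptydir" > /demo/demo.txt
root@myemptypod:/# cat /demo/demo.txt
hello emptydir

This file stays in the volume for the life of the Pod, even if the container restarts, but it is lost once the Pod is deleted.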

All Containers Sharing emptyDir:

If you mount the emptyDir volume into three containers, the data in the volume is available to all of them. Say container-1 writes some data under its mount path: if you check the mount paths in the other containers, you will see container-1's data there. Likewise, if container-2 writes data under its mount path, the same data shows up in the first and third containers. All the containers mount the same volume, so they all share and can access the same data. The manifest below mounts one emptyDir volume into three containers; a verification sketch follows after it.

apiVersion: v1
kind: Pod
metadata:
  name: mpod
spec:
  containers:

  - image: nginx
    name: my1
    ports:
    - containerPort: 8080
    command: ["/bin/sh", "-ec", "sleep 3600"]
    volumeMounts:
    - mountPath: /demo1
      name: my-volume

  - image: nginx
    name: my2
    ports:
    - containerPort: 8081
    command: ["/bin/sh", "-ec", "sleep 3600"]
    volumeMounts:
    - mountPath: /demo2
      name: my-volume

  - image: nginx
    name: my3
    ports:
    - containerPort: 8082
    command: ["/bin/sh", "-ec", "sleep 3600"]
    volumeMounts:
    - mountPath: /demo3
      name: my-volume

  volumes:
  - name: my-volume
    emptyDir: {}

Here I specified a command (sleep 3600) for each container; it overrides the image's default command and simply keeps the container running. If you want more information about overriding a container's command, you can check Stack Overflow.
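
To verify the sharing, save the manifest (for example as mpod.yml), create the Pod, then write a file from one container and read it from the others. A quick sketch; the file name shared.txt is just an example:

master $ kubectl create -f mpod.yml
pod/mpod created
master $ kubectl exec mpod -c my1 -- sh -c 'echo "written from my1" > /demo1/shared.txt'
master $ kubectl exec mpod -c my2 -- cat /demo2/shared.txt
written from my1
master $ kubectl exec mpod -c my3 -- cat /demo3/shared.txt
written from my1

All three containers see the same file because they mount the same emptyDir volume, just at different paths.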

  • kubernetes emptydir
  • kubernetes emptydir sizelimit
  • kubernetes emptydir mount
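
One of the related topics above is the emptyDir sizeLimit. You can cap how much an emptyDir volume may use and, as mentioned in the introduction, back it with RAM instead of the node's disk. Below is a minimal sketch; the Pod name, mount path, and the 128Mi limit are just example values:

apiVersion: v1
kind: Pod
metadata:
  name: mymempod
spec:
  containers:
  - image: nginx
    name: mycontainer
    volumeMounts:
    - mountPath: /cache
      name: mem-volume
  volumes:
  - name: mem-volume
    emptyDir:
      medium: Memory     # back the volume with RAM (tmpfs) instead of node disk
      sizeLimit: 128Mi   # optional upper bound on how much the volume can use

Keep in mind that a RAM-backed emptyDir counts against the containers' memory usage, and its contents do not survive a node reboot.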
