CKAD : 5. Deployment Configuration


A K8s volume shares the lifetime of the pod, not of the containers within it. If a container restarts, the data remains available to the new container. A volume can be made available to multiple pods, and its life can be longer than the life of a pod. If multiple pods have a write access mode, data corruption is possible unless there is a locking mechanism.

A pod has a volume. A single volume can be made available to multiple containers within the pod using a volumeMount (a directory path). If the volumeMounts use different values, the path will differ within each container. So a volume can be used for intra-pod communication.
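A minimal sketch of intra-pod sharing through one volume mounted at different paths (pod name, images, and paths are illustrative, not from the notes):

apiVersion: v1
kind: Pod
metadata:
  name: sharedpod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data        # path inside the writer container
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /input       # same volume, different path in this container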

28+ volume types are available
- rbd (Ceph), GlusterFS: block storage
- NFS, iSCSI: suited for multiple readers
- gcePersistentDisk
- Others: azureDisk, azureFile, csi, downwardAPI, fc (Fibre Channel, supports raw block volumes), flocker, gitRepo, local, projected, portworxVolume, quobyte, scaleIO, secret, storageOS, vsphereVolume, persistentVolumeClaim, awsElasticBlockStore, cephfs, cinder, flexVolume

PV: A K8s PersistentVolume (PV) has a longer lifetime than a pod. A pod can claim a PV through a PVC.
PV is cluster-scoped.

Volume plugins were originally in-tree, i.e. compiled along with the k8s binaries. With CSI (the Container Storage Interface), out-of-tree plugins allow storage vendors to develop a single driver, and the plugin can be containerized. Such a plugin needs elevated access to the host node, so it is a security risk.

A pod can claim a PV through a PVC. Multiple pods can share a PVC. Pods that do not mount the claim cannot use the data inside the volume.

Pod : Volume : PVC
The volume is then mounted in the container.

Access modes (accessModes):
1. RWO (ReadWriteOnce)
2. ROX (ReadOnlyMany)
3. RWX (ReadWriteMany)
The Once/Many part refers to nodes, not pods.

When binding a claim:
1. The cluster groups all volumes with the same access mode.
2. It then sorts the volumes by size, smallest to largest, so the smallest volume that satisfies the request is used.

A PVC requests a volume with these parameters: (1) access mode, (2) volume size, and (3) storage class (optional).

PVC is namespace-scoped.
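A minimal PVC sketch with the three request parameters above (name, size, and class are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-one
spec:
  accessModes:
  - ReadWriteOnce             # RWO
  resources:
    requests:
      storage: 1Gi            # requested volume size
  # storageClassName: manual  # optional; links the claim to a StorageClass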

Phases:
1. Provisioning: PV

A PV can be (1) empty or (2) a pre-populated volume.

1.1. emptyDir
1.2. hostPath: mounts a resource from the host node file system. The resource can be a:
- directory
- file
- socket
- character device
- block device
The resource is selected with the path field.
* Each type has its own configuration settings.

The resource must already exist on the host node, unless one of the ...OrCreate types is used:
- DirectoryOrCreate
- FileOrCreate
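A hostPath PV sketch using DirectoryOrCreate (name, path, size, and the "manual" class are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-host
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual    # hostPath PVs are typically given the "manual" class
  hostPath:
    path: /data/pv-host       # created on the host node if it does not exist
    type: DirectoryOrCreate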

When a PV is created using NFS storage, we must specify both server and path:

  nfs:
    server: nfs01
    # Exported path of your NFS server
    path: "/mysql"

2. Binding: PVC

k get pv
Here the CLAIM column indicates the name of the bound PVC.
Other columns are: CAPACITY, ACCESS MODES, RECLAIM POLICY, STATUS (Bound | Available), STORAGECLASS. STORAGECLASS shows "manual" if the PV was created using hostPath with that class name.

3. Use

If we create a file under the path that is mounted in a container and backed by the pod's PVC, then delete all pods that use this PVC and create them again, the file is still there, because the PVC itself was not deleted.
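A sketch of a pod mounting such a claim (claim name and mount path are illustrative; they follow the PVC sketch earlier):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-user
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-one      # the PVC created earlier
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data        # files written here survive pod deletion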

4. Release: when the PVC is deleted
5. Reclaim: as per persistentVolumeReclaimPolicy
- Retain
- Delete
- Recycle 

With Retain, the PV is not made available to any other PVC, even after the old PVC is deleted.

A StorageClass is used for dynamic provisioning: we do not need to define a PV, it is created automatically. In the PVC we use storageClassName to link the PVC with the StorageClass.
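A dynamic-provisioning sketch; the provisioner is cluster-specific (a GCE PD provisioner and the names here are assumptions):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # cluster-specific provisioner
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-fast
spec:
  storageClassName: fast            # links the claim to the StorageClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi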

Secret 

k create secret generic "name of secret" --from-literal=key=value

To encrypt secrets at rest, use an EncryptionConfiguration object with (1) a key and (2) a provider identity. The kube-apiserver must be started with the --encryption-provider-config flag. Example providers: aescbc, kms. Other tools: Helm Secrets, HashiCorp Vault. After enabling encryption we need to recreate all existing secrets.
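A minimal EncryptionConfiguration sketch with an aescbc provider (the key name and value are placeholders; the secret must be a base64-encoded 32-byte key):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}      # fallback: still read unencrypted secrets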

Max size of a secret = 1 MB.
We can have many secrets; there is no limit on the count.
On the host node they are stored in a tmpfs.

A secret can be exposed to a container as an environment variable or as a file. Inside the container it is stored as plain text.

Inside a pod it is referenced with secretKeyRef.

A Secret has two maps
1. data: values must be base64-encoded
2. stringData: a write-only convenience field that accepts plain strings; it is not displayed in the output of the command kubectl get secret mysecret -o yaml

If a key is specified in both data and stringData, the value given in stringData is used and the value in data is ignored.
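A sketch showing both maps (values are illustrative): entries under data are base64-encoded, entries under stringData are plain strings:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=          # base64 of "admin"
stringData:
  password: t0p-Secret        # plain text; stored base64-encoded on write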

When we mount a secret, we can mount an individual key:

    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      items:
      - key: username
        path: my-group/my-username

Or all keys of the secret:

    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret

If we specify a file permission in JSON, it must be decimal; in YAML it can be octal.
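For example, mode 0400 (owner read-only), continuing the volume above (a sketch):

    secret:
      secretName: mysecret
      defaultMode: 0400   # octal in a YAML manifest; the same mode is the decimal 256 in JSON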

Since K8s 1.18 (alpha feature), both ConfigMaps and Secrets can be made immutable by specifying "immutable: true".

Secret resources reside in a namespace. Secrets can only be referenced by Pods in that same namespace.

A key under secret->data that starts with a dot (e.g. .secret-file) creates a hidden file when mounted.

The ability to watch and list all secrets in a cluster should be reserved for only the most privileged, system-level components.

ConfigMap:
1. Key-value pairs
2. Plain config files in any format

Data can come from
1. a single file
2. a collection of files in a single directory
3. literal values (use --from-literal with k create cm "cm name"); see the example commands below
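Example creation commands for the three sources (file and directory names are illustrative):

k create cm cm-from-file --from-file=config.txt
k create cm cm-from-dir --from-file=./configs/
k create cm cmone --from-literal=KONE=vone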

Inside a pod it can be used as
1. an env variable
2. a volume
3. a file in a volume (the filename is the key, the file content is the value)
4. in a pod command
5. setting a file name and its access mode in a volume using ConfigMaps
ConfigMaps can be used by system components and controllers. 

Inside a pod it is referenced with configMapKeyRef.

If a ConfigMap is defined as K,V pairs and a pod volume is linked to that ConfigMap, the volume is mounted in the container at a specific path. Inside that directory there is one file per key K of the ConfigMap, and the content of each file is the corresponding value V.
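A sketch of this volume form, assuming a ConfigMap named cmtwo with keys KA=va and KB=vb (as introduced below):

spec:
  volumes:
  - name: config-volume
    configMap:
      name: cmtwo
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config   # /etc/config/KA holds "va", /etc/config/KB holds "vb"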

If we want to expose a ConfigMap as environment variables inside the container, we do not need to define any volume in the pod spec; only the container section changes. Suppose we have a cm named cmone whose content is KONE=vone, and another cm named cmtwo whose content is KA=va and KB=vb.

Now, to load the env variable from cmone (it has only one), we use:

    envFrom:
    - configMapRef:
        name: cmone

All K,V pairs from cmone are loaded into the container as env variables, with env name = K and env value = V.

If we want to load individual K,V pairs, use the syntax below:

        - name: KAA
          valueFrom:
            configMapKeyRef:
              name: cmtwo
              key: KA
        - name: KBB
          valueFrom:
            configMapKeyRef:
              name: cmtwo
              key: KB

Here we can also rename the env variable (KAA, KBB) relative to the key (KA, KB).

Same thing applies for secret

    envFrom:
    - secretRef:
        name: sone

and

        - name: KAA
          valueFrom:
            secretKeyRef:
              name: stwo
              key: KA
        - name: KBB
          valueFrom:
            secretKeyRef:
              name: stwo
              key: KB

We can also define a readiness probe with the command ls /path/for/volumeMount, so the container becomes Ready only after the ConfigMap has been mounted into it.
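A sketch of such a probe, assuming the ConfigMap volume is mounted at /etc/config as in the earlier example:

    readinessProbe:
      exec:
        command:
        - ls
        - /etc/config        # container becomes Ready only once this path exists
      initialDelaySeconds: 5
      periodSeconds: 5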

Deployment Status


k get deploy grafana -o yaml
The status section is at the end:
1. availableReplicas
2. observedGeneration: relevant for rollout and rollback situations; it increments whenever the deployment spec changes and shows which generation of the spec the controller has observed.

A rolling update can be triggered by
1. changing the replica count:
k scale deploy "name of deployment" --replicas="new replica count"
2. editing the deployment and changing the container image to another version

A deployment has strategy with value “Recreate” or “RollingUpdate”.

We can use the --record option to set an annotation on the deployment; the command we ran is then shown in the CHANGE-CAUSE column of the rollout history and helps when rolling back. Once we use --record, the same annotation carries over to future upgrades as well, even if we do not use --record again.

Rollback is possible with the 'rollout undo' command, as below. Revision numbers only move forward: the revision we roll back to is renumbered as the newest revision. Suppose the current revision is 5 and we roll back to revision 2: a new revision 6 is created with the contents of revision 2, revision 2 is removed from the history, and its change-cause record is carried over to 6. Likewise, if we are at revision 5 and run 'rollout undo' without options, we go back to revision 4, which is renumbered as revision 6; the history then shows revisions 1, 2, 3, 5, 6.

k rollout undo deploy "name of deployment" 
Rollback to a specific revision with the option --to-revision=<number>:
k rollout undo deploy "name of deployment" --to-revision=1

When we create a deployment, a ReplicaSet A is created automatically.
When we roll out a change, a new ReplicaSet B is created. B's replica count increases from 0 up to the replica count of the old ReplicaSet A, while A's replica count decreases from its original value down to 0. If we roll back (rollout undo), the reverse happens.

If we use the older 'kubectl rolling-update' command, the update stops if the client is closed.

We can see the rollout status:
k rollout status deploy "name of deployment"

We can see all the events about new pod creation and old pod termination with the command
k describe deploy "name of deployment"

We can pause and resume a rollout:

k rollout pause deploy "name of deployment"
k rollout resume deploy "name of deployment" 

k rollout history deploy "name of deployment"
provides all the revisions. 

We can see a specific revision:
k rollout history deploy "name of deployment" --revision=1

We can see the diff between two revisions:
k rollout history deploy "name of deployment" --revision=1 > one.out
k rollout history deploy "name of deployment" --revision=2 > two.out
diff one.out two.out

The differences are:
1. line 1 has the revision number
2. the pod-template-hash label
3. the Image

If we only change the replica count, a new revision is not created.

We can trigger a rollout with the command

k set image deploy "name of deployment" "name of container"=nginx:1.9.1
