CKAD: 4. Design
Each component should be decoupled from resources.
All resources have a transient relationship with the others.
K8s orchestration works with a series of agents: controllers, i.e. watch loops.
Based on the PodSpec, the kube-scheduler determines the best node for deployment.
One CPU =
1 AWS vCPU, or
1 GCP core, or
1 Azure vCore, or
1 hyperthread on a bare-metal Intel CPU with Hyperthreading
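In a PodSpec, fractional CPUs are written in millicores; a minimal fragment (the value is an arbitrary example):
resources:
  requests:
    cpu: "500m"   # 500 millicores = half of one CPU unit as defined above; may also be written as 0.5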
Label and Selector
A label can be:
- production / development
- department name
- team name
- primary application developer
Selectors are namespace scoped, unless --all-namespaces is used.
--selector key=value
OR
-l key=value
If we use the command
kubectl create deployment design2 --image=nginx
then a pod is created with the label app=design2.
We cannot specify replicas in the same command; we need to run one more (e.g. kubectl scale).
If we edit the label of that pod, the design2 deployment will create another pod with the label app=design2, because the relabeled pod no longer matches its selector.
If we delete deployment design2, only the pods whose label is app=design2 get deleted.
To show all labels, use the --show-labels option with the kubectl get command.
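A minimal command sketch of this workflow (the deployment name design2 comes from the notes; the replica count and pod name are placeholders):
kubectl create deployment design2 --image=nginx       # pods get the label app=design2
kubectl scale deployment design2 --replicas=3         # replicas set with a second command
kubectl get pods --show-labels                        # show every pod with its labels
kubectl get pods -l app=design2                       # select only this deployment's pods
kubectl label pod <pod-name> app=changed --overwrite  # deployment replaces the relabeled pod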
Job
A Job is for scenarios where you don't want the process to keep running indefinitely.
It supports parallel processing of a set of independent but related work items. These might be:
- emails to be sent,
- frames to be rendered,
- files to be transcoded,
- ranges of keys in a NoSQL database to scan, and so on.
A Replication Controller manages Pods which are not expected to terminate (e.g. web servers), and a Job manages Pods that are expected to terminate (e.g. batch tasks).
Jobs are part of the batch API group (apiVersion: batch/v1).
It has the following parameters:
1. activeDeadlineSeconds: the Job can remain active for only that many seconds.
2. completions: how many successful instances are needed? Default 1. We can edit it using
k edit job <job-name>
3. parallelism: how many pods should run at a time? Default 1. With the value 0, the Job is paused.
4. restartPolicy: {Never | OnFailure}. The pod default, Always, is not suitable for a Job.
5. A failing Job is retried backoffLimit times, default 6.
6. ttlSecondsAfterFinished: default 'never' (finished Jobs are not cleaned up automatically).
7. backoffLimit: how many pod retries in total before the Job is marked Failed.
* If parallelism > completions, then parallelism is lowered to completions.
So the logic is: create 'parallelism' pods in one shot and check how many are successful. If that count is equal to or more than completions, stop. Otherwise create another set of 'parallelism' pods. Continue as long as the total retry count stays below backoffLimit (the manifest sketch below ties these fields together).
While debugging, set restartPolicy: Never. This policy applies to the pod, not to the Job.
The Job status is Failed if
- it is restarted more than backoffLimit times, OR
- it runs longer than activeDeadlineSeconds;
else the status is Complete.
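A minimal Job manifest sketch combining these fields, assuming a busybox image and a placeholder command (all values are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  completions: 5               # 5 successful pod runs are required
  parallelism: 2               # run 2 pods at a time
  backoffLimit: 6              # retry failed pods up to 6 times
  activeDeadlineSeconds: 120   # mark the Job Failed after 120 seconds
  template:
    spec:
      restartPolicy: Never     # Always is not suitable for a Job
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one item; sleep 5"]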
To create a Job with an imperative command:
k create job <job-name> --image=<container-image>
(older kubectl releases used: k run <name> --image=<image> --restart=OnFailure)
Delete Job
With --cascade=false, only the Job gets deleted, not its pods. The default is true (newer kubectl releases spell this --cascade=orphan).
kubectl delete job job_name --cascade=false
CronJob
Linux-style cron syntax:
MM(minute) HH(hour) DD(day of month) MM(month) WW(day of week)
A field can be a comma-separated list: 1,2
It can be a range with a hyphen: 1-5
It can be * to indicate all
It can be */ and a number to indicate a period: */2
* and ? have the same meaning.
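Two worked examples of this syntax (the schedules themselves are arbitrary):
30 2 * * 1-5      runs at 02:30, Monday through Friday
*/15 9-17 * * *   runs every 15 minutes during the hours 09:00-17:59, every day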
A CronJob creates multiple Jobs as per its schedule. The CronJob is only responsible for creating Jobs that match its schedule; the Job in turn is responsible for the management of the Pods it represents.
It has the following parameters:
1. If a CronJob's pod sleeps for 30 seconds while activeDeadlineSeconds is 10, none of the Jobs created by the CronJob ever reach the Completed state.
2. startingDeadlineSeconds: if a Job cannot be scheduled within this time, that run is counted as missed. A miss can also be caused by the Forbid concurrency policy. After 100 such misses, no more Jobs get scheduled.
Note: if startingDeadlineSeconds is set, only misses within the last startingDeadlineSeconds are counted; that count must stay below 100.
3. concurrencyPolicy
Allow: concurrent Jobs may run (the default)
Forbid: if a second Job would be scheduled before the earlier Job finished, it is skipped
Replace: the currently running Job is cancelled and replaced by the new one
4. suspend
If true, all subsequent Jobs will not be scheduled.
5. successfulJobsHistoryLimit and failedJobsHistoryLimit
How many finished Jobs shall be kept (see the manifest sketch below).
To create a CronJob (i.e. cj) using imperative commands:
k create cronjob <name> --image=<container-image> --schedule="* * * * *"
(older kubectl releases used: k run <name> --image=<image> --restart=OnFailure --schedule="* * * * *")
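A minimal CronJob manifest sketch combining these fields (apiVersion is batch/v1 on current clusters, batch/v1beta1 on older ones; image and command are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/2 * * * *"          # every 2 minutes
  concurrencyPolicy: Forbid        # skip a run if the previous Job is still active
  startingDeadlineSeconds: 100
  successfulJobsHistoryLimit: 3    # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo hello"]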
Terms for a multi-container pod:
1. Ambassador: communicates with outside resources / outside the cluster, e.g. Envoy Proxy
- proxies local connections
- reverse proxy
- limits HTTP requests
- re-routes to the outside world
2. Adapter: modifies the data that the primary container generates
3. Sidecar: provides a service that is not found in the primary container, e.g. logging (see the sketch below)
Flexibility: one application per pod
Granular scalability: one application per pod
Best inter-container performance: multiple applications per pod
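A minimal sketch of the sidecar pattern, assuming busybox containers sharing an emptyDir volume (names and commands are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  volumes:
  - name: logs
    emptyDir: {}                   # scratch volume shared by both containers
  containers:
  - name: app                      # primary container writes a log file
    image: busybox
    command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: log-sidecar              # sidecar handles the logging the primary lacks
    image: busybox
    command: ["sh", "-c", "touch /var/log/app.log; tail -f /var/log/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log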
Containerizing an application
- It should be stateless
- It should be transient
- Remove environment-specific configuration; it should come via ConfigMaps and Secrets
- It is like converting a city bus into scooters
After containerizing the application, ask:
Q1: Is my application as decoupled as it could possibly be?
Q2: Are all components designed assuming the other components are transient? Will it work with Chaos Monkey?
Q3: Can I scale any particular component?
Q4: Have I used stable and open standards to meet my needs?
Managing Resource Usage
If a pod asks for more CPU than defined, then
- nothing happens (the CPU is throttled)
If a pod asks for more memory than defined, the behavior is undefined
- the pod may be restarted, OR
- it may be evicted from the node
If a pod asks for more memory than the node has, then
- it is evicted from the node
If a pod asks for more storage than defined, then
- it is evicted from the node
Resource Limits
1. CPU: cpu
2. Memory: memory
3. Huge Pages: hugepages-2Mi
4. Ephemeral Storage: ephemeral-storage
They apply at the container level (see the snippet below).
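A hedged snippet showing where these keys sit in a container spec (values are arbitrary; hugepages-2Mi additionally requires huge pages configured on the node, with request equal to limit):
resources:
  requests:
    cpu: "250m"
    memory: "64Mi"
    ephemeral-storage: "1Gi"
  limits:
    cpu: "500m"
    memory: "128Mi"
    ephemeral-storage: "2Gi"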
* By default (if resources->requests are not mentioned), K8s assumes the container requests 0.5 CPU and 256 Mi memory.
These defaults come from a default LimitRange K8s object:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
* By default (if resources->limits are not mentioned), K8s assumes a container limit of 1 vCPU and 512 Mi memory.
These defaults come from a default LimitRange K8s object:
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
The pod-level value is the sum of all its containers' values.
Resources can also be specified at the project quota level:
limits:
  cpu: "1"
  memory: "1Gi"
requests:
  cpu: "0.5"
  memory: "500Mi"
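At namespace scope this is expressed with a ResourceQuota object; a minimal sketch (the name and values are illustrative; note the flattened key format):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    requests.cpu: "4"        # sum of CPU requests across all pods in the namespace
    requests.memory: 4Gi
    limits.cpu: "8"          # sum of CPU limits across the namespace
    limits.memory: 8Gi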
k describe node <node-name> shows the node's capacity and currently allocated resources.
We can specify a LimitRange object with default values. It applies within a namespace, and only if the LimitRanger admission controller is enabled.
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
  - max:
      cpu: "800m"
      memory: "1Gi"
    min:
      cpu: "100m"
      memory: "99Mi"
    default:
      cpu: "700m"
      memory: "900Mi"
    defaultRequest:
      cpu: "110m"
      memory: "111Mi"
    type: Container
- While creating a container, if the memory request and limit are both unspecified, the LimitRange defaults apply.
- While creating a container, if the memory request is not specified but the limit is, the request is set equal to the limit.
- While creating a container, if the memory request is specified and the limit is not, the default limit from the LimitRange applies (a short illustration follows).
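A short illustration of the second rule, assuming the limit-mem-cpu-per-container LimitRange above is active in the namespace:
containers:
- name: demo
  image: busybox
  resources:
    limits:
      memory: "512Mi"   # no request specified: the request is also set to 512Mi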
CNI
* Some CNI plugins support Network Policies, e.g. Calico, Canal, Kube Router, Romana, Weave Net.
* Some CNI plugins support encryption of UDP and TCP traffic, e.g. Calico, Kopeio, Weave Net.
* Some CNI plugins allow VXLAN, e.g. Canal, Flannel, Kopeio-networking, Weave Net.
* CNI plugins operate at layer 2 or layer 3:
Layer 2: Canal, Flannel, Kopeio-networking, Weave Net
Layer 3: Calico, Romana, Kube Router
* kubenet is a basic network plugin. It relies on the cloud provider for routing and cross-node networking.