CKA and CKAD: My Notes and Website Links
Let me post all the links for the CKA and CKAD exams.
I posted a series of articles on my blog, http://layers7.blogspot.com/, as my study notes:
CKAD
CKAD: 8 Troubleshooting http://layers7.blogspot.com/2020/04/ckad-8troubleshooting.html
CKAD: 7 Exposing Application http://layers7.blogspot.com/2020/04/ckad-7exposing-applications.html
CKAD: 6 Security http://layers7.blogspot.com/2020/04/ckad-6security.html
CKAD: 5 Deployment Configuration http://layers7.blogspot.com/2020/04/ckad-5-deployment-configuration.html
CKAD: 4 Design http://layers7.blogspot.com/2020/03/ckad-4design.html
CKAD: 3 Build http://layers7.blogspot.com/2020/03/ckad-3-build.html
CKAD: 2 K8s Architecture http://layers7.blogspot.com/2020/03/ckad-2-k8s-architecture.html
Imperative commands with kubectl http://layers7.blogspot.com/2020/06/imperative-commands-with-kubectl.html
CKA
CKA 14: JSON http://layers7.blogspot.com/2020/08/cka-14-json.html
CKA 13: Troubleshooting http://layers7.blogspot.com/2020/08/cka-13-troubleshooting.html
CKA 10: Install K8s the Hard Way http://layers7.blogspot.com/2020/08/cka-10-install-k8s-hard-way.html
Kelseyhightower Kubernetes hard way http://layers7.blogspot.com/2020/05/kelseyhightower-kubernetes-hard-way.html
CKA 9: Networking http://layers7.blogspot.com/2020/08/cka-9-networking.html
CKA 8: Storage http://layers7.blogspot.com/2020/08/cka-8-storage.html
CKA 7: Security http://layers7.blogspot.com/2020/08/cka-7-security.html
CKA 6: Cluster Maintenance http://layers7.blogspot.com/2020/08/cka-6-cluster-maintenance.html
CKA 5: Application Life Cycle Management http://layers7.blogspot.com/2020/08/cka-5-application-life-cycle-management.html
CKA 4: Logging and Monitoring http://layers7.blogspot.com/2020/07/cka-4-logging-and-monitoring.html
CKA 3: Scheduling http://layers7.blogspot.com/2020/07/cka-scheduling.html
RBAC in K8s : http://layers7.blogspot.com/2020/06/rbac-in-k8s.html
CKA 2: K8s Core Concepts http://layers7.blogspot.com/2020/06/cka-k8s-core-concepts.html
To know more about K8s from my blog:
Theory: http://layers7.blogspot.com/2017/11/kubernetes.html
All K8s-related articles: http://layers7.blogspot.com/search/label/k8s
Apart from these, there is plenty of material on the Internet. Here is the list of websites that I am aware of:
https://github.com/StenlyTU/K8s-training-official
https://www.manning.com/books/kubernetes-in-action#toc
https://amartus.com/amartus-kubernetes-exam-tips/
https://scriptcrunch.com/kubernetes-exam-guide/
https://medium.com/@KevinHoffman/taking-the-certified-kubernetes-administrator-exam-eeab17d65476
https://medium.com/akena-blog/k8s-admin-exam-tips-22961241ba7d
https://scriptcrunch.com/kubernetes-tutorial-guides/
https://hackmd.io/@mauilion/cka-lab
CKAD
https://itnext.io/the-
https://itnext.io/
https://itnext.io/learn-how-
https://github.com/
https://azure.microsoft.com/
https://kubernetes.io/
https://discuss.kubernetes.io/
https://kubernetes.io/
https://training.
https://www.edx.org/course/
https://www.edx.org/course/
https://killer.sh/course/
https://github.com/
https://kubernetes.io/docs/
https://learnk8s.io/
https://learnk8s.io/academy
https://kodekloud.com/
https://kubernauts.de/en/
https://docs.google.com/
https://collabnix.github.io/
https://www.katacoda.com/
https://www.youtube.com/watch?
https://docs.google.com/
https://killer.sh/
https://github.com/
https://github.com/
Imperative Commands with Kubectl
kubectl create command
Object | Options
service | type | name | --tcp=port:targetPort | --node-port
configmap | --from-file | --from-literal
secret generic | --from-file | --from-literal
rolebinding | --clusterrole | --serviceaccount | --role
clusterrolebinding | --clusterrole | --serviceaccount
role | --verb | --resource
clusterrole | --verb | --resource
cronjob | --image | --schedule
deployment | --image
job | --image | --from=cronjob/NAME
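For example, the rows above map to imperative commands like these (a rough sketch; all resource names, images and values are illustrative):
kubectl create service nodeport my-svc --tcp=80:8080 --node-port=30080
kubectl create configmap my-cm --from-literal=key1=value1 --from-file=app.properties
kubectl create secret generic my-secret --from-literal=password=pass123
kubectl create role pod-reader --verb=get,list,watch --resource=pods
kubectl create rolebinding read-pods --role=pod-reader --serviceaccount=default:my-sa
kubectl create cronjob my-cj --image=busybox --schedule="*/5 * * * *" -- echo hello
kubectl create deployment my-deploy --image=nginx
kubectl create job my-job --from=cronjob/my-cj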
kubectl set command
env | RESOURCE/NAME | KEY_1=VAL_1 ... KEY_N=VAL_N
image | (-f FILENAME | TYPE NAME) | CONTAINER_NAME=IMAGE
resources | (-f FILENAME | TYPE NAME) | --limits=cpu=CPU,memory=MEM --requests=cpu=CPU,memory=MEM
serviceaccount (sa) | (-f FILENAME | TYPE NAME) | SA_NAME
kubectl run command
run | --restart=OnFailure | Job
    | --restart=OnFailure --schedule="* * * * *" | CronJob
    | --restart=Never | pod
    | --generator=run-pod/v1 | pod
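A rough sketch of how the set and run forms look in practice (deployment, container and image names are illustrative):
kubectl set env deployment/my-deploy LOG_LEVEL=debug
kubectl set image deployment/my-deploy nginx=nginx:1.19
kubectl set resources deployment/my-deploy --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi
kubectl set serviceaccount deployment/my-deploy my-sa
kubectl run my-pod --image=nginx --restart=Never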
Other commands
kubectl delete pod POD_NAME --grace-period=0 --force
kubectl annotate (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N
kubectl replace -f FILENAME
kubectl autoscale deployment "deployment name" [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU]
This automatically creates an HPA object for "deployment name".
kubectl logs POD_NAME --since=DURATION --tail=N --timestamps=true
kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--type=type]
Here:
TYPE NAME = rc | deploy | pod | svc
type = ClusterIP | NodePort | LoadBalancer
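For example, assuming a deployment named web whose container listens on port 8080 (names are illustrative):
kubectl expose deploy web --port=80 --target-port=8080 --type=NodePort --name=web-svc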
1. kubectl run NAME --image=IMAGE [--env="key=value"] [--port=port] [--labels="key1=value1,key2=value2"] [--requests='cpu=CPU,memory=MEM'] [--serviceaccount=SA] [--command -- COMMAND] [args...]
2. kubectl run NAME --image=IMAGE [--env="key=value"] [--port=port] -- [args...]
3. kubectl run NAME --image=IMAGE [--env="key=value"] [--port=port]
Reference:
https://kubernetes.io/docs/reference/kubectl/conventions/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
CKAD: Tips
1. How to run a pod on the master node?
nodeName: master
2. How to run command and args (a sketch follows)
command: ["/bin/sh", "-c", "COMMAND"]
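A minimal container sketch combining command and args (image and values are illustrative):
containers:
- name: busybox
  image: busybox
  command: ["/bin/sh", "-c"]
  args: ["echo hello; sleep 3600"]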
3. Rolling update
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
4. Inside the container
volumeMounts:
- name: VOL_NAME
  mountPath: /PATH
5. Useful command
k explain pods --recursive
6. Environment Variables
env:
- name: ENV_NAME
  valueFrom:
    configMapKeyRef:
      name: CM
      key: KEY
- name: ENV_NAME
  value: "VALUE"
envFrom:
- configMapRef:
    name: CM_NAME
The same applies for a secret (use secretKeyRef / secretRef).
7. EmptyDir volume
volumes:
- name: VOL
  emptyDir: {}
8. Ports inside container
ports:
- containerPort: PORT
9. CPU requests and limits
resources:
  requests:
    cpu: "0.2"
  limits:
    cpu: "0.5"
10. PVC at Pod
volumes:
- name: V_NAME
  persistentVolumeClaim:
    claimName: PVC_NAME
11. Security context
A. For a container
securityContext:
  capabilities:
    add:
    - SYS_TIME
    drop:
    - SYS_TIME
securityContext:
  runAsUser: UID
  runAsGroup: GID
  allowPrivilegeEscalation: true | false
  privileged: true | false
B. For a pod (fsGroup and sysctls are pod-level only)
securityContext:
  fsGroup: GID
  fsGroupChangePolicy: OnRootMismatch | Always
  sysctls:
  - name: NAME
    value: VALUE
12. Ingress
spec:
  rules:
  - host: HOST_URL
    http:
      paths:
      - path: /PATH
        backend:
          serviceName: K8S_SVC
          servicePort: PORT (the service port, not the NodePort)
For testing, HOST_URL can be specified with the -H option:
curl -H "Host: HOST_URL" http://IP_ADDRESS/PATH
13. PV
persistentVolumeReclaimPolicy: Retain | Recycle | Delete
14. netpol
Also define the port(s) of the service in the policy.
podSelector:
  matchLabels:
    KEY: VALUE
policyTypes:
- Ingress
- Egress
ingress:
- from:
  - ipBlock:
      cidr: 172.17.0.0/16
      except:
      - 172.17.1.0/24
  - namespaceSelector:
      matchLabels:
        KEY: VALUE
  - podSelector:
      matchLabels:
        KEY: VALUE
The same applies for egress; there we use 'to' instead of 'from' (see the sketch below).
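A minimal egress sketch under the same assumptions (CIDR and port are illustrative):
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
  ports:
  - protocol: TCP
    port: 5978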
15. Job
activeDeadlineSeconds
completions
parallelism
restartPolicy: Never | OnFailure (the default is Always, which is not suitable for a Job)
backoffLimit
ttlSecondsAfterFinished (by default a finished Job is never cleaned up)
A minimal Job example follows.
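A minimal Job sketch using these fields (name, image and values are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  completions: 3
  parallelism: 2
  backoffLimit: 4
  activeDeadlineSeconds: 120
  ttlSecondsAfterFinished: 60
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["/bin/sh", "-c", "echo done"]
      restartPolicy: Never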
16. Probes
A. livenessProbe
B. readinessProbe
C. startupProbe
A.
exec:
  command:
  - COMMAND1
  - COMMAND2
B.
httpGet:
  path: /PATH
  port: PORT
  httpHeaders:
  - name: Custom-Header
    value: VALUE
C.
tcpSocket:
  port: PORT
For all probes (a combined sketch follows):
initialDelaySeconds: 15
periodSeconds: 20
failureThreshold
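A minimal livenessProbe sketch inside a container spec, combining the fields above (name, image, path and values are illustrative):
containers:
- name: web
  image: nginx
  livenessProbe:
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 15
    periodSeconds: 20
    failureThreshold: 3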
17. Volumes at pod using secret and configmap
volumes:
- name: VOLUME_NAME
  configMap:
    name: CM_NAME
- name: VOLUME_NAME
  secret:
    secretName: S_NAME
18. For the 'k create' command, first specify the name of the K8s object and then the other parameters. The exception is svc: for svc, first specify the type of the svc, then its name, and then the other parameters.
19. Inside a YAML file, fields with plural names are lists, e.g. volumes, volumeMounts, containers, ports. The only exception is command: it is singular, yet a list. args is plural and is a list, so it is not an exception.
20. Find the API version with the command
k explain OBJECT --recursive | grep VERSION
21. Compared to
k get po POD_NAME -o yaml
the command below is better (it omits cluster-specific fields):
k get po POD_NAME -o yaml --export
22. To change the namespace
k config set-context --current --namespace=NAMESPACE
StatefulSet
Purpose
1. Creation order is guaranteed unless podManagementPolicy: Parallel. The default podManagementPolicy value is OrderedReady.
2. Pod names remain the same even after a restart.
3. Use volumeClaimTemplates. It is an array; the content of each element is the same as a PVC. Each pod gets its own PV.
If we delete a StatefulSet, all pods may not get deleted. First scale the StatefulSet to 0, then delete the StatefulSet. After that, manually delete the PVCs.
For a StatefulSet, hostname and pod name are the same.
kubectl patch statefulset
It can be used to update (an example follows this list):
- label
- annotation
- container image
- resource requests
- resource limits
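For example, a rough sketch of patching the container image (the StatefulSet name 'web' and the image are illustrative):
kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.19"}]'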
Two types of updateStrategy:
- RollingUpdate: updates all Pods in a StatefulSet, in reverse ordinal order
- OnDelete
During an update, if a pod that is not currently being updated fails, it is restored to its original version for that stage of the rollout. Suppose there are N pods; the update goes from pod-N down to pod-1, changing the container image from version 1 to version 2. If the update of pod-i is in progress and pod-j crashes, then pod-j is restored to version 2 if j > i, and to version 1 if j < i.
A headless service will add DNS entries:
- for each pod: "pod name"."headless service name"."namespace name".svc.cluster.local
  Here the pod IP address is not used.
- for the headless service itself: its DNS name maps to all the pods' DNS entries.
To create a headless service, specify clusterIP: None.
1. Headless service with a deployment:
The pod must set subdomain to the same value as the name of the headless service.
Also specify hostname; only then is the pod's DNS A record created. But all pods will then have the same hostname.
2. For a headless service with a StatefulSet, there is no need to specify (1) subdomain or (2) hostname.
Instead of subdomain, we specify serviceName (see the sketch below).
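A minimal headless-service-plus-StatefulSet sketch reflecting the points above (all names, image and sizes are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi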
CKAD: 8. Troubleshooting
Tools
* busybox container has shell
* DNS configuration file
* dig (for DNS lookup)
* tcpdump
* Prometheus
* Fluentd
Commands
k exec -ti "deployment name"+Tab -- /bin/sh
Logging
Log Command
k logs "pod name"
- This command can also be used to find out the names of the containers, if there are multiple containers inside the pod.
To get a live stream of logs, use the -f option, just as with tail -f.
The actual command is:
k logs "pod name" "container name"
k logs "pod name" --all-containers=true
The k logs command has some useful options (examples below):
--tail=N
--since=1h
-l for a label selector
--max-log-requests=N along with -l
-p for the previous instance
--timestamps=true to add a timestamp to each line
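For example (pod, container and label names are illustrative):
k logs my-pod -c my-container --since=1h --tail=100 --timestamps=true
k logs -l app=web --max-log-requests=10
k logs my-pod -p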
- If the application does not produce useful logs, we can deploy a sidecar container that (1) streams the application's logs to its own stdout, or (2) runs a logging agent.
The kubelet uses the Docker logging driver to write container logs to local files. These logs are retrieved by the k logs command.
Tools
Elastic Search, Logstash, Kibana Stack (ELK), Fluentd
The Fluentd agent runs as a DaemonSet. It feeds data to Elasticsearch; one can then visualize it on a Kibana dashboard.
kubelet is a non-containerized component. Its logs are found in the /var/log/journal folder and are accessed with the command journalctl -a.
Networking
- Check DNS, firewall, and general connectivity using standard Linux command-line tools.
- Check for changes to switches, routes, or other network settings, and for inter-node networking issues. Look at all recent infrastructure changes, relevant or seemingly irrelevant.
Security
- RBAC
- SELinux and AppArmor are important to check, for network-centric applications.
A typical flow: disable security and test again; refer to the tools' logs to find out which rule is violated; fix the (possibly multiple) issues and then re-enable security.
Other Points
- Check node logs for errors. Make sure enough resources are allocated.
- Check pod logs and the state of the pod.
- Troubleshoot pod DNS and the pod network.
- Check API calls between (1) controllers and (2) the kube API server.
- Inter-node network issues: DNS, firewall.
K8s troubleshooting is similar to data center troubleshooting. The main points are:
- Check pod state: pending and error states
- Look for errors in log files
- Check that resources are sufficient
Prometheus
counter, gauge, Histogram (server side), Summary (client side)
MetricsServer
It has only an in-memory DB. Heapster is now deprecated.
With MetricsServer we can use command
k top node
k top pod
Jaeger
Features:
- distributed context propagation
- transaction monitoring
- root cause analysis
Conformance Testing
Tool:
1. Sonobuoy https://sonobuoy.io/ , https://github.com/vmware-tanzu/sonobuoy
2. https://github.com/cncf/k8s-conformance/blob/master/instructions.md
3. Heptio
It makes sure that:
- a workload on one distribution works on another
- the API functions the same
- minimum functionality exists
Misc
Inside a pod, each container has its own restart count. We can check it by running k describe pod. The pod's restart count is the sum of the restart counts of all its containers.
The nslookup FQDN command checks whether a DNS query gets resolved or not. Its configuration file is /etc/resolv.conf (not resolve).
If a pod is backed by a service, it can have a DNS name of the form
"hyphen-separated IP address"."pod name"."service name"."namespace name".svc.cluster.local
If the pod is part of a deployment, the pod name is not the absolute name.
If we change the label of any pod in a deployment with the --overwrite option, it will be removed from the service and a new pod will be created. The removed pod's DNS entry is also removed, and the new pod's DNS entry is added.
To add the label key=value on a K8s object (e.g. a pod), the command is:
k label 'object type' 'object name' key=value
To overwrite the label with key=value1:
k label 'object type' 'object name' --overwrite key=value1
To remove the label with key:
k label 'object type' 'object name' key-
There is no DNS entry for a naked pod, nor for a pod that belongs to a DaemonSet.
With the wget command we can check whether DNS resolution is working (a sketch follows).
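A rough sketch of checking DNS from a temporary busybox pod (the service name and namespace are illustrative):
k run tmp --image=busybox --restart=Never -ti --rm -- nslookup my-svc.default.svc.cluster.local
k run tmp --image=busybox --restart=Never -ti --rm -- wget -qO- http://my-svc.default.svc.cluster.local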
Kube-proxy
We can check the kube-proxy logs with:
k -n kube-system logs KUBE_PROXY_POD_NAME
8.1: 11,13
Reference:
https://kubernetes.io/docs/concepts/cluster-administration/logging/
https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/
https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
https://github.com/kubernetes/kubernetes/issues
CKAD
https://github.com/dgkanatsios/CKAD-exercises
https://github.com/lucassha/CKAD-resources
CKAD: 7. Exposing Applications
ClusterIP range is defined via API server startup option --service-cluster-ip-range
NodePort range is defined in cluster configuration.
ExternalName has no port, no selector, and no endpoints. Redirection happens at the DNS level.
The 'kubectl proxy' command creates a local proxy through which ClusterIP services can be reached. Useful for troubleshooting and development work.
If we create a service of type LoadBalancer on bare metal and have not deployed any load balancer, we can still access it as a NodePort service.
Grace Period
We should add --grace-period=0 --force for immediate deletion.
Pods and deployments have a terminationGracePeriodSeconds parameter in the spec section. It cannot be modified at runtime with kubectl edit; it can be modified only at deployment time.
KubeProxy Mode
* K8s 1.0: userspace mode
* K8s 1.1: iptables introduced
* K8s 1.2: iptables became the default
  It scales to a maximum of approximately 5000 worker nodes.
* K8s 1.9: IPVS, with configurable load-balancing algorithms:
  - round-robin
  - shortest expected delay
  - least connection
  - others
The IPVS kernel module must be installed and running.
The kube-proxy mode is configured with a startup flag:
--proxy-mode=iptables | ipvs | userspace
Accessing an application with a service
k expose deploy "deploy name" --port=80 --type=NodePort
We can also expose a pod as a service, if the pod has a label:
k expose pod "pod name" --port=80 --type=NodePort
By default, targetPort is set to the same value as port.
port is part of the endpoint clusterIP:port.
targetPort is the port opened at the pod (see the sketch below).
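A minimal NodePort service sketch illustrating the relationship (names and port numbers are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # clusterIP:port
    targetPort: 8080  # port opened on the pod (containerPort)
    nodePort: 30080   # port opened on every node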
A service can point to a service in a different namespace, or to a service outside the cluster.
ExternalName is used to access a resource external to the cluster. No selector is used here.
Ingress resource
match: both host and path
rules: only HTTP rules, used to direct traffic
Use cases:
- Fan out to service
- name based hosting
- TLS
- load balancing
- expose low numbered port
Ingress Controller
Officially supported
- nginx
- GCE
Community supported
- Traefik (pronounced Traffic)
- HAProxy
Other:
- Contour
- Istio
An Ingress controller can be deployed as a DaemonSet. It has its own ServiceAccount, ClusterRole and ClusterRoleBinding. The ClusterRole includes (1) get, (2) list and (3) watch access to (1) services, (2) endpoints, (3) secrets and (4) ingress resources.
An Ingress resource has rules. These rules are loosely similar to (1) Ingress Gateway, (2) VirtualService and (3) DestinationRule in Istio.
The Ingress resource is created in the same namespace as the services and deployments it routes to.
Traefik also has a nice UI, accessible on port 8080 by default.
Questions
What is the difference between containerPort and targetPort?