Tmux is similar to GNU Screen: a very useful tool for keeping your ongoing SSH/PuTTY terminal sessions alive via attach/detach. We can have multiple sessions from a single command prompt. It is recommended for the CKA/CKAD exam, where you need to deal with multiple clusters.

Here are relevant URLs

RBAC in K8s

RBAC is all about

Can "Subject" "Verb" "Object" at "Location"

The K8s API server's authorization layer extracts the following from an incoming request:

1. HTTP method, from which it derives the verb. E.g. POST is mapped to create. VERB

2. From the URI: (1) API group and (2) resource. OBJECT

3. From the URI: the namespace. LOCATION

4. From authentication: (1) user name and (2) groups. SUBJECT

Note: When creating a new object, the URI does not contain the name of the resource. RBAC is NOT about whether a resource with a specific name can be created or not.
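For example, a hypothetical request maps to the RBAC attributes like this:

```
POST /api/v1/namespaces/myns/pods
  VERB     = create                                (from HTTP method POST)
  OBJECT   = apiGroup "" (core) + resource "pods"  (from URI)
  LOCATION = namespace "myns"                      (from URI)
  SUBJECT  = user name + groups                    (from authentication)
```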


1. Role

It is namespaced.
- OBJECT = API group + resource
Here the resources are namespaced resources.

2. RoleBinding

It has:
LOCATION = namespace 
Reference to Role = LOCATION + VERB + OBJECT
SUBJECT = Kind (user | group | service account) + name

Here, suppose a service account named sa exists in namespace myns; then it is referenced as system:serviceaccount:myns:sa.

If we want to grant to all service accounts in the namespace, do NOT use "serviceaccount=myns:*". We shall use the group: "group=system:serviceaccounts:myns".
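A minimal sketch of such a RoleBinding (all names, including the Role pod-reader, are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: myns
subjects:
- kind: ServiceAccount            # a single service account "sa"
  name: sa
  namespace: myns
- kind: Group                     # all service accounts in myns
  name: system:serviceaccounts:myns
  apiGroup: rbac.authorization.k8s.io
roleRef:                          # LOCATION + VERB + OBJECT come from here
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```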

3. ClusterRole

It does not have a namespace. It is a global role, so it can be used in all namespaces.
Resources inside a ClusterRole can be either (1) namespaced resources or (2) cluster-scoped resources, e.g. Nodes, PV.

A RoleBinding can reference a ClusterRole instead of a Role.
Since the RoleBinding has a namespace, the ClusterRole is assigned locally to that specific namespace only.

4. ClusterRoleBinding

It does not have namespace. 

It has reference to ClusterRole
It grants permission globally. 

* So ClusterRole has multiple purposes:
1. It is a global role. It can be used to grant permission to user A in namespace AA and user B in namespace BB. It is used in a RoleBinding.
2. It is for cluster-scoped resources, e.g. Nodes, PV.
3. It is used in a ClusterRoleBinding to grant global permission.
Note: namespaced permissions and non-namespaced permissions should not be mixed in a single ClusterRole.

* A ClusterRoleBinding can be used for
- cluster-scoped resources
- granting global, cluster-wide permissions.
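A sketch of a ClusterRole for cluster-scoped resources, bound globally (names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-node-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumes", "nodes"]   # cluster-scoped resources
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pv-node-reader-global
subjects:
- kind: User
  name: userA
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pv-node-reader
  apiGroup: rbac.authorization.k8s.io
```

To scope the same ClusterRole to one namespace instead, reference it from a RoleBinding in that namespace.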

Default Subjects
1. system:masters is the name of a group. SUBJECT = user | group | service account, so a group is a SUBJECT.
2. system:kube-scheduler
3. system:kube-controller-manager
4. system:kube-proxy

kubelet runs with
5. username = system:node:"node name"
group = system:nodes
Here RBAC alone is not sufficient, so the authorization mode is both (1) RBAC and (2) Node. The kubelet's client certificate is used by (1) the Node Authorizer and (2) the NodeRestriction admission plugin.

Note: In a client certificate, the username is taken from the CN (Common Name) field and the groups from the Organization (O) fields.

Default ClusterRole
1. cluster-admin is like super user
2. admin
3. edit
4. view
admin, edit and view are assigned to an individual SUBJECT for a specific namespace LOCATION.
Aggregation is used to extend the default roles admin, edit and view with rules for new CRs.

Verb Expansion

List = List, Get, Watch
Update = Update, Patch
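In a Role's rules, the expansion is spelled out explicitly; e.g. read access is usually granted as all three verbs together (fragment; resource names are illustrative):

```yaml
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "update", "patch"]
```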


HPA works on /scale subresource
controller works on /status subresource 

Two ways to centralize RBAC across clusters:
1. Define roles centrally and sync them to all local K8s clusters.
2. Use an authorization webhook that points to the RBAC of another K8s cluster.


CKA : K8s Core Concepts

The Node Controller monitors each node via a heartbeat every 5 seconds. After a grace period of 40 seconds, the controller declares the node unreachable. After 5 minutes, the pods of the unhealthy node are scheduled on other nodes.

Kubelet is not deployed by kubeadm; it must be installed on each node separately.

Without a pod, we would need to create the network between the application container and the sidecar container ourselves, create a volume and share it with both containers, and monitor both containers.

The PyCharm editor has very good support for YAML:
- auto indentation
- at the bottom status bar, the position of the current line within the complete YAML tree
- of course, syntax color highlighting.

A ReplicaSet has a selector, so it also takes care of matching pods created earlier.
A ReplicationController does not have selector->matchLabels.
A ReplicaSet selector can choose from a set of values (matchExpressions).
A ReplicationController chooses a pod only when key and value match exactly.
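Selector fragments illustrating the difference (labels are illustrative):

```yaml
# ReplicationController: equality-based only
selector:
  app: web
---
# ReplicaSet: matchLabels plus set-based matchExpressions
selector:
  matchLabels:
    app: web
  matchExpressions:
  - key: tier
    operator: In
    values: ["frontend", "cache"]
```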

After a pod of an RS reaches the ImagePullBackOff state, correcting the image name in the RS YAML file has no impact. You need to delete all pods of the RS; the RS will then create new pods using the correct image.

In the RS, these two values should be identical: selector.matchLabels and template.metadata.labels.

RS and Deployment have nearly the same YAML. A Deployment creates an RS, and the RS creates pods.

To change namespace in kubectl the command is:

k config set-context $(k config current-context) --namespace=dev

ResourceQuota is for namespace.

A Deployment should be created with the "k create deployment" command; then set the replica count with the "k scale" command.

The scheduler considers the following values from the YAML:
- affinity / anti-affinity
- nodeSelector
- taints / tolerations
- requests / limits

Imperative Commands with Kubectl

kubectl create command

service TYPE NAME --tcp=port:targetport --node-port
configmap NAME --from-file --from-literal
secret generic NAME --from-file --from-literal
rolebinding NAME --role | --clusterrole --serviceaccount
clusterrolebinding NAME --clusterrole --serviceaccount
role NAME --verb --resource
clusterrole NAME --verb --resource
cronjob NAME --image --schedule
deployment NAME --image
job NAME --image | --from=cronjob/NAME

kubectl set command

image (-f FILENAME | TYPE NAME) container=image
resources (-f FILENAME | TYPE NAME) [--limits=cpu=CPU,memory=MEM] [--requests=cpu=CPU,memory=MEM]

kubectl run command (older kubectl generators)

--restart=OnFailure creates a Job; with --schedule="* * * * *" it creates a CronJob
--restart=Never creates a pod
--generator=run-pod/v1 creates a pod
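Concrete examples of the commands above (illustrative names; not verified against a live cluster):

```shell
kubectl create service nodeport mysvc --tcp=80:8080 --node-port=30080
kubectl create configmap mycm --from-literal=key1=val1 --from-file=app.properties
kubectl create secret generic mysecret --from-literal=pass=s3cret
kubectl create role pod-reader --verb=get,list,watch --resource=pods
kubectl create rolebinding rb --role=pod-reader --serviceaccount=myns:sa
kubectl create cronjob mycj --image=busybox --schedule="*/5 * * * *"
kubectl create deployment web --image=nginx
kubectl run mypod --image=nginx --restart=Never
```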

Other commands 

kubectl delete pod POD_NAME --grace-period=0 --force 

kubectl annotate (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N 

kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N 

kubectl replace -f FILENAME 

kubectl logs --since=DURATION --tail=N --timestamps=true

kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--type=type]

TYPE NAME = rc | deploy | pod | svc
type = ClusterIP | NodePort | LoadBalancer

1. kubectl run NAME --image=image [--env="key=value"] [--port=port] [--labels="key1=value1,key2=value2"] [--requests='cpu=CPU,memory=MEM'] [--serviceaccount=SA] [--command -- COMMAND] [args...]

2. kubectl run NAME --image=image [--env="key=value"] [--port=port] -- [args...]

3. kubectl run NAME --image=image [--env="key=value"] [--port=port]


Minikube etcd

Here are my experiments with etcd and minikube.

I ran the command below:

etcdctl --endpoints="" --cacert="/var/lib/minikube/certs/etcd/ca.crt"  --cert="/var/lib/minikube/certs/apiserver-etcd-client.crt"  --key="/var/lib/minikube/certs/apiserver-etcd-client.key"  member list

I got a permission error for /var/lib/minikube/certs/apiserver-etcd-client.key.
I used sudo, but I faced a different error.
So I copied the file and changed its permissions. Then I could run the following command:

etcdctl --endpoints="" --cacert="/var/lib/minikube/certs/etcd/ca.crt"  --cert="/var/lib/minikube/certs/apiserver-etcd-client.crt"  --key="/home/manish/.etcd/apiserver-etcd-client.key"  member list

Instead of "member list", I was also able to run the commands below:

get --prefix /registry 

get / --prefix --keys-only

get --prefix /registry/events/default to dump events in the default namespace.

Similarly, we can get details of all pods by
get --prefix /registry/pods/default  

and for a specific pod
get --prefix /registry/pods/"namespace name"/"pod name" 

With the option -w json, we get JSON data, but the values are base64 encoded. We can set value v1 for key k1 using

etcdctl set k1 v1   # API version 2
etcdctl put k1 v1   # API version 3

We can add --limit=NUMBER to limit the number of output entries.


kelseyhightower Kubernetes The Hard Way

CFSSL consists of:
  • a set of packages useful for building custom TLS PKI tools
  • the cfssl program, which is the canonical command line utility using the CFSSL packages.
  • the multirootca program, which is a certificate authority server that can use multiple signing keys.
  • the mkbundle program is used to build certificate pool bundles.
  • the cfssljson program, which takes the JSON output from the cfssl and multirootca programs and writes certificates, keys, CSRs, and bundles to disk.
The cfssl command line tool takes a command to specify what operation it should carry out:
   sign             signs a certificate
   bundle           build a certificate bundle
   genkey           generate a private key and a certificate request
   gencert          generate a private key and a certificate
   serve            start the API server
   version          prints out the current version
   selfsign         generates a self-signed certificate
   print-defaults   print default configurations
Use cfssl [command] -help to find out more about a command. The version command takes no arguments.
gcloud compute networks : kubernetes-the-hard-way
gcloud compute networks subnets :
gcloud compute firewall-rules
1. tcp, udp, icmp : source-ranges,
2. tcp:22,tcp:6443,icmp : source-ranges
gcloud compute firewall-rules list
Now, create public address
gcloud compute addresses
3 K8s controller nodes
3 worker nodes
worker-0: pod-cidr
worker-1: pod-cidr
worker-2: pod-cidr
TLS Certificates
TLS certificates for the following components: 
* etcd, 
* kube-apiserver, 
* kube-controller-manager, 
* kube-scheduler, 
* kubelet, and 
* kube-proxy.
A public key infrastructure (PKI) is a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. In cryptography, a PKI is an arrangement that binds public keys with respective identities of entities (like people and organizations).
1. ca.config file
"key encipherment", 
"server auth", 
"client auth"
2. Generate CSR JSON file
Output: Private key and Certificate for CA
3. Generate various CSR JSON files. Use CA key, CA key certificate, CA config file. 
Output Private key and Certificate 
3.1. Admin
3.2. for each worker node for kubelet. 
3.3  for kube-controller-manager
3.4 kube-proxy
3.5 kube-scheduler
4. Generate K8s API server certificate. 
For -hostname argument pass
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local, K8s master node public IP, K8s all master nodes' private IP addresses. 
5. Generate Service Account pair
6. To Worker node copy (scp) the following files
7. To all master node, copy (scp) following files
client authentication configuration
The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files, also known as kubeconfigs. They enable Kubernetes clients to locate and authenticate to the Kubernetes API servers.
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
1. Generate kubeconfig file for each worker node, with user name as system:node:workerN. The output is worker-N.kubeconfig
2. Generate kubeconfig file for the kube-proxy service. The output is kube-proxy.kubeconfig
3. Generate a kubeconfig file for the kube-controller-manager service. here server is and output is kube-controller-manager.kubeconfig
4. Generate a kubeconfig file for the kube-scheduler service. here server is and output is kube-scheduler.kubeconfig
5. Generate a kubeconfig file for the admin user. here server is and output is admin.kubeconfig
To generate .kubeconfig file, we will use these three commands:
kubectl config set-cluster
kubectl config set-credentials
kubectl config set-context
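For example, the kube-proxy kubeconfig is generated roughly like this (file names follow the tutorial; the public address is a placeholder):

```shell
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```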
Files for worker nodes:
  • worker-N.kubeconfig
  • kube-proxy.kubeconfig

Files for master nodes
  • admin.kubeconfig 
  • kube-controller-manager.kubeconfig 
  • kube-scheduler.kubeconfig

Data Encryption Config and Key
1. Generate encryption key with command
head -c 32 /dev/urandom | base64
2. Generate encryption-config.yaml file using that encryption key. 
Upload it on all three master node. 
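A minimal sketch of encryption-config.yaml in the format the tutorial uses (ENCRYPTION_KEY is the base64 value from the command above):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ENCRYPTION_KEY
      - identity: {}    # fallback: unmatched resources stored unencrypted
```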
Bootstrap etcd
On each master node
1. download and install etcd
2. copy these 3 files at /etc/etcd
3. Create the /etc/systemd/system/etcd.service file. It opens ports 2379 and 2380 for etcd.
4. Start etcd service
Bootstrap k8s-controller, K8s API server, K8s Scheduler 
On each master node
1. download and install 
2. Move all binary to /usr/local/bin
3. Move the following files to /var/lib/kubernetes/
ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem   encryption-config.yaml
4. Create .service file for each of them at /etc/systemd/system/
For API server specify etcd and other parameters
  --service-cluster-ip-range= \\
  --service-node-port-range=30000-32767 \\
We can configure nginx for health checks of any service. Copy the kubernetes.default.svc.cluster.local file to /etc/nginx/sites-available/:
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;
  location /healthz {
     proxy_pass          ;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
RBAC for Kubelet Authorization
Let's set the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
1. Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
2. Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
It is sufficient to run this on any one worker node with kubectl.
K8s Frontend LoadBalancer
Bootstrapping the Kubernetes Worker Nodes
1. First install
socat conntrack ipset
The socat binary enables support for the kubectl port-forward command.
2. Turn off swap
sudo swapoff -a
3. download and install 
critools (crictl)
runc, container networking plugins, containerd, kubelet, and kube-proxy.
4. Installation directory 
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
5. Create network configuration file at /etc/cni/net.d/
6. configure containerd service
7. configure Kubelet
8. configure kube-proxy
9. Start services: containerd kubelet kube-proxy
Configuring kubectl for Remote Access
Use the following commands
kubectl config set-cluster  // --certificate-authority=ca.pem
kubectl config set-credentials // --client-certificate=admin.pem  --client-key=admin-key.pem
kubectl config set-context // --user=admin
kubectl config use-context 
Provisioning Pod Network Routes
Add a route for each node's pod CIDR, with the node's internal IP address as the next hop.
Deploying the DNS Cluster Add-on

CKAD: Tips

1. How to run on the master node?
nodeName: master

2. How to specify command and args?
command: ["/bin/sh", "-c", "COMMAND"]

3. rolling update
Rolling update YAML:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

4. Mounting a volume inside the container:
  volumeMounts:
  - name: VOL
    mountPath: /PATH

5. Useful command
k explain pods --recursive

6. Environment Variable

- name: ENV_NAME
  valueFrom:
    configMapKeyRef:
      name: CM
      key: KEY
- name: ENV_NAME
  value: "VALUE"

All keys at once, via envFrom:
  envFrom:
  - configMapRef:
      name: CM_NAME

The same applies for a secret (secretKeyRef / secretRef).

7. Empty Dir volume

- name: VOL
  emptyDir: {}

8. Ports inside container

- containerPort: AAAA

9. CPU limit

    cpu: "0.2"

10. PVC at Pod

  volumes:
  - name: V_NAME
    persistentVolumeClaim:
      claimName: PVC_NAME


A. Security Context for container

  securityContext:
    capabilities:
      add:
      - SYS_TIME
    runAsUser: UID
    runAsGroup: GID
    fsGroup: NA (pod-level only)
    fsGroupChangePolicy: NA (pod-level only)
    allowPrivilegeEscalation: true | false
    privileged: true | false

B. Security Context for pod (e.g. sysctls)

  securityContext:
    sysctls:
    - name: NAME
      value: VALUE

12. Ingress

  rules:
  - host: HOST_URL
    http:
      paths:
      - path: /PATH
        backend:
          serviceName: K8S_SVC
          servicePort: PORT (note: the service port, not NODE_PORT)

For testing, HOST_URL can be specified with the -H option:

curl -H "Host: HOST_URL" http://IP_ADDRESS/PATH

13. PV

persistentVolumeReclaimPolicy: Retain | Recycle | Delete

14. netpol
Please also define the port of the service.

  podSelector:
    matchLabels:
      KEY: VALUE
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: CIDR
    - namespaceSelector:
        matchLabels:
          KEY: VALUE
    - podSelector:
        matchLabels:
          KEY: VALUE
    ports:
    - port: PORT

The same applies for egress, but we shall use to: instead of from:.

15. Job

restartPolicy: Never | OnFailure
The default is Always, which is not suitable for a Job.
ttlSecondsAfterFinished is unset by default, so finished Jobs are never cleaned up automatically.

16. Probe

A. livenessProbe
B. readinessProbe
C. startupProbe

      exec:
        command:
        - COMMAND1
        - COMMAND2

      httpGet:
        path: /PATH
        port: PORT
        httpHeaders:
        - name: Custom-Header
          value: VALUE

      tcpSocket:
        port: PORT

For all:

      initialDelaySeconds: 15
      periodSeconds: 20

11. k explain K8S_OBJECT --recursive

12. Rolling Update

    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 1

13. Volumes at pod using secret and configmap

  volumes:
  - name: CM_VOL
    configMap:
      name: CM_NAME
  - name: S_VOL
    secret:
      secretName: S_NAME

14. For the 'k create' command, first we shall specify the kind of K8s object and then the other parameters. The exception is svc: for svc, first specify the type of svc, then its name, and then the other parameters.

15. Inside a YAML file, nearly all fields with plural names are lists, e.g. volumes, volumeMounts, containers. The notable exception is command: it is singular, yet a list. However, args is plural, no exception. (resources is plural but a map, not a list.)

16. Find API version with command

k explain OBJECT --recursive | grep VERSION

17. Compared to

k get po POD_NAME -o yaml

the command below is better:

k get po POD_NAME -o yaml --export

(Note: --export was deprecated and removed in newer kubectl versions.)

18. To change namespace

k config set-context --current --namespace=NAMESPACE