Tmux


Tmux is similar to GNU Screen. It is a very useful tool to keep your ongoing SSH/PuTTY terminal sessions alive via attach/detach. We can have multiple sessions from a single command prompt. It is recommended for the CKA/CKAD exam, where you need to deal with multiple clusters.
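A minimal command reference for the attach/detach workflow described above (the session name "cka" is an arbitrary placeholder; these need tmux installed):

```shell
# start a new named session
tmux new -s cka

# detach from the current session: press Ctrl-b, then d

# list running sessions
tmux ls

# re-attach to the session (e.g. after an SSH reconnect)
tmux attach -t cka

# kill the session when done
tmux kill-session -t cka
```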

Here are relevant URLs

http://alvinalexander.com/downloads/linux/tmux-cheat-sheet.pdf
http://alvinalexander.com/linux-unix/tmux-cheat-sheet-commands-pdf/
https://leanpub.com/the-tao-of-tmux/read
https://pragprog.com/titles/bhtmux2/
https://www.hamvocke.com/blog/a-guide-to-customizing-your-tmux-conf/
https://man7.org/linux/man-pages/man1/tmux.1.html

RBAC in K8s


RBAC is all about

Can "Subject" "Verb" "Object" at "Location"

The K8s API server's authorization layer extracts the following from the incoming request:

1. HTTP method: it derives the verb, e.g. POST maps to create. VERB

2. From the URI: (1) the API group and (2) the resource. OBJECT

3. From the URI: the namespace. LOCATION

4. From authentication: (1) the user name and (2) the groups. SUBJECT

Note: For creating a new object, the URI does not contain the name of the resource. RBAC does NOT decide whether a resource with a specific name can be created or not.
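As an illustration (a hypothetical request), the extraction above works like this:

```
POST /apis/apps/v1/namespaces/dev/deployments
  VERB     = create           (from the HTTP method POST)
  OBJECT   = apps/deployments (API group + resource, from the URI)
  LOCATION = dev              (namespace, from the URI)
  SUBJECT  = user name + groups (from authentication)
```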

YAML

1. Role

It has a namespace.
Rules:
- VERB
- OBJECT = API group + resource
Here the resources are namespaced resources.
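A minimal sketch of such a Role (the name pod-reader and namespace myns are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # placeholder name
  namespace: myns           # LOCATION
rules:
- apiGroups: [""]           # "" = core API group
  resources: ["pods"]       # OBJECT (a namespaced resource)
  verbs: ["get", "list", "watch"]   # VERB
```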

2. RoleBinding

It has:
LOCATION = namespace 
Reference to Role = LOCATION + VERB + OBJECT
SUBJECT = Kind (user | group | service account) + name

Here, suppose the service account is sa in namespace myns; then it is mentioned as
"serviceaccount=myns:sa"

If we want to specify all service accounts in the namespace, do NOT use "serviceaccount=myns:*". We shall use the group instead: "group=system:serviceaccounts:myns"
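A sketch of such a RoleBinding, showing both a single service account and the all-service-accounts group as subjects (the names read-pods, pod-reader, myns, and sa are placeholders; the referenced Role is assumed to exist):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods           # placeholder name
  namespace: myns           # LOCATION
roleRef:                    # reference to a Role (assumed to exist)
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount      # SUBJECT: one specific service account
  name: sa
  namespace: myns
- kind: Group               # SUBJECT: all service accounts in myns
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:myns
```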

3. ClusterRole

It does not have a namespace. It is a global role, so it can be used in all namespaces.
Resources inside a ClusterRole can be either (1) namespaced resources or (2) cluster-scoped resources, e.g. Nodes, PV.

A RoleBinding can reference a ClusterRole instead of a Role.
Since the RoleBinding has a namespace, the ClusterRole is assigned locally, to that specific namespace only.
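A sketch of a RoleBinding that references a ClusterRole (here the default "view" ClusterRole; the binding name, namespace dev, and user jane are placeholders). The ClusterRole's rules then apply only inside the binding's namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-in-dev         # placeholder name
  namespace: dev            # permissions limited to this namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole         # a global role, bound locally
  name: view                # default ClusterRole
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: jane                # placeholder user
```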

4. ClusterRoleBinding

It does not have namespace. 

It has reference to ClusterRole
It grants permission globally. 

* So a ClusterRole has multiple purposes:
1. It is a global role. It is used to grant permission to user A in namespace AA and user B in namespace BB. It is used in a RoleBinding.
2. It is for cluster-scoped resources, e.g. Nodes, PV.
3. It is used in a ClusterRoleBinding to grant global permission.
Note: it is good practice not to mix namespaced and non-namespaced permissions in a single ClusterRole.

* A ClusterRoleBinding can be for
- cluster-scoped resources
- granting global, cluster-wide permission.
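A sketch of a ClusterRoleBinding (the binding name, ClusterRole node-reader, and group ops-team are placeholders; note there is no namespace in metadata):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes-global   # placeholder name; no namespace here
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader         # placeholder ClusterRole for Nodes
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: ops-team            # placeholder group
```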

Default Subjects
1. system:masters is the name of a group. SUBJECT = user | group | service account, so a group is a SUBJECT.
2. system:kube-scheduler,
3. system:kube-controller-manager,
4. system:kube-proxy

kubelet runs with
5. username = system:node:"node name"
group = system:nodes
Here RBAC alone is not sufficient, so the authorization mode is both (1) RBAC and (2) Node. The kubelet's client certificate is used by (1) the Node authorizer and (2) the NodeRestriction admission plugin.

Note: In a client certificate the username is the CN (Common Name) and the groups are the Organizations (O).

Default ClusterRole
1. cluster-admin is like super user
2. admin
3. edit
4. view
admin, edit, and view are assigned to an individual SUBJECT for a specific namespace LOCATION.
ClusterRole aggregation is used to add rules for new CRs to the default roles admin, edit, and view.

Verb Expansion

List = List, Get, Watch
Update = Update, Patch

Subresource

HPA works on the /scale subresource.
Controllers work on the /status subresource.
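In RBAC rules a subresource is written as resource/subresource. A sketch granting the two subresources mentioned above (the ClusterRole name and verb choices are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scale-and-status    # placeholder name
rules:
- apiGroups: ["apps"]
  resources: ["deployments/scale"]   # /scale subresource (used by HPA)
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["pods/status"]         # /status subresource (used by controllers)
  verbs: ["get", "patch", "update"]
```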

Federation
1. Define RBAC centrally and sync it to all local K8s clusters.
2. A webhook points to the RBAC of another K8s cluster.

Reference:

https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
https://www.youtube.com/watch?v=Nw1ymxcLIDI
https://github.com/liggitt/audit2rbac

CKA : K8s Core Concepts


The Node controller monitors each node with a heartbeat every 5 seconds. After a grace period of 40 seconds, the controller declares the node unreachable. After 5 minutes, the pods of the unhealthy node are scheduled on other nodes.

Kubelet is not deployed by kubeadm; it has to be installed separately.

Without the Pod abstraction, we would need to create the network between the application container and the sidecar container ourselves, create a volume and share it with both containers, and monitor both containers.

PyCharm editor has very good support for YAML:
- Auto indentation
- At the bottom status bar, the path of the present line within the complete YAML tree
- Of course, syntax color highlighting

A ReplicaSet has a selector, so it also takes care of matching pods created earlier.
A ReplicationController does not have selector->matchLabels.
A ReplicaSet selector can choose from a set of values (set-based, via matchExpressions).
A ReplicationController chooses a pod only when key and value match exactly (equality-based).

After a pod of an RS reaches the ImagePullBackOff state, correcting the image name in the RS YAML file has no impact on existing pods. You need to delete all pods of the RS; the RS will then create new pods using the correct image.

In the RS, these two values should be identical:
spec->selector->matchLabels
spec->template->metadata->labels
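A sketch of an RS showing the two matching label sets (the name web and image nginx are placeholders):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web                 # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web              # must match the template labels below
  template:
    metadata:
      labels:
        app: web            # identical to selector.matchLabels
    spec:
      containers:
      - name: web
        image: nginx        # placeholder image
```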

RS and Deployment have nearly the same YAML. A Deployment creates an RS, and the RS creates the pods.

To change namespace in kubectl the command is:

k config set-context $(k config current-context) --namespace=dev

ResourceQuota is for namespace.

A Deployment should be created with the "k create deployment" command, then the replica count set with the "k scale" command.

The scheduler considers the following values from the YAML:
- affinity / anti-affinity
- nodeSelector
- taints / tolerations
- requests / limits
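A sketch of a Pod spec carrying most of these scheduler-relevant fields (all names, labels, taint keys, and amounts are placeholders; affinity/anti-affinity is omitted for brevity):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo          # placeholder name
spec:
  nodeSelector:
    disktype: ssd           # placeholder node label
  tolerations:
  - key: "dedicated"        # placeholder taint key
    operator: "Equal"
    value: "experimental"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx            # placeholder image
    resources:
      requests:             # what the scheduler reserves
        cpu: "250m"
        memory: "64Mi"
      limits:               # hard cap at runtime
        cpu: "500m"
        memory: "128Mi"
```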



Imperative Commands with Kubectl


kubectl create command

service <type> <name> --tcp=port:targetport [--node-port]
configmap --from-file --from-literal
secret generic --from-file --from-literal
rolebinding --clusterrole | --role, --serviceaccount
clusterrolebinding --clusterrole --serviceaccount
role --verb --resource
clusterrole --verb --resource
cronjob --image --schedule
deployment --image
job --image | --from=cronjob/<name>
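Concrete examples of the create commands above (all names, images, and ports are placeholders; these need a live cluster):

```
kubectl create service nodeport mysvc --tcp=80:8080 --node-port=30080
kubectl create configmap mycm --from-file=app.conf --from-literal=key1=val1
kubectl create secret generic mysecret --from-literal=password=s3cret
kubectl create role pod-reader --verb=get,list,watch --resource=pods
kubectl create rolebinding rb1 --role=pod-reader --serviceaccount=myns:sa
kubectl create cronjob mycj --image=busybox --schedule="*/5 * * * *" -- date
kubectl create deployment mydep --image=nginx
kubectl create job myjob --from=cronjob/mycj
```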

kubectl set command

env RESOURCE/NAME  KEY_1=VAL_1 ... KEY_N=VAL_N
image (-f FILENAME | TYPE NAME) container=image
resources (-f FILENAME | TYPE NAME) [--limits=cpu=CPU,memory=MEM] [--requests=cpu=CPU,memory=MEM]
sa (-f FILENAME | TYPE NAME) SA_NAME

kubectl run command

run --restart=OnFailure Job (with --schedule="* * * * *" it becomes a CronJob)
--restart=Never pod
--generator=run-pod/v1 pod

Other commands 

kubectl delete pod POD_NAME --grace-period=0 --force 

kubectl annotate (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N 

kubectl label [--overwrite] (-f FILENAME | TYPE NAME) KEY_1=VAL_1 ... KEY_N=VAL_N 

kubectl replace -f FILENAME 

kubectl logs --since=DURATION --tail=N --timestamps=true

kubectl expose (-f FILENAME | TYPE NAME) [--port=port] [--protocol=TCP|UDP|SCTP] [--target-port=number-or-name] [--name=name] [--type=type]

Here:
TYPE NAME = rc | deploy | pod | svc
type = ClusterIP | NodePort | LoadBalancer

1. kubectl run NAME --image=image [--env="key=value"] [--port=port] [--labels="key1=value1,key2=value2"] [--requests='cpu=CPU,memory=MEM'] [--serviceaccount=SA] [--command -- COMMAND] [args...]

2. kubectl run NAME --image=image [--env="key=value"] [--port=port] -- [args...]

3. kubectl run NAME --image=image [--env="key=value"] [--port=port]

Reference:
https://kubernetes.io/docs/reference/kubectl/conventions/
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

Minikube etcd


Here are my experiments with etcd and minikube.

I ran below command

etcdctl --endpoints="127.0.0.1:2379" --cacert="/var/lib/minikube/certs/etcd/ca.crt"  --cert="/var/lib/minikube/certs/apiserver-etcd-client.crt"  --key="/var/lib/minikube/certs/apiserver-etcd-client.key"  member list

I got a permission error for /var/lib/minikube/certs/apiserver-etcd-client.key.
I used sudo, but I faced a different error.
So I copied the file and changed its permission. Then I could run the following command:

etcdctl --endpoints="127.0.0.1:2379" --cacert="/var/lib/minikube/certs/etcd/ca.crt"  --cert="/var/lib/minikube/certs/apiserver-etcd-client.crt"  --key="/home/manish/.etcd/apiserver-etcd-client.key"  member list

Instead of "member list" I was also able to run the commands below:

get --prefix /registry 

get / --prefix --keys-only

get --prefix /registry/events/default dumps events in the default namespace.

Similarly, we can get the details of all pods with
get --prefix /registry/pods/default

and for a specific pod
get --prefix /registry/pods/"namespace name"/"pod name"

With the option -w json we get JSON data, but the values are base64 encoded. We can set value v1 for key k1 using

ETCDCTL_API=2 etcdctl set k1 v1   # etcd API version 2
ETCDCTL_API=3 etcdctl put k1 v1   # etcd API version 3
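The base64 values from -w json can be decoded with standard tools; note that for K8s objects the decoded bytes are a binary protobuf encoding, not plain text. A generic decoding example (djE= is the base64 encoding of the string v1):

```shell
# decode a base64-encoded value back to its original string
echo 'djE=' | base64 -d    # prints: v1
```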

We can add --limit=<number> to limit the number of output entries.

Reference: 

https://medium.com/better-programming/a-closer-look-at-etcd-the-brain-of-a-kubernetes-cluster-788c8ea759a5