CKA 10: Install K8s the Hard Way
Production-grade cluster:
- 5000 nodes
- 150,000 pods
- 300,000 containers
- 100 pods per node
A node can have up to 36 vCPUs and 60 GB of memory.
Storage:
use high-performance SSDs
use network-based storage if there are multiple concurrent connections
use PVs and PVCs if shared access is needed among multiple pods (see the sketch below)
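For example, a claim for shared (ReadWriteMany) access that several pods can mount; a minimal sketch, assuming an RWX-capable storage class named nfs exists in the cluster:
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: shared-data              # hypothetical name
  spec:
    accessModes:
    - ReadWriteMany                # shared access among multiple pods
    storageClassName: nfs          # assumption: the cluster offers an RWX class
    resources:
      requests:
        storage: 10Gi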
Assign a label to each node as per its storage, then use a nodeSelector to place applications on a specific node (see the sketch after the next point)
* kubeadm adds a taint to the master node, so applications are not scheduled on it
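A minimal sketch of both techniques (node, pod, and label names are hypothetical; on newer releases the control-plane taint key is node-role.kubernetes.io/control-plane):
  kubectl label node node01 disktype=ssd    # label the node by its storage type

  apiVersion: v1
  kind: Pod
  metadata:
    name: fast-app                          # hypothetical pod
  spec:
    nodeSelector:
      disktype: ssd                         # schedule only onto nodes with this label
    containers:
    - name: app
      image: nginx

  kubectl describe node master | grep Taints
  # Taints: node-role.kubernetes.io/master:NoSchedule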
Turnkey solutions
kops is a tool to install K8s on AWS
kubeadm for on-prem
OpenShift
BOSH is a tool for K8s on Cloud Foundry Container Runtime
VMware Cloud PKS
Vagrant provides a set of useful scripts
Hosted solutions (managed solutions)
GKE for GCP
AKS for Azure
OpenShift Online
EKS on AWS
HA
* If the master fails:
- pods will not be recreated, even if they are part of a ReplicaSet
- we cannot use kubectl
* API server: active-active mode, behind a load balancer
* Controller Manager and Scheduler: active-standby mode, decided by a leader-election process with these parameters:
-- leader-elect set to true
-- leader-elect-lease-duration of 15 seconds
-- leader-elect-renew-deadline of 10 seconds
-- leader-elect-retry-period of 2 seconds
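For example, on the kube-controller-manager service these defaults look like this (a sketch; the same flags apply to kube-scheduler):
  kube-controller-manager \
    --leader-elect=true \
    --leader-elect-lease-duration=15s \
    --leader-elect-renew-deadline=10s \
    --leader-elect-retry-period=2s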
* etcd
- etcd running on the master nodes themselves is called the stacked control-plane topology
- etcd running outside the master nodes is called the external etcd topology
Writes are handled by the leader only, which then updates all of its followers. A write is complete only once it is done on a majority of nodes. That majority is the quorum, floor(N/2) + 1, and fault tolerance = N - quorum. For N = 1 or N = 2 the fault tolerance is 0, so the recommended minimum is N = 3. Also pick N odd: with an even N the cluster can lose quorum under network segmentation. N = 5 is sufficient for most clusters.
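Worked out for small clusters (quorum = floor(N/2) + 1):
  N   Quorum   Fault tolerance
  1   1        0
  2   2        0
  3   2        1
  4   3        1
  5   3        2
  6   4        2
  7   4        3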
etcd achieves distributed consensus using the Raft protocol.
https://github.com/mmumshad/kubernetes-the-hard-way
The ssh-keygen command generates the id_rsa and id_rsa.pub key pair. The content of id_rsa.pub should be copied into the peer host's 'authorized_keys' file.
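For example (user and host names are hypothetical):
  ssh-keygen -t rsa                 # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
  ssh-copy-id user@node01           # appends id_rsa.pub to node01's ~/.ssh/authorized_keys
  # or, without ssh-copy-id:
  cat ~/.ssh/id_rsa.pub | ssh user@node01 'cat >> ~/.ssh/authorized_keys'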
secure cluster communication
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens
Components for automation around ServiceAccounts (SA):
1. A ServiceAccount admission controller: it adds the token secret to pods
2. A Token controller (part of the Controller Manager): it creates/deletes token secrets as SAs are created/deleted. It uses the option --service-account-private-key-file, and the API server uses the option --service-account-key-file. The secret type is kubernetes.io/service-account-token
3. A ServiceAccount controller: it ensures a default SA exists in every namespace
ref: https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/
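A sketch of how the two flags pair up (file paths are assumptions):
  # the controller manager signs SA tokens with the private key
  kube-controller-manager --service-account-private-key-file=/var/lib/kubernetes/service-account.key
  # the API server verifies those tokens with the matching public key
  kube-apiserver --service-account-key-file=/var/lib/kubernetes/service-account.crt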
etcd
etcd uses peer.key and peer.cert for multi-node HA peer traffic on port 2380, and client.key and client.cert for the API server on port 2379. We shall set --client-cert-auth=true and also provide the CA with --trusted-ca-file=etcd.ca
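A minimal sketch of the matching etcd flags (file names follow the ones above):
  etcd \
    --listen-client-urls=https://0.0.0.0:2379 \
    --listen-peer-urls=https://0.0.0.0:2380 \
    --client-cert-auth=true \
    --trusted-ca-file=etcd.ca \
    --cert-file=client.cert --key-file=client.key \
    --peer-client-cert-auth=true \
    --peer-trusted-ca-file=etcd.ca \
    --peer-cert-file=peer.cert --peer-key-file=peer.key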
Command to check the status of all K8s control-plane components:
k get componentstatuses
Worker node
1. ca.crt and ca.key already exist on the master node
Steps 2-5 below can be automated with TLS bootstrapping (set up as described after this list):
2. create a key for each kubelet
3. create a CSR with a unique CN for each kubelet (e.g. system:node:node01)
4. sign the certificate with the CA (see the openssl sketch below)
5. distribute the certificate to the node
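A sketch of the manual flow with openssl (the node name node01 is hypothetical):
  openssl genrsa -out node01.key 2048
  openssl req -new -key node01.key -subj "/CN=system:node:node01/O=system:nodes" -out node01.csr
  openssl x509 -req -in node01.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node01.crt -days 365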
Setting up TLS bootstrapping:
1. run kube-apiserver with --enable-bootstrap-token-auth=true
2. kube-controller-manager signs the certificates. It is already configured with ca.crt and ca.key; we shall pass --controllers=*,bootstrapsigner,tokencleaner to its service
3. create a secret of type bootstrap.kubernetes.io/token. The token format is abcdef.0123456789abcdef, where abcdef is the public id and 0123456789abcdef is the private part. The secret name must be bootstrap-token-PUBLICID
4. create cluster-role-binding create-csrs-for-bootstrapping for group=system:bootstrappers
5. create cluster-role-binding auto-approve-csrs-for-group for group=system:bootstrappers (for the first certificate)
6. create cluster-role-binding auto-approve-csrs-for-nodes for group=system:nodes (for renewal)
7. create a bootstrap-kubeconfig with 4 commands: set-cluster, set-credentials, set-context, use-context (see the sketch below)
8. create a YAML of kind: KubeletConfiguration with the cert and private key for TLS
9. run the kubelet service with --bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig"
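A sketch of steps 3 and 7 (the server address, file paths, and token value are assumptions):
  apiVersion: v1
  kind: Secret
  metadata:
    name: bootstrap-token-abcdef            # name must be bootstrap-token-<public id>
    namespace: kube-system
  type: bootstrap.kubernetes.io/token
  stringData:
    token-id: abcdef
    token-secret: 0123456789abcdef
    usage-bootstrap-authentication: "true"
    usage-bootstrap-signing: "true"

  kubectl config set-cluster bootstrap --server=https://MASTER_IP:6443 --certificate-authority=ca.crt --kubeconfig=bootstrap-kubeconfig
  kubectl config set-credentials kubelet-bootstrap --token=abcdef.0123456789abcdef --kubeconfig=bootstrap-kubeconfig
  kubectl config set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap --kubeconfig=bootstrap-kubeconfig
  kubectl config use-context bootstrap --kubeconfig=bootstrap-kubeconfig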
Vagrant file: https://gist.github.com/surajssd/71892b7a9c5c2cb175fd050cee45d495