Minikube etcd

Here are my experiments with etcd and minikube.

I ran the command below:

etcdctl --endpoints="" --cacert="/var/lib/minikube/certs/etcd/ca.crt"  --cert="/var/lib/minikube/certs/apiserver-etcd-client.crt"  --key="/var/lib/minikube/certs/apiserver-etcd-client.key"  member list

I got a permission error for /var/lib/minikube/certs/apiserver-etcd-client.key.
I used sudo, but then I faced a different error.
So I copied the key file and changed its permissions. Then I could run the following command:

etcdctl --endpoints="" --cacert="/var/lib/minikube/certs/etcd/ca.crt"  --cert="/var/lib/minikube/certs/apiserver-etcd-client.crt"  --key="/home/manish/.etcd/apiserver-etcd-client.key"  member list

Instead of "member list" i could also able to run below commands

get --prefix /registry 

get / --prefix --keys-only

get --prefix /registry/events/default to dump events in the default namespace.

Similarly, we can get details of all pods:
get --prefix /registry/pods/default  

and for a specific pod:
get --prefix /registry/pods/"namespace name"/"pod name" 

With the -w json option, we get JSON data, but keys and values are base64 encoded. We can set value v1 for key k1 using:

etcdctl set k1 v1   # etcd API version 2
etcdctl put k1 v1   # etcd API version 3

We can add --limit="number" to limit the number of output entries.
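Since keys and values come back base64 encoded with -w json, they can be decoded with base64 -d. A minimal sketch (the sample strings below are simply the encodings of "k1"/"v1", shown for illustration; real output comes from etcdctl):

```shell
# "-w json" output looks like: {"kvs":[{"key":"azE=","value":"djE=", ...}]}
# Decode the base64-encoded key and value with coreutils:
printf 'azE=' | base64 -d; echo    # prints: k1
printf 'djE=' | base64 -d; echo    # prints: v1

# With jq installed, decode every key from real etcdctl output:
# etcdctl get --prefix /registry --keys-only -w json | jq -r '.kvs[].key | @base64d'
```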


Kelsey Hightower: Kubernetes The Hard Way

CFSSL consists of:
  • a set of packages useful for building custom TLS PKI tools
  • the cfssl program, which is the canonical command line utility using the CFSSL packages.
  • the multirootca program, which is a certificate authority server that can use multiple signing keys.
  • the mkbundle program, which is used to build certificate pool bundles.
  • the cfssljson program, which takes the JSON output from the cfssl and multirootca programs and writes certificates, keys, CSRs, and bundles to disk.
The cfssl command line tool takes a command to specify what operation it should carry out:
   sign             signs a certificate
   bundle           build a certificate bundle
   genkey           generate a private key and a certificate request
   gencert          generate a private key and a certificate
   serve            start the API server
   version          prints out the current version
   selfsign         generates a self-signed certificate
   print-defaults   print default configurations
Use cfssl [command] -help to find out more about a command. The version command takes no arguments.
gcloud compute networks : kubernetes-the-hard-way
gcloud compute networks subnets :
gcloud compute firewall-rules
1. tcp, udp, icmp : source-ranges,
2. tcp:22,tcp:6443,icmp : source-ranges
gcloud compute firewall-rules list
Now, create a public address
gcloud compute addresses
3 K8s controllers
3 worker nodes:
worker-0: pod-cidr
worker-1: pod-cidr
worker-2: pod-cidr
TLS Certificates
TLS certificates for the following components: 
* etcd, 
* kube-apiserver, 
* kube-controller-manager, 
* kube-scheduler, 
* kubelet, and 
* kube-proxy.
A public key infrastructure (PKI) is a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. In cryptography, a PKI is an arrangement that binds public keys with respective identities of entities (like people and organizations).
1. ca.config file
"key encipherment", 
"server auth", 
"client auth"
2. Generate CSR JSON file
Output: Private key and Certificate for CA
3. Generate various CSR JSON files. Use the CA key, the CA certificate, and the CA config file.
Output: private key and certificate for each of:
3.1 Admin
3.2 Kubelet: one for each worker node
3.3 kube-controller-manager
3.4 kube-proxy
3.5 kube-scheduler
4. Generate the K8s API server certificate.
For the -hostname argument, pass:
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local, plus the K8s master node public IP and the private IP addresses of all master nodes.
5. Generate the service account key pair.
6. Copy (scp) the following files to the worker nodes.
7. Copy (scp) the following files to all master nodes.
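The CA CSR JSON from step 2 typically looks like this in Kubernetes the Hard Way (a sketch; the values in names are placeholders you can change):

```json
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
```

It is fed to cfssl gencert -initca ca-csr.json | cfssljson -bare ca, which writes ca.pem and ca-key.pem.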
client authentication configuration
The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files, also known as kubeconfigs. They enable Kubernetes clients to locate and authenticate to the Kubernetes API servers.
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
1. Generate a kubeconfig file for each worker node, with the user name system:node:workerN. The output is worker-N.kubeconfig.
2. Generate a kubeconfig file for the kube-proxy service. The output is kube-proxy.kubeconfig.
3. Generate a kubeconfig file for the kube-controller-manager service. The output is kube-controller-manager.kubeconfig.
4. Generate a kubeconfig file for the kube-scheduler service. The output is kube-scheduler.kubeconfig.
5. Generate a kubeconfig file for the admin user. The output is admin.kubeconfig.
To generate .kubeconfig file, we will use these three commands:
kubectl config set-cluster
kubectl config set-credentials
kubectl config set-context
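The three set-* commands above build a kubeconfig of roughly this shape (a sketch with placeholder values; real files embed base64-encoded certificate data):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: kubernetes-the-hard-way
    cluster:
      certificate-authority-data: BASE64_CA_PEM    # kubectl config set-cluster
      server: https://KUBERNETES_ADDRESS:6443
users:
  - name: system:kube-proxy
    user:
      client-certificate-data: BASE64_CLIENT_PEM   # kubectl config set-credentials
      client-key-data: BASE64_CLIENT_KEY
contexts:
  - name: default
    context:                                       # kubectl config set-context
      cluster: kubernetes-the-hard-way
      user: system:kube-proxy
current-context: default
```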
Files for worker nodes:
  • worker-N.kubeconfig
  • kube-proxy.kubeconfig

Files for master nodes
  • admin.kubeconfig 
  • kube-controller-manager.kubeconfig 
  • kube-scheduler.kubeconfig

Data Encryption Config and Key
1. Generate encryption key with command
head -c 32 /dev/urandom | base64
2. Generate the encryption-config.yaml file using that encryption key.
Upload it to all three master nodes.
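The encryption-config.yaml in Kubernetes the Hard Way has this shape (a sketch; ENCRYPTION_KEY stands for the value produced by the head/base64 command above):

```yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ENCRYPTION_KEY   # from: head -c 32 /dev/urandom | base64
      - identity: {}
```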
Bootstrap etcd
On each master node:
1. Download and install etcd.
2. Copy these 3 files to /etc/etcd.
3. Create the /etc/systemd/system/etcd.service file. It opens ports 2379 and 2380 for etcd.
4. Start the etcd service.
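The etcd.service unit from step 3 looks roughly like this (an abridged sketch following the Kubernetes the Hard Way layout; ETCD_NAME, INTERNAL_IP, and ETCD_CLUSTER are placeholders):

```ini
[Unit]
Description=etcd

[Service]
ExecStart=/usr/local/bin/etcd \
  --name ETCD_NAME \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --client-cert-auth \
  --peer-client-cert-auth \
  --listen-peer-urls https://INTERNAL_IP:2380 \
  --listen-client-urls https://INTERNAL_IP:2379,https://127.0.0.1:2379 \
  --advertise-client-urls https://INTERNAL_IP:2379 \
  --initial-cluster ETCD_CLUSTER \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```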
Bootstrap k8s-controller, K8s API server, K8s Scheduler 
On each master node:
1. Download and install.
2. Move all binaries to /usr/local/bin.
3. Move the following files to /var/lib/kubernetes/:
ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem encryption-config.yaml
4. Create a .service file for each of them at /etc/systemd/system/.
For API server specify etcd and other parameters
  --service-cluster-ip-range= \\
  --service-node-port-range=30000-32767 \\
We can configure nginx to health-check any service. Copy the kubernetes.default.svc.cluster.local site file to /etc/nginx/sites-available/:
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;
  location /healthz {
     proxy_pass          ;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
RBAC for Kubelet Authorization
Let's set the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
1. Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
2. Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:
It is sufficient to run these kubectl commands on any one controller node.
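The ClusterRole and ClusterRoleBinding from steps 1 and 2 can be sketched as follows (based on the Kubernetes the Hard Way manifests; the user name kubernetes matches the CN in the API server's kubelet client certificate):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
```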
K8s Frontend LoadBalancer
Bootstrapping the Kubernetes Worker Nodes
1. First install
socat conntrack ipset
The socat binary enables support for the kubectl port-forward command.
2. Turn off swap
sudo swapoff -a
3. Download and install:
cri-tools (crictl),
runc, container networking plugins, containerd, kubelet, and kube-proxy.
4. Create the installation directories:
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
5. Create network configuration file at /etc/cni/net.d/
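The bridge network configuration from step 5 typically looks like this (a sketch following the Kubernetes the Hard Way 10-bridge.conf; POD_CIDR is the node's pod CIDR):

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cnio0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [[{"subnet": "POD_CIDR"}]],
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
```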
6. configure containerd service
7. configure Kubelet
8. configure kube-proxy
9. Start services: containerd kubelet kube-proxy
Configuring kubectl for Remote Access
Use the following commands
kubectl config set-cluster  // --certificate-authority=ca.pem
kubectl config set-credentials // --client-certificate=admin.pem  --client-key=admin-key.pem
kubectl config set-context // --user=admin
kubectl config use-context 
Provisioning Pod Network Routes
For each node, add a route whose destination is that node's pod CIDR and whose next hop is the node's internal IP address.
Deploying the DNS Cluster Add-on

CKAD: Tips

1. how to run on master node?
nodeName: master

2. How to run command and args
command: ["/bin/sh", "-c", "COMMAND"]

3. rolling update
Rolling update YAML:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1

4. Mounting a volume inside a container
     volumeMounts:
       - name: VOL
         mountPath: /PATH

5. Useful command
k explain pods --recursive

6. Environment Variables

env:
  - name: ENV_NAME
    valueFrom:
      configMapKeyRef:
        name: CM
        key: KEY
  - name: ENV_NAME
    value: "VALUE"

To load a whole ConfigMap:

envFrom:
  - configMapRef:
      name: CM_NAME

The same applies for secrets (secretKeyRef / secretRef).

7. Empty Dir volume

- name: VOL
  emptyDir: {}

8. Ports inside container

- containerPort: AAAA

9. CPU limit

    cpu: "0.2"

10. PVC in a Pod

volumes:
  - name: V_NAME
    persistentVolumeClaim:
      claimName: PVC_NAME


A. Security Context for container

    securityContext:
      capabilities:
        add:
          - SYS_TIME
      runAsUser: UID
      runAsGroup: GID
      allowPrivilegeEscalation: true | false
      privileged: true | false

fsGroup and fsGroupChangePolicy are not applicable (NA) at the container level; they belong to the pod-level security context.

B. Security Context for pod (e.g. sysctls, which take name/value pairs)

    securityContext:
      sysctls:
        - name: NAME
          value: VALUE

B. Security Context for pod

        - name: NAME
          value: VALUE

12. Ingress

  rules:
    - host: HOST_URL
      http:
        paths:
          - path: /PATH
            backend:
              serviceName: K8S_SVC
              servicePort: PORT (note: the service port, not the NODE_PORT)

For testing, the host can be specified with the -H option:

curl -H "Host: HOST_URL" http://IP_ADDRESS/PATH

13. PV

persistentVolumeReclaimPolicy: Retain | Recycle | Delete

14. netpol
Remember to also define the port of the service.

      KEY: VALUE
  - Ingress
  - Egress
  - from:
    - ipBlock:
    - namespaceSelector:
          KEY: VALUE
    - podSelector:
          KEY: VALUE

For egress, the same structure applies, but we use to instead of from.
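Putting the fragments above together, a full NetworkPolicy can be sketched like this (names, labels, CIDR, and ports are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: NETPOL_NAME
spec:
  podSelector:
    matchLabels:
      KEY: VALUE
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24
        - namespaceSelector:
            matchLabels:
              KEY: VALUE
        - podSelector:
            matchLabels:
              KEY: VALUE
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              KEY: VALUE
      ports:
        - protocol: TCP
          port: 5978
```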

15 Job

restartPolicy: Never | OnFailure
The default is Always, which is not suitable for a Job.
ttlSecondsAfterFinished: if unset, finished Jobs are by default never cleaned up.
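A minimal Job sketch showing both fields (the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: JOB_NAME
spec:
  ttlSecondsAfterFinished: 100   # if unset, the finished Job is never auto-deleted
  template:
    spec:
      restartPolicy: Never       # Always (the usual pod default) is not valid for a Job
      containers:
        - name: main
          image: busybox
          command: ["/bin/sh", "-c", "COMMAND"]
```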

16 Probe

A livenessProbe
B readinessProbe
C startupProbe

    - COMMAND1
    - COMMAND2


        path: /PATH
        port: PORT
        - name: Custom-Header
          value: VALUE


        port: PORT

For all:

      initialDelaySeconds: 15
      periodSeconds: 20
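The probe fragments above fit together like this inside a container spec (PATH, PORT, and the commands are placeholders; each probe type can use exec, httpGet, or tcpSocket):

```yaml
livenessProbe:
  exec:
    command:
      - COMMAND1
      - COMMAND2
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /PATH
    port: PORT
    httpHeaders:
      - name: Custom-Header
        value: VALUE
startupProbe:
  tcpSocket:
    port: PORT
```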

11. k explain K8S_OBJECT --recursive

12. Rolling Update

      maxSurge: 1
      maxUnavailable: 1

    type: RollingUpdate

13. Volumes at pod using secret and configmap

    name: CM_NAME

    secretName: S_NAME

14. For the 'k create' command, first specify the name of the K8s object and then the other parameters. The exception is svc: for a svc, first specify the type of the service, then its name, and then the other parameters.

15. Inside a YAML file, almost all types/parameters with plural names are lists, e.g. volumes, volumeMounts, containers, ports, etc. One exception is command: it is singular, yet a list (args is plural and a list, so no exception there). Note that resources is plural but a map (requests/limits), not a list.

16. Find API version with command

k explain OBJECT --recursive | grep VERSION

17. compare to 

k get po POD_NAME -o yaml 

the command below was considered better (note: --export was deprecated and removed in kubectl 1.18+):

k get po POD_NAME -o yaml --export

18. To change namespace

k config set-context --current --namespace=NAMESPACE



StatefulSet

1. Creation order is guaranteed unless podManagementPolicy: Parallel is set.
2. The pod name remains the same even after a restart.
3. Use volumeClaimTemplates. It is an array; each element has the same content as a PVC. Each pod gets its own PV.

A headless service will add DNS entries:
- for each pod: "pod name"."headless service name"."namespace name".svc.cluster.local
  (the pod's IP address is not used in this name)
- for the headless service itself: its DNS name maps to all of the pods' DNS entries.

To create a headless service, specify clusterIP: None.

1. Headless service with a Deployment:
The pod spec must set subdomain to the same value as the headless service's name.
Also specify hostname; only then is the pod's DNS A record created. But all pods will then have the same hostname.

2. With a StatefulSet, there is no need to specify (1) subdomain or (2) hostname.
Instead of subdomain, we specify serviceName in the StatefulSet spec.
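A sketch of a headless service paired with a StatefulSet (all names and labels are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: HEADLESS_SVC
spec:
  clusterIP: None        # this is what makes the service headless
  selector:
    app: APP_LABEL
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: STS_NAME
spec:
  serviceName: HEADLESS_SVC   # instead of subdomain/hostname on the pod spec
  replicas: 3
  selector:
    matchLabels:
      app: APP_LABEL
  template:
    metadata:
      labels:
        app: APP_LABEL
    spec:
      containers:
        - name: main
          image: IMAGE
```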

CKAD : 8.Troubleshooting

* busybox container has shell
* DNS configuration file
* dig (for DNS lookup)
* tcpdump
* Prometheus
* Fluentd

k exec -ti "deployment name"+Tab -- /bin/sh

Log Command
k logs "pod name"
- This command can also be used to find out the names of the containers, if there are multiple containers inside the pod.
To get a live stream of logs, use the -f option, just as we add -f to tail: tail -f.
The actual command is: 
k logs "pod name" "container name"
k logs "pod name" --all-containers=true
The k logs command has some useful options:
-l for a selector
--max-log-requests=N along with -l
-p for the previous instance
--timestamps=true to add a timestamp to each line.
- If an application does not write logs, we can deploy a sidecar container that generates them: (1) stream the application's logs to the sidecar's own stdout, OR (2) run a logging agent.
The kubelet uses the Docker logging driver to write container logs to local files. These logs are then retrieved by the k logs command.
Elasticsearch, Logstash, Kibana (ELK) stack; Fluentd
The Fluentd agent runs as a DaemonSet. It feeds data to Elasticsearch; one can then visualize it on a Kibana dashboard.

kubelet is a non-container component. Its logs are found in the /var/log/journal folder and are accessed with the command journalctl -a.

- DNS, firewall, general connectivity: use standard Linux command-line tools.
- Changes at switches, routers, or other network settings; inter-node networking. Look at all recent infrastructure changes, relevant or not.

- SELinux and AppArmor are important to check for network-centric applications:
  - Disable security and test again.
  - Refer to the tools' logs to find rule violations.
  - Fix the (possibly multiple) issues, then re-enable security.

Other Points
- Check node logs for errors; make sure enough resources are allocated.
- Check pod logs and the state of the pod.
- Troubleshoot pod DNS and the pod network.
- Check API calls between (1) the controllers and (2) the kube API server.
- Inter-node network issues: DNS, firewall.

K8s troubleshooting is similar to data center troubleshooting. The main points:
- check pod state: pending and error states
- check for errors in log files
- check that enough resources are available

Prometheus metric types: counter, gauge, histogram (server side), summary (client side)

The Metrics Server has only an in-memory DB. Heapster is now deprecated.
With MetricsServer we can use command
k top node
k top pod

- distributed context propagation
- transaction monitoring
- root cause analysis

Conformance Testing
1. Sonobuoy (from Heptio)

It makes sure that:
- a workload on one distribution works on another,
- the API functions the same, and
- minimum functionality exists.


Inside a pod, each container has its own restart count. We can check it by running k describe pod. The pod's restart count is the sum of the restart counts of all its containers.

The nslookup FQDN command checks whether a DNS query gets resolved. Its configuration file is /etc/resolv.conf (note: resolv, not resolve).

If a pod is exposed by a service, it can have a DNS name of the form:
"hyphen-separated IP address"."service name"."namespace name".svc.cluster.local

If a pod is part of a deployment, then the pod name is not a stable, absolute name.
If we change the label of any pod in a deployment with the --overwrite option, it is removed from the service and a new pod is created. The removed pod's DNS entry is also removed, and the new pod's DNS entry is added.

To add label key=value on k8s object (e.g. pod) command is:
k label 'object type' 'object name' key=value

To overwrite label key=value1 
k label 'object type' 'object name' --overwrite key=value1

To remove label with key
k label 'object type' 'object name' key-

There is no DNS entry for a naked pod, and no entry for a pod that belongs to a DaemonSet.

With the wget command, we can check whether DNS resolution is working.


We can check the kube-proxy logs with:
k -n kube-system logs "kube-proxy pod name"

8.1: 11,13