K8S meetup

Last Saturday, 19th Jan 2019, I attended an interesting meetup named "Joint Meetup with Kubernetes & OpenShift + CloudNativeCon Meetup Group of Bangalore", jointly hosted by (1) Bangalore CNCF Meetup, (2) Docker Bangalore and (3) Kubernetes & OpenShift India Community meetup groups.

Krishna Kumar (Huawei) shared his experience from the recent KubeCon event. He showed the book "The Illustrated Children's Guide to Kubernetes" (the Phippy story) and asked who knew it. Surprisingly, very few were aware of it. The book was created just to spice up KubeCon events. He shared some numbers about the event: 

* 8000 people attended in person
* 2000 people attended over live stream
* 50+ announcements; he covered the major ones in his slide deck
* 250+ exhibitors
* 47 hands-on sessions, called pre-conf and post-conf sessions
* Many people wrote excellent KubeCon recaps on their blogs; Krishna shared the top 15 recaps in his slide deck

He noticed plenty of job postings; all organisations are hiring k8s experts. 

Everyone says K8S is complex. However, OpenShift is making it easier. 

At CNCF, every project belongs to one of three categories: (1) Graduated, (2) Incubating and (3) Sandbox. 

He talked about some of the sessions he could attend: (1) Operator Framework: adding domain knowledge about how to bring up a specific application using a custom controller in k8s. (2) The Helm session ran 2.5 hours, of which 1.5 hours was just questions and answers! (3) Kustomize: one YAML referring to multiple YAML files. (4) Application Special Interest Group (App SIG). (5) CNAB (Cloud Native Application Bundles). 
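To make the Kustomize idea concrete, here is a minimal sketch (the file names deployment.yaml and service.yaml, the dev- prefix and the env label are all hypothetical): a single kustomization.yaml refers to multiple YAML files and layers common changes on top of them.

```yaml
# kustomization.yaml: one YAML referring to multiple YAML files
resources:
  - deployment.yaml   # hypothetical resource files
  - service.yaml
namePrefix: dev-      # prepended to every resource name
commonLabels:
  env: dev            # hypothetical label applied to all resources
```

It is applied with "kubectl apply -k ." or rendered with "kustomize build .".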

He also discussed k8s application deployment: the several options available and a comparison among a few of them. 

The "k8s day india" event is scheduled at Infosys on March 23. 

Reference: https://www.slideshare.net/mKrishnaKumar1/kubecon-seattle-2018-recap-application-deployment-aspects


Rajakavitha Kodhandapani talked about "Special Interest Group - Docs". Everyone wants to use k8s and develop tools for k8s; however, very few contribute to the k8s documentation. She motivated all to contribute. 

Reference: https://www.slideshare.net/RajakavithaKodhandap/kubecon-2018seattle

Abhishek Kumar discussed Helm, the Airflow scheduler, the Flower service, etc. A Helm Chart bundles multiple K8s resources into a single logical deployment unit. 

Key concepts about Helm are: 

1. chart
2. repository
3. release

Few commands:

helm create "mychart"
It creates a directory tree. Keep each K8s component in its respective folder. 

helm install --debug --dry-run "mychart"
It just shows the final templates after substituting values. 

"helm search" searches the public repository at URL: https://kubernetes-charts-incubator.storage.googleapis.com/

helm list
helm delete
helm list --deleted
helm rollback "name" "version number"
helm fetch // just downloads the chart
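To make "helm create" concrete, here is a sketch of the directory tree it lays out, plus a template fragment showing how values get substituted (the service port and names below are illustrative, not what helm generates verbatim):

```
mychart/
  Chart.yaml          # chart metadata (name, version)
  values.yaml         # default values
  templates/          # K8s manifests written as Go templates
    deployment.yaml
    service.yaml
```

```yaml
# templates/service.yaml (fragment): placeholders are filled from values.yaml,
# which is exactly what "helm install --debug --dry-run" lets you preview
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-svc
spec:
  ports:
    - port: {{ .Values.service.port }}   # e.g. 80, set in values.yaml
```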


Suraj Deshmukh from Kinvolk talked about "K8S security updates".

He talked about Service Accounts, and then about recent changes/updates like: 

1. Now every pod gets a different service account token, valid only for a fixed, limited duration. 
2. New APIs
2.1. TokenRequest
2.2. TokenRequestProjection
2.3. BoundServiceAccountTokenVolume
3. RuntimeClass: now we can change Docker to rkt, etc. It is still at the "Work in Progress" stage. 
4. New API
PodSpec gets a runtimeClassName field.
5. NodeRestriction
Earlier, it was possible to modify the kubelet config from a pod. Now a node can see only the secrets of its own pods. 
6. Encrypting secret data
aescbc, secretbox, aesgcm, kms
7. Dynamic audit backend with the new AuditSink API
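As a sketch of the token changes above: a pod can request a short-lived, audience-bound service account token through a projected volume. The pod name and audience below are hypothetical; the field names are from the K8s serviceAccountToken projection API.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo                      # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: sa-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: sa-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: my-service          # hypothetical audience
          expirationSeconds: 3600       # token is valid for a limited duration
```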

Now "Bug bounty" program is coming to K8s

He insist all to join slack.k8s.io and channels #in-dev and #in-users




Aditya Konarde from RedHat gave updates from an SRE (Site Reliability Engineering) perspective.


* K8s is now mature; there is talk of LTS
* Observability and life cycle are important
* Many vendors offer managed k8s clusters
* Serverless, Istio, service mesh
* Prometheus is used for monitoring k8s
* Thanos, Cortex and M3 are tools for long-term retention of metrics from Prometheus
* New additions: Prometheus Operator, Grafana's Loki (it brings logs alongside metrics), Istio
* The trend is: DaemonSet + kernel modules for monitoring and security inside the kernel
* GitOps https://github.com/app-sre/qontract-server


DockerCon update by Ajeet Singh from DellEMC 

He talked about "Docker Desktop" :
  • "Docker for Mac" and "Docker for Windows' are not "Docker Desktop"
  • Docker Desktop supports both Docker Swarm and K8s. We just need to enable it. 
  • Docker Desktop Enterprise edition has app designer interface. 
  • Docker Desktop Enterprise has customize application template. 
  • Docker Desktop Enterprise will be available in 1H 2019. At present only for preview. 
  • As such Linux does not need Docker Desktop. Let it be only for MAC and Windows. 

CNAB (Cloud Native Application Bundle) 


The application stack, bottom up:
1. VMs, containers, storage
2. Low-level APIs (JSON, REST)
3. Composite APIs (ARM, Terraform) and tooling
4. Until now there was no single solution for defining and packaging these multi-service, multi-format distributed applications; now there is the CNAB package.

CNAB is a package format specification for bundling, installing and managing distributed apps. It uses these technologies: JSON, Docker containers, OpenPGP.
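A rough sketch of the bundle.json at the heart of a CNAB package, as described in the draft spec at the time (all values here are illustrative, and the schema should be checked against the spec):

```json
{
  "schemaVersion": "v1.0.0-WD",
  "name": "helloworld",
  "version": "0.1.0",
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "example/helloworld-invocation:0.1.0"
    }
  ]
}
```

The invocation image is a container that knows how to install the whole distributed app.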

Duffle is a package manager for the cloud. 
Reference : https://github.com/garethr/docker-app-cnab-examples

Docker-assemble builds a Docker image without using a Dockerfile. It analyzes your app and its dependencies and produces a Docker image. It is built on top of BuildKit. At present it is an Enterprise Edition feature. It supports many languages, including Java.

With Docker Application Package, we can push a multi-service app, not only a Docker image. 
Reference : https://github.com/docker/app

He demonstrated Compose on K8s. Now, with a single Docker Compose file, one can deploy using (1) K8s and (2) Docker Swarm.
There are Compose on K8s guides for minikube / Azure AKS / GKE.
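A minimal Compose file sketch to illustrate the point: the same file can be deployed to Swarm with "docker stack deploy" or, with Compose on Kubernetes enabled, to K8s (the service name, image and ports here are illustrative):

```yaml
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # host:container
    deploy:
      replicas: 2        # honoured by both Swarm and Compose on K8s
```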



He used https://asciinema.org/ to record and play back a remote Linux CLI session during his demonstration. 


Let me share some useful resources related to React.JS

First, let's have a look at the overall layer architecture to get the big picture.

Layer 4: Browser-side MVC framework. Examples: Angular.JS, React.JS, Vue.JS, Vanilla JS, etc.
Layer 3: Web Application Framework (WAF). Example: Express.JS
Layer 2: Server-side scripting. Example: Node.JS
Layer 1: Database (DB). Examples: MongoDB, any NoSQL DB

With React.JS one can build: 

1. Web Application
2. Web Services
3. Web Resources
4. Web API

We have

1. React.JS, to build user interfaces
2. React Native, to build applications for Android, iOS and UWP (Universal Windows Platform: Windows 10)

Now here is list of URLs:


https://www.safaribooksonline.com/library/view/designing-with-web/9780321679765/ (safari books)
https://www.safaribooksonline.com/library/view/speaking-javascript/9781449365028/ (safari books)

https://www.safaribooksonline.com/videos/reactjs-fundamentals-second/9780134854670 (safari books)








Utility libraries



https://jestjs.io/docs/en/getting-started (jest)
https://airbnb.io/enzyme/docs/api/ (enzyme)
https://hackernoon.com/testing-react-components-with-jest-and-enzyme-41d592c174f (react + jest +enzyme)

Some more reference

1. To get started with skeleton code, use this NPM project. 

Container Runtime

Low-level container runtime: at their core, low-level container runtimes are responsible for 
* setting up namespaces and cgroups for containers, 
* setting resource limits on the cgroup, 
* setting up a root filesystem, 
* chrooting the container's process to the root filesystem, and 
* then running commands inside those namespaces and cgroups,
using the standard Linux cgcreate, cgset, cgexec, chroot and unshare commands.
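The list above can be sketched with nothing but util-linux's unshare. This is a hedged illustration, not any runtime's actual code: it creates an unprivileged user namespace (mapped to root) plus a UTS namespace, and a hostname change inside them is invisible to the host. Whether unprivileged user namespaces are permitted depends on the kernel configuration.

```shell
# Create user + UTS namespaces; inside them we appear as root and may
# change the hostname, but only within the new UTS namespace.
unshare --user --map-root-user --uts sh -c '
  hostname minicontainer
  hostname
'
```

A real low-level runtime would additionally set up mount and PID namespaces, apply cgroup limits, and chroot into the image's root filesystem.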

1. runC is the reference implementation of the OCI runtime specification (https://github.com/opencontainers/runtime-spec). Example: sudo runc run mycontainerid
2. LXC
3. lmctfy by Google. It can run sub-task containers under a pre-allocated set of resources on a server, and thus achieve more stringent SLOs than could be provided by the runtime itself. It supports container hierarchies that use cgroup hierarchies via the container names.

A high-level container runtime (= container runtime) supports: 
* image management, 
* image transport,
* image unpacking,
* passing the image to a low-level container runtime to run it, 
* providing a daemon application,
* a gRPC API (gRPC is a modern, open source, high-performance remote procedure call (RPC) framework), and 
* a Web API. 

1. containerd from Docker. ctr is the command-line client for containerd. https://github.com/docker/containerd/blob/master/design/architecture.md
2. rkt by CoreOS 
3. CRI-O


* rkt is a runtime containing both a high-level runtime and a low-level runtime. 
* containerd and CRI-O sit on top of runC.
* containerd does not have support for building container images. 
* rkt can build a Docker image, but does not provide a remote API. 
* Docker provides everything; docker itself is not a container runtime. It uses containerd as its container runtime. 
* rkt can become an alternative to Docker. K8s can use rkt instead of Docker; K8s can also use CRI-O + runC instead of Docker. 
* rkt supports pods natively. 


What is Docker?

  • Docker is a program written in Go
  • Docker is an open source community project started in 2013
  • Docker is a company that supports this project
  • Docker is a containerization technology/platform for Linux
  • Docker is a command-line tool
Under the hood

  • Docker needs Linux kernel version 3.10 or later
  • Docker is two programs: client + server. The server can be on a remote machine
  • Docker manages kernel features: (1) cgroups (2) namespaces (3) copy-on-write (COW) file system
  • Cgroups control what a process can use, whereas namespaces control what a process can see
Refer https://ericchiang.github.io/post/containers-from-scratch/

1. cgroups : limit the amount of resources 

1.1 blkio (block I/O devices)
1.2 cpu
1.3 cpuacct (CPU accounting / CPU usage report)
1.4 cpuset (assigning CPUs)
1.5 devices
1.6 freezer : suspends and resumes tasks in cgroups
1.7 memory
1.8 net_cls : Linux tc (traffic controller) can identify packets from particular cgroups
1.9 net_prio : for bandwidth management
1.10 ns (namespace)
1.11 perf_event : to identify membership of a task in a specific cgroup for performance analysis 

The cgconfig service (configured under /etc/cgconfig.d) sets up cgroups at boot.

Check all cgroups under the folder /sys/fs/cgroup

I found additional cgroups on my system.
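You can inspect this on your own machine; the layout differs between cgroup v1 (one directory per controller) and cgroup v2 (a unified tree with a controllers file):

```shell
# cgroup v1: one subdirectory per controller (cpu, memory, blkio, ...).
# cgroup v2: a unified tree; enabled controllers are listed in a file.
ls /sys/fs/cgroup
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || true
```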


2. namespaces = mount, IPC, network, PID, user (user and group IDs) and UTS (Unix Time Sharing: host name and domain name). 

2.1 Network Namespace

2.1.1 A network namespace includes: network devices, IPv4 and IPv6 protocol stacks, IP routing tables, firewalls, the /proc/net directory, the /sys/class/net directory, port numbers (sockets), and so on.

2.1.2 The namespace API has 3 system calls: I. clone II. unshare III. setns 

Typically, "ip netns" can be used 

ip netns exec $namespace_id

to run some arbitrary command inside a given namespace.

However, ip netns only knows about namespaces listed in /var/run/netns/, and neither Docker nor Kubernetes symlinks its namespaces there. 

For Docker and Kubernetes, the network namespace can be accessed using "nsenter", specifying the process ID of the container, e.g.

nsenter -t ${PID} -n ip addr
nsenter -t ${PID} -n route -n
sudo nsenter -t ${PID} -n /bin/sh

nsenter discovers the network namespace at /proc/$PID/ns/net, where the namespace is accessible (by syscalls like setns(2)) via file descriptor.
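This is easy to verify without any container: every process, including your shell, exposes its namespaces as symlinks under /proc/$PID/ns, and the inode number in the link target identifies the namespace (two processes in the same namespace show the same inode):

```shell
# Print the namespace identifiers of the current process.
readlink /proc/self/ns/net
readlink /proc/self/ns/pid
readlink /proc/self/ns/uts
```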

Docker creates bridges, sets routes, and uses iptables.

Docker Architecture

Docker CLI
dockerd : the Docker daemon
(docker-)containerd : daemon listening on a UDS, gRPC endpoint
(docker-)containerd-ctr : lightweight CLI to communicate with containerd, for debug purposes
(docker-)runc : lightweight binary that deals with namespaces, cgroups, etc.
(docker-)containerd-shim : sits between containerd and runc

Docker Dependencies

The ca-certificates package contains certificates provided by the Certificate Authorities.

apt-transport-https allows the use of repositories accessed and downloaded via the HTTPS protocol.

software-properties-common provides useful scripts for adding and removing PPAs, plus the D-Bus backends. Without it, you would need to add and remove repositories (such as PPAs) manually by editing /etc/apt/sources.list and/or any subsidiary files in /etc/apt/sources.list.d.

sudo apt install docker.io

To add a non-root user:

sudo groupadd docker

sudo gpasswd -a $USER docker
sudo usermod -aG docker $USER
sudo setfacl -m user:$USER:rw /var/run/docker.sock

Docker Flow

docker run turns an image into a container.

1. docker commit creates a new image from a container.
Commit will also tag the image; the default tag value is "latest".

2. docker tag "old name" "new name, which may contain a registry URL"
Here the "tag" step is optional. 

Instead of 1 and 2, use the single command
"docker commit "

Docker images and containers both have IDs, and the two are different. 

Docker run command

docker run --memory --cpu-shares --cpu-quota -p [outside port:]"inside port"[/tcp|udp] --link --name -ti --privileged=true --net="host | some name" --ip "ip address" -v [absolute local path:] --restart=always --pid=host --env KEY=VALUE --volumes-from --rm -d "image"[:tag, default is latest]

-ti = interactive terminal
--rm = remove the container once it is done
-d = detach
-p : the outside port is optional
-p without /tcp or /udp defaults to tcp
-P = --publish-all, for exposing all ports
--link = adds "ip-address other-container-name" to /etc/hosts. It auto-detects the IP address of the other container, and assumes that IP address does not change. 
-v for volumes
--privileged to get full access to the host OS
--pid=host gives more privilege, sharing the PID namespace of the host OS
--restart : to restart the container if it dies
--net : to specify the network namespace

Example with hypothetical names: docker run -d --name web -p 8080:80 --memory 256m nginx:latest

docker run = docker create + docker start


^d to exit container. 

Attach and detach 

attach using

docker attach <container name>

to detach, when you are inside container, press ^p, ^q

Run process at container

docker exec

It cannot add ports, volumes, etc.
docker exec -ti my_container sh -c "echo a && echo b"

Running more than one process or service in a container is discouraged for maximum efficiency & isolation. 


docker logs

Remove and Kill

docker kill

The container moves to the stopped state. 

docker rm  

This removes the container.

docker ps -l

-l = the last running container. 


docker port

It lists external vs. internal port mappings, similar to iptables.

For dynamic linking, first create a network:
docker network create "name"

Then use --net while creating both containers.

Here a name server is added to the network, and it takes care of changing IP addresses. 

If we specify --net=host, then inside the container we can also see all the bridges with the "brctl show" command.

Docker inspect

This command gives various information about a container, like author, PID, IP address, etc. 

docker inspect --format '{{ .State.Pid }}' "name"
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "name"

Docker images

docker images

Lists downloaded images. 

Docker Registry 

name = registry FQDN : port /organization name/image name:version tag

short name = organization name/image name

docker rmi

docker login
docker pull
docker push "name as per used in docker tag"
docker search


Registry options include:
Amazon's ECR,
Sonatype Nexus,
JFrog Artifactory 

A registry typically listens on port 5000 for push, pull, etc. 


Docker images can be stored on the local host, Docker Hub, AWS, Google or Microsoft registries.

docker save -o "tar.gz file" "one or more than one docker image:tag"
docker load -i "tar.gz file"


Volumes are like virtual discs.

Two types: (1) persistent (-v) vs. (2) ephemeral (--volumes-from)
Volumes are not part of the image.

mount -o bind "original folder" "new folder name (may be an existing folder)"

The host file system can be mounted into the guest, not vice versa. 


Dockerfile instructions and their "docker run" counterparts:

1. FROM : base image.
2. MAINTAINER : sets the "Author" shown in "docker inspect" output.
3. RUN : executes commands and commits the result on top of the previous layer; also useful for debugging.
4. ADD "URL | local path" "container path" : copies files or directories from a local path (or remote URL) to the filesystem of the image.
5. ENV : available during build time and run time. docker run equivalent: --env (or -e) KEY=VALUE.
6. ENTRYPOINT : like CMD; it is the first program to run inside the container and makes the container executable. docker run equivalent: --entrypoint, which overrides the Dockerfile.
7. CMD : supplies the last arguments; what actually runs is the combination ENTRYPOINT + CMD.
8. COPY : for multi-project files; copies files or directories from the local filesystem to the filesystem of the image.
9. WORKDIR : sets the working dir for the build and for the container; it is like cd. docker run equivalent: --workdir.
10. Shell form: binary argument
11. Exec form: ["binary", "argument"]
12. EXPOSE portNumber : opens the port in the iptables firewall. docker run equivalent: -p [outside port:]"inside port". -P = publish-all. Publish is for the outside network; expose is for the inside network.
13. VOLUME ["optional local path" "container path"] : avoids local paths. docker run equivalents: -v and --volumes-from.
14. USER : the container will run as this user; comparable to sudo.

* A Dockerfile shall start with FROM.
* The file name shall be "Dockerfile", with a capital D; docker searches for this file to build the Docker image. Nowadays we can use any file name and pass it to docker with the -f option.
* A Dockerfile can have multiple "CMD" lines. The last "CMD" overrides all previous ones.
* One can pass arguments to ENTRYPOINT.
* COPY and ADD are similar. COPY only adds files from the local file system; ADD also allows adding files from a URL.

Sample Docker File
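A minimal, hypothetical example pulling several of the instructions together (the app files requirements.txt and app.py are made up for illustration):

```dockerfile
# Hypothetical example: a small Python web app image
FROM python:3-alpine
WORKDIR /app                        # sets working dir for build and run
COPY requirements.txt .             # COPY: local files only (unlike ADD)
RUN pip install -r requirements.txt # executed and committed as a layer
COPY . .
ENV APP_PORT=8000                   # available at build and run time
EXPOSE 8000
USER nobody                         # container runs as this user
CMD ["python", "app.py"]            # exec form; overridable at docker run
```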







Explore further: 

1. How to run xserver in container ?

Here are the links

https://blog.jessfraz.com/post/docker-containers-on-the-desktop/

2. How to connect to a remote docker server using the docker client?
3. How to expose my docker server on the network? How to bind it to a TCP socket instead of a Unix socket?
4. Is LXD a replacement for docker? 

Ansible modules