Istio 101

Istio 101 
Meetup event by
Kubernetes & Openshift India Community


Challenges with microservices

- Service Discovery
- Load Balancing 
- Monitoring and Observability
- Network resiliency
- Latency
- Security

Istio : Connect, Manage, Secure microservices.
Istio has rich, policy-driven ops (IFTTT-style: if this, then that).

Istio has evolved over time.
When people realized the challenges with microservices, Netflix OSS developed the following tools:

Hystrix: Circuit Breaking
Zuul: Edge Router
Ribbon: Service Discovery, LB
Eureka: Service Registry
Brave / Zipkin: Tracing
Spectator / Atlas: Metrics

However, these tools are specific to Java. Additional code had to be added to the existing Java application code.

In the case of Istio, a sidecar proxy container is added to each pod. The existing application code is not modified. Istio can be used for applications developed in any language, including polyglot applications.
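For instance, circuit breaking (Hystrix's job in the Netflix stack) becomes declarative configuration applied to the sidecar rather than application code. A minimal sketch of an Istio DestinationRule; the host name "reviews" and the limits are just illustrative values:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1        # cap concurrent TCP connections to the service
      http:
        http1MaxPendingRequests: 1
    outlierDetection:
      consecutiveErrors: 1       # eject an endpoint after one consecutive error
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100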

Early versions of Istio were not optimized. The industry was skeptical and reluctant to adopt Istio. For each request, the Envoy sidecar proxy contacted the Mixer module for a policy check. After the request was processed, it reported the metrics back to Mixer. Later on, caching was added. The early adopters of Istio themselves contributed back to Istio. Lately, many performance optimizations have happened in Istio. Now more and more microservice-based applications are using Istio.

Istio : Production deployment

Success : eBay, IBM
Failure : BigBasket

A few analogies between OpenShift and Kubernetes (a small illustration follows the list):
* project = namespace
* oc = kubectl
* oc expose service = ingress in k8s
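For example (the project name "demo" and service "myapp" are just placeholders):

oc new-project demo          # roughly equivalent to: kubectl create namespace demo
oc expose service myapp      # creates an OpenShift Route, the rough analogue of a k8s Ingress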
The sidecar proxy can be injected in two ways (example commands below):
1. manual injection with the istioctl command
2. automatic injection: via the mutating webhook, enabled by a namespace label/annotation
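For example (a sketch, assuming istioctl from the Istio release is on the PATH and the application lives in the default namespace):

# 1. Manual injection: rewrite the manifest to add the Envoy sidecar, then apply it
istioctl kube-inject -f deployment.yaml | kubectl apply -f -

# 2. Automatic injection: label the namespace so the mutating webhook injects the sidecar into new pods
kubectl label namespace default istio-injection=enabled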
The istioctl tool talks with Istio's control-plane component named Pilot.
The Istio-Auth module is now called Citadel.
We had an interesting question about mirroring/shadowing of incoming requests: will a new TCP session even be created for the mirrored copy?
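For context, mirroring is configured on an Istio VirtualService; the sidecar then sends a fire-and-forget copy of each matching request to the mirror destination. A minimal sketch (the "reviews" host and v1/v2 subsets are illustrative and assume a matching DestinationRule):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1      # live traffic goes to v1
      mirror:
        host: reviews
        subset: v2          # a copy of each request is shadowed to v2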

To verify this, a tcpdump container can be added to the pod in Deployment.yaml, under spec: containers:

- name: tcpdump
  image: corfr/tcpdump
  command:
    - /bin/sleep
    - infinity
Cross-cluster federation is also present in Istio, in case the application is deployed on two different clusters hosted by two different cloud service providers.
There is a set of istioctl commands for debugging the application deployment. I found this URL :
The Envoy proxy is lightweight, efficient and very powerful. It has lots of configuration options. One should avoid playing around with them at the beginner stage.
Istio can be installed using a Helm chart. Another option is to use the Maistra Istio-operator, which is a wrapper around the Helm chart.
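For illustration, a Helm-based install looked roughly like this around the Istio 1.1 era (a sketch assuming Helm 2 and an extracted Istio release directory; chart locations have changed across versions):

kubectl create namespace istio-system
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system   # installs the CRDs
helm install install/kubernetes/helm/istio --name istio --namespace istio-system             # installs the control plane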
Red Hat offers Istio as "OpenShift Service Mesh".

For cartoons :


Slide Deck :

Disclaimer : This blog is just my notes from an event that I attended. It is not a verbatim record of any speech. This blog may not reflect the exact expressions/opinions of the speakers of the event, due to possible mistakes in my note-taking. Any corrections/suggestions are welcome.

Turn Off

Today I came across an interesting website with comics related to IT, computers, software, etc.

Let me share my favorite list

K8s :
Container :

Python :

Manager :
Social Media
AI :


Computer Science concepts
Process v/s thread :
Zombie Process
Idle CPU :

K8s Hands-on - 2


Reference file for dashboard.yaml
A similar file is present in the Katacoda course

Each worker node has a Kubelet. cAdvisor (port 4194) is part of the Kubelet binary. It collects the following data for nodes, pods and containers:
- CPU usage
- Memory usage
- File system
- Network usage

Heapster collects/aggregates all the above data from cAdvisor over a REST API. Heapster stores this data in InfluxDB. Grafana accesses the data from InfluxDB and visualizes it.

Heapster can also store data in the Google Cloud Monitoring service. Then the Google Cloud Monitoring console can access this data and visualize it.
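On minikube, this pipeline can be tried by enabling the heapster addon; the related pods then appear in kube-system (pod names vary by version):

minikube addons enable heapster
kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'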


Horizontal scaling is possible with the below command

kubectl scale --replicas=3 deployment x

A few more useful aliases

alias kc='kubectl create'
alias kd='kubectl delete'
alias ka='kubectl apply'

A Deployment can be stored manually as a YAML file and created back again using that YAML file.

kg deployment x -o yaml > x.yaml
k delete svc x
k delete deployment x

k create -f x.yaml
kubectl expose deployment x --port=80 --type=NodePort

Same applies for service

kg svc x -o yaml > x_svc.yaml
kd svc x
kc -f x_svc.yaml

Remove unwanted lines and change the replicas value.
ka -f x.yaml

Guestbook example

kc -f redis_m_controller.yaml 
kg rc

It is the same example, so the related YAML files are also similar. E.g.

redis-master-controller.yaml is the same as

redis-master-.yaml is the same as
The same applies for redis-slave and the PHP frontend.

To see the log from any pod

k logs -f POD_NAME

To see the log from a specific container in a pod

k logs -f POD_NAME -c CONTAINER_NAME

To get details about CRDs

kubectl get crd

kubectl api-resources | grep "output of previous command"

If the APIGROUP column is empty in the output of the "kubectl api-resources" command, then its default value is v1, which denotes a stable object.

K8s Hands-on - 1

One can find many useful articles about basic kubectl commands and minikube. Katacoda is one of the best websites for online hands-on with k8s. Here, I just share my experience with Katacoda and an on-premise minikube cluster.

First, one needs to set a few aliases in the .bashrc file.

alias k='kubectl'
alias kg='kubectl get'
alias m='minikube'


By default, minikube runs with 2 CPUs and 2 GB RAM. That makes the system slow. Minikube needs a minimum of 2 CPUs. With trial and error, I found that 1.5 GB is sufficient to run minikube.

minikube start --memory 1536

Now, a few commands

kubectl config view
kubectl cluster-info
kubectl get nodes
kubectl describe node

minikube ip
minikube dashboard
minikube addons enable heapster
minikube addons list
minikube service list
minikube status

Service related commands

One can use svc in place of service in kubectl commands, but not in minikube commands.

The
kubectl get svc
command lists services only from the default namespace, while the
minikube service list
command lists services from all namespaces.
One can add "-n kube-system" to the kubectl command:
kubectl get svc -n kube-system
One can also add "--all-namespaces" to the kubectl command:
kubectl get svc --all-namespaces

Virtual Box

The /home folder of the host OS is mounted as the /hosthome folder inside the VirtualBox VM.

To log in to the VirtualBox VM:
minikube ssh
or ssh to minikube's IP address with docker/tcuser credentials.
Note: None of the above methods work at Katacoda for the first scenario "Launching Single Node Cluster" under the "Introduction to Kubernetes" hands-on.

Here is a comparison of the various interfaces inside and outside the VirtualBox VM (the actual IP addresses depend on the setup):

Interface (outside VBox)   Interface (inside VBox)   Remarks
docker0                    docker0                   Pod network
vboxnet0                   eth1                      Minikube IP address
                           eth0                      Node internal IP
lo                         lo                        Local interface


I found the below 3 basic images to begin with

kubectl create deployment x --image=katacoda/docker-http-server
kubectl create deployment k

kubectl create deployment i --image=nginx

The deployments should be exposed with the below commands
For http-server and nginx
kubectl expose deployment x --port=80 --type=NodePort
kubectl expose deployment i --port=80 --type=NodePort
For echo-server

kubectl expose deployment k --port=8080 --type=NodePort

The deployment and its service can be removed with

kubectl delete svc x
kubectl delete deployment x

Access Service

As per Katacoda, the service can be tested with the curl command as below

export PORT=$(kubectl get svc first-deployment -o go-template='{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}')

echo "Accessing host01:$PORT"

curl host01:$PORT

There are alternate ways also.

2. Fetch the service URL and curl it directly:

curl $(minikube service x --url)

3. The below command opens the browser with the required URL:

minikube service x

4. Using a proxy

kubectl proxy

Open the URL in the browser
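With the proxy running, services are reachable through the API server; a typical URL looks like this (assuming service x on port 80 in the default namespace):

http://localhost:8001/api/v1/namespaces/default/services/x:80/proxy/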


To get details about a pod in JSON format

kubectl get pods -o json


kubectl proxy

Open URL in browser 

To log in to a pod

kubectl exec -it $POD_NAME bash

To get environment variables

kubectl exec $POD_NAME env

To display custom columns

kubectl get pod -o=custom-columns=NODE:.spec.nodeName,podIP:.status.podIP,NAMESPACE:.metadata.namespace --all-namespaces

To set the default namespace for the current context:

kubectl config set-context --current --namespace="namespace name"


Reference file for dashboard.yaml
A similar file is present in the Katacoda course