Istio 101
=================
Meetup event by
Kubernetes & Openshift India Community
https://www.meetup.com/kubernetes-openshift-India-Meetup/events/263328152/
=================
Challenges with microservices
- Service Discovery
- Load Balancing
- Monitoring and Observability
- Network resiliency
- Latency
- Security
- ACL
Istio: connect, manage, and secure microservices.
Istio has rich, policy-driven ops (IFTTT-style rules).
Istio has evolved over time.
When people realized the challenges with microservices, Netflix OSS developed the following tools:
Hystrix: Circuit Breaking
Zuul: Edge Router
Ribbon: Service Discovery, LB
Eureka: Service Registry
Brave / Zipkin: Tracing
Spectator / Atlas : Metrics
However, these tools are specific to Java: additional code had to be added to the existing Java application code.
With Istio, a sidecar proxy container is added to each pod instead. The existing application code is not modified, so Istio can be used with applications written in any language, including polyglot applications.
Early versions of Istio were not optimized, and the industry was skeptical and reluctant to adopt it. For each request, the Envoy sidecar proxy contacted the Mixer module for a policy check, and after the request was processed it reported metrics back to Mixer. Later on, caching was added. The early adopters of Istio themselves contributed back, and lately many performance optimizations have landed in Istio. Now more and more microservice-based applications are using Istio.
Istio: production deployments
Success: eBay, IBM
Failure: BigBasket https://tech.bigbasket.com/bigbaskets-experience-with-istio/
A few analogies between OpenShift and Kubernetes:
* project = namespace
* oc = kubectl
* oc expose service = ingress in k8s
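For example (a sketch, assuming a service named x): on OpenShift, oc expose service x creates a Route for the service, which plays roughly the role an Ingress rule plays in plain Kubernetes:
oc expose service x
oc get route x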
=================
The sidecar proxy can be injected in two ways:
1. Manual injection, with the istioctl command
2. Automatic injection, via annotation for the mutating webhook
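A minimal sketch of both, where myapp-deployment.yaml and myproject are placeholders; note that upstream Istio keys automatic injection off a namespace label, while Maistra / OpenShift Service Mesh uses the sidecar.istio.io/inject annotation on the deployment instead:
# manual injection: rewrite the manifest, then apply it
istioctl kube-inject -f myapp-deployment.yaml | kubectl apply -f -
# automatic injection (upstream Istio): label the namespace, then (re)create the pods
kubectl label namespace myproject istio-injection=enabled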
=================
The istioctl tool talks to Istio's control-plane component named Pilot.
=================
The Istio-Auth module is now called Citadel.
=================
We had an interesting question about mirroring / shadowing incoming requests: how is a new TCP session even created? You can see for yourself by capturing traffic inside the pod.
Add the following container to Deployment.yaml, under spec: containers:
- name: tcpdump
  image: corfr/tcpdump
  command:
    - /bin/sleep
    - infinity
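Once the pod restarts with the extra container, a capture can be run inside it; a minimal sketch, assuming the pod name is in $POD_NAME and the app serves on port 8080 (placeholders):
kubectl exec -it $POD_NAME -c tcpdump -- tcpdump -nn -i eth0 port 8080
This should show the mirrored copy of each request arriving as its own TCP session opened by the proxy.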
https://developers.redhat.com/blog/2019/02/27/sidecars-analyze-debug-network-traffic-kubernetes-pod/
=================
Istio also supports cross-cluster federation, for the case where the application is deployed on two different clusters hosted by two different cloud service providers.
=================
There is a set of istioctl commands for debugging the application deployment. I found this URL: https://istio.io/docs/ops/component-debugging/
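Two of those commands, as a quick sketch (the pod name is a placeholder):
istioctl proxy-status
istioctl proxy-config cluster $POD_NAME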
=================
The Envoy proxy is lightweight, efficient, and very powerful. It has lots of configuration options; one should avoid playing with them at the beginner stage.
=================
Istio can be installed using a Helm chart. Another option is to use the Maistra Istio operator, which is a wrapper around the Helm chart.
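A minimal sketch of the Helm-based install from the Istio 1.1 era, assuming the Istio release archive has been downloaded and unpacked (Helm 2 syntax):
kubectl create namespace istio-system
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
helm install install/kubernetes/helm/istio --name istio --namespace istio-system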
=================
Red Hat offers Istio as "OpenShift Service Mesh".
=================
References:
https://github.com/redhat-developer-demos/istio-tutorial
https://redhat-developer-demos.github.io/istio-tutorial/istio-tutorial/1.1.x/index.html
https://developers.redhat.com/topics/service-mesh/
For cartoons : http://turnoff.us
Books
- Introducing Istio Service Mesh for Microservices
- Migrating to Microservice Databases: From Relational Monolith to Distributed Data
- Designing Distributed Systems: Chapter 2, on the sidecar pattern
=================
Disclaimer: This blog is just my notes from an event that I attended. It is not a verbatim transcript of any speech. It may not reflect the exact expressions/opinions of the event's speakers, due to possible mistakes in my note-taking. Any corrections/suggestions are welcome.
Turn Off
Today I came across an interesting website with comics about IT, computers, software, etc.
Let me share my favorite list.
K8s : http://turnoff.us/geek/the-depressed-developer-44/
Container : http://turnoff.us/geek/kernel-economics/
Python :
http://turnoff.us/geek/the-depressed-developer-35/
http://turnoff.us/geek/python-private-methods/
http://turnoff.us/geek/math-class-2018/
Manager : http://turnoff.us/geek/the-realist-manager/
Social Media : http://turnoff.us/geek/the-depressed-developer-23/
AI :
http://turnoff.us/geek/python-robots/
http://turnoff.us/geek/chatbot/
http://turnoff.us/geek/sad-robot/
http://turnoff.us/geek/when-ai-meets-git/
Debug: http://turnoff.us/geek/the-last-resort/
USB : http://turnoff.us/geek/tobbys-world/
CI/CD : http://turnoff.us/geek/deployment-pipeline/
GW API : http://turnoff.us/geek/distributed-architecture-drama/
Computer Science concepts
Process vs. thread : http://turnoff.us/geek/dont-share-mutable-state/
Binary tree : http://turnoff.us/geek/binary-tree/
Zombie process : http://turnoff.us/geek/zombie-processes/
Idle CPU : http://turnoff.us/geek/idle/
K8s Hands-on - 2
Dashboard
Reference file for dashboard.yaml: https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
A similar file is present in the Katacoda course.
Each worker node runs the kubelet. cAdvisor (port 4194) is part of the kubelet binary. It collects the following data for the node, pods, and containers:
- CPU usage
- Memory usage
- File system
- Network usage
Heapster collects/aggregates all of the above data from cAdvisor over a REST API and stores it in InfluxDB. Grafana reads the data from InfluxDB and visualizes it.
Heapster can also store the data in the Google Cloud Monitoring service; the Google Cloud Monitoring console can then access this data and visualize it.
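Once Heapster is enabled (these notes use minikube addons enable heapster below), the same numbers should also surface through kubectl's top subcommand; a quick sketch:
kubectl top node
kubectl top pod --all-namespaces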
====================================
Horizontal scaling is possible with the command below:
kubectl scale --replicas=3 deployment x
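To verify, the deployment should now report 3 replicas:
kubectl get deployment x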
====================================
A few more useful aliases:
alias kc='kubectl create'
alias kd='kubectl delete'
alias ka='kubectl apply'
A deployment can be saved manually as a YAML file and recreated later from that file:
kg deployment x -o yaml > x.yaml
k delete svc x
k delete deployment x
k create -f x.yaml
kubectl expose deployment x --port=80 --type=NodePort
The same applies to a service:
kg svc x -o yaml > x_svc.yaml
kd svc x
kc -f x_svc.yaml
Remove unwanted lines and change the replicas value, then re-apply:
ka -f x.yaml
==========================================
Guestbook example
kc -f redis_m_controller.yaml
kg rc
The same example is at https://kubernetes.io/docs/tutorials/stateless-application/guestbook/ so the related YAML files are also similar. E.g.:
redis-master-controller.yaml is the same as
https://k8s.io/examples/application/guestbook/redis-master-deployment.yaml
redis-master-service.yaml is the same as
https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/guestbook/redis-master-service.yaml
The same holds for redis-slave and the PHP frontend.
===========================================
To follow the log from a pod:
k logs -f POD_NAME
To see the log from a specific container:
k logs POD_NAME -c CONTAINER_NAME
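kubectl logs also accepts --previous, which is handy for reading a crashed container's last run; a sketch:
k logs POD_NAME -c CONTAINER_NAME --previous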
===========================================
To get details about CRDs:
kubectl get crd
kubectl api-resources | grep "<name from the previous command>"
If the APIGROUP column is empty in the output of the kubectl api-resources command, the default value is v1, which denotes a stable object.
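For instance (a sketch): pods belong to the core group, so their APIGROUP column is empty (stable v1), while deployments report apps:
kubectl api-resources | grep -w pods
kubectl api-resources | grep -w deployments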
K8s Hands-on - 1
One can find many useful articles about basic kubectl commands and minikube. Katacoda is one of the best websites for online hands-on with k8s. Here, I just share my experience with Katacoda and an on-premise minikube cluster.
First, set a few aliases in the .bashrc file:
alias k=kubectl
alias kg='kubectl get'
alias m=minikube
Minikube
By default, minikube runs with 2 CPUs and 2 GB RAM, which makes the system slow. minikube needs a minimum of 2 CPUs. By trial and error, I found that 1.5 GB is sufficient to run minikube:
minikube start --memory 1536
Now a few commands:
kubectl config view
kubectl cluster-info
kubectl get nodes
kubectl describe node
minikube ip
minikube dashboard
minikube addons enable heapster
minikube addons list
minikube service list
minikube status
Service-related commands
One can use svc in place of service for kubectl commands, but not for minikube commands. The
kubectl get svc
command lists services only from the default namespace, while the
minikube service list
command lists services from all namespaces.
One can add "-n kube-system" to the kubectl command:
kubectl get svc -n kube-system
One can also add "--all-namespaces" to the kubectl command:
kubectl get svc --all-namespaces
VirtualBox
The /home folder of the host OS is mounted as the /hosthome folder inside VirtualBox.
To log in to the VirtualBox VM:
minikube ssh
OR
SSH to minikube's IP address with the credentials docker/tcuser.
Note: Neither of the above methods works at Katacoda for the first scenario, "Launching Single Node Cluster," under the "Introduction to Kubernetes" hands-on.
Here is a comparison of IP addresses and the various interfaces inside and outside the VirtualBox VM:
| IP Address (outside VBox) | Interface (outside VBox) | Interface (inside VBox) | IP Address (inside VBox) | Remarks |
|---|---|---|---|---|
| 172.17.0.1 | docker0 | docker0 | 172.17.0.1 | Pod network |
| 192.168.99.1 | vboxnet0 | eth1 | 192.168.99.102 | Minikube IP address |
| | | eth0 | 10.0.2.15 | Node internal IP |
| 127.0.0.1 | lo | lo | 127.0.0.1 | Local interface |
Deployment
I found the 3 basic images below to begin with:
kubectl create deployment x --image=katacoda/docker-http-server
kubectl create deployment k --image=k8s.gcr.io/echoserver:1.10
kubectl create deployment i --image=nginx
The deployments should be exposed with the commands below.
For http-server and nginx:
kubectl expose deployment x --port=80 --type=NodePort
kubectl expose deployment i --port=80 --type=NodePort
For echo-server:
kubectl expose deployment k --port=8080 --type=NodePort
The service and the deployment can be removed with:
kubectl delete svc x
kubectl delete deployment x
Access Service
1. As per Katacoda, the service can be tested with the curl command as below:
export PORT=$(kubectl get svc first-deployment -o go-template='{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}')
echo "Accessing host01:$PORT"
curl host01:$PORT
There are alternative ways as well.
2.
curl $(minikube service x --url)
3. The command below opens the browser with the required URL:
minikube service x
4. Using a proxy:
kubectl proxy
Open URL in browser
http://127.0.0.1:8001/api/v1/namespaces/default/pods/POD_NAME/proxy/
POD
To get details about pods in JSON format:
1.
kubectl get pods -o json
2.
kubectl proxy
Open URL in browser
http://127.0.0.1:8001/api/v1/namespaces/default/pods/
To log in to a pod:
kubectl exec -it $POD_NAME bash
To get environment variables:
kubectl exec $POD_NAME env
To display custom columns:
kubectl get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name,podIP:.status.podIP,NAMESPACE:.metadata.namespace --all-namespaces
To switch the current namespace:
kubectl config set-context --current --namespace="namespace name"
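To confirm which namespace is now active, a standard kubectl pattern:
kubectl config view --minify | grep namespace: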