Service Meshes + Kubernetes: Solving Service-Service Issues


On Saturday, 23rd March, the Kubernetes Day India 2019 event is happening in Bangalore. Mr. Ben Hall has arrived here in Bangalore as a speaker for this event. He is the founder of https://katacoda.com/. The "DigitalOcean Bangalore" Meetup group grabbed this opportunity and organized the meetup event "Service Meshes + Kubernetes: Solving Service-Service Issues" this evening, where Mr. Ben Hall shared his knowledge. Here are my notes, exclusively for readers of this blog, "Express YourSelf !"

DigitalOcean announced another event, do.co/tide2019, on April 4th and encouraged all to participate. They also promoted their new managed Kubernetes offering on DigitalOcean.

Kubernetes is a great tool. However, we need a service mesh for security, scaling, and communication among pods. It is about TLS, certificate rotation, auth/security, rate limiting, retry logic, failover, etc. (a small retry sketch follows the list below). It provides more control for A/B testing, canary releases, collecting system metrics, and verifying whether a caller pod is trusted. A service mesh can be implemented using:

  • Aspen Mesh
  • Linkerd
  • HashiCorp Consul
  • Istio
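
To make the "retry logic" point concrete, here is a minimal sketch (in Python, with a made-up service URL and retry budget) of the kind of retry-with-timeout code that a service mesh lets you move out of the application and into the sidecar proxy:

    import time
    import requests

    # Hypothetical downstream service URL, purely illustrative.
    PAYMENTS_URL = "http://payments.default.svc.cluster.local/charge"

    def call_with_retries(payload, retries=3, timeout=2.0, backoff=0.5):
        """Naive in-application retry/timeout loop. A service mesh can
        enforce the same policy in the sidecar, outside the app code."""
        for attempt in range(1, retries + 1):
            try:
                resp = requests.post(PAYMENTS_URL, json=payload, timeout=timeout)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException:
                if attempt == retries:
                    raise
                time.sleep(backoff * attempt)  # simple linear backoff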

Istio provides four key capabilities:
1. Connect: service discovery, load balancing
2. Secure: encryption, authentication, authorization, e.g. protecting the system against a fake payment service
3. Control
4. Observe

Istio adds/extends some more capabilities on top of the Kubernetes APIs by adding YAML files (custom resources). Grafana and Prometheus installation is part of the Istio installation.
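
As a rough illustration of "extending the Kubernetes API with YAML", the sketch below creates an Istio VirtualService custom resource using the official Kubernetes Python client. The resource name, host, and subset are placeholders, and it assumes Istio's CRDs are already installed in the cluster:

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() when running in a pod

    # Once Istio's CRDs are installed, a VirtualService is just another Kubernetes object.
    virtual_service = {
        "apiVersion": "networking.istio.io/v1alpha3",
        "kind": "VirtualService",
        "metadata": {"name": "reviews-route"},   # placeholder name
        "spec": {
            "hosts": ["reviews"],                # placeholder host
            "http": [{"route": [{"destination": {"host": "reviews", "subset": "v1"}}]}],
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )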


Istio is, at its core, all about configuring the Envoy proxy. Istio uses three major components: 1. Pilot, 2. Mixer, and 3. Citadel.
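
One way to see that Istio "just configures Envoy" is to look at the sidecar's Envoy admin interface from inside a pod. This is only a sketch; it assumes the default istio-proxy admin port 15000 and must run inside the pod, next to the sidecar:

    import json
    import requests

    # Envoy's admin interface exposed by the istio-proxy sidecar (default port 15000).
    ADMIN = "http://localhost:15000"

    # Dump the configuration that Pilot has pushed to this Envoy instance.
    config_dump = requests.get(ADMIN + "/config_dump").json()
    print(json.dumps([c.get("@type") for c in config_dump["configs"]], indent=2))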

He demonstrated Istio on his own website / learning platform, Katacoda. He tested using curl to generate ingress traffic. He also mentioned and demonstrated the "Scope" tool (Weave Scope). It is for monitoring, visualization, and management of Docker and Kubernetes. It is not part of Istio.
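
I did not capture the exact curl commands, but a simple traffic generator like this hypothetical sketch (the ingress gateway address and path are placeholders) serves the same purpose of exercising the ingress so the dashboards have something to show:

    import time
    import requests

    INGRESS = "http://192.168.1.100:31380"   # placeholder ingress gateway address
    PATH = "/productpage"                    # placeholder route

    # Fire a steady trickle of requests so Grafana/Prometheus have data to plot.
    for _ in range(100):
        try:
            resp = requests.get(INGRESS + PATH, timeout=2)
            print(resp.status_code)
        except requests.RequestException as exc:
            print("request failed:", exc)
        time.sleep(0.5)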

We had an interesting Q&A session.

* Yes, Istio adds a little latency when we use HTTP/1.0-based RESTful APIs. However, we get a performance gain when we use gRPC or HTTP/2-based RESTful APIs. It is a tradeoff between performance in the production environment on given hardware and gain in terms of developer productivity.
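
One way to check the overhead for your own workload is to measure request latency once in a namespace with sidecar injection enabled and once without. This is only a rough, hypothetical measurement sketch (the URL is a placeholder), not a benchmark:

    import statistics
    import time
    import requests

    URL = "http://my-service.default.svc.cluster.local/ping"  # placeholder service URL

    def median_latency_ms(n=200):
        """Return the median latency of n GET requests, in milliseconds."""
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            requests.get(URL, timeout=2)
            samples.append((time.perf_counter() - start) * 1000)
        return statistics.median(samples)

    # Run with and without the sidecar injected, then compare the two medians.
    print("median latency (ms):", median_latency_ms())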

* Prometheus is used to store all the metrics of the cluster.
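
For example, once Prometheus is scraping the mesh, its HTTP API can be queried directly. The sketch below assumes the bundled Prometheus service has been port-forwarded to localhost:9090 and uses Istio's standard istio_requests_total metric:

    import requests

    # Assumes: kubectl -n istio-system port-forward svc/prometheus 9090:9090
    PROMETHEUS = "http://localhost:9090"

    resp = requests.get(
        PROMETHEUS + "/api/v1/query",
        params={"query": "sum(rate(istio_requests_total[5m])) by (destination_service)"},
    )
    for result in resp.json()["data"]["result"]:
        print(result["metric"].get("destination_service"), result["value"][1])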

* How do we configure different timeout values for two different consumers of the same service? Well, we can duplicate the service with different names, or handle it with some application-level logic.
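
The "application-level logic" option can be as simple as each consumer choosing its own client-side timeout, as in this hypothetical sketch (the service URL and timeout values are made up):

    import requests

    SERVICE_URL = "http://catalog.default.svc.cluster.local/items"  # placeholder

    # Consumer A is latency-sensitive and gives up quickly.
    def fetch_for_consumer_a():
        return requests.get(SERVICE_URL, timeout=0.5)

    # Consumer B tolerates slower responses from the very same service.
    def fetch_for_consumer_b():
        return requests.get(SERVICE_URL, timeout=5.0)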

* For pod-to-pod communication, one can use either an RPC-based approach (gRPC, RESTful APIs) or an enterprise-service-bus-based approach (message queues, Kafka). If one takes the second approach, then Istio may or may not provide additional value; it depends.
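
For the message-bus style, a minimal Kafka sketch (using the kafka-python package; the broker address and topic are placeholders) looks like this. The mesh only sees the TCP connection to the broker, which is part of why its added value is less clear-cut here:

    from kafka import KafkaProducer, KafkaConsumer

    BROKER = "kafka.default.svc.cluster.local:9092"  # placeholder broker address
    TOPIC = "orders"                                 # placeholder topic

    # Producer side: publish an event instead of calling the other pod directly.
    producer = KafkaProducer(bootstrap_servers=BROKER)
    producer.send(TOPIC, value=b'{"order_id": 42}')
    producer.flush()

    # Consumer side (typically a different pod) reads from the same topic.
    consumer = KafkaConsumer(TOPIC, bootstrap_servers=BROKER, auto_offset_reset="earliest")
    for message in consumer:
        print(message.value)
        break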

* There was a quick comparison between SDN-based routing and the Istio-based approach.

* During the informal Q&A, we discussed Gloo. Gloo is a feature-rich, Kubernetes-native ingress controller and next-generation API gateway, powered by Envoy.

At the end, we had tea and light snacks of cookies and samosas.

References

https://www.meetup.com/DigitalOceanBangalore/events/259864782/
https://www.slideshare.net/BenHalluk/presentations
https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078
https://blog.aquasec.com/istio-service-mesh-traffic-control
https://gloo.solo.io/
https://github.com/solo-io/gloo

https://layers7.blogspot.com/2019/03/kafka-communication-among-micro-services.html
https://layers7.blogspot.com/2017/12/istio.html

Disclaimer

This blog post is NOT a verbatim transcript of the talk. I captured these notes as per my understanding. They may not necessarily reflect the speaker's intention. So corrections/suggestions/comments are welcome.
