Open Container Initiative


1. Image specification

An OCI Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime.

https://github.com/opencontainers/image-spec/blob/master/spec.md

1.1 manifest
The manifest is a JSON file containing metadata (annotations as key/value pairs) about the contents and dependencies of the image.
https://github.com/opencontainers/image-spec/blob/master/manifest.md
https://docs.docker.com/engine/reference/commandline/manifest/
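Conceptually, the manifest is plain JSON and can be inspected with any JSON library. Below is a minimal Go sketch that decodes a few manifest fields; the struct is illustrative only and mirrors just a subset of the real image-spec schema, and the manifest.json path is hypothetical.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Illustrative subset of an OCI image manifest (the real schema has more
// fields, e.g. mediaType and platform information).
type Descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

type Manifest struct {
	SchemaVersion int               `json:"schemaVersion"`
	Config        Descriptor        `json:"config"`
	Layers        []Descriptor      `json:"layers"`
	Annotations   map[string]string `json:"annotations,omitempty"`
}

func main() {
	data, err := os.ReadFile("manifest.json") // hypothetical local copy of a manifest
	if err != nil {
		panic(err)
	}
	var m Manifest
	if err := json.Unmarshal(data, &m); err != nil {
		panic(err)
	}
	fmt.Printf("config digest: %s, layer count: %d\n", m.Config.Digest, len(m.Layers))
}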

1.2 image index (optional)
A higher-level manifest that points to platform-specific versions of the image (e.g. per architecture and OS).

1.3 image layout

1.4 file system layers

1.5 configuration
JSON file https://github.com/opencontainers/image-spec/blob/master/config.md

1.6 conversion

1.7 descriptor 

Tools

2. Distribution specification

It describes the API that might be used by a registry to distribute images, to easily 
- share, 
- search, 
- obtain and 
- verify 
container images.

https://github.com/opencontainers/distribution-spec/blob/master/spec.md

It is implemented by Docker Registry and https://github.com/atlaskerr/stori

3. Runtime Specification

It aims to specify the configuration, execution environment, and lifecycle of a container.

runc is the reference implementation of the OCI Runtime Specification.

LifeCycle


3.1 create command
3.2 runtime environment is created using config.json
3.3 start command
3.4 prestart hooks
3.5 user program specified by process runs
3.6 poststart hooks
3.7 container process exits (normal exit, error, kill or crash)
3.8 delete command
3.9 container is destroyed (undo of create)
3.10 poststop hooks

Virtual Machine


Operating System level virtualization / containers /  virtual private servers / virtual environments (VEs) / partitions / jails

  • Docker, 
  • Solaris Containers, 
  • OpenVZ, 
  • Linux-VServer, 
  • LXC, 
  • AIX Workload Partitions, 
  • Parallels Virtuozzo Containers, and 
  • iCore Virtual Accounts.
With lx branded zones on Solaris, one can run older Solaris and Linux environments inside a zone.

Hardware-assisted virtualization 
  • KVM, 
  • VMware Workstation, 
  • VMware Fusion, 
  • Hyper-V, 
  • Windows Virtual PC, 
  • Xen, 
  • Parallels Desktop for Mac, 
  • Oracle VM Server for SPARC, 
  • VirtualBox and 
  • Parallels Workstation.
Process Virtual Machine / application virtual machine / Managed Runtime Environment (MRE)

  • Java : Java Runtime Environment (JRE) with the JVM
  • .NET Framework : Common Language Runtime (CLR)
  • Parrot Virtual Machine
System Virtual Machine
  • using hypervisor

Python Virtual Environment


Introduction

The venv module provides support for creating lightweight “virtual environments”, each optionally isolated from the system site directories.

A virtual environment is a Python environment such that 
- the Python interpreter, 
- libraries and 
- scripts 
installed into it are isolated from (1) those installed in other virtual environments, and (2) (by default) any libraries installed in a “system” Python, i.e., one which is installed as part of your operating system.

A virtual environment is a directory tree which contains Python executable files and other files which indicate that it is a virtual environment. The path to this directory can be printed with
sys.prefix
sys.exec_prefix
These variables are used to locate the site-packages directory.

Each virtual environment has its own
- Python binary (which matches the version of the binary that was used to create this environment) 
- independent set of installed Python packages in its site directories.

Packages (Modules) 

Check installed packages using
pip list

Check whether a specific package is installed using
pip show "package name"

Two types of packages
1. System packages (installed as part of Python installation)
2. Site packages (3rd party libraries)

Reinstall using
pip install --upgrade --no-deps --force-reinstall "package name"

Pip can export a list of all installed packages and their versions using the freeze command:
pip freeze
This can be used to create a requirements.txt file, which is used later as:
pip install -r requirements.txt

Commands

To create venv
python3 -m venv /path/to/new/virtual/environment
c:\>c:\Python35\python -m venv c:\path\to\myenv

To activate venv
source <venv>/bin/activate
C:\> <venv>\Scripts\activate.bat
It modifies the PATH variable, adding the venv's bin path at the beginning.

To deactivate venv
deactivate
C:\> <venv>\Scripts\deactivate.bat
It restores the PATH variable.

To change Python version
pyenv local 2.x
pyenv local 3.x
pyenv global 2.x
pyenv global 3.x

K8s Operator


Introduction

An Operator is a method of 

- packaging, 
- deploying and 
- managing 
a Kubernetes application.

Here a Kubernetes application is an application that is both 
- deployed on Kubernetes and 
- managed by Kubernetes
using the Kubernetes APIs and kubectl tooling.

* Operators are like a runtime for K8s applications.


* Operators are most relevant for stateful applications, like

- databases
- caches
- machine learning
- analytics
- monitoring systems

* Application native/specific. An Operator is an application-specific controller that extends the k8s API for

- packaging (Helm can be used to package multiple k8s apps, which are then deployed and managed by their respective operators)
- installation / deployment on a k8s cluster
- run
- configure / re-configure
- updates
- upgrades
- rollback, in case of failure
- recovery from faults
- backup
- restore
- scaling
- monitoring
- self-service provisioning capabilities
- more complex automation
of the application (particularly stateful applications). The Operator does all of this without data loss and without application unavailability.

* Operator = Custom Resource Definition (CRD) + its controller. Here the custom resource represents the application instance.


* An Operator is a set of cohesive APIs that extend K8s functionality.


* K8s manages K8s objects. With operators, K8s can manage multiple application instances across multiple clusters.


An operator is a way of building and driving an application on top of Kubernetes, behind the Kubernetes APIs. An Operator is more than K8s primitives like StatefulSet and PVC; Operators use ReplicaSets, Services, DaemonSets and Deployments.


An Operator takes human operational knowledge (read: a DevOps team's or Site Reliability Engineer's knowledge about the application lifecycle) and puts it into software that can easily be packaged and shared with consumers.


* Here controllers have direct access to the K8s API, so they can monitor the cluster, change pods/services, scale up/down and call endpoints of the running applications, all according to custom rules written inside those controllers.


* Out of the 4 major modules of the controller code generator, (1) the informer-gen and (2) the lister-gen modules are also called the operator. They form the basis of a custom controller.

* A powerful feature of an Operator is that it enables the desired state to be reflected in your application.
As the application gets more complicated, with more components, detecting these changes can be harder for a human, but easier for a computer.

Operator Framework

1. Operator SDK: Set of tools to accelerate the development of an Operator. E.g. KubeBuilder

Here is CLI guide https://github.com/operator-framework/operator-sdk/blob/master/doc/sdk-cli-reference.md
Here is user guide : https://github.com/operator-framework/operator-sdk/blob/master/doc/user-guide.md
2. Operator Lifecycle Manager (OLM)
3. Operator Metering


The K8s core API can also be used to write custom code for an operator.

An Operator can be developed using

- GoLang
- Ansible Playbooks
- Helm Charts. This is more suited to stateless K8s applications; deeper inspection of Pods, Services, Deployments and ConfigMaps does not happen here.

Types of Custom Resource Definition CRD

1. primary resources
2. secondary resources

* For a ReplicaSet, the primary resource is the ReplicaSet itself (which specifies the Docker image) and the secondary resource is the pod.

* For a DaemonSet, the primary resource is the DaemonSet itself and the secondary resources are the pod + node.
* For a PodSet, the primary resource is the PodSet itself and the secondary resource is the pod.

A PodSet owns its pods via controllerutil.SetControllerReference().

A PodSet operator, similar to a ReplicaSet, will be developed as an example; a sketch of its resource type follows.
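Below is a minimal Go sketch of what such a PodSet custom resource type could look like, in the usual apimachinery style; the package, type and field names are hypothetical and chosen only for illustration.

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodSetSpec defines the desired state: how many pods we want.
type PodSetSpec struct {
	Replicas int32 `json:"replicas"`
}

// PodSetStatus reports the observed state.
type PodSetStatus struct {
	AvailableReplicas int32 `json:"availableReplicas"`
}

// PodSet is the primary (custom) resource managed by the operator.
// Its pods are the secondary resources, owned via owner references
// (e.g. controllerutil.SetControllerReference).
type PodSet struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PodSetSpec   `json:"spec,omitempty"`
	Status PodSetStatus `json:"status,omitempty"`
}

The corresponding CRD registers this type with the K8s API server, and the controller watches both PodSet objects and the pods they own.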


Controller


Controller works like this pseudo code. 


for {
	values := checkRealValues()          // observe the current/real state of the system
	tuneSystem(values, expectedState)    // reconcile it toward the expected/desired state
}





Reconcile() is the main function of a controller. K8s without an operator simply creates a new pod; the controller runs application-specific code in addition to pod creation. A self-contained sketch of the idea follows.
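Here is a self-contained Go sketch of that reconcile idea for the PodSet example; it uses no real client-go or controller-runtime calls, only the compare-and-act logic, so all names are illustrative.

package main

import "fmt"

// Minimal stand-ins for the PodSet resource and its pods; purely illustrative.
type PodSet struct {
	Name     string
	Replicas int
}

// reconcile compares the desired replica count with the pods that actually
// exist and returns the actions an operator would take to converge.
func reconcile(ps PodSet, existing []string) (create int, remove []string) {
	diff := ps.Replicas - len(existing)
	if diff > 0 {
		return diff, nil // too few pods: create the missing ones
	}
	if diff < 0 {
		surplus := -diff
		return 0, existing[:surplus] // too many pods: delete the surplus
	}
	return 0, nil // already at the desired state: nothing to do
}

func main() {
	ps := PodSet{Name: "demo", Replicas: 3}
	create, remove := reconcile(ps, []string{"demo-pod-0"})
	fmt.Printf("create %d pod(s), remove %v\n", create, remove)
}

In a real operator this logic lives inside the Reconcile() callback and uses the K8s API client to list, create and delete pods.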


Code Generator
- deepcopy-gen 
- client-gen
- informer-gen
- lister-gen
- conversion-gen
- defaulter-gen
- register-gen
- set-gen

gen-go and CRD-code-generation are other tools for generating code.

The informer-gen and lister-gen generators are also called the operator; they are the basis for building a custom controller.

The notification loop declares and describes the desired state; the controller figures out how to reach it.

Operator Examples


CoreOS provides sample operators as references, like (1) the etcd operator and (2) the Prometheus operator (each described in a CoreOS blog post). K8s exports all of its internal metrics in the native Prometheus format. Here two third-party resources (TPRs) are defined: Prometheus and ServiceMonitor.


Demo controller

Sample controller 
netperf operator

There are many operators available. Here are some lists:


https://github.com/operator-framework/awesome-operators

https://operatorhub.io/
https://commons.openshift.org/sig/operators.html
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps


Reference


https://coreos.com/operators/

https://coreos.com/blog/introducing-operator-framework
https://coreos.com/blog/introducing-operators.html
https://medium.com/devopslinks/writing-your-first-kubernetes-operator-8f3df4453234
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english
https://linuxhint.com/kubernetes_operator/
https://blog.couchbase.com/kubernetes-operators-game-changer/


Reference for building controller

https://medium.com/@cloudark/kubernetes-custom-controllers-b6c7d0668fdf
https://github.com/piontec/k8s-demo-controller
https://github.com/kubernetes/sample-controller
https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/
https://github.com/operator-framework/getting-started
https://www.linux.com/blog/learn/2018/12/demystifying-kubernetes-operators-operator-sdk-part-2

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/
https://blog.openshift.com/make-a-kubernetes-operator-in-15-minutes-with-helm/
https://github.com/operator-framework/helm-app-operator-kit
https://www.tailored.cloud/kubernetes/write-a-kubernetes-controller-operator-sdk/

Questions


1. K8s operators are under which namespace?

2. Can an operator be developed using Python?

Service Catalog


Introduction

The Service Catalog offers:
- powerful abstractions to make services available in a Kubernetes cluster.
These services are:
- typically third-party managed cloud offerings OR
- self-hosted services

Developers can focus on the applications without the need to manage complex service deployments.

The Service Catalog is a list of (1) service classes (or service offerings), e.g. a database, a messaging queue, an API gateway, a log drain, or something else. Each service is associated with (2) service plans; a service plan is a variant of the service in terms of cost, size, etc.

Service Broker
Typically, a third-party service provider exposes a Service Broker on its own infrastructure.

Service configuration options are described using a JSON schema for each service and plan. Automatic form building is also possible.

Service Broker use cases

1. A service broker handles interactions from modern cloud-native apps to legacy systems, for valuable data stored in those legacy systems.


2. OSBAPI allows interactions with multiple cloud providers.

A service broker can also implement a web-based service dashboard.

A Service Broker is a piece of software (server) that implements the Open Service Broker APIs. These APIs are for
- listing catalog
- provisioning 
- deprovisioning
- binding service instance with application
- unbinding service instance from application

These are secure APIs. The service broker implements OAuth etc. on its interface with the application / container / platform. A service broker proxy can be used to support a custom authentication flow.

For time-consuming provisioning and deprovisioning, OSBAPI supports asynchronous operations.
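As noted in the API list above, the first call a platform usually makes is to list the broker's catalog. Below is a minimal Go sketch of a broker serving GET /v2/catalog (the path comes from the OSB API; the fields shown are a simplified subset and all names/values are made up). A real broker would also implement the provisioning, deprovisioning and binding endpoints and require API-version and auth headers.

package main

import (
	"encoding/json"
	"net/http"
)

// Simplified catalog types: one service class with its plans.
type plan struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
}

type service struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
	Plans       []plan `json:"plans"`
}

type catalog struct {
	Services []service `json:"services"`
}

func main() {
	http.HandleFunc("/v2/catalog", func(w http.ResponseWriter, r *http.Request) {
		c := catalog{Services: []service{{
			ID:          "example-db",
			Name:        "example-db",
			Description: "A hypothetical database service",
			Plans: []plan{
				{ID: "small", Name: "small", Description: "low cost, small size"},
				{ID: "large", Name: "large", Description: "higher cost, larger size"},
			},
		}}}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(c)
	})
	http.ListenAndServe(":8080", nil)
}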

The service broker first registers with K8s.
The K8s platform acts like client software that sends requests to the service broker. The first request may be to get the service catalog. Then the K8s platform will ask it to create a new service instance.

on-demand : the service instance is provisioned when requested.

multi-tenant : service instances are pre-provisioned.

The service instance will be bound to a K8s application/pod using a Service Binding. Typically, a Service Binding will also provide credentials and connectivity information (IP and port) that the application needs to access the Service. These credentials will be managed by K8s as secrets.

Here, the service catalog is a K8s resource and it has a corresponding (1) Service Catalog Controller and (2) Service Catalog API Gateway. The end user interacts with the Service Catalog API Gateway. The gateway asks the service broker to list all the services (service offering/class + service plan). Then K8s updates (removes duplicates, adds, deletes, modifies) the master service catalog and responds to the application/platform client.

Comparison


Open Service Broker API             Kubernetes servicecatalog.k8s.io Kinds
Platform                            Kubernetes
Service Broker                     ClusterServiceBroker
Service                             ClusterServiceClass
Plan                                ClusterServicePlan
Service Instance                    ServiceInstance
Service Binding                     ServiceBinding

Reference 

Docker Volume


Create Volume

There are multiple options

1. With the -v flag.

docker run -v /data <image>

Here the /data folder can be accessed from inside the running container. It is mapped to a host path like

/var/lib/docker/volumes/8e0b3a9d5c544b63a0bbaa788a250e6f4592d81c089f9221108379fd7e5ed017/_data

2. We can also specify the host path:
docker run -v /home/usr/docker_data:/data <image>

3. Using DOCKERFILE
One can add

VOLUME /data

Notes:
3.1. Here we cannot specify a path on the host.
3.2. After declaring a VOLUME, we cannot add files to it from the Dockerfile:

RUN useradd foo
VOLUME /data
RUN touch /data/x
RUN chown -R foo:foo /data

This will not work. 

3.3. We can do the same in the DOCKERFILE before declaring the VOLUME:

RUN useradd foo
RUN mkdir /data
RUN touch /data/x
RUN chown -R foo:foo /data
VOLUME /data

This will work.

4. Create a volume and attach it

docker volume create --name my-vol
docker run -v my-vol:/data <image>

It is similar to the earlier approach; here we just give the volume a name.

Earlier, instead of creating a volume, a data container was used.

5. We can mount the volumes of another container, even if that container is not running.

docker run --volumes-from "container name" <image>

The -v option is for persistent volumes and the --volumes-from option is for ephemeral volumes.

Delete volume

* The docker rm command will not delete the volume; an orphan volume will remain.
* Remove the parent container with docker rm -v. If the volume is not referenced by any other container, it will be removed.
* Volumes linked to user-specified host directories are never deleted by Docker.
* To have a look at the volumes in your system use docker volume ls:
docker volume ls
* To delete all volumes
docker volume rm $(docker volume ls -q)

Cloud Native


Cloud Native
- Promotes OpenSource
- MicroService Architecture
- Containers and container orchestration tools (Read: Docker and K8s)
- Agility
- Automation

Cloud Computing
- On demand computing on Internet
- Minimal Mgmt efforts
- Cost effective due to economies of scale.

Serverless
- cloud-computing execution model
- dynamically managing resources.
- Pricing is based on resources consumed
- applications run in managed, ephemeral containers on a “Functions as a Service” (FaaS) platform
- reduced operational cost, complexity and engineering lead time

MicroService
- Software development technique
- A Variant of SOA
- loosely coupled services
- fine grained services
- lightweight protocols
- modular design

Service Oriented Architecture
- Service reusability
- Easy maintenance
- Greater reliability
- Platform independence

MicroService  Benefits 
- Agility
- Fine-grained Scaling
- Technology Independence
- Parallel Development
- Better Fault Isolation
- Easier to Refactor
- Easy to Understand
- Faster Developer On boarding

Micro Service Architecture Challenges
- Operational Complexity
- Performance Hit Due to Network Latency
- Increased Configuration Management
- Unsafe Communication Medium
- Harder to Troubleshoot
- Architectural Complexity
- Higher Costs
- Duplication of Developer Effort

Cross cutting concerns
- Externalized configuration
- Logging
- Health checks
- Distributed tracing
- Boilerplate code for integrations with message broker, etc.

API Design
REST over HTTP using JSON is common choice

* One can have shared database among all services OR database per service. 

Fault Tolerance 
The Circuit Breaker pattern can prevent an application from repeatedly trying to execute an operation that is likely to fail. It allows services to recover and limits cascading failures across multiple systems. A toy sketch of the idea follows.
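Below is a toy Go sketch of the circuit breaker idea: after a few consecutive failures the breaker "opens" and rejects calls for a cooldown period instead of hammering a failing service. The thresholds, timeouts and names are made up; production systems would use a tested library instead.

package main

import (
	"errors"
	"fmt"
	"time"
)

// breaker trips open after maxFailures consecutive failures and rejects
// calls until cooldown has passed, then lets a trial call through.
type breaker struct {
	maxFailures int
	cooldown    time.Duration
	failures    int
	lastFailure time.Time
}

var errOpen = errors.New("circuit open: call rejected")

func (b *breaker) call(fn func() error) error {
	if b.failures >= b.maxFailures && time.Since(b.lastFailure) < b.cooldown {
		return errOpen // fail fast instead of waiting on a failing dependency
	}
	if err := fn(); err != nil {
		b.failures++
		b.lastFailure = time.Now()
		return err
	}
	b.failures = 0 // success closes the circuit again
	return nil
}

func main() {
	b := &breaker{maxFailures: 3, cooldown: 5 * time.Second}
	for i := 0; i < 5; i++ {
		err := b.call(func() error { return errors.New("downstream unavailable") })
		fmt.Println(i, err)
	}
}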

Log Aggregation
ELK
1. Logstash
2. Elasticsearch
3. Kibana

Distributed Tracing
A correlation ID helps to understand the flow of events across services; a sketch of propagating one follows.
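A minimal Go sketch of propagating a correlation ID as an HTTP header is shown below; the X-Correlation-ID header name is a common convention rather than a standard, and the ID generation here is deliberately simplistic.

package main

import (
	"fmt"
	"math/rand"
	"net/http"
)

// withCorrelationID reuses an incoming X-Correlation-ID header or generates
// one, so logs and downstream calls across services can be stitched together.
func withCorrelationID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Correlation-ID")
		if id == "" {
			id = fmt.Sprintf("req-%08x", rand.Uint32()) // toy ID; real systems use UUIDs
		}
		w.Header().Set("X-Correlation-ID", id)
		fmt.Printf("correlation_id=%s path=%s\n", id, r.URL.Path) // log with the ID
		next.ServeHTTP(w, r)
	})
}

func main() {
	h := withCorrelationID(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	}))
	http.ListenAndServe(":8080", h)
}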

Securing Micro Services
1. JSON Web Token (JWT) (see the sketch after this list)
2. OAuth2. About resource access and sharing.
3. OpenID Connect. About user authentication. It sits on top of OAuth2.
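To make item 1 above concrete, here is a small Go sketch of the JWT structure (three base64url-encoded parts separated by dots). It only decodes the payload; the claim values are made up, and a real service must verify the signature (HMAC/RSA) before trusting any claim.

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	// For illustration, build an unsigned token, then decode its payload part.
	header := `{"alg":"none","typ":"JWT"}`
	payload := `{"sub":"user-1","role":"admin"}`
	token := base64.RawURLEncoding.EncodeToString([]byte(header)) + "." +
		base64.RawURLEncoding.EncodeToString([]byte(payload)) + "."

	parts := strings.Split(token, ".") // header.payload.signature
	claims, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		panic(err)
	}
	fmt.Println(string(claims)) // {"sub":"user-1","role":"admin"}
}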

K8S Deployment
https://container-solutions.com/kubernetes-deployment-strategies/
https://github.com/ContainerSolutions/k8s-deployment-strategies/

Bengaluru Tech Summit : Day 3 - Part 2


The next panel discussion was on "Society 5.0 : Moving towards smart society by Japan". Investment in infrastructure projects needs patience, while US investors are impatient, so Japan, with its excess capital, is a better choice. In Bangalore, the metro project, the sewage project and other infrastructure projects are funded by Japan. The human species evolved from hunting to agriculture to the industrial revolution. Japan made a huge contribution during the industrial revolution with its unique quality processes and tools; even today they are applicable in the software/IT industry as DevOps practices. India's strength is neither manufacturing nor hardware; India's strength is knowledge: its skilled, English-speaking manpower in today's knowledge-driven industry. India also has massive data for machine learning, with around 1.2 billion UIDs. China has even more data, but under full government control, so it is of little use. The "Bengaluru Tokyo Technology Initiative" is worth exploring. Bangalore is a hub for technology, efficiency and entrepreneurship. Japan welcomes Indians.

Dolphin Tank is an interesting initiative in Bangalore. It proposes to mentor and guide start-ups through the next stage of their journey for 6 months. Just like the dolphin, which is a friend in the ocean, it guides the person through the journey in the rough sea till the person becomes independent. India had a cost arbitrage earlier; now India has a skill arbitrage. Large organisations depend on India for innovation. Now such organisations do not increase head count at their Indian centers; they effectively utilize the startup ecosystem of India.

Panel discussion about "Decoding industry 4.0 and digitization: from vision to reality by Germany". Today if one cannot learn, unlearn and relearn then he/she is illiterate. There will be 200 billion devices by year 2025. There are four pillars of IoT: 
1. Hardware, 
2. Extremely complex algorithms, 
3. Software = Algorithm + data. Software takes action based on them. 
4. Cyber Security. 
IoT projects require a risk-taking attitude for testing and trials. Germany has more of a corporate culture. An average German person is skillful in analog, slow and perfect in engineering, while India has a startup culture, a risk-taking attitude, software skills, etc. So both are complementary to each other.

The Industry 4.0 revolution is happening right now; there is no time for 8 years of fundamental research. Capable heavy industry needs support from SMEs for smart factories. Today with AR and VR, one can get the feel of being in the middle of dangerous machinery in the aviation and heavy sectors. All regions of the world have different lead times to manufacture ICs in small quantities: China/Taiwan never accept orders in small quantities, and Germany quotes high prices with long delivery times. The IoT and Industry 4.0 revolution is happening now, even in college curricula. During QA, three challenges for startups were discussed:
1. Skillset
2. Finance 
3. Scalability
"Who is better? A German leader or Modi?" "Nelson Mandela." 

It is possible to write a book / biography by a "ghost writer" using NLP-based software.

Panel discussion about "Geospatial innovations in the times of disruptive changes by KSRSAC" Karnataka Geographic Information System is worth to explore. Geospatial analysis and deep learning based GeoAI are also interesting fields. Bharat Lohani explains, how his company Geokno use LiDAR technology to capture data. They cover 300 square km per day. They have developed alogrihtms, that can detect terrain even in dense forest. They can detect wild creatures in forest. they can detect water channel, very useful to divide water between two states and to help better irrigation. They can create 3d maps. The shadow analysis throughout a day, is very useful for establishing solar energy plant. One more interesting talk by Laxmi Prasad Putta from Vassar Labs. 

Panel discussion about " Emerging Technologies areas in the Indic language-technology Industry by FICCI". Vivekanand Pani talked about his company Reverie Language Technologies's product "Gopal". It is a virtual assistant that can speak many Indian languages. Thirumalai Anandanpillai mentioned that Microsfot's Azure cloud exposes we service API for speech to text for Indian languages. Vinay Chhajlani's WebDunia.com turns 19 years old company in September 2018, who survived through dot com boom. In 2014, DNS was supporting 15 different languages. In 2019, Kannada will be also added in DNS, that is called IDN (International Domain Name). In the world 20 % people knows English and in India 12 % people knows English. So IDN is needed for non-English domain name. On 1st May 2018, a legal framework for Indic language was established. Since 2014 Gmail supports IDN based e-mail addresses. 4 million users uses Hindi email address by Rajsthan Government. Today more Internet data traffic is about entertainment content. However it will change, as government will give more online services in regional languages. The emerging opportunity is driving emerging of technology. In year 2010, VCs thought that Indian language speaking customers has no money so they were reluctant to invest for such language based startups. 22 years ago, we had only 1 % of PC penetration, so no need of Hindi. In 2010, 50% of mobile penetration, so Hindi was needed. So in 2011, first mobile phone launched with Hindi supports. Today many people are first time Internet user with mobile. In 2002, the way people were behaving at Yahoo chat room, today this first time user may behave same way. 

During QA, I asked about Sanskrit, Sage Panini's Sanskrit grammar and its relevance to NLP. Everyone nominated Thirumalai Anandanpillai from Microsoft to answer the question, maybe because he has a nice TILAK on his forehead. He replied that today translation happens via neural networks; it is not rule based. Panini's grammar is relevant and useful for rule-based services.

So overall "Bengaluru Tech Summit" was worth to attend event, with many thought provoking ideas, updates about recent trends, startups and insight to upcoming futures. There was also exhibition with stalls from established companies and startups both. 

Service Meshes + Kubernetes: Solving Service-Service Issues


On Saturday 23rd March, the Kubernetes Day India 2019 event is happening in Bangalore. Mr. Ben Hall has arrived here in Bangalore as a speaker for this event. He is the founder of https://katacoda.com/. The "DigitalOcean Bangalore" Meetup group grabbed this opportunity and organized the meetup event "Service Meshes + Kubernetes: Solving Service-Service Issues" this evening, where Mr. Ben Hall shared his knowledge. Here are my notes, exclusively for readers of this blog "Express YourSelf !"

DigitalOcean announced another event, do.co/tide2019, on April 4th and encouraged all to participate. They also promoted their new managed Kubernetes offering on DigitalOcean.

Kubernetes is a great tool. However, we need a service mesh for security, scaling and communication among pods. It is about TLS, certificate rotation, auth/security, rate limiting, retry logic, failover etc. It provides more control for A/B testing, canary releases, collecting system metrics and verifying "is this a trusted caller pod?". A service mesh can be implemented using:

  • ASPEN MESH
  • linkerd
  • HashiCorp Consul
  • istio

Istio provides four key capabilities:
1 connect : service discovery, load balancing
2 secure : e.g. protect the system against a fake payment service; encryption, authentication, authorization
3 control
4 observe

Istio adds/extends capabilities on top of the Kubernetes APIs by adding YAML files. Grafana and Prometheus installation is part of the Istio installation.


Istio is essentially about configuring the Envoy proxy. Istio uses three major components: 1. Pilot, 2. Mixer and 3. Citadel.

He demonstrated Istio on his own website / learning platform Katacoda. He tested using curl to generate ingress traffic. He also mentioned and demonstrated the "scope" tool, which is for monitoring, visualization and management of Docker and k8s. It is not part of Istio.

We had interesting QA sessions.

* Yes, Istio adds a little latency when we use HTTP 1.0 based RESTful APIs. However, we get a performance gain when we use gRPC or HTTP 2.0 based RESTful APIs. It is a tradeoff between performance in the production environment for given hardware and gains in developer productivity.

* Prometheus is used to store all metrics of the cluster.

* How to configure different timeout values for 2 different consumers who consume the same service? Well, we can duplicate the service with different names, or modify some application-level logic.

* For pod communication, one can use either an RPC-based approach (gRPC, RESTful API) or an enterprise-service-bus-based approach (message queue, Kafka). If one takes the second approach, then Istio may or may not provide additional value. It depends.

* There was a quick comparison between SDN-based routing and the Istio-based approach.

* During informal QA, we discussed Gloo. Gloo is a feature-rich, Kubernetes-native ingress controller and next-generation API gateway, powered by Envoy.

At the end we had tea and light snacks of cookies and Samosa. 

Reference 

https://www.meetup.com/DigitalOceanBangalore/events/259864782/
https://www.slideshare.net/BenHalluk/presentations
https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078
https://blog.aquasec.com/istio-service-mesh-traffic-control
https://gloo.solo.io/
https://github.com/solo-io/gloo

https://layers7.blogspot.com/2019/03/kafka-communication-among-micro-services.html
https://layers7.blogspot.com/2017/12/istio.html

Disclaimer

This blog post is NOT verbatim of the speech. I captured this note as per my understanding. It may not necessarily indicate the speaker's intention. So corrections/suggestions/comments are welcome.

MicroService Structure


1. API

* Operation
- Command to modify data
- Query to retrieve data

* Types
- Synchronous
- Asynchronous 

* Protocol
- gRPC
- RESTful

Here the microservice acts as a server. A consumer invokes the service using its API.

2. API client

Here the microservice acts as a client; it invokes another microservice using that service's API.

3. Event Publisher

4. Event Consumer

An event is typically a DDD (Domain-Driven Design) event.

5. Business logic

6. Private Database

MicroServices : common characteristics



- Automated Deployment

- Componentization via Services : 
-- A component is a unit of software that is independently replaceable and upgradeable.
-- A service may consist of multiple processes that will always be developed and deployed together
-- services are independently deployable.
-- Microservices have their own domain logic

- Organized around Business Capabilities
-- No 3-tier

- Products not Projects

- Smart endpoints and dumb pipes
-- microservices aim to be as decoupled and as cohesive as possible
-- Microservices receive a request, apply logic as appropriate and produce a response
-- No Enterprise Service Bus (ESB), with sophisticated facilities for message routing, choreography, transformation, and applying business rules.

- Decentralized Governance
-- different components, different languages
-- Patterns: Tolerant Reader and Consumer-Driven Contracts

- Decentralized Data Management
-- Domain-Driven Design DDD divides a complex domain up into multiple bounded contexts and maps out the relationships between them.
-- Polyglot Persistence : Each service owns its database
-- Results in simpler upgrade of application. 
1. User sessions : Redis
2. Financial Data and Reporting : RDBMS
3. Shopping Cart : Riak
4. Recommendation : Neo4j
5. Product Catalog : MongoDB
6. Analytics and User activity logs : Cassandra

- Infrastructure Automation
-- CI/CD Pipeline

- Design for failure
-- The application should be able to tolerate the failure of services
-- Real time monitoring (of circuit breaker status, current throughput and latency) and auto restore of services.


- Evolutionary Design
-- How to divide a monolith:
1. The key property of a component: independent replacement and upgradeability
2. Drive modularity through the pattern of change: the most frequently changed code should be in one service, and the least frequently changed, stable code in another service. Modules that are often changed together should be merged into a single service.
-- Avoid versioning of services
-- Example: The Guardian website

Reference
https://martinfowler.com/articles/microservices.html
https://martinfowler.com/bliki/PolyglotPersistence.html