Python Virtual Environment


Introduction

The venv module provides support for creating lightweight “virtual environments”, each optionally isolated from the system site directories.

A virtual environment is a Python environment such that 
- the Python interpreter, 
- libraries and 
- scripts 
installed into it are isolated from (1) those installed in other virtual environments, and (2) (by default) any libraries installed in a “system” Python, i.e., one which is installed as part of your operating system.

A virtual environment is a directory tree which contains Python executable files and other files which indicate that it is a virtual environment. The path to this directory is reported by the variables
sys.prefix
sys.exec_prefix
These variables are used to locate the site-packages directory.
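A quick way to inspect these values, and to detect whether the interpreter is running inside a venv (a minimal sketch using only the standard library; sys.base_prefix points at the interpreter the venv was created from):

```python
import sys

# Inside a venv, sys.prefix points at the venv directory;
# sys.base_prefix points at the base Python installation.
print(sys.prefix)
print(sys.exec_prefix)

# Simple venv detection: the two prefixes differ inside a venv.
in_venv = sys.prefix != sys.base_prefix
print("running inside a venv:", in_venv)
```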

Each virtual environment has its own
- Python binary (which matches the version of the binary that was used to create this environment) 
- independent set of installed Python packages in its site directories.

Packages (Modules) 

Check installed packages using
pip list

Check whether a specific package is installed using
pip show "package name"

Two types of packages
1. System packages (installed as part of Python installation)
2. Site packages (3rd party libraries)

Reinstall using
pip install --upgrade --no-deps --force-reinstall "package name"

Pip can export a list of all installed packages and their versions using the freeze command:
pip freeze
Its output can be saved to a requirements.txt file (pip freeze > requirements.txt), which is later used as:
pip install -r requirements.txt
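A freeze-style listing can also be produced from Python itself via the standard-library importlib.metadata (a sketch, not a replacement for pip freeze):

```python
from importlib import metadata

# Build "name==version" lines for every installed distribution,
# similar in spirit to `pip freeze`.
lines = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
)
print("\n".join(lines))
```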

Commands

To create venv
python3 -m venv /path/to/new/virtual/environment
c:\>c:\Python35\python -m venv c:\path\to\myenv
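The venv module can also be driven programmatically; a sketch that creates a throwaway environment under a temporary directory:

```python
import os
import tempfile
import venv

# Create a fresh virtual environment in a temporary directory.
target = tempfile.mkdtemp(prefix="demo-venv-")
venv.create(target, with_pip=False)  # with_pip=False keeps creation fast

# The environment contains its own python binary plus a pyvenv.cfg marker.
bindir = "Scripts" if os.name == "nt" else "bin"
print(os.path.isfile(os.path.join(target, "pyvenv.cfg")))  # True
print(sorted(os.listdir(os.path.join(target, bindir)))[:3])
```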

To activate venv
source <venv>/bin/activate
C:\> <venv>\Scripts\activate.bat
Activation modifies the PATH variable: the venv's bin (or Scripts) directory is prepended to it.

To deactivate venv
deactivate
(the same command works in a Windows cmd shell, via Scripts\deactivate.bat)
It restores the original PATH variable.

To change Python version
pyenv local 2.x
pyenv local 3.x
pyenv global 2.x
pyenv global 3.x

K8s Operator


Introduction

An Operator is a method of 

- packaging, 
- deploying and 
- managing 
a Kubernetes application

Here a Kubernetes application is an application that is both 
- deployed on Kubernetes and 
- managed by Kubernetes
using the Kubernetes APIs and kubectl tooling.

* Operators are like a runtime for K8s applications


* Operators are most relevant for stateful applications, like

- databases
- caches
- machine learning
- analytics
- monitoring systems

* Application native/specific. An Operator is an application-specific controller that extends the K8s API for

- packaging (Helm can be used to package multiple K8s apps, each deployed and managed by its respective operator)
- installation / deployment on a K8s cluster
- running
- configuration / re-configuration
- updates
- upgrades
- rollback, in case of failure
- recovery from faults
- backup
- restore
- scaling
- monitoring
- self-service provisioning capabilities
- more complex automation

of the application (particularly stateful applications). The Operator does all of this without data loss and without application unavailability.

* Operator = Custom Resource Definition (CRD) + its controller. Here the custom resource is the application instance.


* Operator = a set of cohesive APIs that extend K8s functionality.


* K8s manages K8s objects. With operators, K8s can manage multiple application instances across multiple clusters.


An operator is a way of building and driving an application on top of Kubernetes, behind the Kubernetes APIs. An Operator is more than K8s primitives like StatefulSet and PVC; it uses ReplicaSets, Services, DaemonSets and Deployments.


An Operator takes human operational knowledge (read: the DevOps team's or Site Reliability Engineer's knowledge of the application lifecycle) and puts it into software that is easy to package and share with consumers.


* Here controllers have direct access to the K8s API. So they can monitor the cluster, change pods/services, scale up/down and call endpoints of the running applications, all according to custom rules written inside those controllers.


* Among the major modules of the controller code generator, the (1) informer-gen and (2) lister-gen modules are also called the operator. They form the basis of a custom controller.

* A powerful feature of an Operator is:
enabling the desired state to be reflected in your application.
As the application grows more complex, with more components, detecting these changes becomes harder for a human but stays easy for a computer.

Operator Framework

1. Operator SDK: a set of tools to accelerate the development of an Operator (KubeBuilder is a comparable toolkit)

Here is the CLI guide: https://github.com/operator-framework/operator-sdk/blob/master/doc/sdk-cli-reference.md
Here is the user guide: https://github.com/operator-framework/operator-sdk/blob/master/doc/user-guide.md
2. Operator Lifecycle Manager
3. Operator Metering


The K8s core API can also be used directly to write custom operator code.

Operators can be developed using

- GoLang
- Ansible Playbooks
- Helm Charts. This is more suited to stateless K8s applications; deeper inspection of Pods, Services, Deployments and ConfigMaps does not happen

Types of Custom Resource Definition CRD

1. primary resources
2. secondary resources

* For a ReplicaSet, the primary resource is the ReplicaSet itself and the secondary resource is the pod

* For DaemonSet, primary resource is DaemonSet itself and secondary resource is pod+node
* For PodSet, primary resource is PodSet itself and secondary resource is pod.

A PodSet owns its pods via controllerutil.SetControllerReference()

A PodSet, similar to a ReplicaSet, will be developed.


Controller


A controller works like this pseudocode.


while True {
  values = check_real_values()
  tune_system(values, expected_state)
}





Reconcile() is the major function of a controller. K8s without an operator simply creates a new pod; a controller runs application-specific code in addition to pod creation.
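The idea can be sketched as a toy reconcile step; all names and the in-memory "cluster" below are hypothetical, and a real operator would act through the K8s API:

```python
# Toy reconciliation: converge an observed replica count toward the
# desired count, one step at a time. All names here are illustrative.
def reconcile(observed: dict, desired: dict) -> dict:
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(f"create pod for {name}")
        elif have > want:
            actions.append(f"delete pod of {name}")
    return {"actions": actions}

state = {"podset-a": 1}
print(reconcile(state, {"podset-a": 3}))
# one create action is emitted; the loop runs again until converged
```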


Code Generator
- deepcopy-gen 
- client-gen
- informer-gen
- lister-gen
- conversion-gen
- defaulter-gen
- register-gen
- set-gen

gen-go and CRD-code-generation are other tools to generate code.

The informer-gen and lister-gen modules are also called the operator. They are the basis for building a custom controller.

The notification loop declares and describes the desired state; the controller finds out how to reach it.

Operator Examples


CoreOS provides sample operators as references, e.g. (1) etcd and (2) Prometheus; each has an introductory blog post. K8s exports all of its internal metrics in the native Prometheus format. Here, two third-party resources (TPRs) are defined: Prometheus and ServiceMonitor.


Demo controller

Sample controller 
netperf operator

There are many operators available. Here are the lists


https://github.com/operator-framework/awesome-operators

https://operatorhub.io/
https://commons.openshift.org/sig/operators.html
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps


Reference


https://coreos.com/operators/

https://coreos.com/blog/introducing-operator-framework
https://coreos.com/blog/introducing-operators.html
https://medium.com/devopslinks/writing-your-first-kubernetes-operator-8f3df4453234
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english
https://linuxhint.com/kubernetes_operator/
https://blog.couchbase.com/kubernetes-operators-game-changer/


Reference for building controller

https://medium.com/@cloudark/kubernetes-custom-controllers-b6c7d0668fdf
https://github.com/piontec/k8s-demo-controller
https://github.com/kubernetes/sample-controller
https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/
https://github.com/operator-framework/getting-started
https://www.linux.com/blog/learn/2018/12/demystifying-kubernetes-operators-operator-sdk-part-2

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/
https://blog.openshift.com/make-a-kubernetes-operator-in-15-minutes-with-helm/
https://github.com/operator-framework/helm-app-operator-kit
https://www.tailored.cloud/kubernetes/write-a-kubernetes-controller-operator-sdk/

Questions


1. Which namespace are K8s operators under?

2. Can an operator be developed using Python?

Service Catalog


Introduction

The Service Catalog offers :
- powerful abstractions to make services available in a Kubernetes cluster. 
These services are:
- typically third-party managed cloud offerings OR
- self-hosted services 

Developers can focus on the applications without the need for managing complex services deployment.

Service Catalog is a list of (1) service classes (or service offerings), e.g. a database, messaging queue, API gateway, a log drain, or something else. Each service class is associated with (2) service plans. A service plan is a variant of the service in terms of cost, size, etc.

Service Broker
Typically, a third-party service provider will expose a Service Broker on its own infrastructure.

Service configuration options are described using a JSON schema, which configures both the service and the plan. Automatic form building from the schema is also possible.

Service Broker use cases

1. A service broker handles interactions from modern cloud-native apps to legacy systems, for valuable data stored in those legacy systems.


2. OSBAPI allows interactions with multiple cloud providers.

A service broker can also implement a web-based service dashboard.

A Service Broker is a piece of software (server) that implements the Open Service Broker APIs. These APIs are for
- listing catalog
- provisioning 
- deprovisioning
- binding service instance with application
- unbinding service instance from application

These are secure APIs. The service broker implements OAuth etc. on its interface with the application / container / platform. A service broker proxy can be used to support custom authentication flows.

For time-consuming provisioning and deprovisioning, OSBAPI supports asynchronous operations.

The service broker first registers with K8s.
The K8s platform acts like client software that sends requests to the service broker. The first request may be to get the service catalog; then the K8s platform asks the broker to create a new service instance.
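That request sequence can be sketched with hypothetical in-memory broker stubs (the function names loosely mirror OSB API operations; everything here is illustrative):

```python
# Hypothetical in-memory service broker: the platform first fetches the
# catalog, then asks for a new service instance.
CATALOG = {
    "services": [
        {"name": "example-db", "plans": [{"name": "small"}, {"name": "large"}]}
    ]
}

INSTANCES = {}

def get_catalog() -> dict:
    return CATALOG

def provision(instance_id: str, service: str, plan: str) -> dict:
    INSTANCES[instance_id] = {"service": service, "plan": plan}
    return {"state": "succeeded"}

catalog = get_catalog()
print([s["name"] for s in catalog["services"]])      # ['example-db']
print(provision("inst-1", "example-db", "small"))    # {'state': 'succeeded'}
```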

on-demand : a service instance is provisioned when requested.

multi-tenant : service instances are pre-provisioned.

The service instance is bound to a K8s application/pod using a Service Binding. Typically, a Service Binding also provides credentials and connectivity information (IP and port) that the application uses to access the service. These credentials are managed by K8s as secrets.

Here, the service catalog is a K8s resource with a corresponding (1) Service Catalog Controller and (2) Service Catalog API Gateway. The end user interacts with the Service Catalog API gateway. The gateway asks the service broker to list all services (service offering/class + service plan). K8s then updates (removes duplicates, adds, deletes, modifies) the master service catalog and responds to the application/platform client.

Comparison


Open Service Broker API             Kubernetes servicecatalog.k8s.io Kinds
Platform                            Kubernetes
Service Broker                      ClusterServiceBroker
Service                             ClusterServiceClass
Plan                                ClusterServicePlan
Service Instance                    ServiceInstance
Service Binding                     ServiceBinding


Docker Volume


Create Volume

There are multiple options

1. With -v flag. 

docker run -v /data

Here /data folder can be accessed from inside the running container. It is mapped to 

/var/lib/docker/volumes/8e0b3a9d5c544b63a0bbaa788a250e6f4592d81c089f9221108379fd7e5ed017/_data

2. We can also specify the host path:
docker run -v /home/usr/docker_data:/data

3. Using DOCKERFILE
One can add

VOLUME /data

Notes:
3.1. Here we cannot specify a path on the host.
3.2. After declaring a VOLUME, we cannot add files to it from the Dockerfile:

RUN useradd foo
VOLUME /data
RUN touch /data/x
RUN chown -R foo:foo /data

This will not work. 

3.3. We can do the same in the DOCKERFILE, before declaring the VOLUME:

RUN useradd foo
RUN mkdir /data
RUN touch /data/x
RUN chown -R foo:foo /data
VOLUME /data

This will work

4. Create a volume and attach it

docker volume create --name my-vol
docker run -v my-vol:/data

It is similar to the earlier approach; here we just give the volume a name.

Earlier, data containers were used instead of named volumes.

5. We can use the volumes of another container, even if that container is not running.

docker run --volumes-from "container name"

The option -v is for persistent volumes; the option --volumes-from is for ephemeral volumes.

Delete volume

* The docker rm command will not delete volumes; orphan volumes will remain.
* Remove the parent container with docker rm -v. If the volume is not referenced by any other container, it will then be removed.
* Volumes mapped to user-specified host directories are never deleted by Docker.
* To have a look at the volumes on your system, use docker volume ls:
docker volume ls
* To delete all volumes:
docker volume rm $(docker volume ls -q)

Cloud Native


Cloud Native
- Promotes OpenSource
- MicroService Architecture
- Containers and container orchestration tools (Read: Docker and K8s)
- Agility
- Automation

Cloud Computing
- On demand computing on Internet
- Minimal Mgmt efforts
- Cost effective due to economies of scale.

Serverless
- Cloud-computing execution model
- Resources are managed dynamically
- Pricing is based on resources consumed
- Applications run in managed, ephemeral containers on a “Functions as a Service” (FaaS) platform
- Reduced operational cost, complexity and engineering lead time

MicroService
- Software development technique
- A Variant of SOA
- loosely coupled services
- fine grained services
- lightweight protocols
- modular design

Service Oriented Architecture
- Service reusability
- Easy maintenance
- Greater reliability
- Platform independence

MicroService  Benefits 
- Agility
- Fine-grained Scaling
- Technology Independence
- Parallel Development
- Better Fault Isolation
- Easier to Refactor
- Easy to Understand
- Faster Developer Onboarding

Micro Service Architecture Challenges
- Operational Complexity
- Performance Hit Due to Network Latency
- Increased Configuration Management
- Unsafe Communication Medium
- Harder to Troubleshoot
- Architectural Complexity
- Higher Costs
- Duplication of Developer Effort

Cross cutting concerns
- Externalized configuration
- Logging
- Health checks
- Distributed tracing
- Boilerplate code for integrations with message broker, etc.

API Design
REST over HTTP using JSON is common choice

* One can have a shared database among all services OR a database per service.

Fault Tolerance 
The Circuit Breaker pattern can prevent an application from repeatedly trying to execute an operation that is likely to fail. It gives services time to recover and limits cascading failures across multiple systems.
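A minimal sketch of the pattern (illustrative only; the class, thresholds and names below are assumptions, and production systems typically use a resilience library rather than hand-rolled code):

```python
import time

# Minimal circuit breaker: after `max_failures` consecutive failures the
# circuit opens and calls are rejected until `reset_after` seconds pass.
class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: call rejected")
            # half-open: allow one trial call through
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result
```

After the failure threshold is reached, further calls fail fast instead of hammering the broken dependency.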

Log Aggregation
ELK
1. Logstash
2. Elasticsearch
3. Kibana

Distributed Tracing
The correlation ID helps to understand the flow of events across services.
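One way to sketch this in Python is a logging filter that stamps a context-local correlation ID onto every record (all names here are illustrative, not a standard API):

```python
import logging
import uuid
from contextvars import ContextVar

# Carry a correlation ID in a context variable and stamp it onto every
# log record, so logs from one request can be correlated across services.
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()
        return True

logger = logging.getLogger("svc")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(correlation_id)s %(message)s"))
handler.addFilter(CorrelationFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# An incoming request would set (or propagate) the ID before handling:
correlation_id.set(str(uuid.uuid4()))
logger.info("order received")  # the same ID appears in each service's log
```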

Securing Micro Services
1. JSON Web Token (JWT) 
2. OAuth2: about resource access and sharing.
3. OpenID Connect: about user authentication; built on top of OAuth2.
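To illustrate the JWT structure (header.payload.signature, signed with HS256), here is a hand-rolled sketch using only the standard library; real systems should use a vetted JWT library:

```python
import base64
import hashlib
import hmac
import json

# Illustration of JWT structure only; not for production use.
def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> bool:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user-1"}, b"secret")
print(verify_jwt(token, b"secret"))        # True
print(verify_jwt(token, b"wrong-secret"))  # False
```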

K8S Deployment
https://container-solutions.com/kubernetes-deployment-strategies/
https://github.com/ContainerSolutions/k8s-deployment-strategies/