Open Container Initiative


1. Image specification

An OCI Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime.

https://github.com/opencontainers/image-spec/blob/master/spec.md

1.1 manifest
A JSON file that references the image's configuration and filesystem layers and carries metadata (annotations are key/value pairs) about the contents and dependencies of the image.
https://github.com/opencontainers/image-spec/blob/master/manifest.md
https://docs.docker.com/engine/reference/commandline/manifest/
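As a rough illustration of the manifest shape, here are hand-written Go structs that mirror the JSON fields (not the official specs-go types; digests and sizes are placeholders):

package main

import (
	"encoding/json"
	"fmt"
)

// Descriptor points at a blob (config or layer) by media type, digest and size.
type Descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"`
	Size      int64  `json:"size"`
}

// Manifest ties one config blob and an ordered list of layer blobs together.
type Manifest struct {
	SchemaVersion int               `json:"schemaVersion"`
	Config        Descriptor        `json:"config"`
	Layers        []Descriptor      `json:"layers"`
	Annotations   map[string]string `json:"annotations,omitempty"` // key/value metadata
}

func main() {
	m := Manifest{
		SchemaVersion: 2,
		Config: Descriptor{
			MediaType: "application/vnd.oci.image.config.v1+json",
			Digest:    "sha256:aaaa...", // placeholder digest
			Size:      1024,             // placeholder size
		},
		Layers: []Descriptor{{
			MediaType: "application/vnd.oci.image.layer.v1.tar+gzip",
			Digest:    "sha256:bbbb...", // placeholder digest
			Size:      4096,             // placeholder size
		}},
		Annotations: map[string]string{"org.opencontainers.image.authors": "example"},
	}
	out, _ := json.MarshalIndent(m, "", "  ")
	fmt.Println(string(out))
}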

1.2 image index (optional)
A higher-level manifest that points to platform-specific manifests of the same image (e.g. per CPU architecture and OS).

1.3 image layout

1.4 file system layers

1.5 configuration
A JSON file describing the image's execution parameters (environment, entrypoint/cmd, working directory) and its ordered rootfs layers: https://github.com/opencontainers/image-spec/blob/master/config.md
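A similar rough sketch of the configuration's main fields (again hand-written structs mirroring the spec's JSON field names; values are placeholders):

package main

import (
	"encoding/json"
	"fmt"
)

// ImageConfig holds the execution parameters recorded in the image config.
type ImageConfig struct {
	Env        []string `json:"Env,omitempty"`
	Entrypoint []string `json:"Entrypoint,omitempty"`
	Cmd        []string `json:"Cmd,omitempty"`
	WorkingDir string   `json:"WorkingDir,omitempty"`
}

// RootFS lists the uncompressed layer digests (diff_ids) in order.
type RootFS struct {
	Type    string   `json:"type"` // always "layers"
	DiffIDs []string `json:"diff_ids"`
}

type Image struct {
	Architecture string      `json:"architecture"`
	OS           string      `json:"os"`
	Config       ImageConfig `json:"config,omitempty"`
	RootFS       RootFS      `json:"rootfs"`
}

func main() {
	img := Image{
		Architecture: "amd64",
		OS:           "linux",
		Config:       ImageConfig{Env: []string{"PATH=/usr/local/bin:/usr/bin"}, Cmd: []string{"/bin/sh"}},
		RootFS:       RootFS{Type: "layers", DiffIDs: []string{"sha256:cccc..."}}, // placeholder diff_id
	}
	b, _ := json.MarshalIndent(img, "", "  ")
	fmt.Println(string(b))
}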

1.6 conversion

1.7 descriptor 

Tools

2. Distribution specification

It describes the API exposed by a registry to distribute images, so that container images can easily be
- shared,
- searched,
- obtained and
- verified.

https://github.com/opencontainers/distribution-spec/blob/master/spec.md

It is implemented by Docker Registry and https://github.com/atlaskerr/stori
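A minimal sketch of pulling a manifest over this API with plain HTTP (the registry address localhost:5000 and the repository library/busybox:latest are assumptions for the example; a real registry may also require authentication):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// GET /v2/<name>/manifests/<reference> returns the image manifest.
	req, err := http.NewRequest("GET",
		"http://localhost:5000/v2/library/busybox/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	// Ask for a modern manifest media type instead of a legacy schema.
	req.Header.Set("Accept", "application/vnd.oci.image.manifest.v1+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // the manifest JSON (config + layer descriptors)
}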

3. Runtime Specification

It aims to specify the configuration, execution environment, and lifecycle of a container.

runc is the reference OCI implementation of the Runtime Specification.

Lifecycle


3.1 create command is invoked with a bundle (config.json + rootfs)
3.2 runtime environment is created according to config.json
3.3 start command is invoked
3.4 pre-start hooks run
3.5 user program specified by process is started
3.6 post-start hooks run
3.7 container process exits (normal exit, error, kill or crash)
3.8 delete command is invoked
3.9 container is destroyed (undo of the create step)
3.10 post-stop hooks run
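A rough sketch of driving this lifecycle from Go by shelling out to runc (assumes runc is installed and that /tmp/mybundle contains a rootfs and a config.json; the container id "demo" is made up):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one runc subcommand and prints its combined output.
func run(args ...string) {
	out, err := exec.Command("runc", args...).CombinedOutput()
	fmt.Printf("runc %v -> %s (err: %v)\n", args, out, err)
}

func main() {
	run("create", "--bundle", "/tmp/mybundle", "demo") // 3.1-3.2: create the runtime environment from config.json
	run("start", "demo")                               // 3.3-3.6: run hooks and the user process
	// ... the container process runs and eventually exits (3.7) ...
	run("delete", "demo")                              // 3.8-3.10: destroy the container and run post-stop hooks
}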

Virtual Machine


Operating System level virtualization / containers /  virtual private servers / virtual environments (VEs) / partitions / jails

  • Docker, 
  • Solaris Containers, 
  • OpenVZ, 
  • Linux-VServer, 
  • LXC, 
  • AIX Workload Partitions, 
  • Parallels Virtuozzo Containers, and 
  • iCore Virtual Accounts.
With branded zones on Solaris, one can run older Solaris releases or a Linux userland inside a zone (lx zones provide the Linux brand).

Hardware-assisted virtualization 
  • KVM, 
  • VMware Workstation, 
  • VMware Fusion, 
  • Hyper-V, 
  • Windows Virtual PC, 
  • Xen, 
  • Parallels Desktop for Mac, 
  • Oracle VM Server for SPARC, 
  • VirtualBox and 
  • Parallels Workstation.
Process Virtual Machine / application virtual machine / Managed Runtime Environment (MRE)

  • Java: the JVM (Java Virtual Machine), shipped as part of the JRE (Java Runtime Environment)
  • .NET Framework: the CLR (Common Language Runtime); another example is the Parrot Virtual Machine
System Virtual Machine
  • runs a complete guest OS on top of a hypervisor

Python Virtual Environment


Introduction

The venv module provides support for creating lightweight “virtual environments”, each with its own site directories, optionally isolated from the system site directories.

A virtual environment is a Python environment such that 
- the Python interpreter, 
- libraries and 
- scripts 
installed into it are isolated from (1) those installed in other virtual environments, and (2) (by default) any libraries installed in a “system” Python, i.e., one which is installed as part of your operating system.

A virtual environment is a directory tree which contains Python executable files and other files which indicate that it is a virtual environment. The path to this directory is reported by the variables
sys.prefix
sys.exec_prefix
These variables are used to locate the site-packages directory.

Each virtual environment has its own
- Python binary (which matches the version of the binary that was used to create this environment) 
- independent set of installed Python packages in its site directories.

Packages (Modules) 

Check installed packages using
pip list

Check whether a specific package is installed using
pip show "package name"

Two types of packages
1. System packages (installed as part of Python installation)
2. Site packages (3rd party libraries)

Reinstall using
pip install --upgrade --no-deps --force-reinstall "package name"

Pip can export a list of all installed packages and their versions using the freeze command:
pip freeze
This can be used to create a requirements.txt file:
pip freeze > requirements.txt
which is consumed later as:
pip install -r requirements.txt

Commands

To create venv
python3 -m venv /path/to/new/virtual/environment
c:\>c:\Python35\python -m venv c:\path\to\myenv

To activate venv
source /path/to/new/virtual/environment/bin/activate
C:\> c:\path\to\myenv\Scripts\activate.bat
It modifies the PATH variable, prepending the venv's bin (Scripts on Windows) directory.

To deactivate venv
deactivate
C:\> c:\path\to\myenv\Scripts\deactivate.bat
(On Linux/macOS, deactivate is a shell function defined by the activate script.) It restores the original PATH variable.

To change Python version
pyenv local 2.x
pyenv local 3.x
pyenv global 2.x
pyenv global 3.x

K8s Operator


Introduction

An Operator is a method of 

- packaging, 
- deploying and 
- managing 
a Kubernetes application

Here a Kubernetes application is an application that is both 
- deployed on Kubernetes and 
- managed by Kubernetes
using the Kubernetes APIs and kubectl tooling.

* Operators are like a runtime for a K8s application.


* Operators are most relevant for stateful applications, such as:

- databases
- caches
- machine learning workloads
- analytics
- monitoring systems

* Application native/specific. An Operator is an application-specific controller that extends the K8s API to handle:

- packaging (Helm can be used to package multiple K8s apps, which are then deployed and managed by their respective Operators)
- installation / deployment on a K8s cluster
- running
- configuration / re-configuration
- updates
- upgrades
- rollback, in case of failure
- recovery from faults
- backup
- restore
- scaling
- monitoring
- self-service provisioning capabilities
- more complex automation
of the application (particularly stateful applications). The Operator does all of this without data loss and without application unavailability.

* Operator = Custom Resource Definition (CRD) + its controller. Here the custom resource represents an application instance.
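A minimal sketch of such a custom resource as a Go type, kubebuilder/controller-runtime style (PodSet and its fields are hypothetical; in a real project deepcopy functions would be generated for it):

package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodSetSpec is the desired state: how many pods the user wants.
type PodSetSpec struct {
	Replicas int32 `json:"replicas"`
}

// PodSetStatus is the observed state written back by the controller.
type PodSetStatus struct {
	AvailableReplicas int32 `json:"availableReplicas"`
}

// PodSet is the custom resource; one PodSet object = one application instance.
type PodSet struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   PodSetSpec   `json:"spec,omitempty"`
	Status PodSetStatus `json:"status,omitempty"`
}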


* An Operator is a set of cohesive APIs that extend K8s functionality.


* K8s manages K8s objects. With Operators, K8s can manage multiple application instances across multiple clusters.


An Operator is a way of building and driving an application on top of Kubernetes, behind the Kubernetes APIs. An Operator is more than K8s primitives like StatefulSet and PVC; it builds on ReplicaSets, Services, DaemonSets and Deployments.


An Operator takes human operational knowledge (read: the DevOps team's or Site Reliability Engineer's knowledge about the application lifecycle) and puts it into software that is easy to package and share with consumers.


* Here, controllers have direct access to the K8s API, so they can monitor the cluster, change pods/services, scale up/down and call endpoints of the running applications, all according to custom rules written inside those controllers.


* Out of the 4 major modules of the controller code generator, the (1) informer-gen and (2) lister-gen modules are also called the operator. They form the basis of a custom controller.

* A powerful feature of an Operator is that it keeps the desired state reflected in your application. As the application grows more complicated, with more components, detecting these changes can be harder for a human, but easier for a computer.

Operator Framework

1. Operator SDK: a set of tools to accelerate the development of an Operator (comparable to KubeBuilder).

Here is the CLI guide: https://github.com/operator-framework/operator-sdk/blob/master/doc/sdk-cli-reference.md
Here is the user guide: https://github.com/operator-framework/operator-sdk/blob/master/doc/user-guide.md
2. Operator Lifecycle Manager (OLM)
3. Operator Metering


The K8s core API is used to write custom code for an Operator.

Operators can be developed using:

- GoLang
- Ansible Playbooks
- Helm Charts. This is more suited to stateless K8s applications; there is no deeper inspection of Pods, Services, Deployments and ConfigMaps.

Types of resources for a Custom Resource Definition (CRD) controller

1. primary resources
2. secondary resources

* For a ReplicaSet, the primary resource is the ReplicaSet itself (whose spec includes the docker image) and the secondary resources are its pods.

* For a DaemonSet, the primary resource is the DaemonSet itself and the secondary resources are the pods plus the nodes they run on.
* For a PodSet, the primary resource is the PodSet itself and the secondary resources are its pods.

A PodSet, similar to a ReplicaSet, will be developed here. The PodSet owns its pods via controllerutil.SetControllerReference(), as sketched below.
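A sketch of that ownership wiring, as a helper in the controller package (assumes the hypothetical PodSet type above, with generated deepcopy functions, and the controller-runtime controllerutil package):

package controllers

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// newOwnedPod builds a pod for the given PodSet and marks the PodSet as its
// controller/owner, so deleting the PodSet garbage-collects the pod.
// PodSet is the hypothetical type sketched earlier (imported from the API package in a real project).
func newOwnedPod(podSet *PodSet, scheme *runtime.Scheme) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: podSet.Name + "-",
			Namespace:    podSet.Namespace,
			Labels:       map[string]string{"app": podSet.Name},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox", // example container
				Image:   "busybox", // example image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if err := controllerutil.SetControllerReference(podSet, pod, scheme); err != nil {
		return nil, err
	}
	return pod, nil
}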


Controller


A controller works like this pseudocode:


for {
  // observe the actual state of the world
  values := checkRealValues()
  // drive the system toward the desired (expected) state
  tuneSystem(values, expectedState)
}





Reconcile() is the major function of a controller. K8s without an Operator simply creates a new pod; the controller runs application-specific code in addition to pod creation.
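A compact Reconcile sketch for the hypothetical PodSet, following the same observe-and-tune pattern (assumes controller-runtime; the PodSet type comes from the earlier sketch and error handling is trimmed):

package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type PodSetReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

func (r *PodSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Observe: fetch the PodSet instance and the pods it currently owns.
	var podSet PodSet // hypothetical type from the earlier sketch (imported from the API package in a real project)
	if err := r.Get(ctx, req.NamespacedName, &podSet); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	var pods corev1.PodList
	if err := r.List(ctx, &pods, client.InNamespace(req.Namespace),
		client.MatchingLabels{"app": podSet.Name}); err != nil {
		return ctrl.Result{}, err
	}

	// Tune: application-specific logic goes here; the simplest version just
	// creates pods (owned via SetControllerReference, as sketched above)
	// until the count matches Spec.Replicas, or deletes surplus pods.
	if int32(len(pods.Items)) < podSet.Spec.Replicas {
		// pod, _ := newOwnedPod(&podSet, r.Scheme); r.Create(ctx, pod)
	}

	// Report: write the observed state back to the status subresource.
	podSet.Status.AvailableReplicas = int32(len(pods.Items))
	if err := r.Status().Update(ctx, &podSet); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}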


Code Generator
- deepcopy-gen 
- client-gen
- informer-gen
- lister-gen
- conversion-gen
- defaulter-gen
- register-gen
- set-gen

gen-go and CRD-code-generation are other tools that can generate code.

As noted above, informer-gen and lister-gen are also called the operator modules; they are the basis for building a custom controller.

The notification loop declares and describes the desired state; the controller figures out how to reach it.

Operator Examples


CoreOS provides sample operators as references, e.g. (1) the etcd operator (see its blog post) and (2) the Prometheus operator (see its blog post). K8s exports all of its internal metrics in the native Prometheus format. Here two third-party resources (TPRs) are defined: Prometheus and ServiceMonitor.


Demo controller

Sample controller 
netperf operator

There are many operators available; here are some lists:


https://github.com/operator-framework/awesome-operators

https://operatorhub.io/
https://commons.openshift.org/sig/operators.html
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps


Reference


https://coreos.com/operators/

https://coreos.com/blog/introducing-operator-framework
https://coreos.com/blog/introducing-operators.html
https://medium.com/devopslinks/writing-your-first-kubernetes-operator-8f3df4453234
https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps
https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english
https://linuxhint.com/kubernetes_operator/
https://blog.couchbase.com/kubernetes-operators-game-changer/


Reference for building controller

https://medium.com/@cloudark/kubernetes-custom-controllers-b6c7d0668fdf
https://github.com/piontec/k8s-demo-controller
https://github.com/kubernetes/sample-controller
https://blog.openshift.com/kubernetes-deep-dive-code-generation-customresources/
https://github.com/operator-framework/getting-started
https://www.linux.com/blog/learn/2018/12/demystifying-kubernetes-operators-operator-sdk-part-2

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
https://blog.openshift.com/make-a-kubernetes-operator-in-15-minutes-with-helm/
https://github.com/operator-framework/helm-app-operator-kit
https://www.tailored.cloud/kubernetes/write-a-kubernetes-controller-operator-sdk/

Questions


1. Which namespace are K8s operators deployed under?

2. Can an operator be developed using Python?

Service Catalog


Introduction

The Service Catalog offers:
- powerful abstractions to make services available in a Kubernetes cluster.
These services are:
- typically third-party managed cloud offerings, OR
- self-hosted services

Developers can focus on their applications without having to manage complex service deployments.

A Service Catalog is a list of (1) service classes (or service offerings), e.g. a database, a messaging queue, an API gateway, a log drain, or something else. Each service is associated with (2) service plans; a service plan is a variant of the service in terms of cost, size, etc.

Service Broker
Typically, a third-party service provider exposes a Service Broker on its own infrastructure.

Service and plan configuration options are described using JSON schema; automatic form building from the schema is also possible.

Service Broker use cases

1. A service broker handles interactions from modern cloud-native apps to legacy systems, for the valuable data stored in those legacy systems.

2. OSBAPI allows interactions with multiple cloud providers.

A service broker can also implement a web-based service dashboard.

A Service Broker is a piece of software (a server) that implements the Open Service Broker APIs. These APIs cover:
- listing the catalog
- provisioning
- deprovisioning
- binding a service instance to an application
- unbinding a service instance from an application

These are secure APIs. The service broker implements OAuth etc. on its interface with the application / container / platform; a service broker proxy can be used to support a custom authentication flow.

For time-consuming provisioning and deprovisioning, OSBAPI supports asynchronous operations.
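A minimal sketch of the first call a platform makes to a broker, fetching its catalog (the broker URL and the basic-auth credentials are placeholders; the X-Broker-API-Version header is required by the OSB API, and 2.14 is one published version):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// GET /v2/catalog lists the broker's service offerings and plans.
	req, err := http.NewRequest("GET", "https://broker.example.com/v2/catalog", nil) // placeholder broker URL
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Broker-API-Version", "2.14") // required OSB API version header
	req.SetBasicAuth("platform-user", "secret")    // placeholder credentials; brokers may use other schemes

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // JSON of the form {"services":[{"name":..., "plans":[...]}, ...]}
}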

The service broker first registers with K8s.
The K8s platform acts as client software that sends requests to the service broker. The first request may be to fetch the service catalog; then the K8s platform asks the broker to create a new service instance.

on-demand: a service instance is provisioned when requested.

multi-tenant: service instances are pre-provisioned.

The service instance is bound to a K8s application/pod using a Service Binding. Typically, a Service Binding also provides credentials and connectivity information (IP and port) that the application uses to access the service. These credentials are managed by K8s as secrets.

Here, the service catalog is a K8s resource with a corresponding (1) Service Catalog Controller and (2) Service Catalog API Gateway. The end user interacts with the Service Catalog API gateway. The gateway asks the service broker to list all the services (service offering/class + service plan). Then K8s updates the master service catalog (removing duplicates, adding, deleting, modifying entries) and responds to the application/platform client.

Comparison


Open Service Broker API             Kubernetes servicecatalog.k8s.io Kinds
Platform                            Kubernetes
Service Broker                      ClusterServiceBroker
Service                             ClusterServiceClass
Plan                                ClusterServicePlan
Service Instance                    ServiceInstance
Service Binding                     ServiceBinding

Reference