OpenStack Services and OpenStack Distributions


* NOVA Compute Service
- Main part of IaaS
ZUN Containers Service
QINLING Functions Service

Bare Metal

IRONIC Bare Metal Provisioning Service
- Preboot eXecution Environment PXE
- Intelligent Platform Management Interface IPMI
- can extend with vendor specific plugin
CYBORG Accelerators resource management


SWIFT Object store
- scalable, redundant storage system
* CINDER Block Storage
MANILA Shared filesystems


NEUTRON Networking
- old name Quantum
OCTAVIA Load balancer
- evolved from the Neutron LBaaS project

Shared Services

* KEYSTONE Identity service
- common authentication system
- Can integrate with LDAP
* GLANCE Image service
- It can use SWIFT
- Heat and Nova interface with Glance
BARBICAN Key management
KARBOR Application Data Protection as a Service
SEARCHLIGHT Indexing and Search
- Integrated with Horizon and CLI


* HEAT Orchestration
- OpenStack-native REST API 
- CloudFormation-compatible Query API
SENLIN Clustering service
MISTRAL Workflow service
ZAQAR Messaging Service
- messaging between components of SaaS and mobile apps
- old name Marconi
BLAZAR Resource reservation service
AODH Alarming Service
- rule based
- rules defined against metric data
- rules defined against event data
- metric and event data collected by Ceilometer or Gnocchi

Workload Provisioning

MAGNUM Container Orchestration Engine Provisioning
- K8s
- Apache Mesos
- Docker Swarm
SAHARA Big Data Processing Framework Provisioning
- Elastic MapReduce
- To provision Hadoop cluster
- old name Savanna
TROVE Database as a Service
- Relational DB
- Non-relational DB
- old name RedDwarf

Application Lifecycle

MASAKARI Instances High Availability Service
MURANO Application Catalog
SOLUM Software Development Lifecycle Automation
FREEZER Backup, Restore, and Disaster Recovery

API Proxies

EC2API EC2 API proxy

Web Frontend

* HORIZON Dashboard
- native OpenStack API 
- EC2 compatibility API.

- Single Point of Contact for billing system

VITRAGE Root Cause Analysis
- for organizing, analyzing and expanding OpenStack alarms & events, yielding insights regarding the root cause of problems and deducing their existence before they are directly detected

Services marked with * are main services


OpenStack Distributions

  • Bright Computing
  • Canonical (Ubuntu)
  • HPE (which was spin-merged to Micro Focus/Suse)
  • IBM
  • Mirantis
  • Oracle OpenStack for Oracle Linux, or O3L
  • Oracle OpenStack for Oracle Solaris
  • Red Hat
  • Sardina Systems
  • Stratoscale
  • SUSE
  • VMware Integrated OpenStack (VIO)

Service Function Chaining

Network monitoring/measurement

  • sFlow RFC 3176
  • Cisco's NetFlow 
  • IPFIX Protocol RFC 7011

Cloud native technologies include 

  • containers, 
  • service meshes, 
  • microservices, 
  • immutable infrastructure and 
  • declarative APIs 

that allow deployment in public, private and hybrid cloud environments through loosely coupled and automated systems

Various planes

  • infrastructure plane,
  • virtual infrastructure plane,
  • service plane, and
  • user plane

SFC Path identification

* NSH Network Service Header
* Ethernet MAC chaining

NSH is a new encapsulation protocol, defined in RFC 8300.

Then service function forwarders (SFFs) will create the service function paths (SFPs) in the form of an overlay by forwarding packets based on their NSH header.

The NSH header is composed of 

  • service path identification, 
  • transport independent per-packet service metadata and 
  • optional variable type-length-value (TLV) metadata.
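The header fields above can be sketched in code. This is a minimal parser for the fixed 8 bytes (base header + service path header) per the RFC 8300 bit layout; the example field values are made up.

```python
import struct

def parse_nsh(header: bytes) -> dict:
    """Parse the first 8 bytes of an NSH header (RFC 8300):
    the base header word and the service path header word."""
    base, sp = struct.unpack("!II", header[:8])
    return {
        "version":    base >> 30,
        "oam":        (base >> 29) & 0x1,
        "ttl":        (base >> 22) & 0x3F,
        "length":     (base >> 16) & 0x3F,  # total length in 4-byte words
        "md_type":    (base >> 8) & 0xF,
        "next_proto": base & 0xFF,
        "spi":        sp >> 8,              # service path identifier (24 bits)
        "si":         sp & 0xFF,            # service index (8 bits)
    }

# example: TTL 63, length 2 words, MD type 2, next protocol 3 (Ethernet),
# SPI 0x123456, SI 255 -- values chosen for illustration only
hdr = struct.pack("!II", 0x0FC20203, 0x123456FF)
fields = parse_nsh(hdr)
```

The SPI selects the service function path; SFFs decrement the SI at each hop.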

physical probe or virtual probe functionality deployed as 

  • switches,
  • classifiers, 
  • SFs, or 
  • SFFs.

The term probe designates any network node capable of reading and writing an NSH header.

Middleboxes are also interchangeably called 

  • services, 
  • inline services, 
  • appliances, 
  • network functions (NFs), 
  • virtual NFs (vNFs), or 
  • service functions (SFs)

Example SFs include 

  • firewalls, 
  • content filters, 
  • virus scanners (VS), 
  • intrusion detection systems (IDS), 
  • deep packet inspection (DPI), 
  • network address translation (NAT), 
  • content caches, 
  • load-balancers, 
  • wide area network (WAN) accelerators,
  • multimedia transcoders, 
  • multiservice proxies, 
  • application acceleration,
  • Lawful Intercept (LI),
  • HTTP header enrichment functions
  • TCP Optimizer
  • logging/metering/charging/advanced charging applications,  
  • or any other function that requires processing of packets 

ETSI NFV uses the term "network function forwarding graph" (NF-FG) 
IETF uses the term "service function chaining" (SFC) 

Fundamentally SFC is the ability to cause network packet flows to route through a network via a path other than the one that would be chosen by routing table lookups on the packet’s destination IP address.
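A toy model of that idea, assuming a classifier that selects a chain by destination port (the chain contents and addresses are invented for illustration):

```python
def apply_chain(packet: dict, chains: dict) -> list:
    """Return the hop sequence a packet takes: the classifier selects a
    chain by destination port, then the packet visits the SFs in order
    before reaching its routed destination."""
    chain = chains.get(packet["dst_port"], [])  # classification step
    return chain + [packet["dst_ip"]]

# hypothetical service function path for web traffic
chains = {80: ["firewall", "ids", "load-balancer"]}

hops = apply_chain({"dst_ip": "10.0.0.5", "dst_port": 80}, chains)
# web traffic detours through the chain; other traffic is routed normally
```

Traffic that matches no classifier goes straight to the destination chosen by the routing table, which is exactly the behavior SFC overrides.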

VNF Forwarding Graph (VNFFG)
The combination of 

  • VNFs, 
  • SFC, and 
  • the classification of traffic to flow through them 

is described as the VNF Forwarding Graph (VNFFG). 

It is described as a YAML file, per the TOSCA VNF Forwarding Graph Descriptor (VNFFGD): VNFFGD = Forwarding Path + VNFFG


Each node is really a logical port, which is defined in the path as a Connection Point (CP) belonging to a specific VNFD. 

Tacker = OpenStack service addressing the use cases of 

  • NFV Orchestration (NFVO) and 
  • VNF Management (VNFM), with OpenStack (Nova, Neutron, Cinder) as the Virtualized Infrastructure Manager (VIM)

using a standards-based architecture

The NFVO renders VNF Forwarding Graphs using an SDN controller or an SFC API

Tacker allows for managing VNFs

Example CLI calls:

To create VNFFG

openstack vnf descriptor create --vnfd-file tosca-vnffg-vnfd1.yaml VNFD1
openstack vnf create --vnfd-name VNFD1 VNF1

openstack vnf descriptor create --vnfd-file tosca-vnffg-vnfd2.yaml VNFD2

openstack vnf create --vnfd-name VNFD2 VNF2

To create VNFFG SFC (where testVNF1, and testVNF2 are VNF instances):

tacker vnffg-create --name mychain --chain testVNF2,testVNF1 --symmetrical True

To create VNFFG SFC by abstract VNF types (ex. “firewall”, “nat”): 

tacker vnffg-create --name mychain --chain firewall,nat --abstract-types

To create SFC Classifier for a VNFFG:

tacker vnffg-classifier-create --name myclass --chain mychain --match tcp_dest=80,ip_proto=6

vnffg and vnffg_classifier are schemas; each can be represented as a dictionary. 
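Since the classifier can be represented as a dictionary, a sketch follows. The field names mirror the tacker CLI example above (tcp_dest, ip_proto) but are assumptions for illustration, not the exact Tacker schema:

```python
# hypothetical dictionary form of a VNFFG classifier
vnffg_classifier = {
    "name": "myclass",
    "chain": "mychain",
    "match": {
        "ip_proto": 6,   # TCP
        "tcp_dest": 80,  # destination port
    },
    "tenant_id": None,   # could scope the classifier to a tenant
}

def matches(classifier: dict, packet: dict) -> bool:
    """True when every match criterion equals the packet's field."""
    return all(packet.get(k) == v for k, v in classifier["match"].items())

hit = matches(vnffg_classifier, {"ip_proto": 6, "tcp_dest": 80})
```

Packets that satisfy every match key are steered into the named chain.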

For the classifier, one can use the tenant_id attribute to implement tenant-specific classification. 


K8S Tools

K8S Native Tools


Minikube has many addons:

minikube addons list

--insecure-registry flag for a private Docker registry.
registry-creds addon to use GCR, ECR, and a private Docker registry.

Advanced topics:


kops: to manage production-grade k8s clusters using a CLI, on AWS and other platforms. It creates a configuration file that can be used to create the actual cluster. It is like kubectl for clusters.


Master node needs: docker, kubeadm, kubelet, kubectl. Worker node needs: docker, kubeadm, kubelet.
Master: kubeadm init prints a join token, to be used at the worker node with the command kubeadm join.


Kubernetes Dashboard

1. Manage k8s apps
2. Troubleshoot issues with k8s apps
3. Manage the entire k8s cluster

It is an addon for Minikube and an application for a real K8s cluster. It needs kubectl proxy.


Federation

1. Sync resources across clusters
2. Cross-cluster discovery (DNS and load balancer)

With federated clusters we can have hybrid cloud and multi-vendor cloud.


Kompose converts Docker Compose files to K8s objects like deployments and services.

Docker -> Compose
K8s -> Replication Controller = deployments + replica sets
Rancher -> Cattle
Stack Engine -> applications and deployments

Github link:


Helm: installation and management of K8s apps. It is like a package manager. 
chart = pre-configured k8s resources
Helm (client at local host) -> Tiller (server at K8s cluster) 

chart = 
1. Chart.yaml
2. templates/
3. values.yaml
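The chart layout above can be scaffolded with a few lines of Python; the chart name and file contents are made-up minimal examples, following Helm's documented Chart.yaml/values.yaml/templates structure:

```python
import tempfile
from pathlib import Path

def scaffold_chart(root: Path, name: str) -> Path:
    """Create a minimal Helm-style chart skeleton:
    <name>/Chart.yaml, <name>/values.yaml, <name>/templates/."""
    chart = root / name
    (chart / "templates").mkdir(parents=True)
    (chart / "Chart.yaml").write_text(
        f"apiVersion: v1\nname: {name}\nversion: 0.1.0\n")
    (chart / "values.yaml").write_text("replicaCount: 1\n")
    return chart

chart = scaffold_chart(Path(tempfile.mkdtemp()), "demo")
```

`helm create` does the same thing with a richer starter template.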

The kubernetes/charts repository on GitHub has a list of important projects

Helm charts:
Stable charts:

Draft, Gitkube, Helm, Ksonnet, Metaparticle and Skaffold are some of the tools that help developers build and deploy their apps on Kubernetes


Three namespaces always exist:
1. default
2. kube-public
3. kube-system

Auto Complete


Knative helps developers build, deploy, and manage modern serverless workloads on Kubernetes.

CNCF Tools

gRPC is positioned to replace SOAP and REST; the payload is protobuf. 

Consul and etcd are for service discovery. CoreDNS is a CNCF project that can replace kube-dns.

Service mesh handles communication among microservices and the network intricacies. Linkerd is a transparent network proxy; Envoy is a server with a small footprint. Both support gRPC and HTTP/2. 

CNI is a plugin-based networking solution for containers. Calico and Flannel are the most popular networking providers. 

GlusterFS and Ceph are for storage. Rook is a file, object, and block storage system. Rook runs as an operator and creates a Rook cluster using PVs.

rkt and containerd are container runtimes

Prometheus is the CNCF project for monitoring, and there are many similar vendor-specific projects. Add metrics to the application and add an exporter for Prometheus to scrape. PromQL is its query language. Its Alertmanager has many good features and can integrate with PagerDuty. Prometheus is the backend; the frontend can be Grafana. 
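What an exporter actually serves is plain text in Prometheus's exposition format. A minimal sketch of rendering one counter; the metric name and labels are made-up examples:

```python
def render_counter(name: str, help_text: str, value: int, labels: dict) -> str:
    """Render one counter in Prometheus's text exposition format,
    as an exporter would serve it at /metrics."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} counter\n"
            f"{name}{{{label_str}}} {value}\n")

text = render_counter("http_requests_total", "Total HTTP requests.",
                      1027, {"method": "get", "code": "200"})
```

In practice one would use a client library (e.g. prometheus_client) rather than formatting by hand.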

Logging : Beats / Elastic Stack, Graylog, Fluentd. Fluentd is the CNCF logging project.

Tracing : Jaeger, OpenTracing, Zipkin. Application instrumentation is exposed using the OpenTracing API to the Jaeger agent. Jaeger has a client, agent, collector, and UI

Security : (1) image security and (2) key management. Notary and TUF, by CNCF, are for secure images. Vault and Confidant store sensitive image data securely and encrypt it at rest. TUF is a framework for software update systems; Notary is an implementation of the TUF specification. The Aqua Security product suite is a complete security platform. 

Kubeless and Fission provide equivalents to functions-as-a-service, running within Kubernetes


Kelsey Hightower:
Kubernetes Docs:
Kubernetes Slack:
CNCF Meetups:
The agile admin:

Identity and Access Management


1. Active Directory : Windows solution
2. LDAP Directory

Legal frameworks to safeguard personal information

1. Safe Harbor (US)
2. GDPR (Europe) 


1. penetration tests
2. network scans
3. bug bounty 


1. Open Web Application Security Project (OWASP) for Web Application Security
2. SANS Institute

Other initiatives

1. Health Insurance Portability and Accountability Act (HIPAA) to protect patient data
2. Gramm-Leach-Bliley Act (GLBA) for consumer financial information. The Federal Financial Institutions Examination Council (FFIEC) provides guidelines for it
3. National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity 
4. Family Educational Rights and Privacy Act (FERPA) to protect the privacy of student education records.
5. G-Cloud by UK government for cloud services. 
6. Federal Information Security Management Act (FISMA) defines a comprehensive framework to protect government information

Open Standards

1. Security Assertion Markup Language (SAML) for web browser Single Sign-On (SSO) using secure tokens. XML based protocol. No password needed. 
2. OpenID : Decentralized authentication protocol by 3rd party
3. OAuth. OpenID Connect is built on OAuth 2.0. REST API using JSON
4. System for Cross-Domain Identity Management (SCIM) to exchange user identity information. REST API using JSON or XML
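OAuth 2.0 deployments often issue bearer tokens as JWTs (this is common practice, not stated by the notes above). A sketch of decoding the unverified payload with the standard library; the sample claims are invented, and signature verification is deliberately omitted:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT (header.payload.signature).
    NOTE: no signature verification -- illustration only."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64(obj) -> str:
    """Base64url-encode a JSON object, padding stripped (JWT convention)."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# build a sample unsigned token just for the demo
token = ".".join([b64({"alg": "none"}), b64({"sub": "alice", "scope": "read"}), ""])
claims = jwt_payload(token)
```

Real validation must check the signature and claims such as exp and aud.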

KubeCon Seattle 2018 - Announcements
(via CNCF)

Kubecon seattle 2018 recap





Ansible

* Ansible needs Python, OpenSSH and a few libraries. 

* Ansible cannot be installed on Windows as the control machine; it runs only on Unix-like systems. It can also control/configure Windows machines, using the many modules that start with win_

* Ansible is agent less

* Ansible modules communicate using JSON

* Ansible uses (1) YAML and (2) Jinja templates

Mode of operations

1. Linear
2. Rolling deployments
3. Serial
4. Free: run as fast as you can

Inventory = a set of target hosts. It is described in INI or YAML file format, located at /etc/ansible/hosts by default. 

A custom dynamic inventory script can pull data from different systems. Each cloud provider has its own dynamic inventory script. 
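A custom dynamic inventory script is just an executable that Ansible calls with --list and that prints JSON groups on stdout. A minimal sketch; the host names and the "web" group are made-up examples:

```python
#!/usr/bin/env python
import json

def build_inventory() -> dict:
    """Return Ansible's dynamic-inventory JSON shape:
    group -> hosts/vars, plus _meta.hostvars for per-host variables."""
    return {
        "web": {
            "hosts": ["web1.example.com", "web2.example.com"],
            "vars": {"ansible_user": "deploy"},
        },
        "_meta": {"hostvars": {"web1.example.com": {"http_port": 80}}},
    }

if __name__ == "__main__":
    print(json.dumps(build_inventory()))
```

Use it with: ansible-playbook -i ./inventory.py site.yml (the script must be executable).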

The inventory is not tied to a set of Ansible instructions. It is a grouped set of hosts, in [group] and [group:subgroup] sections. Groups can be based on location, purpose (e.g. web, DB), or OS. A host can be accessed within a playbook by array index, e.g. the first host in the group named "group" is "{{ groups['group'][0] }}" 

Operator ! : group:!subgroup excludes the subgroup. 
Operator & : intersection
The : is required after each group name, regardless of operator

Inventory variables are key-value pairs. The same name can exist at multiple levels : host, group, group of groups, all groups. 

Keyword : ansible_ssh_host, ansible_connection, ansible_user, ansible_password

No need to define localhost


One can generate a text file from a template, using the variable values defined for that host. 

One can have a for loop inside a Jinja2 template:

{% for package in packages %}{{package}}{% if not loop.last %}, {% endif %}{% endfor %}
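The loop above renders the packages as a comma-separated list with no trailing comma (loop.last suppresses it). For comparison, the plain-Python equivalent, with made-up package names:

```python
# example variable values, as they might be defined in the inventory
packages = ["httpd", "mariadb-server", "php"]

# str.join gives the same "a, b, c" output the Jinja2 loop produces
rendered = ", ".join(packages)
```

Jinja2 also has a join filter ({{ packages | join(', ') }}) that avoids the explicit loop entirely.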

To get the complete value structure inside a nested dictionary, we can use "dict name".iteritems() (Python 2; in Python 3 use .items())
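Walking a nested dictionary of host variables looks like this in Python 3 (dict.iteritems() is Python 2 only); the host data is a made-up example:

```python
# hypothetical nested dictionary of per-host variables
hostvars = {
    "web1": {"ansible_ssh_host": "10.0.0.11", "http_port": 80},
    "db1":  {"ansible_ssh_host": "10.0.0.21"},
}

# flatten the nested structure into (host, key, value) tuples
pairs = []
for host, host_vars in hostvars.items():   # .iteritems() in Python 2
    for key, value in host_vars.items():
        pairs.append((host, key, value))
```

The same traversal in a Jinja2 template would use {% for host, vars in hostvars.items() %}.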


A descriptive desired state expressed in YAML. 
Task data
Task control : looping, conditionals, privilege escalation (-b option)
--start-at-task option to begin the run at a given task


Code that a task uses to perform work. It can be written in any language : Python, Ruby, Perl, Bash etc. 
Modules are placed at the /usr/share/ansible path


A YAML-formatted file that contains plays. 
commands : 

ansible-playbook "yaml file"
ansible-playbook "yaml file" -i "inventory file"

It maps a group of hosts to a set of roles. A role is a set of Ansible tasks. 

We can have a group of Python modules installed with the pip command in a given virtual environment using an Ansible script. 

We can use handler and notify. 

-v / -vv options make ansible-playbook verbose. 
-e to pass extra variables. Variables can also be defined in the inventory file and YAML files. Pass one -e option per variable. 
--check option performs a dry run; like a compile check, it validates without changing anything
--ask-vault-pass to enter the vault password

Some useful keywords
gather_facts : set to False if Python is missing on the target

All Keywords are here :

1. variable sets
2. Sequences
3. Retries on failures

Playbooks are placed at the /usr/share/ansible/library path

Playbook format

- hosts: all
  connection: local
  tasks:
    - name: Do Something
      some_module:            # hypothetical module name
        parameter: value
        parameter2: '{{ variable }}'

Tags can be associated with hosts or tasks
and can be passed as --tags "tag name" OR --skip-tags "tag name"

Variables can be inside the inventory file OR outside, in folders like host_vars and group_vars

An alternative to a playbook, for ad-hoc tasks, is the "ansible" executable, with -m for module name and -a for arguments. 

ansible-doc copy
ansible all -m copy

ansible-doc command
ansible all -m command

Forks : maximum number of concurrent hosts


Galaxy is a directory of roles. A role is a grouping of tasks. Each "role" folder should have a "tasks" subfolder that contains a main.yml file. Galaxy is also a public repository of roles, by Red Hat.

ansible-galaxy login
ansible-galaxy import "user name" "role name" 
ansible-galaxy search "name"
ansible-galaxy install "user name.role name" -p "path"


ansible-vault encrypt vault
ansible-vault edit vault

Network Management

use of the ipaddr filter
modules: set_fact


Popular plugin types: 

1. callbacks: for hooking into logging or displaying Ansible actions.
2. connection: for communication methods 
3. filter: for manipulating data within templates.

Task Automation
1. Ansible Tower (AWX project) : commercial product by Red Hat. REST API web service
2. Semaphore : open source, written in Go