Identity and Access Management


1. Active Directory : Windows solution
2. LDAP Directory

Legal frameworks to safeguard personal information

1. Safe Harbor (US)
2. GDPR (Europe)


Security assessment methods

1. Penetration tests
2. Network scans
3. Bug bounty programs


1. Open Web Application Security Project (OWASP) for Web Application Security
2. SANS Institute

Other initiatives

1. Health Insurance Portability and Accountability Act (HIPAA) to protect patient data
2. Gramm-Leach-Bliley Act (GLBA) for consumer financial information. The Federal Financial Institutions Examination Council (FFIEC) provides guidelines for it.
3. National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity
4. Family Educational Rights and Privacy Act (FERPA) to protect the privacy of student education records.
5. G-Cloud by the UK government for cloud services.
6. Federal Information Security Management Act (FISMA) defines a comprehensive framework to protect government information.

Open Standards

1. Security Assertion Markup Language (SAML) for web browser Single Sign-On (SSO) using secure tokens. XML-based protocol; no password is sent to the service.
2. OpenID: decentralized authentication protocol via a third party
3. OAuth: authorization framework. OpenID Connect is built on OAuth 2.0. REST API using JSON.
4. System for Cross-Domain Identity Management (SCIM) to exchange user identity information. REST API using JSON or XML.
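As a sketch of what SCIM exchanges, here is a minimal, hypothetical SCIM 2.0 User resource (the schema URN comes from RFC 7643; the attribute values are made up for illustration):

```shell
# Write a minimal SCIM 2.0 User resource; the values (jdoe, John Doe)
# are hypothetical examples, not from any real system.
cat > scim-user.json <<'EOF'
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "jdoe",
  "name": {"givenName": "John", "familyName": "Doe"},
  "active": true
}
EOF
# Check that the resource is well-formed JSON.
python3 -m json.tool scim-user.json
```

A SCIM server would accept such a document via POST to its /Users endpoint.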

KubeCon Seattle 2018 - Announcements






Ansible

* Ansible needs Python, OpenSSH, and a few libraries.

* Ansible cannot be installed on Windows as the control machine; it runs only on Unix-like systems. It can still control and configure Windows machines using the many modules whose names start with win_*.

* Ansible is agentless

* Ansible modules exchange their results as JSON

* Ansible uses (1) YAML and (2) Jinja2 templates

Modes of operation

1. linear (default): each task finishes on all hosts before the next begins
2. serial: rolling deployments in batches of hosts
3. free: each host runs as fast as it can
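A minimal sketch of how these modes appear in a play, written to a file; the group name "web", the service name, and the batch size are hypothetical:

```shell
# Hypothetical play showing rolling (serial) execution; switching
# "strategy" to free would let each host run independently.
cat > rolling.yml <<'EOF'
---
- hosts: web
  serial: 2            # rolling deployment: 2 hosts per batch
  strategy: linear     # the default; "free" runs hosts independently
  tasks:
    - name: Restart the application service
      service:
        name: myapp    # hypothetical service name
        state: restarted
EOF
grep -q 'serial: 2' rolling.yml && echo "play written"
```

Running it would require ansible-playbook and a real "web" group in the inventory.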

Inventory = a set of target hosts. It is described in INI or YAML file format, located by default at /etc/ansible/hosts.

A custom dynamic inventory script can pull host data from different systems. Each cloud provider also ships its own dynamic inventory script.

The inventory is not tied to a set of Ansible instructions. It is a grouped set of hosts using [group] and [group:subgroup]. Groups can be based on location, purpose (e.g. web, DB), or OS. A host can be accessed within a playbook by array index, e.g. the first host in the group named "group" is "{{ groups['group'][0] }}".

Operator ! : use group:!subgroup to exclude a subgroup.
Operator & : intersection of groups.
The : is required after each group name, regardless of operator.

Inventory variables are key-value pairs. The same variable name can be defined at multiple levels: host, group, group of groups, all groups.

Keywords: ansible_ssh_host, ansible_connection, ansible_user, ansible_password

No need to define localhost; Ansible provides it implicitly.
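The grouping and variable ideas above, sketched as a small INI inventory with hypothetical host names:

```shell
# Hypothetical inventory file; normally this lives at /etc/ansible/hosts.
cat > hosts.ini <<'EOF'
[web]
web1.example.com ansible_user=deploy
web2.example.com

[db]
db1.example.com ansible_connection=ssh

[prod:children]
web
db
EOF
# Example patterns using the operators above (require ansible installed):
#   ansible -i hosts.ini 'web:&prod' --list-hosts   # intersection
#   ansible -i hosts.ini 'prod:!db'  --list-hosts   # exclude subgroup
grep -q '\[prod:children\]' hosts.ini && echo "inventory written"
```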


One can generate a text file from a template, substituting the variable values defined for that host.

One can have a for loop inside a Jinja2 template using

{% for package in packages %}{{package}}{% if not loop.last %}, {% endif %}{% endfor %}

To get the complete set of values inside a nested dictionary, use "dict name".items() (iteritems() exists only in Python 2).
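The loop above, placed in a complete (hypothetical) template file that would render a comma-separated package list from a "packages" list variable:

```shell
# Write the Jinja2 template to a file; "packages" is the hypothetical
# list variable supplied at render time (e.g. via the template module).
cat > packages.j2 <<'EOF'
Installed packages: {% for package in packages %}{{ package }}{% if not loop.last %}, {% endif %}{% endfor %}
EOF
grep -q 'loop.last' packages.j2 && echo "template written"
```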


Tasks

Descriptive desired state expressed in YAML.
Task data
Task control: looping, conditionals, privilege escalation (-b option)
Keyword: start_at_task (command line: --start-at-task)


Modules

Code that a task uses to perform its work. It can be written in any language: Python, Ruby, Perl, Bash, etc.
Modules are placed at the /usr/share/ansible path.


Playbooks

A YAML-formatted file that contains plays.
Commands:

ansible-playbook "yaml file"
ansible-playbook "yaml file" -i "inventory file"

A playbook maps a group of hosts to a set of roles. A role is a set of Ansible tasks.

We can install a group of Python modules with the pip command in a given virtual environment using an Ansible script.

We can use handlers and notify.

The -v / -vv options make ansible-playbook verbose.
-e passes extra variables. A variable can also be defined in the inventory file or a YAML file. Pass one -e option per variable value.
The --check option is a dry run: it reports what would change without changing anything, somewhat like compiling.
--ask-vault-pass prompts for the vault password.

Some useful keywords
gather_facts: if Python is missing on the target, set this to False

Keywords cover, among other things:

1. variable sets
2. sequences
3. retries on failures

Playbooks are placed at the /usr/share/ansible/library path.

Playbook format (some_module and the parameter names are placeholders)

- hosts: all
  connection: local
  tasks:
    - name: Do Something
      some_module:
        parameter: value
        other_parameter: '{{ variable }}'

Tags can be associated with plays or tasks
and can be passed as --tags "tag name" OR --skip-tags "tag name"
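A sketch of tagged tasks in a play (the module arguments and tag names are hypothetical), with the matching command lines as comments:

```shell
cat > tagged.yml <<'EOF'
---
- hosts: all
  tasks:
    - name: Install packages
      package:
        name: nginx      # hypothetical package
        state: present
      tags: [install]
    - name: Print a message
      debug:
        msg: "configuration step"
      tags: [config]
EOF
# Run only / skip the tagged tasks (requires ansible):
#   ansible-playbook tagged.yml --tags "install"
#   ansible-playbook tagged.yml --skip-tags "config"
grep -q 'tags:' tagged.yml && echo "playbook written"
```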

Variables can live inside the inventory file OR outside it, in folders like host_vars and group_vars.

An alternative to a playbook for an ad-hoc task is the "ansible" executable, with -m for the module name and -a for its arguments.

ansible-doc copy
ansible all -m copy -a "src=... dest=..."

ansible-doc command
ansible all -m command -a "uptime"

Forks: the maximum number of concurrent hosts.


It is a directory of roles. A role is a grouping of tasks. Each role folder should have a "tasks" subfolder containing a main.yml file. Galaxy is also a public repository of roles by Red Hat.

ansible-galaxy login
ansible-galaxy import "user name" "role name" 
ansible-galaxy search "name"
ansible-galaxy install "user name.role name" -p "path"


Ansible Vault

ansible-vault encrypt "file"
ansible-vault edit "file"

Network Management

Use of the ipaddr filter
Modules: set_fact


Plugin types; popular ones:

1. callback: for hooking into logging or displaying Ansible actions
2. connection: for communication methods
3. filter: for manipulating data within templates

Task Automation
1. Ansible Tower (AWX project): commercial product by Red Hat; a REST API web service
2. Semaphore: open source, written in Go


Ansible Modules

Ansible has many modules. Here is a list of some that drew my attention.

Software installation Related

package – Generic OS package manager
yum – Manages packages with the yum package manager
yum_repository – Add or remove YUM repositories
apk – Manages apk packages
apt – Manages apt-packages
apt_repository – Add and remove APT repositories
apt_rpm – apt_rpm package manager
npm – Manage node.js packages with npm

Other Misc. 

cli_command – Run a cli command on cli-based network devices
command – Executes a command on a remote node
copy – Copies files to remote locations
cron – Manage cron.d and crontab entries
debug – Print statements during execution
fetch – Fetches a file from remote nodes
file – Sets attributes of files
filesystem – Makes a filesystem
find – Return a list of files based on specific criteria
git – Deploy software (or files) from git checkouts
hostname – Manage hostname
ip_netns – Manage network namespaces
iptables – Modify the systems iptables
lineinfile – Manage lines in text files
ping - Try to connect to host, verify a usable python and return pong on success
python_requirements_facts – Show python path and assert dependency versions
reboot – Reboot a machine
service – Manage services
shell – Execute commands in nodes.
systemd - Manage Services
uri – Interacts with webservices

Specific modules for particular technology

os_* OpenStack-related modules
docker_* Docker-related modules
jenkins_* Jenkins-related modules
k8s_* Kubernetes-related modules
net_* Network-related modules
win_* Windows host-related modules



Git

Useful commands

1. To combine "git add" and "git commit -m", use "git commit -am" (it stages only tracked files)
2. Never use forced push: "git push --force"
3. Run "git pull origin master" daily
4. ".gitignore" file patterns
4.1 name : matches a file name OR directory name
4.2 name/ : matches only a directory name
4.3 **/name : matches a name in any subfolder
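The three pattern forms, sketched in a sample ignore file (the names secrets.txt, build, and logs are hypothetical):

```shell
# Write a sample .gitignore illustrating the three pattern forms;
# comments sit on their own lines, as trailing text is part of a pattern.
cat > .gitignore-sample <<'EOF'
# matches a file or directory named secrets.txt
secrets.txt
# matches only a directory named build
build/
# matches a logs entry in any subfolder
**/logs
EOF
grep -q 'build/' .gitignore-sample && echo "sample written"
```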


1. core.autocrlf
git config core.autocrlf "value"
1.1 "input" if the repository is used on Unix
1.2 "true" if the repository is used on Windows
1.3 "false" if development and the repository share a common platform
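What the resulting configuration fragment looks like, sketched as a standalone file (on a real machine the setting would live in ~/.gitconfig or the repo's .git/config):

```shell
# Sketch of the relevant gitconfig fragment; pick the value per the
# rules above (input on Unix, true on Windows, false if uniform).
cat > gitconfig-sample <<'EOF'
[core]
	autocrlf = input
EOF
# Equivalent command (run inside a repo, or with --global):
#   git config core.autocrlf input
grep -q 'autocrlf' gitconfig-sample && echo "config written"
```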

2. Editor
git config --global core.editor "path of the editor" 

3. git commit message template
git config commit.template "path"
The message is visible at output of "git log" command  

Branch name:
changeType=[bug | feat]/[bug id | feature id]/short-name

Central Repository

1. GitLab : Depends on curl, ca-certificates, openssh-server
1.1 code repository
1.2 issue trackers
1.3 wikis
1.4 CI/CD tooling / integration (GitLab runner) 
1.5 docker container registry.
1.6 code snippets store area (gist feature)

Issues are added to the Backlog by default. They are grouped into milestones.

A commit message should contain a keyword like 1. Fix 2. Fixes 3. Close 4. Closes followed by an issue number, so GitLab will automatically update the status of the issue pointed to by that number.

In a merge request we can add "/spend 1h"

2. git init --bare

3. Github : REST API

4. Bitbucket: integration with Jira and Confluence

Different Branches

1. long running branch : master, develop
2. feature branch

3. hotfix branch

Branching Strategy 

1. TBD Trunk Based Development

2. GitFlow: a develop branch is created from the master branch, and multiple feature branches from the develop branch. After all feature branches are merged back into develop, a new release branch is created. All bug fixes happen in the release branch. Finally, the release branch is merged into master. A hotfix branch from master handles emergencies. The develop and master branches are both controlled by maintainers.

CI Tools

1. Jenkins (CloudBees for cloud hosted) 

2. GitLab: using "GitLab Runners"

Installation:
Add the file ".gitlab-ci.yml" to the repository root.
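A minimal sketch of such a file, with one stage and one hypothetical job:

```shell
# Hypothetical minimal .gitlab-ci.yml: one stage, one job, run by a
# GitLab Runner on every push.
cat > .gitlab-ci.yml <<'EOF'
stages:
  - test

unit-tests:            # hypothetical job name
  stage: test
  script:
    - echo "running tests"
EOF
grep -q 'stage: test' .gitlab-ci.yml && echo "pipeline file written"
```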

3. TFS Team Foundation Server (Microsoft), integrated with Visual Studio



Components of DevSecOps

  1. Code Analysis
  2. Change Management
  3. Compliance Monitoring
  4. Threat Investigation
  5. Vulnerability Assessment 
  6. Security Training

Ops-side Automation

1. Vulnerability scanning
2. Network security
3. Automated patching compliance
4. Encryption

Type of Tools

1. Static Application Security Testing (SAST)
1.1 Fortify
1.2 AppScan
1.3 Checkmarx
1.4 SonarQube
1.5 Burp
1.6 Nessus
1.7 MobSF
1.8 Crucible (code auditing software)
Open-source tools:
1.9 FindSecBugs
1.10 Brakeman
1.11 PMD

2. Dynamic Application Security Testing (DAST)
2.1 WebInspect
2.2 Burp
2.3 AppSpider
2.4 sqlmap
2.5 OWASP ZAP

* Vulnerability Testing
1. Qualys
2. Nessus
3. OpenVAS (free)
4. NACL (cloud security check)

3. Interactive Application Security Testing (IAST)
It uses instrumentation, like a performance monitoring tool.
It works at the JRE or .NET runtime level.
3.1 Contrast
3.2 Seeker

* Continuous Monitoring
1. Recon-ng (for Python)
2. Contrast RASP

* OWASP Glue Tool Project: a Docker container that keeps all the tools together.

These tools should be part of a single CI/CD pipeline. FindSecBugs is available as part of Java IDEs.


K8S meetup

Last Saturday, on 19th Jan 2019, I attended an interesting meetup event named "Joint Meetup with Kubernetes & OpenShift + CloudNativeCon Meetup Group of Bangalore", jointly hosted by the (1) Bangalore CNCF Meetup, (2) Docker Bangalore, and (3) Kubernetes & Openshift India Community meetup groups.

Krishna Kumar (Huawei) shared his experience from the recent KubeCon event. He showed the book "The Illustrated Children's Guide to Kubernetes" (the Phippy story) and asked about it; surprisingly, very few were aware of it. The book was created just to spice up KubeCon events. He shared some numbers about the event:

* 8000 people attended in person
* 2000 people attended over live stream
* 50+ announcements; he covered the major ones in his slide deck
* 250+ exhibitors
* 47 hands-on sessions, known as pre-conf and post-conf sessions
* Many people wrote excellent recaps of KubeCon on their blogs; Krishna shared the top 15 recaps in his slide deck

He noticed plenty of job postings; all organisations are hiring k8s experts.

Everyone says K8s is complex; however, OpenShift is making it easy.

At CNCF, every software project belongs to one of three categories: (1) Graduated, (2) Incubating, and (3) Sandbox.

He talked about some of the sessions he could attend. (1) Operator Framework: adding domain knowledge about how to bring up a specific application using a custom controller in k8s. (2) The Helm session ran 2.5 hours, of which 1.5 hours were just questions and answers! (3) Kustomize: one YAML referring to multiple YAML files. (4) Application Special Interest Group (App SIG). (5) CNAB (Cloud Native Application Bundles).

He also discussed k8s application deployment: the several options available and a comparison among a few of them.

The "k8s day india" event is scheduled at Infosys on March 23. 



Rajkavitha Kodhandapani talked about "Special Interest Group - Docs". Everyone wants to use k8s and develop tools for k8s; however, very few contribute to k8s documentation. She motivated everyone to contribute.


Abhishek Kumar discussed Helm, the Airflow scheduler, the Flower service, etc. A Helm chart bundles multiple K8s resources into a single logical deployment unit.

Key concepts about Helm are: 

1. chart
2. repository
3. release

Few commands:

helm create "mychart"
This creates a directory tree. Keep each K8s component in its respective folder.

helm install --debug --dry-run "mychart"
This just shows the final templates after the values have been substituted.

"helm search" searches the public chart repository.

helm list
helm delete
helm list --deleted
helm rollback "name" "version number"
helm fetch // just downloads the chart
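The skeleton that "helm create" produces can be sketched as follows; this is a minimal Helm 2-era layout with abbreviated, hypothetical file contents, not the full generated tree:

```shell
# Recreate a minimal chart skeleton by hand (hypothetical contents).
mkdir -p mychart/templates mychart/charts
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v1
name: mychart
version: 0.1.0
description: A hypothetical example chart
EOF
cat > mychart/values.yaml <<'EOF'
# referenced from templates as {{ .Values.replicaCount }}
replicaCount: 1
EOF
ls mychart
```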


Suraj Deshmukh from Kinvolk talked about "K8s security updates".

He talked about Service Accounts, then about recent changes/updates:

1. Now every pod gets a distinct service account token, valid only for a fixed, limited duration.
2. New APIs
2.1 TokenRequest
2.2 TokenRequestProjection
2.3 BoundServiceAccountTokenVolume
3. RuntimeClass: now we can change docker to rkt, etc. It is still in the "Work in Progress" stage.
4. New API
PodSpec: runtimeClassName
5. NodeRestriction
Earlier, it was possible to modify the kubelet config from a pod. Now a node can see only the secrets of its own pods.
6. Encrypting secret data
aescbc, secretbox, aesgcm, kms
7. Dynamic audit backend with the new AuditSink API

Now "Bug bounty" program is coming to K8s

He insisted that everyone join the #in-dev and #in-users channels.



Aditya Konarde from RedHat gave updates from SRE (Site Reliability Engineering) perspective


* K8s is now mature; LTS
* Observability and life cycle are important
* Many vendors offer managed k8s clusters
* Serverless, Istio, service mesh
* Prometheus is used for monitoring k8s
* Thanos, Cortex, and M3 are tools for long-term retention of metrics from Prometheus
* New additions: Prometheus Operator, Grafana's Loki (it correlates metrics and logs), Istio
* The trend: DaemonSet + kernel patch for monitoring and security within kernel modules
* GitOps


Docker-con update by Ajeet Singh from DellEMC 

He talked about "Docker Desktop":
  • "Docker for Mac" and "Docker for Windows" are now "Docker Desktop"
  • Docker Desktop supports both Docker Swarm and K8s; we just need to enable it
  • The Docker Desktop Enterprise edition has an app designer interface
  • Docker Desktop Enterprise has customizable application templates
  • Docker Desktop Enterprise will be available in 1H 2019; at present it is preview only
  • As such, Linux does not need Docker Desktop; it is only for Mac and Windows

CNAB (Cloud Native Application Bundle) 


The stack, bottom to top:

1. VMs, containers, storage
2. Low-level APIs (JSON, REST)
3. Composite APIs (ARM, Terraform) and tooling
4. No single solution existed for defining and packaging these multi-service, multi-format distributed applications; now there is the CNAB package.

CNAB is a package format specification for bundling, installing, and managing distributed apps. It uses these technologies: JSON, Docker containers, OpenPGP.
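As a sketch, a minimal CNAB bundle descriptor might look like this; the field names follow my recollection of the CNAB spec and the bundle/image names are hypothetical, so treat it as illustration rather than reference:

```shell
# Hypothetical minimal CNAB bundle.json; validate that it is valid JSON.
cat > bundle.json <<'EOF'
{
  "schemaVersion": "v1.0.0",
  "name": "example-app",
  "version": "0.1.0",
  "invocationImages": [
    { "imageType": "docker", "image": "example/invocation:0.1.0" }
  ]
}
EOF
python3 -m json.tool bundle.json > /dev/null && echo "descriptor is valid JSON"
```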

Duffle is a package manager for the cloud.

Docker-assemble builds a Docker image without using a Dockerfile. It analyzes your app and its dependencies and produces a Docker image. It is built on top of BuildKit. At present it is an Enterprise edition feature. It supports many languages, including Java.

With Docker Application Package, we can push a multi-service app, not only a Docker image.

He demonstrated Compose on K8s. Now, with the Docker Compose file itself, one can deploy using (1) K8s or (2) Docker Swarm.
There is a Compose on K8s guide for minikube / Azure AKS / GKE.


He recorded and replayed remote Linux CLI sessions as part of his demonstration.