Identity and Access Management


Directory

1. Active Directory : Microsoft's directory service for Windows environments
2. LDAP Directory

Legal frameworks to safeguard personal information

1. Safe Harbor (US)
2. TRUSTe
3. GDPR (Europe) 

Programs

1. penetration tests
2. network scans
3. bug bounty 

Vulnerabilities

1. Open Web Application Security Project (OWASP) for Web Application Security
2. SANS Institute

Other initiatives

1. Health Insurance Portability and Accountability Act (HIPAA) to protect patient data
2. Gramm-Leach-Bliley Act (GLBA) for consumer financial information. The Federal Financial Institutions Examination Council (FFIEC) provides guidelines for it
3. National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity 
4. Family Educational Rights and Privacy Act (FERPA) to protect the privacy of student education records
5. G-Cloud by the UK government for cloud services
6. Federal Information Security Management Act (FISMA) defines a comprehensive framework to protect government information

Open Standards

1. Security Assertion Markup Language (SAML) for web browser Single Sign-On (SSO) using secure tokens. XML-based protocol; no password is sent to the service provider. 
2. OpenID : decentralized authentication protocol delegated to a third-party identity provider
3. OAuth : authorization framework. OpenID Connect is built on top of OAuth 2.0. REST API using JSON
4. System for Cross-Domain Identity Management (SCIM) to exchange user identity information. REST API using JSON or XML

KubeCon Seattle 2018 - Announcements
(via CNCF)

KubeCon Seattle 2018 recap


https://blog.openshift.com/openshift-commons-gathering-at-seattle-kubecon-2018-recap-with-video-and-slides/

https://www.cncf.io/blog/2018/12/14/closing-out-2018-with-a-top-notch-cloud-native-community-event/

https://www.forbes.com/sites/jasonbloomberg/2018/12/15/top-nine-vendor-highlights-from-kubecon/

https://aws.amazon.com/blogs/opensource/kubecon-seattle-2018-recap/

https://blogs.oracle.com/cloudnative/kubecon-2018-cloud-native-recaps-and-highlights

https://blog.openshift.com/podcast-podctl-reviewing-kubecon-seattle-2018/

https://www.storagereview.com/kubecon_2018_bits

https://www.ibm.com/blogs/bluemix/2018/12/highlights-ibm-cloud-kubecon-2018/

https://medium.com/awesome-tech-confs/all-things-kubecon-and-cloudnativecon-seattle-2018-db84eb121217

https://chrisshort.net/my-kubecon-cloudnativecon-na-2018-recap/

https://thenewstack.io/this-week-on-the-new-stack-kubecon-highlights/

https://vexxhost.com/blog/recap-kubecon-2018-seattle/

https://diamanti.com/main-blog/kubecon-2018-recap/

UDS


DevOps


Ansible


About

* Ansible needs Python, OpenSSH and a few libraries. 

* Ansible cannot be installed on Windows as the control machine; the control node runs only on Unix-like systems. It can still control / configure Windows machines using the many modules whose names start with win_*.

* Ansible is agentless.

* Ansible modules exchange data with the control machine as JSON.

* Ansible uses (1) YAML for playbooks and inventories and (2) Jinja2 templates.

Modes of operation

1. Linear (the default strategy)
2. Rolling deployments (via the serial keyword)
3. Serial
4. Free: run as fast as you can

Inventory = a set of target hosts. It is described in INI or YAML format and is located by default at /etc/ansible/hosts. 

A custom dynamic inventory script can pull data from different systems: https://github.com/ansible/ansible/tree/devel/contrib/inventory . A custom script can be developed by following https://docs.ansible.com/ansible/latest/dev_guide/developing_inventory.html . Each cloud provider has its own dynamic inventory script; packet.net is also a cloud provider. 

The inventory is not tied to a set of Ansible instructions. It is a grouped set of hosts in [group] sections; subgroups are declared with [group:children]. Groups can be based on location, purpose (e.g. web, DB) or OS. A host can be accessed within a playbook by array index, e.g. the first host in the group named "group" is "{{ groups['group'][0] }}". 

Operator ! : use group:!subgroup to exclude the subgroup. 
Operator & : use group:&subgroup for the intersection.
The ':' separator is required before each group name, regardless of the operator. A small sketch follows below.
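A minimal sketch of such an inventory (hypothetical host names) and of the pattern operators:

[web]
web1.example.com
web2.example.com

[db]
db1.example.com

[prod:children]
web
db

Hypothetical ad-hoc commands using the operators:

ansible 'prod:!db' -i hosts -m ping     # every prod host except those in the db group
ansible 'web:&prod' -i hosts -m ping    # hosts that are in both web and prod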


Inventory variables are key-value pairs. The same variable name can be defined at multiple levels : host, group, group of groups, all groups. 

Keywords : ansible_ssh_host (ansible_host in newer releases), ansible_connection, ansible_user, ansible_password

localhost does not need to be defined in the inventory.
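For example, host and group variables can be set directly in an INI inventory (the values below are hypothetical):

[web]
web1.example.com ansible_host=10.0.0.11 ansible_user=deploy

[web:vars]
http_port=8080

[local]
localhost ansible_connection=local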

Template

One can generate a text file from a template, substituting the variable values defined for that host. 

One can have a for loop inside a Jinja2 template using

{% for package in packages %}{{package}}{% if not loop.last %}, {% endif %}{% endfor %}

To get all key-value pairs of a nested dictionary, use "dict name".iteritems() (on Python 3 control nodes, use .items()).
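A small template sketch combining both ideas (the template file name packages.txt.j2 and the variables packages / settings are hypothetical):

Packages: {% for package in packages %}{{ package }}{% if not loop.last %}, {% endif %}{% endfor %}
{% for key, value in settings.iteritems() %}
{{ key }} = {{ value }}
{% endfor %}

A task that renders it on the target host:

- name: Render the package report
  template:
    src: packages.txt.j2
    dest: /tmp/packages.txt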

Task

A descriptive desired state expressed in YAML. 
Task data
Task control : looping, conditionals, privilege escalation (the -b option)
Related option : --start-at-task (resume a playbook run from a named task)

Modules 

Code that a task uses to perform its work. It can be written in any language : Python, Ruby, Perl, Bash etc. 
Modules are placed at the /usr/share/ansible path.

Playbook 

A YAML-formatted file that contains plays. 
Commands : 

ansible-playbook "yaml file"
ansible-playbook "yaml file" -i "inventory file"

It maps a group of hosts to a set of roles. A role is a set of Ansible tasks. 

Using an Ansible task we can have a group of Python modules installed with the pip command into a given virtual environment, as in the sketch below. 
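A sketch of such a task (the virtualenv path and package names are hypothetical):

- name: Install Python packages into a virtual environment
  pip:
    name:
      - flask
      - requests
    virtualenv: /opt/venvs/myapp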

We can use handlers and notify, as in the sketch below. 
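A minimal handler/notify sketch (the nginx example is hypothetical):

- hosts: web
  tasks:
    - name: Deploy nginx configuration
      template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: restart nginx
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted

The handler runs once at the end of the play, and only if a notifying task reported a change.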

Options
-v / -vv / -vvv make the ansible-playbook command increasingly verbose. 
-e (--extra-vars) passes a variable into the run. Variables can also be defined in the inventory file and in YAML files. Pass one -e option per variable. 
--check runs in check ("dry run") mode, without making changes.
--ask-vault-pass prompts for the vault password.

Some useful keywords
changed_when
with_sequence
with_items
with_dict
when
wait_for
ansible_os_family
gather_facts : set this to False if Python is missing on the target

All Keywords are here : https://docs.ansible.com/ansible/latest/reference_appendices/playbooks_keywords.html

Loops:
1. variable sets
2. Sequences
3. Retries on failures
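A sketch combining a few of the keywords and loop styles above (the package names and the health-check script path are hypothetical):

- name: Install packages on Debian-family hosts only
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - curl
  when: ansible_os_family == "Debian"

- name: Retry a flaky step until it succeeds
  command: /usr/local/bin/health-check
  register: result
  retries: 5
  delay: 10
  until: result.rc == 0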

Playbooks are placed at the /usr/share/ansible/library path.

Playbook format

---
- hosts: all
  connection: local
  tasks:
    - name: Do Something
      module:
        parameter: value
        another_parameter: '{{ variable }}'

Tags can be associated with plays or tasks.
They can be passed on the command line as --tags "tag name" OR --skip-tags "tag name", as in the sketch below.
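A tagging sketch (the tag names are hypothetical):

- name: Install web server packages
  package:
    name: httpd
    state: present
  tags:
    - packages
    - web

Then run only (or skip) the tagged tasks:

ansible-playbook site.yml --tags "web"
ansible-playbook site.yml --skip-tags "packages"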

Variables can be defined inside the inventory file OR outside it, in folders like host_vars and group_vars. 

An alternative to playbooks for ad-hoc tasks is the "ansible" executable, with -m for the module name and -a for its arguments. 

ansible-doc copy
ansible all -m copy -a "src=<local file> dest=<remote path>"

ansible-doc command
ansible all -m command -a "<command to run>"

Forks : the maximum number of hosts acted on concurrently

Galaxy

It is a directory of roles. A role is a grouping of tasks. Each "role" folder should have a "tasks" subfolder that contains a main.yml file. Galaxy is also a public repository of roles run by Red Hat: https://galaxy.ansible.com/

ansible-galaxy login
ansible-galaxy import "user name" "role name" 
ansible-galaxy search "name"
ansible-galaxy install "user name.role name" -p "path"


Vault

ansible-vault encrypt vault
ansible-vault edit vault
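A typical vault workflow sketch (the file names are hypothetical):

ansible-vault create group_vars/all/vault.yml     # create a new encrypted variable file
ansible-vault edit group_vars/all/vault.yml       # edit it in place
ansible-playbook site.yml --ask-vault-pass        # prompt for the vault password at run time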

Network Management

Use of the ipaddr filter
Modules: set_fact
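A sketch of set_fact together with the ipaddr filter (requires the netaddr Python library; the address is hypothetical):

- name: Derive address facts
  set_fact:
    host_ip: "{{ '192.168.1.25/24' | ipaddr('address') }}"   # 192.168.1.25
    network: "{{ '192.168.1.25/24' | ipaddr('network') }}"   # 192.168.1.0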

Plugins

Popular ones: 

1. callbacks: for hooking into logging or displaying Ansible actions.
2. connection: for communication methods 
3. filter: for manipulating data within templates.

Task Automation
1. Ansible Tower (upstream AWX project) : commercial product by Red Hat. REST API web service.
2. Semaphore : open source, written in Go. 


Reference

Ansible Modules


Ansible has many modules. Here is a list of some of them that drew my attention.

Software installation Related

package – Generic OS package manager
yum – Manages packages with the yum package manager
yum_repository – Add or remove YUM repositories
apk – Manages apk packages
apt – Manages apt-packages
apt_repository – Add and remove APT repositories
apt_rpm – apt_rpm package manager
npm – Manage node.js packages with npm
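For instance, a distribution-agnostic install task could look like this sketch (the package name is hypothetical):

- name: Ensure nginx is present (package delegates to yum, apt, apk, ... underneath)
  package:
    name: nginx
    state: present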

Other Misc. 

cli_command – Run a cli command on cli-based network devices
command – Executes a command on a remote node
copy – Copies files to remote locations
cron – Manage cron.d and crontab entries
debug – Print statements during execution
fetch – Fetches a file from remote nodes
file – Sets attributes of files
filesystem – Makes a filesystem
find – Return a list of files based on specific criteria
git – Deploy software (or files) from git checkouts
hostname – Manage hostname
ip_netns – Manage network namespaces
iptables – Modify the systems iptables
lineinfile - Manage lines in text files
ping - Try to connect to host, verify a usable python and return pong on success
python_requirements_facts – Show python path and assert dependency versions
reboot – Reboot a machine
service – Manage services
shell – Execute commands on nodes.
systemd - Manage Services
uri – Interacts with webservices

Specific modules for particular technology

os_* OpenStack related modules
docker_* Docker related modules
jenkins_* Jenkins related modules
k8s_* Kubernetes related modules
net_* Network related modules
win_* Windows host related modules

Reference

Git


Useful commands

1. To combine "git add" and "git commit -m", use "git commit -am"
2. Never use a forced push ("git push --force")  
3. Run "git pull origin master" daily
4. ".gitignore" file patterns (see the sketch below)
4.1 name : matches a file name OR a directory name
4.2 name/ : matches only a directory name
4.3 **/name : matches a directory in any subfolder
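A small .gitignore sketch using those three pattern forms (the names are hypothetical):

# a file or directory named "build"
build
# only directories named "logs"
logs/
# a directory named "tmp" in any subfolder
**/tmp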

Settings

1. core.autocrlf
git config core.autocrlf input
1.1 "input" when developing on a Unix-like platform
1.2 "true" when developing on Windows
1.3 "false" when development and the repository share a common platform

2. Editor
git config --global core.editor "path of the editor" 

3. Git commit message template
git config commit.template "path"
The message is visible in the output of the "git log" command.
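A sketch, assuming a hypothetical template file at ~/.gitmessage:

git config --global commit.template ~/.gitmessage

contents of ~/.gitmessage:

[feat|bug] short summary line

Why:
Issue: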


Branch name convention:
changeType/id/short-name, where changeType is bug or feat and id is the bug or feature id (e.g. a hypothetical feat/1234/add-login-page)

Central Repository

1. GitLab : Depends on curl, ca-certificates, openssh-server
1.1 code repository
1.2 issue trackers
1.3 wikis
1.4 CI/CD tooling / integration (GitLab runner) 
1.5 docker container registry.
1.6 code snippets store area (gist feature)

Issues are added to the Backlog by default. They are grouped into milestones.

A commit message should contain a keyword like Fix, Fixes, Close or Closes followed by an issue number; GitLab will then automatically update the status of the issue pointed to by that number (see the sketch below).

In a merge request we can add "/spend 1h" to log time spent.
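For example, a commit message like the following (the issue number is hypothetical) would let GitLab close the referenced issue:

Fixes #42: reject empty user names on the login form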

2. git init --bare

3. GitHub : REST API

4. Bitbucket : integration with Jira and Confluence 

Different Branches

1. Long-running branches : master, develop
2. Feature branches

3. Hotfix branches

Branching Strategy 

1. TBD Trunk Based Development

2. GitFlow: a Develop branch is created from the Master branch, and multiple feature branches are created from the Develop branch. After all feature branches are merged back into Develop, a new release branch is created; all bug fixes happen on the release branch. Finally the release branch is merged into master. A hotfix branch from master is used for emergencies. Both the develop and master branches are controlled by maintainers (see the sketch below). 
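A command-level sketch of that flow (branch names and the version number are hypothetical):

git checkout -b develop master
git checkout -b feat/1234/add-login develop
# ... commits on the feature branch ...
git checkout develop
git merge --no-ff feat/1234/add-login
git checkout -b release/1.2.0 develop
# ... only bug fixes on the release branch ...
git checkout master
git merge --no-ff release/1.2.0
git tag -a 1.2.0 -m "Release 1.2.0"
git checkout develop
git merge --no-ff release/1.2.0     # fixes flow back to develop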

CI Tools

1. Jenkins (CloudBees for the cloud-hosted offering) 

2. GitLab : using "GitLab Runners"

Installation : https://docs.gitlab.com/runner/install/
Register the runner
Add a ".gitlab-ci.yml" file (see the sketch below)
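A minimal .gitlab-ci.yml sketch (the stage names and commands are hypothetical):

stages:
  - build
  - test

build-job:
  stage: build
  script:
    - echo "compiling ..."

test-job:
  stage: test
  script:
    - echo "running tests ..."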

3. TFS Team Foundation Server (Microsoft), integrated with Visual Studio

Reference:

http://nvie.com/posts/a-successful-git-branching-model

https://barro.github.io/2016/02/a-succesful-git-branching-model-considered-harmful
http://try.github.io/
https://github.com/zendframework/maintainers

DevSecOps


Components of DevSecOps

  1. Code Analysis
  2. Change Management
  3. Compliance Monitoring
  4. Threat Investigation
  5. Vulnerability Assessment 
  6. Security Training
Ops-side Automation

1. Vulnerability scanning
2. Network Security
3. Automated Patching Compliance
4. Encryption 

Types of Tools

1. Static Application Security Testing (SAST)
1.1 Fortify
1.2 AppScan
1.3 CheckMarx
1.4 SonarQube
1.5 Burp
1.6 Nessus
1.7 MobSF
1.8 Crucible (code auditing software)
Open source tools: 
1.9 FindSecBugs 
1.10 Brakeman
1.11 PMD

2. Dynamic Application Security Testing (DAST)
2.1 WebInspect
2.2 Burp
2.3 AppSpider
2.4 sqlmap
Open source:
2.5 OWASP ZAP

* Vulnerability Testing
1. Qualys
2. Nessus
3. OpenVAS (free)
4. NACL (cloud security check) 

3. Interactive Application Security Testing (IAST)
It uses instrumentation, similar to a performance monitoring tool. 
It works at the JRE or .NET runtime level. 
3.1 Contrast
3.2 Seeker 

* Continuous Monitoring
1. Recon-ng (for Python)
2. Contrast RASP

* OWASP Glue Tool Project : a Docker container that keeps all the tools together. 

These tools should be part of a single CI/CD pipeline. FindSecBugs is available as a plugin for Java IDEs. 

Reference

K8S meetup


Last Saturday, on 19th Jan 2019, I attended an interesting meetup event named "Joint Meetup with Kubernetes & OpenShift + CloudNativeCon Meetup Group of Bangalore", jointly hosted by the (1) Bangalore CNCF Meetup, (2) Docker Bangalore and (3) Kubernetes & Openshift India Community meetup groups.

Krishna Kumar (Huawei) shared his experience from the recent KubeCon event. He showed the book "The Illustrated Children's Guide to Kubernetes" (the Phippy story) and asked who knew about it; surprisingly, very few were aware of it. The book was created just to spice up KubeCon events. He shared some numbers about the event: 

* 8000 people attended it in person 
* 2000 people attended over the live stream
* 50+ announcements; he covered the major ones in his slide deck 
* 250+ exhibitions
* 47 hands-on sessions, known as pre-conf and post-conf sessions
* Many people wrote excellent recaps of KubeCon on their blogs; Krishna shared the top 15 recaps in his slide deck 

He noticed plenty of job postings; all organisations are hiring k8s experts. 

Everyone says K8s is complex; however, OpenShift is making it easy. 

At CNCF, every software project belongs to one of three categories: (1) Graduated, (2) Incubating and (3) Sandbox. 

He talked about some of the sessions that he could attend. (1) Operator Framework: it is about adding domain knowledge on how to bring up a specific application using a custom controller on k8s. (2) The Helm session ran for 2.5 hours, of which 1.5 hours was just questions and answers! (3) Kustomize is about one YAML referring to multiple YAML files. (4) Application Special Interest Group (App SIG). (5) CNAB (Cloud Native Application Bundles). 

He also discussed k8s application deployment: several options, and a comparison among a few of them. 

The "k8s day india" event is scheduled at Infosys on March 23. 

Reference: https://www.slideshare.net/mKrishnaKumar1/kubecon-seattle-2018-recap-application-deployment-aspects

=======================================

Rajkavitha Kodhandapani talked about "Special Interest Groups - Docs". Everyone wants to use k8s and develop tools for k8s; however, very few are contributing to the k8s documentation. She motivated everyone to contribute. 

Reference: https://www.slideshare.net/RajakavithaKodhandap/kubecon-2018seattle
=======================================

Abhishek Kumar discussed Helm, the Airflow scheduler, the Flower service, etc. A Helm chart packages multiple K8s resources into a single logical deployment unit. 

Key concepts about Helm are: 

1. chart
2. repository
3. release

Few commands:

helm create "mychart"
It creates a directory tree; keep each K8s component in its respective folder. 

helm install --debug --dry-run "mychart"
It just renders the final templates after substituting the values, without installing anything. 

"helm search" searches the public repository at https://kubernetes-charts-incubator.storage.googleapis.com/

helm list
helm delete
helm list --deleted
helm rollback "name" "version number"
helm fetch (just downloads the chart) 
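Values from values.yaml can also be overridden at install or upgrade time; a sketch using the Helm 2 syntax of that time (release and value names are hypothetical):

helm install --name myrelease ./mychart --set image.tag=1.2.3
helm upgrade myrelease ./mychart -f custom-values.yaml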

Reference: https://github.com/helm/charts
==============================================

Suraj Deskmujh from Kinvolk talked about "K8s security updates".

He talked about Service Accounts, and then about recent changes/updates such as: 

1. Now every pod gets a different service account token, which is valid only for a fixed, limited duration. 
2. New APIs
2.1. TokenRequest
2.2. TokenRequestProjection
2.3. BoundServiceAccountTokenVolume
3. RuntimeClass: now we can switch the container runtime, e.g. from Docker to rkt. It is still at the "work in progress" stage. 
4. New API
PodSpec : runtimeClassName
5. NodeRestriction
Earlier, it was possible to modify the kubelet configuration from a pod. Now a node can see only the secrets of the pods scheduled on it. 
6. Encrypting secret data
EncryptionConfiguration (see the sketch after this list)
aescbc, secretbox, aesgcm, kms
7. Dynamic audit backend with the new AuditSink API
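A sketch of the EncryptionConfiguration mentioned in item 6 (the key material is a placeholder; the file is passed to the API server via its encryption-provider-config flag):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}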

Now "Bug bounty" program is coming to K8s

He urged everyone to join slack.k8s.io and the channels #in-dev and #in-users.

Reference: 

https://www.slideshare.net/surajssd009005/kubernetes-security-updates-from-kubecon-2018-seattle

===============================

Aditya Konarde from Red Hat gave updates from an SRE (Site Reliability Engineering) perspective.

Takeaway

* K8s is now mature. LTS
* Observability and life cycle management are important
* Many vendors to manage k8s cluster
* Serverless, istio, service mesh
* Prometheus used for monitoring k8s. 
* Thanos, Cortex and M3 tools are for long term retention of metrics from Prometheus. 
* New additions: Prometheus Operator, Grafana's Loki (it brings logs alongside metrics), Istio
* The trend: DaemonSet plus kernel-level patches/modules for monitoring and security
* GitOps https://github.com/app-sre/qontract-server


==========================================

DockerCon update by Ajeet Singh from DellEMC 

He talked about "Docker Desktop" :
  • "Docker for Mac" and "Docker for Windows" are not the same as "Docker Desktop"
  • Docker Desktop supports both Docker Swarm and K8s; we just need to enable it
  • The Docker Desktop Enterprise edition has an app designer interface
  • Docker Desktop Enterprise has customizable application templates
  • Docker Desktop Enterprise will be available in 1H 2019; at present it is for preview only
  • As such, Linux does not need Docker Desktop; it is only for macOS and Windows

CNAB (Cloud Native Application Bundle) 

Layers (from top to bottom):

4. There was no single solution for defining and packaging these multi-service, multi-format distributed applications; now there is the CNAB package
3. Composite APIs (ARM, Terraform) and tooling
2. Low level APIS (JSON, REST)
1. VM, containers, storage

CNAB is a package format specification for bundling, installing and managing distributed apps. It uses these technologies : JSON, Docker containers, OpenPGP.

Duffle is a package manager for the cloud. 
Reference : https://github.com/garethr/docker-app-cnab-examples

docker-assemble builds a Docker image without using a Dockerfile. It analyzes your app and its dependencies and produces a Docker image. It is built on top of BuildKit. At present it is an Enterprise Edition feature. It supports many languages, including Java.

With Docker Application packages, we can push a multi-service app, not only a single Docker image. 
Reference : https://github.com/docker/app

He demonstrated Compose on K8s. Now, with the Docker Compose file itself, one can deploy using (1) K8s or (2) Docker Swarm (see the sketch below).
Compose on K8s guides exist for minikube / Azure AKS / GKE.
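The deployment command looked roughly like this sketch (the stack and file names are hypothetical):

docker stack deploy --orchestrator=kubernetes -c docker-compose.yml mystack
docker stack deploy --orchestrator=swarm -c docker-compose.yml mystack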

Reference

https://github.com/collabnix/dockerlabs
https://www.slideshare.net/mobile/ajeetraina/dockercon-2018-eu-updates
http://collabnix.com/


He used https://asciinema.org/ to record and replay a remote Linux CLI session during his demonstration.