Developing Secure Software


Terms

  • Vulnerability: any exploitable weakness that can be used to breach confidentiality, integrity, or availability 
  • Threat: a potential cause of harm to an asset
  • Defect: any error that introduces a software security vulnerability
  • Threat vector: the means through which an attacker gains unauthorized access to protected resources 
  • TPM (Trusted Platform Module): dedicated hardware for securing encryption keys
  • Risk: manifests as probability and consequences
  • Requirement-level threats: threats identified during requirements analysis 
  • Architectural patterns: e.g. single point of access 
  • Flaws: design errors causing software security vulnerabilities, e.g. improper input validation -> SQL injection attacks
  • Bugs: coding errors leading to software security vulnerabilities. They can be avoided by: 
- static and dynamic code analysis
- peer code review
  • Threat Modelling Process TMP
1. identification
2. analysis
3. categorization for priorities
4. mitigation
STRIDE threat model
S: Spoofing identity
T: Tampering with data
R: Repudiation
I: Information disclosure
D: Denial of Service
E: Elevation of privilege
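The TMP steps above (identify, analyze, categorize for priorities) can be sketched in a few lines of Python. This is only an illustration: the threat entries, STRIDE labels, and numbers are invented, and risk is scored as probability x consequence per the "Risk" term above.

```python
# Hypothetical threat register: each entry carries a STRIDE letter plus
# an estimated probability and consequence (all values invented).
threats = [
    {"name": "stolen session cookie", "stride": "S", "probability": 0.4, "consequence": 8},
    {"name": "insider deletes audit logs", "stride": "R", "probability": 0.1, "consequence": 6},
    {"name": "SYN flood on login API", "stride": "D", "probability": 0.6, "consequence": 7},
]

def risk(threat):
    # Risk manifests as probability and consequences (see Terms).
    return threat["probability"] * threat["consequence"]

# Step 3: order threats so mitigation (step 4) starts with the riskiest.
prioritized = sorted(threats, key=risk, reverse=True)
for t in prioritized:
    print(f"{t['stride']}: {t['name']} (risk={risk(t):.1f})")
```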

Hardware level threats
1. eavesdropping: hardware key logger
2. man-made disruption: power outage, sabotage
3. natural disaster 
Countermeasures
1. geographically dispersed redundancy
2. UPS
3. Physical security

Secure design 
1. Security Tactics.
- Detect Attacks
- Resist Attacks
- React to Attacks
- Recover from Attacks
2. Security Patterns. Two Types 
- Design Patterns
- Architectural Patterns
Design patterns become Architectural patterns when applied consistently
3. Security Vulnerability
- CVE database maintained by MITRE: product-specific details
- CWE: categories of CVEs; useful insights into what can go wrong 
4. Architectural Analysis for Security (AAFS): three phases
- ToAA Tactic-Oriented Architectural Analysis
- PoAA Pattern-Oriented Architectural Analysis
- VoAA Vulnerability-Oriented Architectural Analysis: Code inspection
5. Software Security Anti-Patterns
- unrestricted upload of files
- unrestricted path traversal
- hardcoded password

Secure Coding
1. Buffer overflow attacks due to lack of user input validation
- automatic bounds checking
- built-in language-specific libraries for safe buffer handling
- code scanner
- advanced compiler
- OS support
2. broken authentication and session management
- Software Framework 
- OWASP Application Security Verification Standard (ASVS): V2 (Authentication) and V3 (Session Management)
3. Insecure Direct Object Reference
4. Exposing sensitive data
5. Access Control
- Identification
- Authentication
- Authorization
6. Input Validation
- Buffer overflow
- SQL injection
- Cross site scripting XSS
The Intercepting Validator security pattern addresses these
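Two of the input-handling defenses named above can be shown concretely: parameterized SQL queries against SQL injection, and output escaping against XSS. This is a minimal sketch (not the full Intercepting Validator pattern); the table and sample data are hypothetical.

```python
import html
import sqlite3

# Hypothetical in-memory database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user(name):
    # The ? placeholder keeps user input as data, never as SQL text,
    # so "' OR '1'='1" cannot change the query's structure.
    cur = conn.execute("SELECT name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

def render_comment(comment):
    # Escape before embedding in HTML so <script> tags are inert.
    return "<p>" + html.escape(comment) + "</p>"

print(find_user("' OR '1'='1"))           # injection attempt returns no rows
print(render_comment("<script>x</script>"))
```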

Testing for security
1. Static Analysis
2. Dynamic Analysis
- HCL AppScan
- Nikto2
- Qualys
3. Penetration Testing (Ethical Hacking)
- Kali Linux
4. Vulnerability Management Tools
- Nessus uses CVSS Common Vulnerability Scoring System
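The CVSS v3.x specification maps base scores to qualitative severity ratings, which scanners such as Nessus use to label findings. A small sketch of that mapping:

```python
def cvss_severity(score):
    # Qualitative severity bands from the CVSS v3.x specification.
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base score must be in [0.0, 10.0]")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))
```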

Recent Development and Future Directions
1. DevOps
2. DevSecOps https://www.redhat.com/en/topics/devops/what-is-devsecops
3. Cloud Security
- Hypervisor vulnerability
- the cloud service provider has access to the physical machines
4. Developer friendly security tools and training
5. IoT and software security
6. Rules and Regulations
- GDPR (General Data Protection Regulation): compliance expectations 
-- Data controller
-- Data processor
- HIPAA Health Insurance Portability and Accountability Act 
- PCI DSS Payment Card Industry Data Security Standard
7. Certification
- Global Information Assurance Certification GIAC
- The International Council of Electronic Commerce Consultants (EC-Council): Certified Application Security Engineer (CASE)
- International Information System Security Certification Consortium ((ISC)²)
(1) Certified Information System Security Professional CISSP 
(2) Certified Secure Software Lifecycle Professional CSSLP 

The evolution of Ingress through the Gateway API


We have three types of components. 

1. Transparent Proxies: sidecar, kube-proxy
2. Cloud LB: GCP, Azure, AWS
3. Middle Proxies: nginx, HAProxy, Envoy

Now their functionalities are overlapping 

In K8s, Ingress is a simple L7 description 

Roles: 
1. Infrastructure Provider 
2. Cluster Operator / NetOps / SRE 
3. Application Developer 

In the case of Ingress, the Infrastructure Provider provides the IngressClass. There is no role for the SRE. The Application Developer defines the Ingress and Service.

In the case of the Gateway API, the Infrastructure Provider provides the GatewayClass, which is associated with a type of LB. The SRE provides the Gateway. The Application Developer defines the Route and Service.

We have K8s objects like

1. GatewayClass
2. Gateway: provides exposure and access, LB
3. Route: HTTP, TCP, UDP, SNI and TLS: HTTPS, TLS
4. Service: about grouping and selection 
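A sketch of how these objects fit together, with the SRE-owned Gateway referencing a GatewayClass and a developer-owned HTTPRoute attaching to it and routing to a Service. All names are hypothetical, and the fields follow the current Gateway API schema rather than the alpha schema discussed at the talk:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway          # owned by the SRE / cluster operator
spec:
  gatewayClassName: example-class  # provided by the infrastructure provider
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route            # owned by the application developer
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: example-service      # the developer's Service
      port: 8080
```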

API groups
1. Core: Must be supported
2. Extended: feature-specific. May be supported; if supported, must be portable
3. Implementation specific: Not K8s API schema. No guarantee for portability

A Gateway contains Routes. A Route can be a CRD. 
Earlier we had a single Ingress resource. Now it is split into Gateway and Route, so there may be conflicts, e.g. the same host and path in multiple Routes. 

Extension Points in Alpha
1. GatewayClass parameters for LB configuration
2. Gateway.Listener has an ExtensionRef to customize listener properties
3. Route: custom filters via ExtensionRef. Backends can be more than Services (unlike Ingress)

ExtensionRef is for CRDs. 


OVS and OVN in K8s


OVS Networking in K8s

A kbr0 bridge is created on each node with the brctl command, with subnet 10.244.x.0/24. An obr0 bridge is also created and added as a port to kbr0. All obr0 bridges across nodes are connected with GRE tunnels. For large-scale isolation, VxLAN is used instead. It may not be a complete mesh of all nodes with obr0. STP mode on the obr0 bridges prevents loops. Routing rules make targets reachable within 10.244.0.0/16.

Ref: https://unofficial-kubernetes.readthedocs.io/en/latest/admin/ovs-networking/

CNIs:
1. Kube-OVN

Kube-OVN is an OVN-based Kubernetes network fabric for enterprises. With the help of OVN/OVS, it provides advanced overlay network features like subnets, QoS, static IP allocation, traffic mirroring, gateways, OpenFlow-based network policy, and service proxying.
Kube-OVN implements a subnet-per-namespace network topology. That means a CIDR can span the entire cluster's nodes, and IP allocation is fulfilled by kube-ovn-controller at a central place. Kube-OVN can apply many network configurations at the subnet level, like cidr, gw, exclude_ips, and nat. 
Calico uses no encapsulation or lightweight IPIP encapsulation, while Kube-OVN uses Geneve to encapsulate packets. No encapsulation can achieve better network performance for both throughput and latency. However, this method exposes the pod network directly to the underlay network, which brings a burden in deployment and maintenance. In some managed network environments where BGP and IPIP are not allowed, encapsulation is a must.
Kube-OVN can also work in non-encapsulation mode, which makes use of underlay switches to switch the packets, or uses hardware offload to achieve better performance than the kernel datapath.
https://github.com/alauda/kube-ovn
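A sketch of the subnet-level configuration described above as a Kube-OVN Subnet object. All names and addresses are hypothetical, and field names follow the kubeovn.io/v1 schema as I understand it; check the Kube-OVN docs before relying on them:

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: team-a-subnet           # hypothetical subnet name
spec:
  cidrBlock: 10.66.0.0/16       # CIDR spanning the whole cluster
  gateway: 10.66.0.1            # gw
  excludeIps:
  - 10.66.0.1..10.66.0.10       # exclude_ips range
  natOutgoing: true             # nat
  namespaces:
  - team-a                      # subnet-per-namespace binding
```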

2. OVN4NFV-K8s-Plugin (OVN-based CNI controller & plugin)

OVN4NFV-K8S-Plugin is an OVN-based CNI controller plugin providing cloud-native service function chaining (SFC), multiple OVN overlay networks, dynamic subnet creation, dynamic creation of virtual networks, VLAN provider networks, and direct provider networks. It is pluggable with other multi-network plugins and ideal for edge-based cloud-native workloads in multi-cluster networking.
https://github.com/opnfv/ovn4nfv-k8s-plugin

3. OVN (Open Virtual Networking)
OVN is an open-source network virtualization solution developed by the Open vSwitch community. It lets one create logical switches, logical routers, stateful ACLs, load balancers, etc. to build different virtual networking topologies. The project has a specific Kubernetes plugin and documentation at ovn-kubernetes.
ovn-kubernetes implements a subnet-per-node network topology. That means each node has a fixed CIDR range from which pod IPs are allocated.
https://github.com/ovn-org/ovn-kubernetes

4. Open vSwitch CNI plugin
This plugin allows users to define Kubernetes networks on top of Open vSwitch bridges available on the nodes. Note that ovs-cni does not configure the bridges; it is up to the user to create them and connect them to an L2, L3, or overlay network. The project also delivers the OVS marker, which exposes available bridges as node resources that can be used to schedule pods on the right node via intel/network-resources-injector. Finally, note that Open vSwitch must be installed and running on the host.

To use this plugin, Multus must be installed on all hosts and the NetworkAttachmentDefinition CRD created. First, create a network attachment definition. This object specifies which Open vSwitch bridge the pod should be attached to and what VLAN ID should be set on the port.
https://github.com/kubevirt/ovs-cni
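A sketch of such a NetworkAttachmentDefinition, following the ovs-cni README as I understand it; the bridge name and VLAN ID are hypothetical:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-vlan-100            # hypothetical attachment name
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ovs",
      "bridge": "br1",
      "vlan": 100
    }'
```

A pod would then request this network via the annotation `k8s.v1.cni.cncf.io/networks: ovs-vlan-100`, and Multus attaches a second interface on bridge br1 with VLAN 100.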

5. OVS
§ CNI binary attaching pods to an OVS bridge
§ Pod-to-pod and pod-to-Service communication with OpenFlow rules
§ Enhanced monitoring using Prometheus and OVS-exporter
§ Speed and latency comparable with leading plugins (Flannel, Calico, Weave)
§ DPDK integration possibility

OVS and OVN


Let's first understand OVS and OVN.

Limitations of OpenStack networking:
- L2 population, 
- local ARP responder, 
- L2 Gateway and 
- DVR 
https://networkop.co.uk/blog/2016/10/13/os-dvr/
https://networkop.co.uk/blog/2016/05/21/neutron-l2gw/
https://networkop.co.uk/blog/2016/05/06/neutron-l2pop/

OVN is 
- a distributed SDN controller 
- implementing virtual networks 
- with the help of OVS. 

OVN provides
- L2/L3 Virtual networking
- firewall service

Architecture: same as VMware's NSX

In contrast, the classic Neutron/OVS approach uses:
* a dedicated Linux bridge between the VM and the OVS integration bridge to implement security groups
* a dedicated namespace for the DHCP agent
* a dedicated namespace for routing
* NAT = network namespaces + iptables + proxy-ARP.

OVN implements inside a single OVS bridge:
- security groups, 
- distributed virtual routing, 
- NAT and 
- distributed DHCP server

Flow/path
Neutron data model -> 
OVN ML2 plugin -> 
OVN Northbound DB (DB node): QoS, NAT and ACL settings -> 
OVN northd (DB node) -> 
OVN Southbound DB (DB node): L2 datapath and L3 datapath -> 
OVN Controller (worker node): distributed SDN controller -> 
local OVS via OpenFlow (worker node)

Network: a virtual L2 broadcast domain
Subnet: attached to the network.
Router: provides connectivity between all directly connected subnets
Port: VM’s point of attachment to the subnet

KubeCon CloudNativeCon Europe 2020 Virtual : Keynote addresses day 2


K8s Project Update

K8s 1.18

- Raw block device support

- generic data populated volume 

- CSI on Windows

- HPA controls: rate of scale-up and scale-down

- kubectl debug with ephemeral containers

- API server : API priority

- Node topology manager 

- IPv6

- CSR API

- DryRun and kubectl diff on server side

K8s 1.19

- Generic ephemeral volumes, like emptyDir

- IPv6 on Windows

- kubectl alpha debug for node, in node's host namespace

- Initial support for cgroups V2

   =============================================================

ChatOps

razee.io: multicluster CD tool. Automates deployment across thousands of clusters; the same small SRE team manages 24,000 clusters.

ibm.biz/transformed

============================================================

Challenges: 

- kube-sprawl (cluster sprawl)

- under-utilized clusters

- noisy neighbors

- cluster-scoped CRDs, even though namespace-scoped CRDs are available

- zombie workloads

Possible solutions: 

Create K8s clusters with an expiry date. It can be extended, but an expiry date is needed.

With GitOps, we can destroy a cluster and get it back with infrastructure as code. This is also good for disaster recovery.

Let's have innovation around tools that support better cluster utilization and de-zombification.


KubeCon CloudNativeCon Europe 2020 Virtual : Keynote addresses day 1


End Users > Passive users
https://www.cncf.io/blog/2020/06/12/introducing-the-cncf-technology-radar/
=========================================================================
CNCF Project Updates

* Argo: 
- GitOps Engine
- Canary: Istio, Nginx, AWS LB, SMI

* SPIFFE/SPIRE Project updates
- Enhanced K8s automated workload registration, natively or otherwise
- Federation support

* Contour: Envoy based ingress controller
- auto TLS certificate rotation
- TLS client authentication
- Req/Res header manipulation

* TiKV https://tikv.org/
- A distributed transactional kv database. 
- data encryption

* Jaeger
- on top of OpenTelemetry collector
- OpenTelemetry data model based storage

=========================================================================
Cloud Native SD-WAN
NSM: multicloud, multicluster wiring construct among pods
=========================================================================
Intrusion detection at the container level. 
https://www.hackerone.com/
https://github.com/shopify/kubeaudit
Falco: Cloud-Native Runtime Security
System calls + enrichment with K8s data + enrichment with container engine data 
https://falco.org/
=========================================================================
OCI
http://layers7.blogspot.com/2019/05/open-container-initiative.html
=========================================================================