<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Hemant Kumar</title>
    <description></description>
    <link>https://hemantkumar.net/</link>
    <atom:link href="https://hemantkumar.net/kodekitabfeed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sun, 14 Dec 2025 06:16:19 +0000</pubDate>
    <lastBuildDate>Sun, 14 Dec 2025 06:16:19 +0000</lastBuildDate>
    <generator>Jekyll v3.10.0</generator>
    
    	 
    
    	
	      <item>
	        <title>Kubernetes certificate based mutual auth with different CAs</title>
	        <description>&lt;p&gt;Configuring certificate based mutual authentication in Kubernetes using nginx ingress controller is explained pretty well in &lt;a href=&quot;https://medium.com/@awkwardferny/configuring-certificate-based-mutual-authentication-with-kubernetes-ingress-nginx-20e7e38fdfca&quot;&gt;this&lt;/a&gt; post. However, the post assumes that the certificates used for validating the client and the server are issued by the same CA (Certificate Authority). How do you configure client certificate authentication in kubernetes when using client and server certificates issued by different CAs? The current &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/&quot;&gt;nginx ingress controller docs&lt;/a&gt; do not make this absolutely clear either. I recently came across a scenario where we were using our own internal/private CA for issuing client certificates and a publicly trusted CA for server TLS. This post covers configuring kubernetes nginx ingress to use certificates issued by different CAs on the same host to perform mutual authentication.&lt;/p&gt;

&lt;h2 id=&quot;what-is-mutual-authentication&quot;&gt;What is mutual authentication?&lt;/h2&gt;

&lt;p&gt;Mutual authentication, or 2-way authentication, is a process in which both the client and the server verify each other’s identity via a Certificate Authority. An &lt;a href=&quot;https://www.ssl.com/faqs/what-is-an-x-509-certificate/&quot;&gt;X.509 Certificate&lt;/a&gt; can provide identity to a machine or a device and enables independent verification of the issued identity by an external authority such as a CA. Mutual authentication, as defined by &lt;a href=&quot;https://www.codeproject.com/Articles/326574/An-Introduction-to-Mutual-SSL-Authentication&quot;&gt;codeproject.com&lt;/a&gt;, is therefore also referred to as certificate-based mutual authentication.&lt;/p&gt;

&lt;blockquote&gt;Mutual SSL authentication or certificate based mutual authentication refers to two parties authenticating each other through verifying the provided digital certificate so that both parties are assured of the others’ identity.&lt;/blockquote&gt;

&lt;p&gt;You can have the client and the server certificates issued by the same CA or, as shown below, by different CAs.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../assets/mutual-auth.png&quot; alt=&quot;mutual-auth.png&quot; title=&quot;Mutual authentication&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;configuring-mutual-authentication&quot;&gt;Configuring mutual authentication&lt;/h2&gt;

&lt;p&gt;To configure mutual authentication for a host in a Kubernetes cluster, we are going to run a simple application within Kubernetes and ensure it can be accessed publicly over TLS only with a valid client certificate.&lt;/p&gt;

&lt;p&gt;There are &lt;a href=&quot;https://kubernetes.io/docs/tutorials/hello-minikube/&quot;&gt;many ways&lt;/a&gt; of setting up a Kubernetes cluster, but for this exercise we are going to use &lt;a href=&quot;https://docs.microsoft.com/en-gb/azure/aks/kubernetes-walkthrough&quot;&gt;Azure Kubernetes Service (AKS)&lt;/a&gt; and &lt;a href=&quot;https://kubernetes.github.io/ingress-nginx/deploy/&quot;&gt;deploy the nginx ingress controller&lt;/a&gt; to it. We use an ingress controller to route (layer 7) external traffic to applications running within the AKS cluster and to expose multiple services under the same IP address. Deploying the ingress controller on AKS provisions a load balancer in Azure and assigns it a public IP. This allows the nginx controller to be accessed publicly via an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXTERNAL_IP&lt;/code&gt;. After deploying the nginx ingress controller, you can get its external IP with:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;kubectl get svc &lt;span class=&quot;nt&quot;&gt;-n&lt;/span&gt; ingress-nginx

NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP    PORT&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;S&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;                      AGE
ingress-nginx-controller             LoadBalancer   10.0.106.200   13.93.79.119   80:31599/TCP,443:31682/TCP   46d
ingress-nginx-controller-admission   ClusterIP      10.0.101.201   &amp;lt;none&amp;gt;         443/TCP                      46d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;We are going to use the wildcard DNS service &lt;a href=&quot;https://nip.io/&quot;&gt;nip.io&lt;/a&gt; to give this &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXTERNAL_IP&lt;/code&gt; a domain name of the form &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXTERNAL_IP.nip.io&lt;/code&gt; and configure server TLS for this domain.&lt;/p&gt;
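&lt;p&gt;Because nip.io answers any &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;IP.nip.io&lt;/code&gt; query with the IP embedded in the hostname, no DNS records need to be created. As a quick sanity check, using the example &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;EXTERNAL_IP&lt;/code&gt; from above (any resolver tool will do):&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# the hostname resolves to the IP embedded in it
dig +short 13.93.79.119.nip.io

13.93.79.119
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;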

&lt;h3 id=&quot;generating-the-certificates&quot;&gt;Generating the certificates&lt;/h3&gt;

&lt;p&gt;Usually you would not be in possession of a public CA’s key and certificate. A &lt;a href=&quot;https://en.wikipedia.org/wiki/Certificate_signing_request&quot;&gt;Certificate Signing Request&lt;/a&gt; (CSR) is sent to a public CA to obtain a globally trusted certificate for securing your assets. However, for demonstration purposes, below we generate two separate CA certificates using OpenSSL and then generate a server certificate and a client certificate, each signed by a different CA, to configure the Kubernetes ingress.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Generate a public CA Key and Certificate&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openssl req &lt;span class=&quot;nt&quot;&gt;-x509&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-sha256&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-newkey&lt;/span&gt; rsa:4096 &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-nodes&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-keyout&lt;/span&gt; public-ca.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; public-ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=Public Cert Authority/O=Org Public CA/C=GB&apos;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Generate the Server Key and Server Certificate and Sign with the public CA Certificate&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openssl req &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-nodes&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-newkey&lt;/span&gt; rsa:4096 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; server.csr &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-keyout&lt;/span&gt; server.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN={EXTERNAL_IP}.nip.io/O=aks-ingress/C=GB&apos;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openssl x509 &lt;span class=&quot;nt&quot;&gt;-req&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-sha256&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; server.csr &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-CA&lt;/span&gt; public-ca.crt &lt;span class=&quot;nt&quot;&gt;-CAkey&lt;/span&gt; public-ca.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-set_serial&lt;/span&gt; 01 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; server.crt

&lt;span class=&quot;c&quot;&gt;# Generate an internal CA Key and Certificate&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openssl req &lt;span class=&quot;nt&quot;&gt;-x509&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-sha256&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-newkey&lt;/span&gt; rsa:4096 &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;nt&quot;&gt;-nodes&lt;/span&gt; &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-keyout&lt;/span&gt; internal-ca.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; internal-ca.crt &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=Internal Cert Authority/O=Org Internal CA/C=GB&apos;&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;# Generate the Client Key and Client Certificate and Sign with the internal CA Certificate&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openssl req &lt;span class=&quot;nt&quot;&gt;-new&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-nodes&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-newkey&lt;/span&gt; rsa:4096 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; client.csr &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-keyout&lt;/span&gt; client.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-subj&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&apos;/CN=internal-client/O=aks-ingress-client/C=GB&apos;&lt;/span&gt;
&lt;span class=&quot;nv&quot;&gt;$ &lt;/span&gt;openssl x509 &lt;span class=&quot;nt&quot;&gt;-req&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-sha256&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-days&lt;/span&gt; 365 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-in&lt;/span&gt; client.csr &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-CA&lt;/span&gt; internal-ca.crt &lt;span class=&quot;nt&quot;&gt;-CAkey&lt;/span&gt; internal-ca.key &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-set_serial&lt;/span&gt; 02 &lt;span class=&quot;se&quot;&gt;\&lt;/span&gt;
	&lt;span class=&quot;nt&quot;&gt;-out&lt;/span&gt; client.crt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
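&lt;p&gt;Before wiring these into Kubernetes, you can confirm the trust relationships locally with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;openssl verify&lt;/code&gt;. Assuming the files generated above, the client certificate should chain to the internal CA only, and the server certificate to the public CA only:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# client.crt was signed by the internal CA, so verification succeeds
$ openssl verify -CAfile internal-ca.crt client.crt
client.crt: OK

# verifying it against the public CA fails with
# &quot;unable to get local issuer certificate&quot;
$ openssl verify -CAfile public-ca.crt client.crt

# the server certificate chains to the public CA instead
$ openssl verify -CAfile public-ca.crt server.crt
server.crt: OK
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;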

&lt;h3 id=&quot;create-the-kubernetes-secrets&quot;&gt;Create the kubernetes secrets&lt;/h3&gt;

&lt;p&gt;Kubernetes requires you to store the certificates as secrets so that the nginx ingress controller can use them. We create two separate secrets: one for the internal CA certificate used to validate client certificates, and one for server TLS so the client can validate the server’s identity.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;# Add a secret for the internal CA certificate to validate client certs &lt;/span&gt;
kubectl create secret generic internal-ca &lt;span class=&quot;nt&quot;&gt;--from-file&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;ca.crt&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;internal-ca.crt

&lt;span class=&quot;c&quot;&gt;# Add a secret for server TLS (e.g. issued by a public CA) to validate server&apos;s identity&lt;/span&gt;
kubectl create secret tls server-tls &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; server.key &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; server.crt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;deploy-the-application&quot;&gt;Deploy the application&lt;/h3&gt;
&lt;ol&gt;
  &lt;li&gt;Deploy the application pods.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-svc
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
        ports:
        - containerPort: 8080&quot;&lt;/span&gt; | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
&lt;ol&gt;
  &lt;li&gt;Expose the pods within the cluster using a Service.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc&quot;&lt;/span&gt; | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;create-the-ingress-rule&quot;&gt;Create the ingress rule&lt;/h3&gt;

&lt;p&gt;The previous step exposes the service within the Kubernetes cluster. To access the service externally we need to create an ingress rule. The ingress rule below sets up TLS and makes the service available on &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;https://{EXTERNAL_IP}.nip.io&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;default/internal-ca&lt;/code&gt; secret containing the Internal CA certificate is used for the client certificate validation&lt;/li&gt;
  &lt;li&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;server-tls&lt;/code&gt; secret containing the server certificate is used for server TLS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Please note for the ingress rule to take effect it needs to be created in the same namespace as the service&lt;/em&gt;.&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;nb&quot;&gt;echo&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&quot;
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;on&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
    nginx.ingress.kubernetes.io/auth-tls-secret: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;default/internal-ca&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: &lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;true&lt;/span&gt;&lt;span class=&quot;se&quot;&gt;\&quot;&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;
  name: http-svc
  namespace: default
spec:
  rules:
  - host: {EXTERNAL_IP}.nip.io
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
  tls:
  - hosts:
    - {EXTERNAL_IP}.nip.io
    secretName: server-tls&quot;&lt;/span&gt; | kubectl apply &lt;span class=&quot;nt&quot;&gt;-f&lt;/span&gt; -
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;test-the-ingress-configuration&quot;&gt;Test the ingress configuration&lt;/h3&gt;

&lt;p&gt;Sending a request without a client certificate and key should return a 400 error; however, validation of the server certificate (issued by the public CA) still succeeds, as shown below:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-k&lt;/span&gt; https://&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;EXTERNAL_IP&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;.nip.io

&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt; Server certificate:
&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;  subject: &lt;span class=&quot;nv&quot;&gt;CN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;13.93.79.119.nip.io&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;O&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;aks-ingress&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;C&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;GB
&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;  start &lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt;: Nov 10 23:19:03 2020 GMT
&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;  expire &lt;span class=&quot;nb&quot;&gt;date&lt;/span&gt;: Nov 10 23:19:03 2021 GMT
&lt;span class=&quot;k&quot;&gt;*&lt;/span&gt;  issuer: &lt;span class=&quot;nv&quot;&gt;CN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Public Cert Authority&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;O&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Org Public CA&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;nv&quot;&gt;C&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;GB
...
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;400 Bad Request&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;center&amp;gt;No required SSL certificate was sent&amp;lt;/center&amp;gt;
....
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Sending a request with the client certificate and key should route the request to http-svc:&lt;/p&gt;

&lt;div class=&quot;language-sh highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl &lt;span class=&quot;nt&quot;&gt;-v&lt;/span&gt; &lt;span class=&quot;nt&quot;&gt;-k&lt;/span&gt; https://&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;EXTERNAL_IP&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;.nip.io &lt;span class=&quot;nt&quot;&gt;--cert&lt;/span&gt; client.crt &lt;span class=&quot;nt&quot;&gt;--key&lt;/span&gt; client.key
...
ssl-client-issuer-dn&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;C&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;GB,O&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Org Internal CA,CN&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;Internal Cert Authority
ssl-client-subject-dn&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;C&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;GB,O&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;aks-ingress-client,CN&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;internal-client
ssl-client-verify&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;SUCCESS
user-agent&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;curl/7.58.0
....
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
</description>
	        <pubDate>Thu, 29 Oct 2020 07:31:00 +0000</pubDate>
	        <link>https://hemantkumar.net/kubernetes-mutual-auth-with-diffferent-cas.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/kubernetes-mutual-auth-with-diffferent-cas.html</guid>
	        
	        <category>kubernetes</category>
	        
	        <category>aks</category>
	        
	        <category>mutual-auth</category>
	        
	        <category>ingress</category>
	        
	        <category>nginx</category>
	        
	        <category>ingress-controller</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	
	      <item>
	        <title>Transformation, what the heck?</title>
	        <description>&lt;p&gt;Throughout my consulting career, I’ve frequently encountered the term “transformation” being thrown around excessively in IT circles. Virtually every organization is undergoing some form of transformation, whether it’s related to agility, digitalisation, or cultural change. To gain a genuine understanding of what transformation entails, it’s crucial to delve deeper and clarify some of the commonly used terminology.&lt;/p&gt;

&lt;p&gt;Transformation is a word used to &lt;a href=&quot;https://www.thoughtworks.com/insights/blog/gut-check-time-do-you-have-what-it-takes-transform&quot;&gt;signify intent to make a change&lt;/a&gt;, often triggered by an imminent business disaster that is going to affect bottom lines or by a change in the executive. It is generally driven by executives who have a sense of urgency, e.g. the threat of being driven out of business by disruption from new players in the market. Making things digital is what we did 30 years ago: we replaced tapes with CDs, letters with email and paper-based calculations with spreadsheets. &lt;strong&gt;Digital transformation&lt;/strong&gt; is not about making things digital but about modelling your business around technology. It means treating technology not merely as a business support function or a commodity available for purchase from an IT vendor, but as a key differentiator that provides deeper insights into customer behaviours and steers your market strategy. This involves modernising your technology stack to unlock data from legacy systems (e.g., through APIs), externalising it to collaborate with partners, and creating novel business opportunities. It also entails providing services to customers anytime, anywhere, and on any device. Additionally, it may involve establishing a data-driven business by combining data from various sources, constructing analytics models for predicting and optimising outcomes, and subsequently transforming the business based on these models to enhance the return on investment from data.&lt;/p&gt;

&lt;h2 id=&quot;agile-transformation&quot;&gt;Agile transformation&lt;/h2&gt;

&lt;p&gt;Driving business change through agile transformation is modelled around aligning people, processes and products.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;people - cultivate a supportive environment that emphasizes learning from failures rather than assigning blame. Set teams up to succeed by giving them the time, space and resources to experiment and learn.&lt;/li&gt;
  &lt;li&gt;processes and tools - adopt processes and tools that help in continuous improvement and better collaboration by breaking silos within the business&lt;/li&gt;
  &lt;li&gt;governance - manage investment risk to
    &lt;ul&gt;
      &lt;li&gt;understand where your IT spend is going. Are you tracking cost, time and resource allocation efficiency or tracking speed and value?&lt;/li&gt;
      &lt;li&gt;improve delivery assurance by &lt;a href=&quot;https://www.gov.uk/service-manual/agile-delivery/measuring-reporting-progress&quot;&gt;measuring the progress on your plan&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;customer - manage stakeholder expectations and improve product decision making by factoring in
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://www.investopedia.com/terms/o/opportunitycost.asp&quot;&gt;opportunity cost&lt;/a&gt; - explore multiple options to understand the potential missed opportunities foregone by choosing one option over another.&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;http://blackswanfarming.com/cost-of-delay/&quot;&gt;cost of delay&lt;/a&gt; - ensure product decisions are made not just by understanding the value of something but also its urgency.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;money  - move from annual investment cycles to &lt;a href=&quot;https://bbrt.org/what-is-beyond-budgeting/&quot;&gt;beyond budgeting&lt;/a&gt;, switch from &lt;a href=&quot;http://www.informit.com/articles/article.aspx?p=169495&amp;amp;seqNum=12&quot;&gt;cost accounting to throughput accounting&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;organisation - changing the organisation structure and culture to optimize for delivering value to your customers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simply put, agile at scale means breaking up large projects into small pieces, so you can release to the market faster, run experiments, get customer feedback and deliver something that the market wants rather than what you think it may want. This focus on delivering quickly in small increments reduces risk. An all-or-nothing approach is required if you are launching a rocket into space, not when you are delivering an improvement to your existing website.&lt;/p&gt;

&lt;h2 id=&quot;cultural-transformation&quot;&gt;Cultural transformation&lt;/h2&gt;

&lt;p&gt;Culture is what people do organically; it is not what you say but what you do. It is about specific behaviours that help in building effective teams, e.g.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;having a shared purpose&lt;/li&gt;
  &lt;li&gt;establishing safety and openness - the ability to express vulnerability and to know each other’s background, goals, strengths and weaknesses&lt;/li&gt;
  &lt;li&gt;discouraging blame - say no to talented jerks; scientific experiments have shown jerks diminish a team’s performance by 30-40%&lt;/li&gt;
  &lt;li&gt;keeping the culture alive (culture capture) - seek and provide constructive feedback. What excites the team, what frustrates them, what is their biggest challenge?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Applying the &lt;a href=&quot;https://www.forbes.com/sites/forbestechcouncil/2019/07/16/agile-might-be-dead-but-agility-isnt/?mkt_tok=eyJpIjoiWVdFM1pqa3pNVFV5TURWbSIsInQiOiJBVXNEeWJza1wvVDhjRnVKVDZkcmU2S214OVJ2Mk5qV3VjcmdqV2prVGFQZERwSllZRE5PMlVVM1ErcnIrNkMrZlE4VG1lTWFkMFlVc1U2SDBzd3RITW9CdlFvY1JSNnZ5Nmt4NXJpejBtUVFPMm1qakxOVDhNdEU3TDJhb0FPQUUifQ%3D%3D#6c8e0d5b1245&quot;&gt;agile principles in practice&lt;/a&gt; without worrying too much about the methodology is critical to &lt;strong&gt;being agile rather than doing agile&lt;/strong&gt;. Too much emphasis on process stops people from thinking about what they are doing and whether they are doing it right. For example, the value of processes like &lt;strong&gt;daily standups&lt;/strong&gt; lies in their ability to create a safe space for team members to hold each other accountable against shared objectives rather than perform a daily morning ninja routine of &lt;em&gt;what I did yesterday and what I will be doing today&lt;/em&gt;.&lt;/p&gt;

&lt;h3 id=&quot;collaboration&quot;&gt;Collaboration&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;If you want to go fast, travel alone; if you want to go far, travel together.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Organisational structure is important for managing accountability and fixing responsibility within teams, but obsession with tightly defined structures and roles ends up creating barriers amongst your teams. The goals across teams sometimes vary vastly and may even be contrary to each other. This can result in &lt;strong&gt;organisational silos&lt;/strong&gt;, e.g. development teams want to push new features and functionality to customers quickly, whereas operations want the systems to be resilient and can be averse to rapid change. Balancing velocity against risk is a fine art, and often safety trumps new thinking. This siloed thinking has a direct effect on the kind of systems built within your organisation.&lt;/p&gt;

&lt;p&gt;Opening a dialogue between disparate teams is as much a cultural change as it is an operational one. Having small, self-empowered (often co-located) &lt;strong&gt;cross-functional teams&lt;/strong&gt; with experts from across the business drives collective responsibility and ownership of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;issues/problems that the business faces&lt;/li&gt;
  &lt;li&gt;results and successes&lt;/li&gt;
  &lt;li&gt;as well as failures&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;autonomy-and-alignment&quot;&gt;Autonomy and alignment&lt;/h3&gt;

&lt;p&gt;Everyone wants autonomy, everyone wants the freedom to do what they like, but there are always boundaries to everyone’s autonomy. In a team, my autonomy might start to step on somebody else’s, so we need to agree on shared expectations.&lt;/p&gt;

&lt;p&gt;Don’t mandate how a team should work. Don’t say, “You have to do stand-ups. You have to do one-week iterations.” If you want your teams to be agile and adaptive, &lt;strong&gt;focus on the outcomes rather than the output&lt;/strong&gt;: we want to make sure that all code has automated tests, we want to make sure that you do frequent releases, and we expect that, but how you do it is up to you and your team. That is what the boundaries of autonomy mean.&lt;/p&gt;

&lt;p&gt;Autonomy may also result in reactive solutions to problems. This is where alignment becomes important: in a larger organization moving from start-up to scale-up, everyone is normally aligned to working on the most urgent thing. In a very early-stage startup, that most urgent thing constantly changes, and as you get bigger you need to create levels of alignment.&lt;/p&gt;

&lt;h3 id=&quot;embracing-failure&quot;&gt;Embracing failure&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Learning is the result of every experiment that you run. Failure is learning.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The impact of a failure isn’t necessarily bad, provided you learn from it without paying a heavy price. You can think of failure as a means to blame someone or an opportunity to expose underlying problems so that they can be fixed. If you can get small incremental changes out to customers frequently, you allow yourselves the ability to fail fast and learn cheaply.&lt;/p&gt;

&lt;p&gt;Framing failure as learning means articulating the reason why something didn’t work, e.g. “We couldn’t drive customer sales up because our assumptions about customers’ online usage patterns were wrong.”&lt;/p&gt;

&lt;h2 id=&quot;measuring-success&quot;&gt;Measuring success&lt;/h2&gt;

&lt;p&gt;There are various aspects to measuring success: cost, and modernisation that leads to improvements in quality and performance.&lt;/p&gt;

&lt;p&gt;Being able to identify successes and failures is essential to gathering feedback when developing new processes. Many businesses find it very difficult to visualise information or value flow across the organisation (value stream mapping). Without this information, the only lever left to business managers is cost. The most important question then becomes: &lt;em&gt;Where is my spend going?&lt;/em&gt; Whether that spend actually delivers customer value becomes a secondary concern.&lt;/p&gt;

&lt;p&gt;A focus on &lt;strong&gt;cost utilization&lt;/strong&gt; means that IT budgeting decisions are based on cost estimation of your portfolio. During the planning phase, the budgeting exercise allocates a fixed amount of money (in batches) to one or more programmes consisting of smaller projects. With the budget allocated and the cost fixed, we then expect teams to deliver all the agreed features that we think customers want. However, if customer feedback loops have been established via continuous and incremental delivery, they may tell you a very different story about what customers actually want.&lt;/p&gt;

&lt;p&gt;Obsession with cost utilization (which is essentially a measure of how busy people are) leads to misplaced priorities and bad decisions. Time spent becomes more important than the result. Anything that doesn’t generate revenue becomes a cost centre, and IT teams end up being treated as cost functions rather than centres of value addition.&lt;/p&gt;

&lt;p&gt;Doug Hubbard in &lt;a href=&quot;https://www.cio.com/article/2438748/it-organization/the-it-measurement-inversion.html&quot;&gt;IT measurement inversion&lt;/a&gt; suggests that cost has a very limited effect on return on investment, whereas the utilization of the system - i.e. whether the system will actually be rolled out and whether anyone will use it at all - is the most important factor in ROI analysis.&lt;/p&gt;

&lt;p&gt;One way of measuring the value of features is &lt;a href=&quot;http://blackswanfarming.com/experience-report-maersk-line&quot;&gt;cost of delay&lt;/a&gt; - how much is it costing you, per unit time, to not deliver a feature? The &lt;a href=&quot;https://www.cio.com/article/2438921/it-organization/everything-is-measurable.html&quot;&gt;Everything is measurable&lt;/a&gt; article explains how to measure intangibles, or &lt;a href=&quot;http://www.hubbardresearch.com/wp-content/uploads/2011/08/TAC-How-To-Measure-Anything.pdf&quot;&gt;measure anything&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;creating-feedback-loops&quot;&gt;Creating feedback loops&lt;/h3&gt;

&lt;p&gt;How do we know, faster, that what we are building is the right thing? Are the assumptions we are making correct? How do we quickly test those assumptions?&lt;/p&gt;

&lt;p&gt;A lean business can cheaply find out whether people are going to use new features in a system by running small experiments.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Tackling the most important problems that add value, not optimizing for the case where we think we are right (for established products with a well proven business model, two-thirds of the features we want to build have zero or negative value; in new product development, 90% of them have no value)&lt;/li&gt;
  &lt;li&gt;By creating feedback loops to validate assumptions&lt;/li&gt;
  &lt;li&gt;By delivering in small increments&lt;/li&gt;
  &lt;li&gt;Enabling an experimental approach to product development using the scientific method&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One way of achieving this is &lt;strong&gt;hypothesis driven delivery&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;We believe that
    [building this feature]
    [for these people]
    will achieve [this outcome]
We will know we are successful when we see
    [this signal from the market]&lt;/p&gt;

&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;

&lt;p&gt;The path to digital transformation involves dismantling organisational silos by fostering collaboration and establishing cross-functional teams with shared goals. These teams become conduits for rapid experiments and crucial feedback loops, measuring the value derived from these initiatives. Shifting away from traditional budgeting and throughput accounting to reorganize around value streams, flow, and a customer-centric approach is essential. This strategic realignment not only deepens the understanding of customer values but also opens opportunities to rethink outdated processes and create innovative customer experiences.&lt;/p&gt;
</description>
	        <pubDate>Tue, 03 Mar 2020 07:31:00 +0000</pubDate>
	        <link>https://hemantkumar.net/transformation-what-the-heck.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/transformation-what-the-heck.html</guid>
	        
	        <category>technology,</category>
	        
	        <category>transformation,</category>
	        
	        <category>agile,</category>
	        
	        <category>digital,</category>
	        
	        <category>business</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	
	      <item>
	        <title>Six rules for tech leadership</title>
	        <description>&lt;p&gt;Over the years I have led some complex technical pieces of work with teams of varied sizes and locations. This has involved optimizing the working practices and processes of teams, and advising management on building engineering capabilities, agile practices, technical governance and continuous delivery. As a technical leader, I have faced numerous challenges in aligning team objectives, resolving conflicts, highlighting the importance of cross-functional requirements to the business and managing technical risk. Learning from my mistakes, I have devised a set of rules to help me navigate these challenges. While the six rules I share here are far from exhaustive, I hope they provide value to technology leaders or, at the very least, resonate with their experiences.&lt;/p&gt;

&lt;h2 id=&quot;know-your-goal&quot;&gt;Know your goal&lt;/h2&gt;

&lt;p&gt;“&lt;em&gt;If you don’t know where you want to go, then it doesn’t matter which path you take&lt;/em&gt;” - Lewis Carroll&lt;/p&gt;

&lt;p&gt;What’s most important to you? This is a tough question to answer, because it requires you to think long and deep. Knowing yourself and knowing exactly what you want to achieve is a continuous process, therefore your answer may change over time. This is what I have found drives me:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Control and flexibility over the kind of work I do and when it is done&lt;/li&gt;
  &lt;li&gt;Have a positive influence and a wider impact on a number of people (colleagues or customers) based on what I do&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having a clear purpose and repeating it often gives you the strength and the drive to lead by example and take others with you on the journey, even in the face of resistance. It is only when you have a clear understanding of what you want that you are going to be able to articulate it to others. Enabling others can only begin once you know what you want to achieve. People are going to take you seriously only if you are clear in your thinking. Working a job, however, requires a focus on immediate priorities, whether they be organisational or team-oriented goals. The key is discovering the optimal balance where your personal objectives align harmoniously with those of the business.&lt;/p&gt;

&lt;h2 id=&quot;know-your-team&quot;&gt;Know your team&lt;/h2&gt;

&lt;p&gt;Leadership is about enabling others to achieve their true potential by letting them be the best version of themselves. Getting to know your team and the wider business by listening and engaging with them is crucial to be able to connect, build relationships and understand other people’s viewpoints.&lt;/p&gt;

&lt;p&gt;“&lt;em&gt;If there is one secret of success, it lies in the ability to get the other person’s point of view and see things from that person’s angle as well as your own&lt;/em&gt;” - Henry Ford&lt;/p&gt;

&lt;p&gt;Empower others by aligning their individual goals with the team or organizational goals. Team goals often vary vastly and may even be contrary to each other, resulting in &lt;strong&gt;silos&lt;/strong&gt;; e.g. development teams want to push new features and functionality to customers quickly, whereas operations want the systems to be resilient and can be averse to rapid change. Collaboration, not only within teams but across teams, is required to figure out how disparate goals can be met while keeping the overall business goals in mind. Some strategies that have worked for me in enabling teams to be self-empowered, capable of building, running and supporting their apps, are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Breaking organizational silos e.g. development &amp;amp; operations - changing the org set up by introducing &lt;strong&gt;cross functional teams&lt;/strong&gt;&lt;/li&gt;
  &lt;li&gt;Coaching &amp;amp; mentoring dev teams to be focussed on production readiness and support, right from project kick off. Delivering great apps is fine but in order for them to be effective you need the appropriate level of operational excellence, security, reliability, performance efficiency, cost optimization and sustainability in place for them to be successful.&lt;/li&gt;
  &lt;li&gt;Adopting similar tool set across build, delivery &amp;amp; ops teams that support collaboration e.g. adopting approaches like &lt;a href=&quot;https://www.gitops.tech/&quot;&gt;GitOps&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Pair programming within teams to prevent single points of failure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;get-the-business-on-board&quot;&gt;Get the business on-board&lt;/h2&gt;

&lt;p&gt;Technical leadership is more about meeting, influencing, planning and shaping the technical direction than about hands-on work. Your hands-on experience may not be up to date, however you learn a lot about new tools and practices by seeing what the team is doing. Execs and management don’t expect “architect” types to be hands-on. They &lt;em&gt;do&lt;/em&gt; expect you to be able to talk authoritatively about technical topics at a high level. For example,&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;What has proven to be a useful approach to cloud migrations - IaaS, PaaS, FaaS etc.?&lt;/li&gt;
  &lt;li&gt;Impact of moving from capex to opex on budgeting (e.g. moving from owning a car to renting a car)&lt;/li&gt;
  &lt;li&gt;Why invest in adopting continuous integration and continuous delivery?&lt;/li&gt;
  &lt;li&gt;Sharing pitfalls of serverless&lt;/li&gt;
  &lt;li&gt;Explaining where container orchestration has gone wrong on projects&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the kind of stuff that you pick up through having lots of conversations with different people. The key qualification is that your recommendations should be aligned with the experiences and wisdom of the devs who actually do the work. The goal is that if the business believes what the “architect” says, it helps the team to deliver rather than creating pain for them.&lt;/p&gt;

&lt;p&gt;Before making any technology choices, think critically and innovatively - seeing situations in new ways, being able to deal with uncertainty and ambiguity. It is critical to get decisions right early in a project. Ensure the business goals are clear and aligned with the technical direction you want to pursue. While the execs may find your technical proposal amusing, what they are really interested in is:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Cost&lt;/li&gt;
  &lt;li&gt;Timeline&lt;/li&gt;
  &lt;li&gt;Resources required&lt;/li&gt;
  &lt;li&gt;Customer impact&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;visualise-your-tech-strategy&quot;&gt;Visualise your tech strategy&lt;/h2&gt;

&lt;p&gt;Having a visual representation of your technology estate and capabilities allows you to&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;share technical vision and direction with the entire organisation to get alignment &amp;amp; ensure everybody is pulling in the same direction&lt;/li&gt;
  &lt;li&gt;identify gaps that could be filled with training&lt;/li&gt;
  &lt;li&gt;achieve standardisation and assess new technologies for innovation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Lightweight documentation that captures key technology decisions and choices helps provide context, not only to your team but also to future teams who may want to evolve your technology landscape. &lt;a href=&quot;https://github.com/joelparkerhenderson/architecture_decision_record&quot;&gt;Architecture decision records&lt;/a&gt; are a useful technique that I have used to capture key architecture decisions on projects, for visibility within the team and for external oversight.&lt;/p&gt;

&lt;p&gt;Most of our time and energy is spent translating complex business rules into code, rather than thinking about the rules themselves. When you’re thinking in terms of a programming language, code constrains your ability to think, it can make you miss the forest for the trees. Therefore it is crucial to think before writing any code and to document your thinking. This could be your business architecture, system architecture or the associated data flows. Whilst documentation is important, it is also important for it to be easily consumable. Using a standard approach like &lt;a href=&quot;https://c4model.com/&quot;&gt;C4 model&lt;/a&gt; across all your artifacts for visualising your architecture, goes a long way in communicating and thinking above the code.&lt;/p&gt;

&lt;h2 id=&quot;simplify-not-over-engineer&quot;&gt;Simplify not over engineer&lt;/h2&gt;

&lt;p&gt;Simplicity in software can be elusive because we often do not make the distinction between essential complexity and accidental complexity. Automation is a noble pursuit, however knowing what to automate and when, determines whether you are simplifying or over engineering. Before you go on an &lt;em&gt;“automate everything”&lt;/em&gt; spree it is worth thinking about&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;What percentage of your software is really complicated? How much of it actually impacts the business? Does the complex part absolutely need to be automated? Can the business deal with it manually or accept it as a risk?&lt;/li&gt;
  &lt;li&gt;Are you designing a system that can never possibly fail? Trying to handle &lt;em&gt;“every possible thing”&lt;/em&gt; that can go wrong in your system leads to accidental complexity. Perform a cost benefit analysis of your design choices and think if all of them are worth investing time and money.&lt;/li&gt;
  &lt;li&gt;Are you doing premature generalisation? When do you prioritise extensibility and generalisation in your software, given that each has simplicity and cost trade-offs?&lt;/li&gt;
  &lt;li&gt;Do you adopt bleeding edge tech or something that works? &lt;em&gt;Boring is good&lt;/em&gt;. Are you experimenting with new tech at the core of your system or at the periphery?&lt;/li&gt;
  &lt;li&gt;Are you optimising for the things that can be easily seen and measured (e.g. code repetition) while ignoring software complexity that can be hard to measure?&lt;/li&gt;
  &lt;li&gt;Are you a victim of tech overuse? Technology can solve a lot of problems, but overusing it can cause more problems than it solves. &lt;em&gt;With a hammer everything looks like a nail&lt;/em&gt;. You can let humans deal with the edge cases, especially early on in a project.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;adopt-tools-and-tech-fit-for-purpose&quot;&gt;Adopt tools and tech fit for purpose&lt;/h2&gt;

&lt;p&gt;“&lt;em&gt;Technology can bring benefits if and only if it diminishes a limitation&lt;/em&gt;” - Eliyahu Goldratt&lt;/p&gt;

&lt;p&gt;Before adopting any new technology, it is important to do an objective evaluation of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;What is the &lt;strong&gt;power&lt;/strong&gt; of the technology?&lt;/li&gt;
  &lt;li&gt;What &lt;strong&gt;limitation&lt;/strong&gt; does the technology diminish? How can you prove that the limitation is holding you back? How would you know it was diminishing? What could you measure?&lt;/li&gt;
  &lt;li&gt;What &lt;strong&gt;existing rules&lt;/strong&gt; enable us to manage this limitation? Do we need to be wedded to those rules? Who owns the rules? Who might be threatened by dismantling them? How can we make it safe to change? How to create a graceful exit?&lt;/li&gt;
  &lt;li&gt;What &lt;strong&gt;new rules&lt;/strong&gt; will we need? How can we safely exploit this new technology? How do we introduce and institutionalise these new rules across the business?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a principle, choose the best tools and implementations available over standardising on any one language or platform and resentfully accepting its inherent limitations. Select a tool/programming language keeping business goals in mind, to optimise for the right combination of:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Cost of the service
    &lt;ul&gt;
      &lt;li&gt;Runtime cost - current vs future, cost variance with scale and additional capabilities&lt;/li&gt;
      &lt;li&gt;Cost of operation and support&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Fit for purpose
    &lt;ul&gt;
      &lt;li&gt;Minimal lock-in (choice)&lt;/li&gt;
      &lt;li&gt;Ease and speed of development (agility)&lt;/li&gt;
      &lt;li&gt;Tech capability of the business (support)&lt;/li&gt;
      &lt;li&gt;Availability of existing libraries/integrations (flexibility)&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Performance &amp;amp; security&lt;/li&gt;
  &lt;li&gt;Openness - you can’t do it alone: learn from shared practices and the best ideas in the open source community, avoid vendor lock-in, contribute &amp;amp; attract talent. E.g. in your area it may be easier to hire engineers for deploying and securing Linux-based systems than, say, Windows.&lt;/li&gt;
&lt;/ul&gt;
</description>
	        <pubDate>Sun, 01 Mar 2020 07:31:00 +0000</pubDate>
	        <link>https://hemantkumar.net/six-rules-for-tech-leadership.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/six-rules-for-tech-leadership.html</guid>
	        
	        <category>technology,</category>
	        
	        <category>leadership,</category>
	        
	        <category>agile</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	
	      <item>
	        <title>Securing REST APIs</title>
	        <description>&lt;p&gt;RESTful services are stateless, therefore each request needs to be authenticated individually. State in REST terminology means the state of the resource that the API manages, not session state. There may be good reasons to build a stateful API, but that goes against REST principles. It is important to realize that managing sessions is complex and difficult to do securely, as it is prone to replay and impersonation attacks. So what options do we have for securing RESTful services? This post looks at Basic Authentication, MAC (Message Authentication Code), digital signatures and OAuth.&lt;/p&gt;

&lt;h2 id=&quot;security-basics&quot;&gt;Security basics&lt;/h2&gt;

&lt;p&gt;When looking at any security aspect, often a lot of terms get thrown around which can be overwhelming and also confusing. Therefore, before looking at the nitty-gritty of securing RESTful APIs it is worth getting some security jargon out of the way.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Authentication&lt;/strong&gt; - Establish the sender’s identity so that receiver knows &lt;strong&gt;who&lt;/strong&gt; they are talking to; e.g. client (user, device, or another service/API) sends credentials (either in plaintext or encrypted) to the server to identify itself.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Authorization&lt;/strong&gt; - Verification of &lt;strong&gt;what&lt;/strong&gt; the sender has access to. Happens after authentication, to determine whether the sender has access to a certain resource.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Integrity&lt;/strong&gt; - Ensuring message contents of a request haven’t changed in transit.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Non-repudiation&lt;/strong&gt; - Ensuring that the sender cannot deny having sent the message; e.g. your bank cannot deny having sent you a bank statement if it has a valid stamp of the bank on it and this could be proved to a third party.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Confidentiality&lt;/strong&gt; - No one can see the message contents in transit from sender to receiver.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In order to achieve secure communication, be it client to service or service to service, there are fundamentally two problems to solve:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Ensure that the message can only be read by the intended recipient.&lt;/li&gt;
  &lt;li&gt;Ensure that the message is from a known sender and it has not been modified in transit.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first problem can be solved using encryption. Encryption is used to achieve &lt;strong&gt;confidentiality&lt;/strong&gt; and means only those with the corresponding secret key can read the message. However, encryption alone does not guarantee &lt;strong&gt;integrity&lt;/strong&gt;. The second problem can be solved using cryptography, which often uses a combination of encryption and &lt;a href=&quot;https://en.wikipedia.org/wiki/Cryptographic_hash_function&quot;&gt;hashing&lt;/a&gt; to achieve &lt;strong&gt;authenticity&lt;/strong&gt; and &lt;strong&gt;integrity&lt;/strong&gt; in addition to &lt;strong&gt;confidentiality&lt;/strong&gt;.&lt;/p&gt;

&lt;h3 id=&quot;symmetric-private-key-cryptography&quot;&gt;Symmetric (Private key cryptography)&lt;/h3&gt;

&lt;p&gt;You share the same secret key between sender and receiver to encrypt and decrypt the message. You can trust the &lt;strong&gt;authenticity&lt;/strong&gt; (from a trusted known sender) of the message, its &lt;strong&gt;confidentiality&lt;/strong&gt; and its &lt;strong&gt;integrity&lt;/strong&gt;, but &lt;strong&gt;non-repudiation&lt;/strong&gt; cannot be guaranteed. Because the secret key may be shared amongst several participants, there is no single identity attached to the key; the receiver knows the message came from a source in possession of the key, but not which one. The risk of the key falling into the wrong hands is also higher because it needs to be shared securely amongst the participants, often over the internet. Other options include a face-to-face meeting or a trusted courier, but these can be impractical. The higher the number of participants, the greater the exposure of the key.&lt;/p&gt;

&lt;h3 id=&quot;asymmetric-public-key-cryptography&quot;&gt;Asymmetric (Public key cryptography)&lt;/h3&gt;

&lt;p&gt;Different keys are used by the sender and receiver to encrypt and decrypt the message, which gets us past the shared-key issue of symmetric cryptography. To solve the first secure communication problem mentioned above, the sender uses the &lt;strong&gt;receiver’s public key&lt;/strong&gt; to write (encrypt) the message and the receiver uses &lt;strong&gt;their private key&lt;/strong&gt; to read (decrypt) it. This establishes the &lt;strong&gt;confidentiality&lt;/strong&gt; of the message.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../assets/end-to-end-encryption.png&quot; alt=&quot;end-to-end-encryption.png&quot; title=&quot;End to End Encryption&quot; /&gt;&lt;/p&gt;

&lt;p&gt;To solve the second secure communication problem mentioned above, you use a &lt;a href=&quot;https://en.wikipedia.org/wiki/Digital_signature&quot;&gt;digital signature&lt;/a&gt;. Digitally signing data is the equivalent of a physical signature that can only be produced by the signing authority and verified by anyone who knows the signing authority’s signature. Signing uses public key cryptography: the sender uses &lt;strong&gt;their private key&lt;/strong&gt; to write the message’s signature, and the receiver uses the &lt;strong&gt;sender’s public key&lt;/strong&gt; to check that it really is from the sender. It is a means of attaching identity to a key. It is discussed in further detail in the section on &lt;a href=&quot;#message-signing-using-digital-signature&quot;&gt;message signing using digital signature&lt;/a&gt; below.&lt;/p&gt;

&lt;h2 id=&quot;approaches-to-securing-restful-apis&quot;&gt;Approaches to securing RESTful APIs&lt;/h2&gt;

&lt;p&gt;Having covered some security fundamentals, we can now look at the different techniques to secure RESTful APIs. The fundamentals discussed above form the basis of the techniques we are going to look at.&lt;/p&gt;

&lt;h3 id=&quot;basic-authentication&quot;&gt;Basic Authentication&lt;/h3&gt;

&lt;p&gt;The simplest way to authenticate senders is HTTP basic authentication. The sender’s credentials (username and password) are base64-encoded and sent across the network unencrypted in an HTTP header.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;GET / HTTP/1.1
Host: api.example.com
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
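
&lt;p&gt;The credentials above are merely base64-encoded, not encrypted - anyone who can read the header can recover them. A minimal Python sketch (the username and password are illustrative):&lt;/p&gt;

```python
import base64

# Build the Basic auth header value from "username:password"
credentials = "username:password"
header_value = "Basic " + base64.b64encode(credentials.encode()).decode()
print(header_value)  # Basic dXNlcm5hbWU6cGFzc3dvcmQ=

# Anyone who intercepts the header can trivially decode it back
encoded = header_value.split(" ")[1]
print(base64.b64decode(encoded).decode())  # username:password
```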

&lt;p&gt;There are a few issues with HTTP Basic Authentication:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Credentials are sent over the wire and even though they are encoded, they are not encrypted and can easily be converted to plaintext.&lt;/li&gt;
  &lt;li&gt;Credentials are sent repeatedly, for each request, which widens the attack window.&lt;/li&gt;
  &lt;li&gt;The password may be stored permanently in the browser, if the user requests it. The browser caches the password for at least the length of the window/process, and it can be silently used to make requests to the server, e.g. in CSRF attacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using HTTPS solves the first issue. Even then, the credentials are only protected until SSL/TLS termination: any internal network routing, logging, etc. can still expose the plaintext credentials. In an enterprise, SSL/TLS termination often occurs well before the request reaches your API server. Does HTTPS protect the credentials in transit? Yes. Is that enough? Usually, no. Basic Authentication with HTTPS gives you &lt;strong&gt;confidentiality&lt;/strong&gt; only for the window during which SSL/TLS is on.&lt;/p&gt;

&lt;h3 id=&quot;mac-message--authentication-code&quot;&gt;MAC (Message  Authentication Code)&lt;/h3&gt;

&lt;p&gt;Basic Auth over HTTP exposes credentials in transit and does not guarantee the integrity of the message. A MAC, on the other hand, sends a hashed version of the credentials and the message using a secret key. It can be used to &lt;strong&gt;authenticate&lt;/strong&gt; a message and verify its &lt;strong&gt;integrity&lt;/strong&gt;. MAC is symmetric, i.e. the same key is used to produce the MAC value for a message and to verify it.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../assets/MAC.jpg&quot; alt=&quot;MAC.jpg&quot; title=&quot;MAC&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For accessing a protected resource&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;/users/username/account
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;HMAC (hash-based message authentication code), an implementation of MAC, involves calculating an HMAC value:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;value = base64encode(hmac(&quot;sha256&quot;, &quot;secret&quot;, &quot;GET+/users/username/account&quot;))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The HMAC value then is sent over as an HTTP header:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;GET /users/username/account HTTP/1.1
Host: api.example.com
Authentication: hmac username:[value]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
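
&lt;p&gt;The computation above maps directly onto Python’s standard library; a sketch (the secret and the signed string are illustrative):&lt;/p&gt;

```python
import base64
import hashlib
import hmac

secret = b"secret"                        # shared key known to sender and receiver
message = b"GET+/users/username/account"  # the request being authenticated

# value = base64encode(hmac("sha256", secret, message))
digest = hmac.new(secret, message, hashlib.sha256).digest()
value = base64.b64encode(digest).decode()

# The receiver recomputes the HMAC with its copy of the key and
# compares in constant time to avoid timing attacks
expected = hmac.new(secret, message, hashlib.sha256).digest()
assert hmac.compare_digest(digest, expected)
```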

&lt;h4 id=&quot;prevent-hash-reuse&quot;&gt;Prevent hash reuse&lt;/h4&gt;

&lt;p&gt;Hashing the same message repeatedly results in the same HMAC value. If the hash falls into the wrong hands, it can be used to make the same request at a later time. Therefore it is important to introduce entropy into the hash generation to prevent a &lt;a href=&quot;https://en.wikipedia.org/wiki/Replay_attack&quot;&gt;replay attack&lt;/a&gt;. This is done by adding a &lt;strong&gt;timestamp&lt;/strong&gt; and a &lt;strong&gt;nonce&lt;/strong&gt; to the hash computation. The nonce is a number used only once, regenerated on each subsequent request even if the request is for the same resource.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;value = base64encode(hmac(&quot;sha256&quot;, &quot;secret&quot;, &quot;GET+/users/username/account+28jul201712:59:24+123456&quot;))
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The additional data - the timestamp and nonce - is sent to the receiver so that it can reconstruct the hash.&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;GET /users/username/account HTTP/1.1
Host: example.org
Authentication: hmac username:123456:[value]
Date: 28 jul 2017 12:59:24
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The receiver reconstructs the hash with the nonce value and, if it doesn’t match the received hash value, discards the message - possibly because the hash has been used before. This ensures each request is valid once and only once.&lt;/p&gt;

&lt;p&gt;If the timestamp is not within a certain range (say 10 minutes) of the receiver’s time, the receiver can discard the message as it is probably a replay of an earlier message. It is worth noting that time-limited authentication can be problematic if the sender’s and receiver’s clocks are not synchronized.&lt;/p&gt;
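
&lt;p&gt;A receiver-side sketch of both checks. The 10-minute window and the in-memory nonce store are assumptions for illustration; in production the nonce store would be shared across instances and would expire old entries:&lt;/p&gt;

```python
import time

SEEN_NONCES = set()     # illustrative in-memory store; shared/expiring in production
MAX_SKEW_SECONDS = 600  # accept timestamps within a 10-minute window

def accept_request(nonce, timestamp):
    """Return True only for fresh requests carrying an unseen nonce."""
    if abs(time.time() - timestamp) > MAX_SKEW_SECONDS:
        return False    # outside the window: likely a replay of an old message
    if nonce in SEEN_NONCES:
        return False    # nonce already used: definitely a replay
    SEEN_NONCES.add(nonce)
    return True

print(accept_request("123456", time.time()))  # True: fresh request
print(accept_request("123456", time.time()))  # False: replayed nonce
```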

&lt;h3 id=&quot;message-signing-using-digital-signature&quot;&gt;Message signing using Digital Signature&lt;/h3&gt;

&lt;p&gt;Digital signatures use asymmetric public key cryptography to establish &lt;strong&gt;authenticity&lt;/strong&gt; (message sent by a known sender), &lt;strong&gt;integrity&lt;/strong&gt; (message wasn’t tampered with) and &lt;strong&gt;non-repudiation&lt;/strong&gt; (message sent by the sender cannot be denied).&lt;/p&gt;

&lt;p&gt;When signing, the sender uses their private key (also called a secret key, abbreviated to sk) to write the message’s signature, and the receiver uses the sender’s public key to check that it really is from the sender. Again, to ensure a message cannot be duplicated and reused in a &lt;a href=&quot;https://en.wikipedia.org/wiki/Replay_attack&quot;&gt;replay attack&lt;/a&gt;, a unique identifier for the message - such as a &lt;strong&gt;timestamp&lt;/strong&gt; and &lt;strong&gt;nonce&lt;/strong&gt; - is also used while generating the signature.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../assets/message-signing.jpg&quot; alt=&quot;message-signing.jpg&quot; title=&quot;Message Signing&quot; /&gt;&lt;/p&gt;

&lt;p&gt;$Sign(Message, sk) = Signature$          $Verify(Message, Signature, pk)$&lt;/p&gt;

&lt;p&gt;When you have verified that the signature for a given message is valid, you can be extremely confident that the only way someone could have produced it is if they knew the private key associated with the public key that you used to verify the message.&lt;/p&gt;

&lt;p&gt;A service acting as a &lt;strong&gt;receiver&lt;/strong&gt; has a list of public keys for all the other services that want to send messages to it. The same service, when acting as the &lt;strong&gt;sender&lt;/strong&gt;, provides its public key to the other services that it wants to send messages to.&lt;/p&gt;
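
&lt;p&gt;A sketch of the sign/verify flow, using Ed25519 from the third-party &lt;code&gt;cryptography&lt;/code&gt; package as one possible implementation (the signed string, including its timestamp and nonce, is illustrative):&lt;/p&gt;

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender generates a key pair and distributes only the public key
sk = Ed25519PrivateKey.generate()
pk = sk.public_key()

# Include a timestamp and nonce in the signed message to prevent replays
message = b"GET+/users/username/account+28jul201712:59:24+123456"

signature = sk.sign(message)   # Sign(Message, sk) = Signature

pk.verify(signature, message)  # Verify(Message, Signature, pk): passes silently
try:
    pk.verify(signature, message + b"tampered")
except InvalidSignature:
    print("tampered message rejected")
```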

&lt;h3 id=&quot;oauth-2&quot;&gt;OAuth 2&lt;/h3&gt;

&lt;p&gt;OAuth 2 is an open protocol that allows secure authorization from web, mobile and desktop applications in a standard way. It is for &lt;strong&gt;delegation of access&lt;/strong&gt;; e.g. you hire a business assistant and delegate her to withdraw money from the business account to fulfil business requests on your behalf. You (the user) have delegated the authority to your assistant (the client), however the authorization policy (the identity and access control checks performed) that allows the assistant to withdraw money is still enforced by your bank account (the resource API), not you.&lt;/p&gt;

&lt;p&gt;In the techniques mentioned above, it is often your resource API that is responsible for establishing the identity of clients and defining access controls. OAuth enables &lt;a href=&quot;https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/federation&quot;&gt;federated security&lt;/a&gt;, allowing a clear separation between your applications and the associated authentication and authorization mechanisms. Other identity protocols like &lt;a href=&quot;https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language&quot;&gt;SAML&lt;/a&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/WS-Federation&quot;&gt;WS-Fed&lt;/a&gt; also provide federated security, but they are older and relatively more complex than OAuth. The figure below depicts the &lt;a href=&quot;https://tools.ietf.org/html/rfc6749&quot;&gt;OAuth 2 protocol flow&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;../assets/oauth-protocol-flow.png&quot; alt=&quot;oauth-protocol-flow.png&quot; title=&quot;OAuth Protocol Flow&quot; /&gt;&lt;/p&gt;

&lt;p&gt;The separation between application (client) and &lt;strong&gt;authorization server&lt;/strong&gt; means you can either&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;build out the authorization server as a standalone component which is responsible for obtaining authorization from users and issuing tokens to clients&lt;/li&gt;
  &lt;li&gt;outsource the authorization server as a service that the user trusts, such as a social identity provider like google or facebook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows you to focus on building and scaling your APIs (&lt;strong&gt;resource server&lt;/strong&gt;) independent of authorization.&lt;/p&gt;

&lt;p&gt;OAuth 2 has multiple &lt;a href=&quot;https://www.oauth.com/oauth2-servers/differences-between-oauth-1-2/user-experience-alternative-token-issuance-options/&quot;&gt;flows&lt;/a&gt; called &lt;em&gt;grant types&lt;/em&gt; for obtaining an access token. &lt;a href=&quot;https://developer.okta.com/authentication-guide/auth-overview/#choosing-an-oauth-20-flow&quot;&gt;Deciding which grants to implement&lt;/a&gt; depends on the type of client applications you support and the experience you want for your users. In essence, each flow involves obtaining authorization to get an access token and then using the access token to access protected resources. An access token is often a &lt;a href=&quot;https://tools.ietf.org/html/rfc7519&quot;&gt;JSON Web Token (JWT)&lt;/a&gt; encoded in base64url format that contains a header, payload, and signature. A &lt;strong&gt;resource server&lt;/strong&gt; (API) can &lt;a href=&quot;https://developer.okta.com/authentication-guide/tokens/validating-access-tokens#what-to-check-when-validating-an-access-token&quot;&gt;validate the access token&lt;/a&gt; and can authorize the client (application) to access particular resources based on the scopes and claims in the access token.&lt;/p&gt;
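&lt;p&gt;As a toy illustration of the &lt;em&gt;header.payload.signature&lt;/em&gt; structure, here is a minimal HMAC-SHA256 (HS256) signing and verification sketch using only the Python standard library. The claim names and the shared secret are made up for the example, and real token validation should also check registered claims such as expiry, issuer and audience:&lt;/p&gt;

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # base64url encoding without padding, as used by JWTs (RFC 7515)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(claims, secret):
    # A JWT is three base64url parts joined by dots: header.payload.signature
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_hs256(token, secret):
    # Recompute the signature over header.payload and compare in constant time.
    header, payload, signature = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)

token = sign_hs256({"sub": "alice", "scope": "read:profile"}, b"shared-secret")
print(verify_hs256(token, b"shared-secret"))   # True
print(verify_hs256(token, b"wrong-secret"))    # False
```

&lt;p&gt;In production you would use a vetted JWT library rather than hand-rolling this, but the sketch shows why a resource server can validate a token offline: it only needs the key material, not a call back to the authorization server.&lt;/p&gt;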

&lt;h4 id=&quot;openid-connect-provider&quot;&gt;OpenID Connect Provider&lt;/h4&gt;

&lt;p&gt;OAuth is for authorization, but applications also need to know the user’s identity. &lt;a href=&quot;http://openid.net/connect/&quot;&gt;OpenID Connect&lt;/a&gt; adds authentication to OAuth. In OAuth the client applications do not get any information about the user; only the resource APIs get access to the identity data via the access tokens. OpenID Connect defines a standard way of providing this identity data to the client applications by giving them an ID token. Very often an OpenID Connect provider also acts as an OAuth server, which means you can request ID tokens as well as access tokens. This allows client applications to learn who the user is and when they last authenticated, and to decide whether to re-authenticate or reject the user. For example, for high-value transactions the client may decide to re-authenticate the user even if they are already logged in.&lt;/p&gt;

&lt;p&gt;An OpenID Connect Provider is a REST-like identity layer on top of OAuth 2 that allows clients to verify the identity of the end-user, as well as to obtain basic profile information about the end-user. It provides a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/.well-known/openid-configuration&lt;/code&gt; service discovery endpoint for clients to get information about interacting with the OpenID Connect/OAuth server.&lt;/p&gt;
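&lt;p&gt;As a small sketch, a client derives the discovery URL by appending the well-known path to the issuer URL (the issuer shown is hypothetical); the JSON document served there advertises endpoints such as &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;authorization_endpoint&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;token_endpoint&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;jwks_uri&lt;/code&gt;:&lt;/p&gt;

```python
# Hypothetical issuer, for illustration only.
issuer = "https://auth.example.com"

# Per OpenID Connect Discovery, the configuration document lives at a
# well-known path under the issuer URL.
discovery_url = issuer.rstrip("/") + "/.well-known/openid-configuration"
print(discovery_url)  # https://auth.example.com/.well-known/openid-configuration
```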

&lt;h2 id=&quot;in-summary&quot;&gt;In summary&lt;/h2&gt;

&lt;p&gt;Before deciding on an API security approach, it is important to understand what you are going to secure and how sensitive the data being managed is. APIs handling things like personal data, medical health records or financial data will need a different security approach than an API handling, say, traffic updates. It is also worth defining the scope of your API security: securing network and server infrastructure against things like intrusion, eavesdropping via packet sniffing and physical breaches often lies outside the scope of API security. Opting for a particular approach may depend on the specific security requirements of your application, because each technique protects different security aspects. For example, basic authentication without HTTPS can provide authenticity but no integrity or confidentiality. MACs may be sufficient for internal (non public facing) APIs serving a few web applications. For highly sensitive data, digital signatures may be a necessity, but if you are looking for flexibility and performance at scale, OAuth 2 Bearer tokens may be the way to go.&lt;/p&gt;
</description>
	        <pubDate>Thu, 10 Aug 2017 19:31:00 +0000</pubDate>
	        <link>https://hemantkumar.net/securing-rest-apis.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/securing-rest-apis.html</guid>
	        
	        <category>restapi</category>

	        <category>security</category>

	        <category>cryptography</category>

	        <category>authentication</category>

	        <category>HMAC</category>

	        <category>OAuth</category>

	        <category>digital signatures</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	
	      <item>
	        <title>Making sense of Blockchain</title>
	        <description>&lt;p&gt;&lt;strong&gt;Trust&lt;/strong&gt; is fundamental to commerce. Any business transaction is based upon trust and requires secure way of transferring assets between transacting parties. Banks provide this trust by maintaining a true record of financial transactions. Government agencies provide evidence of land titles, vehicle registration records, health and education records etc by maintaining a transaction log. They provide trust by maintaining a central ledger for recording transactions that can be relied upon to verify each transaction. The onus of maintaining the transactions accurately and securely on the central ledger also lies with the authority owning it. This grants significant responsibility and control to the central authority or intermediary facilitating commerce between transacting parties. The intermediary essentially establishes the rules of commerce that every transacting party must adhere to. While the intermediary often operates effectively, it can occasionally become a single point of failure, as seen in the global financial crash of 2008 where banks were at the epicenter of the economic turmoil.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Can a centralised authority with utmost power be trusted to run and meddle with entire economic systems?&lt;/li&gt;
  &lt;li&gt;Could the monetary supply and monetary policy be set by a computer where it could not be corrupted by humans, thereby preventing government overreach?&lt;/li&gt;
  &lt;li&gt;Is the transaction ledger owned by the central authority tamper evident, preventing illegitimate records from being added or updated? Is it independently and transparently verifiable in case of a dispute?&lt;/li&gt;
  &lt;li&gt;Are the intermediaries efficient in fulfilling monetary transactions? Communication over the internet takes place at a mind-boggling rate, but systems with layers of middlemen can take days to clear and reconcile.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;what-is-blockchain&quot;&gt;What is blockchain?&lt;/h2&gt;

&lt;p&gt;Shortly after the 2008 global financial crisis, a &lt;a href=&quot;https://bitcoin.org/en/bitcoin-paper&quot;&gt;white paper&lt;/a&gt; by an unknown entity, Satoshi Nakamoto, emerged. The paper introduced a new peer-to-peer financial system where payments are based on cryptographic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party. The system would use a new digital cryptocurrency called Bitcoin. The technology invented to power this new system was called blockchain. Simply put, a blockchain is a continuously updated, decentralized record of who holds what.&lt;/p&gt;

&lt;h3 id=&quot;decentralized-trust-and-control&quot;&gt;Decentralized trust and control&lt;/h3&gt;

&lt;p&gt;Blockchain transfers control and decision-making from a centralized entity (individual, organization, or group thereof) to a distributed network. &lt;a href=&quot;https://aws.amazon.com/blockchain/decentralization-in-blockchain/&quot;&gt;Decentralized networks&lt;/a&gt; strive to reduce the level of trust that participants must place in one another, and deter their ability to exert authority or control over one another in ways that degrade the functionality of the network.
Blockchain attempts to reduce the cost of and increase the trust in business transactions by using an immutable distributed transaction ledger on peer-to-peer networks rather than a central ledger. Rather than a single authority like a bank being responsible for maintaining transactions, it is now a group of people running blockchain software who do this. They ensure that the information stored in the distributed ledger is immutable and verifiable by applying techniques like cryptography and hashing. The list of transactions, also known as a &lt;strong&gt;distributed ledger&lt;/strong&gt;, is available for everyone to see and verify. The distributed nature of the blockchain makes it tamper evident: if a block of transactions is tampered with, everyone gets to know about it, and as long as the bad actors are outnumbered by the good ones the change is rejected by the system. Trust is inherently built into the system.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://hemantkumar.net/assets/blockchain.jpg&quot; alt=&quot;Blockchain&quot; /&gt;&lt;/p&gt;

&lt;h3 id=&quot;underlying-technology&quot;&gt;Underlying technology&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Asymmetric cryptography using digital signatures&lt;/strong&gt;
A way to verify that a message was sent by the known sender: the only way someone could have produced the signature is if they knew the private key associated with the public key that you used to verify the message.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Peer-to-peer network&lt;/strong&gt;
A ledger containing the record of all the messages is copied to a group of computers rather than relying on a single authority to maintain the records on a central computer. This decentralization removes the need to trust a single authority, but with as many copies of the ledger as there are computers in the network, which version of the ledger is to be trusted? This is the problem addressed in the original Bitcoin paper. The solution offered is to trust whichever ledger has the most computational work put into it.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;Cryptographic hashing&lt;/strong&gt;
A way to generate a small, unique “fingerprint” for any data, allowing quick comparison of large data sets and a secure way to &lt;a href=&quot;https://www.miracl.com/press/the-essence-of-the-blockchain&quot;&gt;verify data has not been altered&lt;/a&gt;. Some computational work must be carried out to generate the fingerprint or hash in the desired format and update the decentralized ledger. This is known as &lt;em&gt;Proof of Work&lt;/em&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
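&lt;p&gt;The building blocks above can be sketched in a few lines of Python: each block stores the hash of the previous block, and a toy proof of work requires the block hash to start with a fixed prefix (real networks use a far harder target). The transactions are made up for the example:&lt;/p&gt;

```python
import hashlib
import json

DIFFICULTY = "00"  # toy proof-of-work target: the hash must start with this prefix

def block_hash(block):
    # Hash the block's canonical JSON form; any edit to the block changes this value.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def mine(transactions, prev_hash):
    # Proof of work: try nonces until the block hash meets the difficulty target.
    block = {"transactions": transactions, "prev_hash": prev_hash, "nonce": 0}
    while not block_hash(block).startswith(DIFFICULTY):
        block["nonce"] += 1
    return block

def valid_chain(chain):
    # Each block must meet the target and link to the hash of the previous block.
    for i in range(1, len(chain)):
        linked = chain[i]["prev_hash"] == block_hash(chain[i - 1])
        mined = block_hash(chain[i]).startswith(DIFFICULTY)
        if not (linked and mined):
            return False
    return True

genesis = mine(["coinbase pays alice 50"], "0" * 64)
chain = [genesis, mine(["alice pays bob 10"], block_hash(genesis))]
print(valid_chain(chain))  # True

chain[0]["transactions"] = ["coinbase pays mallory 50"]  # tamper with history
print(valid_chain(chain))  # False: block 1 no longer links to block 0's hash
```

&lt;p&gt;Rewriting an old block silently is impossible: the tampered block’s hash changes, the link from the next block breaks, and the attacker would have to redo the proof of work for every subsequent block faster than the honest network extends the chain.&lt;/p&gt;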

&lt;h4 id=&quot;how-do-we-ensure-the-rules-are-being-followed&quot;&gt;How do we ensure the rules are being followed?&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;The group of people who maintain the distributed ledger are called bookkeepers. Transactions often reach different bookkeepers in different orders, depending upon which bookkeepers are online. Bookkeepers need to agree on the order of transactions and on rules about money creation, the version of software to run and the transaction formats.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Periodically, bookkeepers are allowed to add money to their own accounts, thereby creating money out of thin air. However, this is only allowed according to very constrained rules, which include a slow, gradual rate of money creation until no more money can be created.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A math-based voting system determines what the majority thinks. Bitcoin requires bookkeepers to solve a very special math problem in order to vote. This is called &lt;em&gt;“Proof of Work”&lt;/em&gt;, described in the Bitcoin white paper as &lt;em&gt;“one vote per CPU”&lt;/em&gt; instead of one vote per person.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Voting happens roughly every 10 minutes to allow all bookkeepers to stay synchronized. Each new group of transactions that gets approved is called a block, and these blocks are chained together in what is called the &lt;strong&gt;blockchain&lt;/strong&gt;.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;where-can-blockchain-be-applied&quot;&gt;Where can blockchain be applied?&lt;/h2&gt;

&lt;p&gt;You can use the FITS model - Fraud, Intermediaries, Throughput, Stable data - to assess whether blockchain applications suit a particular environment:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Fraud - an environment with a history and likelihood of fraud in its transactions; this has made international payment providers early adopters of blockchain.&lt;/li&gt;
  &lt;li&gt;Intermediaries or middlemen - in areas where many intermediaries are involved who do not provide much value, applying blockchain can reduce transaction times from days to minutes by taking the middlemen out.&lt;/li&gt;
  &lt;li&gt;Throughput - environments with a high throughput, or number of transactions per second (tps). Bitcoin can currently process only about 7 transactions per second. Visa processes around 1,700 transactions per second on average and claims to be able to support 24,000 tps. Mastercard runs a network that claims to handle around 5,000 tps. Researchers are working on increasing Bitcoin’s throughput.&lt;/li&gt;
  &lt;li&gt;Stable data - for a blockchain application you do not want volatile data; rather, you want things that will stay the same for a while, e.g. land ownership titles and personal information.&lt;/li&gt;
&lt;/ul&gt;
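&lt;p&gt;A back-of-envelope calculation with the figures quoted above shows the scale of the throughput gap (a sketch; real daily volumes vary):&lt;/p&gt;

```python
# Back-of-envelope throughput comparison using the figures quoted above.
SECONDS_PER_DAY = 86_400

bitcoin_tps = 7
visa_avg_tps = 1_700

daily_bitcoin = bitcoin_tps * SECONDS_PER_DAY
daily_visa = visa_avg_tps * SECONDS_PER_DAY
print(f"Bitcoin: about {daily_bitcoin:,} transactions/day")
print(f"Visa (average): about {daily_visa:,} transactions/day")
print(f"Visa handles roughly {daily_visa // daily_bitcoin}x Bitcoin volume")
```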

&lt;h3 id=&quot;impact-on-financial-services&quot;&gt;Impact on financial services&lt;/h3&gt;

&lt;p&gt;Bitcoin is the best-known application of blockchain technology. The prevailing view is that blockchain will cause two main shifts in the way banks do business:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The first is in its broad potential to bring financial institutions closer together and make global collaboration easier.
    &lt;ul&gt;
      &lt;li&gt;Due to the lack of trust, very little data is shared amongst financial institutions. Blockchain reduces this trust deficit and can allow seamless transfer of digital assets within a business network and better sharing of data across businesses.&lt;/li&gt;
      &lt;li&gt;Creation of secured, shared data with common standards - a public distributed ledger that allows automatic synchronization and removes inefficiencies due to variations in internal processes and data formats within the systems at different institutions.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The second is by creating real efficiencies in the way the bank processes data.
    &lt;ul&gt;
      &lt;li&gt;Simplification of payments infrastructure, the use of smart contracts to standardise post-trade processes without having to rely on a central certifying authority, and efficiently connecting parties in trade finance and syndicated lending by reducing the need for reconciliation at both ends of the business (purchaser and supplier).&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ol&gt;

&lt;h3 id=&quot;impact-on-public-key-infrastructure-pki&quot;&gt;Impact on Public Key Infrastructure (PKI)&lt;/h3&gt;

&lt;p&gt;The most commonly employed approach to &lt;a href=&quot;https://www.thesslstore.com/blog/wide-world-pki/&quot;&gt;Public Key Infrastructures (PKIs)&lt;/a&gt; is the Web PKI. It is a Certificate Authority (CA) based system that adopts a centralized trust infrastructure. Communications over the internet are secured through the trusted distribution of public keys, with the corresponding private keys kept secret by their owners. PKI, through the use of digital certificates, has been the backbone of Internet security since its inception.&lt;/p&gt;

&lt;p&gt;However, there are &lt;a href=&quot;https://medium.com/hackernoon/decentralized-public-key-infrastructure-dpki-what-is-it-and-why-does-it-matter-babee9d88579&quot;&gt;problems with centralized PKIs&lt;/a&gt; such as CA-based systems. Because of the ability to impersonate another user or website, CA systems are well-known targets for hackers: if they get hacked, you get hacked. By breaching them, the bad guys gain access to a treasure-trove of personal and financial information traveling on the Internet. DigiNotar, a Dutch CA whose systems were &lt;a href=&quot;https://www.wired.com/2011/09/diginotar-bankruptcy/&quot;&gt;attacked&lt;/a&gt; and used to issue many fraudulent certificates, eventually had to file for bankruptcy. CAs are a single point of failure that can be exploited to compromise encrypted online communication. Blockchain acts as a decentralized, open and transparent key-value store and &lt;a href=&quot;https://remme.io/blog/how-blockchain-addresses-public-key-infrastructure-shortcomings&quot;&gt;eliminates traditional PKI vulnerabilities&lt;/a&gt;. It is capable of securing data to prevent MITM (man-in-the-middle) attacks and of minimizing the power and fragilities of third parties.&lt;/p&gt;

&lt;h3 id=&quot;impact-on-iot&quot;&gt;Impact on IOT&lt;/h3&gt;

&lt;p&gt;Today we transact not only with humans but with machines and smart devices. How do we trust all the new IOT devices that are coming online? One prediction suggests that by 2020 we will have 7 times more smart devices than human beings in the world. That’s about 50 billion devices in 2020 that we’ll have to transact with and that we’ll have to trust. In 1982, &lt;a href=&quot;https://en.wikipedia.org/wiki/At_the_Abyss&quot;&gt;compromised software in the Trans-Siberian pipeline&lt;/a&gt; that controlled pump speeds and valve settings produced pressures far beyond those acceptable to the pipeline joints and welds. This led to a three-kiloton, non-nuclear explosion so big that it was seen from space. There are &lt;a href=&quot;https://www.digicert.com/internet-of-things/&quot;&gt;centralized PKI based solutions&lt;/a&gt; for securing IOT device communications; however, they are still prone to the problems of centralized trust. Blockchain provides a decentralized PKI alternative by adopting a &lt;em&gt;given enough eyeballs, all bugs are shallow&lt;/em&gt; approach. This allows:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Securely sharing any information between machines, including connected vehicles, smart appliances, or manufacturing equipment in a decentralized manner without depending upon a single third party to secure your entire system.&lt;/li&gt;
  &lt;li&gt;Providing open and transparent scrutiny to the adopted PKI security controls and protocols.&lt;/li&gt;
  &lt;li&gt;Reacting quickly to misuse by revoking certificates through a transparent, immutable process that prevents attackers from breaking in, thus effectively avoiding MITM attacks.&lt;/li&gt;
&lt;/ul&gt;
</description>
	        <pubDate>Thu, 13 Apr 2017 19:31:00 +0000</pubDate>
	        <link>https://hemantkumar.net/making-sense-of-blockchain.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/making-sense-of-blockchain.html</guid>
	        
	        <category>blockchain</category>

	        <category>cryptocurrency</category>

	        <category>cryptography</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	 
    
    	
	      <item>
	        <title>Think Docker, Think Security</title>
	        <description>&lt;p&gt;Docker allows you to completely abstract the underlying operating system and run your app across multiple platforms (local machine, cloud or on-premise data centre) as long as the destination has the Docker runtime (Docker daemon) running. With Docker, the Continuous Delivery philosophy &lt;em&gt;Build once deploy anywhere&lt;/em&gt; really comes to the fore. You build your binary artifact as a Docker image that includes all the application stack and requirements once and deploy the same image to various environments. This ensures the binary is built once and the same source code is promoted in subsequent deployments, allowing agile, continuous application delivery.&lt;/p&gt;

&lt;p&gt;I have been using Docker for local development, testing and running apps in production. Docker is pretty swift to get started with, and it allows rapid app development, setting up builds and running tests in a repeatable and consistent manner. You can get an application running on your local machine with all its dependencies (web servers, databases) fairly quickly. Does that mean you ship your machine to production? Probably not! Because you are working with the Docker abstraction, do you need to worry about any underlying security risks? Is it all taken care of, or should you even care?&lt;/p&gt;

&lt;h2 id=&quot;isolation&quot;&gt;Isolation&lt;/h2&gt;

&lt;p&gt;Docker is a virtualization technique used to create isolated environments called &lt;em&gt;containers&lt;/em&gt; for running your applications. A container is quite like a VM but lightweight. It is a bare-minimum Linux machine with minimal packages installed, which means it uses less CPU, less memory and less disk space than a full-blown VM. Containers are more like application runtime environments that sit on top of the OS (Docker host) and create an isolated environment in which to run your application.&lt;/p&gt;

&lt;p&gt;Docker uses the resource isolation features of the Linux kernel such as &lt;strong&gt;Namespaces&lt;/strong&gt; and &lt;strong&gt;cgroups&lt;/strong&gt; to create the walls between containers and other processes running in the host.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;em&gt;Namespaces&lt;/em&gt; control what processes can see. They allow resources to have separate values on the host and in the container; for example, PID 1 inside a container is not PID 1 on the host. However, not all resources that a container has access to are &lt;em&gt;namespaced&lt;/em&gt;, i.e. not all of them are isolated between the host and the containers. Containers running on the same host still share the same operating system kernel and any kernel modules.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;em&gt;cgroups&lt;/em&gt; (abbreviated from control groups) control what processes can use by limiting and isolating resource usage (CPU, memory, disk I/O, network, etc.) for a process.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Linux Security Modules provide another layer of protection that sits on top of namespaces and cgroups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AppArmor&lt;/strong&gt; can control and audit various process actions such as file operations (read, write, execute, etc.) and system functions (mount, network TCP, etc.). Again, while running a Docker container you get a sensible set of defaults:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Preventing writing to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/proc/{num}&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/proc/sys&lt;/code&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/sys&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;Preventing mount&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SELinux&lt;/strong&gt; provides a label-based mechanism for enforcing access control security policies. Defaults for running Docker include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;A container gets access to everything in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/usr&lt;/code&gt; and most things in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;To give more access: relabel content on the host&lt;/li&gt;
  &lt;li&gt;To restrict access to things in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/usr&lt;/code&gt; or things in &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;/etc&lt;/code&gt;: relabel them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Seccomp&lt;/strong&gt; is another security facility in the Linux kernel that allows an application to define which syscalls it allows or denies. Docker’s default seccomp profile is a &lt;em&gt;whitelist&lt;/em&gt; which specifies the calls that are allowed. By default, running Docker&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Prevents ~150 uncommon or dangerous syscalls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Capabilities&lt;/strong&gt; are a set of privileges that can be independently enabled or disabled for a process to provide or restrict access to the system. Removing capabilities can cause applications to break, so deciding which ones to keep and which to remove is a balancing act. Docker containers run with a sensible subset of default capabilities; e.g. a container will not normally be able to modify capabilities for other containers. Apart from the &lt;a href=&quot;https://opensource.com/business/14/9/security-for-docker&quot;&gt;capabilities removed by default&lt;/a&gt;, you can remove or add capabilities while running a container.&lt;/p&gt;

&lt;h2 id=&quot;security-challenges&quot;&gt;Security challenges&lt;/h2&gt;

&lt;p&gt;I was at a GOTO conference in Stockholm earlier in the year, where &lt;a href=&quot;https://twitter.com/adrianmouat&quot;&gt;Adrian Mouat&lt;/a&gt;, speaking on Docker security, highlighted the security issues mentioned below that could affect your apps running inside containers. The following is not a comprehensive list, but one that should get you thinking.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;&lt;em&gt;Kernel exploits&lt;/em&gt;&lt;/strong&gt;: The kernel is shared amongst all the containers and the host. A flaw in the kernel could be exploited by a container process and bring down the entire host.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;&lt;em&gt;Denial of service&lt;/em&gt;&lt;/strong&gt;: All containers share the kernel resources. If one container manages to hog kernel resources like CPU, memory or block I/O, it can starve the other containers on the host for resources, resulting in a denial of service attack.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;&lt;em&gt;Container breakouts&lt;/em&gt;&lt;/strong&gt;: Running as root in the container means you will be root on the host. You therefore need to worry about an &lt;em&gt;escalated privileges&lt;/em&gt; attack, where a user gains root access on the host due to a vulnerability in the application code. If an attacker gains access to one container, that should not allow them access to other containers or the host, so it becomes important to run your container with restricted privileges.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;&lt;em&gt;Tampered images&lt;/em&gt;&lt;/strong&gt;: You need to be sure that the image that you are running is from a trusted source and not from an attacker who has tricked you. The images that you run should also be up to date, scanned and without any known security vulnerabilities.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;&lt;strong&gt;&lt;em&gt;Secret Leakages&lt;/em&gt;&lt;/strong&gt;: Secrets for your applications (like API keys and Database passwords) need to be protected from falling into the wrong hands. You need to be very careful before putting these as plain text in source code; for example in a Dockerfile. Once the secret has made it into the Dockerfile, even if you remove the secret later, an attacker can still retrieve it from the history of the Docker image built from the Dockerfile. Ideally using a secret vault for storing application secrets and allowing restricted access to the secret vault serves as a secure mechanism to distribute secrets.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
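&lt;p&gt;One way to avoid baking secrets into an image in the first place is to inject them at run time, e.g. from environment variables populated by an orchestrator or a secret vault. A minimal sketch (the variable name &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;API_KEY&lt;/code&gt; is hypothetical):&lt;/p&gt;

```python
import os

# Read secrets from the environment (or a mounted file) at run time instead of
# hardcoding them in the Dockerfile, where they persist in the image history.
def get_secret(name):
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not provided to the container")
    return value

# Normally injected by the orchestrator or vault; set here only for the demo.
os.environ["API_KEY"] = "s3cr3t"
print(get_secret("API_KEY"))
```

&lt;p&gt;Failing fast when a secret is missing also surfaces misconfiguration at start-up rather than deep inside a request handler.&lt;/p&gt;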

&lt;h2 id=&quot;mitigations&quot;&gt;Mitigations&lt;/h2&gt;

&lt;p&gt;Having gone through key Docker security issues, let’s look at some ways to prevent them. The &lt;a href=&quot;https://benchmarks.cisecurity.org/tools2/docker/CIS_Docker_1.11.0_Benchmark_v1.0.0.pdf&quot;&gt;CIS Docker Benchmark&lt;/a&gt; provides an elaborate list of Docker security recommendations. Defence in depth is a common approach to security that involves building multiple layers of defences in order to hinder attackers. The following mitigations are based on securing the host, the container and the image in a container-based environment.&lt;/p&gt;

&lt;h3 id=&quot;host-and-kernel&quot;&gt;Host and kernel&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Use a good quality supported host system for running containers with regular security updates. Keep the kernel updated with the latest security fixes. The security of the kernel is paramount.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Apart from updating and patching the kernel, it might be worth considering running a hardened kernel using patches such as those provided by &lt;em&gt;grsecurity&lt;/em&gt; (https://grsecurity.net/) and &lt;em&gt;PaX&lt;/em&gt; (https://pax.grsecurity.net/). These provide extra protection against attackers manipulating program execution by modifying memory (such as &lt;em&gt;buffer overflow attacks&lt;/em&gt;).&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;run-containers-with&quot;&gt;Run containers with&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Least privilege (non-admin). When a vulnerability is exploited, it generally provides the attacker with access and privileges equal to those of the application or process that has been compromised. Ensuring that containers operate with the least privileges and access required to get the job done reduces your exposure to risk. A Docker container runs as root by default if no user is specified. Create a non-privileged user and switch to it using a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;USER&lt;/code&gt; statement before the entrypoint script in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Limited file system (read-only access). This prevents attackers from writing a script and tricking your application into running it. It can be done by passing the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;--read-only&lt;/code&gt; flag to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run&lt;/code&gt;.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Limited resources (CPU and memory). Limiting memory and CPU protects against DoS attacks. &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;docker run&lt;/code&gt; provides the options &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-m&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;-c&lt;/code&gt; for setting the memory limit and the relative CPU shares of a container.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Limited networking. By default containers running on the same host can talk to each other whether or not ports have been explicitly published or exposed. A container should open only the ports it needs to use in production, to prevent compromised containers from being able to attack other containers.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;No access to privileged ports. Only root has access to privileged ports (those below 1024), so if you’re talking to a privileged port you know you’re talking to root.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
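&lt;p&gt;The hardening steps above can be sketched as a minimal, hypothetical example (the image, user and port names are illustrative, not from a real deployment):&lt;/p&gt;

```shell
# In the Dockerfile, create a non-privileged user and switch to it:
#   RUN addgroup app; adduser -S -G app app
#   USER app

# Run with a read-only file system, a memory limit, reduced CPU shares,
# and only the single port the application actually needs published.
docker run --read-only -m 256m -c 512 -p 8080:8080 myorg/myapp
```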

&lt;h3 id=&quot;docker-images&quot;&gt;Docker images&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;Only run images from trusted parties. Control the inflow of Docker images into your development environment by using only approved private registries and approved images and versions. Docker 1.8 introduced a security feature called Docker Content Trust, which lets you verify the authenticity, integrity and publication date of Docker images on the Docker Hub registry. It is not enabled by default, but if you enable it with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;export DOCKER_CONTENT_TRUST=1&lt;/code&gt;, Docker notifies you when you attempt to pull an image that isn’t signed.&lt;/p&gt;

    &lt;p&gt;&lt;img src=&quot;https://hemantkumar.net/assets/docker-content-trust.png&quot; alt=&quot;Docker Content Trust&quot; /&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;Run regular scans on your Docker images for vulnerabilities. Vulnerability management is tricky because source images aren’t always patched. Even if you get the base layer up to date, you probably also have tens or hundreds of other components in your images that aren’t covered by the base layer package manager. Because the environment changes so frequently, traditional approaches to patch management are irrelevant. To stay in front of the problem you have to a) find vulnerabilities as part of the continuous integration (CI) process, and b) use quality gates to prevent the deployment of unsafe and non-compliant images in the first place. Docker Hub has its own &lt;a href=&quot;https://docs.docker.com/docker-cloud/builds/image-scan/&quot;&gt;image scanning tool&lt;/a&gt; and there are paid options like &lt;a href=&quot;https://www.twistlock.com/&quot;&gt;Twistlock&lt;/a&gt; that provide image vulnerability analysis as well as container security monitoring.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;If needed, reduce the attack surface by removing from the base image any packages with major or critical vulnerabilities that your application does not depend upon.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;
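&lt;p&gt;Enabling Docker Content Trust, mentioned above, is a one-liner per shell session (the image name below is purely illustrative):&lt;/p&gt;

```shell
# Opt in to Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Pulls of unsigned image tags now fail with a notice
docker pull myorg/unsigned-image:latest
```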
</description>
	        <pubDate>Mon, 07 Nov 2016 15:23:00 +0000</pubDate>
	        <link>https://hemantkumar.net/think-docker-think-security.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/think-docker-think-security.html</guid>
	        
	        <category>docker</category>
	        
	        <category>microservices</category>
	        
	        <category>security</category>
	        
	        <category>devops</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	
	      <item>
	        <title>Services, microservices, bounded context, actors.. the lot</title>
	        <description>&lt;p&gt;I was at the DDD exchange recently where we had the likes of &lt;a href=&quot;https://twitter.com/UdiDahan&quot;&gt;Udi Dahan&lt;/a&gt;, &lt;a href=&quot;https://twitter.com/ericevans0&quot;&gt;Eric Evans&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/ScottWlaschin&quot;&gt;Scott Wlaschin&lt;/a&gt; on the panel. In a post-event Q&amp;amp;A session I asked the panel - &lt;em&gt;“Is microservices SOA renamed?”&lt;/em&gt; - which triggered an hour-long debate. The panelists argued amongst themselves about what exactly a service or a microservice means, and by the end of the debate I doubt any of us was any the wiser. Clearly there was no consensus on the definition of a microservice. It is quite a buzzword these days, yet none of the panelists could come to a common understanding. Disappointed with the expert advice, I decided to look for a definition of my own.&lt;/p&gt;

&lt;p&gt;As one does in such situations, I looked up the dictionary definition of the word service - &lt;em&gt;“the action of helping or doing work for someone”&lt;/em&gt;. Does a microservice fit into this general definition? In order to come up with a more definitive answer, let’s recollect the knowledge that is already out there.&lt;/p&gt;

&lt;blockquote&gt;“Microservices aim to do SOA well, it is a specific approach of achieving SOA in the same way as XP and Scrum are specific approaches for Agile software development.” - Sam Newman (Building Microservices)
&lt;/blockquote&gt;

&lt;p&gt;If microservices are about doing SOA (service-oriented architecture) well, it is probably worth looking at the SOA tenets:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Services are autonomous - Cohesive single responsibility.&lt;/li&gt;
  &lt;li&gt;Services have explicit boundaries - Loosely coupled, owns its data and business rules.&lt;/li&gt;
  &lt;li&gt;Services share contract and schema, not class, type or database.&lt;/li&gt;
  &lt;li&gt;Service compatibility is based upon policy - Explicitly state the constraints (structural and behavioral) which the service imposes on its usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tenets do not appear to be too different from object-oriented principles. According to Alan Kay’s 1971 &lt;a href=&quot;http://wiki.c2.com/?AlanKaysDefinitionOfObjectOriented&quot;&gt;description of Smalltalk&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;An object is a little computer that has its own memory, you send messages to it in order to tell it to do something. It can interact with other objects through messages in order to get that task done.&lt;/blockquote&gt;

&lt;p&gt;An object’s private memory gives it autonomy, and message-based communication gives it an explicit boundary from other objects. Can we define a service based on the above principles? Let’s look at the tenets a bit more closely.&lt;/p&gt;

&lt;h1 id=&quot;what-is-a-service&quot;&gt;What is a Service?&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;Autonomy of a service suggests it is independent of other services to perform its tasks; in order to be independent it needs to have one and only one well-defined responsibility. Uncle Bob has summarised the Single Responsibility Principle rather well - &lt;em&gt;“Gather together those things that change for the same reason and separate those things that change for different reasons.”&lt;/em&gt; In short, a service should not have more than one reason to change.&lt;/li&gt;
  &lt;li&gt;Boundaries are drawn to restrict free movement and ensure all movement is governed by a set of rules. In the context of a service this restriction is enforced on free movement of data across a service boundary. All data and business rules reside within the service imposing strict restrictions on any movement in and out.&lt;/li&gt;
  &lt;li&gt;Services interact with other services through a shared contract by sending messages. These messages contain stable data (i.e. immutable, think events). The data going through service boundaries is minimal and very basic.&lt;/li&gt;
  &lt;li&gt;Usage of a service enforces certain constraints: incoming messages must conform to an expected structure and format.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;what-a-service-is-not&quot;&gt;What a service is NOT&lt;/h1&gt;

&lt;ul&gt;
  &lt;li&gt;Anything with the word &lt;em&gt;Service&lt;/em&gt; appended to it does not automatically qualify as a service.&lt;/li&gt;
  &lt;li&gt;A service that has only a function - a calculation or a validation, say - is a function, not a service (not to be confused with DDD’s &lt;em&gt;Domain Services&lt;/em&gt;, which is a more granular concept). Making it remotely callable through RPC/SOAP still does not make it a service.&lt;/li&gt;
  &lt;li&gt;A service that only has data is a database not a service. Doing CRUD through REST over HTTP does not change that.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Philippe Kruchten’s 4+1 &lt;a href=&quot;https://en.wikipedia.org/wiki/4%2B1_architectural_view_model&quot;&gt;Architecture View Model&lt;/a&gt; describes software architecture based on multiple concurrent views.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://upload.wikimedia.org/wikipedia/commons/f/f2/4%2B1_Architectural_View_Model.jpg&quot; alt=&quot;alt text&quot; title=&quot;4+1 Architecture View Model&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Defining services involves breaking an overall system into smaller isolated sub systems so that adding features to the overall system requires touching as few sub systems as possible. This decomposition can be at the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;logical level&lt;/code&gt; (business capabilities - the reason for something to exist), &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;component level&lt;/code&gt; (dlls, jars, source code repos), &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;process level&lt;/code&gt; (web app, http endpoints) or the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;physical level&lt;/code&gt; (machines, hosts). &lt;strong&gt;Bounded context&lt;/strong&gt; in DDD terminology focuses on the logical separation whereas &lt;strong&gt;Microservice&lt;/strong&gt; focuses on the physical separation. The idea of slicing up your system into manageable chunks is the key here.&lt;/p&gt;

&lt;h1 id=&quot;what-is-an-actor&quot;&gt;What is an Actor?&lt;/h1&gt;

&lt;p&gt;The actor model is another decomposition model, one that divides a system into smaller isolated units of work - actors - that can run concurrently. Traditional approaches to concurrency are based on synchronizing shared mutable state, which is difficult to get right as it often involves locking and coordination. Wouldn’t it be better not to have to deal with coordinating threads, synchronization and locks? Actors achieve this by avoiding shared state: an actor mutates only its internal private state between processing messages. With no shared state mutations, synchronization and locking are no longer required. When an actor wants to communicate with another actor, it sends it a message rather than calling it directly, and all messaging is asynchronous. Apart from concurrency and performance gains, the actor-based approach brings other benefits such as hot code replacement, but asynchronous programming is a whole new ballgame.&lt;/p&gt;
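&lt;p&gt;As a rough sketch of the idea (a toy example in Java, not a real actor framework): an actor can be modelled as private state plus a single-threaded mailbox, so only one message is ever processed at a time and no locks are needed.&lt;/p&gt;

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A toy actor: state is private and only ever touched by the
// single mailbox thread, so no synchronization is required.
class CounterActor {
    private int count = 0;
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

    // Sending a message is asynchronous: it just queues work on the mailbox.
    void increment() {
        mailbox.execute(() -> count++);
    }

    // Reads are also messages, so they observe a consistent ordering.
    int current() throws Exception {
        return mailbox.submit(() -> count).get();
    }

    void shutdown() {
        mailbox.shutdown();
    }
}
```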

&lt;p&gt;Asynchronous message communication allows actors to embrace &lt;strong&gt;latency&lt;/strong&gt;, potentially at the cost of &lt;strong&gt;simplicity&lt;/strong&gt;. In a synchronous model, events execute sequentially in a given order, which is in theory easy (read familiar) to program and reason about. An asynchronous (event-driven) model, on the other hand, allows you to scale better, but you have to find a way to impose ordering on the incoming requests.&lt;/p&gt;

&lt;h1 id=&quot;in-summary&quot;&gt;In summary&lt;/h1&gt;

&lt;p&gt;Partitioning can occur at different levels, be it a service, a microservice, an actor or even an object. What we really gain from this partitioning is isolation. We want a small computer with private memory that you can interact with through a contract, without any access to its internal private state. Isolation is easily compromised in objects: we often break encapsulation by sharing an object’s private memory through public getters and setters in languages like Java and C#. Actors enforce this isolation by restricting access to private memory (internal state). Microservices make it even harder to break the isolation by often introducing physical separation and communication over a network.&lt;/p&gt;

&lt;blockquote&gt;Each service ends up having its own process and/or network boundary. The contract is: a service updates its shared memory and exposes a mechanism to read from it, but the service itself is the only one allowed to write to it.&lt;/blockquote&gt;

&lt;p&gt;Turns out, microservices reinforce a lot of old ideas, like SOA and object-oriented principles, that we have known for some time. Adding features to your system will invariably involve touching more than one sub system at a given time, but depending upon how well the system is isolated, it should involve touching as few sub systems as possible. Isolation gives you all the good stuff - loosely coupled systems, failure isolation, independent evolution and scalability. But distributed microservices are not the only way to achieve isolation. Modularised monoliths can very well be isolated too (obviously not over a network but in memory); they probably need a bit more discipline.&lt;/p&gt;
</description>
	        <pubDate>Fri, 30 Oct 2015 15:23:00 +0000</pubDate>
	        <link>https://hemantkumar.net/services-microservices-bounded-context.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/services-microservices-bounded-context.html</guid>
	        
	        <category>microservices</category>
	        
	        <category>bounded context</category>
	        
	        <category>soa</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
    	
	      <item>
	        <title>Problem with mutating state</title>
	        <description>&lt;p&gt;The purpose of any computer program is to take some inputs and produce an output. Producing an output gives the program an effect: during its execution cycle, the program changes certain values. A live program models the real world, where things change frequently, over time and in parallel. But while writing programs we only have a static view of the problem domain, and with the best of intentions and tools at hand we try to manage state change in our program as it would happen in the real world. This leaves us having to deal with values that change over time. Are OO languages capable of handling this complexity easily, or do we need to look further?&lt;/p&gt;

&lt;blockquote&gt;“No man ever steps in the same river twice, for it&apos;s not the same river and he&apos;s not the same man.” - &lt;em&gt;Heraclitus&lt;/em&gt;&lt;/blockquote&gt;

&lt;p&gt;Rich Hickey in his &lt;a href=&quot;http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey&quot;&gt;keynote&lt;/a&gt; at the JVM Languages Summit talked about value, identity and state. Values are essentially constants that do not change. The flowing water in a river makes the river change constantly over time; this introduces the concept of identity. Identity is a succession of related values, where the current one is caused by the previous. State is the value of an identity at a given time.&lt;/p&gt;

&lt;div class=&quot;language-java highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// value (immutable)&lt;/span&gt;
&lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// identity (variables are like rivers that change)&lt;/span&gt;
&lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// x changes state (introduces side effect)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;An assignment statement, however, mutates state and introduces the concept of time. The value of x differs before and after the assignment, so time becomes important. The assignment statement separates the code above it from the code below it, because other parts of the system that can access x may now see different views of x depending upon when it is observed. If x is shared between multiple functions, objects or threads and any of them can assign to x, this can lead to &lt;em&gt;side effects&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When a function produces a side effect, you often need another function to undo that side effect. Side effects lead to &lt;a href=&quot;https://en.wikipedia.org/wiki/Nondeterministic_algorithm&quot;&gt;&lt;strong&gt;non-deterministic&lt;/strong&gt;&lt;/a&gt; results and introduce &lt;a href=&quot;https://en.wikipedia.org/wiki/No_Silver_Bullet&quot;&gt;&lt;strong&gt;accidental complexity&lt;/strong&gt;&lt;/a&gt; (problems which engineers create and then fix). For example, a program whose output is influenced by the particular order of execution of threads, or by a call to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;gettimeofday&lt;/code&gt; or some other non-repeatable thing, is generally best considered non-deterministic. This accidental complexity is inherent in OO languages. When you open a file you must also close it. Functions with side effects are separated in time - opening a graphical context must precede closing it, and using an unmanaged resource in .NET must always precede disposing of it. If these functions are not called in the correct order, it leads to resource leaks. OO languages support garbage collection to manage some side effects, but not all.&lt;/p&gt;
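&lt;p&gt;Languages can help contain this temporal coupling. In Java, for instance, a try-with-resources block guarantees that the close happens after the open, even when an exception is thrown in between. A small sketch, using a temporary file (the class and method names are illustrative):&lt;/p&gt;

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TemporalCoupling {
    // Opening a file must precede closing it; try-with-resources enforces
    // the ordering by calling close() automatically at the end of the block.
    static void writeGreeting(Path file) throws IOException {
        try (BufferedWriter out = Files.newBufferedWriter(file)) {
            out.write("hello");
        } // out.close() runs here, even if write() threw
    }
}
```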

&lt;h1 id=&quot;managing-side-effects&quot;&gt;Managing side effects&lt;/h1&gt;
&lt;p&gt;One of the main ideas of functional programming is to manage side effects. It enables programs to make decisions based on stable values rather than those that change over time (like rivers). Evaluation of a &lt;strong&gt;pure function&lt;/strong&gt; has &lt;em&gt;no side effects&lt;/em&gt;: f(x) is always the same, no matter what. This leads to &lt;a href=&quot;https://wiki.haskell.org/Referential_transparency&quot;&gt;&lt;strong&gt;referential transparency&lt;/strong&gt;&lt;/a&gt;, which implies that given the same set of inputs a function will always produce the same output.&lt;/p&gt;

&lt;p&gt;A pure function is &lt;strong&gt;stateless&lt;/strong&gt; rather than stateful, i.e. it does not update any shared state/memory. This means that executing a pure function has no effect on the output of any other function in your application. In practice, however, applications do need to have some side effects.&lt;/p&gt;

&lt;blockquote&gt;&quot;In the end, any program must manipulate state. A program that has no side effects whatsoever is a kind of black box. All you can tell is that the box gets hotter.&quot; - &lt;em&gt;Simon Peyton Jones (Haskell contributor)&lt;/em&gt;&lt;/blockquote&gt;

&lt;p&gt;The key is to limit side effects, clearly identify them, and avoid scattering them throughout your application. This can be achieved by having more and more pure functions inside your application that depend only on the input and return a value without:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Accessing global memory/state&lt;/li&gt;
  &lt;li&gt;Modifying input(s)&lt;/li&gt;
  &lt;li&gt;Changing shared memory/state&lt;/li&gt;
&lt;/ul&gt;
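&lt;p&gt;A small Java illustration of the contrast (the names are illustrative): the pure function depends only on its inputs, while the impure one reads and writes shared state, so its result depends on call history.&lt;/p&gt;

```java
public class Pure {
    private static int total = 0; // shared, mutable state

    // Impure: mutates shared state; calling it twice with the same
    // argument yields different results.
    static int addToTotal(int x) {
        total = total + x;
        return total;
    }

    // Pure: no reads or writes outside its own arguments.
    static int add(int a, int b) {
        return a + b;
    }
}
```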

&lt;p&gt;It is interesting to note that in an imperative style, each of the above situations results from a use of the assignment statement. Functional programs attempt to remove the non-determinism and complexity in your application by containing side effects. This makes them easier to write and maintain:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Without temporal coupling - order in which functions are called becomes irrelevant.&lt;/li&gt;
  &lt;li&gt;Fewer concurrency issues due to restricted updates to shared memory/state.&lt;/li&gt;
  &lt;li&gt;Less time spent in the debugger without having to constantly ask “&lt;em&gt;What is the application state ?&lt;/em&gt;”&lt;/li&gt;
&lt;/ul&gt;

&lt;h1 id=&quot;time-to-give-up-the-assignment-statement&quot;&gt;Time to give up the assignment statement?&lt;/h1&gt;

&lt;p&gt;The addition of more cores to computer hardware means the individual throughput of each core goes down, but the throughput of the chip as a whole goes up - if you can take advantage of the multiple cores. In order to utilise all those cores, we will have to learn to write more pure functions. Our ability to write multi-threaded programs will depend upon the scarce and disciplined use of the assignment statement.&lt;/p&gt;
</description>
	        <pubDate>Tue, 19 May 2015 15:23:00 +0000</pubDate>
	        <link>https://hemantkumar.net/problem-with-mutating-state.html</link>
	        <guid isPermaLink="true">https://hemantkumar.net/problem-with-mutating-state.html</guid>
	        
	        <category>functional</category>
	        
	        
	        <category>kodekitab</category>
	        
	      </item>
	      
    
  </channel>
</rss>