DSPy


DSPy = Declarative Self-improving Python.

Components

1. language model — LLM that will answer our questions,

2. signature — a declaration of the program's input and output (what task we want to solve),

 - 1. inline

 - 2. class

dspy.InputField()

List[Literal['', '', '']] = dspy.OutputField()

3. module — the prompting technique (how we want to solve the task).

 - Building blocks

 - different prompting strategies, 

  - 1. dspy.Predict

  - 2. dspy.ChainOfThought 

  - 3. dspy.ReAct (to add tools, i.e. function calling)

4. Optimiser

  - 1. Automatic few-shot learning (e.g. BootstrapFewShot or BootstrapFewShotWithRandomSearch)

  - 2. Automatic instructions optimisation (e.g. MIPROv2) 

  - 3. Automatic fine-tuning (e.g. BootstrapFinetune)

Other points

  • dspy.inspect_history for logs
  • Caching

# 1. updating config

dspy.configure_cache(enable_memory_cache=False, enable_disk_cache=False)

# 2. not using cache for specific module

math = dspy.Predict("question -> answer: float", cache = False)

  • dspy.configure(adapter=dspy.JSONAdapter()) 

  • DSPy is integrated with MLflow (an observability tool)
Reference

Article
https://miptgirl.medium.com/programming-not-prompting-a-hands-on-guide-to-dspy-04ea2d966e6d
https://github.com/miptgirl/miptgirl_medium/blob/main/dspy_example/nps_topic_modelling.ipynb

https://www.dbreunig.com/2025/06/10/let-the-model-write-the-prompt.html

https://thedataquarry.com/blog/learning-dspy-1-the-power-of-good-abstractions/
https://thedataquarry.com/blog/learning-dspy-2-understanding-the-internals/
https://thedataquarry.com/blog/learning-dspy-3-working-with-optimizers/

https://thenewstack.io/goodbye-manual-prompting-hello-programming-with-dspy/


Paper
https://arxiv.org/abs/2310.03714

Website
https://dspy.ai/
https://dspy.ai/tutorials/games/

Course
https://www.deeplearning.ai/short-courses/dspy-build-optimize-agentic-apps/

Github
https://github.com/stanfordnlp/dspy

Hashicorp User Group Bangalore Meetup #1 : Powering the Multi-Cloud Era


Alternatives for IDP 

(1) https://github.com/JanssenProject/jans  https://github.com/JanssenProject/jans/tree/main/jans-keycloak-link   https://imshakil.medium.com/janssen-mod-auth-openidc-module-to-test-openid-connect-single-sign-on-s…  It is by Gluu

(2) Vault itself supports OIDC https://developer.hashicorp.com/vault/docs/secrets/identity/oidc-provider    https://brian-candler.medium.com/using-vault-as-an-openid-connect-identity-provider-ee0aaef2bba2

SQL++ is for JSON data. https://www.couchbase.com/sqlplusplus/

https://techmilap.com/ is a free website for hosting events

Vault can provide dynamic, temporary secrets to access data for each identity used by a consumer, so later on we can audit who has accessed the data. In our case, pods use a ServiceAccount (SA), and we get a dynamic secret per ServiceAccount. So we cannot audit which pod accessed the data, only which ServiceAccount accessed it. The dynamic secret is short-lived, so it cannot be reused; the SA, by contrast, can be used as many times as we want.

Vault secures data in transit with TLS; it also offers other encryption methods, which is called "encryption as a service".

In Terraform, the state file is the most confidential artifact.

Nomad is an alternative to K8s. It can also manage VMs using the QEMU driver. Consul is used for networking and service discovery. Fabio provides ingress and load balancing in Nomad.

Event: Hashicorp User Group Bangalore Meetup #1 : Powering the Multi-Cloud Era, Sun, Nov 2, 2025, 10:00 AM | Meetup

Identity Provider


https://github.com/pando85/kaniop Kaniop is a Kubernetes operator for managing Kanidm. 

https://kanidm.com/ Kanidm is a modern, secure identity management system that provides authentication and authorization services with support for POSIX accounts, OAuth2, and more. It is simple and written in Rust.


-------------

Why Choose Keycloak?. Understanding the Need for an Identity… | by J3 | Jungletronics | Medium

Ory

Quickstart | Ory


Ory Kratos Helm Chart | k8s

GitHub - ory/k8s: Kubernetes Helm Charts for the ORY ecosystem.

Ory Hydra: OAuth 2.0 and OpenID Connect server | Ory

GitHub - ory/kratos: Headless cloud-native authentication and identity management written in Go. Scales to a billion+ users. Replace Homegrown, Auth0, Okta, Firebase with better UX and DX. Passkeys, Social Sign In, OIDC, Magic Link, Multi-Factor Auth, SMS, SAML, TOTP, and more. Runs everywhere, runs best on Ory Network.

GitHub - ory/hydra: Internet-scale OpenID Certified™ OpenID Connect and OAuth2.1 provider that integrates with your user management through headless APIs. Solve OIDC/OAuth2 use cases overnight. Consume as a service on Ory Network or self-host. Trusted by OpenAI and many others for scale and security. Written in Go.

The Top 7 Ory Kratos Alternatives

The Paper That Changed Everything: Attention is All You Need


Here are a few links

The Paper

https://arxiv.org/pdf/1706.03762.pdf

------------------------

Medium

https://medium.com/@SimplifyingFutureTech/understanding-attention-is-all-you-need-750713a1631b

https://medium.com/codex/attention-is-all-you-need-explained-ebdb02c7f4d4

-------------

PoloClub

https://poloclub.github.io/transformer-explainer/

https://arxiv.org/abs/2408.04619

https://www.youtube.com/watch?v=ECR4oAwocjs

-----------

Last Few videos of https://www.youtube.com/watch?v=2dH_qjc9mFg&list=PLKnIA16_RmvYuZauWaPlRTC54KxSNLtNn

https://hasgeek.com/fifthelephant/paper-reading-meet-up-december-2023/

https://www.linkedin.com/pulse/decoding-attention-all-you-need-how-transformers-ai-yuri-sylse/

--------------

An embedding is a representation of text in a multi-dimensional space.

Diffusion models add noise and then remove it. They are used for multimodal generation.

Multi-head = syntax + semantics + position. It improves expressiveness and captures richer patterns.

Attention is about which embeddings to look at. It does not change the embeddings.

A few other miscellaneous links from the event https://luma.com/d0yhf0ib

1. IronClaw

https://github.com/nearai/ironclaw

https://www.ironclaw.com/

IronClaw is the secure, open-source alternative to OpenClaw that runs in encrypted enclaves on NEAR AI Cloud, using a TEE (Trusted Execution Environment).


VoIP in Agentic AI era


Once upon a time, the signaling stack was separated from the voice path as the packet-switched SS7 network, with its own protocol stack. SS7 over a TCP/IP stack is SIGTRAN. The VoIP signaling plane has protocols like H.323 (by ITU), SIP (by IETF) and MEGACO; SIP became the most popular. The VoIP data plane is RTP. Now, in the era of agentic AI, we have business solutions for different verticals that integrate voice with STT, LLM, TTS, etc. Here are a few resource URLs.

All Relevant technologies  

https://www.voip-info.org/

https://telecom.altanai.com/

Signalwire

https://www.linkedin.com/posts/briankwest_github-signalwire-demosveronica-this-activity-7430982255675678720-jsTH/

https://developer.signalwire.com/sdks/agents-sdk/

https://github.com/signalwire-demos

https://signalwire.com/

https://postpromptviewer.signalwire.io/

FreeSWITCH

https://en.wikipedia.org/wiki/FreeSWITCH

https://signalwire.com/freeswitch

https://github.com/signalwire/freeswitch

https://developer.signalwire.com/freeswitch/FreeSWITCH-Explained/


https://github.com/amigniter/mod_audio_stream

https://github.com/sptmru/freeswitch_mod_audio_stream

https://medium.com/@srivastava.vikash/day-9-real-time-voice-ai-starts-here-streaming-audio-from-freeswitch-a45d69547164

https://www.cyberpunk.tools/jekyll/update/2025/11/18/add-ai-voice-agent-to-freeswitch.html


Asterisk

https://www.asterisk.org/

https://en.wikipedia.org/wiki/Asterisk_(PBX)

https://github.com/asterisk/asterisk

Plivo

https://www.plivo.com/

https://github.com/plivo

JsSIP

https://jssip.net/

https://github.com/versatica/JsSIP

https://en.wikipedia.org/wiki/JsSIP

Security

https://www.frafos.com/

OverSIP

https://oversip.versatica.com/

https://github.com/versatica/OverSIP

https://rubygems.org/gems/oversip/versions/2.0.1?locale=en

https://www.voip-info.org/oversip/

OfficeSIP

https://officesip-server.software.informer.com/

https://telecom.altanai.com/2014/10/13/sip-server-officesip/

https://sourceforge.net/projects/officesip/

https://github.com/vf1/sipserver

FlexiSIP

https://github.com/BelledonneCommunications/flexisip

https://www.linphone.org/en/flexisip-sip-server/

https://www.linhome.org/software-products/flexisip/

https://wiki.linphone.org/xwiki/wiki/public/view/Flexisip/

Tools

https://postpromptviewer.signalwire.io/

https://github.com/briankwest/libnemo_normalize

https://github.com/signalwire-demos/utils

https://github.com/xiph/rnnoise

FreePBX

https://www.hostinger.com/in/tutorials/freepbx-tutorial

https://www.freepbx.org/

https://en.wikipedia.org/wiki/FreePBX

https://github.com/freepbx

Others

https://medium.com/@dwilkie_34546/implementing-ai-powered-voice-at-somleng-a-technical-deep-dive-93edbb920e02

https://stringee.com/en/

https://www.kamailio.org/w/

https://github.com/resiprocate/resiprocate/wiki

https://www.kaplansoft.com/teksip/

AI

https://deepgram.com/

https://github.com/dograh-hq/dograh Voice AI agent


Transformers & Large Language Models - 1 of 9


• Background on NLP and tasks

NLP Tasks

1. Classification

- Sentiment analysis:

* Examples: Amazon reviews, IMDB critiques, Twitter.

* Many to one RNN example. 

Input: sequence of data

Output: scalar.

- Intent detection

- Language detection

* One to many RNN example. 

Example: Image Captioning and Topic modeling

Input: a single item (scalar)

output: sequence of data

2. "Multi"-Classification

* Synchronous Many to many RNN example. 

Example: Part of speech tagging and Named entity recognition (NER): Dataset = annotated Reuters newspaper (CONLL-2003, CONLL+)

Input: sequence of data

output: sequence of data

- Dependency parsing

- Constituency parsing

3. Generation

* Asynchronous Many to many RNN example. 

Example: Machine translation (dataset = WMT'14; translation quality is measured in BLEU), question answering, summarization, speech to text

Input: sequence of data

output: sequence of data

Input and output lengths are not equal; there is no one-to-one mapping.

This RNN use case is now done with transformers / LLMs.

- Text generation

History of LLM

1980 RNN

1997 LSTM (Theoretical Foundation) 

2013 Word2Vec

2014 Sequence to Sequence Learning with NN

2015: "Neural machine translation by jointly learning to align and translate". It introduced the attention mechanism. Processing at the encoder and decoder was still sequential.

2017: Transformer. Parallel processing. "Attention is all you need". Encoder and decoder both have self attention. 

2018: Transfer learning. "Universal Language Model Fine-tuning (ULMFiT) for Text Classification"

- Introduced language modelling

- Now a common model for all use cases

- No need for supervised data

- It is about predicting the next word.

Transformer Language Model

1. BERT by Google (encoder only model)

2. GPT by OpenAI (decoder only model). Then GPT2, GPT3 etc. 

2020s LLM

• Tokenization

1. Arbitrary (n/a)

2. Word-level: multiple tokens with similar meanings would need the same embedding, so word variations are not handled

3. Sub-word: focuses on common roots. Increases sequence length. Tokenization is more complex

4. Character-level: can correct misspelled words & CasINg. Sequence length is much longer. No OOV

• Embeddings

Word (Token) Representation by vector

OHE = One Hot Encoding

cosine similarity 
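As a toy illustration of the two representation ideas above (one-hot vs learned vectors, compared with cosine similarity); the vectors are made-up numbers, assuming NumPy is available:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1 = same direction, 0 = orthogonal."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-hot encoding (OHE): every word gets an orthogonal vector,
# so similarity between any two different words is always 0.
vocab = ["king", "queen", "apple"]
one_hot = np.eye(len(vocab))
print(cosine_similarity(one_hot[0], one_hot[1]))  # 0.0

# Learned embeddings place related words closer together (toy values).
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.82, 0.12])
apple = np.array([0.1, 0.05, 0.9])
print(cosine_similarity(king, queen) > cosine_similarity(king, apple))  # True
```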

• Word2vec, RNN, LSTM

1. Word2Vec

It is an ANN trained on a proxy task

1. CBOW (Continuous Bag of Words): you predict the target word from its context

2. Skip-gram: you take the target word and predict the words around it

Word order does not matter

Embeddings are not context-aware

Dimension size example 768

Special token to indicate "end of sequence" 
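The two proxy tasks can be illustrated by generating (input, target) training pairs from a token list; a pure-Python sketch with a window size of 1 (my choice, for brevity):

```python
def training_pairs(tokens, window=1, mode="skipgram"):
    """Generate (input, target) pairs for the two Word2Vec proxy tasks.

    CBOW: predict the target word from its surrounding context words.
    Skip-gram: take the target word and predict each word around it.
    """
    pairs = []
    for i, target in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window),
                                  min(len(tokens), i + window + 1))
                   if j != i]
        if mode == "cbow":
            pairs.append((tuple(context), target))      # context -> target
        else:
            pairs.extend((target, c) for c in context)  # target -> each context
    return pairs

tokens = ["the", "cat", "sat"]
print(training_pairs(tokens, mode="cbow"))
# [(('cat',), 'the'), (('the', 'sat'), 'cat'), (('cat',), 'sat')]
print(training_pairs(tokens, mode="skipgram"))
# [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```

Note that neither task looks at the order of the context words, which is why the resulting embeddings are not context-aware.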

2. RNN Recurrent Neural Network

Connection forms a temporal sequence

H = Hidden state = A = Activation Vector = Context Vector. 

RNN is used for all 3 NLP tasks

1. Classification

2. "Multi"-Classification

3. Generation

RNNs keep forgetting the past. This phenomenon is called the "vanishing gradient" problem.

Word order matters in RNN
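A minimal sketch of one recurrent step, assuming NumPy, with toy dimensions of my choosing; it shows how the hidden state (context vector) is carried through the sequence in order:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                     # input and hidden-state sizes (toy values)
W_x = rng.normal(size=(d_h, d_in))   # input-to-hidden weights
W_h = rng.normal(size=(d_h, d_h))    # hidden-to-hidden (recurrent) weights
b = np.zeros(d_h)

def rnn_step(x, h):
    """One time step: the new hidden state mixes the input with the old state."""
    return np.tanh(W_x @ x + W_h @ h + b)

# The hidden state is threaded through the sequence step by step, which is
# why word order matters, and why old information gradually fades.
h = np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):  # a sequence of 5 input vectors
    h = rnn_step(x, h)
print(h.shape)  # (3,)
```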

3. LSTM = Long short-term memory

1. hidden state

2. cell state

• Attention mechanism

Attention tries to have a direct link between the next word that we are predicting and something from the past.

"self-attention" is the main principle of the "Attention is all you need" 2017 paper

"self-attention" = instead of sequential processing, allow a direct connection with all parts of the text at once.

Concept of Query, Key and Value

We compare Q to K to see how similar they are, and then take the corresponding value.

Softmax converts unnormalized network outputs into probabilities for the different classes, such that each value is in [0, 1] and they sum to 1.

Formula – Given a query Q, we want to know which key K the query should pay "attention" to with respect to the associated value V. 

attention = softmax ( Q * K ^ T / Sqrt (dimension of K) ) * V
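The formula can be sketched directly in NumPy (toy shapes of my choosing; this is scaled dot-product attention for a batch of queries):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax: values in [0, 1] that sum to 1 along axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """attention = softmax(Q @ K^T / sqrt(d_K)) @ V"""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # compare each query with every key
    weights = softmax(scores)          # probability distribution over keys
    return weights @ V, weights        # weighted sum of values

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 4))  # 2 queries, d_k = 4
K = rng.normal(size=(3, 4))  # 3 keys
V = rng.normal(size=(3, 5))  # 3 values, d_v = 5
out, w = attention(Q, K, V)
print(out.shape, w.shape)  # (2, 5) (2, 3)
```

Each row of `w` sums to 1: for each query, it tells how much attention to pay to each key's value.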

There are three attention layers

1. The self-attention layer in the encoder, to compute embeddings from the input.

2. The self-attention (decoder-decoder attention) layer in the decoder: it is masked, because it can only look at the tokens already generated. It determines what other tokens of the output sentence are useful to predict the next token.

3. The cross-attention layer: the output is expressed as a function of what is seen in the input. The last encoder's output is fed to the decoder.

We have a direct link to all tokens, so word order does not matter (unlike RNN). So we have positional encoding: to inform the model of each word's position in the sequence.
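The sinusoidal positional encoding from the 2017 paper can be sketched as follows (assuming NumPy; sizes are toy values):

```python
import numpy as np

def positional_encoding(n_pos: int, d_model: int) -> np.ndarray:
    """Sinusoidal position encoding from "Attention Is All You Need":
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    pos = np.arange(n_pos)[:, None]            # positions 0..n_pos-1
    i = np.arange(0, d_model, 2)[None, :]      # even dimension indices
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angle)                # even dims get sine
    pe[:, 1::2] = np.cos(angle)                # odd dims get cosine
    return pe

# Added to the token embeddings so the model knows each word's position.
pe = positional_encoding(n_pos=10, d_model=8)
print(pe.shape)  # (10, 8)
```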


BOS Token: Beginning of Sequence. 

EOS Token: End of Sequence


• Transformer architecture

Self-attention is achieved by transformer = encoder and decoder

1. Encoder: computes a meaningful embedding from the input text. We have N such encoders. The input layer generates a position-aware embedding matrix with size d = model size and length n = length of the input sequence.

The encoder projects the input sequence onto 3 spaces via Wk, Wq and Wv, so the model learns.

attention = softmax( Q * K^T / sqrt(dimension of K) ) * V

Projecting on Wq gives a matrix where each row represents a given query Q. A matrix Wo projects the result back to the original dimension of the embedding.

In K^T, each column represents the key of a token.

When we multiply Q and K^T, each row represents the projection of a query over each key; softmax then gives a probability distribution.

Then we multiply with the matrix V.

This is the self-attention mechanism: compute a representation of each token as a function of the other tokens. It is done by the attention layer.

Multi-Head Attention (MHA) means this computation is done multiple times in different ways, so the model can learn

- different representations

- different projections

so all tokens of the input text attend to each other.

(In the decoder, this self-attention layer is masked.)

A Multi-Head Attention (MHA) layer performs attention computations across multiple heads, then projects the result in the output space.

Variations of MHA
* Grouped-Query Attention (GQA) and 
* Multi-Query Attention (MQA) 
that reduce computational overhead by sharing keys and values across attention heads.

A head is the term given to the projection matrices used to obtain Q, K, V. With more heads, the model learns different projections. It is like multiple filters in a convolution layer in computer vision.
h = number of heads
With h heads, the output of attention is h such matrices. Because of gradient descent, each head ends up with a different result: each has its own objective function with degrees of freedom. We concatenate the outputs of all heads along the columns.
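The per-head projections, concatenation, and output projection Wo described above can be sketched like this (toy sizes of my choosing, assuming NumPy; an illustration, not a full transformer layer):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo):
    """Each head projects X with its own Wq/Wk/Wv, runs scaled dot-product
    attention, then all head outputs are concatenated along the columns
    and projected back to the model dimension by Wo."""
    heads = []
    for Wq_h, Wk_h, Wv_h in zip(Wq, Wk, Wv):
        Q, K, V = X @ Wq_h, X @ Wk_h, X @ Wv_h
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
        heads.append(A @ V)
    return np.concatenate(heads, axis=-1) @ Wo

rng = np.random.default_rng(2)
n, d, h = 4, 8, 2                      # 4 tokens, model size 8, h = 2 heads
d_head = d // h
Wq = rng.normal(size=(h, d, d_head))   # one projection matrix per head
Wk = rng.normal(size=(h, d, d_head))
Wv = rng.normal(size=(h, d, d_head))
Wo = rng.normal(size=(h * d_head, d))  # projects the concat back to size d
X = rng.normal(size=(n, d))
Y = multi_head_attention(X, Wq, Wk, Wv, Wo)
print(Y.shape)  # (4, 8)
```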

2. FFNN (Feed-Forward Neural Network): the model learns another kind of projection,

so we get a rich representation of the input tokens.

In LLMs, the hidden layer has a higher dimension, so the model has enough degrees of freedom to learn a useful representation.

3. The output goes to the decoder.

The decoder takes Q from the output side,

and K, V from the encoder.

We have N decoders.

New Terms

  • Perplexity is an evaluation metric for machine translation. It quantifies how 'surprised' the model is to see some words together. Lower is better.
  • OOV = out of vocabulary
  • RNNs keep forgetting the past; this phenomenon is called the "vanishing gradient" problem.
  • Label Smoothing Purpose

    - prevent overfitting

    - introduce noise

    - let model be little unsure about prediction. 

    It improves accuracy and BLEU score of translation.

  • RLHF : Reinforcement learning from human feedback
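Two of the quantitative terms above, perplexity and label smoothing, can be sketched numerically (assuming NumPy; the token probabilities below are made up):

```python
import numpy as np

def perplexity(token_probs):
    """exp of the average negative log-probability the model assigned
    to the actual tokens; lower = the model is less 'surprised'."""
    return float(np.exp(-np.mean(np.log(token_probs))))

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: move eps of the probability mass from the true class
    to all classes uniformly, so the model stays a little unsure."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

print(perplexity([0.5, 0.5, 0.5]))  # 2.0  (fairly confident model)
print(perplexity([0.1, 0.1, 0.1]))  # ~10  (surprised model)
print(smooth_labels(np.array([0.0, 0.0, 1.0])))  # mass spread: [~0.033, ~0.033, ~0.933]
```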

References

https://cme295.stanford.edu/

Syllabus : https://cme295.stanford.edu/syllabus/

CheatSheet 

https://cme295.stanford.edu/cheatsheet/ 

https://github.com/afshinea/stanford-cme-295-transformers-large-language-models/tree/main/en 

https://www.youtube.com/watch?v=Ub3GoFaUcds

https://www.youtube.com/watch?v=8fX3rOjTloc

Text Book Super Study Guides

------------------------------------------------------

Sequence to Sequence model has

1. Encoder, Decoder

2. Attention Mechanism

3. Transformer architecture

4. Fine tuning of Transformer Architecture

Usecases

1. Language: a sentence has words in sequence

2. Time series data

3. Biology: Genes, DNA

4. 

------------------------------------------------------

Some more relevant stuff: 

Each layer has

1. Attention and

2. Feed-Forward


Between two layers we have a high-dimensional 'hidden state vector' in activation space.


LLMs encode concepts as distributed patterns across layers = superposition.

Anthropic has a series of papers on superposition and monosemanticity

https://www.youtube.com/watch?v=F2jd5WuT-zg

https://www.neuronpedia.org

https://huggingface.co/collections/dlouapre/sparse-auto-encoders-saes-for-mechanistic-interpretability

https://huggingface.co/spaces/dlouapre/eiffel-tower-llama

------------------------------------------------------------

BAPS IT Convention


On January 18, 2026, the BAPS Bangalore temple hosted an IT convention event from morning to evening, with more than 375 participants.


Here are few take away points 



1.  "Changing Trends of AI in technologies" by Prof. Rahul De

Prof. Rahul De' is the founder and CEO of https://www.memoricai.in/ He provided nice academic insight into the history of AI, its present state and its future. AI is about inference, and inference is prediction, classification and generative output.

The human brain has 80 to 86 billion neurons (cells).

Evolution of AI

1. ANI: Artificial Narrow Intelligence

2. AGI: Artificial General Intelligence

3. ASI: Artificial Super Intelligence

In 1966, Professor Joseph Weizenbaum at MIT developed the first chatbot, named ELIZA. It acted like a Rogerian psychotherapist. People liked it so much that, later on, they had to be convinced that it was not a real person, just a computer program that simulates human conversation through pattern matching and keyword substitution.

In 1980, John Searle proposed the "Chinese Room" thought experiment. Here, a non-Chinese speaker in a room just manipulates Chinese symbols manually and produces fluent responses without understanding the language. It argues that through syntax (rule-following) alone, a computer cannot achieve semantics (genuine understanding or consciousness).

Probably that is why today GenAI has caveats like hallucinations, jailbreaking, bias, privacy violations and unfair responses. In the context of bias, he mentioned the recent movie "Humans in the Loop", available on Netflix: "An indigenous woman works as an AI data-labeler after returning to her village with her children, but soon questions the human bias in machine learning."

He shared some statistics

  • 2.5 billion prompts are handled by ChatGPT alone in a year.
  • 2.4 million models are hosted on Hugging Face.
  • 50 billion USD was spent on AI in the year 2025.
  • 95% of firms fail in GenAI adoption.
  • We achieved a 15% improvement from GenAI.

The above numbers raise a serious question: is the spending on GenAI worth it?

He mentioned a few books and categorised all AI adopters into four groups: Boomers (e.g. books by Ray Kurzweil), Doomers, Skeptics, and Critics (e.g. "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI").



2. Panel discussion on "Ways to overcome Challenges in IT"


One of the discussion points was about reducing the 95% failure rate in 2026.

We need: 

  • Automated task workflows
  • Cross-functional aggregation across various departments
  • Collaboration between AI and humans
  • Orchestration for AI in day-to-day work

How can you fail? Your effort (to adopt AI, to retain a job, etc.) can fail. In fact, the definition of failure came after the industrial revolution. During the last century, "productivity" was more in focus due to the industrial revolution.

Other points: 

OpenAI co-founder Ilya Sutskever has indicated that simply scaling up to trillion-parameter models will not improve AI capability further.

Maybe personalised AI will be the next big thing.

We are humans, so

  • we are always optimistic; we have hope.
  • We have the ability to adapt.
  • We are the creators of AI, so we are smarter than AI.

We shall remove the fear that we need to learn everything. Yes, we shall learn something new every day and take notes on it, e.g. in NotebookLM. The Internet is flooded with many buzzwords about AI. We need to separate signal from noise.

Now, learning is not the same as earning a degree.

Now, we need to be aware of all domains. That knowledge shall not be gained just by asking ChatGPT.

A few points were discussed about parenting: we shall have 30 minutes of productive arguments with our kids. We will learn AI from the end user. We shall enable parental controls for the Internet and OTT. While using AI, be a skeptic: you are interacting with a product, and the product has a market and a company / organisation behind it that wants to earn a profit. An AI product is not your friend. The more we use LLMs, the more our brain goes unused and is lost.

We need to learn basic values like

  • Humans are not means
  • Respect the life in people

About layoffs due to AI:

  • If software engineers consider themselves coders, then AI will replace them. They shall consider themselves problem solvers.
  • On a lighter note, Pujya Aksharatit Swami mentioned that we SADHUs are easier to replace, since chat with AI is available round the clock.
  • On a lighter note, Pujya Aksharatit Swami also mentioned:

येषाम् न विद्या न तपः न दानम् न ज्ञानम् न शीलम् न गुणः न धर्मः ।।

ते मृत्यु-लोके भुवि भार-भूता मनुष्यरूपेण मृगाः चरन्ति ।

It means: Those who possess no knowledge (Vidya), no penance (Tapa), no charity (Dana), no wisdom (Jnana), no good character (Sheela), no virtues (Guna), and no righteousness (Dharma), are a burden to the earth. Although they look like humans, they roam the earth like animals in human form. 

Now this sloka applies to knowledge of AI as well :-)

During the panel discussion, the floor was open for everyone to ask questions via a WhatsApp group, which was flooded with many questions.

3. Networking

The audience was divided into many groups, and the participants had a round of introductions within their group. There was an engaging quiz in which all groups participated; each group leader responded to questions on behalf of the group.

We had delicious vegetarian, SAATTVIK food. Now some key takeaway points from the post-lunch sessions.



4. Work-Life Integration & Wellbeing

“In God we trust. All others must bring data.” - W. Edwards Deming

Here are some shocking data points

  • 83% of Indian IT professionals are burnt out
  • 73% of European IT professionals are burnt out
  • 72% are working beyond their limits

The data source is ISACA (Information Systems Audit and Control Association)

5 pillars of work life integration

1. Know your why?

What is purpose? It brings impact, mastery and autonomy. 

Our health and family should tune to work. 

2. Design your system

Define boundaries so you can protect capacity

Bring rhythms by frequent breaks. 

3. Recovery and resilience-building practices

3.1 Mindfulness and reflection. Moment-to-moment, non-reactive, non-judgmental awareness is mindfulness.

3.2 physical movement

3.3 social connections

4. Align your environment

5. Seek help early

Something about sleep

Sleep is non-negotiable.

- Sleep hygiene and nutrition

- Stop all screens 60 to 90 minutes before going to bed. This is the "digital sunset".

- Use an eye mask while sleeping.

Next topic was NSANE

Nutrition

Screen Time

Automatic health. He mentioned the book "Atomic Habits".

Notice Signal

Engage with real people

Remember 3 truths

1. Mind = body. If the body is unhealthy, the mind is unhealthy. The body can be made healthy by making the mind healthy.

2. Work-family balance is important

3. It is still not too late. 

Ask your wife what she expects of a husband:

- A rich husband?

- A healthy husband?

- A rich but unhealthy husband?

- A burnt-out husband?

5. Personal/Spiritual Fireside Chat with Pujya Santo (Saints)

Along with other relevant questions and guidance by Pujya Saints, IT layoffs were discussed again. The apparent reason for a layoff is AI; the real reason can be the poor performance of the employee. Extra hiring happened during the COVID phase, so now layoffs are inevitable. जातस्य हि ध्रुवो मृत्युः (for one who is born, death is certain). In the same way, if you have a job, you may get fired, if not today then in the future, at the age of 62 years. Even after 10 years, the present software application has no value. This is also as per SANKHYA philosophy. It inspires us to make better documentation of the product.



Summary

1. Be happy

2. Worship God. 


B.A.P.S. Prakash App


Prakash Special:

A special event was organized for IT professionals in Bengaluru


Click on the link below for more information.


https://bapsprakash.in/deeplink?data=QsfHnR6PaUp%2FlihVajHZKg%3D%3D%3AWm7u3d6zy%2FylpRGzMz6xFENQBWhBhVioEwCNJCh3MI8%3D


Disclaimer: The author has put his best effort into capturing all the points as per his understanding. It may or may not reflect the exact intention of the speakers, so any corrections are welcome. This article is not verbatim.