BAPS IT Convention
On January 18, 2026, the BAPS Bangalore temple hosted a day-long IT convention, from morning to evening, with more than 375 participants.
Here are a few takeaway points.
1. "Changing Trends of AI in technologies" by Prof. Rahul De
Prof. Rahul De is the founder and CEO of https://www.memoricai.in/. He provided a fine academic overview: the history of AI, its present state, and its future. AI is about inference, and inference means prediction, classification, and generative output.
The human brain has roughly 80 to 86 billion neurons (cells).
Evolution of AI
1. ANI: Artificial Narrow Intelligence
2. AGI: Artificial General Intelligence
3. ASI: Artificial Super Intelligence
In 1966, Professor Joseph Weizenbaum at MIT developed the first chatbot, named ELIZA. It acted like a Rogerian psychotherapist, and people liked it so much that he later had to convince them that it was not a real person, just a computer program that simulates human conversation through pattern matching and keyword substitution.
In 1980, John Searle proposed the "Chinese Room" thought experiment: a non-Chinese speaker in a room manually manipulates Chinese symbols and produces fluent responses without understanding the language. It argues that syntax (rule-following) alone cannot give a computer semantics (genuine understanding or consciousness).
Probably that is why today's GenAI has caveats like hallucination, jailbreaking, bias, privacy violations, and unfair responses. In the context of bias, he mentioned the recent movie "Humans in the Loop", available on Netflix: "An indigenous woman works as an AI data-labeler after returning to her village with her children, but soon questions the human bias in machine learning."
He shared some statistics:
- 2.5 billion prompts are handled by ChatGPT alone in a year.
- 2.4 million models are hosted on Hugging Face.
- USD 50 billion was spent on AI in 2025.
- 95% of firms fail at GenAI adoption.
- GenAI has delivered about a 15% improvement.
These numbers raise a serious question: is the spending on GenAI worth it?
He mentioned a few books and categorised AI adopters into four groups: Boomers (e.g., books by Ray Kurzweil), Doomers, Skeptics, and Critics (e.g., "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI").
2. Panel discussion on "Ways to Overcome Challenges in IT"
One discussion point was how to reduce the 95% failure rate in 2026.
We need:
- Automated task workflows
- Cross-functional aggregation across departments
- Collaboration between AI and humans
- Orchestration of AI in day-to-day work
How can you fail? Your effort (to adopt AI, to retain a job, etc.) can fail. In fact, the very definition of failure emerged after the Industrial Revolution; during the last century, "productivity" was the main focus because of it.
Other points:
Former OpenAI co-founder Ilya Sutskever has indicated that simply scaling up to trillion-parameter models will not improve AI capability further.
Maybe personalised AI will be the next big thing.
We are humans, so:
- We are always optimistic; we have hope.
- We have the ability to adapt.
- We are the creators of AI, so we are smarter than AI.
We should let go of the fear that we need to learn everything. Yes, we should learn something new every day, and take notes on it, say in NotebookLM. The Internet is flooded with AI buzzwords; we need to separate the signal from the noise.
Learning is no longer the same as earning a degree.
Now we need to be aware of all domains. That knowledge cannot be gained just by asking ChatGPT.
A few points were discussed about parenting:
- Have 30 minutes of productive arguments with your kids; we will learn AI from the end users.
- Enable parental controls for the Internet and OTT platforms.
- While using AI, be a skeptic. You are interacting with a product; the product has a market and a company/organisation behind it that wants to earn a profit. An AI product is not your friend.
- The more we rely on an LLM, the more of our own brain goes unused and is lost.
- Humans are not means.
- Respect the life in people.
On layoffs due to AI:
* If software engineers consider themselves coders, then AI will replace them. They should consider themselves problem solvers.
* On a lighter note: Pujya Aksharatit Swami mentioned that we SADHUs are even easier to replace, since chat with AI is available round the clock.
* On another lighter note, Pujya Aksharatit Swami quoted:
येषाम् न विद्या न तपः न दानम् न ज्ञानम् न शीलम् न गुणः न धर्मः ।
ते मृत्यु-लोके भुवि भार-भूता मनुष्यरूपेण मृगाः चरन्ति ।।
It means: Those who possess no knowledge (Vidya), no penance (Tapa), no charity (Dana), no wisdom (Jnana), no good character (Sheela), no virtues (Guna), and no righteousness (Dharma), are a burden to the earth. Although they look like humans, they roam the earth like animals in human form.
This sloka now applies to knowledge of AI as well :-)
During the panel discussion, the floor was open for everyone to ask questions via a WhatsApp group, which was flooded with many questions.
3. Networking
The audience was divided into many groups, and the participants had a round of introductions within each group. There was an engaging quiz in which all groups participated; each group leader responded to questions on behalf of the group.
We had delicious vegetarian, SAATTVIK food. Now, some key takeaway points from the post-lunch sessions.
4. Work-Life Integration & Wellbeing
“In God we trust. All others must bring data.” - W. Edwards Deming
Here are some shocking data points:
- 83% of Indian IT professionals are burnt out.
- 73% of European IT professionals are burnt out.
- 72% are working beyond their limits.
The data source is ISACA (Information Systems Audit and Control Association)
5 pillars of work-life integration
1. Know your why
What is your purpose? It brings impact, mastery, and autonomy.
Our health and family should be in tune with our work.
2. Design your system
Define boundaries so you can protect your capacity.
Build rhythms with frequent breaks.
3. Recovery and resilience practices
3.1 Mindfulness and reflection. Mindfulness is moment-to-moment, non-reactive, non-judgmental awareness.
3.2 Physical movement
3.3 Social connections
4. Align your environment
5. Seek help early
A few points about sleep:
- Sleep is non-negotiable.
- Mind sleep hygiene and nutrition.
- Stop all screens 60 to 90 minutes before bed: a "digital sunset".
- Use an eye mask while sleeping.
The next topic was NSANE:
- Nutrition
- Screen time
- Automatic health (he mentioned the book "Atomic Habits")
- Notice signals
- Engage with real people
Remember 3 truths:
1. Mind = body: if the body is unhealthy, the mind is unhealthy, and the body can be made healthy through a healthy mind.
2. Work-family balance is important.
3. It is still not too late.
Ask your wife what she expects in a husband:
- A rich husband?
- A healthy husband?
- A rich but unhealthy husband?
- A burnt-out husband?
5. Personal/Spiritual Fireside Chat with Pujya Santo
Along with other relevant questions and guidance from the Pujya SAINTs, IT layoffs were discussed again. The apparent reason for layoffs is AI; the real reason can be poor employee performance, or the extra hiring that happened during the COVID phase, which makes layoffs inevitable now. जातस्य हि ध्रुवो मृत्युः (for one who is born, death is certain). In the same way, if you have a job, you may get fired, if not today then in the future, at the age of 62. Even after 10 years, today's software applications will have no value; this too accords with SANKHYA philosophy. It inspires us to write better product documentation.
Summary
1. Be happy
2. Worship God.
B.A.P.S. Prakash App
Prakash Special:
A special event for IT professionals was held in Bengaluru.
Click the link below for more information.
Disclaimer: The author has put in his best effort to capture all the points as per his understanding. It may not reflect the exact intention of the speakers, so any corrections are welcome. This article is not verbatim.
MCP and Security: OAuth 2.0
This video is about MCP, ACP, and A2A, but later 11+ more protocols were identified.
MCP: its design goal was not agent-to-agent communication, but that is now happening anyway.
- MCP is also working on an agent registry.
- It differs from A2A: (1) no async messages; (2) no renegotiation.
- It is the most widely adopted protocol.
- MCP has conquered the world, much like React.js.
These protocols are about discovery, communication, and authentication over the Internet.
1. Inter-Agent protocol
1.1. Robot-Agent: CrowdES, SPPs (Spatial Population Protocols)
1.2. Human Computer: LOKA, PXP
1.3. System-Agent: Agent Protocol, LMOS (Large Model Operating System)
2. Context Oriented
2.1 MCP
Few Protocols
- Agents.JSON: a file containing API documentation for an agent, in OpenAPI format.
- ANP: Agent Network Protocol. Uses DIDs (Decentralized Identifiers, blockchain-based).
- AITP: blockchain-based; covers interaction cost.
- ACP (Agent Connect Protocol) by Cisco: like A2A, except it describes how to host and launch agents.
- ACP (Agent Communication Protocol) by IBM: like MCP. It has a registry, a distributed database of all agents. It is a fork of MCP.
- Agora: a natural-language protocol; the protocol can upgrade itself.
All these protocols lack:
1. Registry
2. Authorization
3. Reputation
https://www.youtube.com/watch?v=kqB_xML1SfA
=============================
OAuth is a bunch of specs.
OAuth 2.0 = RFC 6749 (2012) + RFC 6750
OAuth 2.1 = authorization code flow + PKCE, client credentials (may not be useful here), token in HTTP header, token in POST form body. Refer to MCP GitHub issue 830.
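PKCE, which OAuth 2.1 makes mandatory for the authorization code flow, is easy to sketch: the client derives an S256 code challenge from a random code verifier, per RFC 7636. A minimal sketch (function name is illustrative):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> a 43-character base64url verifier (within the 43-128 char limit)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(code_verifier)), without '=' padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends code_challenge with the authorization request, and later
# proves possession by sending code_verifier with the token request.
```

The authorization server recomputes the challenge from the presented verifier and compares, which is what protects the code from interception.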
OpenID Connect identifies the user; it provides an ID token.
RFC 9728: a single well-known URL for the MCP server, pointing to a file that defines the server's metadata, including authorization server metadata as per RFC 8414.
RFC 7591: dynamic client registration, to obtain a client ID and client secret.
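As a rough sketch of what the RFC 9728 protected resource metadata (served at /.well-known/oauth-protected-resource) might look like, with placeholder URLs and scopes:

```json
{
  "resource": "https://mcp.example.com",
  "authorization_servers": ["https://auth.example.com"],
  "bearer_methods_supported": ["header"],
  "scopes_supported": ["mcp:read", "mcp:tools"]
}
```

The MCP client fetches this file to discover which authorization server(s) to talk to, then pulls that server's own metadata per RFC 8414.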
https://www.youtube.com/watch?v=mYKMwZcGynw
==========================================
Keycloak and SPIRE
CNCF IAM whitepaper (yet to be published)
5 principles:
1. mTLS
2. OAuth2 token exchange (RFC 8693)
3. OIDC client authentication with SPIFFE SVID
4. OAuth2 token validation
5. PDP-based authorization decisions
PDP, PAP, PIP, PEP
- PAP (Policy Administration Point): The user interface/management layer where policies (rules) are created, stored, and managed centrally.
- PDP (Policy Decision Point): The "brain" that evaluates a user's request against policies from the PAP, using attributes from PIP, to return an "Allow" or "Deny" decision.
- PIP (Policy Information Point): Fetches necessary attributes (like user roles, data from databases/LDAP) needed by the PDP to make a decision.
- PEP (Policy Enforcement Point): Sits in front of the protected resource, intercepts requests, sends them to the PDP for a decision, and enforces that decision.
1. A user requests a resource (e.g., "view my profile").
2. The PEP intercepts the request and the OAuth access token.
3. The PEP sends the request details (user context from token, resource, action) to the PDP.
4. The PDP queries the PIP (e.g., a database) for attributes like user's department.
5. The PDP evaluates policies (managed by PAP) and returns "Permit/Deny" to the PEP.
6. The PEP allows or blocks the user's request based on the decision.
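The six-step flow above can be sketched with toy in-memory components; all names, attributes, and policies here are invented for illustration, not any real PDP product's API:

```python
# Toy illustration of the PEP / PDP / PIP / PAP flow described above.

# PAP: centrally managed policies (here, just a list of rules).
POLICIES = [
    # rule: only members of the "hr" department may view salary records
    {"resource": "salary", "action": "view", "required_department": "hr"},
]

# PIP: attribute source (a stand-in for a database or LDAP lookup).
USER_ATTRIBUTES = {"alice": {"department": "hr"}, "bob": {"department": "sales"}}

def pdp_decide(user: str, resource: str, action: str) -> str:
    """PDP: evaluate the request against policies, using attributes from the PIP."""
    attrs = USER_ATTRIBUTES.get(user, {})
    for rule in POLICIES:
        if rule["resource"] == resource and rule["action"] == action:
            ok = attrs.get("department") == rule["required_department"]
            return "Permit" if ok else "Deny"
    return "Deny"  # default-deny when no policy matches

def pep(user: str, resource: str, action: str) -> str:
    """PEP: intercept the request, ask the PDP, and enforce its decision."""
    decision = pdp_decide(user, resource, action)
    return f"200 OK ({resource})" if decision == "Permit" else "403 Forbidden"

print(pep("alice", "salary", "view"))  # alice is in hr -> permitted
print(pep("bob", "salary", "view"))    # bob is not -> blocked
```

In a real deployment the PEP would extract the user context from the OAuth access token rather than a function argument, and the PDP would be a separate service (e.g., behind Keycloak authorization services).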
https://www.youtube.com/watch?v=S6qF0N5D1tM
Sanskrit speech at the closing ceremony of a Spoken Sanskrit workshop
A 10-day Spoken Sanskrit Workshop (Sanskrit Sambhaasan Shibir) was held from July 14, 2025 to July 23, 2025 at BEML Balaji Temple and at Skylark Arcadia, Bangalore, by Samskrita Bharati, Marathahalli bhaga, Bangalore, India.
The closing ceremony of the Spoken Sanskrit workshops was held on 26th July 2025.
Sanskrit speech by chief guest Shri Manish Panchmatia
Greetings!
A warm welcome to everyone present—teachers, volunteers from Samskrita Bharati, students, and all those passionate about Sanskrit. After bowing down to each of you, I am honored to begin my speech.
First, let me ask: “How was the Spoken Sanskrit Workshop?” Was it “Good”? “Very good”? I’m certain it was a positive experience. Whenever Samskrita Bharati organizes a workshop, the teachers impart their knowledge with dedication, don’t they? Over the past ten days, each one of you demonstrated incredible determination to learn Sanskrit. No matter what, you attended all the classes and gave your best effort. Kudos to you all for your commitment and enthusiasm. You sang songs in Sanskrit, offered prayers, performed dramas, and narrated stories—all in Sanskrit. I am truly impressed by your efforts and progress.
As you may know, NASA called Sanskrit the most suitable language for computers. I won’t repeat those facts, but I do want to share a story about Sanskrit’s vast vocabulary. Just now, we sang about great poets—Vyasa, Bhasa, Kalidasa, Banabhatta—yet there is another renowned poet: Dandi, who composed “Dasakumaracharitam,” the story of ten princes. One prince, wounded in the lower lip, could not pronounce certain sounds: "pa," "fa," "ba," "bha," and "ma." Dandi skillfully chose synonymous words without those sounds for all of that prince’s dialogue. This was possible only because Sanskrit’s vocabulary is so rich.
Sanskrit grammar is exceptionally precise; in fact, the very meaning of the word “Sanskrit” is “well-formed” or “perfect.” Sanskrit has maintained its grammar rules since Vedic times—pronunciation, grammar, language rules—all unchanged. Even though new inventions like mobile phones have appeared, in Sanskrit, we create new words effortlessly. “Mobile phones” in Sanskrit is “Jangam Door-Vani”—a moving telephone. Artificial intelligence becomes “Krutrim Buddhi.” Thanks to Sanskrit’s structure, new words can always be formed. This habit of following rules brings discipline to our lives.
We all are Indians. Sanskrit is deeply imprinted in our consciousness. Just as our childhood photos make us happy, speaking Sanskrit awakens ancient impressions within us—it fills the soul with joy. Throughout these ten days, you learned Sanskrit joyfully, supported by equally joyful teachers.
How do we gain these benefits? By giving your 100% effort to learning Sanskrit. What does 100% effort mean? Here is another story, from the Mahabharata. Karna is renowned as the greatest giver. Once, Arjuna asked Krishna, "Why is Karna more famous for charity than my brother Yudhishthira?" Krishna replied, "Let's see for ourselves." Disguised as BRAHMINs, Arjuna and Krishna visited Yudhishthira and requested sandalwood for a yagna. Yudhishthira responded, "OK." Then what did Yudhishthira do? He searched. Remember the story of the crow searching for water? Here, Yudhishthira was searching for sandalwood. He looked here, he looked there: right side, left side, front side, back side. He looked for sandalwood everywhere, and it was nowhere. Then he went outside and looked in the garden. He searched everywhere; the sandalwood was nowhere. He returned and told the BRAHMIN: "Sorry, sir. The sandalwood is nowhere to be found. I will surely make a little more effort to arrange for sandalwood. Please come tomorrow, and I will give it." The BRAHMIN said, "OK." Then they visited Karna with the same request. Karna eagerly searched, and when he could not find sandalwood, he noticed his door was made of sandalwood. Without hesitation, he dismantled it and gave it to them. True charity is giving with 100% effort. It is the same with learning Sanskrit: when you give your wholehearted effort, you will gain both knowledge and discipline.
Further, speaking Sanskrit even helps with breathing exercises—“ma” and “ha” are frequently used, which naturally leads to “pranayama.”
This spoken Sanskrit workshop is just the beginning. Explore further—enroll in correspondence Sanskrit learning courses, Gita Sopanam etc. There are many Sanskrit books here, for sale. Buy them, read them and then, become Sanskrit teachers yourselves! Spread and promote Sanskrit to others just as your teachers did for you. You all know Swami Vivekananda. Right? "Yes". On 4th July, it was his death anniversary. On the last day of life, he taught Sanskrit to the students. You all know that? It is in his biography. Let us honor this legacy by teaching and learning Sanskrit.
Having spoken much about Sanskrit, let me turn briefly to our IT professionals. When we hear the word “language,” we often think of C, C++, Java, Python, NodeJS, ReactJS, and so on. Recently, I attended a "PyKrit" (Python + Sanskrit) workshop at Aksharam, Samskrita Bharati’s center. This workshop was not for IT people. It was for Sanskrit scholars. I saw Sanskrit scholars name Python functions in Sanskrit, such as “YANA SANDHI” (a grammar topic). We truly can create software for Sanskrit grammar, using modern programming languages like Python, and even coding in Sanskrit.
Lastly, my wish: While we use AI/GenAI tools like ChatGPT, we mostly interact in English. Wouldn’t it be wonderful to have a large language model (LLM) in Sanskrit? Imagine asking questions in Sanskrit and receiving answers, back in Sanskrit, from AI tools, such as ChatGPT. That is my hope for the future.
Best wishes to everyone, and thank you all.
LLMOps
For an AI application, we need automation of:
1. Data preparation
2. Model tuning
3. Deployment
4. Maintenance, and
5. Monitoring
- Managing dependencies adds complexity.
E2E workflow for an LLM-based application.
MLOps framework
1. data ingestion
2. data validation
3. data transformation
4. model
5. model analysis
6. serving model
7. logging.
LLM System Design
Broader design of the E2E app, including front end, back end, data engineering, etc.
Chain multiple LLMs together.
* Grounding: provide additional information/facts along with the prompt to the LLM.
* Track history: how the conversation went in the past.
LLM App
User input -> Preprocessing -> Grounding -> Prompt to LLM model -> LLM response -> Grounding -> Post-processing + Responsible AI -> Final output to user
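The stages above can be sketched as plain functions chained together. The grounding store and the model call are stubbed out here; a real app would call an actual LLM API and a retrieval system:

```python
def preprocess(user_input: str) -> str:
    # e.g., trim whitespace, normalize, strip PII
    return user_input.strip()

def ground(query: str) -> str:
    # Grounding: attach additional facts to the prompt (stubbed retrieval).
    facts = {"capital of france": "Paris is the capital of France."}
    return facts.get(query.lower(), "")

def call_llm(prompt: str) -> str:
    # Stub standing in for the actual LLM call.
    return f"[LLM answer based on: {prompt}]"

def postprocess(response: str) -> str:
    # Post-processing + Responsible AI checks (stubbed content filter).
    banned = ["unsafe"]
    return "[blocked]" if any(w in response for w in banned) else response

def app(user_input: str) -> str:
    query = preprocess(user_input)
    prompt = f"{ground(query)}\nQuestion: {query}"
    return postprocess(call_llm(prompt))

print(app("  capital of France  "))
```

Each stage is independently testable, which is the point of drawing the pipeline as discrete boxes.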
Model Customization
1. Data Prep
2. Model Tuning
3. Evaluate
It is an iterative process.
LLMOps Pipeline (Simplified)
1. Data preparation and versioning (for training data)
2. Supervised tuning (pipeline)
3. Artifacts are generated: a config and a workflow.
- Config = configuration for the workflow, e.g., which dataset to use
- Workflow = the steps
4. Pipeline execution
5. Deploy LLM
6. Prompting and predictions
7. Responsible AI
Orchestration = steps 1 + 2. Orchestration decides what comes first, what comes next, and so on: assurance of the sequence of steps.
Automation = steps 4 + 5.
Fine-tuning the model using instructions (hints):
1. rules
2. step by step
3. procedure
4. example
File formats
1. JSONL (JSON Lines): human-readable; for small and medium-sized datasets.
2. TFRecord
3. Parquet: for large and complex datasets.
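JSONL is simply one JSON object per line, which makes it easy to stream and append. A minimal read/write round trip (the field names are arbitrary choices for a tuning dataset):

```python
import json

examples = [
    {"input_text": "Translate 'cat' to French", "output_text": "chat"},
    {"input_text": "Translate 'dog' to French", "output_text": "chien"},
]

# Write: one JSON object per line.
with open("tune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Read: parse each line independently (so huge files can be streamed).
with open("tune_data.jsonl") as f:
    loaded = [json.loads(line) for line in f]

assert loaded == examples
```

Because each line stands alone, a corrupt record breaks only that line, not the whole file, unlike a single large JSON array.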
MLOps Workflow for LLM
1. Apache Airflow
2. KubeFlow
DSL = Domain Specific Language
Decorators:
@dsl.component
@dsl.pipeline
Next, the compiler will generate a YAML file for the pipeline.
YAML file has
- components
- deploymentSpec
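The decorator-plus-compiler pattern can be imitated in a few lines of plain Python. This is only an illustration of the idea (the real kfp SDK has a far richer API and emits YAML, not a dict): decorated functions become "components", and a compile step emits a spec with the two sections noted above:

```python
# Toy imitation of a KFP-style DSL: decorators register components,
# and a "compiler" emits a pipeline spec.
COMPONENTS = {}

def component(fn):
    """Register a function as a pipeline component (imitating @dsl.component)."""
    COMPONENTS[fn.__name__] = fn
    return fn

def compile_pipeline(steps):
    """Emit a minimal spec with 'components' and 'deploymentSpec' sections."""
    return {
        "components": {name: {"executor": name} for name in steps},
        "deploymentSpec": {"executors": list(steps)},
    }

@component
def prepare_data():
    return "dataset"

@component
def tune_model():
    return "tuned-model"

spec = compile_pipeline(["prepare_data", "tune_model"])
print(spec["deploymentSpec"])
```

The real compiler does the same job at heart: it turns decorated Python into a declarative spec that an engine (Kubernetes, Vertex AI Pipelines) can execute without the Python runtime.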
Pipeline can be run on
- K8s
- Vertex AI Pipelines executes the pipeline in a serverless environment
PipelineJob takes inputs
1. Template path: pipeline.yaml
2. Display name
3. Parameters
4. Location: Data center
5. Pipeline root: temporary file location
Open Source Pipeline
https://us-kfp.pkg.dev/ml-pipeline/large-language-model-pipelines/tune-large-model/v2.0.0
Deployment
Batch and REST:
1. Batch, e.g., processing customer reviews. Not real time.
2. REST API, e.g., chat. More like real time.
* pprint is a library to pretty-print output.
The LLM provides output along with 'safetyAttributes'
- e.g., blocked
* We can also find citations in the output of the LLM.
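As a sketch of reading such a response, assume a response dict shaped roughly like a Vertex AI text-model prediction; the field names used here (safetyAttributes, blocked, citationMetadata) are assumptions based on that API's response shape, so verify against the current docs:

```python
# Hypothetical LLM response payload, shaped like a Vertex AI text prediction.
response = {
    "predictions": [{
        "content": "Grass is green.",
        "safetyAttributes": {"blocked": False, "categories": ["Science"]},
        "citationMetadata": {"citations": [{"url": "https://example.com/source"}]},
    }]
}

prediction = response["predictions"][0]

if prediction["safetyAttributes"]["blocked"]:
    print("Response was blocked by safety filters.")
else:
    print(prediction["content"])
    # Citations, when present, tell us which sources the answer drew from.
    for cite in prediction.get("citationMetadata", {}).get("citations", []):
        print("citation:", cite["url"])
```

Checking the blocked flag before showing content is the Responsible AI step from the pipeline diagram above, applied on the serving side.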
===========
Vertex AI SDK
https://cloud.google.com/vertex-ai
BigQuery
https://cloud.google.com/bigquery
sklearn
To split the data 80/20 for training and evaluation.
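The usual tool for this is sklearn's train_test_split, but the idea is just a shuffled 80/20 partition, which can be shown with the standard library alone:

```python
import random

def split_80_20(rows, seed=42):
    """Shuffle and split rows into ~80% train / 20% evaluation."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # deterministic shuffle for reproducibility
    cut = int(len(rows) * 0.8)
    return rows[:cut], rows[cut:]

data = list(range(100))
train, evaluation = split_80_20(data)
print(len(train), len(evaluation))  # 80 20
```

Fixing the seed makes the split reproducible across runs, which matters when comparing tuned models.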
Building AI/ML apps in Python with BigQuery DataFrames | Google Cloud Blog
===========
K8s GW API
Examples:
Istio, Kong, Envoy, Gloo, Traefik, kgateway, Contour, NGINX, and many more, as per https://gateway-api.sigs.k8s.io/implementations/#gateway-controller-implementation-status
Protocols: gRPC, HTTP/2, and WebSockets
The structure of a Kubernetes Custom Resource Definition (CRD) manifest file is referred to as an API, because it mirrors the structure of the API in the Kubernetes control plane.
Migration from ingress https://gateway-api.sigs.k8s.io/guides/migrating-from-ingress/#migrating-from-ingress
Extension points
Ingress has 2 extension points:
1. Annotations
2. The resource backend
Primary extension points in the GW API:
1. External references
1.1 HTTPRouteFilter
1.2 BackendObjectReference
1.3 SecretObjectReference
Here, the GW API resource points outward to an 'external reference'.
2. Custom implementations
e.g., the RegularExpression type of 'HTTPPathMatch'
3. Policies
A "Policy Attachment" is a specific type of metaresource.
Here, the policy points inward to a GW API resource.
- The GW API (Gateway API) is not an API gateway.
- GAMMA (Gateway API for Mesh Management and Administration) initiative
- A 'waypoint proxy' is a proxy server deployed inside the mesh for E-W traffic. Its destination scope can be (1) all destination services in a namespace, (2) a few service(s) within a namespace, or (3) destination services from multiple namespaces.
1. GatewayClass
- It is cluster-scoped, so it has no namespace.
- Annotations on the GatewayClass carry vendor-specific configuration.
- It defines the controller's capabilities.
2. Gateway
- Each Gateway defines one or more listeners, which are the ingress points to the cluster.
- You can control which routes can attach to a listener (allowedRoutes) by way of their namespace; this defaults to the same namespace as the Gateway.
- Advanced features like:
-- request mirroring,
-- direct response injection,
-- fine-grained traffic metrics, and
-- traffic split
- In the Istio APIs, a Gateway configures an existing gateway Deployment/Service that has already been deployed. In the Gateway API, the Gateway resource both configures and deploys a gateway.
- One can attach an HPA and a PodDisruptionBudget to the gateway deployment.
3. HTTPRoute:
- Matches any combination of hostname, path, header values, HTTP method, and query parameters:
- Paths (e.g., /headers, /status/*)
- Headers (e.g., User-Agent: Mobile)
- Query parameters (e.g., ?version=beta)
- Methods (e.g., GET, POST)
- The hostname (optional) in the HTTPRoute must match a hostname at Gateway -> Listener -> hostname.
- The Gateway to use is referenced by name and namespace in parentRefs.
- The backendRefs define the service to route the request to for a given match.
- Advanced pattern matching and filtering on arbitrary headers as well as paths.
Filters:
1. RequestRedirect : E.g. Redirect HTTP traffic to HTTPS
2. URLRewrite
3. <Request|Response>HeaderModifier
4. RequestMirror
5. CORS
6. ExtensionRef for custom filter. E.g. DirectResponse
filters:
- type: ExtensionRef
  extensionRef:
    name: direct-response
    group: gateway.kgateway.dev
    kind: DirectResponse
- In the Istio VirtualService, all protocols are configured within a single resource. In the Gateway API, each protocol type has its own resource, such as HTTPRoute and TCPRoute.
- Traffic splitting is done by specifying multiple backendRefs, each with a weight.
- timeout, retry, sessionPersistence: session persistence (sticky sessions, or strong session affinity) ensures that a client's requests are consistently routed to the same backend instance for the duration of a session, based on a cookie or a header.
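A sketch of such a weighted split (resource names and hostnames here are made up; the field layout follows the Gateway API HTTPRoute spec):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews-split
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - "shop.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /reviews
    backendRefs:
    - name: reviews-v1      # 90% of matched traffic
      port: 8080
      weight: 90
    - name: reviews-v2      # 10% canary
      port: 8080
      weight: 10
```

Adjusting the weights over time is the standard canary rollout pattern with the Gateway API.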
- Route and Gateway can be in different namespaces. If the Gateway is defined with
allowedRoutes:
  namespaces:
    from: Same
then the Route and Gateway must be in the same namespace. We can also allow a group of namespaces by labeling them and using a label selector at the Gateway resource:
allowedRoutes:
  namespaces:
    from: Selector
    selector:
      matchLabels:
        self-serve-ingress: "true"
* 4. TLSRoute
5. GRPCRoute
* 6. TCPRoute
(* = not yet v1 / GA)
Details: https://gateway-api.sigs.k8s.io/reference/spec/
If you are using a service mesh, it would be highly desirable to use the same API resources to configure both ingress traffic routing and internal traffic, similar to the way Istio uses VirtualService to configure route rules for both. Fortunately, the Kubernetes Gateway API is working to add this support. Although not as mature as the Gateway API for ingress traffic, an effort known as the Gateway API for Mesh Management and Administration (GAMMA) initiative is underway to make this a reality and Istio intends to make Gateway API the default API for all of its traffic management in the future.
https://gateway-api.sigs.k8s.io/mesh/
The Gateway controller is for North-South traffic; the mesh controller is for East-West traffic.
7. ReferenceGrant: for cross-namespace reference.
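For example, to let HTTPRoutes in a namespace frontend reference Services in a namespace backend (both names invented), a ReferenceGrant is created in the target namespace:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-frontend-routes
  namespace: backend          # lives in the namespace being referenced
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: frontend
  to:
  - group: ""                 # core API group (Service)
    kind: Service
```

Without this grant, a cross-namespace backendRef is rejected, which prevents one team from silently routing traffic into another team's namespace.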
8. Inference Extension
K8s offers the following mechanisms to optimize GPU usage for concurrent processing:
- time slicing,
- Multi-Instance GPU (MIG) partitioning,
- virtual GPUs, and
- NVIDIA MPS.
Effective GPU utilization depends on:
- hardware allocation;
- how inference requests are routed across model-serving instances; and
- how inference requests are load-balanced across model-serving instances.
Simple load-balancing strategies often fall short in handling AI workloads effectively, leading to suboptimal GPU usage and increased latency.
Inference requests vs. traditional web traffic:
- They often take much longer to process, sometimes several seconds (or even minutes!) rather than milliseconds.
- They have significantly larger payloads (i.e., with RAG, multi-turn chats, etc.), so a single request can consume an entire GPU, making scheduling decisions far more impactful than for standard API workloads. As a result, requests need to queue up while others are being processed.
AI models are stateful:
- They maintain in-memory caches, such as KV caches for prompt tokens.
- They load fine-tuned adapters like LoRA to customize responses for a specific user/organisation.
So routing decisions are based on
- current state (in-memory caches, adapters)
- available memory, and
- request queue depth.
So, inference-aware routing is achieved through:
8.1 InferenceModel
- maps a user-facing model name to a backend model
- traffic splitting between fine-tuned adapters
- priority based on real-time interaction vs. best-effort batch jobs
8.2 InferencePool
- It is for platform operators managing model-serving infrastructure.
- A group of model-serving instances.
- A specialized backend service for AI workloads.
- It manages:
-- inference-aware endpoint selection, and
-- intelligent routing decisions based on real-time metrics such as
--- request queue depth and
--- GPU memory availability.
* An InferencePool is referenced from an HTTPRoute's backendRefs.
* An InferenceModel has a poolRef linking it to an InferencePool.
* An InferencePool has an extensionRef (EPP = Endpoint Picker). If the InferencePool is named xyz, then the extensionRef is "xyz-endpoint-picker". It is similar to a K8s Service, as it also has a selector and a target port.
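A rough sketch of how the two resources pair up. The apiVersion and field names follow the Gateway API Inference Extension as I understand it, so treat them as assumptions and verify against the spec; all names are placeholders:

```yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llm-pool
spec:
  selector:
    app: vllm-server               # the model-serving Pods
  targetPortNumber: 8000
  extensionRef:
    name: llm-pool-endpoint-picker # the EPP service making routing decisions
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: chat-model
spec:
  modelName: chat                  # user-facing model name
  criticality: Critical            # vs. a best-effort batch job
  poolRef:
    name: llm-pool
```

An HTTPRoute then points its backendRefs at the InferencePool instead of a plain Service, and the endpoint picker chooses the instance using queue depth and GPU memory metrics.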
9. DirectResponse
10. Backend
- For external endpoints.




