Sanskrit speech at closing ceremony of spoken Sanskrit workshop
Ten-day Spoken Sanskrit workshops (Sanskrit Sambhaasan Shibir) were held from July 14, 2025 to July 23, 2025 at BEML Balaji Temple and at Skylark Arcadia, Bangalore, by Samskrita Bharati, Marathahalli bhaga, Bangalore, India.
Spoken Sanskrit workshops - closing ceremony on 26th July 2025.
Sanskrit speech by chief guest Shri Manish Panchmatia
Greetings!
A warm welcome to everyone present—teachers, volunteers from Samskrita Bharati, students, and all those passionate about Sanskrit. After bowing down to each of you, I am honored to begin my speech.
First, let me ask: “How was the Spoken Sanskrit Workshop?” Was it “Good”? “Very good”? I’m certain it was a positive experience. Whenever Samskrita Bharati organizes a workshop, the teachers impart their knowledge with dedication, don’t they? Over the past ten days, each one of you demonstrated incredible determination to learn Sanskrit. No matter what, you attended all the classes and gave your best effort. Kudos to you all for your commitment and enthusiasm. You sang songs in Sanskrit, offered prayers, performed dramas, and narrated stories—all in Sanskrit. I am truly impressed by your efforts and progress.
As you may know, NASA called Sanskrit the most suitable language for computers. I won’t repeat those facts, but I do want to share a story about Sanskrit’s vast vocabulary. Just now, we sang about great poets—Vyasa, Bhasa, Kalidasa, Banabhatta—yet there is another renowned poet: Dandi, who composed “Dasakumaracharitam,” the story of ten princes. One prince, wounded in the lower lip, could not pronounce certain sounds: "pa," "fa," "ba," "bha," and "ma." Dandi skillfully chose synonymous words without those sounds for all of that prince’s dialogue. This was possible only because Sanskrit’s vocabulary is so rich.
Sanskrit grammar is exceptionally precise; in fact, the very meaning of the word “Sanskrit” is “well-formed” or “perfect.” Sanskrit has maintained its grammar rules since Vedic times—pronunciation, grammar, language rules—all unchanged. Even though new inventions like mobile phones have appeared, in Sanskrit, we create new words effortlessly. “Mobile phones” in Sanskrit is “Jangam Door-Vani”—a moving telephone. Artificial intelligence becomes “Krutrim Buddhi.” Thanks to Sanskrit’s structure, new words can always be formed. This habit of following rules brings discipline to our lives.
We all are Indians. Sanskrit is deeply imprinted in our consciousness. Just as our childhood photos make us happy, speaking Sanskrit awakens ancient impressions within us—it fills the soul with joy. Throughout these ten days, you learned Sanskrit joyfully, supported by equally joyful teachers.
How do we gain these benefits? By giving 100% effort to learning Sanskrit. What does 100% effort mean? Here is another story, from the Mahabharata. Karna is renowned as the greatest giver. Once, Arjuna asked Krishna, "Why is Karna more famous for charity than my brother Yudhishthira?" Krishna replied, "Let's see for ourselves." Disguised as Brahmins, Arjuna and Krishna visited Yudhishthira and requested sandalwood for a yagna. Yudhishthira said, "OK." Then what did he do? He searched. Remember the story of the crow searching for water? Here, Yudhishthira was searching for sandalwood. He looked here, he looked there: right side, left side, front, back. He looked for sandalwood everywhere, then went outside and searched the garden. The sandalwood was nowhere to be found. He returned and told the Brahmins, "Sorry, sirs. I could not find sandalwood. I will surely make a little more effort to arrange it. Please come tomorrow, and I will give it to you." The Brahmins said, "OK." Then they visited Karna with the same request. Karna eagerly searched, and when he could not find sandalwood, he noticed that his door was made of sandalwood. Without hesitation, he dismantled it and gave it to them. True charity is giving with 100% effort. It is the same with learning Sanskrit: when you give your wholehearted effort, you will gain both knowledge and discipline.
Further, speaking Sanskrit even helps with breathing exercises—“ma” and “ha” are frequently used, which naturally leads to “pranayama.”
This spoken Sanskrit workshop is just the beginning. Explore further: enroll in correspondence Sanskrit learning courses (Gita Sopanam, etc.). There are many Sanskrit books here for sale. Buy them, read them, and then become Sanskrit teachers yourselves! Spread and promote Sanskrit to others just as your teachers did for you. You all know Swami Vivekananda, right? July 4 was his death anniversary. On the last day of his life, he taught Sanskrit to his students; it is in his biography. Let us honor this legacy by teaching and learning Sanskrit.
Having spoken much about Sanskrit, let me turn briefly to our IT professionals. When we hear the word “language,” we often think of C, C++, Java, Python, NodeJS, ReactJS, and so on. Recently, I attended a "PyKrit" (Python + Sanskrit) workshop at Aksharam, Samskrita Bharati’s center. This workshop was not for IT people. It was for Sanskrit scholars. I saw Sanskrit scholars name Python functions in Sanskrit, such as “YANA SANDHI” (a grammar topic). We truly can create software for Sanskrit grammar, using modern programming languages like Python, and even coding in Sanskrit.
Lastly, my wish: While we use AI/GenAI tools like ChatGPT, we mostly interact in English. Wouldn’t it be wonderful to have a large language model (LLM) in Sanskrit? Imagine asking questions in Sanskrit and receiving answers, back in Sanskrit, from AI tools, such as ChatGPT. That is my hope for the future.
Best wishes to everyone, and thank you all.
LLMOps
For AI applications, we need automation of
1. Data preparation
2. model tuning
3. Deployment
4. Maintenance and
5. Monitoring
- Managing dependencies adds complexity.
E2E workflow for LLM based application.
MLOps framework
1. data ingestion
2. data validation
3. data transformation
4. model training
5. model analysis
6. serving model
7. logging.
LLM System Design
broader design of an E2E app, including front end, back end, data engineering, etc.
Chain multiple LLMs together
* Grounding: provides additional information/facts along with the prompt to the LLM.
* Track history: how it worked in the past.
LLM App
User input -> Preprocessing -> Grounding -> Prompt goes to LLM model -> LLM response -> Grounding -> Post-processing + Responsible AI -> Final output to user.
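The flow above can be sketched as plain functions. This is a toy illustration only: the function names and the canned "model" reply are assumptions, not a real LLM call.

```python
def preprocess(user_input: str) -> str:
    # Normalize the raw user input (trim whitespace).
    return user_input.strip()

def ground(prompt: str, facts: list) -> str:
    # Grounding: attach additional facts/context to the prompt.
    context = "\n".join("- " + fact for fact in facts)
    return "Context:\n" + context + "\n\nQuestion: " + prompt

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned answer.
    return "ANSWER(" + str(len(prompt)) + " chars of prompt seen)"

def postprocess(response: str) -> str:
    # Post-processing + Responsible AI checks (here: a trivial word filter).
    if "unsafe" in response.lower():
        return "[filtered]"
    return response

def llm_app(user_input: str, facts: list) -> str:
    # End-to-end flow: preprocess -> ground -> LLM -> postprocess.
    prompt = preprocess(user_input)
    grounded = ground(prompt, facts)
    return postprocess(call_llm(grounded))

print(llm_app("  What is Nephio?  ", ["Nephio is an LF project."]))
```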
Model Customization
1. Data Prep
2. Model Tuning
3. Evaluate
It is an iterative process.
LLMOps Pipeline (Simplified)
1. Data Preparation and versioning (for training data)
2. Supervised tuning (pipeline)
3. Artifacts are generated: config and workflow
- Config = configuration for the workflow, e.g. which dataset to use
- Workflow = the steps
4. Pipeline execution
5. Deploy LLM
6. Prompting and predictions
7. Responsible AI
Orchestration = steps 1 + 2. Orchestration defines what runs first, what runs next, and so on: it assures the sequence of steps.
Automation = 4 + 5
Fine-tuning the model using instructions (hints)
1. rules
2. step by step
3. procedure
4. example
File formats
1. JSONL: JSON Lines. Human-readable. For small and medium-sized datasets.
2. TFRecord
3. Parquet: for large and complex datasets.
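JSONL stores one JSON object per line, which makes it easy to stream and append. A minimal sketch (the field names are made up):

```python
import json

records = [
    {"prompt": "What is Sanskrit?", "completion": "An ancient language."},
    {"prompt": "What is MLOps?", "completion": "Operations for ML systems."},
]

# Write: one JSON object per line.
with open("tuning_data.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read back line by line (no need to load the whole file at once).
with open("tuning_data.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

print(loaded[0]["prompt"])  # What is Sanskrit?
```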
MLOps Workflow for LLM
1. Apache Airflow
2. Kubeflow
DSL = Domain Specific Language
Decorator
@dsl.component
@dsl.pipeline
Next, the compiler will generate a YAML file for the pipeline
YAML file has
- components
- deploymentSpec
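The decorator idea (in the real KFP SDK the decorators are `@dsl.component` and `@dsl.pipeline`) can be mimicked in plain Python to show how a compiler can turn decorated functions into a workflow spec. This is a toy sketch, not the actual KFP implementation:

```python
# Toy mimic of a KFP-style DSL: decorators register steps, and a
# "compiler" emits a dict that could be serialized to pipeline YAML.
components = {}

def component(fn):
    # Register the function as a pipeline component.
    components[fn.__name__] = fn
    return fn

def pipeline(fn):
    # Mark the function as the pipeline entry point.
    fn.is_pipeline = True
    return fn

@component
def prepare_data():
    return "dataset"

@component
def tune_model(data):
    return "model tuned on " + data

@pipeline
def my_pipeline():
    data = prepare_data()
    return tune_model(data)

def compile_pipeline(p):
    # Emit a minimal spec echoing the sections a compiled
    # pipeline YAML contains: components and deploymentSpec.
    return {"components": sorted(components), "deploymentSpec": p.__name__}

spec = compile_pipeline(my_pipeline)
print(spec)  # {'components': ['prepare_data', 'tune_model'], 'deploymentSpec': 'my_pipeline'}
```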
Pipeline can be run on
- K8s
- Vertex AI Pipelines execute the pipeline in a serverless environment
PipelineJob takes inputs
1. Template path: pipeline.yaml
2. Display name
3. Parameters
4. Location: Data center
5. pipeline root: temp file location
Open Source Pipeline
https://us-kfp.pkg.dev/ml-pipeline/large-language-model-pipelines/tune-large-model/v2.0.0
Deployment
Batch and REST
1. Batch, e.g. customer reviews. Not real time.
2. REST API, e.g. chat. More like a real-time library.
* pprint is a library to format output
The LLM provides output and 'safetyAttributes'
- blocked
* We can also find citations in the output of the LLM
===========
Vertex AI SDK
https://cloud.google.com/vertex-ai
BigQuery
https://cloud.google.com/bigquery
sklearn
To split data 80%-20% for training and evaluation.
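The standard tool for this is sklearn's `train_test_split`; a dependency-free sketch of the same 80/20 idea, seeded for reproducibility:

```python
import random

def split_80_20(rows, seed=42):
    # Shuffle a copy, then cut at 80% for training, 20% for evaluation.
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * 0.8)
    return rows[:cut], rows[cut:]

data = list(range(100))
train, test = split_80_20(data)
print(len(train), len(test))  # 80 20
```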
Building AI/ML apps in Python with BigQuery DataFrames | Google Cloud Blog
===========
K8s GW API
Examples:
Istio, Kong Gateway, Envoy Gateway, Gloo, Traefik, kgateway, Contour, NGINX, and many more, as per https://gateway-api.sigs.k8s.io/implementations/#gateway-controller-implementation-status
Protocols: gRPC, HTTP/2, and WebSockets
The structure of a Kubernetes Custom Resource Definition (CRD) or manifest file is referred to as an API, because it reflects the structure of the API in the Kubernetes control plane.
Migration from ingress https://gateway-api.sigs.k8s.io/guides/migrating-from-ingress/#migrating-from-ingress
extension points
Ingress has 2 extension points:
1. annotations
2. resource endpoint
primary extension points in GW API:
1. External references
1.1 HTTP Route Filter
1.2 Backend Object Reference
1.3 Secret Object Reference
Here the GW API references an 'external reference'.
2. Custom implementations
e.g. The RegularExpression type of the 'HTTP Path Match'
3. Policies
A "Policy Attachment" is a specific type of metaresource.
Here, the policy references the GW API.
- The GW API (Gateway API) is not an API GW (API gateway).
- GAMMA (Gateway API for Mesh Management and Administration) initiative
- A 'waypoint proxy' is a proxy server deployed inside the mesh for E-W traffic. Its destination scope can be (1) all destination services in a namespace, (2) a few service(s) within a namespace, or (3) destination services from multiple namespaces.
1. GatewayClass
- It is at the cluster level, so it has no namespace.
- Annotations at the GatewayClass are for vendor-specific behavior.
- It defines the controller's capabilities.
2. Gateway
- Each Gateway defines one or more listeners, which are the ingress points to the cluster
- You can control which routes can be attached to this listener (allowedRoutes) by way of their namespace; this defaults to the same namespace as the Gateway.
- Advanced features like
-- request mirroring,
-- direct response injection,
-- fine-grained traffic metrics, and
-- traffic split
- In Istio APIs, a Gateway configures an existing gateway Deployment/Service that has been deployed. In the Gateway APIs, the Gateway resource both configures and deploys a gateway
- One can attach an HPA and a PodDisruptionBudget to the gateway deployment.
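The Gateway notes above can be tied together in a minimal manifest sketch (the names, namespace, and gatewayClassName are made up):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway          # hypothetical name
  namespace: infra              # hypothetical namespace
spec:
  gatewayClassName: example-gateway-class   # provided by the controller vendor
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Selector          # only namespaces labelled for self-serve ingress
        selector:
          matchLabels:
            self-serve-ingress: "true"
```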
3. HTTP Route:
- Matches on any combination of hostname, path, header values, HTTP method, and query parameters.
- Paths (e.g., /headers, /status/*)
- Headers (e.g., User-Agent: Mobile)
- Query Parameters (e.g., ?version=beta)
- Methods (e.g., GET, POST)
- The hostname (optional) on the HTTP route must match the hostname at Gateway -> Listener -> hostname
- The Gateway to use is referenced by name and namespace (in parentRefs)
- The backendRefs define the service to route the request to for this match
- advanced pattern matching and filtering on arbitrary headers as well as paths.
1. RequestRedirect : E.g. Redirect HTTP traffic to HTTPS
2. URLRewrite
3. <Request|Response>HeaderModifier
4. RequestMirror
5. CORS
6. ExtensionRef for custom filter. E.g. DirectResponse
filters:
- type: ExtensionRef
extensionRef:
name: direct-response
group: gateway.kgateway.dev
kind: DirectResponse
- In the Istio VirtualService, all protocols are configured within a single resource. In the Gateway APIs, each protocol type has its own resource, such as HTTPRoute and TCPRoute.
- Traffic splitting is done by specifying multiple backendRef, with weight
- timeout, retry, sessionPersistence. Session persistence (= sticky sessions or strong session affinity) ensures that a client's requests are consistently routed to the same backend instance for the duration of a session, based on a cookie or a header.
- Route and Gateway can be in different namespace. If Gateway is defined with
allowedRoutes:
namespaces:
from: Same
then the Route and the Gateway must be in the same namespace. We can also form a group of namespaces with a label selector and specify that label on the Gateway resource.
allowedRoutes:
namespaces:
from: Selector
selector:
matchLabels:
self-serve-ingress: "true"
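The matching and traffic-splitting notes above can be combined into one HTTPRoute sketch (the hostnames, service names, and weights are made up):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews-route           # hypothetical name
spec:
  parentRefs:
  - name: shared-gateway        # the Gateway to attach to
    namespace: infra
  hostnames:
  - "shop.example.com"          # must match the Gateway listener hostname
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /reviews
      headers:
      - name: User-Agent
        value: Mobile
    backendRefs:                # 90/10 traffic split between two versions
    - name: reviews-v1
      port: 8080
      weight: 90
    - name: reviews-v2
      port: 8080
      weight: 10
```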
* 4. TLS Route
5. GRPCRoute
* 6. TCPRoute
* not yet v1/GA
Details: https://gateway-api.sigs.k8s.io/reference/spec/
If you are using a service mesh, it would be highly desirable to use the same API resources to configure both ingress traffic routing and internal traffic, similar to the way Istio uses VirtualService to configure route rules for both. Fortunately, the Kubernetes Gateway API is working to add this support. Although not as mature as the Gateway API for ingress traffic, an effort known as the Gateway API for Mesh Management and Administration (GAMMA) initiative is underway to make this a reality and Istio intends to make Gateway API the default API for all of its traffic management in the future.
https://gateway-api.sigs.k8s.io/mesh/
The Gateway controller is for North-South traffic; the mesh controller is for East-West traffic.
7. ReferenceGrant: for cross-namespace reference.
8. Inference Extension
K8s offers the following mechanisms to optimize GPU usage:
- time slicing,
- Multi-Instance GPU (MIG) partitioning,
- virtual GPUs, and
- NVIDIA MPS
for concurrent processing.
Effective GPU utilization involves
- hardware allocation,
- how inference requests are routed across model-serving instances, and
- how inference requests are load-balanced across model-serving instances.
Simple load-balancing strategies often fall short in handling AI workloads effectively, leading to suboptimal GPU usage and increased latency.
Inference requests vs. traditional web traffic
- They often take much longer to process, sometimes several seconds (or even minutes!) rather than milliseconds.
- They have significantly larger payloads (i.e., with RAG, multi-turn chats, etc.), so a single request can consume an entire GPU, making scheduling decisions far more impactful than for standard API workloads. As a result, requests may need to queue while others are being processed.
AI Models are stateful
- They maintain in-memory caches, such as KV storage for prompt tokens,
- They load fine-tuned adapters like LoRA to customize responses for a specific user/organisation.
So routing decisions are based on
- current state (in-memory caches, adapters)
- available memory, and
- request queue depth.
So Inference aware routing through
8.1 Inference Model
- maps a user-facing model name to a backend model
- traffic splitting between fine-tuned adapters
- priority based on whether it is a real-time interaction or a best-effort batch job
8.2 Inference Pool
- It is for platform operators managing model-serving infrastructure.
- a group of model-serving instances
- specialized backend service for AI workloads.
- It manages
-- inference-aware endpoint selection,
-- intelligent routing decisions based on real-time metrics such as
--- request queue depth and
--- GPU memory availability.
* An Inference Pool is mapped via HTTP Route -> backendRefs
* An Inference Model has a poolRef to link it with an Inference Pool
* An Inference Pool has an extensionRef (EPP = endpoint picker). If the Inference Pool is named xyz, the extensionRef is "xyz-endpoint-picker". The EPP is similar to a K8s Service, as it also has a selector and a target port.
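A toy sketch of the kind of scoring an endpoint picker could do, choosing the instance with the shortest queue among those with enough free GPU memory. The field names and the policy are assumptions for illustration, not the actual EPP algorithm:

```python
def pick_endpoint(instances, required_mem_gb):
    # Filter out instances without enough free GPU memory, then
    # prefer the shortest request queue among the eligible ones.
    eligible = [i for i in instances if i["free_gpu_gb"] >= required_mem_gb]
    if not eligible:
        return None
    return min(eligible, key=lambda i: i["queue_depth"])["name"]

instances = [
    {"name": "pod-a", "queue_depth": 5, "free_gpu_gb": 10},
    {"name": "pod-b", "queue_depth": 1, "free_gpu_gb": 2},
    {"name": "pod-c", "queue_depth": 2, "free_gpu_gb": 12},
]
print(pick_endpoint(instances, required_mem_gb=8))  # pod-c
```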
9. DirectResponse
10. Backend
For external endpoints
Low Cost Cloud
NVIDIA GTC25: Telecom Special Address
LTM (Large Telecom Model): SoftBank is a pioneer. Here is a white paper by GSMA: https://www.gsma.com/get-involved/gsma-foundry/gsma_resources/white-paper-large-telecom-models/
Llama Nemotron Reasoning Model. Open-sourced by NVIDIA on Hugging Face.
https://www.nvidia.com/en-in/ai-data-science/foundation-models/nemotron/
https://arxiv.org/pdf/2505.00949
AI Factory is a specialized, integrated infrastructure designed to manage the entire AI lifecycle, from data ingestion to model training and deployment for real-time inference
AI Grid is a network of small, highly specialized AI communities. The members of AI Grid share their research work within these communities, initiate collaborations and establish fruitful connections for the future. https://lightning.ai/
Building Blocks of the NVIDIA AI Aerial Platform:
1. NVIDIA Aerial CUDA-Accelerated RAN
2. NVIDIA Aerial AI Radio Frameworks
3. NVIDIA Aerial Omniverse Digital Twin
Reference
AI Bootcamp for students
8 Day Live Online Workshop
AI Bootcamp for Students
Make Your Child Future-Ready with AI
by Times of India
https://www.notion.com/product Documentation
https://www.todoist.com/ To Do List
https://gamma.app/ For presentation
https://openai.com/index/sora/ Cinematic Video
https://www.midjourney.com/home Art Grade Visuals for story telling
https://ideogram.ai/t/explore Typography to image. Communicate in style
https://lovable.dev/ No code web apps
https://n8n.io/ Workflow automation tools
Few more tools
TachyonGPT accelerates the project planning process, potentially saving weeks of effort. This AI assistant allows you to create a complex backlog structure for your project in very little time, and gives you the power to improve existing work items or generate new work items based on brief titles or descriptions. https://marketplace.visualstudio.com/items?itemName=Neudesic.TachyonGPT
Windsurf editor and Cascade: an agentic code IDE
Reference: https://economictimes.indiatimes.com/masterclass/ai-for-students
https://www.msn.com/en-in/money/news/chatgpt-to-google-gemini-top-5-ai-tools-to-enhance-productivity-mostly-free/ar-AA1GRlt1
Regional LLM, SLM, TinyML Language Learning
New Language Learning
Want to learn a new language this summer? Explore these expert-led platforms
Mobile app https://youtu.be/jyffkeM9GB0
Regional LLM
Sarvam AI launches Bulbul-v2, its voice model with support for 11 Indian languages
https://asr.iitm.ac.in/
BharatGen https://bharatgen.tech/ "BharatGen: First Indigenous Language AI Model Launched in India" (Amar Ujala Hindi News: BharatGen, the first indigenous-language AI model launched in India, will translate across 22 languages and resolve communication challenges) and "Google to collaborate with IIT Bombay's BharatGen to build indigenous Indic language model"
AIKosha https://aikosha.indiaai.gov.in/home It looks like a Hugging Face website for India. https://aikosha.indiaai.gov.in/home/resources?from= resource-detail has some PDF books; https://aikosha.indiaai.gov.in/home/toolkit has a list of popular AI-related tools.
Google for Education
URL for AI/GenAI
LLM
Inside The Brain Of An LLM: What Makes AI So Powerful?
Landscape
https://landscape.lfai.foundation/
https://landscape.pytorch.org/
Models for coding
1. Qwen2.5 Coder
2. Granite Code
3. CodeGemma
4. DeepSeek Coder V2
5. StarCoder 2
6. Code Llama
7. Codestral
Coding tool: "Google Opal is a new vibe coding app and here's how you can try it for free"
Interviews
Pioneering Innovation in Cloud and AI Transformation Done By Chandrakanth Devarakadra Anantha
Innovation in Machine Learning & Engineering Leadership by Pratik Parekh
Amazing Innovation in Telecom Cloud: The Journey of Jayavelan Jayabalan
Handson
https://www.youtube.com/watch?v=RQFfK7xIL28
CAMARA - NaaS
Keywords
- CAMARA APIs
- Open GW
- Network API
- NaaS
AI impacts API development
Usecase
1. anti-fraud
2. location API: book a cab for people who do not have a smartphone
3. voice-activated AI transactions: book a cab
4. geo-fencing: warn people when others come close to them; in logistics, start offloading when a truck reaches the store
5. Quality on Demand (QoD) in the future
Challenges
1. monetizing
2. standardizing
3. presenting to a non-telco audience
4. focus on the right APIs: there are 15 fraud APIs, but the customer only wants to know whether it is fraud or not
5. scale and coverage: approach operators and help them join the ecosystem
6. data privacy and consent: customers should not need to provide consent for each new API; that is a bad experience
7. education and certification about APIs: let developers create new business models and business cases
8. telcos should listen to industry needs: how to solve challenges using advanced connectivity and APIs; a demand-side focus
https://www.youtube.com/watch?v=Rg-TKpBuiPI
Popular APIs
- Messaging
- authentication
- Device location,
- QoS,
- fraud prevention,
- identity verification
- age verification
https://cpaasaa.com/post-mwc-aduna-vonage-and-the-future-of-network-apis/
Network APIs centralize complexity and distribute simplicity.
https://www.youtube.com/watch?v=4C9zrRNoxas
Vonage and Infobip : service aggregation
https://www.youtube.com/watch?v=Jh8iUuNHFYw
Network APIs allow operators to virtualize parts of their networks and provide tailored data and features to developers.
Network API vs. use case
1. Verify Location: navigation, geotagging, location-aware notifications, personalized marketing
2. Device Status: optimize resource usage based on device health and network conditions; identify issues and provide proactive customer support
3. SIM Density: ensure optimal user experience during peak hours; SON
4. SIM Swap: fraud prevention
5. QoD
6. Device identification, device location, and phone number verification
7. Identity and consent management
8. OTP validation
https://www.vonage.com/resources/articles/what-is-a-network-api/
Vonage Network Registry
A CSP can find out which developers use it.
A developer can decide which CSP to choose.
We are moving from a transactional world to a conversational world.
https://camaraproject.org/resources/
अष्टाध्यायी - 2
This article contains my key takeaway points from the PythonKrit workshop at Samskrita Bharati, Bangalore, during March 2025.
Dr. Amba Kulkarni explains how ASHTADHYAYI by sage PAANINI is similar to computer programming and compiler design
https://sanskrit.uohyd.ac.in/faculty/amba/ and https://www.sanskritstudiespodcast.com/1759898/episodes/12324157-16-amba-kulkarni-sanskrit-and-computers
ASHTADHYAYI is also algorithms and data structures. A class/object has data and functions. Paanini's DHAATU list has the name of each DHAATU and its "इत् प्रत्यय". Here, the "इत् प्रत्यय" indicates which operation is to be performed. We know the challenges with multiple inheritance in object-oriented programming; Prof. Ashvini Bhave shows how TADDHITA indicates single inheritance.
Sage PAANINI introduced a new data structure, the SHIVA-SUTRA. He rearranged all the characters and used slicing, then performed Boolean operations: does an input character belong to a given list or not, is a given input set of characters a subset or not.
The meta-language itself is part of the ASHTADHYAYI.
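The SHIVA-SUTRA idea above can be sketched in Python: the sounds are stored in 14 sutras, each ending in a marker (इत्), a pratyāhāra names a contiguous slice, and membership becomes a simple Boolean test. The ASCII romanization below is a rough approximation (the two ṇ markers are distinguished as N1/N2 here), and the code is only an illustration:

```python
# 14 Shiva Sutras as (sounds, final marker). Romanization is approximate.
SUTRAS = [
    (["a", "i", "u"], "N1"),
    (["R", "L"], "k"),
    (["e", "o"], "G"),
    (["ai", "au"], "c"),
    (["ha", "ya", "va", "ra"], "T"),
    (["la"], "N2"),
    (["Ya", "ma", "Ga", "Na", "na"], "m"),
    (["jha", "bha"], "Y"),
    (["gha", "Dha", "dha"], "S"),
    (["ja", "ba", "ga", "Da", "da"], "z"),
    (["kha", "pha", "cha", "Tha", "tha", "ca", "Ta", "ta"], "v"),
    (["ka", "pa"], "y"),
    (["za", "Sa", "sa"], "r"),
    (["ha"], "l"),
]

def pratyahara(start, marker):
    # Slice: collect sounds from `start` up to the sutra ending in `marker`.
    out, collecting = [], False
    for sounds, m in SUTRAS:
        for s in sounds:
            if s == start:
                collecting = True
            if collecting:
                out.append(s)
        if collecting and m == marker:
            return out
    return out

# "ac" = all vowels; membership is then a simple Boolean test.
vowels = pratyahara("a", "c")
print(vowels)         # ['a', 'i', 'u', 'R', 'L', 'e', 'o', 'ai', 'au']
print("e" in vowels)  # True
```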
Three types of rules
1. regular rules
2. context free rules
3. context sensitive rules
We use regular expressions: ^ for the beginning (AADI), [] for the set of characters in the middle (UPAADHAA), and $ for the end of line (ANTHA).
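This analogy maps directly to Python's `re` module: `^` anchors the beginning (AADI), a character class `[...]` constrains the interior characters (UPAADHAA), and `$` anchors the end (ANTHA). A small illustration with a made-up pattern:

```python
import re

# AADI (beginning) -> ^, UPAADHAA (middle characters) -> [...] class,
# ANTHA (end) -> $. Made-up pattern: starts with "r", only vowels
# in the middle, ends with "m".
pattern = re.compile(r"^r[aeiou]*m$")

print(bool(pattern.match("ram")))   # True
print(bool(pattern.match("raam")))  # True
print(bool(pattern.match("rtam")))  # False
```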
We know the yacc and bison tools are for context-free grammars. If we write all of PAANINI's rules in the syntax of yacc and bison, we can analyze the complexity of PAANINI's ASHTADHYAYI grammar. There are a few non-formal aspects in the ASHTADHYAYI, as it was written to be understood by the human brain, not by a computer.
Sage PAANINI was about 1500 years ahead of his time compared to today's computing.
Rules are like events in programming. To apply the grammar, one of the rules must be evaluated; it is like firing an event.
ANUVRUTI is similar to factorization in mathematics.
Slides: https://web.stanford.edu/~kiparsky/Papers/paris.pdf and Stream rtsp://stream-serv.inrialpes.fr/Roc/Symposiums_2007/Sanskrit291007B_Gillon.rm by Paul Kiparsky
The entire data that powers https://ashtadhyayi.com https://github.com/chaitanya-lakkundi/ashtadhyayi-com-data/
https://github.com/chaitanya-lakkundi/ashtadhyayi-commentaries/
https://drdhaval2785.github.io/siddhantakaumudi/
https://github.com/drdhaval2785/siddhantakaumudi
https://en.wikipedia.org/wiki/Mahabhashya
Books
https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures_%3D_Programs
https://www.sushmajee.com/reldictionary/literature/grammar/sanskrit-grammar.htm
Books: https://www.ebharatisampat.in/
https://www.amazon.com/Vaiyakaran-Siddhant-Kaumudi-Set-Volumes/dp/B00LND3A5U
Papers
https://sanskrit.inria.fr/Symposium/Program.html
https://upenn.academia.edu/Cardona
https://independent.academia.edu/SarojaBhate
YouTube / Videos:
https://www.youtube.com/ashtadhyayi
https://www.youtube.com/playlist?list=PLxPxgIW05q49w0453x8iDZpfv0fNH8ujK
https://www.youtube.com/watch?v=gs0c4UXgM8M
https://www.youtube.com/@prasarbharatisanskrit
https://www.sanskritstudiespodcast.com/1759898
https://www.youtube.com/watch?v=7X5uqiODNPw&list=PLEKLkZ5fxeD0Xt4TKUwAkiRVw_AUV3y_X
https://www.youtube.com/watch?v=_OkzIE61EMg
https://www.youtube.com/watch?v=AGPfSgVqb78
https://www.youtube.com/watch?v=9tndwY-pJAk&list=PLeCoRXpRAy9iK1CTKseX_Vgg9-RcV-Uql
PythonKrit
This article contains my key takeaway points from the PythonKrit workshop at Samskrita Bharati, Bangalore, during March 2025.
XML to Mindmap generation : https://sambhasha.ksu.ac.in/CompLing/tarkasangraha/live/
We can have special tags like
<PAA-LAXANAM>
<PAA-UDAA>
<PAA-VAKYAM>
Other Tools
https://sambhasha.ksu.ac.in/projects/
Aksharamukha
https://github.com/chaitanya-lakkundi/aksharamukha
Vaijayantīkośa Knowledge-Net https://sambhasha.ksu.ac.in/CompLing/VK_ACL.pdf
A directory of Indic (Indian) language computing projects and resources https://indic.page/
https://sambhasha.ksu.ac.in/CompLing/chandas/chandas.html
https://www.gitasupersite.iitk.ac.in/conceptmaps Good resource for Neo4J graph DB
https://sanskritlibrary.org/downloads.html
https://sanskritlibrary.org/projects.html
https://sanskritlibrary.org/tools.html
Krudanta Rupa: https://github.com/chaitanya-lakkundi/kridanta-rupa-android/blob/master/kridanta_rupa_samgraha.pdf
Aadi Shankaracharya : https://www.sankara.iitk.ac.in/ and https://www.advaita-vedanta.org/texts/index.html
https://www.gitasupersite.iitk.ac.in/
GitHub
https://github.com/chaitanya-lakkundi/
https://github.com/drdhaval2785
Useful Sanskrit Alphabet https://github.com/chaitanya-lakkundi/varnamala/blob/main/varnamala.py
https://github.com/drdhaval2785/SanskritVerb/
https://github.com/drdhaval2785/SanskritSubanta
For Kids
https://bala.sambhasha.ksu.ac.in/
https://www.samskritpromotion.in/samskrit-toys
Scholars
https://sanskrit.uohyd.ac.in/faculty/amba/ and https://www.sanskritstudiespodcast.com/1759898/episodes/12324157-16-amba-kulkarni-sanskrit-and-computers
https://web.stanford.edu/~kiparsky/ and https://en.wikipedia.org/wiki/Paul_Kiparsky
Python
Property               List    Tuple   Dictionary
Ordered?               Yes     Yes     No (insertion-ordered since Python 3.7)
Mutable?               Yes     No      Yes
Different data types?  Yes     Yes     Yes
Can be indexed?        Yes     Yes     Yes, by keys
Syntax                 []      ()      {}
Duplicate elements?    Yes     Yes     Values yes, but keys must be unique
- Both list and tuple support slicing and skipping indices.
- Tuples are immutable, and hence faster.
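The comparison above can be verified directly:

```python
nums_list = [1, 2, 3]         # mutable, ordered
nums_tuple = (1, 2, 3)        # immutable, ordered
nums_dict = {"a": 1, "b": 2}  # keyed; keys must be unique

nums_list[0] = 99             # lists can be changed in place
try:
    nums_tuple[0] = 99        # tuples cannot; this raises TypeError
except TypeError as e:
    print("tuple is immutable:", e)

# Both list and tuple support slicing and index skipping.
print(nums_list[0:2], nums_tuple[::2])  # [99, 2] (1, 3)

# Dictionaries are indexed by key, not position.
print(nums_dict["b"])  # 2
```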
AI for Observability
The speaker explains his solution for adding AI to observability, where observability includes logs, traces, and metrics.
Features
It does not embed every log message: even the most sophisticated GenAI takes at most about 2 million tokens, and logs can generate that much in 2 seconds. So the solution needs to feed the right data to the AI. It understands from the logs which fields should be fed as initial values, and then instructs the feeding of more data.
It creates visualization dashboards based on the question.
It defines levels from 0 (manual observability) to 4 (full observability).
It uses AWS Bedrock to address privacy and compliance concerns.
In the future solution, GenAI
- will understand the deployment
- will understand changes between deployments and their impact: cost, error increase or decrease
- can go to the GitHub repo to see the changes that happened
- can fix the code
- can then write tests (UT) so the problem cannot happen again
So it makes for a much more stable environment. It can enable autonomous cluster configuration.
At present, the solution has
- the ability to analyze exceptions and perform root cause analysis (RCA) on them. It is not 100% accurate all the time. It gives a list of the actions taken to understand and troubleshoot the problem. The solution can auto-run RCA for each alert.
As we know GenAI has 3 models
1. generic questions
2. RAG
3. Agent
Yes, the solution makes OpenAI calls, and every OpenAI call costs money. The cost is now reducing.
In the future, we may see a trend of: BoY RAG
AI Language Model
Final thoughts
The choice between DeepSeek R1, Llama 3.2, and OpenAI o1 depends on specific project requirements:
- Choose DeepSeek R1 for budget-friendly deployments with strong reasoning capabilities.
- Opt for Llama 3.2 if multimodal functionality or edge optimisation is critical.
- Select OpenAI o1 for unparalleled reasoning performance in STEM fields despite its higher cost.
Reference:
Deepseek R1 vs Llama 3.2 vs ChatGPT o1: Which AI model wins?
DeepSeek-R1, BLOOM and Falcon AI: Exploring lesser-known open source LLMs
GitHub - deepseek-ai/awesome-deepseek-integration
Use DeepSeek-R1 in Microsoft Word Locally. No Monthly Fees. - YouTube
SPIFFE
An SA (ServiceAccount) is scoped to a single cluster.
So Nephio could not use SAs.
Every CSP has its own workload identity.
SPIFFE is a standard:
- SPIFFE ID: it is a URI.
- SPIFFE Verifiable Identity Documents (SVIDs): a cert or a token.
- The SPIFFE Workload API.
SPIRE: SPIFFE Runtime Environment.
- A toolchain of APIs for establishing trust based on SPIFFE.
- Provides out-of-the-box attestation plugins.
Expiry is short, possibly 4 hours, so there is no need for revocation.
* The SPIRE agent can be colocated with the workload; it runs as a DaemonSet in K8s.
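A SPIFFE ID is a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;. A minimal parsing sketch using only the standard library (the example ID is made up):

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id):
    # Split a SPIFFE ID URI into its trust domain and workload path.
    u = urlparse(spiffe_id)
    if u.scheme != "spiffe" or not u.netloc:
        raise ValueError("not a SPIFFE ID: " + spiffe_id)
    return u.netloc, u.path

domain, path = parse_spiffe_id("spiffe://example.org/ns/prod/sa/frontend")
print(domain, path)  # example.org /ns/prod/sa/frontend
```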
=========
Nephio
SS7, SIGTRAN, ngin, CN model (e.g. ORAN)
DISH is on AWS
CP based requirement for identity
The Nephio SIG-Security wiki page has all the details.
Porch: Package Orchestration for KPT.
KPT does in-place substitution.
5G requirements / usecases
IMS, SMO
There is an LF article about the Nephio SPIFFE implementation on the LF wiki.
Catalog packages at GitOps
Each cluster shall have its own repo
Identity federation is based on cert chain.
Nephio R3: Oct 2023.
It is a proposed solution; it will be upstreamed.
Workload identity solution shall not be native to specific cloud provider.
Identity federation across CSPs.
Google, Ericsson (E//), and Red Hat are in Nephio.
An alternative to SPIRE may be chosen because of its specific attestation plugins.
What is the protocol between the SPIRE agent and the SPIRE server? Bootstrapping trust is a pre-provisioning aspect. REST API and TLS; an x.509 cert will be pulled. The protocol is SPIRE-specific.
Today's attestation is based on SA, pod labels, namespace.
A CA or cert-manager can be used.
Network Automation
Telecom networks are complex because they are multi-layer and multi-vendor.
N/w management -> SDN -> intent-based networking (programmable and declarative) -> cloud-native networking
Earlier: monolithic NMS with FCAPS.
Now: CI/CD, microservices, K8s.
NSP (Network Services Platform) is for the IP and optical domains.
It has API (OpenAPI Spec).
Model-driven mediation
Framework has orchestration
Contributed by Nokia: Kubenet, gNMIc, SDCIO
1. Unified Artifactory Manager (UAM) component
It uses Kubespray.
UAM creates CRs. The CRs are consumed by the deployer. The deployer is a short-lived job.
2. Telemetry:
A: internal NSP components
B: external systems
Four core principles
1. Model driven
2. Vendor & Mediation Agnostic
3. Horizontal scale
4. Resilient
Six layers (top down)
6. Analytics and optimization layer
5. Output/storage layer: Kafka
4. Normalization layer
3. Mapping layer
(Layers 3 and 4 make it model-driven)
2. Collector layer (SNMP, gNMI)
1. Network layer
Architecture
UAM, RESTCONF GW
Source: from the network using SNMP, gNMI
Sink: InfluxDB, Prometheus, Vertica, Kafka, PostgreSQL, file
Source and sink are connected using NATS. NATS is also connected with multiple transform workers, using a Transformer CR from UAM.
gNMIc
1. single mode
2. CLI mode (with auto-complete)
3. cluster mode (multiple replicas; one is the leader)
Kubenet and SDCIO
Declarative model and event-driven reconciliation. It is network automation using K8s and GitOps principles.
Arch:
SDCIO: Schema-Driven Configuration.
IPAM etc. are CRDs used to build abstract network configuration.
Config CR and ConfigSet CR, RunningConfig, UnmanagedConfig. It has a different backend with its own etcd.
YANG is handled by the schema server.
==========================
BNG, CUPS-specific implementation.
Are Kubenet and Nephio solving the same problem? There may be overlap.
APIs for the sink? The customer provides the sink.
Kubenet is automation; it is more than an NMS.
Slide 21: Cisco Prime