About
I am an applied ML researcher with a background in Electrical Engineering (NIT Trichy) and an M.Eng from the University of Toronto. My work focuses on evaluating and improving the robustness of large language models (LLMs) in clinical and agentic workflows. I specialize in perturbation-based testing, latent-geometry analysis, and building privacy-aware, interpretable AI systems.
Research Interests
- LLM Robustness and Structured Perturbation
- Geometric Evaluation and Latent Fragility
- Multimodal Learning and Agentic Diagnostic Systems
- LLM Privacy and Safe Deployment
Selected Projects
- Embeddings to Diagnosis: PCA-based fragility analysis for clinical LLMs (KDD 2025)
- Clinical Tiger: Multimodal RAG with GPT-4-Vision & ClinicalBERT
- Clinical Panda: LLM explanation pipeline using synthetic clinical notes
- PII Guard: LLM-based privacy masking and utility-preserving redaction
- Multimodal LLM: Medical vision-language model using BLIP-2 and FSDP
Résumé
Experience
- LLM Engineer, LexisNexis/Appfabs (2024–present)
- ML Associate, Vector Institute (2024)
- ML Engineer, Scribble Data (2022–2024)
- Backend Engineer, G2 (2019–2021)
Education
- University of Toronto – M.Eng in Electrical and Computer Engineering (2022–2023)
- NIT Trichy – B.Tech in Electrical and Electronics Engineering (2015–2019)
Publications
- Embeddings to Diagnosis, Agentic & GenAI Evaluation Workshop @ KDD 2025
- ARMatron, IEEE Robotics & Automation Society (RAHA 2016)
Awards
- Mitacs Business Strategy Internship (2022)
- PEAK Giver Award – G2 (2020)
- France Travel Grant (2019)
- National Topper in Mathematics – AISSCE (2015)