PyCon India 2025

Anindita Sinha Banerjee

With over a decade in Data and Decision Sciences, I design NLP and AI solutions that solve complex business challenges. Currently a Data Scientist at Red Hat and formerly a researcher at Tata Research Development and Design Center, I have presented research at premier conferences and hold patents advancing AI-driven innovation. Explore my Google Scholar page: https://scholar.google.com/citations?user=5GCQcVkAAAAJ&hl=en&oi=ao


Professional Link

https://www.linkedin.com/in/anindita-sinha-banerjee-41a99956/

Preferred Pronoun

She/Her

Speaker Tagline

Data Scientist at Red Hat | AI Researcher

Gravatar - Professional Photo

https://gravatar.com/whispershappily943bfaa979



Sessions

09-14
10:50
30min
[Panel] Vibe Coding: Yay or Nay?
Anand S, Anindita Sinha Banerjee, Kumar Anirudha, Anand Chitipothu, Shreya Kommuri

Vibe Coding isn't a new topic. We've all heard of it, seen it, or done it ourselves. The idea is to explore the current vibe-coding landscape from the perspectives of development process, code quality, hiring, and more. We'll discuss how it has impacted today's dev ecosystem, and Python development in particular.

Community
Track 1
09-14
14:50
30min
Green AI at Scale: Energy-Efficient LLM Serving using vLLM & LLM Compressor
Anindita Sinha Banerjee, Abhijit Roy

LLMs like GPT-4 can consume as much energy per query as an entire web search session. What if we could cut that down? In this session, we'll explore how vLLM, a Python-powered, high-throughput inference engine, enables green AI deployment by drastically improving GPU efficiency. We'll cover techniques like PagedAttention, continuous batching, and speculative decoding, showing how they reduce latency, memory overhead, and energy usage per token. Additionally, we'll dive into the role of LLM Compressor, a lightweight compression framework that shrinks model size while preserving accuracy, further slashing inference costs and power consumption. If you're interested in sustainable LLM deployment, GPU optimization, or how Python can lead the charge in green computing, this talk is for you.
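As a taste of why continuous batching matters for efficiency, here is a toy simulation (not vLLM's actual scheduler, just an illustrative sketch): with static batching, every request in a batch waits for the longest one to finish; with continuous batching, a finished request's slot is refilled immediately, so GPU slots stay busy and total decoding steps drop.

```python
from collections import deque


def static_batch_steps(lengths, batch_size):
    """Decoding steps when each batch waits for its longest request."""
    steps = 0
    for i in range(0, len(lengths), batch_size):
        steps += max(lengths[i:i + batch_size])
    return steps


def continuous_batch_steps(lengths, batch_size):
    """Decoding steps when a finished request's slot is refilled at once."""
    queue = deque(lengths)
    # Fill the initial slots with remaining-token counts.
    slots = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
    steps = 0
    while slots:
        steps += 1
        slots = [s - 1 for s in slots]           # every active request emits a token
        finished = slots.count(0)
        slots = [s for s in slots if s > 0]      # drop finished requests
        for _ in range(finished):                # refill freed slots immediately
            if queue:
                slots.append(queue.popleft())
    return steps


# One long request (8 tokens) and three short ones (2 tokens each),
# two slots: continuous batching finishes in fewer total steps.
requests = [8, 2, 2, 2]
print(static_batch_steps(requests, 2))      # 10 steps
print(continuous_batch_steps(requests, 2))  # 8 steps
```

The gap widens as output lengths become more skewed, which is exactly the workload shape where vLLM's scheduler shines.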

Others
Track 3