Nitin Agarwal
Data Science, ML, and AI | Ex-Microsoft | Mentor | Speaker
About Me
Principal Data Science and AI Leader with 13+ years of experience building impactful, intelligent solutions across domains. Skilled in leading large-scale projects, guiding cross-functional teams, and fostering a culture of innovation and excellence.
Expertise in:
- Machine Learning, Natural Language Processing, and Generative AI
- Driving business growth and operational efficiency
- Enhancing customer satisfaction through cutting-edge AI solutions
Proven Track Record:
- Recognized for strong leadership and strategic thinking
- Passionate about mentoring future talent
- Frequent speaker and thought leader in the data science community
- Numerous accolades for innovation and excellence
Let's connect!
🔗 LinkedIn: https://www.linkedin.com/in/agnitin/
Session
Large language models (LLMs) don’t always need to be bigger to be better—sometimes, they just need to think more efficiently. Inference-time compute scaling improves LLM reasoning by dynamically increasing computational effort during inference, similar to how humans perform better when given more time to think. This session will explore cutting-edge techniques like Chain-of-Thought (CoT) prompting, voting-based search, and the latest research in test-time scaling. We’ll dive into methods like “Wait” tokens, preference optimization, and dynamic depth scaling, which allow models to refine responses on the fly. Whether you're interested in boosting LLM accuracy, improving robustness, or optimizing compute budgets, this talk will provide key insights into the future of smarter, more adaptable AI.
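To give a flavor of the voting-based techniques the session covers, here is a minimal sketch of self-consistency decoding: sample several chain-of-thought completions and majority-vote on the final answer. The `generate_answer` helper is a hypothetical stand-in for whatever model call you use, not part of any specific API.

```python
from collections import Counter

def generate_answer(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for one sampled LLM completion.
    Swap in a call to your model or inference service of choice."""
    raise NotImplementedError

def self_consistency(question: str, n_samples: int = 8) -> str:
    """Majority-vote (self-consistency) decoding: spend more inference-time
    compute by sampling several reasoning traces, then return the answer
    that the most traces agree on."""
    cot_prompt = (
        f"{question}\n"
        "Let's think step by step, then state only the final answer on the last line."
    )
    final_answers = []
    for _ in range(n_samples):
        completion = generate_answer(cot_prompt)          # one independent chain of thought
        final_answers.append(completion.strip().splitlines()[-1])  # keep the final-answer line
    # The answer reached by the most reasoning paths wins the vote.
    return Counter(final_answers).most_common(1)[0][0]
```

Increasing `n_samples` is the simplest knob for trading extra compute for accuracy, which is the core idea behind inference-time compute scaling discussed in the session.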