Aniket
Aniket Kulkarni is the founder of Curlscape, where he helps businesses bring practical AI applications to life quickly. He has led the design and deployment of large-scale systems across industries, from finance and healthcare to education and logistics. His work spans LLM-based information extraction, agentic workflows, voice assistants, and continuous evaluation frameworks. An engineer at heart, Aniket blends deep technical expertise with a product mindset to build AI that’s both reliable and usable.
Session
Fine-tuning large language models (LLMs) used to be an expensive and resource-intensive process, traditionally accessible only to large organizations with powerful GPUs. With recent advances, this landscape has changed dramatically. Using techniques such as Low-Rank Adaptation (LoRA), Quantized Low-Rank Adaptation (QLoRA), and Fully Sharded Data Parallel (FSDP), it is now possible to fine-tune massive models such as a 70B-parameter Llama at home on consumer-grade GPUs.
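To give a flavor of what this looks like in practice, here is a minimal QLoRA sketch using the Hugging Face transformers, peft, and bitsandbytes libraries. The model ID and hyperparameters are illustrative assumptions, not the session's exact recipe, and FSDP (for sharding across multiple GPUs, e.g. via Accelerate) is omitted for brevity.

# Minimal QLoRA sketch: 4-bit quantized base model + trainable LoRA adapters.
# Model name and hyperparameters below are assumed examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-70b-hf"  # illustrative; any causal LM works

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Attach small trainable low-rank adapters (LoRA); the frozen 4-bit base weights stay untouched
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total parameters

Because only the small adapter matrices are trained while the base weights sit in 4-bit precision, the memory footprint drops enough to fit very large models on consumer-grade hardware.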
In this hands-on tutorial, you will build an intuitive understanding of the fine-tuning process, how it differs from pre-training, and where it is useful in practice. By the end of the session, you will have fine-tuned a large model efficiently and cost-effectively, equipping you to create and deploy customized LLMs on accessible hardware setups.