BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//cfp.in.pycon.org//8XGQT3
BEGIN:VTIMEZONE
TZID:IST
BEGIN:STANDARD
DTSTART:20000101T000000
RRULE:FREQ=YEARLY;BYMONTH=1
TZNAME:IST
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-2025-8XGQT3@cfp.in.pycon.org
DTSTART;TZID=IST:20250912T140000
DTEND;TZID=IST:20250912T170000
DESCRIPTION:Fine-tuning large language models (LLMs) used to be an expensiv
 e and resource-intensive process\, traditionally accessible only to large 
 organizations with powerful GPUs. With recent advances\, this landscape ha
 s dramatically changed. Using cutting-edge techniques like Low-Rank Adapta
 tion (LoRA)\, Quantized Low-Rank Adaptation (QLoRA)\, and Fully Sharded Da
 ta Parallelism (FSDP)\, it's now possible to fine-tune massive models such
  as the 70B-parameter LLaMA at home on consumer-grade GPUs.\n\nIn this han
 ds-on tutorial\, you will gain an intuitive understanding of the fine-tuni
 ng process\, how it differs from pre-training\, and its practical applicat
 ions. By the end of this session\, you will have hands-on experience fine-
 tuning a large model efficiently and cost-effectively\, empowering you to
  create and deploy customized LLMs using accessible hardware setups.
DTSTAMP:20260317T123256Z
LOCATION:Room 6
SUMMARY:Axolotl on a Budget: Fine-Tuning 70B-parameter LLMs - Aniket Kulkar
 ni
URL:https://cfp.in.pycon.org/2025/talk/8XGQT3/
END:VEVENT
END:VCALENDAR
