BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//cfp.in.pycon.org//2025//GHKUY7
BEGIN:VTIMEZONE
TZID:IST
BEGIN:STANDARD
DTSTART:20000101T000000
RRULE:FREQ=YEARLY;BYMONTH=1
TZNAME:IST
TZOFFSETFROM:+0530
TZOFFSETTO:+0530
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-2025-GH8RVF@cfp.in.pycon.org
DTSTART;TZID=IST:20250912T140000
DTEND;TZID=IST:20250912T170000
DESCRIPTION:Deploying real-time AI models on embedded Linux platforms suc
 h as Raspberry Pi\, Jetson Nano\, or Rockchip-based boards is a growing 
 need in industries like manufacturing\, healthcare\, and automotive. How
 ever\, the challenges are real: constrained compute\, tight memory\, and
  limited power. This hands-on workshop walks you through the full lifecy
 cle: designing\, optimising\, cross-compiling\, and deploying lightweigh
 t CNNs for inference at the edge.\nParticipants will start with a base C
 NN (e.g.\, MobileNet or ShuffleNet)\, apply model compression techniques
  such as pruning and quantisation\, and then learn how to build optimise
 d deployment pipelines using TensorFlow Lite and PyTorch Mobile. We'll a
 lso touch on using NPU accelerators and real-time profiling to hit perfo
 rmance targets. By the end\, attendees will be able to deploy and benchm
 ark a real model on an embedded Linux system.
DTSTAMP:20260317T122806Z
LOCATION:Room 5
SUMMARY:Optimising Deep Neural Inference for Edge Devices: Toolchain and Te
 chniques - Saradindu Sengupta
URL:https://cfp.in.pycon.org/2025/talk/GH8RVF/
END:VEVENT
END:VCALENDAR
