2025-09-14, Track 1
As a developer trying to iterate on SDXL prompts and schedulers, I found script-based tools painful to debug. ComfyUI addresses this challenge by offering a powerful, modular, and open-source visual programming environment tailored for AI image generation and experimentation. This session will introduce ComfyUI as a serious developer tool for building, debugging, and optimizing Stable Diffusion pipelines. With node-based execution graphs, real-time previews, and complete control over every parameter and model component, ComfyUI enables reproducibility, rapid prototyping, and extensibility for technical users. We’ll explore how developers can build custom inference workflows, integrate APIs, deploy on cloud GPUs, and contribute custom nodes to the growing ecosystem. Whether you're optimizing inference, conducting A/B testing, or building a generative AI backend, this session shows why ComfyUI is an essential tool for ML practitioners and engineers.
Introduction to ComfyUI
Why It Matters:
- Enables visual DAGs for complex AI workflows
- Reproducible and inspectable pipeline execution
- Ideal for experimentation, debugging, and deployment
- A developer-friendly alternative to black-box UIs
Comparison with Other Tools:
- CLI/scripts: Precise but opaque and hard to share
- Automatic1111 / InvokeAI: Good for artists, limited for devs
- ComfyUI: Modular, programmable, and deeply customizable
Core Architecture & Capabilities:
- Visual interface built on Directed Acyclic Graphs
- Stateless, parameterized node execution
- Supports SDXL, ControlNet, LoRA, and FLUX
- Intermediate visualizations and debug-friendly tools
Developer Use Cases:
- Build and iterate on inference pipelines with modular nodes
- Share or deploy workflow templates
- Experiment with samplers, schedulers, and conditioning models
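The sampler/scheduler experimentation above is often done by exporting a workflow in ComfyUI's API (JSON) format and mutating it programmatically. A minimal sketch, assuming a hypothetical workflow fragment in which node `"3"` is a `KSampler` (node IDs and the exact input fields depend on your exported graph):

```python
import copy
import itertools

# Hypothetical API-format workflow fragment: node "3" is assumed
# to be the KSampler whose inputs we want to sweep.
base_workflow = {
    "3": {
        "class_type": "KSampler",
        "inputs": {"sampler_name": "euler", "scheduler": "normal",
                   "cfg": 7.0, "steps": 20, "seed": 0},
    },
}

def sweep_variants(workflow, samplers, schedulers):
    """Yield one deep-copied workflow per (sampler, scheduler) pair."""
    for sampler, scheduler in itertools.product(samplers, schedulers):
        variant = copy.deepcopy(workflow)
        variant["3"]["inputs"]["sampler_name"] = sampler
        variant["3"]["inputs"]["scheduler"] = scheduler
        yield variant

variants = list(sweep_variants(base_workflow,
                               ["euler", "dpmpp_2m"],
                               ["normal", "karras"]))
print(len(variants))  # one workflow per combination
```

Each variant can then be queued against a running server, giving a reproducible A/B grid over samplers and schedulers.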
Optimization & Deployment:
- Run on local GPUs or cloud platforms (Colab, RunPod, Lambda Labs)
- VRAM optimization via offloading and fp16 precision
- Automate workflows via batch processing or API integration
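For the API-integration point above, a running ComfyUI server exposes an HTTP endpoint that accepts workflows exported in API format. A minimal sketch, assuming a local server on the default port 8188 and a workflow JSON exported from the UI (the payload shape shown here is an assumption to verify against your ComfyUI version):

```python
import json
import uuid
import urllib.request

def build_prompt_payload(workflow, client_id=None):
    """Build the JSON body for ComfyUI's /prompt endpoint:
    the API-format workflow under "prompt", plus a client id."""
    return {
        "prompt": workflow,
        "client_id": client_id or uuid.uuid4().hex,
    }

def queue_prompt(workflow, server="127.0.0.1:8188"):
    """POST the workflow to a locally running ComfyUI server.
    Requires a live server; the response includes a prompt id
    that can be used to poll for results."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=json.dumps(build_prompt_payload(workflow)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Payload construction is pure and can be inspected offline;
# queue_prompt() itself needs a running ComfyUI instance.
payload = build_prompt_payload({"1": {"class_type": "CheckpointLoaderSimple"}})
print(sorted(payload.keys()))
```

Wrapping this in a batch loop (or behind a FastAPI route) turns a visually authored graph into an automated backend.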
Extensibility & Ecosystem:
- Write custom nodes in Python with PyTorch
- Contribute to an open-source community with clear guidelines
- Integrate with FastAPI, Streamlit, or orchestration tools
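Custom nodes follow a small class convention: an `INPUT_TYPES` classmethod, `RETURN_TYPES`, a `FUNCTION` entry point, a `CATEGORY`, and a module-level `NODE_CLASS_MAPPINGS` registry that ComfyUI scans on startup. A minimal sketch (the node name, category, and behavior here are made up for illustration):

```python
class PromptPrefixer:
    """Example node: prepend a style prefix to a prompt string."""

    @classmethod
    def INPUT_TYPES(cls):
        # Declares the node's input sockets and widget defaults.
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "prefix": ("STRING", {"default": "masterpiece, "}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "apply"          # name of the method ComfyUI calls
    CATEGORY = "text/utils"     # where the node appears in the menu

    def apply(self, prompt, prefix):
        # Return a tuple matching RETURN_TYPES; ComfyUI routes each
        # element to the corresponding output socket.
        return (prefix + prompt,)

# Registry ComfyUI reads when loading a custom_nodes package.
NODE_CLASS_MAPPINGS = {"PromptPrefixer": PromptPrefixer}
```

Dropped into a `custom_nodes` directory, a module like this makes the node available in the graph editor alongside built-ins, which is the extension path the session walks through.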
Conclusion:
ComfyUI is an ideal frontend/backend for building, experimenting with, and deploying diffusion-based generative models. Its flexibility makes it a powerful tool for Python developers in AI, ML, and creative domains.
Q&A or Demo:
If hardware permits, a live walkthrough of building a diffusion pipeline using ComfyUI.
Beginner
I'm an AI Developer with a Master's in Physics and a strong focus on computer vision, image generation, and applied AI in e-commerce. My current work blends rule-based logic, probabilistic modeling, and reinforcement learning to design adaptive templates for ad creatives. I also build tools using ComfyUI, pushing the boundaries of generative image workflows.
When I’m not coding, I explore how AI can bridge the gap between creativity and utility, especially for small businesses and creators. This is my first PyCon, and I'm excited to share what I’ve learned and learn from others solving hard, meaningful problems.
AI developer working in a small, curious team focused on exploring what machines can learn from images. We tinker with models, test strange ideas, and try to teach AI to see the world a little better (or at least differently).