Abstract
Burn and Candle are two of the most visible ML frameworks in the Rust ecosystem, but they make different trade-offs.
Burn is framework-oriented: pluggable backends via its Backend trait, automatic kernel fusion, async execution, ONNX import, and a full training story complete with a terminal dashboard. Candle is more minimalist: a PyTorch-inspired API from Hugging Face with a strong focus on lightweight inference and serverless deployment, while still supporting training.
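To give a flavor of what "pluggable backends" means in practice, here is a minimal sketch of Burn-style code written against the `Backend` trait (assuming the `burn` crate with its ndarray backend feature enabled; exact APIs vary between Burn versions):

```rust
use burn::backend::NdArray;
use burn::tensor::{backend::Backend, Distribution, Tensor};

// Written once against the Backend trait, this function runs unchanged
// on ndarray, wgpu, CUDA, or any other backend implementation.
fn project<B: Backend>(x: Tensor<B, 2>, w: Tensor<B, 2>) -> Tensor<B, 2> {
    x.matmul(w)
}

fn main() {
    // Swapping backends is a matter of changing this type alias.
    type B = NdArray;
    let device = Default::default();

    let x = Tensor::<B, 2>::random([2, 3], Distribution::Default, &device);
    let w = Tensor::<B, 2>::random([3, 4], Distribution::Default, &device);
    println!("{}", project(x, w));
}
```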
This talk compares both through real code. We will load a comparable model in each, walk through the tensor APIs, look at how backend selection works in practice, and discuss where each fits best across server-side inference, edge deployment, training workflows, and WASM.
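As a taste of the code we will walk through, here is a Candle-style snippet (a sketch assuming the `candle-core` crate; the API is modeled on the example in Candle's README):

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    // CPU here; a CUDA or Metal device can be substituted on supported hardware.
    let device = Device::Cpu;

    // Two random tensors and a matmul, in a deliberately PyTorch-like style.
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;
    let c = a.matmul(&b)?;

    println!("{c}");
    Ok(())
}
```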
Whether you write Rust and want to explore ML, or you come from Python and want to understand what Rust brings to the table, you will leave with a clear picture of the landscape and enough context to pick the right tool.