Really exciting idea: turning idle devices into a distributed inference cluster feels practical and privacy-friendly. Quick question: how do you handle latency and bandwidth variability across WAN peers to keep inference smooth for real-time apps? I'd love details on any built-in QoS or fallback strategies.
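
To make the question concrete, here's roughly the kind of latency-aware fallback I'm imagining; this is just a sketch of my mental model, not the project's API, and every name, threshold, and the simulated probe are hypothetical:

```python
import random
import statistics

# Assumed per-token latency budget for a real-time app (hypothetical value).
LATENCY_BUDGET_MS = 150


def probe_latency_ms(peer: str, samples: int = 20) -> float:
    """Estimate a peer's ~p95 RTT. Simulated here; a real system would
    actually ping the peer over the WAN."""
    rtts = [random.uniform(20, 400) for _ in range(samples)]
    return statistics.quantiles(rtts, n=20)[18]  # 19 cut points; index 18 ~ p95


def choose_backend(peers: list[str]) -> str:
    """Route to the fastest peer only if it fits the budget,
    else fall back to on-device inference."""
    measured = {p: probe_latency_ms(p) for p in peers}
    best_peer, best_p95 = min(measured.items(), key=lambda kv: kv[1])
    if best_p95 <= LATENCY_BUDGET_MS:
        return best_peer  # WAN peer is fast enough for real-time use
    return "local"        # degrade gracefully to local inference


print(choose_backend(["peer-a", "peer-b", "peer-c"]))
```

Is something along these lines built in, or is scheduling left entirely to the application?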