Delay-Aware Diffusion Policy: Bridging the Observation–Execution Gap in Dynamic Tasks

Diffusion Policy (DP) struggles with computation delays and fails to hit the ball in the ping-pong task.
Our Delay-Aware Diffusion Policy (DA-DP) successfully handles highly dynamic, reactive tasks under inference delays.

Abstract

As a robot senses and selects actions, the world keeps changing. This inference delay creates a gap of tens to hundreds of milliseconds between the observed state and the state at execution. In this work, we make the natural generalization from assuming zero delay to using the measured delay during training and inference. We introduce Delay-Aware Diffusion Policy (DA-DP), a framework for explicitly incorporating inference delays into policy learning. DA-DP corrects zero-delay trajectories to their delay-compensated counterparts and augments the policy with delay conditioning. We empirically validate DA-DP on a variety of tasks, robots, and delays and find that its success rate is more robust to delay than that of delay-unaware methods. DA-DP is architecture agnostic and transfers beyond diffusion policies, offering a general pattern for delay-aware imitation learning. More broadly, DA-DP encourages evaluation protocols that report performance as a function of measured latency, not just task difficulty.
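The abstract only names the two ingredients, so here is a minimal sketch of what they could look like in practice: relabeling a zero-delay demonstration so each observation is paired with the action the robot should emit one measured delay later, and appending that delay to the policy's conditioning features. The function names (delay_compensate, condition_on_delay), the toy data, and the 80 ms delay are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delay_compensate(obs, actions, timestamps, delay_s):
    """Hypothetical sketch of trajectory correction: pair each observation at
    time t with the demonstrated action at t + delay_s."""
    target_times = timestamps + delay_s
    idx = np.searchsorted(timestamps, target_times)
    idx = np.clip(idx, 0, len(actions) - 1)  # hold the last action past the end
    return obs, actions[idx]

def condition_on_delay(obs_features, delay_s, max_delay_s=0.5):
    """Hypothetical sketch of delay conditioning: append a normalized delay
    scalar to the observation features fed to the policy."""
    delay_feat = np.full((obs_features.shape[0], 1), delay_s / max_delay_s)
    return np.concatenate([obs_features, delay_feat], axis=-1)

# Toy usage: a 100-step demo at 50 Hz with an assumed 80 ms measured delay.
T = 100
timestamps = np.arange(T) / 50.0
obs = np.random.randn(T, 10)        # stand-in observation features
actions = np.random.randn(T, 7)     # stand-in 7-DoF actions
obs_c, act_c = delay_compensate(obs, actions, timestamps, delay_s=0.08)
obs_cond = condition_on_delay(obs_c, delay_s=0.08)
print(obs_cond.shape, act_c.shape)  # (100, 11) (100, 7)
```

Under these assumptions, the diffusion (or any other) policy is then trained on (obs_cond, act_c) pairs exactly as in standard imitation learning, which is what makes the pattern architecture agnostic.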

Result: Pick up rolling ball

Diffusion Policy (DP)
Delay-Aware Diffusion Policy (DA-DP)

Result: Ping-pong

Diffusion Policy (DP)
Delay-Aware Diffusion Policy (DA-DP)

Result: Pick and place moving box

Diffusion Policy (DP)
Delay-Aware Diffusion Policy (DA-DP)