Description
The Within-Orbit Adaptive Leapfrog No-U-Turn Sampler (WALNUTS), presented in https://arxiv.org/abs/2506.18746, is an extension of NUTS that adapts to local curvature by dividing each integrator step into sub-steps, where the number of sub-steps is chosen to keep the energy difference between the step's endpoints below a user-specified (or tuned) threshold. When the energy error is low (i.e. curvature is low), it does no more work than NUTS. In models with high-curvature regions, where NUTS might require tuning a very small step size to make divergent transitions rare, WALNUTS permits tuning a larger step size for exploring the low-curvature regions while still exploring the high-curvature regions well. The authors show that WALNUTS performs well on high-curvature models like Neal's funnel and hierarchical models.
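To make the adaptation rule above concrete, here is a minimal sketch (my own illustration, not code from the paper or its reference implementations): a macro step of size `h` is retried with a doubled number of leapfrog sub-steps until the energy error falls below the threshold. All names are hypothetical, and a standard Gaussian-momentum Hamiltonian is assumed.

```python
import numpy as np


def leapfrog(grad, q, p, h, n):
    """Integrate total time h using n leapfrog sub-steps of size h/n.
    grad is the gradient of the potential energy (negative log density)."""
    eps = h / n
    q, p = q.copy(), p.copy()
    p = p - 0.5 * eps * grad(q)  # initial half kick
    for i in range(n):
        q = q + eps * p          # drift
        if i < n - 1:
            p = p - eps * grad(q)  # full kick between sub-steps
    p = p - 0.5 * eps * grad(q)  # final half kick
    return q, p


def adaptive_step(logp, grad, q, p, h, max_error=0.1, max_doublings=10):
    """WALNUTS-style sub-step adaptation (sketch): double the number of
    sub-steps until the energy error across the macro step of length h
    is at most max_error. Returns the endpoint and the chosen n."""
    def energy(q, p):  # Hamiltonian: potential plus Gaussian kinetic term
        return -logp(q) + 0.5 * p @ p

    e0 = energy(q, p)
    n = 1
    for _ in range(max_doublings):
        q_new, p_new = leapfrog(grad, q, p, h, n)
        if abs(energy(q_new, p_new) - e0) <= max_error:
            return q_new, p_new, n
        n *= 2
    return q_new, p_new, n  # give up at the sub-step budget
```

The actual algorithm embeds this inside the NUTS orbit construction and handles reversibility of the sub-step choice, which this sketch omits.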
The paper also proposes a strategy for tuning the global energy-error threshold, as well as a step size tuning strategy different from the one used in Stan.
Implementation
There are reference implementations at https://github.com/bob-carpenter/walnuts and https://github.com/flatironinstitute/walnuts.
From an initial skim of the paper, I suspect WALNUTS could be implemented by introducing a new AbstractIntegrator subtype that wraps the existing leapfrog integrator and using it with the existing NUTS sampler, but it's quite likely I'm missing something: information about the "micro steps" may be needed elsewhere in the algorithm.
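As a rough shape for that wrapper idea, here is a Python sketch (illustrative names only, not the actual library's interface): a wrapper that delegates each macro step to a base leapfrog routine with a doubling number of sub-steps, and records the chosen doubling so the sampler could, in principle, verify that the reverse trajectory makes the same choice. That recorded state is exactly the kind of "micro step" information that might need to leak into the NUTS machinery.

```python
import numpy as np


def leapfrog(grad, q, p, eps, n):
    """Plain leapfrog: n steps of size eps (stands in for the base
    integrator being wrapped)."""
    q, p = q.copy(), p.copy()
    p = p - 0.5 * eps * grad(q)
    for i in range(n):
        q = q + eps * p
        if i < n - 1:
            p = p - eps * grad(q)
    p = p - 0.5 * eps * grad(q)
    return q, p


class SubStepLeapfrog:
    """Hypothetical wrapper integrator (not a real library API). Each
    macro step of size h delegates to the wrapped integrator with 2^k
    sub-steps, doubling k until the energy error is within max_error.
    last_k records the choice, since the surrounding sampler may need
    it, e.g. to check that retracing the step in reverse selects the
    same k (required for reversibility)."""

    def __init__(self, base, logp, grad, h, max_error=0.1, max_k=10):
        self.base, self.logp, self.grad = base, logp, grad
        self.h, self.max_error, self.max_k = h, max_error, max_k
        self.last_k = None  # sub-step bookkeeping exposed to the sampler

    def energy(self, q, p):
        return -self.logp(q) + 0.5 * p @ p

    def step(self, q, p):
        e0 = self.energy(q, p)
        for k in range(self.max_k + 1):
            n = 2 ** k
            q_new, p_new = self.base(self.grad, q, p, self.h / n, n)
            if abs(self.energy(q_new, p_new) - e0) <= self.max_error:
                break
        self.last_k = k
        return q_new, p_new
```

Whether this stays a pure integrator or needs hooks into the tree-building code (for the biased progressive state selection and the reversibility check) is the open question above.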