timefirstgrav 2 hours ago

This aligns with what I've observed in computational physics.

Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.

I developed a framework that systematically separates these components: using direct elliptic solvers for constraints and explicit methods for flux evolution. The resulting algorithms are both more stable and more efficient than unified implicit approaches.

The key insight is that many systems (EM, GR, fluids, quantum mechanics) share the same pattern:

- An elliptic constraint equation (solved directly, not timestepped)
- A continuity law for charge/mass/probability flux
- Wave-like degrees of freedom (handled with explicit methods)

With this structure, you can avoid the stiffness issues entirely rather than trying to power through them with implicit methods.
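
A toy sketch of that split (my own illustration, not from the paper): the elliptic constraint is solved directly each step, and only the wave-like part is timestepped explicitly. Everything here (the FFT Poisson solve, the source term, the coupling) is an assumption made up for illustration:

    import numpy as np

    # Periodic 1D grid; the elliptic solve is done spectrally (a direct solver).
    N, L = 256, 2 * np.pi
    x = np.linspace(0, L, N, endpoint=False)
    dx = L / N
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    dt = 0.5 * dx                       # explicit, CFL-limited step (wave speed 1)

    def solve_poisson(rho):
        """Direct solve of phi'' = -rho (zero-mean), never timestepped."""
        rho_hat = np.fft.fft(rho)
        phi_hat = np.zeros_like(rho_hat)
        nz = k != 0
        phi_hat[nz] = rho_hat[nz] / k[nz] ** 2
        return np.real(np.fft.ifft(phi_hat))

    u = np.exp(-10 * (x - np.pi) ** 2)  # wave-like degree of freedom
    v = np.zeros(N)                     # u_t
    rho = np.sin(x)                     # stand-in for the continuity-law variable

    for step in range(1000):
        phi = solve_poisson(rho)        # constraint: direct elliptic solve
        lap_u = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
        v += dt * (lap_u + phi)         # explicit wave update (arbitrary coupling)
        u += dt * v

Nothing in this loop is stiff, because the constraint never enters the timestepper.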

Paper: https://zenodo.org/records/16968225

  • tomrod 2 hours ago

    > Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.

    When I read statements like this, I wonder if this is related to optimal policy conditions required for infinitely-lived Bellman equations to have global and per-period policies in alignment.

    • timefirstgrav an hour ago

      That's a fascinating parallel! Both involve separating timeless constraints (value function / elliptic equation) from temporal dynamics (policy / flux evolution).

      Trying to timestep through a constraint that should hold instantaneously creates artificial numerical difficulties. The Bellman equation's value iteration is actually another example of this same pattern...
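
      A minimal illustration of that parallel (my sketch, with a made-up toy MDP): the value function is obtained as a fixed point, i.e. "solved directly" rather than timestepped, and only the policy then propagates the state forward in time:

          import numpy as np

          # Tiny deterministic MDP: states 0..4, actions step left/right,
          # reward 1 for reaching the rightmost state.
          n_states, gamma = 5, 0.9

          def step(s, a):  # a in {-1, +1}
              s2 = min(max(s + a, 0), n_states - 1)
              return s2, (1.0 if s2 == n_states - 1 else 0.0)

          # "Elliptic" part: solve the Bellman fixed point V = max_a [r + gamma V'].
          V = np.zeros(n_states)
          for _ in range(200):  # fixed-point iteration, not a physical timestep
              V = np.array([max(r + gamma * V[s2]
                                for s2, r in (step(s, -1), step(s, +1)))
                            for s in range(n_states)])

          # "Temporal" part: the greedy policy evolves the state forward.
          s = 0
          for t in range(6):
              a = max((-1, +1), key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
              s, _ = step(s, a)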

inasio 13 minutes ago

I was expecting a mention of symplectic ODE solvers, although perhaps that was beyond the scope of the blog post. For Hamiltonian ODEs, you can design methods that preserve the symplectic structure, keeping the energy error bounded over long integrations and outperforming more generic methods.

  • ChrisRackauckas 2 minutes ago

    Yeah, that's a separate post https://scicomp.stackexchange.com/questions/29149/what-does-.... I wanted to keep this post as simple as possible. If you could show cases where explicit Runge-Kutta methods outperform (by some metrics) implicit Runge-Kutta methods, then it leads to a whole understanding of "what matters is what you're trying to measure". And then yes, symplectic integrators are for long time integrations (explicit RK methods will show lower error for shorter times, so it's specifically longer time integrations on symplectic manifolds, though there are implicit RK methods which are symplectic, but ... tradeoff, tradeoff)
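
    A concrete instance of that metric-dependence (my example, not from the linked post): on a non-stiff problem at the same tolerance, SciPy's explicit RK45 typically needs far fewer function evaluations than the implicit Radau:

        import numpy as np
        from scipy.integrate import solve_ivp

        # Non-stiff test problem: harmonic oscillator, exact solution cos(t).
        f = lambda t, y: [y[1], -y[0]]

        for method in ("RK45", "Radau"):  # explicit vs implicit
            sol = solve_ivp(f, (0.0, 100.0), [1.0, 0.0],
                            method=method, rtol=1e-8, atol=1e-8)
            err = abs(sol.y[0, -1] - np.cos(100.0))
            print(method, "f evals:", sol.nfev, "error:", err)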

groos 29 minutes ago

I always see RK (Fehlberg) mentioned but almost never Bulirsch-Stoer (https://en.wikipedia.org/wiki/Bulirsch%E2%80%93Stoer_algorit...). Some years ago, I was looking to do high-precision orbital motion simulations (involving several bodies), and came across this in Numerical Recipes in C. I don't pretend to understand how B-S works, but it produced amazing precision with adaptive step-size adjustment. For highly chaotic systems, it held out far longer than RK.
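
For the curious, the core of B-S is short enough to sketch (a bare-bones version of the algorithm, no adaptive step-size control): take one big step with the modified midpoint method at several substep counts, then extrapolate the results to zero step size:

    import numpy as np

    def modified_midpoint(f, t, y, H, n):
        """Gragg's modified midpoint: cross H in n substeps; error is even in h."""
        h = H / n
        z0, z1 = y, y + h * f(t, y)
        for m in range(1, n):
            z0, z1 = z1, z0 + 2 * h * f(t + m * h, z1)
        return 0.5 * (z0 + z1 + h * f(t + H, z1))

    def bulirsch_stoer_step(f, t, y, H, k_max=8):
        """One B-S step: polynomial (Aitken-Neville) extrapolation in h^2."""
        ns = [2 * (i + 1) for i in range(k_max)]  # substep sequence 2, 4, 6, ...
        T = [modified_midpoint(f, t, y, H, n) for n in ns]
        for j in range(1, k_max):
            for i in range(k_max - 1, j - 1, -1):
                r = (ns[i] / ns[i - j]) ** 2
                T[i] = T[i] + (T[i] - T[i - 1]) / (r - 1)
        return T[-1]

    # Example: circular two-body orbit, y = [x, vx, y, vy], GM = 1.
    f = lambda t, y: np.array([y[1], -y[0] / np.hypot(y[0], y[2]) ** 3,
                               y[3], -y[2] / np.hypot(y[0], y[2]) ** 3])
    y, t, H = np.array([1.0, 0.0, 0.0, 1.0]), 0.0, 0.1
    for _ in range(100):
        y = bulirsch_stoer_step(f, t, y, H)
        t += H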

  • adgjlsfhk1 11 minutes ago

    Unfortunately the Numerical Recipes book is known for being pretty horrible in general for ODEs. For chaotic systems at tight tolerances, you likely want to be using Vern9, which is a highly efficient 9th-order RK solver.

stephencanon 2 hours ago

The orbital example where BDF loses momentum is really about the difference between a second-order method (BDF2) and a fourth-order method (RK), rather than explicit vs implicit (but: no linear multistep method with order > 2 can be A-stable; since the whole point of implicit methods is to achieve stability, the higher-order BDF formulas are relatively niche).

There are whole families of _symplectic_ integrators that conserve physical quantities and are much more suitable for this sort of problem than either option discussed. Even a low-order symplectic method will conserve momentum on an example like this.
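
A quick check of that claim (my sketch: Kepler problem with GM = 1, low-order leapfrog / velocity Verlet): angular momentum is conserved up to roundoff because every kick is along the radial direction, while the energy error stays bounded instead of drifting:

    import numpy as np

    acc = lambda q: -q / np.linalg.norm(q) ** 3   # central force, GM = 1

    q, v, dt = np.array([1.0, 0.0]), np.array([0.0, 1.1]), 0.01
    L0 = q[0] * v[1] - q[1] * v[0]                # angular momentum
    E0 = 0.5 * v @ v - 1 / np.linalg.norm(q)      # energy

    for _ in range(100_000):                      # many orbits
        v += 0.5 * dt * acc(q)                    # kick
        q += dt * v                               # drift
        v += 0.5 * dt * acc(q)                    # kick

    print("dL:", q[0] * v[1] - q[1] * v[0] - L0)  # zero up to roundoff
    print("dE:", 0.5 * v @ v - 1 / np.linalg.norm(q) - E0)  # small, bounded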

  • Certhas an hour ago

    Obviously^1. But it illustrates the broader point of the article, even if even better choices are available for this concrete problem.

    1) If you have studied these things in depth, which many/most users of solver packages have not.

  • _alternator_ an hour ago

    The fascinating thing is that discrete symplectic integrators typically can only conserve one of the physical quantities exactly, e.g., angular momentum but not energy in orbital mechanics.

abhinavsns 3 hours ago

This is taught in the intro class on basic numerics. Methods are considered more robust and practical if they have larger stability regions. Von Neumann is the OG analyst for PDEs.

dataflow 3 hours ago

> Those kinds of models aren’t really realistic? Energy goes to infinity, angular momentum goes to infinity, the chemical concentration goes to infinity: whatever you’re modeling just goes crazy! If you’re in this scenario, then your model is probably wrong. Or if the model isn’t wrong, the numerical methods aren’t very good anyways.

Uh... that's not how it works. The whole point of making a model and using a simulator is you don't already know what's going to happen. If your system is going to blow up, your model needs to tell you that. If it does, that's good, because it caught your mistake. And if it doesn't, it's obviously wrong, possibly in a deadly way.

  • tonyarkles 2 hours ago

    Having worked on non-linear simulators/ODE solvers off and on for a decade, I agree and disagree with what you're saying.

    I agree with you that that is 100% the point: you don't already know what's going to happen and you're doing modelling and simulation because it's cheaper/safer to do simulation than it is to build and test the real physical system. Finding failure modes, unexpected instability and oscillations, code bugs, etc. is an absolutely fantastic output from simulations.

    Where I disagree: you don't already know what's going to happen, but you DO know generally what is going to happen. If you don't have, at a minimum, an intuition for what's going to happen, you are going to have a very hard time distinguishing between "numerical instability with the simulation approach taken", "a bug in the simulation engine", "a model that isn't accurately capturing the physical phenomena", and "an unexpected instability in an otherwise reasonably-accurate model".

    For the first really challenging simulation engine I worked on, early in my career, I was fortunate: the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution, but I also had access to incredibly precise state snapshots from the real system (which was already built and on orbit) every 15 minutes. As I was developing the simulator, I was getting "reasonable" values out, but when I started comparing the results against the ground-truth snapshots I could quickly see "oh, it's out by 10 meters after 30 minutes of timesteps... there's got to be either a problem with the model or a numerical stability problem". Without that ground-truth data, even just identifying that there were missing terms in the model would have been exceptionally challenging. In the end, the final term that needed to be added to the model was solar radiation pressure; I wasn't taking into account the momentum transfer from the photons striking the SV, and that was causing just enough error in the simulation that the results weren't quite correct.

    Other simulations I've worked on were more focused on closed-loop control. There was a dynamics model and a control loop. Those can be deceptive to work on in a different way: the open-loop model can be surprisingly incorrect, and a tuned closed-loop control system around the incorrect model can produce seemingly correct results. Those kinds of simulations can be quite difficult to debug as well, but if you have a decent intuition for the kinds of control forces that you ought to expect from the controller, you can generally figure out if it's a bad numerical simulation, a bad model, or a good model of a bad system... but without those kinds of gut feelings and maybe some back-of-the-envelope math, it's going to be challenging and easy to trick yourself into thinking it's a good simulation.

    • srean 2 hours ago

      Love these kind of comments where I get to learn about things that I haven't played with.

      > the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution

      What kind of application would require such demanding tolerances? My first thought was nuclear fission. But then the fact that you had one in orbit sending a data feed every 15 mins imploded my fanciful thinking.

      • tonyarkles 2 hours ago

        Ahh, I really wish I could go into all of the details because it was a really cool project. The high-level answer (which is also related to the software-defined radio post that's on the front page right now) is that it was simulating the orbit of a satellite such that we could simulate the signals the satellite was transmitting with enough accuracy to implicitly get Doppler shift. That is... we didn't explicitly model Doppler shift in any piece of the system, it just showed up as a result of accurately modelling the orbit of the satellite's position and velocity relative to the receiver on Earth.
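
        To make the implicit-Doppler point concrete (a toy sketch, not their system; all numbers made up): once the orbit model gives you receiver-relative position and velocity, the first-order shift falls out of the radial velocity.

            import numpy as np

            C = 299_792_458.0                # m/s
            f_tx = 437e6                     # made-up UHF downlink frequency

            def doppler(f_emit, sat_pos, sat_vel, rx_pos):
                """First-order Doppler from line-of-sight (radial) velocity."""
                los = sat_pos - rx_pos
                v_radial = np.dot(sat_vel, los) / np.linalg.norm(los)  # + = receding
                return f_emit * (1 - v_radial / C)

            # LEO-ish: ~7.5 km/s along the line of sight -> shifted down ~11 kHz.
            print(doppler(f_tx, np.array([7e6, 0.0, 0.0]), np.array([7500.0, 0.0, 0.0]),
                          np.array([6.4e6, 0.0, 0.0])))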

        • srean an hour ago

          Fantastic.

          My second guess was that this was a part of a GPS like service.