Iteration time #634
Replies: 1 comment
-
Thanks for reporting – this sounds like a memory leak due to caching (potentially inside …). In any case, there might be a better work-around, which will likely also lead to better performance: you could create a stepper function directly:

```python
import pde

grid = pde.UnitGrid([32, 32])                                    # generate grid
state = pde.ScalarField.random_uniform(grid, 0.2, 0.3)           # generate initial condition
eq = pde.DiffusionPDE()                                          # define the PDE
solver = pde.ExplicitSolver(eq, scheme="euler", adaptive=True)   # initialize a solver
stepper = solver.make_stepper(state, dt=1e-3)                    # create the stepper; dt is the initial time step
```

The stepper can then be used inside your loop:
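For instance, something along these lines (a sketch; the 0.1 time horizon per call is an arbitrary choice, and the stepper advances `state` in place and returns the time it reached):

```python
t = 0.0
for _ in range(50):
    # advance the field in place over a short horizon; the stepper returns the time reached
    t = stepper(state, t, t + 0.1)
    # ... apply control inputs to state.data here ...
```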
Note that the …
-
I am using py-pde in a non-standard way to explore (through reinforcement learning) controllers for PDE/distributed-parameter systems. To allow the controller to act on the system, I am looping over the PDE with a short time horizon, applying the control inputs to the resulting state, and using that state as the basis for another short PDE run, as in the following pseudocode:
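Something along these lines (the solve parameters and the `apply_control` helper below are stand-ins for my real settings and controller):

```python
import pde

grid = pde.UnitGrid([32, 32])
state = pde.ScalarField.random_uniform(grid, 0.2, 0.3)
eq = pde.DiffusionPDE()

def apply_control(field_data):
    """Placeholder for the RL controller's action on the field values."""
    return -0.01 * field_data

for _ in range(50):
    # run the PDE over a short time horizon
    state = eq.solve(state, t_range=0.1, dt=1e-3, tracker=None)
    # apply the control inputs to the resulting state
    state.data += apply_control(state.data)
```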
The problem is that the time py-pde takes to run eq.solve (currently using DiffusionPDE) grows (roughly 5-10x) as I go through thousands of these iterations, and I cannot figure out why. At first it takes about 30 seconds to get through 50 iterations, but after a couple hundred batches it is up to around 100-120 seconds to complete 50 iterations. I need such high iteration counts for reinforcement learning training. I have tried calling the garbage collector and deleting variables like grid, state, etc. The only way I have found to get back down to a 30-second iteration time is to restart the kernel in my notebook.
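The cleanup I tried between batches looks roughly like this (sketch only; it did not restore the original speed):

```python
import gc

# drop references to the grid and state objects from the previous batch ...
del grid, state
# ... and force a garbage-collection pass before starting the next one
gc.collect()
```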
I know py-pde wasn't designed to be used (abused) like this, but is there something I can do to keep the iteration time down? Is there a variable I can delete, or a memory leak somewhere? I can't figure out why the iteration time grows, or find a more workflow-friendly way than a kernel restart to bring it back down.