The current Newton's method implementation has no option to force the Hessian to be positive definite, which is a common technique for improving convergence.
In the current implementation, the algorithm will not converge to the global minimum if the following conditions are met:
1. The Hessian is indefinite.
2. The directional derivative is negative.
3. The initial guess is close to both a local maximum and the desired global minimum (this condition is admittedly too vague to be very useful, but there are cases where the first two hold and the algorithm still converges, so I thought I would mention it anyway).
This may not be a complete description of the cases where forcing positive definiteness is helpful, but I ran into such a case and can illustrate it with the Six-Hump Camel function.
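For concreteness, here is a small sketch of the Six-Hump Camel function together with its analytic gradient and Hessian. The issue does not fix a language, so this is written in Python/NumPy with my own helper names; it also checks condition 1 at a sample point from one of the problematic regions.

```python
import numpy as np

def six_hump_camel(p):
    """Six-Hump Camel: f(x, y) = (4 - 2.1 x^2 + x^4/3) x^2 + x y + (-4 + 4 y^2) y^2."""
    x, y = p
    return (4 - 2.1 * x**2 + x**4 / 3) * x**2 + x * y + (-4 + 4 * y**2) * y**2

def gradient(p):
    """Analytic gradient of the Six-Hump Camel function."""
    x, y = p
    return np.array([8 * x - 8.4 * x**3 + 2 * x**5 + y,
                     x - 8 * y + 16 * y**3])

def hessian(p):
    """Analytic Hessian of the Six-Hump Camel function."""
    x, y = p
    return np.array([[8 - 25.2 * x**2 + 10 * x**4, 1.0],
                     [1.0, -8 + 48 * y**2]])

# At (1, -0.6), a point in one of the problematic regions, the Hessian
# has one negative and one positive eigenvalue, i.e. it is indefinite.
evals = np.linalg.eigvalsh(hessian(np.array([1.0, -0.6])))
print(evals)
```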
Below is a contour plot of this function, with the areas that satisfy the first two conditions above highlighted in dark blue. I picked some points from those areas and confirmed that the NewtonMinimizer implementation does not converge when they are used as initial guesses.
It would be interesting to write a test that iterates over a grid in the domain above to check whether those two conditions are necessary and/or sufficient for non-convergence, but I do not currently have the time, and it is outside the scope of this issue.
To restate: an option should be added to NewtonMinimizer that forces the Hessian to be positive definite.
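One common way to force positive definiteness (there are others, such as a modified Cholesky factorization) is to eigendecompose the Hessian and clamp its eigenvalues to a small positive floor before solving for the Newton step. The sketch below, in Python/NumPy, is only an illustration of the technique, not the actual NewtonMinimizer change; the Six-Hump Camel derivatives and the `floor` parameter are my own assumptions.

```python
import numpy as np

def gradient(p):
    # Analytic gradient of the Six-Hump Camel function.
    x, y = p
    return np.array([8 * x - 8.4 * x**3 + 2 * x**5 + y,
                     x - 8 * y + 16 * y**3])

def hessian(p):
    # Analytic Hessian of the Six-Hump Camel function.
    x, y = p
    return np.array([[8 - 25.2 * x**2 + 10 * x**4, 1.0],
                     [1.0, -8 + 48 * y**2]])

def make_positive_definite(h, floor=1e-3):
    """Clamp the eigenvalues of a symmetric matrix to a positive floor."""
    evals, evecs = np.linalg.eigh(h)
    return evecs @ np.diag(np.maximum(evals, floor)) @ evecs.T

p0 = np.array([1.0, -0.6])
g = gradient(p0)
h_mod = make_positive_definite(hessian(p0))
step = -np.linalg.solve(h_mod, g)

# Because h_mod is positive definite, the resulting step is guaranteed to
# be a descent direction: g . step = -g^T h_mod^{-1} g < 0 whenever g != 0.
print(g @ step)
```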
I will shortly create a pull request with my proposed solution to this issue.
Regards,
Mikhail
Created and linked a pull request to address this issue. The code changes include a unit test that initializes the NewtonMinimizer at the point {1, -0.6} (in one of the blue areas above). If no modification of the Hessian is performed, the algorithm converges to the stationary point {1.109205317132443, -0.76826824818394157}, which is not a minimum.
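This behaviour can be reproduced with a plain, unmodified Newton iteration. The sketch below (Python/NumPy, my own helper names, not the NewtonMinimizer code itself) starts from {1, -0.6} and converges to the stationary point quoted above rather than to either global minimum.

```python
import numpy as np

def gradient(p):
    # Analytic gradient of the Six-Hump Camel function.
    x, y = p
    return np.array([8 * x - 8.4 * x**3 + 2 * x**5 + y,
                     x - 8 * y + 16 * y**3])

def hessian(p):
    # Analytic Hessian of the Six-Hump Camel function.
    x, y = p
    return np.array([[8 - 25.2 * x**2 + 10 * x**4, 1.0],
                     [1.0, -8 + 48 * y**2]])

def plain_newton(p, tol=1e-12, max_iter=50):
    """Unmodified Newton iteration: converges to a nearby stationary point,
    which need not be a minimum when the Hessian is indefinite."""
    for _ in range(max_iter):
        g = gradient(p)
        if np.linalg.norm(g) < tol:
            break
        p = p - np.linalg.solve(hessian(p), g)
    return p

result = plain_newton(np.array([1.0, -0.6]))
print(result)  # approximately (1.1092053..., -0.7682682...), not a minimum
```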