Compare ceres with scipy curve_fit #1042
scipy and ceres use different optimization algorithms, especially in how they deal with bounds, so I am not surprised that they give different results. Can you share the log from the ceres solver iterations and the output of Solver::Summary::FullReport()?
However, I find that curve_fit seems more reasonable in this case.

Solver Summary (v 2.1.0-eigen-(3.4.0)-no_lapack-no_openmp)

Parameter blocks: 3 (reduced: 3)
Minimizer: TRUST_REGION
Dense linear algebra library: EIGEN
Linear solver: DENSE_QR (used: DENSE_QR)
Cost:
Minimizer iterations: 3
Time (in seconds):
  Residual only evaluation  0.000058 (3)
  Postprocessor             0.000010
Termination: CONVERGENCE (Function tolerance reached. |cost_change|/cost: 9.231992e-10 <= 1.000000e-06)
Thank you, I'd like to run your code myself. I think you have most of the code pasted above, but not all of it, including your initial guesses etc. Can you post your entire ceres code, please?
@sandwichmaker I have already uploaded the code. Thank you.
Thank you, I will take a look at this in a bit.
In the python version of the code you have

which seems a bit different from what you have in the c++ version of the code. In particular, in the python code you are using psill - nugget when multiplying. Are the c++ and python versions of psill the same or different?

That said, I substituted the values found by scipy into the objective function, and it is indeed a better minimum than the one found by ceres.
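That substitution check can be reproduced by evaluating the objective (the sum of squared residuals) at each candidate parameter set directly. A minimal sketch, using the standard spherical-model formula and the first few data points from the post; the parameter vectors below are placeholders to be replaced with the actual ceres and scipy results, not the real fitted values:

```python
import numpy as np

# First few (lag, gamma) points from the original post.
lag = np.array([175.65234, 390.07074, 617.2337, 846.20544, 1079.8468])
gamma = np.array([20.815123, 21.075365, 19.551971, 20.397446, 22.835442])

def spherical(d, nugget, range_, psill):
    # Standard spherical variogram model (assumed form):
    # nugget + psill * (1.5*h - 0.5*h^3), clamped to the sill beyond the range.
    h = np.minimum(d / range_, 1.0)
    return nugget + psill * (1.5 * h - 0.5 * h**3)

def sse(params):
    # Sum of squared residuals of the model against the observed gamma.
    nugget, range_, psill = params
    return float(np.sum((spherical(lag, nugget, range_, psill) - gamma) ** 2))

# Plug each solver's fitted parameters in here; these values are placeholders.
print(sse([20.0, 3000.0, 8.0]))
print(sse([20.0, 1.0, 8.0]))  # a tiny range makes the model a flat nugget + psill
```

Whichever parameter set yields the smaller value is the better minimum of the same objective both solvers are minimizing.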
There are issues with the variable naming in the code. You can use the following code instead:

def spherical_variogram_model(d, nugget, range_, sill):

where sill = nugget + psill. Under the Gaussian and Exponential models, their results are similar.
I compared ceres and scipy's curve_fit and found that ceres did not perform as well as curve_fit. The range calculated by ceres is 0, while the range calculated by curve_fit is more reasonable. What could be the possible reasons for this?
Code used for curve_fit:
import numpy as np
from scipy.optimize import curve_fit
def spherical_variogram_model(d, nugget, range_, psill):
    # Standard spherical model (body reconstructed; the original post omitted it):
    # nugget + psill * (1.5*h - 0.5*h^3), clamped to nugget + psill beyond the range.
    h = np.minimum(d / range_, 1.0)
    return nugget + psill * (1.5 * h - 0.5 * h**3)
# Input data
lag = np.array([175.65234, 390.07074, 617.2337, 846.20544, 1079.8468, 1312.8428,
1545.525, 1777.5623, 2009.1091, 2239.874, 2472.7234, 2709.9663,
2941.5889, 3174.672, 3406.3713, 3639.4817, 3873.5212, 4107.823,
4341.223, 4571.714, 4805.605, 5041.677, 5271.1084, 5503.067,
5737.291, 5970.111, 6202.1523, 6437.138, 6670.4116, 6892.423])
gamma = np.array([20.815123, 21.075365, 19.551971, 20.397446, 22.835442, 24.360342,
28.06185, 27.736881, 28.389353, 27.340208, 30.65644, 31.15702,
29.316727, 28.009079, 26.655233, 24.42093, 20.318785, 23.195473,
22.581009, 23.794619, 22.329227, 22.553814, 21.483736, 20.519196,
21.407993, 18.173586, 21.122591, 25.138948, 24.998138, 34.060234])
nlag=len(lag)
# Heuristic initial value for the nugget
Nugget=(gamma[1]*lag[0]-gamma[0]*lag[1])/(lag[0]-lag[1])
if Nugget<0: Nugget=gamma[0]
# Heuristic initial value for the sill
Sill=(gamma[nlag-3]+gamma[nlag-2]+gamma[nlag-1])/3.0
# Pick the starting value for the range
Range=lag[int(nlag/2)]
#Array of Initial Values. Also used in Gold Rule Fit
init_vals = [Nugget, Range, Sill]
#define the maximum values
maxlim=[max(gamma),max(lag),max(gamma)]
check=True
[Nugget, Range, Sill], _ = curve_fit(spherical_variogram_model, lag, gamma, method='trf', check_finite=check, p0=init_vals, bounds=(0, maxlim))
print("Estimated parameters:")
print("Range:", Range)
print("Sill:", Sill)
print("Nugget:", Nugget)
Code used for ceres:
using namespace ceres;

template <typename T>
class SphericalCovariance
{
public:
    SphericalCovariance(T range, T sill, T nugget)
        : _nugget(nugget)
        , _range(range)
        , _sill(sill)
    {
        _psill = _sill - _nugget;
    }

    T compute(double d) const
    {
        // Spherical variogram model (body reconstructed from the standard
        // formula; the original post omitted it): nugget + psill *
        // (1.5*h - 0.5*h^3) for d <= range, and the full sill beyond it.
        if (T(d) >= _range)
            return _sill;
        T h = T(d) / _range;
        return _nugget + _psill * (T(1.5) * h - T(0.5) * h * h * h);
    }

private:
    T _sill;
    T _range;
    T _nugget;
    T _psill;
};

struct ModelResidual
{
    ModelResidual(double lag, double gamma)
        : _lag(lag), _gamma(gamma) {}

    // Residual = model(lag) - observed gamma. (operator() reconstructed to
    // match the AutoDiffCostFunction<ModelResidual, 1, 1, 1, 1> usage below.)
    template <typename T>
    bool operator()(const T* const nugget, const T* const range,
                    const T* const sill, T* residual) const
    {
        SphericalCovariance<T> model(range[0], sill[0], nugget[0]);
        residual[0] = model.compute(_lag) - T(_gamma);
        return true;
    }

private:
    // Observations for a sample.
    const double _lag;
    const double _gamma;
};

// solve (guessNugget/maxNugget etc. come from the heuristics shown in the
// python version above)
ceres::Problem problem;
double nugget = guessNugget;
double sill = guessSill;
double range = guessRange;
for (size_t i = 0; i < lag.size(); ++i)
{
    ceres::CostFunction* cost_function =
        new ceres::AutoDiffCostFunction<ModelResidual, 1, 1, 1, 1>(
            new ModelResidual{lag[i], gamma[i]});
    // Each cost function must be added to the problem; this call was missing
    // from the pasted code:
    problem.AddResidualBlock(cost_function, nullptr, &nugget, &range, &sill);
}
problem.SetParameterLowerBound(&nugget, 0, 0.0);
problem.SetParameterUpperBound(&nugget, 0, maxNugget);
problem.SetParameterLowerBound(&range, 0, 0.0);
problem.SetParameterUpperBound(&range, 0, maxRange);
problem.SetParameterLowerBound(&sill, 0, 0.0);
problem.SetParameterUpperBound(&sill, 0, maxSill);

ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = false;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);