Augmented Lagrangian method #223
Conversation
- move alm_test.py into the test folder.
pypose/optim/alm_optimizer.py
Outdated
############
# Update Needed Parameters:
# 1. model params: \theta, gradient_descent, J -> self.update_parameter(pg['params'], D)
#    D: used linear solver, update with LM
# 2. lambda multiplier: \lambda, \lambda_{t+1} = \lambda_{t} + pf * error_C
#    -> self.update_parameter(pg['params'], D)
# 3. penalty factor (Optional): update_para * penalty factor
convert comments into docstring to generate docs.
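For illustration, a minimal sketch of how that comment block might read as a docstring (the update rules are transcribed from the comments above; the exact wording, math markup, and class name are up to the author):

class Augmented_Lagrangian_Algorithm(_Optimizer):
    r'''
    Augmented Lagrangian optimizer for constrained problems.

    Per outer iteration it updates:

    1. the model parameters \theta via a gradient / LM step, i.e.
       ``self.update_parameter(pg['params'], D)`` with ``D`` obtained
       from the linear solver;
    2. the Lagrangian multiplier, \lambda_{t+1} = \lambda_{t} + pf * error_C,
       where ``pf`` is the penalty factor and ``error_C`` the constraint error;
    3. (optionally) the penalty factor itself, scaled by ``update_para``.
    '''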
pypose/optim/alm_optimizer.py
Outdated
# 2. lambda multiplier: \lambda, \lambda_{t+1} = \lambda_{t} + pf * error_C
#    -> self.update_parameter(pg['params'], D)
# 3. penalty factor (Optional): update_para * penalty factor
class Augmented_Lagrangian_Algorithm(_Optimizer):
rename to AugmentedLagrangian; capitalized names don't need underscores.
pypose/optim/alm_optimizer.py
Outdated
convergence_tolerance_constrains=100, convergence_tolerance_model=1e-3, min=1e-6, max=1e32, \
update_tolerance=2, scheduler=None, penalty_factor=5, lm_rate=0.01, num_iter=10, kernel=None \
):
    defaults = {**{'min':min, 'max':max}}
defaults = {'min':min, 'max':max}
alm_test.py
Outdated
- move alm_test.py into the test folder, and follow the format of other tests.
pypose/optim/alm_optimizer.py
Outdated
for _ in range(self.num_iter):
    self.loss = optim.step(input)
    model_error, error = self.model(input), self.constraints(input)
    print(torch.norm(error), torch.norm(model_error))
remove print
pypose/optim/alm_optimizer.py
Outdated
class Generalized_SGD(_Optimizer):
    def __init__(self, model, solver=None, strategy=None, kernel=None, corrector=None, \
                 learning_rate=0.1, momentum=0.9, min=1e-6, max=1e32, vectorize=True):
        assert min > 0, ValueError("min value has to be positive: {}".format(min))
        assert max > 0, ValueError("max value has to be positive: {}".format(max))
        self.strategy = TrustRegion() if strategy is None else strategy
        defaults = {**{'min':min, 'max':max}, **self.strategy.defaults}
        super().__init__(model.parameters(), defaults=defaults)
        self.momentum = momentum
        self.lr = learning_rate
        self.model = RobustModel(model, kernel, auto=False)
remove this class if unused.
alm_test.py
Outdated
while scheduler.continual():
    loss = optimizer.step(input)
    scheduler.step(loss)
    losses.append(torch.norm(loss).cpu().detach().numpy())
    constraint_violation.append(torch.norm(constraints(input)).cpu().detach().numpy())
    objective_loss.append(torch.norm(invnet(input)).cpu().detach().numpy())
refer to point 8 in the contributor guideline
The latest version for review contains two files: the test file in tests/optim/test_alm.py and the optimizer file in pypose/optim/optimizer_alm.py. The test file includes three examples: two tensor problems and one LieTensor problem.
R, C = self.model(inputs)
self.lmd = self.lmd if hasattr(self, 'lmd') \
           else torch.zeros((C.shape[0], ))
An error occurred on my computer when running tests/optim/test_cnstopt.py:

Traceback (most recent call last):
  File "tests/optim/test_cnstopt.py", line 48, in <module>
    test.test_tensor()
  File "tests/optim/test_cnstopt.py", line 39, in test_tensor
    loss, lmd, = optimizer.step(input)
  File "/home/xukuan/anaconda3/envs/pypose_cnst/lib/python3.8/site-packages/torch-2.0.1-py3.8-linux-x86_64.egg/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "/home/xukuan/projects/pypose/tmp/pypose/pypose/optim/cnstopt.py", line 78, in step
    else self.alm_model(inputs)
  File "/home/xukuan/anaconda3/envs/pypose_cnst/lib/python3.8/site-packages/torch-2.0.1-py3.8-linux-x86_64.egg/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/xukuan/projects/pypose/tmp/pypose/pypose/optim/cnstopt.py", line 33, in forward
    L = R + (self.lmd @ C) + self.pf * penalty_term / 2
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensor in method wrapper_CUDA__dot)

Should this line be changed to self.lmd = self.lmd if hasattr(self, 'lmd') else torch.zeros((C.shape[0], )).to(C.device)?
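In code form, the suggested change would look like this (only the freshly created multiplier moves to the device of the constraint output C; an existing self.lmd is left untouched):

# before: the multiplier is always created on the CPU, regardless of where C lives
self.lmd = self.lmd if hasattr(self, 'lmd') \
           else torch.zeros((C.shape[0], ))

# suggested: create it on the same device as the constraint output C
self.lmd = self.lmd if hasattr(self, 'lmd') \
           else torch.zeros((C.shape[0], )).to(C.device)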
Yes, please. We tested all our samples in a pure CPU environment, so you can run them on CPU first as a workaround. I will check this and update a new version at my earliest convenience.
Ok for me.
Add the implementation of the Augmented Lagrangian method and its test (version 0.0):
1. Unconstrained_Model(nn.Module): converts a constrained problem into an unconstrained one (the standard formulation behind this conversion is sketched after this list)
2. Augmented_Lagrangian_Algorithm(_Optimizer): the ALM optimizer
3. InvNet(nn.Module): an example of a user-defined objective function
4. ConstrainNet(nn.Module): an example of user-defined constraints
5. test: see "main" for an illustration of how to obtain the optimized parameters and the Lagrangian multiplier.
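For context, the constrained-to-unconstrained conversion in item 1 follows the standard augmented Lagrangian construction; this is consistent with the line L = R + (self.lmd @ C) + self.pf * penalty_term / 2 visible in the traceback above, where R is the objective term returned by the model, C the constraint value, lmd the multiplier, and pf the penalty factor rho:

\min_\theta f(\theta) \ \text{s.t.}\ C(\theta) = 0
\quad\Longrightarrow\quad
L_\rho(\theta, \lambda) = f(\theta) + \lambda^\top C(\theta) + \tfrac{\rho}{2}\,\|C(\theta)\|^2,
\qquad
\lambda_{t+1} = \lambda_t + \rho\, C(\theta_t).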