
Augmented Lagrangian method #223

Open
wants to merge 40 commits into main

Conversation

@ImNotPrepared

Add the implementation of the Augmented Lagrangian method and its test (version 0.0):
1. Unconstrained_Model(nn.Module): converts a constrained problem into an unconstrained one
2. Augmented_Lagrangian_Algorithm(_Optimizer): the ALM optimizer
3. InvNet(nn.Module): an example of a user-defined objective function
4. ConstrainNet(nn.Module): an example of user-defined constraints
5. test: see "main" for an illustration of how to obtain the optimized parameters and the Lagrangian multiplier (a standalone sketch of the method follows below).
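
To make the method concrete before diving into the review, here is a standalone sketch of the augmented Lagrangian loop on a toy equality-constrained problem. It uses plain PyTorch rather than the PR's classes; every name and constant in it is illustrative, not the PR's code:

import torch

# Toy problem: minimize ||x||^2 subject to sum(x) - 1 = 0.
# Closed-form solution: x = [1/3, 1/3, 1/3], multiplier lmd = -2/3.
x = torch.zeros(3, requires_grad=True)
lmd = torch.zeros(())   # Lagrangian multiplier, updated by the outer loop
pf = 5.0                # penalty factor

def objective(x):
    return x.square().sum()

def constraint(x):
    return x.sum() - 1.0

for outer in range(10):
    opt = torch.optim.SGD([x], lr=0.01)
    for inner in range(200):            # inner loop: minimize L(x, lmd) over x
        opt.zero_grad()
        C = constraint(x)
        L = objective(x) + lmd * C + pf * C.square() / 2
        L.backward()
        opt.step()
    with torch.no_grad():               # outer loop: lmd <- lmd + pf * error_C
        lmd += pf * constraint(x)

print(x.detach(), lmd)                  # ~[0.333, 0.333, 0.333], ~-0.667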

@wang-chen wang-chen changed the title alm_0 Augmented Lagrangian method Apr 18, 2023
@wang-chen (Member) left a comment

  • move alm_test.py into the test folder.

Comment on lines 32 to 38
############
# Update needed parameters:
# 1. model params: \theta, gradient descent, J -> self.update_parameter(pg['params'], D)
#    D: step from the linear solver, updated with LM
# 2. Lagrangian multiplier: \lambda, \lambda_{t+1} = \lambda_{t} + pf * error_C
#    -> self.update_parameter(pg['params'], D)
# 3. penalty factor (optional): update_para * penalty factor
@wang-chen (Member)

convert comments into docstring to generate docs.
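
For example, the comment block quoted above could become a class docstring along these lines (a sketch of the suggested conversion, not the merged text):

class Augmented_Lagrangian_Algorithm(_Optimizer):
    r'''
    Each outer iteration updates three quantities:

    1. Model parameters \theta: gradient descent on the objective J via
       self.update_parameter(pg['params'], D), where D is the step from
       the linear solver, updated with LM.
    2. Lagrangian multiplier \lambda:
       \lambda_{t+1} = \lambda_t + pf * error_C.
    3. Penalty factor (optional): pf <- update_para * pf.
    '''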

# 2. Lagrangian multiplier: \lambda, \lambda_{t+1} = \lambda_{t} + pf * error_C
#    -> self.update_parameter(pg['params'], D)
# 3. penalty factor (optional): update_para * penalty factor
class Augmented_Lagrangian_Algorithm(_Optimizer):
@wang-chen (Member)

rename to AugmentedLagrangian; capitalized names don't need underscores.
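
That is, the class definition quoted above would become:

class AugmentedLagrangian(_Optimizer):
    ...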

convergence_tolerance_constrains=100, convergence_tolerance_model=1e-3, min=1e-6, max=1e32,
update_tolerance=2, scheduler=None, penalty_factor=5, lm_rate=0.01, num_iter=10, kernel=None):
    defaults = {**{'min':min, 'max':max}}
@wang-chen (Member)

defaults = {'min':min, 'max':max}

alm_test.py Outdated
@wang-chen (Member), Apr 18, 2023

  • move alm_test.py into the test folder, and follow the format of other tests.

for _ in range(self.num_iter):
    self.loss = optim.step(input)
    model_error, error = self.model(input), self.constraints(input)
    print(torch.norm(error), torch.norm(model_error))
@wang-chen (Member), Apr 18, 2023

remove print

Comment on lines 83 to 93
class Generalized_SGD(_Optimizer):
    def __init__(self, model, solver=None, strategy=None, kernel=None, corrector=None,
                 learning_rate=0.1, momentum=0.9, min=1e-6, max=1e32, vectorize=True):
        assert min > 0, ValueError("min value has to be positive: {}".format(min))
        assert max > 0, ValueError("max value has to be positive: {}".format(max))
        self.strategy = TrustRegion() if strategy is None else strategy
        defaults = {**{'min':min, 'max':max}, **self.strategy.defaults}
        super().__init__(model.parameters(), defaults=defaults)
        self.momentum = momentum
        self.lr = learning_rate
        self.model = RobustModel(model, kernel, auto=False)
@wang-chen (Member)

remove this class if unused.

alm_test.py Outdated
Comment on lines 61 to 66
while scheduler.continual():
    loss = optimizer.step(input)
    scheduler.step(loss)
    losses.append(torch.norm(loss).cpu().detach().numpy())
    constraint_violation.append(torch.norm(constraints(input)).cpu().detach().numpy())
    objective_loss.append(torch.norm(invnet(input)).cpu().detach().numpy())
@wang-chen (Member)

refer to point 8 in the contributor guideline

@wang-chen wang-chen requested a review from Zhaozhpe April 28, 2023 18:31
@wang-chen wang-chen added the new feature New feature or request label May 4, 2023
@Zhaozhpe

The latest version for review contains two files: the test file at tests/optim/test_alm.py and the optimizer at pypose/optim/optimizer_alm.py. The test file includes three examples: two tensor problems and one LieTensor problem.

@wang-chen wang-chen requested review from xukuanHIT and removed request for Zhaozhpe August 16, 2023 05:56

R, C = self.model(inputs)
self.lmd = self.lmd if hasattr(self, 'lmd') \
else torch.zeros((C.shape[0], ))
@xukuanHIT (Contributor), Sep 18, 2023

An error occurred on my machine when running tests/optim/test_cnstopt.py:

Traceback (most recent call last):
  File "tests/optim/test_cnstopt.py", line 48, in <module>
    test.test_tensor()
  File "tests/optim/test_cnstopt.py", line 39, in test_tensor
    loss, lmd, = optimizer.step(input)
  File "/home/xukuan/anaconda3/envs/pypose_cnst/lib/python3.8/site-packages/torch-2.0.1-py3.8-linux-x86_64.egg/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "/home/xukuan/projects/pypose/tmp/pypose/pypose/optim/cnstopt.py", line 78, in step
    else self.alm_model(inputs)
  File "/home/xukuan/anaconda3/envs/pypose_cnst/lib/python3.8/site-packages/torch-2.0.1-py3.8-linux-x86_64.egg/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/xukuan/projects/pypose/tmp/pypose/pypose/optim/cnstopt.py", line 33, in forward
    L = R + (self.lmd @ C) + self.pf * penalty_term / 2
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensor in method wrapper_CUDA__dot)

Should this line be changed to self.lmd = self.lmd if hasattr(self, 'lmd') else torch.zeros((C.shape[0], )).to(C.device)?

@ImNotPrepared (Author)

Yes, please. We tested all our samples in a pure CPU environment, so you can apply that fix on your end for now; I will verify it and push an updated version at my earliest convenience.
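
For reference, the fix under discussion would look like this in context. This is a sketch: passing device= to torch.zeros is a small variant of the suggested .to(C.device) that skips the intermediate CPU allocation, otherwise it is the same change.

R, C = self.model(inputs)
# Create the multiplier on the same device as the constraint values, so the
# CPU-only tests and the CUDA run in the traceback above both work.
self.lmd = self.lmd if hasattr(self, 'lmd') \
           else torch.zeros(C.shape[0], device=C.device)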

@xukuanHIT (Contributor)

Ok for me.

Labels: new feature (New feature or request)
Projects: none yet
5 participants