Cannot fine-tune after reloading the model weights #168
Comments
Hi @wubizhi, what is the value of
n_bases = 2
Is it the same as the model located at
Yes, the trained model path and the model name are kept the same. First, I trained the model for 20 epochs, and the trained weights were stored in the path ./torch_ensemble_results/softGBM/. Then I used io.load to reload the model weights from the path ./torch_ensemble_results/softGBM/. Thirdly, I just wanted to run the same model for another 50 epochs, but it reports the issue I pasted above. Does reloading and re-running torch-ensemble work well for you? If yes, can you give me a demo so I can find out what happened in my own code? Or do you have any tips or ideas about the issue? Best wishes, thanks very much!
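For reference, here is a minimal sketch of the two-step workflow described above, assuming a SoftGradientBoostingClassifier (the "softGBM" in the path) with an MLP base estimator and dummy data. The base estimator, data, and hyperparameters are placeholders I made up, not taken from the original code, and the exact fit()/io.load() keyword arguments may differ between torchensemble versions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from torchensemble import SoftGradientBoostingClassifier
from torchensemble.utils import io


# Hypothetical base estimator -- the base learner used in the original code is not shown.
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        return self.net(x)


# Dummy data purely for illustration.
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32)

save_dir = "./torch_ensemble_results/softGBM/"

# Step 1: train for 20 epochs and save the ensemble weights into save_dir.
model = SoftGradientBoostingClassifier(estimator=MLP, n_estimators=2, cuda=False)
model.set_optimizer("Adam", lr=1e-3)
model.fit(train_loader, epochs=20, save_model=True, save_dir=save_dir)

# Step 2: rebuild an identical ensemble, reload the saved weights, and keep training.
model = SoftGradientBoostingClassifier(estimator=MLP, n_estimators=2, cuda=False)
model.set_optimizer("Adam", lr=1e-3)
io.load(model, save_dir)  # restores only the ensemble's weights, not the optimizer state
model.fit(train_loader, epochs=50, save_model=True, save_dir=save_dir)
```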
Sure, I will try to reproduce your problem first, and then get back to you.
It doesn't seem like the io.save() method actually saves anything like optimizer.state_dict() or scheduler.state_dict(), which seems to mess up my trained ckpt.pth when I load it and call .fit() to continue training. Can you suggest any workaround for it?
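One possible workaround, sketched under the assumption that io.save()/io.load() only persist the ensemble weights: keep using them for the model, and store any optimizer/scheduler state you actually have a handle on in a separate file with plain torch.save. Since torchensemble builds its optimizer internally during fit(), you may not have such a handle without patching the library; the helpers below (save_extra_state/load_extra_state are names invented for this sketch) only show the generic PyTorch checkpoint pattern:

```python
import torch


# Hypothetical helpers: persist training state that io.save() does not cover,
# alongside the weights it already writes, so it can be restored before resuming.
def save_extra_state(optimizer, scheduler=None, path="extra_state.pth"):
    torch.save(
        {
            "optimizer": optimizer.state_dict(),
            "scheduler": scheduler.state_dict() if scheduler is not None else None,
        },
        path,
    )


def load_extra_state(optimizer, scheduler=None, path="extra_state.pth"):
    state = torch.load(path, map_location="cpu")
    optimizer.load_state_dict(state["optimizer"])
    if scheduler is not None and state["scheduler"] is not None:
        scheduler.load_state_dict(state["scheduler"])
```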
I want to know whether my code above can work or not: if I have just trained the model for 20 epochs, can I reload the model weights and continue training for more epochs? If that should work, why does it report a bug like the one below?