
Training #4

Open
Luke2642 opened this issue Apr 23, 2024 · 1 comment

@Luke2642

I was just wondering whether you think your two interesting innovations, RAU-Net and MSW-MSA, would also benefit fine-tuning. Do you think the model would train faster or generalise better, or could it be trained only on smaller crops and downsampled images rather than high-resolution images? If so, that could be quite a boost to training!

I skimmed the paper but it was focused on the improvement during inference.

@ShenZhang-Shin
Collaborator

Both RAU-Net and MSW-MSA speed up the forward pass, thereby enabling faster training.
If you have a dataset at 1024x1024 resolution, I believe that using RAU-Net to fine-tune a model trained at 512x512 resolution could boost convergence. This is because, after applying RAU-Net, the feature sizes within the deep blocks of the network remain consistent with those at 512x512 resolution.
High-resolution images are precious and rare. I suggest training models on high-resolution images, and then generating even higher-resolution images with HiDiffusion.
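To make the feature-size argument above concrete, here is a minimal arithmetic sketch. It assumes a U-Net with three stride-2 downsampling stages before the deep blocks, and models RAU-Net's extra downsampling as one additional factor of 2; the function name and stage counts are illustrative, not the paper's exact modules.

```python
def deep_feature_size(input_res, unet_downsamples=3, rau_extra_downsample=False):
    """Spatial size of the features entering the deepest U-Net blocks.

    Illustrative only: assumes each U-Net stage halves resolution, and that
    RAU-Net contributes one extra 2x downsample before the deep blocks.
    """
    size = input_res
    for _ in range(unet_downsamples):
        size //= 2
    if rau_extra_downsample:
        size //= 2  # RAU-Net's additional downsampling
    return size

# Vanilla network at its 512x512 training resolution:
print(deep_feature_size(512))                              # 64
# A 1024x1024 input with RAU-Net matches that deep feature size:
print(deep_feature_size(1024, rau_extra_downsample=True))  # 64
```

Because the deep blocks see the same feature sizes in both cases, weights trained at 512x512 operate in a familiar regime when fine-tuning at 1024x1024 with RAU-Net, which is the intuition behind the faster convergence claimed above.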
