
Lambda Layer for Multiscale Training #19

Open
lufanma opened this issue Oct 31, 2020 · 5 comments
lufanma commented Oct 31, 2020

Does the Lambda Layer support multiscale training? It seems like you have to specify `n`.

@lucidrains (Owner) commented

@Lufan111 if you mean whether you can train a lambda layer to be agnostic to the image size, you will want to use the version with the localized context (keyword r)
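To illustrate why the local-context version can be size-agnostic, here is a toy 1-D sketch in plain NumPy (not the repository's actual code, which uses a 3-D convolution): the positional aggregation depends only on the window size `r`, never on the input length, so the same layer handles any resolution.

```python
import numpy as np

def local_position_lambda(values, r):
    """Toy 1-D stand-in for the local-context positional lambda:
    each output position aggregates values within a window of size r,
    so nothing in the operation depends on the sequence length n."""
    n = values.shape[0]
    pad = r // 2
    padded = np.pad(values, (pad, pad))
    # sliding-window sum: output i covers padded positions [i, i + r)
    return np.array([padded[i:i + r].sum() for i in range(n)])

# Works for any input length -- the only fixed hyperparameter is r.
print(local_position_lambda(np.ones(16), r=5).shape)   # (16,)
print(local_position_lambda(np.ones(100), r=5).shape)  # (100,)
```

The same principle is why the convolutional local-context lambda in the repo can be applied to images of varying H×W during multiscale training.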

lufanma (Author) commented Nov 1, 2020

> @Lufan111 if you mean whether you can train a lambda layer to be agnostic to the image size, you will want to use the version with the localized context (keyword r)

Thanks for the reply. What I mean is: can I use the global-context version with different image sizes during multiscale training, since H*W actually differs per batch? Does the current code support this?

@lucidrains (Owner) commented

@Lufan111 it won't work for global context if the images differ in size across batches, only for local context
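A toy NumPy sketch of why the global-context version is tied to a fixed number of positions (a hypothetical illustration, not the repository's code): the learned positional table is allocated with shape `(n, n)` at construction time, so a batch whose flattened H*W differs from `n` cannot be multiplied against it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8 * 8                      # number of positions the layer was built for
E = rng.normal(size=(n, n))    # learned relative-position table, fixed at init

def global_position_lambda(values):
    """Toy positional lambda: every output position attends to all n inputs.
    The matmul requires values to have exactly n rows."""
    return E @ values

print(global_position_lambda(rng.normal(size=(64, 4))).shape)  # (64, 4) -- OK
try:
    global_position_lambda(rng.normal(size=(100, 4)))  # a batch with different H*W
except ValueError as err:
    print("shape mismatch:", err)
```

This is the size coupling being described: the local-context path has no such fixed table, which is why only it supports per-batch resolution changes.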

lufanma (Author) commented Nov 1, 2020

> @Lufan111 it won't work for global context if the images differ in size across batches, only for local context

Okay, I see. As for the COCO object detection part of Appendix D "Experimental Details", which says "apply multi-scale jitter of [0.1, 2.0] during training": so they just use the local-context version with arg `r`, right? Not the global-context version on multi-scale images?

@lucidrains (Owner) commented

@Lufan111 yup. If you look at the beginning of Appendix D, the authors detail the architecture they settled on: they use a local context of 23x23, unless the intra-depth dimension is greater than 1, in which case they lower it to 7x7.
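That rule can be written down as a tiny helper (a hypothetical sketch of the comment above, not the paper's or the repository's code):

```python
def pick_local_context(dim_u):
    """Choose the local-context size r as described above:
    23x23 normally, reduced to 7x7 when the intra-depth
    dimension dim_u is greater than 1."""
    return 7 if dim_u > 1 else 23

print(pick_local_context(1))  # 23
print(pick_local_context(4))  # 7
```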
