Testing on unseen images #21
Comments
You can refer to the example commands, which correspond to the setups for the main results. You can start by setting up the data pipeline as in this code base. Some important knobs in the config, I think, are: 1) the size of the image patch for each inference, and 2) the sliding-window density. If you have a good GPU, you can mainly refer to the cityscale config, which uses 512x512 image patches. If you have limited GPU memory, you can refer to the SpaceNet config, which uses 256x256 image patches. Feel free to leave other questions in this thread. If any part of the code/config is especially confusing, I'll find time to add more comments.
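For illustration, here is a minimal sketch of how sliding-window inference over image patches typically works. The patch sizes mirror the cityscale (512x512) and SpaceNet (256x256) settings mentioned above, but the function name, stride value, and overall structure are hypothetical and not taken from this repository's code or configs.

```python
import numpy as np

def sliding_window_patches(image, patch_size=512, stride=256):
    """Yield (row, col, patch) tuples covering the image.

    patch_size: 512 matches the cityscale-style setup; use 256 if GPU
    memory is limited (SpaceNet-style setup). A smaller stride gives a
    denser sliding window, i.e. more overlap between patches.
    """
    h, w = image.shape[:2]
    for r in range(0, max(h - patch_size, 0) + 1, stride):
        for c in range(0, max(w - patch_size, 0) + 1, stride):
            yield r, c, image[r:r + patch_size, c:c + patch_size]

# Example: count how many patches a 2048x2048 tile produces.
dummy = np.zeros((2048, 2048, 3), dtype=np.uint8)
n_patches = sum(1 for _ in sliding_window_patches(dummy, patch_size=512, stride=256))
print(n_patches)  # 49 patches (a 7x7 grid) for this tile/stride combination
```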
As of now, I want to test your model weights to detect roads. Can you tell me which config YAML to use for that? And what is the ideal image size to test it on?
Been quite busy recently - I have updated the example inference commands with our checkpoint in the README; those configs should be good starting points. Regardless of your image size, it would be ideal to resize it so that it is 1 meter/pixel.
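A minimal sketch of resampling an image to roughly 1 meter/pixel, assuming you know the source ground sample distance (GSD) of your imagery. The use of OpenCV, the function name, and the example GSD/file path are assumptions for illustration, not part of this repository.

```python
import cv2

def resample_to_1m_per_pixel(image, source_gsd_m):
    """Resize so that one pixel covers roughly 1 meter on the ground.

    source_gsd_m: ground sample distance of the input in meters/pixel
    (e.g. 0.3 for 30 cm imagery); this value comes from your imagery
    metadata and is an assumption here.
    """
    scale = source_gsd_m / 1.0  # target resolution is 1 m/pixel
    h, w = image.shape[:2]
    new_size = (max(1, round(w * scale)), max(1, round(h * scale)))
    return cv2.resize(image, new_size, interpolation=cv2.INTER_AREA)

# Example: 0.5 m/pixel imagery gets downscaled by a factor of 2.
img = cv2.imread("my_tile.png")  # hypothetical file path
img_1m = resample_to_1m_per_pixel(img, source_gsd_m=0.5)
```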
I want to test your model weights on my own satellite dataset, but unfortunately I don't understand which config .yaml to use or how to prepare my data for it. Please help.