Custom rules for vision transformer #184
Hey @ascdqz, we have planned support for transformers/attention and the new built-in. Unfortunately, my schedule is super full, so I will probably not get to work on this until maybe late summer. But do not feel pressured: I will eventually try to find someone to do it, or do it myself once my schedule allows.
Hello,

I'm trying to use this method on a vision transformer model (model = torchvision.models.vit_b_16(); its first several layers are shown in the image below). From reading the documentation, I think I need to write and use new rules: some of the layer types don't have an existing rule class, and the submodules are fairly complex, so custom rules seem necessary, right? I read the documentation on how to write a custom rule, but I can't figure out which rules to use on which layers of this ViT model (I want to get images like the original EpsilonPlusFlat results). I ran the code below and got the error shown below. Do you have any recommendations on how to run the LRP method on this model?
Thank you!