
Ensembling Methods incompatible with snnTorch models #144

Open
kgano-ucsd opened this issue Feb 2, 2023 · 1 comment


kgano-ucsd commented Feb 2, 2023

Hi,

I have been trying to set up GradientBoosting for an snnTorch model I am working on (it's mostly PyTorch under the hood). However, I've run into a circular issue that I have yet to find a solution for:

Originally, my inputs for my train/test loaders for my feed forward snnTorch model were all dtype torch.float. I got this error:

     33 onehot = torch.zeros(label.size(0), n_classes).float().to(label.device)
---> 34 onehot.scatter_(1, label.view(-1, 1), 1)
     36 return onehot

RuntimeError: scatter(): Expected dtype int64 for index

In an attempt to fix this, I tried changing the dtype of all my inputs to torch.int64, but got this error:

    113 def forward(self, input: Tensor) -> Tensor:
--> 114     return F.linear(input, self.weight, self.bias)

RuntimeError: mat1 and mat2 must have the same dtype

For an input to require_grad, the tensor must be a float, so changing the dtype in my Linear layers doesn't help, either.

What could be going wrong? Since snnTorch is an extension of PyTorch, I was hoping that Ensemble-Pytorch would also be compatible, but if there are some core compatibility issues I understand. Thanks in advance!

Edit: To clarify, ensemble.fit exposes these issues.
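For what it's worth, a minimal sketch of how the two dtypes can coexist (hypothetical repro, not from Ensemble-Pytorch itself): the model inputs can stay torch.float for autograd, while only the label tensor passed to scatter_ needs to be int64.

```python
import torch

n_classes = 10

# Labels kept as int64 (torch.long), as scatter_ requires for its index.
label = torch.tensor([1, 3, 7])
onehot = torch.zeros(label.size(0), n_classes).float()
onehot.scatter_(1, label.view(-1, 1), 1)  # works: index dtype is int64

# If labels arrive as float, cast only the labels, never the model inputs:
float_label = torch.tensor([1.0, 3.0, 7.0])
onehot2 = torch.zeros(float_label.size(0), n_classes).float()
onehot2.scatter_(1, float_label.long().view(-1, 1), 1)

assert torch.equal(onehot, onehot2)
```

So the circular error may just mean the labels (not the inputs) need the cast to int64 somewhere before the one-hot encoding step.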

xuyxu (Member) commented Feb 12, 2023

Hi @kgano-ucsd, sorry for the late response. I think the reason is that the size of the one-hot encoded vector does not match the input dim of your model.
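To illustrate the suggested mismatch, here is a hedged sketch (the Linear layer stands in for the first layer of any feed-forward model; names and sizes are made up): if one-hot vectors are fed to the model, their width (n_classes) must equal the in_features of that first layer.

```python
import torch
import torch.nn as nn

n_classes = 10

# The first layer's in_features must equal the one-hot width,
# otherwise F.linear raises a shape/dtype error inside forward().
model = nn.Linear(n_classes, 4)

label = torch.tensor([1, 3, 7])
onehot = torch.zeros(label.size(0), n_classes)
onehot.scatter_(1, label.view(-1, 1), 1)

out = model(onehot)        # shapes agree: (3, 10) @ (10, 4) -> (3, 4)
print(out.shape)           # torch.Size([3, 4])
```

If in_features were anything other than 10 here, the matmul inside forward() would fail, which matches the kind of error seen above.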
