Bug description
```python
self.fc = nn.Sequential(
    nn.Linear(16*4*4, 120),
    nn.Sigmoid(),
    nn.Linear(120, 84),
    nn.Sigmoid(),
    nn.Linear(84, 10)
)
```

Shouldn't the first argument of the `nn.Linear` on the second line be 16\*5\*5? The corresponding line in the English version is `nn.Linear(in_features=16*5*5, out_features=120),`

**Version info**
pytorch:
torchvision:
torchtext:
...
Actually there is no problem. The input here is batch_size x 1 x 28 x 28, so it is 16\*4\*4. The English version's input is probably batch_size x 1 x 32 x 32 (or it uses batch_size x 1 x 28 x 28 with padding=2 in the first convolution); I haven't read the English version, so I'm not sure which one it is, but the input images in the 1998 LeNet paper were indeed 32x32.
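The shape arithmetic above can be checked with a quick sketch. Assuming the book's LeNet conv stack (a 5x5 conv to 6 channels, a 2x2 pool, a 5x5 conv to 16 channels, another 2x2 pool), the helper below traces the spatial size for both input resolutions; the function names are just for illustration.

```python
# Sketch: verify the flattened feature size after LeNet's conv stack
# for a 28x28 input (Fashion-MNIST) vs a 32x32 input (1998 LeNet paper).
# Assumed stack: 5x5 conv -> 2x2 pool -> 5x5 conv (16 channels) -> 2x2 pool.

def conv_out(n, kernel, stride=1, padding=0):
    """Output spatial size of one conv/pool layer (floor division)."""
    return (n + 2 * padding - kernel) // stride + 1

def lenet_flat_features(n, first_conv_padding=0):
    n = conv_out(n, 5, padding=first_conv_padding)  # first 5x5 conv
    n = conv_out(n, 2, stride=2)                    # 2x2 pool
    n = conv_out(n, 5)                              # second 5x5 conv
    n = conv_out(n, 2, stride=2)                    # 2x2 pool
    return 16 * n * n                               # 16 output channels

print(lenet_flat_features(28))                       # 16*4*4 = 256
print(lenet_flat_features(32))                       # 16*5*5 = 400
print(lenet_flat_features(28, first_conv_padding=2)) # padding=2 also gives 400
```

So both explanations of the English version are consistent: a 32x32 input and a 28x28 input with padding=2 in the first convolution both lead to 16\*5\*5.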
Thanks. At first I also thought there was a problem. The original paper used MNIST with images 32 pixels in both height and width, while this example uses Fashion-MNIST, where each image is 28 pixels in height and width.