
About the exercise #1

Open · Puff-Wen opened this issue Oct 24, 2017 · 5 comments
@Puff-Wen

Hi,

I added a conv layer as follows:

    self.conv1 = nn.Conv2d(1, 6, kernel_size=(5, 5), stride=1, padding=0)
    self.conv1_5 = nn.Conv2d(6, 12, kernel_size=(5, 5), stride=1, padding=1)
    self.conv2 = nn.Conv2d(12, 16, kernel_size=(5, 5), stride=1, padding=0)

and

    out = F.relu(self.conv1_5(out))

in the forward function. Then I got the following error message:

RuntimeError: Given input size: (16x1x1). Calculated output size: (16x0x0). Output size is too small at /pytorch/torch/lib/THCUNN/generic/SpatialDilatedMaxPooling.cu:69

Could you please give me some clues?
Thanks.

@hui-po-wang commented Oct 24, 2017

Hi,

I tried to reproduce the error, but it seems to work fine.
I am not sure what happened in your code; it would be helpful if you could provide more information.

The attached file is my code. You can print out the size of each output to check it, as I do in the forward function.
test.txt

Thanks.
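
For reference, the same check can be done on paper with the standard output-size formula for conv and pooling layers, out = floor((n + 2 * padding - kernel) / stride) + 1. Below is a minimal sketch tracing the layers from the first post, assuming 28 x 28 MNIST inputs and the plain F.max_pool2d(out, 2) pooling calls; the conv_out helper is made up for illustration. It reproduces the 1 x 1 map that the final pooling layer cannot shrink further:

    import math

    def conv_out(n, kernel, stride=1, padding=0):
        # Spatial output size of a conv/pool layer:
        # floor((n + 2*padding - kernel) / stride) + 1
        return (n + 2 * padding - kernel) // stride + 1

    n = 28                          # assumed MNIST input, 28 x 28
    n = conv_out(n, 5)              # conv1:            28 -> 24
    n = conv_out(n, 2, stride=2)    # max_pool2d(., 2): 24 -> 12
    n = conv_out(n, 5, padding=1)   # conv1_5:          12 -> 10
    n = conv_out(n, 2, stride=2)    # max_pool2d(., 2): 10 -> 5
    n = conv_out(n, 5)              # conv2:             5 -> 1
    print(n)                        # 1; pooling a 1x1 map by 2 gives 0x0

That final step matches the reported error: given input size (16x1x1), the calculated output size is (16x0x0).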

@Puff-Wen (Author)

Hi Hui-Po,

Thanks for your prompt reply. I have two questions:

  1. I added print statements to my forward function as follows:

    def forward(self, x):
        out = F.relu(self.conv1(x))
        print(out.size())
        out = F.max_pool2d(out, 2)
        print(out.size())
        out = F.relu(self.conv1_5(out))
        print(out.size())
        out = F.max_pool2d(out, 2, stride=1, padding=1)
        print(out.size())
        out = F.relu(self.conv2(out))
        print(out.size())
        out = F.max_pool2d(out, 2)
        print(out.size())
        out = out.view(out.size(0), -1)  # flatten
        print(out.size())
        out = F.relu(self.fc1(out))
        print(out.size())
        out = F.relu(self.fc2(out))
        print(out.size())
        out = self.fc3(out)
        print(out.size())
        return out

And the output is

(128L, 6L, 24L, 24L)
(128L, 6L, 12L, 12L)
(128L, 12L, 10L, 10L)
(128L, 12L, 11L, 11L)
(128L, 16L, 7L, 7L)
(128L, 16L, 3L, 3L)
(128L, 144L)
RuntimeError: size mismatch at /pytorch/torch/lib/THC/generic/THCTensorMathBlas.cu:243

  2. I found the fc1 in your code was updated from
    self.fc1 = nn.Linear(16 * 4 * 4, 120)
    to
    self.fc1 = nn.Linear(16 * 3 * 3, 120)

Could you please explain this modification? Thanks.

@JiaRenChang

Hi,
I believe the input dimension of your fc1 layer is wrong.
The output before the fc layers has shape 128 (batch size) × 144 (feature dimension), so the input dimension of your first fc layer must be 144.
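
To make that concrete, here is a minimal sketch of the flatten/Linear relationship implied by the printed trace above; the random tensor is just a stand-in for the conv output:

    import torch
    import torch.nn as nn

    # The trace shows a (128, 16, 3, 3) feature map before the fc layers.
    # Flattening gives 16 * 3 * 3 = 144 features per example,
    # so fc1 must be declared with 144 input features:
    fc1 = nn.Linear(16 * 3 * 3, 120)

    x = torch.randn(128, 16, 3, 3)    # stand-in for the conv output
    out = fc1(x.view(x.size(0), -1))  # flatten to (128, 144), then project
    print(out.size())                 # torch.Size([128, 120])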

@hui-po-wang

Hi,

The main problem comes from the new convolution layer you added: the output size of self.conv1_5 is not the same as its input size.
To keep the size unchanged, the padding should be floor(kernel_size / 2), that is, 2. Otherwise, your feature map before the fc layers becomes (128, 16, 3, 3) rather than (128, 16, 4, 4). As a result, you need to modify your self.fc1 accordingly.
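
Putting the two comments together, a sketch of the padded variant follows. It assumes the pooling configuration from the trace above and the classic LeNet fc sizes (120, 84, 10 classes); the class name is made up for illustration:

    import torch.nn as nn
    import torch.nn.functional as F

    class PaddedLeNet(nn.Module):  # hypothetical name
        def __init__(self):
            super(PaddedLeNet, self).__init__()
            self.conv1 = nn.Conv2d(1, 6, kernel_size=(5, 5), stride=1, padding=0)
            # padding = floor(5 / 2) = 2 keeps the spatial size unchanged
            self.conv1_5 = nn.Conv2d(6, 12, kernel_size=(5, 5), stride=1, padding=2)
            self.conv2 = nn.Conv2d(12, 16, kernel_size=(5, 5), stride=1, padding=0)
            self.fc1 = nn.Linear(16 * 4 * 4, 120)  # feature map is now 16 x 4 x 4
            self.fc2 = nn.Linear(120, 84)          # assumed LeNet sizes
            self.fc3 = nn.Linear(84, 10)           # assumed 10 MNIST classes

        def forward(self, x):
            out = F.max_pool2d(F.relu(self.conv1(x)), 2)     # 28 -> 24 -> 12
            out = F.relu(self.conv1_5(out))                  # 12 -> 12 (padding=2)
            out = F.max_pool2d(out, 2, stride=1, padding=1)  # 12 -> 13
            out = F.max_pool2d(F.relu(self.conv2(out)), 2)   # 13 -> 9 -> 4
            out = out.view(out.size(0), -1)                  # 16 * 4 * 4 = 256
            out = F.relu(self.fc1(out))
            out = F.relu(self.fc2(out))
            return self.fc3(out)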

@Puff-Wen (Author)

Hi all,

Thanks. It is resolved.
