MSR weight filler #1883
Conversation
… for use with ReLUs instead of tanh. Based on the paper: He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015
      : Filler<Dtype>(param) {}
      virtual void Fill(Blob<Dtype>* blob) {
        CHECK(blob->count());
        int fan_in = blob->count() / blob->num();
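For context, here is a minimal sketch of what the full Fill() might look like, assuming Caffe's existing caffe_rng_gaussian helper and the Filler/Blob API; the class name and details are illustrative rather than the PR's exact code:

    template <typename Dtype>
    class MSRFiller : public Filler<Dtype> {
     public:
      explicit MSRFiller(const FillerParameter& param)
          : Filler<Dtype>(param) {}
      virtual void Fill(Blob<Dtype>* blob) {
        CHECK(blob->count());
        // fan_in for a num x channels x height x width weight blob
        int fan_in = blob->count() / blob->num();
        // He et al. (2015), Eqn. (10): zero-mean Gaussian with std = sqrt(2 / fan_in)
        Dtype std = sqrt(Dtype(2) / fan_in);
        caffe_rng_gaussian<Dtype>(blob->count(), Dtype(0), std,
                                  blob->mutable_cpu_data());
      }
    };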
In my understanding, they use the number of output channels instead of input channels in order to avoid increasing/decreasing the variances of the gradients through the backward pass, don't they? In that case, it should be the following:

    fan_in = blob->count() / blob->channels()

I am sorry if my understanding is not correct.
Good point, the current version implements the forward-propagation case, which is equation (10) in the paper. If you used the fan_out instead,

    fan_out = blob->count() / blob->channels()

that would implement the backward-propagation case in equation (14). They say at the end of that section that "We note that it is sufficient to use either Eqn.(14) or Eqn.(10) alone" and that "For all models in this paper, both forms can make them converge."

I don't know which is better. The current Caffe Xavier implementation only considers the fan_in, so this PR follows that lead.
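Concretely, for a convolution weight blob stored as N_out x N_in x kH x kW, the two choices inside Fill() differ only in which dimension gets divided out (a sketch, not the PR's code):

    int fan_in  = blob->count() / blob->num();       // N_in * kH * kW,  Eqn. (10), forward case
    int fan_out = blob->count() / blob->channels();  // N_out * kH * kW, Eqn. (14), backward case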
@nickcarlevaris I haven't read those. It might be better to name it something like MSRForwardFiller? Anyway, this PR should be helpful since we no longer need to set the filler variances by hand. Thanks!
@nickcarlevaris you could add a parameter to the Filler to specify which formula to use, i.e. whether to normalize by the fan_in or the fan_out.
@sguada and @tnarihi, if you guys think this is a sensible way to do the settings, we could allow the same options to apply to the XavierFiller as well.
@nickcarlevaris @sguada Do you know why XavierFiller uses sqrt(3/fan_in)? http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
@weiliu89 Eq (16) is derived starting with the mean of the fan_in and fan_out values (12). The current XavierFiller in Caffe starts with (10) instead of (12). That is why you get sqrt(3/fan_in) rather than sqrt(6/(fan_in+fan_out)).
Thanks for the explanation. Now I understand that Eq. 15 is only there to show that their initial heuristic is not good. However, I still don't get Eq. 16 (it is a uniform distribution according to the paper, I think, not a normal distribution). According to Eq. 12, we should initialize the weights from a normal distribution with std sqrt(2/(fan_in+fan_out)). If we don't consider fan_out, then according to Eq. 10, we should initialize the weights from a normal distribution with std sqrt(1/fan_in). The Caffe implementation is a normal distribution with std sqrt(3/fan_in); you may be right that they derived it from Eq. 16 by ignoring fan_out, but that is not correct, right? At least not if we strictly follow the paper. Besides, the MSR paper and your implementation of it use std = sqrt(2/fan_in), which they claim is larger than the sqrt(1/fan_in) from the Xavier paper, yet it is smaller than Caffe's implementation? I am a little confused here.
Ah, I think I get it now. I always thought Caffe's Xavier filler used a Gaussian distribution... I just checked, and it uses a uniform distribution. My bad, sorry. I think it is also equivalent if it is initialized with a normal distribution with std sqrt(1/fan_in) for the Xavier method.
Having two separate parameters to control the sense of the normalization is a bit confusing; it's not obvious at a glance what a given combination of flags means. I'd rather see an enum, with the possible settings spelled out. Also not sure that "MSR" is the right name for this; discussion is welcome.
@longjon you're right, an enum would be more clear. Something like this?
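(The snippet that followed was not preserved in this thread. As a rough illustration only, the selection logic inside Fill() could look something like this, assuming a hypothetical variance_norm enum field with values FAN_IN / FAN_OUT / AVERAGE on FillerParameter:)

    // Hypothetical: FillerParameter gains an enum field variance_norm.
    int fan_in = blob->count() / blob->num();
    int fan_out = blob->count() / blob->channels();
    Dtype n = fan_in;  // FAN_IN: current behavior, Eqn. (10)
    if (this->filler_param_.variance_norm() ==
        FillerParameter_VarianceNorm_FAN_OUT) {
      n = fan_out;     // Eqn. (14)
    } else if (this->filler_param_.variance_norm() ==
        FillerParameter_VarianceNorm_AVERAGE) {
      n = (fan_in + fan_out) / Dtype(2);
    }
    Dtype std = sqrt(Dtype(2) / n);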
Would 'relu' be preferable for the filler name, since it was designed for use with ReLUs? While I'm updating the PR, should the same option apply to the xavier filler?
I have another question. According to the "xavier" paper, we need var[w] = 1/fan_in. The paper proposes using a uniform distribution with scale sqrt(3/fan_in), as implemented in Caffe. Is it better than a Gaussian distribution with std sqrt(1/fan_in)? Is there any comparison between the two initialization methods? The same question applies to the "msr" paper, where we know we need var[w] = 2/fan_in. The paper says it uses a Gaussian distribution with std sqrt(2/fan_in). But how does it compare to a uniform distribution with scale sqrt(6/fan_in)? Theoretically they are equal because they both satisfy var[w] = 2/fan_in. I am just curious... maybe they are practically the same as well.
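(For reference, the two parameterizations really are variance-matched. A quick check:

    Var[U(-a, a)] = a^2 / 3

so a uniform scale a = sqrt(3/fan_in) gives variance 1/fan_in, matching a Gaussian with std sqrt(1/fan_in), and a = sqrt(6/fan_in) gives variance 2/fan_in, matching a Gaussian with std sqrt(2/fan_in). Whether the shape of the distribution matters in practice is exactly the empirical question raised above; the variances, at least, are identical.)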
Re: naming, "MSRA" does give attribution, as "Xavier" does, but it is perhaps opaque when it comes to the actual equation and purpose. As long as the name is a clear reference, however, it should be ok -- if everyone calls it the "MSRA" filler there isn't so much of a problem. "ReLU" could be an accurate name, but ReLU layer vs. ReLU parameter filling vs. parametric ReLU starts to seem a bit overloaded.
That seems reasonable, as long as the current behavior remains the default.

p.s. It's fine if you push your change to this PR -- we can merge it to master instead -- or you can rebase to the new master and make a replacement PR. Either way is good.
This PR seems to copy over an existing bug from Xavier (#1575): the computations count() / num() and count() / channels() do not actually give the fan-in and fan-out for fully connected layers. This is because fully connected layers have parameter shape 1 x 1 x N_output x N_input, while convolution layers have shape N_output x N_input x kH x kW. In #1575, it was suggested that the fully connected weights should be changed to N_output x N_input x 1 x 1 so that the same computation works for both layer types.
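To make the shape issue concrete, a tiny standalone example with made-up layer sizes (plain arithmetic, not Caffe code):

    #include <cstdio>

    int main() {
      // Convolution weights: N_out x N_in x kH x kW, e.g. 64 x 3 x 7 x 7.
      int conv_count = 64 * 3 * 7 * 7;
      std::printf("conv count/num      = %d (true fan_in  = 3*7*7  = 147)\n",
                  conv_count / 64);
      std::printf("conv count/channels = %d (true fan_out = 64*7*7 = 3136)\n",
                  conv_count / 3);
      // InnerProduct weights are stored 1 x 1 x N_out x N_in, e.g. 1 x 1 x 1000 x 4096.
      int ip_count = 1 * 1 * 1000 * 4096;
      std::printf("ip   count/num      = %d (true fan_in should be 4096)\n",
                  ip_count / 1);
      return 0;
    }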
Replaced by #1946, which makes updates based on this thread and is rebased against master.
@seanbell Thanks for raising the conv / innerproduct fan-in issue again. We've talked about switching the parameter order before, so it should be decided for 1.0. There is also a performance consideration for the GEMMs based on the parameter shape.
This PR implements the weight initialization strategy from http://arxiv.org/abs/1502.01852. It is very similar to the Xavier filler, except that it is designed for ReLU instead of tanh non-linearities.
This would complement #1880, which implements the PReLU layer also proposed in that paper.
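As a usage sketch (assuming the filler ends up registered under the type string "msr", which is still under discussion above), filling a convolution-shaped weight blob would look roughly like this:

    #include <boost/shared_ptr.hpp>
    #include "caffe/blob.hpp"
    #include "caffe/filler.hpp"

    void InitConvWeights() {
      caffe::FillerParameter param;
      param.set_type("msr");  // assumed registration name for this PR's filler
      caffe::Blob<float> weights(64, 3, 7, 7);  // N_out x N_in x kH x kW
      boost::shared_ptr<caffe::Filler<float> > filler(
          caffe::GetFiller<float>(param));
      filler->Fill(&weights);  // zero-mean Gaussian, std = sqrt(2 / (3*7*7))
    }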