Keras model/layer regularization property deprecated #4777

Closed
Shuailong opened this issue Dec 20, 2016 · 7 comments

Shuailong commented Dec 20, 2016

Hi,

After I updated to the bleeding-edge version of Keras, I got the following warning:

/home/user/.local/lib/python2.7/site-packages/keras/engine/topology.py:368: UserWarning: The regularizers property of layers/models is deprecated. Regularization losses are now managed via the losses layer/model property.

What does this mean? The model code is here:

    # Keras 1.x-style imports; FeatureEmbeddingLayer, vocab_size, EMBEDDING_DIM,
    # embedding_matrix, feature_sizes and tag_size are defined elsewhere.
    from keras.models import Sequential
    from keras.layers import Embedding, Merge, Bidirectional, LSTM, TimeDistributed, Dense
    from keras.regularizers import l2

    model_word = Sequential()
    model_word.add(Embedding(vocab_size + 1,
                             EMBEDDING_DIM,
                             input_length=None,
                             weights=[embedding_matrix],
                             trainable=True,
                             W_regularizer=l2(1e-6),
                             dropout=0.5))

    feature_layers = [FeatureEmbeddingLayer(size) for size in feature_sizes]

    layers = [model_word] + feature_layers

    model = Sequential()
    model.add(Merge(layers, mode='concat', concat_axis=2))

    model.add(Bidirectional(LSTM(128, return_sequences=True,
                                 W_regularizer=l2(1e-6), U_regularizer=l2(1e-6), b_regularizer=l2(1e-6))))
    model.add(Bidirectional(LSTM(128, return_sequences=True,
                                 W_regularizer=l2(1e-6), U_regularizer=l2(1e-6), b_regularizer=l2(1e-6))))
    model.add(TimeDistributed(Dense(64, activation='relu',
                                    W_regularizer=l2(1e-6), b_regularizer=l2(1e-6))))
    model.add(TimeDistributed(Dense(tag_size, activation='softmax',
                                    W_regularizer=l2(1e-6), b_regularizer=l2(1e-6))))

I Googled but found no clues. Any ideas?

Thank you!
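
For anyone hitting the same warning: after the #4703 refactor, per-layer penalties are no longer exposed as callables on the regularizers property. They are collected as ready-made loss tensors on the losses property of each layer/model, and Keras folds them into the objective automatically during compile/fit. A minimal sketch of the new access pattern, assuming the post-refactor Keras 1.x API:

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.regularizers import l2

    m = Sequential()
    m.add(Dense(8, input_dim=4, W_regularizer=l2(1e-6)))

    # Post-refactor: a list of plain loss tensors, one per penalty.
    print(m.losses)

Manual bookkeeping of these tensors only matters when you drive the graph yourself in raw TensorFlow, as in the comment further below.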

lvapeab (Contributor) commented Dec 23, 2016

I think lines 26-28 of wrappers.py weren't removed when the regularizers were refactored (#4703); they should be removed.

ghost commented Dec 31, 2016

@lvapeab what do you all think is the right way to resolve this issue? @farizrahman4u @fchollet

volvador commented Feb 21, 2017

When using a Keras model in another application (distributed TensorFlow, as in https://gist.github.com/fchollet/2c9b029f505d94e6b8cd7f8a5e244a4e), I used to deal with regularizers like this:

model = Sequential()
# ... build the Keras model ...
loss = tf.reduce_mean(keras.objectives.mean_squared_error(targets, preds))

# Apply regularizers, if any. Under the old API each regularizer is a
# callable that wraps the loss tensor and returns loss + penalty.
if model.regularizers:
    total_loss = loss * 1.  # copy tensor
    for regularizer in model.regularizers:
        total_loss = regularizer(total_loss)
else:
    total_loss = loss

If I understand correctly, I should now do:

loss = tf.reduce_mean(keras.objectives.mean_squared_error(targets, preds))
total_loss = loss * 1.  # copy tensor
for reg_loss in model.losses:
    tf.assign_add(total_loss, reg_loss)

It crashes when I do that. Any help, please?

stale bot added the stale label May 23, 2017

stale bot commented May 23, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but feel free to re-open it if needed.

stale bot closed this as completed Jun 22, 2017

vkmenon commented Oct 11, 2017

@volvador did you ever find a solution to this problem? I'm encountering it now.

volvador replied to @vkmenon:

    total_loss = loss * 1.  # copy tensor
    for reg_loss in model.losses:
        total_loss = total_loss + reg_loss
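
The earlier tf.assign_add version crashes because tf.assign_add expects a mutable tf.Variable as its first argument, while total_loss is an ordinary computed tensor; plain tensor addition works. An equivalent one-liner, assuming model.losses may be empty:

    import tensorflow as tf

    # Sum the base loss and all collected regularization penalties;
    # tf.add_n fails on an empty list, hence the guard.
    total_loss = loss + tf.add_n(model.losses) if model.losses else loss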

vkmenon commented Oct 11, 2017

Thanks!
