Memory leak #33
Comments
Thanks for the information. I will investigate this issue. It would also be appreciated if you could provide more details, e.g., the error message, logs, and parameters.
Is there any solution or suggestion? :)
It seems that the following code adds nodes to the computation graph every epoch. A possible solution is to create the loss node in the graph during DCRNNModel initialization instead, as sketched below.
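A minimal sketch of the difference, assuming TensorFlow 1.x (which this repository targets); the helper names here are illustrative, not the repository's actual code:

```python
import tensorflow as tf

def masked_mae(preds, labels):
    # Illustrative stand-in for the repository's loss function.
    return tf.reduce_mean(tf.abs(preds - labels))

# Anti-pattern: calling this once per epoch adds a fresh set of loss ops
# to the default graph on every call, so memory grows until OOM.
def run_epoch_leaky(sess, model, feed):
    loss = masked_mae(model.outputs, model.labels)  # new graph nodes!
    return sess.run(loss, feed_dict=feed)

# Fix: build the loss op once at graph-construction time (e.g., in the
# model's __init__) and only fetch the existing op afterwards.
def run_epoch_fixed(sess, model, feed):
    return sess.run(model.loss, feed_dict=feed)  # graph stays fixed
```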
Any further updates on when this fix will be added?
It is better to define the loss node in the graph when the DCRNNModel class is initialized; model.loss and model.mae can then be used directly inside run_epoch_generator. For a quick fix, I initialized the training and testing losses separately during the initialization of DCRNNSupervisor and reused them inside run_epoch_generator, roughly as in the sketch below.
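A sketch of that quick fix, assuming TF1-style models that expose inputs, labels, and outputs tensors (the attribute names and the plain-MAE loss here are assumptions, not the repository's exact API):

```python
import tensorflow as tf

class DCRNNSupervisor:
    def __init__(self, train_model, test_model, output_dim):
        # Build the training and testing loss nodes exactly once, here,
        # instead of on every call to run_epoch_generator.
        self._train_loss = tf.reduce_mean(tf.abs(
            train_model.outputs - train_model.labels[..., :output_dim]))
        self._test_loss = tf.reduce_mean(tf.abs(
            test_model.outputs - test_model.labels[..., :output_dim]))

    def run_epoch_generator(self, sess, model, data_generator, training=False):
        loss_op = self._train_loss if training else self._test_loss
        losses = []
        for x, y in data_generator:
            # Only runs pre-built ops; no new nodes enter the graph.
            losses.append(sess.run(
                loss_op, feed_dict={model.inputs: x, model.labels: y}))
        return sum(losses) / len(losses)
```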
In the paper, how did you plot the learned localized filters centered at different nodes (Figure 7)? Is that code available?

This part of the code has a memory leak; I am getting an OOM error after several epochs:

```python
def run_epoch_generator(self, sess, model, data_generator, return_output=False, training=False, writer=None):
    output_dim = self._model_kwargs.get('output_dim')
    preds = model.outputs
    labels = model.labels[..., :output_dim]
    # This call builds new loss ops on every invocation.
    loss = self._loss_fn(preds=preds, labels=labels)
```
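One way to catch this class of bug early, assuming TensorFlow 1.x: finalize the graph after construction, so any later attempt to add nodes fails loudly instead of leaking memory until an OOM several epochs in. A self-contained sketch:

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    preds = tf.placeholder(tf.float32, [None, 1])
    labels = tf.placeholder(tf.float32, [None, 1])
    loss = tf.reduce_mean(tf.abs(preds - labels))  # built once
graph.finalize()  # the graph is now read-only

with graph.as_default():
    try:
        # Simulates re-creating the loss op inside run_epoch_generator.
        tf.reduce_mean(tf.abs(preds - labels))
    except RuntimeError as err:
        print('caught graph modification:', err)
```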