Hello,
I have noticed that after training the DGP, model.read_values() returns the same values as before the training. The model does appear to have been trained correctly: the value returned by model.compute_log_likelihood() changes between before and after the optimization, yet the printed parameters stay the same. So I think model.read_values() is probably not the correct function for reading the values after the optimization?
Thank you in advance for clarifying this question!
Ali
Are you referring to the variational parameters q_mu and q_sqrt, or to the hyperparameters? One problem I've had is that the variational parameters' numpy arrays don't get updated when using natural gradients (since their trainable flag is False). The other parameters should be working, though. Could you provide an example?
In the meantime, though, running the TensorFlow variable should always work. E.g.
sess = model.enquire_session()  # get the current session
print(sess.run(model.p.constrained_tensor))  # prints the value of parameter p
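If I remember the API correctly, you can also pass the session to read_values, or anchor the model first, so that the cached numpy values are refreshed from the TensorFlow variables (this is from memory of the GPflow 1.x API, so worth double-checking):

sess = model.enquire_session()
# assumed GPflow 1.x behaviour: reading through the given session returns the trained values
print(model.read_values(session=sess))
# alternatively, copy the trained tensor values back into the numpy-side cache;
# after anchoring, read_values() / as_pandas_table() should reflect the trained state
model.anchor(sess)
print(model.read_values())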
To read the values of the parameters I used, as in GPflow, model.read_values() or model.as_pandas_table(). However, for ALL the trainable parameters the printed values do not change before and after the training. But by running the TensorFlow variable as you suggested, print(sess.run(model.p.constrained_tensor)), the true values of the parameters after training are printed.
I've never actually used .read_values before; I've always just done print(model). Do you get the same issue for a vanilla gpflow model, e.g. SVGP? Also, can I check which version of gpflow you're using?
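For reference, this is roughly the kind of minimal check I have in mind (a sketch against the GPflow 1.x API; the data, kernel and optimizer settings here are just placeholders):

import numpy as np
import gpflow

# toy data, purely illustrative
X = np.random.rand(100, 1)
Y = np.sin(10 * X) + 0.1 * np.random.randn(100, 1)
Z = X[::10].copy()  # inducing inputs

model = gpflow.models.SVGP(X, Y, kern=gpflow.kernels.RBF(1),
                           likelihood=gpflow.likelihoods.Gaussian(), Z=Z)

before = model.read_values()
gpflow.train.ScipyOptimizer().minimize(model, maxiter=100)
after = model.read_values()

# if read_values() is picking up the trained state, the trainable parameters should differ
for name in before:
    print(name, np.allclose(before[name], after[name]))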