Renormalize tensor before projection svd #154
Conversation
Hi @Confusio, sorry for not following up on this sooner; there were a lot of things going on at the same time, so it was hard to keep track. Are the issues you were experiencing now solved with the recent additions (in particular, preserving tensor norms during optimization and switching to the …)?
I didn't test any further, since I always renormalize the tensor beforehand. For an extreme case, see the following test:

```julia
using MKL, LinearAlgebra
using TensorKit, OptimKit
using PEPSKit, KrylovKit, MPSKit
using ChainRulesCore, Zygote
using JLD2, LoggingExtras, Accessors
using Random

Random.seed!(91283219347)

Pspace = ℂ^2
Nspace = ℂ^2
Espace = ℂ^2
Sspace = ℂ^2
Wspace = ℂ^2
χenv = 16

ctm_alg = SimultaneousCTMRG(;)

ψ = randn(ComplexF64, Pspace, Pspace ⊗ Pspace ⊗ Pspace' ⊗ Pspace');
peps = InfinitePEPS(fill(ψ, (1, 1)));
env, _ = leading_boundary(CTMRGEnv(peps, ℂ^16), peps)

operator = heisenberg_XYZ(InfiniteSquare(); Jx=-1, Jy=1, Jz=-1)

function en(ψ, env)
    env, = PEPSKit.ctmrg_iteration(InfiniteSquareNetwork(ψ), env, ctm_alg)
    return cost_function(ψ, env, operator)
end

gd = gradient(x -> en(x, env), peps)[1]
prod_norm1 = norm(gd) * norm(peps)

peps = peps / norm(peps)
gd = gradient(x -> en(x, env), peps)[1]
prod_norm2 = norm(gd) * norm(peps)

peps = peps * 1.0e-6;
gd = gradient(x -> en(x, env), peps)[1]
prod_norm3 = norm(gd) * norm(peps)
```

with the result

```
(prod_norm1, prod_norm2, prod_norm3) = (3.135719049030705, 3.135719048758805, 6.854887470954637)
```

Since the energy is independent of the norm of …
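As a hedged aside on why `norm(gd) * norm(peps)` is the right quantity to compare across rescalings: the energy is homogeneous of degree zero in the PEPS tensor, so under `peps -> λ * peps` the gradient scales as `1/λ`, and the product of the two norms should be invariant. A minimal self-contained sketch with a toy degree-zero function (`f` and `gradf` here are hypothetical stand-ins, not the PEPS cost function):

```julia
using LinearAlgebra

# f(x) = x[1]^2 / sum(abs2, x) satisfies f(λx) = f(x) (homogeneous of degree 0),
# so its gradient scales as 1/λ and norm(∇f(x)) * norm(x) is scale-invariant.
f(x) = x[1]^2 / sum(abs2, x)

function gradf(x)
    # analytic gradient of f via the quotient rule
    S = sum(abs2, x)
    g = (-2 * x[1]^2 / S^2) .* x
    g[1] += 2 * x[1] / S
    return g
end

x = [0.3, -1.2, 0.7, 2.0]
p1 = norm(gradf(x)) * norm(x)
p2 = norm(gradf(1.0e-6 .* x)) * norm(1.0e-6 .* x)
# p1 and p2 agree up to floating-point rounding
```

If the analogous products for the PEPS gradient differ by more than rounding error, as `prod_norm3` does above, something in the reverse rule has lost precision at small norms.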
That's actually a really nice example; it should make it a lot easier to test things in the future. I think from this we would want to conclude that we always want to normalize (in tensor norm) the PEPS. We now already do this in the optimization, but maybe it's even fair to also do it in the constructor?
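A minimal sketch of what constructor-level normalization could look like, assuming a hypothetical wrapper type holding a matrix of site tensors (this is not PEPSKit's actual `InfinitePEPS` implementation):

```julia
using LinearAlgebra

# Hypothetical type: each site tensor is rescaled to unit Frobenius norm at
# construction time, so downstream algorithms always see O(1)-scaled tensors.
struct UnitNormPEPS{T<:AbstractArray}
    tensors::Matrix{T}
    function UnitNormPEPS(tensors::Matrix{T}) where {T<:AbstractArray}
        return new{T}(map(t -> t / norm(t), tensors))
    end
end

A = fill(randn(2, 2, 2, 2, 2), (1, 1))
p = UnitNormPEPS(A)
# every stored tensor now has unit norm
```

Since the energy is norm-independent, this rescaling changes nothing physically and only improves conditioning.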
This replaces the previous PR about tensor normalization before the projection SVD. The failed tests can pass if the tolerance `svd_rrule_tol` is lowered.

- Normalize the `halfinf` tensor before performing the SVD in the `compute_projector` function for the `HalfInfiniteProjector` algorithm. (`src/algorithms/ctmrg/projectors.jl`, L136-R140 and L145-R152)
- Normalize the `fullinf` tensor before performing the SVD in the `compute_projector` function for the `FullInfiniteProjector` algorithm. (`src/algorithms/ctmrg/projectors.jl`, L158-R168 and L167-R180)
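The changes above amount to dividing out the tensor norm before the SVD, so that `svd_rrule_tol` acts on an O(1)-scaled object, and reabsorbing the norm afterwards. A minimal sketch of that pattern on a plain matrix (a stand-in for the half-infinite environment, not PEPSKit's actual projector code):

```julia
using LinearAlgebra

# Badly scaled stand-in for a half-infinite environment tensor.
M = 1.0e-8 * randn(6, 6)

n = norm(M)        # divide out the Frobenius norm before the SVD
F = svd(M / n)

# The projectors are built from U, S, V of the normalized matrix; the norm can
# be reabsorbed into the singular values when reconstructing or truncating.
M_reconstructed = F.U * Diagonal(n .* F.S) * F.Vt
```

Because the SVD reverse rule compares singular-value gaps against an absolute tolerance, working with the normalized matrix keeps that comparison meaningful regardless of the overall scale of the environment.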