
Renormalize tensor before projection svd #154

Closed
wants to merge 2 commits

Conversation

Confusio

This replaces the previous PR about renormalizing the tensor before the projection SVD. The failing tests can pass if the svd_rrule_tol tolerance is lowered.
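As an illustration of the general idea only (this is not the actual diff of this PR, and tsvd_renormalized is a made-up helper name), rescaling to unit norm before tsvd and restoring the scale afterwards could look roughly like this:

using TensorKit

function tsvd_renormalized(t::AbstractTensorMap; kwargs...)
    s = norm(t)                          # overall scale of the tensor
    U, S, V, ϵ = tsvd(t / s; kwargs...)  # SVD of the unit-norm tensor
    return U, s * S, V, ϵ                # restore the original scale of the singular values
end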


codecov bot commented Mar 11, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Files with missing lines Coverage Δ
src/algorithms/ctmrg/projectors.jl 87.03% <100.00%> (+0.49%) ⬆️

@Confusio closed this Mar 18, 2025
@leburgel
Member

Hi @Confusio, sorry for not following up on this sooner; there were a lot of things going on at the same time, so it was hard to keep track. Are the issues you were experiencing now solved by the recent additions (in particular, preserving tensor norms during optimization and switching to the tsvd! rrule for the SVD pullback by default)? If you're still experiencing problems due to small singular values, definitely let us know and we can follow up.

@Confusio
Author

I didn't test any further, as I always renormalize the tensor before tsvd in my implementation. The current default tolerance in svd_pullback of TensorKit is not suitable for tensors with small norms. I notice that the PEPS is now renormalized before entering the optimization and that its norm is kept fixed in the current implementation. I think it is now OK if we work with the optimized PEPS.

For an extreme case, see the following test:

using MKL, LinearAlgebra
using TensorKit, OptimKit
using PEPSKit, KrylovKit, MPSKit
using ChainRulesCore, Zygote
using JLD2, LoggingExtras, Accessors
using Random
Random.seed!(91283219347)
Pspace = ℂ^2
Nspace = ℂ^2
Espace = ℂ^2
Sspace = ℂ^2
Wspace = ℂ^2
χenv = 16
ctm_alg = SimultaneousCTMRG(;)
# single-site PEPS tensor: physical space as codomain, four virtual spaces as domain
ψ = randn(ComplexF64, Pspace, Pspace ⊗ Pspace ⊗ Pspace' ⊗ Pspace');
peps = InfinitePEPS(fill(ψ, (1, 1)));
env, _ = leading_boundary(CTMRGEnv(peps, ℂ^16), peps)

operator = heisenberg_XYZ(InfiniteSquare(); Jx=-1, Jy=1, Jz=-1)

function en(ψ, env)
    env, = PEPSKit.ctmrg_iteration(InfiniteSquareNetwork(ψ), env, ctm_alg)
    return cost_function(ψ, env, operator)
end

gd = gradient(x -> en(x, env), peps)[1]
prod_norm1 = norm(gd) * norm(peps)

peps = peps / norm(peps)
gd = gradient(x -> en(x, env), peps)[1]
prod_norm2 = norm(gd) * norm(peps)

peps = peps * 1.0e-6;
gd = gradient(x -> en(x, env), peps)[1]
prod_norm3 = norm(gd) * norm(peps)

with the result:

(prod_norm1, prod_norm2, prod_norm3) = (3.135719049030705, 3.135719048758805, 6.854887470954637)

Since the energy is independent of the norm of peps, norm(gd) * norm(peps) should be constant. The above test fails when the norm of peps is very small, e.g. 1.0e-6. So I think it may be good to renormalize the tensor before tsvd to avoid this worst case.
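For reference, the invariance being tested follows directly from the chain rule (my notation, not from the thread): if the cost function satisfies $E(\lambda\psi) = E(\psi)$ for every real $\lambda \neq 0$, then

$$
\nabla E(\lambda\psi) = \frac{1}{\lambda}\,\nabla E(\psi)
\quad\Longrightarrow\quad
\lVert \nabla E(\lambda\psi) \rVert \, \lVert \lambda\psi \rVert
= \lVert \nabla E(\psi) \rVert \, \lVert \psi \rVert ,
$$

so the product norm(gd) * norm(peps) should be independent of the rescaling, which is exactly what prod_norm3 violates.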

@lkdvos
Member

lkdvos commented Mar 19, 2025

That's actually a really nice example; it should make it a lot easier to test things in the future.

I think from this we would want to conclude that we always want to "normalize" (tensor-norm) the PEPS, which we now already do in the optimization, but maybe it's even fair to also do this in the constructor?
For the wrong results, I think this will also be handled once we actually move over to the MatrixAlgebraKit implementations, where the tolerances are (hopefully) properly scaled.
If at all possible, I would like to avoid having to deal with this issue on a case-by-case basis, so maybe I would be okay with simply always normalizing every InfinitePEPS, similar to what MPSKit does as well: the norm of an infinite state is anyway a bit hard to define properly.
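Purely as an illustration of that suggestion (build_normalized_peps is a made-up helper name, not the actual PEPSKit constructor change), normalizing every site tensor when building the state could look like:

using TensorKit, PEPSKit

# divide each site tensor by its norm before constructing the infinite state
function build_normalized_peps(A::AbstractTensorMap, unitcell::Tuple{Int,Int}=(1, 1))
    return InfinitePEPS(fill(A / norm(A), unitcell))
end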
