
Using CUDA in TDVP algorithm #13

Open
YotamKa opened this issue May 7, 2024 · 2 comments
YotamKa commented May 7, 2024

Hi,

I tried to use a GPU backend via CUDA.jl with the TDVP method; however, I get an error.
Could you please help me understand why that happens? Maybe there is a quick fix?
In my experience, ITensors.jl and ITensorTDVP.jl work great with CUDA.

Thanks,
Yotam

Here is a minimal code example:


using ITensors
using TenNetLib
using CUDA

let
    N = 4
    sites = siteinds("S=1/2", N)
    os = OpSum()

    for j = 1:N-1
        os += 1, "Sz", j, "Sz", j+1
        os += 0.5, "S+", j, "S-", j+1
        os += 0.5, "S-", j, "S+", j+1
    end

    H = MPO(os, sites)
    states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
    psi0 = MPS(sites, states)

    tau = -0.01im
    engine = TDVPEngine(cu(psi0), cu(H))
    for ii = 1:100
        # `nsite = "dynamic"` for dynamic selection between
        # single- and two-site variants at different bonds
        tdvpsweep!(engine, tau;
                   nsite = "dynamic",
                   maxdim = 200,
                   cutoff = 1E-12,
                   extendat = 5)

        psi = getpsi(engine)
        # DO STUFF
    end
end

and here is the error message that I get:

ERROR: Setting the type parameter of the type `DenseVector` at position `NDTensors.SetParameters.Position{1}()` to `Float64` is not currently defined. Either that type parameter position doesn't exist in the type, or `set_parameter` has not been overloaded for this type.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] set_parameter(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
    @ NDTensors.SetParameters ~/.julia/packages/NDTensors/pey4a/src/lib/SetParameters/src/interface.jl:10
  [3] set_parameters(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
    @ NDTensors.SetParameters ~/.julia/packages/NDTensors/pey4a/src/lib/SetParameters/src/set_parameters.jl:20
  [4] set_eltype(arraytype::Type{DenseVector}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/abstractarray/set_types.jl:8
  [5] similartype(::Type{SimpleTraits.Not{NDTensors.Unwrap.IsWrappedArray{…}}}, arraytype::Type{DenseVector}, eltype::Type)
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/abstractarray/similar.jl:109
  [6] similartype
    @ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
  [7] promote_rule(::Type{NDTensors.Dense{Float64, Vector{…}}}, ::Type{NDTensors.Dense{Float32, CuArray{…}}})
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/dense/dense.jl:127
  [8] promote_type
    @ ./promotion.jl:313 [inlined]
  [9] promote_rule(::Type{NDTensors.DenseTensor{…}}, ::Type{NDTensors.DenseTensor{…}})
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/tensor/tensor.jl:262
 [10] promote_type
    @ ./promotion.jl:313 [inlined]
 [11] permutedims!!(R::NDTensors.DenseTensor{…}, T::NDTensors.DenseTensor{…}, perm::Tuple{…}, f::Function)
    @ NDTensors ~/.julia/packages/NDTensors/pey4a/src/dense/densetensor.jl:198
 [12] _map!!(f::Function, R::NDTensors.DenseTensor{…}, T1::NDTensors.DenseTensor{…}, T2::NDTensors.DenseTensor{…})
    @ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1960
 [13] map!(f::Function, R::ITensor, T1::ITensor, T2::ITensor)
    @ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1965
 [14] copyto!
    @ ~/.julia/packages/ITensors/Gf9aD/src/broadcast.jl:330 [inlined]
 [15] materialize!
    @ ./broadcast.jl:914 [inlined]
 [16] materialize!
    @ ./broadcast.jl:911 [inlined]
 [17] -(A::ITensor, B::ITensor)
    @ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1882
 [18] _krylov_addbasis!(psi::MPS, phis::Vector{MPS}, extension_cutoff::Float64)
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:483
 [19] krylov_extend!(psi::MPS, H::MPO; kwargs::@Kwargs{…})
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:405
 [20] krylov_extend!
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:386 [inlined]
 [21] macro expansion
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:436 [inlined]
 [22] macro expansion
    @ ./timing.jl:395 [inlined]
 [23] krylov_extend!(sysenv::StateEnvs{…}; kwargs::@Kwargs{…})
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:434
 [24] krylov_extend!
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:424 [inlined]
 [25] dynamic_fullsweep!(sysenv::StateEnvs{…}, solver::Function, swdata::SweepData; eigthreshold::Float64, extendat::Int64, kwargs::@Kwargs{…})
    @ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:261
 [26] dynamic_fullsweep!
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:250 [inlined]
 [27] #tdvpsweep!#175
    @ ~/.julia/packages/TenNetLib/tHJWh/src/mps/tdvp.jl:257 [inlined]
 [28] top-level scope
    @ ~/Documents/repos/gatesimulator/TDVP/TenNetLib_cuda_example.jl:26
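Reading the stack trace: frame [7] promotes a CPU `NDTensors.Dense{Float64, Vector{…}}` storage against a GPU `NDTensors.Dense{Float32, CuArray{…}}` storage, and frames [17]–[18] show this happens during an `ITensor` subtraction inside `_krylov_addbasis!`. A minimal sketch of the suspected mismatch (untested; it assumes `cu` converts element types to `Float32` while tensors created during the Krylov extension stay on the CPU in `Float64`):

```julia
using ITensors
using CUDA

# Untested sketch of the storage mismatch suggested by the trace:
# `cu` moves a tensor to the GPU (Float32 storage, per frame [7]),
# while a freshly built tensor lives on the CPU in Float64.
i = Index(2, "i")
A = cu(randomITensor(i))   # GPU tensor, Float32 CuArray storage
B = randomITensor(i)       # CPU tensor, Float64 Vector storage
C = A - B                  # mixed subtraction, as in frame [17] of the trace
```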
@titaschanda
Owner

Hi,

I guess the problem comes from the older version of ITensors.jl that TenNetLib.jl is using. In recent releases of ITensors.jl, the developers have fixed the GPU-related problems. But since those releases made some breaking changes, TenNetLib.jl still uses an older version of ITensors.jl. Hopefully I will soon update TenNetLib.jl so that it becomes compatible with recent releases of ITensors.jl.

Unfortunately, I do not have access to a GPU machine right now, so I cannot really test GPU code from my end. However, this situation is expected to be resolved soon.
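Until that update lands, one possible interim workaround, sketched here as an untested assumption, is to skip the Krylov subspace extension entirely, since the crash originates in `krylov_extend!` (triggered via `extendat` together with `nsite = "dynamic"`). Whether a fixed `nsite` without `extendat` is accepted by `tdvpsweep!` is an assumption to verify against the TenNetLib.jl documentation:

```julia
# Untested workaround sketch: run plain two-site TDVP sweeps and omit
# `extendat`, so that `krylov_extend!` (where the crash occurs) is
# never called. `nsite = 2` is an assumed option; check the
# TenNetLib.jl docs before relying on it.
tdvpsweep!(engine, tau;
           nsite = 2,
           maxdim = 200,
           cutoff = 1E-12)
```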

@YotamKa
Author

YotamKa commented May 15, 2024

Dear Titas,

Thank you for your quick response.
Do you have an estimate of how long it would take you to fix this problem? And how long might it take someone else?

I can try to do it myself, but some tips would be useful.

Also, I understand that the ITensors developers are now working on a new version; do you think it is better to wait until they finish it?

Best,
Yotam
