Hi,
I tried to use a GPU backend via CUDA.jl for the TDVP method, but I get an error.
Could you please help me understand why that is? Maybe there is a quick fix?
ITensors and ITensorTDVP work great with CUDA, in my experience.
Thanks,
Yotam
Here is a minimal code example:
using ITensors
using TenNetLib
using CUDA

let
    N = 4
    sites = siteinds("S=1/2", N)
    os = OpSum()
    for j = 1:(N - 1)
        os += 1, "Sz", j, "Sz", j + 1
        os += 0.5, "S+", j, "S-", j + 1
        os += 0.5, "S-", j, "S+", j + 1
    end
    H = MPO(os, sites)
    states = [isodd(n) ? "Up" : "Dn" for n in 1:N]
    psi0 = MPS(sites, states)
    tau = -0.01im
    engine = TDVPEngine(cu(psi0), cu(H))
    for ii = 1:100
        # `nsite = "dynamic"` for dynamical selection between
        # single- and two-site variants at different bonds
        tdvpsweep!(engine, tau;
                   nsite = "dynamic",
                   maxdim = 200,
                   cutoff = 1E-12,
                   extendat = 5)
        psi = getpsi(engine)
        # DO STUFF
    end
end
and here is the error message that I get:
ERROR: Setting the type parameter of the type `DenseVector` at position `NDTensors.SetParameters.Position{1}()` to `Float64` is not currently defined. Either that type parameter position doesn't exist in the type, or `set_parameter` has not been overloaded for this type.
Stacktrace:
[1] error(s::String)
@ Base ./error.jl:35
[2] set_parameter(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
@ NDTensors.SetParameters ~/.julia/packages/NDTensors/pey4a/src/lib/SetParameters/src/interface.jl:10
[3] set_parameters(type::Type, position::NDTensors.SetParameters.Position{1}, parameter::Type)
@ NDTensors.SetParameters ~/.julia/packages/NDTensors/pey4a/src/lib/SetParameters/src/set_parameters.jl:20
[4] set_eltype(arraytype::Type{DenseVector}, eltype::Type)
@ NDTensors ~/.julia/packages/NDTensors/pey4a/src/abstractarray/set_types.jl:8
[5] similartype(::Type{SimpleTraits.Not{NDTensors.Unwrap.IsWrappedArray{…}}}, arraytype::Type{DenseVector}, eltype::Type)
@ NDTensors ~/.julia/packages/NDTensors/pey4a/src/abstractarray/similar.jl:109
[6] similartype
@ ~/.julia/packages/SimpleTraits/l1ZsK/src/SimpleTraits.jl:331 [inlined]
[7] promote_rule(::Type{NDTensors.Dense{Float64, Vector{…}}}, ::Type{NDTensors.Dense{Float32, CuArray{…}}})
@ NDTensors ~/.julia/packages/NDTensors/pey4a/src/dense/dense.jl:127
[8] promote_type
@ ./promotion.jl:313 [inlined]
[9] promote_rule(::Type{NDTensors.DenseTensor{…}}, ::Type{NDTensors.DenseTensor{…}})
@ NDTensors ~/.julia/packages/NDTensors/pey4a/src/tensor/tensor.jl:262
[10] promote_type
@ ./promotion.jl:313 [inlined]
[11] permutedims!!(R::NDTensors.DenseTensor{…}, T::NDTensors.DenseTensor{…}, perm::Tuple{…}, f::Function)
@ NDTensors ~/.julia/packages/NDTensors/pey4a/src/dense/densetensor.jl:198
[12] _map!!(f::Function, R::NDTensors.DenseTensor{…}, T1::NDTensors.DenseTensor{…}, T2::NDTensors.DenseTensor{…})
@ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1960
[13] map!(f::Function, R::ITensor, T1::ITensor, T2::ITensor)
@ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1965
[14] copyto!
@ ~/.julia/packages/ITensors/Gf9aD/src/broadcast.jl:330 [inlined]
[15] materialize!
@ ./broadcast.jl:914 [inlined]
[16] materialize!
@ ./broadcast.jl:911 [inlined]
[17] -(A::ITensor, B::ITensor)
@ ITensors ~/.julia/packages/ITensors/Gf9aD/src/itensor.jl:1882
[18] _krylov_addbasis!(psi::MPS, phis::Vector{MPS}, extension_cutoff::Float64)
@ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:483
[19] krylov_extend!(psi::MPS, H::MPO; kwargs::@Kwargs{…})
@ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:405
[20] krylov_extend!
@ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:386 [inlined]
[21] macro expansion
@ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:436 [inlined]
[22] macro expansion
@ ./timing.jl:395 [inlined]
[23] krylov_extend!(sysenv::StateEnvs{…}; kwargs::@Kwargs{…})
@ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:434
[24] krylov_extend!
@ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:424 [inlined]
[25] dynamic_fullsweep!(sysenv::StateEnvs{…}, solver::Function, swdata::SweepData; eigthreshold::Float64, extendat::Int64, kwargs::@Kwargs{…})
@ TenNetLib ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:261
[26] dynamic_fullsweep!
@ ~/.julia/packages/TenNetLib/tHJWh/src/mps/sweep.jl:250 [inlined]
[27] #tdvpsweep!#175
@ ~/.julia/packages/TenNetLib/tHJWh/src/mps/tdvp.jl:257 [inlined]
[28] top-level scope
@ ~/Documents/repos/gatesimulator/TDVP/TenNetLib_cuda_example.jl:26
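The innermost frames (a `promote_rule` between `NDTensors.Dense{Float64, Vector{…}}` and `NDTensors.Dense{Float32, CuArray{…}}`, reached from `_krylov_addbasis!`) suggest a subtraction between an ITensor still holding CPU `Float64` storage and one holding GPU `Float32` storage. A minimal sketch of that same operation (hypothetical: it requires a CUDA-capable machine and only fails on the affected package versions):

```julia
using ITensors
using CUDA

i = Index(2)
A = randomITensor(i)      # CPU ITensor, Float64 dense storage
B = cu(randomITensor(i))  # GPU ITensor, Float32 CuArray storage

# On the ITensors version that TenNetLib resolves to, mixing the
# two storage types in one operation hits the same failing
# `promote_rule`/`set_parameter` path shown in the stack trace:
A - B
```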
I guess the problem comes from the older version of ITensors.jl that TenNetLib.jl is using. In recent releases of ITensors.jl, the developers have fixed the GPU-related problems. But since those releases made some breaking changes, TenNetLib.jl still uses an older version of ITensors.jl. Hopefully I will soon update TenNetLib.jl so that it becomes compatible with recent releases of ITensors.jl.
Unfortunately, I do not have access to a GPU machine right now, so I cannot really test GPU code from my end. I expect this situation to be resolved soon, however.
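Until that update lands, one possible stopgap (an untested sketch, based only on the stack trace, which implicates the Krylov subspace expansion in `krylov_extend!`): run the sweep without the `extendat` keyword, so that code path may not be triggered. Whether this actually bypasses the expansion depends on TenNetLib's defaults for `nsite = "dynamic"`, so please check the package documentation:

```julia
# Untested sketch: omit `extendat` so the global subspace
# expansion (`krylov_extend!`), where the stack trace fails,
# may not be triggered. Verify against TenNetLib's docs what
# `nsite = "dynamic"` does when `extendat` is not given.
tdvpsweep!(engine, tau;
           nsite = "dynamic",
           maxdim = 200,
           cutoff = 1E-12)
```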
Thank you for your quick response.
Do you have an estimate of how long it would take you to fix such a problem? And how much time would it take someone else?
I can try to do it myself, but some tips would be useful.
Also, I understand that the ITensors developers are now working on a new version. Do you think it is better to wait until they finish it?
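As a first step for anyone attempting the fix, it may help to confirm which ITensors version TenNetLib's compat bounds actually resolve to in the active environment (standard Pkg commands, nothing package-specific):

```julia
using Pkg

# Show the resolved versions of the relevant packages in the
# active environment; an ITensors version pinned back from the
# latest release would be consistent with the compat-bound
# explanation given earlier in the thread.
Pkg.status("ITensors")
Pkg.status("NDTensors")
Pkg.status("TenNetLib")
```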