Hi,

I am getting this error when running Stable Diffusion. I saw that the latest development (3 weeks ago) should resolve this issue, but maybe something also needs to be changed on the A1111 / Stable Diffusion side:
`ckF` is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
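To make the error more concrete, here is a small sketch (my own illustration, not xFormers' actual dispatch code) of the two constraints quoted above: the CK forward kernel rejects inputs whose head dimension exceeds 256 or whose dtype is not half precision, which is exactly what float32 inputs trigger.

```python
def ck_f_rejection_reasons(q_head_dim: int, v_head_dim: int, dtype: str) -> list[str]:
    """Return the reasons (mirroring the error message above) why the ckF
    kernel would reject an input with these head dimensions and dtype.
    An empty list means both checks pass."""
    reasons = []
    # Constraint 1: max(query.shape[-1], value.shape[-1]) must be <= 256
    if max(q_head_dim, v_head_dim) > 256:
        reasons.append("max(query.shape[-1], value.shape[-1]) > 256")
    # Constraint 2: only half-precision dtypes are supported
    if dtype not in ("bfloat16", "float16"):
        reasons.append(f"dtype={dtype} (supported: bfloat16, float16)")
    return reasons

# float32 (what --precision full --no-half produces) is rejected:
print(ck_f_rejection_reasons(64, 64, "float32"))
# half precision with head dim <= 256 passes both checks:
print(ck_f_rejection_reasons(64, 64, "float16"))
```

This is only a model of the reported checks, but it shows why forcing full precision makes the CK kernels unavailable regardless of tensor shapes.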
Here is the output of python -m xformers.info:
xFormers 0.0.30+4cfab36.d20250207
memory_efficient_attention.ckF: available
memory_efficient_attention.ckB: available
memory_efficient_attention.ck_decoderF: available
memory_efficient_attention.ck_splitKF: available
memory_efficient_attention.cutlassF-pt: available
memory_efficient_attention.cutlassB-pt: available
memory_efficient_attention.fa2F@0.0.0: unavailable
memory_efficient_attention.fa2B@0.0.0: unavailable
memory_efficient_attention.fa3F@0.0.0: unavailable
memory_efficient_attention.fa3B@0.0.0: unavailable
memory_efficient_attention.triton_splitKF: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
sequence_parallel_fused.write_values: available
sequence_parallel_fused.wait_values: available
sequence_parallel_fused.cuda_memset_32b_async: available
sp24.sparse24_sparsify_both_ways: available
sp24.sparse24_apply: available
sp24.sparse24_apply_dense_output: available
sp24._sparse24_gemm: available
sp24._cslt_sparse_mm_search@0.0.0: available
sp24._cslt_sparse_mm@0.0.0: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
pytorch.version: 2.7.0.dev20250206+rocm6.3
pytorch.cuda: available
gpu.compute_capability: 11.0
gpu.name: AMD Radeon RX 7900 XTX
dcgm_profiler: unavailable
build.info: available
build.cuda_version: None
build.hip_version: None
build.python_version: 3.10.16
build.torch_version: 2.7.0.dev20250206+rocm6.3
build.env.TORCH_CUDA_ARCH_LIST: None
build.env.PYTORCH_ROCM_ARCH: None
build.env.XFORMERS_BUILD_TYPE: None
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: None
source.privacy: open source
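For anyone who wants to check a dump like the one above programmatically, here is a small parsing sketch (this helper is my own, not part of xFormers): it splits the `key: value` lines into a dict so operator availability can be asserted in a script.

```python
def parse_xformers_info(text: str) -> dict[str, str]:
    """Parse 'key: value' lines, as printed by `python -m xformers.info`,
    into a dict. Lines without a colon are ignored."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            # partition() splits on the first colon only, so values that
            # themselves contain colons are preserved intact
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

dump = """memory_efficient_attention.ckF: available
memory_efficient_attention.fa2F@0.0.0: unavailable
pytorch.version: 2.7.0.dev20250206+rocm6.3"""

info = parse_xformers_info(dump)
print(info["memory_efficient_attention.ckF"])  # prints "available"
```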
I can even see that cutlassF-pt and cutlassB-pt are now available.

I have compiled xFormers from this source code:

pip3.10 install -v -U git+https://github.com/ROCm/xformers@for_upstream

and, judging by the python -m xformers.info output above, the build itself seems to be working fine.

I am pretty new to this topic, so I may be asking stupid questions. But when I run without --precision full --no-half I get a very different error. I assume the xFormers build is fine, but that PyTorch or A1111 cannot work with it for some reason.