RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm) #47

Open
IAFFeng opened this issue Dec 2, 2024 · 12 comments

@IAFFeng

IAFFeng commented Dec 2, 2024

comfyanonymous/ComfyUI#5862 (comment)

@philipy1219
Contributor

Could you please share your workflow?

@IAFFeng
Author

IAFFeng commented Dec 3, 2024

> Could you please share your workflow?

Unsaved Workflow.json

@LIIDAA

LIIDAA commented Dec 3, 2024

新建_文本文档_(2).json
After one picture is generated, this error appears, and I have to restart again.

@LIIDAA

LIIDAA commented Dec 3, 2024

Afterwards, I still get the same error.

@LIIDAA

LIIDAA commented Dec 3, 2024

2024-12-03T10:59:55.886986 -
0%| | 0/20 [00:00<?, ?it/s]2024-12-03T10:59:55.965962 -
0%| | 0/20 [00:00<?, ?it/s]2024-12-03T10:59:55.965962 -
2024-12-03T10:59:56.094326 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
2024-12-03T10:59:56.118359 - Traceback (most recent call last):
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1457, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1424, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 118, in KSampler_sample
return orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 855, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 136, in sample
return orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 753, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 740, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 719, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 101, in KSAMPLER_sample
return orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 624, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 709, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 176, in sampling_function
out = orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 145, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux\utils.py", line 78, in forward_orig_ipa
img, txt = block(img=img, txt=txt, vec=vec, pe=pe, t=timesteps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux\flux\layers.py", line 60, in forward
ip_hidden_states = self.ip_adapter(self.num_heads, img_q, self.image_emb, t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux\attention_processor.py", line 39, in __call__
ip_hidden_states_key_proj = self.to_k_ip(ip_hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-03T10:59:56.120359 - Prompt executed in 17.13 seconds
2024-12-03T11:00:00.707904 - got prompt
2024-12-03T11:00:00.796411 - Unloading models for lowram load.
2024-12-03T11:00:01.177245 - 0 models unloaded.
2024-12-03T11:00:01.179245 -
0%| | 0/20 [00:00<?, ?it/s]2024-12-03T11:00:01.221215 -
0%| | 0/20 [00:00<?, ?it/s]2024-12-03T11:00:01.221215 -
2024-12-03T11:00:01.235215 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-03T11:00:01.247215 - Prompt executed in 0.49 seconds
2024-12-03T11:00:02.977906 - got prompt
2024-12-03T11:00:03.059883 - Unloading models for lowram load.
2024-12-03T11:00:03.121090 - 0 models unloaded.
2024-12-03T11:00:03.123089 -
0%| | 0/20 [00:00<?, ?it/s]2024-12-03T11:00:03.170091 -
0%| | 0/20 [00:00<?, ?it/s]2024-12-03T11:00:03.170091 -
2024-12-03T11:00:03.190164 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
2024-12-03T11:00:03.193169 - Traceback (most recent call last):
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in map_node_over_list
process_inputs(input_dict, i)
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1457, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1424, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 118, in KSampler_sample
return orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 855, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 136, in sample
return orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 753, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 740, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 719, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 101, in KSAMPLER_sample
return orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 624, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in call
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in call
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 709, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 176, in sampling_function
out = orig_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 145, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux\utils.py", line 78, in forward_orig_ipa
img, txt = block(img=img, txt=txt, vec=vec, pe=pe, t=timesteps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux\flux\layers.py", line 60, in forward
ip_hidden_states = self.ip_adapter(self.num_heads, img_q, self.image_emb, t)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IPAdapter-Flux\attention_processor.py", line 39, in call
ip_hidden_states_key_proj = self.to_k_ip(ip_hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable_nvidia (1)\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\linear.py", line 125, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-03T11:00:03.196133 - Prompt executed in 0.18 seconds
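For anyone debugging this: the failure at `F.linear` means the `to_k_ip` projection's weights are still on the CPU while the query tensor is on `cuda:0`. A minimal workaround sketch (a hypothetical helper, not the node's actual code) is to move the submodule to the input's device before calling it:

```python
import torch

def call_on_input_device(module: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Run `module` on `x`, moving its weights to x's device first if needed."""
    param = next(module.parameters(), None)
    if param is not None and param.device != x.device:
        module.to(x.device)  # moves all parameters/buffers in place
    return module(x)

# CPU-only demo: when the Linear's weights match the input's device, the
# "Expected all tensors to be on the same device" RuntimeError cannot occur.
proj = torch.nn.Linear(8, 8)
out = call_on_input_device(proj, torch.randn(2, 8))
```

The same check could be applied to `ip_hidden_states` inside the attention processor, since that is the tensor fed into `to_k_ip`.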

## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":93,"last_link_id":139,"nodes":[{"id":89,"type":"KSampler","pos":[5022,746],"size":[315,262],"flags":{},"order":9,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":133,"label":"模型"},{"name":"positive","type":"CONDITIONING","link":132,"label":"正面条件"},{"name":"negative","type":"CONDITIONING","link":136,"label":"负面条件"},{"name":"latent_image","type":"LATENT","link":139,"label":"Latent"}],"outputs":[{"name":"LATENT","type":"LATENT","links":[134],"slot_index":0,"label":"Latent"}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[835765754753103,"randomize",20,1,"euler","normal",1]},{"id":12,"type":"UNETLoader","pos":[3867,1117],"size":[315,82],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[114],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["flux_dev.safetensors","fp8_e4m3fn"]},{"id":11,"type":"DualCLIPLoader","pos":[3824,1290],"size":[315,106],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[127,137],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["clip_l.safetensors","t5xxl_fp16.safetensors","flux"]},{"id":10,"type":"VAELoader","pos":[4216,883],"size":[315,58],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"VAE","type":"VAE","links":[12],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAELoader"},"widgets_values":["ae.safetensors"]},{"id":93,"type":"EmptyLatentImage","pos":[4706,1383],"size":[315,106],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[139],"label":"Latent"}],"properties":{"Node name for 
S&R":"EmptyLatentImage"},"widgets_values":[1024,1024,1]},{"id":8,"type":"VAEDecode","pos":[5036,1235],"size":[237.8846435546875,89.71307373046875],"flags":{"collapsed":false},"order":10,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":134,"label":"Latent"},{"name":"vae","type":"VAE","link":12,"label":"VAE"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0,"label":"图像"}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":78,"type":"IPAdapterFluxLoader","pos":[4177,681],"size":[315,106],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"ipadapterFlux","type":"IP_ADAPTER_FLUX_INSTANTX","links":[113],"slot_index":0,"label":"ipadapterFlux"}],"properties":{"Node name for S&R":"IPAdapterFluxLoader"},"widgets_values":["ip-adapter.bin","google/siglip-so400m-patch14-384","cuda"]},{"id":90,"type":"CLIPTextEncode","pos":[4277,1304],"size":[400,200],"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":137,"label":"CLIP"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[136],"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":[""]},{"id":79,"type":"ApplyIPAdapterFlux","pos":[4554,662],"size":[327.5999755859375,146],"flags":{},"order":8,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":114,"label":"model"},{"name":"ipadapter_flux","type":"IP_ADAPTER_FLUX_INSTANTX","link":113,"label":"ipadapter_flux"},{"name":"image","type":"IMAGE","link":116,"label":"image"}],"outputs":[{"name":"MODEL","type":"MODEL","links":[133],"slot_index":0,"label":"MODEL"}],"properties":{"Node name for S&R":"ApplyIPAdapterFlux"},"widgets_values":[0.75,0,1]},{"id":9,"type":"SaveImage","pos":[5488,577],"size":[1023.5243530273438,1062.5860595703125],"flags":{},"order":11,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9,"label":"图像"}],"outputs":[],"properties":{"Node name for 
S&R":"SaveImage"},"widgets_values":["Flux_Lora"]},{"id":87,"type":"CLIPTextEncode","pos":[4260,1049],"size":[400,200],"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":127,"label":"CLIP"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[132],"slot_index":0,"label":"条件"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["pillow"]},{"id":80,"type":"LoadImage","pos":[4553,288],"size":[315,314],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[116],"slot_index":0,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["flux_dev_example.png","image"]}],"links":[[9,8,0,9,0,"IMAGE"],[12,10,0,8,1,"VAE"],[113,78,0,79,1,"IP_ADAPTER_FLUX_INSTANTX"],[114,12,0,79,0,"MODEL"],[116,80,0,79,2,"IMAGE"],[127,11,0,87,0,"CLIP"],[132,87,0,89,1,"CONDITIONING"],[133,79,0,89,0,"MODEL"],[134,89,0,8,0,"LATENT"],[136,90,0,89,2,"CONDITIONING"],[137,11,0,90,0,"CLIP"],[139,93,0,89,3,"LATENT"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.7972024500000012,"offset":[-3350.003932817133,-266.37604920487615]},"0246.VERSION":[0,0,4]},"version":0.4}


## Additional Context
(Please add any additional context or steps to reproduce the error here)

@philipy1219
Copy link
Contributor

Could you please share your workflow?

Unsaved Workflow.json

It seems you have set cuda as the ipadapter and clip_vision device and are running ComfyUI windows portable. The CUDA setup might have failed, or the model may have been offloaded to the CPU. We will run a test on ComfyUI windows portable.

@IAFFeng
Copy link
Author

IAFFeng commented Dec 3, 2024

Could you please share your workflow?

Unsaved Workflow.json

It seems you have set cuda as the ipadapter and clip_vision device and are running ComfyUI windows portable. The CUDA setup might have failed, or the model may have been offloaded to the CPU. We will run a test on ComfyUI windows portable.

Yes, I use ComfyUI windows portable
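While that's being investigated, a defensive pattern worth trying (an assumption on my part, not what the loader node actually does) is to only request cuda when it is actually usable, so weights never silently end up on the wrong device:

```python
import torch

def pick_device(preferred: str = "cuda") -> torch.device:
    # Fall back to CPU when CUDA is unavailable, instead of loading the
    # ipadapter / clip_vision weights onto a device that cannot be used.
    if preferred == "cuda" and torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device("cuda")
```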

@kakkkarot
Copy link

This continues to happen regardless of switching to GPU or CPU. It runs fine when you switch to GPU, then gives the error after one run; switch to CPU and it generates one image, then the same error again. P.S. It's not only the portable version: before this I had a standalone installation of ComfyUI and got the same error, so it has nothing to do with the portable build. This started exactly after the last update of PuLID. I've posted about this several times, including on the ComfyUI Discord, but no one seems to have a solution. Some suggested downgrading Comfy, but that didn't work either.

@CmoneBK
Copy link

CmoneBK commented Dec 22, 2024

Any workarounds yet?
I got the same error not using portable.
Everything seems up to date.

Also found it unsolved here: comfyanonymous/ComfyUI#5763
and here: comfyanonymous/ComfyUI#5862

ComfyUI Error Report

Error Details

  • Node ID: 589
  • Node Type: SamplerCustomAdvanced
  • Exception Type: RuntimeError
  • Exception Message: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

Stack Trace

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\Automatic1111 2\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 633, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 1058, in sample_deis
    denoised = model(x_cur, t_cur * s_in, **extra_args)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)

  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)

  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
    img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)

  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
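The trace again ends inside `F.linear`, so the PuLID cross-attention weights (`to_q`) sit on a different device from `latents`. A small diagnostic helper (hypothetical, not part of ComfyUI) can confirm which side is on the CPU:

```python
import torch

def param_devices(module: torch.nn.Module) -> set:
    """Return the set of distinct devices holding this module's parameters."""
    return {p.device for p in module.parameters()}

# Example: a freshly constructed layer lives entirely on the CPU.
layer = torch.nn.Linear(4, 4)
devices = param_devices(layer)
```

Printing `param_devices(...)` for the offending `pulid_ca` blocks, alongside `embedding.device`, should show exactly which tensors were offloaded.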

System Information

  • ComfyUI Version: v0.3.4
  • Arguments: main.py
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.1.1+cu121

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12884377600
    • VRAM Free: 1224219096
    • Torch VRAM Total: 10066329600
    • Torch VRAM Free: 66308568

Logs

2024-12-22T11:00:50.868237 -  [DONE]2024-12-22T11:00:50.869230 - 
2024-12-22T11:00:50.873737 - Install model '4x_foolhardy_Remacri' into 'D:\Automatic1111 2\ComfyUI\models\upscale_models\4x_foolhardy_Remacri.pth'2024-12-22T11:00:50.873737 - 
2024-12-22T11:00:52.486759 - Downloading https://cdn-lfs.hf-mirror.com/repos/ec/ee/eceeed2a0e8a9141e6d7535f06f502877d9c21e33ed536ec902a38f876756416/e1a73bd89c2da1ae494774746398689048b5a892bd9653e146713f9df8bca86a?response-content-disposition=inline%3B+filename*%3DUTF-8%27%274x_foolhardy_Remacri.pth%3B+filename%3D%224x_foolhardy_Remacri.pth%22%3B&Expires=1735120190&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczNTEyMDE5MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5oZi5jby9yZXBvcy9lYy9lZS9lY2VlZWQyYTBlOGE5MTQxZTZkNzUzNWYwNmY1MDI4NzdkOWMyMWUzM2VkNTM2ZWM5MDJhMzhmODc2NzU2NDE2L2UxYTczYmQ4OWMyZGExYWU0OTQ3NzQ3NDYzOTg2ODkwNDhiNWE4OTJiZDk2NTNlMTQ2NzEzZjlkZjhiY2E4NmE%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qIn1dfQ__&Signature=eUnPOUsT8WVf67iU-Glhy3MCMg%7E7mkxf7aG6UWe1Watz%7EItBZ8KTyrlktTMuqMvryYe31actOFRmTUgsu0C75CwWSdQhDIrJJEJg0kz3UC6DlBHW8%7E9wCbwwbwxk-Mcdrm0vbJSIsaN9TrH1MDCooEVtareA1TzBuksX1UlDVQZ-opHwXHKxwMQ%7E1hIWH6WNnUusd%7Eo3Ohcmdc%7EKlyw95TUDq4FAtqJ3l8IhtXjc3ggku5oCBuqFhX5xuO9whjBw5lS4ZoUOkaj1TYIHAUFVNwp44UzD-rnbrmpXZHpE4HCn2gvFElAcDVOAAvVr0NYMecd0lejNB5hI6yEgXpR1%7EA__&Key-Pair-Id=K3RPWS32NSSJCE to D:\Automatic1111 2\ComfyUI\models\upscale_models\4x_foolhardy_Remacri.pth2024-12-22T11:00:52.486759 - 
2024-12-22T11:00:55.235550 - 
 97%|████████████████████████████████████████████████████████████████████▏ | 65241088/67025055 [00:02<00:00, 24901625.10it/s]2024-12-22T11:00:55.303664 - 
100%|██████████████████████████████████████████████████████████████████████| 67025055/67025055 [00:02<00:00, 24849563.19it/s]2024-12-22T11:00:55.303664 - 
2024-12-22T11:01:01.000597 - got prompt
2024-12-22T11:01:01.011075 - Failed to validate prompt for output 358:
2024-12-22T11:01:01.011075 - * UNETLoader 762:
2024-12-22T11:01:01.011075 -   - Value not in list: unet_name: 'flux1-dev.safetensors' not in ['flux1-dev-fp8.safetensors', 'flux1-dev.sft', 'flux1-schnell-fp8.safetensors', 'flux\\flux1-dev-fp8-e4m3fn.safetensors']
2024-12-22T11:01:01.011075 - * ControlNetLoader 647:
2024-12-22T11:01:01.011075 -   - Value not in list: control_net_name: 'Flux-Controlnet-Union.safetensors' not in (list of length 70)
2024-12-22T11:01:01.011075 - Output will be ignored
2024-12-22T11:01:01.011075 - Failed to validate prompt for output 756:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 140:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 258:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 84:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 299:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 179:
2024-12-22T11:01:01.012075 - Output will be ignored
2024-12-22T11:01:01.012075 - Failed to validate prompt for output 138:
2024-12-22T11:01:01.013075 - Output will be ignored
2024-12-22T11:01:01.013075 - Failed to validate prompt for output 354:
2024-12-22T11:01:01.013075 - Output will be ignored
2024-12-22T11:01:01.013075 - Failed to validate prompt for output 300:
2024-12-22T11:01:01.013075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 301:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 758:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 757:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 146:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 141:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 346:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 584:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 145:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.014075 - Failed to validate prompt for output 755:
2024-12-22T11:01:01.014075 - Output will be ignored
2024-12-22T11:01:01.015578 - Failed to validate prompt for output 637:
2024-12-22T11:01:01.015578 - Output will be ignored
2024-12-22T11:01:01.015578 - Failed to validate prompt for output 147:
2024-12-22T11:01:01.015578 - Output will be ignored
2024-12-22T11:01:01.015578 - Failed to validate prompt for output 440:
2024-12-22T11:01:01.016582 - Output will be ignored
2024-12-22T11:01:01.016582 - Failed to validate prompt for output 447:
2024-12-22T11:01:01.016582 - Output will be ignored
2024-12-22T11:01:01.016582 - Failed to validate prompt for output 433:
2024-12-22T11:01:01.016582 - Output will be ignored
2024-12-22T11:01:01.017255 - Failed to validate prompt for output 356:
2024-12-22T11:01:01.017255 - Output will be ignored
2024-12-22T11:01:01.017255 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:01:11.689591 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json2024-12-22T11:01:11.690598 - 2024-12-22T11:01:11.741164 -  [DONE]2024-12-22T11:01:11.741164 - 
2024-12-22T11:01:11.757675 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ControlNet-LLLite-ComfyUI/models2024-12-22T11:01:11.757675 - 
2024-12-22T11:01:11.759676 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/pfg-ComfyUI/models
2024-12-22T11:01:11.761185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_FaceAnalysis/dlib
2024-12-22T11:01:11.762185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI-YoloWorld-EfficientSAM
2024-12-22T11:01:11.762185 - 
2024-12-22T11:01:11.762185 - 
2024-12-22T11:01:11.763185 - 
2024-12-22T11:01:11.763185 - 
2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_ID_Animator/models/animatediff_models
2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_ID_Animator/models/image_encoder
2024-12-22T11:01:11.763185 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_CustomNet/pretrain
2024-12-22T11:01:11.764184 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI_ID_Animator/models
2024-12-22T11:01:11.764184 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/ComfyUI-ToonCrafter/ToonCrafter/checkpoints/tooncrafter_512_interp_v1
2024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.764184 - 
2024-12-22T11:01:11.765184 - 
2024-12-22T11:01:11.766184 - [ComfyUI-Manager] The target custom node for model download is not installed: custom_nodes/comfyui-SegGPT2024-12-22T11:01:11.766184 - 
2024-12-22T11:02:59.785023 - got prompt
2024-12-22T11:02:59.795530 - Failed to validate prompt for output 358:
2024-12-22T11:02:59.795530 - * UNETLoader 762:
2024-12-22T11:02:59.795530 -   - Value not in list: unet_name: 'flux1-dev.safetensors' not in ['flux1-dev-fp8.safetensors', 'flux1-dev.sft', 'flux1-schnell-fp8.safetensors', 'flux\\flux1-dev-fp8-e4m3fn.safetensors']
2024-12-22T11:02:59.795530 - * BasicGuider 618:
2024-12-22T11:02:59.795530 -   - Required input is missing: model
2024-12-22T11:02:59.795530 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 756:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 140:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 258:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 84:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.796528 - Failed to validate prompt for output 299:
2024-12-22T11:02:59.796528 - Output will be ignored
2024-12-22T11:02:59.797528 - Failed to validate prompt for output 179:
2024-12-22T11:02:59.797528 - Output will be ignored
2024-12-22T11:02:59.797528 - Failed to validate prompt for output 138:
2024-12-22T11:02:59.797528 - Output will be ignored
2024-12-22T11:02:59.798528 - Failed to validate prompt for output 354:
2024-12-22T11:02:59.799528 - Output will be ignored
2024-12-22T11:02:59.799528 - Failed to validate prompt for output 300:
2024-12-22T11:02:59.799528 - Output will be ignored
2024-12-22T11:02:59.799528 - Failed to validate prompt for output 301:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.801542 - Failed to validate prompt for output 758:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.801542 - Failed to validate prompt for output 757:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.801542 - Failed to validate prompt for output 146:
2024-12-22T11:02:59.801542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 141:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 346:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 584:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 145:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.802542 - Failed to validate prompt for output 755:
2024-12-22T11:02:59.802542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 637:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 147:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 440:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.803542 - Failed to validate prompt for output 447:
2024-12-22T11:02:59.803542 - Output will be ignored
2024-12-22T11:02:59.804542 - Failed to validate prompt for output 433:
2024-12-22T11:02:59.804542 - Output will be ignored
2024-12-22T11:02:59.804542 - Failed to validate prompt for output 356:
2024-12-22T11:02:59.804542 - Output will be ignored
2024-12-22T11:02:59.804542 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:04:01.203325 - got prompt
2024-12-22T11:04:01.212751 - Failed to validate prompt for output 358:
2024-12-22T11:04:01.212751 - * BasicGuider 618:
2024-12-22T11:04:01.213752 -   - Required input is missing: model
2024-12-22T11:04:01.213752 - Output will be ignored
2024-12-22T11:04:01.213752 - Failed to validate prompt for output 756:
2024-12-22T11:04:01.213752 - Output will be ignored
2024-12-22T11:04:01.213752 - Failed to validate prompt for output 140:
2024-12-22T11:04:01.213752 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 258:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 84:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 299:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 179:
2024-12-22T11:04:01.214751 - Output will be ignored
2024-12-22T11:04:01.214751 - Failed to validate prompt for output 138:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 354:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 300:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 301:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 758:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 757:
2024-12-22T11:04:01.215751 - Output will be ignored
2024-12-22T11:04:01.215751 - Failed to validate prompt for output 146:
2024-12-22T11:04:01.216750 - Output will be ignored
2024-12-22T11:04:01.216750 - Failed to validate prompt for output 141:
2024-12-22T11:04:01.216750 - Output will be ignored
2024-12-22T11:04:01.216750 - Failed to validate prompt for output 346:
2024-12-22T11:04:01.216750 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 584:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 145:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 755:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 637:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 147:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.217751 - Failed to validate prompt for output 440:
2024-12-22T11:04:01.217751 - Output will be ignored
2024-12-22T11:04:01.219254 - Failed to validate prompt for output 447:
2024-12-22T11:04:01.219254 - Output will be ignored
2024-12-22T11:04:01.219254 - Failed to validate prompt for output 433:
2024-12-22T11:04:01.219254 - Output will be ignored
2024-12-22T11:04:01.219254 - Failed to validate prompt for output 356:
2024-12-22T11:04:01.219254 - Output will be ignored
2024-12-22T11:04:01.219254 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:04:15.735177 - got prompt
2024-12-22T11:04:15.744696 - Failed to validate prompt for output 358:
2024-12-22T11:04:15.745695 - * BasicGuider 618:
2024-12-22T11:04:15.745695 -   - Required input is missing: model
2024-12-22T11:04:15.745695 - Output will be ignored
2024-12-22T11:04:15.745695 - Failed to validate prompt for output 756:
2024-12-22T11:04:15.745695 - Output will be ignored
2024-12-22T11:04:15.746696 - Failed to validate prompt for output 140:
2024-12-22T11:04:15.746696 - Output will be ignored
2024-12-22T11:04:15.746696 - Failed to validate prompt for output 258:
2024-12-22T11:04:15.746696 - Output will be ignored
2024-12-22T11:04:15.746696 - Failed to validate prompt for output 84:
2024-12-22T11:04:15.746696 - Output will be ignored
2024-12-22T11:04:15.747694 - Failed to validate prompt for output 299:
2024-12-22T11:04:15.747694 - Output will be ignored
2024-12-22T11:04:15.747694 - Failed to validate prompt for output 179:
2024-12-22T11:04:15.747694 - Output will be ignored
2024-12-22T11:04:15.747694 - Failed to validate prompt for output 138:
2024-12-22T11:04:15.747694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 354:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 300:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 301:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.748694 - Failed to validate prompt for output 758:
2024-12-22T11:04:15.748694 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 757:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 146:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 141:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 346:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.749698 - Failed to validate prompt for output 584:
2024-12-22T11:04:15.749698 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 145:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 755:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 637:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.750894 - Failed to validate prompt for output 147:
2024-12-22T11:04:15.750894 - Output will be ignored
2024-12-22T11:04:15.751893 - Failed to validate prompt for output 440:
2024-12-22T11:04:15.751893 - Output will be ignored
2024-12-22T11:04:15.751893 - Failed to validate prompt for output 447:
2024-12-22T11:04:15.751893 - Output will be ignored
2024-12-22T11:04:15.752894 - Failed to validate prompt for output 433:
2024-12-22T11:04:15.752894 - Output will be ignored
2024-12-22T11:04:15.752894 - Failed to validate prompt for output 356:
2024-12-22T11:04:15.752894 - Output will be ignored
2024-12-22T11:04:15.752894 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-12-22T11:04:29.666931 - got prompt
2024-12-22T11:04:32.346987 - Requested to load FluxClipModel_
2024-12-22T11:04:32.346987 - Loading 1 new model
2024-12-22T11:04:44.268525 - loaded completely 0.0 4777.53759765625 True
2024-12-22T11:04:44.739906 - Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
2024-12-22T11:04:47.510680 - Loading PuLID-Flux model.
2024-12-22T11:04:54.507511 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-12-22T11:04:54.514762 - model_type FLUX
2024-12-22T11:07:38.488513 - C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\utils\transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
  P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4

2024-12-22T11:07:40.603622 - Requested to load Flux
2024-12-22T11:07:40.603622 - Loading 1 new model
2024-12-22T11:07:45.251781 - loaded partially 4364.382933044433 4361.963928222656 0
2024-12-22T11:07:45.322728 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T11:07:45.353635 - Requested to load AutoencodingEngine
2024-12-22T11:07:45.353635 - Loading 1 new model
2024-12-22T11:07:47.546526 - loaded completely 0.0 159.87335777282715 True
2024-12-22T11:07:47.881637 - Requested to load Flux
2024-12-22T11:07:47.881637 - Loading 1 new model
2024-12-22T11:07:49.486162 - loaded partially 5757.105179595947 5757.078186035156 0
2024-12-22T11:08:36.390379 - 
  0%|                                                                                                 | 0/25 [00:51<?, ?it/s]2024-12-22T11:08:36.390379 - 
2024-12-22T11:08:36.390379 - Processing interrupted
2024-12-22T11:08:36.396887 - Prompt executed in 246.71 seconds
2024-12-22T11:10:26.555383 - got prompt
2024-12-22T11:10:27.047238 - Requested to load Flux
2024-12-22T11:10:27.047238 - Loading 1 new model
2024-12-22T11:10:27.101210 - loaded partially 5821.078186035156 5820.086975097656 0
2024-12-22T11:10:27.120301 - 
  0%|                                                                                                 | 0/25 [00:00<?, ?it/s]2024-12-22T11:10:27.166668 - Requested to load AutoencodingEngine
2024-12-22T11:10:27.166668 - Loading 1 new model
2024-12-22T11:10:45.757388 - loaded completely 0.0 159.87335777282715 True
2024-12-22T11:10:49.102077 - loaded completely 8385.545609283447 6297.982421875 True
2024-12-22T11:10:49.647333 - loaded partially 2340.0543983459474 2336.3047485351562 0
2024-12-22T11:17:27.993365 - 
 40%|███████████████████████████████████▏                                                    | 10/25 [07:00<08:51, 35.45s/it]2024-12-22T11:17:31.963662 - 
 40%|███████████████████████████████████▏                                                    | 10/25 [07:04<10:37, 42.48s/it]2024-12-22T11:17:31.963662 - 
2024-12-22T11:17:31.976176 - !!! Exception during processing !!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
2024-12-22T11:17:31.980417 - Traceback (most recent call last):
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\Automatic1111 2\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\Automatic1111 2\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 633, in sample
    samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 100, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\k_diffusion\sampling.py", line 1058, in sample_deis
    denoised = model(x_cur, t_cur * s_in, **extra_args)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 175, in sampling_function
    out = orig_fn(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\Automatic1111 2\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\Automatic1111 2\ComfyUI\comfy\model_base.py", line 145, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\comfy\ldm\flux\model.py", line 184, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\pulidflux.py", line 136, in forward_orig
    img = img + node_data['weight'] * self.pulid_ca[ca_idx](node_data['embedding'], img)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\Automatic1111 2\ComfyUI\custom_nodes\ComfyUI-PuLID-Flux-Enhanced\encoders_flux.py", line 57, in forward
    q = self.to_q(latents)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Christoph\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

2024-12-22T11:17:31.981974 - Prompt executed in 425.07 seconds
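The traceback shows `F.linear` receiving a CUDA input while the `pulid_ca` layer's weights are still on the CPU. A minimal sketch of that mismatch and the usual remedy (moving the module onto the input's device before calling it) is below; `safe_apply` is a hypothetical helper for illustration, not part of the PuLID or ComfyUI code:

```python
import torch
import torch.nn as nn

# Minimal reproduction of the error: a linear layer whose parameters
# stay on the CPU while its input tensor lives on the GPU.
layer = nn.Linear(4, 4)   # parameters are on CPU by default
x = torch.randn(2, 4)

if torch.cuda.is_available():
    x = x.cuda()
    try:
        layer(x)          # RuntimeError: ... cuda:0 and cpu ...
    except RuntimeError as e:
        print(e)
    layer.to(x.device)    # move the module to the input's device
    out = layer(x)        # both operands now on cuda:0

def safe_apply(module: nn.Module, inp: torch.Tensor) -> torch.Tensor:
    """Defensively align a module's parameters with its input's device."""
    param = next(module.parameters(), None)
    if param is not None and param.device != inp.device:
        module.to(inp.device)
    return module(inp)
```

In this issue the offload logic apparently left the PuLID cross-attention modules on the CPU after partial model loading, so a fix along these lines (or keeping those modules pinned to the sampling device) would avoid the crash.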

Workflow

Lalimec commented Dec 24, 2024

Any news on this? It was fine before the update.

@bramvera

I have the same problem with ComfyUI on Arch Linux.

CmoneBK commented Dec 25, 2024

Try this in case you are also using PuLID Flux Enhanced: sipie800/ComfyUI-PuLID-Flux-Enhanced#54
