
running inference error #7

Open
bruceisme opened this issue Aug 12, 2024 · 2 comments
Comments

@bruceisme

/workspace/code/MPS/inference2.py:22: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  model = torch.load(model_ckpt_path)
/opt/conda/envs/mps/lib/python3.9/site-packages/pydantic/_internal/_config.py:269: UserWarning: Valid config keys have changed in V2:
  • 'allow_population_by_field_name' has been renamed to 'populate_by_name'
  • 'validate_all' has been renamed to 'validate_default'
  warnings.warn(message, UserWarning)
/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/runtime/zero/linear.py:53: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead.
  def forward(ctx, input, weight, bias=None):
/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/runtime/zero/linear.py:79: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead.
  def backward(ctx, grad_output):
/opt/conda/envs/mps/lib/python3.9/site-packages/pydantic/_internal/fields.py:127: UserWarning: Field "model_persistence_threshold" has conflict with protected namespace "model_".
  You may be able to resolve this warning by setting model_config['protected_namespaces'] = ().
  warnings.warn(
/opt/conda/envs/mps/lib/python3.9/site-packages/pydantic/_internal/_config.py:269: UserWarning: Valid config keys have changed in V2:
  • 'validate_all' has been renamed to 'validate_default'
  warnings.warn(message, UserWarning)
Traceback (most recent call last):
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1126, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/opt/conda/envs/mps/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 27, in <module>
    from ...modeling_utils import PreTrainedModel
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/modeling_utils.py", line 37, in <module>
    from .deepspeed import deepspeed_config, is_deepspeed_zero3_enabled
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/deepspeed.py", line 38, in <module>
    from accelerate.utils.deepspeed import HfDeepSpeedConfig as DeepSpeedConfig
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/accelerate/__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/accelerate/accelerator.py", line 31, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/accelerate/checkpointing.py", line 24, in <module>
    from .utils import (
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/accelerate/utils/__init__.py", line 105, in <module>
    from .launch import (
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/accelerate/utils/launch.py", line 28, in <module>
    from ..utils.other import merge_dicts
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/accelerate/utils/other.py", line 28, in <module>
    from deepspeed import DeepSpeedEngine
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/__init__.py", line 15, in <module>
    from . import module_inject
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/module_inject/__init__.py", line 3, in <module>
    from .replace_module import replace_transformer_layer, revert_transformer_layer, ReplaceWithTensorSlicing, GroupQuantizer, generic_injection
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/module_inject/replace_module.py", line 803, in <module>
    from ..pipe import PipelineModule
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/pipe/__init__.py", line 3, in <module>
    from ..runtime.pipe import PipelineModule, LayerSpec, TiedLayerSpec
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/runtime/pipe/__init__.py", line 3, in <module>
    from .module import PipelineModule, LayerSpec, TiedLayerSpec
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/runtime/pipe/module.py", line 16, in <module>
    from ..activation_checkpointing import checkpointing
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/runtime/activation_checkpointing/checkpointing.py", line 25, in <module>
    from deepspeed.runtime.config import DeepSpeedConfig
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/runtime/config.py", line 30, in <module>
    from ..monitor.config import get_monitor_config
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/monitor/config.py", line 70, in <module>
    class DeepSpeedMonitorConfig(DeepSpeedConfigModel):
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/deepspeed/monitor/config.py", line 82, in DeepSpeedMonitorConfig
    def check_enabled(cls, values):
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py", line 222, in root_validator
    return root_validator()(*__args)  # type: ignore
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/pydantic/deprecated/class_validators.py", line 228, in root_validator
    raise PydanticUserError(
pydantic.errors.PydanticUserError: If you use @root_validator with pre=False (the default) you MUST specify skip_on_failure=True. Note that @root_validator is deprecated and should be replaced with @model_validator.

For further information visit https://errors.pydantic.dev/2.3/u/root-validator-pre-skip

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/workspace/code/MPS/inference2.py", line 22, in <module>
    model = torch.load(model_ckpt_path)
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/torch/serialization.py", line 1097, in load
    return _load(
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/torch/serialization.py", line 1525, in _load
    result = unpickler.load()
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/torch/serialization.py", line 1515, in find_class
    return super().find_class(mod_name, name)
  File "/workspace/code/MPS/trainer/models/clip_model.py", line 2, in <module>
    from transformers import CLIPModel as HFCLIPModel
  File "<frozen importlib._bootstrap>", line 1055, in _handle_fromlist
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1117, in __getattr__
    value = getattr(module, name)
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1116, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/opt/conda/envs/mps/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1128, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):
If you use @root_validator with pre=False (the default) you MUST specify skip_on_failure=True. Note that @root_validator is deprecated and should be replaced with @model_validator.
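The root cause is that the installed DeepSpeed still declares its config validators with pydantic v1's @root_validator, which pydantic v2 rejects at class-definition time. The sketch below shows the v2-style @model_validator pattern the error message asks for; the MonitorConfig class and its fields are illustrative stand-ins, not DeepSpeed's actual code, and it assumes pydantic >= 2 is installed:

```python
# Minimal sketch of the pydantic-v2 pattern that replaces @root_validator.
# Hypothetical config class; DeepSpeed's real DeepSpeedMonitorConfig differs.
from pydantic import BaseModel, model_validator


class MonitorConfig(BaseModel):
    tensorboard: bool = False
    wandb: bool = False
    enabled: bool = False

    # v2 replacement for the deprecated @root_validator(cls, values) form:
    @model_validator(mode="after")
    def check_enabled(self):
        # Derive `enabled` from the individual backends, like a root
        # validator would have done over the whole values dict.
        self.enabled = self.tensorboard or self.wandb
        return self


cfg = MonitorConfig(wandb=True)
print(cfg.enabled)  # True
```

Since the incompatible code lives inside the installed DeepSpeed package, the practical fix is upgrading (or pinning) the packages rather than patching the validator by hand.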

@bruceisme
Author

I just followed the README.

@ChiehYunChen

I also encountered this issue and solved it with:
pip install --upgrade transformers pydantic
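The upgrade works because newer transformers and DeepSpeed releases are compatible with pydantic v2. If upgrading would break other pinned dependencies in the environment, a commonly reported alternative for this class of error is rolling pydantic back to the 1.x line, which still accepts @root_validator (assumption: nothing else in the environment requires pydantic v2):

```shell
# Keep the existing transformers/deepspeed versions; pin pydantic to v1.
pip install "pydantic<2"
```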
