Describe the bug
The `paddle.onnx.export` API provided by the Paddle framework does not support exporting half-precision ONNX models; it fails immediately with `[ERROR] Float16 is not supported.` Likewise, paddle2onnx cannot export half-precision ONNX models. Please provide a way to support this use case.
Related issue: PaddlePaddle/Paddle#57194
Minimal reproduction example:
```python
import paddle
from paddlenlp.transformers import UIEX  # import the model class from the model code

model = UIEX.from_pretrained("uie-x-base")  # instantiate the model and load pretrained weights
model.to(dtype="float16")  # cast the parameters to half precision
model.eval()  # put the model in evaluation mode

# Define the input specs
input_spec = [
    paddle.static.InputSpec(shape=[None, None], dtype="int64", name="input_ids"),
    paddle.static.InputSpec(shape=[None, None], dtype="int64", name="token_type_ids"),
    paddle.static.InputSpec(shape=[None, None], dtype="int64", name="position_ids"),
    paddle.static.InputSpec(shape=[None, None], dtype="int64", name="attention_mask"),
    paddle.static.InputSpec(shape=[None, None, 4], dtype="int64", name="bbox"),
    paddle.static.InputSpec(shape=[None, 3, 224, 224], dtype="float16", name="image"),
]

print("Exporting ONNX model to %s" % "./uiex_fp16.onnx")
paddle.onnx.export(model, "./uiex_fp16", input_spec=input_spec)  # export the ONNX model
print("ONNX model exported.")
```
Separately, after converting the FP32 ONNX model to FP16 with the tool at https://zenn.dev/pinto0309/scraps/588ed8342e2182, the resulting FP16 ONNX model is broken and cannot be run with onnxruntime.
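For reference, one commonly used post-hoc FP32-to-FP16 route is `convert_float_to_float16` from onnxconverter-common; whether the result actually runs in onnxruntime depends on which ops are in the graph. A minimal sketch, assuming a hypothetical FP32 export already exists at `./uiex_fp32.onnx`:

```python
# Post-hoc FP32 -> FP16 conversion via onnxconverter-common
# (pip install onnx onnxconverter-common onnxruntime).
# "./uiex_fp32.onnx" is a placeholder path for an existing FP32 export.
import onnx
import onnxruntime as ort
from onnxconverter_common import float16

model_fp32 = onnx.load("./uiex_fp32.onnx")

# keep_io_types=True keeps the graph inputs/outputs in FP32, which often
# avoids dtype-mismatch errors when the converted model is loaded in
# onnxruntime; internal weights and activations become FP16.
model_fp16 = float16.convert_float_to_float16(model_fp32, keep_io_types=True)
onnx.save(model_fp16, "./uiex_fp16.onnx")

# Smoke test: if the conversion produced an invalid graph, session creation
# raises here rather than at inference time.
sess = ort.InferenceSession("./uiex_fp16.onnx", providers=["CPUExecutionProvider"])
print([(i.name, i.type) for i in sess.get_inputs()])
```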
Information (please complete the following information):
Has this been resolved?
The bug that prevents Paddle2ONNX from natively exporting FP16 models will be fixed in the PR below.
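Once native FP16 export lands, a quick sanity check is to inspect the initializer dtypes of the exported graph. A minimal sketch; the file name is a placeholder:

```python
# Verify that the exported graph actually stores FP16 weights.
# "./uiex_fp16.onnx" is a placeholder file name.
import onnx
from onnx import TensorProto

m = onnx.load("./uiex_fp16.onnx")
dtypes = {TensorProto.DataType.Name(init.data_type) for init in m.graph.initializer}
print(dtypes)  # expect FLOAT16 (plus INT64 for any shape/index tensors)
```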
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.