
[Question]: How to improve the inference performance of the UIE model.onnx deployed on CPU? #3636

Closed
yuwochangzai opened this issue Nov 1, 2022 · 5 comments
Labels: question (Further information is requested), stale

Comments

@yuwochangzai

yuwochangzai commented Nov 1, 2022

I'm using the uie_nano pretrained model, but inference is still very slow.

With 31 schemas, a test of 80-odd samples takes 190 seconds. The model was generated following the documentation with the command:
python deploy/python/infer_cpu.py --model_path_prefix export/inference
How can I improve inference performance?

@yuwochangzai added the question (Further information is requested) label on Nov 1, 2022
@LemonNoel (Contributor)

You can increase the batch_size parameter as far as your machine's CPU configuration allows.
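Larger batches amortize per-call overhead. Assuming the deploy script exposes a batch_size argument (check `python deploy/python/infer_cpu.py --help`), this is a command-line flag; in custom serving code the chunking itself is just (helper name is illustrative, not PaddleNLP's API):

```python
def iter_batches(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Example: 5 inputs with batch_size 2 produce batches of sizes 2, 2, 1.
```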

@wawltor (Collaborator)

wawltor commented Nov 1, 2022

#3496

This covers INT8 deployment with Lite; you can check whether your CPU supports INT8 deployment.

@yuwochangzai (Author)

> You can increase the batch_size parameter as far as your machine's CPU configuration allows.

That didn't help.

@github-actions

This issue is stale because it has been open for 60 days with no activity.

@github-actions github-actions bot added the stale label Dec 31, 2022
@github-actions

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 15, 2023

3 participants