It may be related to other aspects of the machine's configuration, not just the compute card.
Search before asking
Please ask your question
Hello,
After converting the model I trained with picode_xs to ONNX, I ran a rough benchmark: looping 1000 times and taking the average, excluding preprocessing, the model's inference alone takes about 30 ms per call on CPU. The ONNX model you provide for download at the link (( w/o post-processing)) also runs at roughly 30 ms per call.
Also, after converting to a TRT model and running inference with Triton Server (on an A10 card), it takes about 6 ms at batch size 10, but the documentation says it should take only about 3 ms. Why is the gap so large?
Additionally, how do I convert the trained model to an ncnn model? I tried what the tutorial describes, but the conversion failed, even though the ONNX model's outputs match those of the trained model.
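For reference, the timing methodology described above (average over 1000 iterations, preprocessing excluded) can be sketched as a small harness. This is a minimal sketch, not the benchmark actually used in the issue; the `onnxruntime` usage shown in the comment assumes a local `model.onnx` path and input name, both hypothetical here:

```python
import time

def avg_latency_ms(run, iters=1000, warmup=10):
    """Average per-call latency of `run` in milliseconds.

    Warm-up calls are executed first so one-time initialization cost
    (graph optimization, memory allocation) is not counted in the average.
    """
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - start) / iters * 1000

# Example with onnxruntime (assumed setup, shown for illustration):
#   import numpy as np, onnxruntime as ort
#   sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
#   x = np.random.rand(1, 3, 320, 320).astype(np.float32)  # assumed input shape
#   print(avg_latency_ms(lambda: sess.run(None, {"image": x})))
```

Differences from documented numbers often come from warm-up handling, CPU thread settings, and whether timing includes session overhead, so a harness like this helps make comparisons apples-to-apples.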