Disabled conversion to float of model's input #25555

Merged: 4 commits merged into opencv:5.x on May 16, 2024

Conversation

alexlyulkov (Contributor):

In dnn 4.x, any model input is usually converted to float32 or float16 (except for quantized models), and mean and scale can be applied to it. Current dnn 5.x performs the same conversion for everything except int32 and int64 types. I removed this conversion.

Here is how the pipeline works now (a short sketch follows the list):

  • if the input Mat type is float32, the pipeline applies mean and scale and may convert it to float16;
  • if the input Mat type is not float32, the pipeline preserves the input type and doesn't apply mean and scale.
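For illustration, here is a minimal C++ sketch of the resulting user-facing behavior; the model path, input name, and scale/mean values are hypothetical placeholders (net.setInput with scalefactor and mean is the existing dnn API):

    #include <opencv2/dnn.hpp>
    using namespace cv;
    using namespace cv::dnn;

    int main()
    {
        Net net = readNetFromONNX("model.onnx");  // hypothetical model path

        // float32 input: scalefactor and mean are applied by the pipeline,
        // which may also convert the blob to float16 (e.g. for FP16 targets).
        Mat floatBlob(std::vector<int>{1, 3, 224, 224}, CV_32F, Scalar(0));
        net.setInput(floatBlob, "input", /*scalefactor=*/1.0 / 255.0, /*mean=*/Scalar(0.5));

        // uint8 input: the type is now preserved and scalefactor/mean are
        // not applied, so none are passed here.
        Mat intBlob(std::vector<int>{1, 3, 224, 224}, CV_8U, Scalar(0));
        net.setInput(intBlob, "input");

        return 0;
    }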

There was a conflict in the protobuf parser between the ONNX importer and the tests. The ONNX importer treated every uint8 weight as a quantized weight and applied the conversion x = int8(x_uint8 - 128) inside the protobuf parser. The ONNX conformance tests use the same protobuf reader, so tests with uint8 inputs couldn't read their input values properly. I've made this conversion optional.
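For reference, a minimal sketch of that unconditional recentering (the helper name is illustrative, not from the patch):

    #include <cstdint>

    // Shift a uint8 value from [0, 255] onto the signed range [-128, 127],
    // i.e. x = int8(x_uint8 - 128), as the parser did for every uint8 weight.
    static inline int8_t recenterUint8(uint8_t x_uint8)
    {
        return static_cast<int8_t>(static_cast<int>(x_uint8) - 128);
    }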

These ONNX conformance tests are enabled:

  • test_add_uint8
  • test_div_uint8
  • test_mul_uint8
  • test_sub_uint8
  • test_max_int8
  • test_max_uint8
  • test_min_int8
  • test_min_uint8
  • test_mod_mixed_sign_int8
  • test_mod_uint8

These tests were removed:

  • Test_two_inputs.basic (when input is uint8)
  • setInput.normalization (when input is uint8)

Pull Request Readiness Checklist

See details at https://github.com/opencv/opencv/wiki/How_to_contribute#making-a-good-pull-request

  • I agree to contribute to the project under Apache 2 License.
  • To the best of my knowledge, the proposed patch is not based on code under GPL or another license that is incompatible with OpenCV
  • The PR is proposed to the proper branch
  • There is a reference to the original bug report and related work
  • There are accuracy tests, performance tests, and test data in the opencv_extra repository, if applicable.
    The patch to opencv_extra has the same branch name.
  • The feature is well documented and sample code can be built with the project CMake

@alexlyulkov self-assigned this on May 7, 2024
Two review threads on modules/dnn/src/layer_internals.hpp were resolved.
@@ -4116,7 +4116,7 @@ Mat readTensorFromONNX(const String& path)
     {
         CV_Error(Error::StsUnsupportedFormat, cv::format("Failed to parse ONNX data: %s", path.c_str()));
     }
-    Mat mat = getMatFromTensor(tensor_proto);
+    Mat mat = getMatFromTensor(tensor_proto, false);
Member:
If conversion is set to off by default for all cases, why not just remove the conversion?

alexlyulkov (Contributor, Author):
getMatFromTensor is used in several places in onnx_importer.cpp and onnx_graph_simplifier.cpp. Removing the conversion breaks several tests.
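A hedged sketch of the optional-conversion approach this refers to; only the extra bool argument is visible in the diff above, so the parameter name and default shown here are assumptions:

    // Declaration sketch: the importer call sites keep the old behavior via
    // the defaulted flag, while readTensorFromONNX passes false so that
    // conformance-test inputs stored as uint8 are read unmodified.
    Mat getMatFromTensor(opencv_onnx::TensorProto& tensor_proto,
                         bool uint8ToInt8 = true);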

@asmorkalov (Contributor):

CUDA failures:

2024-05-07T11:19:57.7352829Z [  PASSED  ] 8230 tests.
2024-05-07T11:19:57.7354070Z [  FAILED  ] 12 tests, listed below:
2024-05-07T11:19:57.7356794Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_max_int8_CUDA_CUDA, where GetParam() = (test_max_int8, CUDA/CUDA)
2024-05-07T11:19:57.7361102Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_max_int8_CUDA_CUDA_FP16, where GetParam() = (test_max_int8, CUDA/CUDA_FP16)
2024-05-07T11:19:57.7365381Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_max_uint8_CUDA_CUDA, where GetParam() = (test_max_uint8, CUDA/CUDA)
2024-05-07T11:19:57.7369679Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_max_uint8_CUDA_CUDA_FP16, where GetParam() = (test_max_uint8, CUDA/CUDA_FP16)
2024-05-07T11:19:57.7374209Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_min_int8_CUDA_CUDA, where GetParam() = (test_min_int8, CUDA/CUDA)
2024-05-07T11:19:57.7378613Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_min_int8_CUDA_CUDA_FP16, where GetParam() = (test_min_int8, CUDA/CUDA_FP16)
2024-05-07T11:19:57.7382887Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_min_uint8_CUDA_CUDA, where GetParam() = (test_min_uint8, CUDA/CUDA)
2024-05-07T11:19:57.7387157Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_min_uint8_CUDA_CUDA_FP16, where GetParam() = (test_min_uint8, CUDA/CUDA_FP16)
2024-05-07T11:19:57.7391866Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_mod_mixed_sign_int8_CUDA_CUDA, where GetParam() = (test_mod_mixed_sign_int8, CUDA/CUDA)
2024-05-07T11:19:57.7396881Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_mod_mixed_sign_int8_CUDA_CUDA_FP16, where GetParam() = (test_mod_mixed_sign_int8, CUDA/CUDA_FP16)
2024-05-07T11:19:57.7401505Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_mod_uint8_CUDA_CUDA, where GetParam() = (test_mod_uint8, CUDA/CUDA)
2024-05-07T11:19:57.7405804Z [  FAILED  ] Test_ONNX_conformance.Layer_Test/test_mod_uint8_CUDA_CUDA_FP16, where GetParam() = (test_mod_uint8, CUDA/CUDA_FP16)

@asmorkalov assigned himself and unassigned alexlyulkov on May 15, 2024
Commit: …s with uint8 to float input conversion, disabled test_mul_uint8 conformance test
@asmorkalov (Contributor):

@dkurt do you have any comments?

@asmorkalov (Contributor):

@dkurt We need the patch merged to continue with boolean type support. Please provide comments after the PR is merged; Alexander will address the remarks in a follow-up PR.

@asmorkalov merged commit 9238eb2 into opencv:5.x on May 16, 2024
23 of 25 checks passed