[BUGFIX] Set metric id at metric config initialization. #10700

Open
wants to merge 1 commit into base: develop

Conversation

@billdirks billdirks (Contributor) commented Nov 22, 2024

This is an experiment. If I like the results I will add tests, update this description, and ask for a review.
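
The PR title describes the change as assigning the metric id when the metric configuration is initialized, rather than leaving it unset and filling it in later. A minimal sketch of that pattern is below; the class, field, and helper names are illustrative only and are not the actual great_expectations API.

```python
# Hypothetical illustration of "set the metric id at metric config initialization":
# the config computes a stable id from its own fields as soon as it is constructed,
# so every consumer sees the same id without a separate assignment step.
# MetricConfiguration, metric_name, metric_domain_kwargs, and id are illustrative
# names, not the real great_expectations implementation.
from __future__ import annotations

import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class MetricConfiguration:
    metric_name: str
    metric_domain_kwargs: dict = field(default_factory=dict)
    id: str = field(init=False)

    def __post_init__(self) -> None:
        # Deterministic id derived from the config's identity, assigned eagerly.
        payload = json.dumps(
            {"name": self.metric_name, "domain": self.metric_domain_kwargs},
            sort_keys=True,
        )
        self.id = hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]


config = MetricConfiguration("table.row_count", {"table": "my_table"})
print(config.id)  # id is available immediately after initialization
```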

  • Description of PR changes above includes a link to an existing GitHub issue
  • PR title is prefixed with one of: [BUGFIX], [FEATURE], [DOCS], [MAINTENANCE], [CONTRIB]
  • Code is linted - run invoke lint (uses ruff format + ruff check)
  • Appropriate tests and docs have been updated

For more information about contributing, visit our community resources.

After you submit your PR, keep the page open and monitor the statuses of the various checks made by our continuous integration process at the bottom of the page. Please fix any issues that come up and reach out on Slack if you need help. Thanks for contributing!


netlify bot commented Nov 22, 2024

Deploy Preview for niobium-lead-7998 ready!

Name Link
🔨 Latest commit cd591ad
🔍 Latest deploy log https://app.netlify.com/sites/niobium-lead-7998/deploys/67411558edef130008c5359f
😎 Deploy Preview https://deploy-preview-10700.docs.greatexpectations.io
📱 Preview on mobile


codecov bot commented Nov 22, 2024

❌ 44 Tests Failed:

Tests completed    Failed    Passed    Skipped
27280              44        27236     4782
View the top 3 failed tests by shortest run time
tests.test_definitions.test_expectations_v3_api::test_case_runner_v3_api[sqlite/multi_table_expectations/expect_table_row_count_to_equal_other_table:basic_negative]
Stack Traces | 0.009s run time
test_case = {'expectation_type': 'expect_table_row_count_to_equal_other_table', 'pk_column': False, 'skip': False, 'test': {'exact...lude_in_gallery': True, 'input': {'other_table_name': 'expect_table_row_count_to_equal_other_table_data_3'}, ...}, ...}

    @pytest.mark.order(index=0)
    @pytest.mark.slow  # 12.68s
    def test_case_runner_v3_api(test_case):
        if test_case["skip"]:
            pytest.skip()
    
>       evaluate_json_test_v3_api(
            validator=test_case["validator_with_data"],
            expectation_type=test_case["expectation_type"],
            test=test_case["test"],
            pk_column=test_case["pk_column"],
        )

tests/test_definitions/test_expectations_v3_api.py:446: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
great_expectations/self_check/util.py:2015: in evaluate_json_test_v3_api
    check_json_test_result(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

test = {'exact_match_out': False, 'in': {'other_table_name': 'expect_table_row_count_to_equal_other_table_data_3'}, 'include_in_gallery': True, 'input': {'other_table_name': 'expect_table_row_count_to_equal_other_table_data_3'}, ...}
result = {'exception_info': {'exception_message': None, 'exception_traceback': None, 'raised_exception': False}, 'expectation_c...'expect_table_row_count_to_equal_other_table'}, 'meta': {}, 'result': {'observed_value': {'other': 4, 'self': 4}}, ...}
pk_column = False

    def check_json_test_result(  # noqa: C901, PLR0912, PLR0915
        test, result, pk_column=False
    ) -> None:
        # check for id_pk results in cases where pk_column is true and unexpected_index_list already exists  # noqa: E501
        # this will work for testing since result_format is COMPLETE
        if pk_column:
            if not result["success"]:
                if "unexpected_index_list" in result["result"]:
                    assert "unexpected_index_query" in result["result"]
    
        if "unexpected_list" in result["result"]:
            if ("result" in test["output"]) and ("unexpected_list" in test["output"]["result"]):
                (
                    test["output"]["result"]["unexpected_list"],
                    result["result"]["unexpected_list"],
                ) = sort_unexpected_values(
                    test["output"]["result"]["unexpected_list"],
                    result["result"]["unexpected_list"],
                )
            elif "unexpected_list" in test["output"]:
                (
                    test["output"]["unexpected_list"],
                    result["result"]["unexpected_list"],
                ) = sort_unexpected_values(
                    test["output"]["unexpected_list"],
                    result["result"]["unexpected_list"],
                )
    
        if "partial_unexpected_list" in result["result"]:
            if ("result" in test["output"]) and ("partial_unexpected_list" in test["output"]["result"]):
                (
                    test["output"]["result"]["partial_unexpected_list"],
                    result["result"]["partial_unexpected_list"],
                ) = sort_unexpected_values(
                    test["output"]["result"]["partial_unexpected_list"],
                    result["result"]["partial_unexpected_list"],
                )
            elif "partial_unexpected_list" in test["output"]:
                (
                    test["output"]["partial_unexpected_list"],
                    result["result"]["partial_unexpected_list"],
                ) = sort_unexpected_values(
                    test["output"]["partial_unexpected_list"],
                    result["result"]["partial_unexpected_list"],
                )
    
        # Determine if np.allclose(..) might be needed for float comparison
        try_allclose = False
        if "observed_value" in test["output"]:
            if RX_FLOAT.match(repr(test["output"]["observed_value"])):
                try_allclose = True
    
        # Check results
        if test["exact_match_out"] is True:
            if "result" in result and "observed_value" in result["result"]:
                if isinstance(result["result"]["observed_value"], (np.floating, float)):
                    assert np.allclose(
                        result["result"]["observed_value"],
                        expectationValidationResultSchema.load(test["output"])["result"][
                            "observed_value"
                        ],
                        rtol=RTOL,
                        atol=ATOL,
                    ), f"(RTOL={RTOL}, ATOL={ATOL}) {result['result']['observed_value']} not np.allclose to {expectationValidationResultSchema.load(test['output'])['result']['observed_value']}"  # noqa: E501
                else:
                    assert result == expectationValidationResultSchema.load(
                        test["output"]
                    ), f"{result} != {expectationValidationResultSchema.load(test['output'])}"
            else:
                assert result == expectationValidationResultSchema.load(
                    test["output"]
                ), f"{result} != {expectationValidationResultSchema.load(test['output'])}"
        else:
            # Convert result to json since our tests are reading from json so cannot easily contain richer types (e.g. NaN)  # noqa: E501
            # NOTE - 20191031 - JPC - we may eventually want to change these tests as we update our view on how  # noqa: E501
            # representations, serializations, and objects should interact and how much of that is shown to the user.  # noqa: E501
            result = result.to_json_dict()
            for key, value in test["output"].items():
                if key == "success":
                    if isinstance(value, (np.floating, float)):
                        try:
                            assert np.allclose(
                                result["success"],
                                value,
                                rtol=RTOL,
                                atol=ATOL,
                            ), f"(RTOL={RTOL}, ATOL={ATOL}) {result['success']} not np.allclose to {value}"  # noqa: E501
                        except TypeError:
                            assert result["success"] == value, f"{result['success']} != {value}"
                    else:
>                       assert result["success"] == value, f"{result['success']} != {value}"
E                       AssertionError: True != False

great_expectations/self_check/util.py:2120: AssertionError

To view more test analytics, go to the Test Analytics Dashboard
