
dealing with tests that cannot succeed #665

Open
ligurio opened this issue Apr 23, 2021 · 6 comments

Comments

@ligurio

ligurio commented Apr 23, 2021

Sometimes tests cannot be fixed quickly and you expect them to fail. In such cases it's common practice to mark them accordingly with a status like XFail or Skip.

A Skip means that you expect your test to pass unless a certain configuration or condition prevents it from running. An XFail means that your test can run, but you expect it to fail because there is an implementation problem.

It would be nice to have functionality to set a certain test status in the test source code.

@Tieske
Member

Tieske commented Apr 23, 2021

you can use tags and then include/exclude those tags based on the conditions in which you run the tests.

But I might not understand your request exactly...
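
For example, a quick sketch of the tag approach (the describe/it names here are made up): tags are just #words embedded in the test descriptions, and you select or drop them on the command line.

describe("database #slow", function()
    it("runs the full migration #integration", function()
        -- test body
    end)
end)

-- then when running the suite:
--   busted --tags=integration       -- run only tests tagged #integration
--   busted --exclude-tags=slow      -- leave out tests tagged #slow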

@ligurio
Author

ligurio commented Apr 23, 2021

@Tieske filtering using tags excludes a test from the test report, and that is not desired.
See how this functionality is implemented in pytest:
https://docs.pytest.org/en/latest/how-to/skipping.html

@Tieske
Member

Tieske commented Apr 24, 2021

Here's how we do stuff like that (untested):

local platform_it = function(platforms, description, ...)
    if type(platforms) ~= "table" then
        -- no platform list given: plain 'it' call, the first arg is the description
        return it(platforms, description, ...)
    end

    -- get_platform() is assumed to return the current platform name as a
    -- string, e.g. "windows", "osx" or "linux"
    local platform = get_platform()
    local test = false
    for _, plat in ipairs(platforms) do
        if plat == platform then
            test = true
            break
        end
    end
    -- run the test on a matching platform, otherwise mark it as pending so it
    -- still shows up in the report
    return test and it(description, ...) or pending("[skipping on "..platform.."] "..description, ...)
end

platform_it({ "windows", "osx" }, "a test as usual", function()
    -- test something, only on Windows and OSX, not on Linux
end)

platform_it({ "osx", "linux" }, "another test as usual", function()
    -- test something, only on OSX and Linux, not on Windows
end)

@jamessan

jamessan commented Jun 7, 2021

The important aspect of an xfail test is that it still runs but it's expected to fail.

This is useful to document the expected behavior for a scenario that's known to be failing (e.g., a bug report) but hasn't been fixed yet. If something changes that fixes the test, you're alerted to it because the test passing is treated as a failure.

At that point, you can verify whether the behavior change is intended and simply switch it from "xfail" to a normal test.
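
A rough sketch of that idea on top of busted's plain it (untested; xfail_it is just an illustrative name, not an existing API):

local xfail_it = function(description, testfn)
    return it("[xfail] " .. description, function()
        -- run the real test body; a failed assertion raises an error
        local ok = pcall(testfn)
        -- invert the outcome: failing is expected, passing should sound an alarm
        assert(not ok, "xfail test unexpectedly passed: " .. description)
    end)
end

xfail_it("reproduces a known bug", function()
    -- known-broken behavior goes here; once it is fixed, this test starts
    -- failing, prompting you to turn it back into a plain it()
end)

One caveat: the report would show such a test as "passing" while the bug is still present, so a proper implementation would probably want a dedicated result status rather than simple inversion.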

@DorianGray
Contributor

We might be able to extend the reporting functionality to report on excluded tests? Tags exist specifically so that skip, xfail, etc. can all be handled the same way anyway.

@alerque
Member

alerque commented Aug 25, 2022

I don't think reporting on excluded tests answers this question. The point of an XFail test is that it is included but its mode is reversed: it is not excluded, it is run, but the expected result is the opposite of the declared expectation. That way an alarm sounds if a known-broken test starts passing and you have fixed a bug you didn't realize was being affected (for the better).

I don't see a way to do that with the tag system. We can include and exclude, but not reverse modes.
