--rerun doesn't take effect if there are multiple test suites #393

Open
seagreen opened this issue Nov 19, 2018 · 7 comments

I have an example of the problem here: https://github.com/seagreen/hspec-multiple-suites

Let me know if it would be helpful to have examples of this that don't use stack; as it stands, it's possible this is a downstream issue in stack.

sol commented Nov 19, 2018

What I think happens here is that, due to --failure-report=TESTREPORT, all the test suites use the same file to store the test report. Now if you run them sequentially, one after another, each one will overwrite the test report of the previous one, leading to unexpected behavior.
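
Concretely (assuming two suites named spec-1 and spec-2), both invocations end up writing the same file:

# both test suites receive the same --test-arguments, so both write ./TESTREPORT
./spec-1 --rerun --failure-report=TESTREPORT   # writes TESTREPORT
./spec-2 --rerun --failure-report=TESTREPORT   # overwrites spec-1's report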

This is a conceptual problem; I don't have a good idea how to solve this. If anybody has ideas on how to approach this then I'd love to hear them.

sol commented Nov 19, 2018

Maybe one way to approach this in hspec would be to (rough shell sketch below):

  1. determine the full path to the test executable
  2. store the failure report in e.g. ~/.hspec-failures/$(md5sum test_executable_path)
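
Roughly, in shell, the idea would look like this (illustrative only; the executable name, report location, and the use of realpath/md5sum here are assumptions, not an existing hspec feature):

#!/bin/bash
# Illustrative sketch: derive a per-executable failure-report path from the
# md5 of the executable's full path, so several test suites can't clobber
# each other's reports.
exe="$(realpath "$(find .stack-work -type f -executable -name spec-1)")"
mkdir -p ~/.hspec-failures
report=~/.hspec-failures/"$(printf '%s' "$exe" | md5sum | cut -d' ' -f1)"
"$exe" --rerun --failure-report="$report" --rerun-all-on-success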

But I think this is still a conceptual problem with stack (and cabal). You have similar issues if you have e.g. one hspec and one doctest test suite. In that situation if you pass e.g. --test-arguments=--rerun to stack test, the doctests will fail due to an unrecognized argument. It could be useful if you could scope test arguments to specific test suites.

Is this already possible with stack and/or cabal somehow?

sol commented Nov 20, 2018

Workaround:

  1. Create a file run-test.sh
    #!/bin/bash
    `find .stack-work -type f -executable -name spec-1` --rerun --failure-report=TESTREPORT-spec-1 --rerun-all-on-success
    `find .stack-work -type f -executable -name spec-2` --rerun --failure-report=TESTREPORT-spec-2 --rerun-all-on-success
  2. chmod +x run-test.sh
  3. stack test --fast --file-watch --no-run-tests --exec=./run-test.sh

You could probably write a script that solves this generically, e.g. use yaml2json and jq to extract the test names from package.yaml and then use that with some bash scripting to make things happen. I would love to see a blog post on that!
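
Something along these lines, perhaps (an untested sketch; it assumes an hpack-style package.yaml whose tests: section keys are the test-suite names, and that yaml2json and jq are on the PATH):

#!/bin/bash
# Untested sketch: run every test suite listed in package.yaml with its own
# failure report, so that --rerun works per suite.
set -euo pipefail

for suite in $(yaml2json < package.yaml | jq -r '.tests | keys[]'); do
  exe="$(find .stack-work -type f -executable -name "$suite")"
  "$exe" --rerun --failure-report="TESTREPORT-$suite" --rerun-all-on-success
done

It would be used the same way as run-test.sh above, via --exec.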

seagreen (Author) commented

Sweet, that worked!

But I think this is still a conceptual problem with stack (and cabal). You have similar issues if you have e.g. one hspec and one doctest test suite. In that situation if you pass e.g. --test-arguments=--rerun to stack test, the doctests will fail due to an unrecognized argument. It could be useful if you could scope test arguments to specific test suites.

stack is definitely not powerful enough here. It should be able to handle invoking different test suites with different arguments, and provide nice defaults for doing so.

However, maybe we can make it easier for them. What if there were an hspec flag that specified a directory to store results in instead of a file, where the results of running individual test suites would each be named after that test suite?

I'll use the name --failure-report-dir in the example below, though the name could be anything you want.

So the example could change to stack build --fast --file-watch --test --test-arguments '--rerun --failure-report-dir=TESTDIR --rerun-all-on-success' and would result in two files, ./TESTDIR/spec-1 and ./TESTDIR/spec-2.

Thoughts? I'm not actually sure this is a good idea, but it's what occurred to me 😃

sol commented Apr 18, 2021

I think the right approach is to use hspec/sensei or ghcid to run your tests. It's faster and it does not rely on --failure-report. hspec will store the test report in the process environment instead.
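
For reference, typical invocations look roughly like this (package, suite, and path names are placeholders; check each tool's README for exact usage):

# hspec/sensei: re-runs the spec on file changes; failure state is kept in
# the process environment, so no --failure-report file is involved.
sensei -isrc -itest test/Spec.hs

# ghcid: same idea, re-running main of a single test suite on changes.
ghcid --command "stack ghci my-package:test:spec-1" --test main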

So I'm not sure if I'm going to burn any cycles on this. I'm still keeping this issue open for now as a reminder to improve the docs.

sol added this to the later milestone Apr 18, 2021
sol removed this from the later milestone May 12, 2021
sol commented Sep 11, 2022

As of hspec-2.10.0 unique failure report paths can be implemented as a plug-in.

Something like this:

module UniqueFailureReport (use) where

import GHC.Fingerprint (fingerprintString)
import System.Environment (getProgName, getExecutablePath)
import Test.Hspec.Core.Runner (Config(..))
import Test.Hspec.Core.Spec (SpecWith, runIO, modifyConfig)

use :: SpecWith a -> SpecWith a
use = (uniqueFailureReportPath >>)

uniqueFailureReportPath :: SpecWith a
uniqueFailureReportPath = do
  path <- runIO uniquePath
  modifyConfig (setFailureReport path)

setFailureReport :: FilePath -> Config -> Config
setFailureReport path config = config { configFailureReport = Just path }

-- Produces e.g. ".hspec-failures-spec-1-<md5 of the executable's full path>"
uniquePath :: IO FilePath
uniquePath = do
  name <- getProgName
  md5sum <- show . fingerprintString <$> getExecutablePath
  return $ ".hspec-failures-" <> name <> "-" <> md5sum

This can be packaged up and published to Hackage. A user can then use it by registering it anywhere in their spec, or better yet in a spec hook:

 -- file test/SpecHook.hs
 module SpecHook where

 import           Test.Hspec
 import qualified UniqueFailureReport

 hook :: Spec -> Spec
 hook = UniqueFailureReport.use

I don’t plan to publish/maintain this myself, but @seagreen, if you want to take responsibility for this code, then please go ahead (no attribution required).

seagreen (Author) commented

I don’t plan to publish/maintain this myself, but @seagreen, if you want to take responsibility for this code, then please go ahead (no attribution required).

I'll pass on that, but I am glad to see there's a fix.
