
Shrink does not stop in 0.1 releases #24

Open
jockbert opened this issue Sep 28, 2017 · 5 comments

Comments

@jockbert

The shrinker does not stop shrinking despite reaching the minimum values of the given ranges. The property keeps failing for the values (0, 6), but the shrinker continues the shrinking process with the same values (0, 6) over and over. The behaviour might be acceptable for primitive cases such as the example below, but it is a real problem for more time-consuming properties, such as serialization and deserialization involving files on disk.

My expectation on the shrinker is to stop trying to shrink when there are no smaller values left to try. It should stop instead of retrying the same values over and over again "to infinity" (i.e. 100000 times).
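The termination condition I would expect can be sketched like this (a hypothetical sketch of a shrink loop that stops at a fixpoint, not QuickTheories' actual shrinker):

```java
import java.util.Optional;

// Hypothetical sketch (not QuickTheories' actual code): shrink a failing
// pair (w, h) toward the range minimums and stop as soon as no strictly
// smaller candidate exists, instead of retrying the same values forever.
public class ShrinkSketch {
  static int[] shrink(int w, int h, int wMin, int hMin) {
    while (true) {
      Optional<int[]> candidate = smaller(w, h, wMin, hMin);
      if (!candidate.isPresent()) {
        return new int[] { w, h }; // fixpoint reached: stop shrinking
      }
      w = candidate.get()[0];
      h = candidate.get()[1];
    }
  }

  // Propose a strictly smaller pair, or empty when both values already
  // sit at their range minimums -- the condition that is missed here.
  static Optional<int[]> smaller(int w, int h, int wMin, int hMin) {
    if (w > wMin) return Optional.of(new int[] { w - 1, h });
    if (h > hMin) return Optional.of(new int[] { w, h - 1 });
    return Optional.empty();
  }

  public static void main(String[] args) {
    int[] result = shrink(4, 9, 0, 6);
    System.out.println(result[0] + "," + result[1]); // prints "0,6" and terminates
  }
}
```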

Example

package mypackage;

import static org.junit.Assert.fail;
import org.junit.Test;
import org.quicktheories.quicktheories.WithQuickTheories;

public class MyTest implements WithQuickTheories {
  private int mCounter;
  
  @Test
  public void doNotStopShrink()
  {
    mCounter = 0;
    
    qt()
    .forAll(
      integers().between(0, 5),
      integers().between(6, 10))
    .checkAssert((w, h) -> {
      System.err.println("count: " + (mCounter++) + " w=" + w + " h=" + h);
      fail();
    });
  }
}

The output is:

count: 0 w=4 h=9
count: 1 w=3 h=8
count: 2 w=2 h=7
count: 3 w=1 h=6
count: 4 w=0 h=6
count: 5 w=0 h=6
count: 6 w=0 h=6
count: 7 w=0 h=6
count: 8 w=0 h=6
...
count: 99998 w=0 h=6
count: 99999 w=0 h=6
count: 100000 w=0 h=6
@hcoles
Contributor

hcoles commented Sep 28, 2017

That is not good - thanks for the report.

This is already fixed on master, where the way shrinking works has been fundamentally changed.

It will, however, need to be looked at in the 0.1x branch.

@jockbert
Author

jockbert commented Oct 4, 2017

Correct - I forgot to mention in the initial comment that the problem was found in release 0.15. I have tried to reproduce the problem in release 0.21, but it seems to be solved there. Since I have already upgraded to release 0.21, the bug is from my perspective solved.

Off-topic note: I imagine there are a lot of internal changes in release 0.20, but as a user I really like the (breaking) API changes. Test code got smaller and easier to read.

@hcoles
Contributor

hcoles commented Oct 6, 2017

Glad the api changes work for you - we lost a bit of functionality, but there's now a lot less code to maintain, and features like coverage targeting have become possible. We're not too happy with some of the implementation details, but it should be possible to fix these without further breaking changes.

We'd be very happy to receive feedback on how the api can be improved.

@jockbert
Author

jockbert commented Oct 7, 2017

Quick feedback on API improvements

Firstly, I'm missing the ability (convenience) to specify an arbitrary double range. It would be good to have, since it is likely a common use case. I have previously written things like

Gen<Double> someRangePositive = 
    doubles().fromZeroToOne().map(d -> d * MAX);

Gen<Double> someRangeAroundZero = 
    doubles().fromZeroToOne().map(d -> (d - 0.5) * 2 * MAX);

but it would be easier to just write something like

Gen<Double> someRangePositive = doubles().range(0,MAX);
Gen<Double> someRangeAroundZero = doubles().range(-MAX, MAX);
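That range(lo, hi) convenience is really just a generalisation of the mapping trick above. A minimal self-contained sketch, using a plain java.util.Random stand-in rather than the QuickTheories Gen type:

```java
import java.util.Random;
import java.util.function.DoubleUnaryOperator;

// Sketch of the suggested doubles().range(lo, hi) convenience, expressed
// as the same mapping trick over a [0, 1) source. Uses java.util.Random
// as a stand-in for a QuickTheories generator of doubles in [0, 1).
public class DoubleRangeSketch {
  // Map a uniform sample in [0, 1) onto [lo, hi).
  static DoubleUnaryOperator range(double lo, double hi) {
    return d -> lo + d * (hi - lo);
  }

  public static void main(String[] args) {
    Random rng = new Random(42);
    double max = 1000.0;
    double positive = range(0, max).applyAsDouble(rng.nextDouble());
    double aroundZero = range(-max, max).applyAsDouble(rng.nextDouble());
    System.out.println(positive + " " + aroundZero);
  }
}
```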

Secondly, I'm missing the ability to mix generators with weights other than 50-50. Let's say you want to create a generator for an expression tree built up of binary operators (e.g. + * - /) and values (the expression tree leaves). If you mix a generator of operators with a generator of values and use the result to recursively build a generator of expression trees, you will quickly end up with non-terminating trees at 50-50 weights. The solution is to give a small overweight to the leaves, say a 55% or 60% chance for an expression to be a leaf.

I have used these kinds of trees with ScalaCheck, so I know that they can be built and are useful. I tried to produce a working QuickTheories example here, but only produced a StackOverflowError, because there is recursion involved in the generator definitions.
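The idea can be sketched without the QuickTheories Gen type (which is where my recursion overflowed the stack). A leaf bias above 50% plus an explicit depth bound keeps the trees finite; this is a hypothetical sketch, not working QuickTheories code:

```java
import java.util.Random;

// Hypothetical sketch of a leaf-weighted expression-tree generator.
// A leaf bias > 50% (e.g. 60%) makes trees terminate in expectation,
// and the explicit depth bound guarantees termination in the worst case.
public class ExprGenSketch {
  static final String[] OPS = { "+", "-", "*", "/" };

  static String genExpr(Random rng, int depth, int leafPercent) {
    // Bias toward leaves, and force a leaf once max depth is reached.
    if (depth <= 0 || rng.nextInt(100) < leafPercent) {
      return Integer.toString(rng.nextInt(10)); // leaf: a small value
    }
    String op = OPS[rng.nextInt(OPS.length)];
    return "(" + genExpr(rng, depth - 1, leafPercent) + " " + op + " "
         + genExpr(rng, depth - 1, leafPercent) + ")";
  }

  public static void main(String[] args) {
    Random rng = new Random(7);
    for (int i = 0; i < 3; i++) {
      System.out.println(genExpr(rng, 5, 60));
    }
  }
}
```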


Thirdly, it is not obvious from the documentation that the following generator has a finite length

Gen<String> operators = arbitrary().pick("+", "-", "*", "/");

@hcoles
Contributor

hcoles commented Oct 9, 2017

Thanks for the feedback - both double ranges and mixing make sense. Although it seems like it should be trivial, there is an issue with creating bounded double ranges. I forget now what it actually is, but hopefully the answer lies somewhere within the closed issues and/or commit history.

On the third point, all generators will produce infinite values, including pick. However, as of 0.20 quicktheories will only evaluate each input once per run, even if a generator happens to produce the same output multiple times.

So it will appear that your example only generates 4 values. However, if you combine it with other generators, e.g.

Gen<String> combined = operators.zip(integers().all(), (op, i) -> op + i);

or

qt()
.forAll(operators, integers().all())
.check((op,i) -> whatever);

it is unlikely that the generator pair will produce duplicate outputs, so many pairs will be processed.

The duplicate detection is potentially very valuable when assessing a property is expensive. It is based on the raw precursor rather than the final type, so it does not rely on functioning hashCode and equals methods, nor does it make any assumptions about the structure of the generated type.
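The mechanism can be pictured like this (a hypothetical sketch: long seeds stand in for the raw precursor bytes, and the classes are invented for illustration, not QuickTheories' actual internals):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.LongFunction;

// Sketch of duplicate detection keyed on the raw random precursor rather
// than the generated value, so it needs no hashCode/equals on the
// generated type. Hypothetical names; not QuickTheories' actual classes.
public class PrecursorDedupSketch {
  static <T> int runOnce(long[] precursors, LongFunction<T> gen) {
    Set<Long> seen = new HashSet<>();
    int evaluated = 0;
    for (long p : precursors) {
      if (!seen.add(p)) {
        continue; // same precursor => same input; skip re-evaluation
      }
      T value = gen.apply(p); // the expensive property would run here
      evaluated++;
    }
    return evaluated;
  }

  public static void main(String[] args) {
    long[] precursors = { 1L, 2L, 2L, 3L, 1L };
    // The generated type needs no equals/hashCode; dedup is on precursors.
    int n = runOnce(precursors, p -> "op" + (p % 4));
    System.out.println(n); // prints 3: each distinct precursor evaluated once
  }
}
```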

@hcoles hcoles changed the title Shrink does not stop Shrink does not stop in 0.1 releases Nov 1, 2017