no detection of run time errors of tutorials? #235
Comments
The proper way to do this, in the way you seem to intend above, would be something like:

```python
tut1_network = tut1()
time.sleep(20)
tut1_network.shutdown()
```

What errors are you experiencing? You are right that the tests are not ideal. Of course the time the networks run is close to zero. The tests rather ensure that the networks are properly set up, but nothing more. We were working on comparing against expected output in #229 but have not succeeded with that yet, due to the various ways the agents print to the console in the two backends. As far as I can see, we could improve the mechanism with some more effort, though.
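Building on the pattern above, a test could also capture what the tutorial prints and compare it against a reference, roughly in the spirit of #229. Below is a minimal sketch, assuming the tutorial entry point returns a network object with a `shutdown()` method; the tutorial and network names are illustrative stand-ins, not the project's actual API. Note that `redirect_stdout` only captures prints from the main process, so output produced in separate agent processes would need a different capture mechanism.

```python
import io
import time
from contextlib import redirect_stdout

def run_tutorial_and_capture(tutorial_fn, runtime_s=0.1):
    """Run a tutorial network, let it run briefly, and return what it printed.

    Assumes `tutorial_fn` returns an object with a `shutdown()` method,
    as in the tut1() pattern above.
    """
    buf = io.StringIO()
    with redirect_stdout(buf):
        network = tutorial_fn()
        time.sleep(runtime_s)
        network.shutdown()
    return buf.getvalue()

# Hypothetical stand-in for a tutorial, for demonstration only.
class _FakeNetwork:
    def shutdown(self):
        print("network shut down")

def _fake_tutorial():
    print("agents started")
    return _FakeNetwork()

output = run_tutorial_and_capture(_fake_tutorial)
assert "agents started" in output
assert "network shut down" in output
```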
With your code it works, and it does what I intended, thank you!
Indeed, I see one significant problem. If we simply include a one-minute sleep in those tutorial tests, we increase the time needed for one execution of the test suite from something like 4 minutes to something like 10 minutes. I will have a look into parallel test execution.
I would rather set an individual minimal execution time for those tests that should have one. Regarding the error checking in general, I would rather try to test against specific expected results, if needed after some time, to catch unexpected behaviour, instead of just letting the tutorials run longer. Xiang already provided one possible pattern to do that in his test_memory_monitor_agent.py. I am still not sure what the best way is to achieve that for the tutorials, but I think revising the printing mechanisms and using …
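One way to "test against specific expected results, if needed after some time" without a fixed long sleep is a poll-with-timeout helper: the test asserts as soon as the expected condition holds and only waits the full timeout in the failure case. A sketch; the helper name and the toy buffer are illustrative, not taken from the project:

```python
import time

def wait_for(predicate, timeout_s=5.0, poll_s=0.05):
    """Poll `predicate` until it returns True or `timeout_s` elapses.

    Lets a test assert on an expected result as soon as it appears,
    instead of sleeping for a fixed, worst-case duration.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return predicate()  # one final check after the deadline

# Example: wait until a (hypothetical) agent buffer has enough samples.
buffer = []
def producer_step():
    buffer.append(len(buffer))

for _ in range(10):
    producer_step()
assert wait_for(lambda: len(buffer) >= 10, timeout_s=1.0)
```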
Quick experiments with parallel execution of the tests (via pytest-parallel and pytest-xdist) suggest that this would require some thorough preparation, due to the several servers started during the test runs.
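One common preparation step for pytest-xdist is making sure the servers started by different workers do not fight over the same ports. pytest-xdist provides a `worker_id` fixture ("master" when not distributed, "gw0", "gw1", … otherwise); a sketch of mapping it to non-overlapping port ranges follows. The base port and range size are assumptions for this setup, not project values:

```python
BASE_PORT = 15000        # assumed free port range; adjust for your machines
PORTS_PER_WORKER = 100   # assumed to be enough servers per test session

def port_offset_for_worker(worker_id):
    """Map a pytest-xdist worker id ('master', 'gw0', 'gw1', ...) to a
    non-overlapping base port, so each worker's servers bind uniquely.

    In a conftest.py this would be wrapped in a fixture that receives
    the `worker_id` fixture from pytest-xdist.
    """
    if worker_id == "master":
        index = 0
    else:
        # 'gw3' -> 3; offset by one to leave the first range for 'master'
        index = int(worker_id.lstrip("gw")) + 1
    return BASE_PORT + index * PORTS_PER_WORKER

assert port_offset_for_worker("master") == 15000
assert port_offset_for_worker("gw0") == 15100
```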
The 60 seconds mainly come from the agent data generation rate that is set somewhere, and the minimum buffer size. If we increase the data generation rate, the tests can run much faster.
"I would rather set an individual minimum execution time for those tests that should have one."
"Regarding the error checking in general I would rather try to test against specific expected results, if needed after some time, to catch unexpected behaviour, instead of just letting the tutorials run longer."
Of course, it all depends on the time and budget we have to improve the tests.
If the agents are supposed to calculate something, they can print the output, we can grab it, and compare it with the reference outcome. But how do we automatically test that the plots look right? Even if the network hasn't crashed, the plots can still look bad or erroneous.
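Short of full image comparison (matplotlib ships `matplotlib.testing.compare.compare_images` for baseline-image tests), a cheaper automated check is to assert on the data that feeds the plot: matching lengths, no NaN/inf values, values within an expected range. A stdlib-only sketch; the helper name and the thresholds are illustrative:

```python
import math

def plot_data_looks_sane(xs, ys, expected_len=None, y_range=None):
    """Cheap automated checks on data that is about to be plotted:
    matching lengths, finite values, and values inside an expected range.
    A stand-in for 'does the plot look right', which is hard to automate.
    """
    if len(xs) != len(ys):
        return False
    if expected_len is not None and len(xs) != expected_len:
        return False
    if any(not math.isfinite(v) for v in list(xs) + list(ys)):
        return False
    if y_range is not None:
        lo, hi = y_range
        if any(not (lo <= v <= hi) for v in ys):
            return False
    return True

xs = [i / 10 for i in range(11)]
ys = [math.sin(x) for x in xs]
assert plot_data_looks_sane(xs, ys, expected_len=11, y_range=(-1.0, 1.0))

ys_bad = ys[:5] + [float("nan")] + ys[6:]
assert not plot_data_looks_sane(xs, ys_bad)
```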
If we agree on that, I would suggest including such a …
That's right, although the tutorials do not take any parameters as they are right now. We would first have to introduce that additional parameter into the tutorials, with an appropriate default, and use it to speed up the agents during tests. As you already indicated, we may have to introduce another …
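The speed-up idea could look like this: give each tutorial an extra rate parameter whose default preserves the published ~60-second behaviour, while tests pass a much higher rate. A purely illustrative stand-in, since none of these names are the project's actual API:

```python
def make_tutorial(data_rate_hz=1.0, min_buffer_size=60):
    """Hypothetical tutorial entry point with a `data_rate_hz` parameter.

    The slow default keeps the readable, real-time behaviour for readers
    (~60 s to fill a 60-sample buffer at 1 Hz); a test can pass a high
    rate so the same buffer fills in a fraction of a second.
    Returns the seconds needed to fill the minimum buffer.
    """
    seconds_to_fill = min_buffer_size / data_rate_hz
    return seconds_to_fill

# Default: the ~60-second behaviour described in the thread.
assert make_tutorial() == 60.0
# In tests: raise the generation rate so the same buffer fills quickly.
assert make_tutorial(data_rate_hz=120.0) == 0.5
```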
BTW: `test_timeout = 10` in conftest.py doesn't work on my computer. You should either increase it to 20 or, preferably, buy me a faster computer :-).
Oh, that is probably the reason for #240, right?! So you do actually observe failing tests due to the 10 seconds, and with 20 seconds those same tests pass?
Exactly!
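One low-effort way to handle slower machines would be to let an environment variable override the conftest.py default, instead of hard-coding 10 or 20. A sketch; the `TEST_TIMEOUT` variable name is an assumption, not an existing project setting:

```python
import os

DEFAULT_TEST_TIMEOUT_S = 10  # the current conftest.py value

def get_test_timeout():
    """Per-test timeout in seconds: taken from an (assumed) TEST_TIMEOUT
    environment variable when set, otherwise the conftest.py default,
    so slower machines can raise it without editing the code."""
    raw = os.environ.get("TEST_TIMEOUT")
    try:
        return int(raw) if raw is not None else DEFAULT_TEST_TIMEOUT_S
    except ValueError:
        return DEFAULT_TEST_TIMEOUT_S

# On a slower machine one could run: TEST_TIMEOUT=20 pytest ...
os.environ["TEST_TIMEOUT"] = "20"
assert get_test_timeout() == 20
```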
I think the tests of the tutorials shut down too quickly to trigger and surface run-time errors. This is due to the tests being of the form `tut1().shutdown()`.
The potential alternative

```python
tut1()
time.sleep(20)
tut1.shutdown()
```

doesn't work, as it never shuts down. Maybe the links to the agent processes are lost.
Is there a way to run the framework for 20 seconds and shut it down only then?