
[JOSS Review] Add automated tests in CI #37

Closed
matthewfeickert opened this issue Aug 11, 2022 · 5 comments · May be fixed by #41

@matthewfeickert (Contributor)

For the JOSS documentation check of openjournals/joss-reviews#4480

Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?

There are no automated tests (there are GitHub Actions workflows for building and deploying the docs, but none for running the tests across all supported versions of Python). It seems that CI with some basic coverage of the API would greatly improve things.

For v0.8.3 there are no instructions on how to run the tests in the README (though instructions were added for a future release in PR #34). It is also unclear what test coverage the notebook tests provide, which would be important to know.

@itepifanio (Collaborator)

@matthewfeickert

Thanks for the comment. I think this requires an nbdev-specific explanation as well. :)

Tests in CI

This is something we have also discussed with our other reviewer. We initially included an explanation in the docs but later removed it for clarity.

In short, nbdev doesn't separate the tests from the notebooks. When CI builds the notebook code, it also runs the tests. From the nbdev docs:

"Everything that is not an exported cell is considered a test, so you should make sure your notebooks can all run smoothly (and fast) if you want to use this functionality as the CLI."

The tests are written in the notebooks right next to the docs and code. We're using ipytest to add tests in the notebooks and the assert keyword for simple checks.
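
As an illustration, a test cell in one of the notebooks can look roughly like the sketch below. The clamp function is a stand-in written for this example (not ipyannotator code), and the ipytest calls assume a recent ipytest release with the autoconfig/run API:

import ipytest
ipytest.autoconfig()  # configure pytest to run inside the notebook

# In the real notebooks the code under test comes from exported cells
# earlier in the same notebook; clamp is just a stand-in for this sketch.
def clamp(value, low, high):
    return max(low, min(value, high))

def test_clamp_keeps_value_inside_bounds():
    assert clamp(5, 0, 10) == 5
    assert clamp(-3, 0, 10) == 0
    assert clamp(42, 0, 10) == 10

ipytest.run()  # collect and run the test functions defined above via pytest

And since nbdev treats every non-exported cell as a test, even a bare assert statement in its own cell acts as a smoke test when the notebooks are executed.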

Coverage

There are a few issues with running coverage under nbdev, but we're going to investigate how to do it and come up with at least a rough number. Beyond the technical difficulty of getting the number, ipyannotator has a fair amount of UI-related code that is difficult to test automatically (eyeballing notebooks with rendered widgets is unreasonably effective), which makes the line coverage score even harder to interpret than it usually is.
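
One way to get a rough number, sketched below on the assumption that nbdev 1.x exposes nbdev.test.test_nb and that the notebooks live in nbs/ (the actual setup in #41 may differ), is to wrap the notebook runner in a single pytest test so that pytest-cov can trace the imported ipyannotator modules:

# test_nbs.py -- a rough sketch, not necessarily what #41 implements.
from pathlib import Path
from nbdev.test import test_nb  # assumed nbdev 1.x API

def test_nbs():
    # Executing every notebook imports the exported ipyannotator modules,
    # so `pytest --cov=ipyannotator test_nbs.py` can report line coverage.
    for nb in sorted(Path("nbs").glob("*.ipynb")):
        test_nb(nb)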

@ibayer (Member) commented Aug 16, 2022

test coverage as reported by #41

============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/italo/workspace/github_ipyannotator
plugins: anyio-3.5.0, cov-3.0.0
collected 1 item

test_nbs.py .                                                            [100%]

---------- coverage: platform linux, python 3.8.10-final-0 -----------
Name                                         Stmts   Miss  Cover
----------------------------------------------------------------
ipyannotator/__init__.py                         1      0   100%
ipyannotator/_nbdev.py                           6      0   100%
ipyannotator/annotator.py                      111      5    95%
ipyannotator/base.py                            83     12    86%
ipyannotator/bbox_annotator.py                 187      3    98%
ipyannotator/bbox_canvas.py                    379     81    79%
ipyannotator/bbox_video_annotator.py           272      8    97%
ipyannotator/capture_annotator.py              183      6    97%
ipyannotator/custom_input/__init__.py            0      0   100%
ipyannotator/custom_input/buttons.py            98      0   100%
ipyannotator/custom_input/coordinates.py        52      8    85%
ipyannotator/custom_widgets/__init__.py          0      0   100%
ipyannotator/custom_widgets/grid_menu.py        86      7    92%
ipyannotator/datasets/__init__.py                0      0   100%
ipyannotator/datasets/download.py               95     10    89%
ipyannotator/datasets/factory.py                48     26    46%
ipyannotator/datasets/factory_legacy.py         65     18    72%
ipyannotator/datasets/generators.py            301      6    98%
ipyannotator/datasets/generators_legacy.py     141     12    91%
ipyannotator/debug_utils.py                     61      5    92%
ipyannotator/docs/__init__.py                    0      0   100%
ipyannotator/docs/api.py                         9      0   100%
ipyannotator/docs/input_output.py               13      0   100%
ipyannotator/docs/utils.py                     142     51    64%
ipyannotator/explore_annotator.py               84      6    93%
ipyannotator/helpers.py                        152      5    97%
ipyannotator/im2im_annotator.py                217     35    84%
ipyannotator/ipytyping/__init__.py               0      0   100%
ipyannotator/ipytyping/annotations.py           72      5    93%
ipyannotator/mltypes.py                         58      1    98%
ipyannotator/navi_widget.py                     71      5    93%
ipyannotator/right_menu_widget.py              147      1    99%
ipyannotator/services/__init__.py                0      0   100%
ipyannotator/services/bbox_trajectory.py        31      1    97%
ipyannotator/storage.py                        273     21    92%
----------------------------------------------------------------
TOTAL                                         3438    338    90%


======================== 1 passed in 265.57s (0:04:25) =========================

@matthewfeickert (Contributor, Author)

There are no automated tests (there are GitHub Action workflows for building and deploying the docs, but not for running the tests across all versions of Python supported). It seems that CI with some basic coverage of the API would greatly improve things.

This statement of mine was already false as of PR #2, given

- name: Run Notebooks tests
  run: poetry run nbdev_test_nbs

PR #40 improves on this by testing across all supported Python versions and will resolve this issue. 👍

test coverage as reported by #41

90% coverage is rather low, but I will consider this acceptable for publication. 👍 I would strongly suggest that you bring your coverage up.

@matthewfeickert (Contributor, Author)

(There was a previous comment from me, since deleted, that made an incorrect statement about additional work being needed. Please ignore that comment in any email notifications you received.)

@ibayer (Member) commented Aug 22, 2022

I'm closing the issue since the confusion about missing CI has been resolved and
the suggestion to improve coverage is tracked elsewhere.

ibayer closed this as completed Aug 22, 2022