Formatters: add steps.undefined, steps.unused and steps.unusedundefined #1089

Open
Wants to merge 1 commit into base: main
Conversation

@djasa djasa commented Mar 7, 2023

For testing of the behave repository itself, it would be useful to have formatters that report just the faulty steps.

The StepsUsageFormatter implementing the 'steps.usage' formatter already does this. This commit just adds derived classes implementing:

  • steps.undefined: a formatter listing just the undefined steps
  • steps.unused: a formatter listing just the unused steps
  • steps.unusedundefined: a formatter listing both

Note:
Due to the way behave parses steps, a typo in a step or a param with a wrong type makes the faulty step appear in both listings. This is the motivation for the third formatter, which combines both listings and may be most useful for projects without many unused steps. :)

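As a rough illustration of the derived-class pattern described above, the following sketch uses toy stand-in classes (not behave's actual StepsUsageFormatter internals, whose method names may differ): the base reporter produces both listings, and each derived class switches one listing off via a class attribute.

```python
class StepsUsageReporter:
    """Toy stand-in for behave's StepsUsageFormatter: reports both listings."""
    show_unused = True
    show_undefined = True

    def __init__(self, unused, undefined):
        self.unused = unused
        self.undefined = undefined

    def report(self):
        lines = []
        if self.show_unused and self.unused:
            lines.append("UNUSED STEP DEFINITIONS[%d]:" % len(self.unused))
            lines.extend("  " + step for step in self.unused)
        if self.show_undefined and self.undefined:
            lines.append("UNDEFINED STEPS[%d]:" % len(self.undefined))
            lines.extend("  " + step for step in self.undefined)
        return "\n".join(lines)


class StepsUnusedReporter(StepsUsageReporter):
    """Derived class: only the unused-steps listing."""
    show_undefined = False


class StepsUndefinedReporter(StepsUsageReporter):
    """Derived class: only the undefined-steps listing."""
    show_unused = False
```

Each derived class would then be registered under its own formatter name, so the listings can be selected with `-f` on the command line.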
@djasa djasa commented Mar 8, 2023

Example use with a few scenarios and pytest calling behave --dry, to point out mistyped or incorrectly used steps as soon as possible, or to warn about unused steps:

$ pytest --capture fd
============================= test session starts ==============================
platform linux -- Python 3.11.1, pytest-7.1.3, pluggy-1.0.0
rootdir: /var/home/djasa/tmp/behave-step-sanity
collected 4 items                                                              

test_step_sanity.py .FF.                                                 [100%]

=================================== FAILURES ===================================
______________________________ test_undefined_all ______________________________

    def test_undefined_all():
        res = subprocess.run([*_behave, "steps.undefined"])
    
>       assert res.returncode == 0
E       AssertionError: assert 1 == 0
E        +  where 1 = CompletedProcess(args=['behave', '--dry', '--no-snippets', '--no-summary', '-f', 'steps.undefined'], returncode=1).returncode

test_step_sanity.py:18: AssertionError
----------------------------- Captured stdout call -----------------------------

UNDEFINED STEPS[3]:
  Given step does not exist               # features/scenarios/broken.feature:4
  When we're good!                        # features/scenarios/broken.feature:5
  Then we have "zero" errors              # features/scenarios/broken.feature:6

___________________________ test_unusedundefined_all ___________________________

    def test_unusedundefined_all():
        res = subprocess.run([*_behave, "steps.unusedundefined"])
    
>       assert res.returncode == 0
E       AssertionError: assert 1 == 0
E        +  where 1 = CompletedProcess(args=['behave', '--dry', '--no-snippets', '--no-summary', '-f', 'steps.unusedundefined'], returncode=1).returncode

test_step_sanity.py:24: AssertionError
----------------------------- Captured stdout call -----------------------------
UNUSED STEP DEFINITIONS[2]:
  @given('unused step')                   # features/steps/steps.py:3
  @then('we have "{num:d}" errors')       # features/steps/steps.py:16

UNDEFINED STEPS[3]:
  Given step does not exist               # features/scenarios/broken.feature:4
  When we're good!                        # features/scenarios/broken.feature:5
  Then we have "zero" errors              # features/scenarios/broken.feature:6

=============================== warnings summary ===============================
test_step_sanity.py::test_warn_on_unused
  /var/home/djasa/tmp/behave-step-sanity/test_step_sanity.py:32: UserWarning: Unused steps encountered:
  
  UNUSED STEP DEFINITIONS[2]:
    @given('unused step')                   # features/steps/steps.py:3
    @then('we have "{num:d}" errors')       # features/steps/steps.py:16
  
  
    warnings.warn(UserWarning(f"Unused steps encountered:\n\n{cap.out}"))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED test_step_sanity.py::test_undefined_all - AssertionError: assert 1 == 0
FAILED test_step_sanity.py::test_unusedundefined_all - AssertionError: assert...
==================== 2 failed, 2 passed, 1 warning in 0.36s ====================

This output is produced with:

test_step_sanity.py
import subprocess
import warnings


_behave = ("behave", "--dry", "--no-snippets", "--no-summary", "-f")

def test_undefined_sane():
    res = subprocess.run([*_behave, "steps.undefined", "features/scenarios/sane.feature"])

    assert res.returncode == 0


def test_undefined_all():
    res = subprocess.run([*_behave, "steps.undefined"])

    assert res.returncode == 0


def test_unusedundefined_all():
    res = subprocess.run([*_behave, "steps.unusedundefined"])

    assert res.returncode == 0


def test_warn_on_unused(capfd):
    res = subprocess.run([*_behave, "steps.unused"])

    cap = capfd.readouterr()
    if res.returncode != 0:
        warnings.warn(UserWarning(f"Unused steps encountered:\n\n{cap.out}"))
features/steps/steps.py
from behave import given, when, then

@given("unused step")
def step_unused(context):
    very()
    complicated()
    code()
    here()

@given("that step impls exist")
@when("feature is ok")
@then("we're good!")
def step_ok(context):
    return True

@then("""we have "{num:d}" errors""")
def verify_num(context, num):
    assert isinstance(num, int)
features/scenarios/sane.feature
Feature: Sane

  Scenario: Perfect!
    Given that step impls exist
    When feature is ok
    Then we're good!
features/scenarios/broken.feature
Feature: Broken

  Scenario: Broken
    Given step does not exist
    When we're good!
    Then we have "zero" errors

@djasa djasa commented Mar 8, 2023

An alternative approach using -D/userdata and early returns from printing functions is in #1090.
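Sketched very loosely, that userdata-driven alternative might look like the following (toy stand-in code with illustrative names, not behave's actual API; the real change lives in #1090): a printing function returns early when its listing is switched off via a -D flag.

```python
class Userdata(dict):
    """Minimal stand-in for behave's userdata mapping (-D name=value)."""

    def getbool(self, name, default=False):
        value = self.get(name, default)
        if isinstance(value, str):
            # behave-style string-to-bool coercion, simplified
            return value.lower() in ("true", "yes", "on", "1")
        return bool(value)


def report_unused(userdata, unused_steps):
    # Early return: e.g. -D steps.unused=no suppresses this listing entirely.
    if not userdata.getbool("steps.unused", default=True):
        return ""
    header = "UNUSED STEP DEFINITIONS[%d]:" % len(unused_steps)
    return "\n".join([header] + ["  " + step for step in unused_steps])
```

Compared with dedicated formatter classes, this keeps a single formatter name but moves the selection into command-line configuration.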

@jenisys jenisys force-pushed the main branch 4 times, most recently from fe1ca4d to fcfe5af Compare April 22, 2023 17:20
@jenisys jenisys force-pushed the main branch 2 times, most recently from 0a4d73b to 2c11d2e Compare May 14, 2024 22:39
@jenisys jenisys force-pushed the main branch 2 times, most recently from 3e51dda to c6ab01c Compare May 26, 2024 15:00