
Add sharness tests to document existing mount bugs #362

Merged
2 commits merged into ipfs:master on Nov 19, 2014

Conversation

chriscool (Contributor)

This uses test_expect_failure to document some bugs
related to 'ipfs mount'.

The mount bugs are discussed in issue #341.

When the bugs are fixed, we can simply replace test_expect_failure with test_expect_success, and the test suite will then fail if the bugs ever reappear.
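
For illustration, a minimal sketch of the pattern, assuming the suite's usual sharness setup (the test body and the lib/test-lib.sh path are illustrative, not the exact contents of t0030-mount.sh):

#!/bin/sh

test_description="Document known mount bugs (sketch)"

. lib/test-lib.sh

# While the bug exists, test_expect_failure records it as a known
# breakage without failing the suite. Once the bug is fixed, change
# test_expect_failure to test_expect_success so the suite fails if
# the bug reappears.
test_expect_failure "'ipfs mount' fails when no mount dir" '
    test_must_fail ipfs mount
'

test_done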

@jbenet
Copy link
Member

jbenet commented Nov 19, 2014

Great!!! Though I'm not sure about having tests that pass only to document a bug (i.e. before we invert them so that they fail while the bug is present). The alternative is perhaps to keep this PR open until we fix the bugs, and then merge. What would people prefer?

chriscool (Contributor, Author)

In Git there are a number of test_expect_failure tests for known bugs.
I think it is nice because:

  1. sometimes a change might fix a bug without intending to fix it; it can be a nice surprise,
  2. it tells people that a behavior is really a bug, not a feature; sometimes people just don't know,
  3. it makes it easier for people to check that they fixed the bug,
  4. if we keep the PR open while we develop other tests, the changes in this PR might no longer apply cleanly by the time the bug is eventually fixed, and applying them will require more work,
  5. old issues are often forgotten or not maintained; people don't know whether the bug described in an issue is still there, and might spend time testing it manually.

In Git, 5. is especially true because Git has no bug tracking system (except the mailing list and test_expect_failure).

What I can also do is add the issue number to the test label. That will make it easy to find the issue when looking at the test, and to find the test when looking at the issue.
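
For example (the test body here is a hypothetical sketch; only the label convention is the point, and it matches the test output shown further down):

test_expect_failure "'ipfs mount' fails when no mount dir (issue #341)" '
    test_must_fail ipfs mount
'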

btc (Contributor) commented Nov 19, 2014

They're both useful, complementary sources of information.

It's nice to have the issue documented in tests/code. The system is going to get large enough that no one person will know everything about its entire state.

Documenting the implementation is insufficient since the implementation can change. Documenting the behavior in a test is a great way to keep the memory alive. And when the issue is resolved, we have a test in place to keep it resolved.

I prefer a slightly different pattern for accomplishing this though:

  1. write the test so that it fails
  2. include a warning message that the test is being skipped, with an explanation of why it's being skipped (see the sketch after this list)

This is a really minor detail though. With either method, we are likely to achieve similar results in the end.
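
A minimal sketch of that skip pattern, using sharness prerequisites (the ISSUE_341_FIXED name and the test body are made up for illustration):

# Leave the prerequisite unset while the bug exists; set it (or drop
# the guard entirely) once the bug is fixed.
# test_set_prereq ISSUE_341_FIXED

say "# skipping mount test below: known bug, see issue #341"

test_expect_success ISSUE_341_FIXED "'ipfs mount' fails when no mount dir" '
    test_must_fail ipfs mount
'

With the prerequisite unset, sharness reports the test as skipped (roughly "ok N # skip ...") rather than failed, so the suite still passes while warning about the known bug.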

+1 for issue
+1 for test

chriscool (Contributor, Author)

@maybebtc there is a kind of warning that the test is being skipped:

$ ./t0030-mount.sh 
ok 1 - ipfs init succeeds
ok 2 - prepare config
ok 3 - 'ipfs daemon' succeeds
ok 4 - 'ipfs daemon' output looks good
ok 5 - 'ipfs mount' succeeds
ok 6 - 'ipfs mount' output looks good
ok 7 - 'ipfs daemon' is still running
ok 8 - 'ipfs daemon' can be killed
ok 9 - mount directories can be removed
ok 10 - 'ipfs daemon' succeeds
ok 11 - 'ipfs daemon' output looks good
not ok 12 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage
not ok 13 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage
ok 14 - 'ipfs daemon' is still running
ok 15 - 'ipfs daemon' can be killed
# still have 2 known breakage(s)
# passed all remaining 13 test(s)
1..15

We have: not ok 12 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage
And the whole script doesn't fail, so when we run the tests using make, the other test scripts are still launched.

In the end, we get:

*** aggregate ***
lib/test-aggregate-results.sh
fixed   0
success 51
failed  0
broken  2
total   53

They appear under "broken", not under "failed".

chriscool (Contributor, Author)

When the bugs are fixed, we get:

$ ./t0030-mount.sh 
ok 1 - ipfs init succeeds
ok 2 - prepare config
ok 3 - 'ipfs daemon' succeeds
ok 4 - 'ipfs daemon' output looks good
ok 5 - 'ipfs mount' succeeds
ok 6 - 'ipfs mount' output looks good
ok 7 - 'ipfs daemon' is still running
ok 8 - 'ipfs daemon' can be killed
ok 9 - mount directories can be removed
ok 10 - 'ipfs daemon' succeeds
ok 11 - 'ipfs daemon' output looks good
ok 12 - 'ipfs mount' fails when no mount dir (issue #341) # TODO known breakage vanished
ok 13 - 'ipfs mount' looks good when it fails (issue #341) # TODO known breakage vanished
ok 14 - 'ipfs daemon' is still running
ok 15 - 'ipfs daemon' can be killed
# 2 known breakage(s) vanished; please update test(s)
# passed all remaining 13 test(s)
1..15

And they appear under "fixed":

*** aggregate ***
lib/test-aggregate-results.sh
fixed   2
success 51
failed  0
broken  0
total   53

jbenet (Member) commented Nov 19, 2014

Okay, all this SGTM!

jbenet added a commit that referenced this pull request on Nov 19, 2014: Add sharness tests to document existing mount bugs
jbenet merged commit cefb014 into ipfs:master on Nov 19, 2014
btc (Contributor) commented Nov 20, 2014

> They appear under "broken", not under "failed".

Ah perfecto
