Nested tests? (i.e. don't attempt test 1.a if test 1 fails) #119
Hi,

In the code I'm testing, I need to be able to prevent subsequent tests from being attempted if the first test fails. For example, the first test logs in to a server, and if that fails, Buttercup shouldn't try to run other tests that do things on the server.

Maybe I'm missing something, but I can't figure out how to do this with Buttercup. Is there a way?

Thanks.
Hello, and thanks for the suggestion. Currently, this is not possible in buttercup (short of wrapping …)
Just to add some motivation for nested tests: I have the scenario that, in testing a database, I have to simulate a working environment and spy on some functions. In particular, I have to work with real files, since part of the test is, e.g., checking for the existence of a file or responding to its creation date. If any of this works incorrectly, I might break some real files, since setting up the environment includes creating files, etc. So in my view, it makes absolute sense to have something like a basic condition without which all subsequent (or nested) tests fail. If I had to provide a fail-safe environment for every case, I would have to replicate so much code that the tests would become rather useless, dangerous, and very tedious.

And: thanks for this module!
Hm. I still believe that tests should be independent. It might work to have a …
Jorgen, what would you recommend as a method for testing a simple CRUD-style database like this?
It seems to me that the most straightforward method is to run each test in order on the database file created in test 1. However, you said that tests should be independent. That would require doing multiple setup steps for each test, most of which would duplicate previous tests, and that would significantly increase the overall complexity of the tests. So how would such a test be structured with Buttercup?
You should first identify the units of functionality you are testing. And test behavior, not implementation.
@jorgenschaefer I'm sorry, I don't understand what you mean. Could you please explain in the context of the example? I need something concrete to understand how you're thinking.
This is unrelated to buttercup, but rather basic testing principles. Do not test "I can write stuff into the database"; test the behavior of your application. To answer your question directly, you have three behaviors you are testing:

- Given an empty database, when I write a datum, then the datum exists and is equal to what I wrote.
- Given a database with a datum in it, when I update the datum, then the datum exists and is equal to the new value.
- Given a database with a datum in it, when I delete the datum, then the datum does not exist.
Each of these tests consists of only three steps. You have two different setup methods ("given an empty database" and "given a database with a datum in it"), three methods you are testing (write, update, delete), and two checks ("datum exists and is equal to" and "datum does not exist"). You can write functions for those. Again, this is completely unrelated to buttercup; this is how you write good tests.
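For illustration, a minimal Buttercup sketch of that structure might look like the following; the helpers `my-db-create`, `my-db-read`, `my-db-write`, `my-db-update`, and `my-db-delete` are hypothetical stand-ins for the database under test, not anything from this thread:

```elisp
(require 'buttercup)

;; `my-db-create', `my-db-read', `my-db-write', `my-db-update' and
;; `my-db-delete' are hypothetical helpers, not part of buttercup.
(describe "The database"
  :var (db)

  (describe "given an empty database"
    (before-each
      (setq db (my-db-create)))        ; fresh setup for every spec

    (it "stores a written datum"
      (my-db-write db 'key "value")
      (expect (my-db-read db 'key) :to-equal "value")))

  (describe "given a database with a datum in it"
    (before-each
      (setq db (my-db-create))
      (my-db-write db 'key "value"))   ; setup includes the datum

    (it "updates the datum"
      (my-db-update db 'key "new value")
      (expect (my-db-read db 'key) :to-equal "new value"))

    (it "deletes the datum"
      (my-db-delete db 'key)
      (expect (my-db-read db 'key) :to-be nil))))
```

Each spec builds its own state in `before-each`, so no spec depends on another spec having run first.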
Thanks for the clarification. I fully agree with your list of the three behaviors. But I see a list of three behaviors which depend on each other: only if I can write and read (step 1) can I update a datum (step 2) or observe that accessing a deleted datum fails (step 3). To me, that looks like a good use case for a chain of dependencies: if step 1 fails, the rest is not only unnecessary, it might even do some harm. I am also thinking here about the possibility that step 1 fails due to a wrong setup, so that, e.g., the database is written to a wrong directory where it overwrites a database which is actually in use.

BTW: given that you rightly insist that you won't teach good testing here, do you have any good link where such principles are explained?
Tests should be independent of each other. There is never a good use case for a chain of dependencies among tests.
Kent Beck, *Test Driven Development by Example*. Pretty much any decent book on basic software development, really.
The `assume` macro …

No activity for 20 months, closing issue for now.
I know.
I'm not sure where @jorgenschaefer got the name `assume`. Anyway, does the `assume` macro do what you originally wanted? I assume that you found some other solution in the meantime :)
Right, and my point is that it doesn't actually assume that: it verifies it before running the test. If it actually assumed, it would run the test unconditionally. Of course, naming things is hard, but the current name implies the opposite of what it does. Since it's not yet documented, maybe it could be improved. So, something like …
I don't know. I just noticed the comment mentioning that macro yesterday. I'd have to look up the tests I was working on at the time and try to rewrite them using that macro. I guess I'd have to set a variable for each test upon passing and test it in subsequent ones. So the macro would partially address it, but having to define and set those variables myself means that each subsequent test would have to explicitly reference the result of previous tests, which seems to partially defeat the purpose of having the test framework handle this issue for the developer. I'd prefer having "nested" tests, or dependent tests, or whatever one might call them. Anyway, if Buttercup doesn't want to support this idea, that's fine.
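For reference, a minimal sketch of that variable-passing workaround, assuming `assume` cancels a spec when its condition is nil; `my-server-login` and `my-server-list-files` are hypothetical functions under test:

```elisp
(require 'buttercup)

;; Hypothetical flag set by the login spec on success.
(defvar my-server-logged-in nil)

(describe "The server"
  (it "lets me log in"
    ;; `my-server-login' is a hypothetical function under test.
    (expect (my-server-login) :to-be-truthy)
    (setq my-server-logged-in t))

  (it "lists my files"
    ;; Cancel this spec, rather than fail it, when the login
    ;; spec did not succeed.
    (assume my-server-logged-in "Not logged in")
    (expect (my-server-list-files) :to-be-truthy)))
```

As the comment above notes, each dependent spec still has to reference the flag explicitly, which is the part one might rather have the framework handle.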
The `assume` macro was added in 673f84d. Not sure how much thought went into the naming. I don't think I've ever used that form.