Nested tests? (i.e. don't attempt test 1.a if test 1 fails) #119

Open

alphapapa opened this issue Nov 20, 2017 · 15 comments

@alphapapa
Contributor

Hi,

In the code I'm testing, I need to be able to prevent subsequent tests from being attempted if the first test fails. For example, the first test logs in to a server, and if that fails, Buttercup shouldn't try to run other tests that do things on the server.

Maybe I'm missing something, but I can't figure out how to do this with Buttercup. Is there a way?

Thanks.

@jorgenschaefer jorgenschaefer added this to the v1.10 milestone Nov 24, 2017
@jorgenschaefer
Owner

Hello, and thanks for the suggestion. Currently, this is not possible in buttercup (short of wrapping (when (server-available) ...) around all bodies). While I can relate to the idea of not having 100 tests fail when a specific common precondition is not met, I generally dislike the concept of a "subsequent test". Tests should be independent, so I'm currently doubtful that a feature like this will make it into buttercup.
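
Roughly, that wrapping workaround could look like the sketch below, where `server-available-p` and `server-create-resource` stand in for whatever the code under test actually provides:

```elisp
;; Sketch of the wrapping workaround; `server-available-p' and
;; `server-create-resource' are hypothetical functions of the code under test.
(describe "Operations on the server"
  (it "creates a resource"
    (when (server-available-p)
      (expect (server-create-resource "foo") :to-be-truthy))))
```

The drawback is that a spec whose body is skipped this way still reports as passing.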

@jorgenschaefer jorgenschaefer modified the milestones: v1.10, v1.11 Jan 28, 2018
@publicimageltd

Just to add some motivation for nested tests: in my scenario I am testing a database, so I have to simulate a working environment and spy on some functions. In particular, I have to work with real files, since part of the test is e.g. checking for the existence of a file or responding to its creation date. If any of this goes wrong, I might damage real files, since setting up the environment involves creating files and so on. So in my view it makes a lot of sense to have something like a basic condition without which all subsequent (or nested) tests fail. If I had to provide a failsafe environment for every single case, I would have to replicate so much code that the tests would become rather useless, dangerous, and very tedious.

and: thanks for this module!

@jorgenschaefer
Owner

Hm. I still believe that tests should be independent. It might work to have a before-each clause that does (signal 'buttercup-pending "SKIPPED") if a condition is not met. Either that, or a check as the first thing in each test. That setup can do something globally and cache the state, too.
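
A minimal sketch of that `before-each` approach, assuming a hypothetical `server-available-p` predicate; signalling `buttercup-pending` marks each spec as skipped rather than failed:

```elisp
;; Sketch of skipping specs from `before-each'; `server-available-p'
;; and `server-create-resource' are hypothetical.
(describe "Operations on the server"
  (before-each
    (unless (server-available-p)
      (signal 'buttercup-pending "SKIPPED")))
  (it "creates a resource"
    (expect (server-create-resource "foo") :to-be-truthy)))
```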

@alphapapa
Contributor Author

alphapapa commented Jul 7, 2018

Jorgen, what would you recommend as a method for testing a simple CRUD-style database like this?

  1. Create database file, ensuring it succeeds.
  2. Insert data into database file, ensuring no error is raised, and querying the database to ensure it was inserted successfully.
  3. Update data in database, ensuring no error is raised, and querying the database to ensure it was updated successfully.
  4. Delete data from database, ensuring no error is raised, and querying the database to ensure it was deleted successfully.

It seems to me that the most straightforward method is to run each test in order on the database file created in test 1. However, you said that tests should be independent. That would require doing multiple setup steps for each test, most of which would duplicate previous tests, and that would significantly increase the overall complexity of the tests.

So how would such a test be structured with Buttercup?

@jorgenschaefer
Owner

You should first identify the units of functionality you are testing. And test behavior, not implementation.

@alphapapa
Contributor Author

@jorgenschaefer I'm sorry, I don't understand what you mean. Could you please explain in the context of the example? I need something concrete to understand how you're thinking.

@jorgenschaefer
Owner

jorgenschaefer commented Jul 21, 2018

This is unrelated to buttercup, but rather basic testing principles. Do not test "I can write stuff into the database", test the behavior of your application.

To answer your question directly, you have three behaviors you are testing:

  1. Given an empty database, when I write data into the database, I can read that datum from the database
  2. Given a database with a datum in it, when I update the datum, I can read the updated datum from the database
  3. Given a database with a datum in it, when I delete the datum, querying the datum from the database says it's "not found"

Each of these tests consists of only three steps. You have two different setup methods ("given an empty database" and "given a database with a datum in it"), three methods you are testing (write, update, delete), and two checks ("datum exists and is equal to" and "datum does not exist"). You can write functions for those.
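
For instance, a sketch of that structure in buttercup; `make-empty-test-db`, `db-write`, `db-read`, `db-update`, and `db-delete` are hypothetical helpers standing in for the code under test:

```elisp
;; Sketch of independent tests built from shared setup helpers; all of the
;; db-* functions and `make-empty-test-db' are hypothetical.
(describe "The database"
  :var (db)
  (describe "given an empty database"
    (before-each (setq db (make-empty-test-db)))
    (it "reads back a written datum"
      (db-write db "key" "value")
      (expect (db-read db "key") :to-equal "value")))
  (describe "given a database with a datum in it"
    (before-each
      (setq db (make-empty-test-db))
      (db-write db "key" "value"))
    (it "reads back the updated datum after an update"
      (db-update db "key" "new-value")
      (expect (db-read db "key") :to-equal "new-value"))
    (it "reports the datum as not found after a delete"
      (db-delete db "key")
      (expect (db-read db "key") :to-be nil))))
```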

Again, this is completely unrelated to buttercup, this is how you write good tests.

@publicimageltd

Thanks for the clarification. I fully agree with your list of the three behaviors. But I see a list of three behaviors which depend on each other: only if I can write and read (step 1) can I update a datum (step 2) or fail to access it (step 3). To me, that looks like a good use case for a chain of dependencies: if step 1 fails, the rest is not only unnecessary, it might even do some harm. I am also thinking of the possibility that step 1 fails due to a wrong setup, e.g. the database is written to the wrong directory, where it overwrites a database which is actually in use.

BTW: given that you rightly insist that you won't teach good testing here, do you have any good links where such principles are explained?

@jorgenschaefer
Owner

To me, that looks like a good use case for a chain of dependencies

Tests should be independent of each other. There is never a good use case for a chain of dependencies among tests.

BTW: given that you rightly insist that you won't teach good testing here, do you have any good links where such principles are explained?

Kent Beck, Test-Driven Development: By Example
Robert Cecil Martin, Clean Code

Pretty much any decent book on basic software development, really.

@snogge
Collaborator

snogge commented Aug 28, 2018

The assume macro could probably be used to do what @alphapapa wants.
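
A sketch of what that could look like; `server-available-p` and `server-create-resource` are again hypothetical functions of the code under test:

```elisp
;; Sketch of using `assume' to skip a spec whose precondition is not met;
;; `server-available-p' and `server-create-resource' are hypothetical.
(describe "Operations on the server"
  (it "creates a resource"
    (assume (server-available-p) "Server is not reachable")
    (expect (server-create-resource "foo") :to-be-truthy)))
```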

@snogge
Collaborator

snogge commented May 3, 2020

No activity for 20 months, closing issue for now.

@snogge snogge closed this as completed May 3, 2020
@alphapapa
Contributor Author

@snogge

The assume macro could probably be used to do what @alphapapa wants.

  • FYI, that macro appears to be undocumented. I had to search the package's source code to find it.
  • That macro's name seems inaccurate. Its behavior seems to be like cl-assert. In English, the meaning of "assume" is like "don't check this," which is the opposite of what the macro does. Since you can't use assert as the name, a more accurate name might be something like skip-unless.

@snogge
Collaborator

snogge commented May 4, 2020

@alphapapa

The assume macro could probably be used to do what @alphapapa wants.

* FYI, that macro appears to be undocumented.  I had to search the package's source code to find it.

I know.

* That macro's name seems inaccurate.  Its behavior seems to be like `cl-assert`.  In English, the meaning of "assume" is like "don't check this," which is the opposite of what the macro does.  Since you can't use `assert` as the name, a more accurate name might be something like `skip-unless`.

I'm not sure where @jorgenschaefer got the name assume; as far as I can see, it does not exist in Jasmine. But I guess the reasoning is that "this test assumes that the condition is true; there is no point in running it if the assumption does not hold".

Anyway, does the assume macro do what you originally wanted? I assume that you found some other solution in the meantime :)

@snogge snogge reopened this May 4, 2020
@alphapapa
Contributor Author

But I guess the reasoning is that "this test assumes that the condition is true, there is no point in running it if the assumption does not hold".

Right, and my point is that it doesn't actually assume that; it verifies it before running the test. If it actually assumed, it would run the test unconditionally.

Of course, naming things is hard, but the current name implies the opposite of what it does. Since it's not yet documented, maybe the name could still be improved. Something like skip-unless, or perhaps demand or insist, would be accurate in plain language.

Anyway, does the assume macro do what you originally wanted? I assume that you found some other solution in the meantime :)

I don't know. I just noticed the comment mentioning that macro yesterday. I'd have to look up the tests I was working on at the time and try to rewrite them using that macro.

I guess I'd have to set a variable for each test upon passing and test it in subsequent ones. So I guess the macro would partially address it, but having to define and set those variables myself means that each subsequent test would have to explicitly reference the result of previous tests, which seems to partially defeat the purpose of having the test framework handle this issue for the developer. I'd prefer having "nested" tests, or dependent tests, or whatever one might call them.
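
A sketch of that variable-based approach, assuming specs within a suite run in definition order and share a `:var` binding; `my-login` and `my-list-files` are hypothetical functions of the code under test:

```elisp
;; Sketch of tracking a prior test's outcome in a variable and using
;; `assume' to skip dependent specs; `my-login' and `my-list-files'
;; are hypothetical functions of the code under test.
(describe "The server client"
  :var (login-succeeded)
  (it "logs in"
    (setq login-succeeded (my-login "user" "password"))
    (expect login-succeeded :to-be-truthy))
  (it "lists remote files"
    (assume login-succeeded "Login failed, skipping dependent test")
    (expect (my-list-files) :to-be-truthy)))
```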

Anyway, if Buttercup doesn't want to support this idea, that's fine.

@jorgenschaefer
Owner

The assume macro was added in 673f84d. Not sure how much thought went into the naming. I don't think I've ever used that form.

@snogge snogge removed this from the v1.11 milestone Sep 16, 2020