nose testing: you cannot select individual tests (rather than scripts) #160
Comments
@danlipsa yes you're right. I'm not sure it's worth the work though. In theory the test suite is run automatically, and if one of these breaks it will be easy enough to run the loop only on the failing case while fixing it. Besides, these loops are usually closely related and fixing one will likely fix most.
@doutriaux1 As suggested by @aashish24, you can define several tests inside a script using def test... Then you can run only these tests individually. Unfortunately, you still have to pass the script name.
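For illustration, a minimal sketch of such a split script; the file and test names are hypothetical, not taken from the actual vcs suite:

```python
# test_vcs_boxfill.py -- hypothetical file name, for illustration only.
# Splitting a per-projection loop into separate functions lets nose collect
# each case as its own test instead of one monolithic loop.

def test_boxfill_linear():
    # Placeholder body standing in for the real plot + image comparison.
    assert True

def test_boxfill_robinson():
    assert True
```

Nose's standard selection syntax should then let you run a single case, e.g. nosetests test_vcs_boxfill.py:test_boxfill_robinson, although the script name still has to be on the command line, as noted above.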
@danlipsa I can add the search bit, it's easy enough. I'll let you and @aashish24 split the tests ;)
@doutriaux1 @aashish24 Another related drawback of the new testing framework: I had to replace several baselines, so I had to execute the following operations multiple times: run the script, replace a baseline, run the script, replace the next baseline, ... In the past, a test run generated all failures and then I could run a for loop to replace all baselines. After the release, we should make our testing framework run as it did in the past.
@danlipsa all failures should be left in the directory.
@doutriaux1 The problem is that the script stops at the first failure, doesn't it? So you cannot generate all the baselines and then copy them over.
@danlipsa I see your point, yes, right now you have to update the baselines one at a time as you fix the test.
@doutriaux1 @aashish24 We can use a generator to create several tests in a for loop (inside the same script). @doutriaux1 Do we need to worry about anything else? We might have one of our colleagues work on this.
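As a rough sketch of the generator idea (names are illustrative, not from the vcs suite): nose collects each yielded (callable, argument) pair as a separate test case, so one script can report every failure instead of aborting at the first one.

```python
# Hypothetical nose test generator; each yield becomes its own test case,
# so failures are reported per projection rather than stopping the loop.

def check_projection(projection):
    # Placeholder standing in for the real render + baseline comparison.
    assert projection in ("linear", "robinson", "polar")

def test_projections():
    for projection in ("linear", "robinson", "polar"):
        yield check_projection, projection
```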
@danlipsa yes I know about this, I even implemented some of it at first. The only real issue is the HTML generation and parsing all the mangled error messages to make sure multiple image comparisons are generated. I think we should break it down into one page per failed script, which then splits into N pages, with each landing page being the current image_compare. If someone at Kitware wants to take the lead on this that's great, but it's not urgent; I would rather we spend time fixing the seg fault on Travis and CircleCI.
@doutriaux1 Is this because we expect only one image failure per script? The output does not change as far as I can see, but I think it won't stop at the first failure.
@doutriaux1 @aashish24 @sankhesh nosetests seems to test the vcs in the source directory rather than the vcs installed in the conda env. Should we worry about that? (I tested this by adding a print in the sources and running the test without installing the sources into conda.)
@danlipsa it's due to setuptools. On the cdms2 version of run_test I have an option to run
@danlipsa the post-processing step that parses the output of each
@dorukozturk As we discussed, in this branch I created several tests per script for two scripts.
@doutriaux1 If we write the script to submit test results to CDash, do we still need to make the HTML generation work for running individual tests rather than running whole scripts and stopping at the first error?
@danlipsa I guess, if we can tell CDash to dump the output to a local file so we can run this offline.
@doutriaux1 Why would you want to run this offline if it is available online and is also stored for later?
plane, beach, etc...
or if the lab suddenly decides to block the server where the data are uploaded.
@doutriaux1 FYI, last week raw.githubusercontent.com was blocked. It was unblocked this morning, but it caused some grief for @dnadeau4 and me on Friday and over the weekend.
@doutriaux1 I never used the slider feature, to be honest. I always did: for i in `ls *.png | grep -v diff`; do eog $i ../uvcdat-testdata/baselines/vcs/$i; done. You can switch between the baseline and the new image using the arrow keys; Alt-F4 moves to the next set of differences. You replace eog with cp to copy them over the baselines.
thanks @danlipsa, that's useful. I'll add it to my bashrc. The slider is actually very useful too; it's nice to have both.
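For the same bulk-update step, here is a Python sketch under the assumptions of the shell loop above (new images sit next to the test run, diff images contain "diff" in their names, baselines live under ../uvcdat-testdata/baselines/vcs); it is only illustrative and not part of the test harness:

```python
import glob
import os
import shutil

# Assumed layout, taken from the shell one-liner above.
BASELINE_DIR = "../uvcdat-testdata/baselines/vcs"

for image in glob.glob("*.png"):
    if "diff" in image:
        continue  # skip the generated difference images
    # Overwrite the old baseline with the newly generated image.
    shutil.copy(image, os.path.join(BASELINE_DIR, image))
```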
Many tests share the same Python script. Certain scripts contain tens of tests. In our current implementation of the nose testing framework you can only select individual scripts to run, rather than individual tests.
For instance, in the ctest framework we used to be able to do 'ctest -R streamlines'.
Or, I could do ctest -R 'opacity|transparency' to quickly see if the transparency tests pass.
You cannot do that in the new framework as far as I can see.
Another problem is that a test script stops at the first failure; this prevents updating baselines with a script when there are many failures.