Running tests with an interactive terminal (noninteractive: nil) #174
Try binding […]

That won't do anything, will it? (you meant […])

I did mean nil.

I think that's right; I was hoping for a way to tell the buttercup binary to start in non-batch mode (e.g. using an invisible frame).
The buttercup binary (bin/buttercup) is just a bash script.

```shell
emacs -Q --daemon=foo
emacs -Q --batch --eval "(message \"%s\" (progn (require 'server) (server-eval-at \"foo\" '(format-mode-line (quote \"A\")))))"
emacs -Q --batch --eval "(progn (require 'server) (server-eval-at \"foo\" '(kill-emacs)))"
```

I have hacked together a way to actually run some simple buttercup tests on the Emacs daemon, but so far have not even started on sending the messages back to the batch Emacs for printing on the terminal.
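To make the round trip concrete, here is a minimal sketch of what "run specs on the daemon, print the report in batch" might look like. `my/run-specs-on-daemon` is a hypothetical name, and whether the default Buttercup reporter's output can be captured with `with-output-to-string` is an assumption; a custom reporter may be needed in practice.

```elisp
;; Sketch (untested): evaluate a Buttercup suite inside a daemon where
;; `noninteractive' is nil, then pull the printed report back into the
;; batch Emacs that invoked us.
(require 'server)

(defun my/run-specs-on-daemon (daemon spec-file)
  "Load SPEC-FILE on DAEMON, run its specs, and return the report text."
  (server-eval-at
   daemon
   `(progn
      (require 'buttercup)
      (load ,spec-file)
      ;; Assumption: capture what Buttercup prints rather than letting
      ;; it land in the daemon's *Messages* buffer.
      (with-output-to-string
        (buttercup-run)))))

;; In the batch Emacs:
;; (princ (my/run-specs-on-daemon "foo" "/path/to/test-spec.el"))
```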
But for your tests it should be possible to start an Emacs daemon before you run […]

@cpitclaudel, is it OK to close this issue? Or do you think […]

Ideally, buttercup would do the server-eval-at itself, but wishing is easy :) We could close this.
If you want to do this, I run emacs with:

```elisp
(defun tests-run ()
  ;; We would like to use `ert-run-tests-batch-and-exit'.
  ;; Unfortunately it doesn't work outside of batch mode, and we
  ;; can't use batch mode because we have tests that need windows.
  ;; Instead, run the tests interactively, copy the results to a
  ;; text file, and then exit with an appropriate code.
  (setq attempt-stack-overflow-recovery nil
        attempt-orderly-shutdown-on-fatal-signal nil
        debug nil
        debug-on-error t)
  (unwind-protect
      (progn
        (ert-run-tests-interactively t)
        (with-current-buffer "*ert*"
          (append-to-file (point-min) (point-max) "test-results.txt")
          (let ((failed-tests (ert-stats-completed-unexpected ert--results-stats)))
            ;; (log (format "failed tests: %s" (princ failed-tests)))
            ;; (log (format "zerop failed-tests == %s" (princ (zerop failed-tests))))
            (log "checking failed tests")
            (if (zerop failed-tests)
                (progn (log "0 failed tests, exiting with code 0")
                       (when (not debug) (kill-emacs 0)))
              (progn (log "at least 1 failed test, exiting with code 1")
                     (when (not debug) (kill-emacs 1))))
            (log "done checking failed tests"))))))
```

I soon want to try to get it working with buttercup, but in the short term I might just split out the tests that I can't run with buttercup in batch mode, using the function above. It would be nice if buttercup supported this mode by default, for testing evil functions or the one you needed, I think.
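One caveat about the snippet above: it calls `log`, which in stock Emacs is the built-in arithmetic logarithm, so `(log "checking failed tests")` would signal a type error unless the author defined their own helper elsewhere. A minimal stand-in, under a non-clashing name, might look like this (hypothetical; the original helper is not shown in the thread):

```elisp
;; Hypothetical stand-in for the `log' helper used above.  Note that
;; `log' is the built-in logarithm function in Emacs, so shadowing it
;; is risky; a dedicated name is safer.
(defun tests-log (msg)
  "Echo MSG and also record it in the *test-log* buffer."
  (message "%s" msg)
  (with-current-buffer (get-buffer-create "*test-log*")
    (goto-char (point-max))
    (insert msg "\n")))
```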
Do you have your daemon work up anywhere? My idea was just to use […]

I'm getting very annoyed (as in, enough to try helping add this feature to buttercup) with not being able to use buttercup for most of my useful tests. One example of a useful test I can't run with buttercup:

Codygman's hci Haskell Integration: flycheck squiggly appears underneath misspelled putStrLnORAORAORA function

results in:

```
Traceback (most recent call last):
  (perform-replace "putStrLn" "putStrLnORAORAORA" nil nil nil nil nil nil ni...
  (replace-match-maybe-edit "putStrLnORAORAORA" t t nil (69 77 #<buffer Main...
  (replace-match "putStrLnORAORAORA" t t)
  (ask-user-about-lock "/home/cody/hci/testdata/simple-haskell-project/Main....
  (error "Cannot resolve lock conflict in batch mode")
  (signal error ("Cannot resolve lock conflict in batch mode"))
error: (error "Cannot resolve lock conflict in batch mode")
```
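For what it's worth, that particular failure comes from `ask-user-about-lock`, which prompts interactively when a file is locked by another Emacs and therefore signals in batch mode. Two common workarounds, sketched below; treat both as suggestions rather than anything from the thread:

```elisp
;; Option 1: stop this Emacs from creating (and thus contending for)
;; lockfiles at all during the test run.
(setq create-lockfiles nil)

;; Option 2: make lock conflicts non-interactive by always "stealing"
;; the lock instead of prompting (returning t means take over the file).
(advice-add 'ask-user-about-lock :override
            (lambda (_file _opponent) t))
```

Option 1 is usually enough for throwaway test sandboxes; option 2 is more invasive but also covers locks left behind by a crashed run.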
Maybe […] Basically it returns […] Upon reading the discussion above I thought perhaps […]
Sorry, I can't find the daemon code at the moment. Perhaps I deleted it.

Following the commit, I think you are going about the test the wrong way. Is this 'just' about testing a flycheck checker? Because I'm sure […]
Yes. It's an integration test of flycheck with my increasingly complex config.

It's easy to think that, but I've had numerous issues with my own config that caused flycheck to stop working. The most recent was when I turned on a flycheck setting with lsp-haskell 1.9: flycheck stopped highlighting automatically, and the highlighting only showed up after manually disabling and re-enabling flycheck. I'm using buttercup where I can to write integration tests, from the highest level, for the most crucial parts of my emacs config where I've had issues before. At one point I had a very nice config that I depended on heavily, but for some reason things would just break at the worst possible moments, and I began to distrust my system. I'll also be doing the same with my lsp integration for haskell, both to ensure it's working and to be warned about performance regressions, e.g. when upgrading the Haskell lsp server would give worse performance for a use case of mine. Some of those things I might be able to upstream; some might be too specific to me. But hopefully that gives an idea of what I'm attempting, so you can gauge whether it's important for buttercup to support it in any way. From the outside, though, I understand your perspective, and how you could see using nix to make everything totally reproducible, along with very granular git commits and these integration tests, as a way to spot problems very quickly.
Good idea. I'll check and see if they have anything like that. |
Since it may be of interest: it looks like, for some reason (perhaps the same as mine), flycheck uses ert for these types of tests. I did find a much better way of waiting on flycheck in their tests: `flycheck-ert-wait-for-syntax-checker`.
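The waiting pattern behind such a helper can be sketched in a few lines. This is a minimal polling loop in the same spirit, not flycheck's actual implementation; it assumes flycheck is loaded, a check has already been started in the current buffer, and that `flycheck-running-p` is available (it is in recent flycheck releases).

```elisp
;; Minimal sketch: block until the current flycheck check finishes or
;; TIMEOUT seconds pass.  Returns non-nil if the check completed.
(defun my/wait-for-flycheck (&optional timeout)
  (let ((deadline (+ (float-time) (or timeout 10))))
    (while (and (flycheck-running-p)
                (< (float-time) deadline))
      ;; Let flycheck's process sentinels and timers run.
      (accept-process-output nil 0.05))
    (not (flycheck-running-p))))
```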
IMHO, Buttercup == mocha […]

For me, I implemented a mini test framework for explain-pause-mode. I run a subprocess emacs inside tmux frames for full integration tests. Running […] Running […]
@lastquestion Thank you for the info! Any public code you can share for your tmux/emacs setup? I imagine verifying things like "functionnnn has red flycheck squiggly under it" could be annoying. |
@codygman the driver code is https://github.com/lastquestion/explain-pause-mode/blob/master/tests/cases/driver.el, but it's not very documented and still kind of early days. Most of it is related to the package under test; the actual code that runs tmux is mostly here, https://github.com/lastquestion/explain-pause-mode/blob/master/tests/cases/driver.el#L176, and there's code to send key presses and such further on. I reuse an existing mechanism that package exposes to communicate with the subprocess using UNIX sockets (which you could also take inspiration from), as well as using SIGUSR1. Remember, tmux is […]

Or you can ask tmux to take a screenshot and then check whether the text you're checking for squiggles is underlined; I think the default face for flycheck errors etc. is just underline. I didn't do it in that […]

This whole discussion is probably a little off topic for this issue 😁 feel free to email me or open an issue on that project about exposing or packaging the test code out into another package or something.

EDIT: if you need to actually test in windowed mode (for example, you want to test autocomplete, etc.), that's also possible along the same lines: you can start a GUI emacs, then communicate with it by sending keyboard input and taking screenshots. On a mac, for example, this is pretty easy. But now we're wayyyy off topic 😀
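The core of the tmux-driving approach can be reduced to shelling out to the tmux CLI from a controlling Emacs. A rough sketch, not taken from driver.el; the session name, geometry, and helper name are arbitrary choices:

```elisp
;; Sketch: drive a full-terminal Emacs under tmux from a controlling
;; Emacs, using standard tmux subcommands.
(defun my/tmux (&rest args)
  "Run tmux with ARGS and return its stdout as a string."
  (with-temp-buffer
    (apply #'call-process "tmux" nil t nil args)
    (buffer-string)))

;; Start a detached 80x24 session running the Emacs under test:
;; (my/tmux "new-session" "-d" "-s" "emacs-test" "-x" "80" "-y" "24"
;;          "emacs" "-Q" "-nw" "-l" "test-init.el")
;;
;; Send keystrokes (each key is a separate tmux argument):
;; (my/tmux "send-keys" "-t" "emacs-test" "C-x" "C-f" "Main.hs" "Enter")
;;
;; "Screenshot" the pane as plain text for assertions:
;; (my/tmux "capture-pane" "-t" "emacs-test" "-p")
```

Note that `capture-pane -p` only yields the pane's text, not face information; checking for an underline face would need the escape-sequence variant (`capture-pane -e`) or in-process queries over a socket, as described above.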
https://github.com/AutumnalAntlers/emacs-buttercup/tree/my-fork

Based on the advice in this issue, I've been using buttercup to test functions that fundamentally depend on frames: posn-at-point, format-mode-line, etc. My script spawns an emacs server, uses emacsclient to run specs, and handles inter-op control flow through a named pipe.

The biggest issue I encountered was that […]

(Don't have it in front of me to copy and paste right now, but they span the seven frames from: […])
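The named-pipe handshake described above might look roughly like this. This is a guess at the shape of the control flow, not the fork's actual code; the function name and fifo path are hypothetical, and whether `buttercup-run` signals on spec failure is an assumption worth verifying against your buttercup version.

```elisp
;; Sketch: the spec run inside the server writes its exit status to a
;; fifo that the wrapper shell script blocks on.
(defun my/report-status (fifo)
  "Run Buttercup specs and write 0/1 to FIFO for the shell wrapper."
  (let ((status (condition-case nil
                    (progn (buttercup-run) 0)
                  (error 1))))
    (write-region (format "%d\n" status) nil fifo)))

;; Shell side (wrapper script):
;;   mkfifo "$FIFO"
;;   emacsclient --eval "(my/report-status \"$FIFO\")" &
;;   read status < "$FIFO"
;;   exit "$status"
```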
I have tests that can only run when `noninteractive` is `nil` (because they use `format-mode-line`, which just returns `""` in noninteractive mode). Is there a way to run these with buttercup?
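The behavior in question is easy to observe directly. With a display, `format-mode-line` renders the spec against the selected window; in batch mode there is no window to render against, so it returns the empty string, as noted above:

```elisp
;; In an interactive session or a daemon-created frame:
;; (format-mode-line "%b")  ; => the current buffer's name

;; In batch mode, the same call yields "":
;;   emacs -Q --batch --eval '(princ (format-mode-line "%b"))'
;; prints nothing, which is what breaks these tests.
```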