Validator #1164
Given the experience of the model debugging and testing for #1172 and #1293, it would be great to add testing of a number of major parameters and outputs. In particular, there is a need to check that the invariants (e.g. the overall installed capacity, demand, and available renewable potential) are being conserved. As an additional comment, there currently seem to be some issues with the overall available potential, as demonstrated in #1270, which must be investigated further.
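For illustration, a minimal sketch of what such an invariant check could look like, assuming solved PyPSA networks in the results directory; the function name `check_invariants`, the paths, and the tolerance are hypothetical, only the PyPSA accessors (`generators.p_nom`, `generators.p_nom_max`, `loads_t.p_set`) are standard API:

```python
# Sketch of invariant checks between a reference and a test network.
# Paths and the relative tolerance are illustrative placeholders.
import pypsa

def check_invariants(ref_path, test_path, rtol=1e-3):
    ref = pypsa.Network(ref_path)
    test = pypsa.Network(test_path)

    invariants = {
        # overall installed generation capacity [MW]
        "installed capacity": lambda n: n.generators.p_nom.sum(),
        # total demand summed over all snapshots and loads
        "total demand": lambda n: n.loads_t.p_set.sum().sum(),
        # available renewable potential [MW], skipping unbounded (inf) entries
        "renewable potential": lambda n: n.generators.p_nom_max[
            n.generators.p_nom_max < float("inf")
        ].sum(),
    }

    for name, fn in invariants.items():
        ref_val, test_val = fn(ref), fn(test)
        deviation = abs(test_val - ref_val) / max(abs(ref_val), 1e-9)
        assert deviation <= rtol, (
            f"{name} not conserved: {ref_val:.2f} vs {test_val:.2f}"
        )
```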
We are currently working on a new, more stable version of the validator, and I also want to bring testing of a number of parameters into it. Basically, you can define some hard ranges for parameters, and if they are not met, the tests will fail, in addition to the already existing manual comparison. All of this is based on networks in the results directory. In any case, it would be great to combine development efforts here to reduce the already too high duplicate maintenance and development effort across PyPSA-Earth and PyPSA-Eur.
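One possible shape of such hard-range checks, sketched below for illustration; the `HARD_RANGES` dictionary, the `validate_ranges` name, and all bounds are hypothetical placeholders, not the actual validator API (`n.generators.p_nom_opt` and `n.objective` are standard PyPSA attributes of a solved network):

```python
# Sketch of hard-range checks on a solved network from the results directory.
# The ranges below are placeholders, not vetted bounds.
import pypsa

HARD_RANGES = {
    # metric name: (min, max), in the network's native units
    "total installed capacity": (1e3, 1e6),
    "objective value": (0, 1e12),
}

def validate_ranges(path):
    n = pypsa.Network(path)
    metrics = {
        "total installed capacity": n.generators.p_nom_opt.sum(),
        "objective value": n.objective,
    }
    failures = [
        f"{name}={value:.3g} outside [{lo:.3g}, {hi:.3g}]"
        for name, value in metrics.items()
        for lo, hi in [HARD_RANGES[name]]
        if not (lo <= value <= hi)
    ]
    assert not failures, "Range checks failed: " + "; ".join(failures)
```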
Hey @lkstrp, thanks a lot for the input! Great to hear that you are also giving this some thought 😄 HPC is always nice, but I don't think it's a very valid point in this particular context. Still, it would indeed be great to test more modelling configurations, including some relevant edge cases. Regarding the parameters to be tracked: humans are normally able to keep 1 to 3 points in focus and track up to 7-9. So it doesn't make sense to increase the number of parameters in the visible field, as that would just make the outputs difficult to read; everything else should go to the logs. The question is how to define the major parameters to be tracked. If you have any ideas to share, you are always welcome to join the dev weeklies 😄
Great @lkstrp :D fully aligned! In full transparency, yesterday, during tedious testing for a PR, I was scanning the PRs and raised a few high-priority ones to draw attention. I've seen validation being useful for:
I also agree that testing the networks in results should be a good approach; tracking some changes along the way could be useful too, but the above alone could already save a lot of time. I haven't investigated the validator much, but can't we rely on the results of the CI? Probably we could find a slot to align on the project developments and find common ground.
Describe the feature you'd like to see
It would be interesting to adopt a validator similar to the one in pypsa-ariadne.
See for example:
PyPSA/pypsa-ariadne#269 (comment)