
Validator #1164

Open
davide-f opened this issue Oct 31, 2024 · 4 comments
Labels
help wanted · high priority

Comments

@davide-f
Member

Describe the feature you'd like to see

It would be interesting to adopt a validator similar to the one in pypsa-ariadne.
See for example:
PyPSA/pypsa-ariadne#269 (comment)

@lkstrp lkstrp mentioned this issue Dec 11, 2024
@davide-f davide-f added the help wanted and high priority labels Jan 14, 2025
@ekatef
Member

ekatef commented Jan 14, 2025

Given the experience of model debugging and testing for #1172 and #1293, it would be great to add testing of a number of major parameters and outputs. In particular, there is a need to check that the invariants (e.g. the overall installed capacity, demand, available renewable potential) are conserved.
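
A minimal sketch of what such an invariant check could look like, assuming solved PyPSA networks on disk; the file paths, the choice of invariants and the tolerances below are placeholders, not agreed values:

```python
# Minimal sketch, not project code: check that a few invariants are conserved
# between a reference run and a candidate run. Paths and tolerances are placeholders.
import pandas as pd
import pypsa

ref = pypsa.Network("results/reference/elec_s_10.nc")   # hypothetical path
new = pypsa.Network("results/candidate/elec_s_10.nc")   # hypothetical path

def installed_capacity_by_carrier(n):
    # total installed generator capacity (p_nom) per carrier
    return n.generators.groupby("carrier").p_nom.sum()

def total_demand(n):
    # total load energy over the horizon (assumes time-varying loads in loads_t.p_set)
    return (n.loads_t.p_set.sum(axis=1) * n.snapshot_weightings.objective).sum()

pd.testing.assert_series_equal(
    installed_capacity_by_carrier(ref),
    installed_capacity_by_carrier(new),
    rtol=1e-6,
)
assert abs(total_demand(ref) - total_demand(new)) <= 1e-6 * total_demand(ref)
```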

As an additional comment, there currently seem to be some issues with the overall available potential, as demonstrated in #1270, which must be investigated further.

@lkstrp
Contributor

lkstrp commented Jan 15, 2025

We are currently working on a new, more stable version of the validator in the PyPSA/ repository. Ideally we will find a better way to talk to any HPC, which would make it more scalable across all PyPSA-x repos and forks. But even if not, we could still set it up for Earth; the question is rather how tedious that will be.

I also want to bring testing of a number of parameters into it: basically, you define some hard ranges for parameters, and if they are not met, the tests fail, in addition to the already existing manual comparison. All based on the networks in the results directory.
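
Purely as an illustration (not proposing concrete numbers), such hard-range checks could be a small pytest module run against the solved networks in results/; the glob pattern and the bounds below are placeholders:

```python
# Illustrative only: fail the test suite if key outputs leave predefined ranges.
# The glob pattern and the numeric bounds are placeholders, not agreed values.
import glob

import pypsa
import pytest

RANGES = {
    "objective": (1e9, 1e11),       # total system cost
    "p_nom_opt_total": (1e4, 1e7),  # total optimised generator capacity [MW]
}

@pytest.mark.parametrize("path", glob.glob("results/**/networks/*.nc", recursive=True))
def test_hard_ranges(path):
    n = pypsa.Network(path)
    values = {
        "objective": n.objective,
        "p_nom_opt_total": n.generators.p_nom_opt.sum(),
    }
    for key, (lo, hi) in RANGES.items():
        assert lo <= values[key] <= hi, f"{key}={values[key]:.3e} outside [{lo:.1e}, {hi:.1e}]"
```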

In any case, it would be great to combine development efforts here to reduce the already high duplication of maintenance and dev effort between Earth and Eur.

@ekatef
Member

ekatef commented Jan 15, 2025

> We are currently working on a new, more stable version of the validator in the PyPSA/ repository. Ideally we will find a better way to talk to any HPC, which would make it more scalable across all PyPSA-x repos and forks. But even if not, we could still set it up for Earth; the question is rather how tedious that will be.
>
> I also want to bring testing of a number of parameters into it: basically, you define some hard ranges for parameters, and if they are not met, the tests fail, in addition to the already existing manual comparison. All based on the networks in the results directory.
>
> In any case, it would be great to combine development efforts here to reduce the already high duplication of maintenance and dev effort between Earth and Eur.

Hey @lkstrp, thanks a lot for the input! Great to hear that you are also giving some thought to this 😄

HPC is always nice; however, I don't think it's a very relevant point in this particular context. That said, it would indeed be great to test more modelling configurations, including some relevant edge cases.

Regarding the parameters to be tracked: humans can normally keep 1 to 3 points in focus and track up to 7-9, so it doesn't make sense to increase the number of parameters in the visible output, as it would just make the results difficult to read. Everything else should go to the logs. The question is how to define the major parameters to be tracked.

If you have any ideas to share, you are always welcome to join the dev weeklies 😄

@davide-f
Member Author

> We are currently working on a new, more stable version of the validator in the PyPSA/ repository. Ideally we will find a better way to talk to any HPC, which would make it more scalable across all PyPSA-x repos and forks. But even if not, we could still set it up for Earth; the question is rather how tedious that will be.
>
> I also want to bring testing of a number of parameters into it: basically, you define some hard ranges for parameters, and if they are not met, the tests fail, in addition to the already existing manual comparison. All based on the networks in the results directory.
>
> In any case, it would be great to combine development efforts here to reduce the already high duplication of maintenance and dev effort between Earth and Eur.

Great @lkstrp :D fully aligned! In full transparency, yesterday during some tedious testing for a PR, I scanned the open PRs and marked a few as high priority to draw attention to them.
I fully support collaborating and aligning on the various points :)

I've found it useful to validate:

  • energy dispatch
  • objective function
  • installed capacity
  • optimal capacity
  • availability of renewable sources [e.g. the sum of inflows across all storage_units, the sum of p_nom_max * p_max_pu by carrier]
    [this could potentially be added to the statistics block; see the sketch below]

I also agree that testing the networks in results should be a good approach; tracking some changes along the way could also be useful, but the checks above alone could already save a lot of time.
Ideally, relying [heavily] on the statistics module could simplify the architecture.
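
A rough sketch of how the statistics accessor and the quantities listed above could be pulled from a solved network (assuming a recent PyPSA version where the `n.statistics` methods below exist; the path is hypothetical):

```python
# Sketch only: collect the quantities listed above from a solved network,
# relying on the statistics accessor where possible (recent PyPSA assumed).
import numpy as np
import pypsa

n = pypsa.Network("results/networks/elec_s_10.nc")  # hypothetical path

installed = n.statistics.installed_capacity()  # installed capacity per component/carrier
optimal = n.statistics.optimal_capacity()      # optimised capacity
dispatch = n.statistics.supply()               # energy supplied per carrier
objective = n.objective                        # objective function value

# Available renewable potential: sum over snapshots of p_nom_max * p_max_pu, by carrier
finite = n.generators.index[np.isfinite(n.generators.p_nom_max)]
p_max_pu = n.get_switchable_as_dense("Generator", "p_max_pu")[finite]
available = (
    p_max_pu.multiply(n.generators.p_nom_max[finite])
    .multiply(n.snapshot_weightings.generators, axis=0)
    .sum()
    .groupby(n.generators.carrier[finite])
    .sum()
)

# Inflow available to storage units (e.g. hydro), by carrier
inflow = n.storage_units_t.inflow.sum().groupby(n.storage_units.carrier).sum()
```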

I haven't looked into the validator much, but couldn't we rely on the results of the CI?

Probably we could find a slot to align on the project developments and find common ground.
There is interest in cross-supporting each other and keeping the models aligned; I'd definitely be happy to exchange.
