Improve the INPUT workflow #974
I've been trying to imagine how an analyst would start from scratch. Not sure if this would all become a nightmare with versioning...

```julia
# Example Workflow (with imaginary parts)

import DBInterface
import DuckDB
import TulipaEnergyModel as TEM
import TulipaIO as TIO

## Create Templates

# - I think it's necessary to first start something in DuckDB?
connection = DBInterface.connect(DuckDB.DB)

# - This function would create tables in DuckDB according to the choices
#   in the analysis_features 'file'
TIO.create_template_tables("analysis_features.csv", connection; table = :all)

# - But these tables will be empty except for the selection of column names,
#   so maybe they also need some more input like assets or something

# - This function would be the same as create_template_tables, but working on
#   an existing table to add/remove columns, theoretically allowing the analyst
#   to adjust their analysis features on the fly
TIO.change_feature_columns("analysis_features_v2.csv", connection)

## Import raw data

## Manipulate data into User Format

# - Function to map a column of raw data onto the corresponding UserFormat column
#   (Maybe this could enable more automatic mapping scripts...)
TIO.map_column(col_raw, col_UF; key_raw, key_UF)

# - Function that fills in defaults - either with auto defaults or specified by the user
TIO.fill_defaults(; defaults_file)

# - Other stuff

## Scenarios?
# - Some way of setting a Base scenario and creating Alternatives?
# - For now it seems this is entirely on the user to run the workflow for each scenario...

## Preprocessing?
# - TulipaClustering
# - Time partitions? Created somehow in templates?

## Convert to Model Format
TIO.userformat_to_modelformat(connection)

## Run Model
TEM.run_scenario(connection; optimizer, params...)
```
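As a sanity check on the first step, here is a minimal working version using only the real DuckDB.jl API (everything after the connection, such as `TIO.create_template_tables`, is still imaginary; the file name `analysis_features.csv` is just the placeholder from the sketch above):

```julia
import DBInterface
import DuckDB

# An in-memory database; pass a file path to DuckDB.DB to persist it instead.
connection = DBInterface.connect(DuckDB.DB)

# DuckDB can scan CSV files directly, so a "template" table could be
# created straight from an analysis-features file:
DBInterface.execute(
    connection,
    "CREATE TABLE analysis_features AS
     SELECT * FROM read_csv_auto('analysis_features.csv')",
)
```

This suggests the template-creation helper may be a thin wrapper over `read_csv_auto` plus some column/schema validation.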
This idea is taking shape for me, but I need feedback in case it's the wrong direction.
If we built a template repo, it could have a few basic files and a Pluto workflow that walks them through...
And then maybe the Pluto notebook includes the template workflow above with instructions?
The input workflow (raw to model) needs improvements and consolidation. This will take many iterations, but an initial effort should improve the user experience in the short term as our community grows.
This epic focuses mainly on user-facing features and workflow structure, not what's under the hood (so not every TulipaIO function needs to be listed here).
Workflow Overview
"Files"
Might not necessarily be a file (could be in DuckDB), but something you can imagine as a file.
https://github.com/TulipaEnergy/Tulipa-OBZ-CaseStudy/blob/main/functions.jl
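One way a "file" could live in DuckDB rather than on disk is as a registered DataFrame. A sketch using the real `DuckDB.register_data_frame` API (the `assets` table and its columns are invented for illustration):

```julia
import DBInterface
import DuckDB
using DataFrames

connection = DBInterface.connect(DuckDB.DB)

# Register an in-memory DataFrame as a DuckDB view; it then behaves
# like any other table in SQL queries, with no file on disk.
assets = DataFrame(name = ["wind", "solar"], capacity = [150.0, 200.0])
DuckDB.register_data_frame(connection, assets, "assets")

result = DataFrame(
    DBInterface.execute(connection, "SELECT name FROM assets WHERE capacity > 160"),
)
```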
Data Manipulation
Help the user process data from raw into User Format.
connection
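A `map_column`-style operation could reduce to plain SQL over the connection. A hedged sketch, assuming the hypothetical raw table `raw_assets` with columns `plant_name` and `max_power_mw` (all names here are invented, not part of any existing schema):

```julia
import DBInterface

# Map raw columns onto UserFormat column names by renaming them in a
# CREATE TABLE ... AS SELECT statement.
DBInterface.execute(
    connection,
    """
    CREATE TABLE user_assets AS
    SELECT raw.plant_name   AS asset,
           raw.max_power_mw AS capacity
    FROM raw_assets AS raw
    """,
)
```

If this is the underlying mechanism, the helper's main value would be generating (and versioning) such mapping statements automatically.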
for the docs #941

UserFormat to ModelFormat
This should be (semi) automatic.
Other Important Stuff
Where and how should this happen? Once decided, move to (or create) that section.
resolution
input data in the `rep_periods_data`
from somewhere in the pipeline #686

Related