Releases: NHERI-SimCenter/pelicun
v3.5
v3.4
Added
Documentation pages: Documentation for pelicun 3 is back online. The documentation includes guides for users and developers as well as an auto-generated API reference. A lineup of examples is planned to be part of the documentation, highlighting specific features, including the new ones listed in this section.
Consequence scaling: This feature can be used to apply scaling factors to consequence and loss functions for specific decision variables, component types, locations and directions. This can make it easier to examine several different consequence scaling schemes without the need to repeat all calculations or write extensive custom code.
Capacity scaling: This feature can be used to modify the median of normal or lognormal fragility functions of specific components. Medians can be scaled by a factor or shifted by adding or subtracting a value. This can make it easier to use fragility functions that are a function of specific asset features.
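The two modification modes described above amount to simple arithmetic on the fragility median. A minimal sketch with hypothetical numbers (not pelicun's API):

```python
# Capacity scaling applied to a fragility median (hypothetical values):
# the median can either be scaled by a factor or shifted by a constant.
median = 1.5             # original median capacity
scaled = median * 1.2    # multiplicative scaling
shifted = median + 0.25  # additive shift
```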
Loss functions: Loss functions are used to estimate losses directly from the demands. The damage and loss models were substantially restructured to facilitate the use of loss functions.
Loss combinations: Loss combinations allow for the combination of two types of losses using a multi-dimensional lookup table. For example, independently calculated losses from wind and flood can be combined to produce a single loss estimate considering both demands.
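To illustrate the lookup-table idea, the sketch below combines hypothetical wind and flood loss ratios with bilinear interpolation in a tiny 2D table; the grid and cell values are made up for illustration and are not taken from pelicun:

```python
# Bilinear interpolation in a 2D loss-combination table (illustrative only).
# Axes hold marginal loss ratios; cells hold the combined loss ratio.
wind_axis = [0.0, 1.0]
flood_axis = [0.0, 1.0]
combined_table = [
    [0.0, 1.0],  # wind loss = 0.0
    [1.0, 1.5],  # wind loss = 1.0 (combined loss kept below the plain sum)
]

def combine_losses(wind, flood):
    # normalized position inside the (single) grid cell
    tw = (wind - wind_axis[0]) / (wind_axis[1] - wind_axis[0])
    tf = (flood - flood_axis[0]) / (flood_axis[1] - flood_axis[0])
    # interpolate along the flood axis at both wind grid lines...
    low = combined_table[0][0] * (1 - tf) + combined_table[0][1] * tf
    high = combined_table[1][0] * (1 - tf) + combined_table[1][1] * tf
    # ...then along the wind axis
    return low * (1 - tw) + high * tw
```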
Utility demand: Utility demands are compound demands calculated using a mathematical expression involving other demands. Practical examples include applying a mathematical expression to a demand before using it to estimate damage, or combining multiple demands with a multivariate expression to generate a combined demand. Such utility demands can be used to implement multidimensional fragility models that utilize a single, one-dimensional distribution defined through a combination of multiple input variables.
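For instance, a combined demand could be the resultant of two orthogonal acceleration components. A conceptual sketch with hypothetical values (this is not pelicun's expression syntax):

```python
import math

# Utility demand as a mathematical expression over other demands:
# resultant peak ground acceleration from two orthogonal components (in g).
pga_x = 0.30
pga_y = 0.40
combined_pga = math.sqrt(pga_x**2 + pga_y**2)
```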
Normal distribution with standard deviation: Added two new variants of "normal" in `uq.py`: `normal_COV` and `normal_STD`. Since the variance of the default normal random variables is currently defined via the coefficient of variation, the new `normal_STD` is required to define a normal random variable with zero mean. `normal_COV` is treated the same way as the default `normal`.
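The reason `normal_STD` is needed at zero mean can be seen from how a COV-parameterized normal derives its spread; a minimal illustration (not pelicun's internal code):

```python
# With a COV parameterization, sigma = COV * |mean|, so a zero-mean
# variable degenerates to zero spread. Parameterizing by the standard
# deviation directly avoids this.
def sigma_from_cov(mean, cov):
    return cov * abs(mean)

def sigma_from_std(std):
    return std
```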
Weibull random variable: Added a Weibull random variable class in `uq.py`.
New `DL_calculation.py` input file options: We expanded configuration options in the `DL_calculation.py` input file specification. Specifically, we added `CustomDLDataFolder` for specifying additional user-defined components.
Warnings in red: Added support for colored outputs. In execution environments that support colored outputs, warnings are now shown in red.
Code-base-related additions, which do not directly implement new features but nonetheless enhance robustness, include the following:
- pelicun-specific warnings with the option to disable them
- a JSON schema for the input file used to configure simulations through
DL_calculation.py
- addition of type hints in the entire code base
- addition of slots in all classes, preventing the on-the-fly definition of new attributes, which is prone to bugs
Changed
- Updated random variable class names in `uq.py`.
- Extensive code refactoring for improved organization and to support the new features. We made a good-faith effort to maintain backwards compatibility, and issue helpful warnings to assist migration to the new syntax.
- Moved most of the code in `DL_calculation.py` to `assessment.py` and created an assessment class.
- Migrated to Ruff for linting and code formatting. Began using mypy for type checking and codespell for spell checking.
Deprecated
- The `.bldg_repair` attribute was renamed to `.loss`. `.repair` had also been used in the past; please use `.loss` instead.
- In the damage and loss model library, `fragility_DB` was renamed to `damage_DB` and `bldg_repair_DB` was renamed to `loss_repair_DB`.
- `load_damage_model` was renamed to `load_model_parameters` and the syntax has changed. Please see the applicable warning message when using `load_damage_model` for the updated syntax.
- `{damage model}.sample` was deprecated in favor of `{damage model}.ds_model.sample`.
- The `DMG-` flag in the loss_map index is no longer required.
- The `BldgRepair` column is deprecated in favor of `Repair`.
- `load_model` -> `load_model_parameters`
- `{loss model}.save_sample` -> `{loss model}.ds_model.save_sample`. The same applies to `load_sample`.
Removed
- No features were removed in this version.
- We suspended the use of flake8 and pylint after adopting Ruff.
Fixed
- Fixed a bug affecting the random variable classes, where the anchor random variable was not being correctly set.
- Enforced a value of 1.0 for non-directional multipliers for HAZUS analyses.
- Fixed bug in demand cloning: Previously demand unit data were being left unmodified during demand cloning operations, leading to missing values.
- Reviewed and improved docstrings in the entire code base.
v3.3
Changes affecting backwards compatibility

- Remove "bldg" from repair consequence output filenames: The increasing scope of Pelicun now covers simulations for transportation and water networks. Hence, labeling repair consequence outputs as if they were limited to buildings no longer seems appropriate. The `bldg` label was dropped from the following files: `DV_bldg_repair_sample`, `DV_bldg_repair_stats`, `DV_bldg_repair_grp`, `DV_bldg_repair_grp_stats`, `DV_bldg_repair_agg`, `DV_bldg_repair_agg_stats`.
Deprecation warnings

- Remove `Bldg` from repair settings label in DL configuration file: Following the changes above, we dropped `Bldg` from `BldgRepair` when defining settings for repair consequence simulation in a configuration file. The previous version (i.e., `BldgRepair`) will keep working until the next major release, but we encourage everyone to adopt the new approach and simply use the `Repair` keyword there.
New features

- Location-specific damage processes: This new feature is useful when you want damage to a component type to induce damage in another component type at the same location only. For example, damaged water pipes on a specific story can trigger damage in floor covering only on that specific story. Location matching is performed automatically, without you having to define component pairs for every location, using the following syntax: `'1_CMP.A-LOC', {'DS1': 'CMP.B_DS1'}`, where DS1 of `CMP.A` at each location triggers DS1 of `CMP.B` at the same location.
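Written out as a damage process entry, the syntax above looks like the following sketch (keys and names mirror the example in the text):

```python
# Location-specific damage process: the -LOC suffix requests automatic
# location matching, so DS1 of CMP.A triggers DS1 of CMP.B at the same
# location, for every location.
damage_process = {
    '1_CMP.A-LOC': {'DS1': 'CMP.B_DS1'},
}
```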
- New `custom_model_dir` argument for `DL_calculation`: This argument allows users to prepare custom damage and loss model files in a folder and pass the path to that folder to an auto-population script through `DL_calculation`. Within the auto-population script, they can reference only the name of the files in that folder. This provides portability for simulations that use custom models and auto-population, such as some of the advanced regional simulations in SimCenter's R2D Tool.
- Extend Hazus EQ auto-population scripts to include water networks: Automatically recognize water network assets and map them to archetypes from the Hazus Earthquake technical manual.
- Introduce `convert_units` function: Provides streamlined unit conversion using the pre-defined library of units in Pelicun. It allows you to convert a variable from one unit to another using a single line of simple code, such as `converted_height = pelicun.base.convert_units(raw_height, unit='m', to_unit='ft')`. While not as powerful as some of the Python packages dedicated to unit conversion (e.g., Pint), we believe the convenience this function provides for commonly used units justifies its use in several cases.
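Internally, such a conversion can be served from a predefined factor library. The sketch below shows the idea with a two-unit table; it is an illustration, not pelicun's implementation:

```python
# Factor-based unit conversion through a common (SI) base unit.
# The factor table covers only the units used in this example.
TO_SI = {'m': 1.0, 'ft': 0.3048}

def convert_units(value, unit, to_unit):
    """Convert value from `unit` to `to_unit` via the SI base."""
    return value * TO_SI[unit] / TO_SI[to_unit]
```

For example, `convert_units(1.0, 'm', 'ft')` returns roughly 3.28.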
Architectural and code updates

- Split `model.py` into subcomponents: The `model.py` file was too large and its contents were easy to refactor into separate modules. Each model type has its own Python file now, and they are stored under the `model` folder.
- Split the `RandomVariable` class into specific classes: It seems more straightforward to grow the list of supported random variables by having a specific class for each kind of RV. We split the existing large `RandomVariable` class in `uq.py`, leveraging inheritance to minimize redundant code.
- Automatic code formatting: Further improve consistency in coding style by using black to review and format the code when needed.
- Remove `bldg` from variable and class names: Following the changes mentioned earlier, we dropped `bldg` from labels where the functionality is no longer limited to buildings.
- Introduce `calibrated` attribute for demand model: This new attribute allows users to check whether a model has already been calibrated to the provided empirical data.
- Several other minor improvements; see commit messages for details.
Dependencies

- Ceiling raised for `pandas`, supporting version 2.0 and above, up until 3.0.
v3.2
Changes that might affect backwards compatibility:

- Unit information is included in every output file. If you parse Pelicun outputs and did not anticipate a Unit entry, your parser might need an update.
- Decision variable types in the repair consequence outputs are named using CamelCase rather than all capitals to be consistent with other parts of the codebase. For example, we use "Cost" instead of "COST". This might affect post-processing scripts.
- For clarity, "ea" units were replaced with "unitless" where appropriate. There should be no practical difference in the calculations due to this change. Interstory drift ratio demand types are one example.
- Weighted component block assignment is no longer supported. We recommend using the more versatile multiple component definitions (see the new feature below) to achieve the same effect.
- Damage functions (i.e., assigning a quantity of damage as a function of demand) are no longer supported. We recommend using the new multilinear CDF feature to develop theoretically equivalent but more efficient models.
- New multilinear CDF Random Variable allows using the multilinear approximation of any CDF in the tool.
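The idea behind a multilinear CDF is straightforward: the CDF is given as a set of vertices and evaluated by linear interpolation between them. A self-contained sketch (not pelicun's implementation):

```python
# Evaluate a CDF defined by vertices (xs, fs) with linear interpolation.
def multilinear_cdf(xs, fs, x):
    if x <= xs[0]:
        return fs[0]
    if x >= xs[-1]:
        return fs[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            # interpolate linearly within the segment [xs[i-1], xs[i]]
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return fs[i - 1] + t * (fs[i] - fs[i - 1])

# Example: CDF rising from 0 at x=0 to 0.6 at x=1, and to 1.0 at x=2.
xs, fs = [0.0, 1.0, 2.0], [0.0, 0.6, 1.0]
```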
- Capacity adjustment allows adjusting (scaling or shifting) default capacities (i.e., fragility curves) with factors specific to each Performance Group.
- Support for multiple definitions of the same component at the same location-direction. This feature facilitates adding components with different block sizes to the same floor or defining multiple tenants on the same floor, each with their own set of components.
- Support for cloning demands, that is, taking a provided demand dataset, creating a copy, and considering it as another demand. For example, you can provide results of seismic response in the X direction and automatically prepare a copy of them to represent results in the Y direction.
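In terms of the demand sample, cloning amounts to duplicating a column under a new label. A pandas sketch with hypothetical values and a type-location-direction labeling convention:

```python
import pandas as pd

# Demand sample with X-direction peak floor acceleration results
# (hypothetical values; labels follow a type-location-direction pattern).
demands = pd.DataFrame({'PFA-1-1': [0.25, 0.31, 0.28]})

# Clone the X-direction results to represent the Y direction.
demands['PFA-1-2'] = demands['PFA-1-1']
```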
- Added a comprehensive suite of more than 140 unit tests that cover more than 93% of the codebase. Tests are automatically executed after every commit using GitHub Actions, and coverage is monitored through Codecov.io. Badges at the top of the Readme show the status of tests and coverage. We hope this continuous integration facilitates editing and extending the existing codebase for interested members of the community.
- Completed a review of the entire codebase using `flake8` and `pylint` to ensure PEP8 compliance. The corresponding changes yielded code that is easier to read and use. See guidance in the Readme on linting and how to ensure newly added code is compliant.
- Models for estimating Environmental Impact (i.e., embodied carbon and energy) of earthquake damage as per FEMA P-58 are included in the DL Model Library and available in this release.
- "ListAllDamageStates" option allows you to print a comprehensive list of all possible damage states for all components in the columns of the DMG output file. This can make parsing the output easier, but it increases file size. By default, this option is turned off and only damage states that affect at least one block are printed.
- Damage and Loss Model Library:
  - A collection of parameters and metadata for damage and loss models for performance-based engineering. The library is available and updated regularly in the DB_DamageAndLoss GitHub repository.
  - This and future releases of Pelicun have the latest version of the library at the time of their release bundled with them.
- DL_calculation tool:
  - Support for combining built-in and user-defined databases for damage and loss models.
  - Results are now also provided in the standard SimCenter `JSON` format besides the existing `CSV` tables. You can specify the preferred format in the configuration file under Output/Format. The default file format is still CSV.
  - Support for running calculations for only a subset of the available consequence types.
- Several error and warning messages were added to provide more meaningful information in the log file when something goes wrong in a simulation.
- Updated dependencies to more recent versions.
- The online documentation is significantly out of date. While we are working on an update, we recommend using the documentation of the DL panel in SimCenter's PBE Tool as a resource.
v3.1
- Calculation settings are now assessment-specific. This allows you to use more than one assessment in an interactive calculation, and each will have its own set of options, including log files.
- The uq module was decoupled from the others to enable standalone uq calculations that work without an active assessment.
- A completely redesigned DL_calculation.py script provides decoupled demand, damage, and loss assessment and more flexibility when setting up each of those steps when pelicun is used with a configuration file in a larger workflow.
- Two new examples that use the DL_calculation.py script and a JSON configuration file were added to the example folder.
- A new example that demonstrates a detailed interactive calculation in a Jupyter notebook was added to the following DesignSafe project: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3411v5 This project will be extended with additional examples in the future.
- Unit conversion factors were moved to an external file (settings/default_units) to make it easier to add new units to the list. This also allows redefining the internal units through a complete replacement of the factors. The internal units continue to follow the SI system.
- Substantial improvements in coding style using flake8 and pylint to monitor and help enforce PEP8.
- Several performance improvements made calculations more efficient, especially for large problems such as regional assessments or tall buildings investigated using the FEMA P-58 methodology.
- Several bug fixes and a large number of minor changes that make the engine more robust and easier to use.
- Updated the recommended Python version to 3.10 and other dependencies to more recent versions.
v3.0
- The architecture was redesigned to better support interactive calculation and provide low-level integration across all supported methods. This is the first release with the new architecture. Frequent updates are planned to provide additional examples, tests, and bugfixes in the next few months.
- New `assessment` module introduced to replace the `control` module:
  - Provides high-level access to models and their methods
  - Integrates all types of assessments into a uniform approach
  - Most of the methods from the earlier `control` module were moved to the `model` module
- Decoupled demand, damage, and loss calculations:
  - Fragility functions and consequence functions are stored in separate files. Added new methods to the `db` module to prepare the corresponding data files and re-generated such data for FEMA P-58 and Hazus earthquake assessments. Hazus hurricane data will be added in a future release.
  - Decoupling removed a large amount of redundant data from supporting databases and made the use of HDF and JSON files for such data unnecessary. All data are stored in easy-to-read CSV files.
  - Assessment workflows can include all three steps (i.e., demand, damage, and loss) or only one or two steps. For example, damage estimates from one analysis can drive loss calculations in another one.
- Integrated damage and loss calculation across all methods and components:
  - This includes phenomena such as collapse, including various collapse modes, and irreparable damage.
  - Cascading damage and other interdependencies between various components can be introduced using a damage process file.
  - Losses can be driven by damage or demands. The former supports the conventional damage->consequence function approach, while the latter supports the use of vulnerability functions. These can be combined within the same analysis, if needed.
  - The same loss component can be driven by multiple types of damage. For example, replacement can be triggered by either collapse or irreparable damage.
- Introduced Options in the configuration file and in the `base` module:
  - These options handle settings that concern pelicun behavior, general preferences that might affect multiple assessment models, and settings that users would not want to change frequently.
  - Default settings are provided in a `default_config.json` file. These can be overridden by providing any of the prescribed keys with a user-defined value assigned to them in the configuration file for an analysis.
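The override mechanism can be pictured as a recursive overlay of user-provided keys on the defaults. A sketch with hypothetical option names (the actual keys live in `default_config.json`):

```python
# Recursively overlay user-defined options on the default settings.
def merge_options(defaults, user):
    merged = dict(defaults)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # descend into nested option groups
            merged[key] = merge_options(merged[key], value)
        else:
            # user value replaces the default
            merged[key] = value
    return merged

# Hypothetical option names, for illustration only.
defaults = {'Verbose': False, 'Sampling': {'SampleSize': 1000, 'Seed': None}}
user = {'Sampling': {'SampleSize': 5000}}
options = merge_options(defaults, user)
```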
- Introduced consistent handling of units. Each CSV table has a standard column to describe the units of the data in it. If the standard column is missing, the table is assumed to use SI units.
- Introduced consistent handling of pandas MultiIndex objects in headers and indexes. When tabular data is stored in CSV files, MultiIndex objects are converted to simple indexes by concatenating the strings at each level and separating them with a `-`. This facilitates post-processing CSV files in pandas without impeding post-processing those files in non-Python environments.
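The flattening convention can be sketched in a couple of lines of pandas; the labels here are hypothetical:

```python
import pandas as pd

# MultiIndex header levels are joined with '-' to obtain simple CSV headers.
columns = pd.MultiIndex.from_tuples([('PFA', '1', '1'), ('PID', '2', '1')])
flat = ['-'.join(levels) for levels in columns]
```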
- Updated the DL_calculation script to support the new architecture. Currently, only the config file input is used. Other arguments were kept in the script for backwards compatibility; future updates will remove some of those arguments and introduce new ones.
- The log files were redesigned to provide more legible and easy-to-read information about the assessment.
v2.6
- Support EDPs with more than 3 characters and/or a variable in their name. For example, SA_1.0 or SA_T1
- Support fitting normal distribution to raw EDP data (lognormal was already available)
- Extract key settings to base.py to make them more accessible for users.
- Minor bug fixes mostly related to hurricane storm surge assessment
pelicun v2.5
- Extend the uq module to support:
- More efficient sampling, especially when most of the random variables in the model are either independent or perfectly correlated.
- More accurate and more efficient fitting of multivariate probability distributions to raw EDP data.
- Arbitrary marginals (beyond the basic Normal and Lognormal) for joint distributions.
- Latin Hypercube Sampling
- Introduce external auto-population scripts and provide an example for hurricane assessments.
- Add a script to help users convert HDF files to CSV (HDF_to_CSV.py under tools)
- Use unique and standardized attribute names in the input files
- Migrate to the latest version of Python, numpy, scipy, and pandas (see setup.py for required minimum versions of those tools).
- Bug fixes and minor improvements to support user needs:
- Add 1.2 scale factor for EDPs controlling non-directional Fragility Groups.
- Remove dependency on scipy's truncnorm function to avoid long computation times due to a bug in recent scipy versions.
pelicun v2.1.1
- Aggregate DL data from JSON files to HDF5 files. This greatly reduces the number of files and makes it easier to share databases.
- Significant performance improvements in EDP fitting, damage and loss calculations, and output file saving.
- Add log file to pelicun that records every important calculation detail and warnings.
- Add 8 new EDP types: RID, PMD, SA, SV, SD, PGD, DWD, RDR.
- Drop support for Python 2.x and add support for Python 3.8.
- Extend auto-population logic with solutions for HAZUS EQ assessments.
- Several bug fixes and minor improvements to support user needs.
pelicun v2.0.0
- migrated to the latest version of Python, numpy, scipy, and pandas; see setup.py for required minimum versions of those tools
- Python 2.x is no longer supported
- improve DL input structure to
  = make it easier to define complex performance models
  = make input files easier to read
  = support custom, non-PACT units for component quantities
  = support different component quantities on every floor
- updated FEMA P58 DL data to use ea for equipment instead of units such as KV, CF, AP, TN
- add FEMA P58 2nd edition DL data
- support EDP inputs in standard csv format
- add a function that produces SimCenter DM and DV json output files
- add a differential evolution algorithm to the EDP fitting function to do a better job at finding the global optimum
- enhance DL_calculation.py to handle multi-stripe analysis (significant contributions by Joanna Zou):
= recognize stripe_ID and occurrence rate in BIM/EVENT file
= fit a collapse fragility function to empirical collapse probabilities
= perform loss assessment for each stripe independently and produce corresponding outputs
v1.2
- support for HAZUS hurricane wind damage and loss assessment
- add HAZUS hurricane DL data for wooden houses
- move DL resources inside the pelicun folder so that they come with pelicun when it is pip installed
- add various options for EDP fitting and collapse probability estimation
- improved the way warning messages are printed to make them more useful