Commit

Apply pre-commit code formatters (#47)
* added python-app to run the tests when the main branch is pushed or a PR is approved
* Removed unused dependencies from the project and updated README.md
* Try to fix the tensorflow reshaping error
* Renamed python-app.yml to tests.yml
* reformatted source code using black
* added .pre-commit-config.yaml
* added black to the dev environment in pyproject.toml
* manually ran the pre-commit hooks to fix the file formats
* apply black line-length rule
* deleted unused script from the branch
* changed the private methods (convert_token_to_id, convert_id_to_token) to public for CehrBertTokenizer
* restored the function create_folder_if_not_exist
* created a property oov_token_index for CehrBertTokenizer
* fixed the hf_cehrbert_pretrain_runner integration test
* removed the github pylint action for now because it's too restrictive
* restored LICENSE.md
* applied code git hooks to re-format the source
* fixed the unit tests affected by changing private members to public ones
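The tokenizer change noted above (making `convert_token_to_id`/`convert_id_to_token` public and adding an `oov_token_index` property) can be sketched as follows. This is a hypothetical minimal illustration, not the actual `CehrBertTokenizer` implementation; the vocabulary handling and the `[UNKNOWN]` token are assumptions for the example.

```python
# Hypothetical sketch of the CehrBertTokenizer API change described above;
# names of internals are illustrative, not the real implementation.
OOV_TOKEN = "[UNKNOWN]"


class CehrBertTokenizer:
    def __init__(self, vocabulary):
        # Map tokens to ids and back; the OOV token is part of the vocabulary.
        self._token_to_id = {token: idx for idx, token in enumerate(vocabulary)}
        self._id_to_token = {idx: token for token, idx in self._token_to_id.items()}

    @property
    def oov_token_index(self) -> int:
        # New property, replacing direct access to a private member.
        return self._token_to_id[OOV_TOKEN]

    # Formerly _convert_token_to_id; made public in this commit.
    def convert_token_to_id(self, token: str) -> int:
        return self._token_to_id.get(token, self.oov_token_index)

    # Formerly _convert_id_to_token; made public in this commit.
    def convert_id_to_token(self, token_id: int) -> str:
        return self._id_to_token.get(token_id, OOV_TOKEN)


tokenizer = CehrBertTokenizer([OOV_TOKEN, "person", "visit_occurrence"])
print(tokenizer.convert_token_to_id("person"))        # 1
print(tokenizer.convert_id_to_token(2))               # visit_occurrence
print(tokenizer.convert_token_to_id("not-in-vocab"))  # 0 (the OOV index)
```

Making these methods public is what required the unit-test fixes mentioned in the last bullet.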
ChaoPang authored Sep 6, 2024
1 parent ce5c263 commit 0d7c835
Showing 113 changed files with 8,301 additions and 8,196 deletions.
@@ -1,7 +1,7 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python

-name: Python application
+name: Tests

on:
push:
@@ -36,4 +36,4 @@ jobs:
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
-PYTHONPATH=./: pytest
\ No newline at end of file
+PYTHONPATH=./: pytest
4 changes: 2 additions & 2 deletions .gitignore
@@ -2,7 +2,7 @@
.idea/
.vscode/
venv*

dist/*

*ipynb_checkpoints/
*h5
@@ -35,4 +35,4 @@ cehr_transformers.egg-info/top_level.txt

test_data
test_dataset_prepared
-test*results
\ No newline at end of file
+test*results
83 changes: 83 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,83 @@
# For documentation on pre-commit usage, see https://pre-commit.com/
# This file should be updated quarterly by a developer running `pre-commit autoupdate`
# with changes added and committed.
# This will run all defined formatters prior to adding a commit.
default_language_version:
python: python3 # or python3.10 to set a specific default version

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace

- repo: https://github.com/DanielNoord/pydocstringformatter
rev: 'v0.7.3'
hooks:
- id: pydocstringformatter

- repo: https://github.com/PyCQA/autoflake
rev: v2.2.0
hooks:
- id: autoflake

- repo: https://github.com/psf/black
rev: '24.1.1'
hooks:
- id: black
# It is recommended to specify the latest version of Python
# supported by your project here, or alternatively use
# pre-commit's default_language_version, see
# https://pre-commit.com/#top_level-default_language_version
# Pre-commit hook info from: https://black.readthedocs.io/en/stable/integrations/source_version_control.html
# Editor integration here: https://black.readthedocs.io/en/stable/integrations/editors.html

- repo: https://github.com/adamchainz/blacken-docs
rev: "v1.12.1" # replace with latest tag on GitHub
hooks:
- id: blacken-docs
additional_dependencies:
- black>=22.12.0

- repo: https://github.com/pre-commit/pre-commit-hooks
rev: 'v4.5.0'
hooks:
- id: trailing-whitespace
exclude: .git/COMMIT_EDITMSG
- id: end-of-file-fixer
exclude: .git/COMMIT_EDITMSG
- id: detect-private-key
- id: debug-statements
- id: check-json
- id: pretty-format-json
- id: check-yaml
- id: name-tests-test
- id: requirements-txt-fixer

- repo: https://github.com/pre-commit/pygrep-hooks
rev: 'v1.10.0'
hooks:
- id: python-no-eval
- id: python-no-log-warn
- id: python-use-type-annotations

- repo: https://github.com/Lucas-C/pre-commit-hooks
rev: v1.5.4
hooks:
- id: remove-crlf
- id: remove-tabs # defaults to: 4
exclude: .git/COMMIT_EDITMSG

- repo: https://github.com/PyCQA/isort.git
rev: 5.13.2
hooks:
- id: isort
args: [ "--profile", "black" ]

- repo: https://github.com/PyCQA/bandit
rev: '1.7.7'
hooks:
- id: bandit
args: ["--skip", "B101,B106,B107,B301,B311,B105,B608,B403"]
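The config file above is consumed by the pre-commit CLI (added to the dev extras in this commit). A typical local workflow, assuming `pre-commit` is installed in the environment, looks like this; the commands are the standard pre-commit CLI, not anything project-specific:

```shell
# One-time setup: register the git hooks defined in .pre-commit-config.yaml.
pre-commit install

# Run every configured hook against all files, as was done for this commit
# ("manually ran the pre-commit hooks to fix the file formats").
pre-commit run --all-files

# Quarterly maintenance suggested by the config header: bump hook revisions.
pre-commit autoupdate
```

After `pre-commit install`, the hooks also run automatically on each `git commit`.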
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Department of Biomedical Informatics

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
10 changes: 5 additions & 5 deletions README.md
@@ -57,15 +57,15 @@ pip install -e .[dev]

Download [jtds-1.3.1.jar](jtds-1.3.1.jar) into the spark jars folder in the python environment
```console
-cp jtds-1.3.1.jar .venv/lib/python3.10/site-packages/pyspark/jars/
+cp jtds-1.3.1.jar .venv/lib/python3.10/site-packages/pyspark/jars/
```

## Instructions for Use with [MEDS](https://github.com/Medical-Event-Data-Standard/meds)

### 1. Convert MEDS to the [meds_reader](https://github.com/som-shahlab/meds_reader) database

If you don't have the MEDS dataset, you could convert the OMOP dataset to the MEDS
-using [meds_etl](https://github.com/Medical-Event-Data-Standard/meds_etl).
+using [meds_etl](https://github.com/Medical-Event-Data-Standard/meds_etl).
We have prepared a synthea dataset with 1M patients for you to test, you could download it
at [omop_synthea.tar.gz](https://drive.google.com/file/d/1k7-cZACaDNw8A1JRI37mfMAhEErxKaQJ/view?usp=share_link)
```console
@@ -115,7 +115,7 @@ The sequence can be seen conceptually as [VS] [V1] [VE] [ATT] [VS] [V2] [VE], wh
concepts associated with those visits.

```console
-PYTHONPATH=./: spark-submit spark_apps/generate_training_data.py -i ~/Documents/omop_test/ -o ~/Documents/omop_test/cehr-bert -tc condition_occurrence procedure_occurrence drug_exposure -d 1985-01-01 --is_new_patient_representation -iv
+PYTHONPATH=./: spark-submit spark_apps/generate_training_data.py -i ~/Documents/omop_test/ -o ~/Documents/omop_test/cehr-bert -tc condition_occurrence procedure_occurrence drug_exposure -d 1985-01-01 --is_new_patient_representation -iv
```

### 3. Pre-train CEHR-BERT
@@ -125,7 +125,7 @@ at `sample/patient_sequence` in the repo. CEHR-BERT expects the data folder to b
```console
mkdir test_dataset_prepared;
mkdir test_results;
-python -m cehrbert.runners.hf_cehrbert_pretrain_runner sample_configs/hf_cehrbert_pretrain_runner_config.yaml
+python -m cehrbert.runners.hf_cehrbert_pretrain_runner sample_configs/hf_cehrbert_pretrain_runner_config.yaml
```

If your dataset is large, you could add ```--use_dask``` in the command above
@@ -157,4 +157,4 @@ Chao Pang, Xinzhuo Jiang, Krishna S. Kalluri, Matthew Spotnitz, RuiJun Chen, Adl
Perotte, and Karthik Natarajan. "Cehr-bert: Incorporating temporal information from
structured ehr data to improve prediction tasks." In Proceedings of Machine Learning for
Health, volume 158 of Proceedings of Machine Learning Research, pages 239–260. PMLR,
-04 Dec 2021.
\ No newline at end of file
+04 Dec 2021.
2 changes: 1 addition & 1 deletion db_properties.ini
@@ -2,4 +2,4 @@
base_url = jdbc:jtds:sqlserver://servername:1433;useNTLMv2=true;domain=domain_name;databaseName=db
driver = net.sourceforge.jtds.jdbc.Driver
user = username
-password = password
\ No newline at end of file
+password = password
2 changes: 1 addition & 1 deletion deepspeed_configs/zero1.json
@@ -19,4 +19,4 @@
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
-}
\ No newline at end of file
+}
2 changes: 1 addition & 1 deletion deepspeed_configs/zero2.json
@@ -23,4 +23,4 @@
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
-}
\ No newline at end of file
+}
2 changes: 1 addition & 1 deletion deepspeed_configs/zero3.json
@@ -27,4 +27,4 @@
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
-}
\ No newline at end of file
+}
2 changes: 1 addition & 1 deletion full_grid_search_config.ini
@@ -8,4 +8,4 @@ val_4 = 1.2e-4
val_1 = True

[LSTM_UNIT]
-val_1 = 128
\ No newline at end of file
+val_1 = 128
19 changes: 17 additions & 2 deletions pyproject.toml
Expand Up @@ -12,6 +12,7 @@ authors = [
]
description = "CEHR-BERT: Incorporating temporal information from structured EHR data to improve prediction tasks"
readme = "README.md"
license = { text = "MIT License" }
requires-python = ">=3.10.0"

classifiers = [
@@ -47,7 +48,7 @@ dependencies = [
"scikit-learn==1.4.0",
"scipy==1.12.0",
"tensorflow==2.15.0",
-"tensorflow-metal==1.1.0; sys_platform == 'darwin'", # macOS only
+"tensorflow-metal==1.1.0; sys_platform == 'darwin'",  # macOS only
"tensorflow-datasets==4.5.2",
"tqdm==4.66.1",
"torch==2.4.0",
@@ -60,11 +61,25 @@ dependencies = [

[tool.setuptools_scm]

[project.urls]
Homepage = "https://github.com/cumc-dbmi/cehr-bert"

[project.scripts]
cehrbert-pretraining = "cehrbert.runner.hf_cehrbert_pretrain_runner:main"
cehrbert-finetuning = "cehrbert.runner.hf_cehrbert_finetuning_runner:main"

[project.optional-dependencies]
dev = [
-"pre-commit", "pytest", "pytest-cov", "pytest-subtests", "rootutils", "hypothesis"
+"pre-commit", "pytest", "pytest-cov", "pytest-subtests", "rootutils", "hypothesis", "black"
]

[tool.isort]
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 120

[tool.black]
line_length = 120
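The `[tool.isort]` and `[tool.black]` sections added above pin a shared 120-character line length and black-compatible import wrapping. As an illustration only (using a stdlib import rather than cehr-bert code), isort's black profile (`multi_line_output = 3`, trailing commas, parentheses) wraps an over-long import like this:

```python
# With isort's "black" profile, a long import is wrapped with parentheses,
# one name per line, and a trailing comma:
from collections import (
    Counter,
    OrderedDict,
    defaultdict,
    namedtuple,
)

# The wrapped form parses identically to a single-line import.
assert Counter("abracadabra").most_common(1) == [("a", 5)]
```

Keeping `line_length` identical in both tools prevents black and isort from fighting over where imports and long lines break.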
@@ -50,4 +50,4 @@ logging_steps: 100
save_total_limit:
load_best_model_at_end: true
metric_for_best_model: "eval_loss"
-greater_is_better: false
\ No newline at end of file
+greater_is_better: false
2 changes: 1 addition & 1 deletion simple_grid_search_config.ini
@@ -5,4 +5,4 @@ val_1 = 1.0e-4
val_1 = True

[LSTM_UNIT]
-val_1 = 128
\ No newline at end of file
+val_1 = 128
1 change: 1 addition & 0 deletions src/cehrbert/__init__.py
@@ -2,6 +2,7 @@
It contains the main functions and classes needed to extract cohorts.
"""

from importlib.metadata import PackageNotFoundError, version

__package_name__ = "cehrbert"
13 changes: 6 additions & 7 deletions src/cehrbert/config/grid_search_config.py
@@ -1,14 +1,13 @@
-from typing import NamedTuple, List
+from typing import List, NamedTuple

-LEARNING_RATE = 'LEARNING_RATE'
-LSTM_DIRECTION = 'LSTM_DIRECTION'
-LSTM_UNIT = 'LSTM_UNIT'
+LEARNING_RATE = "LEARNING_RATE"
+LSTM_DIRECTION = "LSTM_DIRECTION"
+LSTM_UNIT = "LSTM_UNIT"


class GridSearchConfig(NamedTuple):
-    """
-    A data class for storing the row from the pandas data frame and the indexes for slicing the
-    """
+    """A data class for storing the row from the pandas data frame and the indexes for slicing the."""

learning_rates: List[float] = [1.0e-4]
lstm_directions: List[bool] = [True]
lstm_units: List[int] = [128]
18 changes: 9 additions & 9 deletions src/cehrbert/config/output_names.py
@@ -1,9 +1,9 @@
-PARQUET_DATA_PATH = 'patient_sequence'
-QUALIFIED_CONCEPT_LIST_PATH = 'qualified_concept_list'
-TIME_ATTENTION_MODEL_PATH = 'time_aware_model.h5'
-BERT_MODEL_VALIDATION_PATH = 'bert_model.h5'
-MORTALITY_DATA_PATH = 'mortality'
-HEART_FAILURE_DATA_PATH = 'heart_failure'
-HOSPITALIZATION_DATA_PATH = 'hospitalization'
-INFORMATION_CONTENT_DATA_PATH = 'information_content'
-CONCEPT_SIMILARITY_PATH = 'concept_similarity'
+PARQUET_DATA_PATH = "patient_sequence"
+QUALIFIED_CONCEPT_LIST_PATH = "qualified_concept_list"
+TIME_ATTENTION_MODEL_PATH = "time_aware_model.h5"
+BERT_MODEL_VALIDATION_PATH = "bert_model.h5"
+MORTALITY_DATA_PATH = "mortality"
+HEART_FAILURE_DATA_PATH = "heart_failure"
+HOSPITALIZATION_DATA_PATH = "hospitalization"
+INFORMATION_CONTENT_DATA_PATH = "information_content"
+CONCEPT_SIMILARITY_PATH = "concept_similarity"
46 changes: 28 additions & 18 deletions src/cehrbert/const/common.py
@@ -1,18 +1,28 @@
-PERSON = 'person'
-VISIT_OCCURRENCE = 'visit_occurrence'
-CONDITION_OCCURRENCE = 'condition_occurrence'
-PROCEDURE_OCCURRENCE = 'procedure_occurrence'
-DRUG_EXPOSURE = 'drug_exposure'
-DEVICE_EXPOSURE = 'device_exposure'
-OBSERVATION = 'observation'
-MEASUREMENT = 'measurement'
-CATEGORICAL_MEASUREMENT = 'categorical_measurement'
-OBSERVATION_PERIOD = 'observation_period'
-DEATH = 'death'
-CDM_TABLES = [PERSON, VISIT_OCCURRENCE, CONDITION_OCCURRENCE, PROCEDURE_OCCURRENCE, DRUG_EXPOSURE,
-              DEVICE_EXPOSURE, OBSERVATION, MEASUREMENT, CATEGORICAL_MEASUREMENT,
-              OBSERVATION_PERIOD, DEATH]
-REQUIRED_MEASUREMENT = 'required_measurement'
-UNKNOWN_CONCEPT = '[UNKNOWN]'
-CONCEPT = 'concept'
-CONCEPT_ANCESTOR = 'concept_ancestor'
+PERSON = "person"
+VISIT_OCCURRENCE = "visit_occurrence"
+CONDITION_OCCURRENCE = "condition_occurrence"
+PROCEDURE_OCCURRENCE = "procedure_occurrence"
+DRUG_EXPOSURE = "drug_exposure"
+DEVICE_EXPOSURE = "device_exposure"
+OBSERVATION = "observation"
+MEASUREMENT = "measurement"
+CATEGORICAL_MEASUREMENT = "categorical_measurement"
+OBSERVATION_PERIOD = "observation_period"
+DEATH = "death"
+CDM_TABLES = [
+    PERSON,
+    VISIT_OCCURRENCE,
+    CONDITION_OCCURRENCE,
+    PROCEDURE_OCCURRENCE,
+    DRUG_EXPOSURE,
+    DEVICE_EXPOSURE,
+    OBSERVATION,
+    MEASUREMENT,
+    CATEGORICAL_MEASUREMENT,
+    OBSERVATION_PERIOD,
+    DEATH,
+]
+REQUIRED_MEASUREMENT = "required_measurement"
+UNKNOWN_CONCEPT = "[UNKNOWN]"
+CONCEPT = "concept"
+CONCEPT_ANCESTOR = "concept_ancestor"
