From 88667c82bdf4fd1642ea188eb5548b9028ee2576 Mon Sep 17 00:00:00 2001
From: Verena Chung <9377970+vpchung@users.noreply.github.com>
Date: Fri, 26 Apr 2024 16:33:38 -0700
Subject: [PATCH] Reformat

---
 README.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 9230d0c..69f7547 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,9 @@ Metrics returned and used for ranking are:
 ### Validate
 
 ```text
-python validate.py -p PATH/TO/PREDICTIONS_FILE.CSV -g PATH/TO/GOLDSTANDARD_FILE.CSV [-o RESULTS_FILE]
+python validate.py \
+  -p PATH/TO/PREDICTIONS_FILE.CSV \
+  -g PATH/TO/GOLDSTANDARD_FILE.CSV [-o RESULTS_FILE]
 ```
 
 If `-o/--output` is not provided, then results will print to STDOUT, e.g.
@@ -31,7 +33,9 @@ What it will check for:
 ### Score
 
 ```text
-python score.py -p PATH/TO/PREDICTIONS_FILE.CSV -g PATH/TO/GOLDSTANDARD_FILE.CSV [-o RESULTS_FILE]
+python score.py \
+  -p PATH/TO/PREDICTIONS_FILE.CSV \
+  -g PATH/TO/GOLDSTANDARD_FILE.CSV [-o RESULTS_FILE]
 ```
 
 If `-o/--output` is not provided, then results will output to `results.json`.