Merge pull request #15 from DUNE-DAQ/sbhuller/new_scripts
Sbhuller/new scripts
ShyamB97 authored Oct 3, 2024
2 parents 1ff5c45 + b4cd493 commit 2e4dd44
Showing 17 changed files with 505 additions and 21,479 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
__pycache__/
39 changes: 38 additions & 1 deletion README.md
@@ -1,4 +1,41 @@
# performancetest


In `performancetest` users can find all the resources needed to conduct benchmark and performance tests, as well as to process and present the results. In the `docs` folder users will find detailed test explanations, instructions on how to execute these tests, and a guide on how to process the gathered data. In the `tools` folder the user can find the Python 3 notebooks and the Python file with the basic functions needed for creating the reports.

## Installation
To set up your environment, run

```bash
pip install -r requirements.txt
```

to install the necessary Python packages. Every time you log in, run

```bash
source setup.sh
```

or add this to `env.sh` in your dunedaq workspace.
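For example (the checkout path below is illustrative), the line added to `env.sh` would be:

```bash
# Set up performancetest on every login; adjust the path to your checkout.
source /path/to/performancetest/setup.sh
```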

## Generating performance reports

To generate a performance report, the tools `collect_metrics.py` and `generate_performance_report.py` are used. Both tools require a json file as input, which provides details about the test and the information required to retrieve the data. To generate a template json file, run

```bash
collect_metrics.py -g
```

which should produce a file called `template_report.json` in your current directory. This configuration file lists all the information needed, with a brief description of each entry. Note that entries with `None` are optional. Once all the information is filled in, run

```bash
collect_metrics.py -f <name of your json file>
```

to collect the dashboard information and format the core utilisation output. The outputs of this script are csv files for each test and for the core utilisation, which are automatically added to your json file under the entries `grafana_data_files` and `core_utilisation_files`, respectively.
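As a quick sanity check (an optional step, not part of the tooling, and assuming `python3` is available), you can confirm that the file paths were written back into your configuration:

```bash
# Pretty-print the json file and show the collected file entries.
python3 -m json.tool <name of your json file> | grep -A 3 -E "grafana_data_files|core_utilisation_files"
```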

Generate the performance report by running

```bash
generate_performance_report.py -f <name of your json file>
```
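Putting the steps together, a minimal end-to-end run looks like the sketch below (the json file name is illustrative, and the template must be edited by hand between the first and second steps):

```bash
# 1. Generate the template configuration, then rename and fill it in.
collect_metrics.py -g
mv template_report.json my_test_report.json   # example file name

# 2. Collect the dashboard data and format the core utilisation output.
collect_metrics.py -f my_test_report.json

# 3. Generate the performance report from the collected data.
generate_performance_report.py -f my_test_report.json
```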
6 changes: 6 additions & 0 deletions requirements.txt
@@ -0,0 +1,6 @@
fpdf2==2.7.9
matplotlib==3.9.2
pandas==2.2.2
python-dateutil==2.8.2
requests==2.25.0
tabulate==0.9.0
6 changes: 6 additions & 0 deletions setup.sh
@@ -0,0 +1,6 @@
# Resolve the absolute directory containing this script.
PERFORMANCE_TEST_PATH=$(dirname "$(realpath "$BASH_SOURCE")")
export PERFORMANCE_TEST_PATH
# Put the tools, scripts and tests executables on PATH.
export PATH=$PERFORMANCE_TEST_PATH/tools:$PATH
export PATH=$PERFORMANCE_TEST_PATH/scripts:$PATH
export PATH=$PERFORMANCE_TEST_PATH/tests:$PATH
echo "performance test executables added to PATH"
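As a usage sketch, sourcing the script from the repository root should make both entry points resolvable from any directory:

```bash
source setup.sh
# Verify that the tools are now found on PATH.
which collect_metrics.py generate_performance_report.py
```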
324 changes: 0 additions & 324 deletions tools/Basic_run_report.ipynb

This file was deleted.

222 changes: 0 additions & 222 deletions tools/Benchmark_report.ipynb

This file was deleted.

