TRR tests in main module #172

Closed
69 commits
99c7d3c
add arg to load only wfs and do not extract charges
Jun 23, 2024
ab1ee04
cleaner way to treat containers inside makers
Jun 26, 2024
08ffabe
new methods to find computed calibration quantities
Jun 26, 2024
7fd2fe8
update to support new version of EvB v6
Jun 26, 2024
20504b9
-bug fix SPE combined : method use to update parameters was not the w…
Jun 26, 2024
2fa7e22
add find photostat result method
Dec 9, 2024
95e2c96
user scripts
Dec 9, 2024
6f526b0
fix import errors in TRR
Jan 6, 2025
efd99ce
bugfix : logger
Jan 7, 2025
6de72ac
static method decorator were missing
Jan 7, 2025
4386513
bugfix with new version of LIghtNectarCAMEventSource from ctapipe_io_…
Jan 8, 2025
9a35828
Unit test for WaveformsNectarCAMCalibrationTool
Jan 8, 2025
e11c1fd
Unit test for the ChargesNectarCAMCalibrationTool
Jan 8, 2025
d684636
formatting
Jan 8, 2025
9826442
fix test core makers
Jan 8, 2025
42886bf
formatting
Jan 8, 2025
1f46a44
formatting
Jan 8, 2025
9f5a9d5
flake8 formatting
Jan 8, 2025
37817de
refactoring following pep8
Jan 8, 2025
4463bfa
dqm flake8
Jan 8, 2025
399661f
refactoring containers
Jan 8, 2025
3ca2069
fix bug formatting
Jan 8, 2025
3dc5918
shitty python formatting
Jan 8, 2025
498d06d
test bugfix
Jan 8, 2025
ed5f3ef
pedestal maker : implementation of I/O with containers
Jan 9, 2025
e4ec62a
bugfix in toml file introduced in previous commits
Jan 9, 2025
1f57b2d
unit test for component core
Jan 9, 2025
d3b7f2e
change filename
Jan 9, 2025
deff3e3
specific noqa
Jan 9, 2025
ca9b6c9
tests for charges_component
Jan 10, 2025
a6e2152
test charge component + core component refactored
Jan 10, 2025
141c497
bugfix when ArgumentError are raised
Jan 10, 2025
d14638a
test for waveforms_component
Jan 10, 2025
ba0298e
typo
Jan 10, 2025
68a97a1
typo + parameter tol can be passed to fit
Jan 10, 2025
81b6b98
typo
Jan 10, 2025
673649b
base test for flatfield SPE component
Jan 10, 2025
15efef1
test flatfield spe components
Jan 17, 2025
b034d6f
test for spe makers
Jan 17, 2025
ffde376
add pyqtgraph dependencies
Jan 17, 2025
8602bd7
pyqtgraph is obtained with pip
Jan 17, 2025
e820796
fix pyqt
Jan 17, 2025
dfc1cbc
fix pyqtgraph pip installation instead of conda-forge
Jan 17, 2025
600d2b3
pyqt6 to pyqt5
Jan 17, 2025
6e4ef18
fix pyqt
Jan 20, 2025
0e2bbf8
fix pyqtgraph
Jan 20, 2025
ee536ad
fix pip
Jan 20, 2025
9c5b6d9
fix pyqt
Jan 20, 2025
ed728d5
add pip dependencies to pyproject.toml
Jan 20, 2025
4aff81b
pyqt6 to pyqt5
Jan 20, 2025
96eab92
pin version of pyqtgraph
Jan 20, 2025
b4038c9
typo
Jan 20, 2025
e2856c0
same in toml
Jan 20, 2025
ce71627
idk
Jan 20, 2025
07b0cea
typo
Jan 20, 2025
884912a
fix
Jan 20, 2025
00c6c4e
test fix
Jan 20, 2025
990f00e
pyside6 vs pyqt
Jan 20, 2025
5ed7e5a
tml
Jan 20, 2025
30f966e
last try
Jan 20, 2025
c3c0c27
last
Jan 20, 2025
7a38e18
move TRR test suite into nectarchain
Jan 17, 2025
3205bbb
fix imports
Jan 20, 2025
5d48c37
fix imports and format
Jan 22, 2025
4ac60aa
add h5py dependency
Jan 22, 2025
2187184
add lmfit to pyproject.toml
Jan 22, 2025
05fefc6
fix docstring
Jan 22, 2025
cc72da0
fix sphinx issue with PyQt6 subclass imports
Jan 23, 2025
880b77f
Revert "Merge branch 'main' into TTR_tests"
Jan 23, 2025
5 changes: 1 addition & 4 deletions .github/workflows/ci.yml
@@ -144,10 +144,7 @@ jobs:
run: |
pytest -n auto --dist loadscope --cov=nectarchain --cov-report=xml --ignore=src/nectarchain/user_scripts

- name: Upload coverage reports to Codecov with GitHub Action
uses: codecov/codecov-action@v5
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
- uses: codecov/codecov-action@v5


docs:
4 changes: 3 additions & 1 deletion environment.yml
@@ -17,11 +17,13 @@ dependencies:
- sphinx
- sphinx-automodapi
- pydata-sphinx-theme
- lmfit # needed into TRR
- h5py # needed into TRR (should be removed to use I/O methods of containers)
- pyqt # [linux]
- pip:
- zeo
- zodb
- mechanize
- browser-cookie3
- pyqt6
- pyqtgraph
- pyqt6 # [osx and arm64]
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -27,6 +27,8 @@ dependencies = [
"scipy==1.11.4",
"zodb",
"zeo",
"lmfit",
"h5py",
"pyqt6",
"pyqtgraph",
]
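Several of the commits above wrestle with installing pyqt/pyqtgraph across platforms. One defensive pattern for such a GUI-only dependency (a sketch, not how nectarchain actually handles it) is to guard the import behind a feature flag so non-GUI use of the package keeps working:

```python
# Hypothetical guard for the GUI-only dependency added above; the flag lets
# callers skip the GUI entry point when pyqtgraph is unavailable.
try:
    import pyqtgraph  # noqa: F401

    HAS_GUI = True
except ImportError:
    HAS_GUI = False
```

The flag is then checked once at the GUI entry point instead of letting an `ImportError` surface deep inside library code.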
@@ -71,7 +71,6 @@ def test_call(self, instance, event):
instance(event)
assert len(instance.chargesComponent.trigger_list) == 1

# @pytest.mark.skip(reason="test multiproc make GitHub worker be killed")
def test_finish_multiproc(self):
SPEalgorithm.window_length.default_value = 2
# We need to re-instance objects because otherwise, an exception is raised:
@@ -188,7 +187,6 @@ def test_init(self, instance):
assert isinstance(instance.chargesComponent, ChargesComponent)
assert instance._chargesContainers is None

# @pytest.mark.skip(reason="test multiproc make GitHub worker be killed")
def test_finish_multiproc(self):
# We need to re-instance objects because otherwise, an exception is raised:
# ReferenceError('weakly-referenced object no longer exists')
2 changes: 1 addition & 1 deletion src/nectarchain/makers/core.py
@@ -339,7 +339,7 @@ def setup(self, *args, **kwargs):

def _setup_eventsource(self, *args, **kwargs):
self._load_eventsource(*args, **kwargs)
self.__npixels = self._event_source.nectarcam_service.num_pixels
self.__npixels = self._event_source.camera_config.num_pixels
self.__pixels_id = self._event_source.nectarcam_service.pixel_ids

def _setup_components(self, *args, **kwargs):
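The core.py hunk above swaps the pixel-count lookup from `nectarcam_service` to `camera_config` (following the newer `LightNectarCAMEventSource`), while the pixel ids still come from `nectarcam_service`. A minimal sketch of the new access pattern, using `SimpleNamespace` stand-ins rather than the real event source (attribute names follow the diff; the values are illustrative, with 1855 being the NectarCAM pixel count):

```python
from types import SimpleNamespace


def setup_pixels(event_source):
    # Mirrors the updated _setup_eventsource: the pixel count now comes from
    # camera_config, while the pixel ids still come from nectarcam_service.
    npixels = event_source.camera_config.num_pixels
    pixels_id = event_source.nectarcam_service.pixel_ids
    return npixels, pixels_id


# Stand-in for LightNectarCAMEventSource, for illustration only.
source = SimpleNamespace(
    camera_config=SimpleNamespace(num_pixels=1855),
    nectarcam_service=SimpleNamespace(pixel_ids=list(range(1855))),
)
npixels, pixels_id = setup_pixels(source)
```

Keeping the lookup in one helper like this makes such upstream attribute renames a one-line change.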
5 changes: 5 additions & 0 deletions src/nectarchain/trr_test_suite/__init__.py
@@ -0,0 +1,5 @@
from .gui import TestRunner


__all__ = [
    "TestRunner",
]
@@ -9,23 +9,33 @@
import numpy as np
from astropy import units as u
from iminuit import Minuit
from tools_components import DeadtimeTestTool
from utils import ExponentialFitter, deadtime_labels, source_ids_deadtime

from nectarchain.trr_test_suite.tools_components import DeadtimeTestTool
from nectarchain.trr_test_suite.utils import (
    ExponentialFitter,
    deadtime_labels,
    source_ids_deadtime,
)


def get_args():
"""
Parses command-line arguments for the deadtime test script.
"""Parses command-line arguments for the deadtime test script.

Returns:
argparse.ArgumentParser: The parsed command-line arguments.
"""
parser = argparse.ArgumentParser(
description="Deadtime tests B-TEL-1260 & B-TEL-1270. \n"
+ "According to the nectarchain component interface, you have to set a NECTARCAMDATA environment variable in the folder where you have the data from your runs or where you want them to be downloaded.\n"
+ "You have to give a list of runs (run numbers with spaces inbetween), a corresponding source list and an output directory to save the final plot.\n"
+ "If the data is not in NECTARCAMDATA, the files will be downloaded through DIRAC.\n For the purposes of testing this script, default data is from the runs used for this test in the TRR document.\n"
+ "You can optionally specify the number of events to be processed (default 1000).\n"
+ "According to the nectarchain component interface, you have to set a\
NECTARCAMDATA environment variable in the folder where you have the data\
from your runs or where you want them to be downloaded.\n"
+ "You have to give a list of runs (run numbers with spaces inbetween), a \
corresponding source list and an output directory to save the final plot.\n"
+ "If the data is not in NECTARCAMDATA, the files will be downloaded through \
DIRAC.\n For the purposes of testing this script, default data is from the\
runs used for this test in the TRR document.\n"
+ "You can optionally specify the number of events to be processed \
(default 1000).\n"
)
parser.add_argument(
"-r",
@@ -42,7 +52,8 @@
type=int,
choices=[0, 1, 2],
nargs="+",
help="List of corresponding source for each run: 0 for random generator, 1 for nsb source, 2 for laser",
help="List of corresponding source for each run: 0 for random generator,\
1 for nsb source, 2 for laser",
required=False,
default=source_ids_deadtime,
)
@@ -70,15 +81,21 @@


def main():
"""
Runs the deadtime test script, which performs deadtime tests B-TEL-1260 and B-TEL-1270.
"""Runs the deadtime test script, which performs deadtime tests B-TEL-1260 and
B-TEL-1270.

The script takes command-line arguments to specify the list of runs, corresponding sources, number of events to process, and output directory. It then processes the data for each run, performs an exponential fit to the deadtime distribution, and generates two plots:
The script takes command-line arguments to specify the list of runs, corresponding\
sources, number of events to process, and output directory. It then processes\
the data for each run, performs an exponential fit to the deadtime\
distribution, and generates two plots:

1. A plot of deadtime percentage vs. collected trigger rate, with the CTA requirement indicated.
2. A plot of the rate from the fit vs. the collected trigger rate, with the relative difference shown in the bottom panel.
1. A plot of deadtime percentage vs. collected trigger rate, with the CTA\
requirement indicated.
2. A plot of the rate from the fit vs. the collected trigger rate, with the\
relative difference shown in the bottom panel.

The script also saves the generated plots to the specified output directory, and optionally saves them to a temporary output directory for use in a GUI.
The script also saves the generated plots to the specified output directory, and\
optionally saves them to a temporary output directory for use in a GUI.
"""

parser = get_args()
@@ -189,7 +206,8 @@
m.limits["deadtime"] = (
0.6e-6,
1.1e-6,
)  # Put some tight constraint, as the fit will be in trouble when it expects
# 0 and measures something instead.
# measured something instead.

m.print_level = 2

@@ -202,16 +220,19 @@
# print(fitted_params_err)

print(
f"Dead-Time is {1.e6*fitted_params[1]:.3f} +- {1.e6*fitted_params_err[1]:.3f} µs"
f"Dead-Time is {1.e6*fitted_params[1]:.3f} +- "
f"{1.e6*fitted_params_err[1]:.3f} µs"
)
print(
f"Rate is {1./fitted_params[2]:.2f} +- {fitted_params_err[2]/(fitted_params[2]**2):.2f} Hz"
f"Rate is {1./fitted_params[2]:.2f} +-"
f"{fitted_params_err[2]/(fitted_params[2]**2):.2f} Hz"
)
print(f"Expected run duration is {fitted_params[0]*fitted_params[2]:.2f} s")

fitted_rate.append(1.0 / fitted_params[2])

# plt.savefig(figurepath + 'deadtime_exponential_fit_nsb_run{}_newfit_cutoff.png'.format(run))
# plt.savefig(figurepath + 'deadtime_exponential_fit_nsb_run{}_newfit
# _cutoff.png'.format(run))

y = data_content
y_fit = fitter.expected_distribution(fitted_params)
@@ -236,13 +257,13 @@

parameter_R2_new_list.append(r2)

deadtime_from_fit = parameter_tau_new_list
deadtime_from_fit_err = parameter_tau_err_new_list
# deadtime_from_fit = parameter_tau_new_list
# deadtime_from_fit_err = parameter_tau_err_new_list
lambda_from_fit = parameter_lambda_new_list
lambda_from_fit_err = parameter_lambda_err_new_list
A2_from_fit = parameter_A2_new_list
A2_from_fit_err = parameter_A2_err_new_list
R2_from_fit = parameter_R2_new_list
# A2_from_fit = parameter_A2_new_list
# A2_from_fit_err = parameter_A2_err_new_list
# R2_from_fit = parameter_R2_new_list

#######################################
# PLOT
@@ -260,9 +281,9 @@
ids = np.array(ids)
runlist = np.array(runlist)

ratio_list = []
collected_rate = []
err = []
# ratio_list = []
# collected_rate = []
# err = []

for source in range(0, 3):
# runl = np.where(ids==source)[0]
@@ -399,15 +420,17 @@
ax1.errorbar(
collected_trigger_rate[runl] / 1000,
rate[runl],
# xerr=((df_mean_nsb[df_mean_nsb['Run']==run]['Collected_trigger_rate[Hz]_err']))/1000,
# xerr=((df_mean_nsb[df_mean_nsb['Run']==run]['Collected_trigger
# _rate[Hz]_err']))/1000,
yerr=rate_err[runl],
alpha=0.9,
ls=" ",
marker="o",
color=labels[source]["color"],
label=labels[source]["source"],
)
# label = 'Run {} ({} V)'.format(run, df_mean_rg[df_mean_rg['Run']==run]['Voltage[V]'].values[0]))
# label = 'Run {} ({} V)'.format(run, df_mean_rg[df_mean_rg['Run']=
# =run]['Voltage[V]'].values[0]))

ax1.legend(frameon=False, prop={"size": 10}, loc="upper left", ncol=1)

@@ -424,7 +447,7 @@
main()
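The script delegates the fit to `ExponentialFitter` from the test-suite utils. As a hedged illustration of the underlying idea (not the script's actual fitter, which uses iminuit and a dead-time cutoff), the maximum-likelihood rate of exponentially distributed inter-event times is simply the inverse of their mean:

```python
import random


def fit_exponential_rate(delta_ts):
    # MLE for the rate of an exponential distribution: lambda_hat = 1 / mean.
    return len(delta_ts) / sum(delta_ts)


random.seed(42)
true_rate = 2000.0  # Hz, illustrative
samples = [random.expovariate(true_rate) for _ in range(100_000)]
rate_hat = fit_exponential_rate(samples)
```

The real fitter has to work harder than this because the dead time truncates the delta-t distribution below tau, which is exactly why the fit needs the tight limits on the `deadtime` parameter seen above.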


##################################PREVIOUS###############################
# ##################################PREVIOUS###############################
# collected_rate = []


@@ -445,8 +468,10 @@

# for i, run in enumerate(runlist):
# deadtime_run, deadtime_bin_run, deadtime_err_run, deadtime_bin_length_run, \
# total_delta_t_for_busy_time, parameter_A_new, parameter_R_new, parameter_A_err_new, parameter_R_err_new, \
# first_bin_length, tot_nr_events_histo = deadtime_and_expo_fit(time_tot[i],deadtime_us[i], run)
# total_delta_t_for_busy_time, parameter_A_new, parameter_R_new, parameter_A_err_
# new, parameter_R_err_new, \
# first_bin_length, tot_nr_events_histo = deadtime_and_expo_fit(time_tot[i],deadt
# ime_us[i], run)
# total_delta_t_for_busy_time_list.append(total_delta_t_for_busy_time)
# parameter_A_new_list.append(parameter_A_new)
# parameter_R_new_list.append(parameter_R_new)
@@ -468,7 +493,8 @@
# rate_err = (np.array(parameter_R_err_new_list) * 1 / u.us).to(u.kHz).to_value()
# A_from_fit = (parameter_A_new_list)
# A_from_fit_err = (parameter_A_err_new_list)
# ucts_busy_rate = (np.array(busy_counter[:,-1]) / (np.array(time_tot) * u.s).to(u.s)).to(
# ucts_busy_rate = (np.array(busy_counter[:,-1]) / (np.array(time_tot) * u.s).to(u.s))
# .to(
# u.kHz).value
# nr_events_from_histo = (tot_nr_events_histo)
# first_bin_delta_t = first_bin_length
Expand All @@ -478,7 +504,7 @@
# deadtime_average_err_nsb = np.sqrt(1 / (np.sum(1 / deadtime_bin_length ** 2)))


# #######################################################################################
# ######################################################################################


# #B-TEL-1260
@@ -502,7 +528,8 @@

# ratio = rate[i]/freq
# ratio_list.append(np.array(ratio)*100)
# ratio_err = np.sqrt((rate_err[i]/freq)**2 + (freq_err*rate[i]/(freq**2)))
# ratio_err = np.sqrt((rate_err[i]/freq)**2 + (freq_err*rate[i]/
# (freq**2)))
# err.append(ratio_err*100)


Expand All @@ -512,7 +539,8 @@
# X_sorted = [x for y, x in sorted(zip(Y, X))]
# err_sorted = [err for y,err in sorted(zip(Y,err))]

# plt.errorbar(sorted(Y), X_sorted, yerr = err_sorted, alpha=0.6, ls='-', marker='o',color=labels[source]['color'], label = labels[source]['source'])
# plt.errorbar(sorted(Y), X_sorted, yerr = err_sorted, alpha=0.6, ls='-',
# marker='o',color=labels[source]['color'], label = labels[source]['source'])

# plt.xlabel('Collected Trigger Rate [kHz]')
# plt.ylabel(r'Deadtime [%]')
@@ -564,7 +592,8 @@
# ax1.plot(x, x, color='gray', ls='--', alpha=0.5)

# ax2.plot(x, np.zeros(len(x)), color='gray', ls='--', alpha=0.5)
# ax2.fill_between(x, np.ones(len(x))*(-10), np.ones(len(x))*(10), color='gray', alpha=0.1)
# ax2.fill_between(x, np.ones(len(x))*(-10), np.ones(len(x))*(10), color='gray',
# alpha=0.1)

# ax2.set_xlabel('Collected Trigger Rate [kHz]')
# ax1.set_ylabel(r'Rate from fit [kHz]')
@@ -581,11 +610,14 @@
# #print(collected_triger_rate[runl])
# ax1.errorbar(collected_triger_rate[runl]/1000,
# rate[runl],
# #xerr=((df_mean_nsb[df_mean_nsb['Run']==run]['Collected_trigger_rate[Hz]_err']))/1000,
# #xerr=((df_mean_nsb[df_mean_nsb['Run']==run]
# ['Collected_trigger_rate[Hz]_err']))/1000,
# yerr=rate_err[runl],
# alpha=0.9,
# ls=' ', marker='o', color=labels[source]['color'], label = labels[source]['source'])
# # label = 'Run {} ({} V)'.format(run, df_mean_rg[df_mean_rg['Run']==run]['Voltage[V]'].values[0]))
# ls=' ', marker='o', color=labels[source]['color'],
# label = labels[source]['source'])
# # label = 'Run {} ({} V)'.format(run, df_mean_rg[df_mean_rg['Run']==
# run]['Voltage[V]'].values[0]))

# ax1.legend(frameon=False, prop={'size':10},
# loc="upper left", ncol=1)