
Merge branch 'master' into deamon
marialiubarska authored Feb 2, 2024
2 parents c07e5e7 + d1959e7 commit 5291e25
Showing 176 changed files with 532 additions and 157 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/pythonpackage.yml
@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7, '3.10']
python-version: ['3.8', '3.10']

steps:
- uses: actions/checkout@v2
@@ -25,7 +25,7 @@ jobs:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install "pip<21.3"
pip install .
- name: Lint with flake8
run: |
82 changes: 81 additions & 1 deletion INSTALL.md
@@ -1,6 +1,86 @@
## Installation Guide

### Quickstart
### Installing PISA on the Madison working group servers (cobalts), current as of August 2023

These instructions assume you are already in the directory where you want to store the fridge/PISA source files, create the Python virtual environment, and build PISA. You also need access to GitHub through your account.


#### Clone into the fridge and pisa (ideally your own fork):

```
git clone [email protected]:icecube/wg-oscillations-fridge.git ./fridge
git clone [email protected]:USERNAME/pisa.git ./pisa
```


#### Load cvmfs environment:

```
unset OS_ARCH; eval `/cvmfs/icecube.opensciencegrid.org/py3-v4.2.1/setup.sh`
```

`which python` should now point to `/cvmfs/icecube.opensciencegrid.org/py3-v4.2.1/RHEL_7_x86_64/bin/python`

**Note:** It has not been tested whether this works with a cvmfs version newer than `py3-v4.2.1`.


#### Create virtual environment:

```
python -m venv ./YOUR_PISA_PY3_VENV
```


#### Start the virtual environment and install pisa from source:

```
source ./YOUR_PISA_PY3_VENV/bin/activate
```

(your shell prompt should now indicate that the virtual environment is active)

```
pip install -e pisa
```


#### Modify your environment by adding lines to `./YOUR_PISA_PY3_VENV/bin/activate`

From now on, every time you want to use PISA you need to activate your virtual environment by running `source ./YOUR_PISA_PY3_VENV/bin/activate`. To set some useful environment variables on activation, you may want to add the following lines (or more if needed) to the end of the `./YOUR_PISA_PY3_VENV/bin/activate` script:

```
# PISA source
export PISA="/data/user/USERNAME/PATH_TO_PISA"
# set some custom environment variables and load fridge
export PISA_RESOURCES="/data/user/USERNAME/PATH_TO_FRIDGE/analysis/common"
export PISA_RESOURCES=$PISA_RESOURCES:"/data/user/USERNAME/PATH_TO_FRIDGE/analysis"
source "/data/user/USERNAME/PATH_TO_FRIDGE/setup_fridge.sh"
# Add this project to the python path
export PYTHONPATH=$FRIDGE_DIR:$PYTHONPATH
# Madison
export PISA_RESOURCES=/data/ana/LE:$PISA_RESOURCES
export PISA_RESOURCES=/data/ana/BSM/HNL/:$PISA_RESOURCES
export PISA_RESOURCES=$FRIDGE_DIR:$FRIDGE_DIR/analysis:$FRIDGE_DIR/analysis/YOUR_ANALYSIS/settings:$FRIDGE_DIR/analysis/YOUR_ANALYSIS/analysis:$FRIDGE_DIR/analysis/common:$PISA_RESOURCES
export PISA_CACHE=~/cache/
export PISA_FTYPE=fp64
export HDF5_USE_FILE_LOCKING='FALSE'
```
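The `PISA_RESOURCES` entries above form a colon-separated search path, with paths prepended later in the activate script ending up earlier in the list. A minimal standalone sketch of that behavior (dummy `/tmp` paths, not the real Madison locations, and not PISA's actual resource-lookup code):

```python
import os

# Hypothetical illustration: PISA_RESOURCES is a colon-separated list of
# directories; splitting it gives the ordered search path. Paths are dummies.
os.environ["PISA_RESOURCES"] = "/tmp/fridge/analysis/common:/tmp/data/ana/LE"

def resource_search_path():
    """Split PISA_RESOURCES into the ordered list of directories searched."""
    return os.environ.get("PISA_RESOURCES", "").split(":")

search_path = resource_search_path()
# search_path[0] is the entry prepended last, so (presumably) it takes
# precedence when a resource name exists in several places.
```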


#### Install any additional packages that you might want

`pip install PACKAGE` (for example, `pip install jupyter`)



### Quickstart (old guide)

_Note that terminal commands below are intended for the bash shell. You'll have to translate if you use a different shell._

8 changes: 2 additions & 6 deletions pisa/core/map.py
@@ -1048,7 +1048,7 @@ def downsample(self, *args, **kwargs):
return self.rebin(new_binning)

@_new_obj
def fluctuate(self, method, random_state=None, jumpahead=0):
def fluctuate(self, method, random_state=None, jumpahead=None):
"""Apply fluctuations to the map's values.
Parameters
@@ -1065,10 +1065,6 @@ def fluctuate(self, method, random_state=None, jumpahead=0):
random_state : None or type accepted by utils.random_numbers.get_random_state
jumpahead : int >= 0
After instantiating the random_state object, move `jumpahead`
positions forward in the Mersenne twister's finite state machine
Returns
-------
fluctuated_map : Map
@@ -3058,7 +3054,7 @@ def chi2_per_map(self, expected_values):
def chi2_total(self, expected_values):
return np.sum(self.chi2_per_map(expected_values))

def fluctuate(self, method, random_state=None, jumpahead=0):
def fluctuate(self, method, random_state=None, jumpahead=None):
"""Add fluctuations to the maps in the set and return as a new MapSet.
Parameters
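The `jumpahead` argument whose default changes here from `0` to `None` controls whether the random state is advanced before drawing. A minimal standalone sketch of the idea (not PISA's actual `fluctuate` implementation; numpy's `RandomState` no longer exposes a Mersenne-twister `jumpahead()`, so it is emulated below by discarding draws):

```python
import numpy as np

# Hedged sketch of Poisson fluctuation with a reproducible random state.
# `jumpahead` is emulated by discarding draws; this is NOT PISA's code.
def fluctuate_poisson(values, random_state=None, jumpahead=None):
    rng = np.random.RandomState(random_state)
    if jumpahead:
        rng.random_sample(jumpahead)  # advance the generator state
    return rng.poisson(values)

hist = np.array([10.0, 100.0, 1000.0])
a = fluctuate_poisson(hist, random_state=42)
b = fluctuate_poisson(hist, random_state=42)               # same seed, same draw
c = fluctuate_poisson(hist, random_state=42, jumpahead=5)  # shifted stream
```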
6 changes: 4 additions & 2 deletions pisa/stages/discr_sys/README.md
@@ -50,7 +50,7 @@ Once ready, the results are stored in a `.feather` file containing all events of

### Preparation

The scripts producing the gradients are located in `$FRIDGE_DIR/analysis/oscnext_ultrasurfaces`. To produce the gradient feather file, we first need to convert PISA HDF5 files to `.feather` using the `pisa_to_feather.py` script. We need to pass the input file, the output file, and a flag setting the sample (variable names) to be used (either `--verification-sample`, `--flercnn-sample`, or no additional flag for the Retro sample).
The scripts producing the gradients are located in `$FRIDGE_DIR/analysis/oscnext_ultrasurfaces`. To produce the gradient feather file, we first need to convert PISA HDF5 files to `.feather` using the `pisa_to_feather.py` script. We need to pass the input file, the output file, and a flag setting the sample (variable names) to be used (either `--verification-sample`, `--flercnn-sample`, `--flercnn-hnl-sample`, `--upgrade-sample`, or no additional flag for the Retro sample).

```
python pisa_to_feather.py -i /path/to/pisa_hdf5/oscnext_genie_0151.hdf5 -o /path/to/pisa_hdf5/oscnext_genie_0151.feather {"--verification-sample", "--flercnn-sample", ""}
@@ -59,8 +59,10 @@ After converting all files and setting the appropriate paths in `$FRIDGE_DIR/ana

**First**: Calculate event-wise probabilities with (assuming we `cd`'d into `$FRIDGE_DIR/analysis/oscnext_ultrasurfaces/knn`)

(Note: this needs to be run with an earlier version of scikit-learn, because some of the functions used have since been deprecated; e.g. use `scikit-learn==1.1.2`.)

```
python calculate_knn_probs.py --data-sample {"verification", "flercnn", "retro"} --root-dir /path/to/pisa_feather/ --outfile /path/to/ultrasurface_fits/genie_all_bulkice_pm10pc_knn_200pc.feather --neighbors-per-class 200 --datasets 0000 0001 0002 0003 0004 0100 0101 0102 0103 0104 0105 0106 0107 0109 0151 0500 0501 0502 0503 0504 0505 0506 0507 --jobs 24
python calculate_knn_probs.py --data-sample {"verification", "flercnn", "flercnn_hnl", "retro"} --root-dir /path/to/pisa_feather/ --outfile /path/to/ultrasurface_fits/genie_all_bulkice_pm10pc_knn_200pc.feather --neighbors-per-class 200 --datasets 0000 0001 0002 0003 0004 0100 0101 0102 0103 0104 0105 0106 0107 0109 0151 0500 0501 0502 0503 0504 0505 0506 0507 --jobs 24
```
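As a rough illustration of the event-wise probability idea behind `calculate_knn_probs.py` (a simplified sketch with made-up data and a plain numpy neighbor search, not the fridge script itself): the probability that an event belongs to a given systematics dataset is estimated from the labels of its nearest neighbours in reconstruction-feature space.

```python
import numpy as np

# Simplified kNN class-probability sketch; the real script works per
# systematics dataset with --neighbors-per-class 200. Data are made up.
def knn_class_probs(features, labels, query, k=3):
    dists = np.linalg.norm(features - query, axis=1)  # Euclidean distances
    nearest = labels[np.argsort(dists)[:k]]           # labels of k closest events
    return {c: float(np.mean(nearest == c)) for c in np.unique(labels)}

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0]])
y = np.array(["nominal", "nominal", "nominal", "shifted", "shifted"])
probs = knn_class_probs(X, y, query=np.array([0.05, 0.05]), k=3)
```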

**Second**: Calculate the gradients that best fit the probabilities with:
70 changes: 70 additions & 0 deletions pisa/stages/osc/decay_params.py
@@ -0,0 +1,70 @@
# author: Anil Kumar ([email protected])
# date: 2023

"""
DecayParams: Characterize neutrino decay parameters
(alpha3)
"""

from __future__ import division

import numpy as np

from pisa import FTYPE, CTYPE

__all__ = ['DecayParams']

class DecayParams(object):
"""
Holds neutrino decay parameters, i.e., alpha
Parameters
----------
decay_alpha3: float
expected to be given in [eV^2]
Attributes
----------
decay_alpha3 : float
Cf. parameters
decay_matrix : 3d complex array
"""
def __init__(self):

self._decay_alpha3 = 0.
self._decay_matrix = np.zeros((3, 3), dtype=CTYPE)

# --- alpha3 ---
@property
def decay_alpha3(self):
"""alpha3"""
return self._decay_alpha3

@decay_alpha3.setter
def decay_alpha3(self, value):
self._decay_alpha3 = value

@property
def decay_matrix(self):
"""Neutrino decay matrix"""
decay_mat = np.zeros((3, 3), dtype=CTYPE)

decay_mat[2, 2] = -self.decay_alpha3 * 1j

return decay_mat

def test_decay_params():
"""
# TODO: implement me!
"""
pass


if __name__=='__main__':
from pisa import TARGET
from pisa.utils.log import set_verbosity, logging
assert TARGET == 'cpu', "Cannot test functions on GPU, set PISA_TARGET to 'cpu'"
set_verbosity(1)
test_decay_params()
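The matrix built by `decay_matrix` above has a single nonzero entry, `-i*alpha3`, in the (3,3) position. A standalone mirror of that construction (illustrative value for `alpha3`, and plain `complex128` standing in for PISA's `CTYPE`):

```python
import numpy as np

# Standalone sketch of DecayParams.decay_matrix: only the (3,3) element is
# nonzero, equal to -i*alpha3 (alpha3 expected in eV^2). Value is made up.
def decay_matrix(alpha3):
    mat = np.zeros((3, 3), dtype=np.complex128)
    mat[2, 2] = -1j * alpha3
    return mat

D = decay_matrix(1e-4)  # alpha3 = 1e-4 eV^2, an illustrative value
```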
60 changes: 57 additions & 3 deletions pisa/stages/osc/prob3.py
@@ -11,12 +11,13 @@
import numpy as np
from numba import guvectorize

from pisa import FTYPE, TARGET, ureg
from pisa import FTYPE, CTYPE, TARGET, ureg
from pisa.core.stage import Stage
from pisa.utils.log import logging
from pisa.utils.profiler import profile
from pisa.stages.osc.nsi_params import StdNSIParams, VacuumLikeNSIParams
from pisa.stages.osc.osc_params import OscParams
from pisa.stages.osc.decay_params import DecayParams
from pisa.stages.osc.layers import Layers
from pisa.stages.osc.prob3numba.numba_osc_hostfuncs import propagate_array, fill_probs
from pisa.utils.numba_tools import WHERE
@@ -38,6 +39,7 @@ class prob3(Stage):
YeI : quantity (dimensionless)
YeO : quantity (dimensionless)
YeM : quantity (dimensionless)
density_scale : quantity (dimensionless)
theta12 : quantity (angle)
theta13 : quantity (angle)
theta23 : quantity (angle)
@@ -61,6 +63,7 @@ class prob3(Stage):
eps_mutau_magn : quantity (dimensionless)
eps_mutau_phase : quantity (angle)
eps_tautau : quantity (dimensionless)
decay_alpha3 : quantity (energy^2)
**kwargs
Other kwargs are handled by Stage
@@ -72,6 +75,8 @@ def __init__(
self,
nsi_type=None,
reparam_mix_matrix=False,
neutrino_decay=False,
scale_density=False,
**std_kwargs,
):

@@ -89,6 +94,10 @@ def __init__(
'deltam31',
'deltacp'
)

# Add (matter) density scale as free parameter?
if scale_density:
expected_params = expected_params + ('density_scale',)

# Check whether and if so with which NSI parameters we are to work.
if nsi_type is not None:
@@ -107,6 +116,15 @@ def __init__(
the standard one by an overall phase matrix
diag(e^(i*delta_CP), 1, 1). This has no impact on
oscillation probabilities in the *absence* of NSI."""

self.neutrino_decay = neutrino_decay

if neutrino_decay:
self.decay_flag = 1
else:
self.decay_flag = -1

"""Invoke neutrino decay with neutrino oscillation."""

if self.nsi_type is None:
nsi_params = ()
@@ -131,7 +149,13 @@ def __init__(
'eps_mutau_phase',
'eps_tautau'
)
expected_params = expected_params + nsi_params

if self.neutrino_decay:
decay_params = ('decay_alpha3',)
else:
decay_params = ()

expected_params = expected_params + nsi_params + decay_params

# init base class
super().__init__(
@@ -143,6 +167,8 @@ def __init__(
self.layers = None
self.osc_params = None
self.nsi_params = None
self.decay_params = None
self.decay_matrix = None
# Note that the interaction potential (Hamiltonian) just scales with the
# electron density N_e for propagation through the Earth,
# even(to very good approx.) in the presence of generalised interactions
@@ -171,6 +197,10 @@ def setup_function(self):
elif self.nsi_type == 'standard':
logging.debug('Working in standard NSI parameterization.')
self.nsi_params = StdNSIParams()

if self.neutrino_decay:
logging.debug('Working with neutrino decay')
self.decay_params = DecayParams()

# setup the layers
#if self.params.earth_model.value is not None:
@@ -221,9 +251,17 @@ def calc_probs(self, nubar, e_array, rho_array, len_array, out):
mix_matrix = self.osc_params.mix_matrix_reparam_complex
else:
mix_matrix = self.osc_params.mix_matrix_complex

logging.debug('mat pot:\n%s'
% self.gen_mat_pot_matrix_complex)
logging.debug('decay mat:\n%s'
% self.decay_matix)

propagate_array(self.osc_params.dm_matrix, # pylint: disable = unexpected-keyword-arg, no-value-for-parameter
mix_matrix,
self.gen_mat_pot_matrix_complex,
self.decay_flag,
self.decay_matix,
nubar,
e_array,
rho_array,
@@ -251,6 +289,12 @@ def compute_function(self):
self.layers.calcLayers(container['true_coszen'])
container['densities'] = self.layers.density.reshape((container.size, self.layers.max_layers))
container['distances'] = self.layers.distance.reshape((container.size, self.layers.max_layers))

# Matter density scale is a free parameter?
if 'density_scale' in self.params.names:
density_scale = self.params.density_scale.value.m_as('dimensionless')
else:
density_scale = 1.

# some safety checks on units
# trying to avoid issue of angles with no dimension being assumed to be radians
@@ -290,6 +334,9 @@
self.params.eps_mutau_phase.value.m_as('rad'))
)
self.nsi_params.eps_tautau = self.params.eps_tautau.value.m_as('dimensionless')

if self.neutrino_decay:
self.decay_params.decay_alpha3 = self.params.decay_alpha3.value.m_as('eV**2')

# now we can proceed to calculate the generalised matter potential matrix
std_mat_pot_matrix = np.zeros((3, 3), dtype=FTYPE) + 1.j * np.zeros((3, 3), dtype=FTYPE)
@@ -307,11 +354,18 @@
self.gen_mat_pot_matrix_complex = std_mat_pot_matrix
logging.debug('Using standard matter potential:\n%s'
% self.gen_mat_pot_matrix_complex)

if self.neutrino_decay:
self.decay_matix = self.decay_params.decay_matrix
logging.debug('Decay matrix:\n%s' % self.decay_params.decay_matrix)
else:
self.decay_matix = np.zeros((3, 3), dtype=FTYPE) + 1.j * np.zeros((3, 3), dtype=FTYPE)


for container in self.data:
self.calc_probs(container['nubar'],
container['true_energy'],
container['densities'],
container['densities']*density_scale,
container['distances'],
out=container['probability'],
)
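Conceptually, the decay matrix passed to the propagation enters the evolution as an anti-Hermitian addition to the Hamiltonian, H_eff = H - iD, which damps the decaying state's amplitude. A toy two-state sketch of that damping (illustrative numbers, not the numba `propagate_array` kernel):

```python
import numpy as np

# Toy two-state sketch of decay damping (NOT the propagate_array kernel):
# with a diagonal H_eff = diag(E) - 1j*diag(gamma), the evolution factor
# exp(-i*H_eff*L) suppresses the decaying state by exp(-2*gamma*L) in
# probability, so total probability is no longer conserved.
E = np.array([1.0, 1.2])        # toy energy eigenvalues
gamma = np.array([0.0, 0.05])   # toy decay rates (alpha3-like term)
L = 10.0                        # toy baseline

h_eff = E - 1j * gamma          # diagonal effective Hamiltonian
amps = np.exp(-1j * h_eff * L)  # evolution of each eigenstate
probs = np.abs(amps) ** 2
# probs[0] stays at 1 (stable state); probs[1] is damped to exp(-2*gamma[1]*L)
```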
