Hello,
I am performing a standard analysis on a specific source, but the issue is source-independent and can be reproduced with other sources (details below). I am using Fermipy v.1.2.0 and FermiTools v.2.2.0, working on a Linux machine.
Once the light curve is generated with the dedicated function (lc = gta.lightcurve(...)), I checked the variability index returned by fermipy (lc['ts_var']). The obtained value is unreasonably large (ts_var ~ 721786 with dof = 58 for the specific case described below). I investigated the problem and narrowed it down to the estimation of the delta log-likelihood entering the formula for the TS_var calculation. In particular, gta.lightcurve returns unreasonably large values (both negative and positive) for the log-likelihood of the constant-flux model in a few bins. Because of the additive nature of the log-likelihood, these bins dominate the sum and corrupt the estimate. I would have expected the TS_var estimation to take this case into account, either aborting the light-curve production or flagging the problem in a more straightforward way.
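Since TS_var is twice the sum over bins of the difference between the per-bin and constant-flux log-likelihoods, a single runaway bin can dominate it. Below is a minimal sketch of how I recompute and inspect it by hand; it assumes the per-bin log-likelihoods are exposed in the lightcurve dictionary under keys such as 'loglike' and 'loglike_const' (the exact key names may differ between fermipy versions):

import numpy as np

# Recompute TS_var = 2 * sum_i [logL_i(free flux) - logL_i(constant flux)].
# Assumed key names: 'loglike' (free-flux fit) and 'loglike_const'
# (constant-flux fit); please check the lightcurve output of your fermipy version.
loglike = np.asarray(lc['loglike'])
loglike_const = np.asarray(lc['loglike_const'])

dloglike = loglike - loglike_const
ts_var = 2.0 * np.sum(dloglike)
print('recomputed ts_var:', ts_var)

# Rank the bins by their contribution: a few bins with a runaway
# loglike_const dominate the sum.
for i in np.argsort(np.abs(dloglike))[::-1][:5]:
    print(f'bin {i}: 2*dloglike = {2.0 * dloglike[i]:.1f}, '
          f'loglike = {loglike[i]:.1f}, loglike_const = {loglike_const[i]:.1f}')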
This problem occurs when one is dealing with faint sources and/or small time bins (in the example below the time bin is 3 months, which is short for a misaligned AGN), while no issue arises with longer time bins (e.g. 1 year).
The (standard) analysis was performed in the following way:
gta.setup()
gta.optimize()
# fix all sources, then free the normalisations within 10 deg,
# the target source and the diffuse components
gta.free_sources(free=False)
gta.free_sources(distance=10.0, pars='norm')
gta.free_source('4FGL J1630.6+8234')
gta.free_source('galdiff')
gta.free_source('isodiff')
gta.fit()
gta.localize('4FGL J1630.6+8234')
gta.fit()

# light curve production: only the target is left free
gta.free_sources(free=False)
gta.free_source('4FGL J1630.6+8234')
# binsz is in seconds; month_met is assumed to hold one month in MET seconds
lc = gta.lightcurve('4FGL J1630.6+8234', binsz=3*month_met,
                    multithread=True, free_background=True)
(The TS_var value is even larger, and even more unreasonable, if one also leaves other components free to vary, e.g. galdiff, isodiff, or other sources.)
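As a crude workaround (not a fix of the underlying issue), one could exclude the bins where the constant-flux fit runs away before summing. A minimal sketch, again assuming the key names 'loglike', 'loglike_const' and a per-bin 'fit_success' flag (which may differ in your fermipy version), and using an arbitrary cut against runaway values:

import numpy as np

# Recompute TS_var using only bins where the fit converged and the
# constant-flux log-likelihood is not a runaway value.
# 'fit_success', 'loglike' and 'loglike_const' are assumed key names.
loglike = np.asarray(lc['loglike'])
loglike_const = np.asarray(lc['loglike_const'])
good = np.asarray(lc['fit_success'], dtype=bool)
good &= np.abs(loglike_const) < 1e5   # arbitrary threshold against runaway bins

ts_var_clean = 2.0 * np.sum(loglike[good] - loglike_const[good])
print(f'cleaned ts_var: {ts_var_clean:.1f} over {good.sum()} bins')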
The configuration file used for the analysis is attached: config.txt
I also attached the light curve in photon flux (top panel) and the TS value of each time bin as a function of time (bottom panel).
The color scale shows 2*delta log-likelihood, where delta log-likelihood = loglike - loglike_const for each time bin. Red inverted triangles are the 95% upper limits for bins with TS < 16 (note that for these bins the data point with its error bar is also shown, purely for illustrative and graphical purposes).
Do you have any explanation for these strange results?
All comments/suggestions are very welcome!