TS_var value returned by gta.lightcurve is unreasonably large in case of small bins #577

Open
ebronzini opened this issue Mar 21, 2024 · 0 comments

Hello,
I am performing a standard analysis on a specific source, but the issue is source-independent and can be reproduced with other sources (details below). I am using Fermipy v1.2.0 and FermiTools v2.2.0 on a Linux machine.

Once the light curve is generated with the dedicated function (lc = gta.lightcurve), I checked the variability index returned by Fermipy (lc['ts_var']). The obtained value is unreasonably large (TS_var ~ 721786 with dof = 58 for the specific case described below). I investigated and narrowed the problem down to the estimation of the delta log-likelihood entering the formula for the TS_var calculation. In particular, gta.lightcurve returns unreasonably large values (both negative and positive) for the log-likelihood of the constant model in a few bins. Because the log-likelihood is additive, these bins dominate the sum and corrupt the estimate. I would expect the TS_var estimation to take this case into account, either aborting the light-curve production or flagging the problem in a more straightforward way.
This problem occurs when dealing with faint sources and/or small time bins (in the following example the time bin is 3 months, which is short for a misaligned AGN), while there is no issue with longer time bins (e.g. 1 year).
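For reference, I believe Fermipy computes TS_var following Sect. 3.6 of the 2FGL paper (Nolan et al. 2012), i.e. TS_var = 2 * sum_i (loglike_i - loglike_const_i). The sketch below uses made-up per-bin log-likelihoods (not values from my analysis) to show how a single bin with a corrupted constant-model log-likelihood dominates the sum, and one possible way to flag such bins:

```python
import numpy as np

# Hypothetical per-bin log-likelihoods illustrating the issue: the third bin
# has a corrupted constant-model log-likelihood. All numbers are invented.
loglike = np.array([-1000.2, -998.7, -1001.5, -999.9])
loglike_const = np.array([-1001.0, -999.5, -360000.0, -1000.3])  # bin 2 corrupted

# TS_var as in Sect. 3.6 of the 2FGL paper:
# TS_var = 2 * sum_i (loglike_i - loglike_const_i)
per_bin = 2.0 * (loglike - loglike_const)
ts_var = per_bin.sum()

# Flag bins whose contribution is wildly out of scale with the typical one.
suspect = np.abs(per_bin) > 100 * np.median(np.abs(per_bin))

print(ts_var)                 # dominated by the corrupted bin
print(np.where(suspect)[0])   # index of the suspicious bin
```

A per-bin check of this kind (however crude the threshold) would have caught my case immediately, since the corrupted bins contribute orders of magnitude more than the others.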

The (standard) analysis was performed in the following way:

  • gta.setup()
  • gta.optimize()
  • gta.free_sources(free = False)
  • gta.free_sources(distance=10.0,pars='norm')
  • gta.free_source('4FGL J1630.6+8234')
  • gta.free_source('galdiff')
  • gta.free_source('isodiff')
  • gta.fit()
  • gta.localize('4FGL J1630.6+8234')
  • gta.fit()
    #lightcurve production
  • gta.free_sources(free = False)
  • gta.free_source('4FGL J1630.6+8234')
  • lc = gta.lightcurve('4FGL J1630.6+8234', binsz=3*month_met, multithread=True,free_background=True)
    (The TS_var value is even larger, and even more unreasonable, if other components are also left free to vary, e.g. galdiff, isodiff, other sources.)
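For completeness, month_met in the lightcurve call above is just a helper variable of mine holding the length of one month in mission elapsed time (MET) seconds; a minimal sketch, assuming a 30-day month:

```python
# Hypothetical definition of the `month_met` helper used in the call above.
# Fermi MET is counted in seconds, so a 30-day "month" is:
month_met = 30 * 86400      # 2,592,000 s
binsz = 3 * month_met       # ~3-month bins passed to gta.lightcurve
print(binsz)
```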

Attached is the configuration file used for the analysis:
config.txt

I also attached the light curve in photon flux (top panel) and the TS value of each time bin as a function of time (bottom panel).
[attached image: light curve and per-bin TS]
The color scale shows 2*delta log-likelihood, where delta log-likelihood = loglike - loglike_const for each time bin. Red inverted triangles are 95% upper limits for bins with TS < 16 (note that for upper-limit bins the data point with its error bar is also shown, purely for illustrative and graphical purposes).

Do you have any explanation for these strange results?

All comments/suggestions are very welcome!
