
Science tool development directions

This page was started 31 October 2006; further comments added 22 Nov 2006 by SD

Inquiring minds would like to know what directions the science tools will take over the year remaining before launch (or the ~2 years before tools are delivered to guest investigators). The science tools in question for this discussion are those that will be part of the standard set that will be made available to guest investigators, the 'SAE'. The idea is to define what is needed and what is desirable, and what fits within the time/person-power constraints.

In the near term, feedback from the GUC beta test will undoubtedly influence development work, although I'm not forecasting any major overhauls.

General

DLB (11/26/06)--In the beta test, the first activity after the users extracted data was to look at it spatially and temporally. Consequently, users relied heavily on both fv and ds9. Our files lend themselves to this, and in the documentation we describe rudimentary uses of these tools to look at the data, but perhaps we should think a bit more about the interface to these tools.
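
For example, a thin wrapper that opens an FT1 file in ds9 already binned on sky coordinates might be a useful starting point. A minimal sketch in Python; the file name is illustrative, and the -bin options should be checked against the installed ds9 version:

    # Open an FT1 event file in ds9, binning the events on the RA/DEC
    # columns rather than the default X/Y columns.
    import subprocess

    def view_ft1(ft1_file="ft1_events.fits"):
        subprocess.call(["ds9", ft1_file, "-bin", "cols", "RA", "DEC"])

    view_ft1()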

Likelihood analysis

Some topics come to mind, pointed observations being one of them.

DLB (11/26/06)--We have discussed developing a new version of ModelEditor that edits and creates the XML model files for both likelihood and observation simulations. Since good source models are crucial for the likelihood analysis of any field that is not dominated by a single bright source, we must supply users with a robust, powerful version of ModelEditor. Also, I think we will find that users will want to create a simulated set of counts using gtobssim, which they will then analyze with likelihood, and for this they will want to use the same source model for both simulation and analysis.
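
As a concrete illustration, here is a minimal sketch of writing a one-source model file in the likelihood-style XML schema; the parameter values and file names are placeholders, and whether gtobssim can consume exactly this schema is part of what ModelEditor would have to reconcile:

    # Write a single point-source XML model file (likelihood-style schema).
    # Position and spectral parameters are illustrative only.
    MODEL = """<source_library title="example">
      <source name="ExampleSrc" type="PointSource">
        <spectrum type="PowerLaw">
          <parameter name="Prefactor" free="1" min="1e-5" max="1e3" scale="1e-9" value="1.0"/>
          <parameter name="Index" free="1" min="-5.0" max="-1.0" scale="1.0" value="-2.1"/>
          <parameter name="Scale" free="0" min="30" max="2000" scale="1.0" value="100"/>
        </spectrum>
        <spatialModel type="SkyDirFunction">
          <parameter name="RA" free="0" min="0" max="360" scale="1.0" value="83.57"/>
          <parameter name="DEC" free="0" min="-90" max="90" scale="1.0" value="22.01"/>
        </spatialModel>
      </source>
    </source_library>"""

    open("example_model.xml", "w").write(MODEL)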

DLB (11/26/06)--Do we understand what TS means, i.e., what the detection significance really is for a given TS? I know what it is supposed to be, but I recall that Jim did some simulations that showed that a detection was more significant for a given TS than theory said it should be.
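
For reference, a minimal sketch of the nominal conversion, assuming Wilks' theorem applies and taking the usual one-sided correction for the flux being bounded at zero (half chi-square with one degree of freedom for one additional free parameter, in which case the equivalent significance is exactly sqrt(TS)); Jim's simulation results would show up as a departure from this curve:

    # Nominal TS -> Gaussian-equivalent significance, assuming the null
    # distribution of TS is (1/2)*chi-square with 1 degree of freedom
    # (one extra free parameter, flux bounded at zero).
    from scipy.stats import chi2, norm

    def nominal_significance(ts):
        p = 0.5 * chi2.sf(ts, 1)   # chance probability under the null
        return norm.isf(p)         # one-sided Gaussian sigma; equals sqrt(ts)

    print(nominal_significance(25.0))   # ~5 sigma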

DLB (11/26/06)--It would be useful to create and post running time benchmarks for various tools in the likelihood suite (this was suggested at the beta test). This would help users decide whether to do a binned or unbinned run. They would also be able to estimate how long their computer will be chugging away.
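
Even a crude harness would do for collecting the numbers; a minimal sketch, where the tool name and parameter syntax are illustrative and real parameter names should come from each tool's .par file:

    # Time a science tool run; repeat over binned/unbinned cases and
    # tabulate the results for the benchmarks page.
    import subprocess, time

    def time_tool(cmd):
        t0 = time.time()
        status = subprocess.call(cmd)
        return time.time() - t0, status

    # hypothetical invocation -- substitute a real parameter list
    elapsed, status = time_tool(["gtlike", "statistic=UNBINNED"])
    print("%.1f s (exit status %d)" % (elapsed, status))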

GRB analysis

Is the current complement of tools complete?

DLB (11/26/06)--We should develop some XSPEC functions that parameterize the spectra expected in the GeV range. At the very least, we could use a version of a power law that can be normalized to an energy in the LAT range (see the sketch under 'XSPEC analysis of strong sources' below).

DLB (11/26/06)--Treatment of unbinned counts needs to be added to gtburstfit. Jeff Scargle has the methodology, which needs to be implemented.
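
For orientation, here is a minimal sketch of a dynamic-programming segmentation of unbinned event times in the style of Scargle's Bayesian Blocks, assuming at least two distinct, sortable event times and a Poisson block fitness; the ncp_prior penalty is illustrative, and the actual prescription should come from Jeff:

    import numpy as np

    def bayesian_blocks(t, ncp_prior=4.0):
        # Optimal piecewise-constant segmentation of sorted event times.
        t = np.sort(np.asarray(t, dtype=float))
        n = t.size
        # candidate block edges: midpoints between events, plus the span ends
        edges = np.concatenate(([t[0]], 0.5 * (t[1:] + t[:-1]), [t[-1]]))
        best = np.zeros(n)
        last = np.zeros(n, dtype=int)
        for k in range(n):
            # width and event count of every block ending at cell k
            width = edges[k + 1] - edges[:k + 1]
            count = np.arange(k + 1, 0, -1)
            fitness = count * (np.log(count) - np.log(width))  # Poisson log-likelihood
            total = fitness - ncp_prior
            total[1:] += best[:k]
            last[k] = int(np.argmax(total))
            best[k] = total[last[k]]
        # backtrack through 'last' to recover the optimal change points
        cps = [n]
        ind = n
        while ind > 0:
            ind = last[ind - 1]
            cps.append(ind)
        return edges[np.array(cps[::-1])]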

Pulsar analysis

From Masa: "In the pulsar tools area, the major works are:
1) to implement blind search tool (A4 tool),
2) to create a new tool to plot a pulse profile (w/ periodicity test results),
3) and to introduce a method that contains the full functionality of the tool, so that a software developer (or a Python script) can call it as a part of their software."
[He has updated the current status page for pulsar tools with these items, including some more details.]

"Another major task is to develop a system to ingest and distribute pulsar ephemerides database. Other than that, we have several minor improvements in mind, such as, improving gtephcomp output, implementing more time systems, and technical refactoring for easy maintenance."

Observation simulation

Will an orbit/attitude simulator with at least semi-realistic attitude profiles/knowledge of constraints really be part of the SAE? Should we assume that pointing history files will be made available (or generated on request) for various scenarios?

Are any important source types missing? Is simulating residual background at the gtobssim level important, and is it feasible?

Infrastructure

GUI(s)?

Utilities

In the past, I've argued that we need a utility in the SAE for examining/displaying the IRFs.
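
As a starting point, a minimal sketch that reads an effective-area table straight from an IRF FITS file and prints it at near-normal incidence. The file name is illustrative, and the extension/column names and array layout assume a CALDB-style file (ENERG_LO/ENERG_HI, CTHETA_LO/CTHETA_HI, EFFAREA), which should be checked against the delivered files:

    # Dump effective area vs. energy at near-normal incidence from a
    # CALDB-style aeff file (extension and column names are assumptions).
    import astropy.io.fits as fits   # pyfits offers the same interface
    import numpy as np

    hdu = fits.open("aeff_example.fits")["EFFECTIVE AREA"]
    elo = hdu.data["ENERG_LO"][0]
    ehi = hdu.data["ENERG_HI"][0]
    nct = hdu.data["CTHETA_LO"][0].size
    aeff = hdu.data["EFFAREA"][0].reshape(nct, elo.size)
    for e, a in zip(np.sqrt(elo * ehi), aeff[-1]):   # last row: cos(theta) ~ 1
        print("%8.1f MeV  %8.1f cm^2" % (e, a))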

Question: Did anyone else try out the event display tool during DC2? It was impressive and fun to play with, but I didn't need it.

Other issues

Delivery of science tools to the GSSC

In terms of delivery, a long time ago the SSC-LAT working group (or whatever we called ourselves) declared that it would be the group that decided when a given tool was 'ready' for delivery. I think we probably don't need to re-convene the group, but I'd like your opinions on whether the tools should pass some not-yet-written battery of tests, beyond the unit tests for the packages, before they are accepted, and on whether, during the mission, the GSSC will issue incremental releases of the SAE tools at the same rate that the LAT team 'delivers' them.

Real life

I think that Jim has generalized the IRF lookup to allow for time dependence of the IRFs. I don't know how likely we are to need time-dependent response functions - e.g., owing to something like a hardware failure - but at least in principle we could want to make analyses (with Likelihood or gtrspgen) that span such a change. The only obvious problem would be with live time cubes: because a cube integrates over the whole time range, a single cube could not be paired with IRFs that change partway through, and it would have to be split at the epoch boundary.

Also, we'll need to figure out how we'll really assign ID numbers to events.

We are still grappling with how to handle the residual backgrounds in the data; even the 'irreducible' component is not all that small at low energies. The orbit and attitude dependence of the background (and of the residual background) complicates modeling, but we should probably deliver some sort of reliable model for residual backgrounds, just as we deliver a model of the diffuse gamma-ray emission.

DLB (11/26/06)--I don't know whether we've ever made an official decision on this, but I think we should recognize that Mac OS X must now be one of the supported platforms.

DLB (11/26/06)--We need to automate and speed up the creation of builds on all platforms. It took a long time to get the SAE running on all the different platforms for the beta test, and we ended up with various tools not working on different platforms. Half our beta testers had Macs, yet we did not have the time to test the Mac version as thoroughly as we'd have liked.

Live time and pointing history

The 'accumulated livetime since start of mission' is looking quite difficult to obtain - at one time it was going to be easy. I think that the need for it is sufficiently small that we should consider omitting it from the FT1 file.
The FT2 file will need to continue to have accumulated live times for each interval of time, for exposure calculations. The attitude and position information that we'll get from the spacecraft, and will want to use for L1 processing, is not in anything like FT2 format - among the differences are the frequency of updates (much greater), the use of quaternions, the availability of angular velocities, and the asynchronicity of the attitude and position information. Do we want to change the FT2 format to relate more closely to what comes in the telemetry?

The short answer is no, if only because the files would be very large and not offer any useful advantages in terms of, say, accuracy of the exposure calculation.

I sometimes wonder what the GBM is doing regarding position/attitude information; their FT2 equivalent uses quaternions, but I think that position and attitude are interpolated to the same point in time.
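
If we do interpolate to common timestamps, the attitude side is straightforward; a minimal sketch of spherical linear interpolation between two attitude quaternions, where the quaternion convention and sampling cadence are assumptions:

    import numpy as np

    def slerp(q0, q1, f):
        # Spherical linear interpolation between unit quaternions q0 and
        # q1, for a fraction f in [0, 1] of the interval between samples.
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        dot = np.dot(q0, q1)
        if dot < 0.0:                  # take the shorter arc
            q1, dot = -q1, -dot
        if dot > 0.9995:               # nearly parallel: linear fallback
            q = q0 + f * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1.0 - f) * theta) * q0
                + np.sin(f * theta) * q1) / np.sin(theta)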

Time Series

DLB (11/26/06)--The lightcurve functionality in gtbin just bins the counts in an FT1 file in time; the lightcurve therefore does not compensate for the exposure. For those of us who look at gamma-ray burst lightcurves this is OK; indeed, for planning further analysis this is what we want. However, those looking at sources on longer timescales (i.e., over many orbits) may want to correct for exposure; the AGN people have requested this. For counts from a point source this makes sense, but it can be problematic when the counts originate from a large region. So...should we add an exposure correction?
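
If we do add it, the core operation is simple once per-bin exposures are in hand; a minimal sketch, where the array names are illustrative and the exposures would have to come from the pointing history and IRFs:

    import numpy as np

    def exposure_corrected_rate(counts, exposure):
        # Convert binned counts to fluxes given per-bin exposures
        # (cm^2 s); Poisson errors propagated as sqrt(N)/exposure.
        counts = np.asarray(counts, dtype=float)
        exposure = np.asarray(exposure, dtype=float)
        rate = counts / exposure
        rate_err = np.sqrt(counts) / exposure
        return rate, rate_err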

DLB (11/26/06)--gtbin was meant to be a simple binning tool, i.e., accumulating counts into energy/temporal/spatial bins. It does not calculate uncertainties for the number of counts. However, showing uncertainties would be appropriate in displaying the products, particularly when the counts are divided by livetime or exposure, or undergo some other transformation. So, how should we proceed?
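
One option for the display side is the usual Poisson treatment, with the Gehrels (1986) approximation where counts are small, since plain sqrt(N) misbehaves as N approaches zero; a minimal sketch:

    import numpy as np

    def poisson_errors(n):
        # Approximate 1-sigma confidence bounds on a Poisson count n,
        # after Gehrels (1986); reduces to ~sqrt(n) for large n.
        n = np.asarray(n, dtype=float)
        upper = 1.0 + np.sqrt(n + 0.75)
        nn = np.clip(n, 1.0, None)     # guard the n = 0 case
        lower = np.where(n > 0,
                         n - nn * (1.0 - 1.0 / (9.0 * nn)
                                   - 1.0 / (3.0 * np.sqrt(nn))) ** 3,
                         0.0)
        return lower, upper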

XSPEC analysis of strong sources

DLB (11/26/06)--Since strong sources may be analyzed with XSPEC, we might want to create XSPEC functions suited to the ~GeV range. For example, it would be useful to have versions of the standard power law, broken power law, etc., normalized at a GeV. When data at a GeV are fit by functions normalized at 1 keV, the normalization is highly correlated with the other parameters.
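
A minimal sketch of the kind of function meant here, a power law normalized at a pivot energy E0 instead of at 1 keV; the values are illustrative, and an actual XSPEC local model would wrap this in the usual local-model machinery:

    def pivot_powerlaw(E, K, gamma, E0=1.0e6):
        # Photon flux density K*(E/E0)**(-gamma) in photons/cm^2/s/keV,
        # with energies in keV; E0 = 1e6 keV = 1 GeV. Placing E0 inside
        # the fitted band decorrelates K from the photon index gamma.
        return K * (E / E0) ** (-gamma)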
