Date: Fri, 25 Jun 2004 12:36:26 +0200 (MEST)
Subject: Re: Help for my Thesis -- again and again --

Dear Silvia,

I've finally got round to various little things that I'd put aside while I struggled with Planck and the OSA 4.0 freeze. Among these is answering your questions about deadtime.

What you've written seems just fine to me. You should stress that everything that comes out of the j_dead_time_calc program has a very low resolution of 8 s, because the calculations are based on housekeeping data found in JMXi-CSSW-HRW: hardware triggers, software triggers, all sorts of counters for keeping track of how many events are lost due to rejections and buffer loss, CPU mode, etc. In most cases this will be just fine: with a lowish, constant incoming flux, deadtime is about 11%.

I distinguish between two sorts of dead time: when the instrument is actually `dead' (it can't accept incoming events because it is busy reading in earlier ones) or wastes events in other ways (buffer loss), and when we choose to let events be lost (grey filter loss). So there's real dead time and effective grey filter dead time. There are two reasons for this distinction.

Firstly, grey filter rejection changes deadtime in two different ways. On the one hand it decreases it, because grey filter rejection is very quick: the read-in mechanism doesn't even read the event, it simply checks whether it's due to be discarded and moves on to the next event if the answer is yes. This takes only 16 microseconds, the same as for a first-buffer-full loss (Is there room in the 5-event buffer for another incoming event? No? On to the next one. 16 microseconds. Very fast.) So grey filter loss decreases the real deadtime. The other effect of grey filtering is to increase the effective deadtime. If 15% of all events are filtered out by the grey filter, then the effective deadtime of the instrument is increased by 15% on top of the real deadtime due to actual event processing; i.e. choosing not to process some of the available events has a cost too!

Secondly, of all the quantities that determine deadtime, only one is found somewhere other than the housekeeping, and that's the grey filter value. While we only get the total number of rejected grey filter events every 8 s, the event-resolution grey filter value reporting gives us a method to determine the effective deadtime much more accurately.

This is important because deadtime depends very non-linearly on the rate at which events arrive. If you have 1000 events arriving more or less evenly over 8 seconds, you will have far fewer lost events than if the events arrive in five 1-second bursts. During each burst many events will be lost, while almost all events between the bursts will be measured.

So for quiet fields with steady sources that don't stress the instrument, the 8 s effective deadtime will be quite sufficient. For weakly varying or weakly pulsating sources, the 8 s effective deadtime is a good first approximation, though for time-varying sources the combination of the 8 s real deadtime plus the fine-resolution effective grey filter deadtime would be better. The point at which using the fine resolution becomes necessary depends on the frequency of the time variation and the strength of the source. Remember that our deadtime comes mostly from background particle rejection, so a real source has to be very strong to significantly alter the number of software triggers on which the instrument has to work. I think about 90% of all our measured events are rejected as particles.
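To make the bookkeeping concrete, here is a little Python sketch of one plausible way to combine the 8 s real deadtime with the event-resolution grey filter value. The (G+1)/32 telemetry fraction is the one from your own thesis text quoted below; the function name and the multiplicative combination of the two loss factors are my own illustration, not a transcription of what j_dead_time_calc actually does.

    def effective_dead_fraction(dead_time_frac, grey_filter_value):
        # dead_time_frac: real dead time fraction (DEADTIME over the 8 s
        #                 polling cycle) from JMXi-DEAD-SCP
        # grey_filter_value: G in 0..31, read at event resolution
        #                    from JMXi-INST-STA
        # NOTE: treating the two losses as independent and multiplying
        # the surviving fractions is my assumption.
        live_frac = 1.0 - dead_time_frac            # time the hardware can take events
        kept_frac = (grey_filter_value + 1) / 32.0  # accepted events kept in telemetry
        return 1.0 - live_frac * kept_frac          # effective dead fraction

    # Example: 11% real deadtime and G = 26, i.e. about 16% of events
    # grey-filtered out, gives an effective deadtime of about 25%.
    print(effective_dead_fraction(0.11, 26))

The numbers behave the way I described above: the grey filter loss stacks on top of the real deadtime, because the live fraction is reduced by both factors.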
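And here is a toy Monte Carlo of the burst effect, using a simple non-paralysable deadtime model (an event arriving while the previous one is still being handled is simply lost). The 1 ms per-event handling time is an invented round number, chosen only so that the steady case lands near our familiar 11%; it is not a measured JEM-X figure, and the real handling times vary with event type.

    import random

    TAU = 1e-3  # toy per-event handling time in seconds (invented, not a JEM-X figure)

    def lost_fraction(arrival_times, tau=TAU):
        # Non-paralysable model: an event arriving while the previous
        # accepted event is still being handled is lost and does not
        # extend the busy period.
        busy_until, lost = -1.0, 0
        for t in sorted(arrival_times):
            if t < busy_until:
                lost += 1
            else:
                busy_until = t + tau
        return lost / len(arrival_times)

    random.seed(0)
    # 1000 events spread evenly over 8 s, versus the same 1000 events
    # squeezed into five 1-second bursts.
    steady = [random.uniform(0.0, 8.0) for _ in range(1000)]
    bursty = [start + random.uniform(0.0, 1.0)
              for start in (0.0, 2.0, 4.0, 6.0, 7.0)
              for _ in range(200)]

    print(f"steady: {lost_fraction(steady):.1%} lost")  # roughly 11%
    print(f"bursty: {lost_fraction(bursty):.1%} lost")  # noticeably more

The same average rate loses noticeably more events when it arrives in bursts, which is exactly why the 8 s numbers stop being good enough for rapidly varying sources. Something of this kind, fed with the actual grey filter history and realistic handling times, is essentially what the offline analysis I describe below would have to do.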
Where all of this breaks down is in the case of a very strong millisecond pulsar, where the instrument experiences rapid bursts of events. This would be quite difficult to analyse without the grey filter, and almost impossible with it! In this situation an offline Poisson analysis of the rate of incoming events must be done by the scientist planning to use the data, taking into account the varying grey filter values and the large buffer loss at the input, as well as the real deadtime. In the very worst scenario, if events arrive so quickly that they don't even register as hardware triggers, then there is no way to determine the actual flux impinging on the instrument. It is very unlikely that this will ever happen. (The only thing worse than this would be having an energy-dependent deadtime!)

I hope this has cleared things up a little. I'm sure you don't need to put all this in your thesis; it's really more for your personal understanding of the problem.

Best wishes,

Carol Anne

PS I'm going on holiday tomorrow, and will be back on 19th July. Have a good summer!

.DANISH.SPACE.RESEARCH.INSTITUTE.DANISH.SPACE.RESEARCH.INSTITUTE.DANISH.
Dr. Carol Anne Oxborrow
Email: oxborrow@dsri.dk
Homepage: http://www.dsri.dk/~oxborrow
Telephone (direct): +45 35 32 57 33
Telephone (secretary): +45 35 32 57 01
Fax: +45 35 36 24 75

> From: Silvia Martínez Núñez
> To: Carol Anne Oxborrow
> Subject: Help for my Thesis -- again and again --
> Date: Thu, 15 Apr 2004 19:54:09 +0200
>
> Dear Carol Anne -
>
> How are you? I hope you have had a great time this Easter. I was working
> hard on the Thesis, but I also took some days off to be with Enrique.
>
> When I was at ISDC I wrote some lines about deadtime correction in the
> general description chapter, and I was pretty happy with them (Peter was
> happy too). Today I am writing some lines on the evolution of deadtime
> correction after launch, and I am not happy with what I wrote.
>
> What I have written in the general description is:
>
> "\subsection{DEAD: Dead Time Calculation}\\
>
> In this step a history of dead time values for each polling cycle (8 seconds)
> for a given JEM-X detector is derived.\\
>
> The time to read an event on board depends on how quickly the event is
> discarded due to grey filtering, buffer loss or particle rejection.\\
>
> A dead time is determined that measures the time the hardware is occupied
> with event handling and cannot take in new events, plus the dead time due
> to buffer losses. This dead time is stored in the column named DEADTIME in
> the data structure JMXi-DEAD-SCP.\\
>
> Fluxes must be corrected for the effect of the grey filter, since the
> fraction of detected events appearing in the telemetry is equal to
> (G+1)/32, where G is the value of the grey filter. The grey filtering is
> taken into account in the determination of an effective dead time, called
> DEADEFF, which is also stored within JMXi-DEAD-SCP.\\
>
> These dead time values have an 8 s resolution, since this is the frequency
> of the housekeeping packets. Higher resolution dead times can be obtained
> by adding to the DEADTIME value the effect of grey filter losses, looking
> up the grey filter value in the instrument status table (JMXi-INST-STA).
> These grey filter values have a single event resolution.\\
>
> However, for very rapidly varying sources an offline deadtime analysis
> should be performed.\\"
>
> But according to the JEM-X Performance and Verification Phase Report and
> one e-mail you sent around some time ago, it looks like the current method
> is only the higher-resolution one, and not the one done for each polling
> cycle, which is the one defined before launch and based on the ratio of
> hardware and software triggers. Could you please clarify this point for me?
>
> Best wishes,
>
> Silvia.
>
> ***********************************************************
> Silvia Martínez Núñez
>
> E-mail: Silvia.Martinez@uv.es
>
> Grupo de Astronomía y Ciencias del Espacio
> /Astronomy and Space Science Group (GACE)
>
> Instituto de Ciencia de los Materiales (ICMUV)
>
> Universidad de Valencia
>
> P.O. Box 22085
> E-46071 Valencia, Spain
>
> Phone No. (+34) 96 354 36 13
>
> Fax No. (+34) 96 354 36 77
>
> ****************************************************************