Using TMVA to distinguish photons
I tried ROOT's TMVA to test several multivariate methods for distinguishing photons from non-photons, based on the preshower energies.
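For concreteness, here is a minimal sketch of the kind of TMVA training macro involved, using ROOT's classic Factory interface; the file, tree, and variable names (photonCandidates.root, signalTree, backgroundTree, pre1, pre2) are placeholders I made up, not the actual ones from this study.

// tmvaPhoton.C -- minimal sketch, hypothetical file/tree/variable names
#include "TFile.h"
#include "TTree.h"
#include "TSystem.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

void tmvaPhoton() {
   gSystem->Load("libTMVA");                               // make sure TMVA is loaded
   TFile* input = TFile::Open("photonCandidates.root");    // hypothetical input file
   TTree* sig   = (TTree*)input->Get("signalTree");        // photons
   TTree* bkg   = (TTree*)input->Get("backgroundTree");    // non-photons

   TFile* outFile = TFile::Open("TMVA_photon.root", "RECREATE");
   TMVA::Factory factory("photonID", outFile, "!V:AnalysisType=Classification");

   // discriminating variables: the preshower energies
   factory.AddVariable("pre1", 'F');
   factory.AddVariable("pre2", 'F');

   factory.AddSignalTree(sig, 1.0);
   factory.AddBackgroundTree(bkg, 1.0);
   factory.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents:!V");

   // a few of the multivariate methods TMVA provides
   factory.BookMethod(TMVA::Types::kCuts,       "Cuts",       "!H:!V");
   factory.BookMethod(TMVA::Types::kLikelihood, "Likelihood", "!H:!V");
   factory.BookMethod(TMVA::Types::kBDT,        "BDT",        "!H:!V:NTrees=200");

   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();

   outFile->Close();
}

The resulting TMVA_photon.root file can then be inspected with the standard TMVAGui macros to compare the methods.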
FGT commissioning
EVO bridge at 16:00 (GMT), duration: 01:00
Present: Hal, Anselm, Bernd, Gerard, Ramiro, Will, Jan, Dave U., and Steve G.
Milestones for FGT commissioning are:
- ***DONE*** demonstrated 90% of channels work before collisions (pedestals are healthy)
- ***DONE*** FGT holds 3.6 kV w/ ArCO2 gas
- demonstrate 90% of channels still work when beam circulates
- timing is set
- mapping in offline DB is correct
- set working point for pp500
FGT commissioning tasks ver 2 (time ordered, [M] denotes a milestone)
13) study of FGT working point: multiple data sets, ~2 days per set, considered combinations:
------------ after a few weeks of pp200 data taking -------
15) tune APV params to optimize signal shape, criterion: signal amplitude to noise RMS, no code for that yet
16) enable ZS, requires Willie's online monitoring to work, and on-line peds computation (by Tonko's code?)
----------- before switch to pp500 ---------
17) choose final working point for FGT HV & gas mix [M6: pp500 working point]
18) operational L2W algo w/ monitoring, Ross is working on it now
Minutes:
- overview of commissioning plan, revision follows (draft schedule in attachment, now obsolete), main conclusions:
- significant # of tasks can and should be accomplished before the first collisions - see green section
- analysis of ped+stat of one already-taken run should proceed now
- we do not have a schedule with dates & names; I'll not give up on this
- the 4 people who will be allowed to operate FGT HV are: Bernd, Gerrit, Ross, Ramiro
- handing FGT operation over to the STAR crew is a very distant idea; the plan is that these 4 people will support FGT operation on site as long as it is needed
- finding the collision signal in FGT data is perceived as very difficult (shown in red)
- it is safer to require the EHT trigger (plan A), but also try to see FGT signal with minB trigger during EMC timing scan (very optimistic scenario)
- FGT deadtime in the early weeks is ~700 us, e.g. 7% dead @ 100 Hz.
- may require lower EHT threshold and turning off HV for 3/4 of ETOW - need dedicated trigger config & EMC settings
- development of the not-yet-existing code fitting the time dependence of individual pulses has been elevated to the top
- in parallel, development of the not-yet-existing code fitting integrals will be pursued
- verification of mapping in DB will have 3 phases: quadrant, APV, and strip-level
- tuning of peds in APV will be done twice: ASAP, and after timing is adjusted
- not decided what FGT gas pressure & HV will be set for the study of the best working point (days-long data sets in pp200)
- analysis criteria were discussed: efficiency vs. HV; cluster size; efficiency vs. saturation of cluster energy; MIP signal to pedestal width;
- disconnect with available software - there is no code to generate any of those observables, nor people scheduled to work on it
- identified need for software to evaluate signal/noise, needed to tune APV settings
- the discussion about use of existing vs. developing new code was endless. People remained faithful to their respective views. In my opinion the commissioning task #16 - enabling ZS for pp500 - requires Willie to finish up the code he started. Many side benefits justify Willie doing it now, not in a month.
- questions from Jan:
- access dates - Bernd will send a note
- will FEE be on 24/7 : yes, said Gerard
- how do we deal with correlated noise: Gerard suspects it is not a big issue; we have a pedestal file taken w/ STAR DAQ and maker-based software to compute something. Limited (to 1) possible manpower. Analysis downgraded in priority, to after code for items up to 13 is in hand.
- when do we switch to ZS mode? G: at the end of pp200.
- will we ever use wave form readout mode? G: we do it already.
- did NOT discuss the status of FGT readout based on Willie's plots: http://drupal.star.bnl.gov/STAR/node/23008/
FGT Online Monitoring
I've been working on creating a set of monitoring tools for the FGT, intended to replicate what has been done for BSMD pedestals. Currently, for every APV the software generates a two-dimensional histogram.
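As an illustration only, here is a minimal sketch of what booking and filling one 2D histogram per APV could look like; the APV count, channels per APV, and histogram names below are placeholders, not values taken from the actual monitoring code.

// fgtApvMon.C -- minimal sketch: one 2D (channel vs ADC) histogram per APV
#include "TH2F.h"
#include "TFile.h"
#include "TString.h"

const int kNApv      = 240;    // placeholder for the total number of FGT APVs
const int kNChPerApv = 128;    // channels per APV chip
const int kMaxAdc    = 4096;   // 12-bit ADC range

TH2F* hApv[kNApv];

void bookHistos() {
   for (int i = 0; i < kNApv; ++i)
      hApv[i] = new TH2F(Form("hApv%d", i), Form("APV %d;channel;ADC", i),
                         kNChPerApv, 0, kNChPerApv, 256, 0, kMaxAdc);
}

// called for every readout channel in an event
void fillChannel(int apv, int channel, int adc) {
   if (apv >= 0 && apv < kNApv) hApv[apv]->Fill(channel, adc);
}

void saveHistos(const char* fname = "fgtApvMon.root") {
   TFile out(fname, "RECREATE");
   for (int i = 0; i < kNApv; ++i) hApv[i]->Write();
   out.Close();
}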
SMD Residual vs Tower Energy Plots
These plots are for a prompt photon MC sample. What's plotted on the y axis is the sum of the max residuals.
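A minimal sketch of how such a plot could be filled is below, assuming the quantity on the y axis is the sum of the max residuals from the two SMD planes; the file, tree, and branch names are hypothetical, and this is not the actual analysis code.

// smdResidualVsTowerE.C -- minimal sketch, hypothetical tree/branch names
#include "TFile.h"
#include "TTree.h"
#include "TH2F.h"

void smdResidualVsTowerE() {
   TFile* f = TFile::Open("promptPhotonMC.root");       // hypothetical MC file
   TTree* t = (TTree*)f->Get("candidates");             // hypothetical candidate tree

   float resEta, resPhi, towerE;
   t->SetBranchAddress("smdMaxResidualEta", &resEta);   // max residual, eta plane
   t->SetBranchAddress("smdMaxResidualPhi", &resPhi);   // max residual, phi plane
   t->SetBranchAddress("towerEnergy",       &towerE);

   TH2F* h = new TH2F("hResVsE", ";tower energy [GeV];sum of max SMD residuals",
                      100, 0., 50., 100, 0., 200.);

   for (Long64_t i = 0; i < t->GetEntries(); ++i) {
      t->GetEntry(i);
      h->Fill(towerE, resEta + resPhi);                 // y: sum of the max residuals
   }
   h->Draw("colz");
}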
Summary Discussion
Speaker : All
Talk time : 17:10, Duration : 00:20
Software
Speaker : S.Margetis
Talk time : 16:35, Duration : 00:35
SSD
Speaker : J.Thomas
Talk time : 16:00, Duration : 00:35
Break
Talk time : 15:00, Duration : 01:00
Budget&Schedule
Speaker : S.Morgan
Talk time : 12:50, Duration : 00:20
Integration
Speaker : D.Beavis
Talk time : 14:40, Duration : 00:20
Global Structures
Speaker : E.Anderssen
Talk time : 14:20, Duration : 00:20
IST status
Speaker : G.J. van Nieuwenhuizen (MIT)
Talk time : 13:45, Duration : 00:35
PXL status
Speaker : L.Greiner
Talk time : 13:10, Duration : 00:35
Introduction and Overview
Speaker : F.Videbaek
Talk time : 12:30, Duration : 00:20
Workaround for star-submit for large filelists on pdsf
On pdsf, especially when the disk is very full, it takes a very long time to enumerate a large file list (for example, my file list of 181k files took ~5 hours to enumerate when eliza 17 was 94% full).
FGT SW Leak Tests
A bfc.C macro was run over a DAQ file with 10k events, taken in late 2011, which includes the FGT. Additionally, a different bfc.C was run on 300 events from a Pythia W .fz file to test the FGT slow simulator. Four different sets of code were used, and the resident and virtual memory were recorded every 30 seconds. Plots are shown of the resident and virtual memory size as a function of the number of events processed. The results show no obvious leaks arising from the libraries in offline/StDevel/StRoot. The results also verify that both data and Monte Carlo can be run with a BFC and that the FGT data is recorded in the resulting MuDst file.
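For reference, here is a minimal, generic sketch of how resident and virtual memory can be sampled on Linux by parsing /proc/self/status; this is only an illustration of the technique, not the script actually used for these tests.

// memSample.cxx -- minimal sketch: read VmSize/VmRSS (kB) from /proc/self/status
#include <cstdio>
#include <cstring>

// Fills vmSizeKb (virtual) and vmRssKb (resident) in kB; returns true on success.
bool sampleMemory(long& vmSizeKb, long& vmRssKb) {
   FILE* f = fopen("/proc/self/status", "r");
   if (!f) return false;
   char line[256];
   vmSizeKb = vmRssKb = -1;
   while (fgets(line, sizeof(line), f)) {
      if (strncmp(line, "VmSize:", 7) == 0) sscanf(line + 7, "%ld", &vmSizeKb);
      if (strncmp(line, "VmRSS:",  6) == 0) sscanf(line + 6, "%ld", &vmRssKb);
   }
   fclose(f);
   return vmSizeKb >= 0 && vmRssKb >= 0;
}

int main() {
   // Example: print one sample; in a real test this would run every 30 s
   // or every N processed events while the BFC job is executing.
   long vsz = 0, rss = 0;
   if (sampleMemory(vsz, rss))
      printf("VmSize = %ld kB, VmRSS = %ld kB\n", vsz, rss);
   return 0;
}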