Logbook
30 March 2021
Installed a new version of ROOT on my MacBookPro.
Used the following Ubuntu instructions as an example:
$ wget https://root.cern/download/root_v6.22.00.Linux-ubuntu19-x86_64-gcc9.2.tar.gz
$ tar -xzvf root_v6.22.00.Linux-ubuntu19-x86_64-gcc9.2.tar.gz
$ source root/bin/thisroot.sh # also available: thisroot.{csh,fish,bat}
The equivalent commands on the Mac:
HEPMBP069:~ david$ curl -O https://root.cern/download/root_v6.22.06.macos-10.13-x86_64-clang100.tar.gz
HEPMBP069:~ david$ tar -xzvf root_v6.22.06.macos-10.13-x86_64-clang100.tar.gz
HEPMBP069:~ david$ source root/bin/thisroot.sh
HEPMBP069:~ david$ root
------------------------------------------------------------------
| Welcome to ROOT 6.22/06 https://root.cern |
| (c) 1995-2020, The ROOT Team; conception: R. Brun, F. Rademakers |
| Built for macosx64 on Nov 27 2020, 15:14:08 |
| From tags/v6-22-06@v6-22-06 |
| Try '.help', '.demo', '.license', '.credits', '.quit'/'.q' |
------------------------------------------------------------------
root [0] .! ls
Applications Object_EE_perRing-450invfb.root
Desktop Pictures
Documents Public
Downloads README.txt
etc etc
...so all looks OK now.
Homebrew Installation for cmake etc
In a macOS Terminal type the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Visit the Homebrew web site for details.
17 March 2021, login to lxplus:
Start up XLaunch and go through all the questions.
Click on the 'Ubuntu' icon (white circle with 3 knobs, on orange background) on the PC taskbar.
ssh -XY davec@lxplus.cern.ch
To get sl6: ssh -XY lxplus6
Copy/paste works from PC command window into Twiki
EOS web page,
CMS home page,
CMS Physics homepage,
ECAL resolution paper,
cmf install
LaserFarm notes
Analysis notes
WBM run summaries
CMS OMS to get the run type (ie Global, cosmics): click on 'Run Info' in the top tool bar, then enter the run number in the 'Run Controller' panel.
eta[15] = {1.55, 1.65, 1.75, 1.85, 1.95, 2.05, 2.15, 2.25, 2.35, 2.45, 2.55, 2.65, 2.75, 2.85, 2.95};

eta index [15] | value[15] +-0.05
0  | 1.55
1  | 1.65
2  | 1.75
3  | 1.85
4  | 1.95
5  | 2.05
6  | 2.15
7  | 2.25
8  | 2.35
9  | 2.45
10 | 2.55
11 | 2.65
12 | 2.75
13 | 2.85
14 | 2.95
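The index-to-eta relation in the table above is linear (eta = 1.55 + 0.1 * index, each ring +-0.05 wide), so it can be checked with a tiny standalone helper. A minimal sketch; etaValue and etaIndex are my names, not from any analysis code:

```cpp
#include <cassert>
#include <cmath>

// Ring centre for a given eta index (0..14): 1.55, 1.65, ..., 2.95.
double etaValue(int index) { return 1.55 + 0.1 * index; }

// Inverse lookup: ring i covers [1.50 + 0.1*i, 1.60 + 0.1*i); -1 if outside the table.
int etaIndex(double eta) {
    int i = static_cast<int>(std::floor((eta - 1.50) / 0.1));
    return (i >= 0 && i < 15) ? i : -1;
}
```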
NOTE IN CMS/ROOT: variable "lumi" may already be defined elsewhere - leads to a crash if defined again !!!
EE- Feds | EE- | EE+ Feds | EE+ | Channels
601 | -7 | 646 | +7 | 830
602 | -8 | 647 | +8 | 791
603 | -9 | 648 | +9 | 815
604 | -1 | 649 | +1 | 815
605 | -2 | 650 | +2 | 791
606 | -3 | 651 | +3 | 830
607 | -4 | 652 | +4 | 821
608 | -5 | 653 | +5 | 810
609 | -6 | 654 | +6 | 821
Counting anticlockwise.
Total channels, each endcap: 7324
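The FED-to-sector pattern above (the first three FEDs of each endcap go to sectors 7-9, the remaining six to 1-6) can be encoded as a small lookup. This is only a sketch of the table, not the official CMS mapping; sectorOfFed, channelsOfFed and kChannels are my names:

```cpp
#include <array>
#include <cassert>
#include <numeric>

// Channels per EE FED, in FED order 601..609 (EE-) and 646..654 (EE+), from the table above.
constexpr std::array<int, 9> kChannels = {830, 791, 815, 815, 791, 830, 821, 810, 821};

// FED -> signed sector: 601..603 -> -7..-9, 604..609 -> -1..-6, similarly 646..654 for EE+.
int sectorOfFed(int fed) {
    int i = (fed >= 646) ? fed - 646 : fed - 601;  // 0..8 within the endcap
    int sign = (fed >= 646) ? +1 : -1;             // EE+ vs EE-
    int sector = (i < 3) ? 7 + i : i - 2;
    return sign * sector;
}

int channelsOfFed(int fed) {
    return kChannels[(fed >= 646) ? fed - 646 : fed - 601];
}
```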
Linux commands:
ctrl-n | new window
ctrl-shift-t | new terminal
emacs commands:
M = Meta key = Alt key; C = Ctrl key
ctrl-s | search; press ctrl-s again for the next match
alt-shift-% | query/replace
ctrl-alt-n | move forward to the matching closing curly bracket
ctrl-alt-u | move up/back to the enclosing opening curly bracket
Data
DQM data
DST laser data
DSTs on eos:
ls /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/
Then, from my linux area:
eos cp /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/609/dst00328361.0000.447.00.shap3.corlinpn .
DSTs on cmsusr area:
ssh davec@cmsusr.cern.ch with YKw10 !!!!!!!!!!!!!
(or your tunnel to p5)
Then:
ssh srv-c2f38-10-01
ls /localdata/disk0/persistent/dst.merged/PIN1/v3/2019/
601 604 607 610 613 616 619 622 625 628 631 634 637 640 643 646 649 652
602 605 608 611 614 617 620 623 626 629 632 635 638 641 644 647 650 653
603 606 609 612 615 618 621 624 627 630 633 636 639 642 645 648 651 654
cd /localdata/disk0/persistent/dst.merged/PIN1/v3/2019/609/
[davec@srv-c2f38-10-01 609]$ ls -al *328337* <== MAY HAVE TO WAIT A FEW SECONDS !!
-rw-rw-r-- 1 ecallaser ecallaser 195780 Mar 14 12:23 dst00328337.0000.447.00.shap3.corlinpn
Then copy to cmsusr
scp dst00328361.0000.447.00.shap3.corlinpn davec@cmsusr:dst00328361.0000.447.00.shap3.corlinpn2
Then, from my lxplus work area, copy file over to lxplus:
scp davec@cmsusr:"/nfshome0/davec/dst00328361.0000.447.00.shap3.corlinpn2" .
Monte Carlo data, 2019
Email from Amina, 18.4.19, 14:20
450 fb-1
Folders:
/eos/project-e/ecaldpg/www/EcalNoise/run3_noise_realistic_v1/TL450/
/eos/project-e/ecaldpg/www/EcalNoise/run3_noise_realistic_v1/TL400/
/eos/project-e/ecaldpg/www/EcalNoise/run3_noise_realistic_v1/TL315/
/eos/project-e/ecaldpg/www/EcalNoise/run3_noise_realistic_v1/TL235/
/eos/project-e/ecaldpg/www/EcalNoise/run3_noise_realistic_v1/TL180/
17 Mar 2021
Check the files seen below, on 29 Jul 2020, are still there - yes - all OK !!!
29 Jul 2020
Looking for Marc's redone laser scan data. Find it here (highlight in lxplus, ctrl C here):
See 3 files for run 328361, 0000, 0001 and 0002
~ $ eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0002.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 196897 Jul 21 09:37 dst00328361.0002.447.00.shap3.corlinpn
~ $ eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0001.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 196905 Jul 21 09:37 dst00328361.0001.447.00.shap3.corlinpn
eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0000.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 196897 Jul 21 2020 dst00328361.0000.447.00.shap3.corlinpn
No 0003.447.00.shap3.corlinpn present, only 0000, 0001 and 0002.
21 Jul 2020
Email from Marc - with laser power setting added by me:
I finally put manually the files on EOS.
You have now 3 versions of files for runs:
328346 0.194 versions 0000 and 0001
328358 0.778 versions 0000, 0001 and 0002
328359 0.583 versions 0000, 0001 and 0002
328360 0.417 versions 0000 and 0001
328361 0.097 versions 0000, 0001 and 0002
- .ref0, which is the one you have looked at on eos
- .ref, which is the version I had on the laser farm. I was not sure that I
had not touched them since then, but it seems that .ref and .ref0 are
the same files.
- without suffix, which are the new ones, without cuts.
My eos tests, see dejardin new files dated Jul 21 2020:
eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0000.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 196897 Jul 21 2020 dst00328361.0000.447.00.shap3.corlinpn
eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0001.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 196905 Jul 21 2020 dst00328361.0001.447.00.shap3.corlinpn
eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0002.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 196897 Jul 21 2020 dst00328361.0002.447.00.shap3.corlinpn
Unable to stat /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/dst00328361.0003.447.00.shap3.corlinpn;
No such file or directory (errc=2) (No such file or directory)
Marc .ref and .ref0
~ $ eoscms ls -1al /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/601/
Bruno's file is .ref0
Marc's old laser farm data is .ref, and the same size as Bruno's .ref0
-rw-r--r-- 2 dejardin zh 197269 Jul 21 2020 dst00328346.0000.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 197500 Mar 14 2019 dst00328346.0000.447.00.shap3.corlinpn.ref
-rw-r--r-- 2 blenzi zh 197500 Jul 4 2019 dst00328346.0000.447.00.shap3.corlinpn.ref0
-rw-r--r-- 2 dejardin zh 197265 Jul 21 2020 dst00328346.0001.447.00.shap3.corlinpn
-rw-r--r-- 2 dejardin zh 197495 Mar 14 2019 dst00328346.0001.447.00.shap3.corlinpn.ref
-rw-r--r-- 2 blenzi zh 197495 Jul 4 2019 dst00328346.0001.447.00.shap3.corlinpn.ref0
5 Feb 2020
Getting EEP LaserFarm data for the April 2019 laser scan.
Got EEP ixiyfed file maps, such as EEP-FED646.txt, using EEP-mapping.txt that reads the full ECAL file: ecalMapping.txt
Write out full dst, all EEP, with ixiy and FED info to 'work' with:
.x write-fullEEP-dst-with-ixiyfed-to-work.C
10 Jan 2020
Problem: Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
Some answers at:
Locale solution
sudo apt-get remove language-pack-en-base
sudo apt-get install language-pack-en language-pack-en-base
9 Jan 2020
Possible way to get objects embedded in a TCanvas, via GetListOfPrimitives():
TList *MyList = EmbeddedCanvasObject->GetCanvas()->GetListOfPrimitives();
TObject *obj1;
TIter next(MyList);
while ((obj1 = (TObject *)next()))
{
std::cout << "object class name " << obj1->ClassName() << std::endl;
}
Problems with analysis. Not seeing the TCanvas with root, even with cmssw_10_2_18
Got the following error when trying to do cmsrel :
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "C.UTF-8"
are supported and installed on your system
STILL GET problems to see Canvas, but can see the data now with:
root [0] TFile ff("Object_EE_perRing-450invfb.root")
(TFile &) Name: Object_EE_perRing-450invfb.root Title:
root [1] TCanvas *c1 = (TCanvas*)ff.Get("ccRing");
root [2] c1->GetListOf
GetListOfClassSignals
GetListOfConnections
GetListOfExecs
GetListOfPrimitives
GetListOfSignals
root [2] c1->GetListOfPrimitives()->Print()
Collection name='TList', class='TList', size=4
TFrame X1=1.371908 Y1=0.147538 X2=3.073443 Y2=16.133814 FillStyle=1001
x[0]=2.93165, y[0]=14.682
x[1]=2.93165, y[1]=14.534
x[2]=2.85224, y[2]=12.4755
-------------
x[38]=1.54988, y[38]=0.181118
x[39]=1.53158, y[39]=0.176686
x[40]=1.52291, y[40]=0.167423
x[41]=1.5137, y[41]=0.189507
root [3] TCanvas c2
(TCanvas &) Name: c1 Title: c1
root [4]
6 Jan 2020
Linux analysis, directories in davec:
EE450 | Fits to laser data vs lumi, for each eta ring, also using the next 2 higher eta rings
EE-MC-losses | Using Sasha/Amina predictions from 180 fb-1 to 450 fb-1; root> .x MC-invfb-ratios.C
11 Dec 2019
Summary of LaserFarm analysis so far:
Prepared root results file, ie LaserFarm-dst-run-328361-2.root, using ~Laserfarm/read-my-ixiyfed-dst-file.C,
which generates the following plots:
- hhits->Write();
- hfed->Write();
- hnvalid->Write();
- hamp->Write();
- hamprms->Write();
- adcamps->Write();
- adcrms->Write();
- adcampslow->Write();
Do TBrowser browser to access hnvalid.
23 Sep 2019
Looking at eos laser scan data, after processing with awk to pick up all laser cycles.
File /LaserFarm/read-merged-eos-file.C
See also /LaserFarm/plot-single-point.C
// AAAAAAAAAAAAAAAAArrrrrrrrrrrrrrrrrrrrggggggggggggggggggggghhhhhhhhhhhhhhhhhhhhhhhhhh
// Did "Fill", instead of plotting a SINGLE point in a TH2D !!!!!!!!!!!!!!!!!!!!!!!!!!!!
// Got stuff binned inside the 2D bins, not by absolute x,y !!!!!!!!!!!!!!!!
See my notes in my ROOT Twiki.
Fri 26 July 2019
Finished EE Mapping for ix, iy for the Laser Farm DST files, into the following files:
/afs/cern.ch/user/d/davec/LaserFarm/
-rw-r--r--. 1 davec zh 120K Jul 26 14:57 EEM-FED601.txt
-rw-r--r--. 1 davec zh 114K Jul 26 14:57 EEM-FED602.txt
-rw-r--r--. 1 davec zh 118K Jul 26 14:57 EEM-FED603.txt
-rw-r--r--. 1 davec zh 118K Jul 26 14:57 EEM-FED604.txt
-rw-r--r--. 1 davec zh 114K Jul 26 14:57 EEM-FED605.txt
-rw-r--r--. 1 davec zh 120K Jul 26 14:57 EEM-FED606.txt
-rw-r--r--. 1 davec zh 118K Jul 26 14:57 EEM-FED607.txt
-rw-r--r--. 1 davec zh 116K Jul 26 14:57 EEM-FED608.txt
-rw-r--r--. 1 davec zh 118K Jul 26 14:57 EEM-FED609.txt
using
/afs/cern.ch/user/d/davec/LaserFarm/EEM-mapping.C that reads the full 75,000 line ecalMapping.txt
to reduce to the EEM files above.
Thu 18 July 2019
Working on Bruno Lenzi's dst files at
DSTs
Copied over to directory
LaserFarm:
eos cp /eos/cms/store/group/dpg_ecal/alca_ecalcalib/laser/dst.merged/2019/609/dst00328361.0000.447.00.shap3.corlinpn .
The fed is listed in the path name, ie 609.
iz can be inferred from the module: EB -> 0, EE- -> -1, EE+ -> +1
First xtal in dst file is therefore at fed 609 and elecID 0, and
ecalMapping.txt gives:
cmsswId | dbId | hashedId | iy | ix | Module | FED | ccu | strip | Xtal | elecID | side | LME | ieta | eta
872423467 | 2010064043 | 3075 | 43 | 64 | EE-06 | 609 | 1 | 1 | 1 | 0 | 0 | 92 | -120 | -2.63663
Only the fed and elecID needed to get to ix, iy, iz.
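A row of ecalMapping.txt can be parsed into a struct and then keyed by (FED, elecID). A minimal sketch, assuming the file is whitespace-separated in the column order above; MappingRow and parseMappingRow are my names, not from the real analysis code:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// One row of ecalMapping.txt, in the column order shown above (hypothetical names).
struct MappingRow {
    long cmsswId, dbId;
    int hashedId, iy, ix;
    std::string module;
    int fed, ccu, strip, xtal, elecID, side, lme, ieta;
    double eta;
    // iz inferred from the module name: EB -> 0, EE- -> -1, EE+ -> +1
    int iz() const {
        if (module.rfind("EE-", 0) == 0) return -1;
        if (module.rfind("EE+", 0) == 0) return +1;
        return 0;
    }
};

MappingRow parseMappingRow(const std::string& line) {
    MappingRow r;
    std::istringstream in(line);
    in >> r.cmsswId >> r.dbId >> r.hashedId >> r.iy >> r.ix >> r.module
       >> r.fed >> r.ccu >> r.strip >> r.xtal >> r.elecID >> r.side
       >> r.lme >> r.ieta >> r.eta;
    return r;
}
```

Loading all rows into a map keyed by (fed, elecID) then gives the ix, iy, iz lookup the DST analysis needs.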
Wed 24 April, 2019
Studying loss factors with the fits to LHC data, 2 fills in 2016, 5 fills in 2017.
Extrapolations to end 2018 (186 fb-1), and end Run3 (450 fb-1).
Example of error propagation:
Laser correction factors, noise/transverse noise at 450 fb-1:
~EE300/btcp-fits-and-errors.C followed by btcp-fits-and-errors-plots.C
~EE300/sic-fits-and-errors.C followed by sic-fits-and-errors-plots.C
For the btcp fits:
// now do 450 fb-1
xlumi = 450.;
for (int i=0; i<12; i++) {
ybtcp[i] = btcp0[i] + btcp1[i]*xlumi;
btcpnoise450[i]=efactor*ybtcp[i]; // peds are 2 adc counts, the adctogev converts this to GeV, corr factor is the noise multiplier for response loss
eybtcp[i] = sqrt( btcp0err[i]*btcp0err[i] + xlumi*xlumi*btcp1err[i]*btcp1err[i] );
btcpnoise450err[i]= efactor*eybtcp[i];
ly450[i] = 100.0/ybtcp[i];
ly450err[i] = 100.0*eybtcp[i]/(ybtcp[i]*ybtcp[i]);
}
TGraphErrors * btcp450 = new TGraphErrors(12, xbtcp, ybtcp, exbtcp, eybtcp); // correction factor
TGraphErrors * btcpgevnoise450 = new TGraphErrors(12, xbtcp, btcpnoise450 , exbtcp, btcpnoise450err);
TGraphErrors * btcply450 = new TGraphErrors(12, xbtcp, ly450, exbtcp, ly450err); // Remaining response to em showers, 1/(correction factor)
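The error propagation used in the loop above, for a linear fit y = p0 + p1*x evaluated at fixed x with the p0-p1 covariance neglected, can be checked with a standalone helper; linError is my name, not from the macro:

```cpp
#include <cassert>
#include <cmath>

// sigma_y for y = p0 + p1*x at fixed x, given independent errors e0 on p0 and e1 on p1.
double linError(double e0, double e1, double x) {
    return std::sqrt(e0 * e0 + x * x * e1 * e1);
}
```

The ly450err line follows from the same rule applied to 100/y: d(100/y)/dy = -100/y^2, hence 100*ey/(y*y).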
Remaining signal, 450 wrt 186 fb-1, showing both btcp and sic:
~EE300/fits-to-laser-corr-vs-int-lumi.C
Mon 11 March, 2019
Looking at DQM data. Using Tanmay's find in cmsusr (Ykw10 !!!) to see the DQM data for run 327984
Copy file (only 4 MB) over to lxplus, work area with scp <====== NOTE !!!!
Files such as:
DQM_V0001_R000328361__All__Run2019__MiniDAQLaser.root
Open root file with TBrowser browser.
under DQMData;1 => Run 328361 =>
EcalEndcap;1 => Run summary =>
EElaserTask -> Laser1;1, Laser2;1, Laser3;1
Laser 3 for the laser power scan.
Projections such as EELT amplitude EE+08 L3;1
Fill up 2D EE plots with ~/LaserDQManalysis/DQM-processes/DQM-data-into-2D-histos.C
Get miniDAQ file from lxplus work area, i.e.:
TFile ff ("/afs/cern.ch/work/d/davec/LaserData/DQM_V0001_R000328335__All__Run2019__MiniDAQLaser.root");
ff.cd("DQMData/Run 328335/EcalEndcap/Run summary/EELaserTask/Laser3/");
TProfile2D *p8 = (TProfile2D*)gDirectory->Get("EELT amplitude over PN EE+08 L3")
// now loop over all sectors and fill 2D plots with amplitudes etc:
for (int i = 1; i < 101; i++) {
  for (int jj = 1; jj < 101; jj++) {
    x = float(i) - 0.5;
    y = float(jj) - 0.5;
    // fill EEP and EEM amplitude plots
    amp = s1->GetBinContent( s1->FindBin(x,y) ); if (amp > 0.0) { EEP->Fill(x,y,amp); EEPHITS->Fill(x,y,1.); }
    amp = s2->GetBinContent( s2->FindBin(x,y) ); if (amp > 0.0) { EEP->Fill(x,y,amp); EEPHITS->Fill(x,y,1.); }
  }
}
// Write out all 2D EE data to a root file, ie Run-327587.root:
int runwanted;
runwanted = 327693;
string s;
s = to_string(runwanted);
cout<< "\n\nString is " <<s<<endl;
TString tsrun; // for titles in root plots, ie:
tsrun = s;
cout << tsrun << endl;
TH2D *EEPpnonly = new TH2D("EEPpnonly", "Run " + tsrun + ", EEP pn only", 100, 0, 100, 100, 0, 100);
s = "Run-" + s + ".root";
//std::string str;
const char * c = s.c_str();
//s = "Run-" + s + ".root";
TFile f(c, "RECREATE");
EEP->Write();
EEM->Write();
EEPHITS->Write();
EEMHITS->Write();
EEPpn->Write();
EEMpn->Write();
EEPHITSpn->Write();
EEMHITSpn->Write();
EEPpnonly->Write();
EEMpnonly->Write();
EEPHITSpnonly->Write();
EEMHITSpnonly->Write();
cout << "\n\n" << endl;
f.ls();
f.Close();
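The string/TString/c_str juggling above just builds a "Run-<number>.root" filename from a run number; it reduces to std::to_string. A standalone check (runFileName is my name, not from the macro):

```cpp
#include <cassert>
#include <string>

// Build the output filename used above, e.g. run 327693 -> "Run-327693.root".
std::string runFileName(int run) {
    return "Run-" + std::to_string(run) + ".root";
}
```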
7 Feb 2019
Problems loading rootlogon.C.
Commented out FWLite stuff - seems OK now.
Release area for 8019 still OK at:
https://cmssdt.cern.ch/SDT/cgi-bin/ReleasesXML
21 Jan 2019
In folder ~/EE300/LaserBlueprompt2018newformat
Do:
root [] .L MyClass.C
root [] MyClass t
root [] t.Loop()
Still only getting 5462 entries, so data still only to mid 2018
1 Nov 2018
Now looking at LaserBlue_prompt_2018_newformat.root at:
eoscms ls -l /eos/cms/store/group/dpg_ecal/comm_ecal/pedestals_gainratio/LaserBlue_prompt_2018_newformat.root
Make the class, ie with:
root [] TFile f("staff.root")
root [] f.ls()
which returns: TFile** staff.root, TFile* staff.root, KEY: TTree T;1 staff data from ascii file. Then T->Print();
root [] T->MakeClass("MyClass")
22 Oct 2018
Remove mains plug from PXRALXP4. Repower - now all OK !
File eoscms ls -l /eos/cms/store/group/dpg_ecal/comm_ecal/pedestals_gainratio/BlueLaser_2011-2018.root
Last entry is at start of 2018 run:
Last entry: jentry = 43729, looking for time = 1510267596, i = 91, time[i] = 1525924444
time[i] = 1525924444 gives Date/Time = Thu May 10 05:54:04 2018
To get 2018 data, try eos file
LaserBlue_prompt_2018_newformat.root
OOOOOooopppsss ! Overwrote my original MyClass.C file with time info for Petyt dates in 2016/2017.
See CERN webpage at:
https://twiki.cern.ch/twiki/bin/view/Main/BrettJacksonRecoverAFS
Try afs backup - Success !!!!!!!!!!! Got last backup on Friday 19 Oct, 2018, was item 88 in the backup logs:
afs_admin recover /afs/cern.ch/user/d/davec/EE300/MyClass.C
RECOVERING VOLUME:user.davec
2018-10-22 14:31:00,192 INFO : Starting restore session... logfile /var/ABS/log/abs-restore-session.2018.10.22-143100/abs-restore.log
0: 2018-03-29 18:43:10 (f)
1: 2018-05-08 18:38:25
2: 2018-05-13 18:39:20 (f)
......................
87: 2018-10-18 18:44:02
88: 2018-10-19 18:48:49
choose dump (number) or 'q' or '^C' to interrupt > 88
2018-10-22 14:32:37,851 INFO : Restoring volume 1933773765 at 2018-10-19 18:48:49, recalling 14 dumps
2018-10-22 14:32:37,854 INFO : Received all required dumps from archive, starting to restore
etc
Full backup of davec, including EE300 at:
/afs/cern.ch/project/afs/var/ABS/recover/R.1933773765.10221230/
ls -al /afs/cern.ch/project/afs/var/ABS/recover/
drwxr-xr-x. 68 davec root 30K Oct 19 14:43 R.1933773765.10221230
Actual time 14:43 - 1 hour = 13:43.
Time for recover operation: ~12:15 to 13:43 = ~1.5 hours.
19 Oct 2018
15.30: Not getting through to eos with eoscms ls -l /eos/cms/
Computing info for eos system etc at:
https://cern.service-now.com/service-portal/ssb.do?&&tab=homepage
18 Oct 2018
Root files with laser data, Francesca email, 7 June 2018:
I have produced the laser rootuple for all years:
2011 - prompt
2012 - final legacy
2015 - 2016 - legacy
2017 NoTP (there are no stuck crystals, and no TP correction )
2018 prompt until a couple of weeks ago
Federico has rewritten the software. For 2011-2018:
/eos/cms/store/group/dpg_ecal/comm_ecal/pedestals_gainratio/BlueLaser_2011-2018.root
To search for this file on eos: eoscms ls -l /eos/cms/store/group/dpg_ecal/comm_ecal/pedestals_gainratio/BlueLaser_2011-2018.root
April - 17 Oct 2018
Lasercalib analysis, projections for 300 and 500 fb-1 in EE300.
Using David Petyt's downloads from Francesca's eos files, 1st Feb 2018:
In /afs/cern.ch/user/p/petyt/public/laserplots
I have placed several files with 2D maps of EE- laser response for several points during 2017:
time=1497480466 // 15 JUN 0 fb-1 00:47:46 2017 using Linux, root>>.x tdatime-convert.C with TDatime
Lum, 10^34
time=1501546677 // 1 AUG 10 fb-1 1.2 02:17:57 2017 using Linux, root TDatime
time=1504223164 // 1 SEP 20 fb-1 0.7 01:46:04 2017 using Linux, root TDatime
time=1506816983 // 1 OCT 30 fb-1 1.0 02:16:23 2017 using Linux, root TDatime
time=1508025827 // 15 OCT 40 fb-1 0.5 02:03:47 2017 using Linux, root TDatime
time=1510267596 // 10 NOV 50 fb-1 0.8 = Thu Nov 9, 23:46:36 2017 using Linux, root TDatime
there are two plots in each file:
lasresp -> laser response for each channel
lasresp_chst0 -> laser response only for channels with channel status=0
the quantity plotted is laser response normalised to March 2011.
The alpha values to use are:
1.16 BTCP
1.0 SIC
and the files for 2016, email 3 Feb 2018:
I have placed root files with 2d maps of laser response for these two time periods in the same directory on lxplus:
/afs/cern.ch/user/p/petyt/public/laserplots
Lum, 10^34
lasmap_2016_aug14.root 1.0, 25 fb-1, From CMSSW_8017 on run 278817, Unix Time = 1471169946 Date/Time = Sun Aug 14 12:19:06 2016 ==> .x tdatime-convert.C gives SAME date/time as CMSSW
lasmap_2016_oct26.root 1.0, 40 fb-1, From CMSSW_8019 on run 284036, Unix Time = 1477485370 Date/Time = Wed Oct 26 14:36:10 2016 ==> .x tdatime-convert.C gives SAME date/time as CMSSW
Again there are two plots per file, with and without the channel status check applied.
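The Unix times listed above can be cross-checked without ROOT's TDatime using std::gmtime; a minimal sketch (utcString is my name). Note the logbook's date strings are CERN local time (CEST in summer), so the UTC output is 2 hours earlier:

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Convert a Unix timestamp to a UTC date string, TDatime-style.
std::string utcString(std::time_t t) {
    char buf[64];
    std::strftime(buf, sizeof(buf), "%a %b %d %H:%M:%S %Y", std::gmtime(&t));
    return std::string(buf);
}
```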
12 Mar 2018
Trying to reconcile lasercalibs, from Petyt versus my CMSSW job for Aug 2016.
Running CMSSW_8_0_19_patch1 with Global Tag: 80X_dataRun2_ICHEP16_repro_v0
to try to get last, most up to date, database.
BUT, data were taken with CMSSW_8_0_17. Doesn't look like running with CMSSW_8_0_19_patch1 is a good idea:
'file:/afs/cern.ch/work/d/davec/pickevents-Run2016G-DoubleEG-278817-66-3428691.root'
# config dataset=/DoubleEG/Run2016G-PromptReco-v1/RECO
# Creation time: 2016-08-14 17:51:25, 2016-08-12 12:59:53,
Global Tag: 80X_dataRun2_Prompt_v10, Pset hash: GIBBERISH, Release: CMSSW_8_0_17
produces very odd lasercalib results!
Printout in txt file:
Run number = 278817 Lumi = 66 event number = 3428691
bunch crossing = 2683 orbit = 17096001 store = 0 time from DAQ = 56879
Unix Time = 1471169946
Date/Time = Sun Aug 14 12:19:06 2016
Barrel ADCtoGeV = 0.0394357
Endcap ADCtoGeV = 0.0667637
Weird lasercalibs for nearby channels:
EB lasercalib > 2 EBDetId = (EB ieta 85, iphi 322 ; ism 17 , ic 1699) rawId = 838970178 lasercalib = 19.8, channel status = 0
EB lasercalib > 2 EBDetId = (EB ieta 85, iphi 323 ; ism 17 , ic 1698) rawId = 838970179 lasercalib = 21.9, channel status = 0
EB lasercalib > 2 EBDetId = (EB ieta 85, iphi 334 ; ism 17 , ic 1687) rawId = 838970190 lasercalib = 3.31, channel status = 0
EB lasercalib > 2 EBDetId = (EB ieta 85, iphi 341 ; ism 18 , ic 1700) rawId = 838970197 lasercalib = 3.32, channel status = 0
EB lasercalib > 2 EBDetId = (EB ieta 85, iphi 342 ; ism 18 , ic 1699) rawId = 838970198 lasercalib = 4.84, channel status = 0
EE- lasercalib > 10, EEDetId = (EE iz - ix 97 , iy 56), rawId = 872427704, lasercalib = 20.5, channel status = 0
EE- lasercalib > 10, EEDetId = (EE iz - ix 97 , iy 62), rawId = 872427710, lasercalib = 10.2, channel status = 0
EE- lasercalib > 10, EEDetId = (EE iz - ix 98 , iy 41), rawId = 872427817, lasercalib = 73.7, channel status = 0
EE- lasercalib > 10, EEDetId = (EE iz - ix 98 , iy 47), rawId = 872427823, lasercalib = 21.5, channel status = 0
EE+ lasercalib > 10, EEDetId = (EE iz + ix 88 , iy 76), rawId = 872442956, lasercalib = 49.6, channel status = 0
EE+ lasercalib > 10, EEDetId = (EE iz + ix 89 , iy 32), rawId = 872443040, lasercalib = 392, channel status = 0
EE+ lasercalib > 10, EEDetId = (EE iz + ix 89 , iy 51), rawId = 872443059, lasercalib = 96.6, channel status = 0
EE+ lasercalib > 10, EEDetId = (EE iz + ix 89 , iy 54), rawId = 872443062, lasercalib = 15.3, channel status = 0
EE+ lasercalib > 10, EEDetId = (EE iz + ix 89 , iy 68), rawId = 872443076, lasercalib = 10.3, channel status = 0
4 Oct 2017
Trying to get ./convertDat2Pool.sh -d 303532all.dat to work:
From directory:
/afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/log
more 303532all.dat.1289.DatToPool.log
----- Begin Fatal Exception 16-Oct-2017 13:52:19 CEST-----------------------
An exception of category 'ConfigFileReadError' occurred while
[0] Processing the python configuration file named /afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/conf/303532all.dat.hex.1289_cfg.py
Exception Message:
python encountered the error: <type 'exceptions.SyntaxError'>
non-keyword arg after keyword arg (303532all.dat.hex.1289_cfg.py, line 22)
----- End Fatal Exception -------------------------------------------------
In /afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/conf
have the py file referred to above:
303532all.dat.hex.1289_cfg.py
OOOOOOOOps ! A search/replace I had tried out had gone wrong !!!!!!!!!
Had:
#fileNames = cms.untracked.vstring('file:/afs/cern.ch/work/d/davec/P5data/HV-scans-21Sep2017-3p8T/303532all.dat'),
fileNames.untracked.vstring(
SHOULD HAVE HAD:
#fileNames = cms.untracked.vstring('file:/afs/cern.ch/work/d/davec/P5data/HV-scans-21Sep2017-3p8T/303532all.dat'),
fileNames = cms.untracked.vstring(
4 Oct 2017
At last, got eeDigi collection by adding:
from EventFilter.EcalRawToDigi.EcalUnpackerData_cfi import ecalEBunpacker
process.ecalDigis = ecalEBunpacker.clone()
process.ecalDigis.InputLabel = cms.InputTag('rawDataCollector')
# and extending the process.p tasks:
# process.p = cms.Path(process.ecalEBunpacker*process.ecalWeightUncalibRecHit*process.demo)
# to
process.p = cms.Path(process.ecalDigis+process.ecalEBunpacker*process.ecalWeightUncalibRecHit*process.demo)
NOTE: in TBrowser, for 1D histos, the event status bar progressively sums up all cell counts in the window !!!!! VERY USEFUL, ie binc=4, Sum=600
3 Oct 2017
At
RecoEcal/EgammaClusterProducers/python/ecalDigiSelector_cfi.py
EcalEEDigiTag = cms.InputTag("ecalDigis","eeDigis"),
In RecoEcal/EgammaClusterProducers/src/EcalDigiSelector.cc
EcalEEDigiToken_ = consumes<EEDigiCollection>(ps.getParameter<edm::InputTag>("EcalEEDigiTag"));
Checking where event type might be stored. ie:
https://github.com/cms-sw/cmssw/blob/master/CaloOnlineTools/EcalTools/plugins/EcalDigiDisplay.cc
https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideEcalOverview#InPut
cmssw/CalibCalorimetry/EcalLaserSorting
https://github.com/cms-sw/cmssw/blob/09c3fce6626f70fd04223e7dacebf0b485f73f54/CalibCalorimetry/EcalLaserSorting/src/LaserSorter.cc
https://github.com/cms-sw/cmssw/blob/09c3fce6626f70fd04223e7dacebf0b485f73f54/RecoLocalCalo/EcalRecProducers/test/testEcalRecoLocal_cfg.py
process.ecalTestRecoLocal = cms.Sequence(process.ecalEBunpacker
*process.ecalUncalibHitWeights
*process.ecalUncalibHitFixedAlphaBetaFit
*process.ecalUncalibHitRatio
*process.ecalUncalibHitGlobal
*process.ecalRecHit
*process.outputmodule
#*process.dumpEv
)
process.p = cms.Path(process.ecalTestRecoLocal)
From Jean Fay's email, Thu 13/08/2015 10:34:
Looking to the configuration of this night run (254232) I find the following 'CONIGURATION_SCRIPT-PARAMS' :
--n-phot-2 0 --n-ir 0 --n-ped 1 --n-tp 1 --n-phot-1 600 --n-green 600 --n-orange-led 600 --n-blue-led 600
--las-switch-time 400 --led-switch-time 1200 --las-switch-rst-time 3000 --eb-to-ee-las-switch-time 2000
--starting-lme-from /nfshome0/ecalpro/.lsup_starting_lme
If I understand it well, there is no blue2, no infrared laser, 1 pedestal event, 1 Testpulse event,
600 blue1 laser events, 600 green laser events, 600 orange LED events, 600 blue LED events in the calibration sequence.
You can get this information from WBM, Ecal Summary, Configuration Compare/Show button then expend Sequence 0:
Default sequence (1 cycle) Cycle 0: SelectiveReadout (the default cycle for run Cosmics-SR) and check ECAL_TTCCI_CONFIGURATION
Some py lines to try:
https://github.com/cms-sw/cmssw/blob/09c3fce6626f70fd04223e7dacebf0b485f73f54/CaloOnlineTools/EcalTools/python/ecalURecHitHists_cfg.py
process.load("EventFilter.EcalRawToDigi.EcalUnpackerMapping_cfi")
process.load("EventFilter.EcalRawToDigi.EcalUnpackerData_cfi")
..........
process.p = cms.Path(process.ecalEBunpacker*process.ecalUncalibHit*process.ecalURecHitHists)
process.ecalUncalibHit.EBdigiCollection = cms.InputTag("ecalEBunpacker","ebDigis")
process.ecalUncalibHit.EEdigiCollection = cms.InputTag("ecalEBunpacker","eeDigis")
2 Oct 2017
AAAAAAAAAAAAAAAAAAAAAAArrrrrrrrrrrrrrrrrrrrrrgggggggggggggggghhhhhhhhhhhhhh
Was only using the original BuildFile.xml - no wonder it couldn't compile.
AND MUST place BuildFile.xml in the plugins folder !!!!!!!!!!!!!!!!!!!!!!!!!!!!
original BuildFile.xml:
<use name="FWCore/Framework"/>
<use name="FWCore/PluginManager"/>
<use name="FWCore/ParameterSet"/>
<flags EDM_PLUGIN="1"/>
>> Building edm plugin tmp/slc6_amd64_gcc530/src/Reco/HVAnalyzer/plugins/RecoHVAnalyzerAuto/libRecoHVAnalyzerAuto.so
tmp/slc6_amd64_gcc530/src/Reco/HVAnalyzer/plugins/RecoHVAnalyzerAuto/HVAnalyzer.o: In function `HVAnalyzer::HVAnalyzer(edm::ParameterSet const&)':
HVAnalyzer.cc:(.text+0x1240): undefined reference to `TLS init function for TFileService::tFileDirectory_'
HVAnalyzer.cc:(.text+0x1264): undefined reference to `TFileService::tFileDirectory_'
HVAnalyzer.cc:(.text+0x127c): undefined reference to `TFileDirectory::_cd(std::string const&, bool) const'
HVAnalyzer.cc:(.text+0x12ce): undefined reference to `TH1F::TH1F(char const*, char const*, int, double, double)'
HVAnalyzer.cc:(.text+0x12d3): undefined reference to `TH1F::Class()'
HVAnalyzer.cc:(.text+0x12f4): undefined reference to `TH1AddDirectorySentry::TH1AddDirectorySentry()'
HVAnalyzer.cc:(.text+0x1307): undefined reference to `TH1AddDirectorySentry::~TH1AddDirectorySentry()'
HVAnalyzer.cc:(.text+0x20c2): undefined reference to `TH1AddDirectorySentry::~TH1AddDirectorySentry()'
collect2: error: ld returned 1 exit status
config/SCRAM/GMake/Makefile.rules:2067: recipe for target 'tmp/slc6_amd64_gcc530/src/Reco/HVAnalyzer/plugins/RecoHVAnalyzerAuto/libRecoHVAnalyzerAuto.so' failed
gmake: *** [tmp/slc6_amd64_gcc530/src/Reco/HVAnalyzer/plugins/RecoHVAnalyzerAuto/libRecoHVAnalyzerAuto.so] Error 1
gmake: *** [There are compilation/build errors. Please see the detail log above.] Error 2
Successfully ran cmsRun, from /afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/HV/HVAnalyzer, on root file:
file:/afs/cern.ch/work/d/davec/P5data/HVscans17Dec2012-3p8T/209197.root
taken on 17Dec2012, with CMSSW_5_1_2_ONLINE, with h001 histo - channels look like ped data, spread (std dev) = 2.364 ADC counts, 14473 entries.
Tried cmsRun, from /afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/HV/HVAnalyzer, on root file:
/afs/cern.ch/work/d/davec/P5data/HV-scans-20Sep2017-0T/run303213_ls0001_streamDQM_mrg-c2f12-31-01.dat
with "NewEventStreamFileReader" for the .dat file.
Error:
----- Begin Fatal Exception 02-Oct-2017 13:52:01 CEST-----------------------
An exception of category 'ConditionDatabase' occurred while
[0] Processing run: 303213
[1] Running path 'p'
[2] Calling beginRun for module EcalRawToDigi/'ecalEBunpacker'
[3] Using EventSetup component PoolDBESSource/'GlobalTag' to make data EcalChannelStatus/'' in record EcalChannelStatusRcd
Exception Message:
Payload of type EcalCondObjectContainer<EcalChannelStatusCode> with id 9fd892a88a5e2a6aae363739849b112c03886d0c could not be loaded. An exception of category 'ConditionDatabase' occurred.
Exception Message:
De-serialization failed: the current boost version (1_57) is unable to read the payload. Data might have been serialized with an incompatible version. Payload serialization info: {
"CMSSW_version": "CMSSW_9_1_0",
"architecture": "slc6_amd64_gcc530",
"technology": "boost/serialization",
"tech_version": "1_63"
}
from default_deserialize
from Session::fetchPayload
----- End Fatal Exception -------------------------------------------------
Also tried more recent Global Tags, but still errors, ie:
cmsRun recoanalyzer-hvscans_cfg.py > 303213-2Oct2017-pm.txt
----- Begin Fatal Exception 02-Oct-2017 14:02:09 CEST-----------------------
An exception of category 'PluginNotFound' occurred while
[0] Constructing the EventProcessor
[1] Constructing ESSource: class=PoolDBESSource label='GlobalTag'
Exception Message:
Unable to find plugin 'HFPhase1PMTParamsRcd@NewProxy' in category 'CondProxyFactory'. Please check spelling of name.
----- End Fatal Exception -------------------------------------------------
Now running fine in /afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer
Multiple .dat files -> OK
Tag = '92X_upgrade2017_realistic_v7'
29 Sep 2017
Editing /afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/HVAnalyzer.cc with lines from HVAnalyzer.cc-533.
Had to change string volts; to std::string volts; to avoid compiler error 'string' does not name a type.
28 Sep 2017
Trying out Jean Fay's convert file, to change a .dat file to a .root file
Had to do a chmod u+x to get the .sh file to execute!
Had to change data_path="/tmp/fay" in two places:
- in the bash section AND in the process.out = cms.OutputModule("PoolOutputModule", section
./convertDat2Pool.sh -d FileName
ie:
./convertDat2Pool.sh -d run303537_ls0030_streamDQM_mrg-c2f12-31-01.dat
data to be analyzed: run303537_ls0030_streamDQM_mrg-c2f12-31-01.dat
first event analyzed will be: 1
... running
-------------------------------------------------------------------------------------------------------------------------
pool file produced here: /afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/run303537_ls0030_streamDQM_mrg-c2f12-31-01.dat.root
-------------------------------------------------------------------------------------------------------------------------
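The chmod step above can be scripted so the converter is always executable before it is run; a minimal sketch (the helper name is mine, demonstrated on a throwaway file standing in for convertDat2Pool.sh):

```python
import os
import stat
import tempfile

def ensure_user_exec(path):
    """Add the user-execute bit (chmod u+x) if missing; return True if it is now set."""
    mode = os.stat(path).st_mode
    if not mode & stat.S_IXUSR:
        os.chmod(path, mode | stat.S_IXUSR)
    return bool(os.stat(path).st_mode & stat.S_IXUSR)

# Demo on a temporary file (created without the exec bit):
tmp = tempfile.NamedTemporaryFile(suffix=".sh", delete=False)
tmp.close()
print(ensure_user_exec(tmp.name))  # True
os.unlink(tmp.name)
```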
NOW can get edmProvDump to work on the root file (it doesn't work on the .dat file).
Shows CMSSW_9_2_0_patch5 as the CMSSW version used for data taking.
Shows 2 collections:
Module:
TriggerResults HLT
Module: rawDataCollector LHC
edmProvDump run303537_ls0030_streamDQM_mrg-c2f12-31-01.dat.root
Processing History:
LHC '' '"CMSSW_9_2_0_patch5"' (dd392193a5747b5c718c2371e7635ff9)
HLT '' '"CMSSW_9_2_0_patch5"' (af51e6e91aa8d2fe4c18f56d8247a4d5)
---------Producers with data in file---------
Module: TriggerResults HLT
PSet id:e20ccddd965959e04719f6b357534d82
products: {
edmTriggerResults_TriggerResults__HLT.
}
parameters: {
@trigger_paths: vstring tracked = {'HLT_EcalCalibration_v4','HLT_EcalMiniDAQ_v1'}
}
Module: rawDataCollector LHC
PSet id:78a3da76873b2cc46323eddfd231620f
products: {
FEDRawDataCollection_rawDataCollector__LHC.
}
parameters: {
@module_edm_type: string tracked = 'Source'
@module_label: string tracked = 'rawDataCollector'
@module_type: string tracked = 'FedRawDataInputSource'
}
Get python error in CMSSW_9_2_10, from: python recoanalyzer-hvscans_cfg.py
from Configuration.StandardSequences.FrontierConditions_GlobalTag_cfi import *
ImportError: No module named FrontierConditions_GlobalTag_cfi
Commenting out line 83 seems to work :
# from Configuration.StandardSequences.FrontierConditions_GlobalTag_cfi import *
# In GIT, seems to have changed from a cfi file to a cff file:
from Configuration.StandardSequences.FrontierConditions_GlobalTag_cff import *
This seems to work!
See:
SWGuideAboutPythonConfigFile
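The cfi-to-cff rename above is an instance of a general pattern: try the new module path first and fall back to the old one. A generic stdlib sketch (the module names in the demo are illustrative, not CMSSW):

```python
import importlib

def import_first(*names):
    """Return the first module in `names` that imports cleanly; raise if none do."""
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError("none of %s could be imported" % (names,))

# e.g. prefer a hypothetical new name, fall back to a module known to exist:
mod = import_first("no_such_module_xyz", "json")
print(mod.__name__)  # json
```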
27 Sep 2017
error while running on run:
/afs/cern.ch/work/d/davec/P5data/HV-scans-20Sep2017-0T/run303213_ls0001_streamDQM_mrg-c2f12-31-01.dat
An exception of category 'ConditionDatabase' occurred while
[0] Processing run: 303213
[1] Running path 'p'
[2] Calling beginRun for module EcalRawToDigi/'ecalEBunpacker'
[3] Using EventSetup component PoolDBESSource/'GlobalTag' to make data EcalChannelStatus/'' in record EcalChannelStatusRcd
Exception Message:
Payload of type EcalCondObjectContainer<EcalChannelStatusCode> with id 9fd892a88a5e2a6aae363739849b112c03886d0c could not be loaded. An exception of category 'ConditionDatabase' occurred.
Exception Message:
De-serialization failed: the current boost version (1_57) is unable to read the payload. Data might have been serialized with an incompatible version. Payload serialization info: {
"CMSSW_version": "CMSSW_9_1_0",
"architecture": "slc6_amd64_gcc530",
"technology": "boost/serialization",
"tech_version": "1_63"
Also, when trying to run with 303213 on /afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/
get:
----- Begin Fatal Exception 28-Sep-2017 10:54:50 CEST-----------------------
An exception of category 'ConfigFileReadError' occurred while
[0] Processing the python configuration file named recoanalyzer-hvscans_cfg.py
Exception Message:
python encountered the error: <type 'exceptions.ImportError'>
No module named FrontierConditions_GlobalTag_cfi
----- End Fatal Exception -------------------------------------------------
26 Sep 2017
Checking HV code - problems with EBunpacker being recognised.
AAAAAAAAAAAAARRRRRRRRRRRRRRRRRRRRGGGGGGGGGGGGGGGGGGHHHHHHHHHHHHHHHHHH!!!!!!!!!!!!!!!!!
Forgot to bring over all the ECALunpacker python cfi files etc.
26 Sep 2017, copied lines over from work, davec/CMSSW-hvscans/CMSSW_5_3_3_patch2/src/HV/HVAnalyzer/hvanalyzer_cfg.py
Analysis of run209197.root: all works now, in /afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/HV/HVAnalyzer!
Working python file:
recoanalyzer-hvscans_cfg.py.txt
Analysis of run 209197, 17 Dec 2012 at 3.8 T, gave data in:
/afs/cern.ch/work/d/davec/P5data/HVscans17Dec2012-3p8T/hv-out-209197.txt
209197-ProvDump.txt gave:
parameters: {
@module_edm_type: string tracked = 'Source'
@module_label: string tracked = 'rawDataCollector'
@module_type: string tracked = 'DaqSource'
edmevdump-209197.txt gave:
Type Module Label Process
------------------------------------------------------------------
edm::TriggerResults "TriggerResults" "" "FU"
FEDRawDataCollection "rawDataCollector" "" "LHC"
py code:
./CMSSW_8_0_19_patch1/src/HV/HVAnalyzer/recoanalyzer-hvscans_cfg.py:
#'file:/afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/Reco/RecoAnalyzer/209197.root'
cc code at:
/afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/HV/HVAnalyzer/plugins/HVAnalyzer.cc
HVAnalyzer::analyze(const edm::Event& iEvent, const edm::EventSetup& iSetup)
{
using namespace edm;
using namespace std;
evcount = evcount + 1;
cout << "evcount = " << evcount << endl;
edm::Handle<EEUncalibratedRecHitCollection> rechits;
iEvent.getByToken( tok_EE_Rec, rechits );
if ( !rechits.isValid() ) {cout << "No EEUncalibratedRecHitCollection" << endl; return;}
cout << "\n EEUncalibratedRecHitCollection, rechits->size() = " << rechits->size() << endl;
int chnum = 0;
for ( EEUncalibratedRecHitCollection::const_iterator rechitItr= rechits->begin();
rechitItr != rechits->end(); ++rechitItr ) { //1
chnum += 1;
EEDetId id = rechitItr->id();
// not used EEDetId detId = EEDetId((*rechitItr).id());
int eex = id.ix();
int eey = id.iy();
int eez = id.zside();
int hi = id.hashedIndex();
//uint32_t flag = rechitItr->recoFlag();
// uint32_t sev = sevlv->severityLevel(id, *rechits);
float amp = rechitItr->amplitude();
//if(chnum<10) cout << "chnum = " << chnum << " ix, iy, iz = " << eex << " " << eey << " " << eez << " ,amp = " << amp << endl;
if(amp>5) cout << "chnum = " << chnum << " ix, iy, iz = " << eex << " " << eey << " " << eez << " ,amp = " << amp << endl;
}
And output:
EEUncalibratedRecHitCollection, rechits->size() = 14473
chnum = 148 ix, iy, iz = 6 53 -1 ,amp = 6.73744
chnum = 191 ix, iy, iz = 7 46 -1 ,amp = 5.2352
chnum = 202 ix, iy, iz = 7 57 -1 ,amp = 5.0228
etc
22 sep 2017
Set up CMSSW_9_2_10 area for HV scan analysis. Contains:
/afs/cern.ch/user/d/davec/CMSSW_9_2_10/src/Reco/HVAnalyzer/hvanalyzer_cfg.py
Pull over other hv stuff from CMSSW_8_0_19_patch1.
20 Sep 2017
Steps to access raw data for the minidaq runs:
(1) ssh into cmsusr:
ssh cmsusr
pwd Ykw10
(2) (From cmsusr prompt): login to the DQM development FU:
ssh fu-c2f11-21-02
(3) The raw data should be visible on a folder mounted on this machine, ie :
ls /dqmminidaq/run303213/
or
ls /dqmminidaq/run3035*/ > 3035series.txt
to see all runs starting with 3035 etc
(4) To set up a CMSSW environment: (from the cmsusr prompt)
source ~ecalpro/DQM/proxy.sh
export SCRAM_ARCH=slc7_amd64_gcc530
source /opt/offline/cmsset_default.sh || (echo "CMSSW setup script not found on this machine"; return 1)
(5) To copy files over from the FU machine to a target machine:
#see what files are there:
ls -al /dqmminidaq/run303537/*.dat
# first copy files over to /nfshome0/davec :
cp /dqmminidaq/run303537/run303537_ls0002_streamDQM_mrg-c2f12-31-01.dat .
or
cp /dqmminidaq/run303537/run303537_ls*_streamDQM_mrg-c2f12-31-01.dat .
to copy ALL 303537 files into cmsusr:/nfshome0/davec/
From a prompt on the target machine:
scp cmsusr:/nfshome0/davec/run303537_ls0002_streamDQM_mrg-c2f12-31-01.dat .
Unfortunately * doesn't seem to work as a wildcard with scp here.
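Since the wildcard fails over scp, one workaround is to generate the per-lumisection file names explicitly and scp each one; a sketch (the helper is mine, the name pattern is copied from the files listed above):

```python
def dat_names(run, ls_list, suffix="streamDQM_mrg-c2f12-31-01.dat"):
    """Build the per-lumisection .dat file names used above."""
    return ["run%d_ls%04d_%s" % (run, ls, suffix) for ls in ls_list]

names = dat_names(303537, [2, 30])
for n in names:
    # each name would be passed to: scp cmsusr:/nfshome0/davec/<name> .
    print(n)
```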
14 Sep 2017
Revision of
ECAL software.
8 Sep 2017
Looking at old py files for hvscan analysis.
CMSSW_3_9_5 has:
process.p = cms.Path(process.ecalEBunpacker*process.ecalWeightUncalibRecHit*process.demo) # root file will have a directory called 'demo;1' containing all the histos.
#process.p = cms.Path(process.demo)
#process.end = cms.Endpath(process.counter)
process.ecalWeightUncalibRecHit.EBdigiCollection = cms.InputTag("ecalEBunpacker","ebDigis")
process.ecalWeightUncalibRecHit.EEdigiCollection = cms.InputTag("ecalEBunpacker","eeDigis")
#process.ecalWeightUncalibRecHit.EEdigiCollection refers to the variable EEdigiCollection
#in the cfi file ecalWeightUncalibRecHit. This is changed using the InputTag as above.
#
#Now find with python that EEdigiCollection has been updated to:
#python -i hvanalyzer_cfg.py
#>>> process.ecalWeightUncalibRecHit.EEdigiCollection
#cms.InputTag("ecalEBunpacker","eeDigis")
CMSSW_5_3_3_patch2 has:
process.ecalEBunpacker.InputLabel = cms.InputTag("rawDataCollector")
process.p = cms.Path(process.ecalEBunpacker*process.ecalWeightUncalibRecHit*process.demo) # root file will have a directory called 'demo;1' containing all the histos.
#process.p = cms.Path(process.demo)
#process.end = cms.Endpath(process.counter)
process.ecalWeightUncalibRecHit.EBdigiCollection = cms.InputTag("ecalEBunpacker","ebDigis")
process.ecalWeightUncalibRecHit.EEdigiCollection = cms.InputTag("ecalEBunpacker","eeDigis")
#process.ecalWeightUncalibRecHit.EEdigiCollection refers to the variable EEdigiCollection
#in the cfi file ecalWeightUncalibRecHit. This is changed using the InputTag as above.
#
#Now find with python that EEdigiCollection has been updated to:
#python -i hvanalyzer_cfg.py
#>>> process.ecalWeightUncalibRecHit.EEdigiCollection
#cms.InputTag("ecalEBunpacker","eeDigis")
Notice these comments in
https://github.com/cms-steam/HLTrigger/blob/master/HLTanalyzers/test/HLTAnalysis_cfg.py
# from CMSSW_5_0_0_pre6: RawDataLikeMC=False (to keep "source")
if cmsswVersion > "CMSSW_5_0":
process.source.labelRawDataLikeMC = cms.untracked.bool( False )
5 Sep 2017
Updated
HVscan page with a complete log of past HV scans
Revisited HV scans from 2009 (Nick Ryder analysis), in
\\cern.ch\dfs\Workspaces\d\DaveC\Analysis\HV-scans\HV scan data\HV-scans-NickRyder-12Aug2009
EE+ HV scan at 3.8T with EE+1+2+3+4, 25 runs 110764 - 110803
Elog-Ryder-runs-12Aug2009.txt
Have 3.8 T data for a dynode scan, anodes held at 800 V, and a partial anode scan, dynodes held at 600 V.
Scan on one channel, 39, 68, +1.
Dynode-scan-2009.pdf
Anode-scan-2009-3p8T.pdf
4 Sep 2017
Trying the statistics exercises in the CMS workbook at
WorkBookHowToFit
To set up ROOT, go into CMSSW_8_0_19_patch1/src/Reco/RecoAnalyzer, for example, and do cmsenv
to set up the environment.
11 July 2017
Looking at ELMB/PVSS anode voltages, with instructions from Serguei for the
Generic Plotter.
Readout anode/dynode correspondence in
ELMB-PVSS-readout-lines.txt.
Plots in:
\\cern.ch\dfs\Workspaces\d\DaveC\EE folders\High Voltage\Getting-ELMB-PVSS-data\
For channel:
#cms_ecal_dcs_1:ELMB/CAN_BUS_ECAL_HVM_M/ECAL_HVM_EEM/AI/ECAL_HVM_0 Value = EEM_N Anode 1
the plot shows an anode voltage of 796.8 V, rms 0.04 V, with 0.2 V between two equal peaks (the limit of sampling precision)
17 May 2017
Looking at end of 2016 running, laser data.
In /afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/Reco/RecoAnalyzer
Using recoanalyzer_cfg.py with grid request for run 284036, Date/Time = Wed Oct 26 14:36:10 2016
After doing "cmsenv" and "grid" got CMSSW to successfully open and read file:
'root://xrootd-cms.infn.it//store/data/Run2016H/SingleElectron/AOD/PromptReco-v3/000/284/036/00000/04D47F8B-5D9F-E611-9D7C-02163E01456E.root'
# DAS: File size: 3.2GB, File type: EDM, Number of events: 12657, Site: T1_IT_CNAF_MSS, T2_US_Purdue, T1_IT_CNAF_Buffer
CMSSW output for first event:
# Run number = 284036 Lumi = 19 event number = 33003838
# bunch crossing = 2598 orbit = 4803575 store = 0 time from DAQ = 189378
# Unix Time = 1477485370
# Date/Time = Wed Oct 26 14:36:10 2016
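The Unix time / date pair above can be cross-checked directly (14:36 CEST is 12:36 UTC):

```python
from datetime import datetime, timezone

# Unix time printed by CMSSW for the first event of run 284036:
t = datetime.fromtimestamp(1477485370, tz=timezone.utc)
print(t.isoformat())  # 2016-10-26T12:36:10+00:00, i.e. 14:36:10 CEST
```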
Now have histo root file: histo-reco-May2017.root
The last pp fills in 2016:
The fill with run 284036. Inst lumi is ~0.8×10^34 cm^-2 s^-1 at 14.30 on 26 Oct 2016
10 May 2017
Back to EE laser/response data analysis.
In /afs/cern.ch/user/d/davec/CMSSW_8_0_19_patch1/src/Reco/RecoAnalyzer
Last pp fill in 2016: 5451 from
2016 pp data.
- peak lumi = 1.3×10^34 cm^-2 s^-1
Fill details give
- fill start at 26/10/16 07:49 = 26 Oct 2016
- stable beams (SB) at 10.55
- stable beams duration 12:05 h:min
Fill details lists 12 CMS runs for fill 5451.
Run 284036 initial lumi 1.01×10^34 cm^-2 s^-1.
B = 3.8 T, all ECAL in.
Go to DAS to get list of datasets.
dataset run=284036
Then choose:
file dataset=/SingleElectron/Run2016H-PromptReco-v3/RECO
Returns (after a while) 451 records
Get: File: /store/data/Run2016H/SingleElectron/RECO/PromptReco-v3/000/284/036/00000/088C7DA9-FE9E-E611-9BC3-02163E011DCE.root
File size 10.4 GB, Number of events: 6264
Sites: - none listed!?
Try CMSSW
OOOOOOOOOOPPPPPPPPPPPSSSSSSSSSSSSSS !!!!
Forgot to do ...$grid. Works from 17 May 2017.
7 Dec 2016
Pickevents fixed:
Hi David, for the DAS error, it has been corrected :
davidlange6 update edmPickEvents since das_client can not be imported
https://github.com/cms-sw/cmssw/tree/CMSSW_9_0_X/PhysicsTools/Utilities/scripts
Best regards. Pierre (Depasse)
31 Aug 2016
To get crystal histories
github ferriff
Use:
lava_db_dumpId.cpp: dump ECAL monitoring correction histories for a given set of
DetId (or all of them)
18 August 2016
Looking for lumi data using CMSSW:
https://twiki.cern.ch/twiki/bin/viewauth/CMS/LumiCalc
17 August 2016
Success, got grid working with AAA (Any Data, Anytime, Anywhere), see:
XrootdService
Looking for runs in 2015 to check unity laser calib status.
Check details with cmswbm
Run 254914, 23 Aug 2015, 0.88×10^33, Sam
Run 260540, 1 Nov 2015, 3.2×10^33, 3.8 T
Run 260425, 30-31 Oct 2015, 4×10^33, 3.8 T
using dataset=/SinglePhoton/Run2015D-PromptReco-v4/AOD in DAS.
12 August 2016
Progress with lasercalib analysis. Problem of EcalLaserDbService::getLaserCorrection setting some channels to 1.0, particularly in inner EE.
EcalLaserDbService.cc and
EcalLaserDbService.h downloaded with git into /afs/cern.ch/user/d/davec/CMSSW_8_0_8/src with
git-cms-addpkg
CalibCalorimetry
NOTE - git wanted an empty src directory. Had to move "Reco" away temporarily.
NOTE - have to do "scramv1 b" in /afs/cern.ch/user/d/davec/CMSSW_8_0_8/src to get edited
EcalLaserDbService.cc
Output:
apdpnref = 1.000, apdpnpair.p1 = 0.480, apdpnpair.p2 = 0.480, alpha = 1.520
linValues.p1 = 1.000, linValues.p2 = 1.000, linValues.p3 = 1.000
EB lasercalib > 2.6, iphi = 340 ieta = -45 lasercalib = 3.07
EBDetId = (EB ieta -45, iphi 340 ; ism 35 , ic 900) rawId = 838884180
Calculate (1/0.480)**1.52 = pow(1/0.48,1.52) = 3.05150 which is, nearly, 3.07 from the rechit !!!
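The cross-check above in runnable form, assuming the (apdpnref/apdpn)^alpha shape implied by the pow() calculation (the residual difference from 3.07 presumably comes from rounding of p1/p2 in the printout):

```python
def laser_corr(apdpn, apdpnref, alpha):
    """Laser correction factor: (apdpnref/apdpn)**alpha, per the pow() check above."""
    return (apdpnref / apdpn) ** alpha

# Values copied from the printout: apdpnpair.p2 = 0.480, apdpnref = 1.000, alpha = 1.52
corr = laser_corr(0.480, 1.000, 1.52)
print(corr)  # ~3.05, close to the lasercalib = 3.07 seen on the rechit
```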
11 July 2016
reco namespace
21 June 2016
Success - got aliases etc working on the mac, changed bash_profile to .bash_profile.
Now loads up aliases etc each time terminal is opened.
Running CMSSW_8_0_8. Works if comment out all print calls from
RecoAna.cc, ie printLasercalibDB(iEvent, iSetup, evcount);
Otherwise get error:
21-Jun-2016 11:23:28 CEST Successfully opened file file:/afs/cern.ch/work/d/davec/EventChecks/eventsECALgap_RECO.root
Begin processing the 1st record. Run 273725, Event 3243342830, LumiSection 2297 at 21-Jun-2016 11:23:36.641 CEST
----- Begin Fatal Exception 21-Jun-2016 11:23:37 CEST-----------------------
An exception of category 'NoProxyException' occurred while
[0] Processing run: 273725 lumi: 2297 event: 3243342830
[1] Running path 'p'
[2] Calling event method for module RecoAnalyzer/'demo'
[3] Using EventSetup component CaloGeometryBuilder/'' to make data CaloGeometry/'' in record CaloGeometryRecord
[4] Using EventSetup component HcalHardcodeGeometryEP/'' to make data CaloSubdetectorGeometry/'HCAL' in record HcalGeometryRecord
[5] Using EventSetup component HcalTopologyIdealEP/'hcalTopologyIdeal' to make data HcalTopology/'' in record HcalRecNumberingRecord
Exception Message:
No data of type "HcalDDDRecConstants" with label "" in record "HcalRecNumberingRecord"
Please add an ESSource or ESProducer to your job which can deliver this data.
----- End Fatal Exception -------------------------------------------------
21-Jun-2016 11:23:37 CEST Closed file file:/afs/cern.ch/work/d/davec/EventChecks/eventsECALgap_RECO.root
=============================================
MessageLogger Summary
type category sev module subroutine count total
---- -------------------- -- ---------------- ---------------- ----- -----
1 Fatal Exception -s PostProcessPath 1 1
2 fileAction -s file_close 1 1
3 fileAction -s file_open 2 2
type category Examples: run/evt run/evt run/evt
---- -------------------- ---------------- ---------------- ----------------
1 Fatal Exception Run: 273725 Even
2 fileAction PostEndRun
3 fileAction pre-events pre-events
Severity # Occurrences Total Occurrences
-------- ------------- -----------------
System 4 4
Now investigate printLasercalibDB(iEvent, iSetup, evcount); to be compatible with new CMSSW coding for Handle etc.
Success,
HcalDDDRecConstants problem solved by adding process.load("Configuration.Geometry.GeometryECALHCAL_cff") to recoanalyzer_cfg.py !!!
Now un-comment printDigis(iEvent, iSetup);
Blast - now get complaints that consumes is not set up ==> no escape !!!!
Exception Message:
::getByLabel without corresponding call to consumes or mayConsumes for this module.
type: EEDigiCollection
module label: selectDigi
product instance name: 'selectedEcalEEDigiCollection'
10 June 2016
More compiler errors trying to compile with CMSSW_8_0_8 - for -Werror=unused-variable and -Werror=unused-but-set-variable
Commented out these lines from:
/afs/cern.ch/user/d/davec/CMSSW_8_0_8/config/toolbox/slc6_amd64_gcc530/tools/selectedgcc-cxxcompiler.xml
with
<!--
<flags CXXFLAGS="-Werror=unused-but-set-variable -Werror=reorder"/>
<flags CXXFLAGS="-Werror=unused-variable -Werror=conversion-null"/>
-->
OK now
Try to run with cmsRun recoanalyzer_cfg.py > MET.txt. Errors:
Exception Message:
No data of type "HcalDDDRecConstants" with label "" in record "HcalRecNumberingRecord"
Please add an ESSource or ESProducer to your job which can deliver this data.
9 June 2016
Starting to set up CMSSW_8_0_8
Need to introduce new class definitions etc:
Comments from Dima:
In the class definition I have the following:
edm::EDGetTokenT<EcalRecHitCollection> tok_EB_;
edm::EDGetTokenT<EcalRecHitCollection> tok_EE_;
edm::EDGetTokenT<EBDigiCollection> tok_EB_digi;
In the class constructor, where we have access to "config":
tok_EB_ = consumes<EcalRecHitCollection>(edm::InputTag("reducedEcalRecHitsEB"));
tok_EE_ = consumes<EcalRecHitCollection>(edm::InputTag("reducedEcalRecHitsEE"));
tok_EB_digi = consumes<EBDigiCollection>(edm::InputTag("selectDigi","selectedEcalEBDigiCollection"));
And then in the analysis part I have the following:
edm::Handle<EcalRecHitCollection> EBRecHits;
edm::Handle<EcalRecHitCollection> EERecHits;
iEvent.getByToken( tok_EB_, EBRecHits );
iEvent.getByToken( tok_EE_, EERecHits );
20 Apr 2016
Reply from RootTalk at:
concerning the use of "map"
The easiest would be to:
1. rename "main" into something else, e.g. "mtest",
2. use ACLiC to precompile it:
root [0] .L map-test.C++
root [1] mtest();
BTW. If you switch to ROOT 6, you will not need to "precompile" it.
14 Apr 2016
Running Adam Butler's code for VPT, xtal, fp and proton losses, 2011, 2012 and Run2
Code doesn't run on linux with cmsenv, but does run on my MAC. Problem with "map" initialisation.
- Adam's original code at /afs/cern.ch/work/d/davec/VPT-PVSS-point5-data/AdamButler/AdamScripts
- Adam's code on my linux area at /afs/cern.ch/user/d/davec/VPT-Conditioning
- Adam's code on my MAC at /Users/david/Documents/Root-vpt-analysis/.
On the MAC, prepare for root by doing "source bash_profile" in folder david. Alias vptcond to cd to the area.
Adam's code, for VPT conditioning, ~line 144, has:
ped = 73.0233; amp1 = 12.4633; t1 = -4.78703e-3; amp2 = 14.6533; t2 = -1.28185e-3
I get, with new basic-vpts-with-types-8Jan2016.dat, remove -ve amp1, and fast fast t2:
ped = 68.2107, amp1 = 15.4107, t1 = -7.89903e-3, amp2 = 16.5807, t2 = -0.902481e-3
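Assuming the fitted numbers above parameterise a double-exponential conditioning curve of the form ped + amp1*exp(t1*x) + amp2*exp(t2*x) (my assumption; Adam's actual code is not reproduced here), the two parameter sets can be compared:

```python
import math

def cond_model(x, ped, amp1, t1, amp2, t2):
    # Assumed double-exponential VPT conditioning shape (not Adam's verbatim code).
    return ped + amp1 * math.exp(t1 * x) + amp2 * math.exp(t2 * x)

adam = dict(ped=73.0233, amp1=12.4633, t1=-4.78703e-3, amp2=14.6533, t2=-1.28185e-3)
mine = dict(ped=68.2107, amp1=15.4107, t1=-7.89903e-3, amp2=16.5807, t2=-0.902481e-3)
for pars in (adam, mine):
    # at x = 0 this is just ped + amp1 + amp2
    print(cond_model(0.0, **pars))
```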
Adam's code for crystal/vpt damage has:
feta = 2.777; CrystalLen = 0.220; QE = 0.1863; MagFac = 0.7055; Rad = 117.97; LY0 = 17.154; alphaCrystal = 1.3;
Adam's code:
if(entryN==0) intChargeArray1[entryN] = alpha*QE*MagFac*Rad*LY* intLumiArray1[entryN]*1e-8;
otherwise
intChargeArray1[entryN] = alpha*QE*MagFac*Rad*LY* (intLumiArray1[entryN]-intLumiArray1[entryN-1])*1e-8 +intChargeArray1[entryN-1];
CondEnd11 = 1-VPTResponseArray1[entryN];
// intChargeArrayPlot1[entryN] = intChargeArray1[entryN]/0.02;
DoseArray1[entryN] = DoseFac * intLumi1;
DoseArrayPlot1[entryN] =0.9+ (DoseFac * intLumi1)/20;
FPResponseArray1[entryN] = fpConst + fpAmp*exp(DoseArray1[entryN]*fpT)+0.001257;
PDResponseArray1[entryN] = exp((-CrystalLen*getMu(eta,intLumiArray1[entryN]/pow(2,0.5)))/1.2);
cout << "Just before getCond" << endl;
CrystalTotResponseArray1[entryN] = (getCond(tstop1)/VPTResponseArray1[entryN])/FPResponseArray1[entryN];
CrystalResponseArray1[entryN] = ((getCond(tstop1)/VPTResponseArray1[entryN])/FPResponseArray1[entryN])/PDResponseArray1[entryN];
LY = LY0*pow(CrystalTotResponseArray1[entryN],alphaCrystal);
TGraph * Graph1 = new TGraph(nEntries1, unixtArray, laserArray); // laser data from Point5, 2011
TGraph * Graph2 = new TGraph(nEntries2, tstopArray1, VPTResponseArray1); // calculated VPT conditioning losses, 2011
TGraph * Graph4 = new TGraph(nEntries2, tstopArray1, FPResponseArray1); // calculated Face Plate loss, 2011
TGraph * Graph6 = new TGraph(nEntries2, tstopArray1, PDResponseArray1); // calculated Proton Damage, 2011
TGraph * Graph8 = new TGraph(nEntries2, tstopArray1, CrystalResponseArray1); // calculated crystal response, 2011
24 Mar 2016
Found error for one vpt entry in /afs/cern.ch/user/d/davec/EE-Easy-Data/scxyz.dat
scxyz.dat: 6 4655 22 0 4 2.0200 37.6000 33.5600 -99.0000 35 65 -1
The -99.0000 should have been the LY, after the number 4 (fdee)
This gave a false p*g of 37.6 for vpt 4655 !!!!!!!!!
For now, insert a LY of 10.000 (Bog xtal) after "4" and remove -99.0000
23 Mar 2016
1) Don't put #include "Riostream.h", #include <iostream> etc. into an unnamed root script.
Screws everything up!
Maybe because cmsenv was executed beforehand?
2) Aaaaaaaaaaaaaaaaarrrgggggghhhhhhhhhh
in \\cern.ch\dfs\Workspaces\d\DaveC\ANALYSIS\VPT-rig-data\
VPT-UVA-Brunel-info.C has double lossinf[np];
VPT-loss-with-eta-and-fb.c has double lossinf[2]
Running VPT-loss-with-eta-and-fb.c after VPT-UVA-Brunel-info.C caused root to crash due to lossinf being redefined !!!!
However, the error message was opaque.
23 Sep 2015
Some trouble making muon1 and muon2 global, for Sacha's dimuon analysis.
In header, now have:
math::XYZTLorentzVector muon1;
math::XYZTLorentzVector muon2;
Now initialise at analyze, at start of event, with:
muon1 = math::XYZTLorentzVector(0.,0.,0.,0.);
muon2 = math::XYZTLorentzVector(0.,0.,0.,0.);
cout << "\nAt top of analyze, muon1.Pt() = " << muon1.Pt() << endl;
cout << "At top of analyze, muon2.Pt() = " << muon2.Pt() << endl;
Mistake was in DiMuAnalysis_Data_Muons.cc with:
math::XYZTLorentzVector muon1(0.,0.,0.,0.);
math::XYZTLorentzVector muon2(0.,0.,0.,0.);
( Also tried muon2(0.,0.,0.,0.); )
This reset muon1 and muon2 as local variables for use only in DiMuAnalysis_Data_Muons.cc
math::XYZTLorentzVector seems to be typedefined as TLorentzVector in cmssw
TLorentzVector at https://root.cern.ch/doc/master/classTLorentzVector.html
21 Sep 2015
AAAAAAAAAAArrrrrrrrrrrrrrrrrrrrggggggggggggggghhhhhhhhhhhhhhh
Looking at the EE+ laser data for Henry.
The file:
/afs/cern.ch/work/d/davec/Students-code-plots/Guldemond/original_data/ecal_laser_dumped_ids_EE+.dat
has the # symbol in the first 2 lines - so the file read doesn't work!
/<5>Students-code-plots/Guldemond/original_data $ cat ecal_laser_dumped_ids_EE+.dat | awk '{print $1, $2, $3, $4, $5, $7, $8}'
# EcalLaserAPDPNRatios_data_20120131_158851_183320 @ frontier://FrontierProd/CMS_COND_42X_ECAL_LAS
#time 872436865 872436993 872437121 872437249 872437505 872437633
1318379725 0.947365 0.966679 0.956072 0.962112 0.974194 0.951571
1318382173 0.946486 0.966057 0.955203 0.961401 0.973668 0.951171
The file:
/afs/cern.ch/work/d/davec/Students-code-plots/Guldemond/ecal_laser_dumped_ids.dat
does NOT have the # symbol in the first 2 lines - the read is OK!
/<4>davec/Students-code-plots/Guldemond $ cat ecal_laser_dumped_ids.dat | awk '{print $1, $2, $3, $4, $5, $7, $8}'
872436865 872436993 872437121 872437249 872437377 872437633 872437761
1318379725 0.947365 0.966679 0.956072 0.962112 0.974194 0.951571
1318382173 0.946486 0.966057 0.955203 0.961401 0.973668 0.951171
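A reader that tolerates both file variants by skipping '#' header lines; a minimal sketch (the helper is mine; the column layout is taken from the awk output above):

```python
def read_laser_rows(lines):
    """Parse whitespace-separated numeric rows, skipping blank and '#' header lines."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        rows.append([float(x) for x in line.split()])
    return rows

sample = [
    "# EcalLaserAPDPNRatios_data_20120131_158851_183320",
    "#time 872436865 872436993",
    "1318379725 0.947365 0.966679",
    "1318382173 0.946486 0.966057",
]
rows = read_laser_rows(sample)
print(len(rows))  # 2
```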
Printing out Sacha's jet variables, get NULL error in DiMuAnalysis_Data_Fill_Jet_Variables.cc
An exception of category 'InvalidReference' occurred while
[0] Processing run: 195013 lumi: 294 event: 432365639
[1] Running path 'p1'
[2] Calling event method for module DiMuAnalysis_Data/'dump'
Exception Message:
attempting get key from null RefToBase;
You should check for nullity before calling key().
Getting exception since there are not 2 valid muons, one positive, one negative, for the event.
The line is in DiMuAnalysis_Data_Fill_Jet_Variables.cc
RefToBase<Jet> jptjetRef = jptjet->getCaloJetRef();
Need to test for NULL !!
See cmssw RefToBase.h, isNull() is one of the object functions
if ( jptjetRef.isNull() )
seems to work !
16 Sep 2015
More di-muon coding. Un-comment various Handles.
Now get complaint:
Begin processing the 1st record. Run 195013, Event 155914555, LumiSection 137 at 16-Sep-2015 10:34:56.898 CEST
----- Begin Fatal Exception 16-Sep-2015 10:35:04 CEST-----------------------
An exception of category 'FileInPathError' occurred while
[0] Processing run: 195013 lumi: 137 event: 155914555
[1] Running path 'p1'
[2] Calling event method for module DiMuAnalysis_Data/'dump'
Exception Message:
edm::FileInPath unable to find file CondFormats/JetMETObjects/data/GR_R_42_V23_AK5JPT_Uncertainty.txt anywhere in the search path.
The search path is defined by: CMSSW_SEARCH_PATH
${CMSSW_SEARCH_PATH} is: /afs/cern.ch/user/d/davec/CMSSW_5_3_18/src:/afs/cern.ch/user/d/davec/CMSSW_5_3_18/external/slc6_amd64_gcc472/data:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/cms/cmssw/CMSSW_5_3_18/src:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/cms/cmssw/CMSSW_5_3_18/external/slc6_amd64_gcc472/data
Current directory is: /afs/cern.ch/user/d/davec/CMSSW_5_3_18/src/Sacha/RecoAnalyzer
----- End Fatal Exception -------------------------------------------------
This comes from the following lines of code in analyze:
// JES uncertainty
std::string JEC_PATH("CondFormats/JetMETObjects/data/");
edm::FileInPath fip(JEC_PATH+"GR_R_42_V23_AK5JPT_Uncertainty.txt");
JetCorrectionUncertainty *jecUnc = new JetCorrectionUncertainty(fip.fullPath());
Copy over Sacha's CondFormats directory into /afs/cern.ch/user/d/davec/CMSSW_5_3_18/src/
Has file GR_R_53_V13_Uncertainty_AK5JPT.txt
Copy this into my own GR_R_42_V23_AK5JPT_Uncertainty.txt just to keep going.
Works !!!
File GR_R_42_V23_AK5JPT_Uncertainty.txt has the following header:
{1 JetEta 1 JetPt "" Correction Uncertainty}
14 Sep 2015
AAAAAAAAaaaaaaaaaaaaaaaarrrrrrrrrrrrggggggggggghhhhhhhhhhhh
Found problem with "Request to resolve a null or invalid reference to a product of type 'std::vector<reco::Track>' "
If muon is not a global muon then globalTrack() is NULL !!!!!!!!!!!
No associated track data is filled/available => crash
Various boolean tests:
iterMuon->isGlobalMuon()
iterMuon->globalTrack().isNull()
iterMuon->innerTrack().isNull()
iterMuon->isTrackerMuon()
Also test for the presence of a COLLECTION, ie:
// Some useful typedefs
typedef reco::MuonCollection RecoMuons;
Handle<RecoMuons> mus;
cout << "MuonCollection, muons " << endl;
iEvent.getByLabel("muons", mus);
if( !mus.isValid() ) {cout << "No valid muon collection" << endl; return;}
11 Sep 2015
More work on Sasha Nikitenko's muon excess data.
Sacha's code at /afs/cern.ch/user/a/anikiten/scratch0/CMSSW_5_3_11/src/Nikitenko/DPS/plugins
Files: DiMuAnalysis_Data.cc, BuildFile.xml
Python files at /afs/cern.ch/user/a/anikiten/scratch0/CMSSW_5_3_11/src/Nikitenko/DPS/test
My area for Sasha's work at /afs/cern.ch/user/d/davec/CMSSW_5_3_18/src/Sacha/RecoAnalyzer
Still getting exceptions when trying to get associated Track data:
Begin processing the 1st record. Run 195013, Event 155914555, LumiSection 137 at 16-Sep-2015 10:34:56.898 CEST
----- Begin Fatal Exception 16-Sep-2015 10:35:04 CEST-----------------------
An exception of category 'FileInPathError' occurred while
[0] Processing run: 195013 lumi: 137 event: 155914555
[1] Running path 'p1'
[2] Calling event method for module DiMuAnalysis_Data/'dump'
Exception Message:
edm::FileInPath unable to find file CondFormats/JetMETObjects/data/GR_R_42_V23_AK5JPT_Uncertainty.txt anywhere in the search path.
The search path is defined by: CMSSW_SEARCH_PATH
${CMSSW_SEARCH_PATH} is: /afs/cern.ch/user/d/davec/CMSSW_5_3_18/src:/afs/cern.ch/user/d/davec/CMSSW_5_3_18/external/slc6_amd64_gcc472/data:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/cms/cmssw/CMSSW_5_3_18/src:/cvmfs/cms.cern.ch/slc6_amd64_gcc472/cms/cmssw/CMSSW_5_3_18/external/slc6_amd64_gcc472/data
Current directory is: /afs/cern.ch/user/d/davec/CMSSW_5_3_18/src/Sacha/RecoAnalyzer
----- End Fatal Exception -------------------------------------------------
21 Aug 2015
Discussing test pulse and ped data.
Useful url - at ECAL P5: light checker
12 Aug 2015
Looking at Sasha Nikitenko's muon excess data, details
11 Aug 2015
Setting up to look at test pulse data from RAW files - using Jean Fay's python script.
Old location for local ecal runs, Jean Fay, Thu 13/01/2011 11:41:
Dear Dave,
I have learned today you wanted to use local runs. In order not to be constrained to use the CMSSW
version they are written with, I convert all of them to root files and copy them to castor in :
/castor/cern.ch/user/c/ccecal/rawFromP5/
with the same name ending in .root
eg : the last local run can be found as :
/castor/cern.ch/user/c/ccecal/rawFromP5/ecal_local.00153297.0001.A.storageManager.00.0000.dat.root
These root files can be analyzed with any recent CMSSW version.
Tell me if you need more information. Cheers, Jean
10 Aug 2015
Events with ADC samples = 0
DAS search dataset=/MinimumBias/Run2015B*/RAW gives /MinimumBias/Run2015B-v1/RAW
Successfully incorporated Jean Fay's python code to run on RAW.
Reminder of RAW edmEventDumpContent:
Type Module Label Process
--------------------------------------------------------------------------------
L1GlobalTriggerObjectMapRecord "hltL1GtObjectMap" "" "HLT"
edm::TriggerResults "TriggerResults" "" "HLT"
trigger::TriggerEvent "hltTriggerSummaryAOD" "" "HLT"
FEDRawDataCollection "rawDataCollector" "" "LHC"
28 July 2015
CRAB3 cheat sheet
27 July 2015
Go over edmPickEvents.py to get Harper's RECO version of his double-electron event.
- Use EOS to see what is available - caution different folder sets from DAS
- Use DAS to get file from run, lumi: DAS commands
- Use edmPickEvents.py with run, lumi, event to get the file with the event
Event DoubleEG run 251562 lumi 605 event 52850044
1) Find where the event is.
eoscms ls -l /eos/cms/store/data/Run2015B/DoubleEG/RECO/PromptReco-v1/000/251/562/00000/
Lists dozens of root files.
DAS with command line
Download das_client.py, see DAS FAQs, DAS Query Guide
Try to find Sam's file:
Dataset value requires 3 slashes!
~ $ ./das_client.py --query="dataset=/DoubleEG*/AOD"
status: fail, reason: Can not interpret the query (while creating DASQuery)
Input query contains a wild-card statement which is ambiguous:
Dataset value requires 3 slashes. The query matches one dataset pattern in our cache,
but please check if this is what you intended:
/DoubleEG/*/AOD
Warns of asterisk use:
Suggested queries:
dataset=/DoubleEG/*/AOD
For easier query debuging, please use the DAS web interface first.
Success, location NOT same as in EOS, Run2015 linked to PromptReco-v1:
~ $ ./das_client.py --query="dataset=/DoubleEG/*/AOD"
Showing 1-10 out of 5 results, for more results use --idx/--limit options
/DoubleEG/Run2015A-PromptReco-v1/AOD
/DoubleEG/Run2015B-PromptReco-v1/AOD
/DoubleEG/Tier0_Test_SUPERBUNNIES_vocms015-PromptReco-v27/AOD
/DoubleEG/Tier0_Test_SUPERBUNNIES_vocms015-PromptReco-v29/AOD
/DoubleEG/Tier0_Test_SUPERBUNNIES_vocms047-PromptReco-v73/AOD
~ $ ./das_client.py --query="dataset=/DoubleEG/*/AOD"
~ $
Go to DAS web page on PC and enter search for AOD file - get root file immediately:
file dataset=/DoubleEG/Run2015B-PromptReco-v1/AOD run=251562
Returns:
File: /store/data/Run2015B/DoubleEG/AOD/PromptReco-v1/000/251/562/00000/D8A11EDE-362A-E511-BBE9-02163E0124CC.root
Dataset, Sites, Runs, Parents, Children, Lumis, Download Sources: dbs3 show
Clicking on Runs gives
Run: 251562
Datasets Sources: dbs3 show
Clicking on Lumis gives
Lumi: [[605, 605], [618, 618], [644, 644], [651, 651], [662, 662], [672, 672]]
Run number: 251562
Sources: dbs3 show
Go to DAS web page on PC and enter search for RECO !!! file - get root file immediately:
file dataset=/DoubleEG/Run2015B-PromptReco-v1/RECO run=251562 lumi=605
Returns:
lumi file=/store/data/Run2015B/DoubleEG/RECO/PromptReco-v1/000/251/562/00000/44E2F5F5-362A-E511-B2BF-02163E013674.root
Clicking on Runs gives
Run: 251562
Datasets Sources: dbs3 show
Clicking on Lumis gives
Lumi: [605, 605]
Run number: 251562
Sources: dbs3 show
Hence only ONE lumi in the RECO file.
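The DAS lumi list is a set of inclusive [first, last] ranges. A small Python helper (a sketch, not part of das_client.py) flattens them to check whether a given lumi is in a file:

```python
# Flatten DAS-style lumi ranges (copied from the AOD output for run 251562
# above) into a plain sorted list of lumi sections.

def flatten_lumi_ranges(ranges):
    """Expand [[first, last], ...] inclusive ranges into a sorted list."""
    lumis = set()
    for first, last in ranges:
        lumis.update(range(first, last + 1))
    return sorted(lumis)

aod_lumis = flatten_lumi_ranges([[605, 605], [618, 618], [644, 644],
                                 [651, 651], [662, 662], [672, 672]])
print(aod_lumis)          # [605, 618, 644, 651, 662, 672]
print(605 in aod_lumis)   # True
```

Handy for confirming the RECO file really only carries lumi 605.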
Getting the Global Tag
An example from an edmProvDump
ESSource: GlobalTag RECO
parameters: {
@module_edm_type: string tracked = 'ESSource'
@module_label: string tracked = 'GlobalTag'
@module_type: string tracked = 'PoolDBESSource'
connect: string tracked = 'frontier://FrontierProd/CMS_COND_31X_GLOBALTAG'
globaltag: string tracked = 'FT_R_53_V6::All'
DBParameters: PSet tracked = ({
26 July 2015
Analysis topics:
- Bad EE SCs
- Bad VPTs
- Grid/CRAB
- Python
Bad EE SCs:
MET recommendation for early data analysis
Workbook MiniAOD 2015 and ETmiss_filters
Hongxuan and Kenichi Hatakeyama email 30 May 2012:
The skims are also available at CERN castor /castor/cern.ch/user/l/lhx/badEESc/ with same directory names
- run : 190645 lumi : 49 event : 33595435 and others
Python examples
2012 Muon Exercise
GRID/CRAB
WorkBook CRAB3 Tutorial
22 July 2015
JetMet filters
Downloaded EEBadScFilter.cc from GitHub at https://github.com/cms-sw/cmssw
- searched for file in the GitHub search window
- File at RecoMET/METFilters/plugins/EEBadScFilter.cc
- Click on "Raw" and then "save file as" to download to folder
Using area /afs/cern.ch/user/d/davec/CMSSW_7_4_6/src/Reco/RecoAnalyzer/plugins/ to include EEBadScFilter.cc. Success !!!
- Added lines to config file to bring in eeBadScFilter_cfi.py
- see Python notes for stumbling blocks encountered !!
Checked filter by setting path to false in EEBadScFilter.cc that turned off the event flow (zero events out of one event passed to EDAnalyser).
20 July 2015
First event request for 2015, Sam, HEEP, 2 electron, high mass event
'file:/afs/cern.ch/work/d/davec/EventChecks/doubleEG_run251562_605_528500442.root'
All looks OK, results
Using GlobalTag.globaltag is GR_P_V56::All
Numerous C++ tweaks needed to get going.
CMSSW_7_4_6
- Does not create a src area anymore. Put stuff in "plugins" instead, including the BuildFile.xml.
Web advice: Please do not put using namespace std in a header file, it pollutes the global namespace for all users of that header. See namespaces also.
More compiler errors - for -Werror=unused-variable and -Werror=unused-but-set-variable
Removed these flags from ../config/toolbox/slc6_amd64_gcc491/tools/selected/gcc-cxxcompiler.xml
OK now
Compiler error
Needed a "using namespace std" for cout and endl in RecoAnalyzer::endJob(). OK now.
From edmProvDump:
Processing History:
LHC '' '"CMSSW_7_4_6"' (dd392193a5747b5c718c2371e7635ff9)
HLT '' '"CMSSW_7_4_6"' (871b1afaddd80a6e5b79a0aec7adf3d6)
RECO '' '"CMSSW_7_4_6_patch6"' (3eaabc799c477ab2acc68e3d4e155f2e)
CMSSW_7_3_2_patch3
This is the version recommended on the CMSSW workbook page. However, crashed on doing cmsRun - missing stuff, probably for multifits on the digis. Needed to go to 7_4_6, as above
- outOfTimeChi2() and outOfTimeEnergy() no longer in use - commented out from plotEERecHits.cc and printPhoton.cc.
- maybe due to the move to multifit
Got errors for cluster covariance - went to 7_4_6 on David's advice.
(interestingly, going back to 7_3_2 later, this error did not reappear).
4 Nov 2014
Getting lots of sl6 compile errors due to unused variables
sl6 errors
Some solutions at cp3.irmp.ucl. Compiler flag usage site helpful. gcc warning options.
#Before compiling it is needed to change some flags in the default compiler so it doesn't treat not used variables as errors (they will be treated as warnings)
sed -i.bak 's/-Werror=unused-variable//' ../config/toolbox/slc6_amd64_gcc472/tools/selected/gcc-cxxcompiler.xml
sed -i.bak 's/-Werror=unused-but-set-variable//' ../config/toolbox/slc6_amd64_gcc472/tools/selected/gcc-cxxcompiler.xml
Solved!
gcc-cxxcompiler.xml in my CMSSW_5_3_18 area: /afs/cern.ch/user/d/davec/CMSSW_5_3_18/config/toolbox/slc6_amd64_gcc472/tools/selected/
sed -i.bak 's/-Werror=unused-variable//' gcc-cxxcompiler.xml successfully removed the Werror statement
Did same for unused-but-set-variable.
Now compiled OK, but errors for multiple RecoAnalyzer versions. Renamed RecoAnalyzer to remove ambiguity with RecoAna.cc. All OK now.
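For reference, the same flag removal can be done without sed - a Python sketch (the .bak backup mirrors sed -i.bak; not an official script):

```python
import re
import shutil

def strip_werror(xml_path):
    """Remove the two -Werror flags from gcc-cxxcompiler.xml, keeping a
    .bak copy, equivalent to the sed -i.bak commands above."""
    shutil.copy(xml_path, xml_path + ".bak")
    with open(xml_path) as f:
        text = f.read()
    text = re.sub(r"-Werror=unused-but-set-variable", "", text)
    text = re.sub(r"-Werror=unused-variable", "", text)
    with open(xml_path, "w") as f:
        f.write(text)
```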
30 Oct 2014
Upgrading to sl6:
- In .tcshrc, new ROOTSYS
- setenv ROOTSYS /afs/cern.ch/sw/lcg/app/releases/ROOT/6.02.00/x86_64-slc6-gcc48-opt/root/
- New SCRAM_ARCH architecture area
- export SCRAM_ARCH=slc6_amd64_gcc472
- New export area (for the H->ZZ work)
- export CVSROOT=":ext:davec@lxplus.cern.ch:/afs/cern.ch/user/c/cvscmssw/public/CMSSW"
Reminder: alias cmsenv gives eval `scramv1 runtime -csh`
26 Jul 2014
Trouble with WinSCP from home - big block SFTP transfer error.
Remove print statement from .tcshrc. All OK now.
Put all print statements into .login file !!!!!!!!!!
22 Jul 2014
H->ZZ->4l 2014 data analysis school
2013 Higgs Properties Exercise
Some data at:
/castor/cern.ch/user/m/mene/HIGGS/
4 pT dists at:
HZZ4lSearchExercise intro
15 Jul 2014
Looking at the data files for H-to-ZZ, used in AN-13-108, for 2012:
/DoubleElectron/Run2012A-22Jan2013-v1/AOD
/DoubleMu/Run2012A-22Jan2013-v1/AOD
/MuEG/Run2012A-22Jan2013-v1/AOD
/DoubleElectron/Run2012B-22Jan2013-v1/AOD
/DoubleMu/Run2012B-22Jan2013-v1/AOD
/MuEG/Run2012B-22Jan2013-v1/AOD
/DoubleElectron/Run2012C-22Jan2013-v1/AOD
/DoubleMu/Run2012C-22Jan2013-v1/AOD
/MuEG/Run2012C-22Jan2013-v1/AOD
/DoubleElectron/Run2012D-22Jan2013-v1/AOD
/DoubleMu/Run2012D-22Jan2013-v1/AOD
/MuEG/Run2012D-22Jan2013-v1/AOD
Go to DAS
Enter field "dataset=/DoubleElectron/Run2012A-22Jan2013-v1/*"
Get:
Dataset: /DoubleElectron/Run2012A-22Jan2013-v1/AOD
Creation time: 2013-01-25 23:26:14, Physics group: NoGroup, Status: VALID, Type: dat
Clicking on "Files"
One run is "/store/data/Run2012A/DoubleElectron/AOD/22Jan2013-v1/20000/003EC246-5E67-E211-B103-00259059642E.root"
File size: 4.2 GB, File type: EDM, Number of events: 22803
Total of 745 files !!
Total of 99 runs
Implies total 2012A dataset is 4.2 * 745 = 3129 GB = 3.13 TB.
Implies 2012A total number of events is 22803 * 745 = 17 M
Implies each event is 4.2 GB / 22803 = 0.2 MB
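The arithmetic above, as a quick Python check (per-file numbers from the DAS output; 1 GB taken as 1000 MB to match the rough figures):

```python
# Back-of-envelope totals for the 2012A DoubleElectron dataset, using the
# per-file numbers quoted above (4.2 GB, 22803 events, 745 files).
gb_per_file = 4.2
events_per_file = 22803
n_files = 745

total_gb = gb_per_file * n_files                     # 3129 GB, ie ~3.1 TB
total_events = events_per_file * n_files             # ~17 M events
mb_per_event = gb_per_file * 1000 / events_per_file  # ~0.18 MB per event
print(total_gb, total_events, round(mb_per_event, 2))
```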
Description of H-to-ZZ rootuples at:
HZZ4lnTupleDescription
Workbook on Higgs at:
PAT Example Higgs
7 Jul 2014
CMS new AOD files - WorkBookMiniAOD
Intro talks
HZZ Data Skim
14 Mar 2013
Looking at Higgs combination code
- file ~/ANALYSIS/CMSSW_6_1_1/src/HiggsAnalysis/CombinedLimit/src/Combine.cc
Structure:
Combine::Combine() :
statOptions_("Common statistics options"),
ioOptions_("Common input-output options"),
miscOptions_("Common miscellaneous options"),
rMin_(std::numeric_limits<float>::quiet_NaN()),
rMax_(std::numeric_limits<float>::quiet_NaN())
{...
namespace po = boost::program_options;
statOptions_.add_options() then many (...) items, no commas...;
ioOptions_.add_options() then many (...) items, no commas...;
miscOptions_.add_options() then some (...) items, no commas...;
....}
void Combine::applyOptions(const boost::program_options::variables_map &vm) {.......}
bool Combine::mklimit(RooWorkspace *w, RooStats::ModelConfig *mc_s,
RooStats::ModelConfig *mc_b, RooAbsData &data, double &limit, double &
limitErr) {.........}
namespace {
struct ToCleanUp {.....} }
void Combine::run(TString hlfFile, const std::string &dataset,
double &limit, double &limitErr, int &iToy, TTree *tree, int nToys) {
...........RooWorkspace *w = 0; RooStats::ModelConfig......
}
void Combine::commitPoint(bool expected, float quantile) {...}
void Combine::addBranch(const char *name, void *address, const char *leaflist) {
tree_->Branch(name,address,leaflist);
}
13 Mar 2013
Looking at Higgs combination code at Limit
Trouble with tutorial
- Had old CVS pointer. $CVSROOT gave :gserver:cmscvs.cern.ch:/cvs/CMSSW - only needed outside CERN
- Now have $CVSROOT :gserver:cmssw.cvs.cern.ch:/local/reps/CMSSW
- Had trouble loading addpkg HiggsAnalysis/CombinedLimit V02-07-03
- Requests using development version. This works, via:
- cvs co HiggsAnalysis/CombinedLimit
- Need 'rehash' to remake links
- 'combine --help' then works
- downloaded realistic-counting-experiment.txt
- combine -M Asymptotic realistic-counting-experiment.txt worked fine !!!
1 Mar 2013
Higgs -> gg, ARC answers
28 Feb 2013
Back to HV scans
Error:
28-Feb-2013 08:08:18 CET Initiating request to open file file:/afs/cern.ch/work/d/davec/HVscans2012/209197.root
28-Feb-2013 08:08:18 CET Successfully opened file file:/afs/cern.ch/work/d/davec/HVscans2012/209197.root
Begin processing the 1st record. Run 209197, Event 1, LumiSection 1 at 28-Feb-2013 08:08:19.208 CET
----- Begin Fatal Exception 28-Feb-2013 08:08:19 CET-----------------------
An exception of category 'ProductNotFound' occurred while
[0] Processing run: 209197 lumi: 1 event: 1
[1] Running path 'p'
[2] Calling event method for module EcalRawToDigi/'ecalEBunpacker'
Exception Message:
Principal::getByLabel: Found zero products matching all criteria
Looking for type: FEDRawDataCollection
Looking for module label: source
Looking for productInstanceName:
Additional Info:
[a] If you wish to continue processing events after a ProductNotFound exception,
add "SkipEvent = cms.untracked.vstring('ProductNotFound')" to the "options" PSet in the configuration.
----- End Fatal Exception -------------------------------------------------
28-Feb-2013 08:08:19 CET Closed file file:/afs/cern.ch/work/d/davec/HVscans2012/209197.root
=============================================
MessageLogger Summary
type category sev module subroutine count total
---- -------------------- -- ---------------- ---------------- ----- -----
1 Fatal Exception -s PostProcessPath 1 1
2 fileAction -s file_close 1 1
3 fileAction -s file_open 2 2
type category Examples: run/evt run/evt run/evt
---- -------------------- ---------------- ---------------- ----------------
1 Fatal Exception 209197/1
2 fileAction PostEndRun
3 fileAction pre-events pre-events
Severity # Occurrences Total Occurrences
-------- ------------- -----------------
System 4 4
Needed to add
- process.ecalEBunpacker.InputLabel = cms.InputTag("rawDataCollector")
- EcalUnpackerData_cfi.py has the InputLabel set to the default 'source', correct for old data, but now need "rawDataCollector".
Root files from local daq in Dec 2012 have
Type Module Label Process
------------------------------------------------------------------
edm::TriggerResults "TriggerResults" "" "FU"
FEDRawDataCollection "rawDataCollector" "" "LHC"
Old hvscan root files, in Jan 2011, had
Type Module Label Process
----------------------------------------------------------------
FEDRawDataCollection "source" "" "FU"
edm::TriggerResults "TriggerResults" "" "FU"
23 Feb 2013
HiggsAnalysisAtATLASUsingRooStats | HiggsCombinationMoriond2013 |
6 Feb 2013
Datacards, HiggsCombinationMoriond2013 | HiggsCombinationConventions for datacards |
5 Feb 2013
EgammaCutBasedIdentification | Isolation and Corrections to Isolation |
19 Jan 2013
New year resolution list of sites to explore!
Jan 2013 analysis school
Collection of electron/photon ID sites, data card formats
Text files from databases
Data card formats
Zprime analysis
Higgs
7 Dec 2012
Trying to get to bottom of bug when doing scramv1 b
- "A fatal system signal has occurred: segmentation violation"
- trying to do prim->Fill(nsel); where prim had not been defined as a pointer, even though no complaints from scramv1 b
- trying to do prim->Fill(nsel) where prim had not been declared as a histogramme
- incorrect format for x-axis, y-axis, z-axis in histo declaration. Each should be separated by a semi-colon, within the overall double quotes !
22 Nov 2012
Trying to run CRAB with different datasets. Message from CRAB:
- Your datasetpath has a invalid format /MinimumBias/Run2012C-PromptReco-v2/RECO/PromptReco-v2
- Expected a path in format /PRIMARY/PROCESSED/TIER
21 Nov 2012
Problems with CRAB analysis may be due to CMSSW version. On DAS, the dataset appears to be have been made with CMSSW_3_5_3.
20 Nov 2012
Repeat exercise of copying events with CRAB.
- setup crab environment
- crab -create -cfg pickevents_crab.config, gives text like:
crab: Version 2.8.3 running on Wed Nov 21 12:07:25 2012 CET (11:07:25 UTC)
crab. Working options:
scheduler glite
job type CMSSW
server ON (use_server)
working directory /afs/cern.ch/user/d/davec/CRAB/TUTORIAL/CMSSW_5_2_5/src/crab_0_121121_120720/
Enter GRID pass phrase:
Your identity: /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=davec/CN=382975/CN=Dave Cockerill
Creating temporary proxy .................................................................. Done
Contacting lcg-voms.cern.ch:15002 [/DC=ch/DC=cern/OU=computers/CN=lcg-voms.cern.ch] "cms" Done
Creating proxy .......................................................... Done
Your proxy is valid until Thu Nov 29 12:07:48 2012
crab: Contacting Data Discovery Services ...
crab: Accessing DBS at: http://cmsdbsprod.cern.ch/cms_dbs_prod_global/servlet/DBSServlet
crab: Requested (A)DS /RelValProdTTbar/JobRobot-MC_3XY_V24_JobRobot-v1/GEN-SIM-DIGI-RECO has 1 block(s).
crab: 1 jobs created to run on 1 lumis
crab: Creating 1 jobs, please wait...
crab: Total of 1 jobs created.
Log file is /afs/cern.ch/user/d/davec/CRAB/TUTORIAL/CMSSW_5_2_5/src/crab_0_121121_120720/log/crab.log
- crab -status
- crab -submit
Gives output like:
crab: Version 2.8.3 running on Wed Nov 21 12:10:25 2012 CET (11:10:25 UTC)
crab. Working options:
scheduler glite
job type CMSSW
server ON (default)
working directory /afs/cern.ch/user/d/davec/CRAB/TUTORIAL/CMSSW_5_2_5/src/crab_0_121121_120720/
crab: Registering credential to the server : t2-cms-cs0.desy.de
crab: Credential successfully delegated to the server.
crab: Starting sending the project to the storage t2-cms-cs1.desy.de...
crab: Task crab_0_121121_120720 successfully submitted to server t2-cms-cs0.desy.de
crab: Total of 1 jobs submitted
Log file is /afs/cern.ch/user/d/davec/CRAB/TUTORIAL/CMSSW_5_2_5/src/crab_0_121121_120720/log/crab.log
where pickevents_crab.config calls pickevents_runEvents.txt and pickevents.json
pickevents.json:
{"195774": [[179, 179]]}
pickevents_runEvents.txt:
195774:334180725
pickevents_crab.config:
# CRAB documentation:
# https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrab
#
# Once you are happy with this file, please run
# crab -create -cfg pickevents_crab.config
# crab -submit -cfg pickevents_crab.config
[CMSSW]
pycfg_params = eventsToProcess_load=pickevents_runEvents.txt outputFile=pickevents.root
lumi_mask = pickevents.json
total_number_of_lumis = -1
lumis_per_job = 1
pset = /afs/cern.ch/cms/slc5_amd64_gcc462/cms/cmssw-patch/CMSSW_5_3_3_patch2/src/PhysicsTools/Utilities/configuration/copyPickMerge_cfg.py
datasetpath = /DoublePhoton/Run2012B-13Jul2012-v1/AOD
output_file = pickevents.root
[USER]
return_data = 1
email = davec@cern.ch
# if you want to copy the data or put it in a storage element, do it
# here.
[CRAB]
# use "glite" in general; you can "condor" if you run on CAF at FNAL or USG
# site AND you know the files are available locally
scheduler = glite
jobtype = cmssw
use_server = 1
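The two helper files above can be generated for any (run, lumi, event) triple - a hypothetical Python helper (the function and default file names are my own, not part of CRAB or edmPickEvents.py):

```python
import json

def write_pickevents(run, lumi, event,
                     events_path="pickevents_runEvents.txt",
                     json_path="pickevents.json"):
    """Write the run:event list and the {run: [[lumi, lumi]]} lumi mask
    in the formats shown above (sketch; file names assumed)."""
    with open(events_path, "w") as f:
        f.write(f"{run}:{event}\n")
    with open(json_path, "w") as f:
        json.dump({str(run): [[lumi, lumi]]}, f)

# The event used in this entry:
write_pickevents(195774, 179, 334180725)
```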
19 Nov 2012
Success with CRAB tutorial. Was getting "erCode": "60307", when trying to copy to Legnaro - the site suggested in the tutorial.
Changed to set copy to zero, copy_data = 0, and return_data = 1 to send output to my local area.
crab -status
crab: Version 2.8.3 running on Mon Nov 19 14:01:23 2012 CET (13:01:23 UTC)
crab. Working options:
scheduler glite
job type CMSSW
server ON (default)
working directory /afs/cern.ch/user/d/davec/CRAB/TUTORIAL/CMSSW_5_2_5/src/crab_0_121119_102119/
crab: Server status decoding problem. Try again in 1 seconds
crab: Server status decoding problem. Try again in 2 seconds
crab: Server status decoding problem. Try again in 4 seconds
crab:
ID END STATUS ACTION ExeExitCode JobExitCode E_HOST
----- --- ----------------- ------------ ---------- ----------- ---------
1 Y Done Terminated 0 0 lcgce03.phy.bris.ac.uk
2 Y Done Terminated 0 0 grcreamce01.inr.troitsk.ru
3 Y Done Terminated 0 0 ce203.cern.ch
4 Y Done Terminated 0 0 lcgce03.phy.bris.ac.uk
5 Y Done Terminated 0 0 grid107.kfki.hu
crab: ExitCodes Summary
>>>>>>>>> 5 Jobs with Wrapper Exit Code : 0
List of jobs: 1-5
See https://twiki.cern.ch/twiki/bin/view/CMS/JobExitCodes for Exit Code meaning
crab: 5 Total Jobs
crab: You can also follow the status of this task on :
CMS Dashboard: http://dashb-cms-job-task.cern.ch/taskmon.html#task=davec_crab_0_121119_102119_s9p16j
Server page: http://vocms83.cern.ch:8888/logginfo
Your task name is: davec_crab_0_121119_102119_s9p16j
Log file is /afs/cern.ch/user/d/davec/CRAB/TUTORIAL/CMSSW_5_2_5/src/crab_0_121119_102119/log/crab.log
However, beforehand, 2 crashes while running crab job:
- printouts OK from RecoAna.cc but:
- crashed when trying to find non-existent ES collection (removed ESprint from RecoAna.cc as temp fix)
- crashed in gsfElectron, complaining of float versus double incompatibility for a MATH function (removed gsfElectron from RecoAna.cc as temp fix)
Fatal Root Error: @SUB=TStreamerInfo::BuildOld
Cannot convert reco::GsfElectron::TrackExtrapolations::positionAtVtx
from
type:ROOT::Math::PositionVector3D<ROOT::Math::Cartesian3D<double>,ROOT::Math::DefaultCoordinateSystemTag>
to
type:ROOT::Math::PositionVector3D<ROOT::Math::Cartesian3D<float>,ROOT::Math::DefaultCoordinateSystemTag>, skip element
----- End Fatal Exception -------------------------------------------------
Link back to original work with Sam for Z peak etc
17 Nov 2012
Success - found out why EcalClusterTools::matrixEnergy was not working.
Before calling, the code set const reco::BasicCluster *cluster = 0;
If clusters are not used, this pointer stays NULL, and dereferencing it inside matrixEnergy (which in 53 calls cluster.hitsAndFractions()) causes the crash.
email to Stefano on 17.11.2012:
>
I think I have tracked down the problem we have in matrixEnergy.
>
> In 53 some new lines, the first of which is
>
> std::vector< std::pair<DetId, float> > v_id =
> cluster.hitsAndFractions();
>
> followed by new lines:
> float frac=getFraction(v_id,*cursor);
> energy += recHitEnergy( *cursor, recHits )*frac;
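The 53 behaviour described in the email can be sketched in plain Python: each rechit energy is weighted by the crystal's fraction in the cluster, with fraction 0 for crystals not in hitsAndFractions() (names illustrative, not the CMSSW API):

```python
# Sketch of the fraction-weighted matrix energy sum added in 5_3
# (Python analogue of EcalClusterTools::matrixEnergy; names illustrative).

def get_fraction(hits_and_fractions, det_id):
    """Return the cluster's fraction for det_id, or 0.0 if the crystal
    is not part of the cluster (mirrors getFraction in the 53 code)."""
    for hit_id, frac in hits_and_fractions:
        if hit_id == det_id:
            return frac
    return 0.0

def matrix_energy(hits_and_fractions, rechit_energy, window_ids):
    """Fraction-weighted energy sum over the crystals in the window."""
    return sum(rechit_energy.get(det_id, 0.0) *
               get_fraction(hits_and_fractions, det_id)
               for det_id in window_ids)

# Toy example: three crystals, one shared 50/50 with another cluster.
haf = [(1, 1.0), (2, 0.5), (3, 1.0)]
energies = {1: 10.0, 2: 4.0, 3: 1.0}
print(matrix_energy(haf, energies, [1, 2, 3]))  # 10 + 2 + 1 = 13.0
```

This also makes the NULL-cluster crash plausible: in 53 the cluster is dereferenced to get the fractions, where in 52 it was not used at all.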
16 Nov 2012
Try to find why matrixEnergy does not work in 53 but works in 52. matrixEnergy is in EcalClusterTools::matrixEnergy (and can also be called from /RecoEcal/EgammaCoreTools/src/EcalClusterLazyTools.cc).
http://cmslxr.fnal.gov/lxr/source/RecoEcal/EgammaCoreTools/src/EcalClusterTools.cc?v=CMSSW_5_2_3
float EcalClusterTools::matrixEnergy( const reco::BasicCluster &cluster,
const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax )
http://cmslxr.fnal.gov/lxr/source/RecoEcal/EgammaCoreTools/src/EcalClusterTools.cc?v=CMSSW_5_3_2
float EcalClusterTools::matrixEnergy( const reco::BasicCluster &cluster,
const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax )
******AND THIS definition which is not in 5_2_3**********
float EcalClusterTools::matrixEnergy( const reco::BasicCluster &cluster,
const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax,
std::vector<int> flagsexcl, std::vector<int> severitiesexcl, const EcalSeverityLevelAlgo *sevLv )
http://cmslxr.fnal.gov/lxr/source/RecoEcal/EgammaCoreTools/src/EcalClusterTools.cc?v=CMSSW_5_2_3
072 {
073 // fast version
074 CaloNavigator<DetId> cursor = CaloNavigator<DetId>( id, topology->getSubdetectorTopology( id ) );
075 float energy = 0;
076 for ( int i = ixMin; i <= ixMax; ++i ) {
077 for ( int j = iyMin; j <= iyMax; ++j ) {
078 cursor.home();
079 cursor.offsetBy( i, j );
080 energy += recHitEnergy( *cursor, recHits );
081 }
082 }
083 // slow elegant version
084 //float energy = 0;
085 //std::vector<DetId> v_id = matrixDetId( topology, id, ixMin, ixMax, iyMin, iyMax );
086 //for ( std::vector<DetId>::const_iterator it = v_id.begin(); it != v_id.end(); ++it ) {
087 // energy += recHitEnergy( *it, recHits );
088 //}
089 return energy;
090 }
http://cmslxr.fnal.gov/lxr/source/RecoEcal/EgammaCoreTools/src/EcalClusterTools.cc?v=CMSSW_5_3_2
float EcalClusterTools::matrixEnergy( const reco::BasicCluster &cluster,
const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax )
http://cmslxr.fnal.gov/lxr/source/RecoEcal/EgammaCoreTools/src/EcalClusterTools.cc?v=CMSSW_5_3_2
132 {
133 //take into account fractions
134 // fast version
135 CaloNavigator<DetId> cursor = CaloNavigator<DetId>( id, topology->getSubdetectorTopology( id ) );
136 float energy = 0;
137 std::vector< std::pair<DetId, float> > v_id = cluster.hitsAndFractions();
138 for ( int i = ixMin; i <= ixMax; ++i ) {
139 for ( int j = iyMin; j <= iyMax; ++j ) {
140 cursor.home();
141 cursor.offsetBy( i, j );
142 float frac=getFraction(v_id,*cursor);
143 energy += recHitEnergy( *cursor, recHits )*frac;
144 }
145 }
146 // slow elegant version
147 //float energy = 0;
148 //std::vector<DetId> v_id = matrixDetId( topology, id, ixMin, ixMax, iyMin, iyMax );
149 //for ( std::vector<DetId>::const_iterator it = v_id.begin(); it != v_id.end(); ++it ) {
150 // energy += recHitEnergy( *it, recHits );
151 //}
152 return energy;
153 }
154
155 float EcalClusterTools::matrixEnergy( const reco::BasicCluster &cluster, const EcalRecHitCollection *recHits,
const CaloTopology* topology, DetId id, int ixMin, int ixMax, int iyMin, int iyMax, std::vector<int> flagsexcl, std::vector<int> severitiesexcl, const EcalSeverityLevelAlgo *sevLv )
156 {
157 // fast version
158 CaloNavigator<DetId> cursor = CaloNavigator<DetId>( id, topology->getSubdetectorTopology( id ) );
159 float energy = 0;
160 for ( int i = ixMin; i <= ixMax; ++i ) {
161 for ( int j = iyMin; j <= iyMax; ++j ) {
162 cursor.home();
163 cursor.offsetBy( i, j );
164 energy += recHitEnergy( *cursor, recHits, flagsexcl, severitiesexcl, sevLv );
165 }
166 }
167 return energy;
168 }
Header in 523
static float matrixEnergy( const reco::BasicCluster &cluster, const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax );
// get the DetId and the energy of the maximum energy crystal in a vector of DetId
Header in 532
static float matrixEnergy( const reco::BasicCluster &cluster, const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax );
178
179 //AA
180 //Take into account the severities and flags
static float matrixEnergy( const reco::BasicCluster &cluster, const EcalRecHitCollection *recHits,
const CaloTopology* topology,
DetId id, int ixMin, int ixMax, int iyMin, int iyMax,
std::vector<int> flagsexcl, std::vector<int> severitiesexcl, const EcalSeverityLevelAlgo *sevLv );
8 Nov 2012
Shervin's tutorial
6 Nov 2012
Looking at global tag provenance:
if (verbosity > 0)
128 edm::LogInfo ("TrackerHitProducer::produce")
129 << "Number of Provenances =" << AllProv.size();
130
131 if (printProvenanceInfo && (verbosity > 0)) {
132 TString eventout("\nProvenance info:\n");
133
134 for (unsigned int i = 0; i < AllProv.size(); ++i) {
135 eventout += "\n ******************************";
136 eventout += "\n Module : ";
137 eventout += AllProv[i]->moduleLabel();
138 eventout += "\n ProductID process index: ";
139 eventout += AllProv[i]->productID().processIndex();
140 eventout += "\n ProductID product index: ";
141 eventout += AllProv[i]->productID().productIndex();
142 eventout += "\n ClassName : ";
143 eventout += AllProv[i]->className();
144 eventout += "\n InstanceName : ";
145 eventout += AllProv[i]->productInstanceName();
146 eventout += "\n BranchName : ";
147 eventout += AllProv[i]->branchName();
148 }
149 eventout += " ******************************\n";
150 edm::LogInfo("TrackerHitProducer::produce") << eventout;
151 }
152 }
5 Nov 2012
Issue with flags for Higgs analysis. The test on kWeird was not done correctly.
- the 'bit' for kWeird should have been checked.
- the 'value' of the flag was being tested instead.
- in EE, saturating, or leading edge recovered, were adding to the 'value' of the flag
- kWeird no longer being tested --> events getting through to Higgs dataset.
- Stefano, email 5.11.2012 17:19: recoFlag() is used to check if a channel has to be excluded both in EB and EE clustering.
- Stefano: email 5.11.2012 17:19: I've modified the code to use checkFlag(), which should cure the problem.
2 Nov 2012
Day of Higgs->gg event debugging, excesses at particular eta in EE. Test of CMSSW 52 vs 53. Error with EcalClusterTools::matrixEnergy:
Get event copied through the Grid, with Crab:
News from David, email 2.11.2012, 10:52
3) In principle, a rechit with kWeird set in EE should not be considered
for seeding EE superclusters. The list of rechit flags to exclude is
defined here:
http://cmssw.cvs.cern.ch/cgi-bin/cmssw.cgi/CMSSW/RecoEcal/EgammaClusterProducers/
python/multi5x5BasicClusters_cfi.py?revision=1.11&view=markup&sortby=date
RecHitFlagToBeExcluded = cms.vstring(
'kFaultyHardware',
'kPoorCalib',
'kSaturated',
'kLeadingEdgeRecovered',
'kNeighboursRecovered',
'kTowerRecovered',
'kDead',
'kWeird',
)
4) In trying to understand how a rechit with flag kWeird still appears to
be able to create SCs in 52x and 53X, I looked at the relevant lines in
the clustering code:
http://cmssw.cvs.cern.ch/cgi-bin/cmssw.cgi/CMSSW/RecoEcal/EgammaClusterAlgos/src/
Multi5x5ClusterAlgo.cc?revision=1.15&view=markup&sortby=date&sortdir=down
// avoid seeding for anomalous channels (recoFlag based)
uint32_t rhFlag = (*it).recoFlag();
std::vector<int>::const_iterator vit = std::find(
v_chstatus_.begin(), v_chstatus_.end(), rhFlag );
if ( vit != v_chstatus_.end() ) continue; // the recHit has to be
excluded from seeding
I think the crucial point here is that this test uses the old recoFlag()
method and not the new checkFlag() method - the latter uses the flagbits.
Looking at the value of recoFlag() for this hit, I obtain recoFlag=17 -
this was also seen by Federico last night. As Federico remarked, this is
because the rechit is also gain-switched. I find that rechits stuck in
gain=1 get recoFlag=17, and rechits stuck in gain 6 get recoFlag=16
Going back to the code snippet, it seems that these hits pass through the
check since recoFlag() = kWeird (14).
I think the right way to do this is to use the
flagbits - loop over the vector of flags to exclude and check whether that
specific flagbit is set.
Stefano, can you take a look and see if this is indeed the case? If it is,
we should fix it and backport that fix to 53X for future reprocessings. We
should also check the barrel clustering to make sure that is also using
the flagbits.
For the Higgs, I think a simple sigma_ietaieta cut or eta,phi cuts around
the affected xtals would be sufficient to get rid of these
anomalous "photons".
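The difference between the old value test and the flag-bit test can be illustrated with plain bit arithmetic (kWeird = 14 and recoFlag = 17 are taken from the mail above; which other bit is set alongside kWeird is illustrative):

```python
K_WEIRD = 14  # enum value quoted in the mail above

# Old-style test: compare the flag *value* with kWeird. A gain-switched
# weird hit reports recoFlag() = 17, so this test lets it through.
reco_flag = 17
passes_old_test = (reco_flag == K_WEIRD)    # False -> hit still seeds a cluster

# New-style test: check the kWeird *bit* in flagBits_. Even if another
# (illustrative) bit is set as well, the weird hit is caught.
flag_bits = (1 << K_WEIRD) | (1 << 17)
passes_new_test = bool(flag_bits & (1 << K_WEIRD))   # True -> hit excluded
print(passes_old_test, passes_new_test)  # False True
```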
13 Sep 2012
Trouble clearing HP of large print job. Suggestions to clear spooler:
Right-click "My Computer", Manage, expand "Services and Applications", click "Services", scroll
your way down the list to "Print Spooler", click "Stop" the service. When the service returns
"Stopped", right-click the taskbar print icon and select "Update". When the Icon disappears, go
back and restart the service. In some rare instances, you need to Log off then Log on again.
Another Solution:
stop the printer spool service: Control Panel > Administrative Tools > Services; right click on the Print Spooler service and choose Stop
in Windows Explorer go here: C:\Windows\System32\spool\PRINTERS\ (note: the drive letter of your Windows folder may be different)
delete all files within the PRINTERS folder
go back to the Services window: right click on the Print Spooler service and choose Start
May need to Log Off and then Log on
5 Sep 2012
Trying to run on prompt data but no success, with CMSSW_5_3_2_patch2, to study Poter's channels.
Get
----- Begin Fatal Exception 05-Sep-2012 14:03:08 CEST-----------------------
An exception of category 'PluginNotFound' occurred while
[0] Constructing the EventProcessor
[1] Constructing ESSource: class=PoolDBESSource label='GlobalTag'
Exception Message:
Unable to find plugin 'SiPixelQualityRcd@NewProxy'. Please check spelling of name.
----- End Fatal Exception -------------------------------------------------
Usual suspects:
- Compiling/running with incorrect CMSSW version with respect to the version that created the data root file
- Using incorrect GlobalTag which doesn't have the necessary records, ie SiPixelQualityRcd
- Need to check using edmProvDump for the global tags used for the RECO step
16 Aug 2012
Preparing figures for an updated James Jackson sFGVB detector note.
- given pdf file from David - his extracted figures not working with LaTeX.
- Open with Acrobat. Go to zoom 400 on the figures.
- Use camera to save plot area to clipboard
- Go to File -> Create PDF -> From Clipboard
- Save
- Can afterwards save this pdf as different types: File -> Export -> Image -> JPEG etc
Use David's LaTeX file in Windows with TeXworks
- Click on LaTeX file, opens with TeXworks automatically
- See the LaTeX text, edit it, save
- use \usepackage{graphicx} to load up other image types, ie pdf, jpg etc
- Click on the |> icon (near top left) to create new pdf file/image of file on screen - that's all !!
15 Aug 2012
Setup Mac for Office etc. First, attach Mac to bldg 40 office ethernet line. Open Safari (Mac equiv of Internet Explorer). CERN page automatically comes up to register the device.
Particular care - got both ethernet address AND Wireless address. Needed Mac OS system, serial number etc.
Loaded Office. Route:
31 July 2012
Looking at signal significance and p-values with Puck. From Fig 5 of PAS-HIG-12-015.
- File \davec\Documents\Analysis\Statistics\stats-calc.C
- Using ROOT::Math::gaussian_cdf_c(double x,1.5,double x0=0); Upper tail from x to inf
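A plain-Python stand-in for the upper-tail call (assuming gaussian_cdf_c(x, sigma, x0) returns P(X > x) for a Gaussian, as used in stats-calc.C):

```python
import math

def gaussian_cdf_c(x, sigma=1.5, x0=0.0):
    """Upper tail P(X > x) for a Gaussian(x0, sigma); a plain-Python
    stand-in for ROOT::Math::gaussian_cdf_c used in stats-calc.C."""
    return 0.5 * math.erfc((x - x0) / (sigma * math.sqrt(2.0)))

print(gaussian_cdf_c(0.0))  # 0.5: half the distribution lies above the mean
```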
25 May 2012
Successfully copied over run 194455, lumi 258 from Castor into my 20GB work area.
nsls -l /castor/cern.ch/cms/store/data/Run2012B/SingleElectron/AOD/PromptReco-v1/000/194/455/A6AEAAC9-27A4-E111-95A6-BCAEC518FF5F.root
mrw-rw-r-- 1 cmsprod zh 3033923219 May 22 18:14
/castor/cern.ch/cms/store/data/Run2012B/SingleElectron/AOD/PromptReco-v1/000/194/455/A6AEAAC9-27A4-E111-95A6-BCAEC518FF5F.root
rfcp /castor/cern.ch/cms/store/data/Run2012B/SingleElectron/AOD/PromptReco-v1/000/194/455/A6AEAAC9-27A4-E111-95A6-BCAEC518FF5F.root /afs/cern.ch/work/d/davec/
3033923219 bytes in 219 seconds through eth0 (in) and local (out) (13528 KB/sec)
3033923219 bytes in remote file
ls -al
total 2962826
drwxr-xr-x 4 davec zh 2048 May 25 15:39 .
drwxr-xr-x 2 23563 1000 4096 May 24 16:04 ..
-rw-r--r-- 1 davec zh 3033923219 May 25 15:43 A6AEAAC9-27A4-E111-95A6-BCAEC518FF5F.root
drwxr-xr-x 2 davec zh 2048 Mar 30 18:59 private
drwxr-xr-x 2 davec zh 2048 Mar 30 18:59 public
/afs/cern.ch/work/d/davec $ quota
Volume Name Quota Used %Used Partition
work.davec 20971520 2962822 14% 14%
So, the 3033923219 from the ls command is in bytes (~2.9 GB). The quota 'Used' figure of 2962822 is in 1 KB blocks, ie also ~2.9 GB, which is 14% of 20 GB.
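A quick check of the units (assuming the AFS quota column is in 1 KB blocks, which the numbers support):

```python
file_bytes = 3033923219      # from ls / nsls
quota_used_kb = 2962822      # 'Used' column of the AFS quota command
quota_total_kb = 20971520    # the 20 GB quota

print(file_bytes / 1024**3)                  # ~2.83 GiB
print(quota_used_kb / 1024**2)               # ~2.83 GiB -> same file
print(100 * quota_used_kb / quota_total_kb)  # ~14%, as reported
```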
7 May 2012
Where flags are set
5 May 2012
The cff and cfi files outlined below used in, ie, /Validation/EcalRecHits/test/Photon_E30GeV_all_cfg.py
Looking at recHit making in /RecoLocalCalo/EcalRecProducers/plugins/EcalRecHitWorkerSimple.cc where the header file defines:
std::vector<int> v_chstatus_;
std::vector<int> v_DB_reco_flags_;
These are loaded via the python files
- /RecoLocalCalo/Configuration/python/ecalLocalRecoSequence_cff.py (see just below) for ChannelStatusToBeExcluded, and
- /RecoLocalCalo/EcalRecProducers/python/ecalRecHit_cfi.py (further down) for flagsMapDBReco
v_chstatus_ = ps.getParameter<std::vector<int> >("ChannelStatusToBeExcluded");
v_DB_reco_flags_ = ps.getParameter<std::vector<int> >("flagsMapDBReco");
where ChannelStatusToBeExcluded is declared in /RecoLocalCalo/EcalRecProducers/python/ecalRecHit_cfi.py:
- ChannelStatusToBeExcluded = cms.vint32()
Uses /RecoLocalCalo/Configuration/python/ecalLocalRecoSequence_cff.py with
from RecoLocalCalo.EcalRecProducers.ecalRecHitTPGConditions_cff import *
#ECAL reconstruction
from RecoLocalCalo.EcalRecProducers.ecalGlobalUncalibRecHit_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalRecHit_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalPreshowerRecHit_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalDetIdToBeRecovered_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalCompactTrigPrim_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalTPSkim_cfi import *
ecalLocalRecoSequence = cms.Sequence(ecalGlobalUncalibRecHit*ecalDetIdToBeRecovered*ecalRecHit*ecalCompactTrigPrim*ecalTPSkim+ecalPreshowerRecHit)
ecalLocalRecoSequence_nopreshower = cms.Sequence(ecalGlobalUncalibRecHit*ecalRecHit)
ecalRecHit.ChannelStatusToBeExcluded = [ 3, 4, 8, 9, 10, 11, 12, 13, 14 ]
Looking at Flag and bit setting in /DataFormats/EcalRecHit/interface/EcalRecHit.h
- From the enum, kNeighboursRecovered = 8
- int flag = kNeighboursRecovered;
- flagBits_|= (0x1 << flag);
- "<<" operator shifts bits leftwise by 'flag'.
- Since kNeighboursRecovered = 8, shifts by 8 bits, ie 2**8 = 256
- Sets flagBits_ to 256, ie bit 8 is set.
Thus the flagBits_ variable stores whether the kXxxxxxxxxx enums have been set true
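The flagBits_ bookkeeping can be mimicked with plain integer arithmetic (a sketch; the enum values follow the Flags enum quoted below, kGood=0 counting up):

```python
# a few of the EcalRecHit::Flags enum values (kGood=0, counting up)
kGood, kPoorReco, kOutOfTime = 0, 1, 2
kNeighboursRecovered = 8

def set_flag(flag_bits, flag):
    """flagBits_ |= (0x1 << flag): set bit number 'flag'."""
    return flag_bits | (0x1 << flag)

def check_flag(flag_bits, flag):
    """True if bit number 'flag' is set."""
    return (flag_bits >> flag) & 0x1 == 1

bits = set_flag(0, kNeighboursRecovered)  # bits == 256, ie bit 8 set
```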
Example in /RecoLocalCalo/EcalRecProducers/plugins/EcalRecHitWorkerSimple.cc#122,
where if (recoFlag<=EcalRecHit::kLeadingEdgeRecovered || !killDeadChannels_)
allows a recHit to be added to the collection if either recoFlag<=EcalRecHit::kLeadingEdgeRecovered
or !killDeadChannels_ applies.
// make the rechit and put in the output collection
if (recoFlag <= EcalRecHit::kLeadingEdgeRecovered || !killDeadChannels_) {
  EcalRecHit myrechit( rechitMaker_->makeRecHit(uncalibRH, icalconst * lasercalib, (itimeconst + offsetTime), /*recoflags_*/ 0) );
  // Barrel
  if (detid.subdetId() == EcalBarrel && (lasercalib < EBLaserMIN_ || lasercalib > EBLaserMAX_))
    myrechit.setFlag(EcalRecHit::kPoorCalib);
  // Endcap
  if (detid.subdetId() == EcalEndcap && (lasercalib < EELaserMIN_ || lasercalib > EELaserMAX_))
    myrechit.setFlag(EcalRecHit::kPoorCalib);
  result.push_back(myrechit);
}
return true;
}
Enum definitions in /DataFormats/EcalRecHit/interface/EcalRecHit.h
// recHit flags
enum Flags {
kGood=0, // channel ok, the energy and time measurement are reliable
kPoorReco, // the energy is available from the UncalibRecHit, but approximate (bad shape, large chi2)
kOutOfTime, // the energy is available from the UncalibRecHit (sync reco), but the event is out of time
kFaultyHardware, // The energy is available from the UncalibRecHit, channel is faulty at some hardware level (e.g. noisy)
kNoisy, // the channel is very noisy
kPoorCalib, // the energy is available from the UncalibRecHit, but the calibration of the channel is poor
kSaturated, // saturated channel (recovery not tried)
kLeadingEdgeRecovered, // saturated channel: energy estimated from the leading edge before saturation
kNeighboursRecovered, // saturated/isolated dead: energy estimated from neighbours
kTowerRecovered, // channel in TT with no data link, info retrieved from Trigger Primitive
kDead, // channel is dead and any recovery fails
kKilled, // MC only flag: the channel is killed in the real detector
kTPSaturated, // the channel is in a region with saturated TP
kL1SpikeFlag, // the channel is in a region with TP with sFGVB = 0
kWeird, // the signal is believed to originate from an anomalous deposit (spike)
kDiWeird, // the signal is anomalous, and neighbors another anomalous signal
kHasSwitchToGain6, // at least one data frame is in G6
kHasSwitchToGain1, // at least one data frame is in G1
//
kUnknown // to ease the interface with functions returning flags.
};
Associated python file is /RecoLocalCalo/EcalRecProducers/python/ecalRecHit_cfi.py, with bounds for bad lasercalib,
# rechit producer
ecalRecHit = cms.EDProducer("EcalRecHitProducer",
    EErechitCollection = cms.string('EcalRecHitsEE'),
    EEuncalibRecHitCollection = cms.InputTag("ecalGlobalUncalibRecHit","EcalUncalibRecHitsEE"),
    EBuncalibRecHitCollection = cms.InputTag("ecalGlobalUncalibRecHit","EcalUncalibRecHitsEB"),
    EBrechitCollection = cms.string('EcalRecHitsEB'),
    # channel flags to be excluded from reconstruction, e.g. { 1, 2 }
    ChannelStatusToBeExcluded = cms.vint32(),
    # avoid propagation of dead channels other than after recovery
    killDeadChannels = cms.bool(True),
    algo = cms.string("EcalRecHitWorkerSimple"),
    # define maximal and minimal values for the laser corrections
    EBLaserMIN = cms.double(0.5),
    EELaserMIN = cms.double(0.5),
    EBLaserMAX = cms.double(2),
    EELaserMAX = cms.double(3),
    # apply laser corrections
    laserCorrection = cms.bool(True),
    # reco flags association to DB flag
    # the vector index corresponds to the DB flag
    # the value corresponds to the reco flag
    flagsMapDBReco = cms.vint32(
        0, 0, 0, 0, # standard reco
        4,          # faulty hardware (noisy)
        -1, -1, -1, # not yet assigned
        4, 4,       # faulty hardware (fixed gain)
        7, 7, 7,    # dead channel with trigger
        8,          # dead FE
        9           # dead or recovery failed
    ),
    # for channel recovery
    algoRecover = cms.string("EcalRecHitWorkerRecover"),
    recoverEBIsolatedChannels = cms.bool(False),
    recoverEEIsolatedChannels = cms.bool(False),
    recoverEBVFE = cms.bool(False),
    recoverEEVFE = cms.bool(False),
    recoverEBFE = cms.bool(True),
    recoverEEFE = cms.bool(True),
    # db statuses for which recovery in EE/EB should not be attempted
    dbStatusToBeExcludedEE = cms.vint32(
        142
    ), # dead, LV off
    dbStatusToBeExcludedEB = cms.vint32(
        142
    ), # dead, LV off
    # --- logWarnings for saturated dead FEs
    # if the logWarningThreshold is negative the algo will not try recovery
    # (recovery in EE is not tested; a negative threshold, e.g. -1.e+9, may be needed)
    # to enable recovery without throwing logWarnings, set the logWarningThresholds very high, e.g. +1.e+9
    # ~64 GeV is the TP saturation level
    logWarningEtThreshold_EB_FE = cms.double(50), # in EB the logWarningThreshold is actually in E (GeV)
    logWarningEtThreshold_EE_FE = cms.double(50), # in EE the energy should correspond to Et (GeV), but the recovered energies are not checked for sanity
    ebDetIdToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:ebDetId"),
    eeDetIdToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:eeDetId"),
    ebFEToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:ebFE"),
    eeFEToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:eeFE"),
    singleChannelRecoveryMethod = cms.string("NeuralNetworks"),
    singleChannelRecoveryThreshold = cms.double(8),
    triggerPrimitiveDigiCollection = cms.InputTag("ecalDigis:EcalTriggerPrimitives"),
    cleaningConfig = cleaningAlgoConfig,
)
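The flagsMapDBReco vector maps a DB channel-status code (the vector index) to a reco flag (the value); a sketch of the resulting lookup:

```python
# transcription of the flagsMapDBReco vector from ecalRecHit_cfi.py
flags_map_db_reco = [0, 0, 0, 0,   # DB status 0-3: standard reco
                     4,            # 4: faulty hardware (noisy)
                     -1, -1, -1,   # 5-7: not yet assigned
                     4, 4,         # 8-9: faulty hardware (fixed gain)
                     7, 7, 7,      # 10-12: dead channel with trigger
                     8,            # 13: dead FE
                     9]            # 14: dead or recovery failed

def reco_flag_for_db_status(db_status):
    """Return the reco flag for a DB channel status, or None if unassigned (-1)."""
    flag = flags_map_db_reco[db_status]
    return None if flag == -1 else flag
```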
3 May 2012
Looking at source files; directories are listed with:
- scram list CMSSW, which gives pointers to the source directories, eg:
- CMSSW CMSSW_4_4_4 --> /afs/cern.ch/cms/sw/slc4_ia32_gcc345/cms/cmssw/CMSSW_4_4_4
Look for sevLev, ie
/afs/cern.ch/cms/slc5_amd64_gcc462/cms/cmssw/CMSSW_5_2_3/src/
RecoEgamma/EgammaIsolationAlgos/src/EgammaRecHitIsolation.cc:
sevLevel_->severityLevel(EBDetId(j->detid()), *ecalBarHits_) >= severityLevelCut_)
2 May 2012
LXR, files with name Argiro, argiro.docx
30 Apr 2012
Files for channel status, recHit severity levels and flags:
- EcalChannelStatusCode.h
- class EcalChannelStatusCode {
- public: void print(std::ostream& s) const { s << "status is: " << status_; }, uint16_t getStatusCode(), uint16_t getDecodedStatusCode()
- bool isHVon(), bool isLVon()
- and private:
/* bits 1-5 store a status code:
      0 channel ok
      1 DAC settings problem, pedestal not in the design range
      2 channel with no laser, ok elsewhere
      3 noisy
      4 very noisy
    5-7 reserved for more categories of noisy channels
      8 channel at fixed gain 6 (or 6 and 1)
      9 channel at fixed gain 1
     10 channel at fixed gain 0 (dead of type this)
     11 non responding isolated channel (dead of type other)
     12 channel and one or more neighbors not responding
        (e.g.: in a dead VFE 5x1 channel)
     13 channel in TT with no data link, TP data ok
     14 channel in TT with no data link and no TP data

   bit 6 : HV on/off
   bit 7 : LV on/off
   bit 8 : DAQ in/out
   bit 9 : TP readout on/off
   bit 10: Trigger in/out
   bit 11: Temperature ok/not ok
   bit 12: channel next to a dead channel
*/
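Decoding the packed status word per the bit layout above (a sketch; assuming the comment's bit numbering is 1-indexed, so the status code occupies the 5 lowest bits, as EcalChannelStatusCode's getStatusCode()/isHVon()/isLVon() would expose):

```python
CH_STATUS_MASK = 0x1F   # "bits 1-5": the 5 lowest bits hold the status code
HV_ON_BIT      = 5      # "bit 6" in the 1-indexed comment
LV_ON_BIT      = 6      # "bit 7"

def decode_status(word):
    """Split a packed channel-status word into (status code, HV on, LV on)."""
    return (word & CH_STATUS_MASK,
            bool((word >> HV_ON_BIT) & 0x1),
            bool((word >> LV_ON_BIT) & 0x1))

# e.g. a noisy channel (code 3) with HV and LV on:
word = 3 | (1 << HV_ON_BIT) | (1 << LV_ON_BIT)
```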
with
from RecoLocalCalo.EcalRecProducers.ecalRecHitTPGConditions_cff import *
#ECAL reconstruction
from RecoLocalCalo.EcalRecProducers.ecalGlobalUncalibRecHit_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalRecHit_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalPreshowerRecHit_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalDetIdToBeRecovered_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalCompactTrigPrim_cfi import *
from RecoLocalCalo.EcalRecProducers.ecalTPSkim_cfi import *
ecalLocalRecoSequence = cms.Sequence(ecalGlobalUncalibRecHit*ecalDetIdToBeRecovered*ecalRecHit*ecalCompactTrigPrim*ecalTPSkim+ecalPreshowerRecHit)
ecalLocalRecoSequence_nopreshower = cms.Sequence(ecalGlobalUncalibRecHit*ecalRecHit)
ecalRecHit.ChannelStatusToBeExcluded = [ 3, 4, 8, 9, 10, 11, 12, 13, 14 ]
# channel flags to be excluded from reconstruction, e.g. { 1, 2 }
ChannelStatusToBeExcluded = cms.vint32(),
# avoid propagation of dead channels other than after recovery
killDeadChannels = cms.bool(True),
algo = cms.string("EcalRecHitWorkerSimple"),
# define maximal and minimal values for the laser corrections
EBLaserMIN = cms.double(0.5),
EELaserMIN = cms.double(0.5),
EBLaserMAX = cms.double(2),
EELaserMAX = cms.double(3),
# apply laser corrections
laserCorrection = cms.bool(True),
# reco flags association to DB flag
# the vector index corresponds to the DB flag
# the value correspond to the reco flag
flagsMapDBReco = cms.vint32(
0, 0, 0, 0, # standard reco
4, # faulty hardware (noisy)
-1, -1, -1, # not yet assigned
4, 4, # faulty hardware (fixed gain)
7, 7, 7, # dead channel with trigger
8, # dead FE
9 # dead or recovery failed
),
# for channel recovery
algoRecover = cms.string("EcalRecHitWorkerRecover"),
recoverEBIsolatedChannels = cms.bool(False),
recoverEEIsolatedChannels = cms.bool(False),
recoverEBVFE = cms.bool(False),
recoverEEVFE = cms.bool(False),
recoverEBFE = cms.bool(True),
recoverEEFE = cms.bool(True),
#db statuses for which recovery in EE/EB should not be attempted
dbStatusToBeExcludedEE = cms.vint32(
142
), # dead,LV off
dbStatusToBeExcludedEB = cms.vint32(
142
), # dead,LV off
# --- logWarnings for saturated DeadFEs
# if the logWarningThreshold is negative the algo will not try recovery
# (recovery in EE is not tested; a negative threshold, e.g. -1.e+9, may be needed)
# to enable recovery without throwing logWarnings, set the logWarningThresholds very high, e.g. +1.e+9
# ~64 GeV is the TP saturation level
logWarningEtThreshold_EB_FE = cms.double(50), # in EB the logWarningThreshold is actually in E (GeV)
logWarningEtThreshold_EE_FE = cms.double(50), # in EE the energy should correspond to Et (GeV), but the recovered energies are not checked for sanity
ebDetIdToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:ebDetId"),
eeDetIdToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:eeDetId"),
ebFEToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:ebFE"),
eeFEToBeRecovered = cms.InputTag("ecalDetIdToBeRecovered:eeFE"),
singleChannelRecoveryMethod = cms.string("NeuralNetworks"),
singleChannelRecoveryThreshold = cms.double(8),
triggerPrimitiveDigiCollection = cms.InputTag("ecalDigis:EcalTriggerPrimitives"),
cleaningConfig=cleaningAlgoConfig,
- EcalSeverityLevelESProducer_cfi
- with flagMask = cms.PSet ( kGood = cms.vstring('kGood'),
- kProblematic= cms.vstring('kPoorReco','kPoorCalib','kNoisy','kSaturated'),
- kRecovered = cms.vstring('kLeadingEdgeRecovered','kTowerRecovered'),
- kTime = cms.vstring('kOutOfTime'),
- kWeird = cms.vstring('kWeird','kDiWeird'),
- kBad = cms.vstring('kFaultyHardware','kDead','kKilled') )
- and
- dbstatusMask=cms.PSet( kGood = cms.vuint32(0), kProblematic= cms.vuint32(1,2,3,4,5,6,7,8,9,10),
- kRecovered = cms.vuint32(), kTime = cms.vuint32(), kWeird = cms.vuint32(), kBad = cms.vuint32(11,12,13,14,15,16) )
- and timeThresh=cms.double(2.0)
- EleIsoDetIdCollectionProducer.cc
- with severityLevelCut_(iConfig.getParameter("severityLevelCut"))
- and const std::vector<std::string> flagnames = iConfig.getParameter<std::vector<std::string> >("recHitFlagsToBeExcluded");
- and v_chstatus_= StringToEnumValue<EcalRecHit::Flags>(flagnames);
- and ((EcalRecHit*)(&*recIt))->recoFlag()
29 Apr 2012
Looking at channel status and severity levels.
- Needed to add cfi in python file due to runtime error:
No "EcalSeverityLevelAlgoRcd" record found in the EventSetup.
Please add an ESSource or ESProducer that delivers such a record.
Added:
# ########### Channel Status and Severity Level !!! #########
process.load("RecoLocalCalo.EcalRecAlgos.EcalSeverityLevelESProducer_cfi")
All OK now - no runtime errors.
28 Apr 2012
rawId found at /DataFormats/DetId/interface/DetId.h
- uint32_t rawId() const { return id_; }
where
- DetId (Detector det, int subdet) { id_=((det&0xF)<<28)|((subdet&0x7)<<25); }
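The DetId packing can be checked with a few lines (a sketch; Ecal = 3 and EcalBarrel = 1 in the DetId numbering, quoted from memory):

```python
def make_det_id(det, subdet):
    """id_ = ((det & 0xF) << 28) | ((subdet & 0x7) << 25)"""
    return ((det & 0xF) << 28) | ((subdet & 0x7) << 25)

def det_of(raw):
    """Unpack the detector field (top 4 bits)."""
    return (raw >> 28) & 0xF

def subdet_of(raw):
    """Unpack the subdetector field (next 3 bits)."""
    return (raw >> 25) & 0x7

# Ecal = 3, EcalBarrel = 1 (assumed CMSSW numbering)
raw = make_det_id(3, 1)
```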
27 Apr 2012
Severity level for gsfElectrons
const std::vector<std::string> flagnames = iConfig.getParameter<std::vector<std::string> >("recHitFlagsToBeExcluded");
from, for example, http://cmslxr.fnal.gov/lxr/source/RecoEgamma/EgammaIsolationAlgos/python/interestingEleIsoDetIdModule_cff.py#018
recHitFlagsToBeExcluded = cms.vstring(
    'kFaultyHardware',
    'kPoorCalib',
    # ecalRecHitFlag_kSaturated,
    # ecalRecHitFlag_kLeadingEdgeRecovered,
    # ecalRecHitFlag_kNeighboursRecovered,
    'kTowerRecovered',
    'kDead'
),
v_chstatus_= StringToEnumValue<EcalRecHit::Flags>(flagnames);
where StringToEnumValue is at /CommonTools/Utils/interface/StringToEnumValue.h
and v_chstatus_ is defined at
/RecoEgamma/EgammaIsolationAlgos/plugins/EleIsoDetIdCollectionProducer.h
std::vector<int> v_chstatus_;
// take EcalRecHits
Handle<EcalRecHitCollection> recHitsH;
iEvent.getByLabel(recHitsLabel_,recHitsH);
edm::ESHandle<EcalSeverityLevelAlgo> sevlv;
iSetup.get<EcalSeverityLevelAlgoRcd>().get(sevlv);
const EcalSeverityLevelAlgo* sevLevel = sevlv.product();
CaloRecHitMetaCollectionV::const_iterator recIt;
// the recHit loop checking for severity levels
for (recIt = chosen->begin(); recIt != chosen->end(); ++recIt) { // Select RecHits
// Go to the next recHit if this recHit is below the energy threshold. 'continue' jumps to the next recHit in the collection:
if ( fabs(recIt->energy()) < energyCut_) continue; //dont fill if below E noise value
// jump to the next recHit if below the ET noise threshold
double et = recIt->energy() *
caloGeom->getPosition(recIt->detid()).perp() /
caloGeom->getPosition(recIt->detid()).mag();
if ( fabs(et) < etCut_) continue; //dont fill if below ET noise value
//make sure we have a barrel rechit
//call the severity level method
//passing the EBDetId
//the rechit collection in order to calculate the swiss crss
//and the EcalChannelRecHitRcd
//only consider rechits with ET >
//the SpikeId method (currently kE1OverE9 or kSwissCross)
//cut value for above
//then if the severity level is too high, we continue to the next rechit
if(recHitsLabel_.instance() == "EcalRecHitsEB" &&
sevLevel->severityLevel(EBDetId(recIt->detid()), *recHitsH)
>= severityLevelCut_) continue;
// Function severityLevel requires the detid and the collection
// EcalSeverityLevel::SeverityLevel severityLevel(
// const DetId& id,
// const EcalRecHitCollection& rhs) const;
//
// *chStatus,
// severityRecHitThreshold_,
// spId_,
// spIdThreshold_
// ) >= severityLevelCut_) continue;
//Check based on flags to protect from recovered channels from non-read towers
//Assumption is that v_chstatus_ is empty unless doFlagChecks() has been called
std::vector<int>::const_iterator vit
= std::find(
v_chstatus_.begin(),
v_chstatus_.end(),
( (EcalRecHit*)(&*recIt) )->recoFlag() );
if ( vit != v_chstatus_.end() ) continue; // the recHit has to be excluded from the iso sum
// compares the vector of flags, v_chstatus_, with the flags of the recHit in question
// EcalRecHit from DataFormats/EcalRecHit/interface/EcalRecHit.h as follows:
// class EcalRecHit : public CaloRecHit {
// public:
// typedef DetId key_type;
// recHit flags
// enum Flags {
// kGood=0, // channel ok, the energy and time measurement are reliable
// (EcalRecHit*)(&*recIt) is then deconstructed as follows:
// *recIt dereferences the const_iterator, giving the recHit object;
// &*recIt is therefore the address of that object;
// the cast (EcalRecHit*) converts it to an EcalRecHit pointer so that recoFlag() can be called.
// Find, in /Validation/EcalRecHits/src/EcalRecHitsValidation.cc#449:
int flag = myRecHit->recoFlag();
// where
// 2) loop over RecHits
// for (EcalUncalibratedRecHitCollection::const_iterator uncalibRecHit = EBUncalibRecHit->begin();
// uncalibRecHit != EBUncalibRecHit->end() ;
// ++uncalibRecHit) {
..
// EBDetId EBid = EBDetId(uncalibRecHit->id()); // gets the EBDetid EBid
..
// Find corresponding recHit
// EcalRecHitCollection::const_iterator myRecHit = EBRecHit->find(EBid);
..
// if( myRecHit == EBRecHit->end() ) continue;
// ebRecMap[EBid.rawId()] += myRecHit->energy();
// So the const_iterator myRecHit is a pointer to the recHit and myRecHit->energy(); gives the energy
26 Apr 2012
ECAL Channel status page at https://twiki.cern.ch/twiki/bin/view/CMS/EcalChannelStatus
- Anything marked >= 8 is dead, no recHits made
- Anything >=3 problematic, no recHits are made
- Level 3, noisy if > 2 ADC in EB, > 3 ADC in EE, no recHit made
- 1 and 2, problematic but keep them, recHits ARE made
ECAL Severity Level Producer:
Mapping both Flags and Channel status to give a resultant severity level:
ecalSeverityLevel = cms.ESProducer("EcalSeverityLevelESProducer",
# map EcalRecHit::Flag into EcalSeverityLevel
flagMask = cms.PSet (
kGood = cms.vstring('kGood'),
kProblematic= cms.vstring('kPoorReco','kPoorCalib','kNoisy','kSaturated'),
kRecovered = cms.vstring('kLeadingEdgeRecovered','kTowerRecovered'),
kTime = cms.vstring('kOutOfTime'),
kWeird = cms.vstring('kWeird','kDiWeird'),
kBad = cms.vstring('kFaultyHardware','kDead','kKilled')
),
# map ChannelStatus flags into EcalSeverityLevel
dbstatusMask=cms.PSet(
kGood = cms.vuint32(0),
kProblematic= cms.vuint32(1,2,3,4,5,6,7,8,9,10),
kRecovered = cms.vuint32(),
kTime = cms.vuint32(),
kWeird = cms.vuint32(),
kBad = cms.vuint32(11,12,13,14,15,16)
),
#return kTime only if the rechit is above this threshold
timeThresh=cms.double(2.0),
)
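The two masks above give a severity level as a function of either the recHit flag or the DB status; a sketch of the resulting flag lookup:

```python
# transcription of the flagMask PSet from EcalSeverityLevelESProducer_cfi
flag_mask = {
    'kGood':        ['kGood'],
    'kProblematic': ['kPoorReco', 'kPoorCalib', 'kNoisy', 'kSaturated'],
    'kRecovered':   ['kLeadingEdgeRecovered', 'kTowerRecovered'],
    'kTime':        ['kOutOfTime'],
    'kWeird':       ['kWeird', 'kDiWeird'],
    'kBad':         ['kFaultyHardware', 'kDead', 'kKilled'],
}

def severity_of_flag(flag_name):
    """Severity category for a recHit flag name; defaults to kGood if unlisted."""
    for severity, flags in flag_mask.items():
        if flag_name in flags:
            return severity
    return 'kGood'
```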
Looking at ECAL severity levels, defined in
- CMSSW/DataFormats/EcalRecHit/interface/EcalSeverityLevel.h
namespace EcalSeverityLevel {
enum SeverityLevel {
kGood=0, // good channel
kProblematic, // problematic (e.g. noisy)
kRecovered, // recovered (e.g. an originally dead or saturated)
kTime, // the channel is out of time (e.g. spike)
kWeird, // weird (e.g. spike)
kBad // bad, not suitable to be used for reconstruction
  };
}
20 Apr 2012
Problems trying to compile CMSSW_5_2_3.
- problem with header <time>
Reason from Giulio Eulisse: AFAIK, <time> was just a gcc extension that got deprecated and removed in gcc 4.6.2. You need to use <ctime> (<time.h> is actually C, not c++).
Message - check gcc area slc5_amd64_gcc462 or current in use, for existence/absence of header files.
Problem with 'extend' to include a python PSET in EDAnalyzer. Now do
- import python_cfi as spikery
- .....EDAnalyzer(...., spikery, ....) works !
9 Mar 2012
Checking weights method:
if (gainId != gainId0) iGainSwitch = 1; // have switched gain
if (!iGainSwitch)
  frame(iSample) = double(dataFrame.sample(iSample).adc()); // get adc for this sample, gain = 12
else
  frame(iSample) = double(((double)(dataFrame.sample(iSample).adc()) - pedestals[gainId-1]) * gainRatios[gainId-1]);
// this is for a gain switch: subtract the gain-12 pedestal from the adc and multiply by the gain ratio. => all set up correctly, since
// the weights for gain 6 and gain 1 are all zero except for the 6th sample, which = 1.0
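The per-sample treatment can be written out as (a simplified sketch of the logic above; pedestals and gainRatios are indexed by gainId-1):

```python
def frame_sample(adc, gain_id, gain_id0, pedestals, gain_ratios):
    """ADC sample -> amplitude-frame entry, with pedestal subtraction on a gain switch."""
    if gain_id == gain_id0:
        return float(adc)  # no gain switch: use the raw adc (gain 12)
    # gain switch: subtract this gain's pedestal and scale by the gain ratio
    return (float(adc) - pedestals[gain_id - 1]) * gain_ratios[gain_id - 1]
```

(Hypothetical pedestal/ratio values below are for illustration only.)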
Looking at why the kDiWeird flag was set for the Sakuma bad event, Jan 2012. kDiWeird found at:
with
From the python file, # for kOutOfTime flag
EBtimeConstantTerm= cms.double(.6),
EBtimeNconst = cms.double(28.5),
EEtimeConstantTerm= cms.double(.6),
EEtimeNconst = cms.double(31.8),
amplitudeThresholdEB = cms.double(10),
amplitudeThresholdEE = cms.double(10),
outOfTimeThreshP = outOfTimeThreshG61pEB_; // outOfTimeThresholdGain12pEB = cms.double(5), # times estimated precision
outOfTimeThreshM = outOfTimeThreshG61mEB_;
float cterm = EEtimeConstantTerm_;
float sigmaped = pedRMSVec[0]; // approx for lower gains
float nterm = EEtimeNconst_*sigmaped/uncalibRecHit.amplitude();
float sigmat = std::sqrt( nterm*nterm + cterm*cterm );
if ( ( correctedTime > sigmat*outOfTimeThreshP ) ||
     ( correctedTime < (-1.*sigmat*outOfTimeThreshM) ) )
  { uncalibRecHit.setFlagBit( EcalUncalibratedRecHit::kOutOfTime );
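Putting the quoted pieces together, the out-of-time decision is (a sketch with the EE python defaults above: N = 31.8, constant term 0.6, thresholds of 5 times the estimated precision):

```python
import math

def is_out_of_time(corrected_time, amplitude, ped_rms,
                   nconst=31.8, cterm=0.6, thresh_p=5.0, thresh_m=5.0):
    """Flag kOutOfTime if |t| exceeds threshold * expected time resolution."""
    nterm = nconst * ped_rms / amplitude          # noise term shrinks with amplitude
    sigmat = math.sqrt(nterm*nterm + cterm*cterm) # resolution: noise (+) constant in quadrature
    return (corrected_time > sigmat * thresh_p or
            corrected_time < -sigmat * thresh_m)
```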
and files
import FWCore.ParameterSet.Config as cms
cleaningAlgoConfig = cms.PSet(
    # apply cleaning in EB above this threshold in GeV
    cThreshold_barrel=cms.double(4),          # 4 GeV
    # apply cleaning in EE above this threshold in GeV
    cThreshold_endcap=cms.double(15),         # 15 GeV
    # mark spike in EB if e4e1 < e4e1_a_barrel_ * log10(energy) + e4e1_b_barrel_
    e4e1_a_barrel=cms.double(0.04),           # equiv to old swiss cross > .96
    e4e1_b_barrel=cms.double(-0.024),
    # ditto for EE
    e4e1_a_endcap=cms.double(0.02),           # equiv to old swiss cross > .98
    e4e1_b_endcap=cms.double(-0.0125),
    # when calculating e4/e1, ignore hits below this threshold
    e4e1Threshold_barrel= cms.double(0.080),  # 80 MeV in EB
    e4e1Threshold_endcap= cms.double(0.300),  # 300 MeV in EE
    # near cracks raise the energy threshold by this factor
    tightenCrack_e1_single=cms.double(2),
    # near cracks, divide the e4e1 threshold by this factor
    tightenCrack_e4e1_single=cms.double(3),
    # same as above for double spike
    tightenCrack_e1_double=cms.double(2),
    tightenCrack_e6e2_double=cms.double(3),
    # consider for double spikes if above this threshold
    cThreshold_double =cms.double(10),        # 10 GeV
    # mark double spike if e6e2 < e6e2thresh
    e6e2thresh=cms.double(0.04),
    # ignore rechits flagged kOutOfTime above this energy threshold in EB
    ignoreOutOfTimeThresh=cms.double(1e9)
)
.............
// from http://cmslxr.fnal.gov/lxr/source/RecoLocalCalo/EcalRecAlgos/src/EcalCleaningAlgo.cc#048
/* Flag spikey channels.

   Mark single spikes. Spike definition:

   Barrel:  e > cThreshold_barrel_ &&
            e4e1 > e4e1_a_barrel_ * log10(e) + e4e1_b_barrel_

   Near cracks: the energy threshold is multiplied by tightenCrack_e1_single;
                the e4e1 threshold is divided by tightenCrack_e4e1_single

   Endcap:  e > cThreshold_endcap_ &&
            e4e1 > e4e1_a_endcap_ * log10(e) + e4e1_b_endcap_

   Mark double spikes (barrel only):
            e > cThreshold_double_ &&
            e6e2 > e6e2thresh_

   Near cracks: the energy threshold is multiplied by tightenCrack_e1_double;
                the e6e2 threshold is divided by tightenCrack_e6e2_double

   Out-of-time hits above e4e1_IgnoreOutOfTimeThresh_ are
   ignored in topological quantities.
*/
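The single-spike condition, sketched with the EB defaults from cleaningAlgoConfig (a = 0.04, b = -0.024, threshold 4 GeV). Note the cfi comment ("mark spike in EB if e4e1 < a*log10(energy) + b") and the doxygen quoted above disagree on the inequality direction; physically a spike has little surrounding energy, ie LOW e4e1 (old swiss cross = 1 - e4/e1 > 0.96), so the sketch follows the cfi comment and uses '<':

```python
import math

def is_single_spike_eb(energy, e4e1, c_threshold=4.0, a=0.04, b=-0.024):
    """EB single-spike tag: an energetic hit with too little surrounding energy.

    Flags a spike when e > threshold and e4e1 falls below the
    energy-dependent cut a*log10(e) + b ('equiv to old swiss cross > .96').
    """
    if energy <= c_threshold:
        return False
    return e4e1 < a * math.log10(energy) + b
```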
8 Mar 2012
Successfully split off printPhoton.cc from the rest of the code. Cannot put global variable definitions into RecoAna.h: the compiler doesn't like them being multiply defined each time RecoAna.h is included in a file. printPhoton.cc has:
#include "RecoAna.h"
void
RecoAnalyzer::printPhoton(const edm::Event& iEvent, const edm::EventSetup& iSetup) {
....code...}
RecoAna.h has
// system include files
#include <memory>
#include <iostream>
#include "Riostream.h"
.......many other includes.....
#include "TH1.h"
#include "TH2.h"
#include "TF1.h"
// class declaration
class RecoAnalyzer : public edm::EDAnalyzer {
public:
explicit RecoAnalyzer(const edm::ParameterSet&);
~RecoAnalyzer();
static void fillDescriptions(edm::ConfigurationDescriptions& descriptions);
private:
// virtual
virtual void printPhoton(const edm::Event& iEvent, const edm::EventSetup& iSetup);
virtual void printswisscross(const edm::Event& iEvent, const edm::EventSetup& iSetup, EcalRecHit & hit);
virtual void printtest(const edm::Event& iEvent, const edm::EventSetup& iSetup) ;
virtual void printklevel(EcalRecHit & hit);
virtual void beginJob() ;
virtual void analyze(const edm::Event&, const edm::EventSetup&);
virtual void endJob() ;
// virtual void printtest();
virtual void beginRun(edm::Run const&, edm::EventSetup const&);
virtual void endRun(edm::Run const&, edm::EventSetup const&);
virtual void beginLuminosityBlock(edm::LuminosityBlock const&, edm::EventSetup const&);
virtual void endLuminosityBlock(edm::LuminosityBlock const&, edm::EventSetup const&);
// ----------member data ---------------------------
int thisrun;
string EERecHitCollection_;
string EBRecHitCollection_;
string EEdigiCollection_;
edm::InputTag ebRecHitsLabel_ ;
edm::InputTag eeRecHitsLabel_ ;
edm::InputTag esRecHitsLabel_ ;
// from http://cmslxr.fnal.gov/lxr/source/DPGAnalysis/SiStripTools/plugins/TrackerDpgAnalysis.cc#506
uint32_t orbit_, orbitL1_, bx_, store_, time_, unixTime_;
uint16_t lumiSegment_, physicsDeclared_;
uint32_t vertexid_, eventid_, runid_;
TH2D *h001; // All EB calib consts
TH2D *h002; // All EE- calib consts
TH2D *h003; // All EE+ calib consts
TH2D *h0010; // All EB calib consts, nents
TH2D *h0020; // All EE- calib consts, nents
TH2D *h0030; // All EE+ calib consts, nents
TH1F *h00100; // All EB calib consts dist
TH1F *h00200; // All EE- calib consts dist
TH1F *h00300; // All EE+ calib consts dist
TH1F *h00101; // EB calib consts dist, for rechits in events
TH1F *h00201; // All EE- calib consts dist, for rechits in events
TH1F *h00301; // All EE+ calib consts dist, for rechits in events
TH1F *h001001; // EB intercalib consts dist, for rechits in events
TH1F *h002001; // All EE- intercalib consts dist, for rechits in events
TH1F *h003001; // All EE+ intercalib consts dist, for rechits in events
TH1F *h0021; // EE Rec hits chi2
TH2D *h0022; // EE time vs chi2
TH2D *h0023; // EE ph vs chi2
string MyTag_;
string run;
string side;
string volts;
}; // End of RecoAna.h
RecoAna.cc has:
#include "RecoAna.h"
// Global definitions and initialisations
// constants, enums and typedefs
typedef CaloCellGeometry::CornersVec CornersVec ;
int evcount = 0;
int irec = 1;
int xmax = -999;
int xmin = 999;
......many other global variables........
double weightsum = 0.0;
// class destructor
RecoAnalyzer::~RecoAnalyzer() { .........}
RecoAnalyzer::RecoAnalyzer(const edm::ParameterSet& iConfig) {
//now do what ever initialization is needed
MyTag_ = iConfig.getUntrackedParameter<string>("MyTag");
//TString tsrun, tsside, tsvolts, tsinfo;
// tsrun = run;
// tsinfo = "Run " + run + ", " + side + ", " + volts;
edm::Service<TFileService> fs;
h001 = fs->make<TH2D>("h001", " EB lasercalib consts " , 361, -0.5, 360.5, 180, -90., 90. );
....many other histos....... etc }
// ------------ method called for each event ------------
void
RecoAnalyzer::analyze(const edm::Event& iEvent, const edm::EventSetup& iSetup)
{
using namespace edm; ........code etc...........}
5 Mar 2012
Trouble trying to use fEt. Got 'cannot call member function without object'. Explanation found on a C++ forum:
3 Mar 2012
Looking at F(eta) corrections etc, some reference files:
- RecoEgamma/EgammaPhotonAlgos/src/PhotonEnergyCorrector.cc with
- void PhotonEnergyCorrector::calculate (edm::Event& evt, reco::Photon & thePhoton, int subdet,
- const reco::VertexCollection& vtxcol, const edm::EventSetup& iSetup ) {
- // f(eta) correction to e5x5
- // add correction for cracks
- // correction for low r9
- ////////// Energy Regression //
- double PhotonEnergyCorrector::applyCrackCorrection(const reco::SuperCluster &cl,
- EcalClusterFunctionBaseClass* crackCorrectionFunction){
- RecoEgamma/EgammaElectronAlgos/interface/ElectronEnergyCorrector.h
- with class ElectronEnergyCorrector
- with function declarations fEta(), fBremEta(), fEt(), fEnergy()
- RecoEgamma/EgammaElectronAlgos/src/ElectronEnergyCorrector.cc with
- float energyError( float E, float * par )
- { return sqrt( pow(par[0]/sqrt(E),2) + pow(par[1]/E,2) + pow(par[2],2) ) ; }
- where pow(float base, float exp) returns base raised to the exp power, ie (base)**exp
- pow(par[0]/sqrt(E),2) = par[0]**2 / (E)
- float barrelEnergyError( float E, int elClass ) { // steph third sigma
-   float parEB[5][3] = {
-     { 2.46e-02, 1.97e-01, 5.23e-03 }, // golden
-     { 9.99e-07, 2.80e-01, 5.69e-03 }, // big brem
-     { 9.37e-07, 2.32e-01, 5.82e-03 }, // narrow
-     { 7.30e-02, 1.95e-01, 1.30e-02 }, // showering
-     { 9.25e-06, 2.84e-01, 8.77e-03 }  // nominal --> gap
-   };
-   return energyError(E,parEB[elClass]); }
- float endcapEnergyError( float E, int elClass ) {
- void ElectronEnergyCorrector::correctEcalEnergyError( reco::GsfElectron & electron ) {
- float ElectronEnergyCorrector::fEta ( float energy, float eta, int algorithm) const {
- float ElectronEnergyCorrector::fBremEta ( float sigmaPhiSigmaEta, float eta, int algorithm, reco::GsfElectron::Classification cl ) const {
- float xcorr[nBinsEta][reco::GsfElectron::GAP+1] = { then table of numbers, 5 across * 14 rows
- float par0[nBinsEta][reco::GsfElectron::GAP+1] = { then table of numbers, 5 across * 14 rows
- float par1[nBinsEta][reco::GsfElectron::GAP+1] = { then table of numbers, 5 across * 14 rows
- float par2[nBinsEta][reco::GsfElectron::GAP+1] = { then table of numbers, 5 across * 14 rows
- float ElectronEnergyCorrector::fEt ( float ET, int algorithm, reco::GsfElectron::Classification cl ) const {
- if (algorithm==0) //Electrons EB { float par[reco::GsfElectron::GAP+1][5] = { then table 5 x5
- else if (algorithm==1) //Electrons EE { float par[reco::GsfElectron::GAP+1][5] = { then table 5 x5
- float ElectronEnergyCorrector::fEnergy (float E, int algorithm, reco::GsfElectron::Classification cl ) const {
- if (algorithm==0) // Electrons EB { return 1. ; } else if (algorithm==1) // Electrons EE {
- float par0[reco::GsfElectron::GAP+1] = { 400, 400, 400, 400, 400 } ; plus par1, 2, 3 and 4
- double ElectronEnergyCorrector::fEtaBarrelGood ( double scEta ) const
- { // f(eta) for the first 3 classes (0, 10 and 20)
- double ElectronEnergyCorrector::fEtaBarrelBad ( double scEta) const
- { // f(eta) for the class = 30
- double ElectronEnergyCorrector::fEtaEndcapGood ( double scEta ) const
- { // f(eta) for the first 3 classes (100, 110 and 120)
- double ElectronEnergyCorrector::fEtaEndcapBad( double scEta ) const
- { // f(eta) for the class = 130-134
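The energyError parametrisation above is the usual stochastic (+) noise (+) constant quadrature sum. A sketch evaluating it with the 'golden' EB parameters from the table:

```python
import math

def energy_error(E, par):
    """sqrt( (par0/sqrt(E))^2 + (par1/E)^2 + par2^2 ): relative resolution at energy E."""
    return math.sqrt((par[0] / math.sqrt(E))**2 + (par[1] / E)**2 + par[2]**2)

# 'golden' barrel electron parameters from the parEB table
par_golden = [2.46e-02, 1.97e-01, 5.23e-03]
sigma_100 = energy_error(100.0, par_golden)  # relative resolution at E = 100 GeV
```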
- IN2011_002.pdf
- which references /CMSSW/Validation/EcalClusters/doc/EcalClusters, doc file states:
SDPV Package for Validation of Ecal Clusters.
\subsection interface Public interface
<!-- List the classes that are provided for use in other packages (if any) -->
- EgammaBasicClusters
- EgammaSuperClusters
\subsection modules Modules
<!-- Describe modules implemented in this package and their parameter set -->
- EgammaBasicClusters
- EgammaSuperClusters
\subsection tests Unit tests and examples
<!-- Describe cppunit tests and example configuration files -->
runEgammaAnalyzers.cfg
Useful utilities:
- CMSSW/Validation/EcalClusters/interface/AnglesUtil.h
// calculate phi from x, y
inline double phi(double x, double y);
// calculate phi for a line defined by xy1 and xy2 (xy2-xy1)
inline double phi(double xy1[2], double xy2[2]);
inline double phi(float xy1[2], float xy2[2]);
// calculate theta from x, y, z
inline double theta(double x, double y, double z);
// calculate theta for a line defined by xyz1 and xyz2 (xyz2-xyz1)
inline double theta(double xyz1[3], double xyz2[3]);
inline double theta(float xyz1[3], float xyz2[3]);
// calculate theta from eta
inline double theta(double etap);
// calculate eta from x, y, z (return also theta)
inline double eta(double x, double y, double z);
// calculate eta for a line defined by xyz1 and xyz2 (xyz2-xyz1)
inline double eta(double xyz1[3], double xyz2[3]);
inline double eta(float xyz1[3], float xyz2[3]);
// calculate eta from theta
inline double eta(double th);
// calculate rapidity from E, pz
inline double y(double E, double pz);
// calculate phi1-phi2 keeping value between 0 and pi
inline double delta_phi(double phi1, double phi2);
// calculate phi1-phi2 keeping value between -pi and pi
inline double signed_delta_phi(double phi1, double phi2);
// calculate eta1 - eta2
inline double delta_eta(double eta1, double eta2);
// calculate deltaR
inline double delta_R(double eta1, double phi1, double eta2, double phi2);
// calculate unit vectors given two points
inline void uvectors(double u[3], double xyz1[3], double xyz2[3]);
inline void uvectors(float u[3], float xyz1[3], float xyz2[3]);
inline double tanl_from_theta(double theta);
inline double theta_from_tanl(double tanl);
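For quick checks outside CMSSW, the two utilities I use most (delta_phi and delta_R) can be sketched in a few lines of Python (my own transcription of what the AnglesUtil.h declarations above describe, not the CMSSW source):

```python
import math

def delta_phi(phi1, phi2):
    """|phi1 - phi2| folded into [0, pi], like AnglesUtil's delta_phi."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

def eta_from_theta(theta):
    """Pseudorapidity eta = -ln(tan(theta/2))."""
    return -math.log(math.tan(theta / 2.0))

def delta_R(eta1, phi1, eta2, phi2):
    """deltaR = sqrt(deta^2 + dphi^2), like AnglesUtil's delta_R."""
    deta = eta1 - eta2
    dphi = delta_phi(phi1, phi2)
    return math.hypot(deta, dphi)
```

Handy to confirm the phi folding: two directions either side of phi = 0 are 0.2 apart, not ~6.1.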
2 Mar 2012
Looking at Photon collection. Inherits from reco::RecoCandidate which inherits from reco::LeafCandidate which has energy(), eta(), et(), mass(), momentum(), charge() and 'virtual bool isPhoton () const'.
Looking at rechits, clusters, superclusters and Photons (Paolo event 3)
Sum of rechits, energy = 562 GeV
Sum of clusters, energy = 562 GeV
ECAL SuperCluster energy
correctedHybridSuperClusters = 566.1415 GeV
Photon energy = 559.220 GeV
1 Mar 2012
Problem compiling with new printPhoton. Got the linker error below.
Forgot to put "RecoAnalyzer::" in front of printPhoton( where the print code is defined !!!!!!!!!!!!!!!!!!!!!!!!
.....
.....
>> Building shared library tmp/slc5_amd64_gcc434/src/Reco/RecoAnalyzer/src/
RecoRecoAnalyzer/libRecoRecoAnalyzer.so
@@@@ Checking shared library for missing symbols: libRecoRecoAnalyzer.so
tmp/slc5_amd64_gcc434/src/Reco/RecoAnalyzer/src/RecoRecoAnalyzer/
libRecoRecoAnalyzer.so: undefined reference to
`RecoAnalyzer::printPhoton(edm::Event const&, edm::EventSetup const&)' <<<< here it is !!!!!!!!
collect2: ld returned 1 exit status
gmake: *** [tmp/slc5_amd64_gcc434/src/Reco/RecoAnalyzer/src/RecoRecoAnalyzer/
libRecoRecoAnalyzer.so] Error 1
24 Feb 2012
Succeeded in getting the code in function printtest working.
Numerous niggles: needed to pass iEvent from RecoAnalyzer, needed 'using namespace edm' etc. for Handle to work, and needed to change the function declaration at the start of RecoAnalyzer to add in the types of the variables being passed. All collected here:
In class declaration:
private:
virtual void printtest(const edm::Event& iEvent, const edm::EventSetup& iSetup) ;
In RecoAnalyzer:
printtest(iEvent, iSetup);
At the method:
void
RecoAnalyzer::printtest(const edm::Event& iEvent, const edm::EventSetup& iSetup) {
using namespace edm;
using namespace reco;
using namespace std;
cout << "\nIn print test" << endl;
Handle<EBRecHitCollection> ebrecs;
iEvent.getByLabel("reducedEcalRecHitsEB", ebrecs);
..........}
23 Feb 2012
Succeeded in getting CMSSW working for absent collections such as iterativeCone5PFJets and iterativeCone5CaloJets with catch(...).
Problem - there were both iterativeCone collections in the code - and I had only tried to catch the exception for one of them !!!
21 Feb 2012
Paolo's 5 events - the first event was in 2011A, the rest in 2011B. 1st event did not have iterativeCone5CaloJets, the others did. The first event needed global tag GR_P_V21 whereas the 4 other events used GR_P_V22.
Now check globaltag veracity by calculating raw energy from digis to compare to recHits. Amplitude code from ~/Anomalous-pulses/Don-Wee-Nov2011.C
20 Feb 2012
Looking at Paolo's first event, exception, job bombing because iterativeCone5CaloJets collection not found
%MSG-e My Code: : RecoAnalyzer:demo 20-Feb-2012 15:47:26 CET Run: 170876 Event: 108790209
2 GsFElectrons were found!
%MSG
20-Feb-2012 15:47:26 CET Closed file file:/afs/cern.ch/user/d/davec/scratch0/pickevents_1_1_q7a.root
%MSG-s CMSException: AfterFile 20-Feb-2012 15:47:26 CET PostEndRun
cms::Exception caught in cmsRun
---- ProductNotFound BEGIN
getByLabel: Found zero products matching all criteria
Looking for type: std::vector<reco::CaloJet>
Looking for module label: iterativeCone5CaloJets
Looking for productInstanceName:
cms::Exception going through module RecoAnalyzer/demo run: 170876 lumi: 127 event: 108790209
If you wish to continue processing events after a ProductNotFound exception,
add "SkipEvent = cms.untracked.vstring('ProductNotFound')" to the "options" PSet in the configuration.
ProcessingStopped
Exception going through path p
EventProcessingStopped
an exception occurred during current event processing
cms::Exception caught in EventProcessor and rethrown
---- ProductNotFound END
Errors written from Event.h, line 438
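The exception message above spells out the fix itself; in recoanalyzer_cfg.py that would look something like this (a config fragment sketch; assumes the usual process = cms.Process(...) earlier in the cfg):

```python
# Fragment for recoanalyzer_cfg.py: keep processing when a requested
# product (e.g. iterativeCone5CaloJets) is absent from the event,
# instead of stopping the job with ProductNotFound.
import FWCore.ParameterSet.Config as cms

process.options = cms.untracked.PSet(
    SkipEvent = cms.untracked.vstring('ProductNotFound')
)
```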
18 Feb 2012
Pool source commands to choose how many runs, which events etc
No success with 4_2_7_prompt for Paolo's 5 event file enujj_LQ850_selection.root.
Get single events from castor, finding them and copying them over with:
dbsql "find file where run=170876 and lumi=127" > paolo-run.txt
dbsql "find file where run=178479 and lumi=703" > paolo-pick4.txt
and then
./single-event.txt
where single-event.txt is:
cmsRun pickEvent_cfg.py \
inputFiles=/store/data/Run2011B/MET/RECO/PromptReco-v1/000/176/928/6E357581-82E7-E011-9626-BCAEC518FF62.root \
eventsToProcess=176928:38:62381656 \
outputFile=/afs/cern.ch/user/d/davec/scratch0/Paolo3-inquiry-18Feb2012.root
17 Feb 2012
A neat Jet page! Fastjet3 info
Tracker material budget discussion
Do an edmProvDump of Paolo's file enujj_LQ850_selection.root. So far, I have been analyzing with CMSSW_4_2_3. The process history suggests CMSSW_4_2_8 should now be used instead.
Processing History:
HLT '' '"CMSSW_4_2_7_onlpatch1_ONLINE"' [1] (951f8fd23689e8c16d52084f58b83433)
RECO '' '"CMSSW_4_2_8"' [1] (1895821c14b203d3321b531d8e20bce7)
HLT '' '"CMSSW_4_2_7_onlpatch1_ONLINE"' [2] (be9319233fdb591a9f59206e0f7db975)
RECO '' '"CMSSW_4_2_8"' [1] (1895821c14b203d3321b531d8e20bce7)
HLT '' '"CMSSW_4_2_7_onlpatch2_ONLINE"' [3] (1d02be6c6bdc546160934c97e2b41578)
RECO '' '"CMSSW_4_2_8"' [1] (1895821c14b203d3321b531d8e20bce7)
HLT '' '"CMSSW_4_2_5_onlpatch1_ONLINE"' [4] (fdcd5c334710b08924a2442afc69ecd8)
RECO '' '"CMSSW_4_2_8"' [1] (1895821c14b203d3321b531d8e20bce7)
HLT '' '"CMSSW_4_2_7_onlpatch1_ONLINE"' [5] (58f5c347e5716c6c69806197c953b30e)
RECO '' '"CMSSW_4_2_8"' [1] (1895821c14b203d3321b531d8e20bce7)
HLT '' '"CMSSW_4_2_7_onlpatch1_ONLINE"' [6] (25c52f80361cd51d860573e06517c674)
RECO '' '"CMSSW_4_2_8"' [1] (1895821c14b203d3321b531d8e20bce7)
16 Feb 2012
Problems with RecoAnalyzer:
~/HEEP/CMSSW_4_2_3/src/Reco/RecoAnalyzer $ cmsRun recoanalyzer_cfg.py > eleni-out.txt
16-Feb-2012 19:38:08 CET Initiating request to open file file:/afs/cern.ch/user/d/davec/scratch0/TailEvents.root
16-Feb-2012 19:38:09 CET Successfully opened file file:/afs/cern.ch/user/d/davec/scratch0/TailEvents.root
Begin processing the 1st record. Run 176797, Event 282232912, LumiSection 180 at 16-Feb-2012 19:38:10.921 CET
16-Feb-2012 19:38:10 CET Closed file file:/afs/cern.ch/user/d/davec/scratch0/TailEvents.root
%MSG-s CMSException: AfterFile 16-Feb-2012 19:38:10 CET PostEndRun
cms::Exception caught in cmsRun
---- FileReadError BEGIN
---- FatalRootError BEGIN
Fatal Root Error: @SUB=TBufferFile::CheckByteCount
object of class edm::RefCore read too few bytes: 8 instead of 12
---- FatalRootError END
cms::Exception going through module RecoAnalyzer/demo run: 176797 lumi: 180 event: 282232912
ProcessingStopped
Exception going through path p
EventProcessingStopped
an exception occurred during current event processing
cms::Exception caught in EventProcessor and rethrown
---- FileReadError END
%MSG
=============================================
MessageLogger Summary
type category sev module subroutine count total
---- -------------------- -- ---------------- ---------------- ----- -----
1 CMSException -s AfterFile 1 1
2 fileAction -s file_close 1 1
3 fileAction -s file_open 2 2
type category Examples: run/evt run/evt run/evt
---- -------------------- ---------------- ---------------- ----------------
1 CMSException PostEndRun
2 fileAction PostEndRun
3 fileAction pre-events pre-events
Severity # Occurrences Total Occurrences
-------- ------------- -----------------
System 4 4
22 Jan 2012
Seema's Sandbox code for a lasercalib filter:
BuildFile.xml example from Dave Evans to load files from his CMSSW user area
21 Jan 2012
Looking at ECAL channel maps.
//Include the following in analyser code:
#include "CondFormats/DataRecord/interface/EcalChannelStatusRcd.h"
//[...]
edm::ESHandle<EcalChannelStatus> chanstat;
iSetup.get<EcalChannelStatusRcd>().get(chanstat);
const EcalChannelStatus* cstat=chanstat.product();
// in rechit loop
uint16_t dbStatus = retrieveDBStatus( recHit.id(), chanstat );
Use of this code is in:
Looking at maps from cluster to rechits with David's file:
17 Jan 2012
Problem with config file with new global tag - thanks Ciara
- Tag = 'GR_R_42_V21B::ALL' should not have capitalised ALL ! Should be Tag = 'GR_R_42_V21B::All' ouch !!!!!
EE long term files in laptop
- C:\Users\Dave\Documents\New-Resolutions\fit-protons-with-lyloss.C - try getting 10**14 point correct
- C:\Users\Dave\Documents\New-Resolutions\res3D-Energy100.c - now calls out to another function, rescalc(eta, lumiref, lumi)
- C:\Users\Dave\Documents\New-Resolutions\rescalc.c
// call from res3D-Energy100.c
....
rescalc(eta, lumiref, lumi);
// in New-Resolutions\rescalc.c
....defining ints and doubles.....
....then....
void rescalc(double eta, double lumiref, double lumi) {
.....code.....}
Note the filename rescalc.c matches the function name rescalc, as ROOT expects for a macro run with .x
13 Dec 2011
Split off cfi file and define Tag inside it. Check in cfg file with
process.load("recoanalyzer_cfi")
from recoanalyzer_cfi import Tag
print "Imported from recoanalyzer_cfi.py, Tag is ", Tag, "\n"
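For the record, the split is just ordinary Python module import; the cfi is a plain module (a sketch, file contents abbreviated, the file names are my own):

```python
# recoanalyzer_cfi.py -- the Tag lives here; a cfi is just a Python module
Tag = 'GR_R_42_V21A::ALL'

# recoanalyzer_cfg.py -- process.load("recoanalyzer_cfi") attaches the
# cfi contents to the process; the plain import below is what makes the
# bare name Tag visible in the cfg for printing and for GlobalTag:
#   from recoanalyzer_cfi import Tag
#   process.GlobalTag.globaltag = Tag
```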
Use python interactive to check VPSets:
python -i recoanalyzer_cfg.py
In file recoanalyzer_cfg.py
Default tag in cfg is GR_P_V22::All
In file recoanalyzer_cfi.py
In recoanalyzer_cfi.py, Tag is GR_R_42_V21A::ALL
Imported from recoanalyzer_cfi.py, Tag is GR_R_42_V21A::ALL
Tag about to be used by GlobalTag.globaltag is GR_R_42_V21A::ALL
>>> process.GlobalTag.toGet
cms.VPSet(cms.PSet(
record = cms.string('EcalADCToGeVConstantRcd'),
tag = cms.string('EcalADCToGeVConstant_v10_offline'),
connect = cms.untracked.string('frontier://FrontierProd/CMS_COND_31X_ECAL')
),
cms.PSet(
record = cms.string('EcalWeightXtalGroupsRcd',
===> ... continues listing all the remaining PSets
Identify individual elements:
>>> process.GlobalTag.toGet[0]
cms.PSet(
record = cms.string('EcalADCToGeVConstantRcd'),
tag = cms.string('EcalADCToGeVConstant_v10_offline'),
connect = cms.untracked.string('frontier://FrontierProd/CMS_COND_31X_ECAL')
)
>>> process.GlobalTag.toGet[5]
cms.PSet(
record = cms.string('EcalLaserAPDPNRatiosRefRcd'),
tag = cms.string('EcalLaserAPDPNRatiosRef_v2_prompt'),
connect = cms.untracked.string('frontier://FrontierProd/CMS_COND_43X_ECAL')
)
>>> process.GlobalTag.toGet[5][1]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'PSet' object does not support indexing
>>> process.GlobalTag.toGet[5][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'PSet' object does not support indexing
Seems to have made a list of PSet OBJECTS, indexed by position (the [5] above), ie
process.GlobalTag.toGet[5]
each with members 'record', 'tag', and 'connect'
>>> process.GlobalTag.toGet[5].record
cms.string('EcalLaserAPDPNRatiosRefRcd')
>>> process.GlobalTag.toGet[5].tag
cms.string('EcalLaserAPDPNRatiosRef_v2_prompt')
>>> process.GlobalTag.toGet[5].connect
cms.untracked.string('frontier://FrontierProd/CMS_COND_43X_ECAL')
>>>
9 Dec 2011
Two reminders of ROOT fun:
- need to .q if have edited source file, sometimes, to reset
- Made mistake, ROOT::MATH::chisquared_cdf_c SHOULD BE ROOT::Math::chisquared_cdf_c !!!!!!!!
7 Dec 2011
Test the P-value for the chi-square test which is P(>chisq), the probability of observing a value at least as extreme as the test statistic for a chi-square distribution with (r-1)(c-1) degrees of freedom.
- Example from Yale course, chisq = 18.564, degrees of freedom = 4
- P value = 0.001
- In ROOT, see ref
- double ROOT::Math::chisquared_cdf_c (double x, double r, double x0=0)
- Complement of the cumulative distribution function of the chisq distribution with r degrees of freedom (upper tail).
- root [1] ROOT::Math::chisquared_cdf_c (18.564, 4.0, 0.)
- double 9.57097580833007190e-004
This is very close to the 0.001 P value quoted by the Yale course!
The complement, ROOT::Math::chisquared_cdf(18.564, 4.0, 0.) for the lower tail is
- double 9.99042902419167040e-001
- This is 0.999 as expected, since the total probability integrates to unity: upper tail + lower tail = 1
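The same numbers can be checked without ROOT: for an even number of degrees of freedom the chi-square upper tail has a closed form (standard result; a quick sketch):

```python
import math

def chisq_sf_even_dof(x, dof):
    """Upper-tail probability P(>x) for a chi-square with EVEN dof:
    exp(-x/2) * sum_{k=0}^{dof/2 - 1} (x/2)^k / k!"""
    assert dof % 2 == 0
    h = x / 2.0
    return math.exp(-h) * sum(h ** k / math.factorial(k)
                              for k in range(dof // 2))

p_upper = chisq_sf_even_dof(18.564, 4)   # ~9.571e-4, matching ROOT above
p_lower = 1.0 - p_upper                  # ~0.99904, the lower tail
```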
5 Dec 2011
Start a new section on statistics, and how to access statistical functions from ROOT.
4 Dec 2011
Annotations/notes to EcalUncalibRecHitRecWeightsAlgo.h
3 Dec 2011
Success getting weights from core CMSSW code. Set up CVS access using Workbook section 1.4. Download source files:
*** Initial folders/files:
[lxplus412]~/HEEP/CMSSW_4_2_3/src $ ls
HEEP Raw Reco cmssw-list.txt
*** Setup and check cvs links
[lxplus412]~/HEEP/CMSSW_4_2_3/src $ cmsenv
[lxplus412] ~/HEEP/CMSSW_4_2_3/src $ echo $CVSROOT
:gserver:cmscvs.cern.ch:/cvs/CMSSW
*** Download RecoLocalCalo folders with cvs
[lxplus412] ~/HEEP/CMSSW_4_2_3/src $ cvs co -r CMSSW_4_2_3 RecoLocalCalo
*** Directory now has:
~/HEEP/CMSSW_4_2_3/src $ ls
HEEP Raw Reco RecoLocalCalo cmssw-list.txt
*** file for weights is EcalUncalibRecHitRecWeightsAlgo.h in:
[lxplus412]~/HEEP/CMSSW_4_2_3/src/RecoLocalCalo/EcalRecAlgos/interface $ ls
CVS EcalSeverityLevelAlgo.h EcalUncalibRecHitRecAbsAlgo.h
ESRecHitAnalyticAlgo.h EcalSeverityLevelAlgoRcd.h EcalUncalibRecHitRecAnalFitAlgo.h
ESRecHitFitAlgo.h EcalSeverityLevelService.h EcalUncalibRecHitRecChi2Algo.h
ESRecHitSimAlgo.h EcalUncalibRecHitFixedAlphaBetaAlgo.h EcalUncalibRecHitRecWeightsAlgo.h
EcalCleaningAlgo.h EcalUncalibRecHitLeadingEdgeAlgo.h EcalUncalibRecHitRecWeightsAlgo.h.~1.12.~
EcalRecHitAbsAlgo.h EcalUncalibRecHitMaxSampleAlgo.h
EcalRecHitSimpleAlgo.h EcalUncalibRecHitRatioMethodAlgo.h
*** compile in the CMSSW_4_2_3/src directory:
[lxplus412]~/HEEP/CMSSW_4_2_3/src $ scramv1 b
*** run job back in RawAnalyzer directory:
[lxplus412] ~/HEEP/CMSSW_4_2_3/src/Raw/RawAnalyzer $ cmsRun rawanalyzer_cfg.py > raw.txt
*** Weights found with GlobalTag GR_P_V22::All on Run number = 178160 Lumi = 494 event number = 768130023
bunch crossing = 564 orbit = 129307057 store = 0 time = 603712
Barrel weights
-0.383680 -0.383680 -0.383680 0.000000 0.195222 0.426238 0.346081 0.173547 0.009950 0.000000
Endcap weights
-0.383399 -0.383399 -0.383399 0.000000 0.194527 0.414297 0.343702 0.177645 0.020025 0.000000
Finally succeeded in printing the weights directly with
- cout << "\n" << *(weights[0]) << endl;
- weights->Print(std::cout) ; doesn't work
- ( *(weights[0]) ).Print(std::cout) ; // this WORKS !!!
- weights[0]->Print(std::cout) ; // this also WORKS !!
- weights[1]->Print(std::cout) ; // gives the weights for gain switch. Weight = 1.0 on sample 6, all others = 0
The pointer array weights[2] is set in RecoLocalCalo/EcalRecProducers/plugins/EcalUncalibRecHitWorkerGlobal.h#061
- 061 const EcalWeightSet::EcalWeightMatrix* weights[2];
- Pointer weights[0] for no gain switch weights
- Pointer weights[1] for gain switch weights
EcalWeightSet defined in /CondFormats/EcalObjects/interface/EcalWeightSet.h#020
class EcalWeightSet {
017
018 public:
019
020 typedef math::Matrix<3,10>::type EcalWeightMatrix;
021 typedef math::Matrix<10,10>::type EcalChi2WeightMatrix;
30 Nov 2011
Looking at code to check weights method and print what goes on. Also, access to and printout of database weights, ADCtoGeV etc.
For gain switch, only adc counts at 6th sample are taken for the pulse height. The weights for all other samples are set to zero.
http://cmslxr.fnal.gov/lxr/source/RecoLocalCalo/EcalRecAlgos/interface/EcalUncalibRecHitRecWeightsAlgo.h
The crucial line, 63:
- frame(iSample) = double(((double)(dataFrame.sample(iSample).adc()) - pedestals[gainId-1]) * gainRatios[gainId-1]);
- This subtracts the ped (for that gain level) and multiplies the pulse by the gain ratio to get back to ‘gain 12’ units.
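To make line 63 concrete, a hedged Python transcription of the pedestal subtraction plus the weighted sum (the pedestal and gain-ratio numbers here are made-up illustrations, not database values; the barrel weights are the ones from the 3 Dec 2011 printout):

```python
# Sketch of the weights-method amplitude, following line 63 of
# EcalUncalibRecHitRecWeightsAlgo.h: per sample, subtract the pedestal
# for that gain and rescale by the gain ratio back to 'gain 12' units.
# Pedestals and gain ratios below are illustrative only.
pedestals   = [200.0, 200.0, 200.0]   # per gainId 1..3
gain_ratios = [1.0, 2.0, 12.0]        # relative to gain 12

def build_frame(samples):
    """samples: list of (adc, gainId) pairs -> pedestal-subtracted frame."""
    return [(adc - pedestals[gid - 1]) * gain_ratios[gid - 1]
            for adc, gid in samples]

def amplitude(weights, frame):
    """Weighted sum over the 10 time samples."""
    return sum(w * f for w, f in zip(weights, frame))

# Barrel weights (no gain switch) from the 3 Dec 2011 printout
weights_eb = [-0.383680, -0.383680, -0.383680, 0.0, 0.195222,
              0.426238, 0.346081, 0.173547, 0.009950, 0.0]

# A flat frame at pedestal level gives amplitude 0; and the weights sum
# to ~0, so any constant baseline cancels in the weighted sum.
flat = build_frame([(200, 1)] * 10)
```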
Take 1st of Wee events and use code from TrivialObjectAnalyzer to print weights in EB and EE:
Run number = 178160 Lumi = 494 event number = 768130023
bunch crossing = 564 orbit = 129307057 store = 0 time = 603712
Using GlobalTag GR_P_V22::All
Barrel ADCtoGeV = 0.03933
Endcap ADCtoGeV = 0.06575
number of EB rec Hits = 409
XtalGroupId.id() = 1
Lookup EcalWeightSet for groupid: 1 and TDC id 1
check size of data members in EcalWeightSet
weight matrix before gain switch:
-0.384 -0.384 -0.384 0 0.195 0.426 0.346 0.174 0.00995 0
0.306 0.306 0.306 0 0.0327 -0.0765 -0.0386 0.0429 0.12 0
0 0 0 0 0 0 0 0 0 0
weight matrix after gain switch:
0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
number of EE rec Hits = 315
XtalGroupId.id() = 2
Lookup EcalWeightSet for groupid: 2 and TDC id 1
check size of data members in EcalWeightSet
weight matrix before gain switch:
-0.383 -0.383 -0.383 0 0.195 0.414 0.344 0.178 0.02 0
0.309 0.309 0.309 0 0.0318 -0.0734 -0.0396 0.0399 0.115 0
0 0 0 0 0 0 0 0 0 0
weight matrix after gain switch:
0 0 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
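A trivial cross-check of the gain-switch behaviour stated above: applying that weight row to a frame just returns the 6th sample, since all the other weights are zero. A quick sketch with made-up sample values:

```python
# The gain-switch weight row from the printouts above: 1.0 on the 6th
# time sample (index 5), zero elsewhere.
gain_switch_weights = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]

def amplitude(weights, frame):
    """Weighted sum over the 10 time samples."""
    return sum(w * s for w, s in zip(weights, frame))

# Made-up 10-sample frame (ADC counts): the amplitude is just frame[5]
frame = [200, 201, 199, 200, 350, 612, 480, 320, 250, 210]
amp = amplitude(gain_switch_weights, frame)
```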
Interesting files:
EcalTrivialObjectAnalyzer.cc
226 // fetch TB weights
227 std::cout <<"Fetching EcalTBWeights from DB " << std::endl;
228 edm::ESHandle<EcalTBWeights> pWgts;
229 context.get<EcalTBWeightsRcd>().get(pWgts);
230 const EcalTBWeights* wgts = pWgts.product();
231 std::cout << "EcalTBWeightMap.size(): " << std::setprecision(3) << wgts->getMap().size() << std::endl;
232
233
234 // look up the correct weights for this xtal
235 //EcalXtalGroupId gid( (*git) );
236 EcalTBWeights::EcalTDCId tdcid(1);
237
238 std::cout << "Lookup EcalWeightSet for groupid: " << std::setprecision(3)
239 << gid.id() << " and TDC id " << tdcid << std::endl;
240 EcalTBWeights::EcalTBWeightMap::const_iterator wit = wgts->getMap().find( std::make_pair(gid,tdcid) );
241 EcalWeightSet wset;
242 if( wit != wgts->getMap().end() ) {
243 wset = wit->second;
244 std::cout << "check size of data members in EcalWeightSet" << std::endl;
245 //wit->second.print(std::cout);
-- DavidCockerill - 30-Nov-2011
Attached images: DAS-page-view, Anode-voltages.png, 328361-0p1-valid-events-map-328361.png