-- AustinBaty - 2020-03-03

Anne-Marie

Abstract could be strengthened a bit with "first time..."

I added one sentence saying that we see this deviation from perfect scaling for the first time.

* l24 remove "however". Seems a bit strange in the argumentation which
has already a However and Furthermore beforehand.

fixed

* l32 it seems v2 could be already defined here - or at least the
reference to it.

I have added a reference.

* l44 as "trigger" is referenced here, maybe the additional sentence
about the 2-stage triggering system of CMS could be added in the
previous paragraph.

I added a short sentence with a reference and a sentence with the definition.


* l47 redundant "is" before "corresponds".

removed

* table 1: too many significant figures (sf). The rule is to quote at
most 2, or 1 if appropriate.

https://twiki.cern.ch/twiki/bin/view/CMS/Internal/PubGuidelines#Significant_figures_for_measurem

Personally, given the precision of the uncertainty, I would quote 1 and
match the central value to the sf of the uncertainty, i.e. 0-5%: 25.8
+/- 0.5, and 70-90%: 0.167 +/- 0.008.

But OK for 2 if you prefer. In that case, 20-30% becomes 8.78 +/- 0.24
(I would quote 8.8 +/- 0.2).

etc... for all bins.

I have moved to 2 sig figs, with the central value matched to it.
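The rounding convention discussed above (quote the uncertainty to 1 or 2 significant figures, then match the central value to the same number of decimal places) can be sketched in Python. This is an illustrative helper only; the function names and interface are invented for this sketch, not part of any CMS tooling:

```python
import math

def round_to_sig_figs(x, sig):
    """Round x to `sig` significant figures."""
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

def format_measurement(value, uncertainty, sig=1):
    """Quote value +/- uncertainty with the uncertainty at `sig`
    significant figures and the central value matched to the same
    number of decimal places. Note: Python's round() uses banker's
    rounding, so exact .5 ties may differ from hand rounding."""
    unc = round_to_sig_figs(uncertainty, sig)
    decimals = max(0, sig - 1 - math.floor(math.log10(abs(unc))))
    return f"{value:.{decimals}f} +/- {unc:.{decimals}f}"

# The examples quoted in the comment above:
print(format_measurement(25.8, 0.5))          # 25.8 +/- 0.5   (0-5% bin)
print(format_measurement(0.167, 0.008))       # 0.167 +/- 0.008 (70-90% bin)
print(format_measurement(8.78, 0.24, sig=2))  # 8.78 +/- 0.24  (20-30% bin)
print(format_measurement(8.78, 0.24, sig=1))  # 8.8 +/- 0.2
```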

* l49: deposition in both the -> deposited in both

changed


* l56 and also l7-8: would it not be clearer to stick to centrality
everywhere? And not confuse readers unfamiliar with heavy ion physics
(like me ;-) )

I have changed the instance at L56 to 'centrality'. We prefer to keep the first instance as it is, in order to define what centrality is.

* l115-118: I find this confusing. If it is a data-driven background, I
would expect you to somehow use the population at low A and pT to derive
the contribution at high A and pT... but what you describe is a
selection to cut it, and a MC to estimate the fractions left?? How is
this data-driven?

To make it clearer: first describe the background, and that it is
dominant in region low A, low pT. Then mention STARLIGHT MC to define
90% rejection cuts, higher threshold for electrons due to resolution.
Then make it clear you cut this region in data (with signal eff O(0.5%)
loss corrected for). And finally explain how you obtain the estimate of
the remaining background of 0.6 (0.7)%.

You are correct that this is not entirely data-driven, as it relies on STARLIGHT. I have removed the classification of the backgrounds as data/MC-driven and instead just described how they are evaluated. This also saves some space in the paper. The section about the EM background has also been rewritten according to your suggestion.


* l127-132: I really have a hard time understanding this definition ;)
Maybe it is all fine for HI community - but I wonder if there would be a
way to make it a bit clearer to the non-experts ! I think adding the
definition as in ref[32] introduction, with the full formula of v_n,
would be good in the introduction here too.

I added the definition in the intro, and the full formula with the Q vectors back to this section.
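For context, the azimuthal anisotropy coefficients v_n are conventionally defined through a Fourier decomposition of the particle yield. This is the generic textbook form only; the exact Q-vector formula added to the paper follows ref [32] and may differ in notation:

```latex
\frac{dN}{d\phi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\left[ n\left(\phi - \Psi_n\right) \right]
```

Here \phi is the particle azimuthal angle and \Psi_n is the n-th order event-plane angle.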


l129 against a reference Q-vectors measured in an HF calo --> against a
Q-vector measured in a reference HF calorimeter.

This was rewritten.


fig1 caption: explain error bars (just stat) ?

Fig 1 has been changed based on other comments.

* l133 N_mb -> should be N_MB like line 48 ?

yes, changed.

* l137 "the resulting difference..... this difference" is not clear: you
have a variation on a quantity, which is propagated to the final result,
then the variation on the final result is taken as the associated
uncertainty and is 1% ..up to ...

This has been rewritten.

* l141 in -> is included

fixed

* l152 are quite small, so ... negligible -> being small, the
uncertainties.... are neglected.

As these uncertainties are negligible, I have elected to remove this sentence in order to save space.

* l194-195 I think this sentence is unnecessary: you say beforehand that
this trend is unrelated to quenching effect....

Agreed. It has been removed.

* l214 pT>50 GeV.

fixed

* l219-221 have you discussed with nPDF experts: would there not be a
variable more sensitive to it than the pT ? Which could make these data
useful for this purpose too ? Probably not for this paper - but could be
hinted here still....

The most stringent nPDF constraints for Z bosons at the LHC will probably come from pPb collisions rather than PbPb collisions, because the delivered luminosity is much higher and the accessible Bjorken-x region is larger. Instead, we would just like to point out that the high-pT region seems to be sensitive to the gluon nPDF instead of the quark/antiquark nPDF. Because the nPDF argument is much stronger for pPb than for PbPb, I do not think this should be the main focus of the paper.


* references: check the CMS guidelines... e.g. remove page ranges for
ref 2,7,8,10,19,20,22,23,30,33,36,38,39,42.

Fixed. Also changed some issues with the volume numbers.

Wei

1. Fig.1 is too small and needs to be significantly enlarged. If
necessary, e.g. due to the page limit, removing this figure is another
option: because the S/B is so large, the comparison essentially just
tells whether the MC Z peak is well simulated.

Figure has been removed, although we would like to have it approved as supplemental material.


2. Fig.2 caption: need a reference [4] right after "A previous
measurement".

added

3. Fig.3: is there uncertainty on the HG-PYTHIA model predictions, or
does the width of the line represent the model uncertainty? if there's
no uncertainty included, could you clarify the reason?

There is no uncertainty on HG-PYTHIA because it is essentially a ratio of an MC generator with itself, except that the events have been overlaid and a centrality calibration has been applied. The centrality calibration has no uncertainty because it is the result of a well-defined procedure. The width of the HG-PYTHIA line was chosen to be easy to see by eye (a thin line tended to be hidden by the X error bars of the data points).



4. Fig. 4 upper panel.

- It is not possible to distinguish the model predictions from
EPPS16 and nCTEQ15. Is it possible to zoom the Y-axis to 0.5-0.9?

This panel has been made larger as its own figure, since we removed the mass peaks.

- the legend is not consistent with Fig.2, where the data are
represented with a dot+line+box. Here the box represents the EPPS16
model predictions. This can be a bit confusing. One option is to remove
the box in Fig.2. Reducing the width of the sys box to that in Fig.2
would also help a bit.

I have made EPPS16 blue to avoid confusion.

5. Fig.4 lower panel

- same comments as for upper panel above.

I have made EPPS16 blue to avoid confusion.

- lower sub-panel: If I understand correctly, the box centered
at 1.0 is the sys error of the data points. One can confuse it with the
EPPS16 predictions.

I have added the gray shaded region to the legend.

Topic revision: r12 - 2020-03-06 - AustinBaty