\n",
"d) Which of the two algorithms is better suited for this analysis and why?"
]
},
{
"cell_type": "markdown",
"id": "1b66263b-ce84-45cb-9c70-f6f232269ff3",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "markdown",
"id": "4bbeea05-886e-4c6d-bf8f-64b2b66412a6",
"metadata": {},
"source": [
"## Statistical analysis\n",
"\n",
"Since we are looking at the hadronic decay channel of VBS, the overwhelming QCD background requires special treatment. That's why this analysis features a 3D Fit exploiting the different shape of QCD background and our signal process.\\\n",
"The ultimate fit for limit extraction is done in a 3D plane of ($m_\\mathrm{VV}, m_\\mathrm{V1}, m_\\mathrm{V2}$). This is because, as we have seen above, our signal process is resonant in $m_\\mathrm{V1}$ and $m_\\mathrm{V2}$, whereas the QCD background is exponentially falling. The 3rd axis, $m_\\mathrm{VV}$ is sensitive to the change of our EFT parameter. Furthermore, using $m_\\mathrm{VV}$ as a variable gives the intuitive interpretation of the low energy tail of a resonance, as well as a way to tackle a theoretical problem of EFT, namely unitarity restoration (not in this exercise!).\n",
"\n",
"In the following, we will derive the template for our signal process and QCD background. Other backgrounds follow a similar or easier treatment than QCD multijet production and are not explicitely included here.\n",
"\n",
"**Important:** From now on, please always work with the cut on the N-subjettiness $\\tau_{21} > 0.79$."
]
},
{
"cell_type": "markdown",
"id": "045369c1-c7cf-4390-948d-ad2d01e02992",
"metadata": {},
"source": [
"

\n",
"
Fig.3: Schematic overview of the fitting strategy in 3 dimension."
]
},
{
"cell_type": "markdown",
"id": "de58fac3-8bcd-4bd0-b288-d79c2b363372",
"metadata": {},
"source": [
"### Deriving signal templates\n",
"\n",
"Now, we will derive parametric templates of the signal process in 3D, i.e., a 3D probability density function to describe the signal process for a given value of an EFT parameter. For this, we assume the shape of $m_\\mathrm{V1}$, $m_\\mathrm{V2}$, and $m_\\mathrm{VV}$ to be uncorrelated, such that it falls into three parts:\n",
"\n",
"$$ P^\\mathrm{EFT}(m_\\mathrm{VV}, m_\\mathrm{V1}, m_\\mathrm{V2}) = P(m_\\mathrm{VV}) \\times P(m_\\mathrm{V1}) \\times P(m_\\mathrm{V2})$$\n",
"\n",
"The overall normalization follows from the scaling derived in Exercise 6b) and the cross-section of the SM process such that we have to focus now on $P(m_\\mathrm{V1})$, $P(m_\\mathrm{V2})$ and $P(m_\\mathrm{VV})$.\n"
]
},
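{
"cell_type": "markdown",
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000001",
"metadata": {},
"source": [
"As a small illustration (not part of the original analysis), the sketch below multiplies three toy 1D densities into a factorized 3D signal PDF. The toy shapes and function names are placeholders; the actual 1D templates are derived in the following cells."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000002",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# toy stand-ins for the three 1D templates (placeholders, not the fitted shapes)\n",
"def p_mv1_toy(m): return np.exp(-0.5 * ((m - 80.0) / 10.0) ** 2)   # resonance near the W mass\n",
"def p_mv2_toy(m): return np.exp(-0.5 * ((m - 90.0) / 10.0) ** 2)   # resonance near the Z mass\n",
"def p_mvv_toy(m): return np.exp(-0.003 * m)                        # falling spectrum in m_VV\n",
"\n",
"# factorized 3D signal PDF: P(m_VV, m_V1, m_V2) = P(m_VV) * P(m_V1) * P(m_V2)\n",
"def p_signal_3d(m_vv, m_v1, m_v2):\n",
"    return p_mvv_toy(m_vv) * p_mv1_toy(m_v1) * p_mv2_toy(m_v2)\n",
"\n",
"# evaluate at one example point\n",
"print(p_signal_3d(1200.0, 80.0, 91.0))"
]
},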
{
"cell_type": "code",
"execution_count": null,
"id": "9d949ed2-e4a0-402a-8a5e-e015b09de3a1",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# apply the cut on tau21 > 0.79 for both AK8 jets also to the signal\n",
"signal_tau21 = signal[(signal['ak8_j1','tau21'] < 0.79) & (signal['ak8_j2','tau21'] < 0.79)]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f032f13c-75bf-4a38-85d8-fd42ffa9bb86",
"metadata": {},
"outputs": [],
"source": [
"# We now apply some further analysis cuts:\n",
"# m_{VV} > 800 GeV and m_{VV} < 5000 GeV\n",
"signal_tau21 = signal_tau21[(signal_tau21['ak8_j1_j2'].v4.mass > 800) & (signal_tau21['ak8_j1_j2'].v4.mass < 5500)]"
]
},
{
"cell_type": "markdown",
"id": "61441bcb-16eb-4ce7-99f5-d7775382435d",
"metadata": {},
"source": [
"#### 1D template for $m_\\mathrm{V1}$ and $m_\\mathrm{V2}$"
]
},
{
"cell_type": "markdown",
"id": "67916e85-a613-486e-9fb9-cba58c311023",
"metadata": {},
"source": [
"The $m_\\mathrm{V1}$ - and $m_\\mathrm{V2}$ - distributions of the signal process show a resonance at the W- or Z-Boson peak. This can be modelled by a double-sided Crystal Ball function: a Gaussian distribution in the middle with two powe-law tails. This function has 6 parameters: the center and width of the Gaussian core, and four values that describe where the tails start ( $\\alpha_{i}$ ) and how they fall off ( $\\mathrm{N}_i$ ). "
]
},
{
"cell_type": "markdown",
"id": "6faf4590-966d-44e7-adc7-8cd9fa9b5660",
"metadata": {},
"source": [
"\n",
"$$t = \\frac{x - mean}{width} $$\n",
"\n",
"$$\\mathrm{DS-CrystalBall} (x; mean, width, \\alpha_1, N_1, \\alpha_2, N_2) =\n",
"\\left\\{\n",
"\t\\begin{array}{ll}\n",
"\t\t[1 - \\frac{\\alpha _1}{N _1} (\\alpha _1 + t)]^{-\\alpha _1} \\exp{(-\\frac{1}{2}\\alpha _1^2)} & \\mathrm{if} \\ t \\leq - \\alpha _1 \\\\\n",
"\t\t\\hspace{3.5cm} \\exp{(-\\frac{1}{2}t^2)} & \\mathrm{if} \\ - \\alpha _1 < t < \\alpha _2 \\\\\n",
"\t\t[1 - \\frac{\\alpha _2}{N _2} (\\alpha _2 - t)]^{-\\alpha _2} \\exp{(-\\frac{1}{2}\\alpha _2^2)} & \\mathrm{if} \\ t \\geq \\alpha _2\n",
"\t\\end{array}\n",
"\\right.\n",
"$$"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c534e0d-f128-45d5-bbc7-e44dfb796fc7",
"metadata": {},
"outputs": [],
"source": [
"# implementation of double-sided Crystal Ball function\n",
"def DSCB(x, mean = 80.0, width = 1.0, a1 = 1.0, N1=1.0, a2=1.0, N2=1.0, scale=1000):\n",
" \n",
" lower_bound = (-1. * np.abs(a1 * width)) + mean\n",
" upper_bound = (+1. * np.abs(a2 * width)) + mean\n",
" \n",
" condlist=[ (x <= lower_bound), (x > lower_bound) & (x < upper_bound), (x >= upper_bound)]\n",
" \n",
" funclist=[ lambda x: scale * (1 - (a1 / N1)*(a1+((x - mean) / width)))**(-1. * a1) * np.exp(-0.5 * a1 * a1),\\\n",
" lambda x: scale * np.exp(-0.5 * ((x - mean) / width) * ((x - mean) / width)),\\\n",
" lambda x: scale * ( (1 - (a2 / N2)*(a2-((x - mean) / width)))**(-1. * a2) ) * np.exp(-0.5 * a2 * a2)]\n",
" \n",
" return np.piecewise(x, condlist, funclist)\n",
"\n",
"# this function can be used to fix parameters and enforce a one-sided CB function or a Gaussian\n",
"def DSCB_fix(x, mean, width, a2, N2, scale):\n",
" ret = DSCB(x, mean, width, 1., 0.5, a2, N2, scale)\n",
"\n",
" return ret"
]
},
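{
"cell_type": "markdown",
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000003",
"metadata": {},
"source": [
"A quick optional sanity check (not part of the original exercise): the piecewise DSCB should be continuous at the two Gaussian/tail boundaries, $t = -\\alpha_1$ and $t = \\alpha_2$. The sketch below evaluates the function just below and just above both boundaries for an arbitrary set of parameters; it assumes the DSCB function and the NumPy import from the cells above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000004",
"metadata": {},
"outputs": [],
"source": [
"# optional sanity check: DSCB should be continuous at the Gaussian/tail boundaries\n",
"mean, width, a1, N1, a2, N2, scale = 80.0, 10.0, 1.5, 2.0, 1.5, 2.0, 1000.0\n",
"eps = 1e-6\n",
"\n",
"for edge in (mean - a1 * width, mean + a2 * width):\n",
"    below = DSCB(np.array([edge - eps]), mean, width, a1, N1, a2, N2, scale)[0]\n",
"    above = DSCB(np.array([edge + eps]), mean, width, a1, N1, a2, N2, scale)[0]\n",
"    print(f'x = {edge:7.2f} GeV: f(x-) = {below:.4f}, f(x+) = {above:.4f}')"
]
},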
{
"cell_type": "code",
"execution_count": null,
"id": "19bae5d4-431f-4f71-b444-d4bfcafe8a29",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"# make a histogram of m_v1 \n",
"eft_values=[5,10,15,20]\n",
"histos_mv1, bin_centers, fit_results = [], [], []\n",
"for eftv in eft_values:\n",
" histos_mv1.append( np.histogram(signal_tau21['ak8_j1'].v4.mass, bins=32, weights=signal_tau21['EFT_weight_1st',str(eftv)]) )\n",
" bin_centers.append( (histos_mv1[-1])[1][:-1] + np.diff( (histos_mv1[-1])[1]) / 2 )\n",
"\n",
" # now run the fit\n",
" #fit_results.append( curve_fit(DSCB_fix, bin_centers[-1], (histo_mv1[-1])[0], p0=[90.0, 11.0, 1.6, 1.0, 10. ]) )\n",
" fit_results.append( curve_fit(DSCB, bin_centers[-1], (histos_mv1[-1])[0], p0=[90.0, 11.0, 1.6, 1., 1.6, 1.0, 10. ]) )"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "edc7e84d-e92c-4084-88ce-a5db3c679a9c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"x_line = np.arange(55, 215, 0.25)\n",
"cool_colors=['crimson','rebeccapurple','darkturquoise','orange']\n",
"for i in range(len(eft_values)):\n",
" plt.plot(bin_centers[i], (histos_mv1[i])[0], marker='+', linestyle='None', color=cool_colors[i])\n",
" plt.plot(x_line, DSCB(x_line,*(fit_results[i][0])) , label=str(eft_values[i]) + ' $\\mathrm{TeV}^{-2}$', color=cool_colors[i])\n",
" \n",
"plt.xlabel(\"$\\mathrm{m_{V1}}$ [$\\mathrm{GeV}$]\")\n",
"plt.ylabel(\"a.u.\")\n",
"plt.legend(title = \"$\\mathrm{c_{WWW}}$ / $\\mathrm{\\Lambda}^{-2} = $\")\n",
"plt.show()"
]
},
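{
"cell_type": "markdown",
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000005",
"metadata": {},
"source": [
"As an optional addition (not part of the original notebook), the fitted parameters and their uncertainties can be inspected directly: each entry of \"fit_results\" is the (best-fit values, covariance matrix) pair returned by \"curve_fit\", and the square root of the covariance diagonal gives the parameter uncertainties. The sketch below assumes \"eft_values\" and \"fit_results\" from the fitting cell above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000006",
"metadata": {},
"outputs": [],
"source": [
"# print the fitted DSCB parameters with uncertainties from the covariance matrix\n",
"param_names = ['mean', 'width', 'a1', 'N1', 'a2', 'N2', 'scale']\n",
"for eftv, (popt, pcov) in zip(eft_values, fit_results):\n",
"    errors = np.sqrt(np.diag(pcov))\n",
"    print(f'c_WWW / Lambda^2 = {eftv} TeV^-2:')\n",
"    for name, value, error in zip(param_names, popt, errors):\n",
"        print(f'  {name:>6s} = {value:10.3f} +/- {error:8.3f}')"
]
},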
{
"cell_type": "markdown",
"id": "cdb868d3-7019-419c-8b41-448943067e25",
"metadata": {},
"source": [
"
**Exercise 8: signal $m_{V}$ template**"
]
},
{
"cell_type": "markdown",
"id": "4f26c829-b7c0-4709-917a-095b42954fe3",
"metadata": {},
"source": [
"Read through and understand the above implementation. Then do the following tasks:"
]
},
{
"cell_type": "markdown",
"id": "c69b3c97-6a6d-4989-ae2f-c6b2268e04cf",
"metadata": {},
"source": [
"
\n",
"a) How does this distribution look for $m_{V2}$ ?"
]
},
{
"cell_type": "markdown",
"id": "a572788d-5dd6-42c9-a039-74ef1105f830",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "514fa231-218c-4265-b669-5742810b25d2",
"metadata": {},
"outputs": [],
"source": [
"# Your code goes here:"
]
},
{
"cell_type": "markdown",
"id": "e48802ad-116e-4f32-bb05-81353e4ba6dd",
"metadata": {},
"source": [
"
\n",
"b) Instead of a double-sided Crystal Ball function, try using a single-sided Crystal Ball function or a Gaussian. You can use the above function \"DSCB_fix\" and fix the correct parameter(s)."
]
},
{
"cell_type": "markdown",
"id": "6444f599-e6d5-495d-9a55-cceb7b4a7069",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b90c5717-9a03-4143-846f-1d61f1a02ec7",
"metadata": {},
"outputs": [],
"source": [
"# Your code goes here:"
]
},
{
"cell_type": "markdown",
"id": "e5fe8889-2f4f-40b1-9cac-f303ee984416",
"metadata": {},
"source": [
"#### 1D template for $m_\\mathrm{VV}$"
]
},
{
"cell_type": "markdown",
"id": "05c5cdea-d27e-4040-a51b-a94fb1064ede",
"metadata": {},
"source": [
"Now that we looked at $P_\\mathrm{V1}$ and $P_\\mathrm{V2}$, it is now time to have a look at the third axis in the final fit: $P_\\mathrm{VV}$. Since the EFT dependency is prominent in this variable and does not only affect the normalization but also noticably the shape, the parametrization also includes the EFT parameter itself.\\\n",
"The functional form for $P_\\mathrm{VV}$ is the following:\n",
"\n",
"$$ P^\\mathrm{EFT}(m_\\mathrm{VV}) = \\mathrm{N_{SM}} \\cdot \\mathrm{e}^{ \\mathrm{a_0} \\mathrm{M_{VV}} }\n",
"+ \\mathrm{N_{quadr}} \\cdot \\mathrm{c_i}^2 \\cdot \\mathrm{e}^{ \\mathrm{a_1} \\mathrm{M_{VV}} } \\cdot \\frac{1 + \\mathrm{Erf( (\\mathrm{M_{VV} - \\mathrm{a_2}) / \\mathrm{a_3} }) }} {2} $$\n",
"\n",
"\n",
"$$ %P(m_\\mathrm{VV}) = \\mathrm{N_{SM}} \\cdot \\mathrm{e}^{ \\mathrm{a_0} \\mathrm{M_{VV}} }\n",
"%+ \\mathrm{N_{intf}} \\cdot \\mathrm{c_i} \\cdot \\mathrm{e}^{ \\mathrm{a_1} \\mathrm{M_{VV}} } \n",
"%+ \\mathrm{N_{quadr}} \\cdot \\mathrm{c_i}^2 \\cdot \\frac{1 + \\mathrm{Erf( (\\mathrm{M_{VV} - \\mathrm{a_2}) / \\mathrm{a_3} }) }} {2} $$\n",
"\n",
"where the interference term has be omitted for simplicity. The PDF then falls into two parts: an exponential falling part for the SM which does not change with the Willson coefficient, $\\mathrm{c_i}$, and a term that scales quadratically with $\\mathrm{c_i}$. The functional form has been chosen to describe the turn on with higher energies and the effect of the proton PDF, which ultimately forces the distribution to zero.\\\n",
"The fitting procedure is similar to before:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73a4c8b7-0f73-4833-be3a-d54db0e7cc71",
"metadata": {},
"outputs": [],
"source": [
"def eft_sm(x, a0, scale):\n",
" \n",
" exp_part = np.exp(a0 * x)\n",
" return scale * exp_part\n",
" \n",
"# a2 = offset, a3 = width\n",
"def eft_quadr(x, a1, offset, width, scale):\n",
" \n",
" erf_part = (1 + scipy.special.erf((x - offset) / width)) / 2\n",
" exp_part = np.exp(a1 * x)\n",
" return scale * exp_part * erf_part"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5e256574-9130-4b00-b9c8-93f04dd317b4",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# First the histograms\n",
"histo_sm = np.histogram(signal_tau21['ak8_j1_j2'].v4.mass, bins=15, weights=(signal_tau21['EFT_weight_1st',\"0\"])) \n",
"bin_centers_sm = histo_sm[1][:-1] + np.diff( histo_sm[1] ) / 2\n",
"\n",
"histo_quad = np.histogram(signal_tau21['ak8_j1_j2'].v4.mass, bins=15, weights=((signal_tau21['EFT_weight_1st',\"15\"] + signal_tau21['EFT_weight_1st',\"-15\"] -signal_tau21['EFT_weight_1st',\"0\"])/2.)) \n",
"bin_centers_quad = histo_quad[1][:-1] + np.diff( histo_quad[1] ) / 2\n",
"\n",
"# Then the fits -> needs good starting values!\n",
"fit_result_sm = curve_fit(eft_sm, bin_centers_sm, histo_sm[0], p0=[-0.003, 2800.])\n",
"fit_result_quad = curve_fit(eft_quadr, bin_centers_quad, histo_quad[0], p0=[-0.001, 2800., 1400., 9300.])\n",
"\n",
"# And finally plot it\n",
"x_line_mvv = np.arange(800, 8000, 1)\n",
"plt.figure()\n",
"\n",
"fig, ax = plt.subplots(1, 2, figsize=(9,4.5))\n",
"ax[0].plot(x_line_mvv, eft_sm(x_line_mvv,*fit_result_sm[0]), color=cool_colors[2], label='Fit')\n",
"ax[0].plot(bin_centers_sm, histo_sm[0], marker='+', linestyle='None', color=cool_colors[0], label='Simulation')\n",
"ax[0].set_xlabel(\"$\\mathrm{m_{VV}}$ [$\\mathrm{GeV}$]\")\n",
"ax[0].set_ylabel(\"a.u.\")\n",
"ax[0].set_title(\"SM contribution\")\n",
"ax[0].legend(title = \"$\\mathrm{c_{WWW}}$ / $\\mathrm{\\Lambda}^{-2} = 0 \\, \\mathrm{TeV}^{-2}$\")\n",
"\n",
"\n",
"ax[1].plot(x_line_mvv, eft_quadr(x_line_mvv,*fit_result_quad[0]), color=cool_colors[2], label='Fit')\n",
"ax[1].plot(bin_centers_quad, histo_quad[0], marker='+', linestyle='None', color=cool_colors[0], label='Simulation')\n",
"ax[1].set_xlabel(\"$\\mathrm{m_{VV}}$ [$\\mathrm{GeV}$]\")\n",
"ax[1].set_ylabel(\"a.u.\")\n",
"ax[1].set_title(\"quadratic EFT contribution\")\n",
"ax[1].legend(title = \"$\\mathrm{c_{WWW}}$ / $\\mathrm{\\Lambda}^{-2} = 15 \\, \\mathrm{TeV}^{-2}$\")\n",
"\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "3105793d-12a1-4429-8a51-d797ef3853f1",
"metadata": {},
"source": [
"Now the two parts of our $\\mathrm{m_{VV}}$ parametrization are fitted. Your task will be to combine them and make a quick cross-check. The following exercise will guide you through that:"
]
},
{
"cell_type": "markdown",
"id": "fa6661f9-04a9-43b7-88e6-c6e81dce4710",
"metadata": {},
"source": [
"**Exercise 9: signal $m_{VV}$ template**"
]
},
{
"cell_type": "markdown",
"id": "0a19495e-89b8-4517-a225-0249bb588e14",
"metadata": {},
"source": [
"\n",
"a) What are the results of the two fits for $m_{VV}$ ?"
]
},
{
"cell_type": "markdown",
"id": "b127c0d0-7f13-4c24-9208-02afd1608508",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "421f84d8-ab17-4858-957f-60306dfbf7e8",
"metadata": {},
"outputs": [],
"source": [
"# Your code:"
]
},
{
"cell_type": "markdown",
"id": "d79b8094-ffe5-4e73-83c2-50a0ca1fb726",
"metadata": {},
"source": [
"\n",
"b) Which value of $c_\\mathrm{HBox}$ has been used to derive the quadratic contribution?"
]
},
{
"cell_type": "markdown",
"id": "2ec59421-756d-41ec-ad87-2f70a40f5ced",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "markdown",
"id": "a4c7a494-17ac-4aa1-aba3-c67333b77a36",
"metadata": {},
"source": [
"\n",
"c) Complete the code below to sum both contributions. Be careful that the first term does not scale with the Willson coefficient, while the second does.\n",
" "
]
},
{
"cell_type": "markdown",
"id": "9b66ddfa-53a8-4313-9ced-0e8a619b2676",
"metadata": {},
"source": [
"
\n",
"
Please complete the code\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce042b9f-deef-44a8-94ec-cd6b76b1c03d",
"metadata": {},
"outputs": [],
"source": [
"# function for the combination of both contributions\n",
"def p_mvv(x, a0, scaleSM, a1, offset, width, scaleQ, eft_value):\n",
" \n",
" # here we can conveniently reuse the functions from above. \n",
" # Be careful to scale the normalization of the quadratic contribution with \"eft_value ** 2\"!\n",
" # Your code goes here:\n",
" sm_part =\n",
" quad_part =\n",
" \n",
" return sm_part + quad_part"
]
},
{
"cell_type": "markdown",
"id": "3dfa6b44-9db1-449a-b2d4-507f5e00a030",
"metadata": {},
"source": [
"\n",
"d) Finally, plot the complete template together with the Monte Carlo Simulation for $c_\\mathrm{HBox} = 0,5,10,15$ in one plot."
]
},
{
"cell_type": "markdown",
"id": "c284484d-85d1-485c-a764-1f8a7a99120f",
"metadata": {},
"source": [
"
\n",
"
Please complete the code"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32cb4aac-d8b4-4c7e-9b31-493e9e2f2f77",
"metadata": {},
"outputs": [],
"source": [
"# the combined fit results. Be careful to rescale the normalization of the quadratic contribution!\n",
"combined_fit_results=\n",
"\n",
"cwww_values=[0,5,10,15]\n",
"hs,bs = [],[]\n",
"for i,c in enumerate(cwww_values):\n",
"\n",
" # Your code goes here:\n",
"\n",
" \n",
"# some cosmetics in case you are using matplotlib\n",
"plt.xlabel(\"$\\mathrm{m_{VV}}$ [$\\mathrm{GeV}$]\")\n",
"plt.ylabel(\"a.u.\")\n",
"plt.legend(title = \"$\\mathrm{c_{WWW}}$ / $\\mathrm{\\Lambda}^{-2} = $\")\n",
"plt.title(\"full $m_\\mathrm{VV}$ contribution\")\n",
"plt.show()\n"
]
},
{
"cell_type": "markdown",
"id": "d98ed6ed-c955-403e-9b10-7adb95a7f838",
"metadata": {},
"source": [
"### QCD templates"
]
},
{
"cell_type": "markdown",
"id": "6fc723aa-ec6a-4ace-a4c2-30016055ffb8",
"metadata": {},
"source": [
"Now we will have a very short look at the QCD background. The derivation of parametric templates is not done explicitely in this exercise but we will have a look at the results.\\\n",
"The three contributions of the PDF cannot be assumed to be uncorrelated. In this case, the conditional PDFs for $m_\\mathrm{V1}$ and $m_\\mathrm{V2}$, $P_\\mathrm{V1}(m_\\mathrm{V1} | m_\\mathrm{VV})$ and $P_\\mathrm{V1}(m_\\mathrm{V2} | m_\\mathrm{VV})$, are derived in the form of 2D histograms. \\\n",
"The result is given below and shows that all three axes can be modeled (and in fact are) with a simple exponential distribution.\n",
"\n",
"$$ P^\\mathrm{QCD}(m_\\mathrm{VV}, m_\\mathrm{V1}, m_\\mathrm{V2}) = P(m_\\mathrm{VV}) \\times P_\\mathrm{cond,1}(m_\\mathrm{V1} \\vert m_\\mathrm{VV}) \\times P_\\mathrm{cond,2}(m_\\mathrm{V2} \\vert m_\\mathrm{VV})$$\n",
"\n",
"\n",
" | \n",
" | \n",
" | \n",
"
\n",
"Fig.4: $m_\\mathrm{VV}$, $m_\\mathrm{V1}$ and $m_\\mathrm{V2}$ contributions to the combined PDF, $P^\\mathrm{QCD}(m_\\mathrm{VV}, m_\\mathrm{V1}, m_\\mathrm{V2})$."
]
},
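{
"cell_type": "markdown",
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000007",
"metadata": {},
"source": [
"To illustrate (outside the original derivation) how a conditional PDF such as $P_\\mathrm{cond,1}(m_\\mathrm{V1} \\vert m_\\mathrm{VV})$ can be built from a 2D histogram, the sketch below fills a 2D histogram with toy, exponentially falling samples and normalizes each $m_\\mathrm{VV}$ slice to unit area. The toy distributions and binning are placeholders, not the actual QCD templates."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7e1f2a3-0a56-4c1d-9e2f-aa0000000008",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(42)\n",
"\n",
"# toy QCD-like samples: exponentially falling in m_VV and m_V1, with a mild correlation\n",
"m_vv = 800.0 + rng.exponential(scale=600.0, size=100000)\n",
"m_v1 = 30.0 + rng.exponential(scale=40.0, size=100000) + 0.01 * (m_vv - 800.0)\n",
"\n",
"# 2D histogram: rows correspond to m_VV bins, columns to m_V1 bins\n",
"counts, vv_edges, v1_edges = np.histogram2d(m_vv, m_v1, bins=[20, 40],\n",
"                                            range=[[800.0, 3000.0], [30.0, 250.0]])\n",
"\n",
"# normalize each m_VV slice to unit area -> P(m_V1 | m_VV) per m_VV bin\n",
"bin_width = np.diff(v1_edges)\n",
"slice_norm = (counts * bin_width).sum(axis=1, keepdims=True)\n",
"p_v1_given_vv = np.divide(counts, slice_norm, out=np.zeros_like(counts), where=slice_norm > 0)\n",
"\n",
"print(p_v1_given_vv.shape)  # one normalized m_V1 distribution per m_VV bin"
]
},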
{
"cell_type": "markdown",
"id": "90a84d2d-b762-4ae5-8225-198e7eb5ab28",
"metadata": {},
"source": [
"
**Exercise 10: QCD background**"
]
},
{
"cell_type": "markdown",
"id": "a37c872a-4d34-44b7-b84a-af106fc4ab15",
"metadata": {},
"source": [
"
\n",
"a) Is the shape of $m_\\mathrm{V1}$, $m_\\mathrm{V1}$ and $m_\\mathrm{VV}$ expected? Do you know an explanation for it?"
]
},
{
"cell_type": "markdown",
"id": "94e79b7f-8c4f-4200-839d-5aa30ecfb7ab",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "markdown",
"id": "a4bde95f-704b-4c89-9039-1f01ba494026",
"metadata": {},
"source": [
"
\n",
"b) Apart from looking at the explicit plots, do you have an explanation why $m_\\mathrm{V1}$ and $m_\\mathrm{VV}$ are correlated?"
]
},
{
"cell_type": "markdown",
"id": "a4401c3f-5616-462f-88fb-e1dd3d7f62f9",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "markdown",
"id": "8236f953-e770-47dd-b236-61956ffc9cd3",
"metadata": {},
"source": [
"## Limits"
]
},
{
"cell_type": "markdown",
"id": "344d8dfc-eeee-42b3-945f-3c9e90bbe0d7",
"metadata": {},
"source": [
"Unfortunately, we have to cut many corners to fit this analysis into the scope of one exercise sheet. One of such corners are for example systematic uncertainties or the final fit for limit extraction. Also backgrounds, which contain both, a resonant and a exponentially falling contribution to the AK8 jet-masses have not been investigated.\\\n",
"The final fit itself runs within a CMS internal software framework based on ROOT and takes multiple hours.\n",
"Instead, we give you the expected limits derived from simulation in form of a plot which you have to understand in the exercise below."
]
},
{
"cell_type": "markdown",
"id": "8e912bf9-029f-4f09-a2d1-fac7a6fa9104",
"metadata": {},
"source": [
"

\n",
"\n",
"
Fig.5: Plot showing the final fit results and expected limits for $c_\\mathrm{HBox}$."
]
},
{
"cell_type": "markdown",
"id": "bce1ef82-5501-450a-892d-fa527e400097",
"metadata": {},
"source": [
"
**Exercise 11: Limits**"
]
},
{
"cell_type": "markdown",
"id": "beff3866-b1c3-4957-8091-08e9e177b876",
"metadata": {},
"source": [
"
\n",
"a) Explain what is shown on the x- and y- axis."
]
},
{
"cell_type": "markdown",
"id": "0d4befcb-fb64-410d-b82e-1ff5140a6780",
"metadata": {},
"source": [
"
\n",
"
Answer: "
]
},
{
"cell_type": "markdown",
"id": "5e3ce681-d5d1-4991-86f4-6aff2d1e7a5c",
"metadata": {},
"source": [
"
\n",
"b) Which regions can be excluded at 95% CL and why?"
]
},
{
"cell_type": "markdown",
"id": "74b14674-7c02-445b-99e5-e8ec1d702edd",
"metadata": {},
"source": [
"
\n",
"
Answer: "
]
},
{
"cell_type": "markdown",
"id": "6da84908-469f-4985-ad32-0767791a32ee",
"metadata": {},
"source": [
"
\n",
"c) Assuming symmetric errors, how do they compare to other public limits?"
]
},
{
"cell_type": "markdown",
"id": "3653220a-b7bc-4a74-8721-d9c14d73c64e",
"metadata": {},
"source": [
"
\n",
"
Answer:"
]
},
{
"cell_type": "markdown",
"id": "80411522-66d0-4687-9fa2-3fc0b0fa83a3",
"metadata": {},
"source": [
" \n",
" \n",
"d) How does this plot change with increased luminosity?"
]
},
{
"cell_type": "markdown",
"id": "5d52826c-52af-4577-bb09-ecfdf415065e",
"metadata": {},
"source": [
"
\n",
" Answer:"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}