{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Exercise 5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 5.1: Parameterization of Data\n", "### Exercise 5.1.1 (obligatory)\n", "If the underlying probability distribution function (PDF) of a dataset is unknown, empirical fit functions have to be employed. The most common empirical fit functions are n-th order polynomials with constant coefficients $p_k$ to be determined by the fit:\n", "$$ P_n \\left( x \\right) = \\sum_{k = 0}^{n} p_k \\, x^k \\ .$$\n", "The fit results can usually be “stabilized” by using orthogonal polynomials\n", "$$ L_n \\left( x \\right) = \\sum_{k = 0}^n p_k \\, l_k \\left( x \\right) \\, ,$$\n", "where $l_k(x)$ are Legendre polynomials, which can be defined recursively by\n", "$$ l_0 (x) = 1; \\quad l_1 (x) = x; \\quad (k + 1)\\, l_{k+1} (x) = (2k + 1)\\, x \\, l_k (x) - k \\, l_{k - 1} (x) \\ .$$\n", "The Legendre polynomials fulfill the orthogonality relation\n", "$$ \\int\\limits_{-1}^1 \\, \\mathrm{d}x \\, l_m \\left( x \\right) \\, l_n \\left( x \\right) = \\frac{2}{2n + 1} \\delta_{mn} \\ ,$$\n", "where $\\delta_{mn}$ denotes the Kronecker delta.\n", "\n", "Fit the data points given by the following pairs of $x$ and $y$ values assuming a constant uncertainty of $\\sigma_y = 0.5$ for $y$ and no uncertainty for $x$:\n", "```\n", " x = { -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1,\n", " 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 }\n", " y = { 5.0935, 2.1777, 0.2089, -2.3949, -2.4457, -3.0430, -2.2731,\n", " -2.0706, -1.6231, -2.5605, -0.7703, -0.3055, 1.6817, 1.8728,\n", " 3.6586, 3.2353, 4.2520, 5.2550, 3.8766, 4.2890 }\n", "```\n", "\n", "1. Use $P_2 \\left(x\\right)$, $P_3 \\left(x\\right)$, ..., $P_7 \\left(x\\right)$ as fit functions.\n", "\n", "2. Use $L_2 \\left(x\\right)$, $L_3 \\left(x\\right)$, ..., $L_7 \\left(x\\right)$ as fit functions.\n", "\n", "Plot the data and the fitted curves for all fits. 
Compare the resulting values for $p_k$ and their correlation matrices (to be obtained most conveniently via the `GetCorrelationMatrix()` method of the ROOT class [`TFitResult`](https://root.cern.ch/doc/master/classTFitResult.html), or from the covariance matrix returned by [`scipy.optimize.curve_fit()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html), which can be normalized to a correlation matrix). In which sense is the fit using orthogonal polynomials “more stable”? Discuss which order you would choose for the fit function.\n", "\n", "**Hint**: A convenient framework for fitting and visualisation of problems like this one is provided by the ROOT class [`TGraphErrors`](https://root.cern.ch/doc/master/classTGraphErrors.html) and its methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nPoints = 20\n", "data_x = np.array([-0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2, -0.1, 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], dtype=float)\n", "data_y = np.array([5.0935, 2.1777, 0.2089, -2.3949, -2.4457, -3.0430, -2.2731, -2.0706, -1.6231, -2.5605, -0.7703, -0.3055, 1.6817, 1.8728, 3.6586, 3.2353, 4.2520, 5.2550, 3.8766, 4.2890], dtype=float)\n", "sigma_x = np.array(nPoints*[0.], dtype=float)\n", "sigma_y = np.array(nPoints*[0.5], dtype=float)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ROOT Approach:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ROOT import gRandom, TGraphErrors, TF1, TMath, TVirtualFitter, TCanvas, gStyle, TPaveStats, TGraph, TFitResult" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Define polynomials.\n", "P_2 = \"[0] + [1]*x + [2]*x**2\"\n", "P_3 = \"[0] + [1]*x + [2]*x**2 + [3]*x**3\"\n", "P_4 = \"[0] + [1]*x + [2]*x**2 + [3]*x**3 + [4]*x**4\"\n", "P_5 = \"[0] + [1]*x + [2]*x**2 + 
[3]*x**3 + [4]*x**4 + [5]*x**5\"\n", "P_6 = \"[0] + [1]*x + [2]*x**2 + [3]*x**3 + [4]*x**4 + [5]*x**5 + [6]*x**6\"\n", "P_7 = \"[0] + [1]*x + [2]*x**2 + [3]*x**3 + [4]*x**4 + [5]*x**5 + [6]*x**6 + [7]*x**7\"\n", "\n", "# Define Legendre polynomials.\n", "L_2 = \"[0] + [1]*x + [2]*0.5*(3.*x**2 - 1.)\"\n", "L_3 = \"[0] + [1]*x + [2]*0.5*(3.*x**2 - 1.) + [3]*0.5*(5.*x**3 - 3.*x)\"\n", "L_4 = \"[0] + [1]*x + [2]*0.5*(3.*x**2 - 1.) + [3]*0.5*(5.*x**3 - 3.*x) + [4]*0.125*(35.*x**4 - 30.*x**2 + 3.)\"\n", "L_5 = \"[0] + [1]*x + [2]*0.5*(3.*x**2 - 1.) + [3]*0.5*(5.*x**3 - 3.*x) + [4]*0.125*(35.*x**4 - 30.*x**2 + 3.) + [5]*0.125*(63.*x**5 - 70.*x**3 + 15.*x)\"\n", "L_6 = \"[0] + [1]*x + [2]*0.5*(3.*x**2 - 1.) + [3]*0.5*(5.*x**3 - 3.*x) + [4]*0.125*(35.*x**4 - 30.*x**2 + 3.) + [5]*0.125*(63.*x**5 - 70.*x**3 + 15.*x) + [6]*0.0625*(231.*x**6 - 315.*x**4 + 105.*x**2 - 5.)\"\n", "L_7 = \"[0] + [1]*x + [2]*0.5*(3.*x**2 - 1.) + [3]*0.5*(5.*x**3 - 3.*x) + [4]*0.125*(35.*x**4 - 30.*x**2 + 3.) + [5]*0.125*(63.*x**5 - 70.*x**3 + 15.*x) + [6]*0.0625*(231.*x**6 - 315.*x**4 + 105.*x**2 - 5.) + [7]*0.0625*(429.*x**7 - 693.*x**5 + 315.*x**3 -35.*x)\"\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create the TGraphErrors object.\n", "\n", "# Perform the fits using the predefined functions." 
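, "\n", "\n", "# --- Hedged sketch: one possible way to run the fits (not the official solution). ---\n", "# It assumes the format strings P_2 ... L_7 and the data arrays defined above;\n", "# extend the list below to all orders 2..7.\n", "graph = TGraphErrors(nPoints, data_x, data_y, sigma_x, sigma_y)\n", "for name, formula in ((\"P_2\", P_2), (\"P_3\", P_3), (\"L_2\", L_2), (\"L_3\", L_3)):\n", "    func = TF1(name, formula, -1., 1.)\n", "    result = graph.Fit(func, \"S\")  # fit option \"S\" returns a TFitResultPtr\n", "    result.Get().GetCorrelationMatrix().Print()  # correlation matrix of the fit parameters\n"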
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Python Approach:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import scipy.stats\n", "import scipy.optimize\n", "import matplotlib.pyplot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# define polynomials\n", "def P_2(x, a, b, c):\n", " return a + b * x + c * x**2\n", "\n", "def P_3(x, a, b, c, d):\n", " return a + b * x + c * x**2 + d * x**3\n", "\n", "def P_4(x, a, b, c, d, e):\n", " return a + b * x + c * x**2 + d * x**3 + e * x**4\n", "\n", "def P_5(x, a, b, c, d, e, f):\n", " return a + b * x + c * x**2 + d * x**3 + e * x**4 + f * x**5\n", "\n", "def P_6(x, a, b, c, d, e, f, g):\n", " return a + b * x + c * x**2 + d * x**3 + e * x**4 + f * x**5 + g * x**6\n", "\n", "def P_7(x, a, b, c, d, e, f, g, h):\n", " return a + b * x + c * x**2 + d * x**3 + e * x**4 + f * x**5 + g * x**6 + h * x**7\n", "\n", "# define Legendre polynomials\n", "def L_2(x, a, b, c):\n", " return a + b * x + c * 0.5 * (3. * x**2 - 1.)\n", "\n", "def L_3(x, a, b, c, d):\n", " return a + b * x + c * 0.5 * (3. * x**2 - 1.) + d * 0.5 * (5. * x**3 - 3. * x)\n", "\n", "def L_4(x, a, b, c, d, e):\n", " return a + b * x + c * 0.5 * (3. * x**2 - 1.) + d * 0.5 * (5. * x**3 - 3. * x) + e * 0.125 * (35. * x**4 - 30. * x**2 + 3.)\n", "\n", "def L_5(x, a, b, c, d, e, f):\n", " return a + b * x + c * 0.5 * (3. * x**2 - 1.) + d * 0.5 * (5. * x**3 - 3. * x) + e * 0.125 * (35. * x**4 - 30. * x**2 + 3.) + f * 0.125 * (63. * x**5 - 70. * x**3 + 15. * x)\n", "\n", "def L_6(x, a, b, c, d, e, f, g):\n", " return a + b * x + c * 0.5 * (3. * x**2 - 1.) + d * 0.5 * (5. * x**3 - 3. * x) + e * 0.125 * (35. * x**4 - 30. * x**2 + 3.) + f * 0.125 * (63. * x**5 - 70. * x**3 + 15. * x) + g * 0.0625 * (231. * x**6 - 315. * x**4 + 105. * x**2 - 5.)\n", "\n", "def L_7(x, a, b, c, d, e, f, g, h):\n", " return a + b * x + c * 0.5 * (3. * x**2 - 1.) 
+ d * 0.5 * (5. * x**3 - 3. * x) + e * 0.125 * (35. * x**4 - 30. * x**2 + 3.) + f * 0.125 * (63. * x**5 - 70. * x**3 + 15. * x) + g * 0.0625 * (231. * x**6 - 315. * x**4 + 105. * x**2 - 5.) + h * 0.0625 * (429. * x**7 - 693. * x**5 + 315. * x**3 - 35. * x)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Perform the fits using the predefined functions." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Exercise 5.1.2 (obligatory)\n", "In an accelerator experiment, the following data are numbers of events measured in 60 energy intervals equally distributed between 0 and 3 GeV:\n", "\n", "| | | | | | | | | | | | | | | | | | | | | \n", "|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n", "|6 |1 |10 |12 |6 |13 |23 |22 |15 |21 |23 |26 |36 |25 |27 |35 |40 |44 |66 |81 |\n", "|75 |57 |48 |45 |46 |41 |35 |36 |53 |32 |40 |37 |38 |31 |36 |44 |42 |37 |32 |32 |\n", "|43 |44 |35 |33 |33 |39 |29 |41 |32 |44 |26 |39 |29 |35 |32 |21 |21 |15 |25 |15 |\n", "\n", "The data show a signal resonance visible on top of a background sample. For the uncertainties of all data points we assume the statistical uncertainty according to a Poisson distribution. The goal of this exercise is to extract information on the signal by parameterizing both signal and background. The quantities we are interested in are the width of the signal (which is related to the lifetime) and the number of signal events. Let us assume that the background can be parametrized as a polynomial of second order in the energy (i.e., a function with 3 parameters), and the signal as a Lorentz function (also 3 parameters) given by\n", "\n", "$$ L \\left(x; A_{\\mathrm{norm}}, \\mu, \\Gamma \\right) = \\frac{A_{\\mathrm{norm}}}{\\pi} \\frac{\\Gamma/2}{\\left( x - \\mu \\right)^2 + \\Gamma^2/4} \\ .$$\n", "\n", "There are two possible methods to extract the signal:\n", "\n", "1. 
Fit the data with a function with 6 parameters composed of the signal function plus the background function.\n", "\n", "2. Define two intervals, left and right of the signal peak, to fit the background function. Then, fit the signal function to the data in the signal region after subtracting the background function.\n", "\n", " There are (at least) two ways to exclude certain points from the fit. Either you can define new arrays for the fit which contain only a subset of the original data points, or you can define your own fit function which excludes certain intervals. For a ROOT example of how to do this (in C++, not in Python) see here: [https://root.cern/doc/master/fitExclude_8C.html](https://root.cern/doc/master/fitExclude_8C.html). \n", "\n", "Plot the fitted functions on top of the data. Determine the width of the Lorentz peak and the number of signal events and their statistical uncertainties, and compare the results of both methods." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nPoints = 60\n", "data_x = np.array(np.arange(0, 3, 0.05), dtype=float) # 3 GeV / 60 bins = 0.05 GeV per bin\n", "data_y = np.array([6, 1, 10, 12, 6, 13, 23, 22, 15, 21, 23, 26, 36, 25, 27, 35, 40, 44, 66, 81, 75, 57, 48, 45, 46, 41, 35, 36, 53, 32, 40, 37, 38, 31, 36, 44, 42, 37, 32, 32, 43, 44, 35, 33, 33, 39, 29, 41, 32, 44, 26, 39, 29, 35, 32, 21, 21, 15, 25, 15], dtype=float)\n", "sigma_x = np.array(nPoints*[0], dtype=float)\n", "sigma_y = np.array(np.sqrt(data_y), dtype=float)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Using ROOT or pure Python, implement your solution following the steps below:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For the ROOT approach: Store the values in a TGraphErrors\n", "\n", "# Part 1: Define the fit function with 6 parameters and fit signal and background simultaneously.\n", "\n", "# Part 2: Split the data into signal and 
background regions. First, fit the background distribution in the background region.\n", "# Afterwards, subtract the background expectation from the data in the signal region and fit the signal function." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 5.2: Minimization via Simulated Annealing (obligatory)\n", "\n", "Data analysis often requires finding the optimal solution, e.g., the minimum of a function in a multi-dimensional space.\n", "\n", "As an example we use the following two-dimensional function which has several local minima, but just one global minimum:\n", "\n", "$$ f(x,y) = (x^2 + y - a)^2 + (x + y^2 - b)^2 + c \\, (x + y)^2 $$\n", "\n", "with arbitrary parameters $a$, $b$ and $c$.\n", "\n", "For $c = 0$ this function resembles the [Rosenbrock function](https://en.wikipedia.org/wiki/Rosenbrock_function), which is often used to validate minimization algorithms.\n", "\n", "For this exercise sheet, we make the arbitrary choice $a = 11$, $b = 7$, $c = 0.1$. Thus, $f(x,y)$ has four local minima of different depth.\n", "\n", "In this exercise you will write your own minimization algorithm following the [**Simulated Annealing**](https://en.wikipedia.org/wiki/Simulated_annealing) strategy and test it with the function defined above.\n", "Use the code fragment given in this Jupyter notebook as a starting point.\n", "\n", "\n", "1. Play with the parameters: initial and final temperature, cooling speed, and step size. Choose a starting point close to the global minimum, and check if the algorithm converges to the minimum.\n", "\n", "2. Choose a starting point close to a local minimum which is not the global minimum, e.g., $(x,y) = (3, -2)$. Find a set of parameters for the algorithm such that it converges to the global minimum while keeping the number of iterations as low as possible, and motivate your choice. 
\n", "\n", " For tuning the parameters, you have two possibilities: either you perform a scan over a meaningful range for each parameter, or you study how the algorithm reacts to changes of individual parameters and then tune them by hand. In any case, first think about the role each parameter plays in the algorithm. E.g., both the difference between initial and final temperature and the cooling speed directly affect the number of iterations, while the temperature scale additionally affects the probability of accepting uphill jumps.\n", "\n", "3. Repeat the analysis for different random seeds. If the minimum found depends on the random seed, re-tune the parameters of the algorithm until the result is independent of the random seed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Using the ROOT random number generator, because it allows setting a seed (other generators are also possible)\n", "from ROOT import gRandom" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### ROOT Approach:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from ROOT import gRandom, TF2, TMath, Math, TCanvas, TGraph, TColor, TAttLine, TLine, TMinuit, TVirtualFitter, TGraph2D" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Python Approach:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "import matplotlib.animation\n", "from IPython.display import Image" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# modified rosenbrock function: f(x,y) = (x^2+y-a)^2 + (x+y^2-b)^2 + c*(x+y)^2\n", "def modified_rosenbrock_function(x, par):\n", " \"\"\"Calculate the function value of the modified Rosenbrock function.\n", " \n", " params:\n", " x: List with two entries containing the x and y values\n", " par: List of function parameters. 
Contains values for a, b and c.\n", " \n", " returns: float\n", " function value of the modified Rosenbrock function. \n", " \"\"\"\n", " return None" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def plotFunction(function, listOfPoints):\n", " \"\"\"Draw or return plot objects of scanned values of x and y and the surface of the function.\n", " \n", " Helper function for visualization of the minimization procedure.\n", " \"\"\"\n", " return None" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Code fragment for exercise 6.2 of the Computerpraktikum Datenanalyse 2014\n", "# Authors: Ralf Ulrich, Frank Schroeder (Karlsruhe Institute of Technology)\n", "# Modified: 2020-05-26 Maximilian Burkart (Karlsruhe Institute of Technology)\n", "# This code fragment is probably not the best or fastest implementation\n", "# of \"simulated annealing\", but it is a simple implementation which does its job. 
\n", "\n", "def simulated_annealing(init_vals=[0,0], rosenbrock_pars=[0, 0, 0],\n", " init_temp=100, final_temp=1, cool_speed=1, step_size=1,\n", " seed=None):\n", " \"\"\"Minimize the modified Rosenbrock function using simulated annealing.\n", " \n", " params:\n", " init_vals: Initial x and y values.\n", " rosenbrock_pars: Parameters of the modified Rosenbrock function.\n", " init_temp: Initial temperature the cooling starts from.\n", " final_temp: Final temperature of the cooling.\n", " cool_speed: Cooling speed in percent of the current temperature.\n", " step_size: Step size used in the cooling procedure.\n", " seed: Optional seed for the random number generator (None keeps the generator's current state).\n", " \n", " returns:\n", " min_pars: List of floats.\n", " List of the x and y values at the found minimum.\n", " listOfPoints: List of floats.\n", " List of the visited points during the minimization process.\n", " \"\"\"\n", " nParameter = 2 # 2 parameters: x and y\n", " if len(init_vals) != nParameter:\n", " raise Exception(\"Number of function parameters does not correspond to given number of initial values. \"\n", " \"Aborting...\")\n", " \n", " # Set the seed of the ROOT random number generator if one is given.\n", " if seed is not None:\n", " gRandom.SetSeed(seed)\n", " \n", " # Starting point: test the dependence of the algorithm on the initial values\n", " initialXvalue, initialYvalue = init_vals\n", "\n", " # Parameters of the algorithm:\n", " # Find a useful set of parameters which allows you to determine the global\n", " # minimum of the given function:\n", " # The temperature scale must be in adequate relation to the scale of the function values,\n", " # the step size must be in adequate relation to the scale of the distance between the \n", " # different local minima\n", " initialTemperature = init_temp\n", " finalTemperature = final_temp\n", " coolingSpeed = cool_speed # in percent of current temperature --> defines number of iterations\n", " stepSize = step_size \n", " \n", " # Current parameters and cost\n", " currentParameters = [initialXvalue, initialYvalue] # x and y in our case\n", " currentFunctionValue = 
modified_rosenbrock_function(currentParameters, rosenbrock_pars) # you have to implement the function first!\n", "\n", " # keep reference of best parameters\n", " bestParameters = currentParameters\n", " bestFunctionValue = currentFunctionValue\n", "\n", " listOfPoints = []\n", " # Heat the system\n", " temperature = initialTemperature\n", "\n", " iteration = 0\n", "\n", " # Start to slowly cool the system\n", " while (temperature > finalTemperature): \n", "\n", " # Change parameters\n", " newParameters = [0]*nParameter\n", "\n", " for ipar in range(nParameter):\n", " newParameters[ipar] = gRandom.Gaus(currentParameters[ipar], stepSize)\n", "\n", " # Get the new value of the function\n", " newFunctionValue = modified_rosenbrock_function(newParameters, rosenbrock_pars)\n", "\n", " # Compute the Boltzmann probability\n", " deltaFunctionValue = newFunctionValue - currentFunctionValue\n", " saProbability = np.exp(-deltaFunctionValue / temperature)\n", "\n", " # Acceptance rule:\n", " # if newFunctionValue < currentFunctionValue then saProbability > 1\n", " # else accept the new state with a probability = saProbability\n", " if ( saProbability > gRandom.Uniform() ):\n", " currentParameters = newParameters\n", " currentFunctionValue = newFunctionValue\n", " listOfPoints.append(currentParameters) # log keeping: keep track of path\n", "\n", " if (currentFunctionValue < bestFunctionValue):\n", " bestFunctionValue = currentFunctionValue\n", " bestParameters = currentParameters\n", "\n", " #print(\"T =\", temperature, \"(x,y) =\", currentParameters, \"Current value:\", currentFunctionValue, \"delta =\", deltaFunctionValue) # debug output\n", "\n", " # Cool the system\n", " temperature *= 1 - coolingSpeed / 100.\n", " \n", " # Count iterations\n", " iteration += 1\n", "\n", " # end of cooling loop\n", " \n", " return bestParameters, listOfPoints" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Add your code here..." 
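, "\n", "\n", "# A possible usage sketch -- the parameter values below are illustrative starting\n", "# choices, not tuned results. Uncomment once modified_rosenbrock_function is\n", "# implemented above.\n", "# best, path = simulated_annealing(init_vals=[3., -2.], rosenbrock_pars=[11., 7., 0.1],\n", "#                                  init_temp=100., final_temp=0.1, cool_speed=1., step_size=0.5,\n", "#                                  seed=42)\n", "# print(\"Best parameters found:\", best, \"after\", len(path), \"accepted steps\")\n"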
] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 4 }