{ "cells": [ { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Moderne Methoden der Datenanalyse SS2021\n", "# Practical Exercise 3" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Exercise 3: Maximum Likelihood and $\\chi^2$ Methods\n", "\n", "Fitting parametrized functions to measured data is daily business in research. By this, models can be tested against experiments. Moreover, parameters of the models and their uncertainties can be determined.\n", "The physicist often refers to this process as “*fitting*” — in general it is called “*parameter estimation*”." ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "## Exercise 3.1: Decay (obligatory)\n", "\n", "Generate uniformly distributed random numbers. Then apply the transformation method to generate random numbers following an exponential distribution $\\exp(-x/\\tau)$ for $x>0$. \n", "These values can be interpreted as measurements of decay times $t$ (e.g., of radioactive particles) corresponding to a lifetime $\\tau$, which have the following distribution:\n", " \n", "$$ f(t,\\tau) = \\frac{1}{\\tau} \\cdot \\exp\\left(-\\frac{t}{\\tau}\\right)$$\n", " \n", "\n", "**a)** Show analytically that the maximum likelihood estimator for $\\tau$ is the mean $\\hat{\\tau}$ of the sample ($\\hat{\\tau}$ = mean of all measured decay times $t_i$). \n" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "**TODO:**\n", "make your calculations in this Markdown cell using the Latex syntax!" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "**b)** Generate 1000 samples with $\\tau$=1, each with $N=10$ values of t. Evaluate the mean $\\hat{\\tau}$ for each sample and create a histogram of the resulting means. Compare the mean of $\\hat{\\tau}$ with the true value $\\tau$=1. 
" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# Similar to the previous exercises, you can use ROOT or the pythonic approach...\n", "\n", "# Pure python:\n", "import numpy as np\n", "import matplotlib.pyplot as plt\n", "\n", "from scipy import optimize, stats\n", "\n", "# ROOT:\n", "from ROOT import gRandom, TCanvas, TH1F, TF1" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Define the function to generate the random numbers first.\n", "Use the methods introduced on the previous exercise sheets to get uniformly distributed numbers and then transfrom them as required." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "def generate_data(N: int, tau: float = 1.0) -> np.ndarray:\n", " # Generate random numbers according to exp(-x/tau) for x>0 using the transformation method\n", "\n", " # FYI: The type hints in the function signature above tell you that\n", " # - the function expects an integer value for the argument `N`\n", " # - has a second parameter `tau` for the lifetime, which has a default value of 1.0 as required by the exercise\n", " # - and will return a numpy array.\n", "\n", " # TODO: Add code to create a numpy array of the required random numbers here\n", " # You can use ROOT or phyton method to get the initial, uniformly distributed random numbers\n", " pass\n", " return ... # ... the numpy array\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now you can use this function to fill a histogram and plot it. Do this for the different sample sizes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Hints for pure Python approach:\n", "\n", "You can use `matplotlib`'s [`matplotlib.pyplot.hist`](https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.pyplot.hist.html) (= plt.hist) function to plot a histogram of given data. 
You can use for instance 100 bins.\n", "\n", "### Hints for ROOT approach:\n", "\n", "Use ROOT's built-in TH1F histogram class, as introduced in exercise 1." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO: Add code here to generate the 1000 data samples, create a histogram of the mean values of the data and draw it\n", "\n", "def create_histo_and_calculate_bias(N):\n", " # TODO: Add code to fill histogram and calculate the bias\n", " pass" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "**c)** Assume that the probability density function (p.d.f.) has been parametrized in terms of $\\lambda=1/\\tau$, which means:\n", " \n", "$$f(t,\\lambda) = \\lambda \\cdot \\exp\\left(-\\lambda \\cdot t \\right)$$\n", " \n", "Create a histogram of the estimates $\\hat{\\lambda}$. Compare the mean value of $\\hat{\\lambda}$ with the true value $\\lambda$=1, and determine numerically the bias for $N= 5, 10, 100$.\n", "\n", "Also calculate the bias for the experiments made in part **b)** and compare the results of the two approaches **b)** and **c)**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Use a similar approach as above to obtain the histograms using the alternative function definition." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO: Add code here to generate the 1000 data samples, create a histogram of the mean values of the data and draw it\n", "\n", "def create_lambda_histo_and_calculate_bias(N):\n", " # TODO: Add code to fill histogram and calculate the bias\n", " pass" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "**d)** Compare the results of the maximum likelihood method and the $\\chi^{2}$ method: Make three different histograms with 1000 bins from 0 to 10 containing $N$ generated decay times $t$ (try $N= 10, 1000, 100000$). 
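
For orientation, the pure-Python version of the two binned fits can be sketched as follows (using `rng.exponential` as a stand-in for your own generator; all helper names and starting values are assumptions of this sketch, not the required solution):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(1)
t = rng.exponential(1.0, size=100_000)  # placeholder for your generated decay times

counts, edges = np.histogram(t, bins=1000, range=(0, 10))
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

def model(x, tau):
    # expected bin content: N * f(x, tau) * bin width
    return len(t) * width / tau * np.exp(-x / tau)

# chi^2 fit: weight each bin by its Poisson error, skipping empty bins
mask = counts > 0
popt, pcov = optimize.curve_fit(
    model, centers[mask], counts[mask], p0=[2.0], sigma=np.sqrt(counts[mask])
)
tau_chi2 = popt[0]

# binned likelihood fit: minimize the Poisson negative log-likelihood
# (constant terms independent of tau are dropped)
def nll(tau):
    mu = model(centers, tau)
    return np.sum(mu - counts * np.log(mu))

tau_ml = optimize.minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x
```

Comparing `tau_chi2` and `tau_ml` for small `N` illustrates the differences between the two methods that the task asks you to discuss.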
Fit the function $f(t,\\tau)$ to each histogram using the $\\chi^2$ method and the binned likelihood method. Compare the fitted parameters and the $\\chi^2$ values of both methods and discuss the results." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following function template contains some hints for the ROOT approach. If you are using the pure python approach, you can replace the function body accordingly.\n", "Use [`matplotlib.pyplot.hist`](https://matplotlib.org/3.5.0/api/_as_gen/matplotlib.pyplot.hist.html) (= plt.hist) and `scipy.optimize` as in the previous exercises instead." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "from ROOT import kRed, kGreen # Use this if you want to get some color into your ROOT plots...\n", "\n", "def make_histogram_and_fit(N):\n", " \n", " # TODO: Add code here to create a histogram, generate data and fill the histogram!\n", "\n", " # Define a function for the Chi2\n", " exp_forChi = TF1(\"exp_forChi\", \"[1]/[0] * exp(-x/[0])\", 0, 10)\n", " exp_forChi.SetParName(0, \"#tau\")\n", " exp_forChi.SetLineColor(kRed)\n", " exp_forChi.SetLineWidth(4)\n", " exp_forChi.SetLineStyle(1)\n", "\n", " # Define a function for the Likelihood\n", " exp_forLikelihood = ... # TODO: add code here\n", "\n", " c3 = TCanvas(f\"c3{N}\", f\"c3{N}\", 1500, 500) \n", "\n", " # Fix the parameter [1] in order to have a constant normalization:\n", " exp_forChi.FixParameter(1, N/100) # normalization: N/(bins per unit interval) = N/100 for 1000 bins on [0, 10]\n", "\n", " # Perform the fit using the method: TH1D.Fit(function, \"I\")\n", " # you can use the method TF1.SetParameter to set a starting value for the parameter of interest\n", "\n", "\n", " # Look up the TH1.Fit method: https://root.cern.ch/doc/master/classTH1.html to see which options you need to use to perform the \n", " # fit using the log-likelihood method.\n", "\n", "\n", " # TODO: Add code here to draw the histogram 
and the fitted functions\n", "\n", " return c3" ] }, { "cell_type": "markdown", "metadata": { "pycharm": { "name": "#%% md\n" } }, "source": [ "# Exercise 3.2: MINUIT (voluntary)\n", "\n", "The goal of this exercise is to make you familiar with the minimizer package `MINUIT`, which was developed at CERN in the 1970s in `FORTRAN`.\n", "This well-tested toolbox provides different minimization algorithms, the most famous one being `MIGRAD`.\n", "The package is particularly liked by physicists due to its sophisticated methods for parameter uncertainty estimation.\n", "\n", "For the purpose of this exercise, it is suggested to use the Python frontend to `MINUIT`,\n", "which is available in the form of the package [`iminuit`](https://iminuit.readthedocs.io/en/stable/index.html).\n", "\n", "Take the function $f(t,\\tau)$ and the generated data set from the previous Exercise 3.1,\n", "and perform an unbinned log-likelihood fit for $N= 10, 1000, 100000$ entries.\n", "\n", "Plot a histogram from 0 to 10 with the $N$ entries and the fitted function normalized to the number of entries.\n", "Display the value of the negative log-likelihood as a function of the fit parameter $\\tau$ from $0.5$ to $5$.\n", "How is this plot related to the uncertainty of the fitted parameter?\n", "\n", "The `iminuit` Python package provides predefined cost function classes which also cover the\n", "[unbinned case](https://iminuit.readthedocs.io/en/stable/notebooks/cost_functions.html#Unbinned-fit)\n", "we are interested in.\n", "However, to learn how to define your own cost function, you should use the\n", "[`scipy`-like interface](https://iminuit.readthedocs.io/en/stable/reference.html#module-iminuit.minimize)\n", "which allows you to provide your own cost function in the form of the argument `fun`.\n", "You can use either of the two approaches and take a look at the output that MINUIT provides after the cost function is minimized.\n", "\n", "Please note that the minimizers provided in 
[`scipy.optimize`](https://docs.scipy.org/doc/scipy/reference/optimize.html)\n", "are also capable of performing the minimization task.\n", "However, the MINUIT package has proven itself in particle physics for decades and provides certain functionality which is not built into the scipy optimizers by default (e.g. the handling of uncertainties).\n", "\n", "*Hint*: Make sure that you are using a current version of iminuit, e.g. 2.11.2! Check this with the command in the next cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip list | grep iminuit" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# For a predefined cost function use\n", "from iminuit import cost, Minuit\n", "\n", "# or use the following if you want to define your own cost function\n", "from iminuit.minimize import minimize as iminuit_minimize" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Definition of the fit function\n", "\n", "def fit_func(x, tau):\n", " # TODO\n", " return ..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# TODO: Generate the data as in the previous exercises, plot the histograms and fit the fit function to the data.\n", "# Draw also the function with the estimated tau value obtained with the fit.\n", "# Do this for N = 10, 1000, 100000" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "*Hint:* Check out the tutorials and examples available in the [iminuit](https://iminuit.readthedocs.io/en/stable/) or [scipy.optimize](https://docs.scipy.org/doc/scipy/reference/optimize.html) documentation, depending on which method you choose." 
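
A minimal sketch of such a hand-written unbinned cost function, minimized here with `scipy.optimize` for illustration (the same `fun` could be handed to the `iminuit` scipy-like interface; the data generation and all names are placeholders, not the required solution):

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(7)
t = rng.exponential(1.0, size=1000)  # placeholder for your generated decay times

def nll(params):
    # unbinned negative log-likelihood for f(t, tau) = (1/tau) * exp(-t/tau):
    # -sum_i log f(t_i, tau) = N * log(tau) + sum_i t_i / tau
    tau = params[0]
    return len(t) * np.log(tau) + np.sum(t) / tau

res = optimize.minimize(nll, x0=[2.0], bounds=[(1e-3, None)])
tau_hat = res.x[0]
# the minimum coincides with the sample mean, as derived in Exercise 3.1 a)
```

Scanning `nll` over $\tau \in [0.5, 5]$ gives the likelihood curve asked for above; its curvature around the minimum encodes the parameter uncertainty.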
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.6" } }, "nbformat": 4, "nbformat_minor": 4 }