{"text": "```python\nfrom IPython.core.display import display_html\nfrom urllib.request import urlopen\n\ncssurl = 'http://j.mp/1DnuN9M'\ndisplay_html(urlopen(cssurl).read(), raw=True)\n```\n\n\n\n\n\n\n\n\n\n\n# Tarea 5 - Demostraci\u00f3n de la formula de Euler\n\nLa formula de Euler es:\n\n$$\ne^{ix} = \\cos{x} + i \\sin{x}\n$$\n\n# Tarea 6 - Determinaci\u00f3n de las $D$-particiones del espacio de parametros\n\nDado el sistema:\n\n$$\n\\dot{x}(t) = a x(t) + b x(t - h)\n$$\n\nempezamos por obtener la transformada de Laplace del sistema, lo cual nos dar\u00e1 el siguiente cuasipolinomio caracteristico:\n\n$$\np(s) = s - a - b e^{-h s} = 0\n$$\n\nPara determinar los puntos en que nuestro polinomio caracterisitico tiene polos en el eje imaginario, es decir las fronteras en que los parametros dejan de definir a un sistema estable y comienzan a definir un sistema inestable, vamos a sustitur los valores $s = 0$ y $s = j \\omega$, que son los valores que caracterizan al eje imaginario.\n\nEmpezamos sustituyendo $s = 0$, por lo que obtenemos:\n\n$$\np(0) = -a - b = 0 \\implies a = -b\n$$\n\nSi ahora sustituimos $s = j \\omega$, tendremos:\n\n$$\n\\begin{align}\np(j \\omega) &= j \\omega - a - b e^{- h j \\omega} \\\\\n&= j \\omega - a - b \\left( \\cos{(\\omega h)} -j \\sin{(\\omega h)} \\right) \\\\\n&= j \\omega - a - b \\cos{(\\omega h)} + b j \\sin{(\\omega h)}\n\\end{align}\n$$\n\nde donde podemos separar la parte real de la imaginaria y obtener:\n\n$$\n\\omega + b \\sin{(\\omega h)} = 0\n$$\n\ny\n\n$$\n-a -b \\cos{(\\omega h)} = 0\n$$\n\nAqui podemos obtener una relaci\u00f3n para $\\sin{(\\omega h)}$ y $\\cos{(\\omega h)}$:\n\n$$\n- \\omega = b \\sin{(\\omega h)} \\implies \\cos{(\\omega h)} = - \\frac{a}{b}\n$$\n\n$$\n\\omega = - b \\sin{(\\omega h)} \\implies \\sin{(\\omega h)} = - \\frac{\\omega}{b}\n$$\n\ny sabemos que $\\sin^2{(\\omega h)} + \\cos^2{(\\omega h)} = 1$, por lo que podemos sustituir los valores que obtuvimos y tenemos que:\n\n$$\n\\left( - \\frac{\\omega}{b} \\right)^2 + \\left( - \\frac{a}{b} \\right)^2 = 1 = \\frac{\\omega^2 + a^2}{b^2}\n$$\n\ndespejando $\\omega^2$ y sacando raiz cuadrada obtenemos:\n\n$$\n\\omega^2 + a^2 = b^2\n$$\n\n$$\n\\omega^2 = b^2 - a^2\n$$\n\n$$\n\\omega = \\sqrt{b^2 - a^2}\n$$\n\nal sustituir en la ecuaci\u00f3n obtenida de la parte real del cuasipolinomio, obtenemos:\n\n$$\n-a -b \\cos{(\\sqrt{b^2 - a^2} h)} = 0\n$$\n\no bien:\n\n$$\na + b \\cos{(\\sqrt{b^2 - a^2} h)} = 0\n$$\n\nEsta es la relaci\u00f3n entre $a$ y $b$ que nos dar\u00e1 las curvas de las $D$-particiones del espacio de parametros\n\n# Tarea 7 - Gr\u00e1fica de las $D$-particiones del espacio de parametros\n\nSi bien las relaciones obtenidas son convenientes para su an\u00e1lisis, el graficarla por medio de software proporciona problemas, ya que sus variables no se pueden separar para obtener una en funci\u00f3n de la otra, por lo que retrocederemos un poco y utilizaremos las relaciones obtenidas de separar las partes real e imaginaria del cuasipolinomio como ecuaciones parametricas para $a$ y $b$.\n\nEmpezamos obteniendo los valores para $a$ y $b$ en funci\u00f3n de $\\omega$:\n\n$$\n\\omega + b \\sin{(\\omega h)} = 0 \\implies b = - \\frac{\\omega}{\\sin{(\\omega h)}}\n$$\n\n$$\n- a - b \\cos{(\\omega h)} = 0 \\implies a = - b \\cos{(\\omega h)} = + \\frac{\\omega}{\\sin{(\\omega h)}} \\cos{(\\omega h)} = \\frac{\\omega}{\\tan{(\\omega h)}}\n$$\n\nPor lo que procedemos a capturar estas funciones en el programa, primero importamos las librerias que necesitamos para calcular 
y graficar:\n\n\n```python\n# Se importan librerias para graficar, y se define un estilo especifico\n%matplotlib inline\nfrom matplotlib.pyplot import plot, style, figure, legend\nstyle.use(\"ggplot\")\n```\n\n\n```python\n# Se importan funciones de calculo numerico a utilizar\nfrom numpy import linspace, tan, sin, pi\n```\n\nAhora definimos las funciones que hemos obtenido:\n\n\n```python\na = lambda om, h: -om/sin(om*h)\nb = lambda om, h: om/tan(om*h)\nf1 = lambda x: -x\n```\n\nEsta notaci\u00f3n es equivalente a las definiciones matematicas:\n\n$$\na(\\omega, h) := - \\frac{\\omega}{\\sin{(\\omega h)}}\n$$\n\n$$\nb(\\omega, h) := \\frac{\\omega}{\\tan{(\\omega h)}}\n$$\n\n$$\nf_1(x) := -x\n$$\n\nAhora definimos valores para $\\omega$ y $b$ para ingresar en estas funciones:\n\n\n```python\ntau = 2*pi\nw = linspace(-3*tau, 3*tau, 1000)\nbs = linspace(-15, 15, 100)\n```\n\nLo que equivale a decir que variaremos $\\omega$ en el intervalo $[-3 \\tau, 3 \\tau] = [-6 \\pi, 6 \\pi]$ y a $b$ en $[-15, 15]$.\n\nAhora graficamos $a$ contra $b$ con las funciones parametricas obtenidas y $x$ contra $f_1(x)$:\n\n\n```python\nf = figure(figsize = (10, 10))\np1, = plot(b(w, 1), a(w, 1), \".\")\np2, = plot(bs, f1(bs), \".\")\n\nax = f.gca()\nax.set_ylabel(r\"$a(\\omega)$\", fontsize=20)\nax.set_xlabel(r\"$b(\\omega)$\", fontsize=20)\nax.set_xlim(-15, 15)\nax.set_ylim(-15, 15)\n\nlegend([p1, p2], [r\"$a + b \\cos{(\\sqrt{b^2 - a^2} h)} = 0$\", r\"$a + b = 0$\"]);\n```\n\n# Tarea 8 - Teorema de la funci\u00f3n implicita\n\nPuedes acceder a este notebook a traves de la p\u00e1gina\n\nhttp://bit.ly/1xvpRgo\n\no escaneando el siguiente c\u00f3digo:\n\n\n\n\n```python\n# Codigo para generar codigo :)\nfrom qrcode import make\nimg = make(\"http://bit.ly/1xvpRgo\")\nimg.save(\"codigos/codigo5678.jpg\")\n```\n", "meta": {"hexsha": "d35c49016a01c84c483fb4600af2eaf9aadf34a2", "size": 39984, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "IPythonNotebooks/Sistemas con retardo en la entrada/Tareas 5, 6, 7, 8.ipynb", "max_stars_repo_name": "robblack007/DCA", "max_stars_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IPythonNotebooks/Sistemas con retardo en la entrada/Tareas 5, 6, 7, 8.ipynb", "max_issues_repo_name": "robblack007/DCA", "max_issues_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IPythonNotebooks/Sistemas con retardo en la entrada/Tareas 5, 6, 7, 8.ipynb", "max_forks_repo_name": "robblack007/DCA", "max_forks_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-20T12:44:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T12:44:13.000Z", "avg_line_length": 88.6563192905, "max_line_length": 27016, "alphanum_fraction": 0.7953431373, "converted": true, "num_tokens": 2394, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.44167300566462553, "lm_q2_score": 0.2254166158350767, "lm_q1q2_score": 0.09956043424262655}} {"text": "
# Resumen Teórico de Medidas Electrónicas 1

## Incertidumbre

Liaño, Lucas
\n\n\n\n# Contenidos\n\n- **Introducci\u00f3n**\n- **Marco Te\u00f3rico**\n - Conceptos B\u00e1sicos Metrolog\u00eda\n - \u00bfQu\u00e9 es la incertidumbre?\n - Modelo matem\u00e1tico de una medici\u00f3n ($Y$)\n - Evaluaci\u00f3n incertidumbre Tipo A\n - Evaluaci\u00f3n incertidumbre Tipo B\n - Incertidumbre Conjunta\n - Grado de Confianza\n - Caso de an\u00e1lisis: $u_{i}(x_{i}) \\gg u_{j}(X_{i})$\n - Caso de an\u00e1lisis: $u_{i}(x_{i}) \\ll u_{j}(X_{i})$\n - Correlaci\u00f3n\n \n- **Experimentaci\u00f3n**\n - Caso General\n - Caso Incertidumbre tipo A dominante\n - Caso Incertidumbre tipo B dominante\n - Ejemplo Correlaci\u00f3n\n- **Bibliograf\u00eda**\n***\n \n# Introducci\u00f3n \n\nEl objetivo del presente documento es de resumir, al mismo tiempo que simular, los contenidos te\u00f3ricos correspondientes a la unidad N\u00b01 de la materia medidas 1. Para ello, utilizaremos los recursos disponibles en el drive de la materia.\n\n
\n Link: https://drive.google.com/folderview?id=1p1eVB4UoS0C-5gyienup-XiewKsTpcNc\n
\n\n***\n\n\n# Marco Te\u00f3rico\n\n## Conceptos B\u00e1sicos Metrolog\u00eda\n\nLa de medici\u00f3n de una magnitud f\u00edsica, atributo de un cuerpo mensurable, consiste en el proceso mediante el cual se da a conocer el valor de dicha magnitud. A lo largo de la historia se han desarrollado diversos modelos de medici\u00f3n, todos ellos consisten en la comparaci\u00f3n de la magnitud contra un patr\u00f3n.\n\nA su vez, a medida que se fueron confeccionando mejores m\u00e9todos de medici\u00f3n, se empez\u00f3 a tener en consideraci\u00f3n el error en la medida. Este error consiste en una indicaci\u00f3n cuantitativa de la calidad del resultado. Valor que demuestra la confiabilidad del proceso.\n\nActualmente, definimos al **resultado de una medici\u00f3n** como al conjunto de valores de una magnitud, atribuidos a un mensurando. Se puede definir a partir de una funci\u00f3n distribuci\u00f3n densidad de probabilidad (tambi\u00e9n denomidada _pdf_, de la s\u00edgla inglesa _probability density function_). El resultado de una medici\u00f3n est\u00e1 caracterizado por la media de la muestra, la incertidumbre y el grado de confianza de la medida.\n\nDenominaremos **incertidumbre de una medici\u00f3n** al par\u00e1metro asociado con el resultado de la medici\u00f3n que caracter\u00edza la dispersi\u00f3n de los valores atribuidos a un mensurando. Mientras que el **error de medida** ser\u00e1 la diferencia entre el valor medido con un valor de referencia. [[1]](http://depa.fquim.unam.mx/amyd/archivero/CALCULODEINCERTIDUMBRESDR.JAVIERMIRANDA_26197.pdf)\n\n#### Tipos de errores\n\nExisten dos tipos:\n\n> **Error sistem\u00e1tico:** Componente del error que en repetidas mediciones permanece constante.\n\n> **Error aleatorio:** Componente del error que en repetidas mediciones var\u00eda de manera impredecible.\n\n***\n## \u00bfQu\u00e9 es la incertidumbre?\n\nComo bien definimos anteriormente, la incertidumbre es un par\u00e1metro que caracter\u00edza la dispersi\u00f3n de los valores atribuidos a un mensurando. Esto significa que, considerando al resultado de la medici\u00f3n como una funci\u00f3n distribuci\u00f3n densidad de probabilidad, la incertidumbre representa el desv\u00edo est\u00e1ndar de la misma. Se suele denominar **incertidumbre est\u00e1ndar** a dicha expresi\u00f3n de la incertidumbre.\n\n#### Componentes de la incertidumbre\n\n> **Tipo A:** Componente de la incertidumbre descripta \u00fanicamente a partir del estudio estad\u00edstico de las muestras.\n\n> **Tipo B:** Componente de la incertidumbre descripta a partir de las hojas de datos previstas por los fabricantes de los instrumentos de medici\u00f3n, junto con datos de calibraci\u00f3n.\n\nEn las pr\u00f3ximas secciones se describe en detalle como son los test efectuados para determinar cada una de las componentes. [[2]](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_de_errores)\n\n***\n## Modelo matem\u00e1tico de una medici\u00f3n ($Y$)\n\nSupongamos una magnitud a mensurar ($Y$), la cual se va a estimar de forma indirecta a partir de una relaci\u00f3n fundamental con otras $N$ magnitudes mensurables, de manera que se cumple:\n\n\\begin{equation}\n Y = f(x_{1},x_{2},...,x_{N})\n\\end{equation}\n\nComo definimos previamente, las variables $x_{i}$ son funciones distribuci\u00f3n densidad de probabilidad por ser resultados de mediciones. 
Cada una de estas mediciones viene determinada, idealmente, por el valor de su media ($\\mu_{X_{i}}$), su desv\u00edo est\u00e1ndar ($\\sigma_{x_{i}}$) y el grado de confianza de la medici\u00f3n. Dado que en la vida real no es posible conseguir una estimaci\u00f3n lo suficientemente buena de estos par\u00e1metros, se utilizar\u00e1n sus estimadores en su lugar.\n\n\nPor tanto, si se tomaron $M$ muestras de cada una de estas variables, podemos utilizar la **media poblacional ($\\bar{Y}$)** como estimador de la media ($\\mu_{Y}$) de la distribuci\u00f3n densidad de probabilidad de la medici\u00f3n como:\n\n\\begin{equation}\n \\hat{Y} = \\bar{Y} = \\frac{1}{M} \\sum_{k=0}^{M} f_{k}(x_{1},x_{2},...,x_{N}) = f(\\bar{X_{1}},\\bar{X_{2}},...,\\bar{X_{N}})\n\\end{equation}\n\n
Verificar que esto esté bien. Sospecho que no, porque estamos suponiendo que podés aplicar linealidad adentro de la función. Estoy leyendo el ejemplo del cálculo de resistencia y hacemos "resistencia= (media_V/media_I)" en la línea 39 del documento compartido en el canal general de Slack.
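A modo de ilustración de la nota anterior, el siguiente bosquejo (con datos simulados; los valores y nombres son ilustrativos y no provienen del documento citado) compara el promedio de los cocientes $V_k/I_k$ con el cociente de los promedios para una medición indirecta no lineal como $R = V/I$. Para dispersiones pequeñas ambas cantidades resultan muy próximas, aunque en general no son idénticas.

```python
# Chequeo numérico con datos simulados: promedio de los cocientes vs cociente de los promedios
import numpy as np

np.random.seed(0)
V = 10 + 0.1*np.random.randn(1000)    # tensiones simuladas [V] (valores ilustrativos)
I = 2 + 0.05*np.random.randn(1000)    # corrientes simuladas [A] (valores ilustrativos)

R_prom_cocientes = np.mean(V/I)              # (1/M) * sum_k f(x_k)
R_cociente_proms = np.mean(V)/np.mean(I)     # f(media_V, media_I)

print(R_prom_cocientes, R_cociente_proms)
print('diferencia relativa:', abs(R_prom_cocientes - R_cociente_proms)/R_cociente_proms)
```

Este bosquejo no reemplaza la verificación formal que pide la nota; sólo muestra numéricamente el tamaño del efecto en un caso particular.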
\n\nAsimismo, para determinar el otro par\u00e1metro fundamental de la medici\u00f3n (la incertidumbre) utilizaremos como estimador a la **incertidumbre combinada ($u_{c}$)** definida a partir de la siguiente ecuaci\u00f3n,\n\n\\begin{equation}\n u_{c}^{2}(Y) = \\sum_{i=1}^{N} (\\dfrac{\\partial f}{\\partial x_{i}})^{2} \\cdot u_{c}^{2}(x_{i}) + 2 \\sum_{i=1}^{N-1} \\sum_{j = i+1}^{N} \\dfrac{\\partial f}{\\partial x_{i}} \\dfrac{\\partial f}{\\partial x_{j}} u(x_{i},x_{j})\n\\end{equation}\n\ndonde $u(x_{i},x_{j})$ es la expresi\u00f3n de la covariancia entre las pdf de las $x_{i}$.\n\nEsta expresi\u00f3n, para permitir el uso de funciones $f_{k}$ no lineales, es la aproximaci\u00f3n por serie de Taylor de primer orden de la expresi\u00f3n original para funciones que cumplen linealidad. [[2]](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_de_errores)\n\nA su vez, a partir de la **ley de propagaci\u00f3n de incertidumbres**, podemos decir que para la determinaci\u00f3n de una variable unitaria mediante medici\u00f3n directa es posible reducir la expresi\u00f3n anterior a la siguiente:\n\n\\begin{equation}\n u_{c}^{2}(x_{i}) = u_{i}^{2}(x_{i}) + u_{j}^{2}(x_{i}) \n\\end{equation}\n\ndonde denominaremos $u_{i}(x_{i})$ a la incertidumbre tipo A, y $u_{j}(x_{i})$ a la incertidumbre tipo B.\n\n***\n## Evaluaci\u00f3n incertidumbre Tipo A\n\nLa incertidumbre tipo A, recordando que se trata de una medida de dispersi\u00f3n y al ser tipo A se relaciona con la estad\u00edstica de las muestras, se puede estimar con el desv\u00edo est\u00e1ndar experimental de la media ($S(\\bar{X_{i}})$). Para ello hace falta recordar algunos conceptos de estad\u00edstica.\n\nSuponiendo que se toman $N$ muestras:\n\n> **Estimador media poblacional:**\n>> $\\hat{x_{i}}=\\bar{X_{i}}=\\dfrac{1}{N} \\sum_{k=1}^{N}x_{i,k}$\n\n> **Grados de libertad:**\n>> $\\nu = N-1$\n\n> **Varianza experimental de las observaciones:**\n>> $\\hat{\\sigma^{2}(X_{i})}=S^{2}(X_{i})=\\dfrac{1}{\\nu} \\sum_{k=1}^{N}(X_{i,k} - \\bar{X_{i}})^{2}$\n\n> **Varianza experimental de la media:**\n>> $\\hat{\\sigma^{2}(\\bar{X_{i}})}=S^{2}(\\bar{X_{i}})=\\dfrac{S^{2}(x_{i})}{N}$\n\n\n\n\n
\n Por ende, la componente de la incertidumbre tipo A nos queda:\n \n\\begin{equation}\n u_{i}(x_{i}) = \\sqrt{S^{2}(\\bar{X_{i}})}\n\\end{equation}\n
\n\n
\n Nota: Para calcular el std con un divisor de $\\nu = N-1$ es necesario modificar un argumento en la funci\u00f3n de python. El comando correctamente utilizado es: 'myVars.std(ddof=1)'.\n \n
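Como referencia, un bosquejo mínimo de la evaluación tipo A descripta en esta sección, usando una muestra simulada (los valores son ilustrativos):

```python
# Bosquejo de evaluación tipo A con una muestra simulada (valores ilustrativos)
import numpy as np

x = 5.0 + 0.02*np.random.randn(12)   # N = 12 observaciones simuladas

N = len(x)
S = x.std(ddof=1)        # desvío estándar experimental de las observaciones (divisor N-1)
u_A = S/np.sqrt(N)       # desvío estándar experimental de la media

print('media =', x.mean())
print('u_A   =', u_A)
```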
\n\n\n***\n## Evaluaci\u00f3n incertidumbre Tipo B\n\nLa incertidumbre tipo B viene determinada por la informaci\u00f3n que proveen los fabricantes de los instrumentos de medici\u00f3n, asi como tambi\u00e9n por los datos resultantes por la calibraci\u00f3n de los mismos.\n\nEn estos instrumentos de medici\u00f3n la incertidumbre viene descripta en forma de distribuciones densidad de probabilidad, no en forma estad\u00edstica. Para ello utilizamos los siguientes estad\u00edsticos que caracter\u00edzan a las variables aleatorias, en caso de que su dominio fuera continuo:\n\n> **Esperanza:**\n>> $E(x)=\\int x.f(x)dx$\n\n> **Varianza:**\n>> $V(x)=\\int x^{2}.f(x)dx$\n\n\n
\n Por tanto, si la incertidumbre es un par\u00e1metro de dispersi\u00f3n, la misma vendr\u00e1 descripta por la expresi\u00f3n:\n \n\\begin{equation}\n u_{j}(x_{i}) = \\sqrt{V(x)}\n\\end{equation}\n
\n\nPor simplicidad a la hora de trabajar, a continuaci\u00f3n se presenta una tabla con los valores t\u00edpicos del desv\u00edo est\u00e1ndar para el caso de distintas distribuciones. Se demuestra el caso de distribuci\u00f3n uniforme.\n\n\n\nSuponiendo que la distribuci\u00f3n esta centrada en $\\bar{X_{i}}$, nos quedar\u00eda que $a = \\bar{X_{i}} - \\Delta X$ y $b = \\bar{X_{i}} - \\Delta X$. \n\nPor tanto si la expresi\u00f3n de la varianza es $V(x_{i}) = \\frac{(b-a)^{2}}{12}$, finalmente quedar\u00eda:\n\n\\begin{equation}\n V(x_{i}) = \\frac{(b-a)^{2}}{12} = \\frac{(2 \\Delta X)^{2}}{12} = \\frac{4 \\Delta X^{2}}{12} = \\frac{\\Delta X^{2}}{3}\n\\end{equation}\n\n\\begin{equation}\n \\sigma_{x_{i}} = \\frac{\\Delta X}{\\sqrt{3}}\n\\end{equation}\n\nFinalmente la tabla queda,\n\n| Distribution | $u_{j}(x_{i}) = \\sigma_{x_{i}}$|\n| :----: | :----: |\n| Uniforme | $\\frac{\\Delta X}{\\sqrt{3}}$ |\n| Normal | $\\Delta X $ |\n| Normal ($K=2$) | $\\frac{\\Delta X}{2} $ |\n| Triangular | $\\frac{\\Delta X}{\\sqrt{6}}$ |\n| U | $\\frac{\\Delta X}{\\sqrt{2}}$ |\n\n
Verificar que esto esté bien. Me genera dudas el término $\Delta X$. Esto no creo que deba ser así porque, en el caso de la distribución normal, $\sigma_{x_{i}} = \sigma$. No creo que deba aparecer ningún error absoluto ahí.
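Como verificación rápida de la fila correspondiente a la distribución uniforme, el siguiente bosquejo simula observaciones uniformes en $[\bar{X}-\Delta X,\ \bar{X}+\Delta X]$ y compara el desvío estándar muestral con $\Delta X/\sqrt{3}$; los valores de $\bar{X}$ y $\Delta X$ son ilustrativos. Respecto de la nota anterior, una lectura habitual es que en la fila de la distribución normal $\Delta X$ representa la incertidumbre informada (un desvío estándar, o dos cuando $K=2$), lo que es consistente con la fila "Normal ($K=2$)" de la tabla.

```python
# Verificación por simulación: desvío estándar de una uniforme de semiancho DeltaX
import numpy as np

Xbar, DeltaX = 100.0, 0.5    # valores ilustrativos, no tomados del texto
muestras = np.random.uniform(Xbar - DeltaX, Xbar + DeltaX, size=200000)

print('desvío simulado :', muestras.std())
print('DeltaX/sqrt(3)  :', DeltaX/np.sqrt(3))
```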
\n\n***\n## Incertidumbre Conjunta\n\nComo definimos anteriormente, la incertidumbre conjunta queda definida como:\n\n\\begin{equation}\n u_{c}^{2}(x_{i}) = u_{i}^{2}(x_{i}) + u_{j}^{2}(x_{i}) \n\\end{equation}\n\n#### \u00bfQu\u00e9 funci\u00f3n distribuci\u00f3n densidad de probabilidad tiene $u_{c}$?\n\nSi se conocen $x_{1},x_{2},...,x_{N}$ y $Y$ es una combinaci\u00f3n lineal de $x_{i}$ (o en su defecto una aproximaci\u00f3n lineal, como en el caso del polinomio de taylor de primer grado de la funci\u00f3n), podemos conocer la funci\u00f3n distribuci\u00f3n densidad de probabilidad a partir de la convoluci\u00f3n de las $x_{i}$, al igual que se hace para SLIT. [[3]](https://es.wikipedia.org/wiki/Convoluci%C3%B3n)\n\nDado que habitualmente no se conoce con precisi\u00f3n la funci\u00f3n distribuci\u00f3n densidad de probabilidad de $u_{i}(x_{i})$, se suele utilizar el **teorema central del l\u00edmite** para conocer $u_{c}(x_{i})$. El mismo plantea que cuantas m\u00e1s funciones $x_{i}$ con funci\u00f3n distribuci\u00f3n densidad de probabilidad deconocida sumemos, m\u00e1s va a tender su resultado a una distribuci\u00f3n normal.\n\n***\n## Grado de Confianza\n\nFinalmente, el \u00faltimo par\u00e1metro que nos interesa conocer para determinar el resultado de la medici\u00f3n es el grado de confianza.\n\n> **Grado de confianza:** Es la probabilidad de que al evaluar nuevamente la media poblacional ($\\bar{y}$) nos encontremos con un valor dentro del intervalo $[\\bar{Y} - K.\\sigma_{Y}(\\bar{Y}) \\le \\mu_{Y} \\le \\bar{Y} - K.\\sigma_{Y}(\\bar{Y})]$ para el caso de una distribuci\u00f3n que cumpla el teorema central del l\u00edmite, donde $K$ es el factor de cobertura.\n\nOtra forma de verlo es:\n\n\n\ndonde el grado de confianza viene representado por $(1-\\alpha)$. Recomiendo ver el ejemplo [[4]](https://es.wikipedia.org/wiki/Intervalo_de_confianza#Ejemplo_pr%C3%A1ctico) en caso de no entender lo que representa.\n\nDe esta forma, el factor de cobertura ($K$) nos permite modificar el grado de confianza. Agrandar $K$ aumentar\u00e1 el \u00e1rea bajo la curva de la gaussiana, lo que representar\u00e1 un mayor grado de confianza. \n\nSe definir\u00e1 **incertidumbre expandida** a $U(x_{i}) = K \\cdot u_{c}(x_{i})$ si $u_{c}(x_{i})$ es la incertidumbre que nos prove\u00e9 un grado de confianza de aproximadamente $ 68\\% $.\n\nPara una funci\u00f3n que distribuye como normal podemos estimar el grado de confianza mediante la siguiente tabla,\n\n| Factor de cobertura | Grado de confianza|\n| :----: | :----: |\n| $K=1$ | $68.26\\% $ |\n| $K=2$ | $95.44\\% $ |\n| $K=3$ | $99.74\\% $ |\n\n\n#### \u00bfQu\u00e9 sucede si $u_{c}$ no distribuye normalmente?\n\nEn este caso tambi\u00e9n se podr\u00e1 utilizar la ecuaci\u00f3n $U(x_{i}) = K \\cdot u_{c}(x_{i})$, pero el m\u00e9todo mediante el cual obtendremos a $K$ ser\u00e1 distinto.\n\n***\n## Caso de an\u00e1lisis: $u_{i}(x_{i}) \\gg u_{j}(X_{i})$\n\nCuando sucede que la incertidumbre que prove\u00e9 la evaluaci\u00f3n tipo A es muy significativa frente a la tipo B, esto querr\u00e1 decir que no tenemos suficientes grados de libertad para que $u_{c}(x_{i})$ se aproxime a una gaussiana. En otras palabras, la muestra obtenida no es significativa.\n\nEn estos casos vamos a suponer que $u_{c}(x_{i})$ distribuye como t-Student. 
La distribuci\u00f3n t-Student surge justamente del problema de estimar la media de una poblaci\u00f3n normalmente distribuida cuando el tama\u00f1o de la muestra es peque\u00f1o.\n\nComo la distribuci\u00f3n de t-Student tiene como par\u00e1metro los grados de libertad efectivos, debemos calcularlos. Para ello utilizaremos la f\u00f3rmula de Welch-Satterthwaite:\n\n\\begin{equation}\n \\nu_{eff} = \\dfrac{u_{c}^{4}(y)}{\\sum_{i=1}^{N} \\dfrac{ c_{i}^{4} u^{4}(x_{i})} {\\nu_{i}} } \n\\end{equation}\n\n\ndonde $c_i = \\dfrac{\\partial f}{\\partial x_{i}}$ y $u_{i}(x_{i})$ es la incertidumbre tipo A.\n\n\n\nPara obtener el factor de cobertura que nos asegure un factor de cobertura del $95/%$ debemos recurrir a la tabla del t-Student. Para ello existe una funci\u00f3n dentro del m\u00f3dulo _scipy.stats_ que nos integra la funci\u00f3n hasta lograr un \u00e1rea del $95.4%$.\n\nA continuaci\u00f3n presentamos la funci\u00f3n que utilizaremos con dicho fin,\n\n~~~\ndef get_factor_Tstudent(V_eff, porcentaje_confianza_objetivo=95.4):\n \"\"\"\n Funcion de calculo de factor de expansi\u00f3n por T-student\n input:\n V_eff: Grados de libertad (float)\n porcentaje_confianza_objetivo: porcentaje_confianza_objetivo (float)\n returns: \n Factor de expansi\u00f3n (float)\n \"\"\"\n return np.abs( -(stats.t.ppf((1.0+(porcentaje_confianza_objetivo/100))/2.0,V_eff)) )\n~~~\n\n\n***\n## Caso de an\u00e1lisis: $u_{i}(x_{i}) \\ll u_{j}(X_{i})~$\n\nPara el caso en el que la incertidumbre del muestreo es muy inferior a la incertidumbre tipo B, nos encontramos frente al caso de incertidumbre B dominante. Esta situaci\u00f3n es equivalente a tener la convoluci\u00f3n entre una delta de dirac con una funci\u00f3n de distribuci\u00f3n cualquiera. \n\n\n\n\nComo observamos en la imagen, la funci\u00f3n distribuci\u00f3n densidad de probabilidad resultate se asemeja m\u00e1s a la distribuci\u00f3n uniforme del tipo B. En este caso para encontrar el factor de cobertura utilizaremos otra tabla distinta. En esta tabla el par\u00e1metro de entrada es el cociente $\\dfrac{u_{i}}{u_{j}}$.\n\nA continuaci\u00f3n presentamos la funci\u00f3n que utilizaremos con dicho fin,\n\n~~~\ndef tabla_B(arg):\n tabla_tipoB = np.array([\n [0.0, 1.65],\n [0.1, 1.66],\n [0.15, 1.68],\n [0.20, 1.70],\n [0.25, 1.72],\n [0.30, 1.75],\n [0.35, 1.77],\n [0.40, 1.79],\n [0.45, 1.82],\n [0.50, 1.84],\n [0.55, 1.85],\n [0.60, 1.87],\n [0.65, 1.89],\n [0.70, 1.90],\n [0.75, 1.91],\n [0.80, 1.92],\n [0.85, 1.93],\n [0.90, 1.94],\n [0.95, 1.95],\n [1.00, 1.95],\n [1.10, 1.96],\n [1.20, 1.97],\n [1.40, 1.98],\n [1.80, 1.99],\n [1.90, 1.99]])\n if arg >= 2.0:\n K = 2.0\n else:\n pos_min = np.argmin(np.abs(tabla_tipoB[:,0]-arg)) \n K = tabla_tipoB[pos_min,1]\n\n return K\n~~~\n\n\n***\n## Correlaci\u00f3n\n\nFinalmente nos encontramos con el caso mas general. 
En esta situaci\u00f3n las variables se encuentran correlacionadas, por lo que la expresi\u00f3n de $u_{c}(Y)$ debe utilizarse en su totalidad.\n\nPor simplicidad de computo vamos a definir al coeficiente correlaci\u00f3n como,\n\n\\begin{equation}\n r(q,w) = \\dfrac{ u(q,w) }{ u(q)u(w) }\n\\end{equation}\n\nDe esta forma podemos expresar a $u_{c}$ como:\n\n\\begin{equation}\n u_{c}^{2}(Y) = \\sum_{i=1}^{N} (\\dfrac{\\partial f}{\\partial x_{i}})^{2} \\cdot u_{c}^{2}(x_{i}) + 2 \\sum_{i=1}^{N-1} \\sum_{j = i+1}^{N} \\dfrac{\\partial f}{\\partial x_{i}} \\dfrac{\\partial f}{\\partial x_{j}} r(x_{i},x_{j})u(x_{i})u(x_{j})\n\\end{equation}\n\nEsta expresi\u00f3n debe utilizarse cada vez que $r(x_{i},x_{j}) \\ne 0$.\n\n# Experimentaci\u00f3n\n**Comenzamos inicializando los m\u00f3dulos necesarios**\n\n\n```python\n# m\u00f3dulos genericos\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nfrom scipy import signal\n\n# M\u00f3dulos para Jupyter (mejores graficos!)\nimport warnings\nwarnings.filterwarnings('ignore')\nplt.rcParams['figure.figsize'] = [12, 4]\nplt.rcParams['figure.dpi'] = 150 # 200 e.g. is really fine, but slower\n\n\nfrom pandas import DataFrame\nfrom IPython.display import HTML\n```\n\n**Definimos las funciones previamente mencionadas**\n\n\n```python\nAhora# Tabla para el caso A dominante\ndef get_factor_Tstudent(V_eff, porcentaje_confianza_objetivo=95.4):\n \"\"\"\n Funcion de calculo de factor de expansi\u00f3n por T-student\n input:\n V_eff: Grados de libertad (float)\n porcentaje_confianza_objetivo: porcentaje_confianza_objetivo (float)\n returns: .libertad efectivosdenoted\n Factor de expansi\u00f3n (float)\n \"\"\"\n return np.abs( -(stats.t.ppf((1.0+(porcentaje_confianza_objetivo/100))/2.0,V_eff)) )\n\n# Tabla para el caso B dominante\ndef tabla_B(arg):\n tabla_tipoB = np.array([\n [0.0, 1.65],\n [0.1, 1.66],\n [0.15, 1.68],\n [0.20, 1.70],\n [0.25, 1.72],\n [0.30, 1.75],\n [0.35, 1.77],\n [0.40, 1.79],\n [0.45, 1.82],\n [0.50, 1.84],\n [0.55, 1.85],\n [0.60, 1.87],\n [0.65, 1.89],\n [0.70, 1.90],\n [0.75, 1.91],\n [0.80, 1.92],\n [0.85, 1.93],\n [0.90, 1.94],\n [0.95, 1.95],\n [1.00, 1.95],\n [1.10, 1.96],\n [1.20, 1.97],\n [1.40, 1.98],\n [1.80, 1.99],\n [1.90, 1.99]])\n if arg >= 2.0:\n K = 2.0\n else:\n pos_min = np.argmin(np.abs(tabla_tipoB[:,0]-arg)) \n K = tabla_tipoB[pos_min,1]\n\n return K\n```\n\n## Caso general\n**Definimos las constantes necesarias**\n\n\n```python\n# Constantes del instrumento\nCONST_ERROR_PORCENTUAL = 0.5 # Error porcentual del instrumento de medici\u00f3n\nCONST_ERROR_CUENTA = 3 # Error en cuentas del instrumento de medici\u00f3n\nCONST_DECIMALES = 2 # Cantidad de decimales que representa el instrumento\n\n# Constantes del muestro\nN = 10 # Cantidad de muestras tomadas\n\n# Se\u00f1al a muestrear idealizada\nmu = 100 # Valor medio de la distribuci\u00f3n normal de la poblaci\u00f3n ideal\nstd = 2 # Desv\u00edo est\u00e1ndar de la distribuci\u00f3n normal de la poblaci\u00f3n ideal\n\n# Muestreo mi se\u00f1al ideal (Normal)\nmuestra = np.random.randn(N) * std + mu\n```\n\n**Ahora solamente genero un gr\u00e1fico que compare el histograma con la distribuci\u00f3n normal de fondo**\n\n\n```python\nnum_bins = 50\nfig, ax = plt.subplots()\n# the histogram of the 1.1data\nn, bins, patches = ax.hist(muestra, num_bins, density=True)\n# add a 'best fit' line\ny = ((1 / (np.sqrt(2 * np.pi) * std)) *\n np.exp(-0.5 * (1 / std * (bins - mu))**2))\nax.plot(bins, y, '--')\nax.set_xlabel('Smarts')\nax.set_ylabel('Probability 
density')\nax.set_title('Histogram of IQ: $\\mu=$'+ str(mu) + ', $\\sigma=$' + str(std))\n# Tweak spacing to prevent clipping of ylabel\nfig.tight_layout()\nplt.show()\n```\n\n\n```python\nmedia = np.round(muestra.mean(), CONST_DECIMALES) # Redondeamos los decimales a los valores que puede ver el tester\ndesvio = muestra.std(ddof=1)\n\nprint(\"Mean:\",media )\nprint(\"STD:\" ,desvio)\n```\n\n Mean: 99.29\n STD: 1.6777655348895033\n\n\n**Calculamos el desv\u00edo est\u00e1ndar experimental de la media como:**\n\\begin{equation}\n u_{i}(x_{i}) = \\sqrt{S^{2}(\\bar{X_{i}})}\n\\end{equation}\n\n\n```python\n#Incertidumbre Tipo A\nui = desvio/np.sqrt(N)\nui\n```\n\n\n\n\n 0.5305560469981527\n\n\n\n**Calculamos el error porcentual total del dispositivo de medici\u00f3n como:**\n\\begin{equation}\n e_{\\%T} = e_{\\%} + \\dfrac{e_{cuenta}\\cdot 100\\%}{\\bar{X_{i}}(10^{cte_{Decimales}})}\n\\end{equation}\n\n\n```python\n#Incertidumbre Tipo B\nERROR_PORCENTUAL_CUENTA = (CONST_ERROR_CUENTA*100)/(media * (10**CONST_DECIMALES ))\n\nERROR_PORCENTUAL_TOTAL = CONST_ERROR_PORCENTUAL + ERROR_PORCENTUAL_CUENTA\n\nERROR_PORCENTUAL_CUENTA\n```\n\n\n\n\n 0.030214523114110183\n\n\n\n**Por tanto el error absoluto se representa como:**\n\\begin{equation}\n \\Delta X = e_{\\%T} \\dfrac{\\bar{X_{i}}}{100\\%}\n\\end{equation}\n\n\n```python\ndeltaX = ERROR_PORCENTUAL_TOTAL * media/100\ndeltaX\n```\n\n\n\n\n 0.5264500000000001\n\n\n\n**Finalmente la incertidumbre tipo B queda:**\n\\begin{equation}\n u_{j}(x_{i}) = \\sqrt{Var(x_{i})} = \\dfrac{\\Delta X}{\\sqrt{3}}\n\\end{equation}\n\ndonde recordamos que, al suponer una distribuci\u00f3n uniforme en el dispositivo de medici\u00f3n, la varianza nos queda $Var(X_{uniforme}) = \\dfrac {(b-a)^{2}}{12}$.\n\n\n```python\nuj = deltaX / np.sqrt(3)\nuj\n```\n\n\n\n\n 0.30394604921487856\n\n\n\n**Calculamos la incertidumbre conjunta**\n\nComo este es el caso de una medici\u00f3n directa de una sola variable, la expresi\u00f3n apropiada es:\n\n\\begin{equation}\n u_{c}^{2}(x_{i}) = u_{i}^{2}(x_{i}) + u_{j}^{2}(x_{i}) \n\\end{equation}\n\n\n```python\n#incertidumbre combinada\nuc = np.sqrt(ui**2 + uj**2)\nuc\n```\n\n\n\n\n 0.61145148608834\n\n\n\n**Ahora debemos evaluar frente a que caso nos encontramos**\n\nEn primera instancia evaluamos que componente de la incertidumbre es mayoritaria y en que proporci\u00f3n.\n\nEntonces tenemos tres situaciones posibles:\n\n1. **Caso B dominante** $\\Rightarrow \\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\lt 1 \\Rightarrow$ Se utiliza la tabla de B dominante.\n1. **Caso Normal** $\\Rightarrow \\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\gt 1$ y $V_{eff} \\gt 30 \\Rightarrow$ Se toma $K=2$.\n1. **Caso A dominante** $\\Rightarrow \\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\gt 1$ y $V_{eff} \\lt 30 \\Rightarrow$ Se utiliza t-Student con los grados de libertad efectivos.\n\n\n\n```python\ndef evaluacion(uc,ui,uj,N):\n cte_prop = ui/uj\n print(\"Constante de proporcionalidad\", cte_prop)\n if cte_prop > 1:\n # Calculo los grados de libertad efectivos\n veff = int ((uc**4)/((ui**4)/(N-1)))\n print(\"Grados efectivos: \", veff)\n if veff > 30:\n # Caso Normal\n k = 2\n else:\n # Caso t-Student\n k = get_factor_Tstudent(veff)\n else:\n # Caso B Dominante\n k = tabla_B(cte_prop)\n print(\"Constante de expansi\u00f3n: \",k)\n return k\n```\n\n
\n Nota: La contribuci\u00f3n de $u_{j}(x_{i})$ no se tiene en cuenta dado que, al ser una distribuci\u00f3n continua, tiene infinitos grados de libertad.\n \n \n\\begin{equation}\n \\nu_{eff} = \\dfrac{u_{c}^{4}(y)}{\\sum_{i=1}^{N} \\dfrac{ c_{i}^{4} u^{4}(x_{i})} {\\nu_{i}} } \n\\end{equation}\n
\n\n\n\n\n```python\nk = evaluacion(uc,ui,uj,N)\n```\n\n Constante de proporcionalidad 1.7455599385766958\n Grados efectivos: 15\n Constante de expansi\u00f3n: 2.175422110927068\n\n\n**An\u00e1lisis y presentaci\u00f3n del resultado**\n\nComo el cociente $\\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\gt 2$, entonces suponemos que nos encontramos frente al caso de distribuci\u00f3n normal o distribuci\u00f3n t-Student. Para ello utilizamos el criterio de los grados de libertad efectivos.\n\nEn este caso los grado de libertad efectivos $V_{eff} \\gt 30$, por lo que suponemos distribuci\u00f3n normal.\n\nFinalmente presentamos el resultado con 1 d\u00edgito significativo.\n\n\n```python\nU = uc*k\nprint(\"Resultado de la medici\u00f3n: (\",np.round(media,1),\"+-\",np.round(U,1),\")V con un grado de confianza del 95%\")\n```\n\n Resultado de la medici\u00f3n: ( 99.3 +- 1.3 )V con un grado de confianza del 95%\n\n\n# Bibliograf\u00eda\n\n_Nota: Las citas **no** respetan el formato APA._\n\n1. [Evaluaci\u00f3n de la Incertidumbre en Datos Experimentales, Javier Miranda Mart\u00edn del Campo](http://depa.fquim.unam.mx/amyd/archivero/CALCULODEINCERTIDUMBRESDR.JAVIERMIRANDA_26197.pdf)\n\n1. [Propagaci\u00f3n de erroes, Wikipedia](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_de_errores)\n\n1. [Convoluci\u00f3n, Wikipedia](https://es.wikipedia.org/wiki/Convoluci%C3%B3n)\n\n1. [Intervalo de Confianza, Wikipedia](https://es.wikipedia.org/wiki/Intervalo_de_confianza#Ejemplo_pr%C3%A1ctico)\n", "meta": {"hexsha": "f55d90f30564ce025901c9190cc1c31e2d8a58bc", "size": 75558, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Incertidumbre/Incertidumbre.ipynb", "max_stars_repo_name": "lucasliano/Medidas1", "max_stars_repo_head_hexsha": "349f1e3783b35782a445d7e34ab9827ee5117e31", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-05-02T19:24:58.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-03T01:19:53.000Z", "max_issues_repo_path": "Incertidumbre/Incertidumbre.ipynb", "max_issues_repo_name": "lucasliano/Medidas1", "max_issues_repo_head_hexsha": "349f1e3783b35782a445d7e34ab9827ee5117e31", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Incertidumbre/Incertidumbre.ipynb", "max_forks_repo_name": "lucasliano/Medidas1", "max_forks_repo_head_hexsha": "349f1e3783b35782a445d7e34ab9827ee5117e31", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 85.6666666667, "max_line_length": 40872, "alphanum_fraction": 0.7744778845, "converted": true, "num_tokens": 7976, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.20689405126100327, "lm_q1q2_score": 0.09940818024586277}} {"text": "# Python for scientific computing\n\n> Marcos Duarte \n> Laboratory of Biomechanics and Motor Control [http://demotu.org](http://demotu.org) \n> Federal University of ABC, Brazil \n\n# This talk\n\n*The Python programming language with its ecosystem for scientific programming has features, maturity, and a community of developers and users that makes it the ideal environment for the scientific community.* \n\n*This talk will show some of these features and usage examples.* \n\n*If you are viewing this notebook online (served by [http://nbviewer.ipython.org](http://nbviewer.ipython.org)), you can click the button 'View as Slides' on the toolbar above to start the slide show.*\n\n## The lifecycle of a scientific idea\n\n\n```python\nfrom IPython.display import Image\nImage(filename='../images/lifecycle_FPerez.png') # From F. Perez\n```\n\n## About Python\n\n*Python is a programming language that lets you work more quickly and integrate your systems more effectively. You can learn to use Python and see almost immediate gains in productivity and lower maintenance costs* [[python.org](http://python.org/)].\n\n*Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, well suited for Rapid Application Development and for scripting or glue language to connect existing components. Python's simple, easy to learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages program modularity and code reuse. The Python interpreter and standard libraries are available without charge for all major platforms, and can be freely distributed* [[Python documentation](http://www.python.org/doc/essays/blurb/)].\n\n## About me\n\nAs a scientist, what I do it's similar to this other fellow:\n\n\n```python\nfrom IPython.display import YouTubeVideo\nYouTubeVideo('9ZlBUglE6Hc', width=480, height=360, rel=0)\n```\n\n\n\n\n\n\n\n\n\n\n## Python ecosystem for scientific computing (main libraries)\n\n- [Numpy](http://numpy.scipy.org): fundamental package for scientific computing with a N-dimensional array package.\n- [Scipy](http://scipy.org/scipylib/index.html): numerical routines for scientific computing.\n- [Matplotlib](http://matplotlib.org): comprehensive 2D Plotting.\n- [Sympy](http://sympy.org): symbolic mathematics.\n- [Pandas](http://pandas.pydata.org/): data structures and data analysis tools.\n- [Jupyter Notebook](https://jupyter.org): web application for creating and sharing documents with live code, equations, visualizations and text. \n- [Statsmodels](http://statsmodels.sourceforge.net/): to explore data, estimate statistical models, and perform statistical tests.\n- [Scikit-learn](http://scikit-learn.org/stable/): tools for data mining and data analysis (including machine learning).\n- [Pillow](http://python-pillow.github.io/): Python Imaging Library.\n- [Spyder](https://code.google.com/p/spyderlib/): interactive development environment.\n\n## Why Python and not 'X' (put any other language here)\n\nPython is not the best programming language for all needs and for all people. There is no such language. But, if you are doing scientific computing, chances are that Python is perfect for you because:\n\n1. Python is free, open source, and cross-platform. \n2. 
Python is easy to learn, with readable code, well documented, and with a huge and friendly user community. \n3. Python is a real programming language, able to handle a variety of problems, easy to scale from small to huge problems, and easy to integrate with other systems (including other programming languages).\n4. Python code is not the fastest but Python is one the fastest languages for programming. It is not uncommon in science to care more about the time we spend programming than the time the program took to run. But if code speed is important, one can easily integrate in different ways a code written in other languages (such as C and Fortran) with Python.\n5. The Jupyter Notebook is a versatile tool for programming, data visualization, plotting, simulation, numeric and symbolic mathematics, and writing for daily use.\n\n## Popularity of Python for teaching\n\n\n```python\nfrom IPython.display import IFrame\nIFrame('http://cacm.acm.org/blogs/blog-cacm/176450-python-is-now-the-most-popular-' +\n 'introductory-teaching-language-at-top-us-universities/fulltext',\n width='100%', height=450)\n```\n\n\n\n\n\n\n\n\n\n\n## The Jupyter Notebook\n\nThe Jupyter Notebook App is a server-client application that allows editing and running notebook documents via a web browser. The Jupyter Notebook App can be executed on a local desktop requiring no internet access (as described in this document) or installed on a remote server and accessed through the internet. \n\nNotebook documents (or \u201cnotebooks\u201d, all lower case) are documents produced by the Jupyter Notebook App which contain both computer code (e.g. python) and rich text elements (paragraph, equations, figures, links, etc...). Notebook documents are both human-readable documents containing the analysis description and the results (figures, tables, etc..) as well as executable documents which can be run to perform data analysis.\n\n[Try Jupyter Notebook in your browser](https://try.jupyter.org/).\n\n\n```python\nfrom IPython.display import IFrame\nIFrame('https://jupyter.org/', width='100%', height=450)\n```\n\n\n\n\n\n\n\n\n\n\n## Jupyter Notebook and IPython kernel architectures\n\n
\n\n## Python installation and tutorial\n\n- [Python for scientific computing](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/PythonForScientificComputing.ipynb)\n- [How to install Python for scientific computing](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/PythonInstallation.ipynb) \n- [Tutorial on Python for scientific computing](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/PythonTutorial.ipynb)\n\n## Installing the Python ecosystem\n\n**The easy way** \nThe easiest way to get Python and the most popular packages for scientific programming is to install them with a Python distribution such as [Anaconda](https://www.continuum.io/anaconda-overview). In fact, you don't even need to install Python in your computer, you can run Python for scientific programming in the cloud using [python.org](https://www.python.org/shell/), [SageMathCloud](https://cloud.sagemath.com), [Wakari](https://www.wakari.io/), [pythonanywhere](https://www.pythonanywhere.com/), or [repl.it](https://repl.it/languages/python3).\n\n**The hard way** \nYou can download Python and all individual packages you need and install them one by one. In general, it's not that difficult, but it can become challenging and painful for certain big packages heavily dependent on math, image visualization, and your operating system (i.e., Microsoft Windows).\n\n## Anaconda\n\nGo to the [*Anaconda* website](https://www.continuum.io/downloads) and download the appropriate version for your computer (but download Anaconda3! for Python 3.x). The file is big (about 350 MB). [From their website](https://www.continuum.io/downloads): \n**Linux Install** \nIn your terminal window type and follow the instructions: \n```\nbash Anaconda3-4.1.1-Linux-x86_64.sh \n```\n**OS X Install** \nFor the graphical installer, double-click the downloaded .pkg file and follow the instructions \nFor the command-line installer, in your terminal window type and follow the instructions: \n```\nbash Anaconda3-4.1.1-MacOSX-x86_64.sh \n```\n**Windows** \nDouble-click the .exe file to install Anaconda and follow the instructions on the screen \n\n## Miniconda\n\nA variation of *Anaconda* is [*Miniconda*](http://conda.pydata.org/miniconda.html) (Miniconda3 for Python 3.x), which contains only the *Conda* package manager and Python. \n\nOnce *Miniconda* is installed, you can use the `conda` command to install any other packages and create environments, etc.\n\n# My current installation\n\n\n```python\n# pip install version_information\n%load_ext version_information\n%version_information numpy, scipy, matplotlib, sympy, pandas, ipython, jupyter\n```\n\n\n\n\n
| Software | Version |
| :--- | :--- |
| Python | 3.5.2 64bit [MSC v.1900 64 bit (AMD64)] |
| IPython | 5.1.0 |
| OS | Windows 10 10.0.10586 SP0 |
| numpy | 1.11.1 |
| scipy | 0.18.0 |
| matplotlib | 1.5.3 |
| sympy | 1.0 |
| pandas | 0.18.1 |
| ipython | 5.1.0 |
| jupyter | 1.0.0 |

Thu Sep 22 01:04:43 2016 E. South America Standard Time
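The table above is the output of the third-party `version_information` extension loaded in the previous cell. If that extension is not installed, a plain-Python sketch along the following lines reports similar information (the package list here is just illustrative):

```python
# Minimal fallback if the version_information extension is not installed
import sys
import importlib

print('Python', sys.version)
for name in ['numpy', 'scipy', 'matplotlib', 'sympy', 'pandas', 'IPython']:
    module = importlib.import_module(name)
    print(name, getattr(module, '__version__', 'unknown'))
```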
\n\n\n\n## To learn more about Python\n\nThere is a lot of good material in the internet about Python for scientific computing, some of them are: \n\n - [How To Think Like A Computer Scientist](http://www.openbookproject.net/thinkcs/python/english2e/) or [the interactive edition](http://interactivepython.org/courselib/static/thinkcspy/index.html) (book)\n - [Python Scientific Lecture Notes](http://scipy-lectures.github.io/) (lecture notes)\n - [Lectures on scientific computing with Python](https://github.com/jrjohansson/scientific-python-lectures#lectures-on-scientific-computing-with-python) (lecture notes)\n - [A gallery of interesting IPython Notebooks](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks)\n\n# Brief tutorial on Python\n\n## Python as a calculator\n\nOnce in the IPython notebook, if you type a simple mathematical expression and press Shift+Enter it will give the result of the expression:\n\n\n```python\n1 + 2 - 5\n```\n\n\n\n\n -2\n\n\n\n\n```python\nimport math # use the import function to import the math library\nmath.sqrt(12)\n```\n\n\n\n\n 3.4641016151377544\n\n\n\n\n```python\nx = 1\ny = 1 + math.pi\ny\n```\n\n\n\n\n 4.141592653589793\n\n\n\n## Main built-in datatypes in Python\n\n- Bolleans: True, False\n- NoneType: None\n- Numbers: int, float, complex\n- Sequences: list, tuple, range\n- Text sequence: str\n- Binary sequence: bytes, bytearray, memoryview\n- Mapping: dict\n- Set: set, frozenset\n- Boolean operations: and, or, not\n- Comparisons: <, <=, >, >=, ==, !=, is, is not\n- Math operations: +, -, \\*, /, //, %, **\n- Bitwise operations: |, ^, &, <<, >>, ~\n\n## Example: strings\n\n\n```python\ns = 'P' + 'y' + 't' + 'h' + 'o' + 'n'\nprint(s)\nprint(s*5)\n```\n\n Python\n PythonPythonPythonPythonPython\n\n\nStrings can be subscripted (indexed); like in C, the first character of a string has subscript (index) 0:\n\n\n```python\nprint('s[0] = ', s[0], ' (s[index], start at 0)')\nprint('s[5] = ', s[5])\nprint('s[-1] = ', s[-1], ' (last element)')\nprint('s[:] = ', s[:], ' (all elements)')\nprint('s[1:] = ', s[1:], ' (from this index (inclusive) till the last (inclusive))')\nprint('s[2:4] = ', s[2:4], ' (from 1st index (inclusive) till 2nd index (exclusive))')\nprint('s[:2] = ', s[:2], ' (till this index, exclusive)')\nprint('s[:10] = ', s[:10], ' (Python handles the index if it''s larger than length)')\nprint('s[-10:] = ', s[-10:])\nprint('s[0:5:2] = ', s[0:5:2], ' (s[ini:end:step])')\nprint('s[::2] = ', s[::2], ' (s[::step], initial and final indexes can be omitted)')\nprint('s[0:5:-1] = ', s[::-1], ' (s[::-step] reverses the string)')\nprint('s[:2] + s[2:] = ', s[:2] + s[2:], ' (this sounds natural with Python indexing)')\n```\n\n s[0] = P (s[index], start at 0)\n s[5] = n\n s[-1] = n (last element)\n s[:] = Python (all elements)\n s[1:] = ython (from this index (inclusive) till the last (inclusive))\n s[2:4] = th (from 1st index (inclusive) till 2nd index (exclusive))\n s[:2] = Py (till this index, exclusive)\n s[:10] = Python (Python handles the index if its larger than length)\n s[-10:] = Python\n s[0:5:2] = Pto (s[ini:end:step])\n s[::2] = Pto (s[::step], initial and final indexes can be omitted)\n s[0:5:-1] = nohtyP (s[::-step] reverses the string)\n s[:2] + s[2:] = Python (this sounds natural with Python indexing)\n\n\n## Defining a function in Python\n\n\n```python\ndef fibo(N):\n \"\"\"Fibonacci series: the sum of two elements defines the next.\n \n The series is calculated till the input parameter N and\n returned as an 
ouput variable.\n \n \"\"\"\n \n a, b, c = 0, 1, []\n while b < N:\n c.append(b)\n a, b = b, a + b\n \n return c\n```\n\n\n```python\nfibo(9)\n```\n\n\n\n\n [1, 1, 2, 3, 5, 8]\n\n\n\n## Defining a function in Python II\n\n\n```python\ndef bmi(weight, height):\n \"\"\"Body mass index calculus and categorization.\n Enter the weight in kg and the height in m.\n See http://en.wikipedia.org/wiki/Body_mass_index\n \"\"\"\n bmi = weight / height**2\n if bmi < 15:\n c = 'very severely underweight'\n elif 15 <= bmi < 16:\n c = 'severely underweight'\n elif 16 <= bmi < 18.5:\n c = 'underweight'\n elif 18.5 <= bmi < 25:\n c = 'normal'\n elif 25 <= bmi < 30:\n c = 'overweight'\n elif 30 <= bmi < 35:\n c = 'moderately obese'\n elif 35 <= bmi < 40:\n c = 'severely obese'\n else:\n c = 'very severely obese'\n \n s = 'For a weight of {0:.1f} kg and a height of {1:.2f} m,\\n\\\n the body mass index (bmi) is {2:.1f} kg/m2,\\n\\\n which is considered {3:s}.'\\\n .format(weight, height, bmi, c)\n print(s)\n```\n\n\n```python\nbmi(70, 1.90);\n```\n\n For a weight of 70.0 kg and a height of 1.90 m,\n the body mass index (bmi) is 19.4 kg/m2,\n which is considered normal.\n\n\n## Numeric data manipulation with Numpy\n\nNumpy is the fundamental package for scientific computing in Python and has a N-dimensional array package convenient to work with numerical data. With Numpy it's much easier and faster to work with numbers grouped as 1-D arrays (a vector), 2-D arrays (like a table or matrix), or higher dimensions. \n\n\n```python\nimport numpy as np\n\nx = np.array([1, 2, 3, 4, 5, 6])\nprint(x)\nx = np.random.randn(2,4)\nprint(x)\n```\n\n [1 2 3 4 5 6]\n [[ 0.6504986 1.21639113 0.06680213 0.43133861]\n [ 0.35556254 0.43596075 1.17614962 1.0677548 ]]\n\n\n## Moving-average filter (Numpy use for performance)\n\n*A moving-average filter has the general formula:*\n\n$$ y[i] = \\sum_{j=0}^{m-1} x[i+j] \\;\\;\\;\\; for \\;\\;\\; i=1, \\; \\dots, \\; n-m+1 $$\n\nHere are two different versions of a function to implement the moving-average filter:\n\n\n```python\nimport numpy as np\ndef mav1(x, window):\n \"\"\"Moving average of 'x' with window size 'window'.\"\"\"\n y = np.empty(len(x)-window+1)\n for i in range(len(y)):\n y[i] = np.sum(x[i:i+window])/window\n return y\n\ndef mav2(x, window):\n \"\"\"Moving average of 'x' with window size 'window'.\"\"\"\n xsum = np.cumsum(x)\n xsum[window:] = xsum[window:] - xsum[:-window]\n return xsum[window-1:]/window\n```\n\n\n```python\nx = np.random.randn(300)/10\nx[100:200] += 1\nwindow = 10\n\nprint('Performance of mav1:')\n%timeit mav1(x, window)\nprint('Performance of mav2:')\n%timeit mav2(x, window)\n```\n\n Performance of mav1:\n 1000 loops, best of 3: 1.56 ms per loop\n Performance of mav2:\n The slowest run took 5.70 times longer than the fastest. This could mean that an intermediate result is being cached.\n 100000 loops, best of 3: 11.6 \u00b5s per loop\n\n\n## Ploting with matplotlib\n\nMatplotlib is the most-widely used packge for plotting data in Python. 
Let's see some examples of it.\n\n\n```python\nimport matplotlib.pyplot as plt\n#%matplotlib notebook\n%matplotlib inline\nimport numpy as np\n```\n\n\n```python\ny1 = mav1(x, window)\ny2 = mav2(x, window)\n# plot\nfig, ax = plt.subplots(1, 1, figsize=(8, 4))\nax.plot(x, 'b-', linewidth=1, label = 'raw data')\nax.plot(y1, 'y-', linewidth=2, label = 'moving average 1')\nax.plot(y2, 'g--', linewidth=2, label = 'moving average 2')\nax.legend(frameon=False, loc='upper right', fontsize=12)\nax.set_xlabel(\"Data #\")\nax.set_ylabel(\"Amplitude\")\nax.grid();\n```\n\n\n```python\nplt.figure(figsize=(8, 4))\nplt.plot(x, 'b-', linewidth=1, label = 'raw data')\nplt.plot(y1, 'y-', linewidth=2, label = 'moving average 1')\nplt.plot(y2, 'g--', linewidth=2, label = 'moving average 2')\nplt.legend(frameon=False, loc='upper right', fontsize=12)\nplt.xlabel(\"Data #\")\nplt.ylabel(\"Amplitude\")\nplt.grid()\nplt.show()\n```\n\n## Ploting with matplotlib II\n\nPlot figure in an external window (outside the ipython notebook area):\n\n\n```python\n#%matplotlib qt\n```\n\n\n```python\nmu, sigma = 10, 2\nx = mu + sigma * np.random.randn(1000)\nfig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))\nax1.plot(x, 'ro')\nax1.set_title('Data')\nax1.grid()\n\nn, bins, patches = ax2.hist(x, 25, normed=True, facecolor='r') # histogram\nax2.set_xlabel('Bins')\nax2.set_ylabel('Probability')\nax2.set_title('Histogram')\nfig.suptitle('Another example using matplotlib', fontsize=18, y=1.02)\nax2.grid()\n\nplt.tight_layout()\nplt.show()\n```\n\n\n```python\n# get back the inline plot\n#%matplotlib inline\n#%matplotlib notebook\n```\n\nInstead of \"`%matplotlib inline`\" you can use \"`%matplotlib notebook`\" which gives you a nice toolbar for zooming, panning, etc. The caveat is that once \"`%matplotlib notebook`\" is used you can't alternate between matplotlib backends as we just did.\n\n## Symbolic mathematics with Sympy\n\nSympy is a package to perform symbolic mathematics in Python. Let's see some of its features:\n\n\n```python\nfrom IPython.display import display\nimport sympy as sym\nfrom sympy.interactive import printing\nprinting.init_printing()\n```\n\nDefine some symbols and the create a second-order polynomial function (a.k.a., parabola), plot, and find the roots:\n\n\n```python\nx, y = sym.symbols('x y')\ny = -x**3 + 4*x\ny\n```\n\n\n```python\nfrom sympy.plotting import plot\n%matplotlib inline\nplot(y, (x, -3, 3));\n```\n\n\n```python\nsym.solve(y, x)\n```\n\n## More live examples\n\nLet's run stuff from:\n- [https://github.com/demotu/BMC](https://github.com/demotu/BMC)\n- [http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Index.ipynb](http://nbviewer.ipython.org/github/ipython/ipython/blob/master/examples/Index.ipynb)\n- [http://nbviewer.jupyter.org/](http://nbviewer.jupyter.org/)\n- ...\n\n## Questions?\n\n- http://mail.scipy.org/mailman/listinfo/ipython-dev\n- http://www.reddit.com/r/python\n- http://stackoverflow.com/\n\n> This entire document was written in the Jupyter Notebook (which can be statically viewed [here](http://nbviewer.ipython.org/github/demotu/BMC/blob/master/notebooks/PythonForScientificComputing.ipynb) or downloaded [here](https://raw.githubusercontent.com/demotu/BMC/master/notebooks/PythonForScientificComputing.ipynb)). 
If you are watching my presentation right now, these slides are just a visualization of the same notebook (probably using the [RISE: \"Live\" Reveal.js Jupyter/IPython Slideshow Extension](https://github.com/damianavila/live_reveal)).\n", "meta": {"hexsha": "7237255343d3751485edfb3f5a723e9715bb5d28", "size": 532338, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/PythonForScientificComputing.ipynb", "max_stars_repo_name": "jagar2/BMC", "max_stars_repo_head_hexsha": "884250645693ef828471fe1d132a093dc6df7593", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-08-30T04:02:59.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-30T04:02:59.000Z", "max_issues_repo_path": "notebooks/PythonForScientificComputing.ipynb", "max_issues_repo_name": "jagar2/BMC", "max_issues_repo_head_hexsha": "884250645693ef828471fe1d132a093dc6df7593", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/PythonForScientificComputing.ipynb", "max_forks_repo_name": "jagar2/BMC", "max_forks_repo_head_hexsha": "884250645693ef828471fe1d132a093dc6df7593", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-08-30T04:03:02.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-30T04:03:02.000Z", "avg_line_length": 288.8431904504, "max_line_length": 168334, "alphanum_fraction": 0.9164553348, "converted": true, "num_tokens": 5194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3960681662740417, "lm_q2_score": 0.25091279808829703, "lm_q1q2_score": 0.09937857183352067}} {"text": "# Cass and Koopman's Model of Optimal Growth\n\n** This project sets out to investigate the optimal level of growth. **\n\nWe're interested analysing the theoretically optimal level of optimal growth. To do so, we will be using Cass and Koopman's model of exactly that. \n\nThe model can be interpreted as an extension of the Solow model but adapted to make the savings rate the outcome of an optimal choice. This is in contrast to that of the Solow model which assumed a constant savings rate determined outside the model. The model is based on the articles:\n\n* Tjalling C. Koopmans. On the concept of optimal economic growth. In Tjalling C. Koopmans, editor, The Economic Approach to Development Planning, page 225\u2013287. Chicago, 1965.\n\n* David Cass. Optimum growth in an aggregative model of capital accumulation. Review of Economic Studies, 32(3):233\u2013240, 1965.\n\n** Imports and set magics:**\n\n\n```python\nimport numpy as np\nfrom scipy import optimize\nimport sympy as sm\nimport matplotlib.pyplot as plt\n\n# autoreload modules when code is run\n%load_ext autoreload\n%autoreload 2\n\n# local modules\nimport modelproject\n```\n\n# Description of the model\n\nTime is discrete and takes the values t=0,1...,T. A single good is consumed or invested in physical capital. The consumption good is not durable and will depreciate if it is not consumed immediately. The capital good is durable but depreciates each period with the rate $\\gamma \\epsilon (0,1)$. 
\n\nWe consider the a model of Cass and Koopman's optimal growth where:\n\n* $C_t$ is a nondurable consumption good at time t.\n* $K_t$ is is the stock of physical capital at time t.\n* Let $C={C_0,...,C_T}$ and $K={K_1,...,K_{T+1}}$\n\nA representative household is endowed with one unit of labour $N_t$ at each t, such that $N_t=1$ for all $t \\epsilon [0,T]$. \n\nThe representative household has preferences over consumption bundles with the utility function given by: \n\n$$ U(C)=\\sum_{T=0}^{T}\\beta^t\\frac{C_t^{1-\\gamma}}{1-\\gamma} $$\n\nwhere $\\beta \\epsilon (0,1)$ is a discount factor and $\\gamma > 0$ decides the curvature of the one-period utility function. \n\nNote that\n\n$$u(C_t)=\\frac{C_t^{1-\\gamma}}{1-\\gamma}$$\n\nsatisfies $u'>0, u''<0$. \n\nWe also note that\n* $u'>0$ asserts the consumer prefers more to less\n* $u''<0$ asserts that marginal utility declines with increases in $C_t$\n\nWe assume that $K_0>0$ is a given exogenous level of intial capital. \n\nThere is an economy-wide production function: \n\n$$ F(K_t, N_t)=AK_t^\\alpha N_t^{1-\\alpha} $$\n\nwith 0 < \\alpha < 1, A>0. \n\nA feasible allocation C, K will satisfy\n$$ C_t+K_t+1 \\leq F(K_t,N_t)+(1-\\delta)K_t,$$ \n\nfor all $t \\epsilon[0,T]$\n\nwhere $\\delta \\epsilon(0,1)$ is the rate at which capital depreciates.\n\n\n## Planning Problem\n\nA planner chooses an allocation ${C,K}$, to maximise the utility function st the feasible allocation. Let $\\mu ={\\mu_0,...,\\mu_T}$ be a sequence of non-negative Lagrange multipliers. To find an optimal allocation, we use the Lagrangian\n\n$$ L(C, K, \\mu) = \\sum_{t=1}^{T}\\beta^t{\\mu(C_t)+\\mu_t(F(K_t, 1)+(1-\\delta)K_t-C_t-K_{t+1})} $$\n\nand then solve the following max problem\n\n$$ max L(C,K, \\mu) $$\n\n**Useful Properties of Linearly Homogenous Production Functions**\n\nNotice that \n\n$$ F(K_t, N_t)=AK^\\alpha_tN^{1-\\alpha}_t=N_tA(\\frac{K_t}{N_t})^\\alpha $$\n\nWe define the output per-capital production function \n\n$$f(\\frac{K_t}{N_t})=A(\\frac{K_t}{N_t})^\\alpha $$\n\nwhose argument is capital per-capita. \n\nThen we have that \n\n$$F(K_t,N_t)=N_tf(\\frac{K_t}{N_t}) $$\n\nTaking the derivate wrt K, yields\n\n$$ \\frac{\\delta F}{\\delta K} = \\frac{\\delta N_tf({\\frac{K_t}{N_t})}}{\\delta N_t} $$\n$$ = N_tf'(\\frac{K_t}{N_t}\\frac{1}{N_t}) $$\n$$ =f' (\\frac{K_t}{N_t} \\bigg\\rvert_{N_t=1} $$\n$$ f'(K_t) $$\n\nAlso\n\n$$ \\frac{\\delta F}{\\delta N} = \\frac{\\delta N_t f(\\frac{K_t}{N_t})}{\\delta N_t} $$\n$$ = f(\\frac{K_t}{N_t})+N_tf'(\\frac{K_t}{N_t})-\\frac{-K_t}{N_t^2}) $$\n$$ = f(\\frac{K_t}{N_t})- \\frac{K_t}{N_t}f'(\\frac{K_t}{N_t})\\bigg\\rvert_{N_t=1} $$\n$$ = f(K_t)-f'(K_t)K_t $$\n\n** Returning to solving the problem **\n\nWe compute first derivatives of Lagrangian and set them equal to 0, in order to solve the Lagrangian maximisation problem. \n\nOur objective function and constraints satisfy conditions that assure that the required SOCs are satisfied at an allocation satisfying the FOCs which that are derived below.\n\nThe FOC for maximisation with respet to C, K: \n\n$$ C_t: \\mu'(C_t)=\\mu_t = 0 for all t= 0,1,...,T $$\n$$ K_t: \\beta\u00a0\\mu_t[(1-\\delta)+f'(K_t)]-\\mu\u2013{t-1}=0 for all t=1,2,...,T $$\n$$ \\mu_t:F(K_t,1)+(1-\\delta)K_t-C_t-K_{t+1}=0 for all t=0,1,...,T $$ \n$$ K_{T+1}: -\\mu_T \t\\leq 0, \t\\leq if K_{T+1}=0; =0 if K_{T+1} > 0 $$\n\nIn the equation for $C_t$ we plugged in for $\\frac{\\delta F}{\\delta K}$ using the formula given above. 
In the equation for $K_t$ we plugged in for $\frac{\partial F}{\partial K}$ using the formula given above. Since $N_t=1$ for all $t$, it is not necessary to differentiate with respect to it. Note that the equation for $K_t$ reflects the occurrence of $K_t$ in both the period $t$ and period $t-1$ feasibility constraints. The equation for $K_{T+1}$ comes from differentiating with respect to $K_{T+1}$ in the last period and applying the following Karush-Kuhn-Tucker condition:

$$ \mu_T K_{T+1}=0 $$

Combining the equations for $C_t$ and $K_t$ yields

$$ \beta\, u'(C_t)\left[(1-\delta)+f'(K_t)\right]-u'(C_{t-1})=0 \quad \text{for all } t=1,2,\ldots,T $$

Shifting the time index forward by one period, this can be rewritten as

$$ \beta\, u'(C_{t+1})\left[(1-\delta)+f'(K_{t+1})\right]=u'(C_t) \quad \text{for all } t=0,1,\ldots,T-1 $$

Applying the inverse of the marginal utility function to both sides yields

$$ C_{t+1}=u'^{-1}\left(\frac{u'(C_t)}{\beta\left[f'(K_{t+1})+(1-\delta)\right]}\right) $$

Or, using the CRRA form of the utility function,

$$ C_{t+1}=\left(\beta C^{\gamma}_t\left[f'(K_{t+1})+(1-\delta)\right]\right)^{1/\gamma} = C_t\left(\beta\left[f'(K_{t+1})+(1-\delta)\right]\right)^{1/\gamma} $$

The above FOC for consumption is an Euler equation. It describes how consumption in consecutive periods is optimally related to consumption today and to capital in the following period.

We now apply the equations above to define the variables and functions that we will need to solve the planning problem.

First we define symbols

```python
gamma = sm.symbols('gamma')
alpha = sm.symbols('alpha')
delta = sm.symbols('delta')
beta = sm.symbols('beta')
A = sm.symbols('A')
```


```python
# The utility function
def u(c, gamma):
    if gamma == 1:  # if gamma = 1, L'Hopital's rule shows that the utility becomes logarithmic
        return np.log(c)
    else:
        return (c**(1-gamma))/(1-gamma)

# The derivative of utility (marginal utility)
def u_prime(c, gamma):
    if gamma == 1:
        return 1/c
    else:
        return c**(-gamma)

# The inverse of marginal utility
def u_prime_inverse(c, gamma):
    if gamma == 1:
        return 1/c  # the inverse of u'(c) = 1/c is again 1/x
    else:
        return c**(-1/gamma)

# The production function
def f(A, k, alpha):
    return A*k**alpha

# The derivative of the production function
def f_prime(A, k, alpha):
    return alpha*A*k**(alpha-1)

# The inverse of the derivative of the production function
# (here the argument k is the value of f'(K), not capital itself)
def f_prime_inverse(A, k, alpha):
    return (k/(A*alpha))**(1/(alpha-1))
```

We will use an algorithmic method based on a for loop to derive an optimal allocation $C, K$ and an associated Lagrange multiplier sequence $\mu$.
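
Before implementing the loop, here is a short sanity check, not part of the original notebook, that the closed-form CRRA Euler step above coincides with the update written in terms of $u'^{-1}$, which is how the algorithm below advances consumption. The parameter and state values are arbitrary and purely illustrative.

```python
# Arbitrary illustrative values (not the calibration used later in the notebook)
A_chk, alpha_chk, beta_chk, gamma_chk, delta_chk = 1.0, 0.33, 0.95, 2.0, 0.02
c_t, k_next = 0.5, 0.4

# Gross return on capital next period: f'(K_{t+1}) + (1 - delta)
R = f_prime(A=A_chk, k=k_next, alpha=alpha_chk) + (1 - delta_chk)

# Euler step written with the inverse marginal utility ...
c_next_via_inverse = u_prime_inverse(u_prime(c=c_t, gamma=gamma_chk) / (beta_chk * R), gamma=gamma_chk)

# ... and the closed CRRA form C_{t+1} = C_t * (beta * [f'(K_{t+1}) + 1 - delta])**(1/gamma)
c_next_closed_form = c_t * (beta_chk * R)**(1 / gamma_chk)

print(np.isclose(c_next_via_inverse, c_next_closed_form))  # True
```

Both expressions give the same $C_{t+1}$, so the loop can work interchangeably with either form.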
The FOCs for the planning problem, form a system of difference equations with two boundary conditions\n\n* $K_0$ is a given initial condition for capital\n* $K_{T+1}=0$ is a terminal condition for capital\n\nThe parameters are: \n* c = Initial consumption \n* k = Initial capital\n* $\\gamma$ = Coefficient of relative risk aversion \n* $\\delta$ = Depreciation rate on capital \n* $\\beta$ = Discount factor\n* $\\alpha$ = return to capital per capital\n* A = technology\n\n** The model paramters are defined **\n\n\n```python\ngamma=2\ndelta=0.02\nbeta=0.95\nalpha=0.33\nA=1\n```\n\n** The algortihmic method to solve the problem **\n\n\n```python\nT=10\nc=np.zeros(T+1) #T periods of consumption initialised to 0\nk=np.zeros(T+2) #T periods of capital initialised to 0(T+2 to include t+1 variable)\nk[0]=0.3 #Initial k\nc[0]=0.2 # Initial guess of c_0\n\ndef algorithm(c, k, gamma, delta, beta, alpha, A):\n T = len(c)-1 \n for t in range(T): \n k[t+1]=f(A=A, k=k[t], alpha=alpha)+(1-delta)*k[t]-c[t] \n if k[t+1]<0: #Ensuring nonnegativity\n k[t+1]=0 \n if beta*(f_prime(A=A, k=k[t+1], alpha=alpha)+(1-delta))==np.inf: \n#Only occurs if k[t+1] is 0, at which point nothing will be produced next period, thus consumption go to 0\n c[t+1]=0\n else: c[t+1]=u_prime_inverse(u_prime(c=c[t], gamma=gamma)/(beta*(f_prime(A=A, k=k[t+1], alpha=alpha)+(1-delta))), gamma=gamma)\n\n#Terminal condition calculation\n k[T+1]=f(A=A, k=k[T], alpha=alpha)+(1-delta)*k[T]-c[T]\n return c, k\n\npaths = algorithm(c, k, gamma, delta, beta, alpha, A)\n\nfig, axes = plt.subplots(1, 2, figsize=(10, 4))\ncolors = ['orange', 'green']\ntitles = ['Consumption', 'Capital']\nylabels = ['$c_t$', '$k_t$']\n\nfor path, color, title, y, ax in zip(paths, colors, titles, ylabels, axes):\n ax.plot(path, c=color, alpha=0.7)\n ax.set(title=title, ylabel=y, xlabel='t')\n\nax.scatter(T+1, 0, s=80)\nax.axvline(T+1, color='k', ls='--', lw=1)\n\nplt.tight_layout()\nplt.show()\n```\n\nFrom the graphs above, it is evident that our guess for $\\mu_0$ is too high and makes initial consumption too low. This is evident because the $K_{T+1}=0$ target is missing on the high side. \n\n## Bisection Method\n\nIn the following section we will automate the above procedure with the derivative-free method, Bisection. Applying the method means searching for $\\mu_0$, stopping when we reach the target $K_{T+1}=0$. \n\n\nWe take an initial guess for $C_0$ ($\\mu_0$ can be eliminated because $C_0$ is an exact function of $\\mu_0$. We know that the lowest $C_0$ can ever be is 0 and the largest it can be is initial output $f(K_0)$. We will take a guess on $C_0$ towards T+1. If $K_{T+1}>0$, let it be our new lower bound on $C_0$. If $K_{T+1}<0$, let it be our new upper bound. We will make a new guess for $C_0$ exactly halfway between our new upper and lower bounds. When $K_{T+1}$ gets close enough to 0 (wihtin some error tolerance bounds), the procedure will stop and we will have our values for consumption and capital.\n\nMore specifically the bisection methods in our model works in the following steps: \n\n1. We set $c_{low}=0$ and $c_{high}=f(k=k[0], alpha=alpha, A=A)$ where $f(c_{low})$ and $f(c_{high})$ have opposite sign, $f(c_{low})f(c_{high})<0$\n\n2. We compute $C[0]$ where $C[0]=(c_{low}+c_{high})/2$ is the midpoint\n\n3. The next sub-interval $[c_{low+1},c_{high+1}]$:\n\n - If $f(c_{low})f(C[0])<0$ (different signs) then $c_{low+1}=c_{low}$ and $c_{high+1}=C[0]$ (i.e. 
focus on the range $[c_{low},C[0]$)\n\n - If $fC[0]c_{high}<0$ (different signs) then $c_{low+1}=C[0]$ and $c_{high+1}=c_{high}$ (i.e. focus on the range $[C[0], c_{high}]$)\n \n4. Steps 2 and 3 are then repeated until $f(C[0]_n)<\\epsilon$\n\n\n```python\ndef bisection(c, k, gamma, delta, beta, alpha, A, tol=1e-4, max_iter=1e4, terminal=0): # Terminal is the value we are estimating towards\n\n #Step 1: Initialise\n T = len(c) - 1\n i = 1 # Initial iteration\n c_high = f(k=k[0], alpha=alpha, A=A) # Initial high value of c\n c_low = 0 # Initial low value of c\n\n path_c, path_k = algorithm(c, k, gamma, delta, beta, alpha, A)\n\n #Step 2-4: Main\n while (np.abs((path_k[T+1] - terminal)) > tol or path_k[T] == terminal) and i < max_iter:\n\n # Step 2: Midpoint and associated value\n c[0] = (c_high + c_low) / 2 \n path_c, path_k = algorithm(c, k, gamma, delta, beta, alpha, A)\n \n # Step 3: Determine sub-interval\n if path_k[T+1] - terminal > tol:\n # If assets are too high the c[0] guess is lower bound on possible values of c[0]\n c_low = c[0]\n elif path_k[T+1] - terminal < -tol:\n # If assets fell too quickly, the c[0] guess is upper bound on possible values of c[0]\n c_high=c[0]\n elif path_k[T] == terminal:\n # If assets fell too quickly, the c[0] guess is now an uppernbound on possible values of c[0]\n c_high=c[0]\n\n i += 1 \n\n if np.abs(path_k[T+1] - terminal) < tol and path_k[T] != terminal:\n print('Bisection method successful. Converged on iteration', i-1)\n else:\n print('Bisection method failed')\n\n u = u_prime(c=path_c, gamma=gamma)\n return path_c, path_k, u\n```\n\n** Plots of the above defined algorithms **\n\n\n```python\nT = 10\nc = np.zeros(T+1)\nk = np.zeros(T+2)\n\nk[0] = 0.3 # Initial k\nc[0] = 0.3 # Initial guess of c_0\n\npaths = bisection(c, k, gamma, delta, beta, alpha, A)\n\ndef plot_paths(paths, axes=None, ss=None):\n\n T = len(paths[0])\n\n if axes is None:\n fix, axes = plt.subplots(1, 3, figsize=(13, 3))\n\n ylabels = ['$c_t$', '$k_t$', '$\\mu_t$']\n titles = ['Consumption Level', 'Capital Level', 'Lagrange Multiplier']\n\n for path, y, title, ax in zip(paths, ylabels, titles, axes):\n ax.plot(path)\n ax.set(ylabel=y, title=title, xlabel='t')\n\n #Plotting the steady state value of k\n if ss is not None:\n axes[1].axhline(ss, c='k', ls='--', lw=1)\n\n axes[1].axvline(T, c='k', ls='--', lw=1)\n axes[1].scatter(T, paths[1][-1], s=80)\n plt.tight_layout()\n\nplot_paths(paths)\n```\n\nEvidently now, when our initial guess of $\\mu_0$ is higher, we get a significantly different result. \n\n# Analysis of the Steady State\n\nWe now want to analyse the steady state of the model. We set the inital level of capital to its steady state. \n\nIf T $\\rightarrow+ \\infty$, the optimal allocation will converge to the steady state values of $C_t$ and $K_t$. \n\nWe can derive these values and set $K_0$ equal to its steady state value. In a steady state we have that $K_{t+1}=K_t=\\overline{K}$ for all very large ts, the feasibility constraint previously stated is $f(\\overline{K})-\\delta\\overline{K}=\\overline{C}$ Substituting $K_t=\\overline{K}$ and $C_t=\\overline{C}$ for all t into the previously obtained equation $$u'(C_{t+1})[(1-\\delta)+f'(K_{t+1})]= u'(C_t)$$ for all t=0,1,...,T, yields $$1=\\beta\\frac{u'(\\overline{C}}{u'(\\overline{C}}[f'(\\overline{K}+(1-\\delta)]$$. 
Defining $\\beta=\\frac{1}{1+\\rho}$, and rearranging yields $$1+\\rho=1[f'(\\overline{K})+(1- \\delta)]$$ Simplifying yields $$f'(\\overline{K})=\\rho+\\delta$$ and $$\\overline{K}=f'^{-1}(\\rho+\\delta)$$ Using our production function from earlier yields $$\\alpha\\overline{K}^{\\alpha-1}=\\rho+\\delta$$\n\nUsing the obtained values $\\alpha$=0.33, $\\rho=\\frac{1}{\\beta}-1=\\frac{1}{\\frac{19}{20}}-1=\\frac{1}{19}$ and $\\delta=\\frac{1}{50}$, we get $$\\overline{K}=(\\frac{\\frac{33}{100}}{\\frac{1}{50}+\\frac{1}{19}})^{\\frac{67}{100}}\u22489.6$$\n\nIn the below we will verify this result and use this steady state $\\overline{K}$ as our initial capital stock $K_0$. \n\n\n```python\nrho = sm.symbols('rho')\nrho=1/beta-1\nk_ss=f_prime_inverse(k=rho+delta,A=A, alpha=alpha)\nprint(f'The steady state of capital, k, is: {k_ss}')\n```\n\n The steady state of capital, k, is: 9.57583816331462\n\n\n** We are now at a stage where we can plot given the obtained values for the steady state **\n\n\n```python\nT=150\nc=np.zeros(T+1)\nk=np.zeros(T+2)\nc[0]=0.3\nk[0]=k_ss\npaths = bisection(c, k, gamma, delta, beta, alpha, A)\n\nplot_paths(paths, ss=k_ss)\n```\n\nFrom the plots obtained above we see that in this economy with a large value of $T$, $K_t$ will stay near its initial value for as long as possible. We can from this deduct that the social planner likes the steady state capital stock and wants to stay there for as long as possible.\n\n# Changing the parameter values\n\nIn the below we examine what happens when the initial $K_0$ is pushed below $\\overline{K}$.\n\n\n```python\nk_initial = k_ss/3 #Value below steady state \nT=150\nc=np.zeros(T+1)\nk=np.zeros(T+2)\nc[0]=0.3\nk[0]=k_initial\npaths = bisection(c, k, gamma, delta, beta, alpha, A)\n\nplot_paths(paths, ss=k_ss)\n```\n\nWe now see that the planner pushes capital toward the steady state value then stays at this value for a substantial amount of time and subsequently pushes $K_t$ toward the terminal value $K_{T+1}=0$ as t gets close to T. \n\n## Changing the value of T\n\nWe are also interested in seeing how the trajectory of the paths will change, when altering the value for T. We're making a list with four differet values of T in order to see the difference when the time horizon is altered within the same graphs. The values of T are ranging from 30 to 180 to incapsule a large range of Ts.\n\n\n```python\nT_list = (180, 90, 60, 30)\nfix, axes =plt.subplots(1, 3, figsize=(13,3))\n\nfor T in T_list:\n c=np.zeros(T+1)\n k=np.zeros(T+2)\n c[0]=0.3\n k[0]=k_initial\n paths=bisection(c, k, gamma, delta, beta, alpha, A)\n plot_paths(paths, ss=k_ss, axes=axes)\n\n```\n\nThe different colours in the graphs above are tied to outcomes with different time horizons T. These are the values given in the T_list. \n\nWe see that as we increase the time horizon, the planner puts $K_{t}$ closer to the steady state value $\\overline{K}$ for longer. \n\n## Further changes to the value of T\n\n** In the following we are testing what happens when we set the value of T at a very high value. **\n\nWe expect the pllaner making the capital stockspend most of its time close to its steady state level. \n\n\n```python\nT_list = (260, 180, 60, 30)\nfix, axes = plt.subplots(1, 3, figsize=(13, 3))\n\nfor T in T_list:\n c = np.zeros(T+1)\n k = np.zeros(T+2)\n c[0] = 0.3\n k[0] = k_initial\n paths = bisection(c, k, gamma, delta, beta, alpha, A)\n plot_paths(paths, ss=k_ss, axes=axes)\n```\n\nEvidently the bisection method failed when the parameter for T is set to 260. 
It failed to converge and hit the maximum iteration.\n\nHowever, it is evident that the pattern from the previous analysis is repeated with the increased values of T. The pattern reflects a turnpike property of the steady state. \n\nWe can conclude that for any given initial value of $K_0$, $K_t$ is pushed toward the steady state and held at this level for as long as possible. \n\n# Further analysis\n\nIn the below we extend the Cass-Koopman's model of optimal growth by adding an environmental term with inspiration from a paper done by Luiz Fernando (Luiz Fernando Ohara Kamogawa & Ricardo Shirota, 2011. \"Economic growth, energyconsumption and emissions: an extension of Ramsey-Cass-Koopmans modelunder EKC hypothesis,\" Anais do XXXVII Encontro Nacional de Economia [Proceedings of the 37th Brazilian Economics Meeting] 187, ANPEC)\n\n## Description of the model extension\n\nThe production function remains unchanged, but new parameters changes the utility function. The parameters are:\n\n* $\\eta$ = relative $CO_2$-emission done by non-renewable energy compared to renewable energy\n* $\\Phi$ = means of the gloabl awareness of climate changes\n* $j$ = substitutability from non-renewable energy to renewable energy. \n\nThe utility function is then given by: \n\n$$U=c^{-\\gamma}-\\eta*\\Phi^j$$\n\n\n\n## Solving the model\n\nTo solve the extended model, we use the same bisection method as previous, but add the above mentioned alterations. \n\n**The new symbols are defined:**\n\n\n```python\nepsilon = sm.symbols('epsilon')\ntheta = sm.symbols('theta')\n```\n\n**The parameters are defined:**\n\nThe values of the added terms is based on empirical studies found in the literature.\n\n\n\n```python\nepsilon=0.5\ntheta=0.3\nj=0.5\n```\n\n**The model functions are defined:**\n\n\n```python\n#The derivative of environmental utility\ndef u_prime_2(c, gamma, epsilon, theta, j): \n if gamma == 1:\n return 1/c\n else: \n return c**(-gamma)-epsilon*(theta)**j\n\n#Inverse environmental utility\ndef u_prime_inverse_2(c, gamma, epsilon, theta, j):\n if gamma == 1: \n return c\n else: \n return c**(-1/gamma)-epsilon*(theta)**(1/j)\n```\n\n**The bisection method is now used to solve the model:**\n\n\n```python\ndef bisection(c, k, gamma, delta, beta, alpha, A, tol=1e-4, max_iter=1e4, terminal=0): # Terminal is the value we are estimating towards\n\n #Step 1: Initialise\n T = len(c) - 1\n i = 1 # Initial iteration\n c_high = f(k=k[0], alpha=alpha, A=A) # Initial high value of c\n c_low = 0 # Initial low value of c\n\n path_c, path_k = algorithm(c, k, gamma, delta, beta, alpha, A)\n\n #Step 2-4: Main\n while (np.abs((path_k[T+1] - terminal)) > tol or path_k[T] == terminal) and i < max_iter:\n\n # Step 2: Midpoint and associated value\n c[0] = (c_high + c_low) / 2 \n path_c, path_k = algorithm(c, k, gamma, delta, beta, alpha, A)\n \n # Step 3: Determine sub-interval\n if path_k[T+1] - terminal > tol:\n c_low = c[0]\n elif path_k[T+1] - terminal < -tol:\n c_high=c[0]\n elif path_k[T] == terminal:\n c_high=c[0]\n\n i += 1 \n\n if np.abs(path_k[T+1] - terminal) < tol and path_k[T] != terminal:\n print('Bisection method successful. 
Converged on iteration', i-1)
    else:
        print('Bisection method failed')

    u_2 = u_prime_2(c=path_c, gamma=gamma, theta=theta, j=j, epsilon=epsilon)
    return path_c, path_k, u_2


T = 10
c = np.zeros(T+1)
k = np.zeros(T+2)

k[0] = 0.3  # Initial k
c[0] = 0.3  # Initial guess of c_0

paths = bisection(c, k, gamma, delta, beta, alpha, A)

def plot_paths(paths, axes=None, ss=None):

    T = len(paths[0])

    if axes is None:
        fix, axes = plt.subplots(1, 3, figsize=(13, 3))

    ylabels = ['$c_t$', '$k_t$', '$\mu_t$']
    titles = ['Consumption Level', 'Capital Level', 'Lagrange Multiplier']

    for path, y, title, ax in zip(paths, ylabels, titles, axes):
        ax.plot(path)
        ax.set(ylabel=y, title=title, xlabel='t')

    # Plotting the steady state value of k
    if ss is not None:
        axes[1].axhline(ss, c='k', ls='--', lw=1)

    axes[1].axvline(T, c='k', ls='--', lw=1)
    axes[1].scatter(T, paths[1][-1], s=80)
    plt.tight_layout()

plot_paths(paths)
```

After adding the environmental terms, the optimal consumption path has evidently changed to lower consumption in the first 9 periods followed by a quite radical rise in the tenth period. The optimal capital path has also changed, with a bigger fall in capital in the last period.

# Conclusion

We used an algorithmic method and the bisection method to solve the Cass-Koopmans model of optimal growth. The optimal path for consumption is found to rise over the whole time period, while the path for capital rises until the steady state is reached and then falls to zero at the end of the horizon. The steady state for capital is found to be approximately 9.58. We then visualized the optimal consumption and capital paths for different time horizons and found evidence that the horizon has to be larger than 150 periods for the economy to spend time on the steady-state path. Finally, we analysed an extension of the baseline model in which an environmental term is added to the utility function. The extended model yields different optimal consumption and capital paths.
NO", "lm_q1_score": 0.4921881357207956, "lm_q2_score": 0.20181322706107543, "lm_q1q2_score": 0.09933007599098834}} {"text": "
## [mlcourse.ai](https://mlcourse.ai) - Open Machine Learning Course
\nAuteur: [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/). \nTraduit et \u00e9dit\u00e9 par [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/) et [Ousmane Ciss\u00e9](https://fr.linkedin.com/in/ousmane-cisse). \nCe mat\u00e9riel est soumis aux termes et conditions de la licence [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). \nL'utilisation gratuite est autoris\u00e9e \u00e0 des fins non commerciales.\n\n#
Topic 9. Analyse des s\u00e9ries temporelles en Python
\n##
Partie 2. Pr\u00e9dire l'avenir avec Facebook Prophet
\n\nLa pr\u00e9vision de s\u00e9ries chronologiques trouve une large application dans l'analyse de donn\u00e9es. Ce ne sont que quelques-unes des pr\u00e9visions imaginables des tendances futures qui pourraient \u00eatre utiles:\n- Le nombre de serveurs dont un service en ligne aura besoin l'ann\u00e9e prochaine.\n- La demande d'un produit d'\u00e9picerie dans un supermarch\u00e9 un jour donn\u00e9.\n- Le cours de cl\u00f4ture de demain d'un actif financier n\u00e9gociable.\n\nPour un autre exemple, nous pouvons faire une pr\u00e9diction des performances d'une \u00e9quipe, puis l'utiliser comme r\u00e9f\u00e9rence: d'abord pour fixer des objectifs pour l'\u00e9quipe, puis pour mesurer les performances r\u00e9elles de l'\u00e9quipe par rapport \u00e0 la r\u00e9f\u00e9rence.\n\nIl existe plusieurs m\u00e9thodes diff\u00e9rentes pour pr\u00e9dire les tendances futures, par exemple, [ARIMA](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average), [ARCH](https://en.wikipedia.org/wiki/Autoregressive_conditional_heteroskedasticity), [mod\u00e8les r\u00e9gressifs](https://en.wikipedia.org/wiki/Autoregressive_model), [r\u00e9seaux de neurones](https://medium.com/machine-learning-world/neural-networks-for-algorithmic-trading-1-2-correct-time-series-forecasting-backtesting-9776bfd9e589).\n\nDans cet article, nous examinerons [Prophet](https://facebook.github.io/prophet/), une biblioth\u00e8que de pr\u00e9visions de s\u00e9ries chronologiques publi\u00e9e par Facebook et open source, le 23 f\u00e9vrier 2017. Nous l'essayerons \u00e9galement dans le probl\u00e8me de pr\u00e9diction du nombre quotidien de publications sur Medium.\n\n## Plan de l'article\n\n1. Introduction\n2. Le mod\u00e8le de pr\u00e9vision de Prophet\n3. Entra\u00eenez-vous avec le Prophet\n\u00a0\u00a0\u00a0\u00a0* 3.1 Installation en Python\n * 3.2 Ensemble de donn\u00e9es\n\u00a0\u00a0\u00a0\u00a0* 3.3 Analyse visuelle exploratoire\n * 3.4 Faire une pr\u00e9vision\n\u00a0\u00a0\u00a0\u00a0* 3.5 \u00c9valuation de la qualit\u00e9 des pr\u00e9visions\n * 3.6 Visualisation\n4. Transformation Box-Cox\n5. R\u00e9sum\u00e9\n6. R\u00e9f\u00e9rences\n\n## 1. Introduction\n\nSelon [l'article](https://research.fb.com/prophet-forecasting-at-scale/) sur Facebook Research, Prophet a \u00e9t\u00e9 initialement d\u00e9velopp\u00e9 dans le but de cr\u00e9er des pr\u00e9visions commerciales de haute qualit\u00e9. Cette biblioth\u00e8que tente de r\u00e9soudre les difficult\u00e9s suivantes, communes \u00e0 de nombreuses s\u00e9ries chronologiques commerciales:\n- Effets saisonniers caus\u00e9s par le comportement humain: cycles hebdomadaires, mensuels et annuels, creux et pics les jours f\u00e9ri\u00e9s.\n- Changements de tendance dus aux nouveaux produits et aux \u00e9v\u00e9nements du march\u00e9.\n- Valeurs aberrantes.\n\nLes auteurs affirment que, m\u00eame avec les param\u00e8tres par d\u00e9faut, dans de nombreux cas, leur biblioth\u00e8que produit des pr\u00e9visions aussi pr\u00e9cises que celles fournies par des analystes exp\u00e9riment\u00e9s.\n\nDe plus, Prophet dispose d'un certain nombre de personnalisations intuitives et facilement interpr\u00e9tables qui permettent d'am\u00e9liorer progressivement la qualit\u00e9 du mod\u00e8le de pr\u00e9vision. 
Ce qui est particuli\u00e8rement important, ces param\u00e8tres sont tout \u00e0 fait compr\u00e9hensibles m\u00eame pour les non-experts en analyse de s\u00e9ries chronologiques, qui est un domaine de la science des donn\u00e9es n\u00e9cessitant certaines comp\u00e9tences et exp\u00e9rience.\n\nSoit dit en passant, l'article d'origine s'intitule \u00abPr\u00e9visions \u00e0 grande \u00e9chelle\u00bb, mais il ne s'agit pas de l'\u00e9chelle au sens \u00abhabituel\u00bb, qui traite des probl\u00e8mes de calcul et d'infrastructure d'un grand nombre de programmes de travail. Selon les auteurs, Prophet devrait bien \u00e9voluer dans les 3 domaines suivants:\n- Accessibilit\u00e9 \u00e0 un large public d'analystes, \u00e9ventuellement sans expertise approfondie des s\u00e9ries chronologiques.\n- Applicabilit\u00e9 \u00e0 un large \u00e9ventail de probl\u00e8mes de pr\u00e9vision distincts.\n- Estimation automatis\u00e9e des performances d'un grand nombre de pr\u00e9visions, y compris la signalisation des probl\u00e8mes potentiels pour leur inspection ult\u00e9rieure par l'analyste.\n\n## 2. Le mod\u00e8le de pr\u00e9vision Prophet\n\nMaintenant, regardons de plus pr\u00e8s comment fonctionne Prophet. Dans son essence, cette biblioth\u00e8que utilise le [mod\u00e8le de r\u00e9gression additive](https://en.wikipedia.org/wiki/Additive_model) $y(t)$ comprenant les composants suivants:\n\n$$y(t) = g(t) + s(t) + h(t) + \\epsilon_{t},$$\n\no\u00f9:\n* La tendance $g(t)$ mod\u00e9lise les changements non p\u00e9riodiques.\n* La saisonnalit\u00e9 $s(t)$ repr\u00e9sente des changements p\u00e9riodiques.\n* La composante vacances $h(t)$ fournit des informations sur les vacances et les \u00e9v\u00e9nements.\n\nCi-dessous, nous consid\u00e9rerons quelques propri\u00e9t\u00e9s importantes de ces composants de mod\u00e8le.\n\n### Tendance\n\nLa biblioth\u00e8que Prophet impl\u00e9mente deux mod\u00e8les de tendance possibles pour $g(t)$.\n\nLe premier est appel\u00e9 *Croissance satur\u00e9e non lin\u00e9aire*. Il est repr\u00e9sent\u00e9 sous la forme du [mod\u00e8le de croissance logistique](https://en.wikipedia.org/wiki/Fonction_logistique):\n\n$$g(t) = \\frac{C}{1+e^{-k(t - m)}},$$\n\no\u00f9:\n\n* $C$ est la capacit\u00e9 de charge (c'est-\u00e0-dire la valeur maximale de la courbe).\n\n* $k$ est le taux de croissance (qui repr\u00e9sente \"la pente\" de la courbe).\n\n* $m$ est un param\u00e8tre de d\u00e9calage.\n\nCette \u00e9quation logistique permet de mod\u00e9liser la croissance non lin\u00e9aire avec saturation, c'est-\u00e0-dire lorsque le taux de croissance d'une valeur diminue avec sa croissance. Un des exemples typiques serait de repr\u00e9senter la croissance de l'audience d'une application ou d'un site Web.\n\nEn fait, $C$ et $k$ ne sont pas n\u00e9cessairement des constantes et peuvent varier dans le temps. Prophet prend en charge le r\u00e9glage automatique et manuel de leur variabilit\u00e9. La biblioth\u00e8que peut elle-m\u00eame choisir des points optimaux de changements de tendance en ajustant les donn\u00e9es historiques fournies.\n\nEn outre, Prophet permet aux analystes de d\u00e9finir manuellement des points de changement du taux de croissance et des valeurs de capacit\u00e9 \u00e0 diff\u00e9rents moments. 
Par exemple, les analystes peuvent avoir des informations sur les dates des versions pr\u00e9c\u00e9dentes qui ont influenc\u00e9 de mani\u00e8re importante certains indicateurs cl\u00e9s de produit.\n\nLe deuxi\u00e8me mod\u00e8le de tendance est un simple *mod\u00e8le lin\u00e9aire par morceaux* (Piecewise Linear Model) avec un taux de croissance constant. \nIl est le mieux adapt\u00e9 aux probl\u00e8mes sans saturation de la croissance.\n\n### Saisonnalit\u00e9\n\nLa composante saisonni\u00e8re $s(t)$ fournit un mod\u00e8le flexible de changements p\u00e9riodiques dus \u00e0 la saisonnalit\u00e9 hebdomadaire et annuelle.\n\nLes donn\u00e9es saisonni\u00e8res hebdomadaires sont mod\u00e9lis\u00e9es avec des variables factices. Six nouvelles variables sont ajout\u00e9es: \u00ablundi\u00bb, \u00abmardi\u00bb, \u00abmercredi\u00bb, \u00abjeudi\u00bb, \u00abvendredi\u00bb, \u00absamedi\u00bb, qui prennent des valeurs 0 ou 1 selon le jour de la semaine. La caract\u00e9ristique \u00abdimanche\u00bb n'est pas ajout\u00e9e car ce serait une combinaison lin\u00e9aire des autres jours de la semaine, et ce fait aurait un effet n\u00e9gatif sur le mod\u00e8le.\n\nLe mod\u00e8le de saisonnalit\u00e9 annuelle dans Prophet repose sur la s\u00e9rie de Fourier.\n\nDepuis la version 0.2, vous pouvez \u00e9galement utiliser des s\u00e9ries chronologiques infra-journali\u00e8res et faire des pr\u00e9visions infra-journali\u00e8res, ainsi qu'utiliser la nouvelle caract\u00e9ristique de saisonnalit\u00e9 quotidienne.\n\n### Vacances et \u00e9v\u00e9nements\n\nLa composante $h(t)$ repr\u00e9sente les jours anormaux pr\u00e9visibles de l'ann\u00e9e, y compris ceux dont les horaires sont irr\u00e9guliers, par exemple les Black Fridays.\n\nPour utiliser cette caract\u00e9ristique, l'analyste doit fournir une liste personnalis\u00e9e d'\u00e9v\u00e9nements.\n\n### Erreur\n\nLe terme d'erreur $\\epsilon(t)$ repr\u00e9sente des informations qui n'\u00e9taient pas refl\u00e9t\u00e9es dans le mod\u00e8le. Habituellement, il est mod\u00e9lis\u00e9 comme un bruit normalement distribu\u00e9.\n\n### Analyse comparative (benchmark) de Prophet\n\nPour une description d\u00e9taill\u00e9e du mod\u00e8le et des algorithmes derri\u00e8re Prophet, reportez-vous \u00e0 l'article [\"Forecasting at scale\"](https://peerj.com/preprints/3190/) de Sean J. Taylor et Benjamin Letham.\n\nLes auteurs ont \u00e9galement compar\u00e9 leur biblioth\u00e8que avec plusieurs autres m\u00e9thodes de pr\u00e9vision de s\u00e9ries chronologiques. Ils ont utilis\u00e9 l'[Erreur absolue moyenne en pourcentage (MAPE)](https://en.wikipedia.org/wiki/Mean _absolue_ pourcentage_erreur) comme mesure de la pr\u00e9cision de la pr\u00e9diction. Dans cette analyse, Prophet a montr\u00e9 une erreur de pr\u00e9vision consid\u00e9rablement plus faible que les autres mod\u00e8les.\n\n\n\nRegardons de plus pr\u00e8s comment la qualit\u00e9 de la pr\u00e9vision a \u00e9t\u00e9 mesur\u00e9e dans l'article. 
Pour ce faire, nous aurons besoin de la formule d'erreur moyenne absolue en pourcentage.\n\nSoit $y_{i}$ la *valeur r\u00e9elle (historique)* et $\\hat{y}_{i}$ la *valeur pr\u00e9vue* donn\u00e9e par notre mod\u00e8le.\n\n$e_{i} = y_{i} - \\hat{y}_{i}$ est alors *l'erreur de pr\u00e9vision* et $p_{i} =\\frac{\\displaystyle e_{i}}{\\displaystyle y_{i}}$ est *l'erreur de pr\u00e9vision relative*.\n\nNous d\u00e9finissons\n\n$$MAPE = mean\\big(\\left |p_{i} \\right |\\big)$$\n\nMAPE est largement utilis\u00e9 comme mesure de la pr\u00e9cision des pr\u00e9dictions car il exprime l'erreur en pourcentage et peut donc \u00eatre utilis\u00e9 dans les \u00e9valuations de mod\u00e8les sur diff\u00e9rents ensembles de donn\u00e9es.\n\nDe plus, lors de l'\u00e9valuation d'un algorithme de pr\u00e9vision, il peut s'av\u00e9rer utile de calculer [MAE (Mean Absolute Error)](https://en.wikipedia.org/wiki/Mean _error_ absolue) afin d'avoir une image des erreurs en nombres absolus. En utilisant des composants pr\u00e9c\u00e9demment d\u00e9finis, son \u00e9quation sera\n\n$$MAE = mean\\big(\\left |e_{i}\\right |\\big)$$\n\nQuelques mots sur les algorithmes avec lesquels Prophet a \u00e9t\u00e9 compar\u00e9. La plupart d'entre eux sont assez simples et sont souvent utilis\u00e9s comme r\u00e9f\u00e9rence pour d'autres mod\u00e8les:\n* `naive` est une approche de pr\u00e9vision simpliste dans laquelle nous pr\u00e9disons toutes les valeurs futures en nous appuyant uniquement sur l'observation au dernier moment disponible.\n* `snaive` (saisonnier na\u00eff) est un mod\u00e8le qui fait des pr\u00e9dictions constantes en tenant compte des informations sur la saisonnalit\u00e9. Par exemple, dans le cas de donn\u00e9es saisonni\u00e8res hebdomadaires pour chaque futur lundi, nous pr\u00e9dirions la valeur du dernier lundi et pour tous les futurs mardis, nous utiliserions la valeur du dernier mardi, etc.\n* `mean` utilise la valeur moyenne des donn\u00e9es comme pr\u00e9vision.\n* `arima` signifie *Autoregressive Integrated Moving Average*, voir [Wikipedia](https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average) pour plus de d\u00e9tails.\n* `ets` signifie *Lissage exponentiel*, voir [Wikipedia](https://en.wikipedia.org/wiki/Exponential_smoothing) pour plus d'informations.\n\n## 3. Entra\u00eenez-vous avec Facebook Prophet\n\n### 3.1 Installation en Python\n\nTout d'abord, vous devez installer la biblioth\u00e8que. Prophet est disponible pour Python et R. Le choix d\u00e9pendra de vos pr\u00e9f\u00e9rences personnelles et des exigences du projet. Plus loin dans cet article, nous utiliserons Python.\n\nEn Python, vous pouvez installer Prophet \u00e0 l'aide de PyPI:\n```\n$ pip install fbprophet\n```\n\nDans R, vous trouverez le package CRAN correspondant. 
Reportez-vous \u00e0 la [documentation](https://facebookincubator.github.io/prophet/docs/installation.html) pour plus de d\u00e9tails.\n\nImportons les modules dont nous aurons besoin et initialisons notre environnement:\n\n\n```python\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nfrom scipy import stats\n\n%matplotlib inline\n```\n\n### 3.2 Jeu de donn\u00e9es\n\nNous pr\u00e9dirons le nombre quotidien de publications publi\u00e9es sur [Medium](https://medium.com/).\n\nTout d'abord, nous chargeons notre jeu de donn\u00e9es.\n\n\n```python\ndf = pd.read_csv(\"../../data/medium_posts.csv.zip\", sep=\"\\t\")\n```\n\nEnsuite, nous omettons toutes les colonnes \u00e0 l'exception de `published` et `url`. Le premier correspond \u00e0 la dimension temporelle tandis que le second identifie de mani\u00e8re unique un message par son URL. Par la suite, nous nous d\u00e9barrassons des doublons possibles et des valeurs manquantes dans les donn\u00e9es:\n\n\n```python\ndf = df[[\"published\", \"url\"]].dropna().drop_duplicates()\n```\n\nEnsuite, nous devons convertir `published` au format datetime car par d\u00e9faut `pandas` traite ce champ comme une cha\u00eene.\n\n\n```python\ndf[\"published\"] = pd.to_datetime(df[\"published\"])\n```\n\nTrions la trame de donn\u00e9es par date et jetons un \u0153il \u00e0 ce que nous avons:\n\n\n```python\ndf.sort_values(by=[\"published\"]).head(n=3)\n```\n\nLa date de sortie publique de Medium \u00e9tait le 15 ao\u00fbt 2012. Mais, comme vous pouvez le voir sur les donn\u00e9es ci-dessus, il existe au moins plusieurs lignes avec des dates de publication beaucoup plus anciennes. Ils sont apparus d'une mani\u00e8re ou d'une autre dans notre ensemble de donn\u00e9es, mais ils ne sont gu\u00e8re l\u00e9gitimes. Nous allons simplement couper notre s\u00e9rie chronologique pour ne conserver que les lignes qui tombent sur la p\u00e9riode du 15 ao\u00fbt 2012 au 25 juin 2017:\n\n\n```python\ndf = df[\n (df[\"published\"] > \"2012-08-15\") & (df[\"published\"] < \"2017-06-26\")\n].sort_values(by=[\"published\"])\ndf.head(n=3)\n```\n\n\n```python\ndf.tail(n=3)\n```\n\nComme nous allons pr\u00e9dire le nombre de publications, nous allons agr\u00e9ger et compter les publications uniques \u00e0 chaque moment donn\u00e9. Nous nommerons la nouvelle colonne correspondante `posts`:\n\n\n```python\naggr_df = df.groupby(\"published\")[[\"url\"]].count()\naggr_df.columns = [\"posts\"]\n```\n\nDans cette pratique, nous sommes int\u00e9ress\u00e9s par le nombre de messages **par jour**. Mais en ce moment, toutes nos donn\u00e9es sont divis\u00e9es en intervalles de temps irr\u00e9guliers qui sont inf\u00e9rieurs \u00e0 une journ\u00e9e. C'est ce qu'on appelle une s\u00e9rie chronologique infra-journali\u00e8re (*sub-daily time series*). Pour le voir, affichons les 3 premi\u00e8res lignes:\n\n\n```python\naggr_df.head(n=3)\n```\n\nPour r\u00e9soudre ce probl\u00e8me, nous devons agr\u00e9ger le nombre de messages par \"bins\" d'une taille de date. Dans l'analyse des s\u00e9ries chronologiques, ce processus est appel\u00e9 *r\u00e9\u00e9chantillonnage* (*resampling*). Et si l'on *r\u00e9duit* le taux d'\u00e9chantillonnage des donn\u00e9es, il est souvent appel\u00e9 *sous-\u00e9chantillonnage* (*downsampling*).\n\nHeureusement, `pandas` a une fonctionnalit\u00e9 int\u00e9gr\u00e9e pour cette t\u00e2che. 
Nous allons r\u00e9\u00e9chantillonner notre indice de date jusqu'\u00e0 des \"bins\" d'un jour:\n\n\n```python\ndaily_df = aggr_df.resample(\"D\").apply(sum)\ndaily_df.head(n=3)\n```\n\n### 3.3 Analyse visuelle exploratoire\n\nComme toujours, il peut \u00eatre utile et instructif de regarder une repr\u00e9sentation graphique de vos donn\u00e9es.\n\nNous allons cr\u00e9er un trac\u00e9 de s\u00e9rie chronologique pour toute la plage de temps. L'affichage de donn\u00e9es sur une p\u00e9riode aussi longue peut donner des indices sur la saisonnalit\u00e9 et les \u00e9carts anormaux visibles.\n\nTout d'abord, nous importons et initialisons la biblioth\u00e8que `Plotly`, qui permet de cr\u00e9er de superbes graphes interactifs:\n\n\n```python\nfrom plotly import graph_objs as go\nfrom plotly.offline import init_notebook_mode, iplot\n\n# Initialize plotly\ninit_notebook_mode(connected=True)\n```\n\nNous d\u00e9finissons \u00e9galement une fonction d'aide, qui tracera nos trames de donn\u00e9es tout au long de l'article:\n\n\n```python\ndef plotly_df(df, title=\"\"):\n \"\"\"Visualize all the dataframe columns as line plots.\"\"\"\n common_kw = dict(x=df.index, mode=\"lines\")\n data = [go.Scatter(y=df[c], name=c, **common_kw) for c in df.columns]\n layout = dict(title=title)\n fig = dict(data=data, layout=layout)\n iplot(fig, show_link=False)\n```\n\nEssayons de tracer notre jeu de donn\u00e9es *tel quel*:\n\n\n```python\nplotly_df(daily_df, title=\"Posts on Medium (daily)\")\n```\n\nLes donn\u00e9es \u00e0 haute fr\u00e9quence peuvent \u00eatre assez difficiles \u00e0 analyser. M\u00eame avec la possibilit\u00e9 de zoomer fournie par `Plotly`, il est difficile d'inf\u00e9rer quoi que ce soit de significatif \u00e0 partir de ce graphique, \u00e0 l'exception de la tendance \u00e0 la hausse et \u00e0 l'acc\u00e9l\u00e9ration.\n\nPour r\u00e9duire le bruit, nous allons r\u00e9\u00e9chantillonner le compte \u00e0 rebours des postes jusqu'\u00e0 la semaine. Outre le *binning*, d'autres techniques possibles de r\u00e9duction du bruit incluent [Moving-Average Smoothing](https://en.wikipedia.org/wiki/Moving_average) et [Exponential Smoothing](https://en.wikipedia.org/wiki/Exponential_smoothing), entre autres.\n\nNous sauvegardons notre dataframe sous-\u00e9chantillonn\u00e9 dans une variable distincte, car dans cette pratique, nous ne travaillerons qu'avec des s\u00e9ries journali\u00e8res:\n\n\n```python\nweekly_df = daily_df.resample(\"W\").apply(sum)\n```\n\nEnfin, nous tra\u00e7ons le r\u00e9sultat:\n\n\n```python\nplotly_df(weekly_df, title=\"Posts on Medium (weekly)\")\n```\n\nCe graphique sous-\u00e9chantillonn\u00e9 s'av\u00e8re un peu meilleur pour la perception d'un analyste.\n\nL'une des fonctions les plus utiles fournies par `Plotly` est la possibilit\u00e9 de plonger rapidement dans diff\u00e9rentes p\u00e9riodes de la chronologie afin de mieux comprendre les donn\u00e9es et trouver des indices visuels sur les tendances possibles, les effets p\u00e9riodiques et irr\u00e9guliers.\n\nPar exemple, un zoom avant sur quelques ann\u00e9es cons\u00e9cutives nous montre des points temporels correspondant aux vacances de No\u00ebl, qui influencent grandement les comportements humains.\n\nMaintenant, nous allons omettre les premi\u00e8res ann\u00e9es d'observations, jusqu'en 2015. Premi\u00e8rement, elles ne contribueront pas beaucoup \u00e0 la qualit\u00e9 des pr\u00e9visions en 2017. 
Deuxi\u00e8mement, ces premi\u00e8res ann\u00e9es, ayant un nombre tr\u00e8s faible de messages par jour, sont susceptible d'augmenter le bruit dans nos pr\u00e9visions, car le mod\u00e8le serait oblig\u00e9 d'ajuster ces donn\u00e9es historiques anormales avec des donn\u00e9es plus pertinentes et indicatives des derni\u00e8res ann\u00e9es.\n\n\n```python\ndaily_df = daily_df.loc[daily_df.index >= \"2015-01-01\"]\ndaily_df.head(n=3)\n```\n\nPour r\u00e9sumer, \u00e0 partir de l'analyse visuelle, nous pouvons voir que notre ensemble de donn\u00e9es n'est pas stationnaire avec une tendance croissante importante. Il montre \u00e9galement une saisonnalit\u00e9 hebdomadaire et annuelle et un certain nombre de jours anormaux chaque ann\u00e9e.\n\n### 3.4 Faire une pr\u00e9vision\n\nL'API de Prophet est tr\u00e8s similaire \u00e0 celle que vous pouvez trouver dans `sklearn`. Nous cr\u00e9ons d'abord un mod\u00e8le, puis appelons la m\u00e9thode `fit` et, enfin, faisons une pr\u00e9vision. L'entr\u00e9e de la m\u00e9thode `fit` est un` DataFrame` avec deux colonnes:\n* `ds` (datestamp ou horodatage) doit \u00eatre de type` date` ou `datetime`.\n* `y` est une valeur num\u00e9rique que nous voulons pr\u00e9dire.\n\nPour commencer, nous allons importer la biblioth\u00e8que et \u00e9liminer les messages de diagnostic sans importance:\n\n\n```python\nimport logging\n\nfrom fbprophet import Prophet\n\nlogging.getLogger().setLevel(logging.ERROR)\n```\n\nConvertissons notre dataframe de donn\u00e9es au format requis par Prophet:\n\n\n```python\ndf = daily_df.reset_index()\ndf.columns = [\"ds\", \"y\"]\ndf.tail(n=3)\n```\n\nLes auteurs de la biblioth\u00e8que conseillent g\u00e9n\u00e9ralement de faire des pr\u00e9dictions bas\u00e9es sur au moins plusieurs mois, id\u00e9alement, plus d'un an de donn\u00e9es historiques. Heureusement, dans notre cas, nous avons plus de quelques ann\u00e9es de donn\u00e9es pour s'adapter au mod\u00e8le.\n\nPour mesurer la qualit\u00e9 de nos pr\u00e9visions, nous devons diviser notre ensemble de donn\u00e9es en une *partie historique*, qui est la premi\u00e8re et la plus grande tranche de nos donn\u00e9es, et une *partie pr\u00e9diction*, qui sera situ\u00e9e \u00e0 la fin de la chronologie. Nous allons supprimer le dernier mois de l'ensemble de donn\u00e9es afin de l'utiliser plus tard comme cible de pr\u00e9diction:\n\n\n```python\nprediction_size = 30\ntrain_df = df[:-prediction_size]\ntrain_df.tail(n=3)\n```\n\nMaintenant, nous devons cr\u00e9er un nouvel objet `Prophet`. Ici, nous pouvons passer les param\u00e8tres du mod\u00e8le dans le constructeur. Mais dans cet article, nous utiliserons les valeurs par d\u00e9faut. Ensuite, nous formons notre mod\u00e8le en invoquant sa m\u00e9thode `fit` sur notre jeu de donn\u00e9es de formation:\n\n\n```python\nm = Prophet()\nm.fit(train_df);\n```\n\nEn utilisant la m\u00e9thode `Prophet.make_future_dataframe`, nous cr\u00e9ons un dataframe qui contiendra toutes les dates de l'historique et s'\u00e9tendra \u00e9galement dans le futur pour les 30 jours que nous avons omis auparavant.\n\n\n```python\nfuture = m.make_future_dataframe(periods=prediction_size)\nfuture.tail(n=3)\n```\n\nNous pr\u00e9disons les valeurs avec `Prophet` en passant les dates pour lesquelles nous voulons cr\u00e9er une pr\u00e9vision. Si nous fournissons \u00e9galement les dates historiques (comme dans notre cas), en plus de la pr\u00e9diction, nous obtiendrons un ajustement dans l'\u00e9chantillon pour l'historique. 
Appelons la m\u00e9thode `predict` du mod\u00e8le avec notre dataframe `future` en entr\u00e9e:\n\n\n```python\nforecast = m.predict(future)\nforecast.tail(n=3)\n```\n\nDans le dataframe r\u00e9sultant, vous pouvez voir de nombreuses colonnes caract\u00e9risant la pr\u00e9diction, y compris les composants de tendance et de saisonnalit\u00e9 ainsi que leurs intervalles de confiance. La pr\u00e9vision elle-m\u00eame est stock\u00e9e dans la colonne `yhat`.\n\nLa biblioth\u00e8que Prophet poss\u00e8de ses propres outils de visualisation int\u00e9gr\u00e9s qui nous permettent d'\u00e9valuer rapidement le r\u00e9sultat.\n\nTout d'abord, il existe une m\u00e9thode appel\u00e9e `Prophet.plot` qui trace tous les points de la pr\u00e9vision:\n\n\n```python\nm.plot(forecast);\n```\n\nCe graphique n'a pas l'air tr\u00e8s informatif. La seule conclusion d\u00e9finitive que nous pouvons tirer ici est que le mod\u00e8le a trait\u00e9 de nombreux points de donn\u00e9es comme des valeurs aberrantes.\n\nLa deuxi\u00e8me fonction `Prophet.plot_components` pourrait \u00eatre beaucoup plus utile dans notre cas. Il nous permet d'observer diff\u00e9rentes composantes du mod\u00e8le s\u00e9par\u00e9ment: tendance, saisonnalit\u00e9 annuelle et hebdomadaire. De plus, si vous fournissez des informations sur les vacances et les \u00e9v\u00e9nements \u00e0 votre mod\u00e8le, elles seront \u00e9galement affich\u00e9es dans ce graphique.\n\nEssayons-le:\n\n\n```python\nm.plot_components(forecast);\n```\n\nComme vous pouvez le voir sur le graphique des tendances, Prophet a fait du bon travail en adaptant la croissance acc\u00e9l\u00e9r\u00e9e des nouveaux messages \u00e0 la fin de 2016. Le graphique de la saisonnalit\u00e9 hebdomadaire conduit \u00e0 la conclusion qu'il y a g\u00e9n\u00e9ralement moins de nouveaux messages le samedi et le dimanche que le les autres jours de la semaine. Dans le graphique de saisonnalit\u00e9 annuelle, il y a une baisse importante le jour de No\u00ebl.\n\n### 3.5 \u00c9valuation de la qualit\u00e9 des pr\u00e9visions\n\n\u00c9valuons la qualit\u00e9 de l'algorithme en calculant les mesures d'erreur pour les 30 derniers jours que nous avons pr\u00e9dits. Pour cela, nous aurons besoin des observations $y_i$ et des valeurs pr\u00e9dites correspondantes $\\hat{y}_i$.\n\nExaminons l'objet `forecast` que la biblioth\u00e8que a cr\u00e9\u00e9 pour nous:\n\n\n```python\nprint(\", \".join(forecast.columns))\n```\n\nNous pouvons voir que cette base de donn\u00e9es contient toutes les informations dont nous avons besoin, \u00e0 l'exception des valeurs historiques. Nous devons joindre l'objet `forecast` avec les valeurs r\u00e9elles `y` de l'ensemble de donn\u00e9es d'origine `df`. 
Pour cela nous allons d\u00e9finir une helper fonction que nous r\u00e9utiliserons plus tard:\n\n\n```python\ndef make_comparison_dataframe(historical, forecast):\n \"\"\"Join the history with the forecast.\n \n The resulting dataset will contain columns 'yhat', 'yhat_lower', 'yhat_upper' and 'y'.\n \"\"\"\n return forecast.set_index(\"ds\")[[\"yhat\", \"yhat_lower\", \"yhat_upper\"]].join(\n historical.set_index(\"ds\")\n )\n```\n\nAppliquons cette fonction \u00e0 notre derni\u00e8re pr\u00e9vision:\n\n\n```python\ncmp_df = make_comparison_dataframe(df, forecast)\ncmp_df.tail(n=3)\n```\n\nNous allons \u00e9galement d\u00e9finir une helper fonction que nous utiliserons pour \u00e9valuer la qualit\u00e9 de nos pr\u00e9visions avec les mesures d'erreur MAPE et MAE:\n\n\n```python\ndef calculate_forecast_errors(df, prediction_size):\n \"\"\"Calculate MAPE and MAE of the forecast.\n \n Args:\n df: joined dataset with 'y' and 'yhat' columns.\n prediction_size: number of days at the end to predict.\n \"\"\"\n\n # Make a copy\n df = df.copy()\n\n # Now we calculate the values of e_i and p_i according to the formulas given in the article above.\n df[\"e\"] = df[\"y\"] - df[\"yhat\"]\n df[\"p\"] = 100 * df[\"e\"] / df[\"y\"]\n\n # Recall that we held out the values of the last `prediction_size` days\n # in order to predict them and measure the quality of the model.\n\n # Now cut out the part of the data which we made our prediction for.\n predicted_part = df[-prediction_size:]\n\n # Define the function that averages absolute error values over the predicted part.\n error_mean = lambda error_name: np.mean(np.abs(predicted_part[error_name]))\n\n # Now we can calculate MAPE and MAE and return the resulting dictionary of errors.\n return {\"MAPE\": error_mean(\"p\"), \"MAE\": error_mean(\"e\")}\n```\n\nUtilisons notre fonction:\n\n\n```python\nfor err_name, err_value in calculate_forecast_errors(cmp_df, prediction_size).items():\n print(err_name, err_value)\n```\n\nEn cons\u00e9quence, l'erreur relative de notre pr\u00e9vision (MAPE) est d'environ 22,72%, et en moyenne notre mod\u00e8le est erron\u00e9 de 70,45 posts (MAE).\n\n### 3.6 Visualisation\n\nCr\u00e9ons notre propre visualisation du mod\u00e8le construit par Prophet. Il comprendra les valeurs r\u00e9elles, les pr\u00e9visions et les intervalles de confiance.\n\nPremi\u00e8rement, nous allons tracer les donn\u00e9es sur une p\u00e9riode de temps plus courte pour rendre les points de donn\u00e9es plus faciles \u00e0 distinguer. Deuxi\u00e8mement, nous ne montrerons les performances du mod\u00e8le que pour la p\u00e9riode que nous avons pr\u00e9vue, c'est-\u00e0-dire les 30 derniers jours. 
Il semble que ces deux mesures devraient nous donner un graphique plus lisible.\n\nTroisi\u00e8mement, nous utiliserons `Plotly` pour rendre notre graphique interactif, ce qui est id\u00e9al pour l'exploration.\n\nNous d\u00e9finirons notre propre helper fonction `show_forecast` et l'appellerons (pour en savoir plus sur son fonctionnement, veuillez vous r\u00e9f\u00e9rer aux commentaires dans le code et la [documentation](https://plot.ly/python/)):\n\n\n```python\ndef show_forecast(cmp_df, num_predictions, num_values, title):\n \"\"\"Visualize the forecast.\"\"\"\n\n def create_go(name, column, num, **kwargs):\n points = cmp_df.tail(num)\n args = dict(name=name, x=points.index, y=points[column], mode=\"lines\")\n args.update(kwargs)\n return go.Scatter(**args)\n\n lower_bound = create_go(\n \"Lower Bound\",\n \"yhat_lower\",\n num_predictions,\n line=dict(width=0),\n marker=dict(color=\"444\"),\n )\n upper_bound = create_go(\n \"Upper Bound\",\n \"yhat_upper\",\n num_predictions,\n line=dict(width=0),\n marker=dict(color=\"444\"),\n fillcolor=\"rgba(68, 68, 68, 0.3)\",\n fill=\"tonexty\",\n )\n forecast = create_go(\n \"Forecast\", \"yhat\", num_predictions, line=dict(color=\"rgb(31, 119, 180)\")\n )\n actual = create_go(\"Actual\", \"y\", num_values, marker=dict(color=\"red\"))\n\n # In this case the order of the series is important because of the filling\n data = [lower_bound, upper_bound, forecast, actual]\n\n layout = go.Layout(yaxis=dict(title=\"Posts\"), title=title, showlegend=False)\n fig = go.Figure(data=data, layout=layout)\n iplot(fig, show_link=False)\n\n\nshow_forecast(cmp_df, prediction_size, 100, \"New posts on Medium\")\n```\n\n\u00c0 premi\u00e8re vue, la pr\u00e9diction des valeurs moyennes par notre mod\u00e8le semble raisonnable. La valeur \u00e9lev\u00e9e de MAPE que nous avons obtenue ci-dessus peut s'expliquer par le fait que le mod\u00e8le n'a pas r\u00e9ussi \u00e0 saisir l'amplitude croissante de pic-\u00e0-pic (peak-to-peak) d'une faible saisonnalit\u00e9. \n\nEn outre, nous pouvons conclure du graphique ci-dessus que de nombreuses valeurs r\u00e9elles se trouvent en dehors de l'intervalle de confiance. Prophet peut ne pas convenir aux s\u00e9ries chronologiques avec une variance instable, du moins lorsque les param\u00e8tres par d\u00e9faut sont utilis\u00e9s. Nous allons essayer de r\u00e9soudre ce probl\u00e8me en appliquant une transformation \u00e0 nos donn\u00e9es.\n\n## 4. Transformation Box-Cox\n\nJusqu'\u00e0 pr\u00e9sent, nous avons utilis\u00e9 Prophet avec les param\u00e8tres par d\u00e9faut et les donn\u00e9es d'origine. Nous laisserons les param\u00e8tres du mod\u00e8le seuls. Mais malgr\u00e9 cela, nous avons encore des progr\u00e8s \u00e0 faire. Dans cette section, nous appliquerons la [Box\u2013Cox transformation](http://onlinestatbook.com/2/transformations/box-cox.html) \u00e0 notre s\u00e9rie originale. Voyons o\u00f9 cela nous m\u00e8nera.\n\nQuelques mots sur cette transformation. Il s'agit d'une transformation de donn\u00e9es monotone qui peut \u00eatre utilis\u00e9e pour stabiliser la variance. 
Nous utiliserons la transformation Box-Cox \u00e0 un param\u00e8tre, qui est d\u00e9finie par l'expression suivante:\n\n$$\n\\begin{equation}\n boxcox^{(\\lambda)}(y_{i}) = \\begin{cases}\n \\frac{\\displaystyle y_{i}^{\\lambda} - 1}{\\displaystyle \\lambda} &, \\text{if $\\lambda \\neq 0$}.\\\\\n ln(y_{i}) &, \\text{if $\\lambda = 0$}.\n \\end{cases}\n\\end{equation}\n$$\n\nNous devrons impl\u00e9menter l'inverse de cette fonction afin de pouvoir restaurer l'\u00e9chelle de donn\u00e9es d'origine. Il est facile de voir que l'inverse est d\u00e9fini comme:\n\n$$\n\\begin{equation}\n invboxcox^{(\\lambda)}(y_{i}) = \\begin{cases}\n e^{\\left (\\frac{\\displaystyle ln(\\lambda y_{i} + 1)}{\\displaystyle \\lambda} \\right )} &, \\text{if $\\lambda \\neq 0$}.\\\\\n e^{y_{i}} &, \\text{if $\\lambda = 0$}.\n \\end{cases}\n\\end{equation}\n$$\n\nLa fonction correspondante en Python est impl\u00e9ment\u00e9e comme suit:\n\n\n```python\ndef inverse_boxcox(y, lambda_):\n return np.exp(y) if lambda_ == 0 else np.exp(np.log(lambda_ * y + 1) / lambda_)\n```\n\nTout d'abord, nous pr\u00e9parons notre jeu de donn\u00e9es en d\u00e9finissant son index:\n\n\n```python\ntrain_df2 = train_df.copy().set_index(\"ds\")\n```\n\nEnsuite, nous appliquons la fonction `stats.boxcox` de` Scipy`, qui applique la transformation Box \u2013 Cox. Dans notre cas, il renverra deux valeurs. La premi\u00e8re est la s\u00e9rie transform\u00e9e et la seconde est la valeur trouv\u00e9e de $\\lambda$ qui est optimale en termes de maximum de log-vraisemblance (maximum log-likelihood):\n\n\n```python\ntrain_df2[\"y\"], lambda_prophet = stats.boxcox(train_df2[\"y\"])\ntrain_df2.reset_index(inplace=True)\n```\n\nNous cr\u00e9ons un nouveau mod\u00e8le `Prophet` et r\u00e9p\u00e9tons le cycle d'ajustement de pr\u00e9vision que nous avons d\u00e9j\u00e0 fait ci-dessus:\n\n\n```python\nm2 = Prophet()\nm2.fit(train_df2)\nfuture2 = m2.make_future_dataframe(periods=prediction_size)\nforecast2 = m2.predict(future2)\n```\n\n\u00c0 ce stade, nous devons inverser la transformation de Box \u2013 Cox avec notre fonction inverse et la valeur connue de $\\lambda$:\n\n\n```python\nfor column in [\"yhat\", \"yhat_lower\", \"yhat_upper\"]:\n forecast2[column] = inverse_boxcox(forecast2[column], lambda_prophet)\n```\n\nIci, nous allons r\u00e9utiliser nos outils pour faire le dataframe de comparaison et calculer les erreurs:\n\n\n```python\ncmp_df2 = make_comparison_dataframe(df, forecast2)\nfor err_name, err_value in calculate_forecast_errors(cmp_df2, prediction_size).items():\n print(err_name, err_value)\n```\n\nOn peut donc affirmer avec certitude que la qualit\u00e9 du mod\u00e8le s'est am\u00e9lior\u00e9e. \n\nEnfin, tracons nos performances pr\u00e9c\u00e9dentes avec les derniers r\u00e9sultats c\u00f4te \u00e0 c\u00f4te. Notez que nous utilisons `prediction_size` pour le troisi\u00e8me param\u00e8tre afin de zoomer sur l'intervalle pr\u00e9vu:\n\n\n```python\nshow_forecast(cmp_df, prediction_size, 100, \"No transformations\")\nshow_forecast(cmp_df2, prediction_size, 100, \"Box\u2013Cox transformation\")\n```\n\nNous voyons que la pr\u00e9vision des changements hebdomadaires dans le deuxi\u00e8me graphique est beaucoup plus proche des valeurs r\u00e9elles maintenant.\n\n## 5. R\u00e9sum\u00e9\n\nNous avons jet\u00e9 un coup d'\u0153il \u00e0 *Prophet*, une biblioth\u00e8que de pr\u00e9visions open source sp\u00e9cifiquement destin\u00e9e aux s\u00e9ries chronologiques commerciales. 
Nous avons \u00e9galement effectu\u00e9 des exercices pratiques de pr\u00e9vision des s\u00e9ries chronologiques.\n\nComme nous l'avons vu, la biblioth\u00e8que Prophet ne fait pas de merveilles et ses pr\u00e9dictions pr\u00eates \u00e0 l'emploi ne sont pas [id\u00e9ales](https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization). Il appartient toujours au data scientist d'explorer les r\u00e9sultats des pr\u00e9visions, d'ajuster les param\u00e8tres du mod\u00e8le et de transformer les donn\u00e9es si n\u00e9cessaire.\n\nToutefois, cette biblioth\u00e8que est conviviale et facilement personnalisable. La seule possibilit\u00e9 de prendre en compte les jours anormaux connus de l'analyste \u00e0 l'avance peut faire la diff\u00e9rence dans certains cas\n\nDans l'ensemble, la biblioth\u00e8que Prophet vaut la peine de faire partie de votre bo\u00eete \u00e0 outils analytiques.\n\n## 6. R\u00e9f\u00e9rences\n\n- Official [Prophet repository](https://github.com/facebookincubator/prophet) on GitHub.\n- Official [Prophet documentation](https://facebookincubator.github.io/prophet/docs/quick_start.html).\n- Sean J. Taylor, Benjamin Letham [\"Forecasting at scale\"](https://facebookincubator.github.io/prophet/static/prophet_paper_20170113.pdf) \u2014 scientific paper explaining the algorithm which lays the foundation of `Prophet`.\n- [Forecasting Website Traffic Using Facebook\u2019s Prophet Library](http://pbpython.com/prophet-overview.html) \u2014 `Prophet` overview with an example of website traffic forecasting.\n- Rob J. Hyndman, George Athanasopoulos [\"Forecasting: principles and practice\"](https://www.otexts.org/fpp) \u2013 a very good online book about time series forecasting.\n", "meta": {"hexsha": "c4fb4055ac180c39e64294e3c12204880a1a963b", "size": 48233, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "jupyter_french/topic09_time_series/topic9_part2_facebook_prophet-fr_def.ipynb", "max_stars_repo_name": "salman394/AI-ml--course", "max_stars_repo_head_hexsha": "2ed3a1382614dd00184e5179026623714ccc9e8c", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "jupyter_french/topic09_time_series/topic9_part2_facebook_prophet-fr_def.ipynb", "max_issues_repo_name": "salman394/AI-ml--course", "max_issues_repo_head_hexsha": "2ed3a1382614dd00184e5179026623714ccc9e8c", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "jupyter_french/topic09_time_series/topic9_part2_facebook_prophet-fr_def.ipynb", "max_forks_repo_name": "salman394/AI-ml--course", "max_forks_repo_head_hexsha": "2ed3a1382614dd00184e5179026623714ccc9e8c", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.3031709203, "max_line_length": 505, "alphanum_fraction": 0.6396243236, "converted": true, "num_tokens": 8616, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4339814501625211, "lm_q2_score": 0.2281565074091475, "lm_q1q2_score": 0.09901569194943782}} {"text": "# Informa\u00e7\u00f5es sobre Sistema (Linux / Mac OS)\n\nAutoria : [Roberto Colistete J\u00fanior](https://github.com/rcolistete)\n\n\u00daltima atualiza\u00e7\u00e3o : 14/02/2021\n\nVers\u00e3o usando alguns comandos compat\u00edveis com somente Linux e Mac OS, logo n\u00e3o \u00e9 100% compat\u00edvel com Windows.\n\nTal incompatibilidade com Windows tem a vantagem de obter informa\u00e7\u00f5es mais detalhadas sobre Linux/Mac OS.\n\n\u00c9 importante listar as informa\u00e7\u00f5es de sistema computacional em termos de hardware e software para :\n- confirmar se certo recurso de software, lan\u00e7ado a partir de certa vers\u00e3o, est\u00e1 dispon\u00edvel;\n- ver os requerimentos m\u00ednimos :\n * de softwares, que podem depender de vers\u00f5es m\u00ednimas de outros softwares (depend\u00eancias);\n * hardware, como exist\u00eancia de GPU (Graphical Processing Unit, placa-de-v\u00eddeo), vers\u00e3o m\u00ednima da Compute Capability da GPU para acesso a certo recurso, RAM m\u00ednima na CPU ou GPU para rodar certa aplica\u00e7\u00e3o, etc;\n- ao fazer compara\u00e7\u00f5es de desempenho (benchmark) de execu\u00e7\u00e3o de software, seja conhecida a configura\u00e7\u00e3o utilizada que normalmente afeta em muito o desempenho.\n\n## Informa\u00e7\u00f5es sobre arquitetura de hardware e sistema operacional do computador\n\n### Para Linux / Mac OS\n\nNome do sistema operacional, nome do computador na rede, vers\u00e3o do kernel, arquitetura de hardware (32/64 bits), etc :\n\n\n```python\n!uname -a\n```\n\nNome e vers\u00e3o do sistema operacional :\n\n\n```python\n!lsb_release -a\n```\n\n Diversas informa\u00e7\u00f5es da CPU, como nome do processador, frequ\u00eancia em MHz da CPU, n\u00famero de n\u00facleos/cores, n\u00famero de threads, mem\u00f3ria cache, arquitetura de hardware (32/64 bits), etc :\n\n\n```python\n!lscpu\n```\n\nParti\u00e7\u00f5es do sistema de arquivos do sistema operacional :\n\n\n```python\n!df -h\n```\n\nMem\u00f3ria RAM total e em uso pelo sistema operacional :\n\n\n```python\n!free\n```\n\nVers\u00e3o do compilador C/C++ gcc :\n\n\n```python\n!gcc --version\n```\n\n### Para Linux, Mac OS e Windows\n\nO [m\u00f3dulo Pyton \"platform\"](https://pymotw.com/2/platform/) fornece diversas informa\u00e7\u00f5es do sistema (computador, sistema operacional, Python, etc).\n\n\n```python\nimport platform\n```\n\n\n```python\nplatform.platform()\n```\n\nMais detalhes, como nome do computador na rede (node), arquitetura de hardware (32/64 bits), etc :\n\n\n```python\nplatform.uname()\n```\n\n## Informa\u00e7\u00f5es sobre vers\u00e3o de Python e alguns m\u00f3dulos\n\nLembrando que Python tem v\u00e1rios milhares de m\u00f3dulos extras que podem ser instalados, vide [PyPI - Python Package Index](https://pypi.org/) com uns 300 mil projetos atualmente, logo a lista abaixo \u00e9 somente uma escolha de m\u00f3dulos mais populares para certas aplica\u00e7\u00f5es.\n\n### Python\n\nN\u00famero da vers\u00e3o de [Python](https://www.python.org/) :\n\n\n```python\nplatform.python_version()\n```\n\ndata da vers\u00e3o :\n\n\n```python\nplatform.python_build()\n```\n\ncompilador C/C++ utilizado para criar tal vers\u00e3o de Python :\n\n\n```python\nplatform.python_compiler()\n```\n\n### NumPy\n\nVers\u00e3o de [NumPy](https://numpy.org/), onde 'np' \u00e9 um apelido comum :\n\n\n```python\nimport numpy as np\n```\n\n\n```python\nnp.__version__\n```\n\n### 
MatPlotLib\n\nVers\u00e3o de [MatPlotLib](https://matplotlib.org/), onde 'mpl' \u00e9 um apelido comum :\n\n\n```python\nimport matplotlib as mpl\n```\n\n\n```python\nmpl.__version__\n```\n\n### SymPy\n\nVers\u00e3o de [SymPy](https://www.sympy.org/), onde 'sp' \u00e9 um apelido comum :\n\n\n```python\nimport sympy as sp\n```\n\n\n```python\nsp.__version__\n```\n\n### Pandas\n\nVers\u00e3o de [Pandas](https://pandas.pydata.org/), onde 'pd' \u00e9 um apelido comum :\n\n\n```python\nimport pandas as pd\n```\n\n\n```python\npd.__version__\n```\n\n### Bokeh\n\nVers\u00e3o de [Bokeh](https://bokeh.org/), onde 'bk' \u00e9 um apelido comum :\n\n\n```python\nimport bokeh as bk\n```\n\n\n```python\nbk.__version__\n```\n\n### Holoviews\n\nVers\u00e3o de [Holoviews](https://holoviews.org/), onde 'hv' \u00e9 um apelido comum :\n\n\n```python\nimport holoviews as hv\n```\n\n\n```python\nhv.__version__\n```\n\n### Seaborn\n\nVers\u00e3o de [Seaborn](https://seaborn.pydata.org/), onde 'sns' \u00e9 um apelido comum :\n\n\n```python\nimport seaborn as sns\n```\n\n\n```python\nsns.__version__\n```\n\n### Numba\n\nVers\u00e3o de [Numba](https://numba.pydata.org/), onde 'nb' \u00e9 um apelido comum :\n\n\n```python\nimport numba as nb\n```\n\n\n```python\nnb.__version__\n```\n\nN\u00famero default de threads, dispon\u00edveis para paralelismo de CPU com Numba :\n\n\n```python\nnb.config.NUMBA_DEFAULT_NUM_THREADS\n```\n\n## Informa\u00e7\u00e3o sobre GPU/CUDA\n\n### Para Linux / Mac OS\n\nMostra vers\u00e3o do driver NVidia, vers\u00e3o de CUDA e v\u00e1rias informa\u00e7\u00f5es da GPU : nome, temperatura, pot\u00eancia usada e m\u00e1xima, RAM usada e m\u00e1xima, etc :\n\n\n```python\n!nvidia-smi\n```\n\nMostra dados do compilador CUDA, como vers\u00e3o :\n\n\n```python\n!nvcc --version\n```\n\n### Via Numba\n\nPrecisa ter [CUDA](https://developer.nvidia.com/cuda-zone) e [Numba](https://numba.pydata.org/) instalados.\n\n\n```python\nfrom numba import cuda\n```\n\nTesta se CUDA est\u00e1 dispon\u00edvel, i. 
e., se tem GPU e se software CUDA foi instalado :\n\n\n```python\ncuda.is_available()\n```\n\nMostra identifica\u00e7\u00e3o (come\u00e7a de zero) da GPU, nome da GPU, CC (Compute Capability), etc :\n\n\n```python\ncuda.detect()\n```\n\nMostra RAM livre e total da GPU com identifica\u00e7\u00e3o 0 (zero), em bytes :\n\n\n```python\ncuda.current_context(0).get_memory_info()\n```\n\n### Via CuPy\n\nVers\u00e3o de [CuPy](https://cupy.dev/), onde 'cp' \u00e9 um apelido comum :\n\n\n```python\nimport cupy as cp\n```\n\nVers\u00e3o de CuPy :\n\n\n```python\ncp.__version__\n```\n\nMostra RAM livre e total da GPU sendo usada por CuPy, em bytes :\n\n\n```python\ncp.cuda.runtime.memGetInfo()\n```\n", "meta": {"hexsha": "a2b6fa4c6cf270bbb60b0d25e0e6a225d7158054", "size": 25333, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Informacoes_Sistema/Informacoes_Sistema_LinuxMacOS.ipynb", "max_stars_repo_name": "rcolistete/Ferramentas_Ensino_Pesquisa", "max_stars_repo_head_hexsha": "b8a1f4ca5cb610b1bba79d8424f69aabd0868b2f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Informacoes_Sistema/Informacoes_Sistema_LinuxMacOS.ipynb", "max_issues_repo_name": "rcolistete/Ferramentas_Ensino_Pesquisa", "max_issues_repo_head_hexsha": "b8a1f4ca5cb610b1bba79d8424f69aabd0868b2f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Informacoes_Sistema/Informacoes_Sistema_LinuxMacOS.ipynb", "max_forks_repo_name": "rcolistete/Ferramentas_Ensino_Pesquisa", "max_forks_repo_head_hexsha": "b8a1f4ca5cb610b1bba79d8424f69aabd0868b2f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2021-03-05T18:11:21.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-07T17:09:34.000Z", "avg_line_length": 20.6294788274, "max_line_length": 273, "alphanum_fraction": 0.5376781273, "converted": true, "num_tokens": 1529, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.2782567817320044, "lm_q2_score": 0.3557749003442964, "lm_q1q2_score": 0.0989967787908285}} {"text": "Before you turn in your homework, make sure everything runs as expected.\n\nMake sure you execute every single code cell, in order, filling with your solutions in any place that says `# YOUR CODE HERE`, and always DELETE the line that says:\n\n```python\nraise NotImplementedError()\n```\n\nThe purpose of this line is to tell you if you forgot to answer a question (it will throw an error if the line is there)\n\n**IMPORTANT:**\n\n* **DO NOT DELETE ANY CELL** and do not change the title of the Notebook.\n\n* Use the same variable names as the ones written in the questions; otherwise, the tests will fail.\n\n* Before you turn in your homework, make sure everything runs as expected: restart the kernel (in the menubar, select Kernel $\\rightarrow$ Restart) and then run all cells (in the menubar, select Cell $\\rightarrow$ Run All).\n\nFill your name below:\n\n\n```python\nname = \"Yinfeng Ding\"\n```\n\n# Sod's test problems\n\nSod's test problems are standard benchmarks used to assess the accuracy of numerical solvers. The tests use a classic example of one-dimensional compressible flow: the shock-tube problem. 
Sod (1978) chose initial conditions and numerical discretization parameters for the shock-tube problem and used these to test several schemes, including Lax-Wendroff and MacCormack's. Since then, many others have followed Sod's example and used the same tests on new numerical methods.\n\nThe shock-tube problem is so useful for testing numerical methods because it is one of the few problems that allows an exact solution of the Euler equations for compressible flow.\n\nThis notebook complements the previous lessons of the course module [_\"Riding the wave: convection problems\"_](https://github.com/numerical-mooc/numerical-mooc/tree/master/lessons/03_wave) with Sod's test problems as an independent coding exercise. We'll lay out the problem for you, but leave important bits of code for you to write on your own. Good luck!\n\n## What's a shock tube?\n\nA shock tube is an idealized device that generates a one-dimensional shock wave in a compressible gas. The setting allows an analytical solution of the Euler equations, which is very useful for comparing with the numerical results to assess their accuracy. \n\nPicture a tube with two regions containing gas at different pressures, separated by an infinitely-thin, rigid diaphragm. The gas is initially at rest, and the left region is at a higher pressure than the region to the right of the diaphragm. At time $t = 0.0 s$, the diaphragm is ruptured instantaneously. \n\nWhat happens? \n\nYou get a shock wave. The gas at high pressure, no longer constrained by the diaphragm, rushes into the lower-pressure area and a one-dimensional unsteady flow is established, consisting of:\n\n* a shock wave traveling to the right\n* an expansion wave traveling to the left\n* a moving contact discontinuity\n\nThe shock-tube problem is an example of a *Riemann problem* and it has an analytical solution, as we said. The situation is illustrated in Figure 1.\n\n\n
Figure 1: The shock-tube problem.
\n\n## The Euler equations\n\nThe Euler equations govern the motion of an inviscid fluid (no viscosity). They consist of the conservation laws of mass and momentum, and often we also need to work with the energy equation. \n\nLet's consider a 1D flow with velocity $u$ in the $x$-direction. The Euler equations for a fluid with density $\\rho$ and pressure $p$ are:\n\n$$\n\\begin{cases}\n &\\frac{\\partial \\rho}{\\partial t} + \\frac{\\partial}{\\partial x}(\\rho u) = 0 \\\\\n &\\frac{\\partial}{\\partial t}(\\rho u) + \\frac{\\partial}{\\partial x} (\\rho u^2 + p)=0\n\\end{cases}\n$$\n\n... plus the energy equation, which we can write in this form:\n\n$$\n\\begin{equation}\n\\frac{\\partial}{\\partial t}(\\rho e_T) + \\frac{\\partial}{\\partial x} (\\rho u e_T +p u)=0\n\\end{equation}\n$$\n\nwhere $e_T=e+u^2/2$ is the total energy per unit mass, equal to the internal energy plus the kinetic energy (per unit mass).\n\nWritten in vector form, you can see that the Euler equations bear a strong resemblance to the traffic-density equation that has been the focus of this course module so far. Here is the vector representation of the Euler equation:\n\n$$\n\\begin{equation}\n\\frac{\\partial }{\\partial t} \\underline{\\mathbf{u}} + \\frac{\\partial }{\\partial x} \\underline{\\mathbf{f}} = 0\n\\end{equation}\n$$\n\nThe big difference with our previous work is that the variables $\\underline{\\mathbf{u}}$ and $\\underline{\\mathbf{f}}$ are *vectors*. If you review the [Phugoid Full Model](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/01_phugoid/01_03_PhugoidFullModel.ipynb) lesson, you will recall that we can solve for several values at once using the vector form of an equation. In the Phugoid Module, it was an ODE\u2014now we apply the same procedure to a PDE. \n\nLet's take a look at what $\\underline{\\mathbf{u}}$ and $\\underline{\\mathbf{f}}$ consist of.\n\n## The conservative form\n\nMany works in the early days of computational fluid dynamics in the 1960s showed that using the conservation form of the Euler equations is more accurate for situations with shock waves. And as you already saw, the shock-tube solutions do contain shocks.\n\nThe conserved variables $\\underline{\\mathbf{u}}$ for Euler's equations are\n\n$$\n\\begin{equation}\n\\underline{\\mathbf{u}} = \\left[\n\\begin{array}{c}\n\\rho \\\\\n\\rho u \\\\\n\\rho e_T \\\\ \n\\end{array}\n\\right]\n\\end{equation}\n$$\n\nwhere $\\rho$ is the density of the fluid, $u$ is the velocity of the fluid and $e_T = e + \\frac{u^2}{2}$ is the specific total energy; $\\underline{\\mathbf{f}}$ is the flux vector:\n\n$$\n\\begin{equation}\n\\underline{\\mathbf{f}} = \\left[\n\\begin{array}{c}\n\\rho u \\\\\n\\rho u^2 + p \\\\\n(\\rho e_T + p) u \\\\\n\\end{array}\n\\right]\n\\end{equation}\n$$\n\nwhere $p$ is the pressure of the fluid.\n\nIf we put together the conserved variables and the flux vector into our PDE, we get the following set of equations:\n\n$$\n\\begin{equation}\n \\frac{\\partial}{\\partial t}\n \\left[\n \\begin{array}{c}\n \\rho \\\\\n \\rho u \\\\\n \\rho e_T \\\\\n \\end{array}\n \\right] +\n \\frac{\\partial}{\\partial x}\n \\left[\n \\begin{array}{c}\n \\rho u \\\\\n \\rho u^2 + p \\\\\n (\\rho e_T + p) u \\\\\n \\end{array}\n \\right] =\n 0\n\\end{equation}\n$$\n\nThere's one major problem there. We have 3 equations and 4 unknowns. But there is a solution! 
We can use an equation of state to calculate the pressure\u2014in this case, we'll use the ideal gas law.\n\n## Calculating the pressure\n\nFor an ideal gas, the equation of state is\n\n$$\ne = e(\\rho, p) = \\frac{p}{(\\gamma -1) \\rho}\n$$\n\nwhere $\\gamma = 1.4$ is a reasonable value to model air, \n\n$$\n\\therefore p = (\\gamma -1)\\rho e\n$$ \n\nRecall from above that\n\n$$\ne_T = e+\\frac{1}{2} u^2\n$$\n\n$$\n\\therefore e = e_T - \\frac{1}{2}u^2\n$$\n\nPutting it all together, we arrive at an equation for the pressure\n\n$$\np = (\\gamma -1)\\left(\\rho e_T - \\frac{\\rho u^2}{2}\\right)\n$$\n\n## Flux in terms of $\\underline{\\mathbf{u}}$\n\nWith the traffic model, the flux was a function of traffic density. For the Euler equations, the three equations we have are coupled and the flux *vector* is a function of $\\underline{\\mathbf{u}}$, the vector of conserved variables:\n\n$$\n\\underline{\\mathbf{f}} = f(\\underline{\\mathbf{u}})\n$$\n\nIn order to get everything squared away, we need to represent $\\underline{\\mathbf{f}}$ in terms of $\\underline{\\mathbf{u}}$.\nWe can introduce a little shorthand for the $\\underline{\\mathbf{u}}$ and $\\underline{\\mathbf{f}}$ vectors and define:\n\n$$\n\\underline{\\mathbf{u}} =\n\\left[\n \\begin{array}{c}\n u_1 \\\\\n u_2 \\\\\n u_3 \\\\\n \\end{array}\n\\right] =\n\\left[\n \\begin{array}{c}\n \\rho \\\\\n \\rho u \\\\\n \\rho e_T \\\\\n \\end{array}\n\\right]\n$$\n\n$$\n\\underline{\\mathbf{f}} =\n\\left[\n \\begin{array}{c}\n f_1 \\\\\n f_2 \\\\\n f_3 \\\\\n \\end{array}\n\\right] =\n\\left[\n \\begin{array}{c}\n \\rho u \\\\\n \\rho u^2 + p \\\\\n (\\rho e_T + p) u \\\\\n \\end{array}\n\\right]\n$$ \n\nWith a little algebraic trickery, we can represent the pressure vector using quantities from the $\\underline{\\mathbf{u}}$ vector.\n\n$$\np = (\\gamma -1)\\left(u_3 - \\frac{1}{2} \\frac{u^2_2}{u_1} \\right)\n$$\n\nNow that pressure can be represented in terms of $\\underline{\\mathbf{u}}$, the rest of $\\underline{\\mathbf{f}}$ isn't too difficult to resolve:\n\n$$\\underline{\\mathbf{f}} = \\left[ \\begin{array}{c}\nf_1 \\\\\nf_2 \\\\\nf_3 \\\\ \\end{array} \\right] =\n\\left[ \\begin{array}{c}\nu_2\\\\\n\\frac{u^2_2}{u_1} + (\\gamma -1)\\left(u_3 - \\frac{1}{2} \\frac{u^2_2}{u_1} \\right) \\\\\n\\left(u_3 + (\\gamma -1)\\left(u_3 - \\frac{1}{2} \\frac{u^2_2}{u_1}\\right) \\right) \\frac{u_2}{u_1}\\\\ \\end{array}\n\\right]$$\n\n## Test conditions\n\nThe first test proposed by Sod in his 1978 paper is as follows. \n\nIn a tube spanning from $x = -10 \\text{m}$ to $x = 10 \\text{m}$ with the rigid membrane at $x = 0 \\text{m}$, we have the following initial gas states:\n\n$$\n\\underline{IC}_L =\n\\left[\n \\begin{array}{c}\n \\rho_L \\\\\n u_L \\\\\n p_L \\\\\n \\end{array}\n\\right] =\n\\left[\n \\begin{array}{c}\n 1.0 \\, kg/m^3 \\\\\n 0 \\, m/s \\\\\n 100 \\, kN/m^2 \\\\\n \\end{array}\n\\right]\n$$\n\n$$\n\\underline{IC}_R =\n\\left[\n \\begin{array}{c}\n \\rho_R \\\\\n u_R \\\\\n p_R \\\\\n \\end{array}\n\\right] =\n\\left[\n \\begin{array}{c}\n 0.125 \\, kg/m^3 \\\\\n 0 \\, m/s \\\\\n 10 \\, kN/m^2 \\\\\n \\end{array}\n\\right]\n$$\n\nwhere $\\underline{IC}_L$ are the initial density, velocity and pressure on the left side of the tube membrane and $\\underline{IC}_R$ are the initial density, velocity and pressure on the right side of the tube membrane. \n\nThe analytical solution to this test for the velocity, pressure and density, looks like the plots in Figure 2.\n\n\n
Figure 2. Analytical solution for Sod's first test.
\n\n## The Richtmyer method\n\nFor this exercise, you will use the **Lax-Friedrichs** scheme that we implemented in [lesson 2](https://nbviewer.jupyter.org/github/numerical-mooc/numerical-mooc/blob/master/lessons/03_wave/03_02_convectionSchemes.ipynb).\nBut, we will also be using a new scheme called the **Richtmyer** method.\nLike the MacCormack method, Richtmyer is a *two-step method*, given by:\n\n$$\n\\begin{align}\n\\underline{\\mathbf{u}}^{n+\\frac{1}{2}}_{i+\\frac{1}{2}} &= \\frac{1}{2} \\left( \\underline{\\mathbf{u}}^n_{i+1} + \\underline{\\mathbf{u}}^n_i \\right) - \n\\frac{\\Delta t}{2 \\Delta x} \\left( \\underline{\\mathbf{f}}^n_{i+1} - \\underline{\\mathbf{f}}^n_i\\right) \\\\\n\\underline{\\mathbf{u}}^{n+1}_i &= \\underline{\\mathbf{u}}^n_i - \\frac{\\Delta t}{\\Delta x} \\left(\\underline{\\mathbf{f}}^{n+\\frac{1}{2}}_{i+\\frac{1}{2}} - \\underline{\\mathbf{f}}^{n+\\frac{1}{2}}_{i-\\frac{1}{2}} \\right)\n\\end{align}\n$$\n\nThe flux vectors used in the second step are obtained by evaluating the flux functions on the output of the first step:\n\n$$\n\\underline{\\mathbf{f}}^{n+\\frac{1}{2}}_{i+\\frac{1}{2}} = \\underline{\\mathbf{f}}\\left(\\underline{\\mathbf{u}}^{n+\\frac{1}{2}}_{i+\\frac{1}{2}}\\right)\n$$\n\nThe first step is like a *predictor* of the solution: if you look closely, you'll see that we are applying a Lax-Friedrichs scheme here. The second step is a *corrector* that applies a leapfrog update. Figure 3 gives a sketch of the stencil for Richtmyer method, where the \"intermediate time\" $n+1/2$ will require a temporary variable in your code, just like we had in the MacCormack scheme.\n\n\n
Figure 3. Stencil of Richtmyer scheme.
\n\n## Implement your solution (40 points)\n\n---\n\nYour mission, should you wish to accept it, is to calculate the pressure, density and velocity along the shock tube at time $t = 0.01 s$ using the Richtmyer method **and** the Lax-Friedrichs method. Good luck!\n\nCode parameters to use:\n\n* Number of discrete points along the 1D domain: `nx = 81` (which gives `dx = 0.25` for a domain of length 20).\n* Time-step size: `dt = 0.0002`.\n* Heat capacity ratio: `gamma = 1.4`.\n\nImplement your solution in this section.\nYou can use as many code cells as you want.\n\n\n```python\n# YOUR CODE HERE\nimport numpy\nimport sympy\nfrom matplotlib import pyplot\n%matplotlib inline\n```\n\n\n```python\n# Set the font family and size to use for Matplotlib figures.\npyplot.rcParams['font.family'] = 'serif'\npyplot.rcParams['font.size'] = 16\nsympy.init_printing()\n```\n\n\n```python\n# Set parameters.\nnx = 81\ndx = 0.25\ndt = 0.0002\ngamma = 1.4\nt = 0.01\nnt = int(t/dt)+1\n```\n\n\n```python\n# Get the grid point coordinates.\nx = numpy.linspace(-10,10,num = nx)\n\n# Set the initial conditions.\nrho0 = numpy.ones(nx)\nmask = numpy.where(x >= 0)\nrho0[mask] = 0.125\np0 = 100000*numpy.ones(nx)\np0[mask] = 10000\nv0 = numpy.zeros(nx)\ne0 = p0 / ((gamma-1) * rho0)\neT0 = e0 + 0.5 * v0**2\n\nu0 = numpy.array([rho0,\n rho0*v0,\n rho0*eT0])\nf0 = numpy.array([u0[1],\n u0[1]**2 / u0[0] + (gamma-1)*(u0[2] - 0.5*u0[1]**2 / u0[0]),\n (u0[2] + (gamma - 1) * (u0[2] - 0.5*u0[1]**2 / u0[0])) * u0[1] / u0[0]])\n```\n\n\n```python\n# Richtmyer scheme, two step method, R1, R2\nu_R2 = u0.copy()\nu_R1 = u_R2.copy()\nf_R2 = f0.copy()\n\nfor i in range(1, nt):\n u_R1 = 0.5 * (u_R2[:,1:] + u_R2[:,:-1]) - dt / (2 * dx) * (f_R2[:,1:] - f_R2[:,:-1])# first step is like a predictor of the solution\n f_R1 = numpy.array([u_R1[1],\n u_R1[1]**2 / u_R1[0] + (gamma - 1) * (u_R1[2] - 0.5 * u_R1[1]**2 / u_R1[0]),\n (u_R1[2] + (gamma -1) * (u_R1[2] - 0.5 * u_R1[1]**2 / u_R1[0])) * u_R1[1] / u_R1[0]])\n u_R2[:,1:-1] = u_R2[:,1:-1] - dt / dx * (f_R1[:,1:] - f_R1[:,:-1])# corrector that applies a leapfrog update, advance in time\n f_R2 = numpy.array([u_R2[1],\n u_R2[1]**2 / u_R2[0] + (gamma - 1) * (u_R2[2] - 0.5 * u_R2[1]**2 / u_R2[0]),\n (u_R2[2] + (gamma -1) * (u_R2[2] - 0.5 * u_R2[1]**2 / u_R2[0])) * u_R2[1] / u_R2[0]])\n\nrho_Richtmyer = u_R2[0]\nv_Richtmyer = u_R2[1] / u_R2[0]\np_Richtmyer = (gamma -1) * (u_R2[2] - 0.5 * u_R2[1]**2 / u_R2[0])\n```\n\n\n```python\n# Lax-Friedrichs scheme\nu_L = u0.copy()\nf_L = f0.copy()\nfor n in range(1, nt):\n # Advance in time using Lax-Friedrichs scheme.\n u_L[:,1:-1] = 0.5*(u_L[:,:-2] + u_L[:,2:]) - 0.5*dt/dx * (f_L[:,2:] - f_L[:,:-2])\n f_L = numpy.array([u_L[1],\n u_L[1]**2 / u_L[0] + (gamma - 1) * (u_L[2] - 0.5 * u_L[1]**2 / u_L[0]),\n (u_L[2] + (gamma -1) * (u_L[2] - 0.5 * u_L[1]**2 / u_L[0])) * u_L[1] / u_L[0]])\nrho_Lax = u_L[0]\nv_Lax = u_L[1] / u_L[0]\np_Lax = (gamma -1) * (u_L[2] - 0.5 * u_L[1]**2 / u_L[0])\n```\n\n## Assessment (80 points)\n\n---\n\nAnswer questions in this section.\n\nDo not try to delete or modify empty code cells that are already present.\nFor each question, provide your answer in the cell **just above** the empty cell.\n(This empty cell contains hidden tests to assert the correctness of your answer and cannot be deleted.)\nPay attention to the name of the variables we ask you to create to store computed values; if the name of the variable is misspelled, the test will fail.\n\n\n```python\ntry:\n import mooc37 as mooc\nexcept:\n import mooc36 as mooc\n```\n\n* **Q1 
(10 points):** Plot the numerical solution of the density, velocity, and pressure at time $t = 0.01 s$ obtained with the Richtmyer scheme **and** with the Lax-Friedrichs scheme.\n\nYou should also plot the analytical solution.\nThe analytical solution can be obtained using the function `analytical_solution` from the Python file `sod.py` (located in the same folder than the Jupyter Notebook).\nTo import the function in your Notebook, use `from sod import analytical_solution`.\nYou can use `help(analytical_solution)` to see how you should call the function.\n\nCreate one figure per variable and make sure to label your axes.\n(For example, the first figure should contain the numerical solution of the density using both schemes, as well as the analytical solution for the density.)\nMake sure to add a legend to your plots.\n\n\n```python\n# YOUR CODE HERE\nfrom sod import analytical_solution\nhelp(analytical_solution)\n```\n\n Help on function analytical_solution in module sod:\n \n analytical_solution(t, x, left_state, right_state, diaphragm=0.0, gamma=1.4)\n Compute the analytical solution of the Sod's test at a given time.\n \n Parameters\n ----------\n t : float\n The time.\n x : numpy.ndarray\n Coordinates along the tube (as a 1D array of floats).\n left_state : tuple or list\n Initial density, velocity, and pressure values\n on left side of the diaphragm.\n The argument should be a tuple or list with 3 floats.\n right_state : tuple or list\n Initial density, velocity, and pressure values\n on right side of the diaphragm.\n The argument should be a tuple or list with 3 floats.\n diaphragm : float, optional\n Location of the diaphgram (membrane), by default 0.0.\n gamma : float, optional\n Heat capacity ratio, by default 1.4.\n \n Returns\n -------\n tuple of numpy.ndarray objects\n The density, velocity, and pressure along the tube at the given time.\n This is a tuple with 3 elements: (density, velocity, pressure).\n Each element is a 1D NumPy array of floats.\n \n\n\n\n```python\n# Analytical solution\n# Set the initial conditions.\nleft_state = [1.0, 0.0, 100000.0]\nright_state = [0.125, 0.0, 10000.0]\n\n# Analytical solution at t = 0.01\nA = analytical_solution(t, x, left_state, right_state, diaphragm=0.0, gamma=1.4)\nrho_analytical = A[0]\nv_analytical = A[1]\np_analytical = A[2]\n```\n\n\n```python\n# Plot rho\npyplot.figure(figsize=(6.0, 6.0))\npyplot.title('Density at time 0.01s')\npyplot.xlabel('x')\npyplot.ylabel('rho')\npyplot.grid()\npyplot.plot(x, rho_Richtmyer, label='Richtmyer', color='C0', linestyle='-', linewidth=2)\npyplot.plot(x, rho_Lax, label='Lax-Friedrich', color='C1', linestyle='-', linewidth=2)\npyplot.plot(x, rho_analytical, label='Analytical', color='C2', linestyle='-', linewidth=2)\npyplot.legend()\npyplot.xlim(-10.0, 10.0)\npyplot.ylim(0.0, 1.1)\n```\n\n\n```python\n# Plot velocity\npyplot.figure(figsize=(6.0, 6.0))\npyplot.title('Velocity at time 0.01s')\npyplot.xlabel('x')\npyplot.ylabel('velocity')\npyplot.grid()\npyplot.plot(x, v_Richtmyer, label='Richtmyer', color='C0', linestyle='-', linewidth=2)\npyplot.plot(x, v_Lax, label='Lax-Friedrich', color='C1', linestyle='-', linewidth=2)\npyplot.plot(x, v_analytical, label='Analytical', color='C2', linestyle='-', linewidth=2)\npyplot.legend()\npyplot.xlim(-10.0, 10.0)\npyplot.ylim(0.0, 400.0)\n```\n\n\n```python\n# Plot pressure\npyplot.figure(figsize=(6.0, 6.0))\npyplot.title('Pressure at time 0.01s')\npyplot.xlabel('x')\npyplot.ylabel('pressure')\npyplot.grid()\npyplot.plot(x, p_Richtmyer, label='Richtmyer', 
color='C0', linestyle='-', linewidth=2)\npyplot.plot(x, p_Lax, label='Lax-Friedrich', color='C1', linestyle='-', linewidth=2)\npyplot.plot(x, p_analytical, label='analytical', color='C2', linestyle='-', linewidth=2)\npyplot.legend()\npyplot.xlim(-10.0, 10.0)\npyplot.ylim(0.0, 110000.0)\n```\n\n* **Q2 (10 points):** At $t = 0.01 s$, what type of numerical errors to you observe in the numerical solution obtained with the Richtmyer scheme and with the Lax-Friedrichs scheme? (Diffusion errors? Dispersion errors? Explain why.)\n\nYou should write your answer in the following Markdown cell.\n\nYOUR ANSWER HERE\n\nThe Richtmyer scheme has dispersion errors. Observing the curve, we can find that the richtmyer scheme curve is closer to the analytical curve, and the curve oscillates, which is achieved through second-order accuracy. Numerical dispersion occurs when a higher order discretisation scheme is used to improve accuracy of the result. Numerical dispersion often takes the form of so-called 'spurious oscillations'. This is due to the truncation error of the discretisation. This is due to the truncation error of the discretisation. A second order upwind method, the leading truncation error is odd. And odd order derivatives contribute to numerical dispersion. \n\nThe Lax-Friedrichs scheme has diffusion errors. substituting \ud835\udf0c\ud835\udc5b\ud835\udc56 by the average of its neighbors introduces a first-order error. Numerical diffusion occurs when 1st order discretisation are used. This is due to the truncation error of the discretisation. The truncation is an odd-order method, the leading truncation error is even. Even order derivatives in the truncation error contribute to numerical diffusion.\n\n* **Q3 (5 points):** At $t = 0.01 s$, what's the $L_2$-norm of the difference between the density obtained with the Richtmyer scheme and the analytical solution?\n\nStore your result in the variable `l2_norm1`; you can check your answer by calling the function `mooc.check('hw3_l2_norm1', l2_norm1)`.\n\n**WARNING:** the variable name `l2_norm1` is spelled with the number `1`, **not** the letter `l`.\n\n\n```python\n# YOUR CODE HERE\nDiff = rho_Richtmyer - rho_analytical\nhelp(numpy.linalg.norm)\nl2_norm1 = numpy.linalg.norm(Diff, ord=2, axis=0)\nprint(l2_norm1)\nmooc.check('hw3_l2_norm1', l2_norm1)\n```\n\n Help on function norm in module numpy.linalg:\n \n norm(x, ord=None, axis=None, keepdims=False)\n Matrix or vector norm.\n \n This function is able to return one of eight different matrix norms,\n or one of an infinite number of vector norms (described below), depending\n on the value of the ``ord`` parameter.\n \n Parameters\n ----------\n x : array_like\n Input array. If `axis` is None, `x` must be 1-D or 2-D, unless `ord`\n is None. If both `axis` and `ord` are None, the 2-norm of\n ``x.ravel`` will be returned.\n ord : {non-zero int, inf, -inf, 'fro', 'nuc'}, optional\n Order of the norm (see table under ``Notes``). inf means numpy's\n `inf` object. The default is None.\n axis : {None, int, 2-tuple of ints}, optional.\n If `axis` is an integer, it specifies the axis of `x` along which to\n compute the vector norms. If `axis` is a 2-tuple, it specifies the\n axes that hold 2-D matrices, and the matrix norms of these matrices\n are computed. If `axis` is None then either a vector norm (when `x`\n is 1-D) or a matrix norm (when `x` is 2-D) is returned. The default\n is None.\n \n .. 
versionadded:: 1.8.0\n \n keepdims : bool, optional\n If this is set to True, the axes which are normed over are left in the\n result as dimensions with size one. With this option the result will\n broadcast correctly against the original `x`.\n \n .. versionadded:: 1.10.0\n \n Returns\n -------\n n : float or ndarray\n Norm of the matrix or vector(s).\n \n See Also\n --------\n scipy.linalg.norm : Similar function in SciPy.\n \n Notes\n -----\n For values of ``ord < 1``, the result is, strictly speaking, not a\n mathematical 'norm', but it may still be useful for various numerical\n purposes.\n \n The following norms can be calculated:\n \n ===== ============================ ==========================\n ord norm for matrices norm for vectors\n ===== ============================ ==========================\n None Frobenius norm 2-norm\n 'fro' Frobenius norm --\n 'nuc' nuclear norm --\n inf max(sum(abs(x), axis=1)) max(abs(x))\n -inf min(sum(abs(x), axis=1)) min(abs(x))\n 0 -- sum(x != 0)\n 1 max(sum(abs(x), axis=0)) as below\n -1 min(sum(abs(x), axis=0)) as below\n 2 2-norm (largest sing. value) as below\n -2 smallest singular value as below\n other -- sum(abs(x)**ord)**(1./ord)\n ===== ============================ ==========================\n \n The Frobenius norm is given by [1]_:\n \n :math:`||A||_F = [\\sum_{i,j} abs(a_{i,j})^2]^{1/2}`\n \n The nuclear norm is the sum of the singular values.\n \n Both the Frobenius and nuclear norm orders are only defined for\n matrices and raise a ValueError when ``x.ndim != 2``.\n \n References\n ----------\n .. [1] G. H. Golub and C. F. Van Loan, *Matrix Computations*,\n Baltimore, MD, Johns Hopkins University Press, 1985, pg. 15\n \n Examples\n --------\n >>> from numpy import linalg as LA\n >>> a = np.arange(9) - 4\n >>> a\n array([-4, -3, -2, ..., 2, 3, 4])\n >>> b = a.reshape((3, 3))\n >>> b\n array([[-4, -3, -2],\n [-1, 0, 1],\n [ 2, 3, 4]])\n \n >>> LA.norm(a)\n 7.745966692414834\n >>> LA.norm(b)\n 7.745966692414834\n >>> LA.norm(b, 'fro')\n 7.745966692414834\n >>> LA.norm(a, np.inf)\n 4.0\n >>> LA.norm(b, np.inf)\n 9.0\n >>> LA.norm(a, -np.inf)\n 0.0\n >>> LA.norm(b, -np.inf)\n 2.0\n \n >>> LA.norm(a, 1)\n 20.0\n >>> LA.norm(b, 1)\n 7.0\n >>> LA.norm(a, -1)\n -4.6566128774142013e-010\n >>> LA.norm(b, -1)\n 6.0\n >>> LA.norm(a, 2)\n 7.745966692414834\n >>> LA.norm(b, 2)\n 7.3484692283495345\n \n >>> LA.norm(a, -2)\n 0.0\n >>> LA.norm(b, -2)\n 1.8570331885190563e-016 # may vary\n >>> LA.norm(a, 3)\n 5.8480354764257312 # may vary\n >>> LA.norm(a, -3)\n 0.0\n \n Using the `axis` argument to compute vector norms:\n \n >>> c = np.array([[ 1, 2, 3],\n ... [-1, 1, 4]])\n >>> LA.norm(c, axis=0)\n array([ 1.41421356, 2.23606798, 5. 
])\n >>> LA.norm(c, axis=1)\n array([ 3.74165739, 4.24264069])\n >>> LA.norm(c, ord=1, axis=1)\n array([ 6., 6.])\n \n Using the `axis` argument to compute matrix norms:\n \n >>> m = np.arange(8).reshape(2,2,2)\n >>> LA.norm(m, axis=(1,2))\n array([ 3.74165739, 11.22497216])\n >>> LA.norm(m[0, :, :]), LA.norm(m[1, :, :])\n (3.7416573867739413, 11.224972160321824)\n \n 0.2497209782456826\n [hw3_l2_norm1] Good job!\n\n\n\n```python\n\n```\n\n* **Q4 (5 points):** At $t = 0.01 s$, what's the $L_2$-norm of the difference between the density obtained with the Lax-Friedrichs scheme and the analytical solution?\n\nStore your result in the variable `l2_norm2`; you can check your answer by calling the function `mooc.check('hw3_l2_norm2', l2_norm2)`.\n\n\n```python\n# YOUR CODE HERE\nDiff_2 = rho_Lax - rho_analytical\nl2_norm2 = numpy.linalg.norm(Diff_2, ord=2, axis=0)\nprint(l2_norm2)\nmooc.check('hw3_l2_norm2', l2_norm2)\n```\n\n 0.4610293528265613\n [hw3_l2_norm2] Good job!\n\n\n\n```python\n\n```\n\n* **Q5 (5 points):** At $t = 0.01 s$, what's the value of the density, obtained with Richtmyer scheme, at location $x = 2.5 m$ (in $kg/m^3$)?\n\nStore your result in the variable `rho1`; you can check your answer by calling the function `mooc.check('hw3_rho1', rho1)`.\n\n**WARNING**: the variable name `rho1` is spelled with the number `1`, **not** the letter `l`.\n\n\n```python\n# YOUR CODE HERE\nrho1 = rho_Richtmyer[int((2.5+10)/dx)]\nprint(rho1)\nmooc.check('hw3_rho1', rho1)\n```\n\n 0.3746914026476011\n [hw3_rho1] Good job!\n\n\n\n```python\n\n```\n\n* **Q6 (5 points):** At $t = 0.01 s$, what's the value of the velocity, obtained with Lax-Friedrichs scheme, at location $x = 2.5 m$ (in $m/s$)?\n\nStore your result in the variable `v2`; you can check your answer by calling the function `mooc.check('hw3_v2', v2)`.\n\n\n```python\n# YOUR CODE HERE\nv2 = v_Lax[int((2.5+10)/dx)]\nprint(v2)\nmooc.check('hw3_v2', v2)\n```\n\n 281.8563023522752\n [hw3_v2] Good job!\n\n\n\n```python\n\n```\n\n* **Q7 (5 points):** At $t = 0.01 s$, what's the absolute difference in the pressure, between the analytical solution and the Richtmyer solution, at location $x = 2.5 m$ (in $N/m^2$)?\n\nStore your result in the variable `p_diff`; you can check your answer by calling the function `mooc.check('hw3_p_diff', p_diff)`.\n\n\n```python\n# YOUR CODE HERE\np_R = p_Richtmyer[int((2.5+10)/dx)]\np_A = p_analytical[int((2.5+10)/dx)]\np_diff = abs(p_R - p_A)\nprint(p_diff)\nmooc.check('hw3_p_diff', p_diff)\n```\n\n 64.17847424907086\n [hw3_p_diff] Good job!\n\n\n\n```python\n\n```\n\n* **Q8 (5 points):** At $t = 0.01 s$, what's the value of the entropy, obtained with Richtmyer scheme, at location $x = -1.5 m$ (in $J/kg/K$)?\n\nThe entropy $s$ is defined as:\n\n$$\ns = \\frac{p}{\\rho^\\gamma}\n$$\n\nStore your result in the variable `s1`; you can check your answer by calling the function `mooc.check('hw3_s1', s1)`.\n\n**WARNING**: the variable name `s1` is spelled with the number `1`, **not** the letter `l`.\n\n\n```python\n# YOUR CODE HERE\nrho_Rs = rho_Richtmyer[int((10-1.5)/dx)]\np_Rs = p_Richtmyer[int((10-1.5)/dx)]\ns1 = p_Rs / rho_Rs**gamma\nprint(s1)\nmooc.check('hw3_s1', s1)\n```\n\n 100697.043028669\n [hw3_s1] Good job!\n\n\n\n```python\n\n```\n\n* **Q9 (5 points):** At $t = 0.01 s$, what's the value of the speed of sound, obtained with Lax-Friedrichs scheme, at location $x = -1.5 m$ (in $m/s$)?\n\nThe speed of sound $a$ is defined as:\n\n$$\na = \\sqrt{\\frac{\\gamma p}{\\rho}}\n$$\n\nStore your result in the variable `a2`; 
you can check your answer by calling the function `mooc.check('hw3_a2', a2)`.\n\n\n```python\n# YOUR CODE HERE\nrho_La = rho_Lax[int((10-1.5)/dx)]\np_La = p_Lax[int((10-1.5)/dx)]\na2 = (gamma * p_La / rho_La)**0.5\nprint(a2)\nmooc.check('hw3_a2', a2)\n```\n\n 349.455377505974\n [hw3_a2] Good job!\n\n\n\n```python\n\n```\n\n* **Q10 (5 points):** At $t = 0.01 s$, what's the value of the Mach number, obtained with Richtmyer scheme, at location $x = -1.5 m$?\n\n**Hint:** the Mach number is the ratio between the velocity and the speed of sound.\n\nStore your result in the variable `M1`; you can check your answer by calling the function `mooc.check('hw3_M1', M1)`.\n\n**WARNING**: the variable name `M1` is spelled with the number `1`, **not** the letter `l`.\n\n\n```python\n# YOUR CODE HERE\n# Mach number = velocity / speed of sound\nrho_Ra = rho_Richtmyer[int((10-1.5)/dx)]\np_Ra = p_Richtmyer[int((10-1.5)/dx)]\naR = (gamma * p_Ra / rho_Ra)**0.5\nv_Ra = v_Richtmyer[int((10-1.5)/dx)]\nM1 = v_Ra/aR\nprint(M1)\nmooc.check('hw3_M1', M1)\n```\n\n 0.5483352954050432\n [hw3_M1] Good job!\n\n\n\n```python\n\n```\n\n## Reference\n\n---\n\n* Sod, Gary A. (1978), \"A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws,\" *J. Comput. Phys.*, Vol. 27, pp. 1\u201331 DOI: [10.1016/0021-9991(78)90023-2](http://dx.doi.org/10.1016%2F0021-9991%2878%2990023-2) // [PDF from unicamp.br](http://www.fem.unicamp.br/~phoenics/EM974/TG%20PHOENICS/BRUNO%20GALETTI%20TG%202013/a%20survey%20of%20several%20finite%20difference%20methods%20for%20systems%20of%20nonlinear%20hyperbolic%20conservation%20laws%20Sod%201978.pdf), checked Oct. 28, 2014.\n", "meta": {"hexsha": "157e134d3130964999e8a26561f437a30306d1d9", "size": 174252, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "hw3/hw3/Sods_Shock_Tube.ipynb", "max_stars_repo_name": "YinfengDing/MAE6286", "max_stars_repo_head_hexsha": "41dc302762fc54ed1c8c9ff0621bd5f3c8e5d7f0", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2021-09-21T15:19:06.000Z", "max_stars_repo_stars_event_max_datetime": "2021-09-21T15:19:08.000Z", "max_issues_repo_path": "hw3/hw3/Sods_Shock_Tube.ipynb", "max_issues_repo_name": "YinfengDing/MAE6286", "max_issues_repo_head_hexsha": "41dc302762fc54ed1c8c9ff0621bd5f3c8e5d7f0", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "hw3/hw3/Sods_Shock_Tube.ipynb", "max_forks_repo_name": "YinfengDing/MAE6286", "max_forks_repo_head_hexsha": "41dc302762fc54ed1c8c9ff0621bd5f3c8e5d7f0", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 104.844765343, "max_line_length": 42672, "alphanum_fraction": 0.8304696646, "converted": true, "num_tokens": 9487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4610167793123159, "lm_q2_score": 0.2146914140875998, "lm_q1q2_score": 0.09897634426867202}} {"text": "Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n#####Version 0.1\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). 
The other chapters can be found on the projects [homepage](camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n###The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. 
Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$.:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. 
$P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n###Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n####Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computational-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. 
If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data. 
\n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!\n%pylab inline\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. prior.\nfor k, N in enumerate(n_trials):\n sx = subplot(len(n_trials)/2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n#####Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. 
\n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? \n\n\n```\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n\n\n\n```\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. 
First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n##Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n###Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. 
They assign positive probability to every non-negative integer.\n\n\n```\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n###Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\")\n```\n\n\n###But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. 
Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. 
In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```\nimport pymc as mc\n\nalpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nlambda_1 = mc.Exponential(\"lambda_1\", alpha)\nlambda_2 = mc.Exponential(\"lambda_2\", alpha)\n\ntau = mc.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```\nprint \"Random output:\", tau.random(), tau.random(), tau.random()\n```\n\n Random output: 4 17 67\n\n\n\n```\n@mc.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@mc.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. \n\n\n```\nobservation = mc.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = mc.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo*, which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. 
Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```\n### Mysterious code to be explained in Chapter 3.\nmcmc = mc.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [****************100%******************] 40000 of 40000 complete\n\n\n\n```\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. 
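The qualitative statements above can be checked directly from the posterior samples. Below is a minimal sketch (added here, not part of the original analysis) that assumes the `lambda_1_samples`, `lambda_2_samples` and `tau_samples` arrays from the cells above are still in memory; it estimates the posterior probability that $\lambda_2$ exceeds $\lambda_1$ and how much posterior mass sits on the single most probable switchpoint day.


```
# Reading the conclusions straight off the posterior samples (assumes the
# lambda_1_samples, lambda_2_samples and tau_samples arrays defined above).
import numpy as np

# Fraction of samples in which lambda_2 > lambda_1: a Monte Carlo estimate of
# the posterior probability that the texting rate increased.
print("P(lambda_2 > lambda_1 | data) = %.3f"
      % (lambda_2_samples > lambda_1_samples).mean())

# Most probable switchpoint day and the posterior mass it carries.
tau_mode = np.bincount(tau_samples).argmax()
print("posterior mode of tau: day %d" % tau_mode)
print("P(tau == %d | data) = %.3f" % (tau_mode, (tau_samples == tau_mode).mean()))
```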
\n\n###Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages recieved\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\")\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. 
That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. .\n- [2] Norvig, Peter. 2009. [*The Unreasonable Effectiveness of Data*](http://www.csee.wvu.edu/~gidoretto/courses/2011-fall-cp/reading/TheUnreasonable EffectivenessofData_IEEE_IS2009.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. \nPyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical \nSoftware, 35(4), pp. 1-81. \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. .\n\n\n```\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n```\n\n```\n", "meta": {"hexsha": "8b1e4578c2ee3b25efbdf105fb95f243e42c9df4", "size": 413578, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_stars_repo_name": "jaimebayes/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_stars_repo_head_hexsha": "cc2ab1a3905f1537b5891028fdf097be95267c3a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_issues_repo_name": "jaimebayes/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_issues_repo_head_hexsha": "cc2ab1a3905f1537b5891028fdf097be95267c3a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_forks_repo_name": "jaimebayes/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_forks_repo_head_hexsha": "cc2ab1a3905f1537b5891028fdf097be95267c3a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2018-04-26T01:29:57.000Z", "max_forks_repo_forks_event_max_datetime": "2018-04-26T01:29:57.000Z", "avg_line_length": 401.1425800194, "max_line_length": 110534, "alphanum_fraction": 0.9050964993, "converted": true, "num_tokens": 11111, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.2720245510940225, "lm_q2_score": 0.36296921930155557, "lm_q1q2_score": 0.09873653894145347}} {"text": "```python\n%run ../../common/import_all.py\n\nfrom common.setup_notebook import *\nconfig_ipython()\nsetup_matplotlib()\nset_css_style()\n```\n\n\n\n\n\n\n\n\n\n\n#
Moments of a distribution and summary statistics\n\nIn the following, we will use $X$ to represent a random variable living in sample space (the space of all possible values it can assume) $\\Omega$.\n\nIn the discrete case, the probability of each value $x_i$ will be represented as $p_i$ (probability mass function); in the continuous case $p(x) = P(X=x)$ will be the probability density function. See [the note on probability functions](probfunctions.ipynb).\n\nLet's start with mean and variance and then we'll then give the general definitions. Also, we'll then switch to other quantities beyond moments which help drawing a comprehensive picture of how data is distributed.\n\n## Expected Value\n\nThe **expected value**, or **expectation**, or **mean value** is defined, in the *continuous* case as\n\n$$\n\\mathbb{E}[X] = \\int_\\Omega \\text{d} x \\ x p(x) \\ ,\n$$\n\nSimilarly, in the *discrete* case,\n\n$$\n\\mathbb{E}[X] = \\sum_i^N p_i x_i \\ ,\n$$\n\nThe expectation is the average of all the possible values the random variable can assume. It is the arithmetic mean in the case of discrete variables. This is easy to see if the distribution is uniform, that is, all $N$ values have the same probability $\\frac{1}{N}$: the expectation becomes $\\frac{1}{N}\\sum_i x_i$, which is the exact definition of arithmetic mean. When the distribution is not uniform, the probability is not the same for each value, but the end result is still the arithmetic mean as each different value will be weighted with its probability of occurrence, that is, the count of them over the total of values. \n\nThe expected value is typically indicated with $\\mu$. \n\n### Linearity of the expected value\n\nThe expected value is a linear operator:\n\n$$\n\\mathbb{E}[aX + bY] = a\\mathbb{E}[X] + b \\mathbb{E}[Y]\n$$\n\n*Proof*\n\nWe will prove this in the continuous case but it is clearly easily extensible.\n\n$$\n\\begin{align}\n\\mathbb{E}[aX + bY] &= \\int_{\\Omega_X}\\limits \\int_{\\Omega_Y}\\limits \\text{d} x \\ \\text{d} y \\ (ax + by) p(x, y) \\\\\n&= a \\int_{\\Omega_X}\\limits \\int_{\\Omega_Y}\\limits \\text{d} x \\ \\text{d} y \\ x p(x, y) + b \\int_{\\Omega_X}\\limits \\int_{\\Omega_Y}\\limits \\text{d} x \\ \\text{d} y \\ y p(x, y) \\\\\n&= a \\int_{\\Omega_X}\\limits \\text{d} x \\ x p(x) + b \\int_{\\Omega_Y}\\limits \\text{d} y \\ y p(y) \\\\\n&= a\\mathbb{E}[X] + b \\mathbb{E}[Y]\n\\end{align}\n$$\n\nThis is because $p(x) = \\int_{\\Omega_Y}\\limits\\text{d} y \\ x p(x, y)$ because we are effectively summing the PDFs over all the possible values of $Y$, hence eliminating the dependency from this random variable. Analogously the other one.\n\n### Expectation over two variables\n\nWe have\n\n$$\n\\mathbb{E}_{x, y}[A] = \\mathbb{E}_x[\\mathbb{E}_y[A | x]]\n$$\n\n*Proof*\n\nBy definition\n\n$$\n\\mathbb{E}_{x, y}[A] = \\int \\,dx \\,dy \\, p(x, y) \\, A\n$$\n\nand from the definition of [conditional and joint probability](joint-marg-conditional-prob.ipynb),\n\n$$\np(x, y) = p(y|x) p(x) \\ .\n$$\n\nSo, we can write\n\n$$\n\\mathbb{E}_{x, y}[A] = \\int \\, dx \\, dy A \\, p(y|x)p(x)\n$$\n\nwhich is exactly the second term in the statement.\n\n## Variance and standard deviation\n\nThe variance is the expected value of the squared difference from the expectation:\n\n$$\nVar[X] = \\mathbb{E}[(X - \\mathbb{E}[X])^2] = \\int_{\\Omega_X} \\text{d} x \\ (x - \\mathbb{E}[X])^2 p(x)\n$$\n\nThe variance is the second moment around the mean. 
It is typically indicated as $\\sigma^2$, $\\sigma$ being the **standard deviation**, which gives the measure of error of values from the mean.\n\n### Rewriting the variance\n\nWe can also write the variance as\n\n$$\nVar[X] = \\mathbb{E}[X^2] - \\big(\\mathbb{E}[X]\\big)^2\n$$\n\n*Proof*\n\n$$\n\\begin{align}\nVar[X] &= \\mathbb{E}[(X - \\mu)^2] \\\\\n&= \\int_{\\Omega_X} \\text{d}x \\ (x^2 - 2 \\mu x + \\mu^2) p(x) \\\\\n&= \\int_{\\Omega_X} \\text{d}x \\ x^2 p(x) -2 \\mu \\int_{\\Omega_X} \\text{d}x \\ x p(x) + \\mu^2 \\int_{\\Omega_X} \\text{d} x p(x) \\\\\n&= \\mathbb{E}[X^2] - 2 \\mu^2 + \\mu^2 \\\\\n&= \\mathbb{E}[X^2] - \\big(\\mathbb{E}[X]\\big)^2\n\\end{align}\n$$\n\n### The variance is not linear\n\nIn fact, using the linearity of the expectation\n\n$$\n\\begin{align}\nVar[aX] &= \\mathbb{E}[(aX)^2] - \\big( \\mathbb{E}[aX] \\big)^2 \\\\\n&= a^2 \\mathbb{E}[X^2] - (a^2 \\mu^2) \\\\\n&= a^2 (\\mathbb{E}[X^2] - \\mu^2) \\\\\n&= a^2 Var[X]\n\\end{align}\n$$\n\nand more in general, \n\n$$\n\\begin{align}\nVar[aX + bY] &= \\mathbb{E}[(aX + bY)^2] - \\big( \\mathbb{E}[aX+bY] \\big)^2 \\\\\n&= \\mathbb{E}[a^2 X^2 + b^2 Y^2 + 2ab XY] - \\big( a \\mathbb{E}[X] + b \\mathbb{E}[Y] \\big)^2 \\\\\n&= a^2\\mathbb{E}[X^2] + b^2\\mathbb{E}[Y^2] + 2ab\\mathbb{E}[XY] - a^2(\\mathbb{E}[X])^2 - b^2(\\mathbb{E}[Y])^2 - 2ab\\mathbb{E}[X]\\mathbb{E}[Y] \\\\\n&= a^2 Var[X] + b^2 Var[Y] + 2ab \\ \\text{cov}(X, Y)\n\\end{align}\n$$\n\n($\\text{cov}$ is the covariance).\n\n## The unbiased estimator of variance and standard deviation\n\n\n\nIf we have $n$ data points, extracted from a population (so we have a sample, refer to figure) and we want to calculate its variance (or standard deviation), using $n$ in the denominator would lead to a biased estimation. We would in fact use $n$ in the case we had the full population, in the case of a sample we have to use $n-1$ and this is because the degrees of freedom are $n-1$ as the mean is computed from $n$ data points so there is one less.\n\nFor the mean, if $\\mu$ is the one computed with the full population and $\\bar x$ the one computed with the sample, which is the estimator of $\\mu$, \n\n$$\n\\bar x = \\frac{\\sum_{i=1}^{i=n} x_i}{n}\n$$\n\nFor the variance, calling $\\sigma^2$ the one computed with the full population and $s^2$ the one computed with the sample, we have\n\n$$\ns_n^2 = \\frac{\\sum_{i=1}^{i=n} (x_i - \\bar x)^2}{n}\n$$\n\nand\n\n$$\ns_{n-1}^2 = \\frac{\\sum_{i=1}^{i=n} (x_i - \\bar x)^2}{n-1}\n$$\n\nwith subscript $n$ or $n-1$ indicating, respectively, with which denominator they are calculated. \n\n$s_n^2$ is a biased estimator of the population variance (it contains the mean, which itself eats one degree of freedom) and we have $s_n^2 < s_{n-1}^2$. This last one is the correct estimator of the population variance when you have a sample.\n\n## Standard deviation and standard error\n\nRefer again to the above about sample and population. The population follows a certain distribution, of which the distribution of the sample is an \"approximation\". This is why we have to use the sample mean (the mean of data points in the sample) as an estimate of the (unknown) population mean. The problem is now how to attribute the error to this value.\n\nIn general, following definition, what the standard deviation quantifies is the variability of individuals from the mean. Having the sample, the sample standard deviation tells how far away each sample point is from the sample mean. 
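As a small check of the $n$ versus $n-1$ denominators discussed above, the sketch below (an illustration added here, not part of the original text) uses NumPy's `ddof` argument, which sets how much is subtracted from $n$ in the denominator; `np.std` uses `ddof=0` by default.


```python
# Compare the divide-by-n and divide-by-(n-1) estimators on a small sample;
# numpy divides by (n - ddof), with ddof=0 as the default.
sample = np.random.normal(loc=0.0, scale=1.0, size=20)  # small sample from a N(0, 1) population

s_n = np.std(sample)                   # divides by n
s_n_minus_1 = np.std(sample, ddof=1)   # divides by n - 1, the estimator recommended above

print('s_n = %.4f  <  s_{n-1} = %.4f' % (s_n, s_n_minus_1))
```

With only 20 points the difference between the two is visible; it becomes negligible as $n$ grows.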
\n\nNow because we're using the sample mean to estimate the mean (expected value) of the population, and because if we had another sample extracted from the same population this sample mean would likely be different, in general this sample mean follows its distribution. The *standard error* (of the mean, as it can be related to other statistics), typically indicated by *SE*, is the standard deviation of these means. \n\nThe standard error is usually estimated by the sample standard deviation $s$ divided by the square root of the sample size $n$:\n\n$$\nSE = \\frac{s}{\\sqrt{n}} \\ ,\n$$\n\nunder the assumption of statistical independence of observations in the sample.\n\nIn fact, let $x_1, \\ldots, x_n$ be the sample points extracted from a population whose mean and standard deviation are, respectively, $\\mu$ and $\\sigma$, the sample mean is\n\n$$\nm = \\frac{x_1 + \\cdots + x_n}{n} \\ .\n$$\n\nThe variance of this sample mean $m$, telling how far away the sample mean is from the population mean, is\n\n$$\nVar[m] = Var \\left[\\frac{\\sum_i x_i}{n}\\right] = \n\\frac{1}{n^2} Var\\Big[\\sum_i x_i\\Big] = \\frac{1}{n^2} \\sum_i Var[x_i]\n= \\frac{1}{n^2} n Var[x_1]\n= \\frac{1}{n} \\sigma^2 \\ ,\n$$\n\nbecause each point has the same variance and the points are independent. See above for the non-linearity of the variance for the details on this calculation. Following this, the standard deviation of $m$ is then $\\frac{\\sigma}{\\sqrt{n}}$ and we will use $s$ as an estimate for $\\sigma$, which is, again, unknown.\n\n### When to use which\n\nThe Standard Error tells how far the sample mean is from the population mean so it is the error to attribute to a sample mean. The standard deviation again is about the individual data points and it tells how far away they are from the sample mean.\n\nWhile the standard error goes to 0 when $n \\to \\infty$, the standard deviation goes to $\\sigma$.\n\n## Moments: general definition\n\nThe $n$-th **raw moment** is the expected value of the $n$-th power of the random variable:\n\n$$\n\\boxed{\\mu_n' = \\int \\text{d} x \\ x^n p(x)}\n$$\n\nThe expected value is then the first raw moment.\n\n\nThe $n$-th **central moment** around the mean is defined as\n\n$$\n\\boxed{\\mu_n = \\int \\text{d} x (x-\\mu)^n p(x)}\n$$\n\nThe variance is the second central moment around the mean.\n\nMoments get standardises (normalised) by dividing for the appropriate power of the standard deviation. The $n$-th **standardised moment** is the central moment divided by standard deviation with the same order power:\n\n$$\n\\boxed{\\tilde \\mu_n = \\frac{\\mu_n}{\\sigma^n}}\n$$\n\n## Skeweness\n\nThe **skeweness** is the third standardised moment:\n\n$$\n\\gamma = \\frac{\\mathbb{E}[(X-\\mu)^3]}{\\sigma^3}\n$$\n\nThe skeweness quantifies how symmetrical a distribution is around the mean: it is zero in the case of a perfectly symmetrical shape. 
It is positive if the distribution is skewed on the right, that is, if the right tail is heavier than the left one; it is negative if it is skewed on the left, meaning the left tail is heavier than the right one.\n\n## Kurtosis\n\nThe **kurtosis** is the fourth standardised moment:\n\n$$\n\\kappa = \\frac{\\mu_4}{\\sigma^4}\n$$\n\nIt measures how heavy the tail of a distribution is with respect to a gaussian with the same $\\sigma$.\n\n## Further results\n\n### Variance of a matrix of constants times a random vector\n\nIn general, with a matrix of constants $\\mathbf{X}$ and a vector of observations (random variables) $\\mathbf{a}$, using the linearity of the expected value so that $\\mathbb{E}[\\mathbf{X a}] = \\mathbf{X} \\mathbb{E}[\\mathbf{a}]$, we have\n\n$$\n\\begin{align}\n Var[\\mathbf{X a}] &= \\mathbb{E}[(\\mathbf{X a} - \\mathbb{E}[\\mathbf{X a}])^2] \\\\\n &= \\mathbb{E}[(\\mathbf{X a} - \\mathbb{E}[\\mathbf{X a}])(\\mathbf{X a} - \\mathbb{E}[\\mathbf{X a}])^t] \\\\ \n &= \\mathbb{E}[(\\mathbf{X a} - \\mathbf{X}\\mathbb{E}[\\mathbf{a}])(\\mathbf{X a} - \\mathbf{X}\\mathbb{E}[\\mathbf{a}])^t] \\\\\n &= \\mathbb{E}[(\\mathbf{X a} - \\mathbf{X}\\mathbb{E}[\\mathbf{a}])((\\mathbf{X a})^t - (\\mathbf{X}\\mathbb{E}[\\mathbf{a}])^t)] \\\\\n &= \\mathbb{E}[\\mathbf{Xa}\\mathbf{a}^t\\mathbf{X}^t - \\mathbf{Xa} \\mathbb{E}[\\mathbf{a}]^t \\mathbf{X}^t - \\mathbf{X} \\mathbb{E}[\\mathbf{a}]\\mathbf{a}^t\\mathbf{X}^t + \\mathbf{X} \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}]^t\\mathbf{X}^t] \\\\\n &= \\mathbf{X} \\mathbb{E}[\\mathbf{a}\\mathbf{a}^t] \\mathbf{X}^t - \\mathbf{X} \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}]^t \\mathbf{X}^t - \\mathbf{X} \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}^t] \\mathbf{X}^t + \\mathbf{X} \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}^t] \\mathbf{X}^t \\\\\n &= \\mathbf{X} \\mathbb{E}[\\mathbf{a}\\mathbf{a}^t] \\mathbf{X}^t - 2 \\mathbf{X} \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}]^t \\mathbf{X}^t + \\mathbf{X} \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}^t] \\mathbf{X}^t = \\\\\n &= \\mathbf{X} (\\mathbb{E}[\\mathbf{a} \\mathbf{a}^t] - \\mathbb{E}[\\mathbf{a}] \\mathbb{E}[\\mathbf{a}^t]) \\mathbf{X}^t = \\\\\n &= \\mathbf{X} Var[\\mathbf{a}] \\mathbf{X}^t\n\\end{align}\n$$\n\n## Let's see all this on some known distributions!\n\nWe will extract $n$ values from several distributions, one at a time, and see what happens to the moments.\n\n\n```python\nn = 100000 # the number of points to extract\n\ng = np.random.normal(size=n) # gaussian\ne = np.random.exponential(size=n) # exponential\np = np.random.power(a=0.5, size=n) # power-law x^{0.5}, or a sqrt\nz = np.random.zipf(a=2, size=n) # Zipf (power-law) x^{-2}\n```\n\n### The gaussian \n\nThe gaussian will be the comparison distribution we refer to. Why? 
Because it's the queen of distributions!\n\n\n```python\n# Use 100 bins\nbins = 100 \n\nhist = np.histogram(g, bins=bins)\nhist_vals, bin_edges = hist[0], hist[1]\nbin_mids = [(bin_edges[i] + bin_edges[i+1])/2 for i in range(len(bin_edges) -1)] # middle point of bin\n \nplt.plot(bin_mids, hist_vals, marker='o')\n\nplt.title('Histogram $10^5$ normally distributed data')\nplt.xlabel('Bin mid')\nplt.ylabel('Count items')\nplt.show();\n```\n\n\n```python\n'The mean is %s, the std %s' % (np.mean(g), np.std(g))\n'The skeweness is %s, the kurtosis %s' % (stats.skew(g), stats.kurtosis(g))\n```\n\n\n\n\n 'The mean is -0.000640742485484, the std 0.999986499155'\n\n\n\n\n\n\n 'The skeweness is 0.01838211229141485, the kurtosis -0.025766042470362294'\n\n\n\nClearly, the mean is 0 (we've taken values this way!); the skeweness is also 0 as the data is normally distributed, hence symmetrical, and the kurtosis comes as 0 because Scipy gives, [by default](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kurtosis.html), the Fisher version of it, which subtracts 3 so that a normal distribution has 0 kurtosis.\n\n### The exponential\n\nSame plot as for the gaussian, except that we will also plot it in semilog scale (on the $y$), where the distribution appears linear.\n\n\n```python\n# Use 100 bins\nbins = 100 \n\nhist = np.histogram(e, bins=bins)\nhist_vals, bin_edges = hist[0], hist[1]\nbin_mids = [(bin_edges[i] + bin_edges[i+1])/2 for i in range(len(bin_edges) -1)] # middle point of bin\n\n# Main plot: in linear scale\nplt.plot(bin_mids, hist_vals)\nplt.xlabel('Bin mid')\nplt.ylabel('Count items')\nplt.title('Histogram $10^5$ exponentially distributed data')\n\n# Inset plot: in semilog (on y)\na = plt.axes([.4, .4, .4, .4], facecolor='y')\nplt.semilogy(bin_mids, hist_vals)\nplt.title('In semilog scale')\nplt.ylabel('Count items')\nplt.xlabel('Bin mid')\n\nplt.show();\n```\n\n\n```python\n'The mean is %s, the std %s' % (np.mean(e), np.std(e))\n'The skeweness is %s, the kurtosis %s' % (stats.skew(e), stats.kurtosis(e))\n```\n\n\n\n\n 'The mean is 1.00304878455, the std 0.996973171878'\n\n\n\n\n\n\n 'The skeweness is 1.9954113378643468, the kurtosis 6.02308754016868'\n\n\n\nThis time, the distribution is not symmetrical.\n\n### The power law\n\nWe chose to extract numbers from a [power law](power-law.ipynb) with exponent $-0.7$ (see the [docs](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.power.html#numpy.random.power)). Because of this, it is so much better to bin logarithmically, that is, with a bin width growing logarithmically. If we also choose a log-log scale, we get a line. Let's do it.\n\n\n```python\n# Use 100 bins\nbins = np.logspace(0, 4, num=100) \n\nhist = np.histogram(z, bins=bins)\nhist_vals, bin_edges = hist[0], hist[1]\nbin_mids = [(bin_edges[i] + bin_edges[i+1])/2 for i in range(len(bin_edges) -1)] # middle point of bin\n\n# Main plot: in linear scale\nplt.plot(bin_mids, hist_vals)\nplt.xlabel('Bin mid')\nplt.ylabel('Count items')\nplt.title('Histogram $10^5$ pow-law distributed data')\n\n# Inset plot: in semilog (on y)\na = plt.axes([.4, .4, .4, .4], facecolor='y')\nplt.loglog(bin_mids, hist_vals)\nplt.title('In log-log scale')\nplt.ylabel('Count items')\nplt.xlabel('Bin mid')\n\nplt.show();\n```\n\nClearly because it is a power law, a linear graph is really useless, can't really see anything. 
The inset shows the linear trend in log-log scale.\n\n\n```python\n'The mean is %s, the std %s' % (np.mean(z), np.std(z))\n'The skeweness is %s, the kurtosis %s' % (stats.skew(z), stats.kurtosis(z))\n```\n\n\n\n\n 'The mean is 37.28617, the std 7991.95971848'\n\n\n\n\n\n\n 'The skeweness is 299.37959397823863, the kurtosis 92083.86568826315'\n\n\n\nNow, this is a heavy-tail, and the kurtosis is quite verbal about it.\n\n## Mode\n\nThe mode of a distribution is simply its most frequent value. \n\n## Quantiles\n\nQuantiles are the values which divide a probability distribution into equally populated sets, how many, you decide. As special types of quantiles you got\n\n* *deciles*: 10 sets, so the first decile is the value such that 10% of the observations are smaller and the tenth decile is the value such that 90% of the observations are smaller \n* *quartiles*: 4 sets, so the first quartile is such that 25% of the observations are smaller\n* *percentile*: 100 sets, so the first percentile is such that 1% of the observations are smaller\n\nThe second quartile, corresponding to the fifth decile and to the fiftieth percentile, is kind of special and is called the *median*. Note that unlike the mean, the median is a measure of centrality in the data which is non-sensible to outliers. \n\nThis all means you can use the percentile everywhere as it's the most fine-grained one, and calculate the other splits from them. This is in fact what Numpy does, for this reason, and we'll see it below.\n\nQuartiles are conveniently displayed all together in a box plot, along with outliers. \n\n### Trying them out\n\nLet's extract 1000 numbers from a given distribution and let's compute the quartiles. We use `numpy.percentile(array, q=[0, 25, 50, 75, 100])`. Note that the quartile 0 and the quartile 100 correspond respectively to the minimum and maximum of the data.\n\n#### On a uniform distribution, between 0 and 1\n\n\n```python\nu = np.random.uniform(size=1000)\n\nnp.percentile(u, q=[0, 25 , 50, 75, 100])\nmin(u), max(u)\n```\n\n\n\n\n array([0.00205504, 0.27189483, 0.52432025, 0.75195201, 0.99904427])\n\n\n\n\n\n\n (0.0020550371300288583, 0.999044272466301)\n\n\n\n#### On a standard gaussian (mean 0, std 1)\n\nNote the median is the mean, that is, in this case, 0. It won't be precisely, because of finite size effect. A gaussian is such that median, mean and mode coincide, doesn't this make it great?\n\n\n```python\ng = np.random.normal(size=1000)\n\nnp.percentile(g, q=[0, 25 , 50, 75, 100])\n```\n\n\n\n\n array([-3.48444618, -0.61679718, 0.05202165, 0.68962899, 3.38339965])\n\n\n\n#### On a power law with exponent -0.3\n\nCan see that they span orders of magnitude.\n\n\n```python\np = np.random.power(0.7, size=1000)\n\nnp.percentile(p, q=[0, 25 , 50, 75, 100])\n```\n\n\n\n\n array([5.58878867e-05, 1.48743304e-01, 3.93775538e-01, 6.76276735e-01,\n 9.99854544e-01])\n\n\n\n### The inter-quartile range (IQR)\n\nIt is the difference between the third and first quartile and gives a measure of dispersion of the data. It is also sometimes called *midspread*. Note that it is a robust measure of dispersion specifically because it works on quartiles.\n\n$$\nIQR = Q_3 - Q-1\n$$\n\nThe IQR can be used to *test the normality of a distribution* at a simple level, because the quartiles of a normal (standardised) distribution are known so calculated ones can be compared to them. 
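As an illustration (a small sketch added here, using the standard-normal sample `g` drawn a few cells above), the IQR can be computed directly with `np.percentile` and compared with the theoretical value for a standardised gaussian, whose quartiles sit at roughly $\pm 0.674$, so $IQR \approx 1.35$.


```python
# IQR of the standard-normal sample `g` drawn above, via np.percentile.
q1, q3 = np.percentile(g, q=[25, 75])
iqr = q3 - q1
print('Q1 = %.3f, Q3 = %.3f, IQR = %.3f (theory for a standard normal: ~1.349)' % (q1, q3, iqr))
```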
\n\nIt is also used in *spotting outliers*: the Tukey's range test defines outliers as those points that fall below $Q_1 - 1.5 IQR$ and above $Q_3 + 1.5 IQR$.\n\n\n```python\n\n```\n", "meta": {"hexsha": "9e0fed9c5a20f69aada9c2f616fb86e2d58bc37e", "size": 239449, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "prob-stats-data-analysis/foundational/moments-summarystats.ipynb", "max_stars_repo_name": "walkenho/tales-science-data", "max_stars_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-05-11T09:39:10.000Z", "max_stars_repo_stars_event_max_datetime": "2020-05-11T09:39:10.000Z", "max_issues_repo_path": "prob-stats-data-analysis/foundational/moments-summarystats.ipynb", "max_issues_repo_name": "walkenho/tales-science-data", "max_issues_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "prob-stats-data-analysis/foundational/moments-summarystats.ipynb", "max_forks_repo_name": "walkenho/tales-science-data", "max_forks_repo_head_hexsha": "4f271d78869870acf2b35ce54d40766af7dfa348", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 243.3424796748, "max_line_length": 81940, "alphanum_fraction": 0.8953889972, "converted": true, "num_tokens": 6584, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.40733338565660004, "lm_q2_score": 0.24220562872535942, "lm_q1q2_score": 0.09865843877378612}} {"text": "# Day 1. Introduction to the Notebook and NumPy\n\n* [Navigating the Notebook](#1)\n* [Python built-in functions](#2)\n* [Storage and manipulation of numerical arrays](#3)\n* [Repeated operations and universal functions](#4)\n\nThe answers to the exercises are encrypted. 
Feel free to ask the instructors for the decryption key whenever you need to view the solution.\n\n\n```python\nfrom IPython.display import IFrame\nfrom IPython.display import YouTubeVideo\n```\n\n\n```python\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\nfrom cryptography.fernet import Fernet\nimport base64\ndef encrypt(string, key):\n keygen = lambda x : base64.urlsafe_b64encode(x.encode() + b' '*(32 - len(x)))\n cipher = Fernet(keygen(key))\n return cipher.encrypt(string.encode())\ndef decrypt(string, key):\n keygen = lambda x : base64.urlsafe_b64encode(x.encode() + b' '*(32 - len(x)))\n cipher = Fernet(keygen(key))\n return print(cipher.decrypt(string.encode()).decode())\n```\n\n## Familiarize yourself with \n\n\n* create a new [environment](https://conda.io/docs/user-guide/concepts.html#conda-environments):\n - go to the terminal (or to the anaconda prompt on Windows) and [make sure](https://conda.io/docs/user-guide/tasks/manage-environments.html#determining-your-current-environment) that no environment is acrivated (else [deactivate](https://conda.io/docs/user-guide/tasks/manage-environments.html#deactivating-an-environment) it)\n - decide on the name of your new environment (_i.e._ `[env_name]`)\n - run `conda create -n [env_name]`\n* [activate](https://conda.io/docs/user-guide/tasks/manage-environments.html#activating-an-environment) the new environment\n* install the following packages in the new environment:\n - python=3.6\n - mdtraj=1.9 from the `conda-forge` channel (`conda install -c conda-forge mdtraj=1.9`)\n - R from the `r` channel (the package is called r-essentials)\n* list the packages installed in the environment (`conda list`)\n* deactivate the environment\n* [export](https://conda.io/docs/user-guide/tasks/manage-environments.html#sharing-an-environment) the environment to a yml file (be careful not to overwrite any yml file in your current directory)\n* [view a list](https://conda.io/docs/user-guide/tasks/manage-environments.html#viewing-a-list-of-your-environments) of all your conda environments\n* [remove](https://conda.io/docs/user-guide/tasks/manage-environments.html#removing-an-environment) the environment that you have just created\n\n## Use [Binder](https://mybinder.org/) to launch a GitHub repository\n\n\n* go to mybinder.org and launch a GitHub repository containing Jupyter Notebooks of your choice\n* navigate and run a Notebook in the executable environment\n* be aware that the repository has to contain a dependency file (_e.g._ the yml file containing the list of packages of a conda environment)\n* Binder uses the dependency file to build a Docker [container](https://www.docker.com/resources/what-container) image of the repository\n - \"A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. 
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.\" (excerpt from docker.com)\n\n\n\n# Navigating the Notebook\n\n\n```python\nIFrame(src='https://api.kaltura.nordu.net/p/310/sp/31000/embedIframeJs/uiconf_id/23449977/partner_id/310?iframeembed=true&playerId=kaltura_player&entry_id=0_z85if4is&flashvars[streamerType]=auto&flashvars[localizationCode]=en&flashvars[leadWithHTML5]=true&flashvars[sideBarContainer.plugin]=true&flashvars[sideBarContainer.position]=left&flashvars[sideBarContainer.clickToClose]=true&flashvars[chapters.plugin]=true&flashvars[chapters.layout]=vertical&flashvars[chapters.thumbnailRotator]=false&flashvars[streamSelector.plugin]=true&flashvars[EmbedPlayer.SpinnerTarget]=videoHolder&flashvars[dualScreen.plugin]=true&&wid=0_l2d1egty', width=608, height=402)\n```\n\n### Tasks\n\n1. Find and try out the keyboard shortcuts (Help > Keyboard Shortcuts) for\n - Toggling line numbers\n - Setting the cell to _code_\n - Setting the cell to _markdown_\n - Merging two consecutive cells\n - Inserting a cell above\n - Inserting a cell below\n - Deleting a cell\n2. Export this notebook as HTML and open it in a web-browser\n3. Go back to the Home page (usually a browser tab) and check which notebooks that are currently running\n\n## Output\n\nThe result from running a code cell is shown as output directly below it. In particular, the output from the _last_ command will be printed, unless explicitly suppressed by a trailing `;`\n\nPrevious output can be retrieved by:\n- `_` last output\n- `__` last last output\n- `_x` where `x` is the cell number.\n\n### Tasks\n- Retrieve the output of the following cell\n- Suppress the output of the following cell\n\n\n```python\na = 3\na\n```\n\n## Getting help\n\n- `shift`-`tab`-`tab`: access information about python functions (place cursor between brackets)\n- `tab`: tab complete functions and objects\n- `?command` or `command?`\n- The help menu has links to detailed help on Python, Markdown, Matplotlib etc.\n\n### Task\n\nUse the above different ways to explore the arguments for the `print()` function.\nWhat is the `end` argument for?\n\n## Documentation using Markdown\n\nMarkdown is a _lightweight_ markup language that\n\n- is intended to be as easy-to-read and easy-to-write as possible\n- supports equations ($f(x)=x$), [links](http://), ~~text formatting~~, tables, images etc.\n\nFor more information see [here](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).\n\n### Task 1\n\nUse a Markdown cell to explain Pythagoras' theorem. Your answer should include\n\n- headers and text formatting\n- a link to an external web page\n- [LaTeX math](https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html)\n\n_Hint:_ images need not be local, but can be linked via a URL.\n\n\n```python\nanswer = 'gAAAAABcAVG3y9adFC6juLwdCUxdFnKhUDK9gqLlTTrVNfiedLFxfKdZqWayixC54-Anq9BhGQZvoWBm-AzZMyDhmsfFKIWkrhe9KR-9bwwXho5axladf90oUcqO3gBRYcOHt1nixpfl4ExV2z7Fr-xsxLyIpCzz6K2e0BvagxTonkRQNApDU_qZ6PGTVzpjq9VvgIj_xIQkSt0GEvUI0vtOdZUPly8eszQorUXGbGqIgt5aeE4YVQrnwWRAkwErK9_SdSNNqzWUwnraM_GhnjeFDIMFEbrpSxGYZomAy8N5zqWQIYGbi7jJkPePqIGqYdoM2Anceb_9IrZHQEaap18pBwLU6-Mkwx6k1uCc26eQBdKsUbk36zVKcvt-FeHojYSXtCF3CsDb40nE_b7oBBh5we1fI9IWLSaaOv6NlQVUD0uM5qze7w8BegBmLtMbdwlX9lqx2UC1'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 2\n\n1. 
Use a markdown cell to create a table with column labels **Element**, **Symbol**, **Atomic number**, and a single row with **Hydrogen**, **H**, and **1**.\n2. In a *code cell*, import the `Markdown` function from `IPython` using\n ```.py\n from IPython.display import Markdown\n ```\n and explore its documentation.\n3. Redo subquestion 1 using the `Markdown` command:\n - write the markdown table as a string (use triple quotation marks or write it on a single line using the line escape sequence \\n)\n - populate the table using a [formatted Python string](https://pyformat.info)\n\n\n```python\nanswer = 'gAAAAABcAVHzAOYjHRhWhK28NdM61aXfc9hOcExm9TdQPvYCRa50Es4vPadouW4usg3AIm5zdbuYycJMkJ8HvGHmp-AOZcKT0W6XrSXMO8yZz44rUNWMYvT-re4ZVFx-om_Vtsj6bcWLAupO2QHKxZdRh2bl5o4sy8s-0yposy8g4CIGMC0OZs18nBru06xRa9DeLQhxBENMu7FqdC4XU9Bs_fIYtR8WJP5U3suk8O8bXX4AQlkfSCWGsCo-0729C0h6K3k2CkVFoMlnZnXCjpxND9pj5okvZNIsAR2-4KU_T6_AyP_8ifI1w4Xloe_hWourVdpagznZ3I16KLjFI9HdAC6kPeV1ao3Pm9-uKWhjq7oZWLj-aOgfBJeq3GKjyNSJ_r2RBa9bWU2hM9az05h-KtazArxp7_zdq3RDijC5NMgL6_Cyhut5G4Q243K6YiGBlrIWPC7lFhmmgsowExlBKWQne5Z6AQgYfHWPnoEteS-grvqN0iwuRT2PS1hv36RG4a6-O65HL0oQ9oslKmzCJCmd9UNoTBp_oMHWxE2kc-2lnZlB7BwrcsHhVXtrkap8tce0tyj_N3RksSJ6XN-CNoRklAA8UwbFjzSGBr20Q6HdL3QJ5o1r6gxvG06bFNx-iJvc_ALTjyIc9ZKXZnsWOoMymG-L_4FCejCKrLqHFonT2lsrn_dOZ5qVu5KQr4MzDwcxhBFlRYpHUcOCtj2wSqEM0f-ycCUZMveuJueC7s2uOKDXQQy0ZjtiZaW0VUoV-tJlwoWzlSjJvtRSLINESXfi9e60Xw=='\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task: Embedded web content\n\nLund University Publications ([LUP](https://lup.lub.lu.se/search)) allows you to search for publications from specific LU departments or authors. They also provide the possibility to _embed_ the search result. Use `IPython.display.IFrame` to display a search of your choice.\n\n\n```python\n# Here's an example showing a protein - replace with something from LUP!\nfrom IPython.display import IFrame\nIFrame(src=\"http://www.ncbi.nlm.nih.gov/Structure/icn3d/full.html?pdbid=4lzt\", width=800, height=400)\n```\n\n## IPython Magic commands\n- Line magic (`%`): operates on a single line and can be mixed with other languages\n- Cell magic (`%%`): operates on the whole cell\n- [More info](http://ipython.readthedocs.io/en/stable/interactive/magics.html)\n\n\n```python\n%lsmagic\n```\n\nThis is an example of a LaTeX cell\n\n\n```latex\n%%latex\n\n\\begin{equation}\n G_O = \\int_0^\\infty \\mathrm{d}r\\; 4 \\pi r^2 \\left [ g_O(r) -1 \\right ]\n\\end{equation}\n```\n\nThis is an example of an [SVG](https://www.w3.org/Graphics/SVG/IG/resources/svgprimer.html) cell \n\n\n```python\n%%svg\n\n\n \n \n \n \n \n \n \n \n \n \n \n +\n -\n r\n \n \n```\n\n### [Shell commands](https://jakevdp.github.io/PythonDataScienceHandbook/01.05-ipython-and-shell-commands.html)\n\n* find the path of the current directory using `%pwd`\n* create a new directory using `%mkdir`\n* enter the new directory using `%cd`\n* find the path of the current directory using `%pwd`\n* get back to the parent directory using `%cd`\n* view a list of files and folders in the current directory with `%ls`\n* use `%cat` to view the environment.yml file of the course repository\n\nBesides the Magic commands, any command that works on your terminal can be run in the Notebook by prepending an exclamation mark! These commands are executed in a temporary subshell, _e.g._, compare the output of the following two cells.\n\nN.B. 
The command to remove a nonempty folder on Windows is `rmdir /Q /S `\n\n\n```python\n%mkdir new_dir\n%cd new_dir\n%pwd\n%ls\n%cd ..\n%pwd\n%rm -r new_dir\n```\n\n\n```python\n%mkdir new_dir\n!cd new_dir\n!ls\n!pwd\n!cd ..\n!pwd\n%rm -r new_dir\n```\n\n### Task 3\n - Write a script in a Python cell which creates and deletes this directory tree: \n ```bash\n .\n \u2514\u2500\u2500 new_dir\n \u251c\u2500\u2500 dir_1\n \u251c\u2500\u2500 dir_2\n \u251c\u2500\u2500 dir_3\n \u2514\u2500\u2500 dir_4\n ```\n - To write your script, translate the following bash shell.\n\n\n```bash\n%%bash\n\nmkdir new_dir\ncd new_dir\nfor i in {1..4}\ndo\nmkdir dir_$i\ndone\ncd ..\nrm -r new_dir\n```\n\n\n```python\nanswer = 'gAAAAABcASqXgQ5RM5983tgSRo4nD-XKkEiKOpq95dvnucHN62hTjGmSZ0IE5zbBooxiuMD72EmfXoY_3pLy89XMf9Wn-sxohva6A15UfPsfAydL7n3Qq628J4kam9LoBpinpVberru5ojcPui6p7VC9VXaE2HcqGTiCjn_GpPNineaXh8EgaMxWASEYpj2cGRfnmPV02nSK'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 4\n - Search stackoverflow.com for different ways of saving a string to file in Python\n - Use one of those methods to write a file named dirtree.txt containing the bash script to create the directory tree of the previous task \n - Read the new file using `%cat`\n - Use the `%%writefile` Magic command to save the same string to file \n\n\n```python\nanswer = 'gAAAAABcATGAqC3y72nhYgn_E9BXBLx4ASSuSPcL4wLsn7BbprEwqVpW0GdDJfsDZYNMBbCvNjBOUrE1-AtJ4uTehitEyiU22VBerBYe5HGJoT58bBJhJOduZn5pMTQu4_UDf3Tp7TQSbceTB7d3IsGCFg92F1tiuNZkJsbELRJPcHU4NjrjcToYXAHkLRF-bC6jl312p91HpxbWnIflpUXesJqRdUmlzLwxkSYJ4DmRUiRru-BR1dEEio38ciQbs3coLJQY8edUiEyQoyRPEUzJIcxROM0PGrB9iYrsaD5eUs_NQOOaALpQDdiT_pMVmeUjSVlJpiI298l84ahqa5Io6mitG6YnZ0qj2C-BJlQUc55vK1SJvCIIGC4EoPQ9S83C-Q1VekR5ilUM5T0N1FPD4XiSurarcZRJGdMdsGPrK-24vX2Ok9l7QLCOvR03-gLMFnoKRXmBF0i8cQ-8Ms5UAGUHAYLv3AaiOmcLwbAzwwotHUmmTZw='\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n## Cross-language interaction\n\nIt is possible to integrate [multiple languages](https://blog.jupyter.org/i-python-you-r-we-julia-baf064ca1fb6) in the same Notebook. For example we can pass variables from Bash to Python and vice versa. We can also define and compile [Cython](https://cython.org/) or Fortran functions in the Notebook and run them in a Python cell (vide infra).\n\n### Task 5\n - Create a Python variable storing the path of the current directory\n - Create a Python variable storing the list of files and folders in the current directory (using `!ls` on Unix and `!dir` on Windows)\n\n\n```python\nanswer = 'gAAAAABcAGuDmNC7FPu2tRLRlcnuPGB0ZNGjMJW7dvPic723eZeqR2fueK25CFKrfgPFQ3HzAoludqrlQLAr8d1cWRW7JW-ZGS6DTBZNaRo1ejwWTnRXaexSypoZjH4YFVzwZ3t_CRRg'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n\n\n# Python built-in functions\n\n\n```python\nYouTubeVideo('YpBUiEsTiEA')\n```\n\n### Task 6\nFind the type of the items of the arrays `np.arange(0,9,1)` and `np.arange(0.,9,1)`.\n\n\n```python\nanswer = 'gAAAAABcAXC6ubPFyqGJPj-A6Mf7CUH0yX6Q55wFMDx-WUSVnAd38Ysi4wrkkBGJJAotGDGZ452zkXKoVX1J8G-fFWnvNtPyovGhVxC6sK7yDRbTFo9LolHQ5b2w3cTfZ0m7kAu063tYE8-VmCIQBh6m6zEd5l9Evw=='\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 7\nUse a list comprehension to generate all possible 4-digit codes consisting of 2 alphabetical letters and 2 numbers (from 0 to 9). How many items does the list of contain? 
Turn the list into a NumPy array and count the number of unique items.\n\n\n```python\nanswer = 'gAAAAABcAXUFYR3d2T0gj4i2zvger7rLhjcdCpmHtMJ25cg0I-Cng6cC2w5ZTCvDHRpt1XOKds6fWztIMkMPoihsu-XDusbRCL7S4-BiWICQHyUqyKuHtrNen4TrJXv2UHP_pzPsCv2BM6xUXKocTsgtBuAuYCLSxX-4bnyXyLNVEsOmTt7jw-QeVmBqiGvqXqP-vv4G7ueg1f6vpRWBNaqayso5ZhVlRzUNc6fNcgWX7q3EvOs_RqY1avTku9V46ldfWSbwFxcCyX-N1aEaYGHuVCgpyZy2w6HDSbWzGfx_quF293TLAX94nJOAh2JYS6NGk9IlvQYWa4LUyGW_Xj1V01I_AlSXiA=='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 8\nGenerate the array [9 7 5 3] with `np.linspace(start=0,stop=10,num=11)` and indexing. Convert its elements to strings with 2 trailing zeros using [`format`](https://pyformat.info/).\n\n\n```python\nanswer = 'gAAAAABcAXWqJ3EkPOBOFbWCE8-SH72Mxtjk9Vupi0T6fBICvOOaJrbNMdr2prHQGeOfWflKEbtWxCODpVK5HY7Q1PEeg1r7-SVmSIWO9HSk_rdBitN-DnIL2XbfC9AaM2SSurG4O9PYa-7Mq1lVhm1oyV9vdzVbK0W6M5Wup08NvRZliVlkve1Y4OvcsqxoQdrM1VfuM-fDU3yd7Ytk0zNIAaIueLFCOOkOoEgLipGSDRWdCSqbXquMtsd-xQ9sH2emp0k-k5yE'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 9\n- create a linear sequence of integers from 0 to 7 stepping by 1 with `arange` and `linspace`\n- create a linear sequence of floating-point numbers from 0 to 7 stepping by 0.25 with `arange` and `linspace`\n\n\n```python\nanswer = 'gAAAAABcAXacH2s6wH7Q02OEFFeRxe9NXtcuBKGUB4UffBOYJnUZVnv8XeoToWcoQ7FGYa3_elbyhsuFj1n7t6ywpNMqfGIHyuSZA80EnCUrhBv-4hRGUEeeH-FFo_Ca2hck89vkCb-rCTXdVB3oKY_Sj5Cl1BbHY3p3lCtvRTrVY47y_GaUaIlnL1ZB1liPGl6t1m0-ejLJQTu9Jir0_-6HSaoQikAmhzZP1qhKnTbFp4-jK-ddpgxpVLjFJs316OcrGMkkk5YCosZRzzRW9QZSzbufN3nuJj13G7qbl9skcdgM2uvGWcQY3UzEmJy-tbqtnD809YrqtO9I7tpkSwBs82oxJ32hJ9FsSwGhy05GaDRhcUve7CI='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 10\n- Create an uninitialized array of size 0 and populate it, in a for loop, with the 4-digit codes containing the letter \"a\" and selected from those generated in Task 7.
\nHint: use `empty` and `append`.\n- Create the same array in a single line using list comprehension.\n\n\n```python\nanswer = 'gAAAAABcAX7FytesghVv_mvr9wtxzVRr8OP5di6oAxZuVVELnr2VIbXR8xTdwX7kyt32y2JRd3UiOc1FKIc1IyLOlmFJma2XXQ72wPGOnqlFaPTceiASAQ8dv50fvtwCsd3CkKDPFsL6Qs2Bk7gnCR602ZmQJJJgLuuaIip0N3d_H88olMevcDyBnMUU9hUqruGTaSf3WR3cRtCInw_5ACJgYgZ9FP4FDU20Ba6bN9guR45P1ShCZOwbLf0NKeLv03sl1LWw2nQSAHEmkMvJgdNZ5_Qwupv3_JxscIBFpN8ho7gKCXbbzCF2OB5-RivPv2DulrOe8WfUl_nm-r-sHhuC0doAs2m6-g=='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n\n\n# Storage and manipulation of numerical arrays\n\n\n\n```python\nYouTubeVideo('2xJsNi3wk-s')\n```\n\n### Task 11\nGenerate a 3x3 zero matrix and turn it into a 3x3 identity matrix by modifying values one by one or by fancy indexing.\n\n\n```python\nanswer = 'gAAAAABcAYDE5-H_i0u-50j3_pcGQwGmrLyH-98v5F-kISABJWSIDDiQues0yc1-M6rIQ57o_neV8kpksWCgjJUR-k0N7u2SrqBgz1oxnwSzryRYsfJLWKSxzNhJM0588wVeXPowZ0wjEKTeURn0f9aDhS3rghxDFaYD30qyubR1qXOqY3644FKa1x4YWgaLpN8x9s_Hs7_6MuPgnx0eoP5oDx1IfL4Dmqb51rIsgiGbQbvWT7tnAjI8JL7LCxP6PIG7FITginHriuM1edJ1h__JbxCQq78mdSFZYgsqy2TGoupVvOqNfN23ox91vXHotjp2UE6y_b0rY6d7mydOt4GgvMgKnDl6QCybpV_Td-ySVCvNuqWTADKwsfntWb-wrJwhh8SsFBrH-35h70ruU1U-307PtfjWZUCirkhANhWh4c5tJpzBJVjsTYyQDwoRNYzVgeAcHkmYocODTwhswnirZOZDwCuAgQY4F6Ba_zB47n6gltOsPYy_DUNAQClAIOABK4Zt79_1bMDDLrnVaHT1TqF4CR42PUJ143SF86zuqDoz0H_E1szwUsz_95qyBvcf1ZzO99tDdAFBt-5P4r5EVoy-iQ35kA=='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 12\nComplete the following lines\n\n```.py\nx = \ny = \nnp.savetxt('',np.c_[],fmt='%',header='\\t',delimiter='',comments='')\n%cat ```\n\nThe cell should\n- write to file the following table as a tab-separated value file, specifying a header and the format of the numbers\n\nx | y | \n-------- | -------- | \n0 | 0 |\n1 | 1 |\n2 | 4 |\n3 | 9 |\n4 | 16 |\n5 | 25 |\n6 | 36 |\n7 | 49 |\n8 | 64 |\n9 | 81 |\n\n- read the file in the notebook to check that it look as it should.\n\n\n```python\nanswer = 'gAAAAABcAoobPB8I2ABv5t6VCQQteRUOEPicO0MQ2Auh8Jw_UjJ3I7zAFTfSmNIdWktMQP1_nGsVTe-aQRcqJXoFtyruJp4Lm23MKpKie_dS_AQEBHfWD0xjqJiQQSmsPNSgNfI2qfO-umzSR0QbkFP9a9SafpfluAwlHliBG3rJfQ_foe5O9JccdsbqzG903DsDyZASaEhQmu0bWiq_0xTI3QR2ORwc8DKrrNNYJqm6IV15G9u66yM9zapPx8dNlrpnNCf5HE6t'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 13\n\nComplete the following lines\n\n```.py\nx,y = np.loadtxt(,dtype=str,delimiter=,skiprows=,unpack=)\nplt.plot(x,y)\nplt.show()\nx,y = np.loadtxt(,dtype=int,delimiter=,skiprows=,unpack=)\nplt.plot(x,y)\nplt.show()```\n\nThe cell should\n- load the text file created in the previous task so that the values are interpreted as strings (each column should be loaded into a separate array)\n- plot $y$ vs $x$\n- load the text file created in the previous task so that the values are interpreted as integers (each column should be loaded into a separate array)\n- plot $y$ vs $x$ and compare with the previous plot.\n\n\n```python\nanswer = 'gAAAAABcAos1k2OGaYibgW_3ODaNjTrbU92NNsxSDFCTirW3uEf7DLxBrxNwMln4ZDiMRff3N6ctjB1KZGIE4FsOpG3x2MbteJgrycQJLoiW2atBA_3oXeaE83LPOK7Qcca0cuJPrxxJXY1lbBQPw0tZXe3-0zcytax9eI37TovKhPpNGWq2vxr-f3j22_0QREjMiZGhU2T5g2dXNFO9R5OKHk6gEtDldPf196SkJT9lO_WVxe21bkMXY7ogm6a-Jaxz71eovm_iaGqDg904nCd1sccqvS-oZFxPKZxNA-xzAKNvRfFzCPVnEMfm3YnUoFS7B5gPMnb3BVGkxD5fAYVpIXeI-ByaLGvutV7HvC8-PtJKOtJg-eM='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n\n\n# Repeated operations and universal functions\n\n\n```python\nYouTubeVideo('469ukhzwEPg')\n```\n\n### Task 14\nCompute and plot the function $z=xy\\exp[-(x^2+y^2)]$ on a square grid 
100x100 with side lengths ranging from -2 to 2. Complete the following lines:\n\n```.py\ny = x[:,np.]\nz = * * np.exp( - - ) \nplt.imshow(z, extent=, origin=)\nplt.colorbar(label=)\nplt.xlabel()\nplt.ylabel();```\n\n\n```python\nanswer = 'gAAAAABcAoMby1Rf-3JRj1ehvYtOVLeVNwYOuGoUFrmbarZifjvZwDnLzdox_J9dSAkRSacDW9DcP5WbF3nCbXp0ByN7u4LiRhIVCCAJAHvQCjgKLJuimG3X-PHcCuvlofloVxMpuaYGY5QtpoFSH3FJRb40kHwFxrlKIgBqFK_yFMCOwOSc5srYCeJVgRydPqalgSqIHK5DTtIvR_DfRRcrA-MDPMM_hPyuPNYC-QYoeXH-scxbW-22kSNG8p-pcVNoySgIPn4C1YFq_T6m-oB8AOKy6pwtFog84--3yS1g63eYy6diu34nMMUOEnZEk9VmdqyhEPym'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n### Task 15\nUse boolean masking to set to 0.18 the $z$ values that are lower than 0.04 in the previous plot.\n\n\n```python\nanswer = 'gAAAAABcApILR-KMnBa3VT2u-pmDiqF_LKSym5bvRjZmMm6ndi03WhBvgQEr3oKlJo5rOgM2MbBwwyHoLPnb81YakQFtnsG6PNdX-SR1K9gMPH9L4st8ihBFolIzrRjiuJldtCbLUmrAEx_Dpn1cjRtTlHND2OizxVrzneguu8PuVk1e-GHSUjXROBlACkkIQ--fJA4ypHYADl-rZpJh5aSgLaTF3PzziT2vvXlQBAQIjU4aAMCLFDEfH-iGI_mbxTNd6BEqY-AUfUDRd29YUobAM-MjmN-W1T79sVAY4w16ZRGptj1waz9vhYjB2avawWvW8COMFXATK-LQUH6TwFi_a_8yreMmZg=='\n\n```\n\n\n```python\ndecrypt(answer,'Ystad')\n```\n\n### Task 16\n\nDefine 3 functions that estimate the limiting value of $\\sum_1^\\infty \\frac{\\sqrt{n}}{n^3}$:\n1. a Python funtion involving a for loop\n2. a Python function exploiting universal functions for repeated operations and aggregation functions\n3. a Cython function defined with `cpdef` involving a for loop\nCython is a mix of C and Python. With `cpdef` we can define a function as we would in Python (_i.e._ without declarying any types) but we obtain a function that is almost as fast as C generated code.\n\nHere are snippets of code to complete for each of the three subtasks:\n1. ```.py\ndef func1(n):\n result = 0\n for k in range():\n result += \n return result```\n2. ```.py\ndef func2(n):\n return ( ).sum()```\n3. 
```.py\n%load_ext Cython\n%%cython\ncdef extern from \"math.h\":\n double sqrt(int)\ncpdef func3(n): # note the 'p' in 'cpdef'\n result = 0\n for k in range():\n result += \n return result```\n \nUse `%timeit` to compare the speed of `func1`, `func2`, and `func3`.\n\n\n```python\nanswer = 'gAAAAABcAqbaIiWIulQgcO267q_p8PFsH41h2jQuTOZt3vnqlEBt29xULPJTQYQTRjYyVqiEIRJXYfQe-4xJ5bUnd2PVD1YClavYu5Dbj8JjpG_Y_D6DZED4DJV6qfTotHSUX8lNn-8m51YRheaWqMrPxcMyRfAw9I8DGMLlr9064KXVpLlIPIO5q95WWnl5E127o_NhRisZp5GsolVTJw7kxoipgZyGMg=='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n\n```python\nanswer = 'gAAAAABcAqcJPTwdzkomK_6f-zDZ7wbvZnch14UFrHEHQTmKcHmobziYV7dkfnBMmjyddE10f9pR9N9ymzPL0xh1X6tYmVoI9EBj2v5ty0soSsMzttDw0f1-UCioiIFTkrKmEyVBjJlUdZWxY1ybiRDeElJuvLpr6VygSgfJYDMIuv1Ho3YQbQznXQrbAQ5EhP4o-fh2XnFI'\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n\n```python\nanswer = 'gAAAAABcAqcun5hh9C7geQYhS-uzCeB2-uRhI_9yc6HtVO-di2GNbcmnLVFWRP-vghmxLPT7vNDrI4BepkH1kxrneRCTaP45Js2od2NYvOLME7xc1cIy4xTNsbYl2hwtK0JLHUK2NyDI6X4-cE9vtc7lHXwZt8gX6h1GA81LWjUwmYlEU-bwv1s9iJytNRF4Cs0s4pl6RfyN2wzCovjeCqt4oXHccoenWnlvW2hjhpI6Zpb6MdcsAfhsaoo39oWaL1XYn-lnQLj71gfc48qP9x5paPSX-xXXATSPJoY8bcGaTyK0dbcyuN6Bg0Iipg3MYGmIcozPEeW_FY1YQhrfXm4Fx09YdkvqbA=='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n\n\n```python\nanswer = 'gAAAAABcAqdTNpsPmdHE8Jrn6mxQGyRjquTtBOOwub9M47tFjCRCWu2xFuj9jH2-86BPouk-L3tZ7z97FaD_7cxi81KtQjHRutnrf8gwusJY9dCwgypzqtMsitla1XoBqeGX7fMXLT_Pz6B8yj8RP2SJLwuCOlXjKQ=='\n\n```\n\n\n```python\ndecrypt(answer,)\n```\n", "meta": {"hexsha": "c619e60e3b6e6f880e14211f72da2b4182589510", "size": 35569, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "exercises/day_1.ipynb", "max_stars_repo_name": "urania277/jupyter-course", "max_stars_repo_head_hexsha": "20060173e7355fc4726148f00b61404d2613b74b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2017-11-27T23:41:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-24T21:24:04.000Z", "max_issues_repo_path": "exercises/day_1.ipynb", "max_issues_repo_name": "urania277/jupyter-course", "max_issues_repo_head_hexsha": "20060173e7355fc4726148f00b61404d2613b74b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2017-12-08T20:12:35.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-26T09:28:07.000Z", "max_forks_repo_path": "exercises/day_1.ipynb", "max_forks_repo_name": "mlund/jupyter-course", "max_forks_repo_head_hexsha": "d2e12d153febc6848a1ed80a2f3f29973a3bea73", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2017-12-11T13:18:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-13T14:18:33.000Z", "avg_line_length": 33.2110177404, "max_line_length": 821, "alphanum_fraction": 0.6322640502, "converted": true, "num_tokens": 9020, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.310694383214554, "lm_q2_score": 0.31742626558767584, "lm_q1q2_score": 0.09862255780286215}} {"text": "```\n# this mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)\n\n# enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment3/'\nFOLDERNAME = 'cs231n/assignments/assignment2/'\nassert FOLDERNAME is not None, \"[!] 
Enter the foldername.\"\n\n# now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# this downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content\n```\n\n Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n \n Enter your authorization code:\n \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n Mounted at /content/drive\n /content/drive/My Drive/cs231n/assignments/assignment2/cs231n/datasets\n /content\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. \nOne idea along these lines is batch normalization which was proposed by [1] in 2015.\n\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(' means: ', x.mean(axis=axis))\n print(' stds: ', x.std(axis=axis))\n print() \n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the notebook below.\n \tThere will be an option for Colab users and another for Jupyter (local) users.\n\n\n\n```\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n## Batch normalization: forward\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n# Now means should be close to beta and stds close to gamma\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n## Batch normalization: backward\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n#You should expect to see relative errors between 1e-13 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.7029261167605239e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n## Batch normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
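\n\nAs a reminder of what such a simplification looks like in the sigmoid case mentioned above: for $y=\\sigma(x)=\\frac{1}{1+e^{-x}}$,\n\n$$\n\\frac{\\partial y}{\\partial x} = \\sigma(x)\\left(1 - \\sigma(x)\\right) = y(1-y)\n\\implies\n\\frac{\\partial L}{\\partial x} = \\frac{\\partial L}{\\partial y}\\, y(1-y),\n$$\n\nso the whole backward pass can be computed directly from the cached output $y$. The goal here is to find a similarly compact expression for the batch normalization backward pass.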
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hard part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$, which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. \n\nYou should make sure each of the intermediary gradient derivations is as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 6.284600172572596e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 2.10x\n\n\n## Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. 
Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\nHINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.3004790897684924\n W1 relative error: 1.48e-07\n W2 relative error: 2.21e-05\n W3 relative error: 3.53e-07\n b1 relative error: 5.38e-09\n b2 relative error: 2.09e-09\n b3 relative error: 5.80e-11\n \n Running check with reg = 3.14\n Initial loss: 7.052114776533016\n W1 relative error: 3.90e-09\n W2 relative error: 6.87e-08\n W3 relative error: 2.13e-08\n b1 relative error: 1.48e-08\n b2 relative error: 1.72e-09\n b3 relative error: 1.57e-10\n\n\n# Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.302838\n (Epoch 0 / 10) train acc: 0.131000; val_acc: 0.119000\n (Epoch 1 / 10) train acc: 0.227000; val_acc: 0.188000\n (Iteration 21 / 200) loss: 2.066770\n (Epoch 2 / 10) train acc: 0.274000; val_acc: 0.231000\n (Iteration 41 / 200) loss: 2.026292\n (Epoch 3 / 10) train acc: 0.301000; val_acc: 0.255000\n (Iteration 61 / 200) loss: 2.081528\n (Epoch 4 / 10) train acc: 0.368000; val_acc: 0.278000\n (Iteration 81 / 200) loss: 1.743705\n (Epoch 5 / 10) train acc: 0.405000; val_acc: 
0.285000\n (Iteration 101 / 200) loss: 1.511911\n (Epoch 6 / 10) train acc: 0.479000; val_acc: 0.308000\n (Iteration 121 / 200) loss: 1.679106\n (Epoch 7 / 10) train acc: 0.501000; val_acc: 0.266000\n (Iteration 141 / 200) loss: 1.635350\n (Epoch 8 / 10) train acc: 0.508000; val_acc: 0.286000\n (Iteration 161 / 200) loss: 1.190941\n (Epoch 9 / 10) train acc: 0.594000; val_acc: 0.311000\n (Iteration 181 / 200) loss: 1.270953\n (Epoch 10 / 10) train acc: 0.650000; val_acc: 0.332000\n \n Solver without batch norm:\n (Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696059\n (Epoch 6 / 10) train acc: 0.535000; val_acc: 0.345000\n (Iteration 121 / 200) loss: 1.557987\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.304000\n (Iteration 141 / 200) loss: 1.432189\n (Epoch 8 / 10) train acc: 0.628000; val_acc: 0.339000\n (Iteration 161 / 200) loss: 1.033931\n (Epoch 9 / 10) train acc: 0.661000; val_acc: 0.340000\n (Iteration 181 / 200) loss: 0.901034\n (Epoch 10 / 10) train acc: 0.726000; val_acc: 0.318000\n\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 / 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', 
label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?\n\n## Answer:\nThe second plot shows the problem of vanishing gradients (small initial weights). The baseline model is very sensitive to this problem (the accuracy is very low), so finding the correct weight scale is difficult. For this example, the baseline obtains the best result with a weight scale equal to 1e-1. On the other hand, we can see that the batchnorm model is less sensitive to weight initialization because its accuracy is around 30% for all the different weight scales.\n\nThe behaviour of the first plot is very similar to that of the second plot. The main difference is that the first plot shows that we are overfitting our model. Besides that, we can see that the batchnorm model obtains better results than the baseline model, which happens because batch normalization has regularization properties.\n\nThe third plot depicts the problem of exploding gradients, which is very evident in the baseline model for weight scale values greater than 1e-1. However, the batchnorm model does not suffer from this problem.\n\nIn general, with batch normalization we can avoid the problem of vanishing and exploding gradients because it normalizes the output of every affine layer (xW+b), avoiding very large/small values. Moreover, its regularization properties help decrease overfitting.\n\n\n# Batch normalization and batch size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.
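\n\nAs a quick aside before running the experiment, the following small simulation (an illustrative sketch in plain NumPy, not one of the assignment cells) shows why per-batch statistics become noisy for small batches: the standard deviation of the estimated batch mean scales roughly as $1/\\sqrt{m}$ for batch size $m$.\n\n\n```\nimport numpy as np\n\nnp.random.seed(0)\npopulation = np.random.randn(100000)  # stand-in for one feature's activations\n\nfor m in [5, 10, 50]:\n    # estimate the mean from many random mini-batches of size m\n    batch_means = np.array([np.random.choice(population, m).mean() for _ in range(1000)])\n    print('batch size %2d: std of estimated batch means = %.3f' % (m, batch_means.std()))\n```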
\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n # Try training a very deep net with batchnorm\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\nAccording to the results, we can see that the batch size directly affects the performance of batch normalization (the smaller the batch size, the worse the performance). Even the baseline model outperforms the batchnorm model when using a very small batch size. This problem occurs because when we calculate the statistics of a batch, i.e., mean and variance, we try to find an approximation of the statistics of the entire dataset. Therefore, with a small batch size, these statistics can be very noisy. On the other hand, with a large batch size we can obtain a better approximation.\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. 
In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\nNumber 2 is analogous to layer normalization when we consider: mean = 0, beta parameter = 0 (at this point we have gamma*x/std where std=sqrt(sum(x^2))) and gamma=x/std. Thus the result of layer normalization will be x^2/sum(x^2).\n\nNumber 3 is analogous to batch normalization when we consider: batch size = size of the dataset, gamma parameter = standard deviation and beta parameter = 0. Thus the result of batch normalization will be std*(x-mean)/std + 0 = x-mean.\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n# Means should be close to zero and stds close to one\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n# Now means should be close to beta and stds close to gamma\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n#You should expect to see relative errors between 1e-12 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336158494902849e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and batch size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. 
Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n1. [INCORRECT] In the previous example, the network had five layers and it can be considered as a deep network. Thus, using layer normalization in deep networks works correctly.\n\n2. [CORRECT] Having a small dimension of features affects the performance of layer normalization. The problem is very similar to that of batch normalization with small batch size because in layer normalization we calculate the statistics according to the number of hidden units, which represent the features that the network is learning. Thus, the smaller the hidden size the noisier the statistics used in layer normalization.\n\n3. [CORRECT] Having a high regularization term affects the performance of layer normalization. In general, when the regularization term is very high, the model learns very simple functions (underfitting).\n\n", "meta": {"hexsha": "355e8057e688a71c222953a7c17e2994e813f606", "size": 462511, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignment2/BatchNormalization.ipynb", "max_stars_repo_name": "BatyrM/Stanford-CS231n-Spring-2020", "max_stars_repo_head_hexsha": "112ec761589296ae1007165ea7032a3d441b2307", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-11-10T09:13:55.000Z", "max_stars_repo_stars_event_max_datetime": "2021-11-10T09:13:55.000Z", "max_issues_repo_path": "assignment2/BatchNormalization.ipynb", "max_issues_repo_name": "BatyrM/CS231n-Spring-2020", "max_issues_repo_head_hexsha": "112ec761589296ae1007165ea7032a3d441b2307", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 10, "max_issues_repo_issues_event_min_datetime": "2021-06-08T21:51:35.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:37:43.000Z", "max_forks_repo_path": "assignment2/BatchNormalization.ipynb", "max_forks_repo_name": "BatyrM/CS231n-Spring-2020", "max_forks_repo_head_hexsha": "112ec761589296ae1007165ea7032a3d441b2307", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 462511.0, "max_line_length": 462511, "alphanum_fraction": 0.9364944834, "converted": true, "num_tokens": 9956, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3208213008246071, "lm_q2_score": 0.3073580105206753, "lm_q1q2_score": 0.09860699675410632}} {"text": "```python\n%matplotlib inline\n```\n\n\n```python\n# Write your imports here\nimport sympy\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math\n```\n\n# High-School Maths Exercise\n## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow\n\n### Problem 1. Markdown\nJupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.\n\nFirst, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press Ctrl + Enter.\n\nSecond, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).\n\nLet me give you a...\n#### Quick Introduction to Markdown\n##### Text and Paragraphs\nThere are several things that you can do. 
As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:\n```\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n```\n**Result:**\n\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n\n##### Headings\nThere are six levels of headings. Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six \"#\" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look:\n```\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n```\n\n**Result:**\n\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n\nIt is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.\n\n##### Emphasis\nYou can create emphasized (stonger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\\*) or underscores (\\_)). In order to \"escape\" a symbol, prefix it with a backslash (\\). You can also strike thorugh your text in order to signify a correction.\n```\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not \\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n```\n\n**Result:**\n\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n\n##### Lists\nYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press Tab once (it will be converted to 4 spaces).\n\nTo create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...\n```\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n```\n\n**Result:**\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n \nTo create an unordered list, type an asterisk, plus or minus at the beginning:\n```\n* This is\n* An\n + Unordered\n - list\n```\n\n**Result:**\n* This is\n* An\n + Unordered\n - list\n \n##### Links\nThere are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:\n```\nThis is [a link](http://google.com) to Google.\n```\n\n**Result:**\n\nThis is [a link](http://google.com) to Google.\n\n##### Images\nThey are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. 
Have a look (hover over the image to see the title text):\n```\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n```\n\n**Result:**\n\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n\nIf you want to resize images or do some more advanced stuff, just use HTML. \n\nDid I mention these cells support HTML, CSS and JavaScript? Now I did.\n\n##### Tables\nThese are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. It will generate a good-looking ASCII-art table for you.\n```\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n```\n\n**Result:**\n\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n\n##### Code\nJust use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.\n
\n```python\ndef square(x):\n    return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n
\n\n**Result:**\n```python\ndef square(x):\n return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n\n**Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).\n\n___some markdown here___\n\n### Problem 2. Formulas and LaTeX\nWriting math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.\n\nThere are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.\n\nMost commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \\frac{a}{b} $$`: $$ \\frac{a}{b} $$.\n\n[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.\n\nYou're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D.\n\nNote that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.\n\n\n\n$$ y = ax + b $$\n\n$$ ax^2 + bx + c = 0 $$\n\n$$ x_{1,2}= \\frac{-b \\pm\\sqrt{b^2 - 4ac}}{2a} $$\n\n\\begin{equation}\nf(x)|_{x=a} = f(a) + f\\prime(a)(x-a) + \\frac{f^n(a)}{2!}(x-a)^2 + ... + \\frac{f^n(a)}{n!}(x-a)^n + ...\n\\end{equation}\n\n\\begin{equation}\n(x + y)^n = {n\\choose 0}x^ny^0 + {n\\choose 1}x^{n-1}y^1 + ... + {n\\choose n}x^0y^n = \\sum_{k=0}^{n} {n\\choose k}x^{n-k}y^k\n\\end{equation}\n\n\\begin{equation}\n\\int_{-\\infty}^{+\\infty} e^{-x^2}dx = \\sqrt{\\pi}\n\\end{equation}\n\n\\begin{equation}\n\\left( \\begin{array}{ccc}\n2 & 1 & 3 \\\\\n2 & 6 & 8 \\\\\n6 & 8 & 18 \\end{array} \\right)\n\\end{equation}\n\n\\begin{equation}\nA = \\begin{pmatrix} \n a_{11} & a_{12} & \\dots & a_{1n} \\\\ \n a_{21} & a_{22} & \\dots & a_{2n} \\\\ \n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n a_{m1} & a_{m1} & \\dots & a_{mn} \n \\end{pmatrix}\n\\end{equation}\n\n
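As a quick, optional aside (not part of the original exercise text), `sympy` can be used to verify the Gaussian integral shown above; the only assumption is that `sympy` is available, as in the import cell at the top of the notebook.\n\n```python\n# Illustrative check of the Gaussian integral above using sympy\nimport sympy\n\nx = sympy.symbols('x')\nsympy.integrate(sympy.exp(-x**2), (x, -sympy.oo, sympy.oo))  # evaluates to sqrt(pi)\n```\n\n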

Write your formulas here.

\n\n### Problem 3. Solving with Python\nLet's first do some symbolic computation. We need to import `sympy` first. \n\n**Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**\n\nLet's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): \n```python \nimport sympy \n```\n\nNext, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:\n```python \nx = sympy.symbols('x')\na, b, c = sympy.symbols('a b c')\n```\n\nNow solve:\n```python \nsympy.solve(a * x**2 + b * x + c)\n```\n\nHmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second paramter:\n```python \nsympy.solve(a * x**2 + b * x + c, x)\n```\n\nFinally, if we use `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas.\n\n\n```python\nx = sympy.symbols('x')\na, b, c = sympy.symbols('a b c')\n\nsympy.solve(a * x**2 + b * x + c)\n\nsympy.init_printing()\n\na = 5\n```\n\nHow about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?\n\nRemember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.\n\nIf $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$\n\nIf $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$\n\nIf $b^2 - 4ac < 0$, the equation has zero real roots\n\nWrite a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. In the third case, return an empty list: `[]`.\n\n\n```python\n\ndef solve_quadratic_equation(a, b, c):\n \"\"\"\n Returns the real solutions of the quadratic equation ax^2 + bx + c = 0\n \"\"\"\n if a == 0:\n if b == 0:\n return math.nan\n elif c == 0:\n return b\n else:\n return -c / b\n else:\n d = b**2-4*a*c\n answer = []\n if d > 0:\n answer.append((-b - math.sqrt(d)) / (2*a))\n answer.append((-b + math.sqrt(d)) / (2*a))\n elif d == 0:\n answer.append(-b / (2*a))\n return answer\n```\n\n\n```python\n# Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests\nprint(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0]\nprint(solve_quadratic_equation(1, -8, 16)) # [4.0]\nprint(solve_quadratic_equation(1, 1, 1)) # []\n```\n\n [-1.0, 2.0]\n [4.0]\n []\n\n\n**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).\n\n### Problem 4. Equation of a Line\nLet's go back to our linear equations and systems. There are many ways to define what \"linear\" means, but they all boil down to the same thing.\n\nThe equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. 
We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).\n\nThe function produces a straight line and we can see it.\n\nHow do we plot functions in general? Ww know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth.\n\nNow, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:\n* All elements in it must be of the same type\n* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping.\n\nThere's one more thing: it's blazingly fast because all computations are done in C, instead of Python.\n\nFirst let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:\n```python\nimport numpy as np\n```\n\nImport that at the top cell and don't forget to re-run it.\n\nNext, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. `np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).\n```python\nx = np.linspace(-3, 5, 1000)\n```\nNow, let's generate our function variable\n```python\ny = 2 * x + 3\n```\n\nWe can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commnly used one and we usually give it an alias as well.\n```python\nimport matplotlib.pyplot as plt\n```\n\nNow, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a \"magic string\": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready.\n```python\nplt.plot(x, y)\nplt.show()\n```\n\n\n```python\nx = np.linspace(-3, 5, 1000)\ny = 2 * x + 3\nplt.plot(x, y)\nplt.show()\n```\n\nIt doesn't look too bad bit we can do much better. See how the axes don't look like they should? Let's move them to zeto. This can be done using the \"spines\" of the plot (i.e. the borders).\n\nAll `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for \"axis\".\nLet's save it in a variable (in order to prevent multiple calculations and to make code prettier). 
Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right one.\n```python\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\n```\n\n**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.\n\nThis should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).\n\n\n```python\nx = np.linspace(-3, 5, 1000)\ny = 2 * x + 3\nplt.plot(x, y)\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\nplt.show()\n```\n\n### * Problem 5. Linearizing Functions\nWhy is the line equation so useful? The main reason is because it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. \n\nA commonly used method for linearizing functions is through algebraic transformations. Try to linearize \n$$ y = ae^{bx} $$\n\nHint: The inverse operation of $e^{x}$ is $\\ln(x)$. Start by taking $\\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).\n\n$$ ln(y) = ln(a e^{bx}) $$\n\n$$ ln(y) = ln(a) + ln(e^{bx}) $$\n\n$$ ln(y) = ln(a) + bx $$\n\n### * Problem 6. Generalizing the Plotting Function\nLet's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.\n\nNote: We can also pass *lambda expressions* (anonymous functions) like this: \n```python\nlambda x: x + 2```\nThis is a shorter way to write\n```python\ndef some_anonymous_function(x):\n return x + 2\n```\n\nWe'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.\n\nWrite a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.\n\n**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. 
to allow it to be used with `numpy` broadcasting):\n```python\nf_vectorized = np.vectorize(f)\ny = f_vectorized(x)\n```\n\n\n```python\ndef plot_math_function(f, min_x, max_x, num_points):\n xpts = np.linspace(min_x, max_x, num_points) \n plt.plot(xpts, [f(x) for x in xpts])\n ax = plt.gca()\n ax.spines[\"bottom\"].set_position(\"zero\")\n ax.spines[\"left\"].set_position(\"zero\")\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n plt.show()\n```\n\n\n```python\nplot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)\nplot_math_function(lambda x: -x + 8, -1, 10, 1000)\nplot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)\nplot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)\nplot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)\n```\n\n### * Problem 7. Solving Equations Graphically\nNow that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the \"=\" sign ans seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.\n\nTo do this, we'll need to improve our plotting function yet once. This time we'll need to take multiple functions and plot them all on the same graph. Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.\n\n```python\nvectorized_fs = [np.vectorize(f) for f in functions]\nys = [vectorized_f(x) for vectorized_f in vectorized_fs]\n```\n\n\n```python\ndef plot_math_functions(functions, min_x, max_x, num_points): \n xpts = np.linspace(min_x, max_x, num_points) \n vectorized_fs = [np.vectorize(f) for f in functions]\n ys = [vectorized_f(xpts) for vectorized_f in vectorized_fs]\n for f in ys:\n plt.plot(xpts, f)\n ax = plt.gca()\n ax.spines[\"bottom\"].set_position(\"zero\")\n ax.spines[\"left\"].set_position(\"zero\")\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n plt.show()\n```\n\n\n```python\nplot_math_functions([lambda x: 2 * x + 3, lambda x: 0], -3, 5, 1000)\nplot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)\n```\n\nThis is also a way to plot the solutions of systems of equation, like the one we solved last time. Let's actually try it.\n\n\n```python\nplot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)\n```\n\n### Problem 8. Trigonometric Functions\nWe already saw the graph of the function $y = \\sin(x)$. But, how do we define the trigonometric functions once again? Let's quickly review that.\n\n\n\nThe two basic trigonometric functions are defined as the ratio of two sides:\n$$ \\sin(x) = \\frac{\\text{opposite}}{\\text{hypotenuse}} $$\n$$ \\cos(x) = \\frac{\\text{adjacent}}{\\text{hypotenuse}} $$\n\nAnd also:\n$$ \\tan(x) = \\frac{\\text{opposite}}{\\text{adjacent}} = \\frac{\\sin(x)}{\\cos(x)} $$\n$$ \\cot(x) = \\frac{\\text{adjacent}}{\\text{opposite}} = \\frac{\\cos(x)}{\\sin(x)} $$\n\nThis is fine, but using this, \"right-triangle\" definition, we're able to calculate the trigonometric functions of angles up to $90^\\circ$. But we can do better. 
Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a \"unit circle\".\n\n\n\nWe can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\\cos(\\alpha)$ and the $y$-coordinate - to $\\sin(\\alpha)$. What did we get? We're now able to define the trigonometric functions for all degrees up to $360^\\circ$. After that, the same values repeat: these functions are **periodic**: \n$$ \\sin(k.360^\\circ + \\alpha) = \\sin(\\alpha), k = 0, 1, 2, \\dots $$\n$$ \\cos(k.360^\\circ + \\alpha) = \\cos(\\alpha), k = 0, 1, 2, \\dots $$\n\nWe can, of course, use this picture to derive other identities, such as:\n$$ \\sin(90^\\circ + \\alpha) = \\cos(\\alpha) $$\n\nA very important property of the sine and cosine is that they accept values in the range $(-\\infty; \\infty)$ and produce values in the range $[-1; 1]$. The two other functions take values in the range $(-\\infty; \\infty)$ **except when their denominators are zero** and produce values in the same range. \n\n#### Radians\nA degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. It's called the **radian** and can be written as $\\text{rad}$ or without any designation, so $\\sin(2)$ means \"sine of two radians\".\n\n\nIt's defined as *the central angle of an arc with length equal to the circle's radius* and $1\\text{rad} \\approx 57.296^\\circ$.\n\nWe know that the circle circumference is $C = 2\\pi r$, therefore we can fit exactly $2\\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\\circ$ or $2\\pi\\ \\text{rad}$. Also, $\\pi rad = 180^\\circ$.\n\n(Some people prefer using $\\tau = 2\\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)\n\n**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\\text{[deg]} = 180/\\pi.\\text{[rad]}, \\text{[rad]} = \\pi/180.\\text{[deg]}$. This can be done using `np.deg2rad()` and `np.rad2deg()` respectively.\n\n#### Inverse trigonometric functions\nAll trigonometric functions have their inverses. If you plug in, say $\\pi/4$ in the $\\sin(x)$ function, you get $\\sqrt{2}/2$. The inverse functions (also called, arc-functions) take arguments in the interval $[-1; 1]$ and return the angle that they correspond to. Take arcsine for example:\n$$ \\arcsin(y) = x: sin(y) = x $$\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} $$\n\nPlease note that this is NOT entirely correct. From the relations we found:\n$$\\sin(x) = sin(2k\\pi + x), k = 0, 1, 2, \\dots $$\n\nit follows that $\\arcsin(x)$ has infinitely many values, separated by $2k\\pi$ radians each:\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} + 2k\\pi, k = 0, 1, 2, \\dots $$\n\nIn most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.\n\nNote 1: There are inverse functions for all four basic trigonometric functions: $\\arcsin$, $\\arccos$, $\\arctan$, $\\text{arccot}$. These are sometimes written as $\\sin^{-1}(x)$, $cos^{-1}(x)$, etc. These definitions are completely equivalent. 
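\n\nAs a small illustrative aside (not part of the original text), a few `numpy` calls can confirm the degree/radian conversions and the principal value mentioned above:\n\n```python\nimport numpy as np\n\nprint(np.deg2rad(180))                            # pi = 3.14159...\nprint(np.sin(np.deg2rad(30)))                     # 0.5, i.e. sin(30 degrees)\nprint(np.arcsin(np.sqrt(2) / 2))                  # pi / 4, the principal value\nprint(np.arcsin(np.sin(2 * np.pi + np.pi / 4)))   # also pi / 4, as expected\n```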
\n\nJust notice the difference between $\\sin^{-1}(x) := \\arcsin(x)$ and $\\sin(x^{-1}) = \\sin(1/x)$.\n\n#### Exercise\nUse the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions).\n\n\n```python\ndef plot_math_functions(min_x, max_x, num_points): \n xpts = np.linspace(min_x, max_x)\n plt.plot(xpts, np.arcsin(xpts))\n plt.plot(xpts, np.arccos(xpts))\n plt.plot(xpts, np.arctan(xpts))\n# plt.plot(xpts, np.arccos(xpts) / np.arcsin(xpts))\n ax = plt.gca()\n ax.spines[\"bottom\"].set_position(\"zero\")\n ax.spines[\"left\"].set_position(\"zero\")\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n plt.show()\nplot_math_functions(1, -1, 20)\n```\n\n### ** Problem 9. Perlin Noise\nThis algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).\n#### Noise\nNoise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.\nWe can generate noise in however many dimensions we want. For example, if we want to generate a single dimension, we just pick N random values and call it a day. If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`.\n\n$$ \\text{noise}(x, y) = N, N \\in [n_{min}, n_{max}] $$\n\nThis function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. (This is what we call a \"scalar field\").\n\nRandom variables are always connected to **distributions**. We'll talk about these a great deal but now let's just say that these define what our noise will look like. In the most basic case, we can have \"uniform noise\" - that is, each point in our little noise space $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected.\n\n#### Perlin noise\nThere are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.\n\n#### Algorithm\n... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients).\n\n#### Your task\n1. Research about the problem. See what articles, papers, Python notebooks, demos, etc. other people have created\n2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using\n3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created\n4. Test and improve the algorithm\n5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)\n6. Communicate the results (e.g. in the Softuni forum)\n\nHint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. 
It can show you both how to organize your notebook (which is important) and how to implement the algorithm.\n", "meta": {"hexsha": "61c2b2a523f30e24a88fb1e6569a9ea003d2b973", "size": 203540, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "MathConcepts/a_highSchollMath/High-School Maths Exercise.ipynb", "max_stars_repo_name": "KaPrimov/ai-module", "max_stars_repo_head_hexsha": "d0a40482830085ddf020aa5dece88b791699325f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "MathConcepts/a_highSchollMath/High-School Maths Exercise.ipynb", "max_issues_repo_name": "KaPrimov/ai-module", "max_issues_repo_head_hexsha": "d0a40482830085ddf020aa5dece88b791699325f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "MathConcepts/a_highSchollMath/High-School Maths Exercise.ipynb", "max_forks_repo_name": "KaPrimov/ai-module", "max_forks_repo_head_hexsha": "d0a40482830085ddf020aa5dece88b791699325f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 220.9989142237, "max_line_length": 18206, "alphanum_fraction": 0.8849906652, "converted": true, "num_tokens": 7761, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3276682876897044, "lm_q2_score": 0.3007455914759599, "lm_q1q2_score": 0.09854479298915515}} {"text": "```python\nfrom IPython.display import Image\nImage('../../Python_probability_statistics_machine_learning_2E.png',width=200)\n```\n\nWe considered Maximum Likelihood Estimation (MLE) and Maximum A-Posteriori\n(MAP)\nestimation and in each case we started out with a probability density\nfunction\nof some kind and we further assumed that the samples were identically\ndistributed and independent (iid). The idea behind robust statistics\n[[maronna2006robust]](#maronna2006robust) is to construct estimators that can\nsurvive the\nweakening of either or both of these assumptions. More concretely,\nsuppose you\nhave a model that works great except for a few outliers. The\ntemptation is to\njust ignore the outliers and proceed. Robust estimation methods\nprovide a\ndisciplined way to handle outliers without cherry-picking data that\nworks for\nyour favored model.\n\n### The Notion of Location\n\nThe first notion we\nneed is *location*, which is a generalization of the idea\nof *central value*.\nTypically, we just use an estimate of the mean for this,\nbut we will see later\nwhy this could be a bad idea. The general idea of\nlocation satisfies the\nfollowing requirements Let $X$ be a random variable with\ndistribution $F$, and\nlet $\\theta(X)$ be some descriptive measure of $F$. Then\n$\\theta(X)$ is said to\nbe a measure of *location* if for any constants *a* and\n*b*, we have the\nfollowing:\n\n\n
\n\n$$\n\\begin{equation}\n\\theta(X+b) = \\theta(X) +b \n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \\\n\\theta(-X) = -\\theta(X) \n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \\\nX \\ge 0 \\Rightarrow \\theta(X) \\ge 0 \n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \\\n\\theta(a X) = a\\theta(X)\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\n The first condition is called *location equivariance* (or *shift-invariance* in\nsignal processing lingo). The fourth condition is called *scale equivariance*,\nwhich means that the units that $X$ is measured in should not effect the value\nof the location estimator. These requirements capture the intuition of\n*centrality* of a distribution, or where most of the\nprobability mass is\nlocated.\n\nFor example, the sample mean estimator is $ \\hat{\\mu}=\\frac{1}{n}\\sum\nX_i $. The first\nrequirement is obviously satisfied as $\n\\hat{\\mu}=\\frac{1}{n}\\sum (X_i+b) = b +\n\\frac{1}{n}\\sum X_i =b+\\hat{\\mu}$. Let\nus consider the second requirement:$\n\\hat{\\mu}=\\frac{1}{n}\\sum -X_i =\n-\\hat{\\mu}$. Finally, the last requirement is\nsatisfied with $\n\\hat{\\mu}=\\frac{1}{n}\\sum a X_i =a \\hat{\\mu}$.\n\n### Robust Estimation and Contamination\n\nNow that we have the generalized location of centrality embodied\nin the\n*location* parameter, what can we do with it? Previously, we assumed\nthat our samples\nwere all identically distributed. The key idea is that the\nsamples might be\nactually coming from a *single* distribution that is\ncontaminated by another nearby\ndistribution, as in the following:\n\n$$\nF(X) = \\epsilon G(X) + (1-\\epsilon)H(X)\n$$\n\n where $ \\epsilon $ randomly toggles between zero and one. This means\nthat our\ndata samples $\\lbrace X_i \\rbrace$ actually derived from two separate\ndistributions, $ G(X) $ and $ H(X) $. We just don't know how they are mixed\ntogether. What we really want is an estimator that captures the location of $\nG(X) $ in the face of random intermittent contamination by $ H(X)$. For\nexample, it may be that this contamination is responsible for the outliers in a\nmodel that otherwise works well with the dominant $F$ distribution. It can get\neven worse than that because we don't know that there is only one contaminating\n$H(X)$ distribution out there. There may be a whole family of distributions\nthat\nare contaminating $G(X)$. This means that whatever estimators we construct\nhave\nto be derived from a more generalized family of distributions instead of\nfrom a\nsingle distribution, as the maximum-likelihood method assumes. This is\nwhat\nmakes robust estimation so difficult --- it has to deal with *spaces* of\nfunction distributions instead of parameters from a particular probability\ndistribution.\n\n### Generalized Maximum Likelihood Estimators\n\nM-estimators are\ngeneralized maximum likelihood estimators. Recall that for\nmaximum likelihood,\nwe want to maximize the likelihood function as in the\nfollowing:\n\n$$\nL_{\\mu}(x_i) = \\prod f_0(x_i-\\mu)\n$$\n\n and then to find the estimator $\\hat{\\mu}$ so that\n\n$$\n\\hat{\\mu} = \\arg \\max_{\\mu} L_{\\mu}(x_i)\n$$\n\n So far, everything is the same as our usual maximum-likelihood\nderivation\nexcept for the fact that we don't assume a specific $f_0$ as the\ndistribution of\nthe $\\lbrace X_i\\rbrace$. Making the definition of\n\n$$\n\\rho = -\\log f_0\n$$\n\n we obtain the more convenient form of the likelihood product and the\noptimal\n$\\hat{\\mu}$ as\n\n$$\n\\hat{\\mu} = \\arg \\min_{\\mu} \\sum \\rho(x_i-\\mu)\n$$\n\n If $\\rho$ is differentiable, then differentiating this with respect\nto $\\mu$\ngives\n\n\n
\n\n$$\n\\begin{equation}\n\\sum \\psi(x_i-\\hat{\\mu}) = 0 \n\\label{eq:muhat} \\tag{5}\n\\end{equation}\n$$\n\n with $\\psi = \\rho^\\prime$, the first derivative of $\\rho$ , and for technical\nreasons we will assume that\n$\\psi$ is increasing. So far, it looks like we just\npushed some definitions\naround, but the key idea is we want to consider general\n$\\rho$ functions that\nmay not be maximum likelihood estimators for *any*\ndistribution. Thus, our\nfocus is now on uncovering the nature of $\\hat{\\mu}$.\n\n### Distribution of M-estimates\n\nFor a given distribution $F$, we define\n$\\mu_0=\\mu(F)$ as the solution to the\nfollowing\n\n$$\n\\mathbb{E}_F(\\psi(x-\\mu_0))= 0\n$$\n\n It is technical to show, but it turns out that $\\hat{\\mu} \\sim\n\\mathcal{N}(\\mu_0,\\frac{v}{n})$ with\n\n$$\nv =\n\\frac{\\mathbb{E}_F(\\psi(x-\\mu_0)^2)}{(\\mathbb{E}_F(\\psi^\\prime(x-\\mu_0)))^2}\n$$\n\n Thus, we can say that $\\hat{\\mu}$ is asymptotically normal with asymptotic\nvalue $\\mu_0$ and asymptotic variance $v$. This leads to the efficiency ratio\nwhich is defined as the following:\n\n$$\n\\texttt{Eff}(\\hat{\\mu})= \\frac{v_0}{v}\n$$\n\n where $v_0$ is the asymptotic variance of the MLE and measures how\nnear\n$\\hat{\\mu}$ is to the optimum. In other words, this provides a sense of\nhow much\noutlier contamination costs in terms of samples. For example, if for\ntwo\nestimates with asymptotic variances $v_1$ and $v_2$, we have $v_1=3v_2$,\nthen\nfirst estimate requires three times as many observations to obtain the\nsame\nvariance as the second. Furthermore, for the sample mean (i.e.,\n$\\hat{\\mu}=\\frac{1}{n} \\sum X_i$) with $F=\\mathcal{N}$, we have $\\rho=x^2/2$\nand\n$\\psi=x$ and also $\\psi'=1$. Thus, we have $v=\\mathbb{V}(x)$.\nAlternatively,\nusing the sample median as the estimator for the location, we\nhave $v=1/(4\nf(\\mu_0)^2)$. Thus, if we have $F=\\mathcal{N}(0,1)$, for the\nsample median, we\nobtain $v={2\\pi}/{4} \\approx 1.571$. This means that the\nsample median takes\napproximately 1.6 times as many samples to obtain the same\nvariance for the\nlocation as the sample mean. The sample median is \nfar more immune to the\neffects of outliers than the sample mean, so this \ngives a sense of how much\nthis robustness costs in samples.\n\n** M-Estimates as Weighted Means.** One way\nto think about M-estimates is a\nweighted means. Operationally, this\nmeans that\nwe want weight functions that can circumscribe the\ninfluence of the individual\ndata points, but, when taken as a whole,\nstill provide good estimated\nparameters. Most of the time, we have $\\psi(0)=0$ and $\\psi'(0)$ exists so\nthat\n$\\psi$ is approximately linear at the origin. Using the following\ndefinition:\n\n$$\nW(x) = \\begin{cases}\n \\psi(x)/x & \\text{if} \\: x \\neq 0 \\\\\\\n\\psi'(x) & \\text{if} \\: x =0 \n \\end{cases}\n$$\n\n We can write our Equation [5](#eq:muhat) as follows:\n\n\n
\n\n$$\n\\begin{equation}\n\\sum W(x_i-\\hat{\\mu})(x_i-\\hat{\\mu}) = 0 \n\\label{eq:Wmuhat}\n\\tag{6}\n\\end{equation}\n$$\n\n Solving this for $\\hat{\\mu} $ yields the following,\n\n$$\n\\hat{\\mu} = \\frac{\\sum w_{i} x_i}{\\sum w_{i}}\n$$\n\n where $w_{i}=W(x_i-\\hat{\\mu})$. This is not practically useful\nbecause the\n$w_i$ contains $\\hat{\\mu}$, which is what we are trying to solve\nfor. The\nquestion that remains is how to pick the $\\psi$ functions. This is\nstill an open\nquestion, but the Huber functions are a well-studied choice.\n\n### Huber\nFunctions\n\nThe family of Huber functions is defined by the following:\n\n$$\n\\rho_k(x ) = \\begin{cases}\n x^2 & \\mbox{if } |x|\\leq\nk \\\\\\\n 2 k |x|-k^2 & \\mbox{if } |x| > k\n\\end{cases}\n$$\n\n with corresponding derivatives $2\\psi_k(x)$ with\n\n$$\n\\psi_k(x ) = \\begin{cases}\n x & \\mbox{if } \\: |x|\n\\leq k \\\\\\\n \\text{sgn}(x)k & \\mbox{if } \\: |x| > k\n\\end{cases}\n$$\n\n where the limiting cases $k \\rightarrow \\infty$ and $k \\rightarrow 0$\ncorrespond to the mean and median, respectively. To see this, take\n$\\psi_{\\infty} = x$ and therefore $W(x) = 1$ and thus the defining Equation\n[6](#eq:Wmuhat) results in\n\n$$\n\\sum_{i=1}^{n} (x_i-\\hat{\\mu}) = 0\n$$\n\n and then solving this leads to $\\hat{\\mu} = \\frac{1}{n}\\sum x_i$.\nNote that\nchoosing $k=0$ leads to the sample median, but that is not so\nstraightforward\nto solve for. Nonetheless, Huber functions provide a way\nto move between two\nextremes of estimators for location (namely, \nthe mean vs. the median) with a\ntunable parameter $k$. \nThe $W$ function corresponding to Huber's $\\psi$ is the\nfollowing:\n\n$$\nW_k(x) = \\min\\Big{\\lbrace} 1, \\frac{k}{|x|} \\Big{\\rbrace}\n$$\n\n [Figure](#fig:Robust_Statistics_0001) shows the Huber weight\nfunction for $k=2$\nwith some sample points. The idea is that the computed\nlocation, $\\hat{\\mu}$ is\ncomputed from Equation [6](#eq:Wmuhat) to lie somewhere\nin the middle of the\nweight function so that those terms (i.e., *insiders*)\nhave their values fully\nreflected in the location estimate. The black circles\nare the *outliers* that\nhave their values attenuated by the weight function so\nthat only a fraction of\ntheir presence is represented in the location estimate.\n\n\n\n\n\n

This shows the Huber weight function,\n$W_2(x)$ and some cartoon data points that are insiders or outsiders as far as\nthe robust location estimate is concerned.

\n\n\n\n\n\n###\nBreakdown Point\n\nSo far, our discussion of robustness has been very abstract. A\nmore concrete\nconcept of robustness comes from the breakdown point. In the\nsimplest terms,\nthe breakdown point describes what happens when a single data\npoint in an\nestimator is changed in the most damaging way possible. For example,\nsuppose we\nhave the sample mean, $\\hat{\\mu}=\\sum x_i/n$, and we take one of the\n$x_i$\npoints to be infinite. What happens to this estimator? It also goes\ninfinite.\nThis means that the breakdown point of the estimator is 0%. On the\nother hand,\nthe median has a breakdown point of 50%, meaning that half of the\ndata for\ncomputing the median could go infinite without affecting the median\nvalue. The median\nis a *rank* statistic that cares more about the relative\nranking of the data\nthan the values of the data, which explains its robustness.\nThe simpliest but still formal way to express the breakdown point is to\ntake $n$\ndata points, $\\mathcal{D} = \\lbrace (x_i,y_i) \\rbrace$. Suppose $T$\nis a\nregression estimator that yields a vector of regression coefficients,\n$\\boldsymbol{\\theta}$,\n\n$$\nT(\\mathcal{D}) = \\boldsymbol{\\theta}\n$$\n\n Likewise, consider all possible corrupted samples of the data\n$\\mathcal{D}^\\prime$. The maximum *bias* caused by this contamination is\nthe\nfollowing:\n\n$$\n\\texttt{bias}_{m} = \\sup_{\\mathcal{D}^\\prime} \\Vert\nT(\\mathcal{D^\\prime})-T(\\mathcal{D}) \\Vert\n$$\n\n where the $\\sup$ sweeps over all possible sets of $m$ contaminated samples.\nUsing this, the breakdown point is defined as the following:\n\n$$\n\\epsilon_m = \\min \\Big\\lbrace \\frac{m}{n} \\colon \\texttt{bias}_{m}\n\\rightarrow \\infty \\Big\\rbrace\n$$\n\n For example, in our least-squares regression, even one point at\ninfinity causes\nan infinite $T$. Thus, for least-squares regression,\n$\\epsilon_m=1/n$. In the\nlimit $n \\rightarrow \\infty$, we have $\\epsilon_m\n\\rightarrow 0$.\n\n###\nEstimating Scale\n\nIn robust statistics, the concept of *scale* refers to a\nmeasure of the\ndispersion of the data. Usually, we use the\nestimated standard\ndeviation for this, but this has a terrible breakdown point.\nEven more\ntroubling, in order to get a good estimate of location, we have to\neither\nsomehow know the scale ahead of time, or jointly estimate it. None of\nthese\nmethods have easy-to-compute closed form solutions and must be computed\nnumerically.\n\nThe most popular method for estimating scale is the *median\nabsolute deviation*\n\n$$\n\\texttt{MAD} = \\texttt{Med} (\\vert \\mathbf{x} -\n\\texttt{Med}(\\mathbf{x})\\vert)\n$$\n\n In words, take the median of the data $\\mathbf{x}$ and\nthen subtract that\nmedian from the data itself, and then take the median of the\nabsolute value of\nthe result. Another good dispersion estimate is the *interquartile range*,\n\n$$\n\\texttt{IQR} = x_{(n-m+1)} - x_{(n)}\n$$\n\n where $m= [n/4]$. The $x_{(n)}$ notation means the $n^{th}$ data\nelement after\nthe data have been sorted. Thus, in this notation,\n$\\texttt{max}(\\mathbf{x})=x_{(n)}$. In the case where $x \\sim\n\\mathcal{N}(\\mu,\\sigma^2)$, then $\\texttt{MAD}$ and $\\texttt{IQR}$ are constant\nmultiples of $\\sigma$ such that the normalized $\\texttt{MAD}$ is the following,\n\n$$\n\\texttt{MADN}(x) = \\frac{\\texttt{MAD} }{0.675}\n$$\n\n The number comes from the inverse CDF of the normal distribution\ncorresponding\nto the $0.75$ level. 
Given the complexity of the\ncalculations, *jointly*\nestimating both location and scale is a purely\nnumerical matter. Fortunately,\nthe Statsmodels module has many of these\nready to use. Let's create some\ncontaminated data in the following code,\n\n\n```python\nimport statsmodels.api as sm\nimport numpy as np\n\nfrom scipy import stats\ndata=np.hstack([stats.norm(10,1).rvs(10),\n stats.norm(0,1).rvs(100)])\n```\n\nThese data correspond to our model of contamination that we started\nthis\nsection with. As shown in the histogram in\n[Figure](#fig:Robust_Statistics_0002), there are two normal distributions, one\ncentered neatly at zero, representing the majority of the samples, and another\ncoming less regularly from the normal distribution on the right. Notice that\nthe\ngroup of infrequent samples on the right separates the mean and median\nestimates\n(vertical dotted and dashed lines). In the absence of the\ncontaminating\ndistribution on the right, the standard deviation for this data\nshould be close\nto one. However, the usual non-robust estimate for standard\ndeviation (`np.std`)\ncomes out to approximately three. Using the\n$\\texttt{MADN}$ estimator\n(`sm.robust.scale.mad(data)`) we obtain approximately\n1.25. Thus, the robust\nestimate of dispersion is less moved by the presence of\nthe contaminating\ndistribution.\n\n\n\n
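The numbers quoted above are easy to reproduce with the objects defined in the previous cell (`data`, `np` and `sm`); this is just an illustrative check, with the raw MAD also written out from its definition:\n\n```python\n# Classical vs robust scale estimates on the contaminated sample\nprint(np.std(data))               # roughly 3, inflated by the contamination\nprint(sm.robust.scale.mad(data))  # normalized MAD (MADN), much closer to 1\n\n# The raw MAD computed directly from its definition\nmad = np.median(np.abs(data - np.median(data)))\nprint(mad / 0.675)                # normalized as in the text\n```\n\n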
\n

Histogram of sample data. Notice that the group of infrequent samples on the\nright separates the mean and median estimates indicated by the vertical\nlines.

\n\n\n\n\n\nThe generalized maximum likelihood M-estimation extends to\njoint\nscale and location estimation using Huber functions. For example,\n\n\n```python\nhuber = sm.robust.scale.Huber()\nloc,scl=huber(data)\n```\n\nwhich implements Huber's *proposal two* method of joint estimation of\nlocation\nand scale. This kind of estimation is the key ingredient to robust\nregression\nmethods, many of which are implemented in Statsmodels in\n`statsmodels.formula.api.rlm`. The corresponding documentation has more\ninformation.\n", "meta": {"hexsha": "9f7c38e6ba987f7c0bc99a6106676af9d5c72fb1", "size": 199747, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter/statistics/Robust_Statistics.ipynb", "max_stars_repo_name": "derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E", "max_stars_repo_head_hexsha": "9d12a298d43ae285d9549a79bb5544cf0a9b7516", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 224, "max_stars_repo_stars_event_min_datetime": "2019-05-07T08:56:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T15:50:41.000Z", "max_issues_repo_path": "chapter/statistics/Robust_Statistics.ipynb", "max_issues_repo_name": "derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E", "max_issues_repo_head_hexsha": "9d12a298d43ae285d9549a79bb5544cf0a9b7516", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2019-08-27T12:57:17.000Z", "max_issues_repo_issues_event_max_datetime": "2021-09-21T15:45:13.000Z", "max_forks_repo_path": "chapter/statistics/Robust_Statistics.ipynb", "max_forks_repo_name": "derakding/Python-for-Probability-Statistics-and-Machine-Learning-2E", "max_forks_repo_head_hexsha": "9d12a298d43ae285d9549a79bb5544cf0a9b7516", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 73, "max_forks_repo_forks_event_min_datetime": "2019-05-25T07:15:47.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-07T00:22:37.000Z", "avg_line_length": 317.0587301587, "max_line_length": 176652, "alphanum_fraction": 0.9212403691, "converted": true, "num_tokens": 4677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3345894545235253, "lm_q2_score": 0.2942149659744614, "lm_q1q2_score": 0.09844122497805259}} {"text": "# Implementing Neural Networks with Numpy for Absolute Beginners - Part 1: Introduction\n\n##### In this tutorial, you will get a brief understanding of what Neural Networks are and how they have been developed. In the end, you will gain a brief intuition as to how the network learns.\n\nThe field of Artificial Intelligence has gained a lot of popularity and momentum during the past 10 years, largely due to a huge increase in the computational capacity of computers with the use of GPUs and the availability of gigantic amounts of data. Deep Learning has become the buzzword everywhere!!\n>>>>> \n\n\nAlthough Artificial Intelligence (AI) resonates with the notion of the machines to think and behave impersonating humans, it is rather restricted to very nascent and small task-specific functions while the term Artificial General Intelligence (AGI) obliges to the terms of impersonating a human. 
Above these is the concept of Artificial Super Intelligence (ASI) which gives me the shrills as it represents intelligence of machines far exceeding human levels!!\n\nThe main concept for Artificial Intelligence currently holds that you have to train it before it learns to perform the task much like humans, except that here\u2026 you have to train it even for the simplest of the tasks like seeing and identifying objects!(This is surely a complex problem for our computers).\n\nThere are 3 situations that you can encounter in this domain:\n1. When you have a lot of data...\n\n> - Either your data is tagged, labelled, maintained or it is not.\n If the data is available and is fully labelled or tagged, you can train the model based on the given set of input-output pairs and ask the model to predict the output for a new set of data. This type of learning is called **Supervised Learning** (Since, you are giving the input and also mentioning that this is the correct output for the data).\n

\nSupervised Learning can be further divided into the two tasks below (a short code sketch illustrating both tasks appears after this list):\n<br>
\n> a. Classification - where you predict that the data belongs to a specific class. E.g.: classifying a cat or a dog.\n<br>
\n> b. Regression - where a real number value is predicted. E.g.: predicting the price of a house given its dimensions.\n<br>
\n\n>>In the example below, you can see that images are trained against their labels. You test the model by inputting an image and predicting its class... like a cat.\n>>>>> \n\n> - When your data is unlabelled, the only option would be to let your model figure out the patterns in the data by itself. This is called **Unsupervised Learning**.

In the example shown below, you only provide the data points and the number of clusters (classes) that have to be formed, and let the algorithm find the best set of clusters.\n>>>>>>> \n\n> 2\\. When you don't have data but instead have the environment itself to learn from!\n\n>Here, a learning agent is put in a predefined environment and made to learn by the actions it takes. It is either rewarded or punished based on its actions. This is the most interesting kind of learning and is also where a lot of exploration and research is happening. It is called **Reinforcement Learning**.

As can clearly be seen in the image below, the agent, which is modelled as a person, learns to climb the wall through trial and error.\n>>>>>>> \n\n
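To make the two supervised tasks (regression and classification) from the list above concrete, here is a tiny sketch with made-up numbers; the data, the feature names and the mid-point decision rule are invented purely for illustration and are not from the tutorial.\n\n```python\nimport numpy as np\n\n# Regression: predict a numeric target (price) from a feature (size)\nsizes  = np.array([50.0, 80.0, 120.0, 160.0])    # square metres (made-up data)\nprices = np.array([100.0, 155.0, 240.0, 310.0])  # in thousands (made-up data)\nslope, intercept = np.polyfit(sizes, prices, 1)  # fit a straight line\nprint(slope * 100 + intercept)                   # predicted price of a 100 m^2 house\n\n# Classification: predict a discrete label (0 = cat, 1 = dog) from weight\nweights = np.array([3.0, 4.0, 9.0, 12.0])        # kg (made-up data)\nlabels  = np.array([0, 0, 1, 1])\nboundary = (weights[labels == 0].mean() + weights[labels == 1].mean()) / 2\nprint(int(8.0 > boundary))                       # 1, i.e. dog, for an 8 kg animal\n```\n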

This tutorial focuses on Neural Networks which is a part of Supervised Learning.\n\n## A little bit into the history of how Neural Networks evolved\n\nThe evolution of AI dates to back to 1950 when Alan Turing, the computer genius, came out with the Turing Test to distinguish between a Human and a Robot. He describes that when a machine performs so well, that we humans are not able to distinguish between the response given by a human and a machine, it has passed the Turing Test. Apparently this feat was achieved only in 2012, when a company named Vicarious cracked the captchas. Check out this video below on how Vicarious broke the captchas.\n\n\n```python\n#@title Vicarious Video\n%%HTML\n\n\n```\n\n\n\n\n\n\nIt must be noted that most of the Algorithms that were developed during that period(1950-2000) and now existing, are highly inspired by the working of our brain, the neurons and their structure with how they learn and transfer data. The most popular works include the Perceptron and the Neocognitron $-$(not covered in this article, but in a future article) based on which the Neural Networks have been developed. \n\nNow, before you dive into what a perceptron is, let's make sure you know a bit of all these... Although not necessarily required!\n\n## Prerequisites\n\nWhat you\u2019ll need to know for the course:\n1. A little bit of Python &\n2. The eagerness to learn Neural Networks.\n\nIf you are unsure of which environment to use for implementing this, I recommend [Google Colab](https://colab.research.google.com/). The environment comes with many important packages already installed. Installing new packages and also importing and exporting the data is quite simple. Most of all, it also comes with GPU support. So go ahead and get coding with the platform!\n\nLastly, this article is directed for those who want to learn about Neural Networks or just Linear Regression. However, there would be an inclination towards Neural Networks!\n\n## A biological Neuron\n\n>>>>> \n\nThe figure above shows a biological neuron. It has *dendrites* that recieve information from neurons. The recieved information is passed on to the *cell body or the nucleus* of the neuron. The *nucleus* is where the information is processed. The processed information is passed on to the next layer of neurons through the *axons*.\n\nOur brain consists of about 100 billion such neurons which communicate through electrochemical signals. Each neuron is connected to 100s and 1000s of other neurons which constantly transmit and recieve signals. When the sum of the signals recieved by a neuron exceeds a set threshold value, the cell is activated (although, it has been speculated that neurons use very complex activations to process the input data) and the signal is further transmitted to other neurons. You'll see that the artificial neuron or the perceptron adopts the same ideology to perform computation and transmit data in the next section.\n\nYou know that different regions of our brain are activated (/receptive) for different actions like seeing, hearing, creative thinking and so on. 
This is because the neurons belonging to a specific region in the brain are trained to process a certain kind of information better and hence get activated when only certain kinds of information is being sent.The figure below gives us a better understanding of the different receptive regions of the brain.\n\n>>>> \n\nIt has also been shown through the concept of Neuroplasticity that the different regions of the brain can be rewired to perform totally different tasks. Such as the neurons responsible for touch sensing can be rewired to become sensitive to smell. Check out this great TEDx video below to know more about neuroplasticity.\n\nSimilarly, an artificial neuron/perceptron can be trained to recognize some of the most comlplex pattern. Hence, they can be called Universal Function Approximators.\n\nIn the next section, we'll explore the working of a perceptron and also gain a mathematical intuition.\n\n\n```python\n#@title Neuroplasticity\n%%HTML\n\n''\n```\n\n\n\n''\n\n\n## Perceptron/Artificial Neuron\n\n>>>>>> \n\n\nFrom the figure, you can observe that the perceptron is a reflection of the biological neuron. The inputs combined with the weights($w_i$) are analogous to dendrties. These values are summed and passed through an activation function (like the thresholding function as shown in fig.). This is analogous to the nucleus. Finally, the activated value is transmitted to the next neuron/perceptron which is analogous to the axons.\n\nThe latent weights($w_i$) multiplied with each input($x_i$) depicts the significance of the respective input/feature. Larger the value of a weight, more important is the feature. Hence, the weights are what is learned in a perceptron so as to arrive at the required result. An additional bias($b$, here $w_0$) is also learned.\n\nHence, when there are multiple inputs (say n), the equation can be generalized as follows: \n\\begin{equation}\nz=w_0+w_1.x_1+w_2.x_2+w_3.x_3+......+w_n.x_n \\\\\n\\therefore z=\\sum_{i=0}^{n}w_i.x_i \\qquad \\text{where } x_0 = 1\n\\end{equation}\n\nFinally, the output of summation (assume as $z$) is fed to the *thresholding activation function*, where the function outputs $ -1 \\space \\text{if } z < 0 \\space \\& \\space 1 \\space \\text{if } z \\geq 0$.\n\n### An Example\n\nLet us consider our perceptron to perform as *logic gates* to gain more intuition.\n\nLet's choose an $AND \\space gate$. The Truth Table for the $AND \\space gate$ is shown below:\n\n>>>>>>>>> \n\nThe perceptron for the $AND \\space gate$ can be formed as shown in the figure. 
It is clear that the perceptron has two inputs (here $x1=A$ and $x2=B$)\n\n>>>>>>>>> \n\n\\begin{equation}\n\\text{Threshold Function,} \\qquad y = f(z) = \\begin{cases}\n1,& \\text{if }z \\geq 0.5\\\\\n0,& \\text{if } z< 0.5\\\\\n\\end{cases}\n\\end{equation}\n\nWe can see that for inputs $x1$, $x2$ & $x_0=1$, setting their weights as \n\\begin{equation}\nw_0=-0.5, \\\\\nw_1=0.6, \\space \\&\\\\\nw_2=0.6\n\\end{equation}\nrespectively and keeping the *Threshold function* as the activation function we can arrive at the $AND \\space Gate$.\n\nNow, let's get our hands dirty and codify this and test it out!\n\n\n```python\ndef and_perceptron(x1, x2):\n \n w0 = -0.5\n w1 = 0.6\n w2 = 0.6\n \n z = w0 + w1 * x1 + w2 * x2\n \n thresh = lambda x: 1 if x>= 0.5 else 0\n\n r = thresh(z)\n print(r)\n```\n\n\n```python\nand_perceptron(1, 1)\n```\n\n 1\n\n\nSimilarly for $NOR \\space Gate$ the Truth Table is,\n\n>>>>>>>>> \n\nThe perceptron for $NOR \\space Gate$ will be as below:\n\n>>>>>>>>> \n\n\nYou can set the weights as\n\\begin{equation}\nw_0 = 0.5 \\\\\nw_1 = -0.6 \\\\\nw_2 = -0.6\n\\end{equation}\nso that you obtain a $NOR \\space Gate$.\n\nYou can go ahead and implement this in code.\n\n\n```python\ndef nor_perceptron(x1, x2):\n \n w0 = 0.5\n w1 = -0.6\n w2 = -0.6\n \n z = w0 + w1 * x1 + w2 * x2\n \n thresh = lambda x: 1 if x>= 0.5 else 0\n\n r = thresh(z)\n print(r)\n```\n\n\n```python\nnor_perceptron(1, 1)\n```\n\n 0\n\n\nHere, is the Truth Table for $NAND \\space Gate$. Go ahead and guess the weights that fits the function and also implement in code.\n\n>>>>>>>>> \n\n## What you are actually calculating...\n\nIf you analyse what you were trying to do in the above examples, you will realize that you were actually trying to adjust the values of the weights to obtain the required output.\n\nLets consider the NOR Gate example and break it down to very miniscule steps to gain more understanding. \n\nWhat you would usually do first is to simply set some values to the weights and observe the result, say\n\n\\begin{equation}\nw_0 = 0.4 \\\\\nw_1 = 0.7 \\\\\nw_2 = -0.2\n\\end{equation}\n\nThen the output will be as shown in below table:\n>>>>> \n\nSo how can you fix the values of weights so that you get the right output?\n\nBy intuition, you can easily observe that $w_0$ must be increased and $w_1$ and $w_2$ must be reduced or rather made negative so that you obtain the actual output. But if you breakdown this intuition, you will observe that you are actually finding the difference between the actual output and the predicted output and finally reflecting that on the weights...\n\nThis is a very important concept that you will be digging deeper and will be the core to formulate the ideas behind *gradient descent* and also *backward propagation*.\n\n## Conclusion\n\nIn this tutorial you were introduced to the field of AI and went through an overview of perceptron. 
In the next tutorial, you'll learn to train a perceptron and do some predictions!!\n", "meta": {"hexsha": "75378f2458af2d0e22f3c13698ef612cac355308", "size": 21718, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "NN with Numpy 1/Neural_Networks_for_Absolute_beginners_Part_1_Introduction.ipynb", "max_stars_repo_name": "SurajDonthi/Article-Tutorials", "max_stars_repo_head_hexsha": "994a9bba02611cb79d708ae6abc32db7f03f03f1", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "NN with Numpy 1/Neural_Networks_for_Absolute_beginners_Part_1_Introduction.ipynb", "max_issues_repo_name": "SurajDonthi/Article-Tutorials", "max_issues_repo_head_hexsha": "994a9bba02611cb79d708ae6abc32db7f03f03f1", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2020-03-10T04:17:08.000Z", "max_issues_repo_issues_event_max_datetime": "2020-03-10T04:43:28.000Z", "max_forks_repo_path": "NN with Numpy 1/Neural_Networks_for_Absolute_beginners_Part_1_Introduction.ipynb", "max_forks_repo_name": "SurajDonthi/Article-Tutorials", "max_forks_repo_head_hexsha": "994a9bba02611cb79d708ae6abc32db7f03f03f1", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-01-26T16:59:21.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-14T13:27:17.000Z", "avg_line_length": 40.5943925234, "max_line_length": 623, "alphanum_fraction": 0.6345427756, "converted": true, "num_tokens": 2804, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4960938294709195, "lm_q2_score": 0.19682619657611938, "lm_q1q2_score": 0.09764426159964305}} {"text": "```python\n%matplotlib inline\n```\n\n\n```python\n# Write your imports here\nimport sympy\nimport math\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n### Problem 1. Markdown\nJupyter Notebook is a very light, beautiful and convenient way to organize your research and display your results. Let's play with it for a while.\n\nFirst, you can double-click each cell and edit its content. If you want to run a cell (that is, execute the code inside it), use Cell > Run Cells in the top menu or press Ctrl + Enter.\n\nSecond, each cell has a type. There are two main types: Markdown (which is for any kind of free text, explanations, formulas, results... you get the idea), and code (which is, well... for code :D).\n\nLet me give you a...\n#### Quick Introduction to Markdown\n##### Text and Paragraphs\nThere are several things that you can do. As you already saw, you can write paragraph text just by typing it. In order to create a new paragraph, just leave a blank line. See how this works below:\n```\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n```\n**Result:**\n\nThis is some text.\nThis text is on a new line, but it will continue the same paragraph (so you can make your paragraphs more easily readable by just continuing on a new line, or just go on and on like this one line is ever continuing).\n\nThis text is displayed in a new paragraph.\n\nAnd this is yet another paragraph.\n\n##### Headings\nThere are six levels of headings. 
Level one is the highest (largest and most important), and level 6 is the smallest. You can create headings of several types by prefixing the header line with one to six \"#\" symbols (this is called a pound sign if you are ancient, or a sharp sign if you're a musician... or a hashtag if you're too young :D). Have a look:\n```\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n```\n\n**Result:**\n\n# Heading 1\n## Heading 2\n### Heading 3\n#### Heading 4\n##### Heading 5\n###### Heading 6\n\nIt is recommended that you have **only one** H1 heading - this should be the header of your notebook (or scientific paper). Below that, you can add your name or just jump to the explanations directly.\n\n##### Emphasis\nYou can create emphasized (stronger) text by using a **bold** or _italic_ font. You can do this in several ways (using asterisks (\\*) or underscores (\\_)). In order to \"escape\" a symbol, prefix it with a backslash (\\). You can also strike through your text in order to signify a correction.\n```\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not \\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n```\n\n**Result:**\n\n**bold** __bold__\n*italic* _italic_\n\nThis is \\*\\*not\\*\\* bold.\n\nI ~~didn't make~~ a mistake.\n\n##### Lists\nYou can add two types of lists: ordered and unordered. Lists can also be nested inside one another. To do this, press Tab once (it will be converted to 4 spaces).\n\nTo create an ordered list, just type the numbers. Don't worry if your numbers are wrong - Jupyter Notebook will create them properly for you. Well, it's better to have them properly numbered anyway...\n```\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n```\n\n**Result:**\n1. This is\n2. A list\n10. With many\n9. Items\n 1. Some of which\n 2. Can\n 3. Be nested\n42. You can also\n * Mix \n * list\n * types\n \nTo create an unordered list, type an asterisk, plus or minus at the beginning:\n```\n* This is\n* An\n + Unordered\n - list\n```\n\n**Result:**\n* This is\n* An\n + Unordered\n - list\n \n##### Links\nThere are many ways to create links but we mostly use one of them: we present links with some explanatory text. See how it works:\n```\nThis is [a link](http://google.com) to Google.\n```\n\n**Result:**\n\nThis is [a link](http://google.com) to Google.\n\n##### Images\nThey are very similar to links. Just prefix the image with an exclamation mark. The alt(ernative) text will be displayed if the image is not available. Have a look (hover over the image to see the title text):\n```\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n```\n\n**Result:**\n\n Do you know that \"taco cat\" is a palindrome? Thanks to The Oatmeal :)\n\nIf you want to resize images or do some more advanced stuff, just use HTML. \n\nDid I mention these cells support HTML, CSS and JavaScript? Now I did.\n\n##### Tables\nThese are a pain because they need to be formatted (somewhat) properly. Here's a good [table generator](http://www.tablesgenerator.com/markdown_tables). Just select File > Paste table data... and provide a tab-separated list of values. 
It will generate a good-looking ASCII-art table for you.\n```\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n```\n\n**Result:**\n\n| Cell1 | Cell2 | Cell3 |\n|-------|-------|-------|\n| 1.1 | 1.2 | 1.3 |\n| 2.1 | 2.2 | 2.3 |\n| 3.1 | 3.2 | 3.3 |\n\n##### Code\nJust use triple backtick symbols. If you provide a language, it will be syntax-highlighted. You can also use inline code with single backticks.\n
\n```python\ndef square(x):\n    return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n
\n\n**Result:**\n```python\ndef square(x):\n return x ** 2\n```\nThis is `inline` code. No syntax highlighting here.\n\n**Now it's your turn to have some Markdown fun.** In the next cell, try out some of the commands. You can just throw in some things, or do something more structured (like a small notebook).\n\n# This is just a test title\n## With a subtitle\n### And an even smaller subtitle\n
\nFor more playing arround, please check out the attached notebook\n\n### Problem 2. Formulas and LaTeX\nWriting math formulas has always been hard. But scientists don't like difficulties and prefer standards. So, thanks to Donald Knuth (a very popular computer scientist, who also invented a lot of algorithms), we have a nice typesetting system, called LaTeX (pronounced _lah_-tek). We'll be using it mostly for math formulas, but it has a lot of other things to offer.\n\nThere are two main ways to write formulas. You could enclose them in single `$` signs like this: `$ ax + b $`, which will create an **inline formula**: $ ax + b $. You can also enclose them in double `$` signs `$$ ax + b $$` to produce $$ ax + b $$.\n\nMost commands start with a backslash and accept parameters either in square brackets `[]` or in curly braces `{}`. For example, to make a fraction, you typically would write `$$ \\frac{a}{b} $$`: $$ \\frac{a}{b} $$.\n\n[Here's a resource](http://www.stat.pitt.edu/stoffer/freetex/latex%20basics.pdf) where you can look up the basics of the math syntax. You can also search StackOverflow - there are all sorts of solutions there.\n\nYou're on your own now. Research and recreate all formulas shown in the next cell. Try to make your cell look exactly the same as mine. It's an image, so don't try to cheat by copy/pasting :D.\n\nNote that you **do not** need to understand the formulas, what's written there or what it means. We'll have fun with these later in the course.\n\n\n\n

Write your formulas here.

\n\nEquation of a line: $$ y = ax + b $$\n\nRoots of the quadratic equation $ax^{2}+bx+c=0$: $$ x_{1,2}=\\frac{-b\\pm\\sqrt[2]{b^{2}-4ac}}{2a} $$\n\nTaylor series expansion: $$f(x)\\mid_{x=a}=f(a)+f'(a)(x-a)+\\frac{f''(a)}{2!}(x-a)^{2}+...+\\frac{f^{n}(a)}{n!}(x-a)^{n}+...$$\n\nBinomial theorem: $$ (x+y)^{n}=\\biggl({n \\atop 0}\\biggr)x^{n}y^{0}+\\biggl({n \\atop 1}\\biggr)x^{n-1}y^{1}+...+\\biggl({n \\atop n}\\biggr)x^{0}y^{n}=\\sum^{n}_{k=0}\\biggl({n \\atop k}\\biggr)x^{n-k}y^{k} $$\n\nAn integral (this one is a lot of fun to solve :D): $$ \\int_{+\\infty}^{-\\infty} e^{-x^{2}} \\,dx=\\sqrt{\\pi}$$\n\nA short matrix: $$\\begin{pmatrix} 2 & 1 & 3 \\\\ 2 & 6 & 8 \\\\ 6 & 8 & 18\\end{pmatrix}$$\n\nA long matrix: $$\\begin{pmatrix} a_{11} & a_{12} & \\cdots & a_{1n}\\\\ a_{21} & a_{22} & \\cdots & a_{2n} \\\\ \\vdots & \\vdots & \\ddots & \\vdots \\\\a_{m1} & a_{m2} & \\cdots & a_{mn}\\end{pmatrix}$$\n\n### Problem 3. Solving with Python\nLet's first do some symbolic computation. We need to import `sympy` first. \n\n**Should your imports be in a single cell at the top or should they appear as they are used?** There's not a single valid best practice. Most people seem to prefer imports at the top of the file though. **Note: If you write new code in a cell, you have to re-execute it!**\n\nLet's use `sympy` to give us a quick symbolic solution to our equation. First import `sympy` (you can use the second cell in this notebook): \n```python \nimport sympy \n```\n\nNext, create symbols for all variables and parameters. You may prefer to do this in one pass or separately:\n```python \nx = sympy.symbols('x')\na, b, c = sympy.symbols('a b c')\n```\n\nNow solve:\n```python \nsympy.solve(a * x**2 + b * x + c)\n```\n\n\n```python\n# High-School Maths Exercise\n## Getting to Know Jupyter Notebook. Python Libraries and Best Practices. Basic Workflow\n```\n\n\n```python\n# Write your code here\nx = sympy.symbols('x')\na, b, c = sympy.symbols('a b c')\nsympy.solve(a * x**2 + b * x + c)\n```\n\n\n\n\n [{a: (-b*x - c)/x**2}]\n\n\n\n\n```python\n# Write your code here\nsympy.init_printing()\nsympy.solve(a * x**2 + b * x + c, x)\n```\n\nHmmmm... we didn't expect that :(. We got an expression for $a$ because the library tried to solve for the first symbol it saw. This is an equation and we have to solve for $x$. We can provide it as a second parameter:\n```python \nsympy.solve(a * x**2 + b * x + c, x)\n```\n\nFinally, if we start with `sympy.init_printing()`, we'll get a LaTeX-formatted result instead of a typed one. This is very useful because it produces better-looking formulas. **Note:** This means we have to add the line BEFORE we start working with `sympy`.\n\nHow about a function that takes $a, b, c$ (assume they are real numbers, you don't need to do additional checks on them) and returns the **real** roots of the quadratic equation?\n\nRemember that in order to calculate the roots, we first need to see whether the expression under the square root sign is non-negative.\n\nIf $b^2 - 4ac > 0$, the equation has two real roots: $x_1, x_2$\n\nIf $b^2 - 4ac = 0$, the equation has one real root: $x_1 = x_2$\n\nIf $b^2 - 4ac < 0$, the equation has zero real roots\n\nWrite a function which returns the roots. In the first case, return a list of 2 numbers: `[2, 3]`. In the second case, return a list of only one number: `[2]`. 
In the third case, return an empty list: `[]`.\n\n\n```python\ndef format_decimal(func):\n \"\"\"\n Decorator to convert the sympy output to Python float format\n \"\"\"\n def inner(*args, **kwargs):\n raw_res = func(*args, **kwargs)\n if isinstance(raw_res, list) and len(raw_res) > 0:\n retval = []\n for el in raw_res:\n retval.append(float(el))\n return retval\n return raw_res\n return inner\n\n@format_decimal\ndef solve_quadratic_equation(a, b, c):\n \"\"\"\n Returns the real solutions of the quadratic equation ax^2 + bx + c = 0\n \"\"\"\n # Delete the \"pass\" statement below and write your code\n def sqrt_part():\n return b**2 - 4 * a * c\n \n if a == 0:\n return sympy.solve(b * x + c, x)\n if sqrt_part() < 0:\n return []\n return sympy.solve(a * x**2 + b * x + c, x)\n```\n\n\n```python\n# Testing: Execute this cell. The outputs should match the expected outputs. Feel free to write more tests\nprint(solve_quadratic_equation(1, -1, -2)) # [-1.0, 2.0]\nprint(solve_quadratic_equation(1, -8, 16)) # [4.0]\nprint(solve_quadratic_equation(1, 1, 1)) # []\n```\n\n [-1.0, 2.0]\n [4.0]\n []\n\n\n**Bonus:** Last time we saw how to solve a linear equation. Remember that linear equations are just like quadratic equations with $a = 0$. In this case, however, division by 0 will throw an error. Extend your function above to support solving linear equations (in the same way we did it last time).\n\n\n```python\n# Bonus: Calling the function with a = 0 for a linear equation\nprint(solve_quadratic_equation(0, -1, -2)) # [-2.0]\nprint(solve_quadratic_equation(0, -8, 16)) # [2.0]\nprint(solve_quadratic_equation(0, 1, 1)) # [-1.0]\n```\n\n [-2.0]\n [2.0]\n [-1.0]\n\n\n### Problem 4. Equation of a Line\nLet's go back to our linear equations and systems. There are many ways to define what \"linear\" means, but they all boil down to the same thing.\n\nThe equation $ax + b = 0$ is called *linear* because the function $f(x) = ax+b$ is a linear function. We know that there are several ways to know what one particular function means. One of them is to just write the expression for it, as we did above. Another way is to **plot** it. This is one of the most exciting parts of maths and science - when we have to fiddle around with beautiful plots (although not so beautiful in this case).\n\nThe function produces a straight line and we can see it.\n\nHow do we plot functions in general? We know that functions take many (possibly infinitely many) inputs. We can't draw all of them. We could, however, evaluate the function at some points and connect them with tiny straight lines. If the points are too many, we won't notice - the plot will look smooth.\n\nNow, let's take a function, e.g. $y = 2x + 3$ and plot it. For this, we're going to use `numpy` arrays. This is a special type of array which has two characteristics:\n* All elements in it must be of the same type\n* All operations are **broadcast**: if `x = [1, 2, 3, 10]` and we write `2 * x`, we'll get `[2, 4, 6, 20]`. That is, all operations are performed at all indices. This is very powerful, easy to use and saves us A LOT of looping.\n\nThere's one more thing: it's blazingly fast because all computations are done in C, instead of Python.\n\nFirst let's import `numpy`. Since the name is a bit long, a common convention is to give it an **alias**:\n```python\nimport numpy as np\n```\n\nImport that at the top cell and don't forget to re-run it.\n\nNext, let's create a range of values, e.g. $[-3, 5]$. There are two ways to do this. 
`np.arange(start, stop, step)` will give us evenly spaced numbers with a given step, while `np.linspace(start, stop, num)` will give us `num` samples. You see, one uses a fixed step, the other uses a number of points to return. When plotting functions, we usually use the latter. Let's generate, say, 1000 points (we know a straight line only needs two but we're generalizing the concept of plotting here :)).\n```python\nx = np.linspace(-3, 5, 1000)\n```\nNow, let's generate our function variable\n```python\ny = 2 * x + 3\n```\n\nWe can print the values if we like but we're more interested in plotting them. To do this, first let's import a plotting library. `matplotlib` is the most commnly used one and we usually give it an alias as well.\n```python\nimport matplotlib.pyplot as plt\n```\n\nNow, let's plot the values. To do this, we just call the `plot()` function. Notice that the top-most part of this notebook contains a \"magic string\": `%matplotlib inline`. This hints Jupyter to display all plots inside the notebook. However, it's a good practice to call `show()` after our plot is ready.\n```python\nplt.plot(x, y)\nplt.show()\n```\n\n\n```python\n# Write your code here\nx = np.linspace(-3, 5, 1000)\ny = 2 * x + 3\nplt.plot(x, y)\nplt.show()\n```\n\nIt doesn't look too bad bit we can do much better. See how the axes don't look like they should? Let's move them to zero. This can be done using the \"spines\" of the plot (i.e. the borders).\n\nAll `matplotlib` figures can have many plots (subfigures) inside them. That's why when performing an operation, we have to specify a target figure. There is a default one and we can get it by using `plt.gca()`. We usually call it `ax` for \"axis\".\nLet's save it in a variable (in order to prevent multiple calculations and to make code prettier). Let's now move the bottom and left spines to the origin $(0, 0)$ and hide the top and right one.\n```python\nax = plt.gca()\nax.spines[\"bottom\"].set_position(\"zero\")\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\n```\n\n**Note:** All plot manipulations HAVE TO be done before calling `show()`. It's up to you whether they should be before or after the function you're plotting.\n\nThis should look better now. We can, of course, do much better (e.g. remove the double 0 at the origin and replace it with a single one), but this is left as an exercise for the reader :).\n\n\n```python\n# Copy and edit your code here\nplt.clf()\nax = plt.gca()\nax.spines[\"bottom\"].set_position('zero')\nax.spines[\"left\"].set_position(\"zero\")\nax.spines[\"top\"].set_visible(False)\nax.spines[\"right\"].set_visible(False)\nxticks = ax.xaxis.get_major_ticks()\nyticks = ax.yaxis.get_major_ticks()\n\nplt.plot(x, y)\n\n# Let's try to remove the double 0 at the origin.\nxloc, labels = plt.xticks()\nyloc, labels = plt.yticks()\nx_zero_loc = np.where(xloc==0)[0][0]\ny_zero_loc = np.where(yloc==0)[0][0]\n\nxticks[x_zero_loc].set_visible(False)\nyticks[y_zero_loc].set_visible(False)\n\nplt.show()\n```\n\n### * Problem 5. Linearizing Functions\nWhy is the line equation so useful? The main reason is because it's so easy to work with. Scientists actually try their best to linearize functions, that is, to make linear functions from non-linear ones. There are several ways of doing this. One of them involves derivatives and we'll talk about it later in the course. \n\nA commonly used method for linearizing functions is through algebraic transformations. 
Try to linearize \n$$ y = ae^{bx} $$\n\nHint: The inverse operation of $e^{x}$ is $\\ln(x)$. Start by taking $\\ln$ of both sides and see what you can do. Your goal is to transform the function into another, linear function. You can look up more hints on the Internet :).\n\n

Write your result here.

\nWe start by taking the ln of both sides: $ \\ln(y) = \\ln(a) + bx\\ln(e) $\n\nThe resulting linear function is: $ \\ln(y) = \\ln(a) + bx $\n\nWhere **ln(y)** is the dependent that we plot on the y-axis, **ln(a)** is the constant and **b** is the slope. This equation is commontly presented as $y = mx + b$\n\n### * Problem 6. Generalizing the Plotting Function\nLet's now use the power of Python to generalize the code we created to plot. In Python, you can pass functions as parameters to other functions. We'll utilize this to pass the math function that we're going to plot.\n\nNote: We can also pass *lambda expressions* (anonymous functions) like this: \n```python\nlambda x: x + 2```\nThis is a shorter way to write\n```python\ndef some_anonymous_function(x):\n return x + 2\n```\n\nWe'll also need a range of x values. We may also provide other optional parameters which will help set up our plot. These may include titles, legends, colors, fonts, etc. Let's stick to the basics now.\n\nWrite a Python function which takes another function, x range and number of points, and plots the function graph by evaluating it at every point.\n\n**BIG hint:** If you want to use not only `numpy` functions for `f` but any one function, a very useful (and easy) thing to do, is to vectorize the function `f` (e.g. to allow it to be used with `numpy` broadcasting):\n```python\nf_vectorized = np.vectorize(f)\ny = f_vectorized(x)\n```\n\n\n```python\ndef remove_zero_tick(loc, axticks):\n \"\"\"\n This function will be used in this book to remove the 0 at origin\n \"\"\"\n try:\n zero_loc = np.where(loc==0)[0][0]\n axticks[zero_loc].set_visible(False)\n except IndexError:\n ax.spines[\"bottom\"].set_position(('data', 1))\n\ndef plot_math_function(f, min_x, max_x, num_points):\n x_array = np.linspace(min_x, max_x, num_points)\n f_vectorized = np.vectorize(f)\n y = f_vectorized(x_array)\n\n \n plt.clf()\n plt.cla()\n ax = plt.gca()\n ax.spines[\"bottom\"].set_position('zero')\n ax.spines[\"left\"].set_position(\"zero\")\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n xticks = ax.xaxis.get_major_ticks()\n yticks = ax.yaxis.get_major_ticks()\n \n plt.plot(x_array, y)\n\n xloc, labels = plt.xticks()\n yloc, labels = plt.yticks()\n remove_zero_tick(xloc, xticks)\n remove_zero_tick(yloc, yticks)\n\n plt.show()\n \n```\n\n\n```python\nplot_math_function(lambda x: 2 * x + 3, -3, 5, 1000)\nplot_math_function(lambda x: -x + 8, -1, 10, 1000)\nplot_math_function(lambda x: x**2 - x - 2, -3, 4, 1000)\nplot_math_function(lambda x: np.sin(x), -np.pi, np.pi, 1000)\nplot_math_function(lambda x: np.sin(x) / x, -4 * np.pi, 4 * np.pi, 1000)\n```\n\n### * Problem 7. Solving Equations Graphically\nNow that we have a general plotting function, we can use it for more interesting things. Sometimes we don't need to know what the exact solution is, just to see where it lies. We can do this by plotting the two functions around the \"=\" sign ans seeing where they intersect. Take, for example, the equation $2x + 3 = 0$. The two functions are $f(x) = 2x + 3$ and $g(x) = 0$. Since they should be equal, the point of their intersection is the solution of the given equation. We don't need to bother marking the point of intersection right now, just showing the functions.\n\nTo do this, we'll need to improve our plotting function yet once. This time we'll need to take multiple functions and plot them all on the same graph. 
Note that we still need to provide the $[x_{min}; x_{max}]$ range and it's going to be the same for all functions.\n\n```python\nvectorized_fs = [np.vectorize(f) for f in functions]\nys = [vectorized_f(x) for vectorized_f in vectorized_fs]\n```\n\n\n```python\ndef plot_math_functions(functions, min_x, max_x, num_points):\n \n x_array = np.linspace(min_x, max_x, num_points)\n if hasattr(functions, \"__iter__\"):\n vectorized_fs = [np.vectorize(func) for func in functions]\n ys = [vectorized_f(x_array) for vectorized_f in vectorized_fs]\n else:\n ys = [np.vectorize(functions)(x_array)]\n \n ax = plt.gca()\n ax.spines[\"bottom\"].set_position('zero')\n ax.spines[\"left\"].set_position(\"zero\")\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n xticks = ax.xaxis.get_major_ticks()\n yticks = ax.yaxis.get_major_ticks()\n\n for y in ys:\n plt.plot(x_array, y)\n\n xloc, labels = plt.xticks()\n yloc, labels = plt.yticks()\n remove_zero_tick(xloc, xticks)\n remove_zero_tick(yloc, yticks)\n \n plt.show()\n```\n\n\n```python\n# plot_math_functions(4, -3, 5, 1000)\nplot_math_functions([lambda x: 3 * x**2 - 2 * x + 5, lambda x: 3 * x + 7], -2, 3, 1000)\n```\n\nThis is also a way to plot the solutions of systems of equation, like the one we solved last time. Let's actually try it.\n\n\n```python\nplot_math_functions([lambda x: (-4 * x + 7) / 3, lambda x: (-3 * x + 8) / 5, lambda x: (-x - 1) / -2], -1, 4, 1000)\n```\n\n### Problem 8. Trigonometric Functions\nWe already saw the graph of the function $y = \\sin(x)$. But then again, how do we define the trigonometric functions? Let's quickly review that.\n\n\n\nThe two basic trigonometric functions are defined as the ratio of two sides:\n$$ \\sin(x) = \\frac{\\text{opposite}}{\\text{hypotenuse}} $$\n$$ \\cos(x) = \\frac{\\text{adjacent}}{\\text{hypotenuse}} $$\n\nAnd also:\n$$ \\tan(x) = \\frac{\\text{opposite}}{\\text{adjacent}} = \\frac{\\sin(x)}{\\cos(x)} $$\n$$ \\cot(x) = \\frac{\\text{adjacent}}{\\text{opposite}} = \\frac{\\cos(x)}{\\sin(x)} $$\n\nThis is fine, but using this, \"right-triangle\" definition, we're able to calculate the trigonometric functions of angles up to $90^\\circ$. But we can do better. Let's now imagine a circle centered at the origin of the coordinate system, with radius $r = 1$. This is called a \"unit circle\".\n\n\n\nWe can now see exactly the same picture. The $x$-coordinate of the point in the circle corresponds to $\\cos(\\alpha)$ and the $y$-coordinate - to $\\sin(\\alpha)$. What did we get? We're now able to define the trigonometric functions for all degrees up to $360^\\circ$. After that, the same values repeat: these functions are **periodic**: \n$$ \\sin(k.360^\\circ + \\alpha) = \\sin(\\alpha), k = 0, 1, 2, \\dots $$\n$$ \\cos(k.360^\\circ + \\alpha) = \\cos(\\alpha), k = 0, 1, 2, \\dots $$\n\nWe can, of course, use this picture to derive other identities, such as:\n$$ \\sin(90^\\circ + \\alpha) = \\cos(\\alpha) $$\n\nA very important property of the sine and cosine is that they accept values in the range $(-\\infty; \\infty)$ and produce values in the range $[-1; 1]$. The two other functions take values in the range $(-\\infty; \\infty)$ **except when their denominators are zero** and produce values in the same range. \n\n#### Radians\nA degree is a geometric object, $1/360$th of a full circle. This is quite inconvenient when we work with angles. There is another, natural and intrinsic measure of angles. 
It's called the **radian** and can be written as $\\text{rad}$ or without any designation, so $\\sin(2)$ means \"sine of two radians\".\n\n\nIt's defined as *the central angle of an arc with length equal to the circle's radius* and $1\\text{rad} \\approx 57.296^\\circ$.\n\nWe know that the circle circumference is $C = 2\\pi r$, therefore we can fit exactly $2\\pi$ arcs with length $r$ in $C$. The angle corresponding to this is $360^\\circ$ or $2\\pi\\ \\text{rad}$. Also, $\\pi rad = 180^\\circ$.\n\n(Some people prefer using $\\tau = 2\\pi$ to avoid confusion with always multiplying by 2 or 0.5 but we'll use the standard notation here.)\n\n**NOTE:** All trigonometric functions in `math` and `numpy` accept radians as arguments. In order to convert between radians and degrees, you can use the relations $\\text{[deg]} = 180/\\pi.\\text{[rad]}, \\text{[rad]} = \\pi/180.\\text{[deg]}$. This can be done using `np.deg2rad()` and `np.rad2deg()` respectively.\n\n#### Inverse trigonometric functions\nAll trigonometric functions have their inverses. If you plug in, say $\\pi/4$ in the $\\sin(x)$ function, you get $\\sqrt{2}/2$. The inverse functions (also called, arc-functions) take arguments in the interval $[-1; 1]$ and return the angle that they correspond to. Take arcsine for example:\n$$ \\arcsin(y) = x: sin(y) = x $$\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} $$\n\nPlease note that this is NOT entirely correct. From the relations we found:\n$$\\sin(x) = sin(2k\\pi + x), k = 0, 1, 2, \\dots $$\n\nit follows that $\\arcsin(x)$ has infinitely many values, separated by $2k\\pi$ radians each:\n$$ \\arcsin\\left(\\frac{\\sqrt{2}}{2}\\right) = \\frac{\\pi}{4} + 2k\\pi, k = 0, 1, 2, \\dots $$\n\nIn most cases, however, we're interested in the first value (when $k = 0$). It's called the **principal value**.\n\nNote 1: There are inverse functions for all four basic trigonometric functions: $\\arcsin$, $\\arccos$, $\\arctan$, $\\text{arccot}$. These are sometimes written as $\\sin^{-1}(x)$, $\\cos^{-1}(x)$, etc. These definitions are completely equivalent. \n\nJust notice the difference between $\\sin^{-1}(x) := \\arcsin(x)$ and $\\sin(x^{-1}) = \\sin(1/x)$.\n\n#### Exercise\nUse the plotting function you wrote above to plot the inverse trigonometric functions. Use `numpy` (look up how to use inverse trigonometric functions).\n\n\n```python\n# Write your code here\nplot_math_functions([lambda x: np.arcsin(x), lambda x: np.arccos(x)], -1, 1, 1000)\nplot_math_functions(lambda x: np.arctan(x), -1, 1, 1000)\n\nplot_math_functions(lambda x: np.arctan(1 / x), -1, 1, 1000)\nplot_math_functions(lambda x: (3.14 / 2) - np.arctan(x), -1, 1, 1000)\n```\n\n### ** Problem 9. Perlin Noise\nThis algorithm has many applications in computer graphics and can serve to demonstrate several things... and help us learn about math, algorithms and Python :).\n#### Noise\nNoise is just random values. We can generate noise by just calling a random generator. Note that these are actually called *pseudorandom generators*. We'll talk about this later in this course.\nWe can generate noise in however many dimensions we want. For example, if we want to generate a single dimension, we just pick N random values and call it a day. If we want to generate a 2D noise space, we can take an approach which is similar to what we already did with `np.meshgrid()`.\n\n$$ \\text{noise}(x, y) = N, N \\in [n_{min}, n_{max}] $$\n\nThis function takes two coordinates and returns a single number N between $n_{min}$ and $n_{max}$. 
(This is what we call a \"scalar field\").\n\nRandom variables are always connected to **distributions**. We'll talk about these a great deal but now let's just say that these define what our noise will look like. In the most basic case, we can have \"uniform noise\" - that is, each point in our little noise space $[n_{min}, n_{max}]$ will have an equal chance (probability) of being selected.\n\n#### Perlin noise\nThere are many more distributions but right now we'll want to have a look at a particular one. **Perlin noise** is a kind of noise which looks smooth. It looks cool, especially if it's colored. The output may be tweaked to look like clouds, fire, etc. 3D Perlin noise is most widely used to generate random terrain.\n\n#### Algorithm\n... Now you're on your own :). Research how the algorithm is implemented (note that this will require that you understand some other basic concepts like vectors and gradients).\n\n#### Your task\n1. Research about the problem. See what articles, papers, Python notebooks, demos, etc. other people have created\n2. Create a new notebook and document your findings. Include any assumptions, models, formulas, etc. that you're using\n3. Implement the algorithm. Try not to copy others' work, rather try to do it on your own using the model you've created\n4. Test and improve the algorithm\n5. (Optional) Create a cool demo :), e.g. using Perlin noise to simulate clouds. You can even do an animation (hint: you'll need gradients not only in space but also in time)\n6. Communicate the results (e.g. in the Softuni forum)\n\nHint: [This](http://flafla2.github.io/2014/08/09/perlinnoise.html) is a very good resource. It can show you both how to organize your notebook (which is important) and how to implement the algorithm.\n", "meta": {"hexsha": "d9f585fbf3da4e67e9e5ca586b0fe671e2f5b9d8", "size": 231148, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Math/High School Math/High-School Maths Exercise.ipynb", "max_stars_repo_name": "tankishev/Python_Fundamentals", "max_stars_repo_head_hexsha": "dce38de592ff06ec68153a4fcd4d609af2c1cf83", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-07T21:12:35.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-07T21:12:35.000Z", "max_issues_repo_path": "Math/High School Math/High-School Maths Exercise.ipynb", "max_issues_repo_name": "tankishev/Python_Fundamentals", "max_issues_repo_head_hexsha": "dce38de592ff06ec68153a4fcd4d609af2c1cf83", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Math/High School Math/High-School Maths Exercise.ipynb", "max_forks_repo_name": "tankishev/Python_Fundamentals", "max_forks_repo_head_hexsha": "dce38de592ff06ec68153a4fcd4d609af2c1cf83", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 215.4221808015, "max_line_length": 17132, "alphanum_fraction": 0.8951407756, "converted": true, "num_tokens": 8396, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3073580295544412, "lm_q2_score": 0.31742626558767584, "lm_q1q2_score": 0.09756351151985276}} {"text": "```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n__PULL__ the changes you made at home to your local copy on the M drive.\n\nIf you need a reminder of how to do this:\n\nOpen Jupyter notebook:\n
Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook\n
(Start >> \u3059\u3079\u3066\u306e\u30d7\u30ed\u30b0\u30e9\u30e0 >> Programming >> Anaconda3 >> JupyterNotebook)\n\nNavigate to where your interactive textbook is stored.\n\nOpen __S1_Introduction_to_Version_Control__. \n\nWe will start by learning to __PULL__ the solutions to the Review Excercises an online repository. \n\nThis will allow you to check your answers. \n\nOpen Jupyter notebook:\n
Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook\n
(Start >> \u3059\u3079\u3066\u306e\u30d7\u30ed\u30b0\u30e9\u30e0 >> Programming >> Anaconda3 >> JupyterNotebook)\n\nNavigate to where your interactive textbook is stored.\n\nOpen __S1_Introduction_to_Version_Control__. \n\nOpen Jupyter notebook:\n
Start >> Programs >> Programming >> Anaconda3 >> JupyterNotebook\n
(Start >> \u3059\u3079\u3066\u306e\u30d7\u30ed\u30b0\u30e9\u30e0 >> Programming >> Anaconda3 >> JupyterNotebook)\n\nWe will start by learning how to add the __solutions to the Review Exercises__ to your interative textbook. \n\nNavigate to where your interactive textbook is stored.\n\nOpen __S1_Introduction_to_Version_Control__. \n\n\n\n```python\nIn Jupyter notebook, select the tab with the contents list of the interactive textbook:\n \nOpen __Seminar 3__ by clicking on __3_Data_Structures__.\n```\n\n# Data Structures\n\n# Lesson Goal\n\n - Compose simple programs to control the flow with which the operators we have studied so far are executed on:\n - single value variables.\n - data structures (holding mutiple variables)\n\n\n\n\n# Objectives\n\n\n - Express collections of mulitple variables as `list`, `tuple` and dictionary (`dict`).\n \n- Use iteratation to visit entries in a data structure \n\n\n- Learn to select the right data structure for an application\n\nWhy we are studying this:\n\nTo use Python to solve more complex engineering problems you are likely to encounter involving:\n - multi-variable values (e.g. vectors)\n - large data sets (e.g. experiment results)\n - manipulating your data using logic \n
\n (e.g. sorting and categorising answers to an operation performed on multiple data points)\n\n\n\n Lesson structure:\n - Learn new skills together:\n - __Demonstration__ on slides.\n - __Completing examples__ in textbooks.\n - __Feedback answers__ (verbally / whiteboards)\n - Practise alone: __Completing review excercises__.\n - Skills Review: Updating your local repository using an __upstream repository.__\n - __Summary__and __quiz__.\n\nIn the last seminar we learnt to generate a rnage of numbers for use in control flow of a program, using the function `range()`:\n\n\n```python\nfor j in range(20):\n \n if j % 4 == 0: # Check remainer of j/4\n continue # continue to next value of j\n \n print(j, \"is not a multiple of 4\")\n```\n\n## Data Structures\n\nOften we want to manipulate data that is more meaningful than ranges of numbers.\n\nThese collections of variables might include:\n - the results of an experiment\n - a list of names\n - the components of a vector\n - a telephone directory with names and associated numbers.\n \n\n\n \n \n\nPython has different __data structures__ that can be used to store and manipulate these values.\n\nLike variable types (`string`, `int`,`float`...) different data structures behave in different ways.\n\nToday we will learn to use `list`, `tuple` and dictionary (`dict`) data structures.\n\n\nWe will study the differences in how they behave so that you can learn to select the most suitable data structure for an application. \n\nPrograms use data structure to collect data into useful packages. \n\n>$ r = [u, v, w] $\n\nFor example, rather than representing a vector `r` of length 3 using three seperate floats `ru`, `rv` and `rw`, we could represent \nit as a __list__ of floats:\n\n```Python\nr = [u, v, w]\n```\n\n\n\n(We will learn what a __list__ is in a moment.)\n\nIf we want to store the names of students in a laboratory group, rather than representing each students using an individual string variable, we could use a list of names, e.g.:\n\n\n\n\n```python\nlab_group0 = [\"Sarah\", \"John\", \"Joe\", \"Emily\"]\nlab_group1 = [\"Roger\", \"Rachel\", \"Amer\", \"Caroline\", \"Colin\"]\n```\n\nThis is useful because we can perform operations on lists such as:\n - checking its length (number of students in a lab group)\n - sorting the names in the list into alphabetical order\n - making a list of lists (we call this a *nested list*):\n\n\n\n```python\nlab_groups = [lab_group0, lab_group1]\n```\n\n## Lists\n\nA list is a sequence of data. \n\nWe call each item in the sequence an *element*. \n\nA list is constructed using square brackets:\n\n\n\n\n```python\na = [1, 2, 3]\n```\n\nA `range` can be converted to a list with the `list` function.\n\n\n```python\nprint(list(range(10)))\n```\n\nWhen `range` has just one *argument* (the entry in the parentheses), it will generate a range from 0 up to but not including the specified number. 
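Note that `list()` is only needed when we want to store or display all of the values at once; a `range` can also be used directly in a `for` loop. For example:

```python
# A range can be iterated over directly, without converting it to a list
for i in range(4):
    print(i)
```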
\n\n\n\n```python\nprint(list(range(10,20)))\n```\n\nWhen a range has two arguments:\n - the first value is the starting value.\n - the second value is the stoping value.\n - the stopping value is not included in the range\n\nYou can optionally include a step:\n\n\n```python\nprint(list(range(10, 20, 2)))\n```\n\nA list can hold a mixture of types (`int`, `string`....).\n\n\n```python\na = [1, 2.0, \"three\"]\n```\n\nAn empty list is created by\n\n\n```python\nmy_list = []\n```\n\nA list of length 5 with repeated values can be created by\n\n\n```python\nmy_list = [\"Hello\"]*5\nprint(my_list)\n```\n\nWe can check if an item is in a list using the function `in`:\n\n\n\n```python\nprint(\"Hello\" in my_list)\nprint(\"Goodbye\" in my_list)\n```\n\n\n### Iterating Over Lists\n\nLooping over each item in a list is called *iterating*. \n\nTo iterate over a list of the lab group we use a `for` loop.\n\nEach iteration, variable `d` takes the value of the next item in the list:\n\n\n```python\nfor d in [1, 2.0, \"three\"]: \n print('the value of d is:', d)\n```\n\n__Try it yourself__\n\n\nIn the cell provided in your textbook *iterate* over the list `[1, 2.0, \"three\"]`.\n\nEach time the code loops:\n1. print the value of data __cast as a string__ (Seminar 1 Data Types and Operators)\n1. print the variable type to demonstrate that the variable has been cast (note that otherwise the variable appeares to remain unchanged).\n\n\n```python\n# Iterate over a list and cast each item as a string\n```\n\n### Manipulating Lists \n\nThere are many functions for manipulating lists.\n\n\n\n### Finding the Length of a List\n\nWe can find the length (number of items) of a list using the function `len()`, by including the name of the list in the brackets. \n\n\n\n\n\n\nIn the example below, we find the length of the list `lab_group0`. \n\n\n```python\nlab_group0 = [\"Sara\", \"Mari\", \"Quang\"]\n\nsize = len(lab_group0)\n\nprint(\"Lab group members:\", lab_group0)\n\nprint(\"Size of lab group:\", size)\n\nprint(\"Check the Python object type:\", type(lab_group0))\n```\n\n\n### Sorting Lists\n\nTo sort the list we use `sorted()`.\n\n#### Sorting Numerically\n\nIf the list contains numerical variables, the numbers is sorted in ascending order.\n\n\n```python\nnumbers = [7, 1, 3.0]\n\nprint(numbers)\n\nnumbers = sorted(numbers)\n\nprint(numbers)\n```\n\n__Note:__ We can sort a list with mixed numeric types (e.g. `float` and `int`). \n\nHowever, we cannot sort a list with types that cannot be sorted by the same ordering rule \n\n(e.g. `numbers = sorted([seven, 1, 3.0])` causes an error.)\n\n\n```python\n# numbers = sorted([seven, 1, 3.0])\n```\n\n#### Sorting Alphabetically\n\nIf the list contains strings, the list is sorted by alphabetical order. \n\n\n```python\nlab_group0 = [\"Sara\", \"Mari\", \"Quang\"]\n\nprint(lab_group0)\n\nlab_group0 = sorted(lab_group0)\n\nprint(lab_group0)\n```\n\nAs with `len()` we include the name of the list we want to sort in the brackets. \n\nThere is a shortcut for sorting a list\n\n`sort` is known as a 'method' of a `list`. 
\n\nIf we suffix a list with `.sort()`, it performs an *in-place* sort.\n\n\n```python\nlab_group0 = [\"Sara\", \"Mari\", \"Quang\"]\n\nprint(lab_group0)\n\n#lab_group0 = sorted(lab_group0)\nlab_group0.sort()\n\nprint(lab_group0)\n```\n\n__Try it yourself__\n\nIn the cell provided in your textbook create a list of __numeric__ or __string__ values.\n\nSort the list using `sorted()` __or__ `.sort()`.\n\nPrint the sorted list.\n\nPrint the length of the list using `len()`.\n\n\n```python\n# Sorting a list\n```\n\n### Removing an Item from a List\n\nWe can remove items from a list using the method `pop`.\n\nWe place the index of the element we wich to remove in brackets. \n\n\n```python\n# Remove the second student from the list: lab_group0\n# remember indexing starts from 0\n# 1 is the second element\n\nprint(lab_group0)\n\nlab_group0.pop(1)\n\nprint(lab_group0)\n```\n\nWe can add items at the end of a list using the method `append`.\n\nWe place the element we want to add to the end of the list in brackets. \n\n\n```python\n# Add new student \"Lia\" at the end of the list\nlab_group0.append(\"Lia\")\nprint(lab_group0)\n```\n\n__Try it yourself__\n\nIn the cell provided in your textbook.\n\nRemove Sara from the list.\n\nPrint the new list.\n\nAdd a new lab group member, Tom, to the list.\n\nPrint the new list.\n\n\n```python\n# Adding and removing items from a list.\n```\n\n### Indexing\n\nLists store data in order.\n\nWe can select a single element of a list using its __index__.\n\nYou are familiar with this process; it is the same as selecting individual characters of a `string`:\n\n\n```python\na = \"string\"\nb = a[1]\nprint(b)\n```\n\n\n```python\nfirst_member = lab_group0[0]\nprint(first_member)\n```\n\nIndices can be useful when looping through the items in a list.`\n\n\n```python\n# We can express the following for loop:\n# ITERATING\nfor i in lab_group0:\n print(i)\n \n# as:\n# INDEXING\nfor i in range(len(lab_group0)):\n print(lab_group0[i])\n```\n\nAn example of where __indexing__ is more appropraite than __iterating__: \n\nSometimes we want to perform an operation on all items of a list.\n\nConsider the example we looked at earlier, where we looped through a list, expressing each element as a string. \n\nYou may have written something like this...\n\n\n```python\nfor d in [1, 2.0, \"three\"]:\n \n d = str(d)\n \n print(d, type(d))\n\n```\n\n\n```python\nWe can re-write this: \n```\n\n\n```python\ndata = [1, 2.0, \"three\"]\n\nfor d in data:\n \n d = str(d)\n \n print(d, type(d))\n```\n\n\n```python\n__Iterating:__ The type of each element in the list `data` remains unchanged.\n```\n\n\n```python\nprint(type(data[0]))\nprint(type(data[1]))\nprint(type(data[2]))\n```\n\n\n```python\n__Indexing__: We can modify each element of the list (e.g. to change its type) \n```\n\n\n```python\nfor d in range(len(data)):\n \n data[d] = str(data[d])\n \n print(data[d], type(data[d]))\n \nprint(type(data[0]))\nprint(type(data[1]))\nprint(type(data[2])) \n```\n\n__Note:__
\n- Some data structures support *iterating* but do not support *indexing* (e.g. dictionaries, which we will learn about later).
When possible, it is better to iterate over a list rather than use indexing.\n- When indexing:\n - the first value in the range is 0.\n - the last value in the range is (list length - 1). \n\nLists and indexing can be useful for numerical computations. \n\n### Indexing Example: Vectors\n\n__Vector:__ A quantity with magnitude and direction.\n\n\n\n\nPosition vectors (or displacement vectors) in 3D space can always be expressed in terms of x,y, and z-directions. \n\n\n\nThe position vector \ud835\udc93 indicates the position of a point in 3D space.\n\n$$\n\\mathbf{r} = x\\mathbf{i} + y\\mathbf{j} + z\\mathbf{k}\n$$\n\n\n\n$$\n\\mathbf{r} = x\\mathbf{i} + y\\mathbf{j} + z\\mathbf{k}\n$$\n\n\ud835\udc8a is the displacement one unit in the x-direction
\n\ud835\udc8b is the displacement one unit in the y-direction
\n\ud835\udc8c is the displacement one unit in the z-direction\n\nWe can conveniently express $\\mathbf{r}$ as a matrix: \n$$\n\\mathbf{r} = [x, y, z]\n$$\n\n__...which looks a lot like a Python list!__\n\n\nYou will encounter 3D vectors a lot in your engineering studies as they are used to describe many physical quantities, e.g. force.\n\n\n\n### Indexing Example: The dot product of two vectors:\n\nThe __dot product__ is a really useful algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number.\n\nIt can be expressed mathematically as...\n\n__GEOMETRIC REPRESENTATION__\n\n\\begin{align}\n\\mathbf{A} \\cdot \\mathbf{B} = |\\mathbf{A}| |\\mathbf{B}| cos(\\theta)\n\\end{align}\n\n\n     $\\mathbf{B} cos(\\theta)$ is the component of $B$ acting in the direction of $A$.\n\n     $\\mathbf{B} cos(\\theta)$ is the component of $B$ acting in the direction of $A$.\n\nFor example, the component of a force, $\\mathbf{F_{app}}$, acting in the direction of the velocity of an object (x direction):\n\n\n\n$$\n\\mathbf{F_{app,x}} = \\mathbf{F_{app}}cos(\\theta)\n$$\n\n__ALGEBRAIC REPRESENTATION__\n\n>The dot product of two $n$-length-vectors:\n>
$ \\mathbf{A} = [A_1, A_2, ... A_n]$\n>
$ \\mathbf{B} = [B_1, B_2, ... B_n]$\n>
is: \n\n\\begin{align}\n\\mathbf{A} \\cdot \\mathbf{B} = \\sum_{i=1}^n A_i B_i.\n\\end{align}\n\n\n\n>So the dot product of two 3D vectors:\n>
$ \\mathbf{A} = [A_x, A_y, A_z]$\n>
$ \\mathbf{B} = [B_x, B_y, B_z]$\n>
is:\n\n\\begin{align}\n\\mathbf{A} \\cdot \\mathbf{B} &= \\sum_{i=1}^n A_i B_i \\\\\n&= A_x B_x + A_y B_y + A_z B_z.\n\\end{align}\n\n__Example:__ \n
The dot product $\\mathbf{A} \\cdot \\mathbf{B}$:\n>
$ \\mathbf{A} = [1, 3, \u22125]$\n>
$ \\mathbf{B} = [4, \u22122, \u22121]$\n\n\n\n\\begin{align}\n {\\displaystyle {\\begin{aligned}\\ [1,3,-5]\\cdot [4,-2,-1]&=(1)(4)+(3)(-2)+(-5)(-1)\\\\&=4-6+5\\\\&=3\\end{aligned}}} \n\\end{align}\n\n\n\nWe can solve this very easily using a Python `for` loop.\n\n\n\n\n```python\nA = [1.0, 3.0, -5.0]\nB = [4.0, -2.0, -1.0]\n\n# Create a variable called dot_product with value, 0.\ndot_product = 0.0\n\nfor i in range(len(A)):\n dot_product += A[i]*B[i]\n\nprint(dot_product)\n```\n\nFrom is __GEOMETRIC__ representation, we can see that the dot product allows us to quickly solve many engineering-related problems...\n\n\\begin{align}\n\\mathbf{A} \\cdot \\mathbf{B} = |\\mathbf{A}| |\\mathbf{B}| cos(\\theta)\n\\end{align}\n\nExamples:\n - Test if two vectors are:\n - perpendicular ($\\mathbf{A} \\cdot \\mathbf{B}==0$)\n - acute ($\\mathbf{A} \\cdot \\mathbf{B}>0$)\n - obtuse ($\\mathbf{A} \\cdot \\mathbf{B}<0$)\n - Find the angle between two vectors (from its cosine).\n - Find the magnitude of one vector in the direction of another.\n - Find physical quantities e.g. the work, W, when pushing an object a certain distance, d, with force, F:\n \n \n\n\n__Try it yourself:__ \n\n$\\mathbf{C} = [2, 4, 3.5]$\n\n$\\mathbf{D} = [1, 2, -6]$\n\nIn the cell below find the dot product:\n$\\mathbf{C} \\cdot \\mathbf{D}$\n\nIs the angle between the vectors obtuse or acute or are the vectors perpendicular?
\n(Perpendicular if $\\mathbf{A} \\cdot \\mathbf{B}==0$, acute if $\\mathbf{A} \\cdot \\mathbf{B}>0$, or obtuse if $\\mathbf{A} \\cdot \\mathbf{B}<0$).\n \n\n\n```python\n# The dot product of C and D\n```\n\n\n### Nested Data Structures: Lists of Lists\n\nA *nested list* is a list within a list. (Recall *nested loops* from Seminar 1: Control Flow). \n\nTo access a __single element__ we need as many indices as there are levels of nested list. \n\nThis is more easily explained with an example:\n\n\n```python\nlab_group0 = [\"Sara\", \"Mika\", \"Ryo\", \"Am\"]\nlab_group1 = [\"Hemma\", \"Miri\", \"Qui\", \"Sajid\"]\nlab_group2 = [\"Adam\", \"Yukari\", \"Farad\", \"Fumitoshi\"]\n\nlab_groups = [lab_group0, lab_group1]\n```\n\n`lab_group0`, `lab_group1` and `lab_group2` are nested within `lab_groups`.\n\n\n\nThere are __two__ levels of nested lists.\n\nWe need __two__ indices to select a single elememt from `lab_group0`, `lab_group1` or `lab_group2`. \n \nThe first index: a list (`lab_group0`, `lab_group1` or `lab_group2`). \n \nThe second index: an element in that list. \n\n\n```python\ngroup = lab_groups[0]\nprint(group)\n\nname = lab_groups[1][2]\nprint(name)\n```\n\n## Tuples\n\nTuples are similar to lists. \n\nHowever, after creatig a tuple:\n - you cannot add or remove elements from it without creating a new tuple. \n - you cannot change the value of a single tuple element e.g. by indexing. \n\n\n\n\nTuples are therefore used for values that should not change after being created.\n
e.g. a vector of length three with fixed entries\n
It is 'safer' in this case since it cannot be modified accidentally in a program. \n\nTo create a tuple, use () parentheses.\n\n\n__Example__\nIn Kyoto University, each professor is assigned an office.\n\nPhilamore-sensei is given room 32:\n\n\n```python\nroom = (\"Philamore\", 32)\n\nprint(\"Room allocation:\", room)\n\nprint(\"Length of entry:\", len(room))\n\nprint(type(room))\n```\n\n\n### Iterating over Tuples \n\nWe can *iterate* over tuples in the same way as with lists,\n\n\n```python\n# Iterate over tuple values\nfor d in room:\n print(d)\n```\n\n### Indexing\n\nWe can index into a tuple:\n\n\n```python\n# Index into tuple values\nprint(room[1])\nprint(room[0])\n```\n\n__Note__ Take care when creating a tuple of length 1:\n\n\n```python\n# Creating a list of length 1 \na = [1]\nprint(a)\nprint(type(a))\nprint(len(a))\n```\n\nHowever, if we use the same process for a tuple:\n\n\n```python\na = (1)\nprint(a)\nprint(type(a))\n#print(len(a))\n```\n\nTo create a tuple of length 1, we use a comma:\n\n\n```python\na = (1,)\nprint(a)\nprint(type(a))\nprint(len(a))\n```\n\n\n```python\nroom = (\"Endo\",)\nprint(\"Room allocation:\", room))\nprint(\"Length of entry:\", len(room))\nprint(type(room))\n```\n\n### Nested Data Structures: Lists of Tuples\nAs part of a rooms database, we can create a list of tuples:\n\n\n```python\nroom_allocation = [(\"Endo\",), \n (\"Philamore\", 32), \n (\"Matsuno\", 31), \n (\"Sawaragi\", 28), \n (\"Okino\", 28), \n (\"Kumegawa\", 19)]\n\nprint(room_allocation)\n```\n\nIndex into the list room allocation \n\nRefer to Link to the destination' for how to index into *nested* data structures.\n\nIn the cell below use indexing to print:\n - Matsuno-sensei's room number\n - Kumegawa-sensei's room number\n - The variable type of Kumegawa-sensei's room number\n\n\n```python\n# Matsuno-sensei's room number\n\n# Kumegawa-sensei's room number\n\n# The Python variable type of Kumegawa-sensei's room number\n\n```\n\n### Sorting Tuples\nTo make it easier to look up the office number each professor, we can __sort__ the list of tuples into an office directory.\n\nThe ordering rule is determined by the __first element__ of each tuple.\n\nIf the first element of each tuple is a numeric type (`int`, `float`...) the tulpes are sorted by ascending numerical order of the first element:\n\nIf the first element of each tuple is a `string` (as in this case), the tuples are sorted by alphabetical order of the first element.\n\nA tuple is sorted using the same method to sort a list. \n\nRefer to Sorting Lists remind yourself of this method.\n\nIn the cell provided below, sort the list, `room_allocation` by alphabetical order. \n\n\n```python\n# room_allocation sorted by alphabetical order\n```\n\nThe office directory can be improved by excluding professors who do not have an office at Yoshida campus:\n\n\n```python\nfor entry in room_allocation:\n \n # only professors with an office have an entry length > 1\n if len(entry) > 1:\n print(\"Name:\", entry[0], \", Room:\", entry[1])\n```\n\nIn summary, use tuples over lists when the length will not change.\n\n\n## Dictionaries \n\nWe used a list of tuples in the previous section to store room allocations. \n\nWhat if we wanted to use a program to find which room a particular professor has been allocated?\n\nwe would need to either:\n- iterate through the list and check each name. \n\n> For a very large list, this might not be very efficient.\n\n- use the index to select a specific entry of a list or tuple. 
\n\n> This works if we know the index to the entry of interest. For a very large list, this is unlikely.\n\nA human looking would identify individuals in an office directory by name (or \"keyword\") rather than a continuous set of integers. \n\nUsing a Python __dictionary__ we can build a 'map' from names (*keys*) to room numbers (*values*). \n\nA Python dictionary (`dict`) is declared using curly braces:\n\n\n```python\nroom_allocation = {\"Endo\": None, \n \"Philamore\": 32, \n \"Matsuno\": 31, \n \"Sawaragi\": 28, \n \"Okino\": 28, \n \"Kumegawa\": 19}\n\nprint(room_allocation)\n\nprint(type(room_allocation))\n```\n\nEach entry is separated by a comma. \n\nFor each entry we have:\n - a 'key' (followed by a colon)\n - a 'value'. \n \n__Note:__ For empty values (e.g. `Endo` in the example above) we use '`None`' for the value.\n\n`None` is a Python keyword for 'nothing' or 'empty'.\n\nNow if we want to know which office belongs to Philamore-sensei, we can query the dictionary by key:\n\n\n```python\nphilamore_office = room_allocation[\"Philamore\"]\nprint(philamore_office)\n```\n\n### Iterating over Dictionaries\n\nWe can __*iterate*__ over the keys in a dictionary as we iterated over the elements of a list or tuple:\n\n__Try it yourself:__\n
\nRefer back to:\n - Iterating Over Lists\n - Iterating Over Tuples\nto remind yourself how to *iterate* over a data structure.\n\n
\nUsing __exactly the same method__, iterate over the entries in the dictionary `room_allocation` using a `for` loop.\n
\nEach time the code loops, print the next dictionary entry. \n\n\n```python\n# iterate over the dictionary, room_allocation.\n# print each entry\n\n```\n\nNotice that this only prints the keys.\n\nWe can access `keys` and `values` separately by:\n - creating two variable names before `in` \n - putting `items()` after the dictionary name\n\n\n```python\nfor name, room_number in room_allocation.items():\n    print(name, room_number) \n```\n\n__Try it yourself__
\nCopy and paste the code from the previous cell.\n
\nEdit it so that it prints the room numbers only. \n\nRemember you can __\"comment out\"__ the existing code (instead of deleting it) so that you can refer to it later.\ne.g.\n```python\n#print(name, room_number)\n```\n\n\n\n```python\n# iterate over the dictionary, room_allocation.\n# print each name\n```\n\nNote that the order of the printed entries in the dictionary is different from the input order. \n\nA dictionary stores data differently from a list or tuple. \n\n\n\n\n\n\n\n\n\n\n### Look-up Keys\n\nLists and tuples store entries as continuous pieces of memory, which is why we can access entries by index. \n\nIndexing cannot be used to access the entries of a dictionary. For example:\n```python\nprint(room_allocation[0])\n```\nraises an error. \n\nDictionaries use a different type of storage which allows us to perform look-ups using a 'key'.\n\nprint(room_allocation[\"Philamore\"])\n\n\n### Adding Entries to a Dictionary\n\nWe use this same code to add new entries to an existing dictionary: \n\n\n```python\nprint(room_allocation)\n\nroom_allocation[\"Fujiwara\"]= 34\n\nprint(\"\")\n\nprint(room_allocation)\n\n```\n\n\n### Removing Entries from a Dictionary\n\nTo remove an item from a disctionary we use the command `del`.\n\n\n```python\nprint(room_allocation)\n\ndel room_allocation[\"Fujiwara\"]\n\nprint(\"\")\n\nprint(room_allocation)\n```\n\n__Try it yourself__\n
\nOkino-sensei is leaving Kyoto University. \n\nHer office will be re-allocated to a new member of staff, Ito-sensei.\n\nIn the cell below, update the dictionary by deleting the entry for Okino-sensei and creating a new entry for Ito-sensei.\n\nPrint the new list.\n\n\n```python\n# Remove Okino-sensei (room 28) from the dictionary.\n# Add a new entry for Ito-sensei (room 28)\n```\n\nSo far we have used a string variable types for the dictionary keys.\n\nHowever, we can use almost any variable type as a key and we can mix types. \n\n\n\n\n### Re-structuring to make a new Dictionary\n__Example__: We could 'invert' the room allocation dictionary to create a room-to-name map.\n\nLet's build a new dictionary (`room_map`) by looping through the old dictionary (`room_allocation`) using a `for` loop:\n\n\n```python\n# Create empty dictionary\nroom_map = {}\n\n# Build dictionary to map 'room number' -> name \nfor name, room_number in room_allocation.items():\n \n # Insert entry into new dictionary\n room_map[room_number] = name\n\nprint(room_map)\n```\n\nWe can now consult the room-to-name map to find out if a particular room is occupied and by whom.\n\nLet's assume some rooms are unoccupied and therefore do not exist in this dictionary.\n\n\n\n\nIf we try to use a key that does not exist in the dictionary, e.g.\n\n occupant17 = room_map[17]\n\nPython will give an error (raise an exception). \n\nIf we're not sure that a __key__ is present (that a room is occupied or unocupied in this case), we can check using the funstion in '`in`' \n
(we used this function to check wether an entry exists in a __list__)\n\n\n\n```python\nprint(19 in room_map)\nprint(17 in room_map)\n```\n\nSo we know that:\n - room 17 is unoccupied\n - room 19 is occupied\n\n\nWhen using `in`, take care to check for the __key__ (not the value)\n\n\n```python\nprint('Kumegawa' in room_map)\n```\n\n#### Potential application: avoid generating errors if unoccupied room numbers are entered. \n\nFor example, in a program that checks the occupants of rooms by entering the room number: \n\n\n```python\nrooms_to_check = [17, 19]\n\nfor room in rooms_to_check:\n \n if room in room_map:\n print(\"Room\", room, \"is occupied by\", room_map[room], \"-sensei\")\n \n else:\n print(\"Room\", room, \"is unoccupied.\")\n```\n\n## Choosing a data structure\n\nAn important task when developing a computer program is selecting the *appropriate* data structure for a task.\n\nHere are some examples of the suitablity of the data types we have studied for some common computing tasks.\n\n\n- __Dynamically changing individual elements of a data structure.__ \n
\ne.g. updating the occupant of a room or adding a name to a list of group members.
\n__Lists and dictionaries__ allow us to do this.
\n__Tuples__ do not.\n\n- __Storing items in a particular sequence (so that they can be addressed by index or in a particular order)__.\n
\ne.g. representing the x, y, z coordinates of a 3D position vector, storing data collected from an experiment as a time series. \n
\n__Lists and tuples__ allow us to do this.\n
\n__Dictionaries__ do not.\n\n- __Performing an operation on every item in a sequence.__ \n
\ne.g. checking every item in a data set against a particular condition (e.g. prime number, multiple of 5....etc), performing an algebraic operation on every item in a data set. \n
\n__Lists and tuples__ make this simple as we can call each entry in turn using its index.\n
\nFor __dictionaries__ this is less efficient as it requires more code.\n\n- __Selecting a single item from a data structure without knowing its position in a sequence.__ \ne.g. looking up the profile of a person using their name, avoiding looping through a large data set in order to identify a single entry. \n
\n__Dictionaries__ allow us to select a single entry by an associated (unique) key variable.\n
\n__Lists and tuples__ make this difficult: to pick out a single value we must either i) know its position in an ordered sequence, or ii) loop through every item until we find it.\n\n\n- __Protecting individual items of a data sequence from being added, removed or changed within the program.__\n
\ne.g. representing a vector of fixed length with fixed values, representing the coordinates of a fixed point. \n
\n__Tuples__ allow us to do this.\n
\n__Lists and dictionaries__ do not.\n\n- __Speed__\nFor many numerical computations, efficiency is essential. More flexible data structures are generally less efficient computationally. They require more computer memory. We will study the differences in speed between different data structures in a later seminar.\n\n## Review Exercises\nHere is a series of engineering problems for you to practise each of the new Python skills that you have learnt today.\n\n### Review Exercise: Data structures.\n\n__(A)__ In the cell below, what type of data structure is C?\n\n__(B)__ Write a line of code that checks whether 3 exists within the data structure.\n\n__(C)__ Write a line of code that checks whether 3.0 exists within the data structure.\n\n__(D)__ Write a line of code that checks whether \"3\" exists within the data structure.\n\n\n\n```python\nC = (2, 3, 5, 6, 1, \"hello\")\n```\n\n### Review Exercise: Using Lists with `for` Loops.\n\nIn the cell below:\n\n- Create a list with the names of the months. \n
\n- Create a second list with the number of days in each month (for a regular year). \n
\n- Create a `for` loop that prints:\n\n`The number of days in MONTH is XX days`\n\nwhere `MONTH` is the name of the month and `XX` is the correct number of days in that month.\n\nHint: Refer to Indexing Example: The dot product of two vectors for how to use two vectors in a loop.\n\n\n\n```python\n# A for loop to print the number of days in each month\n```\n\n### Review Exercise: Indexing.\n\n__(A)__ In the cell below write a program that adds two vectors, $\mathbf{A}$ and $\mathbf{B}$, expressed as lists: \n
\n\n$\\mathbf{A} = [-2, 1, 3]$\n\n$\\mathbf{B} = [6, 2, 2]$\n\n $ \\mathbf{C} = [C_1, \n C_2, ...\n C_n] = \\mathbf{A} + \\mathbf{B} = [(A_1 + B_1), \n (A_2 + B_2), ... \n (A_n + B_n)]$\n\n__Hints:__ \n- Refer to Indexing Example: The dot product of two vectors for how to use two vectors in a loop. \n- Start by creating an empty list, `C = []`. \n
Add an element to the list each time the code loops using the method `C.append()` (a short sketch of this pattern is shown below)\n
Jump to Adding an Item to a List\n \n
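\nAs a minimal sketch of the append pattern described in the hints above (using two short made-up lists rather than the $\mathbf{A}$ and $\mathbf{B}$ of the exercise), the loop could look like this:\n\n\n```python\n# Sketch only: element-wise sum of two equal-length example lists\nU = [1, 2, 3]\nV = [10, 20, 30]\n\nC = []  # start with an empty list\n\nfor i in range(len(U)):\n    C.append(U[i] + V[i])  # add one element each time the code loops\n\nprint(C)  # prints [11, 22, 33]\n```\n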
\n__(B)__ To add two vectors, the number of elements in each vector must be equal. \n
Use the function `len()` to print the length of $\\mathbf{A}$ and the length of $\\mathbf{B}$ before adding the two vectors.\n
Jump to Finding the Length of a List\n\n
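\nFor example, `len()` simply returns the number of entries in a list, so for two made-up lists:\n\n\n```python\n# Sketch only: printing the lengths of two example lists\nU = [1, 2, 3]\nV = [10, 20]\n\nprint(len(U))  # prints 3\nprint(len(V))  # prints 2\n```\n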
\n__(C)__ Use `if` and `else` statements (Seminar 2) to:\n- add the two vectors __only__ if the length of $\\mathbf{A}$ and the length of $\\mathbf{B}$ are equal.\n- otherwise print a message (e.g. \"`unequal vector length!`\") \n\nHint: Use a logical operator (`==`, `<`, `>`....) to compare the lengths of $\\mathbf{A}$ and $\\mathbf{B}$.
Refer to __Logical Operators__ (Seminar 2). \n\n
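\nOne possible shape for the check described in __(C)__, again using made-up lists rather than the $\mathbf{A}$ and $\mathbf{B}$ of the exercise, is sketched below:\n\n\n```python\n# Sketch only: add two example vectors element-wise, but only if their lengths are equal\nU = [1, 2, 3]\nV = [10, 20]\n\nif len(U) == len(V):\n    C = []\n    for i in range(len(U)):\n        C.append(U[i] + V[i])\n    print(C)\nelse:\n    print(\"unequal vector length!\")\n```\n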
\n__(D)__ Check your code works by using it to try and add two vectors with:\n
i) the same number of elements in each vector\n
ii) a different number of elements in each vector\n\n\n```python\n# Vector addition program with length check.\n```\n\n### Review Exercise: `if` and `else` statements.\n\nCopy and paste the program you wrote earlier to find the dot product of two vectors into the cell below.\n\nWithin the loop use `if`, `elif` and `else` to make the program print:\n - \"`The angle between vectors is acute`\" if the dot product is positive.\n - \"`The angle between vectors is obtuse`\" if the dot product is negative.\n - \"`The vectors are perpendicular`\" if the dot product is 0.\n\n\n```python\n# Determining angle types using the dot product.\n```\n\n### Review Exercise: Dictionaries.\n\n\n\n__(A)__ Choose 5 elements from the periodic table.\n
\nIn the cell below create a dictionary: \nJump to Dictionaries\n - __keys:__ chemical symbol names \n - __values:__ atomic numbers \n \n e.g. \n ```python\n dictionary = {\"C\":6, \"N\":7, \"O\":8....}\n \n ```\n\n\n__(B)__ Remove one entry from the dictionary and print the updated version.\n
Jump to Removing Entries from a Dictionary\n\n__(C)__ Add a new entry (chemical symbol and atomic number) to the dictionary and print the updated version.\n
Jump to Adding Entries to a Dictionary\n\n__(D)__ Use a `for` loop to create a new dictionary: \n - __keys:__ atomic numbers \n - __values:__ chemical symbols\nusing your original dictionary. \n
Hint: Refer to the earlier example of re-structuring to make a new dictionary. \n\n__*Optional Extension*__\n\n__(E)__ Print a __list__ of the chemical symbols in your dictionary, sorted into alphabetical order.\nHints:\n - Create an empty list \n - Use a for loop to add each chemical symbol to the list \n - Sort the list in alphabetical order \n\n\n```python\n# Dictionary of periodic table items.\n```\n\n### Review Exercise: `while` loops (bisection)\n\nBisection is an iterative method for approximating a root of a function $y = F(x)$ \n
i.e. a value of $x$ for which the function $F(x)$ is equal to zero. \n
Therefore the roots are found where the line of the function F(x) __crosses__ the x axis (the red dot indicates the root of the function):\n\n\n\n\n\nIf we know such a __crossing point__ lies within the interval between x = a and x = b, we can repeatedly *bisect* this interval to narrow down the interval in which the root must lie. \n\nIn each iteration, $x_{mid} = \frac{a + b}{2}$ is computed and used to determine whether the crossing point is between x$_{mid}$ and a or x$_{mid}$ and b.\n
This is used to define a new, narrower interval in which we know the crossing point lies.\n\n x_mid = (a + b) / 2\n \n\n # If F(x) changes sign between F(x_mid) and F(a), \n # the root must lie between F(x_mid) and F(a)\n \n if F(x_mid) * F(a) < 0:\n b = x_mid\n x_mid = (a + b)/2\n \n \n # If F(x) changes sign between F(x_mid) and F(b), \n # the root must lie between F(x_mid) and F(b)\n \n else:\n a = x_mid\n x_mid = (a + b)/2 \n \n\n\nIn the example shown, the midpoint (x$_{mid}$) of a$_1$ and b$_1$ is b$_2$ \n
F(a$_1$) $\\times$ F(b$_2$) = negative\n
F(b$_1$) $\\times$ F(b$_2$) = positive\n\nSo the new increment is between a$_1$ and b$_2$.\n\n
\n\nBy repeating this process, the value of F(x$_{mid}$) should become closer to zero with each iteration.\n\nThe process is repeated until the *absolute* value |F(x$_{mid}$)| is sufficiently small (below a predetermined value (*tolerance*)). \n\nWe then determine x$_{mid}$ is the root of the function. \n\nIt is a very simple and robust method.\n\n**Task:** \n\n$$\nF(x) = 4x^3 - 3x^2 - 25x - 6\n$$\n\n\n\nThe function has one root between x = 0 and x = -0.6.\n\n__(A)__ Use the bisection method to estimate the value of the root between x = 0 and x = -0.6.\n
Instructions:\n- Use a while loop to repeat the code above __while__ abs(F(x$_{mid}$)) > 1 $\times10^{-6}$.\n- Each time the code loops:\n    - __Compute__ F(a), F(b) and F(x_mid) [Hint: Use appropriate variable names that don't contain () parentheses] \n    - __Print__ F(x$_{mid}$) to check abs(F(x$_{mid}$)) $< 1 \times10^{-6}$.
Use the function `abs()` to compute the absolute value of a number,
https://docs.python.org/2/library/functions.html#abs
e.g. `y = abs(x)` assigns the absolute value of `x` to `y`. \n    - __Bisect__ the interval using the code shown above\n- __After__ the loop print the final value of x$_{mid}$ using `print(\"root = \", x_mid) `.
This value is the estimate of the root.\n\nJump to While Loops\n\n__(B)__ The bisection method is only effective where F(a) and F(b) are of opposite sign.\n
i.e. where F(a) $\\times$ F(b) $ < 0$\n
Add an if statement to your code so that the while loop is only run *if* the inputs a and b are of opposite sign.\n\n\n```python\n# Bisection while loop\n```\n\n __(C)__ In the previous example you stopped the while loop when the value of the function was sufficiently small (abs(F(x$_{mid}$)) $< 1 \\times10^{-6}$) that we can consider the corresponding value of x to be a root of the function. \n\nThis time we are going to edit your code so that the loop is stopped when it reaches a __maximum number of iterations__.
Copy and paste your code from the cell above into the cell below. \n
Replace your __while loop__ with a __for loop__ that runs the code in the loop 25 times then stops. \n\n__(D)__ Within the for loop, add a `break` statement.\n
The `break` statement should exit the for loop __if__ abs(F(x$_{mid}$)) $< 1 \times10^{-6}$.\n
i.e. __if__ abs(F(x$_{mid}$)) $< 1 \times10^{-6}$ the loop will stop before the maximum number of iterations is reached.\n
Before the command `break`, print the value of x$_{mid}$ using `print(\"root = \", x_mid) `.
This value is the estimate of the root.\n\nJump to break \n\n\n```python\n# Copy and paste your code from the cell above, here\n```\n\n\n```python\n# Program to calculate the area of a polygon.\n```\n\n# Updating your git repository\n\nYou have made several changes to your interactive textbook.\n\nThe final thing we are going to do is add these changes to your online repository so that:\n - I can check your progress\n - You can access the changes from outside of the university server. \n \n > Save your work.\n >
`git add -A`\n >
`git commit -m \"A short message describing changes\"`\n >
`git push origin master`\n \n
Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__. \n\n## Summary\n - A data structure is used to assign a collection of values to a single collection name.\n - A Python list can store multiple items of data in sequentially numbered elements (numbering starts at zero)\n - Data stored in a list element can be referenced using the list name followed by an index number in [] square brackets.\n - The `len()` function returns the length of a specified list.\n - A Python tuple is a sequence whose values cannot be individually changed, removed or added to (except by adding another tuple).\n - Data stored in a tuple element can be referenced using the tuple name followed by an index number in [] square brackets.\n - A Python dictionary is a collection of key: value pairs of data in which each key must be unique.\n - Data stored in a dictionary element can be referenced using the dictionary name followed by its key in [] square brackets. \n\n# Homework \n\n1. __PULL__ the changes you made in-class today to your personal computer.\n1. __COMPLETE__ any unfinished Review Exercises.\n1. __PUSH__ the changes you make at home to your online repository. \n\n
Refer to supplementary material: __S1_Introduction_to_Version_Control.ipynb__. \n\nIn particular, please complete: __Review Exercise: `while` loops (bisection)__. \n
You will need to refer to your answer in next week's Seminar. \n", "meta": {"hexsha": "76fc7aa87b0be0aff74329c9b7250bb70ffc0e33", "size": 72506, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "3_Data_structures.ipynb", "max_stars_repo_name": "hphilamore/dummyupstream", "max_stars_repo_head_hexsha": "d7683603f8832b579f2af907958b9f544d7cb01a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "3_Data_structures.ipynb", "max_issues_repo_name": "hphilamore/dummyupstream", "max_issues_repo_head_hexsha": "d7683603f8832b579f2af907958b9f544d7cb01a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "3_Data_structures.ipynb", "max_forks_repo_name": "hphilamore/dummyupstream", "max_forks_repo_head_hexsha": "d7683603f8832b579f2af907958b9f544d7cb01a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 25.5843330981, "max_line_length": 335, "alphanum_fraction": 0.5398725623, "converted": true, "num_tokens": 10284, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.2658804614657029, "lm_q2_score": 0.36658972248186, "lm_q1q2_score": 0.09746904458206089}} {"text": "## Scientific Computing for Chemists: An Undergraduate Course in Simulations, Data Processing, and Visualization\n\n#### Abstract:\n\nThe Scientific Computing for Chemists course taught at Wabash College teaches chemistry students to use the Python programming language, Jupyter notebooks, and a number of common Python scientific libraries to process, analyze, and visualize data. Assuming no prior programming experience, the course introduces students to basic programming and applies these skills to solve a variety of chemical problems. The course is structured around Jupyter notebooks as easily shareable documents for lectures, homework sets, and projects; the software used in this course is free and open source, making it easily accessible to any school or research lab. \n\n
\n\n[Weiss, Charles J. \"Scientific Computing for Chemists: An Undergraduate Course in Simulations, Data Processing, and Visualization.\" Journal of Chemical Education 94.5 (2017): 592-597.](http://pubs.acs.org/doi/abs/10.1021/acs.jchemed.7b00078)\n \n\n# What is a Jupyter Notebook?\n## What does it do, how does it work, and why should you use it?\n\n\"The [Jupyter Notebook](http://jupyter.org) is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.\"\n\n
\n\nNotebooks can be shared on [GitHub](https://github.com). Check out [my GitHub repository](https://github.com/robraddi).\n\n# Hidden Features:\n\n### List the %Magic Commands\n\n\n```python\n%lsmagic\n```\n\n\n\n\n Available line magics:\n %alias %alias_magic %autocall %automagic %autosave %bookmark %cat %cd %clear %colors %config %connect_info %cp %debug %dhist %dirs %doctest_mode %ed %edit %env %gui %hist %history %killbgscripts %ldir %less %lf %lk %ll %load %load_ext %loadpy %logoff %logon %logstart %logstate %logstop %ls %lsmagic %lx %macro %magic %man %matplotlib %mkdir %more %mv %notebook %page %pastebin %pdb %pdef %pdoc %pfile %pinfo %pinfo2 %popd %pprint %precision %profile %prun %psearch %psource %pushd %pwd %pycat %pylab %qtconsole %quickref %recall %rehashx %reload_ext %rep %rerun %reset %reset_selective %rm %rmdir %run %save %sc %set_env %store %sx %system %tb %time %timeit %unalias %unload_ext %who %who_ls %whos %xdel %xmode\n \n Available cell magics:\n %%! %%HTML %%SVG %%bash %%capture %%debug %%file %%html %%javascript %%js %%latex %%perl %%prun %%pypy %%python %%python2 %%python3 %%ruby %%script %%sh %%svg %%sx %%system %%time %%timeit %%writefile\n \n Automagic is ON, % prefix IS NOT needed for line magics.\n\n\n\n### Example:\n\n\n```python\n%ls\n```\n\n \u001b[31m67-66-3-IR.jdx\u001b[m\u001b[m* \u001b[31mdance_of_the_goblins.mp3\u001b[m\u001b[m*\r\n \u001b[31mHPLC.xlsx\u001b[m\u001b[m* \u001b[31mdance_of_the_goblins.wav\u001b[m\u001b[m*\r\n \u001b[31mJupyterNotebookExample.ipynb\u001b[m\u001b[m* \u001b[31mintegration_plot.png\u001b[m\u001b[m*\r\n README.md \u001b[31mipython_log.py\u001b[m\u001b[m*\r\n \u001b[1m\u001b[36mThinkDSP\u001b[m\u001b[m/ \u001b[31mjcamp.txt\u001b[m\u001b[m*\r\n \u001b[31mTraj_1071_THR18_87.B_pub.mp4\u001b[m\u001b[m* \u001b[31mplots.py\u001b[m\u001b[m*\r\n \u001b[31mcleopatra.flac\u001b[m\u001b[m* plots.pyc\r\n \u001b[31mcustom.css\u001b[m\u001b[m* \u001b[31mtesting.txt\u001b[m\u001b[m*\r\n\n\n# Use various languages:\n## Languages: Python, HTML, Bash, R, LaTex, etc.\n\n### _MathJax_ (LaTeX-like) \n\nApplication of Hookes Law to vibrational frequency:\n$$\\displaystyle \\widetilde{\\nu}={\\frac{1}{2\\pi c}}{\\sqrt {\\frac {k_{force}}{\\mu}}} \\tag{1}$$\n\nLennard-Jones potential:\n$$\nLJ(r) = 4\\epsilon[ {(\\frac{\\sigma}{r})}^{12} - {(\\frac{\\sigma}{r})}^{6} ] \\tag{2}\n$$\n\n### Redox Reactions:\n\n$$ [Fe(CN)_{6}]^{3-} + e^{-} \\rightleftharpoons [Fe(CN)_{6}]^{4-} \\hspace{0.5cm} E_{cell} = 0.356\\hspace{0.1cm} V \\tag{3} $$\n\n\n\n#### Bash shell\n\n\n```bash\n%%bash\n# Very basic bash script:\n\njob=\"count\" # the object that you want to toggle\nnum=10 # the max value of our loop\n\n# Creating a list of elements\nenu=(zero one two three four five six \\\n seven eight nine ten)\n\nif [ $job == \"count\" ]; # if the job = \"count\", then...\nthen\n for i in $(seq 0 $num);\n do\n if [ $i -eq 3 ] || [ $i -eq 7 ] ; # if i = 3 OR i = 7\n then echo ${enu[i]}; # then print the ith element\n \n else\n echo $i; \n fi\n done\nelse\n echo ${enu[$num]}; # If job \u2260 count, then...\nfi\n```\n\n 0\n 1\n 2\n three\n 4\n 5\n 6\n seven\n 8\n 9\n 10\n\n\n# Here is a list of all the [Jupyter Kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels) available.\n## They can be installed and used inside the notebook.\n\n# Sympy\n\n\n```python\nimport sympy #symbolic mathematics library in python\nx,y,z = sympy.symbols('x,y,z') # creating our variables\nf = (2 + x) * (3 + x) # creating our function\nF = '(2 + x)(3 + x)' # string of the 
function\nprint(sympy.expand(f),sympy.solve(f))\nprint(sympy.solve(f))\nprint(sympy.diff(f,x)) # differentiate \nprint(sympy.integrate(f,x)) # integrate \n```\n\n (x**2 + 5*x + 6, [-3, -2])\n [-3, -2]\n 2*x + 5\n x**3/3 + 5*x**2/2 + 6*x\n\n\n\n```python\nsympy.init_printing() # pretty print\nsympy.integrate(f,x)\n```\n\n\n```python\n%matplotlib inline \n#matplotlib inline = have the plot render within the cell\nimport numpy as np # linear algebra/scientfic computing library\n\n# from -50 to 50 with 22 evenly spaced points\nx = np.linspace(-50,50,22)\nprint(len(x)) # print the length of the vector\nprint(x) # print the vector\n\ny_ = [] # appending elements to this list\nfor i in range(0,len(x)):\n y_.append(x[i]**3. /3. + 5.*x[i]**2. /2. + 6.*x[i])\n \ny = np.array(y_) # convert the list into an array \n\n# Lets plot the function:\nfrom matplotlib import pyplot as plt \n\nfig = plt.figure(figsize=(9,9)) # creating a figure object\nax = fig.add_subplot(211) # setting the size of the\n# plot of x and y and label it:\nax.plot(x,y,label='1st integration of %s'%F) \nax.set_xlabel('x') \nax.set_ylabel('y') \nax.legend(loc='best')\nfig.show()\n\n```\n\n\n```python\n# Import \"plot\" script of my own creation\nfrom plots import simple_plot as sp \n\n# call on the function simple_plot:\nsp(Type='scatter',size=211,fig_size=(9,9),x=x,y=y,\n xlabel='x',ylabel='y',color='k',invert_x_axis=False,\n fit=True,order=3,name='integration_plot.png')\n```\n\n## But, what if we wanted to look at the source-code?\n\n\n```bash\n%%bash\n# Lets print the first 5 lines of this script:\nhead -n 5 ./plots.py\n```\n\n #!/usr/bin/env python\n \n # Load Libraries:{{{\n import numpy as np ############################# Linear Algebra Library\n from scipy.optimize import fsolve\n\n\n# Data acquisition \n### Importing Data:\n### Using text files - \".dat\";\".txt\";\".csv\";\".log\";\".db\";etc.\n### Using excel files\n\n\n```python\nimport openpyxl # from excel to python\nwb = openpyxl.load_workbook('HPLC.xlsx')\nSheet1 = wb.get_sheet_by_name('Sheet1')\n```\n\n\n```python\npeak = [Sheet1.cell(row=i, column=k).value for i in range(2, 12) for k in range(1,2)]\narea = [Sheet1.cell(row=i, column=k).value for i in range(2, 12) for k in range(4,5)]\nheight = [Sheet1.cell(row=i, column=k).value for i in range(2, 12) for k in range(5,6)]\ntR = [Sheet1.cell(row=i, column=k).value for i in range(2, 12) for k in range(2,3)]\nvol = [Sheet1.cell(row=i, column=k).value for i in range(2, 12) for k in range(3,4)]\n```\n\n\n```python\nprint peak\nprint area\nprint vol\nprint tR\nprint height\n```\n\n [u'caff std', u'Caff + h2o', u'Caff 3 mL', u'Caff 6 mL', u'Caff 9 mL', u'Caff 12 mL', u'decaf +h2o', u'decaf 3mL', u'decaf 6mL', u'decaf 9mL']\n [500000L, 215000L, 254000L, 285000L, 336000L, 370000L, 15400L, 42100L, 73700L, 122000L]\n [None, 0L, 3L, 6L, 9L, 12L, 0L, 3L, 6L, 9L]\n [2.125, 2.123, 2.125, 2.125, 2.127, 2.128, 2.118, 2.13, 2.13, 2.132]\n [110000L, 43600L, 50800L, 56100L, 65700L, 69700L, 1960L, 6890L, 12700L, 21000L]\n\n\n# Playing With Sounds:\n \n\n# Plotting the Waveform for the first 4.5 seconds of \"Dance of the Goblins\" by Antonio Bazzini\n\n\n\n\n\n## Waveform $\\rightarrow$ mp4\n\n\n```bash\n%%bash\ncd /volumes/rdrive/Fall2017/Instrumental_Design/Project/\nffmpeg -i dance_of_the_goblins.mp3 dance_of_the_goblins.wav\n```\n\n bash: line 1: cd: /volumes/rdrive/Fall2017/Instrumental_Design/Project/: No such file or directory\n ffmpeg version 3.4.2 Copyright (c) 2000-2018 the FFmpeg developers\n built with Apple LLVM version 9.0.0 
(clang-900.0.39.2)\n configuration: --prefix=/usr/local/Cellar/ffmpeg/3.4.2 --enable-shared --enable-pthreads --enable-version3 --enable-hardcoded-tables --enable-avresample --cc=clang --host-cflags= --host-ldflags= --disable-jack --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-opencl --enable-videotoolbox --disable-lzma --enable-nonfree\n libavutil 55. 78.100 / 55. 78.100\n libavcodec 57.107.100 / 57.107.100\n libavformat 57. 83.100 / 57. 83.100\n libavdevice 57. 10.100 / 57. 10.100\n libavfilter 6.107.100 / 6.107.100\n libavresample 3. 7. 0 / 3. 7. 0\n libswscale 4. 8.100 / 4. 8.100\n libswresample 2. 9.100 / 2. 9.100\n libpostproc 54. 7.100 / 54. 7.100\n Input #0, mp3, from 'dance_of_the_goblins.mp3':\n Metadata:\n major_brand : dash\n minor_version : 0\n compatible_brands: iso6mp41\n encoder : Lavf56.25.101\n Duration: 00:04:58.03, start: 0.025057, bitrate: 192 kb/s\n Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 192 kb/s\n Metadata:\n encoder : Lavc56.13\n File 'dance_of_the_goblins.wav' already exists. Overwrite ? [y/N] Not overwriting - exiting\n\n\n\n```python\nimport os\nwd = '/Volumes/RMR_4TB/Undergrad_courses/Instrumental_Design/Project/'\nos.chdir(wd+'ThinkDSP/code')\nfrom __future__ import print_function, division\nimport thinkdsp\nimport thinkplot\nimport scipy.fftpack\nfrom scipy.fftpack import fft as fft\nfrom scipy.fftpack import ifft as ifft\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nos.chdir(wd)\n```\n\n## Create widget for playing inside notebook\n\n\n```python\nwave = wd+'dance_of_the_goblins.wav'\nresponse = thinkdsp.read_wave(wave)\nstart = 2.0\nduration = response.duration/65.\nresponse = response.segment(start=start, duration=duration)\nresponse.shift(-start)\nresponse.normalize()\nprint(duration,'seconds')\n```\n\n 4.58470713414 seconds\n\n\n\n```python\nresponse.make_audio()\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\nsegment = response.segment(start=start,duration=duration)\nsegment.normalize()\ndata = segment.quantize(bound=True,dtype=float)\nprint('Number of data points = ',len(data))\n```\n\n Number of data points = 113986\n\n\n# Setting x and y\n\n\n```python\n# Waveform:\ntotal_time = segment.duration+start\nx = np.array([total_time/len(data)*i for i in range(0,len(data),1)])\ny = [data[i] for i in range(0,len(data),1)]\n\n# Fourier Transform\n# Number of sample points\nn = len(y) # len = length of a vector\nd = 1/response.framerate\nprint('timestep = ',d)\nFT_y = fft(y)\nfreq = np.fft.fftfreq(n,d)\nxf,yf = [],[]\nfor i in range(0,len(FT_y)):\n if freq[i] > 0 and FT_y[i] >0:\n yf.append(FT_y[i])\n xf.append(freq[i])\n\n```\n\n timestep = 2.26757369615e-05\n\n\n## Plotting and Signal Filtering\n\n\n```python\n%pylab inline\nfrom plots import onebytwo\nx1,x2 = x,xf\ny1,y2 = y,yf\nx1label,x2label = 'Time,t/(seconds)','Frequency, $\\\\nu$ /(Hz)'\ny1label,y2label = 'Signal Intensity','Signal Intensity'\nType1,Type2 = 'line','line'\nfit1,fit2 = False,False\ncolor = 'k'\norder1,order2 = 1,1\nname = 'FT.png'\nonebytwo(Type1=Type1,Type2=Type2,x1=x1,y1=y1,x2=x2,y2=y2,\n x1label=x1label,y1label=y1label,x2label=x2label,\n y2label=y2label,name=name,color=color,fit1=fit1,fit2=fit2,\n order1=order1,order2=order2,\n invert_x1_axis=False,invert_x2_axis=False)\n\n```\n\n# Weighted Moving Average Smoothing\n\nThe above method treats each point in the average the same and 
only takes the average with the immediately adjacent data points. The triangular smooth approach weights the different data points. For example, if we take the average with five data points as described below.\n\n\n$$ S_{j} = \\frac{D_{j\u22122} +2D_{j\u22121} +3D_{j} +2D_{j+1} +D_{j+1}}{ 9}$$\n\n\n```python\ndef tri_smooth(array):\n '''\n (ndarray) -> ndarray\n For an ndarray x, returns a new array xs as the weighted average of each point\n The new array xs is shorter than x, len(xs) = len(x)-4.\n '''\n sum = array[:-4] + 2*array[1:-3] + 3*array[2:-2] + 2*array[3:-1] + array[4:] \n array_smooth = sum/9\n return(array_smooth)\n```\n\n## Compare Moving Average and Savitzky\u2013Golay Filter:\n\n\n```python\nfrom scipy.fftpack import *\nfrom scipy.signal import *\n\n# Weighted Moving average:\nxs = tri_smooth(array=np.array(xf))\nys = tri_smooth(array=np.array(yf))\n\n# \nsfx = scipy.signal.savgol_filter(xf, window_length=55, polyorder=3)\nsfy = scipy.signal.savgol_filter(yf, window_length=55, polyorder=3)\n#x,y = scipy.ifft(X),scipy.ifft(Y)\n```\n\n\n```python\n%pylab inline\n\nx1,x2 = xs,sfx\ny1,y2 = ys,sfy\nx1label,x2label = '','Frequency, $\\\\nu$ /(Hz)'\ny1label,y2label = 'Signal Intensity','Signal Intensity'\nType1,Type2 = 'line','line'\nfit1,fit2 = False,False\ncolor = 'k'\norder1,order2 = 1,1\nname = 'FT.png'\nonebytwo(Type1=Type1,Type2=Type2,x1=x1,y1=y1,x2=x2,y2=y2,\n x1label=x1label,y1label=y1label,x2label=x2label,\n y2label=y2label,name=name,color=color,fit1=fit1,fit2=fit2,\n order1=order1,order2=order2,\n invert_x1_axis=False,invert_x2_axis=False)\n```\n\n### If we wanted, we could compare these frequencies to notes played on a piano or violin \n\n\n```python\nfrom IPython.core.display import display,HTML\n# uncomment to render webpage in notebook\n#display(HTML(\"https://en.wikipedia.org/wiki/Piano_key_frequencies\"))\n```\n\n\n```python\n# Print the first 5000 frequencies, but only find the meaningful ones:\nfor i in range(0,len(yf)):\n if xf[i] <= 5000:\n #print(xf[i],yf[i])\n if yf[i] >= 1500 and yf[i] <= np.max(yf):\n print('High Intensity:',xf[i],yf[i])\n elif yf[i] <= 750 and yf[i] >= 500:\n print('Mid Intensity:',xf[i],yf[i])\n \n```\n\n Mid Intensity: 124.578457003 (599.246490355+93.5075799439j)\n Mid Intensity: 244.514238591 (661.958345537-1041.80975188j)\n Mid Intensity: 369.86647483 (674.382456481+409.785261663j)\n High Intensity: 370.640254066 (1676.74743638-64.1712179436j)\n Mid Intensity: 371.414033302 (725.545215007-1791.96148067j)\n Mid Intensity: 374.509150247 (518.009259438-88.2589895154j)\n Mid Intensity: 377.991156809 (683.853566481-424.141803253j)\n Mid Intensity: 388.824066113 (557.481733683-13.2501579961j)\n Mid Intensity: 433.316372186 (648.480327644+335.481788001j)\n Mid Intensity: 494.444931834 (550.480488249-669.992062283j)\n Mid Intensity: 498.700717632 (664.274249436+121.336504039j)\n Mid Intensity: 501.02205534 (570.497927931+547.527433763j)\n Mid Intensity: 502.569613812 (688.067674151-125.349911749j)\n Mid Intensity: 526.55677013 (549.160479997+189.187274991j)\n Mid Intensity: 616.702051129 (505.703582791+311.682746081j)\n High Intensity: 619.797168073 (1538.80471177+642.965678797j)\n High Intensity: 620.184057691 (1643.01242333-785.611098359j)\n Mid Intensity: 745.923183549 (520.156615555+525.941166264j)\n Mid Intensity: 990.824311758 (666.28585511-532.515190507j)\n Mid Intensity: 1003.20477953 (621.363188902-10.3078738489j)\n Mid Intensity: 1241.52878424 (535.379174535+302.805023687j)\n Mid Intensity: 1241.91567385 
(611.523613107-236.93916157j)\n Mid Intensity: 1246.94523889 (609.664113881-149.173723693j)\n Mid Intensity: 1747.58040461 (617.889090108+33.9607469876j)\n\n\n# Thank you for listening!\n### Special thanks to the following people: Dr. Vincent Voelz, Matt Hurley, Hongbin Wan & Yunhui Ge\n\n# Sources & Dependencies:\n\n[Anaconda](https://www.anaconda.com/what-is-anaconda/) - \"Anaconda is the world\u2019s most popular Python data science platform\". \"Package Management: Manage packages, dependencies and environments with conda\"\n\nNumpy,\nJupyter/IPython,\nMatplotlib,\nScipy,\nSymPy,\nScikit-image,\nPandas\n\n### These are not required:\n\n[nbextensions](https://github.com/ipython-contrib/jupyter_contrib_nbextensions) - Jupyter Notebook extensions. This consists of line numbering,cell folding, various layouts, spell checking, etc.\n", "meta": {"hexsha": "e9a0e797fec79bfbcb1ddbd7f58face4b6b22ec0", "size": 768265, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Miscellaneous/Scientific_Comp_JupyterNotebook/JupyterNotebookExample.ipynb", "max_stars_repo_name": "robraddi/tu_chem", "max_stars_repo_head_hexsha": "18b8247d6c00e33f15f040a57a32b5fc2372137a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-29T04:26:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-29T04:26:42.000Z", "max_issues_repo_path": "Miscellaneous/Scientific_Comp_JupyterNotebook/JupyterNotebookExample.ipynb", "max_issues_repo_name": "robraddi/tu_chem", "max_issues_repo_head_hexsha": "18b8247d6c00e33f15f040a57a32b5fc2372137a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Miscellaneous/Scientific_Comp_JupyterNotebook/JupyterNotebookExample.ipynb", "max_forks_repo_name": "robraddi/tu_chem", "max_forks_repo_head_hexsha": "18b8247d6c00e33f15f040a57a32b5fc2372137a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-03T17:47:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-03T17:47:05.000Z", "avg_line_length": 438.7578526556, "max_line_length": 539316, "alphanum_fraction": 0.907422569, "converted": true, "num_tokens": 378195, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.39233681595684605, "lm_q2_score": 0.24798743735585305, "lm_q1q2_score": 0.09729460156949321}} {"text": "# Introduction to Neural Networks and Pytorch \n\n Notebook version: 0.1 (Nov 14, 2020)\n\n Authors: Jer\u00f3nimo Arenas Garc\u00eda (jarenas@ing.uc3m.es)\n\n Changes: v.0.1. (Nov 14, 2020) - First version\n \n Pending changes: - Use epochs instead of iters in first part of notebook\n - Add an example with dropout\n - Add theory about CNNs\n - Define functions for the training of neural nets and display of the results\n in order to simplify code cells\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nsize=18\nparams = {'legend.fontsize': 'Large',\n 'axes.labelsize': size,\n 'axes.titlesize': size,\n 'xtick.labelsize': size*0.75,\n 'ytick.labelsize': size*0.75}\nplt.rcParams.update(params)\n```\n\n## 1. Introduction and purpose of this Notebook \n\n### 1.1. 
About Neural Networks \n\n* Neural Networks (NN) have become the state of the art for many machine learning problems\n * Natural Language Processing\n * Computer Vision\n * Image Recognition\n\n\n* They are in widespread use for many applications, e.g.,\n * Language Translantion (Google Neural Machine Translation System) \n * Automatic Speech recognition (Hey Siri! DNN overview)\n * Autonomous Navigation (Facebook Robot Autonomous 3D Navigation)\n * Automatic Plate recognition\n \n\n \n\nFeed Forward Neural Networks have been around since 1960 but only recently (last 10-12 years) have they met their expectations, and improve other machine learning algorithms\n\n* Computation resources are now available at large scale\n* Cloud Computing (AWS, Azure)\n* From MultiLayer Perceptrons to Deep Learning\n* Big Data sets\n* This has also made possible an intense research effort resulting in\n * Topologies better suited to particular problems (CNNs, RNNs)\n * New training strategies providing better generalization\n\nIn parallel, Deep Learning Platforms have emerged that make design, implementation, training, and production of DNNs feasible for everyone\n\n### 1.2. Scope\n\n* To provide just an overview of most important NNs and DNNs concepts\n* Connecting with already studied methods as starting point\n* Introduction to PyTorch\n* Providing links to external sources for further study\n\n### 1.3. Outline\n\n1. Introduction and purpose of this Notebook\n2. Introduction to Neural Networks\n3. Implementing Deep Networks with PyTorch\n\n### 1.4. Other resources \n\n* We point here to external resources and tutorials that are excellent material for further study of the topic\n* Most of them include examples and exercises using numpy and PyTorch\n* This notebook uses examples and other material from some of these sources\n\n|Tutorial|Description|\n|-----|---------------------|\n| |Very general tutorial including videos and an overview of top deep learning platforms|\n| |Very complete book with a lot of theory and examples for MxNET, PyTorch, and TensorFlow|\n| |Official tutorials from the PyTorch project. Contains a 60 min overview, and a very practical *learning PyTorch with examples* tutorial|\n| |Kaggle tutorials covering an introduction to Neural Networks using Numpy, and a second one offering a PyTorch tutorial|\n\n\n\n\n\n\nIn addition to this, PyTorch MOOCs can be followed for free in main sites: edX, Coursera, Udacity\n\n## 2. Introduction to Neural Networks \n\nIn this section, we will implement neural networks from scratch using Numpy arrays\n\n* No need to learn any new Python libraries\n* But we need to deal with complexity of multilayer networks\n* Low-level implementation will be useful to grasp the most important concepts concerning DNNs\n * Back-propagation\n * Activation functions\n * Loss functions\n * Optimization methods\n * Generalization\n * Special layers and configurations\n\n### 2.0. Data preparation \n\nWe start by loading some data sets that will be used to carry out the exercises\n\n### Sign language digits data set\n\n* Dataset is taken from Kaggle and used in the above referred tutorial\n* 2062 digits in sign language. $64 \\times 64$ images\n* Problem with 10 classes. 
One hot encoding for the label matrix\n* Input data are images, we create also a flattened version\n\n\n```python\ndigitsX = np.load('./data/Sign-language-digits-dataset/X.npy')\ndigitsY = np.load('./data/Sign-language-digits-dataset/Y.npy')\nK = digitsX.shape[0]\nimg_size = digitsX.shape[1]\ndigitsX_flatten = digitsX.reshape(K,img_size*img_size)\n\nprint('Size of Input Data Matrix:', digitsX.shape)\nprint('Size of Flattned Input Data Matrix:', digitsX_flatten.shape)\nprint('Size of label Data Matrix:', digitsY.shape)\nselected = [260, 1400]\nplt.subplot(1, 2, 1), plt.imshow(digitsX[selected[0]].reshape(img_size, img_size)), plt.axis('off')\nplt.subplot(1, 2, 2), plt.imshow(digitsX[selected[1]].reshape(img_size, img_size)), plt.axis('off')\nplt.show()\nprint('Labels corresponding to figures:', digitsY[selected,])\n```\n\n### Dogs vs Cats data set\n\n* Dataset is taken from Kaggle\n* 25000 pictures of dogs and cats\n* Binary problem\n* Input data are images, we create also a flattened version\n* Original images are RGB, and arbitrary size\n* Preprocessed images are $64 \\times 64$ and gray scale\n\n\n```python\n# Preprocessing of original Dogs and Cats Pictures\n# Adapted from https://medium.com/@mrgarg.rajat/kaggle-dogs-vs-cats-challenge-complete-step-by-step-guide-part-1-a347194e55b1\n# RGB channels are collapsed in GRAYSCALE\n# Images are resampled to 64x64\n\n\"\"\"\nimport os, cv2 # cv2 -- OpenCV\n\ntrain_dir = './data/DogsCats/train/'\nrows = 64\ncols = 64\ntrain_images = sorted([train_dir+i for i in os.listdir(train_dir)])\n\ndef read_image(file_path):\n image = cv2.imread(file_path, cv2.IMREAD_GRAYSCALE)\n return cv2.resize(image, (rows, cols),interpolation=cv2.INTER_CUBIC)\n\ndef prep_data(images):\n m = len(images)\n X = np.ndarray((m, rows, cols), dtype=np.uint8)\n y = np.zeros((m,))\n print(\"X.shape is {}\".format(X.shape))\n \n for i,image_file in enumerate(images) :\n image = read_image(image_file)\n X[i,] = np.squeeze(image.reshape((rows, cols)))\n if 'dog' in image_file.split('/')[-1].lower():\n y[i] = 1\n elif 'cat' in image_file.split('/')[-1].lower():\n y[i] = 0\n \n if i%5000 == 0 :\n print(\"Proceed {} of {}\".format(i, m))\n \n return X,y\n\nX_train, y_train = prep_data(train_images)\nnp.save('./data/DogsCats/X.npy', X_train)\nnp.save('./data/DogsCats/Y.npy', y_train)\n\"\"\"\n```\n\n\n```python\nDogsCatsX = np.load('./data/DogsCats/X.npy')\nDogsCatsY = np.load('./data/DogsCats/Y.npy')\nK = DogsCatsX.shape[0]\nimg_size = DogsCatsX.shape[1]\nDogsCatsX_flatten = DogsCatsX.reshape(K,img_size*img_size)\n\nprint('Size of Input Data Matrix:', DogsCatsX.shape)\nprint('Size of Flattned Input Data Matrix:', DogsCatsX_flatten.shape)\nprint('Size of label Data Matrix:', DogsCatsY.shape)\nselected = [260, 16000]\nplt.subplot(1, 2, 1), plt.imshow(DogsCatsX[selected[0]].reshape(img_size, img_size)), plt.axis('off')\nplt.subplot(1, 2, 2), plt.imshow(DogsCatsX[selected[1]].reshape(img_size, img_size)), plt.axis('off')\nplt.show()\nprint('Labels corresponding to figures:', DogsCatsY[selected,])\n```\n\n### 2.1. Logistic Regression as a Simple Neural Network \n\n* We can consider logistic regression as an extremely simple (1 layer) neural network\n\n\n\n* In this context, $\\text{NLL}({\\bf w})$ is normally referred to as cross-entropy loss\n\n\n* We need to find parameters $\\bf w$ and $b$ to minimize the loss $\\rightarrow$ GD / SGD\n* Gradient computation can be simplified using the **chain rule**\n\n
\n\\begin{align}\n\\frac{\\partial \\text{NLL}}{\\partial {\\bf w}} & = \\frac{\\partial \\text{NLL}}{\\partial {\\hat y}} \\cdot \\frac{\\partial \\hat y}{\\partial o} \\cdot \\frac{\\partial o}{\\partial {\\bf w}} \\\\\n& = \\sum_{k=0}^{K-1} \\left[\\frac{1 - y_k}{1 - \\hat y_k} - \\frac{y_k}{\\hat y_k}\\right]\\hat y_k (1-\\hat y_k) {\\bf x}_k \\\\\n& = \\sum_{k=0}^{K-1} \\left[(1 - y_k) \\hat y_k - y_k (1 - \\hat y_k) \\right] {\\bf x}_k \\\\\n\\frac{\\partial \\text{NLL}}{\\partial b} & = \\sum_{k=0}^{K-1} \\left[(1 - y_k) \\hat y_k - y_k (1 - \\hat y_k) \\right]\n\\end{align}\n\n* Gradient Descent Optimization\n\n
\n$${\\bf w}_{n+1} = {\\bf w}_n + \\rho_n \\sum_{k=0}^{K-1} \\left[y_k (1 - \\hat y_k) - (1 - y_k) \\hat y_k \\right] {\\bf x}_k = {\\bf w}_n + \\rho_n \\sum_{k=0}^{K-1} (y_k - \\hat y_k){\\bf x}_k$$\n$$b_{n+1} = b_n + \\rho_n \\sum_{k=0}^{K-1} \\left[y_k (1 - \\hat y_k) - (1 - y_k) \\hat y_k \\right] = b_n + \\rho_n \\sum_{k=0}^{K-1} (y_k - \\hat y_k)$$\n\n\n```python\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\n\n#dataset = 'DogsCats'\ndataset = 'digits'\n\nif dataset=='DogsCats':\n X = DogsCatsX_flatten\n y = DogsCatsY\n \nelse:\n #Zero and Ones are one hot encoded in columns 1 and 4\n X0 = digitsX_flatten[np.argmax(digitsY, axis=1)==1,]\n X1 = digitsX_flatten[np.argmax(digitsY, axis=1)==4,]\n X = np.vstack((X0, X1))\n y = np.zeros(X.shape[0])\n y[X0.shape[0]:] = 1\n \n#Joint normalization of all data. For images [-.5, .5] scaling is frequent\nmin_max_scaler = MinMaxScaler(feature_range=(-.5, .5))\nX = min_max_scaler.fit_transform(X)\n\n#Generate train and validation data, shuffle\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42, shuffle=True)\n```\n\n\n```python\n# Define some useful functions\ndef logistic(t):\n return 1.0 / (1 + np.exp(-t))\n\ndef forward(w,b,x):\n #Calcula la salida de la red\n return logistic(x.dot(w)+b)\n\ndef backward(y,y_hat,x):\n #Calcula los gradientes\n #w_grad = x.T.dot((1-y)*y_hat - y*(1-y_hat))/len(y)\n #b_grad = np.sum((1-y)*y_hat - y*(1-y_hat))/len(y)\n w_grad = x.T.dot(y_hat-y)/len(y)\n b_grad = np.sum(y_hat-y)/len(y)\n return w_grad, b_grad\n \ndef accuracy(y, y_hat):\n return np.mean(y == (y_hat>=0.5))\n\ndef loss(y, y_hat):\n return -np.sum(y*np.log(y_hat)+(1-y)*np.log(1-y_hat))/len(y)\n```\n\n\n```python\n#Neural Network Training\n\nepochs = 50\nrho = .05 #Use this setting for Sign Digits Dataset\n\n#Parameter initialization\nw = .1 * np.random.randn(X.shape[1])\nb = .1 * np.random.randn(1)\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in np.arange(epochs):\n y_hat_train = forward(w, b, X_train)\n y_hat_val = forward(w, b, X_val)\n w_grad, b_grad = backward(y_train, y_hat_train, X_train)\n w = w - rho * w_grad\n b = b - rho * b_grad\n \n loss_train[epoch] = loss(y_train, y_hat_train)\n loss_val[epoch] = loss(y_val, y_hat_val)\n acc_train[epoch] = accuracy(y_train, y_hat_train)\n acc_val[epoch] = accuracy(y_val, y_hat_val)\n```\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### Exercise\n\n* Study the behavior of the algorithm changing the number of epochs and the learning rate\n\n* Repeat the analysis for the other dataset, trying to obtain as large an accuracy value as possible\n\n* What do you believe are the reasons for the very different performance for both datasets?\n\nLinear logistic regression allowed us to review a few concepts that are key for Neural Networks:\n\n* Network topology (In this case, a linear network with one layer)\n* Activation functions\n* Parametric approach ($\\bf w$/$b$)\n* Parameter initialization\n* Obtaining the network prediction using *forward* computation\n* Loss function\n* Parameter gradient calculus using *backward* computation\n* Optimization method for 
parameter updates (here, GD)\n\n### 2.2. (Multiclass) SoftMax Regression \n\n* One hot encoding of the output, e.g., $[0, 1, 0, 0]$, $[0, 0, 0, 1]$\n* Used to encode categorical variables without predefined order\n* Similar to logistic regression, the network tries to predict the class probability\n$$\hat y_{k,j} = \hat P(y_k=j|{\bf x}_k)$$\n* Network output should satisfy \"probability constraints\"\n$$\hat y_{k,j} \in [0,1]\qquad \text{and} \qquad \sum_j \hat y_{k,j} = 1$$\n\n* Softmax regression network topology:\n\n\n### Notation\n\nIn this section, it is important to pay attention to subindexes:\n\n|Notation/ Variable Name|Definition|\n|-----------------------|---------------------------------|\n|$y_k \in [0,\dots,M-1]$|The label of pattern $k$|\n|${\bf y}_k$|One hot encoding of the label of pattern $k$|\n|$y_{k,m}$|$m$-th component of vector ${\bf y}_k$|\n|$y_{m}$|$m$-th component of generic vector ${\bf y}$ (i.e., for an undefined pattern)|\n|$\hat {\bf y}_k$|Network output for pattern $k$|\n|$\hat y_{k,m}$|$m$-th network output for pattern $k$|\n|$\hat y_{m}$|$m$-th network output for an undefined pattern|\n|$k$|Index used for pattern enumeration|\n|$m$|Index used for network output enumeration|\n|$j$|Secondary index for selected network output|\n\n\n\n\n\n### The softmax function\n\n* It plays the same role for multiclass problems as the logistic function does for binary classification\n* Invented in 1959 by the social scientist R. Duncan Luce\n* Transforms a set of $M$ real numbers to satisfy \"probability\" constraints\n
\n$${\\bf \\hat y} = \\text{softmax}({\\bf o}) \\qquad \\text{where} \\qquad \\hat y_j = \\frac{\\exp(o_j)}{\\sum_m \\exp(o_m)}\u00a0$$\n\n* Continuous and **differentiable** function\n\n
\n$$\\frac{\\partial \\hat y_j}{\\partial o_j} = \\hat y_j (1 - \\hat y_j) \\qquad \\text{and} \\qquad \\frac{\\partial \\hat y_j}{\\partial o_m} = - \\hat y_j \\hat y_m$$\n\n\n\n* The classifier is still linear, since\n\n
\n$$\\arg\\max \\hat {\\bf y} = \\arg\\max \\hat {\\bf o} = \\arg\\max {\\bf W} {\\bf x} + {\\bf b}$$\n\n### Cross-entropy loss for multiclass problems\n\n* Similarly to logistic regression, minimization of the log-likelihood can be stated to obtain ${\\bf W}$ and ${\\bf b}$\n\n
\n$$\\text{Binary}: \\text{NLL}({\\bf w}, b) = - \\sum_{k=0}^{K-1} \\log \\hat P(y_k|{\\bf x}_k)$$\n$$\\text{Multiclass}: \\text{NLL}({\\bf W}, {\\bf b}) = - \\sum_{k=0}^{K-1} \\log \\hat P(y_k|{\\bf x}_k)$$\n\n* Using one hot encoding for the label vector of each sample, e.g., $y_k = 2 \\rightarrow {\\bf y}_k = [0, 0, 1, 0]$\n\n$$\\text{NLL}({\\bf W}, {\\bf b}) = - \\sum_{k=0}^{K-1} \\sum_{m=0}^{M-1} y_{k,m} \\log \\hat P(m|{\\bf x}_k)= - \\sum_{k=0}^{K-1} \\sum_{m=0}^{M-1} y_{k,m} \\log \\hat y_{k,m} = \\sum_{k=0}^{K-1} l({\\bf y}_k, \\hat {\\bf y}_k)$$\n\n* Note that for each pattern, only one element in the inner sum (the one indexed with $m$) is non-zero\n\n* In the context of Neural Networks, this cost is referred to as the cross-entropy loss\n\n
\n$$l({\\bf y}, \\hat {\\bf y}) = - \\sum_{m=0}^{M-1} y_{m} \\log \\hat y_{m}$$\n\n### Network optimization\n\n* Gradient Descent Optimization\n\n
\n$${\\bf W}_{n+1} = {\\bf W}_n - \\rho_n \\sum_{k=0}^{K-1} \\frac{\\partial l({\\bf y}_k,{\\hat {\\bf y}_k})}{\\partial {\\bf W}}$$\n$${\\bf b}_{n+1} = {\\bf b}_n - \\rho_n \\sum_{k=0}^{K-1} \\frac{\\partial l({\\bf y}_k,{\\hat {\\bf y}_k})}{\\partial {\\bf b}}$$\n\n* We compute derivatives using the chain rule (we ignore dimension mismatchs, and rearrange at the end)\n\n
\n\\begin{align}\n\\frac{\\partial l({\\bf y},{\\hat {\\bf y}})}{\\partial {\\bf W}} &= \\frac{\\partial l({\\bf y},{\\hat {\\bf y}})}{\\partial \\hat {\\bf y}} \\cdot \\frac{\\partial \\hat {\\bf y}}{\\partial {\\bf o}} \\cdot \\frac{\\partial {\\bf o}}{\\partial {\\bf W}} \\\\ & = \\left[\\begin{array}{c}\u00a00 \\\\ 0 \\\\ \\vdots \\\\ - 1/\\hat y_j \\\\ \\vdots \\end{array}\\right] \\left[\u00a0\\begin{array}{ccccc} \\hat y_1 (1 - \\hat y_1) & -\\hat y_1 \\hat y_2 & \\dots & -\\hat y_1 \\hat y_j & \\dots \\\\ -\\hat y_2 \\hat y_1 & \\hat y_2 (1 - \\hat y_2) & \\dots & -\\hat y_2 \\hat y_j & \\dots \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\ - \\hat y_j \\hat y_1 & -\\hat y_j \\hat y_2 & \\dots & \\hat y_j (1-\\hat y_j) & \\dots \\\\ \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\end{array}\\right] {\\bf x}^\\top \\\\\n& = \\left[\\begin{array}{c}\\hat y_1 \\\\ \\hat y_2 \\\\ \\vdots \\\\ \\hat y_j - 1 \\\\ \\vdots \\end{array}\u00a0\\right] {\\bf x}^\\top \\\\\n& = (\\hat {\\bf y} - {\\bf y}){\\bf x}^\\top \\\\\n\\\\\n\\frac{\\partial l({\\bf y},{\\hat {\\bf y}})}{\\partial {\\bf b}} & = (\\hat {\\bf y} - {\\bf y}){\\bf 1}^\\top\n\\end{align}\n\n\n```python\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\n\ndataset = 'digits'\n\n#Joint normalization of all data. For images [-.5, .5] scaling is frequent\nmin_max_scaler = MinMaxScaler(feature_range=(-.5, .5))\nX = min_max_scaler.fit_transform(digitsX_flatten)\n\n#Generate train and validation data, shuffle\nX_train, X_val, y_train, y_val = train_test_split(X, digitsY, test_size=0.2, random_state=42, shuffle=True)\n```\n\n\n```python\n# Define some useful functions\ndef softmax(t):\n \"\"\"Compute softmax values for each sets of scores in t.\"\"\"\n e_t = np.exp(t)\n return e_t / e_t.sum(axis=1)[:,np.newaxis]\n\ndef forward(w,b,x):\n #Calcula la salida de la red\n return softmax(x.dot(w.T)+b.T)\n\ndef backward(y,y_hat,x):\n #Calcula los gradientes\n W_grad = (y_hat-y).T.dot(x)/len(y)\n b_grad = ((y_hat-y).sum(axis=0)[:,np.newaxis])/len(y)\n return W_grad, b_grad\n \ndef accuracy(y, y_hat):\n return np.mean(np.argmax(y, axis=1) == np.argmax(y_hat, axis=1))\n\ndef loss(y, y_hat):\n return -np.sum(y * np.log(y_hat))/len(y)\n```\n\n\n```python\n#Neural Network Training\n\nepochs = 300\nrho = .1\n\n#Parameter initialization\nW = .1 * np.random.randn(y_train.shape[1], X_train.shape[1])\nb = .1 * np.random.randn(y_train.shape[1],1)\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in np.arange(epochs):\n y_hat_train = forward(W, b, X_train)\n y_hat_val = forward(W, b, X_val)\n W_grad, b_grad = backward(y_train, y_hat_train, X_train)\n W = W - rho * W_grad\n b = b - rho * b_grad\n \n loss_train[epoch] = loss(y_train, y_hat_train)\n loss_val[epoch] = loss(y_val, y_hat_val)\n acc_train[epoch] = accuracy(y_train, y_hat_train)\n acc_val[epoch] = accuracy(y_val, y_hat_val)\n```\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### Exercise\n\n* Study the behavior of the algorithm changing the number of iterations and the learning rate\n\n* Obtain the confusion matrix, and study which classes are more difficult to classify\n\n* 
Think about the differences between using this 10-class network, vs training 10 binary classifiers, one for each class\n\nAs in linear logistic regression note that we covered the following aspects of neural network design, implementation, and training:\n\n* Network topology (In this case, a linear network with one layer and $M$ ouptuts)\n* Activation functions (softmax activation)\n* Parameter initialization ($\\bf W$/$b$)\n* Obtaining the network prediction using *forward* computation\n* Loss function\n* Parameter gradient calculus using *backward* computation\n* Optimization method for parameters update (here, GD)\n\n### 2.3. Multi Layer Networks (Deep Networks) \n\nPrevious networks are constrained in the sense that they can only implement linear classifiers. In this section we analyze how we can extend them to implement non-linear classification:\n* Fixed non-linear transformations of inputs: ${\\bf z} = {\\bf{f}}({\\bf x})$\n\n* Parametrize the transformation using additional non-linear layers\n\n\n* When counting layers, we normally ignore the input layer, since there is no computation involved\n* Intermediate layers are normally referred to as \"hidden\" layers\n* Non-linear activations result in an overall non-linear classifier\n* We can still use Gradient Descent Optimization as long as the network loss derivatives with respect to all parameters exist and are continuous\n* This is already deep learning. We can have two layers or more, each with different numbers of neurons. But as long as derivatives with respect to parameters can be calculated, the network can be optimized\n* Finding an appropriate number of layers for a particular problem, as well as the number of neurons per layer, requires exploration\n* The more data we have for training the network, the more parameters we can afford, making feasible the use of more complex topologies\n\n### Example: 2-layer network for binary classification\n\n* Network topology\n * Hidden layer with $n_h$ neurons\n * Hyperbolic tangent activation function for the hidden layer\n $${\\bf h} = \\text{tanh}({\\bf o}^{(1)})= \\text{tanh}\\left({\\bf W}^{(1)} {\\bf x} + {\\bf b}^{(1)}\\right)$$\n * Output layer is linear with logistic activation (as in logistic regression)\n $$\\hat y = \\text{logistic}(o) = \\text{logistic}\\left({{\\bf w}^{(2)}}^\\top {\\bf h} + b^{(2)}\\right)$$\n \n* Cross-entropy loss\n\n$$l(y,\\hat y) = -\\left[ y \\log(\\hat y) + (1 - y ) \\log(1 - \\hat y) \\right], \\qquad \\text{with } y\\in [0,1]$$\n\n* Update of output layer weights as in logistic regression (use ${\\bf h}$ instead of ${\\bf x}$)\n\n$${\\bf w}_{n+1}^{(2)} = {\\bf w}_n^{(2)} + \\rho_n \\sum_{k=0}^{K-1} (y_k - \\hat y_k){\\bf h}_k$$\n$$b_{n+1}^{(2)} = b_n^{(2)} + \\rho_n \\sum_{k=0}^{K-1} (y_k - \\hat y_k)$$\n\n\n\n* For updating the input layer parameters we need to use the chain rule (we ignore dimensions and rearrange at the end)\n\n\\begin{align}\\frac{\\partial l(y, \\hat y)}{\\partial {\\bf W}^{(1)}} & = \\frac{\\partial l(y, \\hat y)}{\\partial o} \\cdot \\frac{\\partial o}{\\partial {\\bf h}} \\cdot \\frac{\\partial {\\bf h}}{\\partial {\\bf o}^{(1)}} \\cdot \\frac{\\partial {\\bf o}^{(1)}}{\\partial {\\bf W}^{(1)}} \\\\\n& = (\\hat y - y) [{\\bf w}^{(2)} .\\ast ({\\bf 1}-{\\bf h})^2] {\\bf x}^{\\top}\n\\end{align}\n\n\\begin{align}\\frac{\\partial l(y, \\hat y)}{\\partial {\\bf b}^{(1)}} & = \\frac{\\partial l(y, \\hat y)}{\\partial o} \\cdot \\frac{\\partial o}{\\partial {\\bf h}} \\cdot \\frac{\\partial {\\bf h}}{\\partial {\\bf o}^{(1)}} 
\\cdot \\frac{\\partial {\\bf o}^{(1)}}{\\partial {\\bf b}^{(1)}} \\\\\n& = (\\hat y - y) [{\\bf w}^{(2)} .\\ast ({\\bf 1}-{\\bf h})^2]\n\\end{align}\n\n* GD update rules become\n$${\\bf W}_{n+1}^{(1)} = {\\bf W}_n^{(1)} + \\rho_n \\sum_{k=0}^{K-1} (y_k - \\hat y_k)[{\\bf w}^{(2)} .\\ast ({\\bf 1}-{\\bf h}_k)^2] {\\bf x}_k^{\\top}$$\n$${\\bf b}_{n+1}^{(1)} = {\\bf b}_n^{(1)} + \\rho_n \\sum_{k=0}^{K-1} (y_k - \\hat y_k)[{\\bf w}^{(2)} .\\ast ({\\bf 1}-{\\bf h}_k)^2]$$\n\n\n\n\n* The process can be implemented as long as the derivatives of the network overall loss with respect to parameters can be computed\n\n* Forward computation graphs represent how the network output can be computed\n\n* We can then reverse the graph to compute derivatives with respect to parameters\n\n* Deep Learning libraries implement automatic gradient camputation\n * We just define network topology\n * Computation of gradients is carried out automatically\n\n\n```python\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\n\n#dataset = 'DogsCats'\ndataset = 'digits'\n\nif dataset=='DogsCats':\n X = DogsCatsX_flatten\n y = DogsCatsY\n \nelse:\n #Zero and Ones are one hot encoded in columns 1 and 4\n X0 = digitsX_flatten[np.argmax(digitsY, axis=1)==1,]\n X1 = digitsX_flatten[np.argmax(digitsY, axis=1)==4,]\n X = np.vstack((X0, X1))\n y = np.zeros(X.shape[0])\n y[X0.shape[0]:] = 1\n \n#Joint normalization of all data. For images [-.5, .5] scaling is frequent\nmin_max_scaler = MinMaxScaler(feature_range=(-.5, .5))\nX = min_max_scaler.fit_transform(X)\n\n#Generate train and validation data, shuffle\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42, shuffle=True)\n```\n\n\n```python\n# Define some useful functions\ndef logistic(t):\n return 1.0 / (1 + np.exp(-t))\n\ndef forward(W1,b1,w2,b2,x):\n #Calcula la salida de la red\n h = x.dot(W1.T)+b1\n y_hat = logistic(h.dot(w2)+b2)\n #Provide also hidden units value for backward gradient step\n return h, y_hat\n\ndef backward(y,y_hat,h,x,w2):\n #Calcula los gradientes\n w2_grad = h.T.dot(y_hat-y)/len(y)\n b2_grad = np.sum(y_hat-y)/len(y)\n W1_grad = ((w2[np.newaxis,]*((1-h)**2)*(y_hat - y)[:,np.newaxis]).T.dot(x))/len(y)\n b1_grad = ((w2[np.newaxis,]*((1-h)**2)*(y_hat - y)[:,np.newaxis]).sum(axis=0))/len(y)\n return w2_grad, b2_grad, W1_grad, b1_grad\n \ndef accuracy(y, y_hat):\n return np.mean(y == (y_hat>=0.5))\n\ndef loss(y, y_hat):\n return -np.sum(y*np.log(y_hat)+(1-y)*np.log(1-y_hat))/len(y)\n```\n\n\n```python\n#Neural Network Training\nepochs = 1000\nrho = .05\n\n#Parameter initialization\nn_h = 5\nW1 = .01 * np.random.randn(n_h, X_train.shape[1])\nb1 = .01 * np.random.randn(n_h)\nw2 = .01 * np.random.randn(n_h)\nb2 = .01 * np.random.randn(1)\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in np.arange(epochs):\n h, y_hat_train = forward(W1, b1, w2, b2, X_train)\n dum, y_hat_val = forward(W1, b1, w2, b2, X_val)\n w2_grad, b2_grad, W1_grad, b1_grad = backward(y_train, y_hat_train, h, X_train, w2)\n W1 = W1 - rho/10 * W1_grad\n b1 = b1 - rho/10 * b1_grad\n w2 = w2 - rho * w2_grad\n b2 = b2 - rho * b2_grad\n \n loss_train[epoch] = loss(y_train, y_hat_train)\n loss_val[epoch] = loss(y_val, y_hat_val)\n acc_train[epoch] = accuracy(y_train, y_hat_train)\n acc_val[epoch] = accuracy(y_val, y_hat_val)\n \n if not ((epoch+1)%(epochs/5)):\n print('N\u00famero de iteraciones:', epoch+1)\n```\n\n### Results in 
Dogs vs Cats dataset ($epochs = 1000$ and $\\rho = 0.05$)\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### Results in Binary Sign Digits Dataset ($epochs = 10000$ and $\\rho = 0.001$)\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### Exercises\n\n* Train the network using other settings for:\n * The number of iterations\n * The learning step\n * The number of neurons in the hidden layer\n \n* You may find divergence issues for some settings\n * Related to the use of the hyperbolic tangent function in the hidden layer (numerical issues)\n * This is also why learning step was selected smaller for the hidden layer\n * Optimized libraries rely on certain modifications to obtain more robust implementations\n \n* Try to solve both problems using scikit-learn implementation\n * You can also explore other activation functions\n * You can also explore other solvers to speed up convergence\n * You can also adjust the size of minibatches\n * Take a look at the *early_stopping* parameter\n\n### 2.4. Multi Layer Networks for Regression \n\n* Deep Learning networks can be used to solve regression problems with the following common adjustments\n\n * Linear activation for the output unit\n \n * Square loss: \n $$l(y, \\hat y) = (y - \\hat y)^2, \\qquad \\text{where} \\qquad y, \\hat y \\in \\Re$$\n\n### 2.5. Activation Functions\n\nYou can refer to the Dive into Deep Learning book for a more detailed discussion on common actiation functions for the hidden units. \n\nWe extract some information about the very important **ReLU** function\n\n> *The most popular choice, due to both simplicity of implementation and its good performance on a variety of predictive tasks, is the rectified linear unit (ReLU). ReLU provides a very simple nonlinear transformation. Given an element $x$, the function is defined as the maximum of that element and 0.*\n\n> *When the input is negative, the derivative of the ReLU function is 0, and when the input is positive, the derivative of the ReLU function is 1. When the input takes value precisely equal to 0, we say that the derivative is 0 when the input is 0.*\n\n> *The reason for using ReLU is that its derivatives are particularly well behaved: either they vanish or they just let the argument through. This makes optimization better behaved and it mitigated the well-documented problem of vanishing gradients that plagued previous versions of neural networks.*\n\n\n```python\nx_array = np.linspace(-6,6,100)\ny_array = np.clip(x_array, 0, a_max=None)\nplt.plot(x_array, y_array)\nplt.title('ReLU activation function')\nplt.show()\n```\n\n## 3. Implementing Deep Networks with PyTorch \n\n* Pytorch is a Python library that provides different levels of abstraction for implementing deep neural networks\n\n* The main features of PyTorch are:\n\n * Definition of numpy-like n-dimensional *tensors*. 
They can be stored in / moved to GPU for parallel execution of operations\n * Automatic calculation of gradients, making *backward gradient calculation* transparent to the user\n * Definition of common loss functions, NN layers of different types, optimization methods, data loaders, etc, simplifying NN implementation and training\n * Provides different levels of abstraction, thus a good balance between flexibility and simplicity\n \n* This notebook provides just a basic review of the main concepts necessary to train NNs with PyTorch taking materials from:\n * Learning PyTorch with Examples, by Justin Johnson\n * What is *torch.nn* really?, by Jeremy Howard\n * Pytorch Tutorial for Deep Learning Lovers, by Kaggle user kanncaa1\n\n### 3.0. Installation and PyTorch introduction\n\n* PyTorch can be installed with or without GPU support\n * If you have an Anaconda installation, you can install from the command line, using the instructions of the project website\n \n* PyTorch is also preinstalled in Google Collab with free GPU access\n * Follow RunTime -> Change runtime type, and select GPU for HW acceleration\n \n* Please, refer to Pytorch getting started tutorial for a quick introduction regarding tensor definition, GPU vs CPU storage of tensors, operations, and bridge to Numpy\n\n### 3.1. Torch tensors (very) general overview\n\n* We can create tensors with different construction methods provided by the library, either to create new tensors from scratch or from a Numpy array\n\n\n```python\nimport torch\n\nx = torch.rand((100,200))\ndigitsX_flatten_tensor = torch.from_numpy(digitsX_flatten)\n\nprint(x.type())\nprint(digitsX_flatten_tensor.size())\n```\n\n torch.FloatTensor\n torch.Size([2062, 4096])\n\n\n* Tensors can be converted back to numpy arrays\n\n* Note that in this case, a tensor and its corresponding numpy array **will share memory**\n\n* Operations and slicing use a syntax similar to numpy\n\n\n```python\nprint('Size of tensor x:', x.size())\nprint('Tranpose of vector has size', x.t().size()) #Transpose and compute size\nprint('Extracting upper left matrix of size 3 x 3:', x[:3,:3])\nprint(x.mm(x.t()).size()) #mm for matrix multiplications\nxpx = x.add(x)\nxpx2 = torch.add(x,x)\nprint((xpx!=xpx2).sum()) #Since all are equal, count of different terms is zero\n```\n\n Size of tensor x: torch.Size([100, 200])\n Tranpose of vector has size torch.Size([200, 100])\n Extracting upper left matrix of size 3 x 3: tensor([[0.6252, 0.0318, 0.4039],\n [0.0704, 0.7540, 0.4963],\n [0.1201, 0.1661, 0.0053]])\n torch.Size([100, 100])\n tensor(0)\n\n\n* Adding underscore performs operations \"*in place*\", e.g., ```x.add_(y)```\n\n* If a GPU is available, tensors can be moved to and from the GPU device\n\n* Operations on tensors stored in a GPU will be carried out using GPU resources and will typically be highly parallelized\n\n\n```python\nif torch.cuda.is_available():\n device = torch.device('cuda')\n x = x.to(device)\n y = x.add(x)\n y = y.to('cpu')\nelse:\n print('No GPU card is available')\n```\n\n No GPU card is available\n\n\n### 3.2. Automatic gradient calculation \n\n* PyTorch tensors have a property ```requires_grad```. 
When true, PyTorch automatic gradient calculation will be activated for that variable\n\n* In order to compute these derivatives numerically, PyTorch keeps track of all operations carried out on these variables, organizing them in a forward computation graph.\n\n* When executing the ```backward()``` method, derivatives will be calculated\n\n* However, this should only be activated when necessary, to save computation\n\n\n```python\nx.requires_grad = True\ny = (3 * torch.log(x)).sum()\ny.backward()\nprint(x.grad[:2,:2])\nprint(3/x[:2,:2])\n\nx.requires_grad = False\nx.grad.zero_()\nprint('Automatic gradient calculation is deactivated, and gradients set to zero')\n```\n\n tensor([[ 4.7981, 94.4270],\n [42.5862, 3.9788]])\n tensor([[ 4.7981, 94.4270],\n [42.5862, 3.9788]], grad_fn=)\n Automatic gradient calculation is deactivated, and gradients set to zero\n\n\nExercise\n\n* Initialize a tensor ```x``` with the upper right $5 \\times 10$ submatrix of flattened digits\n* Compute output vector ```y``` applying a function of your choice to ```x```\n* Compute scalar value ```z``` as the sum of all elements in ```y``` squared\n* Check that ```x.grad``` calculation is correct using the ```backward``` method\n* Try to run your cell multiple times to see if the calculation is still correct. If not, implement the necessary mnodifications so that you can run the cell multiple times, but the gradient does not change from run to run\n\n**Note:** The backward method can only be run on scalar variables\n\n### 3.2. Feed Forward Network using PyTorch \n\n* In this section we will change our code for a neural network to use tensors instead of numpy arrays. We will work with the sign digits datasets.\n\n* We will introduce all concepts using a single layer perceptron (softmax regression), and then implement networks with additional hidden layers\n\n\n### 3.2.1. Using Automatic differentiation \n\n* We start by loading the data, and converting to tensors.\n\n* As a first step, we refactor our code to use tensor operations\n\n* We do not need to pay too much attention to particular details regarding tensor operations, since these will not be necessary when moving to higher PyTorch abstraction levels\n\n* We do not need to implement gradient calculation. PyTorch will take care of that\n\n\n```python\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\n\ndataset = 'digits'\n\n#Joint normalization of all data. 
For images [-.5, .5] scaling is frequent\nmin_max_scaler = MinMaxScaler(feature_range=(-.5, .5))\nX = min_max_scaler.fit_transform(digitsX_flatten)\n\n#Generate train and validation data, shuffle\nX_train, X_val, y_train, y_val = train_test_split(X, digitsY, test_size=0.2, random_state=42, shuffle=True)\n\n#Convert to Torch tensors\nX_train_torch = torch.from_numpy(X_train)\nX_val_torch = torch.from_numpy(X_val)\ny_train_torch = torch.from_numpy(y_train)\ny_val_torch = torch.from_numpy(y_val)\n```\n\n\n```python\n# Define some useful functions\ndef softmax(t):\n \"\"\"Compute softmax values for each sets of scores in t\"\"\"\n return t.exp() / t.exp().sum(-1).unsqueeze(-1)\n\ndef model(w,b,x):\n #Calcula la salida de la red\n return softmax(x.mm(w) + b)\n \ndef accuracy(y, y_hat):\n return (y.argmax(axis=-1) == y_hat.argmax(axis=-1)).float().mean()\n\ndef nll(y, y_hat):\n return -(y * y_hat.log()).mean()\n```\n\n* Syntaxis is a bit different because input variables are tensors, not arrays\n\n* This time we did not need to implement the backward function\n\n\n```python\n#Parameter initialization\nW = .1 * torch.randn(X_train_torch.size()[1], y_train_torch.size()[1])\nW.requires_grad_()\nb = torch.zeros(y_train_torch.size()[1], requires_grad=True)\n\nepochs = 500\nrho = .5\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n```\n\n\n```python\n# Network training\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('Current epoch:', epoch+1)\n \n #Compute network output and cross-entropy loss\n pred = model(W,b,X_train_torch)\n loss = nll(y_train_torch, pred)\n \n #Compute gradients\n loss.backward()\n \n #Deactivate gradient automatic updates\n with torch.no_grad():\n #Computing network performance after iteration\n loss_train[epoch] = loss.item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = model(W, b, X_val_torch)\n loss_val[epoch] = nll(y_val_torch, pred_val).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n\n #Weight update\n W -= rho * W.grad\n b -= rho * b.grad\n #Reset gradients\n W.grad.zero_()\n b.grad.zero_()\n```\n\nIt is important to deactivate gradient updates after the network has been evaluated on training data, and gradients of the loss function have been computed\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### 3.2.2. Using torch *nn* module \n\n* PyTorch *nn* module provides many attributes and methods that make the implementation and training of Neural Networks simpler\n\n* ```nn.Module``` and ```nn.Parameter``` allow to implement a more concise training loop\n\n* ```nn.Module``` is a PyTorch class that will be used to encapsulate and design a specific neural network, thus, it is central to the implementation of deep neural nets using PyTorch\n\n* ```nn.Parameter``` allow the definition of trainable network parameters. 
In this way, we will simplify the implementation of the training loop.\n\n* All parameters defined with ```nn.Parameter``` will have ```requires_grad = True```\n\n\n```python\nfrom torch import nn\n\nclass my_multiclass_net(nn.Module):\n def __init__(self, nin, nout):\n \"\"\"This method initializes the network parameters\n Parameters nin and nout stand for the number of input parameters (features in X)\n and output parameters (number of classes)\"\"\"\n super().__init__()\n self.W = nn.Parameter(.1 * torch.randn(nin, nout))\n self.b = nn.Parameter(torch.zeros(nout))\n \n def forward(self, x):\n return softmax(x.mm(self.W) + self.b)\n \n def softmax(t):\n \"\"\"Compute softmax values for each sets of scores in t\"\"\"\n return t.exp() / t.exp().sum(-1).unsqueeze(-1)\n```\n\n\n```python\nmy_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])\n\nepochs = 500\nrho = .5\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('Current epoch:', epoch+1)\n \n #Compute network output and cross-entropy loss\n pred = my_net(X_train_torch)\n loss = nll(y_train_torch, pred)\n \n #Compute gradients\n loss.backward()\n \n #Deactivate gradient automatic updates\n with torch.no_grad():\n #Computing network performance after iteration\n loss_train[epoch] = loss.item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = my_net(X_val_torch)\n loss_val[epoch] = nll(y_val_torch, pred_val).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n\n #Weight update\n for p in my_net.parameters():\n p -= p.grad * rho\n #Reset gradients\n my_net.zero_grad()\n```\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n* ```nn.Module``` comes with several kinds of pre-defined layers, thus making it even simpler to implement neural networks\n\n* We can also import the Cross Entropy Loss from ```nn.Module```. 
When doing so:\n - We do not have to compute the softmax, since the ```nn.CrossEntropyLoss``` already does so\n - ```nn.CrossEntropyLoss``` receives two input arguments, the first is the output of the network, and the second is the true label as a 1-D tensor (i.e., an array of integers, one-hot encoding should not be used)\n\n\n```python\nfrom torch import nn\n\nclass my_multiclass_net(nn.Module):\n def __init__(self, nin, nout):\n \"\"\"Note that now, we do not even need to initialize network parameters ourselves\"\"\"\n super().__init__()\n self.lin = nn.Linear(nin, nout)\n \n def forward(self, x):\n return self.lin(x)\n \nloss_func = nn.CrossEntropyLoss()\n```\n\n\n```python\nmy_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])\n\nepochs = 500\nrho = .1\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('Current epoch:', epoch+1)\n \n #Compute network output and cross-entropy loss\n pred = my_net(X_train_torch)\n loss = loss_func(pred, y_train_torch.argmax(axis=-1))\n \n #Compute gradients\n loss.backward()\n \n #Deactivate gradient automatic updates\n with torch.no_grad():\n #Computing network performance after iteration\n loss_train[epoch] = loss.item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = my_net(X_val_torch)\n loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n\n #Weight update\n for p in my_net.parameters():\n p -= p.grad * rho\n #Reset gradients\n my_net.zero_grad()\n```\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\nNote faster convergence is observed in this case. It is actually due to a more convenient initialization of the hidden layer\n\n### 3.2.3. 
Network Optimization \n\n* We cover in this subsection two different aspects about network training using PyTorch:\n\n + Using ```torch.optim``` allows an easier and more interpretable encoding of neural network training, and opens the door to more sophisticated training algorithms\n \n + Using minibatches can speed up network convergence\n\n \n* ```torch.optim``` provides two convenient methods for neural network training:\n - ```opt.step()``` updates all network parameters using current gradients\n - ```opt.zero_grad()``` resets all network parameters\n\n\n```python\nfrom torch import optim\n\nmy_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])\nopt = optim.SGD(my_net.parameters(), lr=0.1)\n\nepochs = 500\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('Current epoch:', epoch+1)\n \n #Compute network output and cross-entropy loss\n pred = my_net(X_train_torch)\n loss = loss_func(pred, y_train_torch.argmax(axis=-1))\n \n #Compute gradients\n loss.backward()\n \n #Deactivate gradient automatic updates\n with torch.no_grad():\n #Computing network performance after iteration\n loss_train[epoch] = loss.item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = my_net(X_val_torch)\n loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n\n opt.step()\n opt.zero_grad()\n```\n\nNote network optimization is carried out outside ```torch.no_grad()``` but network evaluation (other than forward output calculation for the training patterns) still need to deactivate gradient updates\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### Exercise \n\nImplement network training with other optimization methods. You can refer to the official documentation and select a couple of methods. 
You can also try to implement adaptive learning rates using ```torch.optim.lr_scheduler```\n\n \n* Each epoch of the previous implementation of network training was actually implementing Gradient Descent\n\n* In SGD only a *minibatch* of training patterns are used at every iteration\n\n* In each epoch we iterate over all training patterns sequentially selecting non-overlapping *minibatches*\n\n* Overall, convergence is usually faster than when using Gradient Descent\n\n* Torch provides methods that simplify the implementation of this strategy\n\n\n```python\nfrom torch.utils.data import TensorDataset, DataLoader\n\ntrain_ds = TensorDataset(X_train_torch, y_train_torch)\ntrain_dl = DataLoader(train_ds, batch_size=64)\n```\n\n\n```python\nfrom torch import optim\n\nmy_net = my_multiclass_net(X_train_torch.size()[1], y_train_torch.size()[1])\nopt = optim.SGD(my_net.parameters(), lr=0.1)\n\nepochs = 200\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('Current epoch:', epoch+1)\n \n for xb, yb in train_dl:\n \n #Compute network output and cross-entropy loss for current minibatch\n pred = my_net(xb)\n loss = loss_func(pred, yb.argmax(axis=-1))\n \n #Compute gradients and optimize parameters\n loss.backward()\n opt.step()\n opt.zero_grad()\n \n #At the end of each epoch, evaluate overall network performance\n with torch.no_grad():\n #Computing network performance after iteration\n pred = my_net(X_train_torch)\n loss_train[epoch] = loss_func(pred, y_train_torch.argmax(axis=-1)).item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = my_net(X_val_torch)\n loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n```\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n### 3.2.4. 
Multi Layer networks using ```nn.Sequential``` \n\n* PyTorch simplifies considerably the implementation of neural network training, since we do not need to implement derivatives ourselves\n\n* We can also make a simpler implementation of multilayer networks using ```nn.Sequential``` function\n\n* It returns directly a network with the requested topology, including parameters **and forward evaluation method**\n\n\n```python\nmy_net = nn.Sequential(\n nn.Linear(X_train_torch.size()[1], 200),\n nn.ReLU(),\n nn.Linear(200,50),\n nn.ReLU(),\n nn.Linear(50,20),\n nn.ReLU(),\n nn.Linear(20,y_train_torch.size()[1])\n)\n\nopt = optim.SGD(my_net.parameters(), lr=0.1)\n```\n\n\n```python\nepochs = 200\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('N\u00famero de \u00e9pocas:', epoch+1)\n \n for xb, yb in train_dl:\n \n #Compute network output and cross-entropy loss for current minibatch\n pred = my_net(xb)\n loss = loss_func(pred, yb.argmax(axis=-1))\n \n #Compute gradients and optimize parameters\n loss.backward()\n opt.step()\n opt.zero_grad()\n \n #At the end of each epoch, evaluate overall network performance\n with torch.no_grad():\n #Computing network performance after iteration\n pred = my_net(X_train_torch)\n loss_train[epoch] = loss_func(pred, y_train_torch.argmax(axis=-1)).item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = my_net(X_val_torch)\n loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n```\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n\n```python\nprint('Validation accuracy with this net:', acc_val[-1])\n```\n\n Validation accuracy with this net: 0.8692494034767151\n\n\n### 3.3. Generalization\n\n* For complex network topologies (i.e., many parameters), network training can incur in over-fitting issues\n\n* Some common strategies to avoid this are:\n\n - Early stopping\n - Dropout regularization\n \n
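* As a rough sketch of how these two ideas can be wired into the ```nn.Sequential``` pipeline used above (the dropout probability, the ```patience``` value and the variable names below are illustrative choices, not settings taken from the original runs):\n\n\n```python\nfrom torch import nn\n\n#Illustrative sketch: dropout layers inside nn.Sequential, plus a simple\n#early-stopping check based on the validation loss\ndropout_net = nn.Sequential(\n    nn.Linear(X_train_torch.size()[1], 200),\n    nn.ReLU(),\n    nn.Dropout(p=0.5),   #randomly zeroes 50% of the activations during training\n    nn.Linear(200, 50),\n    nn.ReLU(),\n    nn.Dropout(p=0.5),\n    nn.Linear(50, y_train_torch.size()[1])\n)\n\nbest_val = float('inf')   #best validation loss seen so far\npatience, wait = 20, 0    #stop after 20 epochs without improvement\n\n#Inside the training loop one would call dropout_net.train() before the\n#minibatch loop and dropout_net.eval() before evaluating, and then:\n#    if loss_val[epoch] < best_val:\n#        best_val, wait = loss_val[epoch], 0\n#    else:\n#        wait += 1\n#        if wait >= patience:\n#            break\n```\n\n* Note that dropout is only active in training mode; calling ```.eval()``` on the model disables it, which is why the train/eval switches matter when monitoring the validation loss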
\n\n* Data augmentation can also be used to avoid overfitting, as well as to achieve improved accuracy by providing the network with some a priori expert knowledge\n - E.g., if image rotations and scalings do not affect the correct class, we could enlarge the dataset by creating artificial images with these transformations\n\n\n### 3.5. Convolutional Networks for Image Processing \n\n* PyTorch implements other layers that are better suited for different applications\n\n* In image processing, we normally turn to Convolutional Neural Networks, since they are able to capture the true spatial information of the image\n\n
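* Before building the full model, the short sketch below (a toy check on a dummy tensor, not part of the original notebook) shows how a strided ```nn.Conv2d``` layer reshapes its input; the 64 x 64 size is assumed here because the flattened sign-digit images have 4096 features\n\n\n```python\nimport torch\nfrom torch import nn\n\n#Output spatial size of a convolution: floor((H + 2*padding - kernel_size)/stride) + 1\n#With kernel_size=3, stride=2, padding=1 each spatial dimension is roughly halved\ndummy = torch.zeros(1, 1, 64, 64)   #batch of one single-channel 64 x 64 image\nconv = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)\nprint(conv(dummy).shape)            #torch.Size([1, 16, 32, 32])\n```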
\n\n\n```python\ndataset = 'digits'\n\n#Generate train and validation data, shuffle\nX_train, X_val, y_train, y_val = train_test_split(digitsX[:,np.newaxis,:,:], digitsY, test_size=0.2, random_state=42, shuffle=True)\n\n#Convert to Torch tensors\nX_train_torch = torch.from_numpy(X_train)\nX_val_torch = torch.from_numpy(X_val)\ny_train_torch = torch.from_numpy(y_train)\ny_val_torch = torch.from_numpy(y_val)\n\ntrain_ds = TensorDataset(X_train_torch, y_train_torch)\ntrain_dl = DataLoader(train_ds, batch_size=64)\n```\n\n\n```python\nclass Lambda(nn.Module):\n def __init__(self, func):\n super().__init__()\n self.func = func\n\n def forward(self, x):\n return self.func(x)\n\nmy_net = nn.Sequential(\n nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),\n nn.ReLU(),\n nn.AvgPool2d(4),\n Lambda(lambda x: x.view(x.size(0), -1)),\n)\n\nopt = optim.SGD(my_net.parameters(), lr=0.1)\n```\n\n\n```python\nepochs = 2500\n\nloss_train = np.zeros(epochs)\nloss_val = np.zeros(epochs)\nacc_train = np.zeros(epochs)\nacc_val = np.zeros(epochs)\n\nfor epoch in range(epochs):\n \n if not ((epoch+1)%(epochs/5)):\n print('N\u00famero de \u00e9pocas:', epoch+1)\n \n for xb, yb in train_dl:\n \n #Compute network output and cross-entropy loss for current minibatch\n pred = my_net(xb)\n loss = loss_func(pred, yb.argmax(axis=-1))\n \n #Compute gradients and optimize parameters\n loss.backward()\n opt.step()\n opt.zero_grad()\n \n #At the end of each epoch, evaluate overall network performance\n with torch.no_grad():\n #Computing network performance after iteration\n pred = my_net(X_train_torch)\n loss_train[epoch] = loss_func(pred, y_train_torch.argmax(axis=-1)).item()\n acc_train[epoch] = accuracy(y_train_torch, pred).item()\n pred_val = my_net(X_val_torch)\n loss_val[epoch] = loss_func(pred_val, y_val_torch.argmax(axis=-1)).item()\n acc_val[epoch] = accuracy(y_val_torch, pred_val).item()\n```\n\n N\u00famero de \u00e9pocas: 500\n N\u00famero de \u00e9pocas: 1000\n N\u00famero de \u00e9pocas: 1500\n N\u00famero de \u00e9pocas: 2000\n N\u00famero de \u00e9pocas: 2500\n\n\n\n```python\nplt.figure(figsize=(14,5))\nplt.subplot(1, 2, 1), plt.plot(loss_train, 'b'), plt.plot(loss_val, 'r'), plt.legend(['train', 'val']), plt.title('Cross-entropy loss')\nplt.subplot(1, 2, 2), plt.plot(acc_train, 'b'), plt.plot(acc_val, 'r'), plt.legend(['train', 'val']), plt.title('Accuracy')\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "d5f0afe67bec9057b4851160021ebe74e99422fd", "size": 656922, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "C5.Classification_NN/NeuralNetworks_professor.ipynb", "max_stars_repo_name": "ML4DS/ML4all", "max_stars_repo_head_hexsha": "7336489dcb87d2412ad62b5b972d69c98c361752", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 27, "max_stars_repo_stars_event_min_datetime": "2016-11-30T17:34:00.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-23T23:11:48.000Z", "max_issues_repo_path": "C5.Classification_NN/NeuralNetworks_professor.ipynb", "max_issues_repo_name": "ML4DS/ML4all", "max_issues_repo_head_hexsha": "7336489dcb87d2412ad62b5b972d69c98c361752", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 5, "max_issues_repo_issues_event_min_datetime": "2019-08-12T18:28:49.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-26T11:01:39.000Z", "max_forks_repo_path": 
"C5.Classification_NN/NeuralNetworks_professor.ipynb", "max_forks_repo_name": "ML4DS/ML4all", "max_forks_repo_head_hexsha": "7336489dcb87d2412ad62b5b972d69c98c361752", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 14, "max_forks_repo_forks_event_min_datetime": "2016-11-30T17:34:18.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-15T09:53:32.000Z", "avg_line_length": 245.3032113518, "max_line_length": 77468, "alphanum_fraction": 0.9101887286, "converted": true, "num_tokens": 15258, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.480478678047907, "lm_q2_score": 0.20181322706107538, "lm_q1q2_score": 0.0969669525508876}} {"text": "# Heteroskedasticity\n## Consequences of Heteroskedasticity for OLS\n\n$\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\plim}{plim}\n\\newcommand{\\using}[1]{\\stackrel{\\mathrm{#1}}{=}}\n\\newcommand{\\ffrac}{\\displaystyle \\frac}\n\\newcommand{\\asim}{\\overset{\\text{a}}{\\sim}}\n\\newcommand{\\space}{\\text{ }}\n\\newcommand{\\bspace}{\\;\\;\\;\\;}\n\\newcommand{\\QQQ}{\\boxed{?\\:}}\n\\newcommand{\\void}{\\left.\\right.}\n\\newcommand{\\Tran}[1]{{#1}^{\\mathrm{T}}}\n\\newcommand{\\d}[1]{\\displaystyle{#1}}\n\\newcommand{\\CB}[1]{\\left\\{ #1 \\right\\}}\n\\newcommand{\\SB}[1]{\\left[ #1 \\right]}\n\\newcommand{\\P}[1]{\\left( #1 \\right)}\n\\newcommand{\\abs}[1]{\\left| #1 \\right|}\n\\newcommand{\\norm}[1]{\\left\\| #1 \\right\\|}\n\\newcommand{\\dd}{\\mathrm{d}}\n\\newcommand{\\Exp}{\\mathrm{E}}\n\\newcommand{\\RR}{\\mathbb{R}}\n\\newcommand{\\EE}{\\mathbb{E}}\n\\newcommand{\\II}{\\mathbb{I}}\n\\newcommand{\\NN}{\\mathbb{N}}\n\\newcommand{\\ZZ}{\\mathbb{Z}}\n\\newcommand{\\QQ}{\\mathbb{Q}}\n\\newcommand{\\PP}{\\mathbb{P}}\n\\newcommand{\\AcA}{\\mathcal{A}}\n\\newcommand{\\FcF}{\\mathcal{F}}\n\\newcommand{\\AsA}{\\mathscr{A}}\n\\newcommand{\\FsF}{\\mathscr{F}}\n\\newcommand{\\Var}[2][\\,\\!]{\\mathrm{Var}_{#1}\\left[#2\\right]}\n\\newcommand{\\Avar}[2][\\,\\!]{\\mathrm{Avar}_{#1}\\left[#2\\right]}\n\\newcommand{\\Cov}[2][\\,\\!]{\\mathrm{Cov}_{#1}\\left(#2\\right)}\n\\newcommand{\\Corr}[2][\\,\\!]{\\mathrm{Corr}_{#1}\\left(#2\\right)}\n\\newcommand{\\I}[1]{\\mathrm{I}\\left( #1 \\right)}\n\\newcommand{\\N}[1]{\\mathcal{N} \\left( #1 \\right)}\n\\newcommand{\\ow}{\\text{otherwise}}\n\\newcommand{\\FSD}{\\text{FSD}}Review$\n\n>Homoskedasticity assumption $\\text{MLR}.5$, that $\\Var{u\\mid x_1,x_2,\\dots, x_k} = \\sigma^2$, plays no role in showing whether OLS was unbiased or consistent. Only something like omitting an important variable would have this effect.\n>\n>Also, $R^2$ and $\\bar R^2$ are unaffected by the presence of heteroskedasticity. They are the estimators of population $R^2 = 1 - \\sigma_u^2/\\sigma_y^2$ where the two variances are *un*conditional while heteroskedasticity is under conditioning on $\\mathbf{x}$. \n\nUnder $\\text{MLR}.1$ to $\\text{MLR}.4$, $\\text{SSR}/n$ consistently estimates $\\sigma_u^2$ and $\\text{SST}/n$ consistently estimates $\\sigma_y^2$. Therefore, $R^2$ and $\\bar R^2$ are both consistant estimators of the population $R^2$, whether or not the homoskedasticity assumption holds.\n\nBut to do the inference by $t$ statistic and $F$ statistic, we still need that assumption. 
Besides, if $\\Var{u\\mid\\mathbf{x}}$ is no longer constant, OLS is no longer BLUE.\n\n## Heteroskedasticity-Robust Inference after OLS Estimation\n\nConsider the simple linear regression, the estimator of slope parameter is\n\n$$\\hat\\beta_1 = \\beta_1 + \\ffrac{\\d{\\sum_{i=1}^{n} \\P{x_i - \\bar x}} u_i}{\\d{\\sum_{i=1}^{n} \\P{x_i - \\bar x}^2}}$$\n\n$$\\Var{u_i\\mid x_i} = \\sigma_i^2\\;{\\Longrightarrow}\\;\\Var{\\hat\\beta_1} = \\ffrac{\\sum \\P{x_i - \\bar x}^2\\sigma_i^2}{\\text{SST}_x^2}$$\n\nNow we give a valid estimator for general MLR\n\n$$\\widehat{\\Var{\\hat \\beta_j}} = \\ffrac{\\d{\\sum_{i=1}^{n}\\hat r_{ij}^2 \\hat u_i^2}}{\\text{SSR}_j^2}$$\n\nwhere $\\hat r_{ij}$ denotes the $i$th residual from regressing $x_j$ on all other independent variables, and $\\text{SSR}_j$ is the sum of squared residuals from this regression. More than that, we have the ***heteroskedasticity-robust standard error*** for $\\hat\\beta_j$:\n\n$$\\sqrt{\\widehat{\\Var{\\hat \\beta_j}}}= \\ffrac{\\d{\\sqrt{\\sum_{i=1}^{n}\\hat r_{ij}^2 \\hat u_i^2}}}{\\text{SSR}_j^2}$$\n\nHere the $\\text{SSR}_j^2$ can be replaced by $\\text{SST}_j^2\\P{1-R_j^2}$, where $\\text{SST}_j^2$ is the total sum of squares of $x_j$, and $R_j^2$ is the usual $R^2$ from regressing $x_j$ on all other explanatory variables.\n\nThen the ***heteroskedasticity-robust $t$ statistic***.\n\n$$t = \\ffrac{\\text{estimate} - \\text{hypothesized value}}{\\textbf{heteroskedasticity-robust} \\text{ standard error}}$$\n\nUsing these formulas, the usual $t$ test is valid asymptotically; but the usual $F$-statistic does not work under heteroskedasticity.\n\n### Computing Heteroskedasticity-Robust LM Tests\n\nSkipped, however. OK, the usual LM statistic:\n\n1. Estimate the restricted model to obtain the residual $\\tilde u$\n2. regress $\\tilde u$ on all of the independent variables\n3. $\\text{LM} = n \\cdot R_{\\tilde u}^2$, where $R_{\\tilde u}^2$ is the $R^2$ from this regression\n\nThen the HR LM statistic in the general case\n\n1. The same: estimate the restricted model to obtain the residuals $\\tilde u$\n2. Regress each of the independent variables which are excluded under the null hypothesis, on all of the included variables; say there're $q$ excluded variables (restricted model for them: $x_j = \\beta_0 + \\beta_1 x_1 + \\cdots + \\beta_{k-q} x_{k-q} + u$, $j = k-q+1,\\dots,k$)\n3. Obtain $q$ sets of residuals $\\P{\\tilde r_1,\\tilde r_2,\\dots, \\tilde r_q}$, then find the element-wise production of $\\tilde r_j$ and $\\tilde u$\n4. Regress $y\\equiv 1$ on $\\beta = 0, \\tilde r_1 \\tilde u, \\tilde r_2 \\tilde u,\\dots,\\tilde r_q \\tilde u$\n5. The **Heteroskedasticity-Robust LM Statistic** is now $n-\\text{SSR}_1$ (I bet this is a minus sign, it could be some other signs though...), where $\\text{SSR}_1$ is just the usual sum of squared residuals from the regression in the final step.\n\nBy the way, under $H_0$, $\\text{LM}$ is distributed approximately as $\\chi_q^2$\n\n## Testing for Heteroskedasticity\n\nModel: $y = \\beta_0 + \\beta_1 x_1 + \\cdots + \\beta_k x_k + u$; assumption: $\\text{MLR}.1$ through $\\text{MLR}.4$, so that the OLS estimators are still unbiased and consistent.\n\nTo test the heteroskedasticity, we have $H_0: \\Var{u\\mid x_1,x_2,\\dots,x_k} = \\sigma^2$. 
Since $u$ has a zero conditional expectation, this is equivalent to\n\n$$H_0: \\Exp\\SB{u^2\\mid x_1,\\dots,x_k} = \\Exp\\SB{u^2} = \\sigma^2$$\n\nSo we are actually testing weather $u^2$ is related (in expected value) to one or more of the explanatory variables. Then we assume the linear function\n\n$$u^2 = \\delta_0 + \\delta_1 x_1 + \\delta_2 x_2 + \\cdots + \\delta_k x_k + v$$\n\nwhere $v$ is an error term with mean zero given the $x_j$. Then we rewrite the null hypothesis of homoskedasticity as $H_0: \\delta_1 = \\delta_2 = \\cdots = \\delta_k = 0$.\n\nThen we can use $F$ statistic or $\\text{LM}$ to test this. And to do so, we first need to estimate the left side, the residual and since it's unable to obtain, we will use its estimation $\\hat u_i$ so actually the equation to be process is actually\n\n$$\\hat u^2 = \\delta_0 + \\delta_1 x_1 + \\delta_2 x_2 + \\cdots + \\delta_k x_k + \\text{error}$$\n\nThen apply the $F$ test or $\\text{LM}$ test. And to distinguish two different $R$-square, we denote the $F$ statistic in this regression\n\n$$F = \\ffrac{\\ffrac{R_{\\hat u^2}^2}{k}}{\\ffrac{1-R_{\\hat u^2}^2}{n-k-1}}$$\n\nAnd the $\\text{LM}$ statistic is $\\text{LM} = n\\cdot R_{\\hat u^2}^2$. Under $H_0$, it's distributed asymptotically as $\\chi_k^2$. The $\\text{LM}$ version of the test is typically called the ***Breusch-Pagan test for heteroskedasticity (BP test)***.\n\n$Remark$\n\n>Larger $R_{\\hat u^2}^2$ could be the evidence against the null hypothesis.\n\n**Steps for BP test**:\n\n1. Estimate the model $y = \\beta_0 + \\beta_1 x_1 + \\cdots + \\beta_k x_k + u$ by OLS, as usual. And for each observation, find the residual $\\hat u^2$\n2. Regression the equation $\\hat u^2 = \\delta_0 + \\delta_1 x_1 + \\delta_2 x_2 + \\cdots + \\delta_k x_k + \\text{error}$, with the $R$-squared $R_{\\hat u^2}^2$.\n3. Form either the $F$ statistic or the $\\text{LM}$ statistic and compute the $p$-value. Use $F_{k,n-k-1}$ distribution for $F$ statistic and $\\chi_k^2$ distribution for the otherone. If the $p$-value is below the chosen significance level, meaning that it's sufficiently small, we reject the null hypothesis and admit the heteroskedasticity.\n\n$Remark$\n\n> If we suspect that heteroskedasticity depends only upon certain independent variables, we can simple modify the BP test that we regress $\\hat u^2$ only on the chosen variables and then carry out the appropriate $F$ or $\\text{LM}$ test.\n\n### The White Test for Heteroskedasticity\n\nThe ***White test for heteroskedasticity*** is the $\\text{LM}$ statistic for testing that all of the $\\delta_j$ where $j=1,2,\\dots$ (no $0$ here), defined by\n\n$$\\begin{align}\n\\hat u^2 &= \\P{y - \\hat\\beta_0 - \\hat\\beta_1 x_1 - \\cdots - \\hat\\beta_k x_k}^2 \\\\\n&\\equiv \\delta_0 + \\sum_{i=1}^k \\delta_i x_i + \\sum_{i=1}^k \\delta_{k+i} x_i^2 + \\sum_{jFeasible GLS is consistent and asymptotically more efficient than OLS.
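\n\nBefore moving on, here is a small numpy sketch of the Breusch-Pagan steps described earlier (the design matrix ```X``` with a leading column of ones and the response ```y``` are placeholder names; in practice a statistics package would also report the $p$-values):\n\n\n```python\nimport numpy as np\n\n#BP test sketch: X is an (n, k+1) array whose first column is ones, y is (n,)\ndef breusch_pagan(X, y):\n    n, k1 = X.shape\n    k = k1 - 1\n    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # step 1: OLS of y on the x's\n    u2 = (y - X @ beta) ** 2                       # squared residuals\n    delta = np.linalg.lstsq(X, u2, rcond=None)[0]  # step 2: regress u^2 on the x's\n    ss_res = np.sum((u2 - X @ delta) ** 2)\n    ss_tot = np.sum((u2 - u2.mean()) ** 2)\n    r2 = 1.0 - ss_res / ss_tot                     # R^2 of the auxiliary regression\n    lm = n * r2                                    # LM statistic, approx. chi^2_k under H0\n    f = (r2 / k) / ((1.0 - r2) / (n - k - 1))      # F statistic, approx. F_{k, n-k-1} under H0\n    return lm, f\n```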
\n***\n\n### What if the Assumed Heteroskedasticity Function is Wrong?\n\n- WLS is still consistent under $\\text{MLR}.1$ through $\\text{MLR}.4$\n- robust standard errors should be computed\n- WLS is consistent under $\\text{MLR}.4$ but not necessarily under $\\text{MLR}.4'$\n\n### Prediction and Prediction Intervals with Heteroskedasticity\n\n## The Linear Probability Model Revisited\n\n$$\\Var{y\\mid \\mathbf x} = p\\P{\\mathbf x} \\P{1-p\\P{\\mathbf x}}\\Rightarrow \\hat h_i = \\hat y_i \\P{1-\\hat y_i}$$\n\n***\n", "meta": {"hexsha": "3e1475e3bdecb406ed5592a7d1493eb6e2270e1a", "size": 18305, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FinMath/Econometrics/Chap_08.ipynb", "max_stars_repo_name": "XavierOwen/Notes", "max_stars_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-11-27T10:31:08.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-20T03:11:58.000Z", "max_issues_repo_path": "FinMath/Econometrics/Chap_08.ipynb", "max_issues_repo_name": "XavierOwen/Notes", "max_issues_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FinMath/Econometrics/Chap_08.ipynb", "max_forks_repo_name": "XavierOwen/Notes", "max_forks_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-14T19:57:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-14T19:57:23.000Z", "avg_line_length": 52.7521613833, "max_line_length": 354, "alphanum_fraction": 0.5816989893, "converted": true, "num_tokens": 4882, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4571367168274948, "lm_q2_score": 0.21206881182686205, "lm_q1q2_score": 0.09694444038003952}} {"text": "# MAE 3120 Methods of Engineering Experiments\n\n>__Philippe M Bardet__\n\n>__Mechanical and Aerospace Engineering__\n\n>__The George Washington University__\n\n\n\n# Module 01: Introduction to measurement system\n\nThis class will be focused mainly on experiments, but many of the concepts seen here are also applied in many other fields where analytical thinking is needed. In fact, with the advancement of computers one talks more and more of numerical experiments. The stock market could also be considered as a huge real time experiment...\n\nThis is an introductory lecture and we need to establish a common (rigorous) language for the rest of the class (and your career). We are going to introduce a lot of definitions that we will use for the rest of the semester. Many definitions and notions introduced here should only be reviews at this point (I hope). \n\nAlong with the common language, we will also adopt a notation convention that is consistent throughout the class. Depending on the textbook you choose to follow, the convention might be slightly different than the one adopted here.\n\n\n## DIKW pyramid\n\nIn engineering, knowledge could be defined as a model that describes and (ideally) predicts the behavior of a complex system. The context of data acquisition, analysis, and model development can be put in the context of the wisdom pyramid (or DIKW pyramyd). 
This concept has been developed over the years by the information theory field.\n\n\n\nhttps://en.wikipedia.org/wiki/DIKW_pyramid\n\nStarting from the bottom of the pyramid:\n\n__data__: Signal reading from sensor/transducer\n\n__information__: \"organized or structured data, which has been processed in such a way that the information now has relevance for a specific purpose or context, and is therefore meaningful, valuable, useful and relevant.\" information + data: \"know what\". \n\n__knowledge__: \"organization and processing to convey understanding, experience, and accumulated learning\". It can be seen as having an engineering model that describes a phenomena or system. \"Why is\".\n\n__wisdom__: \"Why do\". Applicability of model to predict new behaviors.\n\nUse example of taking data at 3 points, to extract mean values, from mean values, trend line, and then knowing when to use the trendline.\n\nHere is another representation of the pyramid:\n\n\n\n## Goals of experiments\n\nExperiments serve two main purposes: \n\n>__1- Engineering/scientific experimentation__:\n>The goal is to seek new information. For example when developing a new product one needs to know: how hot does it get? When will it fail? Another example would be to determine a model that describes the behavior of a system.\n \n>__2- Operational system__:\n>The goal is to monitor and control processes. In other words to create a reliable operational sysem. This is generally applied to existing equipment (or equipment under design), rather than used to design a new equipment (first application of measurement). For example, this could be the A/C control system of a room: one needs to measure temperature and regulate the heating/cooling based on a set point.\n \nIt is convenient to think of the measurement process with a block diagram.\n\n\n\n\n## A brief history of measurement (length)\n\nIn Ancient Egypt, (~3,000 BC) people used a measure called a CUBIT. This word comes from a Latin word \u201ccubitum\u201d which meant elbow and it was the length of a person\u2019s outstretched forearm - from the elbow to the tip of the middle finger. It was based on the Pharaoh\u2019s body: the ROYAL CUBIT. A stick was marked with this distance and copies were distributed to merchants throughout the land. When the Pharaoh died, a new Pharaoh took the throne, a new Royal Cubit came into being, and a new stick and it\u2019s copies had to be made and sent across the land. Plus, there were OTHER problems: The lengths that people wanted to measure were sometimes shorter or longer than a cubit. So other lengths like the PALM, DIGIT, and FOOT were used. 7 PALMS was the same as 1 CUBIT. The ROYAL FOOT was equal to about 18 fingers or \u2154 of a Royal Cubit.\n\n\n\n\nAlongside the Ancient Egyptians, people throughout the Mediterranean used parts of their bodies to create units of measurement. Mediterranean sailors used Fathoms to measure depth. The Hebrew people of long ago used a measurement called a SPAN. Hand spans are still a unit used to measure horses! The people of Ancient Greece adapted the Egyptian-Hebrew measurements and added more measurements based upon multiples of fingers. Ancient Romans created a measurement meant based on the width of the thumb or \u201cuncia\u201d in Latin. That\u2019s where the word for INCH comes from. Roman armies measured the distance from one step to another using PACES. A PACE was equal to 2 Egyptian cubits and is still used to describe speed in foot races. 
The word MILE comes from the Latin word \u201cmilliare,\u201d a distance of 1,000 paces covered by the Roman army at a forced march.\n\n\n\nAs the Roman empire spread, the Roman measuring units became the accepted system of measurement throughout Europe. A Roman FOOT was equal to 12 UNCIA. These uncia came to England and over time became INCHES. Like the Pharaohs of Ancient Egypt, the King of England at the time also wanted to STANDARDIZE (or make the same) the units of measurement across the land based on his own body. A royal decree went out: a YARD was to be the distance from the nose to the tip of the middle finger of the outstretched arm...or about 3 FEET. From this time, all units were derived from the King\u2019s foot and yard. The English Mile was derived from that. This English or Customary system spread back down through Europe and across the ocean to North America with the English who arrived here. As trade and communication increased again, once again a more uniform or regulated system was needed. It took several hundred years for the Customary system to become more and more dependable and standardized. In 1855, a new distance for a yard was formalized. This was still about the size of the original Roman yard. Since then all of the other units of measurement for length have been derived from multiplications and divisions of the Yard. 1 Inch was 1/36 of a yard. 1 Foot was \u2153 yard. \n\n\n\nWhile the English or Customary system came to be widely used in Europe, and is still used here in the United States, today there is another widely used system of measurement that is not based on the measurements of the human body. It\u2019s beginnings can be found in the 1600s in Europe, when people began to talk about finding better STANDARDS for measurement. In the 1790s, a group of French scientists decided to create a new standard of measurement that would be unchangeable. The name for the unit of measure chosen was the METER; it was based upon a measurement of the Earth. The system they developed is called the METRIC SYSTEM and it is still in use today, throughout France, Europe and almost every nation in the world. All scientists use the Metric system because it is the most precise. The length of the meter was taken to be one ten-millionth of the distance from the North Pole to the equator along a line of longitude near Paris, France. The word meters comes from the Greek word \u201cmetron\u201d which means to measure. The metric system is based on multiples of 10.\n\n\n\n\n\n## Dimensions and Units\nFor measured data to be useful, one needs to have a common language: i.e. a unique definition of dimensions (length, time, etc.) 
with associated dimensional units (meter, second, etc.).\n\nThere are two types of dimensions: \n\n>__1- Primary or base.__ 7 in total.\n\n\\begin{array}{l l l}\n\\hline\n\\mathrm{primary\\, dimension} & \\mathrm{symbol} & \\mathrm{unit} \\\\\n\\hline\n\\text{mass} & m & \\mathrm{kg}\\\\\n\\mathrm{length} & L & \\mathrm{m}\\\\\n\\mathrm{time} & t & \\mathrm{s}\\\\\n\\mathrm{thermodynamic\\,temperature} & T& \\mathrm{K}\\\\\n\\mathrm{electrical \\,current} & I & \\mathrm{A}\\\\\n\\mathrm{amount\\,of\\,light} & C& \\mathrm{Cd,\\,Candela}\\\\\n\\mathrm{amount\\,of\\,matter} & mol & \\mathrm{mole}\\\\\n\\hline\n\\end{array}\n\n\n\n\n>__2- Secondary or derived.__ They are made of a combination of primary/base dimensions\n\n\\begin{equation}\n\\mathrm{force} = \\frac{ m \\times L}{t^2} \n\\end{equation}\n\nAll other dimensions and units can be derived as combinations of the primaries. Here is a table with examples.\n\n\\begin{array}{l l l l}\n\\hline\n\\mathrm{secondary\\, dimension} & \\mathrm{Symbol} & \\mathrm{unit} & \\text{unit name}\\\\\n\\hline\n\\mathrm{force} & F & \\mathrm{N= kg \\cdot m/s^2} & \\text{Newton}\\\\\n\\mathrm{pressure} & P \\,(p) & \\mathrm{Pa = N/m^2} & \\text{Pascal}\\\\\n\\mathrm{energy} & E & \\mathrm{J = N \\cdot m = kg \\cdot m^2 / s^2} & \\text{Joule} \\\\\n\\mathrm{power} & \\dot{W} \\, (P) & \\mathrm{W = N \\cdot m / s = kg \\cdot m^2 / s^3} & \\text{Watt}\\\\\n\\hline\n\\end{array}\n\n\nTo avoid confusions we will use the SI (International Standard) system of units. \n\nPlease also note the notation in how we report the units and symbols. We will use the same notation throughout the class and you will also throughtout your career. In scientific notation:\n> - mathematical symbols are reported as italic (e.g. temperature $T$, pressure $P$, velocity $U$), \n> - while units are reported in roman fonts and with a space in front of the value it characterizes (e.g. $P$ = 100 Pa, $U$ = 5 m/s, $T$ = 400 K). \n\n\nAnecdote: the Mars Climate Orbiter crashed in 1990's due to a problem of unit conversion. Source of the failure (from official report): ''failure using metric units''.\n\nWhile we will use the SI system in the class it is useful to know how to convert dimensions from one unit system to another (i.e. imperial to SI). 
Here are some useful quantities to keep handy.\n\n### Unit conversion\n\n\n\\begin{array}{l l l}\n\\hline\n\\mathrm{length} & & \\\\\n\\hline\n1 \\,\\mathrm{in} & = & 25.4 \\times 10^{-3}\\,\\mathrm{m}\\\\\n1 \\,\\mathrm{ft} & = & 0.3048 \\,\\mathrm{m} \\\\\n & = & 12 \\,\\mathrm{in}\\\\\n1\\, \\AA & = & 10^{-10}\\,\\mathrm{m} \\\\\n1\\,\\mathrm{mile \\,(statute)} & = & 1,609 \\,\\mathrm{m}\\\\\n1 \\,\\mathrm{mile \\,(nautical)} & = & 1,852 \\,\\mathrm{m} \\\\\n%\n\\hline\n\\mathrm{volume} & & \\\\\n\\hline\n1 \\,\\mathrm{l\\, (liter)} & = & 10^{-3}\\,\\mathrm{m}^3 \\\\ \n1\\,\\mathrm{ in}^3 & = & 16.387 \\,\\mathrm{cm}^3\\\\\n1 \\,\\mathrm{gal\\, (U.S.\\, liq.)} & = & 3.785\\,\\mathrm{l} \\\\\n1 \\,\\mathrm{gal\\, (U.S.\\, dry)} & = & 1.164\\,\\mathrm{ U.S.-liq.\\, gal}\\\\\n1 \\,\\mathrm{gal \\,(British)} & = & 1.201\\,\\mathrm{ U.S.-liq.\\, gal}\\\\\n%\n\\hline\n\\mathrm{mass} & &\\\\\n\\hline\n1 \\,\\mathrm{lb \\,(mass)} & = & 0.454\\,\\mathrm{ kg}\\\\\n%\n\\hline\n\\mathrm{force}& &\\\\\n\\hline\n1 \\,\\mathrm{N }& = & 1\\,\\mathrm{ kg\\cdot m/s}^2\\\\\n1\\,\\mathrm{ dynes} & = & 10^{-5}\\,\\mathrm{ N}\\\\\n1 \\,\\mathrm{lb \\,(force)} & = & 4.448 \\,\\mathrm{N}\\\\\n%\n\\hline\n\\mathrm{energy} & & \\\\\n\\hline\n1 \\,\\mathrm{J }& = & 1\\, \\mathrm{kg \\cdot m}^2/\\mathrm{s}^2\\\\\n1 \\,\\mathrm{BTU} & = & 1,055.1\\,\\mathrm{ J}\\\\\n1 \\,\\mathrm{cal} & \\equiv & 4.184\\,\\mathrm{ J}\\\\\n1 \\,\\mathrm{kg-TNT} & \\equiv & 4.184\\,\\mathrm{ MJ}\\\\\n%\n\\hline\n\\mathrm{power} & & \\\\\n\\hline\n1 \\,\\mathrm{W} & \\equiv & 1 \\,\\mathrm{J/s}\\\\\n1 \\,\\mathrm{HP\\, (imperial)} & \\equiv & 745.7 \\,\\mathrm{W}\\\\\n1 \\,\\mathrm{HP\\, (metric)} & \\equiv & 735.5 \\,\\mathrm{W}\\\\\n\\hline\n\\end{array}\n\n### Dimensionless numbers\n\n\\begin{array}{l l l}\n\\mathrm{Reynolds\\, number} & Re & U L / \\nu \\\\\n\\mathrm{Mach\\, number} & M & U/a \\\\\n\\mathrm{Prandtl\\, number} & Pr & \\mu c_p / k = \\nu / \\kappa \\\\\n\\mathrm{Strouhal\\, number} & St & L/U \\tau \\\\\n\\mathrm{Knudsen\\, number} & Kn & \\Lambda / L \\\\\n\\mathrm{Peclet\\, number} & Pe & U L / \\kappa = Pr \\cdot Re \\\\\n\\mathrm{Schmidt\\, number} & Sc & \\nu / D \\\\\n\\mathrm{Lewis\\, number} & Le & D / \\kappa \\\\\n\\end{array}\n\n### Useful constants\n\nAvogadro's number: \n\\begin{align*}\nN_A & = 6.022\\, 1367 \\times 10^{23} \\mathrm{\\, molecules/(mol)} %\\nolabel\n\\end{align*}\n\nBoltzman constant:\n\\begin{align*}\nk_B & = 1.380\\, 69 \\times 10^{-23} \\mathrm{\\, J/K} \\\\\nk_B T & = 2.585 \\times 10^{-2} \\mathrm{\\,eV} \\sim \\frac{1}{40} \\mathrm{\\, eV, at \\,} T = 300 \\mathrm{K} \n\\end{align*}\n\nUniversal gas constant:\n\\begin{align*}\nR_u & = 8.314\\, 510 \\mathrm{\\, J/(mol} \\cdot \\mathrm{K)}\n\\end{align*}\n\nEarth radius (at equator):\n\\begin{align*}\nr_{earth} & = 6\\,378.1370 \\mathrm{\\, km} \n\\end{align*}\n\n\n## Dimensional analysis\n\nNow that we know the primary dimensionns, we can make use of it to reduce the number of experimental runs one needs to perform. This is the foundation of dimensional analysis, which you have seen in MAE 3126 (Fluid Mechanics). To reduce the number of experimental runs will see other techniques, such as Taguchi arrays in a few weeks when we treat design of experiments. 
The benefit of dimensional analysis is best seen through the graph below:\n\n\n\nPlease review your notes of Fluid Mechanics on dimensional analysis and the method of repeating variables (also called Buckingham $\\Pi$ theorem)\n\n## Errors and Uncertainties\n\nAn __error__ is defined as:\n\n\\begin{align*}\n\\epsilon = x_m - x_{true}\n\\end{align*}\n\nwhere $x_m$ is the measured value and $x_{true}$ the true value. The problem is that we do not always (rarely in fact) know the true value. This will lead to the concept of uncertainty.\n\nErrors can be categorized into two types:\n\n>__1- Systematic or bias error__: Those are errors that are consistent or repeatable. For example, I use a ruler with the first 3 mm missing, all the measurements will be short by 3 mm.\n\n>__2- Random or precision error__: errors that are inconsistent or unrepeatable. This will be seen as scatter in the measured data. For examl]ple, this could be caused by electo-magnetic noise in a voltmeter (with implication on grounding and shielding of the instrument).\n\nSystematic vs random errors can be best seen visually:\n\n\n\n\nIn light of the two types of errors defined above, one would like to define mathematical formulas to quantify them.\n\n__systematic/bias error__\n\n\\begin{align*}\n\\epsilon_b = x_m - x_{true}\n\\end{align*}\n\n_Question_: What are sources of bias errors?\nHow can we reduce bias errors?\n\n__relative mean bias error__: non-dimensional (normalized) form of the mean bias error.\n\n\\begin{align*}\n\\frac{ - x_{true}}{x_{true}}\n\\end{align*}\n\n__random/precision error__\n\n\\begin{align*}\n\\epsilon_p = x_m - \n\\end{align*}\n\n_example_: We have five temperature measurements: Can you find the maximum precision error?\n\nThe __overal precision error or standard error__ is found by computing the standard deviation, $S$, divided by the square root of the number of samples, $n$.\n\n\\begin{align*}\n\\epsilon_{op} = \\frac{S}{\\sqrt{n}}\n\\end{align*}\n\n\n\n```python\nimport numpy\n\nT=[372.80, 373.00, 372.90, 373.30, 373.10]\n\nTm = numpy.mean(T)\nprint(Tm)\nep_max = 373.30-Tm\nprint(ep_max)\n```\n\n 373.02\n 0.28000000000002956\n\n\n_Question_: Are five measurement enough to quantify the precision error? We also need a statistics that represents the __mean__ precision error. We will see this soon.\n\nNow that we have described the two types of errors and hinted at how to estimate them, let's look at instruments specifically.\n\n### Calibration\n\nCalibration aims to determine and imrove its accuracy. Calibration can be accomplished in 3 manners: 1- comparison with a primary standard (such as developed by NIST, like the mass or meter defined earlier), 2- a secondary standard, such as another instrument of known and higher accuracy, 3- a known input source.\n\nHere are examples of primary standards for temperature that are ''easy'' to implement in any labs: \n\\begin{array}{l l}\n\\hline\n\\mathrm{definition} & \\text{temperature (K)}\\\\\n\\hline\n\\mathrm{triple\\, point\\, of\\, hydrogen} & 13.8033 \\\\\n\\mathrm{triple\\, point\\, of\\, oxygen} & 54.3584\\\\\n\\mathrm{triple\\, point\\, of\\, water} & 273.16 \\\\\n\\mathrm{Ice\\, point} & 273.15 \\\\\n\\mathrm{normal\\, boiling\\, point\\, of\\, water} & 373.15 \\\\\n\\hline\n\\end{array}\n\nCalibration can be done in a static and/or dynamic manner. It is a very important step to verify the accuracy of an instrument or sensor. In most laboratories, calibration has to be performed regularly. Some commercial entities are specialised in doing so. 
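\n\nAs a minimal illustration of a static calibration, the short sketch below fits a straight line between the readings of a sensor and the corresponding values of a known standard; the numbers are made up for illustration (they are not data from the class), and the slope of the fit is an estimate of the static sensitivity.\n\n\n```python\nimport numpy\n\n#Hypothetical static calibration: reference temperatures from a standard (K)\n#and the corresponding raw sensor outputs (V). These values are made up.\nT_ref = numpy.array([273.15, 293.15, 313.15, 333.15, 353.15])\nV_out = numpy.array([0.012, 0.815, 1.624, 2.441, 3.239])\n\n#Least-squares straight-line fit V = a*T + b; polyfit returns [slope, intercept]\na, b = numpy.polyfit(T_ref, V_out, 1)\nprint('static sensitivity estimate (V/K):', a)\nprint('offset (V):', b)\n```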
\n\nBefore each important test campaign or after a company recertify instrument, the results have be to be documented for traceability. Here is an example for load balance.\n\n\n\n\n\n### Uncertainty\n\nThe concept of __uncertainty__ needs to be taken into account when we conduct experiments. Uncertainty can be defined as (S.J. Kline) ''What we think the error would be if we could and did measure it by calibration''. Taking data is a very small part of doing an experiment and we are going to spend a lot of time doing uncertainty analysis. \n\nAn error, $\\epsilon$ has a particular sign and magnitude (see the equations above). If it is known, then if can be removed from the measurments (through calibration for example). Any remaining error that does not have a sign and mangitude cannot be removed. We will define an uncertainty $\\pm u$ as the range that contains the remaining (unknown) errors. \n\nBecause we are doing measurement in an uncertain world, we need to be able to express our __confidence level__ in our results. This wil require set of sophisticated statistical tools.\n\n__Uncertainty analysis__ is an extremely important tool and step in experiments and we are going to spend a significant amount of time on it during the class. This analysis is performed typically in the experimental planning phase (to help in determining the appropriate components to use in our instrumentation chain see in the first figure). An extensive analysis is also performed after the campaign to characterize the actual uncertainties in the measurement.\n\n### Instrument rating\n\nWhen selecting an instrument for a measurement, one has many options. Ideally, one would like to select a system that will meet our requirements for the measurement (such as expected range, but also for uncertainty) while not breaking the bank... Luckily manufacturers report a lot of data with their sensor/transducer that can help us in making an educated guess of the expected performance without having to characterize it ourselves. Here is an example from Omega Scientific for a pressure transducer:\n\n\n\n\nLet's define a few of the terms used.\n\n__Accuracy__ is difference between true and measured value. \n\n\\begin{align*}\nu_a = x_m - x_{true}\n\\end{align*}\n\nA small difference between true and measured value leads to a high accuracy and vice-versa. It can be expressed as a percentage of reading, of full-scale, or an absolute value. Accuracy can be assessed and minimized by calibrating the system.\n\n__Precision__ of an instrument is the reading minus the average of readings. It characterizes random error of instrument output or the reproducibility of an instrument.\n\n\\begin{align*}\nu_p = x_m - \n\\end{align*}\n\n_Questions_: \n\n>Can we improve precision by calibrating the system?\n\n> Is there a limit up to which we can improve the accuracy of a system?\n\nIf we recall our previous definitions for errors, you will remark that they are the same than the two terms introduced above and imply that we compare the instrument readings to the true, known, value. However, in most cases, we do not know the true value and instead we are only confident that we are within a certain range ($\\pm$) of the true value. 
Therefore to be consistent with the definitions introduced so far, we should use the term uncertainty and not error, when describing experimental results (except for a few cases).\n\nAccuracy and precision are the two main categories of uncertainties in our measurements; however, they are each comprised of elemental components. A non-exhaustive list includes:\n\n__resolution__: smallest change or increment in the measurand that the instrumente can detect. Note for digital instrument, resolution is associated with the number of digits on display, ie a 5 digit Digital Multi-Meter (DMM) has better resolution than a 4 digit DMM. The values reported by the DMM will be at $\\pm$ the last digit.\n\n__sensitivity__: it is defined \n\nOther sources of errors are zero, linearity, sensitivity, hysteresis, etc.\n\n\n\n## Experiment planning\n\n\n\n\n\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "54acba237adf094fc649af291c536f9348c7ffb8", "size": 26217, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/00_Introduction.ipynb", "max_stars_repo_name": "eiriniflorou/GWU-MAE3120_2022", "max_stars_repo_head_hexsha": "52cd589c4cfcb0dda357c326cc60c2951cedca3b", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lectures/00_Introduction.ipynb", "max_issues_repo_name": "eiriniflorou/GWU-MAE3120_2022", "max_issues_repo_head_hexsha": "52cd589c4cfcb0dda357c326cc60c2951cedca3b", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/00_Introduction.ipynb", "max_forks_repo_name": "eiriniflorou/GWU-MAE3120_2022", "max_forks_repo_head_hexsha": "52cd589c4cfcb0dda357c326cc60c2951cedca3b", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 53.7233606557, "max_line_length": 1293, "alphanum_fraction": 0.6296677728, "converted": true, "num_tokens": 5259, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3522017684487511, "lm_q2_score": 0.275129717879598, "lm_q1q2_score": 0.0969011731900004}} {"text": "\n\n##### Copyright 2020 The TensorFlow Authors.\n\n\n```\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n```\n\n# MNIST classification\n\n\n \n \n \n \n
\n View on TensorFlow.org\n \n Run in Google Colab\n \n View source on GitHub\n \n Download notebook\n
\n\nThis tutorial builds a quantum neural network (QNN) to classify a simplified version of MNIST, similar to the approach used in Farhi et al. The performance of the quantum neural network on this classical data problem is compared with a classical neural network.\n\n## Setup\n\n\n```\n!pip install tensorflow==2.3.1\n```\n\n Collecting tensorflow==2.3.1\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/eb/18/374af421dfbe74379a458e58ab40cf46b35c3206ce8e183e28c1c627494d/tensorflow-2.3.1-cp37-cp37m-manylinux2010_x86_64.whl (320.4MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 320.4MB 50kB/s \n \u001b[?25hRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (0.12.0)\n Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (1.1.2)\n Requirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (0.3.3)\n Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (1.15.0)\n Collecting numpy<1.19.0,>=1.16.0\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/d6/c6/58e517e8b1fb192725cfa23c01c2e60e4e6699314ee9684a1c5f5c9b27e1/numpy-1.18.5-cp37-cp37m-manylinux1_x86_64.whl (20.1MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20.1MB 1.3MB/s \n \u001b[?25hRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (3.3.0)\n Collecting tensorflow-estimator<2.4.0,>=2.3.0\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/e9/ed/5853ec0ae380cba4588eab1524e18ece1583b65f7ae0e97321f5ff9dfd60/tensorflow_estimator-2.3.0-py2.py3-none-any.whl (459kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 460kB 47.3MB/s \n \u001b[?25hRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (1.12.1)\n Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (0.36.2)\n Requirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (0.2.0)\n Requirement already satisfied: tensorboard<3,>=2.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (2.4.1)\n Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (2.10.0)\n Requirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (1.32.0)\n Requirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (1.6.3)\n Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (3.12.4)\n Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow==2.3.1) (1.1.0)\n Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in 
/usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.4.4)\n Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.3.4)\n Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2.23.0)\n Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.28.1)\n Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.0.1)\n Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (56.0.0)\n Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.8.0)\n Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.3.0)\n Requirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.10.1)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.24.3)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2020.12.5)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2.10)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.0.4)\n Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.2.8)\n Requirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3.6\" in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (4.7.2)\n Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (4.2.1)\n Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.1.0)\n Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.4.1)\n Requirement already satisfied: typing-extensions>=3.6.4; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.7.4.3)\n Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.4.8)\n \u001b[31mERROR: datascience 0.10.6 has requirement 
folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\n \u001b[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.\u001b[0m\n Installing collected packages: numpy, tensorflow-estimator, tensorflow\n Found existing installation: numpy 1.19.5\n Uninstalling numpy-1.19.5:\n Successfully uninstalled numpy-1.19.5\n Found existing installation: tensorflow-estimator 2.4.0\n Uninstalling tensorflow-estimator-2.4.0:\n Successfully uninstalled tensorflow-estimator-2.4.0\n Found existing installation: tensorflow 2.4.1\n Uninstalling tensorflow-2.4.1:\n Successfully uninstalled tensorflow-2.4.1\n Successfully installed numpy-1.18.5 tensorflow-2.3.1 tensorflow-estimator-2.3.0\n\n\n\n\nInstall TensorFlow Quantum:\n\n\n```\n!pip install tensorflow-quantum\n```\n\n Collecting tensorflow-quantum\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/53/02/878b2d4e7711f5c7f8dff9ff838e8ed84d218a359154ce06c7c01178a125/tensorflow_quantum-0.4.0-cp37-cp37m-manylinux2010_x86_64.whl (5.9MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.9MB 4.2MB/s \n \u001b[?25hCollecting sympy==1.5\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/4d/a7/25d5d6b3295537ab90bdbcd21e464633fb4a0684dd9a065da404487625bb/sympy-1.5-py2.py3-none-any.whl (5.6MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.6MB 26.4MB/s \n \u001b[?25hCollecting cirq==0.9.1\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/18/05/39c24828744b91f658fd1e5d105a9d168da43698cfaec006179c7646c71c/cirq-0.9.1-py3-none-any.whl (1.6MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.6MB 45.9MB/s \n \u001b[?25hRequirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.7/dist-packages (from sympy==1.5->tensorflow-quantum) (1.2.1)\n Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (3.7.4.3)\n Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (1.1.5)\n Requirement already satisfied: networkx~=2.4 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (2.5.1)\n Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (1.4.1)\n Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (1.26.3)\n Requirement already satisfied: protobuf~=3.12.0 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (3.12.4)\n Requirement already satisfied: numpy~=1.16 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (1.18.5)\n Requirement already satisfied: sortedcontainers~=2.0 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (2.3.0)\n Requirement already satisfied: matplotlib~=3.0 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) 
(3.2.2)\n Collecting freezegun~=0.3.15\n Downloading https://files.pythonhosted.org/packages/17/5d/1b9d6d3c7995fff473f35861d674e0113a5f0bd5a72fe0199c3f254665c7/freezegun-0.3.15-py2.py3-none-any.whl\n Requirement already satisfied: requests~=2.18 in /usr/local/lib/python3.7/dist-packages (from cirq==0.9.1->tensorflow-quantum) (2.23.0)\n Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->cirq==0.9.1->tensorflow-quantum) (2.8.1)\n Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->cirq==0.9.1->tensorflow-quantum) (2018.9)\n Requirement already satisfied: decorator<5,>=4.3 in /usr/local/lib/python3.7/dist-packages (from networkx~=2.4->cirq==0.9.1->tensorflow-quantum) (4.4.2)\n Requirement already satisfied: google-auth<2.0dev,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (1.28.1)\n Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (56.0.0)\n Requirement already satisfied: six>=1.13.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (1.15.0)\n Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (20.9)\n Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (1.53.0)\n Requirement already satisfied: grpcio<2.0dev,>=1.29.0; extra == \"grpc\" in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (1.32.0)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib~=3.0->cirq==0.9.1->tensorflow-quantum) (2.4.7)\n Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib~=3.0->cirq==0.9.1->tensorflow-quantum) (0.10.0)\n Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib~=3.0->cirq==0.9.1->tensorflow-quantum) (1.3.1)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests~=2.18->cirq==0.9.1->tensorflow-quantum) (2020.12.5)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests~=2.18->cirq==0.9.1->tensorflow-quantum) (1.24.3)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests~=2.18->cirq==0.9.1->tensorflow-quantum) (3.0.4)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests~=2.18->cirq==0.9.1->tensorflow-quantum) (2.10)\n Requirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3.6\" in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (4.7.2)\n Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (0.2.8)\n Requirement already satisfied: 
cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (4.2.1)\n Requirement already satisfied: pyasn1>=0.1.3 in /usr/local/lib/python3.7/dist-packages (from rsa<5,>=3.1.4; python_version >= \"3.6\"->google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->cirq==0.9.1->tensorflow-quantum) (0.4.8)\n Installing collected packages: sympy, freezegun, cirq, tensorflow-quantum\n Found existing installation: sympy 1.7.1\n Uninstalling sympy-1.7.1:\n Successfully uninstalled sympy-1.7.1\n Successfully installed cirq-0.9.1 freezegun-0.3.15 sympy-1.5 tensorflow-quantum-0.4.0\n\n\nNow import TensorFlow and the module dependencies:\n\n\n```\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\nimport seaborn as sns\nimport collections\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\n```\n\n## 1. Load the data\n\nIn this tutorial you will build a binary classifier to distinguish between the digits 3 and 6, following Farhi et al. This section covers the data handling that:\n\n- Loads the raw data from Keras.\n- Filters the dataset to only 3s and 6s.\n- Downscales the images so they fit can fit in a quantum computer.\n- Removes any contradictory examples.\n- Converts the binary images to Cirq circuits.\n- Converts the Cirq circuits to TensorFlow Quantum circuits. \n\n### 1.1 Load the raw data\n\nLoad the MNIST dataset distributed with Keras. \n\n\n```\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n\n# Rescale the images from [0,255] to the [0.0,1.0] range.\nx_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0\n\nprint(\"Number of original training examples:\", len(x_train))\nprint(\"Number of original test examples:\", len(x_test))\n```\n\n Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n 11493376/11490434 [==============================] - 0s 0us/step\n Number of original training examples: 60000\n Number of original test examples: 10000\n\n\nFilter the dataset to keep just the 3s and 6s, remove the other classes. At the same time convert the label, `y`, to boolean: `True` for `3` and `False` for 6. \n\n\n```\ndef filter_36(x, y):\n keep = (y == 3) | (y == 6)\n x, y = x[keep], y[keep]\n y = y == 3\n return x,y\n```\n\n\n```\nx_train, y_train = filter_36(x_train, y_train)\nx_test, y_test = filter_36(x_test, y_test)\n\nprint(\"Number of filtered training examples:\", len(x_train))\nprint(\"Number of filtered test examples:\", len(x_test))\n```\n\n Number of filtered training examples: 12049\n Number of filtered test examples: 1968\n\n\nShow the first example:\n\n\n```\nprint(y_train[0])\n\nplt.imshow(x_train[0, :, :, 0])\nplt.colorbar()\n```\n\n### 1.2 Downscale the images\n\nAn image size of 28x28 is much too large for current quantum computers. 
Resize the image down to 4x4:\n\n\n```\nx_train_small = tf.image.resize(x_train, (4,4)).numpy()\nx_test_small = tf.image.resize(x_test, (4,4)).numpy()\n```\n\nAgain, display the first training example\u2014after resize: \n\n\n```\nprint(y_train[0])\n\nplt.imshow(x_train_small[0,:,:,0], vmin=0, vmax=1)\nplt.colorbar()\n```\n\n### 1.3 Remove contradictory examples\n\nFrom section *3.3 Learning to Distinguish Digits* of Farhi et al., filter the dataset to remove images that are labeled as belonging to both classes.\n\nThis is not a standard machine-learning procedure, but is included in the interest of following the paper.\n\n\n```\ndef remove_contradicting(xs, ys):\n mapping = collections.defaultdict(set)\n orig_x = {}\n # Determine the set of labels for each unique image:\n for x,y in zip(xs,ys):\n orig_x[tuple(x.flatten())] = x\n mapping[tuple(x.flatten())].add(y)\n \n new_x = []\n new_y = []\n for flatten_x in mapping:\n x = orig_x[flatten_x]\n labels = mapping[flatten_x]\n if len(labels) == 1:\n new_x.append(x)\n new_y.append(next(iter(labels)))\n else:\n # Throw out images that match more than one label.\n pass\n \n num_uniq_3 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)\n num_uniq_6 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)\n num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)\n\n print(\"Number of unique images:\", len(mapping.values()))\n print(\"Number of unique 3s: \", num_uniq_3)\n print(\"Number of unique 6s: \", num_uniq_6)\n print(\"Number of unique contradicting labels (both 3 and 6): \", num_uniq_both)\n print()\n print(\"Initial number of images: \", len(xs))\n print(\"Remaining non-contradicting unique images: \", len(new_x))\n \n return np.array(new_x), np.array(new_y)\n```\n\nThe resulting counts do not closely match the reported values, but the exact procedure is not specified.\n\nIt is also worth noting here that applying filtering contradictory examples at this point does not totally prevent the model from receiving contradictory training examples: the next step binarizes the data which will cause more collisions. \n\n\n```\nx_train_nocon, y_train_nocon = remove_contradicting(x_train_small, y_train)\n```\n\n Number of unique images: 10387\n Number of unique 3s: 4912\n Number of unique 6s: 5426\n Number of unique contradicting labels (both 3 and 6): 49\n \n Initial number of images: 12049\n Remaining non-contradicting unique images: 10338\n\n\n### 1.4 Encode the data as quantum circuits\n\nTo process images using a quantum computer, Farhi et al. proposed representing each pixel with a qubit, with the state depending on the value of the pixel. 
The first step is to convert to a binary encoding.\n\n\n```\nTHRESHOLD = 0.5\n\nx_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)\nx_test_bin = np.array(x_test_small > THRESHOLD, dtype=np.float32)\n```\n\nIf you were to remove contradictory images at this point you would be left with only 193, likely not enough for effective training.\n\n\n```\n_ = remove_contradicting(x_train_bin, y_train_nocon)\n```\n\n Number of unique images: 193\n Number of unique 3s: 80\n Number of unique 6s: 69\n Number of unique contradicting labels (both 3 and 6): 44\n \n Initial number of images: 10338\n Remaining non-contradicting unique images: 149\n\n\nThe qubits at pixel indices with values that exceed a threshold, are rotated through an $X$ gate.\n\n\n```\ndef convert_to_circuit(image):\n \"\"\"Encode truncated classical image into quantum datapoint.\"\"\"\n values = np.ndarray.flatten(image)\n qubits = cirq.GridQubit.rect(4, 4)\n circuit = cirq.Circuit()\n for i, value in enumerate(values):\n if value:\n circuit.append(cirq.X(qubits[i]))\n return circuit\n\n\nx_train_circ = [convert_to_circuit(x) for x in x_train_bin]\nx_test_circ = [convert_to_circuit(x) for x in x_test_bin]\n```\n\nHere is the circuit created for the first example (circuit diagrams do not show qubits with zero gates):\n\n\n```\nSVGCircuit(x_train_circ[0])\n```\n\n findfont: Font family ['Arial'] not found. Falling back to DejaVu Sans.\n\n\n\n\n\n \n\n \n\n\n\nCompare this circuit to the indices where the image value exceeds the threshold:\n\n\n```\nbin_img = x_train_bin[0,:,:,0]\nindices = np.array(np.where(bin_img)).T\nindices\n```\n\n\n\n\n array([[2, 2],\n [3, 1]])\n\n\n\nConvert these `Cirq` circuits to tensors for `tfq`:\n\n\n```\nx_train_tfcirc = tfq.convert_to_tensor(x_train_circ)\nx_test_tfcirc = tfq.convert_to_tensor(x_test_circ)\n```\n\n## 2. Quantum neural network\n\nThere is little guidance for a quantum circuit structure that classifies images. Since the classification is based on the expectation of the readout qubit, Farhi et al. propose using two qubit gates, with the readout qubit always acted upon. This is similar in some ways to running small a Unitary RNN across the pixels.\n\n### 2.1 Build the model circuit\n\nThis following example shows this layered approach. 
Each layer uses *n* instances of the same gate, with each of the data qubits acting on the readout qubit.\n\nStart with a simple class that will add a layer of these gates to a circuit:\n\n\n```\nclass CircuitLayerBuilder():\n def __init__(self, data_qubits, readout):\n self.data_qubits = data_qubits\n self.readout = readout\n \n def add_layer(self, circuit, gate, prefix):\n for i, qubit in enumerate(self.data_qubits):\n symbol = sympy.Symbol(prefix + '-' + str(i))\n circuit.append(gate(qubit, self.readout)**symbol)\n```\n\nBuild an example circuit layer to see how it looks:\n\n\n```\ndemo_builder = CircuitLayerBuilder(data_qubits = cirq.GridQubit.rect(4,1),\n readout=cirq.GridQubit(-1,-1))\n\ncircuit = cirq.Circuit()\ndemo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx')\nSVGCircuit(circuit)\n```\n\n\n\n\n \n\n \n\n\n\nNow build a two-layered model, matching the data-circuit size, and include the preparation and readout operations.\n\n\n```\ndef create_quantum_model():\n \"\"\"Create a QNN model circuit and readout operation to go along with it.\"\"\"\n data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.\n readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]\n circuit = cirq.Circuit()\n \n # Prepare the readout qubit.\n circuit.append(cirq.X(readout))\n circuit.append(cirq.H(readout))\n \n builder = CircuitLayerBuilder(\n data_qubits = data_qubits,\n readout=readout)\n\n # Then add layers (experiment by adding more).\n builder.add_layer(circuit, cirq.XX, \"xx1\")\n builder.add_layer(circuit, cirq.ZZ, \"zz1\")\n\n # Finally, prepare the readout qubit.\n circuit.append(cirq.H(readout))\n\n return circuit, cirq.Z(readout)\n```\n\n\n```\nmodel_circuit, model_readout = create_quantum_model()\n```\n\n### 2.2 Wrap the model-circuit in a tfq-keras model\n\nBuild the Keras model with the quantum components. This model is fed the \"quantum data\", from `x_train_circ`, that encodes the classical data. It uses a *Parametrized Quantum Circuit* layer, `tfq.layers.PQC`, to train the model circuit, on the quantum data.\n\nTo classify these images, Farhi et al. proposed taking the expectation of a readout qubit in a parameterized circuit. The expectation returns a value between 1 and -1.\n\n\n```\n# Build the Keras model.\nmodel = tf.keras.Sequential([\n # The input is the data-circuit, encoded as a tf.string\n tf.keras.layers.Input(shape=(), dtype=tf.string),\n # The PQC layer returns the expected value of the readout gate, range [-1,1].\n tfq.layers.PQC(model_circuit, model_readout),\n])\n```\n\nNext, describe the training procedure to the model, using the `compile` method.\n\nSince the the expected readout is in the range `[-1,1]`, optimizing the hinge loss is a somewhat natural fit. \n\nNote: Another valid approach would be to shift the output range to `[0,1]`, and treat it as the probability the model assigns to class `3`. This could be used with a standard a `tf.losses.BinaryCrossentropy` loss.\n\nTo use the hinge loss here you need to make two small adjustments. First convert the labels, `y_train_nocon`, from boolean to `[-1,1]`, as expected by the hinge loss.\n\n\n```\ny_train_hinge = 2.0*y_train_nocon-1.0\ny_test_hinge = 2.0*y_test-1.0\n```\n\nSecond, use a custiom `hinge_accuracy` metric that correctly handles `[-1, 1]` as the `y_true` labels argument. 
\n`tf.losses.BinaryAccuracy(threshold=0.0)` expects `y_true` to be a boolean, and so can't be used with hinge loss).\n\n\n```\ndef hinge_accuracy(y_true, y_pred):\n y_true = tf.squeeze(y_true) > 0.0\n y_pred = tf.squeeze(y_pred) > 0.0\n result = tf.cast(y_true == y_pred, tf.float32)\n\n return tf.reduce_mean(result)\n```\n\n\n```\nmodel.compile(\n loss=tf.keras.losses.Hinge(),\n optimizer=tf.keras.optimizers.Adam(),\n metrics=[hinge_accuracy])\n```\n\n\n```\nprint(model.summary())\n```\n\n Model: \"sequential\"\n _________________________________________________________________\n Layer (type) Output Shape Param # \n =================================================================\n pqc (PQC) (None, 1) 32 \n =================================================================\n Total params: 32\n Trainable params: 32\n Non-trainable params: 0\n _________________________________________________________________\n None\n\n\n### Train the quantum model\n\nNow train the model\u2014this takes about 45 min. If you don't want to wait that long, use a small subset of the data (set `NUM_EXAMPLES=500`, below). This doesn't really affect the model's progress during training (it only has 32 parameters, and doesn't need much data to constrain these). Using fewer examples just ends training earlier (5min), but runs long enough to show that it is making progress in the validation logs.\n\n\n```\nEPOCHS = 3\nBATCH_SIZE = 32\n\nNUM_EXAMPLES = len(x_train_tfcirc)\n```\n\n\n```\nx_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]\ny_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]\n```\n\nTraining this model to convergence should achieve >85% accuracy on the test set.\n\n\n```\nqnn_history = model.fit(\n x_train_tfcirc_sub, y_train_hinge_sub,\n batch_size=32,\n epochs=EPOCHS,\n verbose=1,\n validation_data=(x_test_tfcirc, y_test_hinge))\n\nqnn_results = model.evaluate(x_test_tfcirc, y_test)\n```\n\n Epoch 1/3\n 206/324 [==================>...........] - ETA: 3:31 - loss: 0.7359 - hinge_accuracy: 0.8359\n\nNote: The training accuracy reports the average over the epoch. The validation accuracy is evaluated at the end of each epoch.\n\n## 3. Classical neural network\n\nWhile the quantum neural network works for this simplified MNIST problem, a basic classical neural network can easily outperform a QNN on this task. After a single epoch, a classical neural network can achieve >98% accuracy on the holdout set.\n\nIn the following example, a classical neural network is used for for the 3-6 classification problem using the entire 28x28 image instead of subsampling the image. 
This easily converges to nearly 100% accuracy of the test set.\n\n\n```\ndef create_classical_model():\n # A simple model based off LeNet from https://keras.io/examples/mnist_cnn/\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Conv2D(32, [3, 3], activation='relu', input_shape=(28,28,1)))\n model.add(tf.keras.layers.Conv2D(64, [3, 3], activation='relu'))\n model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))\n model.add(tf.keras.layers.Dropout(0.25))\n model.add(tf.keras.layers.Flatten())\n model.add(tf.keras.layers.Dense(128, activation='relu'))\n model.add(tf.keras.layers.Dropout(0.5))\n model.add(tf.keras.layers.Dense(1))\n return model\n\n\nmodel = create_classical_model()\nmodel.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\nmodel.summary()\n```\n\n\n```\nmodel.fit(x_train,\n y_train,\n batch_size=128,\n epochs=1,\n verbose=1,\n validation_data=(x_test, y_test))\n\ncnn_results = model.evaluate(x_test, y_test)\n```\n\nThe above model has nearly 1.2M parameters. For a more fair comparison, try a 37-parameter model, on the subsampled images:\n\n\n```\ndef create_fair_classical_model():\n # A simple model based off LeNet from https://keras.io/examples/mnist_cnn/\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Flatten(input_shape=(4,4,1)))\n model.add(tf.keras.layers.Dense(2, activation='relu'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\n\nmodel = create_fair_classical_model()\nmodel.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\nmodel.summary()\n```\n\n\n```\nmodel.fit(x_train_bin,\n y_train_nocon,\n batch_size=128,\n epochs=20,\n verbose=2,\n validation_data=(x_test_bin, y_test))\n\nfair_nn_results = model.evaluate(x_test_bin, y_test)\n```\n\n## 4. Comparison\n\nHigher resolution input and a more powerful model make this problem easy for the CNN. While a classical model of similar power (~32 parameters) trains to a similar accuracy in a fraction of the time. One way or the other, the classical neural network easily outperforms the quantum neural network. 
For classical data, it is difficult to beat a classical neural network.\n\n\n```\nqnn_accuracy = qnn_results[1]\ncnn_accuracy = cnn_results[1]\nfair_nn_accuracy = fair_nn_results[1]\n\nsns.barplot([\"Quantum\", \"Classical, full\", \"Classical, fair\"],\n [qnn_accuracy, cnn_accuracy, fair_nn_accuracy])\n```\n", "meta": {"hexsha": "a619a3534b76e2ef26daedafc066c079119ce68e", "size": 76644, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Copy_of_mnist.ipynb", "max_stars_repo_name": "QDaria/QDaria.github.io", "max_stars_repo_head_hexsha": "f60d00270a651cceff47629edcee22c70d747185", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Copy_of_mnist.ipynb", "max_issues_repo_name": "QDaria/QDaria.github.io", "max_issues_repo_head_hexsha": "f60d00270a651cceff47629edcee22c70d747185", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Copy_of_mnist.ipynb", "max_forks_repo_name": "QDaria/QDaria.github.io", "max_forks_repo_head_hexsha": "f60d00270a651cceff47629edcee22c70d747185", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.0999281093, "max_line_length": 7930, "alphanum_fraction": 0.6299906059, "converted": true, "num_tokens": 9681, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4921881506183194, "lm_q2_score": 0.19682620128743877, "lm_q1q2_score": 0.09687552400489356}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n%matplotlib notebook\n\nimport matplotlib.pyplot as plt\nimport matplotlib.animation as animation\nimport numpy as np\nimport sympy as sym\n\nfrom ipywidgets import widgets, Layout\nfrom ipywidgets import interact\n\nfrom IPython.display import Latex, display, Markdown # For displaying Markdown and LaTeX code\n\nfrom matplotlib import patches\n```\n\n## Sistema di controllo dell'azimut di una antenna\n\nUn esempio di un sistema di controllo dell'azimut di una antenna \u00e8 mostrato schematicamente nella figura in basso a sinistra. L'obiettivo di questo sistema di controllo \u00e8 mantenere la posizione desiderata dell'antenna impostando l'angolo desiderato $\\theta_{ref}$ con il potenziometro di riferimento (RP). Lo schema a blocchi di questo sistema (mostrato nella figura in basso a destra) inizia quindi con il segnale $\\theta_{ref}$, che viene convertito in tensione $U_1$. La tensione $U_2$ viene quindi sottratta da $U_1$. $U_2$ \u00e8 l'uscita dal potenziometro di misurazione (MP), che fornisce le informazioni sull'angolo effettivo. La differenza di tensione $U_1-U_2$ rappresenta l'errore che ci dice quanto l'angolo effettivo differisce da quello desiderato. In base a questo errore il controller agisce sull'elettromotore che (tramite ingranaggi) fa ruotare l'antenna in modo da ridurre l'errore. $d_w$ \u00e8 un disturbo dovuto al vento che fa ruotare l'antenna in modo casuale.\n\n
\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n
Rappresentazione schematica del sistema di controllo dell'azimut di una antenna Diagramma a blocchi del sistema di controllo dell'azimut di una antenna
Legenda: RP = potenziometro di riferimento, MP = potenziometro di misurazione, dw = disturbo dovuto al vento.
\n\n---\n\n### Come usare questo notebook?\n\n- Spostare i cursori per modificare i valori dell'angolo azimutale dell'antenna desiderato ($\\theta_{ref}$), del disturbo dovuto al vento ($d_w$) e dei coefficienti di controllo proporzionale ($K_p$), integrale ($K_i$) e derivativo ($K_d$).\n\n- Premere i pulsanti per alternare tra il tipo di controller proporzionale (P), proporzionale-integrale (PI) e proporzionale-integrale-derivativo (PID).\n\n---\n\n### Note\n\n- La dimensione della freccia rossa sulla rappresentazione schematica dell'antenna \u00e8 proporzionale all'entit\u00e0 del disturbo dovuto al vento ($d_w$), mentre la direzione della freccia indica la direzione del disturbo.\n- La linea blu tratteggiata sulla rappresentazione schematica dell'antenna indica l'angolo effettivo.\n- La linea verde tratteggiata sulla rappresentazione schematica dell'antenna indica l'angolo desiderato.\n- La linea rossa tratteggiata sulla rappresentazione schematica dell'antenna indica l'angolo effettivo precedente.\n\n\u00c8 possibile selezionare tra due diverse opzioni per la visualizzazione dei risultati:\n1. Resettare la rappresentazione schematica quando si modifica il tipo di controller.\n2. Resettare il grafico quando viene modificato il tipo di controller.\n\n\n```python\n# define system constants\n_Kpot = 0.318\n\n_K1 = 100\n_a = 100\n_Km = 2.083\n_am = 1.71\n_Kg = 0.1\n_R = 8\n_Kt = 0.5\n_Tv = 200 #in milliseconds\n\n#set current theta and theta reference:\nth = [0,0,0,0,0,0]\nthref = [0,0,0,0,0,0]\n# disturbance:\nm = [0,0,0,0,0,0]\n#joined together (first theta reference, second disturbance, then theta measured):\nvariables = [thref, m, th]\n\n# variables of controller:\n_K = 1\n_taui = 1\n_taud = 1\n\n```\n\n\n```python\n# symbolic calculus:\ntaui, taud, K, s, z = sym.symbols('taui, taud, K, s, z')\n\n_alpha=0.1\n#controller:\nP = K\nI = K/(taui*s)\nD = K*taud*s/(_alpha*taud*s+1)\n\ndef make_model(controller):\n if controller == 'P':\n C = P\n elif controller == 'PI':\n C = P+I\n elif controller == 'PID':\n C = P+I+D\n else:\n print('Sistema di controllo non modellato')\n \n tf_s = C*_K1*_Km*_Kg*_Kpot/(s*(s+_a)*(s+_am)+C*_K1*_Km*_Kg*_Kpot)\n tf_s = tf_s.simplify()\n\n tf_z = tf_s.subs(s,2/(_Tv/1000)*(z-1)/(z+1))\n tf_z = tf_z.simplify()\n \n num = [sym.fraction(tf_z.factor())[0].expand().coeff(z, i) for i in reversed(range(1+sym.degree(sym.fraction(tf_z.factor())[0], gen=z)))]\n den = [sym.fraction(tf_z.factor())[1].expand().coeff(z, i) for i in reversed(range(1+sym.degree(sym.fraction(tf_z.factor())[1], gen=z)))]\n #print(num)\n #print(den)\n\n tf_sM = _Km*_Kg*_R*(s+_a)/(s*(s+_a)*(s+_am)*_Kt+C*_K1*_Km*_Kg*_Kpot*_Kt)\n \n tf_zM = tf_sM.subs(s,2/(_Tv/1000)*(z-1)/(z+1))\n tf_zM = tf_zM.simplify()\n num_M = [sym.fraction(tf_zM.factor())[0].expand().coeff(z, i) for i in reversed(range(1+sym.degree(sym.fraction(tf_zM.factor())[0], gen=z)))]\n #print(num_M)\n #print(den_M)\n \n #print('\\n........finished........')\n return sym.lambdify((K, taui, taud), [np.array(num), -np.array(num_M), -np.array(den)])\n\nz_transform_p = make_model('P')\nz_transform_pi = make_model('PI')\nz_transform_pid = make_model('PID')\n```\n\n\n```python\ndef calculate_next(z_transform):\n variables[-1][0] = 0 # set current to zero\n z_transform = z_transform(_K, _taui, _taud)\n \n temp = 0\n for i in range(len(z_transform)): # for every polynomial\n for j in range(len(z_transform[i])): # for every term in polynomial\n temp += z_transform[i][j] * variables[i][j]\n\n return temp / 
z_transform[-1][0]*(-1)\n```\n\n\n```python\nfig = plt.figure(figsize=(9.8, 4),num='Sistema di controllo dell\\'azimut di una antenna')\n# add axes\nax = fig.add_subplot(121)\ngraph = fig.add_subplot(122)\n \n#set current theta and theta reference:\nth = [0,0,0,0,0,0]\nthref = [1,0,0,0,0,0]\n# disturbance:\nm = [.1,0,0,0,0,0]\n#joined together (first theta reference, second disturbance, then theta measured):\nvariables = [thref, m, th]\n\n# variables of controller:\n_K = 20\n_taui = 10\n_taud = 1\n\nnew_flag_value = [True, 0] # flag for displaying old value of th, before th_ref was changed [flag, angle]\n\n#slider widgets:\nth_ref_widget = widgets.FloatSlider(value=variables[0][0],min=0.0,max=2*np.pi,step=.01,description=r'\\(\\theta_{ref} \\) [rad]',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.2f')\nm_widget = widgets.FloatSlider(value=variables[1][0],min=-.3,max=.3,step=.01,description=r'\\(d_{w} \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.2f')\nK_widget = widgets.FloatSlider(value=_K,min=0.0,max=40,step=.1,description=r'\\(K_p \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.1f')\ntaui_widget = widgets.FloatSlider(value=_taui,min=0.01,max=60,step=.01,description=r'\\(K_i \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.2f')\ntaud_widget = widgets.FloatSlider(value=_taud,min=0.0,max=5,step=.1,description=r'\\(K_d \\)',\n disabled=False,continuous_update=True,orientation='horizontal',readout=True,readout_format='.2f')\n#interact(set_coefficients, setK=K_widget, setthref=th_ref_widget, setm=m_widget, settaui=taui_widget, settaud=taud_widget)\n\n#checkboxes\n#checkbox_reset_antenna = widgets.Checkbox(value=False, description='Reset schematic representation of antenna when type of controller is changed', disabled=False)\n#checkbox_reset_graph = widgets.Checkbox(value=False, description='Reset graph when type of controller is changed', disabled=False)\n\ncheckbox_reset_antenna = widgets.Checkbox(value=False, disabled=False, layout=Layout(width='100px'))\nlabel_scheme = widgets.Label('Resetta la rappresentazione schematica dell\\'antenna quando viene cambiato il tipo di controller', layout=Layout(width='600px'))\nbox1 = widgets.HBox([checkbox_reset_antenna, label_scheme])\n \ncheckbox_reset_graph = widgets.Checkbox(value=False, disabled=False, layout=Layout(width='100px'))\nlabel_graph = widgets.Label('Resetta il grafico temporale quando viene cambiato il tipo di controller', layout=Layout(width='500px'))\nbox2 = widgets.HBox([checkbox_reset_graph, label_graph])\n\nstyle = {'description_width': 'initial'}\n\n#buttons:\ndef buttons_clicked(event):\n global controller_type, equation, list_th, list_th_ref, list_time\n controller_type = buttons.options[buttons.index]\n if controller_type =='P':\n taui_widget.disabled=True\n taud_widget.disabled=True\n equation = '$Kp$'\n if controller_type =='PI':\n taui_widget.disabled=False\n taud_widget.disabled=True\n equation = '$Kp\\,(1+\\dfrac{1}{T_{i}\\,s})$'\n if controller_type =='PID':\n taui_widget.disabled=False\n taud_widget.disabled=False\n equation = '$Kp\\,(1+\\dfrac{1}{T_{i}\\,s}+\\dfrac{T_{d}\\,s}{a\\,T_{d}\\,s+1})$'\n if checkbox_reset_antenna.value:\n #reset values to zero:\n for i in range(len(variables)):\n for j in range(1, len(variables[i])):\n variables[i][j] = 0\n variables[-1][0] = 0\n if checkbox_reset_graph.value:\n list_th = 
[]\n list_th_ref = []\n list_time = []\n \nbuttons = widgets.ToggleButtons(\n options=['P', 'PI', 'PID'],\n description='Seleziona il tipo di controller:',\n disabled=False,\n style=style)\nbuttons.observe(buttons_clicked)\n\n\n#updating values\ndef set_values(event):\n global _K, _taui, _taud\n if event['name'] != 'value':\n return\n if th_ref_widget.value != variables[0][0] and not new_flag_value[0]:\n new_flag_value[0] = True\n new_flag_value[1] = variables[-1][0]\n \n variables[0][0] = th_ref_widget.value\n variables[1][0] = m_widget.value\n _K = K_widget.value\n _taui = taui_widget.value\n _taud = taud_widget.value\nth_ref_widget.observe(set_values)\nm_widget.observe(set_values)\nK_widget.observe(set_values)\ntaui_widget.observe(set_values)\ntaud_widget.observe(set_values)\n\n#displaying widgets:\ndisplay(buttons)\nvbox1 = widgets.VBox([th_ref_widget, m_widget, K_widget, taui_widget, taud_widget])\nvbox2 = widgets.VBox([box1, box2])\nhbox = widgets.HBox([vbox1, vbox2])\ndisplay(hbox)\n\n#setting at start:\ncontroller_type = 'P'\ntaui_widget.disabled=True\ntaud_widget.disabled=True\nequation = '$Kp$'\nset_values({'name':'value'})\n\n#lists for graph in time:\nlist_time = []\nlist_th = []\nlist_th_ref = []\n\n#previous th before change of th_ref:\nprev_th = 0\n\ncycles_flag = True\n\ndef update_figure(i_time):\n global cycles_flag, variables, _K, controller_type, equation\n \n if cycles_flag == True:\n cycles_flag = False\n return\n \n if controller_type == 'P':\n th = calculate_next(z_transform_p)\n elif controller_type == 'PI':\n th = calculate_next(z_transform_pi)\n elif controller_type == 'PID':\n th = calculate_next(z_transform_pid)\n variables[-1][0] = th\n \n # save variables for next time step:\n for i in range(len(variables)):\n for j in reversed(range(len(variables[i])-1)):\n variables[i][j+1] = variables[i][j]\n\n list_time.append((i_time+1)*_Tv/1000)\n list_th.append(th)\n list_th_ref.append(variables[0][0])\n \n #plot:\n ax.clear()\n ax.plot([-1.5, 1.5, 1.5, -1.5], [-1.5, -1.5, 1.5, 1.5], ',', color='b')\n \n #plot line:\n ax.plot([np.cos(th)*-.5, np.cos(th)*1.5], [np.sin(th)*-.5, np.sin(th)*1.5], 'b--', linewidth=.7, alpha=.7)\n \n #plot antenna:\n center1 = 1\n center2 = 3\n d1 = 2.2\n d2 = 5.5\n x1 = center1*np.cos(th)\n y1 = center1*np.sin(th)\n x2 = center2*np.cos(th)\n y2 = center2*np.sin(th)\n arc1 = patches.Arc((x1, y1), d1, d1,\n angle=th/np.pi*180+180, theta1=-58, theta2=58, linewidth=2, color='black', alpha=.7)\n arc2 = patches.Arc((x2, y2), d2, d2,\n angle=th/np.pi*180+180, theta1=-20, theta2=20, linewidth=2, color='black', alpha=.7)\n ax.add_patch(arc1)\n ax.add_patch(arc2)\n if m_widget.value > 0:\n ax.plot(0, 0, 'r', alpha=.1, marker=r'$\\circlearrowright$',ms=150*m_widget.value)\n elif m_widget.value < 0:\n ax.plot(0, 0, 'r', alpha=.1, marker=r'$\\circlearrowleft$',ms=-150*m_widget.value)\n ax.set_title('Rappresentazione schematica dell\\'antenna')\n\n \n #plot direction of antenna before thref change\n if abs(variables[0][0] - th) < 0.03:\n new_flag_value[0] = False\n if new_flag_value[0]:\n ax.plot([0,np.cos(new_flag_value[1])], [0, np.sin(new_flag_value[1])], 'r-.', alpha=.3, linewidth=0.5)\n #plot desired direction of antenna\n ax.plot([0,np.cos(variables[0][0])], [0, np.sin(variables[0][0])], 'g-.', alpha=.7, linewidth=0.7)\n \n ax.text(-1, 1.3, 'angolo attuale: %.2f rad' %th)\n ax.text(-1, -1.3, 'Tipo di controller:')\n ax.text(-1, -1.6, equation)\n \n ax.set_aspect('equal', adjustable='datalim')\n ax.set_xlim(-1.5,1.5)\n ax.set_ylim(-1.5,1.5)\n 
ax.axis('off')\n \n graph.clear()\n graph.plot(list_time, list_th_ref, 'g', label='angolo desiderato')\n graph.plot(list_time, list_th, 'b', label='angolo attuale') \n graph.set_xlabel('$t$ [s]')\n graph.set_ylabel('$\\\\theta$ [rad]')\n graph.legend(loc=4, fontsize=8)\n graph.set_title('Azimut vs. tempo')\n \n plt.show()\n\nani = animation.FuncAnimation(fig, update_figure, interval=_Tv)\n```\n\n\n \n\n\n\n\n\n\n\n ToggleButtons(description='Seleziona il tipo di controller:', options=('P', 'PI', 'PID'), style=ToggleButtonsS\u2026\n\n\n\n HBox(children=(VBox(children=(FloatSlider(value=1.0, description='\\\\(\\\\theta_{ref} \\\\) [rad]', max=6.283185307\u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7cc0a913d0b5c33ba15998ef5376d0a9b4bbadcd", "size": 132087, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_it/examples/02/.ipynb_checkpoints/TD-02-Sistema-di-controllo-della-posizione-azimutale-di-una-antenna-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_it/examples/02/.ipynb_checkpoints/TD-02-Sistema-di-controllo-della-posizione-azimutale-di-una-antenna-checkpoint.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_it/examples/02/.ipynb_checkpoints/TD-02-Sistema-di-controllo-della-posizione-azimutale-di-una-antenna-checkpoint.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 99.388261851, "max_line_length": 76723, "alphanum_fraction": 0.7737778888, "converted": true, "num_tokens": 4183, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4225046202709847, "lm_q2_score": 0.22815650216092534, "lm_q1q2_score": 0.09639717630785785}} {"text": "```javascript\n%%javascript\n MathJax.Hub.Config({\n TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n });\n```\n\n\n \n\n\n\n```python\nfrom IPython.display import HTML\n\nHTML('''\n
''')\n```\n\n\n\n\n\n
\n\n\n\n\n```python\nfrom IPython.display import HTML\n\nHTML('''\n\n\n\n''')\n```\n\n\n\n\n\n\n\n\n\n\n\n\n# Benchmark Problem 7: MMS Allen-Cahn\n\n\n```python\nfrom IPython.display import HTML\n\nHTML('''{% include jupyter_benchmark_table.html num=\"[7]\" revision=0 %}''')\n```\n\n\n\n\n{% include jupyter_benchmark_table.html num=\"[7]\" revision=0 %}\n\n\n\n

Table of Contents

\n\n\n\nSee the journal publication entitled [\"Benchmark problems for numerical implementations of phase field models\"][benchmark_paper] for more details about the benchmark problems. Furthermore, read [the extended essay][benchmarks] for a discussion about the need for benchmark problems.\n\n[benchmarks]: ../\n[benchmark_paper]: http://dx.doi.org/10.1016/j.commatsci.2016.09.022\n\n# Overview\n\nThe Method of Manufactured Solutions (MMS) is a powerful technique for verifying the accuracy of a simulation code. In the MMS, one picks a desired solution to the problem at the outset, the \"manufactured solution\", and then determines the governing equation that will result in that solution. With the exact analytical form of the solution in hand, when the governing equation is solved using a particular simulation code, the deviation from the expected solution can be determined exactly. This deviation can be converted into an error metric to rigously quantify the error for a calculation. This error can be used to determine the order of accuracy of the simulation results to verify simulation codes. It can also be used to compare the computational efficiency of different codes or different approaches for a particular code at a certain level of error. Furthermore, the spatial/temporal distribution can give insight into the conditions resulting in the largest error (high gradients, changes in mesh resolution, etc.).\n\nAfter choosing a manufactured solution, the governing equation must be modified to force the solution to equal the manufactured solution. This is accomplished by taking the nominal equation that is to be solved (e.g. Allen-Cahn equation, Cahn-Hilliard equation, Fick's second law, Laplace equation) and adding a source term. This source term is determined by plugging the manufactured solution into the nominal governing equation and setting the source term equal to the residual. Thus, the manufactured solution satisfies the MMS governing equation (the nominal governing equation plus the source term). A more detailed discussion of MMS can be found in [the report by Salari and Knupp][mms_report].\n\nIn this benchmark problem, the objective is to use the MMS to rigorously verify phase field simulation codes and then provide a basis of comparison for the computational performance between codes and for various settings for a single code, as discussed above. To this end, the benchmark problem was chosen as a balance between two factors: simplicity, to minimize the development effort required to solve the benchmark, and transferability to a real phase field system of physical interest. \n\n[mms_report]: http://prod.sandia.gov/techlib/access-control.cgi/2000/001444.pdf\n\n# Governing equation and manufactured solution\nFor this benchmark problem, we use a simple Allen-Cahn equation as the governing equation\n\n$$\\begin{equation}\n\\frac{\\partial \\eta}{\\partial t} = - \\left[ 4 \\eta \\left(\\eta - 1 \\right) \\left(\\eta-\\frac{1}{2} \\right) - \\kappa \\nabla^2 \\eta \\right] + S(x,y,t) \n\\end{equation}$$\n\nwhere $S(x,y,t)$ is the MMS source term and $\\kappa$ is a constant parameter (the gradient energy coefficient). 
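\n\nAs a side note, the source term does not have to be derived by hand: for any chosen manufactured solution, it can be generated symbolically as the residual of the nominal equation. The sketch below uses SymPy with a simple placeholder profile (not the benchmark's manufactured solution, which is specified next) purely to illustrate the procedure:\n\n\n```python\nimport sympy\n\nx, y, t, kappa = sympy.symbols('x y t kappa')\n\n# placeholder manufactured solution (illustration only)\neta = sympy.exp(-t) * sympy.sin(x) * sympy.cos(y)\n\n# residual of the nominal Allen-Cahn equation defines the source term:\n# S = d(eta)/dt + 4*eta*(eta - 1)*(eta - 1/2) - kappa * laplacian(eta)\nS = (sympy.diff(eta, t)\n     + 4 * eta * (eta - 1) * (eta - sympy.Rational(1, 2))\n     - kappa * (sympy.diff(eta, x, 2) + sympy.diff(eta, y, 2)))\n\nprint(sympy.simplify(S))\n```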
\n\nThe manufactured solution, $\\eta_{sol}$, is a hyperbolic tangent function, shifted to vary between 0 and 1, with the $y$ position of the middle of the interface ($\\eta_{sol}=0.5$) given by the function $\\alpha(x,t)$:\n\n$$\\begin{equation}\n\\eta_{sol}(x,y,t) = \\frac{1}{2}\\left[ 1 - \\tanh\\left( \\frac{y-\\alpha(x,t)}{\\sqrt{2 \\kappa}} \\right) \\right] \n\\end{equation}$$\n\n$$\\begin{equation}\n\\alpha(x,t) = \\frac{1}{4} + A_1 t \\sin\\left(B_1 x \\right) + A_2 \\sin \\left(B_2 x + C_2 t \\right)\n\\end{equation}$$\n\nwhere $A_1$, $B_1$, $A_2$, $B_2$, and $C_2$ are constant parameters. \n\nThis manufactured solution is an equilibrium solution of the governing equation, when $S(x,y,t)=0$ and $\\alpha(x,t)$ is constant. The closeness of this manufactured solution to a solution of the nominal governing equation increases the likelihood that the behavior of simulation codes when solving this benchmark problem is representative of the solution of the regular Allen-Cahn equation (i.e. without the source term). The form of $\\alpha(x,t)$ was chosen to yield complex behavior while still retaining a (somewhat) simple functional form. The two spatial sinusoidal terms introduce two controllable length scales to the interfacial shape. Summing them gives a \"beat\" pattern with a period longer than the period of either individual term, permitting a domain size that is larger than the wavelength of the sinusoids without a repeating pattern. The temporal sinusoidal term introduces a controllable time scale to the interfacial shape in addition to the phase transformation time scale, while the linear temporal dependence of the other term ensures that the sinusoidal term can go through multiple periods without $\\eta_{sol}$ repeating itself.\n\nInserting the manufactured solution into the governing equation and solving for $S(x,y,t)$ yields:\n\n$$\\begin{equation}\nS(x,y,t) = \\frac{\\text{sech}^2 \\left[ \\frac{y-\\alpha(x,t)}{\\sqrt{2 \\kappa}} \\right]}{4 \\sqrt{\\kappa}} \\left[-2\\sqrt{\\kappa} \\tanh \\left[\\frac{y-\\alpha(x,t)}{\\sqrt{2 \\kappa}} \\right] \\left(\\frac{\\partial \\alpha(x,t)}{\\partial x} \\right)^2+\\sqrt{2} \\left[ \\frac{\\partial \\alpha(x,t)}{\\partial t}-\\kappa \\frac{\\partial^2 \\alpha(x,t)}{\\partial x^2} \\right] \\right]\n\\end{equation}$$\n\nwhere $\\alpha(x,t)$ is given above and where:\n\n$$\\begin{equation}\n\\frac{\\partial \\alpha(x,t)}{\\partial x} = A_1 B_1 t \\cos\\left(B_1 x\\right) + A_2 B_2 \\cos \\left(B_2 x + C_2 t \\right)\n\\end{equation}$$\n\n$$\\begin{equation}\n\\frac{\\partial^2 \\alpha(x,t)}{\\partial x^2} = -A_1 B_1^2 t \\sin\\left(B_1 x\\right) - A_2 B_2^2 \\sin \\left(B_2 x + C_2 t \\right)\n\\end{equation}$$\n\n$$\\begin{equation}\n\\frac{\\partial \\alpha(x,t)}{\\partial t} = A_1 \\sin\\left(B_1 x\\right) + A_2 C_2 \\cos \\left(B_2 x + C_2 t \\right)\n\\end{equation}$$\n\n** *N.B.*: Don't transcribe these equations. Please download the appropriate files from the [Appendix](#Appendix) **.\n\n# Domain geometry, boundary conditions, initial conditions, and stopping condition\nThe domain geometry is a rectangle that spans [0, 1] in $x$ and [0, 0.5] in $y$. This elongated domain was chosen to allow multiple peaks and valleys in $\\eta_{sol}$ without stretching the interface too much in the $y$ direction (which causes the thickness of the interface to change) or having large regions where $\\eta_{sol}$ never deviates from 0 or 1. 
Periodic boundary conditions are applied along the $x = 0$ and the $x = 1$ boundaries to accommodate the periodicity of $\\alpha(x,t)$. Dirichlet boundary conditions of $\\eta$ = 1 and $\\eta$ = 0 are applied along the $y = 0$ and the $y = 0.5$ boundaries, respectively. These boundary conditions are chosen to be consistent with $\\eta_{sol}(x,y,t)$. The initial condition is the manufactured solution at $t = 0$:\n\n$$\n\\begin{equation}\n\\eta_{sol}(x,y,0) = \\frac{1}{2}\\left[ 1 - \\tanh\\left( \\frac{y-\\left(\\frac{1}{4}+A_2 \\sin(B_2 x) \\right)}{\\sqrt{2 \\kappa}} \\right) \\right] \n\\end{equation}\n$$\n\nThe stopping condition for all calculations is when $t = 8$ time units, which was chosen to let $\\alpha(x,t)$ evolve substantially, while still being slower than the characteristic time for the phase evolution (determined by the CFL condition for a uniform mesh with a reasonable level of resolution of $\\eta_{sol}$).\n\n# Parameter values\nThe nominal parameter values for the governing equation and manufactured solution are given below. The value of $\\kappa$ will change in Part (b) in the following section and the values of $\\kappa$ and $C_2$ will change in Part (c).\n\n| Parameter | Value |\n|-----------|-------|\n| $\\kappa$ | 0.0004|\n| $A_1$ | 0.0075|\n| $B_1$ | $8.0 \\pi$ |\n| $A_2$ | 0.03 |\n| $B_2$ | $22.0 \\pi$ |\n| $C_2$ | $0.0625 \\pi$|\n\n# Benchmark simulation instructions\nThis section describes three sets of tests to conduct using the MMS problem specified above. The primary purpose of the first test is to provide a computationally inexpensive problem to verify a simulation code. The second and third tests are more computationally demanding and are primarily designed to serve as a basis for performance comparisons.\n\n## Part (a)\nThe objective of this test is to verify the accuracy of your simulation code in both time and space. Here, we make use of convergence tests, where either the mesh size (or grid point spacing) or the time step size is systematically changed to determine the response of the error to these quantities. Once a convergence test is completed, the order of accuracy can be calculated from the result. The order of accuracy can be compared to the theoretical order of accuracy for the numerical method employed in the simulation. If the two match (to a reasonable degree), then one can be confident that the simulation code is working as expected. The remainder of this subsection will give instructions for convergence tests for this MMS problem.\n\nImplement the MMS problem specified above using the simulation code of your choice. Perform a spatial convergence test by running the simulation for a variety of mesh sizes. For each simulation, determine the discrete $L_2$ norm of the error at $t=8$:\n\n$$\\begin{equation}\n L_2 = \\sqrt{\\sum\\limits_{x,y}\\left(\\eta^{t=8}_{x,y} - \\eta_{sol}(x,y,8)\\right)^2 \\Delta x \\Delta y}\n\\end{equation}$$\n\nFor all of these simulations, verify that the time step is small enough that any temporal error is much smaller than the total error. This can be accomplished by decreasing the time step until it has minimal effect on the error. Ensure that at least three simulation results have $L_2$ errors in the range $[5\\times10^{-3}, 1\\times10^{-4}]$, attempting to cover as much of that range as possible/practical. 
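\n\nAs a concrete illustration (not part of the benchmark specification), for a uniform mesh the discrete $L_2$ error, and the least-squares estimate of the order of accuracy used later in this part, can be computed along the following lines; the array and function names here are placeholders:\n\n```python\nimport numpy as np\n\ndef l2_error(eta_num, eta_exact, dx, dy):\n    # Discrete L2 norm of the error on a uniform mesh:\n    # eta_num is the simulated field at t = 8, eta_exact is eta_sol(x, y, 8)\n    # sampled at the same grid points, dx and dy are the grid spacings.\n    diff = eta_num - eta_exact\n    return np.sqrt(np.sum(diff**2) * dx * dy)\n\ndef order_of_accuracy(h, err):\n    # Least-squares slope p of log(err) versus log(h), as in the fit below.\n    p, b = np.polyfit(np.log(h), np.log(err), 1)\n    return p\n```\n\n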
The maximum and minimum errors in this target range roughly represent a poorly resolved simulation and a very well-resolved simulation, respectively.\n\nSave the effective element size, $h$, and the $L_2$ error for each simulation.\n[Archive this data](https://github.com/usnistgov/pfhub/issues/491) in a\nCSV or JSON file, using one column (or key) each for $h$ and $L_2$. \nCalculate the effective element size as the square root of the area of\nthe finest part of the mesh for nonuniform meshes. For irregular meshes\nwith continuous distributions of element sizes, approximate the effective\nelement size as the average of the square root of the area of the smallest\n5% of the elements. Then [submit your results on the PFHub website](https://pages.nist.gov/pfhub/simulations/upload_form/) as a 2D data set with the effective mesh size as the x-axis column and the $L_2$ error as the y-axis column.\n\nNext, confirm that the observed order of accuracy is approximately equal to the expected value. Calculate the order of accuracy, $p$, with a least squares fit of the following function:\n\n$$\\begin{equation}\n \\log(E)=p \\log(R) + b\n\\end{equation}$$\n\nwhere $E$ is the $L_2$ error, $R$ is the effective element size, and $b$ is an intercept. Deviations of \u00b10.2 or more from the theoretical value are to be expected (depending on the range of errors considered and other factors).\n\nFinally, perform a similar convergence test, but for the time step, systematically changing the time step and recording the $L_2$ error. Use a time step that does not vary over the course of any single simulation. Verify that the spatial discretization error is small enough that it does not substantially contribute to the total error. Once again, ensure that at least three simulations have $L_2$ errors in the range $[5\\times10^{-3}, 1\\times10^{-4}]$, attempting to cover as much of that range as possible/practical. [Archive the time step size and $L_2$ error](https://github.com/usnistgov/pfhub/issues/491) for each individual simulation in a CSV or JSON file. [Submit your results to the PFHub website](https://pages.nist.gov/pfhub/simulations/upload_form/) as a 2D data set with the time step size as the x-axis column and the $L_2$ error as the y-axis column. Confirm that the observed order of accuracy is approximately equal to the expected value.\n\n## Part (b)\nNow that your code has been verified in (a), the objective of this part is to determine the computational performance of your code at various levels of error. These results can then be used to objectively compare the performance between codes or settings within the same code. To make the problem more computationally demanding and stress solvers more than in (a), decrease $\\kappa$ by a factor of $256$ to $1.5625\\times10^{-6}$. This change will reduce the interfacial thickness by a factor of $16$.\n\nRun a series of simulations, attempting to optimize solver parameters (mesh, time step, tolerances, etc.) to minimize the required computational resources for at least three levels of $L_2$ error in the range $[5\\times10^{-3}, 1\\times10^{-5}]$. Use the same CPU and processor type for all simulations. For the best of these simulations, save the wall time (in seconds), number of computing cores, normalized computing cost (wall time in seconds $\\times$ number of cores $\\times$ nominal core speed $/$ 2 GHz), maximum memory usage, and $L_2$ error at $t=8$ for each individual simulation. 
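\n\nFor example (with purely illustrative numbers), a run that takes 1200 s of wall time on 4 cores with a nominal core speed of 3.0 GHz would have a normalized computing cost of\n\n$$\\begin{equation}\n1200\\ \\text{s} \\times 4 \\times \\frac{3.0\\ \\text{GHz}}{2\\ \\text{GHz}} = 7200.\n\\end{equation}$$\n\n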
[Archive this data](https://github.com/usnistgov/pfhub/issues/491) in a\nCSV or JSON file with one column (or key) for each of the quantities mentioned above. [Submit your results to the PFHub website](https://pages.nist.gov/pfhub/simulations/upload_form/) as two 2D data sets. For the first data set, use the $L_2$ error as the x-axis column and the normalized computational cost as the y-axis column. For the second data set, use the $L_2$ error as the x-axis column and the wall time as the y-axis column.\n\n## Part (c)\nThis final part is designed to stress time integrators even further by increasing the rate of change of $\\alpha(x,t)$. Increase $C_2$ to $0.5$. Keep $\\kappa = 1.5625\\times10^{-6}$ from (b).\n\nRepeat the process from (b), uploading the wall time, number of computing cores, processor speed, normalized computing cost, maximum memory usage, and $L_2$ error at $t=8$ to the PFHub website.\n\n# Submission Guidelines\n\n## Part (a) Guidelines\n\nTwo data items are required in the \"Data Files\" section of the [upload form]. The data items should be labeled as `spatial` and `temporal` in the `Short name of data` box. The 2D radio button should be checked and the columns corresponding to the x-axis (either $\\Delta t$ or $\\Delta x$) and the y-axis ($e_{L2}$) should be labeled correctly for each CSV file. The CSV file for the spatial data should have the form\n\n```\nmesh_size,L2_error\n0.002604167,2.55E-06\n0.00390625,6.26E-06\n...\n```\n\nand the CSV file for the temporal data should have the form\n\n```\ntime_step,L2_error\n5.00E-04,5.80162E-06\n4.00E-04,4.69709E-06\n...\n\n```\n\n\n## Parts (b) and (c) Guidelines\n\nTwo data items are required in the \"Data Files\" section of the [upload form]. The data items should be labeled as `cost` and `time` in the `Short name of data` box. The 2D radio button should be checked and the columns corresponding to the x-axis ($e_{L2}$) and the y-axis (either $F_{\\text{cost}}$ or $t_{\\text{wall}}$) should be labeled correctly for each CSV file. The CSV file for the cost data should have the form\n\n```\ncores,wall_time,memory,error,cost\n1,1.35,25800,0.024275131,1.755\n1,4.57,39400,0.010521502,5.941\n...\n```\n\nOnly one CSV file is required with the same link in both data sections.\n\n[upload form]: ../../simulations/upload_form/\n\n# Results\nResults from this benchmark problem are displayed on the [simulation result page]({{ site.baseurl }}/simulations) for different codes.\n\n# Feedback\nFeedback on this benchmark problem is appreciated. If you have questions, comments, or seek clarification, please contact the [CHiMaD phase field community]({{ site.baseurl }}/community/) through the [Gitter chat channel]({{ site.links.chat }}) or by [email]({{ site.baseurl }}/mailing_list/). If you found an error, please file an [issue on GitHub]({{ site.links.github }}/issues/new).\n\n# Appendix\n\n## Computer algebra systems\nRigorous verification of software frameworks using MMS requires posing the equation and manufacturing the solution with as much complexity as possible. This can be straightforward, but interesting equations produce complicated source terms. To streamline the MMS workflow, it is strongly recommended that you use a CAS such as SymPy, Maple, or Mathematica to generate source equations and turn them into executable code automatically. 
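\n\nAs a minimal sketch of this workflow (a toy example that is not part of the benchmark), one can manufacture a solution for the one-dimensional diffusion equation $u_t = u_{xx}$ and let a CAS produce the required source term; here SymPy is used and the manufactured solution is chosen arbitrarily:\n\n```python\nfrom sympy import symbols, exp, sin, diff, simplify\n\nx, t = symbols('x t')\nu = exp(-2*t) * sin(x)                    # manufactured solution, chosen freely\nS = simplify(diff(u, t) - diff(u, x, 2))  # residual of u_t - u_xx\nprint(S)  # -> -exp(-2*t)*sin(x); adding S as a source term forces u to be the solution\n```\n\n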
For accessibility, we will use [SymPy](http://www.sympy.org/), but so long as vector calculus is supported, CAS will do.\n\n## Source term\n\n\n```python\n# Sympy code to generate expressions for PFHub Problem 7 (MMS)\n\nfrom sympy import symbols, simplify\nfrom sympy import sin, cos, cosh, tanh, sqrt\nfrom sympy.physics.vector import divergence, gradient, ReferenceFrame, time_derivative\nfrom sympy.utilities.codegen import codegen\nfrom sympy.abc import kappa, S, x, y, t\n\n# Spatial coordinates: x=R[0], y=R[1], z=R[2]\nR = ReferenceFrame('R')\n\n# sinusoid amplitudes\nA1, A2 = symbols('A1 A2')\nB1, B2 = symbols('B1 B2')\nC2 = symbols('C2')\n\n# Define interface offset (alpha)\nalpha = 0.25 + A1 * t * sin(B1 * R[0]) \\\n + A2 * sin(B2 * R[0] + C2 * t)\n\n# Define the solution equation (eta)\neta = 0.5 * (1 - tanh((R[1] - alpha) / sqrt(2*kappa)))\n\n# Compute the source term from the equation of motion\nsource = simplify(time_derivative(eta, R)\n + 4 * eta * (eta - 1) * (eta - 1/2)\n - kappa * divergence(gradient(eta, R), R))\n\n# Replace R[i] with (x, y)\nalpha = alpha.subs({R[0]: x, R[1]: y})\neta = eta.subs({R[0]: x, R[1]: y})\neta0 = eta.subs(t, 0)\nsource = source.subs({R[0]: x, R[1]: y})\n\nprint(\"alpha =\", alpha, \"\\n\")\nprint(\"eta =\", eta, \"\\n\")\nprint(\"eta0 =\", eta0, \"\\n\")\nprint(\"S =\", source)\n```\n\n alpha = A1*t*sin(B1*x) + A2*sin(B2*x + C2*t) + 0.25 \n \n eta = -0.5*tanh(sqrt(2)*(-A1*t*sin(B1*x) - A2*sin(B2*x + C2*t) + y - 0.25)/(2*sqrt(kappa))) + 0.5 \n \n eta0 = -0.5*tanh(sqrt(2)*(-A2*sin(B2*x) + y - 0.25)/(2*sqrt(kappa))) + 0.5 \n \n S = -(tanh(sqrt(2)*(A1*t*sin(B1*x) + A2*sin(B2*x + C2*t) - y + 0.25)/(2*sqrt(kappa)))**2 - 1)*(0.5*sqrt(kappa)*((A1*B1*t*cos(B1*x) + A2*B2*cos(B2*x + C2*t))**2 + 1)*tanh(sqrt(2)*(A1*t*sin(B1*x) + A2*sin(B2*x + C2*t) - y + 0.25)/(2*sqrt(kappa))) - 0.5*sqrt(kappa)*tanh(sqrt(2)*(A1*t*sin(B1*x) + A2*sin(B2*x + C2*t) - y + 0.25)/(2*sqrt(kappa))) + 0.25*sqrt(2)*kappa*(A1*B1**2*t*sin(B1*x) + A2*B2**2*sin(B2*x + C2*t)) + 0.25*sqrt(2)*(A1*sin(B1*x) + A2*C2*cos(B2*x + C2*t)))/sqrt(kappa)\n\n\n## Code\n\n### Python\n\nCopy the first cell under Source Term directly into your program.\nFor a performance boost, convert the expressions into lambda functions:\n\n```python\nfrom sympy.utilities.lambdify import lambdify\n\napy = lambdify([x, y], alpha, modules='sympy')\nepy = lambdify([x, y], eta, modules='sympy')\nipy = lambdify([x, y], eta0, modules='sympy')\nSpy = lambdify([x, y], S, modules='sympy')\n```\n\n> Note: Click \"Code Toggle\" at the top of the page to see the Python expressions.\n\n### C\n\n\n```python\n[(c_name, code), (h_name, header)] = \\\ncodegen([(\"alpha\", alpha),\n (\"eta\", eta),\n (\"eta0\", eta),\n (\"S\", S)],\n language=\"C\",\n prefix=\"manufactured\",\n project=\"PFHub\")\nprint(\"manufactured.h:\\n\")\nprint(header)\nprint(\"\\nmanufactured.c:\\n\")\nprint(code)\n```\n\n manufactured.h:\n \n /******************************************************************************\n * Code generated with sympy 1.2 *\n * *\n * See http://www.sympy.org/ for more information. 
*\n * *\n * This file is part of 'PFHub' *\n ******************************************************************************/\n \n \n #ifndef PFHUB__MANUFACTURED__H\n #define PFHUB__MANUFACTURED__H\n \n double alpha(double A1, double A2, double B1, double B2, double C2, double t, double x);\n double eta(double A1, double A2, double B1, double B2, double C2, double kappa, double t, double x, double y);\n double eta0(double A1, double A2, double B1, double B2, double C2, double kappa, double t, double x, double y);\n double S(double S);\n \n #endif\n \n \n \n manufactured.c:\n \n /******************************************************************************\n * Code generated with sympy 1.2 *\n * *\n * See http://www.sympy.org/ for more information. *\n * *\n * This file is part of 'PFHub' *\n ******************************************************************************/\n #include \"manufactured.h\"\n #include \n \n double alpha(double A1, double A2, double B1, double B2, double C2, double t, double x) {\n \n double alpha_result;\n alpha_result = A1*t*sin(B1*x) + A2*sin(B2*x + C2*t) + 0.25;\n return alpha_result;\n \n }\n \n double eta(double A1, double A2, double B1, double B2, double C2, double kappa, double t, double x, double y) {\n \n double eta_result;\n eta_result = -0.5*tanh((1.0/2.0)*M_SQRT2*(-A1*t*sin(B1*x) - A2*sin(B2*x + C2*t) + y - 0.25)/sqrt(kappa)) + 0.5;\n return eta_result;\n \n }\n \n double eta0(double A1, double A2, double B1, double B2, double C2, double kappa, double t, double x, double y) {\n \n double eta0_result;\n eta0_result = -0.5*tanh((1.0/2.0)*M_SQRT2*(-A1*t*sin(B1*x) - A2*sin(B2*x + C2*t) + y - 0.25)/sqrt(kappa)) + 0.5;\n return eta0_result;\n \n }\n \n double S(double S) {\n \n double S_result;\n S_result = S;\n return S_result;\n \n }\n \n\n\n### Fortran\n\n\n```python\n[(f_name, code), (f_name, header)] = \\\ncodegen([(\"alpha\", alpha),\n (\"eta\", eta),\n (\"eta0\", eta),\n (\"S\", S)],\n language=\"f95\",\n prefix=\"manufactured\",\n project=\"PFHub\")\n\nprint(\"manufactured.f:\\n\")\nprint(code)\n```\n\n manufactured.f:\n \n !******************************************************************************\n !* Code generated with sympy 1.2 *\n !* *\n !* See http://www.sympy.org/ for more information. 
*\n !* *\n !* This file is part of 'PFHub' *\n !******************************************************************************\n \n REAL*8 function alpha(A1, A2, B1, B2, C2, t, x)\n implicit none\n REAL*8, intent(in) :: A1\n REAL*8, intent(in) :: A2\n REAL*8, intent(in) :: B1\n REAL*8, intent(in) :: B2\n REAL*8, intent(in) :: C2\n REAL*8, intent(in) :: t\n REAL*8, intent(in) :: x\n \n alpha = A1*t*sin(B1*x) + A2*sin(B2*x + C2*t) + 0.25d0\n \n end function\n \n REAL*8 function eta(A1, A2, B1, B2, C2, kappa, t, x, y)\n implicit none\n REAL*8, intent(in) :: A1\n REAL*8, intent(in) :: A2\n REAL*8, intent(in) :: B1\n REAL*8, intent(in) :: B2\n REAL*8, intent(in) :: C2\n REAL*8, intent(in) :: kappa\n REAL*8, intent(in) :: t\n REAL*8, intent(in) :: x\n REAL*8, intent(in) :: y\n \n eta = -0.5d0*tanh(0.70710678118654752d0*kappa**(-0.5d0)*(-A1*t*sin(B1*x &\n ) - A2*sin(B2*x + C2*t) + y - 0.25d0)) + 0.5d0\n \n end function\n \n REAL*8 function eta0(A1, A2, B1, B2, C2, kappa, t, x, y)\n implicit none\n REAL*8, intent(in) :: A1\n REAL*8, intent(in) :: A2\n REAL*8, intent(in) :: B1\n REAL*8, intent(in) :: B2\n REAL*8, intent(in) :: C2\n REAL*8, intent(in) :: kappa\n REAL*8, intent(in) :: t\n REAL*8, intent(in) :: x\n REAL*8, intent(in) :: y\n \n eta0 = -0.5d0*tanh(0.70710678118654752d0*kappa**(-0.5d0)*(-A1*t*sin(B1*x &\n ) - A2*sin(B2*x + C2*t) + y - 0.25d0)) + 0.5d0\n \n end function\n \n REAL*8 function S(S)\n implicit none\n REAL*8, intent(in) :: S\n \n S = S\n \n end function\n \n\n\n### Julia\n\n\n```python\n[(f_name, code)] = \\\ncodegen([(\"alpha\", alpha),\n (\"eta\", eta),\n (\"eta0\", eta),\n (\"S\", S)],\n language=\"julia\",\n prefix=\"manufactured\",\n project=\"PFHub\")\n\nprint(\"manufactured.jl:\\n\")\nprint(code)\n```\n\n manufactured.jl:\n \n # Code generated with sympy 1.2\n #\n # See http://www.sympy.org/ for more information.\n #\n # This file is part of 'PFHub'\n \n function alpha(A1, A2, B1, B2, C2, t, x)\n \n out1 = A1.*t.*sin(B1.*x) + A2.*sin(B2.*x + C2.*t) + 0.25\n \n return out1\n end\n \n function eta(A1, A2, B1, B2, C2, kappa, t, x, y)\n \n out1 = -0.5*tanh(sqrt(2)*(-A1.*t.*sin(B1.*x) - A2.*sin(B2.*x + C2.*t) + y - 0.25)./(2*sqrt(kappa))) + 0.5\n \n return out1\n end\n \n function eta0(A1, A2, B1, B2, C2, kappa, t, x, y)\n \n out1 = -0.5*tanh(sqrt(2)*(-A1.*t.*sin(B1.*x) - A2.*sin(B2.*x + C2.*t) + y - 0.25)./(2*sqrt(kappa))) + 0.5\n \n return out1\n end\n \n function S(S)\n \n out1 = S\n \n return out1\n end\n \n\n\n### Mathematica\n\n\n```python\nfrom sympy.printing import mathematica_code\n\nprint(\"alpha =\", mathematica_code(alpha), \"\\n\")\nprint(\"eta =\", mathematica_code(eta), \"\\n\")\nprint(\"eta0 =\", mathematica_code(eta0), \"\\n\")\nprint(\"S =\", mathematica_code(source), \"\\n\")\n```\n\n alpha = A1*t*Sin[B1*x] + A2*Sin[B2*x + C2*t] + 0.25 \n \n eta = -0.5*Tanh[(1/2)*2^(1/2)*(-A1*t*Sin[B1*x] - A2*Sin[B2*x + C2*t] + y - 0.25)/kappa^(1/2)] + 0.5 \n \n eta0 = -0.5*Tanh[(1/2)*2^(1/2)*(-A2*Sin[B2*x] + y - 0.25)/kappa^(1/2)] + 0.5 \n \n S = -(Tanh[(1/2)*2^(1/2)*(A1*t*Sin[B1*x] + A2*Sin[B2*x + C2*t] - y + 0.25)/kappa^(1/2)]^2 - 1)*(0.5*kappa^(1/2)*((A1*B1*t*Cos[B1*x] + A2*B2*Cos[B2*x + C2*t])^2 + 1)*Tanh[(1/2)*2^(1/2)*(A1*t*Sin[B1*x] + A2*Sin[B2*x + C2*t] - y + 0.25)/kappa^(1/2)] - 0.5*kappa^(1/2)*Tanh[(1/2)*2^(1/2)*(A1*t*Sin[B1*x] + A2*Sin[B2*x + C2*t] - y + 0.25)/kappa^(1/2)] + 0.25*2^(1/2)*kappa*(A1*B1^2*t*Sin[B1*x] + A2*B2^2*Sin[B2*x + C2*t]) + 0.25*2^(1/2)*(A1*Sin[B1*x] + A2*C2*Cos[B2*x + C2*t]))/kappa^(1/2) \n \n\n\n### Matlab\n\n\n```python\ncode = 
\\\ncodegen([(\"alpha\", alpha),\n (\"eta\", eta),\n (\"eta0\", eta),\n (\"S\", S)],\n language=\"octave\",\n project=\"PFHub\")\n\nprint(\"manufactured.nb:\\n\")\nfor f in code[0]:\n print(f)\n```\n\n manufactured.nb:\n \n alpha.m\n function out1 = alpha(A1, A2, B1, B2, C2, t, x)\n %ALPHA Autogenerated by sympy\n % Code generated with sympy 1.2\n %\n % See http://www.sympy.org/ for more information.\n %\n % This file is part of 'PFHub'\n \n out1 = A1.*t.*sin(B1.*x) + A2.*sin(B2.*x + C2.*t) + 0.25;\n \n end\n \n function out1 = eta(A1, A2, B1, B2, C2, kappa, t, x, y)\n \n out1 = -0.5*tanh(sqrt(2)*(-A1.*t.*sin(B1.*x) - A2.*sin(B2.*x + C2.*t) + y - 0.25)./(2*sqrt(kappa))) + 0.5;\n \n end\n \n function out1 = eta0(A1, A2, B1, B2, C2, kappa, t, x, y)\n \n out1 = -0.5*tanh(sqrt(2)*(-A1.*t.*sin(B1.*x) - A2.*sin(B2.*x + C2.*t) + y - 0.25)./(2*sqrt(kappa))) + 0.5;\n \n end\n \n function out1 = S(S)\n \n out1 = S;\n \n end\n \n\n", "meta": {"hexsha": "56be1dc2a24c40ef013d280bc05d7cd39e8809db", "size": 43490, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "benchmarks/benchmark7.ipynb", "max_stars_repo_name": "wd15/chimad-phase-field", "max_stars_repo_head_hexsha": "b8ead2ef666201b500033052d0a4efb55796c2da", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "benchmarks/benchmark7.ipynb", "max_issues_repo_name": "wd15/chimad-phase-field", "max_issues_repo_head_hexsha": "b8ead2ef666201b500033052d0a4efb55796c2da", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2015-02-06T16:45:52.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-12T17:39:56.000Z", "max_forks_repo_path": "benchmarks/benchmark7.ipynb", "max_forks_repo_name": "wd15/chimad-phase-field", "max_forks_repo_head_hexsha": "b8ead2ef666201b500033052d0a4efb55796c2da", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 47.4263904035, "max_line_length": 4297, "alphanum_fraction": 0.5609105542, "converted": true, "num_tokens": 9591, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.43782349911420193, "lm_q2_score": 0.22000709974589316, "lm_q1q2_score": 0.0963242782407142}} {"text": "# 10 Gauss\u7a4d\u5206, \u30ac\u30f3\u30de\u51fd\u6570, \u30d9\u30fc\u30bf\u51fd\u6570\n\n\u9ed2\u6728\u7384\n\n2018-06-21\n\n* Copyright 2018 Gen Kuroki\n* License: MIT https://opensource.org/licenses/MIT\n* Repository: https://github.com/genkuroki/Calculus\n\n\u3053\u306e\u30d5\u30a1\u30a4\u30eb\u306f\u6b21\u306e\u5834\u6240\u3067\u304d\u308c\u3044\u306b\u95b2\u89a7\u3067\u304d\u308b:\n\n* http://nbviewer.jupyter.org/github/genkuroki/Calculus/blob/master/10%20Gauss%2C%20Gamma%2C%20Beta.ipynb\n\n* https://genkuroki.github.io/documents/Calculus/10%20Gauss%2C%20Gamma%2C%20Beta.pdf\n\n\u3053\u306e\u30d5\u30a1\u30a4\u30eb\u306f Julia Box \u3067\u5229\u7528\u3067\u304d\u308b.\n\n\u81ea\u5206\u306e\u30d1\u30bd\u30b3\u30f3\u306bJulia\u8a00\u8a9e\u3092\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb\u3057\u305f\u3044\u5834\u5408\u306b\u306f\n\n* Windows\u3078\u306eJulia\u8a00\u8a9e\u306e\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb\n\n\u3092\u53c2\u7167\u305b\u3088.\n\n\u8ad6\u7406\u7684\u306b\u5b8c\u74a7\u306a\u8aac\u660e\u3092\u3059\u308b\u3064\u3082\u308a\u306f\u306a\u3044. 
\u7d30\u90e8\u306e\u3044\u3044\u52a0\u6e1b\u306a\u90e8\u5206\u306f\u81ea\u5206\u3067\u8a02\u6b63\u30fb\u4fee\u6b63\u305b\u3088.\n\n$\n\\newcommand\\eps{\\varepsilon}\n\\newcommand\\ds{\\displaystyle}\n\\newcommand\\Z{{\\mathbb Z}}\n\\newcommand\\R{{\\mathbb R}}\n\\newcommand\\C{{\\mathbb C}}\n\\newcommand\\QED{\\text{\u25a1}}\n\\newcommand\\root{\\sqrt}\n\\newcommand\\bra{\\langle}\n\\newcommand\\ket{\\rangle}\n\\newcommand\\d{\\partial}\n\\newcommand\\sech{\\operatorname{sech}}\n\\newcommand\\cosec{\\operatorname{cosec}}\n\\newcommand\\sign{\\operatorname{sign}}\n\\newcommand\\sinc{\\operatorname{sinc}}\n\\newcommand\\real{\\operatorname{Re}}\n\\newcommand\\imag{\\operatorname{Im}}\n\\newcommand\\Li{\\operatorname{Li}}\n\\newcommand\\PROD{\\mathop{\\coprod\\kern-1.35em\\prod}}\n$\n\n

\n\n\n```julia\nusing Plots\ngr(); ENV[\"PLOTS_TEST\"] = \"true\"\n#clibrary(:colorcet)\nclibrary(:misc)\n\nfunction pngplot(P...; kwargs...)\n sleep(0.1)\n pngfile = tempname() * \".png\"\n savefig(plot(P...; kwargs...), pngfile)\n showimg(\"image/png\", pngfile)\nend\npngplot(; kwargs...) = pngplot(plot!(; kwargs...))\n\nshowimg(mime, fn) = open(fn) do f\n base64 = base64encode(f)\n display(\"text/html\", \"\"\"\"\"\")\nend\n\nusing SymPy\n#sympy[:init_printing](order=\"lex\") # default\n#sympy[:init_printing](order=\"rev-lex\")\n\nusing SpecialFunctions\nusing QuadGK\n```\n\n## Gauss\u7a4d\u5206\n\n### Gauss\u7a4d\u5206\u306e\u516c\u5f0f\n\n$$\n\\int_{-\\infty}^\\infty e^{-x^2}\\,dx = \\sqrt{\\pi}\n$$\n\n\u3092**Gauss\u7a4d\u5206\u306e\u516c\u5f0f**\u3068\u547c\u3076\u3053\u3068\u306b\u3059\u308b. \u8a3c\u660e\u306f\u5f8c\u3067\u884c\u3046.\n\n\u3053\u306e\u30ce\u30fc\u30c8\u306e\u7b46\u8005\u306f\u5927\u5b66\u65b0\u5165\u751f\u304c\u7fd2\u3046\u7a4d\u5206\u306e\u516c\u5f0f\u306e\u4e2d\u3067\u3053\u308c\u304c**\u6700\u3082\u91cd\u8981**\u3067\u3042\u308b\u3068\u8003\u3048\u3066\u3044\u308b. \u30ac\u30a6\u30b9\u7a4d\u5206\u304c\u91cd\u8981\u3060\u3068\u8003\u3048\u308b\u7406\u7531\u306f\u4ee5\u4e0b\u306e\u901a\u308a.\n\n(1) \u3053\u306e\u516c\u5f0f\u81ea\u4f53\u304c\u975e\u5e38\u306b\u9762\u767d\u3044\u5f62\u3092\u3057\u3066\u3044\u308b. \u5de6\u8fba\u3092\u898b\u3066\u3082\u3069\u3053\u306b\u3082\u5186\u5468\u7387\u306f\u898b\u3048\u306a\u3044\u304c, \u53f3\u8fba\u306b\u306f\u5186\u5468\u7387\u304c\u51fa\u3066\u6765\u308b. \u3057\u304b\u3082\u5186\u5468\u7387\u304c\u305d\u306e\u307e\u307e\u51fa\u3066\u6765\u308b\u306e\u3067\u306f\u306a\u304f, \u305d\u306e\u5e73\u65b9\u6839\u304c\u51fa\u3066\u6765\u308b.\n\n(2) \u69d8\u3005\u306a\u65b9\u6cd5\u3092\u4f7f\u3063\u3066Gauss\u7a4d\u5206\u306e\u516c\u5f0f\u3092\u8a3c\u660e\u3067\u304d\u308b.\n\n(3) Gauss\u7a4d\u5206\u306e\u516c\u5f0f\u306f\u78ba\u7387\u8ad6\u3084\u7d71\u8a08\u5b66\u3067\u6b63\u898f\u5206\u5e03\u3092\u6271\u3046\u3068\u304d\u306b\u306f\u5fc5\u9808\u3067\u3042\u308b. \u6b63\u898f\u5206\u5e03\u306f\u4e2d\u5fc3\u6975\u9650\u5b9a\u7406\u306b\u3088\u3063\u3066\u7279\u5225\u306b\u91cd\u8981\u306a\u5f79\u76ee\u3092\u679c\u305f\u3059\u78ba\u7387\u5206\u5e03\u3067\u3042\u308b. \n\n(4) Gauss\u7a4d\u5206\u306f\u30ac\u30f3\u30de\u51fd\u6570\u306b\u4e00\u822c\u5316\u3055\u308c\u308b. \n\n(5) Gauss\u7a4d\u5206\u306fLaplace\u306e\u65b9\u6cd5\u306e\u57fa\u790e\u3067\u3042\u308b. Laplace\u306e\u65b9\u6cd5\u306f\u3042\u308b\u7a2e\u306e\u7a4d\u5206\u306e\u6f38\u8fd1\u6319\u52d5\u3092\u8abf\u3079\u308b\u305f\u3081\u306e\u6700\u3082\u57fa\u672c\u7684\u306a\u65b9\u6cd5\u3067\u3042\u308a, \u89e3\u6790\u5b66\u306e\u5fdc\u7528\u306b\u304a\u3044\u3066\u57fa\u672c\u7684\u304b\u3064\u91cd\u8981\u3067\u3042\u308b.\n\n(6) \u7279\u306bGauss\u7a4d\u5206\u3067\u968e\u4e57\u306b\u7b49\u3057\u3044\u7a4d\u5206\u3092\u8fd1\u4f3c\u3059\u308b\u3053\u3068\u306b\u3088\u3063\u3066, Stirling\u306e\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b. 
(Stirling\u306e\u516c\u5f0f $n!\\sim n^n e^{-n}\\sqrt{2\\pi n}$ \u306e\u5e73\u884c\u6839\u306e\u56e0\u5b50\u306fGauss\u7a4d\u5206\u3092\u7d4c\u7531\u3057\u3066\u5f97\u3089\u308c\u308b.)\n\n\u4ee5\u4e0a\u306e\u3088\u3046\u306bGauss\u7a4d\u5206\u306f\u7d14\u7c8b\u6570\u5b66\u7684\u306b\u3082\u5fdc\u7528\u6570\u5b66\u7684\u306b\u3082\u57fa\u672c\u7684\u304b\u3064\u91cd\u8981\u3067\u3042\u308b.\n\n### Gauss\u7a4d\u5206\u3092\u4f7f\u3046\u7c21\u5358\u306a\u8a08\u7b97\u4f8b\n\n**\u554f\u984c:** \u4e0a\u306e\u516c\u5f0f\u3092\u4f7f\u3063\u3066, $a>0$ \u306e\u3068\u304d,\n\n$$\n\\int_{-\\infty}^\\infty e^{-y^2/a}\\,dy = \\sqrt{a\\pi}\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u3092\u793a\u305b. \n\n**\u6ce8\u610f:** $a$ \u3092 $1/a$ \u3067\u7f6e\u304d\u63db\u3048\u308c\u3070\n\n$$\n\\int_{-\\infty}^\\infty e^{-ay^2}\\,dy = \\sqrt{\\frac{\\pi}{a}}\n$$\n\n\u3082\u5f97\u3089\u308c\u308b.\n\n**\u89e3\u7b54\u4f8b:** Gauss\u7a4d\u5206\u306e\u516c\u5f0f\u3067 $\\ds x=\\frac{y}{\\sqrt{a}}$ \u3068\u7f6e\u63db\u7a4d\u5206\u3059\u308b\u3068\n\n$$\n\\sqrt{\\pi} = \\int_{-\\infty}^\\infty e^{-x^2}\\,dx =\n\\frac{1}{\\sqrt{a}}\\int_{-\\infty}^\\infty e^{-y^2/a}\\,dy\n$$\n\n\u306a\u306e\u3067, \u4e21\u8fba\u306b $\\sqrt{a}$ \u3092\u304b\u3051\u308c\u3070\u793a\u3057\u305f\u3044\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b. $\\QED$\n\n**\u554f\u984c:** \u5206\u6563 $\\sigma^2>0$, \u5e73\u5747 $\\mu$ \u306e\u6b63\u898f\u5206\u5e03\u306e\u78ba\u7387\u5bc6\u5ea6\u51fd\u6570 $p(x)$ \u304c\n\n$$\np(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{-(x-\\mu)^2/(2\\sigma^2)}\n$$\n\n\u3067\u5b9a\u7fa9\u3055\u308c\u308b. \u3053\u306e\u3068\u304d\n\n$$\n\\int_{-\\infty}^\\infty p(x)\\,dx = 1\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u3092\u793a\u305b. (\u3053\u306e\u554f\u984c\u3088\u308a, \u78ba\u7387\u7d71\u8a08\u5b66\u306b\u304a\u3044\u3066Gauss\u7a4d\u5206\u306e\u516c\u5f0f\u306f\u5fc5\u9808\u3067\u3042\u308b\u3053\u3068\u304c\u308f\u304b\u308b.)\n\n**\u89e3\u7b54\u4f8b:** $x=y+\\mu$ \u3068\u7f6e\u63db\u3057, \u4e0a\u306e\u554f\u984c\u306e\u7d50\u679c\u3092\u4f7f\u3046\u3068, \n\n$$\n\\begin{aligned}\n\\int_{-\\infty}^\\infty p(x)\\,dx &=\n\\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\int_{-\\infty}^\\infty e^{-(x-\\mu)^2/(2\\sigma^2)}\\,dx =\n\\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\int_{-\\infty}^\\infty e^{-y^2/(2\\sigma^2)}\\,dy \n\\\\ &=\n\\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\sqrt{2\\sigma^2\\pi} = 1.\n\\qquad \\QED\n\\end{aligned}\n$$\n\n**\u554f\u984c(Lebesgue\u306e\u53ce\u675f\u5b9a\u7406\u306e\u7d50\u8ad6\u304c\u6210\u7acb\u3057\u306a\u3044\u5834\u54082):** \u51fd\u6570\u5217 $f_n(x)$ \u3092\n\n$$\nf_n(x)=\\frac{1}{\\sqrt{n\\pi}}e^{-x^2/n}\n$$\n\n\u3068\u5b9a\u3081\u308b. \u4ee5\u4e0b\u3092\u793a\u305b.\n\n(1) $\\ds\\int_{-\\infty}^\\infty f_n(x)\\,dx = 1$.\n\n(2) \u5404 $x\\in\\R$ \u3054\u3068\u306b $\\ds\\lim_{n\\to\\infty}f_n(x)= 0$.\n\n(3) \u3057\u305f\u304c\u3063\u3066 $\\ds\\lim_{n\\to\\infty}\\int_{-\\infty}^\\infty f_n(x)\\,dx \\ne \\int_{-\\infty}^\\infty \\lim_{n\\to\\infty}f_n(x)\\,dx$.\n\n**\u89e3\u7b54\u4f8b:** (1)\u306fGauss\u7a4d\u5206\u306e\u516c\u5f0f\u304b\u3089\u5f97\u3089\u308c\u308b(\u8a73\u7d30\u306f\u81ea\u5206\u3067\u8a08\u7b97\u3057\u3066\u78ba\u8a8d\u305b\u3088). (3)\u306f(1)\u3068(2)\u304b\u3089\u305f\u3060\u3061\u306b\u5f97\u3089\u308c\u308b\u306e\u3067, \u3042\u3068\u306f(2)\u306e\u307f\u3092\u793a\u305b\u3070\u5341\u5206\u3067\u3042\u308b. $x\\in\\R$ \u3092\u4efb\u610f\u306b\u53d6\u3063\u3066\u56fa\u5b9a\u3059\u308b. 
\u3053\u306e\u3068\u304d $n\\to\\infty$ \u3067 $\\dfrac{x^2}{n}\\to 0$, $\\dfrac{1}{\\sqrt{n\\pi}}\\to 0$ \u3068\u306a\u308b\u306e\u3067, $f_n(x)\\to 0$ \u3068\u306a\u308b\u3053\u3068\u3082\u308f\u304b\u308b. $\\QED$\n\n**\u554f\u984c:** \u3059\u3050\u4e0a\u306e\u554f\u984c\u306e\u51fd\u6570 $f_n(x)$ \u306e\u30b0\u30e9\u30d5\u3092\u63cf\u3044\u3066\u307f\u3088. \n\n**\u89e3\u7b54\u4f8b:** \u4ee5\u4e0b\u306e\u30bb\u30eb\u306e\u3088\u3046\u306b\u306a\u308b. \n\n$n$ \u304c\u5927\u304d\u304f\u306a\u308b\u3068, $f_n(x)$ \u306e\u300c\u5206\u5e03\u300d\u306f\u5e83\u304f\u62e1\u304c\u308b. $\\QED$\n\n\n```julia\nf(n,x) = exp(-x^2/n)/\u221a(n*\u03c0)\nx = -10.0:0.05:10.0\nP = plot(size=(400,250))\nfor n in [1,2,3,4,5,10, 30, 100]\n plot!(x, f.(n,x), label=\"n = $n\")\nend\nP\n```\n\n\n\n\n \n\n \n\n\n\n**\u554f\u984c:** \u6b21\u3092\u793a\u305b: $a>0$ \u3068 $k=0,1,2,\\ldots$ \u306b\u3064\u3044\u3066\n\n$$\n\\int_{-\\infty}^\\infty e^{-ax^2}x^{2k}\\,dx = \n\\sqrt{\\pi}\\; \\frac{1\\cdot3\\cdots(2k-1)}{2^k} a^{-(2k+1)/2} =\n\\sqrt{\\pi}\\; \\frac{(2k)!}{2^{2k}k!} a^{-(2k+1)/2}.\n\\tag{1}\n$$\n\n**\u6ce8\u610f:** $a$ \u3092 $1/a$ \u3067\u7f6e\u304d\u63db\u3048\u308c\u3070\u6b21\u3082\u5f97\u3089\u308c\u308b:\n\n$$\n\\int_{-\\infty}^\\infty e^{-x^2/a}x^{2k}\\,dx = \n\\frac{1\\cdot3\\cdots(2k-1)}{2^k} \\sqrt{a^{2k+1}\\pi} =\n\\frac{(2k)!}{2^{2k}k!} \\sqrt{a^{2k+1}\\pi}.\n\\tag{2}\n$$\n\n**\u89e3\u7b54\u4f8b:** Gauss\u7a4d\u5206\u306e\u516c\u5f0f\u304b\u3089\u5f97\u3089\u308c\u308b\u516c\u5f0f\n\n$$\n\\int_{-\\infty}^\\infty e^{-ax^2}\\,dx = \\sqrt{\\pi}\\;a^{-1/2}\n$$\n\n\u306e\u4e21\u8fba\u3092 $a$ \u3067\u5fae\u5206\u3057\u3066 $-1$ \u500d\u3059\u308b\u64cd\u4f5c\u3092\u7e70\u308a\u8fd4\u3059\u3068((K)\u3092\u4f7f\u3046),\n\n$$\n\\begin{aligned}\n&\n\\int_{-\\infty}^\\infty e^{-ax^2}x^2\\,dx = \\sqrt{\\pi}\\;\\frac{1}{2}a^{-3/2},\n\\\\ &\n\\int_{-\\infty}^\\infty e^{-ax^2}x^4\\,dx = \\sqrt{\\pi}\\;\\frac{1}{2}\\frac{3}{2}a^{-5/2},\n\\\\ &\n\\int_{-\\infty}^\\infty e^{-ax^2}x^6\\,dx = \\sqrt{\\pi}\\;\\frac{1}{2}\\frac{3}{2}\\frac{5}{2}a^{-7/2}.\n\\end{aligned}\n$$\n\n$k$ \u56de\u305d\u306e\u64cd\u4f5c\u3092\u7e70\u308a\u8fd4\u3059\u3068, \n\n$$\n\\int_{-\\infty}^\\infty e^{-ax^2}x^{2k}\\,dx = \n\\sqrt{\\pi}\\;\\frac{1}{2}\\frac{3}{2}\\cdots\\frac{2k-1}{2}a^{-(2k+1)/2}.\n$$\n\n\u3053\u308c\u3088\u308a, (1)\u306e\u524d\u534a\u304c\u6210\u7acb\u3059\u308b\u3053\u3068\u304c\u308f\u304b\u308b. \u5f8c\u534a\u306e\u6210\u7acb\u306f\n\n$$\n\\frac{1\\cdot3\\cdots(2k-1)}{2^k} =\n\\frac{1\\cdot3\\cdots(2k-1)}{2^k} \\frac{2\\cdot4\\cdots(2k)}{2^k k!} =\n\\frac{(2k)!}{2^{2k}k!}\n\\tag{3}\n$$\n\n\u306b\u3088\u3063\u3066\u78ba\u8a8d\u3067\u304d\u308b. $\\QED$\n\n**\u6ce8\u610f:** \u5947\u6570\u306e\u7a4d $1\\cdot3\\cdots(2k-1)$ \u306b\u3064\u3044\u3066(3)\u306e\u8a08\u7b97\u6cd5\u306f\u3088\u304f\u4f7f\u308f\u308c\u308b:\n\n$$\n1\\cdot3\\cdots(2k-1) = \n1\\cdot3\\cdots(2k-1) \\frac{2\\cdot4\\cdots(2k)}{2^k k!} =\n\\frac{(2k)!}{2^k k!}.\n$$\n\n\u4f8b\u3048\u3070\u4e8c\u9805\u4fc2\u6570\u306b\u95a2\u3059\u308b\n\n$$\n\\begin{aligned}\n(-1)^k\\binom{-1/2}{k} &=\n(-1)^k\\frac{(-1/2)(-3/2)\\cdots(-(2k-1)/2)}{k!} \\\\ &=\n\\frac{1\\cdot3\\cdots(2k-1)}{2^k k!} =\n\\frac{(2k)!}{2^{2k}k!k!} =\n\\frac{1}{2^{2k}}\\binom{2k}{k}\n\\end{aligned}\n$$\n\n\u3082\u3088\u304f\u51fa\u3066\u6765\u308b. $\\QED$\n\n### Gauss\u5206\u5e03\u306eFourier\u5909\u63db\n\n$a>0$ \u3067\u3042\u308b\u3068\u3059\u308b. 
$e^{-x^2/a}$ \u578b\u306e\u51fd\u6570\u3092**Gauss\u5206\u5e03\u51fd\u6570**\u3068\u547c\u3076\u3053\u3068\u304c\u3042\u308b.\n\n\u4e00\u822c\u306b\u51fd\u6570 $f(x)$ \u306b\u5bfe\u3057\u3066,\n\n$$\n\\hat{f}(p) = \\int_{-\\infty}^\\infty e^{-ipx} f(x)\\,dx\n$$\n\n\u3092 $f$ \u306e**Fourier\u5909\u63db**(\u30d5\u30fc\u30ea\u30a8\u5909\u63db)\u3068\u547c\u3076. \u3082\u3057\u3082\u5b9f\u6570\u5024\u51fd\u6570 $f(x)$ \u304c\u5076\u51fd\u6570\u3067\u3042\u308c\u3070, \n\n$$\ne^{-ipx} f(x) = f(x)\\cos(px) - i f(x)\\sin(px)\n$$\n\n\u306e\u865a\u90e8\u306f\u5947\u51fd\u6570\u306b\u306a\u308a, \u305d\u306e\u7a4d\u5206\u306f\u6d88\u3048\u308b\u306e\u3067\n\n$$\n\\hat{f}(p) = \\int_{-\\infty}^\\infty f(x)\\cos(px)\\,dx\n$$\n\n\u3068\u306a\u308b.\n\n**\u554f\u984c:** $a>0$ \u3068\u3059\u308b. $f(x)=e^{-x^2/a}$ \u306eFourier\u5909\u63db\u3092\u6c42\u3081\u3088.\n\n**\u89e3\u7b54\u4f8b1:** $\\ds\\cos(px)=\\sum_{k=0}^\\infty\\frac{(-p^2)^k x^{2k}}{(2k)!}$ \u3088\u308a,\n\n$$\n\\begin{align}\n\\hat{f}(p) &=\n\\int_{-\\infty}^\\infty e^{-x^2/a} \\cos(px)\\,dx =\n\\sum_{k=0}^\\infty\\frac{(-p^2)^k}{(2k)!}\\int_{-\\infty}^\\infty e^{-x^2/a}x^{2k}\\,dx\n\\\\ &=\n\\sum_{k=0}^\\infty\\frac{(-p^2)^k}{(2k)!}\\frac{(2k)!}{2^{2k}k!} \\sqrt{a^{2k+1}\\pi} =\n\\sqrt{a\\pi}\\sum_{k=0}^\\infty\\frac{(-ap^2/4)^k}{k!} = \\sqrt{a\\pi}\\;e^{-ap^2/4}.\n\\end{align}\n$$\n\n3\u3064\u76ee\u306e\u7b49\u53f7\u3067\u4e0a\u306e\u65b9\u306e\u554f\u984c\u306e\u7d50\u679c\u3092\u7528\u3044\u305f. $\\QED$\n\n**\u89e3\u7b54\u4f8b2:** \u8907\u7d20\u89e3\u6790\u3092\u7528\u3044\u308b. \u8907\u7d20\u89e3\u6790\u3055\u3048\u8a8d\u3081\u3066\u4f7f\u3048\u3070, \u5f62\u5f0f\u7684\u306b\u3088\u308a\u308f\u304b\u308a\u6613\u304f\u8a08\u7b97\u3067\u304d\u308b.\n\n$$\n-\\frac{x^2}{a}-ipx = \n-\\frac{1}{a}\\left(x^2 + iapx\\right) =\n-\\frac{1}{a}\\left(\\left(x+\\frac{iap}{2}\\right)^2-\\frac{-a^2p^2}{4}\\right) =\n-\\frac{1}{a}\\left(x+\\frac{iap}{2}\\right)^2 - \\frac{ap^2}{4}\n$$\n\n\u3068\u5e73\u65b9\u5b8c\u6210\u3057, $\\ds x=y-\\frac{iap}{2}$ \u3068\u7f6e\u63db\u3059\u308b\u3068,\n\n$$\n\\begin{aligned}\n\\hat{f}(p) &= \\int_{-\\infty}^\\infty e^{-x^2/a}e^{-ipx}\\,dx =\n\\int_{-\\infty}^\\infty e^{-(x^2/a+ipx)}\\,dx \n\\\\ &=\n\\int_{-\\infty}^\\infty \\exp\\left(-\\frac{1}{a}\\left(x+\\frac{iap}{2}\\right)^2 - \\frac{ap^2}{4}\\right)\\,dx =\ne^{-ap^2/4} \\int_{-\\infty+iap/2}^{\\infty+iap/2} e^{-y^2/a}\\,dy.\n\\end{aligned}\n$$\n\nCauchy\u306e\u7a4d\u5206\u5b9a\u7406\u3088\u308a,\n\n$$\n\\int_{-\\infty+iap/2}^{\\infty+iap/2} e^{-y^2/a}\\,dy =\n\\int_{-\\infty}^{\\infty} e^{-y^2/a}\\,dy = \\sqrt{a\\pi}.\n$$\n\n\u3057\u305f\u304c\u3063\u3066, \n\n$$\n\\hat{f}(p) = \\int_{-\\infty}^\\infty e^{-x^2/a}e^{-ipx}\\,dx = \\sqrt{a\\pi}\\;e^{-ap^2/4}.\n\\qquad \\QED\n$$\n\n**\u88dc\u8db3:** \u8907\u7d20\u5e73\u9762\u4e0a\u306e\u7d4c\u8def $C$ \u3092\u6b21\u306e\u3088\u3046\u306b\u5b9a\u3081\u308b: \u307e\u305a $-R$ \u304b\u3089 $R$ \u306b\u76f4\u7dda\u7684\u306b\u79fb\u52d5\u3059\u308b. \u6b21\u306b $R$ \u304b\u3089 $R+iap/2$ \u306b\u76f4\u7dda\u7684\u306b\u79fb\u52d5\u3059\u308b. \u305d\u306e\u6b21\u306b $R+iap/2$ \u304b\u3089 $-R+iap/2$ \u306b\u76f4\u7dda\u7684\u306b\u79fb\u52d5\u3059\u308b. \u6700\u5f8c\u306b $-R+iap/2$ \u304b\u3089 $-R$ \u306b\u76f4\u7dda\u7684\u306b\u79fb\u52d5\u3059\u308b. \u3053\u308c\u306b\u3088\u3063\u3066\u5f97\u3089\u308c\u308b\u9577\u65b9\u5f62\u578b\u306e\u7d4c\u8def\u304c $C$ \u3067\u3042\u308b. 
\u4e0a\u306e\u89e3\u7b54\u4f8b2\u306e\u4e2d\u306eCauchy\u306e\u7a4d\u5206\u5b9a\u7406\u3092\u3053\u306e\u7d4c\u8def $C_R$ \u306b\u9069\u7528\u3057\u305f\u5834\u5408\u3092\u4f7f\u3063\u3066\u3044\u308b. $R\\to\\infty$ \u3068\u3059\u308b\u3068, \u5de6\u53f3\u306e\u7e26\u65b9\u5411\u306b\u79fb\u52d5\u3059\u308b\u7d4c\u8def\u4e0a\u3067\u306e\u7a4d\u5206\u304c $0$ \u306b\u53ce\u675f\u3059\u308b\u3053\u3068\u3092\u4f7f\u3046. $\\QED$\n\n$e^{-ax^2}$ \u306eFourier\u5909\u63db\u306b\u3064\u3044\u3066\u306f\n\n* \u9ed2\u6728\u7384, \u30ac\u30f3\u30de\u5206\u5e03\u306e\u4e2d\u5fc3\u6975\u9650\u5b9a\u7406\u3068Stirling\u306e\u516c\u5f0f\n\n\u306e\u7b2c6\u7bc0\u3082\u53c2\u7167\u305b\u3088.\n\n### Gauss\u7a4d\u5206\u306e\u516c\u5f0f\u306e\u5c0e\u51fa\n\nGauss\u7a4d\u5206\u306e\u8a08\u7b97\u306e\u4ed5\u65b9\u306b\u3064\u3044\u3066\u306f\n\n* \u9ed2\u6728\u7384, \u30ac\u30f3\u30de\u5206\u5e03\u306e\u4e2d\u5fc3\u6975\u9650\u5b9a\u7406\u3068Stirling\u306e\u516c\u5f0f\n\n\u306e\u7b2c7\u7bc0\u304a\u3088\u3073\n\n* \u9ad8\u6728\u8c9e\u6cbb, \u89e3\u6790\u6982\u8ad6, \u5ca9\u6ce2\u66f8\u5e97 (1983)\n\n\u306e\u7b2c3\u7ae0\u00a735\u306e\u4f8b5,6\u3092\u53c2\u7167\u305b\u3088.\n\n$\\ds I = \\int_{-\\infty}^\\infty e^{-x^2}\\,dx$ \u3068\u304a\u304f. $I=\\sqrt{\\pi}$ \u3067\u3042\u308b\u3053\u3068\u3092\u793a\u3057\u305f\u3044. \u305d\u306e\u305f\u3081\u306b\u306f\n\n$$\n\\begin{aligned}\nI^2 &= \\int_{-\\infty}^\\infty e^{-x^2}\\,dx\\cdot \\int_{-\\infty}^\\infty e^{-y^2}\\,dy\n\\\\ &=\n\\int_{-\\infty}^\\infty \\left(\\int_{-\\infty}^\\infty e^{-x^2}\\,dx\\right)e^{-y^2}\\,dy =\n\\int_{-\\infty}^\\infty\\left(\\int_{-\\infty}^\\infty e^{-(x^2+y^2)}\\,dx\\right)\\,dy\n\\end{aligned}\n$$\n\n\u304c $\\pi$ \u306b\u7b49\u3057\u3044\u3053\u3068\u3092\u8a3c\u660e\u3059\u308c\u3070\u3088\u3044. \u4e0a\u306e\u8a08\u7b97\u306e2\u3064\u76ee\u30683\u3064\u76ee\u306e\u7b49\u53f7\u3067\u7a4d\u5206\u306e\u7dda\u5f62\u6027(A)\u3092\u7528\u3044\u305f.\n\n#### \u65b9\u6cd51: \u9ad8\u3055 $z$ \u3067\u8f2a\u5207\u308a\u306b\u3059\u308b\u65b9\u6cd5\n\n$\\ds I^2 = \\int_{-\\infty}^\\infty\\left(\\int_{-\\infty}^\\infty e^{-(x^2+y^2)}\\,dx\\right)\\,dy$ \u306f2\u5909\u6570\u51fd\u6570 $z=e^{-(x^2+y^2)}$ \u306e $xyz$ \u7a7a\u9593\u5185\u306e\u30b0\u30e9\u30d5\u3068 $xy$ \u5e73\u9762 $z=0$ \u306e\u3042\u3044\u3060\u306b\u631f\u307e\u308c\u305f\u5c71\u578b\u306e\u9818\u57df\u306e\u4f53\u7a4d\u3092\u610f\u5473\u3059\u308b. \n\n\u306a\u305c\u306a\u3089\u3070, $S(y) = \\ds \\int_{-\\infty}^\\infty e^{-(x^2+y^2)}\\,dx$ \u306f\u305d\u306e\u9818\u57df\u306e $y$ \u3092\u56fa\u5b9a\u3057\u305f\u3068\u304d\u306e\u5207\u65ad\u9762\u306e\u9762\u7a4d\u306b\u7b49\u3057\u304f, $\\int_{-\\infty}^\\infty S(y)\\,dy$ \u306f\u305d\u306e\u5207\u65ad\u9762\u306e\u9762\u7a4d\u306e\u7a4d\u5206\u306a\u306e\u3067\u9818\u57df\u5168\u4f53\u306e\u4f53\u7a4d\u306b\u7b49\u3057\u3044\u304b\u3089\u3067\u3042\u308b. \u4e00\u822c\u306b, \u9577\u3055\u3092\u7a4d\u5206\u3059\u308c\u3070\u9762\u7a4d\u306b\u306a\u308a\u3001\u9762\u7a4d\u3092\u7a4d\u5206\u3059\u308c\u3070\u4f53\u7a4d\u306b\u306a\u308b.\n\n\u305d\u306e\u5c71\u578b\u306e\u9818\u57df\u306e\u4f53\u7a4d\u306f\u9ad8\u3055 $z$ \u3067\u306e\u5207\u65ad\u9762\u306e\u9762\u7a4d\u306e $z=0$ \u304b\u3089 $z=1$ \u307e\u3067\u306e\u7a4d\u5206\u306b\u7b49\u3057\u3044. \u9ad8\u3055 $z$ \u3067\u306e\u5207\u65ad\u9762\u306f\u534a\u5f84 $\\sqrt{x^2+y^2}=\\sqrt{-\\log z}$ \u306e\u5186\u76e4\u306b\u306a\u308a, \u305d\u306e\u9762\u7a4d\u306f $-\\pi\\log z$ \u306b\u306a\u308b. 
\u3086\u3048\u306b\n\n$$\nI^2 = \\int_0^1 (-\\pi\\log z)\\,dz = -\\pi\\,[z\\log z - z]_0^1 = \\pi.\n$$\n\n\u3053\u308c\u3088\u308a $I=\\int_{-\\infty}^\\infty e^{-x^2}\\,dx = \\sqrt{\\pi}$ \u3067\u3042\u308b\u3053\u3068\u304c\u308f\u304b\u308b.\n\n#### \u65b9\u6cd52: \u6975\u5ea7\u6a19\u3092\u4f7f\u3046\u65b9\u6cd5\n\n\u4ee5\u4e0b\u306e\u65b9\u6cd5\u306f2\u91cd\u7a4d\u5206\u306e\u7a4d\u5206\u5909\u6570\u306e\u5909\u63db\u306e\u4ed5\u65b9(Jacobian\u304c\u51fa\u3066\u6765\u308b)\u3092\u77e5\u3063\u3066\u304a\u304b\u306a\u3051\u308c\u3070\u4f7f\u3048\u306a\u3044. \n\n$x=r\\cos\\theta$, $y=r\\sin\\theta$ \u3068\u304a\u304f\u3068,\n\n$$\nI^2 = \\int_0^{2\\pi}d\\theta\\int_0^\\infty r e^{-r^2}\\,dr =\n2\\pi \\left[\\frac{e^{-r^2}}{-2}\\right]_0^\\infty = 2\\pi\\frac{1}{2}=\\pi.\n$$\n\n\u3086\u3048\u306b $I=\\sqrt{\\pi}$.\n\n#### \u65b9\u6cd53: $y=x \\tan\\theta$ \u3068\u5909\u6570\u5909\u63db\u3059\u308b\u65b9\u6cd5 \n\n$I^2$ \u306f\u6b21\u306e\u3088\u3046\u306b\u3082\u8868\u305b\u308b:\n\n$$\nI^2 = 2\\int_0^\\infty\\left(\\int_{-\\infty}^\\infty e^{-(x^2+y^2)}\\,dy\\right)\\,dx.\n$$\n\n\u3053\u306e\u7a4d\u5206\u5185\u3067 $x$ \u306f $x>0$ \u3092\u52d5\u304f\u3068\u8003\u3048\u308b.\n\n\u5185\u5074\u306e\u7a4d\u5206\u3067\u7a4d\u5206\u5909\u6570\u3092 $-\\infty 0, \\quad\ndy = \\frac{x}{\\cos^2\\theta}\\,d\\theta, \\quad\nx^2+y^2 = x^2(1+\\tan^2\\theta) = \\frac{x^2}{\\cos^2\\theta}\n$$\n\n\u306a\u306e\u3067\n\n$$\nI^2 = 2 \\int_0^\\infty\\left(\\int_{-\\pi/2}^{\\pi/2} \\exp\\left(-\\frac{x^2}{\\cos^2\\theta}\\right)\\frac{x}{\\cos^2\\theta}\\,d\\theta\\right)\\,dx.\n$$\n\n\u3086\u3048\u306b\u7a4d\u5206\u306e\u9806\u5e8f\u3092\u4ea4\u63db\u3059\u308b\u3068((J)\u3092\u4f7f\u3046),\n\n$$\n\\begin{aligned}\nI^2 &= 2 \\int_{-\\pi/2}^{\\pi/2}\\left(\\int_0^\\infty \\exp\\left(-\\frac{x^2}{\\cos^2\\theta}\\right)\\frac{x}{\\cos^2\\theta}\\,dx\\right)\\,d\\theta\n\\\\ &=\n2 \\int_{-\\pi/2}^{\\pi/2}\\left[\\frac{1}{-2}\\exp\\left(-\\frac{x^2}{\\cos^2\\theta}\\right)\\right]_{x=0}^{x=\\infty}\\,d\\theta =\n2 \\int_{-\\pi/2}^{\\pi/2}\\frac{1}{2}\\,d\\theta = 2\\frac{\\pi}{2} = \\pi.\n\\end{aligned}\n$$\n\n\u3057\u305f\u304c\u3063\u3066 $I=\\sqrt{\\pi}$.\n\n**\u6ce8\u610f:** \u6975\u5ea7\u6a19\u5909\u63db $(x,y)=(r\\cos\\theta, r\\sin\\theta)$ \u304c\u6709\u52b9\u306a\u5834\u9762\u3067\u306f, $y=x\\tan\\theta$ \u3068\u3044\u3046\u5909\u6570\u5909\u63db\u3082\u6709\u52b9\u306a\u3053\u3068\u304c\u591a\u3044. $\\tan\\theta$ \u306e\u5e7e\u4f55\u7684\u306a\u610f\u5473\u306f\u300c\u539f\u70b9\u3092\u901a\u308b\u76f4\u7dda\u306e\u50be\u304d\u300d\u3067\u3042\u3063\u305f. \u305d\u306e\u610f\u5473\u3067\u3082 $y=x\\tan\\theta$ \u306f\u81ea\u7136\u306a\u5909\u6570\u5909\u63db\u3060\u3068\u8a00\u3048\u308b. $\\QED$\n\n## \u30ac\u30f3\u30de\u51fd\u6570\u3068\u30d9\u30fc\u30bf\u51fd\u6570\n\n### \u30ac\u30f3\u30de\u51fd\u6570\u3068\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u5b9a\u7fa9\n\n$s>0$, $p>0$, $q>0$ \u3068\u4eee\u5b9a\u3059\u308b. 
$\\Gamma(s)$ \u3068 $B(p,q)$ \u3092\u6b21\u306e\u7a4d\u5206\u3067\u5b9a\u7fa9\u3059\u308b:\n\n$$\n\\Gamma(s) = \\int_0^\\infty e^{-x}x^{s-1}\\,dx, \\quad\nB(p,q) = \\int_0^1 x^{p-1}(1-x)^{q-1}\\,dx.\n$$\n\n$\\Gamma(s)$ \u3092\u30ac\u30f3\u30de\u51fd\u6570\u3068, $B(p,q)$ \u3092\u30d9\u30fc\u30bf\u51fd\u6570\u3068\u547c\u3076.\n\n**\u554f\u984c(\u30ac\u30f3\u30de\u51fd\u6570\u306eGauss\u7a4d\u5206\u578b\u306e\u8868\u793a):** \u6b21\u3092\u793a\u305b:\n\n$$\n\\Gamma(s) = 2\\int_0^\\infty e^{-y^2} y^{2s-1}\\,dy.\n$$\n\n**\u89e3\u7b54\u4f8b:** \u30ac\u30f3\u30de\u51fd\u6570\u306e\u7a4d\u5206\u306b\u3088\u308b\u5b9a\u7fa9\u5f0f\u306b\u304a\u3044\u3066 $x=y^2$ \u3068\u7f6e\u63db\u3059\u308b\u3068, \n\n$$\n\\Gamma(s) = \\int_0^\\infty e^{-y^2} y^{2s-2}\\,2y\\,dy = 2\\int_0^\\infty e^{-y^2} y^{2s-1}\\,dy.\n\\qquad\\QED\n$$\n\n**\u6ce8\u610f:** \u3053\u306e\u516c\u5f0f\u3088\u308a, \u30ac\u30f3\u30de\u51fd\u6570\u306f\u672c\u8cea\u7684\u306bGauss\u7a4d\u5206\u306e\u4e00\u822c\u5316\u306b\u306a\u3063\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b. $\\QED$\n\n\n```julia\ny = symbols(\"y\")\ns = symbols(\"s\", positive=true)\n2*integrate(e^(-y^2)*y^(2s-1), (y,0,oo))\n```\n\n\n\n\n$$\\Gamma\\left(s\\right)$$\n\n\n\n**\u554f\u984c:** \u6b21\u3092\u793a\u305b: $r>0$ \u306b\u3064\u3044\u3066\n\n$$\n\\int_0^\\infty e^{-x^r}\\,dx = \n\\frac{1}{r}\\Gamma\\left(\\frac{1}{r}\\right).\n$$\n\n**\u7565\u89e3:** $x=t^{1/r}$ \u3068\u7f6e\u63db\u3059\u308c\u3070\u305f\u3060\u3061\u306b\u5f97\u3089\u308c\u308b. $\\QED$\n\n**\u6ce8\u610f:** \u30ac\u30f3\u30de\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f(\u4e0b\u306e\u65b9\u3067\u793a\u3059)\u3082\u3057\u304f\u306f\u90e8\u5206\u7a4d\u5206\u306b\u3088\u3063\u3066 $\\ds \\frac{1}{r}\\Gamma\\left(\\frac{1}{r}\\right)=\\Gamma\\left(1+\\frac{1}{r}\\right)$ \u304c\u6210\u7acb\u3059\u308b\u3053\u3068\u3082\u308f\u304b\u308b. $\\QED$\n\n\n```julia\nx = symbols(\"x\")\nr = symbols(\"r\", positive=true)\nintegrate(e^(-x^r), (x,0,oo))\n```\n\n\n\n\n$$\\Gamma\\left(1 + \\frac{1}{r}\\right)$$\n\n\n\n**\u554f\u984c(\u30ac\u30f3\u30de\u51fd\u6570\u306e\u30b9\u30b1\u30fc\u30eb\u5909\u63db):** \u6b21\u3092\u793a\u305b:\n\n$$\n\\int_0^\\infty e^{-x/\\theta}x^{s-1}\\,dx = \\theta^s\\Gamma(s) \\quad (\\theta>0,\\ s>0).\n$$\n\n\u30ac\u30f3\u30de\u51fd\u6570\u306f\u3053\u306e\u5f62\u5f0f\u3067\u3082\u975e\u5e38\u306b\u3088\u304f\u4f7f\u308f\u308c\u308b.\n\n**\u89e3\u7b54\u4f8b:** $x=\\theta y$ \u3068\u7f6e\u63db\u3059\u308b\u3068, $x^{s-1}\\,dx = \\theta^s y^{s-1}\\,dy$ \u306a\u306e\u3067\u793a\u3057\u305f\u3044\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b. 
$\\QED$.\n\n\n```julia\nx = symbols(\"x\")\ns = symbols(\"s\", positive=true)\nt = symbols(\"t\", positive=true)\nsimplify(integrate(e^(-x/t)*x^(s-1), (x,0,oo)))\n```\n\n\n\n\n$$t^{s} \\Gamma\\left(s\\right)$$\n\n\n\n**\u554f\u984c(\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u5225\u306e\u8868\u793a):** \u6b21\u3092\u793a\u305b:\n\n$$\nB(p,q) = \n2\\int_0^{\\pi/2} (\\cos\\theta)^{2p-1}(\\sin\\theta)^{2q-1}\\,d\\theta =\n\\int_0^\\infty \\frac{t^{p-1}}{(1+t)^{p+q}}\\,dt =\n\\frac{1}{p}\\int_0^\\infty \\frac{du}{(1+u^{1/p})^{p+q}}.\n$$\n\n\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u3053\u308c\u3089\u306e\u8868\u793a\u3082\u3088\u304f\u4f7f\u308f\u308c\u308b.\n\n**\u89e3\u7b54\u4f8b:** $B(p,q)=\\int_0^1 x^{p-1}(1-x)^{q-1}\\,dx$ \u3067 $x=\\cos^2\\theta$ \u3068\u7f6e\u63db\u3059\u308b\u3068,\n\n$$\ndx = -2\\cos\\theta\\;\\sin\\theta\\;d\\theta\n$$\n\n\u3088\u308a, \n\n$$\nB(p,q) = 2\\int_0^{\\pi/2} (\\cos\\theta)^{2p-1}(\\sin\\theta)^{2q-1}\\,d\\theta.\n$$\n\n$B(p,q)=\\int_0^1 x^{p-1}(1-x)^{q-1}\\,dx$ \u3067 $\\ds x=\\frac{t}{1+t}=1-\\frac{1}{1+t}$ \u3068\u7f6e\u63db\u3059\u308b\u3068,\n\n$$\ndx = \\frac{dt}{1+t}\n$$\n\n\u3088\u308a, \n\n$$\nB(p,q) = \n\\int_0^\\infty \\left(\\frac{t}{1+t}\\right)^{p-1} \\left(\\frac{1}{1+t}\\right)^{q-1}\\,\\frac{dt}{(1+t)^2} =\n\\int_0^\\infty \\frac{t^{p-1}}{(1+t)^{p+q}}\\,dt\n$$\n\n\u3055\u3089\u306b $t=u^{1/p}$ \u3068\u7f6e\u63db\u3059\u308b\u3068, \n\n$$\nt^{p-1}\\,dt = \\frac{1}{p} \\, du\n$$\n\n\u3088\u308a, \n\n$$\nB(p,q) = \\int_0^\\infty \\frac{t^{p-1}}{(1+t)^{p+q}}\\,dt =\n\\frac{1}{p}\\int_0^\\infty \\frac{du}{(1+u^{1/p})^{p+q}}.\n\\qquad \\QED\n$$\n\n**\u554f\u984c:** \u6b21\u3092\u793a\u305b. $a0$, $q>0$ \u306e\u3068\u304d,\n\n$$\n\\int_a^b (x-a)^{p-1}(b-x)^{q-1}\\,dx = (b-a)^{p+q-1} B(p,q).\n$$\n\n**\u8a3c\u660e:** $x=(1-t)a+tb=a+(b-a)t$ \u3068\u7a4d\u5206\u5909\u6570\u3092\u7f6e\u63db\u3059\u308b\u3068,\n\n$$\n\\int_a^b (x-a)^{p-1}(b-x)^{q-1}\\,dx =\n\\int_0^1 ((b-a)t)^{p-1}((b-a)(1-t))^{q-1}(b-a)\\,dt = (b-a)^{p+q-1}B(p,q).\n\\qquad\\QED\n$$\n\n**\u4f8b:** $\\ds B(2,2)=\\int_0^1 x(1-x)\\,dx = \\frac{1}{2}-\\frac{1}{3}=\\frac{1}{6}$ \u306a\u306e\u3067\n\n$$\n\\int_a^b (x-a)(b-x)\\,dx = (b-a)^3 B(2,2) = \\frac{(b-a)^3}{6}.\n\\qquad \\QED\n$$\n\n**\u554f\u984c:** \u30ac\u30f3\u30de\u51fd\u6570\u3092\u5b9a\u7fa9\u3059\u308b\u7a4d\u5206\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u306e\u30b0\u30e9\u30d5\u3092\u8272\u3005\u306a $s>0$ \u306b\u3064\u3044\u3066\u63cf\u3044\u3066\u307f\u3088.\n\n**\u89e3\u7b54\u4f8b:** \u6b21\u306e\u30bb\u30eb\u3092\u898b\u3088. 
$\\QED$\n\n\n```julia\n# \u30ac\u30f3\u30de\u51fd\u6570\u306e\u7a4d\u5206\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u306e\u30b0\u30e9\u30d5\n\nf(s,x) = e^(-x)*x^(s-1)\nx = 0.00:0.05:30.0\nPP = []\nfor s in [1/2, 1, 2, 3, 6, 10]\n P = plot(x, f.(s,x), title=\"s = $s\", titlefontsize=10)\n push!(PP, P)\nend\nfor s in [15, 20, 30]\n x = 0:0.02:2.2s\n P = plot(x, f.(s,x), title=\"s = $s\", titlefontsize=10)\n push!(PP, P)\nend\nplot(PP[1:3]..., size=(750, 200), legend=false, layout=@layout([a b c]))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[4:6]..., size=(750, 200), legend=false, layout=@layout([a b c]))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[7:9]..., size=(750, 200), legend=false, layout=@layout([a b c]))\n```\n\n\n\n\n \n\n \n\n\n\n$s$ \u3092\u5927\u304d\u304f\u3059\u308b\u3068, \u30ac\u30f3\u30de\u51fd\u6570\u306e\u88ab\u7a4d\u5206\u51fd\u6570(\u3092\u51fd\u6570\u3067\u5272\u3063\u305f\u3082\u306e)\u306f\u6b63\u898f\u5206\u5e03\u306e\u78ba\u7387\u5bc6\u5ea6\u51fd\u6570\u3068\u307b\u3068\u3093\u3069\u3074\u3063\u305f\u308a\u4e00\u81f4\u3059\u308b\u3088\u3046\u306b\u306a\u308b. \u6b21\u306e\u30bb\u30eb\u3092\u898b\u3088.\n\n\n```julia\n# f(s,x) = e^{-x} x^{s-1} / \u0393(s)\n# g(s,x) = e^{-(x-s)^2/(2s)} / \u221a(2\u03c0s)\n\nf(s,x) = e^(-x+(s-1)*log(x)-lgamma(s))\ng(s,x) = e^(-(x-s)^2/(2s)) / \u221a(2\u03c0*s)\ns = 100\nx = 0:0.5:2s\nplot(size=(400, 250))\nplot!(title=\"y = e^(-x) x^(s-1)/Gamma(s), s = $s\", titlefontsize=11)\nplot!(x, f.(s,x), label=\"Gamma dist\", lw=2)\nplot!(x, g.(s,x), label=\"normal dist\", ls=:dash, lw=2)\n```\n\n\n\n\n \n\n \n\n\n\n**\u554f\u984c:** \u30d9\u30fc\u30bf\u51fd\u6570\u3092\u5b9a\u7fa9\u3059\u308b\u7a4d\u5206\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u306e\u30b0\u30e9\u30d5\u3092\u8272\u3005\u306a $p,q>0$ \u306b\u3064\u3044\u3066\u63cf\u3044\u3066\u307f\u3088.\n\n**\u89e3\u7b54\u4f8b:** \u6b21\u306e\u30bb\u30eb\u3092\u898b\u3088. $\\QED$\n\n\n```julia\n# \u30d9\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u306e\u30b0\u30e9\u30d5\n\nf(p,q,x) = x^(p-1)*(1-x)^(q-1)\nx = 0.002:0.002:0.998\nPP = []\nfor (p,q) in [(1/2,1/2), (1,1), (1,2), (2,2), (2,3), (2,4), (4,6), (8, 12), (16, 24)]\n y = f.(p,q,x)\n P = plot(x, y, title=\"(p,q) = ($p,$q)\", titlefontsize=10, xlims=(0,1), ylims=(0,1.05*maximum(y)))\n push!(PP, P)\nend\nplot(PP[1:3]..., size=(750, 200), legend=false, layout=@layout([a b c]))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[4:6]..., size=(750, 200), legend=false, layout=@layout([a b c]))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[7:9]..., size=(750, 200), legend=false, layout=@layout([a b c]))\n```\n\n\n\n\n \n\n \n\n\n\n$p,q$ \u304c\u305d\u308c\u3089\u306e\u6bd4\u3092\u4fdd\u3061\u306a\u304c\u3089\u5927\u304d\u304f\u3059\u308b\u3068, \u30d9\u30fc\u30bf\u51fd\u6570\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u3092\u30d9\u30fc\u30bf\u51fd\u6570\u3067\u5272\u3063\u305f\u3082\u306e\u306f\u6b63\u898f\u5206\u5e03\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u306b\u307b\u3068\u3093\u3069\u3074\u3063\u305f\u308a\u4e00\u81f4\u3059\u308b\u3088\u3046\u306b\u306a\u308b. 
\u6b21\u306e\u30bb\u30eb\u3092\u898b\u3088.\n\n\n```julia\n# f(p,q,x) = x^{p-1} (1-x)^{q-1} / B(p,q)\n# \u03bc = p/(p+q)\n# \u03c3\u00b2 = pq/((p+q)^2(p+q+1))\n# g(\u03bc,\u03c3\u00b2,x) = e^{-(x-\u03bc)^2/(2\u03c3\u00b2)} / \u221a(2\u03c0\u03c3\u00b2)\n\nf(p,q,x) = x^(p-1)*(1-x)^(q-1)/beta(p,q)\ng(\u03bc,\u03c3\u00b2,x) = e^(-(x-\u03bc)^2/(2*\u03c3\u00b2)) / \u221a(2\u03c0*\u03c3\u00b2)\np, q = 45,55\n\u03bc = p/(p+q)\n\u03c3\u00b2 = p*q/((p+q)^2*(p+q+1))\nx = 0.000:0.002:1.000\nplot(size=(400, 250))\nplot!(title=\"y = x^(p-1) (1-x)^(q-1) / B(p,q), (p,q) = ($p,$q)\", titlefontsize=10)\nplot!(x, f.(p,q,x), label=\"Beta dist\", lw=2)\nplot!(x, g.(\u03bc,\u03c3\u00b2,x), label=\"normal dist\", lw=2, ls=:dash)\n```\n\n\n\n\n \n\n \n\n\n\n### \u30ac\u30f3\u30de\u51fd\u6570\u306e\u7279\u6b8a\u5024\u3068\u51fd\u6570\u7b49\u5f0f\n\n**\u554f\u984c(\u30ac\u30f3\u30de\u51fd\u6570\u306e\u6700\u3082\u7c21\u5358\u306a\u7279\u6b8a\u5024):** $\\Gamma(1)=1$ \u3068 $\\Gamma(1/2)=\\sqrt{\\pi}$ \u3092\u793a\u305b.\n\n**\u89e3\u7b54\u4f8b:** \u524d\u8005\u306f\n\n$$\n\\Gamma(1)=\\int_0^\\infty e^{-x}\\,dx = [-e^{-x}]_0^\\infty = 1.\n$$\n\n\u3068\u5bb9\u6613\u306b\u793a\u3055\u308c\u308b. \u5f8c\u8005\u3092\u793a\u3059\u305f\u3081\u306b\u306f $\\Gamma(1/2)$ \u304cGauss\u7a4d\u5206 $\\int_{-\\infty}^\\infty e^{-y^2}\\,dy=\\sqrt{\\pi}$ \u306b\u7b49\u3057\u3044\u3053\u3068\u3092\u793a\u305b\u3070\u3088\u3044. $x=y^2$ \u3067\u7f6e\u63db\u7a4d\u5206\u3059\u308b\u3068,\n\n$$\n\\begin{aligned}\n\\Gamma(1/2) &= \\int_0^\\infty e^{-x}x^{1/2-1}\\,dx =\n\\int_0^\\infty e^{-y^2} \\frac{1}{y} 2y\\,dy \n\\\\ &= \n2\\int_0^\\infty e^{-y^2}\\,dy =\n\\int_{-\\infty}^\\infty e^{-y^2}\\,dy = \\sqrt{\\pi}. \n\\qquad \\QED\n\\end{aligned}\n$$\n\n**\u6ce8\u610f:** \u4e0a\u306e\u554f\u984c\u306e\u89e3\u7b54\u3088\u308a, $\\Gamma(1/2)$ \u306f\u672c\u8cea\u7684\u306bGauss\u7a4d\u5206\u306b\u7b49\u3057\u3044. \u305d\u306e\u610f\u5473\u3067\u30ac\u30f3\u30de\u51fd\u6570\u306fGauss\u7a4d\u5206\u306e\u4e00\u822c\u5316\u306b\u306a\u3063\u3066\u3044\u308b\u3068\u8a00\u3048\u308b. $\\QED$\n\n**\u554f\u984c(\u30ac\u30f3\u30de\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f):** $s>0$ \u306e\u3068\u304d $\\Gamma(s+1)=s\\Gamma(s)$ \u3068\u306a\u308b\u3053\u3068\u3092\u793a\u305b.\n\n**\u89e3\u7b54\u4f8b:** \u90e8\u5206\u7a4d\u5206\u3092\u4f7f\u3046. $s>0$ \u3068\u4eee\u5b9a\u3059\u308b. \u3053\u306e\u3068\u304d\n\n$$\n\\begin{aligned}\n\\Gamma(s+1) &=\n\\int_0^\\infty e^{-x}x^s\\,dx =\n\\int_0^\\infty (-e^{-x})'x^s\\,dx\n\\\\ &=\n\\int_0^\\infty e^{-x} (x^s)'\\,dx =\n\\int_0^\\infty e^{-x} sx^{s-1}\\,dx =\ns\\Gamma(s).\n\\end{aligned}\n$$\n\n3\u3064\u76ee\u306e\u7b49\u53f7\u3067\u90e8\u5206\u7a4d\u5206\u3092\u884c\u3063\u305f. \u305d\u306e\u3068\u304d, $x\\searrow 0$ \u3067\u3082 $x\\to\\infty$ \u3067\u3082 $e^{-x}x^s\\to 0$ \u3068\u306a\u308b\u3053\u3068\u3092\u4f7f\u3063\u305f($s>0$ \u3068\u4eee\u5b9a\u3057\u305f\u3053\u3068\u306b\u6ce8\u610f\u305b\u3088). (\u7a4d\u5206\u4ee5\u5916\u306e\u9805\u304c\u6d88\u3048\u308b.) 
$\\QED$\n\n**\u6ce8\u610f:** \u4e0a\u306e\u554f\u984c\u306e\u7d50\u679c\u3092\u4f7f\u3048\u3070, $s< 0$, $s\\ne 0,-1,-2,\\ldots$ \u306e\u3068\u304d $s+n>0$ \u3068\u306a\u308b\u6574\u6570 $n$ \u3092\u53d6\u308c\u3070, \n\n$$\n\\Gamma(s) = \\frac{\\Gamma(s+n)}{s(s+1)\\cdots(s+n-1)}\n$$\n\n\u306e\u53f3\u8fba\u306fwell-defined\u306b\u306a\u308b\u306e\u3067, \u3053\u306e\u516c\u5f0f\u306b\u3088\u3063\u3066\u30ac\u30f3\u30de\u51fd\u6570\u3092 $s<0$, $s\\ne 0,-1,-2,\\ldots$ \u306e\u5834\u5408\u306b\u81ea\u7136\u306b\u62e1\u5f35\u3067\u304d\u308b. $\\QED$ \n\n**\u6ce8\u610f(\u30ac\u30f3\u30de\u51fd\u6570\u306f\u968e\u4e57\u306e\u4e00\u822c\u5316):** \u4ee5\u4e0a\u306e\u554f\u984c\u306e\u7d50\u679c\u3088\u308a, \u975e\u8ca0\u306e\u6574\u6570 $n$ \u306b\u3064\u3044\u3066\n\n$$\n\\Gamma(n+1)=n\\Gamma(n)=n(n-1)\\Gamma(n-1)=\\cdots=n(n-1)\\cdots1\\,\\Gamma(1)=n!.\n$$\n\n\u3059\u306a\u308f\u3061, $\\Gamma(s+1)$ \u306f\u968e\u4e57 $n!$ \u306e\u9023\u7d9a\u5909\u6570 $s$ \u3078\u306e\u62e1\u5f35\u306b\u306a\u3063\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b. $\\QED$\n\n**\u554f\u984c(\u30ac\u30f3\u30de\u51fd\u6570\u306e\u6b63\u306e\u534a\u6574\u6570\u3067\u306e\u5024):** \u6b21\u3092\u793a\u305b: \u975e\u8ca0\u306e\u6574\u6570 $k$ \u306b\u5bfe\u3057\u3066\n\n$$\n\\Gamma((2k+1)/2) = \\frac{1\\cdot3\\cdots(2k-1)}{2^k}\\sqrt{\\pi} =\n\\frac{(2k)!}{2^{2k}k!}\\sqrt{\\pi}\n$$\n\n**\u89e3\u7b54\u4f8b1:** \u30ac\u30f3\u30de\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f\u3068 $\\Gamma(1/2)=\\sqrt{\\pi}$ \u3088\u308a\n\n$$\n\\begin{aligned}\n\\Gamma\\left(\\frac{2k+1}{2}\\right) &=\n\\frac{2k-1}{2}\\Gamma\\left(\\frac{2k-1}{2}\\right) =\n\\frac{2k-1}{2}\\frac{2k-3}{2}\\Gamma\\left(\\frac{2k-3}{2}\\right) = \\cdots \n\\\\ &=\n\\frac{2k-1}{2}\\frac{2k-3}{2}\\cdots\\frac{1}{2}\\Gamma\\left(\\frac{1}{2}\\right) = \n\\frac{1\\cdot3\\cdots(2k-1)}{2^k}\\sqrt{\\pi}.\n\\end{aligned}\n$$\n\n\u3053\u308c\u3067\u793a\u3057\u305f\u3044\u516c\u5f0f\u306e1\u3064\u76ee\u306e\u7b49\u53f7\u306f\u793a\u305b\u305f. 2\u3064\u76ee\u306e\u7b49\u53f7\u306f\u4e0a\u306e\u65b9\u306eGauss\u7a4d\u5206\u306e\u5fdc\u7528\u554f\u984c\u3067\u4f7f\u3063\u305f\u65b9\u6cd5\u3092\u4f7f\u3048\u3070\u540c\u69d8\u306b\u793a\u3055\u308c\u308b. $\\QED$\n\n**\u89e3\u7b54\u4f8b2:** \u30ac\u30f3\u30de\u51fd\u6570\u306e $\\Gamma(s)=2\\int_0^\\infty e^{-y^2}y^{2s-1}\\,dy$ \u3068\u3044\u3046\u8868\u793a\u3092\u4f7f\u3046\u3068,\n\n$$\n\\Gamma((2k+1)/2) = 2\\int_0^\\infty e^{-y^2} y^{2k}\\,dy = \\int_{-\\infty}^\\infty e^{-y^2} y^{2k}\\,dy\n$$\n\n\u306a\u306e\u3067, \u4e0a\u306e\u65b9\u306eGauss\u7a4d\u5206\u306e\u5fdc\u7528\u554f\u984c\u306b\u95a2\u3059\u308b\u7d50\u679c\u304b\u3089\u6b32\u3057\u3044\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b. $\\QED$\n\n### Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a\u3068\u51fd\u6570\u7b49\u5f0f\u3068\u8ca0\u306e\u6574\u6570\u3068\u6b63\u306e\u5076\u6570\u306b\u304a\u3051\u308b\u7279\u6b8a\u5024\n\n\u3053\u306e\u7bc0\u306f\u3053\u306e\u30ce\u30fc\u30c8\u3092\u6700\u521d\u306b\u8aad\u3080\u3068\u304d\u306b\u306f\u98db\u3070\u3057\u3066\u8aad\u3093\u3067\u3082\u69cb\u308f\u306a\u3044. \u30ac\u30f3\u30de\u51fd\u6570\u306e\u7406\u8ad6\u304cRiemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7406\u8ad6\u3068\u5bc6\u63a5\u306b\u95a2\u4fc2\u3057\u3066\u3044\u308b\u3053\u3068\u3092\u8a8d\u8b58\u3057\u3066\u304a\u3051\u3070\u554f\u984c\u306a\u3044. 

Bernoulli数やBernoulli多項式に関してはノート「13 Euler-Maclaurinの和公式」により詳しい解説がある.

#### Riemannのゼータ函数の積分表示1

**問題(Riemannのゼータ函数の積分表示1):** 次が成立することを示せ.

$$
\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s} = 
\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}\,dx}{e^x-1}
\quad (s>1).
$$

**注意:** $x\to 0$ のとき $\ds \frac{e^x-1}{x}\to 1$ となるので, $\ds \frac{x}{e^x-1}$ は $x=0$ まで連続的に拡張され, この公式の積分は

$$
\int_0^\infty \frac{x^{s-1}\,dx}{e^x-1} = 
\int_0^\infty \frac{x}{e^x-1} x^{s-2}\,dx
$$

と書けるので, $s-2 > -1$ すなわち $s>1$ ならば収束している. $\QED$

**解答例:** 上の問題の結果より, $\ds\frac{1}{n^s} = \frac{1}{\Gamma(s)}\int_0^\infty e^{-nx}x^{s-1}\,dx$ なので,

$$
\begin{aligned}
\zeta(s) &=\sum_{n=1}^\infty \frac{1}{n^s} =
\frac{1}{\Gamma(s)}\sum_{n=1}^\infty\int_0^\infty e^{-nx}x^{s-1}\,dx
\\ &=
\frac{1}{\Gamma(s)}\int_0^\infty\sum_{n=1}^\infty e^{-nx}x^{s-1}\,dx =
\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}\,dx}{e^x-1}.
\qquad \QED
\end{aligned}
$$

**定義:** Bernoulli数 $B_n$ ($n=0,1,2,\ldots$) を次の条件によって定める:

$$
\frac{z}{e^z-1} = \sum_{n=0}^\infty \frac{B_n}{n!}z^n.
\qquad\QED
$$

**問題:** $B_0=1$, $\ds B_1=-\frac{1}{2}$ であり, $n$ が3以上の奇数のとき $B_n=0$ となることを示せ.

**解答例:** $z\to 0$ のとき, $\ds \frac{z}{e^z-1}\to 1$ より $B_0=1$ となる.

$$
\frac{z}{e^z-1}+\frac{z}{2} = \frac{z}{2}\frac{e^z+1}{e^z-1} = 
\frac{z}{2}\frac{e^{z/2}+e^{-z/2}}{e^{z/2}-e^{-z/2}}
$$

であることと, これが偶函数であることから, $\ds B_1=-\frac{1}{2}$ で $n$ が3以上の奇数ならば $B_n=0$ となることがわかる. 
$\\QED$\n\n**\u554f\u984c(Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a1'):** \u975e\u8ca0\u306e\u6574\u6570 $N$ \u306b\u5bfe\u3057\u3066, \u6b21\u3092\u793a\u305b:\n\n$$\n\\zeta(s) = \n\\frac{1}{\\Gamma(s)}\\left[\n\\int_1^\\infty \\frac{x^{s-1}\\,dx}{e^x-1} +\n\\int_0^1 \\left(\\frac{x}{e^x-1} - \\sum_{k=0}^N \\frac{B_k}{k!}x^k\\right)x^{s-2}\\,dx +\n\\sum_{k=0}^N \\frac{B_k}{k!}\\frac{1}{s+k-1}\n\\right].\n$$\n\n\u3055\u3089\u306b\u53f3\u8fba\u306e\u62ec\u5f27\u306e\u5185\u5074\u306e2\u3064\u76ee\u306e\u7a4d\u5206\u304c $s>-N$ \u3067\u7d76\u5bfe\u53ce\u675f\u3057\u3066\u3044\u308b\u3053\u3068\u3092\u793a\u305b.\n\n**\u89e3\u7b54\u4f8b:** Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a1\u306e\u516c\u5f0f\u3067\u7a4d\u5206\u3092 $\\int_1^\\infty$ \u3068 $\\int_0^1$ \u306b\u5206\u3051\u3066, $k=0,1,\\ldots,N$ \u306b\u5bfe\u3059\u308b\n\n$$\n\\ds\\int_0^1\\frac{B_k}{k!}x^{s+k-2}\\,dx = \\frac{B_k}{k!}\\frac{1}{s+k-1}\n$$\n\n\u3092\u8db3\u3057\u3066\u5f15\u3051\u3070\u793a\u3057\u305f\u3044\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b. \n\n$$\n\\frac{x}{e^x-1} - \\sum_{k=0}^N \\frac{B_k}{k!}x^k = O(x^{N+1})\n$$\n\n\u3067\u3042\u308a, $\\ds\\int_0^1 x^{N+1}x^{s-2}\\,dx=\\int_0^1 x^{s+N-1}\\,dx$ \u304c $s>-N$ \u3067\u7d76\u5bfe\u53ce\u675f\u3057\u3066\u3044\u308b\u3053\u3068\u304b\u3089, \u53f3\u8fba\u306e\u62ec\u5f27\u306e\u5185\u5074\u306e2\u3064\u76ee\u306e\u7a4d\u5206\u3082\u305d\u3053\u3067\u53ce\u675f\u3057\u3066\u3044\u308b. $\\QED$\n\n**\u554f\u984c:** Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a1'\u306e\u53f3\u8fba\u3067 $\\zeta(s)$ \u3092 $s>-N$ \u307e\u3067\u62e1\u5f35\u3057\u3066\u304a\u304f\u3068\u304d, \n\n$$\n\\zeta(0) = -\\frac{1}{2}, \\quad \\zeta(-r) = -\\frac{B_{r+1}}{r+1} \\quad (r=1,2,3,\\ldots)\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u3092\u793a\u305b. ($r$ \u304c2\u4ee5\u4e0a\u306e\u5076\u6570\u306e\u3068\u304d $B_{r+1}=0$ \u3068\u306a\u308b\u3053\u3068\u306b\u6ce8\u610f\u305b\u3088.)\n\n**\u89e3\u7b54\u4f8b:** \u30ac\u30f3\u30de\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f\u3088\u308a,\n\n$$\n\\begin{aligned}\n\\frac{1}{\\Gamma(s)}\\frac{B_k}{k!}\\frac{1}{s+k-1} &=\n\\frac{s(s+1)\\cdots(s+k-2)(s+k-1)}{\\Gamma(s+k)}\\frac{B_k}{k!}\\frac{1}{s+k-1} \n\\\\ &=\n\\frac{s(s+1)\\cdots(s+k-2)}{\\Gamma(s+k)}\\frac{B_k}{k!}\n\\end{aligned}\n$$\n\n\u306a\u306e\u3067, \u975e\u8ca0\u306e\u6574\u6570 $r$ \u306b\u5bfe\u3057\u3066, $k=r+1$ \u3068\u304a\u3044\u3066 $s\\to -r$ \u3068\u3059\u308b\u3068,\n\n$$\n\\frac{1}{\\Gamma(s)}\\frac{B_k}{k!}\\frac{1}{s+k-1}\\to\n(-1)^r \\frac{B_{r+1}}{r+1} =\n\\begin{cases}\n-\\dfrac{1}{2} & (r=0) \\\\\n-\\dfrac{B_{r+1}}{r+1} & (r=1,2,3,\\ldots)\n\\end{cases}.\n$$\n\n\u305f\u3060\u3057, \u7b49\u53f7\u3067, $\\ds B_1=-\\frac{1}{2}$ \u3068 $r+1$ \u304c3\u4ee5\u4e0a\u306e\u5947\u6570\u306e\u3068\u304d $B_{r+1}=0$ \u3068\u306a\u308b\u3053\u3068\u3092\u4f7f\u3063\u305f. \u3053\u308c\u3092Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a1'\n\n$$\n\\zeta(s) = \n\\frac{1}{\\Gamma(s)}\\left[\n\\int_1^\\infty \\frac{x^{s-1}\\,dx}{e^x-1} +\n\\int_0^1 \\left(\\frac{x}{e^x-1} - \\sum_{k=0}^N \\frac{B_k}{k!}x^k\\right)x^{s-2}\\,dx +\n\\sum_{k=0}^N \\frac{B_k}{k!}\\frac{1}{s+k-1}\n\\right]\n$$\n\n\u306b\u9069\u7528\u3059\u308c\u3070,\n\n$$\n\\zeta(0) = -\\frac{1}{2}, \\quad \\zeta(-r) = -\\frac{B_{r+1}}{r+1} \\quad (r=1,2,3,\\ldots)\n$$\n\n\u304c\u5f97\u3089\u308c\u308b. 
$\\QED$\n\n**\u554f\u984c:** \u6b21\u3092\u793a\u305b.\n\n$$\n(1-2^{1-s})\\zeta(s)=\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{n^s} = \\frac{1}{\\Gamma(s)}\\int_0^\\infty \\frac{x^{s-1}\\,dx}{e^x+1} \\quad (s>1).\n$$\n\n**\u6ce8\u610f:** \u3053\u306e\u516c\u5f0f\u306e\u7a4d\u5206\u306f $s>0$ \u3068\u3044\u3046\u6761\u4ef6\u3092\u5916\u3057\u3066, $s$ \u304c\u4efb\u610f\u306e\u8907\u7d20\u6570\u306b\u3057\u3066\u3082\u7d76\u5bfe\u53ce\u675f\u3057\u3066\u3044\u308b. \u3053\u306e\u516c\u5f0f\u306f $(1-2^{1-s})\\zeta(s)$ \u306e\u8907\u7d20\u5e73\u9762\u5168\u4f53\u3078\u306e\u89e3\u6790\u63a5\u7d9a\u3092\u4e0e\u3048\u308b. $\\QED$\n\n**\u89e3\u7b54\u4f8b:** 1\u3064\u76ee\u306e\u7b49\u53f7\u3092\u793a\u305d\u3046:\n\n$$\n\\begin{aligned}\n&\n\\zeta(s) = \\frac{1}{1^s}+\\frac{1}{2^s}+\\frac{1}{3^s}+\\frac{1}{4^s}+\\cdots,\n\\\\ &\n2^{1-s}\\zeta(s) = \\frac{2}{2^s}+\\frac{2}{4^s}+\\frac{2}{6^s}+\\frac{2}{8^s}+\\cdots,\n\\\\ &\n(1-2^{1-s})\\zeta(s) = \\frac{1}{1^s}-\\frac{1}{2^s}+\\frac{1}{3^s}-\\frac{1}{4^s}+\\cdots =\n\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{n^s}.\n\\end{aligned}\n$$\n\n2\u3064\u76ee\u306e\u7b49\u53f7\u3092\u793a\u305d\u3046. \u4e0a\u306e\u554f\u984c\u306e\u89e3\u7b54\u4f8b\u3068\u540c\u69d8\u306b\u3057\u3066, $\\ds\\frac{1}{n^s} = \\frac{1}{\\Gamma(s)}\\int_0^\\infty e^{-nx}x^{s-1}\\,dx$ \u306a\u306e\u3067,\n\n$$\n\\begin{aligned}\n\\sum_{n=1}^\\infty \\frac{(-1)^{n-1}}{n^s} &=\n\\frac{1}{\\Gamma(s)}\\sum_{n=1}^\\infty(-1)^{n-1}\\int_0^\\infty e^{-nx}x^{s-1}\\,dx\n\\\\ &=\n\\frac{1}{\\Gamma(s)}\\int_0^\\infty\\sum_{n=1}^\\infty (-1)^{n-1}e^{-nx}x^{s-1}\\,dx =\n\\frac{1}{\\Gamma(s)}\\int_0^\\infty \\frac{x^{s-1}\\,dx}{e^x+1}\\,dx.\n\\qquad \\QED\n\\end{aligned}\n$$\n\n**\u6ce8\u610f:** \u4ee5\u4e0a\u306e\u8a08\u7b97\u306f\u7d71\u8a08\u529b\u5b66\u306b\u304a\u3051\u308bFermi-Dirac\u7d71\u8a08\u306b\u95a2\u3059\u308b\u8b70\u8ad6\u306b\u767b\u5834\u3059\u308b. \u30bc\u30fc\u30bf\u51fd\u6570\u306f\u6570\u8ad6\u306e\u57fa\u672c\u3067\u3042\u308b\u3060\u3051\u3067\u306f\u306a\u304f, \u7d71\u8a08\u529b\u5b66\u7684\u306b\u3082\u610f\u5473\u3092\u6301\u3063\u3066\u3044\u308b. $\\QED$\n\n#### Hurwitz\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a1\n\n**\u554f\u984c:** **Hurwitz\u306e\u30bc\u30fc\u30bf\u51fd\u6570** $\\zeta(s,x)$ \u3068**Bernoulli\u591a\u9805\u5f0f** $B_k(x)$ \u3092\n\n$$\n\\zeta(s,x) = \\sum_{k=0}^\\infty \\frac{1}{(x+k)^s}\\quad (x>0,\\;\\; s>1), \\qquad\n\\frac{te^{xt}}{e^t-1} = \\sum_{k=0}^\\infty \\frac{B_k(x)}{k!}t^k\n$$\n\n\u3068\u5b9a\u3081\u308b. $\\zeta(s)=\\zeta(s,1)$ \u306a\u306e\u3067Hurwitz\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306fRiemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u62e1\u5f35\u306b\u306a\u3063\u3066\u3044\u308b. 
以下を示せ:

(1) $\quad\ds \zeta(s,x) = \frac{1}{\Gamma(s)}\int_0^\infty \frac{e^{(1-x)t}t^{s-1}}{e^t-1}\,dt$.

(2) $\quad\ds \zeta(s,x) = \frac{1}{\Gamma(s)}\left[
\int_1^\infty \frac{e^{(1-x)t}t^{s-1}}{e^t-1}\,dt +
\int_0^1\left(\frac{t e^{(1-x)t}}{e^t-1}-\sum_{k=0}^N\frac{B_k(1-x)}{k!}t^k\right)t^{s-2}\,dt +
\sum_{k=0}^N \frac{B_k(1-x)}{k!}\frac{1}{s+k-1}
\right].
$

(3) Hurwitzのゼータ函数を(2)によって $s<1$ に拡張すると, $0$ 以上の整数 $m$ について

$$
\zeta(-m,x) = \frac{(-1)^m B_{m+1}(1-x)}{m+1} = -\frac{B_{m+1}(x)}{m+1}.
$$

**解答例:** (1) $x,s>0$, $k\geqq 0$ に対して, $\ds \frac{1}{(x+k)^s}=\frac{1}{\Gamma(s)}\int_0^\infty e^{-(x+k)t}t^{s-1}\,dt$ を使うと, 

$$
\begin{aligned}
\zeta(s,x) &=
\sum_{k=0}^\infty \frac{1}{\Gamma(s)}\int_0^\infty e^{-(x+k)t}t^{s-1}\,dt =
\frac{1}{\Gamma(s)}\int_0^\infty \left(\sum_{k=0}^\infty e^{-kt}\right)e^{-xt}t^{s-1}\,dt 
\\ &=
\frac{1}{\Gamma(s)}\int_0^\infty \frac{e^{-xt}t^{s-1}}{1-e^{-t}}\,dt =
\frac{1}{\Gamma(s)}\int_0^\infty \frac{e^{(1-x)t}t^{s-1}}{e^t-1}\,dt.
\end{aligned}
$$

(2) 上の(1)の結果の右辺の積分を $0$ から $1$ への積分と $1$ から $\infty$ の積分に分けて, $0$ から $1$ への積分の被積分函数に $\ds\sum_{k=0}^N\frac{B_k(1-x)}{k!}t^k$ を足して引き, 引いた方の積分を計算すれば, (2)の公式が得られる.

(3) Bernoulli多項式の定義より, $\ds \left(\frac{t e^{(1-x)t}}{e^t-1} - \sum_{k=0}^N\frac{B_k(1-x)}{k!}t^k\right)t^{s-2} = O(t^{s+N-1})$ となるので, (2)の右辺の $0$ から $1$ への積分は $s>-N$ で絶対収束している. $N > m$ と仮定する. $s$ が $0$ 以下の整数に近付くと $\ds\frac{1}{\Gamma(s)}\to 0$ となり, 

$$
\frac{1}{\Gamma(s)}\frac{1}{s+(m+1)-1} = \frac{s(s+1)\cdots(s+m-1)}{\Gamma(s+m+1)} \to (-1)^m m! \quad (s\to -m)
$$

より, $s\to -m$ のとき,

$$
\frac{1}{\Gamma(s)}\frac{B_{m+1}(1-x)}{(m+1)!}\frac{1}{s+(m+1)-1} \to 
(-1)^m m! \frac{B_{m+1}(1-x)}{(m+1)!} =
\frac{(-1)^m B_{m+1}(1-x)}{m+1}
$$

となることから, $\ds\zeta(-m,x)=\frac{(-1)^m B_{m+1}(1-x)}{m+1}$ が得られる. さらに, $\ds\frac{te^{(1-x)t}}{e^t-1} = \frac{(-t)e^{x(-t)}}{e^{-t}-1}$ によって, $B_k(1-x)=(-1)^k B_k(x)$ となることがわかるので, $\ds \frac{(-1)^m B_{m+1}(1-x)}{m+1}=-\frac{B_{m+1}(x)}{m+1}$ も得られる. 
$\\QED$\n\n#### Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a2\u3068\u51fd\u6570\u7b49\u5f0f\n\n**\u554f\u984c(Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a2):** $\\theta(t)$ \u3092\n\n$$\n\\theta(t) = \\sum_{n=1}^\\infty e^{-\\pi n^2 t} \\quad (t>0)\n$$\n\n\u3068\u304a\u304f\u3068, \u6b21\u304c\u6210\u7acb\u3059\u308b\u3053\u3068\u3092\u793a\u305b:\n\n$$\n\\pi^{-s/2}\\Gamma(s/2)\\zeta(s) = \\int_0^\\infty \\theta(t) t^{s/2-1}\\,dt \\quad (s>2).\n$$\n\n**\u89e3\u7b54\u4f8b:** \n\n$$\n\\begin{aligned}\n\\pi^{-s/2}\\Gamma(s/2)\\zeta(s) &=\\sum_{n=1}^\\infty \\frac{\\Gamma(s/2)}{(\\pi n^2)^{s/2}} =\n\\sum_{n=1}^\\infty\\int_0^\\infty e^{-\\pi n^2 t} t^{s/2-1}\\,dx\n\\\\ &=\n\\int_0^\\infty\\sum_{n=1}^\\infty e^{-\\pi n^2 t} t^{s/2-1}\\,dx =\n\\int_0^\\infty \\theta(t) t^{s/2-1}\\,dt .\n\\qquad \\QED\n\\end{aligned}\n$$\n\n**\u554f\u984c(Riemann\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7a4d\u5206\u8868\u793a2'):** \u4e0a\u306e\u554f\u984c\u306e\u7d9a\u304d. $\\theta(t)$ \u304c\n\n$$\n1+2\\theta(1/t)=t^{1/2}(1+2\\theta(t)) \\quad (t>0)\n$$\n\n\u3059\u306a\u308f\u3061\n\n$$\n\\theta(1/t) =-\\frac{1}{2} + \\frac{1}{2}t^{1/2} + t^{1/2}\\theta(t)\n$$\n\n\u3092\u6e80\u305f\u3057\u3066\u3044\u308b\u3053\u3068\u3092\u8a8d\u3081\u3066, \u6b21\u3092\u793a\u305b:\n\n$$\n\\pi^{-s/2}\\Gamma(s/2)\\zeta(s) = \n-\\frac{1}{s}-\\frac{1}{1-s} +\n\\int_1^\\infty \\theta(t) (t^{s/2}+t^{(1-s)/2})\\,\\frac{dt}{t}.\n$$\n\n$\\theta(t)$ \u306b\u95a2\u3059\u308b\u4e0a\u306e\u516c\u5f0f\u306e\u8a3c\u660e\u306b\u3064\u3044\u3066\u306f\u30ce\u30fc\u30c8\u300c12 Fourier\u89e3\u6790\u300d\u306b\u304a\u3051\u308bPoisson\u306e\u548c\u516c\u5f0f\u306e\u89e3\u8aac\u3092\u898b\u3088.\n\n**\u6ce8\u610f:** \u4e0a\u306e\u554f\u984c\u306e\u516c\u5f0f\u306e\u53f3\u8fba\u306e\u7a4d\u5206\u306f $s$ \u304c\u4efb\u610f\u306e\u8907\u7d20\u6570\u3067\u3042\u3063\u3066\u3082\u3057\u3066\u3044\u308b\u306e\u3067, \u53f3\u8fba\u306f\u5de6\u8fba\u306e\u8907\u7d20\u5e73\u9762\u4e0a\u3078\u306e\u89e3\u6790\u63a5\u7d9a\u3092\u4e0e\u3048\u308b. \u3055\u3089\u306b, \u53f3\u8fba\u306f $s$ \u3092 $1-s$ \u3067\u7f6e\u304d\u63db\u3048\u308b\u64cd\u4f5c\u3067\u4e0d\u5909\u3067\u3042\u308b\u304b\u3089,\n\n$$\n\\hat{\\zeta}(s) = \\pi^{-s/2}\\Gamma(s/2)\\zeta(s)\n$$\n\n\u3068\u304a\u304f\u3068, \n\n$$\n\\hat{\\zeta}(1-s) = \\hat{\\zeta}(s)\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b. \u3053\u308c\u3092**\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f**\u3068\u547c\u3076. $\\QED$\n\n**\u89e3\u7b54\u4f8b:** \u4e0a\u306e\u554f\u984c\u3068\u4ee5\u4e0b\u306e\u8a08\u7b97\u3092\u5408\u308f\u305b\u308c\u3070\u6b32\u3057\u3044\u7d50\u679c\u304c\u5f97\u3089\u308c\u308b. 
積分区間を $0$ から $1$ と $1$ から $\infty$ に分け, $t=1/u$ とおいて $t^{s/2-1}\,dt=-u^{-s/2+1}u^{-2}\,du=-u^{-s/2-1}\,du$ を使うと, 
$$
\begin{aligned}
&
\int_0^\infty \theta(t) t^{s/2-1}\,dt =
\int_0^1 \theta(t) t^{s/2-1}\,dt + 
\int_1^\infty \theta(t) t^{s/2-1}\,dt,
\\ &
\int_0^1 \theta(t) t^{s/2-1}\,dt =
\int_1^\infty \left(-\frac{1}{2} + \frac{1}{2}t^{1/2} + t^{1/2}\theta(t)\right)t^{-s/2-1}\,dt
\\ &\qquad =
\int_1^\infty\left(
-\frac{1}{2}t^{-s/2-1}+\frac{1}{2}t^{(1-s)/2-1} + \theta(t)t^{(1-s)/2-1}
\right)\,dt
\\ &\qquad =
-\frac{1}{s}-\frac{1}{1-s} + \int_1^\infty \theta(t)t^{(1-s)/2-1}\,dt.
\end{aligned}
$$

上の問題の結果と以上の計算をまとめると, 欲しい結果が得られる. $\QED$

**問題:** 上の問題の続き. ガンマ函数が Euler's reflection formula

$$
\Gamma(s)\Gamma(1-s) = \frac{\pi}{\sin(\pi s)}
$$

と Legendre's duplication formula

$$
\Gamma(s)\Gamma(s+1/2) = 2^{1-2s}\pi^{1/2}\Gamma(2s)
$$

を満たしていることを認めて, 上の問題の注意におけるゼータ函数の函数等式 $\hat\zeta(1-s)=\hat\zeta(s)$ が

$$
\zeta(s) = 2^s \pi^{s-1}\sin\frac{\pi s}{2}\,\Gamma(1-s)\,\zeta(1-s)
$$

と書き直されることを示せ.

Legendre's duplication formula と Euler's reflection formula はこのノートの下の方で初等的に証明される. Euler's reflection formulaの証明についてはノート「12 Fourier解析」のガンマ函数とsinの関係の節も参照せよ.

**解答例:** $\hat\zeta(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$, $\hat\zeta(s)=\hat\zeta(1-s)$ より,

$$
\pi^{-s/2}\Gamma(s/2)\zeta(s) = \pi^{-(1-s)/2}\Gamma((1-s)/2)\zeta(1-s).
$$

これは以下のように書き直される:

$$
\zeta(s) = \pi^{s-1/2}\frac{\Gamma((1-s)/2)}{\Gamma(s/2)}\zeta(1-s).
$$

一方, Euler's reflection formula の $s$ に $s/2$ を代入すると,

$$
\Gamma(s/2)\Gamma(1-s/2)=\frac{\pi}{\sin(\pi s/2)},
\quad\text{i.e.}\quad
\frac{1}{\Gamma(s/2)} = \pi^{-1}\sin\frac{\pi s}{2}\Gamma(1-s/2)
$$

となり, Legendre's duplication formula の $s$ に $(1-s)/2$ を代入すると,

$$
\Gamma((1-s)/2)\Gamma(1-s/2)=2^s \pi^{1/2}\,\Gamma(1-s),
$$

となるので, それらを上の公式に代入すると,

$$
\zeta(s) = 2^s \pi^{s-1}\sin\frac{\pi s}{2}\,\Gamma(1-s)\,\zeta(1-s)
$$

が得られる. 
$\QED$

**問題:** $k$ が正の整数であるとき $\ds\zeta(-(2k-1)) = -\frac{B_{2k}}{2k}$ であるという事実と上の問題の結果から

$$
\zeta(2k) = \frac{2^{2k-1}(-1)^{k-1}B_{2k}}{(2k)!}\pi^{2k}
$$

が導かれることを示せ.

**解答例:** $\ds\zeta(-(2k-1)) = -\frac{B_{2k}}{2k}$ と上の問題の結果より, 

$$
-\frac{B_{2k}}{2k} = \zeta(-(2k-1)) = 2^{-(2k-1)}\pi^{-2k}(-1)^k(2k-1)!\zeta(2k).
$$

これより示したい公式が得られる. $\QED$

### ベータ函数とガンマ函数の関係

ベータ函数はガンマ函数によって

$$
B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}
\tag{$*$}
$$

と表わされる. これを証明したい. そのためには

$$
\Gamma(p)\Gamma(q)=
\int_0^\infty
\left(
\int_0^\infty e^{-(x+y)} x^{p-1} y^{q-1}\,dy
\right)\,dx
$$

が

$$
\Gamma(p+q)B(p,q)=\int_0^\infty e^{-z}z^{p+q-1}\,dz
\,\int_0^1 t^{p-1}(1-t)^{q-1}\,dt
$$

に等しいことを示せばよい. ガンマ函数とベータ函数の別の表示を使えば右辺も別の形になることに注意せよ.

#### 方法1: 置換積分と積分の順序交換のみを使う方法

ガンマ函数とベータ函数のあいだの関係式は1変数の置換積分と積分の順序交換のみを使って証明可能である. 条件 $A$ に対して, $x,y$ が条件 $A$ をみたすとき値が $1$ になり, それ以外のときに値が $0$ になる $x,y$ の函数を $1_A(x,y)$ と書くことにすると,
$$
\begin{aligned}
\Gamma(p)\Gamma(q) &=
\int_0^\infty
\left(
\int_0^\infty e^{-(x+y)} x^{p-1} y^{q-1}\,dy
\right)\,dx
\\ &=
\int_0^\infty
\left(
\int_x^\infty e^{-z} x^{p-1} (z-x)^{q-1}\,dz
\right)\,dx
\\ &=
\int_0^\infty
\left(
\int_0^\infty 1_{x<z}\, e^{-z} x^{p-1} (z-x)^{q-1}\,dz
\right)\,dx
\\ &=
\int_0^\infty
\left(
\int_0^\infty 1_{x<z}\, e^{-z} x^{p-1} (z-x)^{q-1}\,dx
\right)\,dz
\\ &=
\int_0^\infty
\left(
\int_0^1 e^{-z} (zt)^{p-1} (z-zt)^{q-1} z\,dt
\right)\,dz
\\ &=
\int_0^\infty e^{-z}z^{p+q-1}\,dz
\,\int_0^1 t^{p-1}(1-t)^{q-1}\,dt =
\Gamma(p+q)B(p,q).
\end{aligned}
$$

2つ目の等号で $y=z-x$ と置換し, 4つ目の等号で積分の順序を交換し, 5つ目の等号で $x=zt$ ($0<t<1$) と置換した. 以上の計算は

* 黒木玄, ガンマ分布の中心極限定理とStirlingの公式

の第7.4節からの引き写しである.

#### 方法2: 極座標変換を使う方法

この方法は2重積分に関する知識が必要になる. 
2\u91cd\u7a4d\u5206\u306b\u3064\u3044\u3066\u77e5\u3089\u306a\u3044\u4eba\u306f\u6b21\u306e\u7bc0\u306e\u5225\u306e\u65b9\u6cd5\u3092\u53c2\u7167\u305b\u3088.\n\n$x=X^2$, $y=Y^2$ \u3068\u5909\u6570\u5909\u63db\u3059\u308b\u3068, \n\n$$\n\\Gamma(p)\\Gamma(q) = 4\\int_0^\\infty\\int_0^\\infty e^{-(X^2+Y^2)} X^{2p-1} Y^{2q-1}\\,dX\\,dY.\n$$\n\n\u3055\u3089\u306b $X=r\\cos\\theta$, $Y=r\\sin\\theta$ \u3068\u5909\u6570\u5909\u63db\u3059\u308b\u3068,\n\n$$\n\\begin{aligned}\n\\Gamma(p)\\Gamma(q) &= \n4\\int_0^{\\pi/2}d\\theta\\int_0^\\infty e^{-r^2} (r\\cos\\theta)^{2p-1} (r\\sin\\theta)^{2q-1} r\\,dr\n\\\\ &=\n4\\int_0^{\\pi/2}(\\cos\\theta)^{2p-1} (\\sin\\theta)^{2q-1}\\,d\\theta\n\\int_0^\\infty e^{-r^2} r^{2(p+q)-1}\\,dr =\nB(p,q)\\Gamma(p+q).\n\\end{aligned}\n$$\n\n\u6700\u5f8c\u306e\u7b49\u53f7\u3067\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u4e09\u89d2\u51fd\u6570\u3092\u7528\u3044\u305f\u8868\u793a\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306eGauss\u7a4d\u5206\u306b\u4f3c\u305f\u8868\u793a\u3092\u7528\u3044\u305f. $\\QED$\n\n#### \u65b9\u6cd53: y = tx \u3068\u5909\u6570\u5909\u63db\u3059\u308b\u65b9\u6cd5\n\n$y=tx$ \u3068\u304a\u304f\u3068, $dy = x\\,dt$ \u3088\u308a, \n\n$$\n\\begin{aligned}\n\\Gamma(p)\\Gamma(q) &=\n\\int_0^\\infty\\left(\\int_0^\\infty e^{-(x+y)}x^{p-1}y^{q-1}\\,dy\\right)\\,dx =\n\\int_0^\\infty\\left(\\int_0^\\infty e^{-(1+t)x}x^{p+q-1}t^{q-1}\\,dt\\right)\\,dx\n\\\\ &=\n\\int_0^\\infty\\left(\\int_0^\\infty e^{-(1+t)x}x^{p+q-1}\\,dx\\right)t^{q-1}\\,dt =\n\\int_0^\\infty \\frac{\\Gamma(p+q)}{(1+t)^{p+q}} t^{q-1}\\,dt\n\\\\ &=\n\\Gamma(p+q)\\int_0^\\infty \\frac{t^{q-1}}{(1+t)^{p+q}}\\,dt =\n\\Gamma(p+q)B(p,q).\n\\end{aligned}\n$$\n\n3\u3064\u76ee\u306e\u7b49\u53f7\u3067\u7a4d\u5206\u9806\u5e8f\u3092\u4ea4\u63db\u3057, 4\u3064\u76ee\u306e\u7b49\u53f7\u3067 $s,c>0$ \u306b\u3064\u3044\u3066\u3088\u304f\u4f7f\u308f\u308c\u308b\u516c\u5f0f($x=y/c$ \u3068\u7f6e\u3051\u3070\u5f97\u3089\u308c\u308b\u516c\u5f0f)\n\n$$\n\\int_0^\\infty e^{-cx}x^{s-1}\\,dx = \\frac{\\Gamma(s)}{c^s}\n$$\n\n\u3092\u4f7f\u3044, \u6700\u5f8c\u306e\u7b49\u53f7\u3067\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u6b21\u306e\u8868\u793a\u306e\u4ed5\u65b9\u3092\u7528\u3044\u305f:\n\n$$\nB(p,q) = \\int_0^1 x^{p-1}(1-x)^{q-1}\\,dx =\n\\int_0^\\infty \\frac{t^{q-1}}{(1+t)^{p+q}}\\,dt\n$$\n\n\u3053\u306e\u516c\u5f0f\u306f\u7a4d\u5206\u5909\u6570\u3092 $\\ds x=\\frac{1}{1+t}$ \u3068\u7f6e\u63db\u3059\u308c\u3070\u5f97\u3089\u308c\u308b. $\\ds x = \\frac{t}{1+t}$ \u3068\u7f6e\u63db\u3059\u308c\u3070 $p,q$ \u3092\u4ea4\u63db\u3057\u305f\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b. \n\n\u30d9\u30fc\u30bf\u51fd\u6570\u306b\u95a2\u3059\u308b\u305d\u306e\u516c\u5f0f\u3092\u77e5\u3063\u3066\u3044\u308c\u3070, \u30ac\u30f3\u30de\u51fd\u6570\u3068\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u95a2\u4fc2\u3092\u5c0e\u304f\u306b\u306f\u3053\u306e\u65b9\u6cd5\u304c\u7c21\u5358\u304b\u3082\u3057\u308c\u306a\u3044.\n\n$y=tx$ \u306e $t$ \u306f\u76f4\u7dda\u306e\u50be\u304d\u3068\u3044\u3046\u610f\u5473\u3092\u6301\u3063\u3066\u3044\u308b. $xy$ \u5e73\u9762\u306e\u7b2c\u4e00\u8c61\u9650\u306e\u70b9\u3092 $(x,y)$ \u3067\u6307\u5b9a\u3057\u3066\u3044\u305f\u306e\u3092, $(x,y)=(x,tx)$ \u3068\u76f4\u7dda\u306e\u50be\u304d $t$ \u3068 $x$ \u3067\u6307\u5b9a\u3059\u308b\u3088\u3046\u306b\u3057\u305f\u3053\u3068\u304c, \u4e0a\u306e\u8a08\u7b97\u3067\u63a1\u7528\u3057\u305f\u65b9\u6cd5\u3067\u3042\u308b. 
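なお, 関係式 $B(p,q)=\Gamma(p)\Gamma(q)/\Gamma(p+q)$ 自体は次のセルのように数値的にも簡単に確認できる. (以下は `beta` と `gamma` が利用できることを仮定した確認例である.)


```julia
# ベータ函数とガンマ函数の関係 B(p,q) = Γ(p)Γ(q)/Γ(p+q) の数値的確認の一例.
# beta と gamma が利用できることを仮定している.

p, q = 2.7, 3.4
beta(p,q), gamma(p)*gamma(q)/gamma(p+q)
```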
\u3053\u306e\u65b9\u6cd5\u306fJacobian\u304c\u51fa\u3066\u6765\u308b\u4e8c\u91cd\u7a4d\u5206\u306e\u7a4d\u5206\u5909\u6570\u306e\u5909\u63db\u3092\u907f\u3051\u305f\u3044\u5834\u5408\u306b\u4fbf\u5229\u3067\u3042\u308b.\n\n#### \u30d9\u30fc\u30bf\u51fd\u6570\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u95a2\u4fc2\u306e\u7c21\u5358\u306a\u8a08\u7b97\u554f\u984c\u3078\u306e\u5fdc\u7528\n\n**\u554f\u984c:** \u30d9\u30fc\u30bf\u51fd\u6570\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u95a2\u4fc2\u3092\u7528\u3044\u3066 $\\Gamma(1/2)=\\sqrt{\\pi}$ \u3092\u8a3c\u660e\u305b\u3088.\n\n**\u89e3\u7b54\u4f8b:** \n$$\n\\Gamma(1/2)^2 = \\frac{\\Gamma(1/2)\\Gamma(1/2)}{\\Gamma(1)} = B(1/2,1/2) =\n2\\int_0^{\\pi/2}(\\cos\\theta)^{2\\cdot1/2-1}(\\sin\\theta)^{2\\cdot1/2-1}\\,d\\theta =\n2\\int_0^{\\pi/2}d\\theta = \\pi.\n$$\n\n1\u3064\u76ee\u306e\u7b49\u53f7\u3067 $\\Gamma(1)=1$ \u3092\u4f7f\u3044, 2\u3064\u76ee\u306e\u7b49\u53f7\u3067\u30d9\u30fc\u30bf\u51fd\u6570\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u95a2\u4fc2\u3092\u7528\u3044, 3\u3064\u76ee\u306e\u7b49\u53f7\u3067\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u4e09\u89d2\u51fd\u6570\u3092\u7528\u3044\u305f\u8868\u793a\u3092\u4f7f\u3063\u305f. \u3086\u3048\u306b $\\Gamma(1/2)=\\sqrt{\\pi}$. $\\QED$\n\n**\u6ce8\u610f:** \u3053\u306e\u554f\u984c\u306e\u89e3\u7b54\u4f8b\u306fGauss\u7a4d\u5206\u306e\u516c\u5f0f\u306e\u5225\u8a3c\u660e $\\int_{-\\infty}^\\infty e^{-x^2}\\,dx=\\Gamma(1/2)=\\sqrt{\\pi}$ \u3092\u4e0e\u3048\u308b. $\\QED$\n\n**\u554f\u984c:** \u6b21\u306e\u7a4d\u5206\u3092\u8a08\u7b97\u305b\u3088:\n\n$$\nA = \\int_0^1 x^5(1-x^2)^{3/2}\\,dx.\n$$\n\n**\u89e3\u7b54\u4f8b:** $x=t^{1/2}$ \u3068\u7f6e\u63db\u3059\u308b\u3068 $\\ds dx=\\frac{1}{2}t^{-1/2}\\,dt$ \u306a\u306e\u3067,\n\n$$\nA = \\int_0^1 t^{5/2}(1-t)^{3/2}\\,\\frac{1}{2}t^{-1/2}\\,dt = \n\\frac{1}{2}\\int_0^1 t^2(1-t)^{3/2}\\,dt = \\frac{1}{2}B(3, 5/2) =\n\\frac{\\Gamma(3)\\Gamma(5/2)}{2\\Gamma(3+5/2)}.\n$$\n\n3\u3064\u76ee\u306e\u7b49\u53f7\u3067 $2=3-1$, $3/2=5/2-1$ \u3068\u307f\u306a\u3057\u3066\u304b\u3089\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u8868\u793a\u3092\u5f97\u3066\u3044\u308b\u3053\u3068\u306b\u6ce8\u610f\u305b\u3088. \u3053\u306e\u30b9\u30c6\u30c3\u30d7\u3067\u3088\u304f\u9593\u9055\u3046.\n\n\u4e00\u822c\u306b\u975e\u8ca0\u306e\u6574\u6570 $n$ \u306b\u3064\u3044\u3066\n\n$$\n\\Gamma(n+1) = n!, \\quad\n\\frac{\\Gamma(s)}{\\Gamma(s+n)} = \\frac{1}{s(s+1)\\cdots(s+n-1)}\n$$\n\n\u306a\u306e\u3067, \n\n$$\n\\Gamma(3) = 2! 
= 2, \\quad\n\\frac{\\Gamma(5/2)}{\\Gamma(3+5/2)} = \\frac{1}{(5/2)(7/2)(9/2)} = \\frac{2^3}{5\\cdot 7\\cdot 9}.\n$$\n\n\u3057\u305f\u304c\u3063\u3066\n\n$$\nA = \\frac{2}{2}\\frac{2^3}{5\\cdot7\\cdot9} = \\frac{8}{315}.\n\\qquad \\QED\n$$\n\n\n```julia\nx = symbols(\"x\", real=true)\nintegrate(x^5*(1-x^2)^(Sym(3)/2), (x,0,1))\n```\n\n\n\n\n$$\\frac{8}{315}$$\n\n\n\n#### B(s, 1/2)\u306e\u7d1a\u6570\u5c55\u958b\n\n$\\ds\\binom{-1/2}{n}$ \u306f\u6b21\u3092\u6e80\u305f\u3057\u3066\u3044\u308b:\n\n$$\n\\binom{-1/2}{n}(-x)^n =\n\\frac{(1/2)(3/2)\\cdots((2n-1)/2)}{n!}x^n =\n\\frac{1}{2^{2n}}\\binom{2n}{n}x^n.\n$$\n\n\u3086\u3048\u306b, $|x|<1$ \u306e\u3068\u304d,\n\n$$\n(1-x)^{-1/2} = \\sum_{n=0}^\\infty \\frac{1}{2^{2n}}\\binom{2n}{n}x^n.\n$$\n\n\u3057\u305f\u304c\u3063\u3066,\n\n$$\nB(s,1/2)=\\int_0^1 x^{s-1}(1-x)^{-1/2}\\,dx=\n\\sum_{n=0}^\\infty \\frac{1}{2^{2n}}\\binom{2n}{n}\\int_0^1 x^{s+n-1}\\,dx =\n\\sum_{n=0}^\\infty \\frac{1}{2^{2n}}\\binom{2n}{n}\\frac{1}{s+n}.\n$$\n\n\u4f8b\u3048\u3070, $s=1/2$ \u306e\u3068\u304d, $B(1/2,1/2)=\\Gamma(1/2)^2=\\pi$ \u306a\u306e\u3067, \u4e21\u8fba\u30922\u3067\u5272\u308b\u3068,\n\n$$\n\\sum_{n=0}^\\infty \\frac{1}{2^{2n}}\\binom{2n}{n}\\frac{1}{2n+1} =\n\\frac{1}{2}B(1/2,1/2) = \\frac{\\pi}{2}.\n$$\n\n\u3053\u306e\u3088\u3046\u306a\u516c\u5f0f\u306f\u30d9\u30fc\u30bf\u51fd\u6570\u306b\u3064\u3044\u3066\u77e5\u3089\u306a\u3044\u3068\u9a5a\u304f\u3079\u304d\u516c\u5f0f\u306b\u898b\u3048\u3066\u3057\u307e\u3046\u304c, \u30d9\u30fc\u30bf\u51fd\u6570\u306b\u3064\u3044\u3066\u77e5\u3063\u3066\u3044\u308c\u3070\u5358\u306b\u4e8c\u9805\u5c55\u958b\u3092\u30d9\u30fc\u30bf\u51fd\u6570\u306e\u88ab\u7a4d\u5206\u51fd\u6570\u306b\u9069\u7528\u3057\u305f\u3060\u3051\u306e\u516c\u5f0f\u306b\u904e\u304e\u306a\u3044.\n\n### \u30ac\u30f3\u30de\u51fd\u6570\u306e\u7121\u9650\u7a4d\u8868\u793a\n\n**\u554f\u984c(Gauss\u306e\u516c\u5f0f):** \u30d9\u30fc\u30bf\u51fd\u6570\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u95a2\u4fc2\u3092\u7528\u3044\u3066, \u6b21\u306e\u516c\u5f0f\u3092\u793a\u305b.\n\n$$\n\\Gamma(s) = \\lim_{n\\to\\infty}\\frac{n^s n!}{s(s+1)\\cdots(s+n)}.\n$$\n\n**\u89e3\u7b54\u4f8b:** \u53f3\u8fba\u3092\u30d9\u30fc\u30bf\u51fd\u6570\u3068\u8868\u793a\u3059\u308b\u3053\u3068\u3092\u8003\u3048\u308b. \u4ee5\u4e0b\u3067\u306f $n$ \u306f\u6b63\u306e\u6574\u6570\u3067\u3042\u308b\u3068\u3057, $s>0$ \u3068\u4eee\u5b9a\u3059\u308b. \u30d9\u30fc\u30bf\u51fd\u6570\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f\u304a\u3088\u3073 $\\Gamma(n+1)=n!$ \u3088\u308a,\n\n$$\nB(s,n+1) = \\frac{\\Gamma(s)\\Gamma(n+1)}{\\Gamma(s+n+1)} =\n\\frac{n!}{s(s+1)\\cdots(s+n)}.\n$$\n\n\u3086\u3048\u306b\n\n$$\nn^s B(s,n+1) = \\frac{n^s n!}{s(s+1)\\cdots(s+n)}.\n$$\n\n\u5de6\u8fba\u3092 $n\\to\\infty$ \u3067\u306e\u6975\u9650\u3092\u53d6\u308a\u6613\u3044\u5f62\u306b\u5909\u5f62\u3057\u3088\u3046. $x=t/n$ \u3068\u7f6e\u63db\u3059\u308b\u3053\u3068\u306b\u3088\u3063\u3066, $n\\to\\infty$ \u306e\u3068\u304d\n\n$$\n\\begin{aligned}\nn^s B(s,n+1) &= n^s \\int_0^1 x^{s-1}(1-x)^n\\,dx\n\\\\ &=\n\\int_0^n t^{s-1}\\left(1-\\frac{t}{n}\\right)^n\\,dt \\to \\int_0^\\infty t^{s-1}e^{-t}\\,dt = \\Gamma(s).\n\\end{aligned}\n$$\n\n\u4ee5\u4e0a\u3092\u307e\u3068\u3081\u308b\u3068\u793a\u3057\u305f\u3044\u7d50\u679c\u304c\u5f97\u3089\u308c\u308b. $\\QED$\n\n**\u554f\u984c:** \u4e0a\u306e\u89e3\u7b54\u4f8b\u4e2d\u3067\u6975\u9650\u3068\u7a4d\u5206\u306e\u9806\u5e8f\u3092\u4ea4\u63db\u3057\u305f. 
その部分の議論を指数函数に関する不等式

$$
\left(1+\frac{t}{a}\right)^a \leqq e^t \leqq \left(1-\frac{t}{b}\right)^{-b}\qquad(-a<t<b,\;\; a,b>0)
\tag{1}
$$

と $\ds \left(1+\frac{t}{a}\right)^a$, $\ds \left(1-\frac{t}{b}\right)^{-b}$ がそれぞれ $a,b$ について単調増加, 単調減少することを用いて正当化せよ.

**解答例:** 問題文の中で与えられた不等式の全体の逆数を取り, $a=m$, $b=n$ とおくと, 

$$
\left(1-\frac{t}{n}\right)^n \leqq e^{-t} \leqq \left(1+\frac{t}{m}\right)^{-m} \qquad (-m<t<n,\;\; m,n>0)
\tag{2}
$$

が得られる. これより, $n>0$, $m>s$ のとき, 

$$
n^s B(s,n+1) = \int_0^n t^{s-1}\left(1-\frac{t}{n}\right)^n\,dt, \qquad 
m^s B(s,m-s) = \int_0^\infty t^{s-1}\left(1+\frac{t}{m}\right)^{-m}\,dt
$$

が得られ, それぞれ, $n$, $m$ について単調増加, 単調減少することがわかる. これらと $\ds\Gamma(s)=\int_0^\infty t^{s-1}e^{-t}\,dt$ を比較すると, 

$$
n^s B(s,n+1) \leqq \Gamma(s) \leqq m^s B(s,m-s).
\tag{$*$}
$$

$n^s B(s,n+1)$, $ m^s B(s,m-s)$ はそれぞれ $n$, $m$ について単調増加, 単調減少するので, どちらも $n,m\to\infty$ で収束する. そして, $m=n+s+1$ とおくと, 

$$
\frac{m^s B(s,m-s)}{n^s B(s,n+1)} = \frac{(n+s+1)^s B(s,n+1)}{n^s B(s,n+1)} =
\left(1+\frac{s+1}{n}\right)^s \to 1 \quad(n\to\infty)
$$

なので, $n^s B(s,n+1)$, $ m^s B(s,m-s)$ は $n,m\to\infty$ で同じ値に収束する. これと不等式($*$)を合わせると, $n^s B(s,n+1)$, $ m^s B(s,m-s)$ は $n,m\to\infty$ で $\Gamma(s)$ に収束することがわかる. $\QED$

**注意:** 不等式(1),(2)と $a,b$, $m,n$ に関する単調性は極限で指数函数が現われる結果を初等的に正当化するために非常に便利である. $\QED$

**問題(Weierstrassの公式):** 上の問題の結果を用いて, 次の公式を示せ.

$$
\frac{1}{\Gamma(s)} = 
e^{\gamma s} s\prod_{n=1}^\infty\left[\left(1+\frac{s}{n}\right)e^{-s/n}\right].
\tag{$*$}
$$

ここで $\gamma$ はEuler定数である:

$$
\gamma = \lim_{n\to\infty}\left(\sum_{k=1}^n\frac{1}{k}-\log n\right) =
0.5772\cdots
$$

**解答例:**
$$
\begin{aligned}
&
\frac{s(s+1)\cdots(s+n)}{n^s n!}
\\ &=
s\left(1+s\right)\left(1+\frac{s}{2}\right)\cdots\left(1+\frac{s}{n}\right) e^{-s\log n}
\\ &=
s\left(1+s\right)e^{-s}
\left(1+\frac{s}{2}\right)e^{-s/2}
\cdots
\left(1+\frac{s}{n}\right)e^{-s/n}
e^{s\left(1+\frac{1}{2}+\cdots+\frac{1}{n}-\log n\right)}
\end{aligned}
$$

であるから, 公式($*$)を得る. 
$\\QED$\n\n**\u6ce8\u610f:** \n$$\n\\begin{aligned}\n\\log\\left[\\left(1+\\frac{s}{n}\\right)e^{-s/n}\\right] &=\n\\log\\left(1+\\frac{s}{n}\\right) - \\frac{s}{n} \n\\\\ &=\n\\frac{s}{n} - \\frac{s^2}{2n^2} + O\\left(\\frac{1}{n^3}\\right) - \\frac{s}{n} \n\\\\ &= -\n\\frac{s^2}{2n^2} + O\\left(\\frac{1}{n^3}\\right)\n\\end{aligned}\n$$\n\n\u306a\u306e\u3067\n\n$$\n\\prod_{n=1}^\\infty\\left[\\left(1+\\frac{s}{n}\\right)e^{-s/n}\\right] =\n\\prod_{n=1}^\\infty\\left[1 + O\\left(\\frac{1}{n^2}\\right)\\right]\n$$\n\n\u3068\u306a\u308a, \u3053\u306e\u7121\u9650\u7a4d\u306f\u4efb\u610f\u306e\u8907\u7d20\u6570 $s$ \u306b\u3064\u3044\u3066\u53ce\u675f\u3059\u308b. \u3057\u305f\u304c\u3063\u3066, Weierstrass\u306e\u516c\u5f0f\u306f $1/\\Gamma(s)$ \u306e\u3059\u3079\u3066\u306e\u8907\u7d20\u6570 $s$ \u3078\u306e\u81ea\u7136\u306a\u62e1\u5f35\u3092\u4e0e\u3048\u308b. $\\QED$\n\n### sin\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u95a2\u4fc2\n\nsin\u306e\u7121\u9650\u7a4d\u8868\u793a\u3068 Euler's reflection formula\u306e\u8a3c\u660e\u306b\u3064\u3044\u3066\u306f\u30ce\u30fc\u30c8\u300c12 Fourier\u89e3\u6790\u300d\u306e\u30ac\u30f3\u30de\u51fd\u6570\u3068sin\u306e\u95a2\u4fc2\u306e\u7bc0\u3082\u53c2\u7167\u305b\u3088. \u4ee5\u4e0b\u3067\u306fsin\u306e\u5947\u6570\u500d\u89d2\u306e\u516c\u5f0f\u3092\u7528\u3044\u305f\u8a3c\u660e\u3092\u7d39\u4ecb\u3059\u308b.\n\n#### sin\u306e\u7121\u9650\u7a4d\u8868\u793a\n\nsin\u306e\u7121\u9650\u7a4d\u8868\u793a\n\n$$\n\\frac{\\sin(\\pi s)}{\\pi} =\ns \\prod_{n=1}^\\infty\\left(1-\\frac{s^2}{n^2}\\right)\n$$\n\n\u3092\u5c0e\u51fa\u3057\u305f\u3044. \u3053\u306e\u516c\u5f0f\u306f\u6b63\u5f26\u51fd\u6570\u306e\u5947\u6570\u500d\u89d2\u306e\u516c\u5f0f\u306e\u6975\u9650\u3068\u3057\u3066\u3082\u5c0e\u51fa\u3055\u308c\u308b\u3053\u3068\u3092\u4ee5\u4e0b\u3067\u8aac\u660e\u3057\u3088\u3046. (\u4ed6\u306b\u3082\u69d8\u3005\u306a\u7d4c\u8def\u3067\u306e\u8a3c\u660e\u304c\u3042\u308b.)\n\n\u975e\u8ca0\u306e\u6574\u6570 $n$ \u306b\u95a2\u3059\u308b $e^{inx} = (e^{ix})^n$ \u306e\u53f3\u8fba\u306b $e^{ix} = \\cos x + i\\sin x$ \u3092\u4ee3\u5165\u3057\u3066\u4e8c\u9805\u5b9a\u7406\u3092\u9069\u7528\u3057, \u4e21\u8fba\u306e\u865a\u90e8\u3092\u53d6\u308b\u3068\u6b21\u304c\u5f97\u3089\u308c\u308b:\n\n$$\n \\sin(nx) = \\sum_{0\\leqq kThe Gamma Function\n\n\u306e\u7b2c4\u7bc0\u306b\u66f8\u3044\u3066\u3042\u308b.\n\n\u51fd\u6570 $f(t)$ \u3092\n\n$$\nf(t) = \\Gamma(t)\\Gamma(1-t)\\frac{\\sin(\\pi t)}{\\pi}\n$$\n\n\u3068\u5b9a\u3081\u308b. $0\u30ac\u30f3\u30de\u5206\u5e03\u306e\u4e2d\u5fc3\u6975\u9650\u5b9a\u7406\u3068Stirling\u306e\u516c\u5f0f\n\n\u306e\u7b2c8\u7bc0\u3092\u53c2\u7167\u305b\u3088.\n\n### Lerch\u306e\u5b9a\u7406\u3068\u30bc\u30fc\u30bf\u6b63\u898f\u5316\u7a4d\n\n#### Lerch\u306e\u5b9a\u7406 (Hurwitz\u306e\u30bc\u30fc\u30bf\u51fd\u6570\u3068\u30ac\u30f3\u30de\u51fd\u6570\u306e\u95a2\u4fc2)\n\n**Lerch\u306e\u5b9a\u7406:** Hurwitz\u306e\u30bc\u30fc\u30bf\u51fd\u6570 $\\zeta(s,x)$ \u304b\u3089\u30ac\u30f3\u30de\u51fd\u6570\u304c\n\n$$\n\\zeta_s(0,x) = \\log\\frac{\\Gamma(x)}{\\sqrt{2\\pi}}, \\qquad\n\\Gamma(x) = \\sqrt{2\\pi}\\;\\exp(\\zeta_s(0,x))\n$$\n\n\u306b\u3088\u3063\u3066\u5f97\u3089\u308c\u308b. \u3053\u3053\u3067 $\\zeta_s(s,x)$ \u306f $\\zeta(s,x)$ \u306e $s$ \u306b\u95a2\u3059\u308b\u504f\u5c0e\u51fd\u6570\u3067\u3042\u308b.\n\n**\u8a3c\u660e:** $F(x)=\\zeta_s(0,x)-\\log\\Gamma(x)$ \u3068\u304a\u304f. $F(x)=-\\log\\sqrt{2\\pi}$ \u3067\u3042\u308b\u3053\u3068\u3092\u793a\u305b\u3070\u5341\u5206\u3067\u3042\u308b. 
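証明に入る前に, Lerchの定理を数値的に確認しておくのもよい. (以下は Hurwitzのゼータ函数 `zeta(s, x)` と `gamma` が利用できることを仮定し, $\zeta_s(0,x)$ を中心差分で近似した, あくまで目安のための確認例である.)


```julia
# Lerchの定理 Γ(x) = √(2π) exp(ζ_s(0,x)) の数値的確認の一例.
# zeta(s, x) (Hurwitzのゼータ函数) と gamma が利用できることを仮定し,
# ζ_s(0,x) を中心差分 (zeta(h,x) - zeta(-h,x))/(2h) で近似する.

dzeta0(x; h=1e-6) = (zeta(h, x) - zeta(-h, x))/(2h)
lerch_gamma(x) = √(2π)*exp(dzeta0(x))
[(x, gamma(x), lerch_gamma(x)) for x in (0.5, 1.0, 1.5, 2.5)]
```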

(1) $(\zeta_s(0,x))'' = (\log\Gamma(x))''$ を示そう. ここで $'$ は $x$ による微分を表わす. まずHurwitzのゼータ函数

$$
\zeta(s,x) = \sum_{k=0}^\infty \frac{1}{(x+k)^s}
$$

については

$$
\zeta_x(s,x) = -s\zeta(s+1,x)
$$

が成立しているので, 

$$
\zeta_{xx}(s,x) = s(s+1)\zeta(s+2,x).
$$

これより, 

$$
(\zeta_s(0,x))'' = \zeta_{xxs}(0,x) = \zeta(2,x).
$$

一方, ガンマ函数

$$
\Gamma(x) = \lim_{n\to\infty}\frac{n!\,n^x}{x(x+1)\cdots(x+n)}
$$

については,

$$
\begin{aligned}
&
\log\Gamma(x) =
\lim_{n\to\infty}\left(
\log n! + x\log n - \log x - \log(x+1) - \cdots - \log(x+n)
\right),
\\ &
(\log\Gamma(x))' =
\lim_{n\to\infty}\left(
\log n - \frac{1}{x} - \frac{1}{x+1} - \cdots - \frac{1}{x+n}
\right),
\\ &
(\log\Gamma(x))'' =
\lim_{n\to\infty}\left(
\frac{1}{x^2} + \frac{1}{(x+1)^2} + \cdots + \frac{1}{(x+n)^2}
\right) = \zeta(2,x).
\end{aligned}
$$

これで $(\zeta_s(0,x))'' = (\log\Gamma(x))''$ が示された.

(2) 上の結果より, $F(x)=\zeta_s(0,x)-\log\Gamma(x)$ は $x$ の一次函数である.

(3) $\zeta_s(0,x)$ と $\log\Gamma(x)$ がどちらも同一の函数等式 $f(x+1)=f(x)+\log x$ を満たすことを示そう. 

$$
\begin{aligned}
&
\zeta(s,x+1) = \zeta(s,x) - \frac{1}{x^s},
\qquad\therefore\quad
\zeta_s(0,x+1) = \zeta_s(0,x) + \log x.
\\ &
\log\Gamma(x+1) = \log(x\Gamma(x)) = \log\Gamma(x) + \log x.
\end{aligned}
$$

(4) $F(x)=\zeta_s(0,x)-\log\Gamma(x)$ は $x$ の一次函数だったので, 上の結果より $F(x)$ は定数になる.

(5) $\zeta_s(0,1/2)=-\log\sqrt{2}$ を示そう.

$$
\begin{aligned}
\zeta(s) - 2^{-s}\zeta(s) &=
\left(\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\cdots\right) -
\left(\frac{1}{2^s}+\frac{1}{4^s}+\cdots\right) 
\\ &=
\frac{1}{1^s}+\frac{1}{3^s}+\cdots =
\sum_{k=0}^\infty\frac{1}{(2k+1)^s}
\end{aligned}
$$

なので

$$
\begin{aligned}
&
\zeta(s,1/2) = \sum_{k=0}^\infty\frac{1}{(k+1/2)^s} =
2^s\sum_{k=0}^\infty\frac{1}{(2k+1)^s} = 
2^s(\zeta(s) - 2^{-s}\zeta(s)) =
(2^s-1)\zeta(s),
\\ &\therefore\quad
\zeta_s(0,1/2) = \zeta(0)\log 2 = -\frac{1}{2}\log 2 = -\log\sqrt{2}.
\end{aligned}
$$

(6) $\log\Gamma(1/2)=\log\sqrt{\pi}$ なので, 上の結果より, $F(x)=-\log\sqrt{2\pi}$ であることがわかる. $\QED$

#### ゼータ正規化積 

数列 $a_n$ に対して,

$$
f(s) = \sum_{n=1}^N \frac{1}{a_n^s}
$$

とおくとき, 

$$
f'(0) = -\sum_{n=1}^N \log a_n
$$

なので, 

$$
\exp(-f'(0)) = \prod_{n=1}^N a_n
$$

が成立している. 
\u3082\u3057\u3082 $N=\\infty$ \u306e\u3068\u304d\u306e $\\ds\\prod_{n=1}^\\infty a_n$ \u304c\u767a\u6563\u3057\u3066\u3044\u3066\u3082, $\\ds f(s)=\\sum_{n=1}^\\infty \\frac{1}{a_n^s}$ \u306e\u89e3\u6790\u63a5\u7d9a\u306b\u3088\u3063\u3066, \u5de6\u8fba\u306e $\\exp(-f'(0))$ \u306fwell-defined\u306b\u306a\u308b\u53ef\u80fd\u6027\u304c\u3042\u308b. \u305d\u306e\u3068\u304d, $\\exp(-f'(0))$ \u3092\n\n$$\n\\exp(-f'(0)) = \\PROD_{n=1}^\\infty a_n\n$$\n\n\u3068\u66f8\u304d, $a_n$ \u9054\u306e**\u30bc\u30fc\u30bf\u6b63\u898f\u5316\u7a4d**\u3068\u547c\u3076. \n\n\u4f8b\u3048\u3070 $x,x+1,x+2,x+3,\\ldots$ \u306e\u30bc\u30fc\u30bf\u6b63\u898f\u5316\u7a4d\u306fLerch\u306e\u5b9a\u7406\u3088\u308a, $\\ds\\frac{\\sqrt{2\\pi}}{\\Gamma(x)}$ \u306b\u306a\u308b. \u7279\u306b $x=1$ \u306e\u3068\u304d\u306e $1,2,3,4,\\ldots$ \u306e\u30bc\u30fc\u30bf\u6b63\u898f\u5316\u7a4d\u306f $\\sqrt{2\\pi}$ \u306b\u306a\u308b:\n\n$$\n\"\\! 1\\times 2\\times 3\\times 4\\times\\cdots \\!\" \\,= \n\\PROD_{n=1}^\\infty n = \\exp(-\\zeta'(0)) = \\sqrt{2\\pi}.\n$$\n\n\u3053\u308c\u306f\n\n$$\n\"\\! 1+2+3+4+\\cdots \\!\"\\, = \\zeta(-1) = -\\frac{1}{12}\n$$\n\n\u306e\u7a4d\u30d0\u30fc\u30b8\u30e7\u30f3\u3067\u3042\u308b. \n\n## Stirling\u306e\u516c\u5f0f\u3068Laplace\u306e\u65b9\u6cd5\n\n\u4e00\u822c\u306b\u6570\u5217 $a_n,b_n$ \u306b\u3064\u3044\u3066\n\n$$\n\\lim_{n\\to\\infty}\\frac{a_n}{b_n} = 1\n$$\n\n\u304c\u6210\u7acb\u3059\u308b\u3068\u304d,\n\n$$\na_n\\sim b_n\n$$\n\n\u3068\u66f8\u304f\u3053\u3068\u306b\u3059\u308b. \n\n### Stirling\u306e\u516c\u5f0f\n\n**Stirling\u306e(\u8fd1\u4f3c)\u516c\u5f0f:** $n\\to\\infty$ \u306e\u3068\u304d,\n\n$$\nn!\\sim n^n e^{-n} \\sqrt{2\\pi n}.\n$$\n\n\u3055\u3089\u306b, \u4e21\u8fba\u306e\u5bfe\u6570\u3092\u53d6\u308b\u3053\u3068\u306b\u3088\u3063\u3066, $n\\to\\infty$ \u306e\u3068\u304d,\n\n$$\n\\log n! = n\\log n - n + \\frac{1}{2}\\log n + \\log\\sqrt{2\\pi} + o(1).\n$$\n\nStirling\u306e\u516c\u5f0f\u306e\u300c\u7269\u7406\u5b66\u7684\u300d\u3082\u3057\u304f\u306f\u300c\u60c5\u5831\u7406\u8ad6\u7684\u300d\u306a\u5fdc\u7528\u306b\u3064\u3044\u3066\u306f\n\n* \u9ed2\u6728\u7384, Kullback-Leibler\u60c5\u5831\u91cf\u3068Sanov\u306e\u5b9a\u7406\n\n\u306e\u7b2c1\u7bc0\u3092\u53c2\u7167\u305b\u3088.\n\n**Stirling\u306e\u516c\u5f0f\u306e\u8a3c\u660e:**\n\n$$\nn! = \\Gamma(n+1) = \\int_0^\\infty e^{-x} x^n\\,dx\n$$\n\n\u3067 $x = n+\\sqrt{n}\\;y = n(1+y/\\sqrt{n})$ \u3068\u7f6e\u63db\u3059\u308b\u3068, \n\n$$\nn! = \nn^n e^{-n} \\sqrt{n} \\int_{-\\sqrt{n}}^\\infty e^{-\\sqrt{n}\\;y}\\;\\left(1+\\frac{y}{\\sqrt{n}}\\right)^n\\,dy =\nn^n e^{-n} \\sqrt{n} \\int_{-\\sqrt{n}}^\\infty \\;f_n(y)\\,dy.\n$$\n\n\u3053\u3053\u3067, \u88ab\u7a4d\u5206\u51fd\u6570\u3092 $f_n(y)$ \u3068\u66f8\u3044\u305f. \u305d\u306e\u3068\u304d $n\\to\\infty$ \u3067\n\n$$\n\\begin{aligned}\n\\log f_n(y) &= -\\sqrt{n}\\;y + n\\log\\left(1+\\frac{y}{\\sqrt{n}}\\right) =\n-\\sqrt{n}\\;y + n\\left(\\frac{y}{\\sqrt{n}} - \\frac{y^2}{2n} + O\\left(\\frac{1}{n\\sqrt{n}}\\right)\\right) \n\\\\ &=\n-\\frac{y^2}{2} + O\\left(\\frac{1}{\\sqrt{n}}\\right) \\to -\\frac{y^2}{2}.\n\\end{aligned}\n$$\n\n\u3059\u306a\u308f\u3061 $f_n(y)\\to e^{-y^2/2}$ \u3068\u306a\u308b. \u3086\u3048\u306b\n\n$$\n\\frac{n!}{n^n e^{-n} \\sqrt{2\\pi n}} =\n\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\sqrt{n}}^\\infty\\;f_n(y)\\,dy\n\\to \\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty e^{-y^2/2}\\,dy = 1.\n$$\n\n\u6700\u5f8c\u306e\u7b49\u53f7\u3067Gauss\u7a4d\u5206\u306e\u516c\u5f0f $\\int_{-\\infty}^\\infty e^{-y^2/a}\\,dy=\\sqrt{a\\pi}$ \u3092\u7528\u3044\u305f. 
$\\QED$\n\n**Stirling\u306e\u516c\u5f0f\u306e\u8a3c\u660e\u306e\u89e3\u8aac:** \u4e0a\u306e\u8a3c\u660e\u306e\u30dd\u30a4\u30f3\u30c8\u306f $x=n+\\sqrt{n}\\;y$ \u3068\u3044\u3046\u7a4d\u5206\u5909\u6570\u5909\u63db\u3067\u3042\u308b. \u3053\u306e\u5909\u6570\u5909\u63db\u306e\u300c\u6b63\u4f53\u300d\u306f $\\Gamma(n+1)=\\int_0^\\infty e^{-x} x^n\\,dx$ \u306e\u88ab\u7a4d\u5206\u51fd\u6570 $f(x)=e^{-x}x^n$ \u306e\u30b0\u30e9\u30d5\u3092\u63cf\u3044\u3066\u307f\u308c\u3070\u898b\u5f53\u304c\u3064\u304f.\n\n$g(x)=\\log f(x)=n\\log x - x$ \u306e\u5c0e\u51fd\u6570\u306f $g'(x)=n/x-1$ \u306f $x$ \u306b\u3064\u3044\u3066\u5358\u8abf\u6e1b\u5c11\u3067\u3042\u308a, $x=n$ \u3067 $0$ \u306b\u306a\u308b. \u3086\u3048\u306b $g(x)=\\log f(x)$ \u306f $x=n$ \u3067\u6700\u5927\u306b\u306a\u308b. \u305d\u3053\u3067 $x=n$ \u306b\u304a\u3051\u308b $g(x)=\\log f(x)$ \u306eTaylor\u5c55\u958b\u3092\u6c42\u3081\u3066\u307f\u3088\u3046. $g''(x)=-n/x^2$, $g'''(x)=2n/x^3$ \u306a\u306e\u3067, $g(n)=n\\log n - n$, $g'(n)=0$, $g''(n)=-1/n$, $g'''(n)=2/n^2$ \u306a\u306e\u3067,\n\n$$\ng(x) = n \\log n - n -\\frac{(x-n)^2}{2n} + \\frac{(x-n)^3}{3\\,n^2} + \\cdots\n$$\n\n\u3053\u308c\u306e2\u6b21\u306e\u9805\u304c $-y^2/2$ \u306b\u306a\u308b\u3088\u3046\u306a\u5909\u6570\u5909\u63db\u304c\u3061\u3087\u3046\u3069 $x=n+\\sqrt{n}\\;y$ \u306b\u306a\u3063\u3066\u3044\u308b. \u3053\u308c\u304c\u4e0a\u306e\u8a3c\u660e\u3067\u7528\u3044\u305f\u5909\u6570\u5909\u63db\u306e\u300c\u6b63\u4f53\u300d\u3067\u3042\u308b. $\\QED$\n\n\n```julia\n# y = f(x) = e^{-x} x^n / (n^n * e^{-n}) \u306e\u30b0\u30e9\u30d5\u306f n \u304c\u5927\u304d\u306a\u3068\u304d,\n# Gauss\u8fd1\u4f3c y = e^{-(x-n)^2/(2n)} \u306e\u30b0\u30e9\u30d5\u306b\u307b\u307c\u4e00\u81f4\u3059\u308b.\n\nf(n,x) = e^(-x + n*log(x) - (n*log(n) - n))\ng(n,x) = e^(-(x-n)^2/(2n))\nPP = []\nfor n in [10, 30, 100, 300]\n x = 0:2.5n/400:2.5n\n n \u2264 20 && (x = 0:3n/400:3n)\n P = plot()\n plot!(title=\"n = $n\", titlefontsize=9)\n plot!(x, f.(n,x), label=\"\")\n plot!(x, g.(n,x), label=\"Gaussian\")\n push!(PP, P)\nend\n\nplot(PP[1:2]..., size=(700, 200))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[3:4]..., size=(700, 200))\n```\n\n\n\n\n \n\n \n\n\n\n**\u6ce8\u610f(\u30ac\u30f3\u30de\u51fd\u6570\u306eStirling\u306e\u8fd1\u4f3c\u516c\u5f0f):** \u4e0a\u306e\u8a3c\u660e\u3067 $n$ \u304c\u6574\u6570\u3067\u3042\u308b\u3053\u3068\u306f\u4f7f\u3063\u3066\u3044\u306a\u3044. \u3086\u3048\u306b\u6b63\u306e\u5b9f\u6570 $s$ \u306b\u3064\u3044\u3066\n\n$$\n\\Gamma(s+1) \\sim s^s e^{-s} \\sqrt{2\\pi s} \\quad (s\\to\\infty)\n$$\n\n\u304c\u8a3c\u660e\u3055\u308c\u3066\u3044\u308b. \u3053\u308c\u306e\u4e21\u8fba\u3092 $s$ \u3067\u5272\u308b\u3068,\n\n$$\n\\Gamma(s) \\sim s^s e^{-s} s^{-1/2} \\sqrt{2\\pi} \\quad (s\\to\\infty)\n$$\n\n\u304c\u5f97\u3089\u308c\u308b. \u3053\u308c\u3089\u3092\u3082**Stirling\u306e\u8fd1\u4f3c\u516c\u5f0f**\u3068\u547c\u3076. $\\QED$\n\nStirling\u306e\u516c\u5f0f\u306e\u91cd\u8981\u306a\u5fdc\u7528\u306b\u3064\u3044\u3066\u306f\n\n* \u9ed2\u6728\u7384, 11 Kullback-Leibler\u60c5\u5831\u91cf\n\n\u3082\u53c2\u7167\u305b\u3088. \u300cStirling\u306e\u516c\u5f0f\u300d\u3068\u305d\u306e\u5fdc\u7528\u3068\u3057\u3066\u306e\u300cKL\u60c5\u5831\u91cf\u306b\u95a2\u3059\u308bSanov\u306e\u5b9a\u7406\u300d\u306b\u3064\u3044\u3066\u306f\u3067\u304d\u308b\u3060\u3051\u65e9\u304f\u7406\u89e3\u3057\u3066\u304a\u3044\u305f\u65b9\u304c\u3088\u3044. 
$\\QED$\n\n**\u554f\u984c:** $n=1,2,\\ldots,10$ \u306b\u3064\u3044\u3066 Stirling \u306e\u516c\u5f0f\u306e\u76f8\u5bfe\u8aa4\u5dee\n\n$$\n\\frac{n^n e^{-n} \\sqrt{2\\pi n}}{n!}-1\n$$\n\n\u3092\u6c42\u3081\u3088.\n\n**\u89e3\u7b54\u4f8b:** \u4ee5\u4e0b\u306e\u30bb\u30eb\u3092\u53c2\u7167\u305b\u3088. $n=5$ \u3067\u76f8\u5bfe\u8aa4\u5dee\u306f2%\u3092\u5207\u3063\u3066\u3044\u308b. $\\QED$\n\n\n```julia\nf(n) = factorial(n)\ng(n) = n^n * exp(-n) * \u221a(2\u03c0*n)\n[(n, f(n), g(n), g(n)/f(n)-1) for n in 1:10]\n```\n\n\n\n\n 10-element Array{Tuple{Int64,Int64,Float64,Float64},1}:\n (1, 1, 0.922137, -0.077863) \n (2, 2, 1.919, -0.0404978) \n (3, 6, 5.83621, -0.0272984) \n (4, 24, 23.5062, -0.020576) \n (5, 120, 118.019, -0.0165069) \n (6, 720, 710.078, -0.0137803) \n (7, 5040, 4980.4, -0.0118262) \n (8, 40320, 39902.4, -0.0103573) \n (9, 362880, 3.59537e5, -0.00921276) \n (10, 3628800, 3.5987e6, -0.00829596)\n\n\n\n**\u53c2\u8003:** \u4e0a\u306e\u8a08\u7b97\u3092\u898b\u308c\u3070, $n^n e^{-n} \\sqrt{2\\pi n}$ \u306f $n!$ \u3088\u308a\u3082\u5fae\u5c0f\u306b\u5c0f\u3055\u3044\u3053\u3068\u304c\u308f\u304b\u308b. \u305d\u306e\u5206\u3092\u88dc\u6b63\u3057\u305f\u3088\u308a\u7cbe\u5bc6\u306a\u8fd1\u4f3c\u5f0f\n\n$$\nn! = n^n e^{-n} \\sqrt{2\\pi n}\\left(1+\\frac{1}{12n}+O\\left(\\frac{1}{n^2}\\right)\\right)\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b. (\u5b9f\u969b\u306b\u306f $O(1/n^2)$ \u306e\u90e8\u5206\u306b\u3064\u3044\u3066\u3082\u3063\u3068\u8a73\u3057\u3044\u3053\u3068\u304c\u308f\u304b\u308b.)\n\n$1/(12n)$ \u3067\u88dc\u6b63\u3057\u305f\u8fd1\u4f3c\u5f0f\u306e\u76f8\u5bfe\u8aa4\u5dee\u306f $n=1$ \u3067\u3059\u3067\u306b0.1%\u7a0b\u5ea6\u3068\u975e\u5e38\u306b\u5c0f\u3055\u304f\u306a\u308b. \u6b21\u306e\u30bb\u30eb\u3092\u898b\u3088. $\\QED$\n\n\n```julia\nf(n) = factorial(n)\ng1(n) = n^n * exp(-n) * \u221a(2\u03c0*n) * (1+1/(12n))\n[(n, f(n), g1(n), g1(n)/f(n)-1) for n in 1:10]\n```\n\n\n\n\n 10-element Array{Tuple{Int64,Int64,Float64,Float64},1}:\n (1, 1, 0.998982, -0.00101824) \n (2, 2, 1.99896, -0.000518567) \n (3, 6, 5.99833, -0.000278913) \n (4, 24, 23.9959, -0.00017137) \n (5, 120, 119.986, -0.000115383) \n (6, 720, 719.94, -8.28033e-5) \n (7, 5040, 5039.69, -6.22504e-5) \n (8, 40320, 40318.0, -4.84771e-5) \n (9, 362880, 3.62866e5, -3.88063e-5) \n (10, 3628800, 3.62868e6, -3.17601e-5)\n\n\n\n### Wallis\u306e\u516c\u5f0f\u306eStirling\u306e\u516c\u5f0f\u3092\u4f7f\u3063\u305f\u8a3c\u660e\n\n**\u554f\u984c(Wallis\u306e\u516c\u5f0f):** Stirling\u306e\u516c\u5f0f\u3092\u7528\u3044\u3066\u6b21\u3092\u793a\u305b:\n\n$$\n\\frac{1}{2^{2n}}\\binom{2n}{n} \\sim \\frac{1}{\\sqrt{\\pi n}}.\n$$\n\n**\u89e3\u7b54\u4f8b:**\n$$\n\\frac{1}{2^{2n}}\\binom{2n}{n} = \\frac{(2n)!}{2^{2n}(n!)^2}\n\\sim \\frac{(2n)^{2n}e^{-2n}\\sqrt{4\\pi n}}{2^{2n}n^{2n}e^{-2n}2\\pi n} = \\frac{1}{\\sqrt{\\pi n}}.\n\\qquad \\QED\n$$\n\n**\u6ce8\u610f:** \u3053\u306e\u5f62\u306eWallis\u306e\u516c\u5f0f\u306f1\u6b21\u5143\u306e\u5358\u7d14\u30e9\u30f3\u30c0\u30e0\u30a6\u30a9\u30fc\u30af\u306e\u9006\u6b63\u5f26\u6cd5\u5247\u306b\u95a2\u4fc2\u3057\u3066\u3044\u308b.\n\n* \u9ed2\u6728\u7384, \u5358\u7d14\u30e9\u30f3\u30c0\u30e0\u30a6\u30a9\u30fc\u30af\u306e\u9006\u6b63\u5f26\u6cd5\u5247 (\u624b\u63cf\u304d\u306e\u30ce\u30fc\u30c8\u306ePDF)\n\n\u3092\u53c2\u7167\u305b\u3088. \u7279\u306b\u624b\u63cf\u304d\u306e\u30ce\u30fc\u30c8\u306ePDF\u30d5\u30a1\u30a4\u30eb\u306e12\u9801\u4ee5\u964d\u306b\u307e\u3068\u307e\u3063\u305f\u89e3\u8aac\u304c\u3042\u308b. 
1\u6b21\u5143\u306e\u5358\u7d14\u30e9\u30f3\u30c0\u30e0\u30a6\u30a9\u30fc\u30af\u306e\u5834\u5408\u306b\u306f\u9ad8\u6821\u6570\u5b66\u30ec\u30d9\u30eb\u306e\u7d44\u307f\u5408\u308f\u305b\u8ad6\u7684\u306a\u8b70\u8ad6\u3068Wallis\u306e\u516c\u5f0f\u304b\u3089\u9006\u6b63\u5f26\u6cd5\u5247\u3092\u5c0e\u304f\u3053\u3068\u304c\u3067\u304d\u308b. 1\u6b21\u5143\u306e\u4e00\u822c\u30e9\u30f3\u30c0\u30e0\u30a6\u30a9\u30fc\u30af\u306e\u5834\u5408\u306b\u306fTauber\u578b\u5b9a\u7406\u3092\u4f7f\u3063\u3066Wallis\u306e\u516c\u5f0f\u306b\u5bfe\u5fdc\u3059\u308b\u6f38\u8fd1\u6319\u52d5\u3092\u8a3c\u660e\u3059\u308b\u3053\u3068\u306b\u306a\u308b. $\\QED$\n\n\n```julia\n# Wallis\u306e\u516c\u5f0f\u3088\u308a\n#\n# [ 2^{2n} (n!)^2 / ((2n)! \u221an) ]^2 ---\u2192 \u03c0\n#\n# \u4ee5\u4e0b\u306f\u3053\u308c\u306e\u6570\u5024\u7684\u78ba\u8a8d\n#\n# log n! \u3092 log lgamma(n+1) \u3067\u8a08\u7b97\u3057\u3066\u3044\u308b. \u3053\u3053\u3067 lgamma(x) = log(\u0393(x)).\n# lgamma(x) \u306f\u5bfe\u6570\u30ac\u30f3\u30de\u51fd\u6570\u3092\u5de8\u5927\u306a x \u306b\u3064\u3044\u3066\u3082\u8a08\u7b97\u3057\u3066\u304f\u308c\u308b.\n\nf(n) = exp((2n)*log(typeof(n)(2)) + 2lgamma(n+1) - lgamma(2n+1) - log(n)/2)^2\nWallis_pi = f(big\"10.0\"^40)\nExact__pi = big(\u03c0)\n@show Wallis_pi\n@show Exact__pi\nWallis_pi - Exact__pi\n```\n\n Wallis_pi = 3.14159265358979323846264338327950280112510935008936482449955348608333403219364\n Exact__pi = 3.141592653589793238462643383279502884197169399375105820974944592307816406286198\n\n\n\n\n\n -8.307206004928574099647539110622448237409255782069327815244440488293374992724481e-35\n\n\n\n### Gauss's multiplication formula\n\n**\u554f\u984c(Gauss's multiplication formula):** \u6b21\u3092\u793a\u305b: \u6b63\u306e\u6574\u6570 $n$ \u306b\u5bfe\u3057\u3066,\n\n$$\n\\Gamma(s)\\Gamma\\left(s+\\frac{1}{n}\\right)\\cdots\\Gamma\\left(s+\\frac{n-1}{n}\\right) =\nn^{1/2-ns}(2\\pi)^{(n-1)/2}\\Gamma(ns).\n$$\n\n**\u89e3\u7b54\u4f8b:** \u51fd\u6570 $f(s)$ \u3092\u6b21\u306e\u3088\u3046\u306b\u5b9a\u3081\u308b:\n\n$$\nf(s) = \\frac{\\Gamma(s)\\Gamma\\left(s+\\frac{1}{n}\\right)\\cdots\\Gamma\\left(s+\\frac{n-1}{n}\\right)}{n^{-ns}\\Gamma(ns)}\n$$\n\n$f(s)=n^{1/2}(2\\pi)^{(n-1)/2}$ \u3092\u793a\u305b\u3070\u3088\u3044.\n\n\u30ac\u30f3\u30de\u51fd\u6570\u306e\u51fd\u6570\u7b49\u5f0f\u3060\u3051\u3092\u4f7f\u3063\u3066, $f(s+1)=f(s)$ \u3092\u793a\u305b\u308b:\n\n$$\nf(s+1) = \nf(s)\\frac\n{s\\left(s+\\frac{1}{n}\\right)\\cdots\\left(s+\\frac{n-1}{n}\\right)}\n{n^{-n}(ns+n-1)\\cdots(ns+1)(ns)} = f(s).\n$$\n\n\u4e0a\u3067\u8a3c\u660e\u3055\u308c\u3066\u3044\u308bStirling\u306e\u8fd1\u4f3c\u516c\u5f0f\n\n$$\n\\Gamma(s) \\sim s^s e^{-s} s^{-1/2}\\sqrt{2\\pi} \\quad (s\\to\\infty)\n$$\n\n\u3092\u4f7f\u3063\u3066, $s\\to\\infty$ \u306e\u3068\u304d\u306e $f(s)$ \u306e\u6975\u9650\u3092\u6c42\u3081\u3088\u3046. $s\\to\\infty$ \u306e\u3068\u304d, $\\ds \\left(1+\\frac{a}{s}\\right)^s\\to e^a$ \u306a\u306e\u3067, $s\\to\\infty$ \u306b\u304a\u3044\u3066, \n\n$$\n\\begin{aligned}\n\\Gamma(s+a) &\\sim (s+a)^{s+a-1/2} e^{-s-a} \\sqrt{2\\pi} \n\\\\ &=\ns^{s+a-1/2}e^{-s}\\sqrt{2\\pi}\\;\\left(1+\\frac{a}{s}\\right)^{s+a-1/2} e^{-a} \n\\\\ &\\sim\ns^{s+a-1/2}e^{-s}\\sqrt{2\\pi}.\n\\end{aligned}\n$$\n\n\u3068\u306a\u308b. 
\u3086\u3048\u306b, $\\frac{1}{n}+\\frac{2}{n}+\\cdots+\\frac{n-1}{n}=\\frac{n-1}{2}$ \u306a\u306e\u3067, $s\\to\\infty$ \u306b\u304a\u3044\u3066, \n\n$$\n\\begin{aligned}\n&\n\\Gamma(s) \\sim\ns^{s-1/2} e^{-s}\\sqrt{2\\pi},\n\\\\ &\n\\Gamma\\left(s+\\frac{1}{n}\\right)\\sim\ns^{s+1/n-1/2} e^{-s} \\sqrt{2\\pi},\n\\\\ &\n\\qquad\\qquad\\cdots\\cdots\\cdots\\cdots\\cdots\n\\\\ &\n\\Gamma\\left(s+\\frac{n-1}{n}\\right)\\sim\ns^{s+(n-1)/n-1/2} e^{-s} \\sqrt{2\\pi}\n\\\\ &\n\\therefore\\quad\n\\Gamma(s)\\Gamma\\left(s+\\frac{1}{n}\\right)\\cdots\\Gamma\\left(s+\\frac{n-1}{n}\\right)\\sim\ns^{ns-1/2}e^{-ns}(2\\pi)^{n/2}\n\\\\ &\nn^{-ns}\\Gamma(ns)\\sim\nn^{-ns}(ns)^{ns-1/2}e^{-ns}\\sqrt{2\\pi} =\nn^{-1/2}s^{ns-1/2}e^{-ns}(2\\pi)^{1/2}\n\\end{aligned}\n$$\n\n\u3068\u306a\u308a, \n\n$$\nf(s)\\sim\\frac{s^{ns-1/2}e^{-ns}(2\\pi)^{n/2}}{n^{-1/2}s^{ns-1/2}e^{-ns}(2\\pi)^{1/2}}=\nn^{1/2}(2\\pi)^{(n-1)/2}.\n$$\n\n\u3086\u3048\u306b\u6574\u6570 $N$ \u306b\u3064\u3044\u3066, $f(s+N)=f(s)$ \u306a\u306e\u3067, $N\\to\\infty$ \u306e\u3068\u304d $f(s)=f(s+N)\\to n^{1/2}(2\\pi)^{(n-1)/2}$ \u3068\u306a\u308b. \u3053\u308c\u3067 $f(s)=2^{1/2}(2\\pi)^{(n-1)/2}$ \u304c\u793a\u3055\u308c\u305f. $\\QED$\n\n**\u554f\u984c:** Gauss's multiplication formula \u306e $n=2$ \u306e\u5834\u5408\u3067\u3042\u308b Legendre's duplication formula \u306f\u5b9a\u7a4d\u5206\u306e\u8a08\u7b97\u3060\u3051\u3067\u8a3c\u660e\u3067\u304d\u308b\u306e\u3067\u3042\u3063\u305f. \u4e0a\u306e\u89e3\u7b54\u4f8b\u306f\u672c\u8cea\u7684\u306bStirling\u306e\u8fd1\u4f3c\u516c\u5f0f\u3092\u4f7f\u3063\u3066\u3044\u308b. Gauss's multiplication formula \u306b\u3082\u5b9a\u7a4d\u5206\u306e\u8a08\u7b97\u3060\u3051\u3067\u8a3c\u660e\u3059\u308b\u65b9\u6cd5\u304c\u306a\u3044\u3060\u308d\u3046\u304b. \u4ee5\u4e0b\u306e\u65b9\u91dd\u3067 Gauss's multiplication formula \u3092\u8a3c\u660e\u305b\u3088. \u305f\u3060\u3057, (3)\u306e\u8a3c\u660e\u306b\u306f Euler's reflection formula \u306f\u4f7f\u3063\u3066\u3088\u3044\u3053\u3068\u306b\u3059\u308b. \n\n$t>0$ \u306b\u5bfe\u3059\u308b $n-1$ \u91cd\u7a4d\u5206 $I(t)$ \u3092\u6b21\u306e\u3088\u3046\u306b\u5b9a\u3081\u308b:\n\n$$\nI(t) = \\int_0^\\infty\\cdots\\int_0^\\infty \ne^{-(t^n/(x_2\\cdots x_n)+x_2+\\cdots+x_n)}\nx_2^{-(n-1)/n}x_3^{-(n-2)/n}\\cdots x_n^{-1/n} \\,dx_2\\cdots dx_n.\n$$\n\n\u4ee5\u4e0b\u3092\u793a\u305b:\n\n(1) $\\ds I(t) = \\Gamma\\left(\\frac{1}{n}\\right)\\Gamma\\left(\\frac{2}{n}\\right)\\cdots\\Gamma\\left(\\frac{n-1}{n}\\right)e^{-nt}$.\n\n(2) $\\ds \\Gamma(s)\\Gamma\\left(s+\\frac{1}{n}\\right)\\cdots\\Gamma\\left(s+\\frac{n-1}{n}\\right) = \nn^{1-ns}I(0)\\Gamma(ns) =\nn^{1-ns}\\Gamma\\left(\\frac{1}{n}\\right)\\Gamma\\left(\\frac{2}{n}\\right)\\cdots\\Gamma\\left(\\frac{n-1}{n}\\right)\\Gamma(ns)$.\n\n(3) $\\ds I(0) = \n\\Gamma\\left(\\frac{1}{n}\\right)\\Gamma\\left(\\frac{2}{n}\\right)\\cdots\\Gamma\\left(\\frac{n-1}{n}\\right) =\n(2\\pi)^{(n-1)/2} n^{-1/2}$.\n\n**\u6ce8\u610f:** (1), (2) \u306e\u65b9\u91dd\u306e\u8a3c\u660e\u306f\n\n* Andrews, G.E., Askey,R., and Roy, R. Special functions. Encyclopedia of Mathematics and its Applications, Vol. 71, Cambridge University Press, 1999, 2000, 681 pages.\n\n\u306epp.24-25\u3067\u89e3\u8aac\u3055\u308c\u3066\u3044\u308b. \u305d\u306e\u65b9\u6cd5\u306f\n\n* Liouville, J. Sur un th\u00e9or\u00e8me relatif \u00e0 l\u2019int\u00e9grale eul\u00e9rienne de seconde esp\u00e8ce. Journal de math\u00e9matiques pures et appliqu\u00e9es 1re s\u00e9rie, tome 20 (1855), p. 157-160. 
PDF\n\n\u306e\u65b9\u6cd5\u306e\u518d\u69cb\u6210\u3068\u3044\u3046\u3053\u3068\u3089\u3057\u3044.\n\n**\u89e3\u7b54\u4f8b:** (1) $t=0$ \u306e\u3068\u304d, $I(0)$ \u306e\u7a4d\u5206\u306f\u5909\u6570\u5206\u96e2\u5f62\u306b\u306a\u3063\u3066, \n\n$$\nI(0) = \\Gamma\\left(\\frac{1}{n}\\right)\\Gamma\\left(\\frac{2}{n}\\right)\\cdots\\Gamma\\left(\\frac{n-1}{n}\\right)\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u304c\u3059\u3050\u306b\u308f\u304b\u308b. $I(t)$ \u3092 $t$ \u3067\u5fae\u5206\u3057\u3066, $\\ds x_2 = \\frac{t^n}{x_3\\cdots x_n x_1}$ \u306b\u3088\u3063\u3066\u7a4d\u5206\u5909\u6570 $x_2$ \u3092\u7a4d\u5206\u5909\u6570 $x_1$ \u306b\u5909\u63db\u3059\u308b\u3068, \n\n$$\n\\begin{aligned}\nI'(t) &=\n\\int_0^\\infty\\cdots\\int_0^\\infty \ne^{-(t^n/(x_2\\cdots x_n)+x_2+\\cdots+x_n)}\n\\frac{-nt^{n-1}}{x_2\\cdots x_n}\nx_2^{-(n-1)/n}x_3^{-(n-2)/n}\\cdots x_n^{-1/n} \\,dx_2\\cdots dx_n\n\\\\ &=\n-n\\int_0^\\infty\\cdots\\int_0^\\infty \ne^{-(t^n/(x_2\\cdots x_n)+x_2+\\cdots+x_n)}\nt^{n-1}\nx_2^{-(n-1)/n-1}x_3^{-(n-2)/n-1}\\cdots x_n^{-1/n-1} \\,dx_2\\cdots dx_n\n\\\\ &=\n-n\\int_0^\\infty\\cdots\\int_0^\\infty \ne^{-(x_1+t^n/(x_3\\cdots x_n x_1)+x_3+\\cdots+x_n)}\n\\\\ & \\qquad\\qquad\\quad\\times\nt^{n-1}\n\\left(\\frac{t^n}{x_3\\cdots x_n x_1}\\right)^{-(2n-1)/n}\nx_3^{-(2n-2)/n}\\cdots x_n^{-(n+1)/n} \\frac{t^n}{x_3\\cdots x_n x_1^2}\\,dx_3\\cdots dx_n\\,dx_1\n\\\\ &=\n-n\\int_0^\\infty\\cdots\\int_0^\\infty \ne^{-(x_1+t^n/(x_3\\cdots x_n x_1)+x_3+\\cdots+x_n)}\nx_3^{-(n-1)/n}\\cdots x_n^{-2/n} x_1^{-1/n}\\,dx_3\\cdots dx_n\\,dx_1\n\\\\ &=\n-n I(t)\n\\end{aligned}\n$$\n\n\u3086\u3048\u306b $I(t)=I(0)e^{-nt}$. \u3053\u308c\u3067(1)\u304c\u793a\u3055\u308c\u305f.\n\n(2) \u5de6\u8fba\u3092LHS\u3068\u66f8\u304d, $\\ds x_1=\\frac{t^n}{x_2\\cdots x_n}$ \u3068\u304a\u304f\u3068, \n\n$$\n\\begin{aligned}\n\\text{LHS} &=\n\\int_0^\\infty\\cdots\\int_0^\\infty e^{-(x_1+\\cdots+x_n)} x_1^{s-1}x_2^{s-(n-1)/n}\\cdots x_n^{s-1/n}\\,dx_1\\cdots dx_n\n\\\\ &=\n\\int_0^\\infty\\cdots\\int_0^\\infty e^{-(t^n/(x_2\\cdots x_n)+x_2\\cdots+x_n)}\n\\left(\\frac{t^n}{x_2\\cdots x_n}\\right)^{s-1}\nx_2^{s-(n-1)/n}\\cdots x_n^{s-1/n}\n\\frac{nt^{n-1}}{x_2\\cdots x_n}\n\\,dt\\,dx_2\\cdots dx_n\n\\\\ &=\nn\\int_0^\\infty\\cdots\\int_0^\\infty e^{-(t^n/(x_2\\cdots x_n)+x_2\\cdots+x_n)}\nx_2^{-(n-1)/n}\\cdots x_n^{-1/n} t^{ns-1}\n\\,dx_2\\cdots dx_n\\,dt\n\\\\ &=\nn\\int_0^\\infty I(t) t^{ns-1}\\,dt =\nnI(0)\\int_0^\\infty e^{-nt} t^{ns-1}\\,dt =\nn^{1-ns}I(0)\\Gamma(ns).\n\\end{aligned}\n$$\n\n\u3053\u308c\u3067(2)\u304c\u793a\u3055\u308c\u305f.\n\n(3) $I(0)=(2\\pi)^{(n-1)/2}n^{-1/2}$ \u3092\u793a\u3057\u305f\u3044. \u305d\u306e\u305f\u3081\u306b\u306f $I(0)^2 = (2\\pi)^{n-1} n^{-1}$ \u3092\u793a\u305b\u3070\u3088\u3044. 
Euler's reflection formula \u3088\u308a, $\\ds \\Gamma\\left(\\frac{k}{n}\\right)\\Gamma\\left(\\frac{n-k}{n}\\right) = \\frac{\\pi}{\\sin(k\\pi/n)}$ \u306a\u306e\u3067\n\n$$\nI(0)^2 = \\prod_{k=1}^{n-1}\\frac{\\pi}{\\sin(k\\pi/n)} =\n\\frac{\\pi^{n-1}}{\\ds\\prod_{k=1}^{n-1}\\sin\\frac{k\\pi}{n}}.\n$$\n\n\u305d\u3057\u3066, \n\n$$\n\\prod_{k=1}^{n-1}\\sin\\frac{k\\pi}{n} = \n\\prod_{k=1}^{n-1}\\frac{e^{\\pi ik/n}-e^{-\\pi ik/n}}{2i} =\n\\frac{e^{\\pi i(1+2+\\cdots+(n-1))/n}}{2^{n-1}i^{n-1}}\\prod_{k=1}^{n-1}(1-e^{-2\\pi ik/n}) =\n\\frac{1}{2^{n-1}}\\prod_{k=1}^{n-1}(1-e^{-2\\pi ik/n})\n$$\n\n\u3067\u3042\u308a, \n\n$$\n\\frac{x^n-1}{x-1} = \\prod_{k=1}^{n-1}(x-e^{-2\\pi ik/n})\n$$\n\n\u306b\u304a\u3044\u3066 $x\\to 1$ \u3068\u3059\u308b\u3068 $\\ds \\prod_{k=1}^{n-1}(1-e^{-2\\pi ik/n})=n$ \u304c\u5f97\u3089\u308c\u308b. \u4ee5\u4e0a\u3092\u5408\u308f\u305b\u308b\u3068\n\n$$\nI(0)^2 = \\frac{(2\\pi)^{n-1}}{n}.\n$$\n\n\u4e21\u8fba\u306e\u5e73\u65b9\u6839\u3092\u53d6\u308c\u3070(3)\u304c\u5f97\u3089\u308c\u308b. $\\QED$\n\n### Laplace\u306e\u65b9\u6cd5\n\n**Laplace\u306e\u65b9\u6cd5:** Stirling\u306e\u516c\u5f0f\u306e\u8a3c\u660e\u306e\u89e3\u8aac\u306e\u3088\u3046\u306b\u3057\u3066\u898b\u4ed8\u304b\u308b\u5909\u6570\u5909\u63db\u306f\u3088\u308a\u4e00\u822c\u306e\u5834\u5408\u306b\u975e\u5e38\u306b\u6709\u7528\u3067\u3042\u308b. \u4ee5\u4e0b\u3067\u306f $\\int_{-\\infty}^\\infty$ \u3084 $\\int_0^\\infty$ \u3092\u5358\u306b $\\int$ \u3068\u66f8\u304f\u3053\u3068\u306b\u3057, \n\n$$\nZ_n = \\int e^{-nf(x)}g(x)\\,dx\n$$\n\n\u3068\u304a\u304f. \u305f\u3060\u3057, $f(x)$ \u306f\u5b9f\u6570\u5024\u51fd\u6570\u3067\u552f\u4e00\u3064\u306e\u6700\u5c0f\u5024 $f(x_0)$ \u3092\u6301\u3061, $x=x_0$ \u306b\u304a\u3044\u3066, \n\n$$\nf(x) = f(x_0) + \\frac{a}{2}(x-x_0)^2 + O((x-x_0)^3), \\quad a=f''(x_0) > 0\n$$\n\n\u3068Taylor\u5c55\u958b\u3055\u308c\u3066\u3044\u308b\u3068\u4eee\u5b9a\u3059\u308b\u3057, \u3055\u3089\u306b, $0$ \u4ee5\u4e0a\u306e\u5024\u3092\u6301\u3064\u5b9f\u6570\u5024\u51fd\u6570 $g(x)$ \u306f\u7a4d\u5206 $Z_n$ \u304c\u3046\u307e\u304f\u5b9a\u7fa9\u3055\u308c\u308b\u3088\u3046\u306a\u9069\u5f53\u306a\u6761\u4ef6\u3092\u6e80\u305f\u3057\u3066\u3044\u308b\u3068\u4eee\u5b9a\u3057, $x_0$ \u306e\u8fd1\u508d\u3067 $g(x)>0$ \u3092\u6e80\u305f\u3057\u3066\u3044\u308b\u3068\u4eee\u5b9a\u3059\u308b. (\u3053\u3053\u3067, $x_0$ \u306e\u8fd1\u508d\u3067 $g(x)>0$ \u304c\u6210\u7acb\u3057\u3066\u3044\u308b\u3068\u306f, \u3042\u308b $\\delta>0$ \u304c\u5b58\u5728\u3057\u3066, $|x-x_0|<\\delta$ \u306a\u3089\u3070 $g(x)>0$ \u3068\u306a\u308b\u3053\u3068\u3067\u3042\u308b.) 
\u3053\u306e\u3068\u304d, \n\n$$\nZ_n = e^{-nf(x_0)} \\int \\exp\\left(-n\\left(\\frac{a}{2}(x-x_0)^2+O((x-x_0)^3)\\right)\\right)\\;g(x)\\,dx.\n$$\n\n$x=x_0+y/\\sqrt{n}$ \u3068\u5909\u6570\u5909\u63db\u3059\u308b\u3068\n\n$$\nZ_n = \\frac{e^{-nf(x_0)}}{\\sqrt{n}}\n\\int \\exp\\left(-\\frac{a}{2}y^2+O\\left(\\frac{1}{\\sqrt{n}}\\right)\\right)\\;\ng\\left(x_0+\\frac{y}{\\sqrt{n}}\\right)\\,dy.\n$$\n\n\u305d\u3057\u3066, $n\\to\\infty$ \u3067\n\n$$\n\\int \\exp\\left(-\\frac{a}{2}y^2+O\\left(\\frac{1}{\\sqrt{n}}\\right)\\right)\\;\ng\\left(x_0+\\frac{y}{\\sqrt{n}}\\right)\\,dy \\to\n\\int \\exp\\left(-\\frac{a}{2}y^2\\right)g(x_0)\\,dy =\n\\sqrt{\\frac{2\\pi}{a}}\\;g(x_0).\n$$\n\n$a=f''(x_0)$ \u3068\u304a\u3044\u305f\u3053\u3068\u3092\u601d\u3044\u51fa\u3057\u306a\u304c\u3089, \u4ee5\u4e0a\u3092\u307e\u3068\u3081\u308b\u3068, $n\\to\\infty$ \u3067\n\n$$\nZ_n \\sim \\frac{e^{-nf(x_0)}}{\\sqrt{n}} \\sqrt{\\frac{2\\pi}{f''(x_0)}}\\;g(x_0).\n$$\n\n\u3059\u306a\u308f\u3061, \n\n$$\n-\\log Z_n = nf(x_0) + \\frac{1}{2}\\log n - \\log\\left(\\sqrt{\\frac{2\\pi}{f''(x_0)}}\\;g(x_0)\\right) + o(1).\n$$\n\n$Z_n$ \u306e $n\\to\\infty$ \u306b\u304a\u3051\u308b\u6f38\u8fd1\u6319\u52d5\u3092\u8abf\u3079\u308b\u305f\u3081\u306e\u4ee5\u4e0a\u306e\u65b9\u6cd5\u3092**Laplace\u306e\u65b9\u6cd5**(Laplace's method)\u3068\u547c\u3076. $\\QED$\n\n**\u554f\u984c(Stirling\u306e\u516c\u5f0f):** $\\ds n! = \\int_0^\\infty e^{-t}t^n\\,dt$ \u306bLapalce\u306e\u65b9\u6cd5\u3092\u9069\u7528\u3057\u3066, Stirling\u306e\u516c\u5f0f\u3092\u5c0e\u51fa\u305b\u3088.\n\n**\u89e3\u7b54\u4f8b:** \u7a4d\u5206\u5909\u6570\u3092 $t=nx$ \u3067\u7f6e\u63db\u3059\u308b\u3068,\n\n$$\nn! = \\int_0^\\infty e^{-t+n\\log t}\\,dt = n^{n+1} \\int_0^\\infty e^{-n(x-\\log x)}\\,dx.\n$$\n\n$f(x)=x-\\log x$, $g(x)=1$ \u3068\u304a\u304f. $f'(x)=1-1/x$, $f''(x)=1/x^2$ \u306a\u306e\u3067 $f(x)$ \u306f $x_0=1$ \u3067\u6700\u5c0f\u306b\u306a\u308a, $f(1)=f''(1)=1$ \u3068\u306a\u308b. \u3086\u3048\u306b, \u305d\u308c\u3089\u306bLaplace\u306e\u65b9\u6cd5\u3092\u9069\u7528\u3059\u308b\u3068,\n\n$$\nn! \\sim n^{n+1}\\frac{e^{-n}}{\\sqrt{n}}\\sqrt{2\\pi} = n^n e^{-n}\\sqrt{2\\pi n}.\n\\qquad \\QED\n$$\n\nLaplace\u306e\u65b9\u6cd5\u306f\u672c\u8cea\u7684\u306bGauss\u7a4d\u5206\u306e\u5fdc\u7528\u3067\u3042\u308b.\n\nGauss\u7a4d\u5206\u3092\u30ac\u30f3\u30de\u51fd\u6570\u306b\u7f6e\u304d\u63db\u3048\u308b\u3053\u3068\u306b\u3088\u3063\u3066\u5f97\u3089\u308c\u308b\u4e00\u822c\u5316\u3055\u308c\u305fLaplace\u306e\u65b9\u6cd5\u306e\u7d20\u63cf\u306b\u3064\u3044\u3066\u306f\n\n* \u9ed2\u6728\u7384, \u4e00\u822c\u5316\u3055\u308c\u305fLaplace\u306e\u65b9\u6cd5\n\n\u3092\u53c2\u7167\u305b\u3088. \u4e00\u822c\u5316\u3055\u308c\u305fLaplace\u306e\u65b9\u6cd5\u306f\n\n* \u6e21\u8fba\u6f84\u592b, \u30d9\u30a4\u30ba\u7d71\u8a08\u306e\u7406\u8ad6\u3068\u65b9\u6cd5, 2012\n\n\u306e\u7b2c4\u7ae0\u306e\u4e3b\u7d50\u679c\u3067\u3042\u308b\u30d9\u30a4\u30ba\u7d71\u8a08\u306b\u304a\u3051\u308b\u81ea\u7531\u30a8\u30cd\u30eb\u30ae\u30fc\u306e\n\n$$\nF_n = -\\log Z_n = nS + \\lambda \\log n - (m-1)\\log\\log n + O(1)\n$$\n\n\u306e\u5f62\u306e\u6f38\u8fd1\u6319\u52d5\u3092\u5c0e\u304f\u8b70\u8ad6\u3092\u521d\u7b49\u5316\u3059\u308b\u305f\u3081\u306b\u5f79\u306b\u7acb\u3064. 
\u7279\u7570\u70b9\u89e3\u6d88\u306f\u672c\u8cea\u7684\u306b\u4e0d\u53ef\u907f\u3060\u304c, \u3053\u306e\u5f62\u306e\u6f38\u8fd1\u6319\u52d5\u3060\u3051\u304c\u6b32\u3057\u3044\u306e\u3067\u3042\u308c\u3070\u30bc\u30fc\u30bf\u51fd\u6570\u3092\u7528\u3044\u305f\u7cbe\u5bc6\u306a\u8b70\u8ad6\u306f\u5fc5\u8981\u306a\u3044.\n\n### Laplace\u306e\u65b9\u6cd5\u306e\u5f31\u5f62\n\n**Laplace\u306e\u65b9\u6cd5\u306e\u5f31\u5f62:** Laplace\u306e\u65b9\u6cd5\u304c\u4f7f\u3048\u308b\u72b6\u6cc1\u3067\u306f, \n\n$$\nZ_n = \\int e^{-nf(x)}g(x)\\,dx\n$$\n\n\u306b\u3064\u3044\u3066, \u7279\u306b, $n\\to\\infty$ \u306e\u3068\u304d, \n\n$$\n-\\frac{1}{n}\\log Z_n \\to f(x_0) = \\min f(x), \\quad\\text{i.e.}\\quad\nZ_n = \\int e^{-nf(x)}g(x)\\,dx = \\exp\\left(-n\\min f(x)+o(n)\\right)\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b. \u3053\u306e\u7d50\u8ad6\u3092**Laplace\u306e\u65b9\u6cd5\u306e\u5f31\u5f62**\u3068\u547c\u3076\u3053\u3068\u306b\u3059\u308b. Laplace\u306e\u65b9\u6cd5\u306e\u3088\u3046\u306a\u7cbe\u5bc6\u306a\u5f62\u3067\u306a\u304f\u3066\u3082, \u3053\u3061\u3089\u306e\u5f31\u5f62\u3060\u3051\u3067\u7528\u304c\u8db3\u308a\u308b\u3053\u3068\u306f\u7d50\u69cb\u591a\u3044. $\\QED$\n\n**\u554f\u984c(Laplace\u306e\u65b9\u6cd5\u306e\u5f31\u5f62\u304c\u660e\u77ad\u306b\u6210\u7acb\u3059\u308b\u5834\u5408):** \u9589\u533a\u9593 $[a,b]$ \u4e0a\u306e\u5b9f\u6570\u5024\u9023\u7d9a\u51fd\u6570 $f(x)$ \u3068 $0$ \u4ee5\u4e0a\u306e\u5024\u3092\u6301\u3064\u5b9f\u6570\u5024\u51fd\u6570 $g(x)$ \u306f, $\\ds f(x_0) = \\min_{a\\leqq x\\leqq b} f(x)$ \u3092\u6e80\u305f\u3059\u3042\u308b $x_0\\in [a,b]$ \u306e\u8fd1\u508d\u3067 $g(x)>0$ \u3092\u6e80\u305f\u3057\u3066\u3044\u308b\u3068\u4eee\u5b9a\u3059\u308b. \u3053\u306e\u3068\u304d, $n\\to\\infty$ \u306b\u304a\u3044\u3066, \n\n$$\n\\int_a^b e^{-nf(x)}g(x)\\,dx = \\exp\\left(-n\\min_{a\\leqq x\\leqq b} f(x) + o(n)\\right)\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b\u3053\u3068\u3092\u793a\u305b. \u3059\u306a\u308f\u3061, $n\\to\\infty$ \u306e\u3068\u304d, \n\n$$\n-\\frac{1}{n}\\log\\int_a^b e^{-nf(x)}g(x)\\,dx \\to \\min_{a\\leqq x\\leqq b} f(x)\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b\u3053\u3068\u3092\u793a\u305b.\n\n**\u89e3\u7b54\u4f8b:** $\\ds f_0(x) = f(x)-\\min_{a\\leqq \\xi\\leqq b}f(\\xi)$ \u3068\u304a\u304f\u3068, $f_0(x)$ \u306e\u6700\u5c0f\u5024\u306f $0$ \u306b\u306a\u308a, \n\n$$\n-\\frac{1}{n}\\log\\int_a^b e^{-nf(x)}g(x)\\,dx = \n\\min_{a\\leqq x\\leqq b}f(x) - \\frac{1}{n}\\log\\int_a^b e^{-nf_0(x)}g(x)\\,dx\n$$\n\n\u306a\u306e\u3067, $n\\to\\infty$ \u306e\u3068\u304d\n\n$$\n-\\frac{1}{n}\\log\\int_a^b e^{-nf_0(x)}g(x)\\,dx \\to 0\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u3092\u793a\u305b\u3070\u3088\u3044. \n\n$\\eps > 0$ \u3092\u4efb\u610f\u306b\u53d6\u3063\u3066\u56fa\u5b9a\u3057, $A = \\{\\, x\\in[a,b]\\mid f_0(x)\\leqq\\eps\\,\\}$ \u3068\u304a\u304d, \u305d\u306e $[a,b]$ \u3067\u306e\u88dc\u96c6\u5408\u3092 $A^c$ \u3068\u66f8\u304d, \n\n$$\nZ_{0,n} = \\int_a^b e^{-nf_0(x)}g(x)\\,dx = I_n + J_n, \\quad\nI_n = \\int_A e^{-nf_0(x)}g(x)\\,dx, \\quad\nJ_n = \\int_{A^c} e^{-nf_0(x)}g(x)\\,dx.\n$$\n\n\u3068\u304a\u304f. 
$n\\to\\infty$ \u306e\u3068\u304d $-\\frac{1}{n}\\log Z_{0,n} \\to 0$ \u3068\u306a\u308b\u3053\u3068\u3092\u793a\u3057\u305f\u3044.\n\n$x\\in A$ \u306b\u3064\u3044\u3066 $\\eps\\geqq f_0(x)\\geqq 0$ \u306a\u306e\u3067, $e^{-n\\eps}\\leqq e^{-nf_0(x)}\\leqq 1$ \u3068\u306a\u308b\u306e\u3067, \n\n$$\ne^{-n\\eps}\\int_A g(x)\\,dx\\leqq I_n = \\int_A e^{-nf_0(x)}g(x)\\,dx \\leqq \\int_A g(x)\\,dx.\n$$\n\n$\\ds f(x_0) = \\min_{a\\leqq x\\leqq b} f(x)$ \u3092\u6e80\u305f\u3059\u3042\u308b $x_0\\in [a,b]$ \u306e\u8fd1\u508d\u3067 $g(x)>0$ \u3068\u306a\u3063\u3066\u3044\u308b\u3068\u4eee\u5b9a\u3057\u305f\u3053\u3068\u3088\u308a, $\\ds \\int_A g(x)\\,dx > 0$ \u3068\u306a\u308b\u3053\u3068\u306b\u3082\u6ce8\u610f\u305b\u3088.\n\n$x\\in A^c$ \u306b\u3064\u3044\u3066 $f_0(x)>\\eps$ \u306a\u306e\u3067, $0 < e^{-nf_0(x)}0$ \u306f\u5e7e\u3089\u3067\u3082\u5c0f\u3055\u304f\u3067\u304d\u308b\u306e\u3067, \u4e0b\u6975\u9650\u3068\u4e0a\u6975\u9650\u304c\u7b49\u3057\u304f\u306a\u308b\u3053\u3068\u304c\u308f\u304b\u308a, \n\n$$\n\\lim_{n\\to\\infty}\\left(-\\frac{1}{n}\\log Z_{0,n}\\right) = 0\n$$\n\n\u304c\u5f97\u3089\u308c\u308b. $\\QED$\n\n\n```julia\n\n```\n", "meta": {"hexsha": "cd2f608e88392be367c1c991a41d7df3d0bda5e4", "size": 760860, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Old_Ver_for_Julia_v0.6/10 Gauss, Gamma, Beta.ipynb", "max_stars_repo_name": "genkuroki/Calculus", "max_stars_repo_head_hexsha": "424ef53bf493242ce48c58ba39e43b8e601eb403", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2018-06-22T13:24:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-17T00:04:57.000Z", "max_issues_repo_path": "Old_Ver_for_Julia_v0.6/10 Gauss, Gamma, Beta.ipynb", "max_issues_repo_name": "genkuroki/Calculus", "max_issues_repo_head_hexsha": "424ef53bf493242ce48c58ba39e43b8e601eb403", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Old_Ver_for_Julia_v0.6/10 Gauss, Gamma, Beta.ipynb", "max_forks_repo_name": "genkuroki/Calculus", "max_forks_repo_head_hexsha": "424ef53bf493242ce48c58ba39e43b8e601eb403", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-12-28T19:57:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-06T23:23:46.000Z", "avg_line_length": 82.6034089675, "max_line_length": 7225, "alphanum_fraction": 0.6139802329, "converted": true, "num_tokens": 46219, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4649015713733885, "lm_q2_score": 0.20689404148054266, "lm_q1q2_score": 0.0961853649920953}} {"text": "# Homework 5\n## Due Date: Tuesday, October 3rd at 11:59 PM\n\n# Problem 1\nWe discussed documentation and testing in lecture and also briefly touched on code coverage. You must write tests for your code for your final project (and in life). There is a nice way to automate the testing process called continuous integration (CI).\n\nThis problem will walk you through the basics of CI and show you how to get up and running with some CI software.\n\n### Continuous Integration\nThe idea behind continuous integration is to automate away the testing of your code.\n\nWe will be using it for our projects.\n\nThe basic workflow goes something like this:\n\n1. You work on your part of the code in your own branch or fork\n2. 
On every commit you make and push to GitHub, your code is automatically tested on a fresh machine on Travis CI. This ensures that there are no specific dependencies on the structure of your machine that your code needs to run and also ensures that your changes are sane\n3. Now you submit a pull request to `master` in the main repo (the one you're hoping to contribute to). The repo manager creates a branch off `master`. \n4. This branch is also set to run tests on Travis. If all tests pass, then the pull request is accepted and your code becomes part of master.\n\nWe use GitHub to integrate our roots library with Travis CI and Coveralls. Note that this is not the only workflow people use. Google git..github..workflow and feel free to choose another one for your group.\n\n### Part 1: Create a repo\nCreate a public GitHub repo called `cs207test` and clone it to your local machine.\n\n**Note:** No need to do this in Jupyter.\n\n### Part 2: Create a roots library\nUse the example from lecture 7 to create a file called `roots.py`, which contains the `quad_roots` and `linear_roots` functions (along with their documentation).\n\nAlso create a file called `test_roots.py`, which contains the tests from lecture.\n\nAll of these files should be in your newly created `cs207test` repo. **Don't push yet!!!**\n\n### Part 3: Create an account on Travis CI and Start Building\n\n#### Part A:\nCreate an account on Travis CI and set your `cs207test` repo up for continuous integration once this repo can be seen on Travis.\n\n#### Part B:\nCreate an instruction to Travis to make sure that\n\n1. python is installed\n2. its python 3.5\n3. pytest is installed\n\nThe file should be called `.travis.yml` and should have the contents:\n```yml\nlanguage: python\npython:\n - \"3.5\"\nbefore_install:\n - pip install pytest pytest-cov\nscript:\n - pytest\n```\n\nYou should also create a configuration file called `setup.cfg`:\n```cfg\n[tool:pytest]\naddopts = --doctest-modules --cov-report term-missing --cov roots\n```\n\n#### Part C:\nPush the new changes to your `cs207test` repo.\n\nAt this point you should be able to see your build on Travis and if and how your tests pass.\n\n### Part 4: Coveralls Integration\nIn class, we also discussed code coverage. Just like Travis CI runs tests automatically for you, Coveralls automatically checks your code coverage. One minor drawback of Coveralls is that it can only work with public GitHub accounts. However, this isn't too big of a problem since your projects will be public.\n\n#### Part A:\nCreate an account on [`Coveralls`](https://coveralls.zendesk.com/hc/en-us), connect your GitHub, and turn Coveralls integration on.\n\n#### Part B:\nUpdate your the `.travis.yml` file as follows:\n```yml\nlanguage: python\npython:\n - \"3.5\"\nbefore_install:\n - pip install pytest pytest-cov\n - pip install coveralls\nscript:\n - py.test\nafter_success:\n - coveralls\n```\n\nBe sure to push the latest changes to your new repo.\n\n### Part 5: Update README.md in repo\nYou can have your GitHub repo reflect the build status on Travis CI and the code coverage status from Coveralls. To do this, you should modify the `README.md` file in your repo to include some badges. Put the following at the top of your `README.md` file:\n\n```\n[](https://travis-ci.org/dsondak/cs207testing.svg?branch=master)\n\n[](https://coveralls.io/github/dsondak/cs207testing?branch=master)\n```\n\nOf course, you need to make sure that the links are to your repo and not mine. 
You can find embed code on the Coveralls and Travis CI sites.\n\n---\n\n# Problem 2\nWrite a Python module for reaction rate coefficients. Your module should include functions for constant reaction rate coefficients, Arrhenius reaction rate coefficients, and modified Arrhenius reaction rate coefficients. Here are their mathematical forms:\n\\begin{align}\n &k_{\\textrm{const}} = k \\tag{constant} \\\\\n &k_{\\textrm{arr}} = A \\exp\\left(-\\frac{E}{RT}\\right) \\tag{Arrhenius} \\\\\n &k_{\\textrm{mod arr}} = A T^{b} \\exp\\left(-\\frac{E}{RT}\\right) \\tag{Modified Arrhenius}\n\\end{align}\n\nTest your functions with the following paramters: $A = 10^7$, $b=0.5$, $E=10^3$. Use $T=10^2$.\n\nA few additional comments / suggestions:\n* The Arrhenius prefactor $A$ is strictly positive\n* The modified Arrhenius parameter $b$ must be real \n* $R = 8.314$ is the ideal gas constant. It should never be changed (except to convert units)\n* The temperature $T$ must be positive (assuming a Kelvin scale)\n* You may assume that units are consistent\n* Document each function!\n* You might want to check for overflows and underflows\n\n**Recall:** A Python module is a `.py` file which is not part of the main execution script. The module contains several functions which may be related to each other (like in this problem). Your module will be importable via the execution script. For example, suppose you have called your module `reaction_coeffs.py` and your execution script `kinetics.py`. Inside of `kinetics.py` you will write something like:\n```python\nimport reaction_coeffs\n# Some code to do some things\n# :\n# :\n# :\n# Time to use a reaction rate coefficient:\nreaction_coeffs.const() # Need appropriate arguments, etc\n# Continue on...\n# :\n# :\n# :\n```\nBe sure to include your module in the same directory as your execution script.\n\n\n```python\n%%file reaction_coeffs.py\nimport numpy as np\nR=8.314\ndef const(k):\n return k\n\ndef arr(A, E, T):\n #Check that A,T,E are numbers\n if ((type(A) != int and type(A) != float) or \n (type(T) != int and type(T) != float) or\n (type(E) != int and type(E) != float)):\n raise TypeError(\"All arguments must be numbers!\")\n \n elif (T<0 or A<0): # A & T must be positive\n raise ValueError(\"Temperature and Arrhenius prefactor must be positive!\")\n \n else:\n #Calculate karr\n karr = A*(np.exp(-E/(R*T)))\n return karr\n \ndef mod_arr(A,E,b,T):\n #Check that A,T,E are numbers\n if ((type(A) != int and type(A) != float) or \n (type(b) != int and type(b) != float) or\n (type(E) != int and type(E) != float) or\n (type(T) != int and type(T) != float)):\n raise TypeError(\"All arguments must be numbers!\")\n \n elif (T<0 or A<0): # A & T must be positive\n raise ValueError(\"Temperature and Arrhenius prefactor must be positive!\")\n \n else:\n #Calculate karr\n karr = A*(T**b)*(np.exp(-E/(R*T)))\n return karr\n```\n\n Overwriting reaction_coeffs.py\n\n\n\n```python\n%%file kinetics.py\nimport reaction_coeffs\n\n# Time to use a reaction rate coefficient:\nreaction_coeffs.const(107)\nreaction_coeffs.arr(107,103,102)\nreaction_coeffs.mod_arr(107,103,0.5,102)\n```\n\n Overwriting kinetics.py\n\n\n\n```python\n%%file kinetics_tests.py\nimport reaction_coeffs\n#Test k_const\ndef test_const():\n assert reaction_coeffs.const(107) == 107\n \n#Test k_arr\ndef test_arr():\n assert reaction_coeffs.arr(107,103,102) == 94.762198593430469\n\ndef test_arr_values1():\n try:\n reaction_coeffs.arr(-1,103,102)\n except ValueError as err:\n assert(type(err) == ValueError)\n \ndef 
test_arr_values2():\n try:\n reaction_coeffs.arr(107,103,-2)\n except ValueError as err:\n assert(type(err) == ValueError)\n\ndef test_arr_types1():\n try:\n reaction_coeffs.arr('107',103,102)\n except TypeError as err:\n assert(type(err) == TypeError)\n \ndef test_arr_types2():\n try:\n reaction_coeffs.arr(107,'103',102)\n except TypeError as err:\n assert(type(err) == TypeError)\n\ndef test_arr_types3():\n try:\n reaction_coeffs.arr(107,103,[102])\n except TypeError as err:\n assert(type(err) == TypeError) \n \n#Test mod_arr\ndef test_mod_arr():\n assert reaction_coeffs.mod_arr(107,103,0.5,102) == 957.05129266439894\n \ndef test_mod_arr_values1():\n try:\n reaction_coeffs.mod_arr(-1,103,0.5,102)\n except ValueError as err:\n assert(type(err) == ValueError)\n\ndef test_mod_arr_values2():\n try:\n reaction_coeffs.mod_arr(107,103,0.5,-2)\n except ValueError as err:\n assert(type(err) == ValueError)\n \ndef test_mod_arr_types1():\n try:\n reaction_coeffs.mod_arr('107',103,0.5,102)\n except TypeError as err:\n assert(type(err) == TypeError)\n \ndef test_mod_arr_types2():\n try:\n reaction_coeffs.mod_arr(107,'103',0.5,102)\n except TypeError as err:\n assert(type(err) == TypeError)\n\ndef test_mod_arr_types3():\n try:\n reaction_coeffs.mod_arr(107,103,[0.5],102)\n except TypeError as err:\n assert(type(err) == TypeError)\n\ndef test_mod_arr_types4():\n try:\n reaction_coeffs.mod_arr(107,103,0.5,False)\n except TypeError as err:\n assert(type(err) == TypeError)\n \n\ntest_const()\ntest_mod_arr()\ntest_arr()\ntest_arr_values1()\ntest_arr_values2()\ntest_arr_types1()\ntest_arr_types2()\ntest_arr_types3()\ntest_mod_arr_values1()\ntest_mod_arr_values2()\ntest_mod_arr_types1()\ntest_mod_arr_types2()\ntest_mod_arr_types3()\ntest_mod_arr_types4()\n```\n\n Overwriting kinetics_tests.py\n\n\n---\n\n# Problem 3\nWrite a function that returns the **progress rate** for a reaction of the following form:\n\\begin{align}\n \\nu_{A} A + \\nu_{B} B \\longrightarrow \\nu_{C} C.\n\\end{align}\nOrder your concentration vector so that \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}\n\\end{align}\n\nTest your function with\n\\begin{align}\n \\nu_{i}^{\\prime} = \n \\begin{bmatrix}\n 2.0 \\\\\n 1.0 \\\\\n 0.0\n \\end{bmatrix}\n \\qquad \n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\ \n 2.0 \\\\ \n 3.0\n \\end{bmatrix}\n \\qquad \n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. 
You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n\n```python\ndef progress_rate(v1,x,k):\n \"\"\"Returns the progress rate of a reaction of the form: v1A + v2B -> v3C.\n \n INPUTS\n =======\n v1: list, \n stoichiometric coefficient of reactants in a reaction(s)\n x: float,\n Concentration of A, B, C\n k: float, \n reaction rate coefficient\n \n RETURNS\n ========\n progress rate: a list of the progress rate of each reaction\n \n EXAMPLES\n =========\n >>> progress_rate([2.0,1.0,0.0],[1.0,2.0,3.0],10)\n [20.0]\n \"\"\"\n #Check that x and v1 are lists\n if (type(v1) != list or type(x) != list):\n raise TypeError(\"v' & x must be passed in as a list\")\n #Check that x,and k are numbers/list of numbers\n if (any(type(i) != int and type(i) != float for i in x)):\n raise TypeError(\"All elements in x must be numbers!\")\n elif (type(k) != int and type(k) != float and type(k) != list):\n raise TypeError(\"k must be a numbers!\")\n else:\n progress_rates = []\n #check for multiple reactions\n if(type(v1[0]) == list):\n for v in v1:\n for reactant in v:\n if(type(reactant) != int and type(reactant) != float):\n raise TypeError(\"All elements in v1 must be numbers!\")\n #Calculate the progress rate of each reaction\n reactions = len(v1[0])\n for j in range(reactions):\n if (type(k)== list):\n progress_rate = k[j]\n else:\n progress_rate = k\n for i in range(len(v1)):\n progress_rate = progress_rate*(x[i]**v1[i][j])\n progress_rates.append(progress_rate) \n else:\n #Check types of V1\n if (any(type(i) != int and type(i) != float for i in v1)):\n raise TypeError(\"All elements in v1 must be numbers!\")\n #Calculate the progress rate of each reaction\n progress_rate = k\n for i in range(len(v1)):\n progress_rate = progress_rate*(x[i]**v1[i])\n progress_rates.append(progress_rate)\n return progress_rates\n```\n\n\n```python\nprogress_rate([2.0,1.0,0.0],[1.0,2.0,3.0],10)\n```\n\n\n\n\n [20.0]\n\n\n\n---\n# Problem 4\nWrite a function that returns the **progress rate** for a system of reactions of the following form:\n\\begin{align}\n \\nu_{11}^{\\prime} A + \\nu_{21}^{\\prime} B \\longrightarrow \\nu_{31}^{\\prime\\prime} C \\\\\n \\nu_{12}^{\\prime} A + \\nu_{32}^{\\prime} C \\longrightarrow \\nu_{22}^{\\prime\\prime} B + \\nu_{32}^{\\prime\\prime} C\n\\end{align}\nNote that $\\nu_{ij}^{\\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\\nu_{ij}^{\\prime\\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. Therefore, in this convention, I have ordered my vector of concentrations as \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}.\n\\end{align}\n\nTest your function with \n\\begin{align}\n \\nu_{ij}^{\\prime} = \n \\begin{bmatrix}\n 1.0 & 2.0 \\\\\n 2.0 & 0.0 \\\\\n 0.0 & 2.0\n \\end{bmatrix}\n \\qquad\n \\nu_{ij}^{\\prime\\prime} = \n \\begin{bmatrix}\n 0.0 & 0.0 \\\\\n 0.0 & 1.0 \\\\\n 2.0 & 1.0\n \\end{bmatrix}\n \\qquad\n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\\n 2.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad\n k_{j} = 10, \\quad j=1,2.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. 
You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n\n```python\nprogress_rate([[1.0,2.0],[2.0,0.0],[0.0,2.0]],[1.0,2.0,1.0],10)\n```\n\n\n\n\n [40.0, 10.0]\n\n\n\n---\n# Problem 5\nWrite a function that returns the **reaction rate** of a system of irreversible reactions of the form:\n\\begin{align}\n \\nu_{11}^{\\prime} A + \\nu_{21}^{\\prime} B &\\longrightarrow \\nu_{31}^{\\prime\\prime} C \\\\\n \\nu_{32}^{\\prime} C &\\longrightarrow \\nu_{12}^{\\prime\\prime} A + \\nu_{22}^{\\prime\\prime} B\n\\end{align}\n\nOnce again $\\nu_{ij}^{\\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\\nu_{ij}^{\\prime\\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. In this convention, I have ordered my vector of concentrations as \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}\n\\end{align}\n\nTest your function with \n\\begin{align}\n \\nu_{ij}^{\\prime} = \n \\begin{bmatrix}\n 1.0 & 0.0 \\\\\n 2.0 & 0.0 \\\\\n 0.0 & 2.0\n \\end{bmatrix}\n \\qquad\n \\nu_{ij}^{\\prime\\prime} = \n \\begin{bmatrix}\n 0.0 & 1.0 \\\\\n 0.0 & 2.0 \\\\\n 1.0 & 0.0\n \\end{bmatrix}\n \\qquad\n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\\n 2.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad\n k_{j} = 10, \\quad j = 1,2.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n\n```python\ndef reaction_rate(v1,v2,x,k):\n \"\"\"Returns the reaction rate of a reaction of the form: v1A + v2B -> v3C.\n \n INPUTS\n =======\n v1: 2D list, \n stoichiometric coefficient of reactants in a reaction(s)\n v2: 2D list, optional, default value is 2\n stoichiometric coefficient of products in a reaction(s)\n x: float,\n Concentration of A, B, C\n k: float, \n reaction rate coefficient\n \n RETURNS\n ========\n reaction rate: a list of the reaction rate of each reactant\n \n EXAMPLES\n =========\n >>> reaction_rate([[1.0,0.0],[2.0,0.0],[0.0,2.0]],[[0.0,1.0],[0.0,2.0],[1.0,0.0]],[1.0,2.0,1.0],10)\n [-30.0, -60.0, 20.0]\n \"\"\"\n w = progress_rate(v1,x,k)\n #Check that x,v2 and v1 are lists\n if (type(v2) != list or type(x) != list or type(v1)!= list):\n raise TypeError(\"v' & x must be passed in as a list\")\n if (any(type(i) != int and type(i) != float for i in x)):\n raise TypeError(\"All arguments must be numbers!\")\n elif (type(k) != int and type(k) != float and type(k) !=list):\n raise TypeError(\"All arguments must be numbers!\")\n else:\n reaction_rates = []\n for i in (range(len(v2))):\n reaction_rate = 0\n for j in (range(len(v2[0]))):\n reaction_rate = reaction_rate + (v2[i][j]-v1[i][j])*w[j]\n reaction_rates.append(reaction_rate)\n return reaction_rates\n```\n\n\n```python\nreaction_rate([[1.0,0.0],[2.0,0.0],[0.0,2.0]],[[0.0,1.0],[0.0,2.0],[1.0,0.0]],[1.0,2.0,1.0],10)\n```\n\n\n\n\n [-30.0, -60.0, 20.0]\n\n\n\n---\n# Problem 6\nPut parts 3, 4, and 5 in a module called `chemkin`.\n\nNext, pretend you're a client who needs to compute the reaction rates at three different temperatures ($T = \\left\\{750, 1500, 2500\\right\\}$) of the following system of irreversible reactions:\n\\begin{align}\n 2H_{2} + O_{2} \\longrightarrow 2OH + H_{2} \\\\\n OH + HO_{2} \\longrightarrow H_{2}O + O_{2} \\\\\n H_{2}O + O_{2} \\longrightarrow HO_{2} + OH\n\\end{align}\n\nThe client 
also happens to know that reaction 1 is a modified Arrhenius reaction with $A_{1} = 10^{8}$, $b_{1} = 0.5$, $E_{1} = 5\\times 10^{4}$, reaction 2 has a constant reaction rate parameter $k = 10^{4}$, and reaction 3 is an Arrhenius reaction with $A_{3} = 10^{7}$ and $E_{3} = 10^{4}$.\n\nYou should write a script that imports your `chemkin` module and returns the reaction rates of the species at each temperature of interest given the following species concentrations:\n\n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n H_{2} \\\\\n O_{2} \\\\\n OH \\\\\n HO_{2} \\\\\n H_{2}O\n \\end{bmatrix} = \n \\begin{bmatrix}\n 2.0 \\\\\n 1.0 \\\\\n 0.5 \\\\\n 1.0 \\\\\n 1.0\n \\end{bmatrix}\n\\end{align}\n\nYou may assume that these are elementary reactions.\n\n\n```python\n%%file chemkin.py\n\ndef progress_rate(v1,x,k):\n \"\"\"Returns the progress rate of a reaction of the form: v1A + v2B -> v3C.\n \n INPUTS\n =======\n v1: list, \n stoichiometric coefficient of reactants in a reaction(s)\n x: float,\n Concentration of A, B, C\n k: float, \n reaction rate coefficient\n \n RETURNS\n ========\n progress rate: a list of the progress rate of each reaction\n \n EXAMPLES\n =========\n >>> progress_rate([2.0,1.0,0.0],[1.0,2.0,3.0],10)\n [20.0]\n \"\"\"\n #Check that x and v1 are lists\n if (type(v1) != list or type(x) != list):\n raise TypeError(\"v' & x must be passed in as a list\")\n #Check that x,and k are numbers/list of numbers\n if (any(type(i) != int and type(i) != float for i in x)):\n raise TypeError(\"All elements in x must be numbers!\")\n elif (type(k) != int and type(k) != float and type(k) != list):\n raise TypeError(\"k must be a numbers or list!\")\n else:\n progress_rates = []\n #check for multiple reactions\n if(type(v1[0]) == list):\n for v in v1:\n for reactant in v:\n if(type(reactant) != int and type(reactant) != float):\n raise TypeError(\"All elements in v1 must be numbers!\")\n #Calculate the progress rate of each reaction\n reactions = len(v1[0])\n for j in range(reactions):\n if (type(k) == list):\n progress_rate = k[j]\n else:\n progress_rate = k\n for i in range(len(v1)):\n progress_rate = progress_rate*(x[i]**v1[i][j])\n progress_rates.append(progress_rate) \n else:\n #Check types of V1\n if (any(type(i) != int and type(i) != float for i in v1)):\n raise TypeError(\"All elements in v1 must be numbers!\")\n #Calculate the progress rate of each reaction\n progress_rate = k\n for i in range(len(v1)):\n progress_rate = progress_rate*(x[i]**v1[i])\n progress_rates.append(progress_rate)\n return progress_rates\n \ndef reaction_rate(v1,v2,x,k):\n \"\"\"Returns the reaction rate of a reaction of the form: v1A + v2B -> v3C.\n \n INPUTS\n =======\n v1: 2D list, \n stoichiometric coefficient of reactants in a reaction(s)\n v2: 2D list, optional, default value is 2\n stoichiometric coefficient of products in a reaction(s)\n x: float,\n Concentration of A, B, C\n k: float, \n reaction rate coefficient\n \n RETURNS\n ========\n reaction rate: a list of the reaction rate of each reactant\n \n EXAMPLES\n =========\n >>> reaction_rate([[1.0,0.0],[2.0,0.0],[0.0,2.0]],[[0.0,1.0],[0.0,2.0],[1.0,0.0]],[1.0,2.0,1.0],10)\n [-30.0, -60.0, 20.0]\n \"\"\"\n w = progress_rate(v1,x,k)\n #Check that x,v2 and v1 are lists\n if (type(v2) != list or type(x) != list or type(v1)!= list):\n raise TypeError(\"v' & x must be passed in as a list\")\n if (any(type(i) != int and type(i) != float for i in x)):\n raise TypeError(\"All arguments must be numbers!\")\n elif (type(k) != int and type(k) != float and 
type(k) !=list):\n raise TypeError(\"All arguments must be numbers!\")\n else:\n reaction_rates = []\n for i in (range(len(v2))):\n reaction_rate = 0\n for j in (range(len(v2[0]))):\n reaction_rate = reaction_rate + (v2[i][j]-v1[i][j])*w[j]\n reaction_rates.append(reaction_rate)\n return reaction_rates\n```\n\n Writing chemkin.py\n\n\n\n```python\nimport chemkin\nimport reaction_coeffs\n\nv1 = [[2.0,0.0,0.0],[1.0,0.0,1.0],[0.0,1.0,0.0],[0.0,1.0,0.0],[0.0,0.0,1.0]]\nv2 = [[1.0,0.0,0.0],[0.0,1.0,0.0],[2.0,0.0,1.0],[0.0,0.0,1.0],[0.0,1.0,0.0]]\nx = [2.0,1.0,0.5,1.0,1.0]\n\nk1T1 = reaction_coeffs.mod_arr((10**7),(5*(10**4)),0.5,750)\nk2T1 = reaction_coeffs.const(10**4)\nk3T1 = reaction_coeffs.arr((10**8),(10**4),750)\nk1 = [k1T1,k2T1,k3T1]\n\nk2T2 = reaction_coeffs.const(10**4)\nk3T2 = reaction_coeffs.arr((10**7),(10**4),1500)\nk1T2 = reaction_coeffs.mod_arr((10**8),(5*(10**4)),0.5,1500)\nk2 = [k1T2,k2T2,k3T2]\n\nk2T3 = reaction_coeffs.const(10**4)\nk3T3 = reaction_coeffs.arr((10**7),(10**4),2500)\nk1T3 = reaction_coeffs.mod_arr((10**8),(5*(10**4)),0.5,2500)\nk3 = [k1T3,k2T3,k3T3]\n\nprint([chemkin.reaction_rate(v1,v2,x,k1),chemkin.reaction_rate(v1,v2,x,k2), chemkin.reaction_rate(v1,v2,x,k3)])\n```\n\n [[-360707.78728040616, -20470380.895447683, 20831088.682728089, 20109673.108167276, -20109673.108167276], [-281117620.76487017, -285597559.23804539, 566715180.0029155, 4479938.4731752202, -4479938.4731752202], [-1804261425.9632478, -1810437356.938905, 3614698782.902153, 6175930.9756572321, -6175930.9756572321]]\n\n\n\n```python\n%%file test.py\nimport chemkin\ndef test_progress_rate():\n assert chemkin.progress_rate([3.0,1.0,1.0],[1.0,2.0,3.0],10) == [60.0]\ntest_progress_rate()\n\ndef test_reaction_rate():\n assert chemkin.reaction_rate([[3.0,1.0,1.0]],[[1.0,2.0,1.0]],[1.0,2.0,3.0],10) == [-10.0]\ntest_reaction_rate()\n```\n\n Overwriting test.py\n\n\n\n```python\n!pytest --doctest-modules --cov-report term-missing --cov\n```\n\n \u001b[1m============================= test session starts ==============================\u001b[0m\n platform darwin -- Python 3.6.1, pytest-3.2.1, py-1.4.33, pluggy-0.4.0\n rootdir: /Users/riddhishah/Documents/cs207/cs207_riddhi_shah/homeworks/HW5, inifile:\n plugins: cov-2.3.1\n collected 0 items \u001b[0m\u001b[1m\u001b[1m\n \n \n ---------- coverage: platform darwin, python 3.6.1-final-0 -----------\n Name Stmts Miss Cover Missing\n --------------------------------------------------\n chemkin.py 43 9 79% 5, 8, 10, 18, 23, 32, 44, 46, 48\n kinetics.py 4 0 100%\n kinetics_tests.py 76 0 100%\n reaction_coeffs.py 18 0 100%\n test.py 7 0 100%\n --------------------------------------------------\n TOTAL 148 9 94%\n \n \u001b[33m\u001b[1m========================= no tests ran in 2.02 seconds =========================\u001b[0m\n\n\n---\n# Problem 7\nGet together with your project team, form a GitHub organization (with a descriptive team name), and give the teaching staff access. You can have has many repositories as you like within your organization. However, we will grade the repository called **`cs207-FinalProject`**.\n\nWithin the `cs207-FinalProject` repo, you must set up Travis CI and Coveralls. 
Make sure your `README.md` file includes badges indicating how many tests are passing and the coverage of your code.\n\n\n```python\n'''Done by Hongxiang in our group!'''\n```\n\n\n\n\n 'Done by Hongxiang in our group!'\n\n\n", "meta": {"hexsha": "c625354adbfec6893599341e4325c661cf073275", "size": 36022, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "homeworks/HW5/HW5-Final.ipynb", "max_stars_repo_name": "HeyItsRiddhi/cs207_riddhi_shah", "max_stars_repo_head_hexsha": "18d7d6f1fcad213ce35a93ee33c03620f8b06b65", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "homeworks/HW5/HW5-Final.ipynb", "max_issues_repo_name": "HeyItsRiddhi/cs207_riddhi_shah", "max_issues_repo_head_hexsha": "18d7d6f1fcad213ce35a93ee33c03620f8b06b65", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homeworks/HW5/HW5-Final.ipynb", "max_forks_repo_name": "HeyItsRiddhi/cs207_riddhi_shah", "max_forks_repo_head_hexsha": "18d7d6f1fcad213ce35a93ee33c03620f8b06b65", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 36.0580580581, "max_line_length": 424, "alphanum_fraction": 0.5100771751, "converted": true, "num_tokens": 7414, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.23370634623958195, "lm_q2_score": 0.411110869232168, "lm_q1q2_score": 0.09607921914762856}} {"text": "# Week 1\n\n__Goals for this week__\n\nWe will talk about the organization of this course, including the weekly tasks and 3 \"mini\"-projects that await you during on during this semester.\nWe will introduce the frameworks you will use and a you will get a geniality quick course (rychlokurz geniality) in Python, required library and standard structures.\n\n__What is this file?__\n\nThis is a Jupyter notebook. A file, that contains python scripts and also stores the binary results.\n\n__How can I run it?__\n\nFirs you have to run/configure your Jupyter server in your desired python environment.\nThen just follow this notebook and run it cell by cell. Try to answer - code - the questions, so you'll get better grasp on python array/list/dictionary structures.\n\n## Course Information\nIf nothing changes:\n- __Weekly tasks [7 pts]:__ First 3 weeks (after this one), you will work on the same tasks for better understanding of how neural networks work.\n- __Projects [5 pts, 13 pts, 20 pts]:__ You will work in pairs on 3 deep learning project with gradually increasing difficulty throughout the entire semester. You will present your progress during the consultations, which are marked so you will be scored continuously and for the final presentations of your solutions.\n- __Midterm [15 pts]:__ This is gonna be fun\n- __Exam [40 pts]__ This is gonna be a nightmare\n(No worries, not for you, for me to create them)\n\n### Feedback\n\n- Please use\n- Please fill our\n- This notebook is a work in progress. If you notice a mistake, notify us, raise an issue or make a pull request on\n\n### Python\n\nWe will use _Python 3.7_ compatible code in these notebooks, however it may work with older versions.\nWe assume that you have seen Python code before. 
If you have not, the quick course may help and also you should learn the basics as soon as possible (e.g. [W3Schools tutorial](https://www.w3schools.com/python/default.asp))).\nYour first task is to configure python environment, Tensorflow (CPU or GPU based on you HW), PyTorch, and to practice python concepts in the scripts below.\nYou should understand it fully, otherwise review your knowledge before you proceed. You will need it in the following weeks.\n\n\n```python\n# Line comment as you\n\"\"\"\nBlock comment\n\"\"\"\n\n# input and output\nname=input()\nprint(\"Hello, \"+name)\n\n```\n\n Hello, Petko a Denko\n\n\n\n```python\n# variables don't need explicit type declaration\nvar = 'Neural Networks rule!'\nvar = 2021\nvar = 20.21\nvar = True\nvar = [29,1,2021]\nvar = {'NN':'Rule!', 'year':2021}\n\n# Basic types\n2021 # integer\n36.5 # float\n'cool' # string\nTrue, False # boolean operands\nNone # null-like operand\n# python specific\n[12,34,'hello','world'] # list\n{123:456,'DL':'NN'} # dictionary\n\n# Type conversion\nfloat('36.5')\nint(36.5)\nstr(3.65)\n\n# Basic operations\na = 2 + 5 - 2.5\na += 3\nb = 2 ** 3 # exponentiation\nprint(a, b)\nprint(5 / 2)\nprint(5 // 2) # Notice the difference between these two\nprint('DL' + 'NN')\n# All compound assignment operators available\n# including += -= *= **= /= //=\n# pre/post in/decrementers not available (++ --)\n\n\n# F-strings\nprint(f'1 + 2 = {1 + 2}, a = {a}')\nprint('1 + 2 = {1 + 2}, a = {a}') # We need the f at the start for {} to work as expression wrappers.\n\n```\n\n 7.5 8\n 2.5\n 2\n DLNN\n 1 + 2 = 3, a = 7.5\n 1 + 2 = {1 + 2}, a = {a}\n\n\n\n```python\n# Conditions\nif a > 4 and b < 3: # and, or and not are the basic logical operators\n print('a') # Indentation by spaces or tabs tells us where the statement belongs. print('a') is in the if.\nelif b > 5:\n print('b') # Indentation by spaces or tabs tells us where the statement belongs. print('b') is in the elif.\nelse:\n print('c') # Indentation by spaces or tabs tells us where the statement belongs. print('c') is in the else.\nprint('d') # But print('d') is outside, it will print every time.\n\n# Loops\nwhile a < 10:\n if b > 3:\n a += 1 # More indentation for code that is \"deeper\"\n else:\n a += 2\nprint(f'a = {a}')\n\n# 'while' loops are not considered 'pythonic'. 
'for' loops are more common\nfor char in 'string':\n print(char)\n\n```\n\n b\n d\n a = 10.5\n s\n t\n r\n i\n n\n g\n\n\n\n```python\n# Lists - work like C arrays, but they are not dependent on element type\na = [1, 2.4, 10e-7, 0b1001, 'some \"text\"'] # embedded \"\" in '', works vice versa\nlen(a) # Length\nprint(a[1]) # Second element\nprint(a[1:3]) # Second to third element\na.append(4) # Adding at the end\ndel a[3] # Removing at the index\nprint([]) # Empty array\nprint(a)\n\n# This is why for loops are used more often\nfor el in a:\n# but be careful with iterations over list which contain different var types - it is your responsibility\n print(el + 1)\n\n```\n\n\n```python\na = [1, 2.4, 10e-7, 0b1001]\n# We can define lists with list comprehension statements\nb = [el + 2 for el in a]\nprint(b)\n\n```\n\n [3, 4.4, 2.000001, 11]\n\n\n\n```python\n# Dictionaries - key-based structures\na = {\n 'layer_1': 'dense',\n 5: 'five',\n 4: 'four',\n 'result': [1, 2, 3]\n}\nprint(a['layer_1'])\na['layer_2'] = 'ReLU'\nif 'result' in a: # Does key exist?\n print(a['result'])\n{} # Empty\ndel a[5] # Remove record\n \nprint()\nprint('Keys:')\nfor key in a:\n print(key)\n \nprint()\nprint('Keys and values:')\nfor key, value in a.items():\n print(key, ':', value)\n \n# Dictionaries can be also defined via comprehension statement\na = {i: i**2 for i in [1, 2, 3, 4]}\nprint(a)\n\n```\n\n dense\n [1, 2, 3]\n \n Keys:\n layer_1\n 4\n result\n layer_2\n \n Keys and values:\n layer_1 : dense\n 4 : four\n result : [1, 2, 3]\n layer_2 : ReLU\n {1: 1, 2: 4, 3: 9, 4: 16}\n\n\n\n```python\n# Most common and useful iterators\nprint('range(start, stop, step)')\nfor i in range(10,20,2):\n print(i)\n\nprint()\nprint('enumerate')\nlowercase = ['a', 'b', 'c']\nfor i, el in enumerate(lowercase): # iterates over elements attaching the index order\n print(i, el)\n\nprint()\nprint('zip') # Like a zip - side-by-side merges lists together for iteration\nuppercase = ['A', 'B', 'C']\nnumbers = [1,2,3]\nfor n, a, b in zip(numbers,lowercase, uppercase):\n print(n, a, b)\n\n```\n\n range(start, stop, step)\n 10\n 12\n 14\n 16\n 18\n \n enumerate\n 0 a\n 1 b\n 2 c\n \n zip\n 1 a A\n 2 b B\n 3 c C\n\n\n\n```python\n# Functions\ndef example_function(a, b=1, c=1): # b and c have default values\n return a*b, a*c # we return two values at the same time\n\na, b = example_function(1, 2, 3) # and we can assign both values at the same time as well\nprint(a, b)\nprint(example_function(4))\nprint(example_function(5, 2))\nprint(example_function(5, c=2)) # Notice how do the arguments behave\n\n# Classes\nclass A:\n\n def __init__(self, b): # Constructor\n self.b = b # Object variable\n\n def add_to_b(self, c): # self is always the first argument and it references the object itself\n self.b += c\n\n def sub_from_b(self, c):\n self.add_to_b(-c) # Calling object method\n\n def __str__(self): # every python class contain several default methods that start and end with __\n return f'Class A, b={self.b}'\n\n # be careful with naming and using underscores _\n # private, protected and public is expressed by underscores\n # default is public\n def foo(self):\n print(\"I'm public\")\n\n def _bar(self):\n print(\"I'm protected\")\n def __nope(self):\n\n print(\"You shall not print me, I'm private\")\n\na = A(5)\na.add_to_b(1)\nprint(a.b)\na.sub_from_b(2)\nprint(a.b)\nprint(a)\na.foo()\na.bar()\na.nope()\n```\n\n### Linear Algebra\n\nNeural network models can be defined using vectors and matrices, i.e. 
concepts from linear algebra.\nThe _DeepLearningBook_ dedicates first pages for linear algebra, we will be using a bit of it during this semester, therefore you should know how basic linear operations work. Some of the concepts were covered during your _Algebra and Discrete Mathematics_ course. Read the provided links to review necessary topics (note that there are some questions at the end of each page) and solve the exercises in this notebook.\n\n#### Vectors\n- [On vectors](https://www.mathsisfun.com/algebra/vectors.html)\n- [On dot product](https://www.mathsisfun.com/algebra/vectors-dot-product.html)\n\nIn these labs we use _DeepLearningBook_ notation: simple italic for scalars $x$, lowercase bold italic for vectors $\\boldsymbol{x}$ and uppercase bold italics for matrices $\\boldsymbol{X}$.\nPlease, keep this notation in mind.\n\n### NumPy\n\n[Numpy](https://numpy.org/) is a popular Python library for scientific computation. It provides a convenient way of working with vectors and matrices.\nIdeally try to use NumPy to solve these exercises.\n\n\n\n```python\nimport numpy as np\n```\n\n__Exercise 1.1:__ Calculate the following:\n\n$\n\\begin{align}\n\\boldsymbol{a} = \\begin{bmatrix}0 \\\\ 1 \\\\ 3 \\end{bmatrix} \\ \\\n\\boldsymbol{b} = \\begin{bmatrix}2 \\\\ 4 \\\\ 1 \\end{bmatrix}\n\\end{align}\n$\n\n$5\\boldsymbol{a} = ?$\n\n$\\boldsymbol{a} + \\boldsymbol{b} = ?$\n\n$\\boldsymbol{a} \\cdot \\boldsymbol{b} = ?$\n\n$||\\boldsymbol{a}|| = ?$\n\n\n\nTODO\n\n\n```python\n# Init vectors\na = np.array([0, 1, 3])\nb = np.array([2, 4, 1])\n\n# Basic operations, results for E 1.1\nprint(5*a)\nprint(a + b)\nprint(np.dot(a, b))\nprint(np.linalg.norm(a))\n```\n\n [ 0 5 15]\n [2 5 4]\n 7\n 3.1622776601683795\n\n\n__Exercise 1.2:__ Determine quickly whether or not two vectors (e.g. $\\boldsymbol{a}$ and $\\boldsymbol{b}$) are orthogonal (perpendicular)?\n\nTODO\n\n\n```python\n# Perpendicularity (similarity) of vectors\nnp.linalg.norm(a - b)\n```\n\n\n\n\n 4.123105625617661\n\n\n\n__Exercise 1.3:__ Compute which vector is longer, $\\boldsymbol{a}$ or $\\boldsymbol{b}$?\n\n\n```python\n# Length of vectors\nlen_a = np.linalg.norm(a)\nlen_b = np.linalg.norm(b)\n\nprint('a' if len_a > len_b else 'b')\n```\n\n b\n\n\n#### Matrices\n- [On matrices](https://www.mathsisfun.com/algebra/matrix-introduction.html)\n- [On matrix multiplication](https://www.mathsisfun.com/algebra/matrix-multiplying.html)\n\n\n__INFO: NumPy array indexing__\n\n\n```python\n# Indexing, i.e. selecting elements from an array / vector / matrix\n\nW = np.array([\n [1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]\n])\n\nW[1, 1] # Element from second row of second column\nW[0] # First row\nW[[0, 2]] # First AND third row\nW[:, 0] # First column\nW[1, [0, 2]] # First AND third column of second row\n\n# Access array slices by index\na = np.zeros([10,10])\na[:3] = 1\na[:, :3] = 2\na[:3, :3] = 3\nrows = [4,6,7]\ncols = [9,3,5]\na[rows, cols] = 4\nprint(a)\n\n# transposition\na = np.arange(24).reshape(2,3,4)\nprint(a.shape)\nprint(a)\na=np.transpose(a, (2,1,0))\n# swap 0th and 2nd axes\nprint(a.shape)\nprint(a)\n\n```\n\n [[3. 3. 3. 1. 1. 1. 1. 1. 1. 1.]\n [3. 3. 3. 1. 1. 1. 1. 1. 1. 1.]\n [3. 3. 3. 1. 1. 1. 1. 1. 1. 1.]\n [2. 2. 2. 0. 0. 0. 0. 0. 0. 0.]\n [2. 2. 2. 0. 0. 0. 0. 0. 0. 4.]\n [2. 2. 2. 0. 0. 0. 0. 0. 0. 0.]\n [2. 2. 2. 4. 0. 0. 0. 0. 0. 0.]\n [2. 2. 2. 0. 0. 4. 0. 0. 0. 0.]\n [2. 2. 2. 0. 0. 0. 0. 0. 0. 0.]\n [2. 2. 2. 0. 0. 0. 0. 0. 0. 
0.]]\n (2, 3, 4)\n [[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n \n [[12 13 14 15]\n [16 17 18 19]\n [20 21 22 23]]]\n (4, 3, 2)\n [[[ 0 12]\n [ 4 16]\n [ 8 20]]\n \n [[ 1 13]\n [ 5 17]\n [ 9 21]]\n \n [[ 2 14]\n [ 6 18]\n [10 22]]\n \n [[ 3 15]\n [ 7 19]\n [11 23]]]\n\n\n__Exercise 1.4:__ Calculate the following. Vectors are columns by default.\n\n$\n\\boldsymbol{C} = \\begin{bmatrix}0 & 2 & 4\\\\ 1 & 2 & 5 \\end{bmatrix}\n\\boldsymbol{d} = \\begin{bmatrix} 1 & 7 \\end{bmatrix}\n\\boldsymbol{E} = \\begin{bmatrix} 1 & 2 \\\\ 3 & 4 \\\\ \\end{bmatrix}\n$\n\n$\\boldsymbol{C}\\boldsymbol{d} = ?$\n\n$\\boldsymbol{C}\\boldsymbol{E} = ?$\n\n$\\boldsymbol{d}^T \\boldsymbol{C} - \\boldsymbol{d}^T = ?$\n\n$\\boldsymbol{C}^T\\boldsymbol{d} = ?$\n\n$\\boldsymbol{C}\\boldsymbol{d}^T = ?$\n\n$\\boldsymbol{d}\\boldsymbol{E} = ?$\n\n\n```python\n# Init matrices\n# One way:\nC = np.array([\n [0, 2, 4],\n [1, 2, 5]\n])\n\nd = np.array([1, 7])\n\n# Other way:\nE = np.arange(4).reshape(2, 2)\n\nprint(C.T * d)\nprint(C.T @ E)\nprint(d * C.T - d)\n```\n\n [[ 0 7]\n [ 2 14]\n [ 4 35]]\n [[ 2 3]\n [ 4 8]\n [10 19]]\n [[-1 0]\n [ 1 7]\n [ 3 28]]\n\n\n__Exercise 1.5:__ We can express the result of general matrix-vector product $\\boldsymbol{Ex}_1$ as a vector of dot products.\nIs it possible to do the same with $\\boldsymbol{x}_2^T\\boldsymbol{E}$?\n\n\n```python\n# There is a difference between a 1-D vector and a column matrix in numpy:\nx1 = np.array([1, 2]) # This is a vector\nx2 = np.array([ # This is a matrix\n [1],\n [2]\n])\n\n# First let's see the dimensions of these two\nprint(x1.shape)\nprint(x2.shape)\n\n# Matrix - vector multiplicataion\n# Then we can multiply them with E using np.matmul or @ matrix multiplication operator\nprint(E @ x1)\n\n# TODO x_2^T \\times E\n\n```\n\n (2,)\n (2, 1)\n [2 8]\n [[2]\n [8]]\n\n\n__Exercise 1.6:__ What is the difference between the two results from previous code cell?\n\nTODO actually, just think about the answer ;)\n\n\n### Derivatives\n\nThe final topic to cover are derivatives.\nAlmost all training algorithms of neural networks in practice are based on calculating the derivatives with respect to (w.r.t.) parameters of the model.\nYou should know the basics from your _Calculus course_ (Matematick\u00e1 anal\u00fdza), but just in case, we recommend you to read the following to refresh your memory:\n\n- [On derivatives](https://www.mathsisfun.com/calculus/derivatives-introduction.html)\n\nYou won't need to use derivatives during this course, so you won't need to learn all the [derivative rules](https://www.mathsisfun.com/calculus/derivatives-rules.html). However we need you to have an intuition about what derivatives are and what is their geometric interpretation. In essence, we need you to understand that a derivative tells us what is the slope of the tangent at given point. You should understand what is happening in the gif below:\n\n\n```python\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(6,6))\ntangents = plt.imread('images/tangents.gif')\nplt.imshow(tangents)\n```\n\n
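The animation above shows secant lines approaching the tangent line. If you want to check this intuition numerically, the cell below is a small extra sketch (it is not one of the graded exercises): the example function $f(x) = x^2$, the point $x_0$ and the step sizes $h$ are arbitrary choices made only for this illustration.\n\n\n```python\n# Extra illustration (not a graded exercise): the derivative as the slope of the tangent.\n# The function f, the point x0 and the step sizes h are arbitrary example choices.\n\ndef f(x):\n    return x ** 2\n\nx0 = 1.5\nfor h in [1.0, 0.1, 0.01, 0.001]:\n    secant_slope = (f(x0 + h) - f(x0)) / h   # slope of the secant through x0 and x0 + h\n    print(f'h = {h:<6}  secant slope = {secant_slope:.6f}')\n\ncentral = (f(x0 + 1e-6) - f(x0 - 1e-6)) / (2 * 1e-6)  # central-difference approximation\nprint(f'central difference = {central:.6f}')\nprint(f'analytic derivative 2*x0 = {2 * x0:.6f}')\n```\n\nAs $h$ shrinks, the secant slope approaches the analytic derivative $2x_0 = 3$, which is exactly what the animation shows geometrically.\n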
License: en:User:Dino, User:Lfahlberg CC BY-SA 3.0, via Wikimedia Commons
\n\nPartial derivatives are a concept you might not have heard of before.\nIt applies when we differentiate a function of more than one variable.\nIn such a case we can actually take the derivative in any direction.\n\nRead the following link:\n\n- [On partial derivatives](https://www.mathsisfun.com/calculus/derivatives-partial.html)\n\nA function of one variable (1D) is a curve, and a function of two variables (2D) is a topographical relief.\nA 2D function can be visualized by a 3D graph.\nIn this graph you can pick a point and then ask what the slope of the tangent is in any direction.\nMost commonly you would calculate the slope along the axes of both variables (let's say $x,y$): $\\frac{df}{dx}$ and $\\frac{df}{dy}$.\n\nThe vector of derivatives w.r.t. all the parameters is called a _gradient_.\nGenerally, for a function $f$ with an arbitrary number of parameters $x_1, x_2, ..., x_N = \\boldsymbol{x}$, the gradient $\\triangledown f$ is defined as:\n\n\\begin{equation}\n\\triangledown f(\\boldsymbol{x}) = \\frac{df}{d\\boldsymbol{x}} = \\begin{bmatrix}\\frac{df}{dx_1} \\\\ \\frac{df}{dx_2} \\\\ \\vdots \\\\ \\frac{df}{dx_N} \\end{bmatrix}\n\\end{equation}\n\nGradient is the most important concept from this week's lab. The gradient is a vector quantity that tells us the _direction of steepest ascent_ at each point. This is a very important property, which we will often use in the following weeks. The magnitude of this vector tells us how steep this ascent is, i.e. what the slope of the tangent is in the direction of the gradient.\n\nTo compare _derivative_ and _gradient_:\n\n- _Derivative_ is a quantity that tells us what the rate of change is in a given direction.\n- _Gradient_ is a quantity that tells us what the direction of the steepest rate of change is, along with the rate of this change.\n\nObserve the difference between these two concepts in the Figure below. All the plots show the same function $F(x,y) = \\sin(x) \\cos(y)$. In the first two plots we show the derivatives w.r.t. $y$ and $x$, respectively. These are shown as white arrows. Notice that they all point in one direction. On the other hand, in the last plot we show the gradients. If we interpret the derivatives from the two previous plots as vectors, these gradients are in fact their sum.\n\n\n```python\nfrom backstage import plots\nplots.derivatives_plot()\n```\n\n__Exercise 1.7:__ With the following derivative rules:\n- $(af)' = af'$\n- $(f + g)' = f' + g'$\n- $(x^k)' = kx^{k-1}$\n\nCalculate the following:\n\n$f(x, y) = x^2 + y^2 + 2x$\n\n$\\frac{df}{dx}=?$\n\n$\\frac{df}{dy}=?$\n\n$\\triangledown f(x, y) = ?$\n\n$g(x_1, x_2, \\dots, x_N) = g(\\boldsymbol{x}) = \\boldsymbol{a} \\cdot \\boldsymbol{x}$\n\n$\\triangledown g(\\boldsymbol{x}) = ?$\n\n\n\n```python\n# TODO Derivatives ... by hand... on your paper\n```\n\n### Correct Answers\n\n__E 1.4:__\n\nThe term $\\boldsymbol{C}\\boldsymbol{d}^T$ is not valid. 
You cannot multiply two matrices with dimensions $2 \\times 3$ and $1 \\times 2$.\nThe term $\\boldsymbol{d}\\boldsymbol{E}$ is also invalid.\n\n__E 1.7:__\n\n$\\frac{df}{dx}= 2x + 2$\n\n$\\frac{df}{dy}= 2y$\n\n$\\triangledown f(x, y) = \\begin{bmatrix}2x + 2 \\\\ 2y \\end{bmatrix} $\n\n$\\triangledown g(\\boldsymbol{x}) = \\boldsymbol{a}$\n\n\n```python\n\n```\n\n\n", "meta": {"hexsha": "d3ddf247e16de0a49b46b16007735d5383ca9cfd", "size": 168446, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week_1/TrainExercises.ipynb", "max_stars_repo_name": "denislaca/neural_networks_at_fiit", "max_stars_repo_head_hexsha": "0d8c889e1334bd5db7ff6028453897411cafa610", "max_stars_repo_licenses": ["BSD-2-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week_1/TrainExercises.ipynb", "max_issues_repo_name": "denislaca/neural_networks_at_fiit", "max_issues_repo_head_hexsha": "0d8c889e1334bd5db7ff6028453897411cafa610", "max_issues_repo_licenses": ["BSD-2-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week_1/TrainExercises.ipynb", "max_forks_repo_name": "denislaca/neural_networks_at_fiit", "max_forks_repo_head_hexsha": "0d8c889e1334bd5db7ff6028453897411cafa610", "max_forks_repo_licenses": ["BSD-2-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 168446.0, "max_line_length": 168446, "alphanum_fraction": 0.9193153889, "converted": true, "num_tokens": 5593, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.46879062662624377, "lm_q2_score": 0.20434190478229486, "lm_q1q2_score": 0.09579356958889225}} {"text": "```python\n\"\"\"Intermolecular Interactions and Symmetry-Adapted Perturbation Theory\"\"\"\n\n__authors__ = \"Konrad Patkowski\"\n__email__ = [\"patkowsk@auburn.edu\"]\n\n__copyright__ = \"(c) 2008-2020, The Psi4Education Developers\"\n__license__ = \"BSD-3-Clause\"\n__date__ = \"2020-07-16\"\n```\n\nThis lab activity is designed to teach students about weak intermolecular interactions, and the calculation and interpretation of the interaction energy between two molecules. The interaction energy can be broken down into physically meaningful contributions (electrostatics, induction, dispersion, and exchange) using symmetry-adapted perturbation theory (SAPT). In this exercise, we will calculate complete interaction energies and their SAPT decomposition using the procedures from the Psi4 software package, processing and analyzing the data with NumPy and Matplotlib.\n\nPrerequisite knowledge: the Hartree-Fock method, molecular orbitals, electron correlation and the MP2 theory. The lab also assumes all the standard Python prerequisites of all Psi4Education labs.\n\nLearning Objectives: \n1. Recognize and appreciate the ubiquity and diversity of intermolecular interactions.\n2. Compare and contrast the supermolecular and perturbative methods of calculating interaction energy.\n3. 
Analyze and interpret the electrostatic, induction, dispersion, and exchange SAPT contributions at different intermolecular separations.\n\nAuthor: Konrad Patkowski, Auburn University (patkowsk@auburn.edu; ORCID: 0000-0002-4468-207X)\n\nCopyright: Psi4Education Project, 2020\n\n# Weak intermolecular interactions \n\nIn this activity, you will examine some properties of weak interactions between molecules. As the molecular subunits are not connected by any covalent (or ionic) bonds, we often use the term *noncovalent interactions*. Suppose we want to calculate the interaction energy between molecule A and molecule B for a certain geometry of the A-B complex (obviously, this interaction energy depends on how far apart the molecules are and how they are oriented). The simplest way of doing so is by subtraction (in the so-called *supermolecular approach*):\n\n\\begin{equation}\nE_{\\rm int}=E_{\\rm A-B}-E_{\\rm A}-E_{\\rm B}\n\\end{equation}\n\nwhere $E_{\\rm X}$ is the total energy of system X, computed using our favorite electronic structure theory and basis set. A negative value of $E_{\\rm int}$ means that A and B have a lower energy when they are together than when they are apart, so they do form a weakly bound complex that might be stable at least at very low temperatures. A positive value of $E_{\\rm int}$ means that the A-B complex is unbound - it is energetically favorable for A and B to go their separate ways. \n\nLet's consider a simple example of two interacting helium atoms and calculate $E_{\\rm int}$ at a few different interatomic distances $R$. You will use Psi4 to calculate the total energies that you need to perform subtraction. When you do so for a couple different $R$, you will be able to sketch the *potential energy curve* - the graph of $E_{\\rm int}(R)$ as a function of $R$.\n\nOK, but how should you pick the electronic structure method to calculate $E_{\\rm A-B}$, $E_{\\rm A}$, and $E_{\\rm B}$? Let's start with the simplest choice and try out the Hartree-Fock (HF) method. In case HF is not accurate enough, we will also try the coupled-cluster method with single, double, and perturbative triple excitations - CCSD(T). If you haven't heard about CCSD(T) before, let's just state that it is **(1)** usually very accurate (it's even called the *gold standard* of electronic structure theory) and **(2)** very expensive for larger molecules. For the basis set, let's pick the augmented correlation consistent triple-zeta (aug-cc-pVTZ) basis of Dunning which should be quite OK for both HF and CCSD(T).\n\n\n\n```python\n# A simple Psi4 input script to compute the potential energy curve for two helium atoms\n\n%matplotlib notebook\nimport time\nimport numpy as np\nimport scipy\nfrom scipy.optimize import *\nnp.set_printoptions(precision=5, linewidth=200, threshold=2000, suppress=True)\nimport psi4\nimport matplotlib.pyplot as plt\n\n# Set Psi4 & NumPy Memory Options\npsi4.set_memory('2 GB')\npsi4.core.set_output_file('output.dat', False)\n\nnumpy_memory = 2\n\npsi4.set_options({'basis': 'aug-cc-pVTZ',\n 'e_convergence': 1e-10,\n 'd_convergence': 1e-10,\n 'INTS_TOLERANCE': 1e-15})\n\n\n```\n\nWe need to collect some data points to graph the function $E_{\\rm int}(R)$. Therefore, we set up a list of distances $R$ for which we will run the calculations (we go with 11 of them). For each distance, we need to remember three values ($E_{\\rm A-B}$, $E_{\\rm A}$, and $E_{\\rm B}$). For this purpose, we will prepare two $11\\times 3$ NumPy arrays to hold the HF and CCSD(T) results. 
\n\n\n\n```python\ndistances = [4.0,4.5,5.0,5.3,5.6,6.0,6.5,7.0,8.0,9.0,10.0]\nehf = np.zeros((11,3))\neccsdt = np.zeros((11,3))\n\n\n```\n\nWe are almost ready to crunch some numbers! One question though: how are we going to tell Psi4 whether we want $E_{\\rm A-B}$, $E_{\\rm A}$, or $E_{\\rm B}$? \nWe need to define three different geometries. The $E_{\\rm A-B}$ one has two helium atoms $R$ atomic units from each other - we can place one atom at $(0,0,0)$ and the other at $(0,0,R)$. The other two geometries involve one actual helium atom, with a nucleus and two electrons, and one *ghost atom* in place of the other one. A ghost atom does not have a nucleus or electrons, but it does carry the same basis functions as an actual atom - we need to calculate all energies in the same basis set, with functions centered at both $(0,0,0)$ and $(0,0,R)$, to prevent the so-called *basis set superposition error*. In Psi4, the syntax `Gh(X)` denotes a ghost atom where basis functions for atom type X are located. \n\nUsing ghost atoms, we can now easily define geometries for the $E_{\\rm A}$ and $E_{\\rm B}$ calculations.\n\n\n\n```python\nfor i in range(len(distances)):\n dimer = psi4.geometry(\"\"\"\n He 0.0 0.0 0.0\n --\n He 0.0 0.0 \"\"\"+str(distances[i])+\"\"\"\n units bohr\n symmetry c1\n \"\"\")\n\n psi4.energy('ccsd(t)') #HF will be calculated along the way\n ehf[i,0] = psi4.variable('HF TOTAL ENERGY')\n eccsdt[i,0] = psi4.variable('CCSD(T) TOTAL ENERGY')\n psi4.core.clean()\n\n monomerA = psi4.geometry(\"\"\"\n He 0.0 0.0 0.0\n --\n Gh(He) 0.0 0.0 \"\"\"+str(distances[i])+\"\"\"\n units bohr\n symmetry c1\n \"\"\")\n\n psi4.energy('ccsd(t)') #HF will be calculated along the way\n ehf[i,1] = psi4.variable('HF TOTAL ENERGY')\n eccsdt[i,1] = psi4.variable('CCSD(T) TOTAL ENERGY')\n psi4.core.clean()\n\n monomerB = psi4.geometry(\"\"\"\n Gh(He) 0.0 0.0 0.0\n --\n He 0.0 0.0 \"\"\"+str(distances[i])+\"\"\"\n units bohr\n symmetry c1\n \"\"\")\n\n psi4.energy('ccsd(t)') #HF will be calculated along the way\n ehf[i,2] = psi4.variable('HF TOTAL ENERGY')\n eccsdt[i,2] = psi4.variable('CCSD(T) TOTAL ENERGY')\n psi4.core.clean()\n\n\n```\n\nWe have completed the $E_{\\rm A-B}$, $E_{\\rm A}$, or $E_{\\rm B}$ calculations for all 11 distances $R$ (it didn't take that long, did it?). We will now perform the subtraction to form NumPy arrays with $E_{\\rm int}(R)$ values for each method, converted from atomic units (hartrees) to kcal/mol, and graph the resulting potential energy curves using the matplotlib library. \n\n\n\n```python\n#COMPLETE the two lines below to generate interaction energies. Convert them from atomic units to kcal/mol.\neinthf = \neintccsdt = \n\nprint ('HF PEC',einthf)\nprint ('CCSD(T) PEC',eintccsdt)\n\nplt.plot(distances,einthf,'r+',linestyle='-',label='HF')\nplt.plot(distances,eintccsdt,'bo',linestyle='-',label='CCSD(T)')\nplt.hlines(0.0,4.0,10.0)\nplt.legend(loc='upper right')\nplt.show()\n\n```\n\n*Questions* \n1. Which curve makes more physical sense?\n2. Why does helium form a liquid at very low temperatures?\n3. You learned in freshman chemistry that two helium atoms do not form a molecule because there are two electrons on a bonding orbital and two electrons on an antibonding orbital. How does this information relate to the behavior of HF (which does assume a molecular orbital for every electron) and CCSD(T) (which goes beyond the molecular orbital picture)?\n4. 
When you increase the size of the interacting molecules, the CCSD(T) method quickly gets much more expensive and your calculation might take weeks instead of seconds. It gets especially expensive for the calculation of $E_{\\rm A-B}$ because A-B has more electrons than either A or B. Your friend suggests to use CCSD(T) only for the easier terms $E_{\\rm A}$ and $E_{\\rm B}$ and subtract them from $E_{\\rm A-B}$ calculated with a different, cheaper method such as HF. Why is this a really bad idea?\n\n*To answer the questions above, please double click this Markdown cell to edit it. When you are done entering your answers, run this cell as if it was a code cell, and your Markdown source will be recompiled.*\n\n\nA nice feature of the supermolecular approach is that it is very easy to use - you just need to run three standard energy calculations, and modern quantum chemistry codes such as Psi4 give you a lot of methods to choose from. However, the accuracy of subtraction hinges on error cancellation, and we have to be careful to ensure that the errors do cancel between $E_{\\rm A-B}$ and $E_{\\rm A}+E_{\\rm B}$. Another drawback of the supermolecular approach is that it is not particularly rich in physical insight. All that we get is a single number $E_{\\rm int}$ that tells us very little about the underlying physics of the interaction. Therefore, one may want to find an alternative approach where $E_{\\rm int}$ is computed directly, without subtraction, and it is obtained as a sum of distinct, physically meaningful terms. Symmetry-adapted perturbation theory (SAPT) is such an alternative approach.\n\n# Symmetry-Adapted Perturbation Theory (SAPT)\n\nSAPT is a perturbation theory aimed specifically at calculating the interaction energy between two molecules. Contrary to the supermolecular approach, SAPT obtains the interaction energy directly - no subtraction of similar terms is needed. Moreover, the result is obtained as a sum of separate corrections accounting for the electrostatic, induction, dispersion, and exchange contributions to interaction energy, so the SAPT decomposition facilitates the understanding and physical interpretation of results.\n- *Electrostatic energy* arises from the Coulomb interaction between charge densities of isolated molecules.\n- *Induction energy* is the energetic effect of mutual polarization between the two molecules.\n- *Dispersion energy* is a consequence of intermolecular electron correlation, usually explained in terms of correlated fluctuations of electron density on both molecules.\n- *Exchange energy* is a short-range repulsive effect that is a consequence of the Pauli exclusion principle.\n\nIn this activity, we will explore the simplest level of the SAPT theory called SAPT0 (see [Parker:2014] for the definitions of different levels of SAPT). A particular SAPT correction $E^{(nk)}$ corresponds to effects that are of $n$th order in the intermolecular interaction and $k$th order in the intramolecular electron correlation. 
In SAPT0, intramolecular correlation is neglected, and intermolecular interaction is included through second order:\n\n\\begin{equation}\nE_{\\rm int}^{\\rm SAPT0}=E^{(10)}_{\\rm elst}+E^{(10)}_{\\rm exch}+E^{(20)}_{\\rm ind,resp}+E^{(20)}_{\\rm exch-ind,resp}+E^{(20)}_{\\rm disp}+E^{(20)}_{\\rm exch-disp}+\\delta E^{(2)}_{\\rm HF}\n\\end{equation}\n\nIn this equation, the consecutive corrections account for the electrostatic, first-order exchange, induction, exchange induction, dispersion, and exchange dispersion effects, respectively. The additional subscript ''resp'' denotes that these corrections are computed including response effects - the HF orbitals of each molecule are relaxed in the electric field generated by the other molecule. The last term $\\delta E^{(2)}_{\\rm HF}$ approximates third- and higher-order induction and exchange induction effects and is taken from a supermolecular HF calculation.\n\nSticking to our example of two helium atoms, let's now calculate the SAPT0 interaction energy contributions using Psi4. In the results that follow, we will group $E^{(20)}_{\\rm ind,resp}$, $E^{(20)}_{\\rm exch-ind,resp}$, and $\\delta E^{(2)}_{\\rm HF}$ to define the total induction effect (including its exchange quenching), and group $E^{(20)}_{\\rm disp}$ with $E^{(20)}_{\\rm exch-disp}$ to define the total dispersion effect.\n\n\n\n```python\ndistances = [4.0,4.5,5.0,5.3,5.6,6.0,6.5,7.0,8.0,9.0,10.0]\neelst = np.zeros((11))\neexch = np.zeros((11))\neind = np.zeros((11))\nedisp = np.zeros((11))\nesapt = np.zeros((11))\n\nfor i in range(len(distances)):\n dimer = psi4.geometry(\"\"\"\n He 0.0 0.0 0.0\n --\n He 0.0 0.0 \"\"\"+str(distances[i])+\"\"\"\n units bohr\n symmetry c1\n \"\"\")\n\n psi4.energy('sapt0')\n eelst[i] = psi4.variable('SAPT ELST ENERGY') * 627.509\n eexch[i] = psi4.variable('SAPT EXCH ENERGY') * 627.509\n eind[i] = psi4.variable('SAPT IND ENERGY') * 627.509\n edisp[i] = psi4.variable('SAPT DISP ENERGY') * 627.509\n esapt[i] = psi4.variable('SAPT TOTAL ENERGY') * 627.509\n psi4.core.clean()\n\nplt.close()\nplt.ylim(-0.2,0.4)\nplt.plot(distances,eelst,'r+',linestyle='-',label='SAPT0 elst')\nplt.plot(distances,eexch,'bo',linestyle='-',label='SAPT0 exch')\nplt.plot(distances,eind,'g^',linestyle='-',label='SAPT0 ind')\nplt.plot(distances,edisp,'mx',linestyle='-',label='SAPT0 disp')\nplt.plot(distances,esapt,'k*',linestyle='-',label='SAPT0 total')\nplt.hlines(0.0,4.0,10.0)\nplt.legend(loc='upper right')\nplt.show()\n\n```\n\n*Questions* \n1. What is the origin of attraction between two helium atoms?\n2. For the interaction of two helium atoms, which SAPT terms are *long-range* (vanish with distance like some inverse power of $R$) and which are *short-range* (vanish exponentially with $R$ just like the overlap of molecular orbitals)?\n3. The dispersion energy decays at large $R$ like $R^{-n}$. Find the value of $n$ by fitting a function to the five largest-$R$ results. You can use `scipy.optimize.curve_fit` to perform the fitting, but you have to define the appropriate function first.\nDoes the optimal exponent $n$ obtained by your fit agree with what you know about van der Waals dispersion forces? Is the graph of dispersion energy shaped like the $R^{-n}$ graph for large $R$? What about intermediate $R$?\n\n*Do you know how to calculate $R^{-n}$ if you have an array with $R$ values? 
If not, look it up in the NumPy documentation!* \n\n\n\n```python\n#COMPLETE the definition of function f below.\ndef f\n\nndisp = scipy.optimize.curve_fit(f,distances[-5:],edisp[-5:])\nprint (\"Optimal dispersion exponent:\",ndisp[0][0])\n\n```\n\n# Interaction between two water molecules\n\nFor the next part, you will perform the same analysis and obtain the supermolecular and SAPT0 data for the interaction of two water molecules. We now have many more degrees of freedom: in addition to the intermolecular distance $R$, we can change the relative orientation of two molecules, or even their internal geometries (O-H bond lengths and H-O-H angles). In this way, the potential energy curve becomes a multidimensional *potential energy surface*. It is hard to graph functions of more than two variables, so we will stick to the distance dependence of the interaction energies. Therefore, we will assume one particular orientation of two water molecules (a hydrogen-bonded one) and vary the intermolecular distance $R$ while keeping the orientation, and molecular geometries, constant. The geometry of the A-B complex has been defined for you, but you have to request all the necessary Psi4 calculations and extract the numbers that you need. To save time, we will downgrade the basis set to aug-cc-pVDZ and use MP2 (an approximate method that captures most of electron correlation) in place of CCSD(T).\n\n*Hints:* To prepare the geometries for the individual water molecules A and B, copy and paste the A-B geometry, but use the Gh(O2)... syntax to define the appropriate ghost atoms. Remember to run `psi4.core.clean()` after each calculation.\n\n\n\n```python\ndistances_h2o = [2.7,3.0,3.5,4.0,4.5,5.0,6.0,7.0,8.0,9.0]\nehf_h2o = np.zeros((10,3))\nemp2_h2o = np.zeros((10,3))\npsi4.set_options({'basis': 'aug-cc-pVDZ'})\n\nfor i in range(len(distances_h2o)):\n dimer = psi4.geometry(\"\"\"\n O1\n H1 O1 0.96\n H2 O1 0.96 H1 104.5\n --\n O2 O1 \"\"\"+str(distances_h2o[i])+\"\"\" H1 5.0 H2 0.0\n X O2 1.0 O1 120.0 H2 180.0\n H3 O2 0.96 X 52.25 O1 90.0\n H4 O2 0.96 X 52.25 O1 -90.0\n units angstrom\n symmetry c1\n \"\"\")\n\n#COMPLETE the MP2 energy calculations for A-B, A, and B, and prepare the data for the graph.\n#Copy and paste the A-B geometry, but use the Gh(O2)... syntax to define the appropriate ghost atoms for the A and B calculations. 
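# (Illustrative sketch added for guidance, not the required answer: it simply mirrors the
#  He...He monomer cells earlier in this notebook, and the labels used are just one possible choice.
#  For monomer A you keep the first water and ghost the second one, e.g.
#      monomerA = psi4.geometry("""
#      O1
#      H1 O1 0.96
#      H2 O1 0.96 H1 104.5
#      --
#      Gh(O2) O1 """+str(distances_h2o[i])+""" H1 5.0 H2 0.0
#      X O2 1.0 O1 120.0 H2 180.0
#      Gh(H3) O2 0.96 X 52.25 O1 90.0
#      Gh(H4) O2 0.96 X 52.25 O1 -90.0
#      units angstrom
#      symmetry c1
#      """)
#  Monomer B is analogous, with the ghosts placed on the first water. For each of the three
#  geometries, psi4.energy('mp2') computes both energies, which can then be read back with
#  psi4.variable('HF TOTAL ENERGY') and psi4.variable('MP2 TOTAL ENERGY').)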
\n#Remember to run psi4.core.clean() after each calculation.\n\nprint ('HF PEC',einthf_h2o)\nprint ('MP2 PEC',eintmp2_h2o)\n\nplt.close()\nplt.plot(distances_h2o,einthf_h2o,'r+',linestyle='-',label='HF')\nplt.plot(distances_h2o,eintmp2_h2o,'bo',linestyle='-',label='MP2')\nplt.hlines(0.0,2.5,9.0)\nplt.legend(loc='upper right')\nplt.show()\n\n```\n\n\n```python\neelst_h2o = np.zeros((10))\neexch_h2o = np.zeros((10))\neind_h2o = np.zeros((10))\nedisp_h2o = np.zeros((10))\nesapt_h2o = np.zeros((10))\n\n#COMPLETE the SAPT calculations for 10 distances to prepare the data for the graph.\n\nplt.close()\nplt.ylim(-10.0,10.0)\nplt.plot(distances_h2o,eelst_h2o,'r+',linestyle='-',label='SAPT0 elst')\nplt.plot(distances_h2o,eexch_h2o,'bo',linestyle='-',label='SAPT0 exch')\nplt.plot(distances_h2o,eind_h2o,'g^',linestyle='-',label='SAPT0 ind')\nplt.plot(distances_h2o,edisp_h2o,'mx',linestyle='-',label='SAPT0 disp')\nplt.plot(distances_h2o,esapt_h2o,'k*',linestyle='-',label='SAPT0 total')\nplt.hlines(0.0,2.5,9.0)\nplt.legend(loc='upper right')\nplt.show()\n\n```\n\nBefore we proceed any further, let us check one thing about your first MP2 water-water interaction energy calculation, the one that produced `eintmp2_h2o[0]`. Here's the geometry of that complex again:\n\n\n\n```python\n#all x,y,z in Angstroms\natomtypes = [\"O1\",\"H1\",\"H2\",\"O2\",\"H3\",\"H4\"]\ncoordinates = np.array([[0.116724185090, 1.383860971547, 0.000000000000],\n [0.116724185090, 0.423860971547, 0.000000000000],\n [-0.812697549673, 1.624225775439, 0.000000000000],\n [-0.118596320329, -1.305864713301, 0.000000000000],\n [0.362842754701, -1.642971982825, -0.759061990794],\n [0.362842754701, -1.642971982825, 0.759061990794]])\n\n```\n\nFirst, write the code to compute the four O-H bond lengths and two H-O-H bond angles in the two molecules. *(Hint: if the angles look weird, maybe they are still in radians - don't forget to convert them to degrees.)* Are the two water molecules identical?\n\nThen, check the values of the MP2 energy for these two molecules (the numbers $E_{\\rm A}$ and $E_{\\rm B}$ that you subtracted to get the interaction energy). If the molecules are the same, why are the MP2 energies close but not the same?\n\n*Hints:* The most elegant way to write this code is to define functions `distance(point1,point2)` for the distance between two points $(x_1,y_1,z_1)$ and $(x_2,y_2,z_2)$, and `angle(vec1,vec2)` for the angle between two vectors $(x_{v1},y_{v1},z_{v1})$ and $(x_{v2},y_{v2},z_{v2})$. Recall that the cosine of this angle is related to the dot product $(x_{v1},y_{v1},z_{v1})\\cdot(x_{v2},y_{v2},z_{v2})$. If needed, check the documentation on how to calculate the dot product of two NumPy vectors. \n\nWhen you are parsing the NumPy array with the coordinates, remember that `coordinates[k,:]` is the vector of $(x,y,z)$ values for atom number $k$, $k=0,1,2,\\ldots,N_{\\rm atoms}-1$. \n\n\n\n```python\n\n#COMPLETE the distance and angle calculations below.\nro1h1 = \nro1h2 = \nro2h3 = \nro2h4 = \nah1o1h2 = \nah3o2h4 = \nprint ('O-H distances: %5.3f %5.3f %5.3f %5.3f' % (ro1h1,ro1h2,ro2h3,ro2h4))\nprint ('H-O-H angles: %6.2f %6.2f' % (ah1o1h2,ah3o2h4))\nprint ('MP2 energy of molecule 1: %18.12f hartrees' % emp2_h2o[0,1])\nprint ('MP2 energy of molecule 2: %18.12f hartrees' % emp2_h2o[0,2])\n\n```\n\nWe can now proceed with the analysis of the SAPT0 energy components for the complex of two water molecules. *Please edit this Markdown cell to write your answers.*\n1. 
Which of the four SAPT terms are long-range, and which are short-range this time?\n2. For the terms that are long-range and decay with $R$ like $R^{-n}$, estimate $n$ by fitting a proper function to the 5 data points with the largest $R$, just like you did for the two interacting helium atoms (using `scipy.optimize.curve_fit`). How would you explain the power $n$ that you obtained for the electrostatic energy?\n\n\n\n```python\n#COMPLETE the optimizations below. \nnelst_h2o = \nnind_h2o = \nndisp_h2o = \nprint (\"Optimal electrostatics exponent:\",nelst_h2o[0][0])\nprint (\"Optimal induction exponent:\",nind_h2o[0][0])\nprint (\"Optimal dispersion exponent:\",ndisp_h2o[0][0])\n\n```\n\nThe water molecules are polar - each one has a nonzero dipole moment, and at large distances we expect the electrostatic energy to be dominated by the dipole-dipole interaction (at short distances, when the orbitals of two molecules overlap, the multipole approximation is not valid and the electrostatic energy contains the short-range *charge penetration* effects). Let's check if this is indeed the case. In preparation for this, we first find the HF dipole moment vector for each water molecule. \n\n\n\n```python\nwaterA = psi4.geometry(\"\"\"\nO 0.116724185090 1.383860971547 0.000000000000\nH 0.116724185090 0.423860971547 0.000000000000\nH -0.812697549673 1.624225775439 0.000000000000\nunits angstrom\nnoreorient\nnocom\nsymmetry c1\n\"\"\")\n\ncomA = waterA.center_of_mass()\ncomA = np.array([comA[0],comA[1],comA[2]])\nE, wfn = psi4.energy('HF',return_wfn=True)\ndipoleA = np.array([psi4.variable('SCF DIPOLE X'),psi4.variable('SCF DIPOLE Y'),\n psi4.variable('SCF DIPOLE Z')])*0.393456 # conversion from Debye to a.u.\npsi4.core.clean()\nprint(\"COM A in a.u.\",comA)\nprint(\"Dipole A in a.u.\",dipoleA)\n\nwaterB = psi4.geometry(\"\"\"\nO -0.118596320329 -1.305864713301 0.000000000000\nH 0.362842754701 -1.642971982825 -0.759061990794\nH 0.362842754701 -1.642971982825 0.759061990794\nunits angstrom\nnoreorient\nnocom\nsymmetry c1\n\"\"\")\n\ncomB = waterB.center_of_mass()\ncomB = np.array([comB[0],comB[1],comB[2]])\nE, wfn = psi4.energy('HF',return_wfn=True)\ndipoleB = np.array([psi4.variable('SCF DIPOLE X'),psi4.variable('SCF DIPOLE Y'),\n psi4.variable('SCF DIPOLE Z')])*0.393456 # conversion from Debye to a.u.\npsi4.core.clean()\nprint(\"COM B in a.u.\",comB)\nprint(\"Dipole B in a.u.\",dipoleB)\n\ncomA_to_comB = comB - comA\nprint(\"Vector from COMA to COMB:\",comA_to_comB)\n\n\n```\n\nOur goal now is to plot the electrostatic energy from SAPT against the interaction energy between two dipoles $\\boldsymbol{\\mu_A}$ and $\\boldsymbol{\\mu_B}$:\n\n\\begin{equation}\nE_{\\rm dipole-dipole}=\\frac{\\boldsymbol{\\mu_A}\\cdot\\boldsymbol{\\mu_B}}{R^3}-\\frac{3(\\boldsymbol{\\mu_A}\\cdot{\\mathbf R})(\\boldsymbol{\\mu_B}\\cdot{\\mathbf R})}{R^5} \n\\end{equation}\n\nProgram this formula in the `dipole_dipole` function below, taking ${\\mathbf R}$, $\\boldsymbol{\\mu_A}$, and $\\boldsymbol{\\mu_B}$ in atomic units and calculating the dipole-dipole interaction energy, also in atomic units (which we will later convert to kcal/mol). \nWith your new function, we can populate the `edipdip` array of dipole-dipole interaction energies for all intermolecular separations, and plot these energies alongside the actual electrostatic energy data from SAPT. \n\nNote that ${\\mathbf R}$ is the vector from the center of mass of molecule A to the center of mass of molecule B. 
For the shortest intermolecular distance, the atomic coordinates are listed in the code above, so `R = comA_to_comB`. For any other distance, we obtained the geometry of the complex by shifting one water molecule away from the other along the O-O direction, so we need to shift the center of mass of the second molecule in the same way.\n\n\n\n```python\n#the geometries are related to each other by a shift of 1 molecule along the O-O vector:\nOA_to_OB = (np.array([-0.118596320329,-1.305864713301,0.000000000000])-np.array(\n [0.116724185090,1.383860971547,0.000000000000]))/0.529177249\nOA_to_OB_unit = OA_to_OB/np.sqrt(np.sum(OA_to_OB*OA_to_OB))\nprint(\"Vector from OA to OB:\",OA_to_OB,OA_to_OB_unit)\n\ndef dipole_dipole(R,dipA,dipB):\n#COMPLETE the definition of the dipole-dipole energy. All your data are in atomic units.\n\nedipdip = []\nfor i in range(len(distances_h2o)):\n shiftlength = (distances_h2o[i]-distances_h2o[0])/0.529177249\n R = comA_to_comB + shiftlength*OA_to_OB_unit\n edipdip.append(dipole_dipole(R,dipoleA,dipoleB)*627.509)\n\nedipdip = np.array(edipdip)\nprint (edipdip)\n\nplt.close()\nplt.ylim(-10.0,10.0)\nplt.plot(distances_h2o,eelst_h2o,'r+',linestyle='-',label='SAPT0 elst')\nplt.plot(distances_h2o,edipdip,'bo',linestyle='-',label='dipole-dipole')\nplt.hlines(0.0,2.5,9.0)\nplt.legend(loc='upper right')\nplt.show()\n\n```\n\nWe clearly have a favorable dipole-dipole interaction, which results in negative (attractive) electrostatic energy. This is how the origins of hydrogen bonding might have been explained to you in your freshman chemistry class: two polar molecules have nonzero dipole moments and the dipole-dipole interaction can be strongly attractive. However, your SAPT components show you that it's not a complete explanation: the two water molecules are bound not only by electrostatics, but by two other SAPT components as well. Can you quantify the relative (percentage) contributions of electrostatics, induction, and dispersion to the overall interaction energy at the van der Waals minimum? This minimum is the second point on your curve, so, for example, `esapt_h2o[1]` is the total SAPT interaction energy.\n\n\n\n```python\n#now let's examine the SAPT0 contributions at the van der Waals minimum, which is the 2nd point on the curve\n#COMPLETE the calculation of percentages.\npercent_elst = \npercent_ind = \npercent_disp = \nprint ('At the van der Waals minimum, electrostatics, induction, and dispersion')\nprint (' contribute %5.1f, %5.1f, and %5.1f percent of interaction energy, respectively.'\n % (percent_elst,percent_ind,percent_disp))\n\n\n```\n\nYou have now completed some SAPT calculations and analyzed the meaning of different corrections. Can you complete the table below to indicate whether different SAPT corrections can be positive (repulsive), negative (attractive), or both, and why?\n\n\n\n```python\n#Type in your answers below.\n#COMPLETE this table. Do not remove the comment (#) signs.\n#\n#SAPT term Positive/Negative/Both? Why?\n#Electrostatics\n#Exchange\n#Induction\n#Dispersion\n\n```\n\n# Ternary diagrams\n\nHigher levels of SAPT calculations can give very accurate interaction energies, but are more computationally expensive than SAPT0. SAPT0 is normally sufficient for qualitative accuracy and basic understanding of the interaction physics. 
One important use of SAPT0 is to *classify different intermolecular complexes according to the type of interaction*, and a nice way to display the results of this classification is provided by a *ternary diagram*.\n\nThe relative importance of attractive electrostatic, induction, and dispersion contributions to a SAPT interaction energy for a particular structure can be marked as a point inside a triangle, with the distance to each vertex of the triangle depicting the relative contribution of a given type (the more dominant a given contribution is, the closer the point lies to the corresponding vertex). If the electrostatic contribution is repulsive, we can display the relative magnitudes of electrostatic, induction, and dispersion terms in the same way, but we need the second triangle (the left one). The combination of two triangles forms the complete diagram and we can mark lots of different points corresponding to different complexes and geometries.\n\nLet's now mark all your systems on a ternary diagram, in blue for two helium atoms and in red for two water molecules. What kinds of interaction are represented? Compare your diagram with the one pictured below, prepared for 2510 different geometries of the complex of two water molecules, with all kinds of intermolecular distances and orientations (this graph is taken from [Smith:2016]). What conclusions can you draw about the interaction of two water molecules at *any* orientation?\n\n\n\n```python\ndef ternary(sapt, title='', labeled=True, view=True, saveas=None, relpath=False, graphicsformat=['pdf']):\n#Adapted from the QCDB ternary diagram code by Lori Burns\n \"\"\"Takes array of arrays *sapt* in form [elst, indc, disp] and builds formatted\n two-triangle ternary diagrams. Either fully-readable or dotsonly depending\n on *labeled*.\n \"\"\"\n from matplotlib.path import Path\n import matplotlib.patches as patches\n\n # initialize plot\n plt.close()\n fig, ax = plt.subplots(figsize=(6, 3.6))\n plt.xlim([-0.75, 1.25])\n plt.ylim([-0.18, 1.02])\n plt.xticks([])\n plt.yticks([])\n ax.set_aspect('equal')\n\n if labeled:\n # form and color ternary triangles\n codes = [Path.MOVETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY]\n pathPos = Path([(0., 0.), (1., 0.), (0.5, 0.866), (0., 0.)], codes)\n pathNeg = Path([(0., 0.), (-0.5, 0.866), (0.5, 0.866), (0., 0.)], codes)\n ax.add_patch(patches.PathPatch(pathPos, facecolor='white', lw=2))\n ax.add_patch(patches.PathPatch(pathNeg, facecolor='#fff5ee', lw=2))\n\n # label corners\n ax.text(1.0,\n -0.15,\n u'Elst (\u2212)',\n verticalalignment='bottom',\n horizontalalignment='center',\n family='Times New Roman',\n weight='bold',\n fontsize=18)\n ax.text(0.5,\n 0.9,\n u'Ind (\u2212)',\n verticalalignment='bottom',\n horizontalalignment='center',\n family='Times New Roman',\n weight='bold',\n fontsize=18)\n ax.text(0.0,\n -0.15,\n u'Disp (\u2212)',\n verticalalignment='bottom',\n horizontalalignment='center',\n family='Times New Roman',\n weight='bold',\n fontsize=18)\n ax.text(-0.5,\n 0.9,\n u'Elst (+)',\n verticalalignment='bottom',\n horizontalalignment='center',\n family='Times New Roman',\n weight='bold',\n fontsize=18)\n\n xvals = []\n yvals = []\n cvals = []\n geomindex = 0 # first 11 points are He-He, the next 10 are H2O-H2O\n for sys in sapt:\n [elst, indc, disp] = sys\n\n # calc ternary posn and color\n Ftop = abs(indc) / (abs(elst) + abs(indc) + abs(disp))\n Fright = abs(elst) / (abs(elst) + abs(indc) + abs(disp))\n xdot = 0.5 * Ftop + Fright\n ydot = 0.866 * Ftop\n if geomindex <= 10:\n cdot 
= 'b'\n else:\n cdot = 'r'\n if elst > 0.:\n xdot = 0.5 * (Ftop - Fright)\n ydot = 0.866 * (Ftop + Fright)\n #print elst, indc, disp, '', xdot, ydot, cdot\n\n xvals.append(xdot)\n yvals.append(ydot)\n cvals.append(cdot)\n geomindex += 1\n\n sc = ax.scatter(xvals, yvals, c=cvals, s=15, marker=\"o\", \n edgecolor='none', vmin=0, vmax=1, zorder=10)\n\n # remove figure outline\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n ax.spines['bottom'].set_visible(False)\n ax.spines['left'].set_visible(False)\n\n # save and show\n plt.show()\n return 1\n\nsapt = []\nfor i in range(11):\n sapt.append([eelst[i],eind[i],edisp[i]])\nfor i in range(10):\n sapt.append([eelst_h2o[i],eind_h2o[i],edisp_h2o[i]])\nidummy = ternary(sapt)\nfrom IPython.display import Image\nImage(filename='water2510.png')\n\n```\n\n# Some further reading:\n\n1. How is the calculation of SAPT corrections actually programmed? The Psi4NumPy projects has some tutorials on this topic: https://github.com/psi4/psi4numpy/tree/master/Tutorials/07_Symmetry_Adapted_Perturbation_Theory \n2. A classic (but recently updated) book on the theory of interactions between molecules: \"The Theory of Intermolecular Forces\"\n\t> [[Stone:2013](https://www.worldcat.org/title/theory-of-intermolecular-forces/oclc/915959704)] A. Stone, Oxford University Press, 2013\n3. The classic review paper on SAPT: \"Perturbation Theory Approach to Intermolecular Potential Energy Surfaces of van der Waals Complexes\"\n\t> [[Jeziorski:1994](http://pubs.acs.org/doi/abs/10.1021/cr00031a008)] B. Jeziorski, R. Moszynski, and K. Szalewicz, *Chem. Rev.* **94**, 1887 (1994)\n4. A brand new (as of 2020) review of SAPT, describing new developments and inprovements to the theory: \"Recent developments in symmetry\u2010adapted perturbation theory\"\n\t> [[Patkowski:2020](https://onlinelibrary.wiley.com/doi/abs/10.1002/wcms.1452)] K. Patkowski, *WIREs Comput. Mol. Sci.* **10**, e1452 (2020)\n5. The definitions and practical comparison of different levels of SAPT: \"Levels of symmetry adapted perturbation theory (SAPT). I. Efficiency and performance for interaction energies\"\n\t> [[Parker:2014](http://aip.scitation.org/doi/10.1063/1.4867135)] T. M. Parker, L. A. Burns, R. M. Parrish, A. G. Ryno, and C. D. Sherrill, *J. Chem. Phys.* **140**, 094106 (2014)\n6. An example study making use of the SAPT0 classification of interaction types, with lots of ternary diagrams in the paper and in the supporting information: \"Revised Damping Parameters for the D3 Dispersion Correction to Density Functional Theory\"\n\t> [[Smith:2016](https://pubs.acs.org/doi/abs/10.1021/acs.jpclett.6b00780)] D. G. A. Smith, L. A. Burns, K. Patkowski, and C. D. Sherrill, *J. Phys. Chem. 
Lett.* **7**, 2197 (2016).\n\n", "meta": {"hexsha": "b6741e2a13c223f230d6345905e5b822ecf2dd4f", "size": 42401, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Example/Psi4Education/sapt0_student.ipynb", "max_stars_repo_name": "yychuang/109-2-compchem-lite", "max_stars_repo_head_hexsha": "cbf17e542f9447e89fb48de1b28759419ffff956", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 23, "max_stars_repo_stars_event_min_datetime": "2019-12-19T22:56:32.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-01T00:58:56.000Z", "max_issues_repo_path": "Example/Psi4Education/sapt0_student.ipynb", "max_issues_repo_name": "yychuang/109-2-compchem-lite", "max_issues_repo_head_hexsha": "cbf17e542f9447e89fb48de1b28759419ffff956", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-03-22T14:40:22.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-26T17:27:01.000Z", "max_forks_repo_path": "Example/Psi4Education/sapt0_student.ipynb", "max_forks_repo_name": "yychuang/109-2-compchem-lite", "max_forks_repo_head_hexsha": "cbf17e542f9447e89fb48de1b28759419ffff956", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 20, "max_forks_repo_forks_event_min_datetime": "2019-11-17T15:45:01.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-24T00:05:55.000Z", "avg_line_length": 54.0140127389, "max_line_length": 1121, "alphanum_fraction": 0.6310700219, "converted": true, "num_tokens": 9484, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.46879062662624377, "lm_q2_score": 0.20434189993684584, "lm_q1q2_score": 0.09579356731739117}} {"text": "# 13 Euler-Maclaurin\u306e\u548c\u516c\u5f0f\n\n\u9ed2\u6728\u7384\n\n2018-07-04\uff5e2019-04-03\n\n* Copyright 2018 Gen Kuroki\n* License: MIT https://opensource.org/licenses/MIT\n* Repository: https://github.com/genkuroki/Calculus\n\n\u3053\u306e\u30d5\u30a1\u30a4\u30eb\u306f\u6b21\u306e\u5834\u6240\u3067\u304d\u308c\u3044\u306b\u95b2\u89a7\u3067\u304d\u308b:\n\n* http://nbviewer.jupyter.org/github/genkuroki/Calculus/blob/master/13%20Euler-Maclaurin%20summation%20formula.ipynb\n\n* https://genkuroki.github.io/documents/Calculus/13%20Euler-Maclaurin%20summation%20formula.pdf\n\n\u3053\u306e\u30d5\u30a1\u30a4\u30eb\u306f Julia Box \u3067\u5229\u7528\u3067\u304d\u308b.\n\n\u81ea\u5206\u306e\u30d1\u30bd\u30b3\u30f3\u306bJulia\u8a00\u8a9e\u3092\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb\u3057\u305f\u3044\u5834\u5408\u306b\u306f\n\n* [Windows\u3078\u306eJulia\u8a00\u8a9e\u306e\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb](http://nbviewer.jupyter.org/gist/genkuroki/81de23edcae631a995e19a2ecf946a4f)\n\n* [Julia v1.1.0 \u306e Windows 8.1 \u3078\u306e\u30a4\u30f3\u30b9\u30c8\u30fc\u30eb](https://nbviewer.jupyter.org/github/genkuroki/msfd28/blob/master/install.ipynb)\n\n\u3092\u53c2\u7167\u305b\u3088. \u524d\u8005\u306f\u53e4\u304f, \u5f8c\u8005\u306e\u65b9\u304c\u65b0\u3057\u3044.\n\n\u8ad6\u7406\u7684\u306b\u5b8c\u74a7\u306a\u8aac\u660e\u3092\u3059\u308b\u3064\u3082\u308a\u306f\u306a\u3044. 
\u7d30\u90e8\u306e\u3044\u3044\u52a0\u6e1b\u306a\u90e8\u5206\u306f\u81ea\u5206\u3067\u8a02\u6b63\u30fb\u4fee\u6b63\u305b\u3088.\n\n$\n\\newcommand\\eps{\\varepsilon}\n\\newcommand\\ds{\\displaystyle}\n\\newcommand\\Z{{\\mathbb Z}}\n\\newcommand\\R{{\\mathbb R}}\n\\newcommand\\C{{\\mathbb C}}\n\\newcommand\\QED{\\text{\u25a1}}\n\\newcommand\\root{\\sqrt}\n\\newcommand\\bra{\\langle}\n\\newcommand\\ket{\\rangle}\n\\newcommand\\d{\\partial}\n\\newcommand\\sech{\\operatorname{sech}}\n\\newcommand\\cosec{\\operatorname{cosec}}\n\\newcommand\\sign{\\operatorname{sign}}\n\\newcommand\\real{\\operatorname{Re}}\n\\newcommand\\imag{\\operatorname{Im}}\n$\n\n

\u76ee\u6b21

\n\n\n\n```julia\nusing Base.MathConstants\nusing Base64\nusing Printf\nusing Statistics\nconst e = \u212f\nendof(a) = lastindex(a)\nlinspace(start, stop, length) = range(start, stop, length=length)\n\nusing Plots\ngr(); ENV[\"PLOTS_TEST\"] = \"true\"\n#clibrary(:colorcet)\nclibrary(:misc)\n\nfunction pngplot(P...; kwargs...)\n sleep(0.1)\n pngfile = tempname() * \".png\"\n savefig(plot(P...; kwargs...), pngfile)\n showimg(\"image/png\", pngfile)\nend\npngplot(; kwargs...) = pngplot(plot!(; kwargs...))\n\nshowimg(mime, fn) = open(fn) do f\n base64 = base64encode(f)\n display(\"text/html\", \"\"\"\"\"\")\nend\n\nusing SymPy\n#sympy.init_printing(order=\"lex\") # default\n#sympy.init_printing(order=\"rev-lex\")\n\nusing SpecialFunctions\nusing QuadGK\n```\n\n## Bernoulli\u591a\u9805\u5f0f\n\n### Bernoulli\u591a\u9805\u5f0f\u306e\u5b9a\u7fa9\n\n**\u5b9a\u7fa9(Bernoulli\u591a\u9805\u5f0f):** Bernoulli\u591a\u9805\u5f0f** $B_n(x)$ ($n=0,1,2,\\ldots$)\u3092\n\n$$\n\\frac{ze^{zx}}{e^z-1} = \\sum_{n=0}^\\infty \\frac{B_n(x)}{n!}z^n\n$$\n\n\u306b\u3088\u3063\u3066\u5b9a\u7fa9\u3059\u308b. $\\QED$\n\n\n\n### Bernoulli\u591a\u9805\u5f0f\u306e\u57fa\u672c\u6027\u8cea\n\n**\u4e00\u822c\u5316Bernoulli\u591a\u9805\u5f0f\u306e\u57fa\u672c\u6027\u8cea:** Bernoulli\u591a\u9805\u5f0f $B_n(x)$ \u306f\u4ee5\u4e0b\u306e\u6027\u8cea\u3092\u6e80\u305f\u3057\u3066\u3044\u308b:\n\n(1) $B_0(x)=1$.\n\n(2) $\\ds\\int_0^1 B_n(x)\\,dx = \\delta_{n,0}$.\n\n(3) $\\ds B_n(x+h) = \\sum_{k=0}^n\\binom{n}{k}B_{n-k}(x)h^k = \n\\sum_{k=0}^n \\binom{n}{k} B_k(x) h^{n-k}$.\n\n(4) $B_n'(x)=nB_{n-1}(x)$.\n\n(5) $\\ds B_n(x+1)=B_n(x)+nx^{n-1}$.\n\n(6) $B_n(1-x)=(-1)^n B_n(x)$.\n\n(7) $B_n(1)=B_n(0)+\\delta_{n,1}$ \u3068\u306a\u308b.\n\n(8) $B_n(0)=1$, $\\ds B_n(0)=-\\frac{1}{2}$ \u3068\u306a, $n$ \u304c3\u4ee5\u4e0a\u306e\u5947\u6570\u306a\u3089\u3070 $B_n(0)=0$ \u3068\u306a\u308b.\n\n**\u8a3c\u660e:** (1) $e^{zx}=1+O(z)$, $\\ds\\frac{e^z-1}{z}=1+O(z)$ \u3088\u308a, $\\ds\\frac{ze^{zx}}{e^z-1}=1+O(z)$ \u306a\u306e\u3067 $B_0(x) = 1$.\n\n(2)\u3092\u793a\u305d\u3046.\n\n$$\n\\begin{aligned}\n&\n\\int_0^1 \\frac{ze^{zx}}{e^z-1}\\,dx = \\frac{z}{e^z-1}\\int_0^1 e^{zx}\\,dx = \n\\frac{z}{e^z-1}\\frac{e^z-1}{z} = 1, \n\\\\ &\n\\int_0^1\\frac{ze^{zx}}{e^z-1}\\,dx = \\sum_{n=0}^\\infty\\frac{z^n}{n!}\\int_0^1 B_n(x)\\,dx\n\\end{aligned}\n$$\n\n\u306a\u306e\u3067, \u3053\u308c\u3089\u3092\u6bd4\u8f03\u3057\u3066 $\\ds\\int_0^1 B_n(x)\\,dx = \\delta_{n,0}$.\n\n(3) \u4e8c\u9805\u5b9a\u7406\u3088\u308a,\n\n$$\n\\int_0^1 (x+y)^n\\,dy = \n\\sum_{k=0}^n \\binom{n}{k} x^{n-k} \\int_0^1 y^k\\,dy.\n$$\n\n\u3086\u3048\u306b, $x$ \u306e\u51fd\u6570\u3092 $x$ \u306e\u51fd\u6570\u306b\u79fb\u3059\u7dda\u5f62\u5199\u50cf(\u524d\u65b9\u79fb\u52d5\u5e73\u5747)\n\n$$\nf(x)\\mapsto \\int_0^1 f(x+y)\\,dy\n$$\n\n\u306f\u591a\u9805\u5f0f\u3092\u591a\u9805\u5f0f\u306b\u79fb\u3057, \u6700\u9ad8\u6b21\u306e\u4fc2\u6570\u304c1\u306e\u591a\u9805\u5f0f\u3092\u6700\u9ad8\u6b21\u306e\u4fc2\u6570\u304c1\u306e\u540c\u6b21\u306e\u591a\u9805\u5f0f\u306b\u79fb\u3059. \u3053\u308c\u3088\u308a, \u7dda\u5f62\u5199\u50cf $\\ds f(x)\\mapsto \\int_0^1 f(x+y)\\,dy$ \u306f\u591a\u9805\u5f0f\u3069\u3046\u3057\u306e\u4e00\u5bfe\u4e00\u5bfe\u5fdc\u3092\u4e0e\u3048\u308b\u7dda\u5f62\u5199\u50cf\u306b\u306a\u3063\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b. 
\u305d\u3057\u3066,\n\n$$\n\\begin{aligned}\n&\n\\int_0^1\\frac{ze^{z(x+y)}}{e^z-1}\\,dx = \n\\sum_{n=0}^\\infty\\frac{\\int_0^1 B_n(x+y)\\,dy}{n!}z^n, \n\\\\ &\n\\int_0^1\\frac{ze^{z(x+y)}}{e^z-1}\\,dx = \n\\frac{ze^{zx}}{e^z-1}\\int_0^1 e^{zy}\\,dy =\n\\frac{ze^{zx}}{e^z-1}\\frac{e^z-1}{z} =\ne^{zx} =\n\\sum_{n=0}^\\infty \\frac{x^n}{n!}z^n\n\\end{aligned}\n$$\n\n\u306a\u306e\u3067, \u3053\u308c\u3089\u3092\u6bd4\u8f03\u3057\u3066,\n\n$$\n\\int_0^1 B_n(x+y)\\,dy = x^n\n$$\n\n\u304c\u6210\u7acb\u3059\u308b\u3053\u3068\u304c\u308f\u304b\u308b. \u3086\u3048\u306b, \n\n$$\n\\int_0^1 B_n(x+h+y)\\,dy = (x+h)^n = \\sum_{k=0}^n \\binom{n}{k}x^{n-k}h^k =\n\\int_0^1 \\sum_{k=0}^n \\binom{n}{k}B_{n-k}(x+y)h^k \\,dy\n$$\n\n\u3088\u308a\n\n$$\nB_n(x+h) = \\sum_{k=0}^n \\binom{n}{k}B_{n-k}(x)h^k.\n$$\n\n(4) \u3059\u3050\u4e0a\u306e\u7b49\u5f0f\u306e\u53f3\u8fba\u306e $h$ \u306e\u4fc2\u6570\u3092\u898b\u308b\u3053\u3068\u306b\u3088\u3063\u3066,\n\n$$\nB_n'(x) = n B_{n-1}(x).\n$$\n\n(5) Bernoulli\u591a\u9805\u5f0f\u306e\u6bcd\u51fd\u6570\u306e $x$ \u306b $x+1$ \u3092\u4ee3\u5165\u3059\u308b\u3068,\n\n$$\n\\frac{ze^{z(x+1)}}{e^z-1} = \\frac{ze^z e^{zx}}{e^z-1} =\n\\frac{z(1+(e^z-1))e^{zx}}{e^z-1} = \\frac{ze^{zx}}{e^z-1} + ze^{zx}\n$$\n\n\u306a\u306e\u3067\u4e21\u8fba\u3092 $z$ \u306b\u3064\u3044\u3066\u5c55\u958b\u3057\u3066\u6bd4\u8f03\u3059\u308c\u3070(5)\u304c\u5f97\u3089\u308c\u308b.\n\n(6) Bernoulli\u591a\u9805\u5f0f\u306e\u6bcd\u51fd\u6570\u306e $x$ \u306b $1-x$ \u3092\u4ee3\u5165\u3059\u308b\u3068,\n\n$$\n\\frac{ze^{z(1-x)}}{e^z-1} = \\frac{ze^z e^{-zx}}{e^z-1} =\n\\frac{ze^{-zx}}{1-e^{-z}} = \\frac{-ze^{-zx}}{e^{-z}-1}\n$$\n\n\u3068Bernoulli\u591a\u9805\u5f0f\u306e\u6bcd\u51fd\u6570\u306e $z$ \u306b $-z$ \u3092\u4ee3\u5165\u3057\u305f\u3082\u306e\u306b\u306a\u308b\u306e\u3067, \u4e21\u8fba\u3092 $z$ \u306b\u3064\u3044\u3066\u5c55\u958b\u3057\u3066\u6bd4\u8f03\u3059\u308c\u3070(5)\u304c\u5f97\u3089\u308c\u308b.\n\n(7) \u4e0a\u306e(2)\u3068(4)\u3088\u308a, $n$ \u304c2\u4ee5\u4e0a\u306e\u3068\u304d,\n\n$$\nB_n(1)-B_n(0) = \\int_0^1 B_n'(x)\\,dx = n\\int_0^1 B_{n-1}(x)\\,dx = n\\delta_{n-1,0} = \\delta_{n,1}\n$$\n\n\u3086\u3048\u306b $n$ \u304c2\u4ee5\u4e0a\u306e\u3068\u304d $B_n(1)=B_n(0)+\\delta_{n,1}$.\n\n(8) \u6b21\u306e\u51fd\u6570\u304c $z$ \u306e\u5076\u51fd\u6570\u3067 $z\\to 0$ \u3067 $1$ \u306b\u306a\u308b\u3053\u3068\u304b\u3089, (6)\u304c\u5f97\u3089\u308c\u308b:\n\n$$\n\\frac{z}{e^z-1} + \\frac{z}{2} = \\frac{z}{2}\\frac{e^{z/2}+e^{-z/2}}{e^{z/2}-e^{-z/2}}.\n\\qquad \\QED\n$$\n\n**\u6ce8\u610f:** $B_n=B_n(0)$ \u306f**Bernoulli\u6570**\u3068\u547c\u3070\u308c\u3066\u3044\u308b. (3)\u3067 $(x,h)$ \u3092 $(0,x)$ \u3067\u7f6e\u304d\u63db\u3048\u308b\u3068, Bernoulli\u591a\u9805\u5f0f\u304cBernoulli\u6570\u3067\u8868\u308f\u3055\u308c\u308b\u3053\u3068\u304c\u308f\u304b\u308b:\n\n$$\nB_n(x) = \\sum_{k=0}^n \\binom{n}{k}B_k x^{n-k}.\n$$\n\n\u4e0a\u306e\u5b9a\u7406\u306e\u6761\u4ef6(1),(2),(4)\u306b\u3088\u3063\u3066Bernoulli\u591a\u9805\u5f0f $B_n(x)$ \u304c $n$ \u306b\u3064\u3044\u3066\u5e30\u7d0d\u7684\u306b\u4e00\u610f\u7684\u306b\u6c7a\u307e\u308b. 
$\\QED$\n\n**\u4f8b:** \n$$\nB_0 = 1, \\quad B_1 = -\\frac{1}{2}, \\quad\nB_2 = \\frac{1}{6}, \\quad B_3=0, \\quad B_4 = -\\frac{1}{30}\n$$\n\n\u306a\u306e\u3067\n\n$$\n\\begin{aligned}\n&\nB_0(x)=1, \\quad \nB_1(x)=x-\\frac{1}{2}, \\quad\nB_2(x)=x^2-x+\\frac{1}{6}, \n\\\\ &\nB_3(x)=x^3-\\frac{3}{2}x^2+\\frac{1}{2}x, \\quad\nB_4(x)=x^4-2x^3+x^2-\\frac{1}{30}.\n\\qquad\\QED\n\\end{aligned}\n$$\n\n\n```julia\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nx = symbols(\"x\", real=true)\n[BernoulliPolynomial(n,x) for n in 0:10]\n```\n\n\n\n\n\\[ \\left[ \\begin{array}{r}1\\\\x - \\frac{1}{2}\\\\x^{2} - x + \\frac{1}{6}\\\\x^{3} - \\frac{3 x^{2}}{2} + \\frac{x}{2}\\\\x^{4} - 2 x^{3} + x^{2} - \\frac{1}{30}\\\\x^{5} - \\frac{5 x^{4}}{2} + \\frac{5 x^{3}}{3} - \\frac{x}{6}\\\\x^{6} - 3 x^{5} + \\frac{5 x^{4}}{2} - \\frac{x^{2}}{2} + \\frac{1}{42}\\\\x^{7} - \\frac{7 x^{6}}{2} + \\frac{7 x^{5}}{2} - \\frac{7 x^{3}}{6} + \\frac{x}{6}\\\\x^{8} - 4 x^{7} + \\frac{14 x^{6}}{3} - \\frac{7 x^{4}}{3} + \\frac{2 x^{2}}{3} - \\frac{1}{30}\\\\x^{9} - \\frac{9 x^{8}}{2} + 6 x^{7} - \\frac{21 x^{5}}{5} + 2 x^{3} - \\frac{3 x}{10}\\\\x^{10} - 5 x^{9} + \\frac{15 x^{8}}{2} - 7 x^{6} + 5 x^{4} - \\frac{3 x^{2}}{2} + \\frac{5}{66}\\end{array} \\right] \\]\n\n\n\n\n```julia\n# (2) \u222b_0^1 B_n(x) dx = \u03b4_{n0}\n\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nx = symbols(\"x\", real=true)\n[integrate(BernoulliPolynomial(n,x), (x,0,1)) for n = 0:10]'\n```\n\n\n\n\n\\[\\left[ \\begin{array}{rrrrrrrrrrr}1&0&0&0&0&0&0&0&0&0&0\\end{array}\\right]\\]\n\n\n\n\n```julia\n# (3) B_n(x+h) = \u03a3_{k=0}^n binom(n,k) B_{n-k}(x) h^k\n\nBernoulliNumber(n) = sympy.bernoulli(n)\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nBinomCoeff(n,k) = sympy.binomial_coefficients_list(n)[k+1]\nx, h = symbols(\"x h\", real=true)\n[BernoulliPolynomial(n,x) == sum(k->BinomCoeff(n,k)*BernoulliNumber(k)*x^(n-k), 0:n) for n in 0:10]'\n```\n\n\n\n\n 1\u00d711 LinearAlgebra.Adjoint{Bool,Array{Bool,1}}:\n true true true true true true true true true true true\n\n\n\n\n```julia\n# (4) B_n'(x) = n B_{n-1}(x)\n\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nx = symbols(\"x\", real=true)\n[diff(BernoulliPolynomial(n,x), x) == n*BernoulliPolynomial(n-1,x) for n = 1:10]'\n```\n\n\n\n\n 1\u00d710 LinearAlgebra.Adjoint{Bool,Array{Bool,1}}:\n true true true true true true true true true true\n\n\n\n\n```julia\n# (5) B_n(x+1) = B_n(x) + n x^{n-1}\n\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nx = symbols(\"x\", real=true)\n[simplify(BernoulliPolynomial(n,x+1) - BernoulliPolynomial(n,x)) for n in 0:10]\n```\n\n\n\n\n\\[ \\left[ \\begin{array}{r}0\\\\1\\\\2 x\\\\3 x^{2}\\\\4 x^{3}\\\\5 x^{4}\\\\6 x^{5}\\\\7 x^{6}\\\\8 x^{7}\\\\9 x^{8}\\\\10 x^{9}\\end{array} \\right] \\]\n\n\n\n\n```julia\n# (6) B_n(1-x) = (-1)^n B_n(x)\n\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nx = symbols(\"x\", real=true)\n[expand(BernoulliPolynomial(n,1-x)) == (-1)^n*BernoulliPolynomial(n,x) for n in 0:10]'\n```\n\n\n\n\n 1\u00d711 LinearAlgebra.Adjoint{Bool,Array{Bool,1}}:\n true true true true true true true true true true true\n\n\n\n\n```julia\n# (7) B_n(1) = B_n(0) + \u03b4_{n1}\n\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nx = symbols(\"x\", real=true)\n[expand(BernoulliPolynomial(n,1)) - BernoulliPolynomial(n,0) for n in 0:10]'\n```\n\n\n\n\n\\[\\left[ \\begin{array}{rrrrrrrrrrr}0&1&0&0&0&0&0&0&0&0&0\\end{array}\\right]\\]\n\n\n\n\n```julia\n# (8) B_n = B_n(0) \u306f n 
\u304c3\u4ee5\u4e0a\u306e\u5947\u6570\u306a\u3089\u30700\u306b\u306a\u308b.\n\nBernoulliNumber(n) = sympy.bernoulli(n)\n[(n, BernoulliNumber(n)) for n in 0:10]\n```\n\n\n\n\n 11-element Array{Tuple{Int64,Sym},1}:\n (0, 1) \n (1, -1/2) \n (2, 1/6) \n (3, 0) \n (4, -1/30)\n (5, 0) \n (6, 1/42) \n (7, 0) \n (8, -1/30)\n (9, 0) \n (10, 5/66)\n\n\n\n### \u3079\u304d\u4e57\u548c\n\n$m$ \u306f\u6b63\u306e\u6574\u6570\u3067\u3042\u308b\u3059\u308b. Bernoulli\u591a\u9805\u5f0f\u306b\u3064\u3044\u3066, \n\n$$\nB_{m+1}(x+1)-B_{m+1}(x) = (m+1)x^m, \n\\quad\\text{i.e.}\\quad\nx^m = \\frac{B_{m+1}(x+1)-B_{m+1}(x)}{m+1}\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b\u306e\u3067, \u3053\u308c\u3092 $x=0,1,\\ldots,n$ \u306b\u3064\u3044\u3066\u8db3\u3057\u4e0a\u3052\u308b\u3068,\n\n$$\n\\sum_{j=1}^n j^m = \\frac{B_{m+1}(n+1)-B_{m+1}}{m+1}.\n\\qquad \\QED\n$$\n\n\n```julia\nPowerSum(m, n) = sum(j->j^m, 1:n)\nBernoulliNumber(n) = sympy.bernoulli(n)\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\nPowerSumFormula(m, n) = (BernoulliPolynomial(m+1,n+1)-BernoulliNumber(m+1))/(m+1)\n[(m, PowerSum(m,10), PowerSumFormula(m, 10)) for m in 1:10]\n```\n\n\n\n\n 10-element Array{Tuple{Int64,Int64,Sym},1}:\n (1, 55, 55) \n (2, 385, 385) \n (3, 3025, 3025) \n (4, 25333, 25333) \n (5, 220825, 220825) \n (6, 1978405, 1978405) \n (7, 18080425, 18080425) \n (8, 167731333, 167731333) \n (9, 1574304985, 1574304985) \n (10, 14914341925, 14914341925)\n\n\n\n### Bernoulli\u6570\u306e\u8a08\u7b97\u6cd5\n\nBernoulli\u6570 $B_n$ \u306f\n\n$$\\displaystyle\n\\frac{z}{e^z-1}=\\sum_{n=1}^\\infty B_n\\frac{z^n}{n!}\n$$\n\n\u3067\u5b9a\u7fa9\u3055\u308c\u308b. \u3057\u304b\u3057, \u3053\u306e\u5c55\u958b\u3092\u76f4\u63a5\u8a08\u7b97\u3059\u308b\u3053\u3068\u306b\u3088\u3063\u3066 Bernoulli \u6570\u3092\u6c42\u3081\u308b\u306e\u306f\u52b9\u7387\u304c\u60aa\u3044.\n\n\u307e\u305a, \u5de6\u8fba\u306e $z\\to 0$ \u306e\u6975\u9650\u3092\u53d6\u308b\u3053\u3068\u306b\u3088\u3063\u3066 $B_0=1$ \u3067\u3042\u308b\u3053\u3068\u306f\u3059\u3050\u306b\u308f\u304b\u308b.\n\n\u6b21\u306b, $n$ \u304c $3$ \u4ee5\u4e0a\u306e\u5947\u6570\u306e\u3068\u304d $B_n=0$ \u3068\u306a\u308b\u3053\u3068\u3092(\u518d\u3073)\u793a\u305d\u3046. \n\n$$\\displaystyle\n\\frac{z}{e^z-1} + \\frac z2\n=\\frac z2\\frac{e^z+1}{e^z-1} \n=\\frac z2\\frac{e^{z/2}+e^{-z/2}}{e^{z/2}-e^{-z/2}}\n$$\n\n\u3088\u308a, \u5de6\u8fba\u306f\u5076\u51fd\u6570\u306b\u306a\u308b\u306e\u3067, \u305d\u306e\u5c55\u958b\u306e\u5947\u6570\u6b21\u306e\u9805\u306f\u6d88\u3048\u308b. 
\u3053\u306e\u3053\u3068\u304b\u3089, $B_1=-1/2$ \u3067\u304b\u3064, $0=B_3=B_5=B_7=\\cdots$ \u3067\u3042\u308b\u3053\u3068\u3082\u308f\u304b\u308b.\n\n$$\\displaystyle\n\\frac{ze^z}{e^z-1}\n=\\sum_{j,k=0}^\\infty \\frac{z^j}{j!}\\frac{B_k z^k}{k!}\n=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^n \\binom{n}{k} B_k\\right)\\frac{z^n}{n!}\n$$\n\n\u3067\u304b\u3064\n\n$$\\displaystyle\n\\frac{ze^z}{e^z-1}\n=\\frac{z}{e^z-1}+z\n=\\sum_{n=0}^\\infty(B_n+\\delta_{n1})\\frac{z^n}{n!}\n$$\n\n\u306a\u306e\u3067, \u3053\u308c\u3089\u3092\u6bd4\u8f03\u3059\u308b\u3068\n\n$$\\displaystyle\n\\sum_{k=0}^{n-1} \\binom{n}{k} B_k = \\delta_{n1}.\n$$\n\n\u3086\u3048\u306b, $n$ \u3092 $n+1$ \u3067\u7f6e\u304d\u63db\u3048, $n\\geqq 1$ \u3068\u3057, $B_n$ \u3092\u4ed6\u3067\u8868\u308f\u3059\u5f0f\u306b\u66f8\u304d\u76f4\u3059\u3068\n\n$$\\displaystyle\nB_n = -\\frac{1}{n+1}\\sum_{k=0}^{n-1}\\binom{n+1}{k}B_k\n\\qquad (n\\geqq 1).\n$$\n\n\u3053\u308c\u3092\u4f7f\u3048\u3070\u5e30\u7d0d\u7684\u306b $B_n$ \u3092\u6c42\u3081\u308b\u3053\u3068\u304c\u3067\u304d\u308b. $B_0=1$, $B_1=-1/2$, $0=B_3=B_5=B_7=\\cdots$ \u3067\u3042\u308b\u3053\u3068\u3092\u4f7f\u3046\u3068, \n\n$$\\displaystyle\nB_{2m} = -\\frac{1}{2m+1}\\left(\n1 -\\frac{2m+1}{2}\n+\\sum_{k=1}^{m-1}\\binom{2m+1}{2k}B_{2k}\n\\right).\n$$\n\n**\u554f\u984c:** \u4e0a\u306e\u65b9\u3067\u306fSymPy\u306b\u304a\u3051\u308bBernoulli\u6570\u306e\u51fd\u6570\u3092\u5229\u7528\u3057\u305f. Bernoulli\u6570\u3092\u8a08\u7b97\u3059\u308b\u305f\u3081\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u3092\u81ea\u5206\u3067\u66f8\u3051. $\\QED$\n\n**\u89e3\u7b54\u4f8b:** \u6b21\u306e\u30bb\u30eb\u306e\u901a\u308a. $\\QED$\n\n\n```julia\n# binomial coefficient: binom(n,k) = n(n-1)\u30fb(n-k+1)/k!\n#\nmydiv(a, b) = a / b\nmydiv(a::Integer, b::Integer) = a \u00f7 b\nfunction binom(n, k)\n k < 0 && return zero(n)\n k == 0 && return one(n)\n b = one(n)\n for j in 1:k\n b = mydiv(b*(n-k+j), j)\n end\n b\nend\n \n@show binom(Rational(big\"100\")/3, 30)\n\n# Bernoulli numbers: B(n) = Bernoulli[n+1] = B_n\n#\nstruct Bernoulli{T}\n B::Array{T,1}\nend\nfunction Bernoulli(; maxn=200)\n B = zeros(Rational{BigInt},maxn+1)\n B[1] = 1 # B_0\n B[2] = -1//2 # B_1\n for n in big\"2\":2:maxn+1\n B[n+1] = -(1//(n+1))*sum(j->binom(n+1,j)*B[j+1], 0:n-1)\n # B_n = -(1/(n+1)) \u03a3_{j=0}^{n-1} binom(n+1,j)*B_j\n end\n Bernoulli(B)\nend\n(B::Bernoulli)(n) = B.B[n+1]\n\nmaxn = 200\n@time B = Bernoulli(maxn=maxn) # B_n \u3092 B_{maxn} \u307e\u3067\u8a08\u7b97\nBB(n) = float(B(n)) # B(n) = B_n \u3067\u3042\u308b. 
BB(n)\u306f\u305d\u306e\u6d6e\u52d5\u5c0f\u6570\u70b9\u7248\n\n# SymPy\u306eBernoulli\u6570\u3068\u6bd4\u8f03\u3057\u3066\u6b63\u3057\u304f\u8a08\u7b97\u3067\u304d\u3066\u3044\u308b\u304b\u3069\u3046\u304b\u3092\u78ba\u8a8d\n#\nBernoulliNumber(n) = sympy.bernoulli(n)\n@show B_eq_B = [B(n) == BernoulliNumber(n) for n in 0:maxn]\nprintln()\n@show all(B_eq_B)\n\nmaxnprint = 30\nprintln()\nfor n in [0; 1; 2:2:maxnprint]\n println(\"B($n) = \", B(n))\nend\nprintln()\nfor n in [0; 1; 2:2:maxnprint]\n println(\"BB($n) = \", BB(n))\nend\n```\n\n binom(Rational(#= In[11]:15 =# @big_str(\"100\")) / 3, 30) = 11240781188817808072725280//984770902183611232881\n 1.913094 seconds (17.53 M allocations: 309.111 MiB, 19.93% gc time)\n B_eq_B = [B(n) == BernoulliNumber(n) for n = 0:maxn] = Bool[true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true, true]\n \n all(B_eq_B) = true\n \n B(0) = 1//1\n B(1) = -1//2\n B(2) = 1//6\n B(4) = -1//30\n B(6) = 1//42\n B(8) = -1//30\n B(10) = 5//66\n B(12) = -691//2730\n B(14) = 7//6\n B(16) = -3617//510\n B(18) = 43867//798\n B(20) = -174611//330\n B(22) = 854513//138\n B(24) = -236364091//2730\n B(26) = 8553103//6\n B(28) = -23749461029//870\n B(30) = 8615841276005//14322\n \n BB(0) = 1.0\n BB(1) = -0.50\n BB(2) = 0.1666666666666666666666666666666666666666666666666666666666666666666666666666674\n BB(4) = -0.03333333333333333333333333333333333333333333333333333333333333333333333333333359\n BB(6) = 0.02380952380952380952380952380952380952380952380952380952380952380952380952380947\n BB(8) = -0.03333333333333333333333333333333333333333333333333333333333333333333333333333359\n BB(10) = 0.0757575757575757575757575757575757575757575757575757575757575757575757575757578\n BB(12) = -0.2531135531135531135531135531135531135531135531135531135531135531135531135531131\n BB(14) = 1.166666666666666666666666666666666666666666666666666666666666666666666666666661\n BB(16) = -7.092156862745098039215686274509803921568627450980392156862745098039215686274513\n BB(18) = 54.97117794486215538847117794486215538847117794486215538847117794486215538847111\n BB(20) = -529.1242424242424242424242424242424242424242424242424242424242424242424242424247\n BB(22) = 6192.123188405797101449275362318840579710144927536231884057971014492753623188388\n BB(24) = -86580.25311355311355311355311355311355311355311355311355311355311355311355311313\n BB(26) = 
1.425517166666666666666666666666666666666666666666666666666666666666666666666661e+06\n BB(28) = -2.729823106781609195402298850574712643678160919540229885057471264367816091954032e+07\n BB(30) = 6.015808739006423683843038681748359167714006423683843038681748359167714006423638e+08\n\n\n### \u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f\u306eFourier\u7d1a\u6570\u5c55\u958b\n\n$\\widetilde{B}_k(x) = B_k(x-\\lfloor x\\rfloor)$ \u3092**\u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f**\u3068\u547c\u3076\u3053\u3068\u306b\u3059\u308b. \u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f\u306f $\\widetilde{B}_k(x+1)=\\widetilde{B}_k(x)$ \u3092\u6e80\u305f\u3057\u3066\u3044\u308b. \n\n\u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f\u306e\u6bcd\u51fd\u6570 $\\ds\\frac{z e^{z(x-\\lfloor x\\rfloor)}}{e^z-1}$ \u306e $x$ \u306e\u51fd\u6570\u3068\u3057\u3066\u306e Fourier\u4fc2\u6570 $a_n(z)$ \u306f\u6b21\u306e\u3088\u3046\u306b\u6c42\u307e\u308b:\n\n$$\n\\frac{e^z-1}{z}a_n(z) = \\int_0^1 e^{zx}e^{-2\\pi inx}\\,dx =\n\\left[\\frac{e^{(z-2\\pi in)x}}{z-2\\pi in}\\right]_{x=0}^{x=1} = \n\\frac{e^z-1}{z-2\\pi in},\n\\qquad\na_n(z) = \\frac{z}{z-2\\pi in}.\n$$\n\n\u3086\u3048\u306b $a_0(z)=1$ \u3067\u3042\u308a, $n\\ne 0$ \u306e\u3068\u304d\n\n$$\na_n(z) = -\\sum_{k=1}^\\infty \\frac{z^k}{(2\\pi in)^k}\n$$\n\n\u3053\u308c\u3088\u308a, $\\widetilde{B}_k(x)$ \u306eFourier\u4fc2\u6570 $a_{k,n}$ \u306f, $a_{0,n}=\\delta_{n,0}$, $a_{k,0}=\\delta_{k,0}$ \u3092\u6e80\u305f\u3057, $k\\ne 0$, $n\\geqq 1$ \u306e\u3068\u304d\n\n$$\na_{k,n} = -\\frac{k!}{(2\\pi in)^k}\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u304c\u308f\u304b\u308b. \u3057\u305f\u304c\u3063\u3066, Fourier\u7d1a\u6570\u8ad6\u3088\u308a, $k=1$ \u306e\u3068\u304d\u306f\u6574\u6570\u3067\u306f\u306a\u3044\u5b9f\u6570 $x$ \u306b\u3064\u3044\u3066, $k\\geqq 2$ \u306e\u5834\u5408\u306b\u306f\u3059\u3079\u3066\u306e\u5b9f\u6570 $x$ \u306b\u3064\u3044\u3066\u6b21\u304c\u6210\u7acb\u3059\u308b\u3053\u3068\u304c\u308f\u304b\u308b:\n\n$$\n\\widetilde{B}_k(x) = B_k(x-\\lfloor x\\rfloor) =\n-k!\\sum_{n\\ne 0} \\frac{e^{2\\pi inx}}{(2\\pi in)^k}.\n$$\n\n\u3059\u306a\u308f\u3061, $k=1,2,3,\\ldots$ \u306b\u3064\u3044\u3066\n\n$$\n\\widetilde{B}_{2k-1}(x) = \n(-1)^k 2(2k-1)!\\sum_{n=1}^\\infty\\frac{\\sin(2\\pi nx)}{(2\\pi n)^{2k-1}}, \n\\qquad\n\\widetilde{B}_{2k}(x) = \n(-1)^{k-1} 2(2k)!\\sum_{n=1}^\\infty \\frac{\\cos(2\\pi nx)}{(2\\pi n)^{2k}}. \n$$\n\n\u3053\u306e\u3053\u3068\u304b\u3089, $k$ \u304c\u5927\u304d\u3044\u3068\u304d(\u5b9f\u969b\u306b\u306f $k=5,6$ \u7a0b\u5ea6\u3067\u3059\u3067\u306b), \u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f\u306f $n=1$ \u306e\u9805\u3060\u3051\u3067\n\n$$\n\\widetilde{B}_{2k-1}(x) \\approx\n(-1)^k 2(2k-1)!\\frac{\\sin(2\\pi x)}{(2\\pi)^{2k-1}}, \n\\qquad\n\\widetilde{B}_{2k}(x) \\approx\n(-1)^{k-1} 2(2k)!\\frac{\\cos(2\\pi x)}{(2\\pi)^{2k}}\n$$\n\n\u3068\u8fd1\u4f3c\u3067\u304d\u308b\u3053\u3068\u304c\u308f\u304b\u308b. 
\u9069\u5f53\u306b\u30b9\u30b1\u30fc\u30eb\u3059\u308c\u3070\u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f\u306f $k\\to\\infty$ \u3067\u4e09\u89d2\u51fd\u6570\u306b\u53ce\u675f\u3059\u308b.\n\n\n```julia\nBBB = Bernoulli(Float64.(B.B)) # Float64 Bernoulli numbers\nBP(k,x) = sum(j->binom(k,j)*BBB(k-j)*x^j, 0:k) # Float64 Bernoulli polynomial\nPBP(k,x) = BP(k, x - floor(x)) # periodic Bernoulli polynomial\n\n# partial sum of Fourier series of periodic Bernoulli polynomial\nfunction PSFS(k, N, x)\n k == 0 && return zero(x)\n if isodd(k)\n return (-1)^((k+1)\u00f72)*2*factorial(k)*sum(n->sin(2\u03c0*n*x)/(2\u03c0*n)^k, 1:N)\n else\n return (-1)^(k\u00f72-1)*2*factorial(k)*sum(n->cos(2\u03c0*n*x)/(2\u03c0*n)^k, 1:N)\n end\nend\n\nPP = []\nx = -1.0:0.001:0.999\nfor (k,N) in [(1,20), (2,10), (3,3), (4,2), (5,1), (6,1)]\n y = PBP.(k,x)\n z = PSFS.(k, N, x)\n ymin = 1.2*minimum(y)\n ymax = 2.7*maximum(y)\n P = plot(legend=:topleft, size=(400, 250), ylim=(ymin, ymax))\n plot!(x, y, label=\"B_$k(x-[x])\")\n plot!(x, z, label=\"partial sum of Fourier series (N=$N)\")\n push!(PP, P)\nend\n\nplot(PP[1:2]..., size=(750, 280))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[3:4]..., size=(750, 280))\n```\n\n\n\n\n \n\n \n\n\n\n\n```julia\nplot(PP[5:6]..., size=(750, 280))\n```\n\n\n\n\n \n\n \n\n\n\n## Euler-Maclaurin\u306e\u548c\u516c\u5f0f\n\n### Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e\u5c0e\u51fa\n\nBernoulli\u591a\u9805\u5f0f $B_n(x)$ \u3068Bernoulli\u6570 $B_n$ \u306b\u3064\u3044\u3066\n\n$$\n\\begin{aligned}\n&\nB_0(x) = 1, \\quad \\frac{d}{dx}\\frac{B_n(x)}{n!} = \\frac{B_{n-1}(x)}{(n-1)!}, \n\\\\ &\nB_1(0)=-\\frac{1}{2}, \\quad B_1(1)=\\frac{1}{2},\n\\\\ &\nB_n(1)=B_n(0)=B_n \\quad (n=0,2,3,4,5,\\ldots) \n\\\\ &\nB_{2j+1} = 0 \\quad (j=1,2,3,\\ldots)\n\\end{aligned}\n$$\n\n\u304c\u6210\u7acb\u3057\u3066\u3044\u308b. \u4ee5\u4e0b\u3067\u306f\u3057\u3070\u3089\u304f\u306e\u3042\u3044\u3060\u3053\u308c\u3089\u306e\u6761\u4ef6\u3057\u304b\u4f7f\u308f\u306a\u3044.\n\n\u90e8\u5206\u7a4d\u5206\u3092\u7e70\u308a\u8fd4\u3059\u3053\u3068\u306b\u3088\u3063\u3066,\n\n$$\n\\begin{aligned}\n\\int_0^1 f(x)\\,dx &= \\int_0^1 B_0(x)f(x)\\,dx \n\\\\ &=\n[B_1(x)f(x)]_0^1 - \\int_0^1 B_1(x)f'(x)\\,dx \n\\\\ &=\n[B_1(x)f(x)]_0^1 - \\frac{1}{2}[B_2(x)f'(x)]_0^1 + \\int_0^1 \\frac{B_2(x)}{2}f''(x)\\,dx \n\\\\ &=\n[B_1(x)f(x)]_0^1 - \\frac{1}{2}[B_2(x)f'(x)]_0^1 + \\frac{1}{3!}[B_3(x)f''(x)]_0^1 - \\int_0^1 \\frac{B_3(x)}{3!}f'''(x)\\,dx\n\\\\ &=\n\\cdots\\cdots\\cdots\\cdots\\cdots\n\\\\ &=\n\\sum_{k=1}^n \\frac{(-1)^{k-1}}{k!}\\left[B_k(x)f^{(k-1)}(x)\\right]_0^1 + \n(-1)^n\\int_0^1 \\frac{B_n(x)}{n!}f^{(n)}(x)\\,dx\n\\\\ &=\n\\frac{f(0)+f(1)}{2} + \\sum_{k=2}^n(-1)^{k-1}\\frac{B_k}{k!} (f^{(k-1)}(1)-f^{(k-1)}(0)) + \n(-1)^n\\int_0^1 \\frac{B_n(x)}{n!}f^{(n)}(x)\\,dx.\n\\end{aligned}\n$$\n\n\u5b9f\u6570 $x$ \u306b\u5bfe\u3057\u3066, $x$ \u4ee5\u4e0b\u306e\u6700\u5927\u306e\u6574\u6570\u3092 $\\lfloor x\\rfloor$ \u3068\u66f8\u304f. \u3053\u306e\u3068\u304d, $x-\\lfloor x\\rfloor$ \u306f $x$ \u306e\u300c\u5c0f\u6570\u90e8\u5206\u300d\u306b\u306a\u308b. 
\u3053\u306e\u3088\u3046\u306b\u8a18\u53f7\u3092\u6e96\u5099\u3057\u3066\u304a\u304f\u3068, \u6574\u6570 $j$ \u306b\u5bfe\u3057\u3066, \n\n$$\n\\begin{aligned}\n\\int_j^{j+1} f(x)\\,dx &= \\int_0^1 f(x+j)\\,dx\n\\\\ &=\n\\frac{f(j)+f(j+1)}{2} + \\sum_{k=2}^n (-1)^{k-1} \\frac{B_k}{k!} (f^{(k-1)}(j+1)-f^{(k-1)}(j)) + \n(-1)^n\\int_0^1 \\frac{B_n(x)}{n!}f^{(n)}(x+j)\\,dx\n\\\\ &=\n\\frac{f(j)+f(j+1)}{2} + \\sum_{k=2}^n (-1)^{k-1}\\frac{B_k}{k!} (f^{(k-1)}(j+1)-f^{(k-1)}(j)) + \n(-1)^n\\int_j^{j+1} \\frac{B_n(x-\\lfloor x\\rfloor)}{n!}f^{(n)}(x)\\,dx.\n\\end{aligned}\n$$\n\n$af(j), a+1:b-1)\n - sum(k -> (\n BernoulliNumber(k)/factorial(Sym(k))\n * (diff(f(x), x, k-1)(x=>b) - diff(f(x), x, k-1)(x=>a))\n ), 2:n)\n )\nend\n\nfunction EulerMaclaurinRemainder(f, a, b, n)\n x = symbols(\"x\", real=true)\n g = diff(f(x), x, n)\n (-1)^(n-1) * sum(k -> (\n integrate(BernoulliPolynomial(n,x)*g(x=>x+k), (x,0,1))\n ), a:b-1)/factorial(Sym(n))\nend\n\nx = symbols(\"x\", real=true)\n\n[integrate(x^m, (x, 0, 10)) for m in 7:15] |> display\n\n[\n EulerMaclaurinIntegral(x->x^m, 0, 10, 5) - EulerMaclaurinRemainder(x->x^m, 0, 10, 5)\n for m in 7:15\n] |> display\n```\n\n\n\\[ \\left[ \\begin{array}{r}12500000\\\\\\frac{1000000000}{9}\\\\1000000000\\\\\\frac{100000000000}{11}\\\\\\frac{250000000000}{3}\\\\\\frac{10000000000000}{13}\\\\\\frac{50000000000000}{7}\\\\\\frac{200000000000000}{3}\\\\625000000000000\\end{array} \\right] \\]\n\n\n\n\\[ \\left[ \\begin{array}{r}12500000\\\\\\frac{1000000000}{9}\\\\1000000000\\\\\\frac{100000000000}{11}\\\\\\frac{250000000000}{3}\\\\\\frac{10000000000000}{13}\\\\\\frac{50000000000000}{7}\\\\\\frac{200000000000000}{3}\\\\625000000000000\\end{array} \\right] \\]\n\n\n**Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e\u89e3\u91c82:** Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306f\u6b21\u306e\u3088\u3046\u306b\u66f8\u304d\u76f4\u3055\u308c\u308b:\n\n$$\n\\begin{aligned}\n&\n\\sum_{j=a}^b f(j) = \n\\int_a^b f(x)\\,dx + \\frac{f(a)+f(b)}{2} + \n\\sum_{1\\leqq i\\leqq n/2} \\frac{B_{2i}}{(2i)!} (f^{(2i-1)}(b)-f^{(2i-1)}(a)) + R_n,\n\\\\ &\nR_n = (-1)^{n-1}\\int_a^b \\frac{B_n(x-\\lfloor x\\rfloor)}{n!}f^{(n)}(x)\\,dx\n\\end{aligned}\n$$\n\n\u3053\u308c\u306f $n$ \u304c3\u4ee5\u4e0a\u306e\u5947\u6570\u306e\u3068\u304d $B_n=0$ \u3068\u306a\u308b\u3053\u3068\u3092\u4f7f\u3046\u3068\u6b21\u306e\u3088\u3046\u306b\u66f8\u304d\u76f4\u3055\u308c\u308b:\n\n$$\n\\begin{aligned}\n&\n\\sum_{j=a}^b f(j) = \n\\int_a^b f(x)\\,dx + \\frac{f(a)+f(b)}{2} + \n\\sum_{k=2}^n \\frac{B_k}{k!} (f^{(k-1)}(b)-f^{(k-1)}(a)) + R_n,\n\\\\ &\nR_n = (-1)^{n-1}\\int_a^b \\frac{B_n(x-\\lfloor x\\rfloor)}{n!}f^{(n)}(x)\\,dx\n\\end{aligned}\n$$\n\n\u3053\u306e\u7b49\u5f0f\u306f\u51fd\u6570 $f$ \u306e\u6574\u6570\u306b\u304a\u3051\u308b\u5024\u306e\u548c $\\ds\\sum_{j=a}^b f(j)$ \u3092\u7a4d\u5206 $\\ds\\int_a^b f(x)\\,dx$ \u3067\u8fd1\u4f3c\u3057\u305f\u3068\u304d\u306e\u8aa4\u5dee\u304c\n\n$$\n\\frac{f(a)+f(b)}{2} + \n\\sum_{1\\leqq i\\leqq n/2} \\frac{B_{2i}}{(2i)!} (f^{(2i-1)}(b)-f^{(2i-1)}(a)) + R_n\n$$\n\n\u306b\u306a\u3063\u3066\u3044\u308b\u3053\u3068\u3092\u610f\u5473\u3057\u3066\u3044\u308b. 
\u4f8b\u3048\u3070, $n=1$ \u306e\u5834\u5408\u306b\u306f, $\\ds B_1(x)=x-\\frac{1}{2}$ \u306a\u306e\u3067,\n\n$$\n\\sum_{j=a}^b f(j) = \n\\int_a^b f(x)\\,dx + \\frac{f(a)+f(b)}{2} + \n\\int_a^b\\left(x-\\lfloor x\\rfloor-\\frac{1}{2}\\right)f'(x)\\,dx.\n$$\n\n$n=2$ \u306e\u5834\u5408\u306b\u306f $\\ds B_2(x)=x^2-x+\\frac{1}{6}$, $\\ds B_2=\\frac{1}{6}$ \u3067\u3042\u308a,\n\n$$\n\\sum_{j=a}^b f(j) = \n\\int_a^b f(x)\\,dx + \\frac{f(a)+f(b)}{2} +\n\\frac{f'(b)-f'(a)}{12} -\n\\int_a^b\\frac{B_2(x-\\lfloor x\\rfloor)}{2}f''(x)\\,dx.\n$$\n\n\u3068\u306a\u308b. $\\QED$\n\n\n```julia\n# \u3059\u3050\u4e0a\u306e\u516c\u5f0f\u3092\u691c\u8a3c\n\nPowerSum(m, n) = sum(j->j^m, 1:n)\nBernoulliNumber(n) = sympy.bernoulli(n)\nBernoulliPolynomial(n,x) = sympy.bernoulli(n,x)\n\nfunction EulerMaclaurinSum(f, a, b, n)\n x = symbols(\"x\", real=true)\n (\n integrate(f(x), (x, a, b))\n + (f(a)+f(b))/Sym(2)\n + sum(k -> (\n BernoulliNumber(k)/factorial(Sym(k))\n * (diff(f(x), x, k-1)(x=>b) - diff(f(x), x, k-1)(x=>a))\n ), 2:n)\n )\nend\n\nfunction EulerMaclaurinRemainder(f, a, b, n)\n x = symbols(\"x\", real=true)\n g = diff(f(x), x, n)\n (-1)^(n-1) * sum(k -> (\n integrate(BernoulliPolynomial(n,x)*g(x=>x+k), (x,0,1))\n ), a:b-1)/factorial(Sym(n))\nend\n\n[PowerSum(m, 10) for m in 1:10] |> display\n\n[EulerMaclaurinSum(x->x^m, 1, 10, m+1) for m in 1:10] |> display\n\n[\n EulerMaclaurinSum(x->x^m, 1, 10, m-1) + EulerMaclaurinRemainder(x->x^m, 1, 10, m-1)\n for m in 3:10\n] |> display\n\n[\n EulerMaclaurinSum(x->x^m, 1, 10, m-2) + EulerMaclaurinRemainder(x->x^m, 1, 10, m-2)\n for m in 4:10\n] |> display\n\n[\n EulerMaclaurinSum(x->x^m, 1, 10, m-3) + EulerMaclaurinRemainder(x->x^m, 1, 10, m-3)\n for m in 5:10\n] |> display\n```\n\n\n 10-element Array{Int64,1}:\n 55\n 385\n 3025\n 25333\n 220825\n 1978405\n 18080425\n 167731333\n 1574304985\n 14914341925\n\n\n\n\\[ \\left[ \\begin{array}{r}55\\\\385\\\\3025\\\\25333\\\\220825\\\\1978405\\\\18080425\\\\167731333\\\\1574304985\\\\14914341925\\end{array} \\right] \\]\n\n\n\n\\[ \\left[ \\begin{array}{r}3025\\\\25333\\\\220825\\\\1978405\\\\18080425\\\\167731333\\\\1574304985\\\\14914341925\\end{array} \\right] \\]\n\n\n\n\\[ \\left[ \\begin{array}{r}25333\\\\220825\\\\1978405\\\\18080425\\\\167731333\\\\1574304985\\\\14914341925\\end{array} \\right] \\]\n\n\n\n\\[ \\left[ \\begin{array}{r}220825\\\\1978405\\\\18080425\\\\167731333\\\\1574304985\\\\14914341925\\end{array} \\right] \\]\n\n\n### Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e\u5f62\u5f0f\u7684\u5c0e\u51fa\n\n\u51fd\u6570 $f(x)$ \u306b\u5bfe\u3057\u3066, \u3042\u308b\u51fd\u6570 $F(x)$ \u3067\n\n$$\nF(x+1) - F(x) = f(x+h)\n$$\n\n\u3068\u3044\u3046\u6761\u4ef6\u3092\u6e80\u305f\u3059\u3082\u306e\u3092\u6c42\u3081\u308b\u554f\u984c\u3092\u8003\u3048\u308b. \u305d\u306e\u3068\u304d, $\\ds D=\\frac{\\d}{\\d x}$ \u3068\u304a\u304f\u3068, \u5f62\u5f0f\u7684\u306b\u305d\u306e\u6761\u4ef6\u306f\n\n$$\n(e^D-1)F(x) = e^{hD}f(x) = De^{hD}\\int f(x)\\,dx\n$$\n\n\u3068\u66f8\u304d\u76f4\u3055\u308c\u308b. \u3053\u308c\u3088\u308a, \u5f62\u5f0f\u7684\u306b\u306f\n\n$$\nF(x) = \\frac{De^{hD}}{e^D-1}\\int f(x)\\,dx =\n\\sum_{k=0}^\\infty \\frac{B_k(h)}{k!}D^k \\int f(x)\\,dx =\n\\int f(x)\\,dx + \\sum_{k=1}^\\infty \\frac{B_k(h)}{k!}f^{(k-1)}(x).\n$$\n\n\u3053\u308c\u3088\u308a, \u6574\u6570 $an$ \u306e\u3068\u304d, \n\n$$\n\\begin{aligned}\n\\log n! &= \\log N! + \\log n - \\sum_{j=n}^N \\log j\n\\\\ &= \\log N! 
+ \\log n -\\left(\n\\int_n^N \\log x\\,dx + \\frac{\\log n+\\log N}{2} +\n\\sum_{k=2}^{K-1}\\frac{B_k}{k(k-1)} \\left(\\frac{1}{N^{k-1}} - \\frac{1}{n^{k-1}}\\right) + \nR_{K,N}\n\\right)\n\\\\ &=\n\\log N! - \\left(N\\log N - N + \\frac{1}{2}\\log N\\right) - \n\\sum_{k=2}^{K-1} \\frac{B_k}{k(k-1)} \\frac{1}{N^{k-1}}\n\\\\ &\\,+\nn\\log n - n +\\frac{1}{2}\\log n +\n\\sum_{k=2}^{K-1}\\frac{B_k}{k(k-1)} \\frac{1}{n^{k-1}} + R_{K,N},\n\\\\ \nR_{K,N} &= (-1)^{K-1}\\int_n^N \\frac{\\tilde{B}_K(x)}{K}\\frac{(-1)^{K-1}}{x^K}\\,dx\n\\end{aligned}\n$$\n\n\u305f\u3060\u3057, $\\tilde{B}_n(x)=B_n(\\lfloor x\\rfloor)$ \u3068\u304a\u3044\u305f. \n\n\u3053\u3053\u3067\u306f, $N\\to\\infty$ \u306e\u3068\u304d\n\n$$\n\\log N! - \\left(N\\log N - N + \\frac{1}{2}\\log N\\right) \\to \\sqrt{2\\pi}\n$$\n\n\u3068\u306a\u308b\u3053\u3068\u306f\u65e2\u77e5\u3067\u3042\u308b\u3082\u306e\u3068\u3059\u308b. \u4f8b\u3048\u3070, \u30ce\u30fc\u30c8\u300c10 Gauss\u7a4d\u5206, \u30ac\u30f3\u30de\u51fd\u6570, \u30d9\u30fc\u30bf\u51fd\u6570\u300d\u300c12 Fourier\u89e3\u6790\u300d\u306eStirling\u306e\u8fd1\u4f3c\u516c\u5f0f\u306e\u7bc0\u3092\u53c2\u7167\u3057\u3066\u6b32\u3057\u3044. \u4ee5\u4e0b\u3067\u306f\u305d\u308c\u3089\u306e\u30ce\u30fc\u30c8\u3088\u308a\u3082\u7cbe\u5bc6\u306a\u7d50\u679c\u3092\u5f97\u308b.\n\n\u3053\u306e\u3068\u304d, \u4e0a\u306e\u7d50\u679c\u3067 $N\\to\\infty$ \u3068\u3059\u308b\u3068,\n\n$$\n\\begin{aligned}\n&\n\\log n! =\nn\\log n - n +\\frac{1}{2}\\log n + \\log\\sqrt{2\\pi} +\n\\sum_{k=2}^{K-1}\\frac{B_k}{k(k-1)} \\frac{1}{n^{k-1}} + R_K,\n\\\\ & \nR_K = (-1)^{K-1}\\int_n^\\infty \\frac{\\tilde{B}_K(x)}{K}\\frac{(-1)^{K-1}}{x^K}\\,dx = \nO\\left(\\frac{1}{n^{K-1}}\\right).\n\\end{aligned}\n$$\n\n$K=2L+1$ \u3068\u304a\u304f\u3053\u3068\u306b\u3088\u3063\u3066\u6b21\u304c\u5f97\u3089\u308c\u308b: \u6b63\u306e\u6574\u6570 $L$ \u306b\u5bfe\u3057\u3066,\n\n$$\n\\log n! =\nn\\log n - n + \\frac{1}{2}\\log n + \\log\\sqrt{2\\pi} +\n\\sum_{l=1}^L \\frac{B_{2l}}{(2l)(2l-1)}\\frac{1}{n^{2l-1}} + O\\left(\\frac{1}{n^{2L}}\\right).\n$$\n\n\u3053\u308c\u304c\u6c42\u3081\u3066\u3044\u305f\u7d50\u679c\u3067\u3042\u308b.\n\n\u4f8b\u3048\u3070, $L=2$ \u306e\u3068\u304d, $\\ds B_2=\\frac{1}{6}$, $\\ds B_4=-\\frac{1}{30}$ \u306a\u306e\u3067,\n\n$$\n\\log n! =\nn\\log n - n + \\frac{1}{2}\\log n + \\log\\sqrt{2\\pi} +\n\\frac{1}{12n} - \\frac{1}{360n^3} + O\\left(\\frac{1}{n^4}\\right).\n$$\n\n\u3053\u308c\u3088\u308a, \n\n$$\nn! = n^n e^{-n}\\sqrt{2\\pi n}\n\\left(1+\\frac{1}{12n} + \\frac{1}{288n^2} - \\frac{139}{51840n^3} + O\\left(\\frac{1}{n^4}\\right)\\right).\n$$\n\n\n```julia\nx = symbols(\"x\")\nseries(exp(x/12-x^3/360), x, n=4)\n```\n\n\n\n\n\\begin{equation*}1 + \\frac{x}{12} + \\frac{x^{2}}{288} - \\frac{139 x^{3}}{51840} + O\\left(x^{4}\\right)\\end{equation*}\n\n\n\n### Poisson\u306e\u548c\u516c\u5f0f\u3068Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e\u95a2\u4fc2\n\nPoisson\u306e\u548c\u516c\u5f0f\u3068\u306f, \u6025\u6e1b\u5c11\u51fd\u6570 $f(x)$ \u306b\u5bfe\u3057\u3066,\n\n$$\n\\sum_{m\\in\\Z} f(m) = \\sum_{n\\in\\Z} \\hat{f}(n), \\qquad\n\\hat{f}(p) = \\int_\\R f(x)e^{2\\pi i px}\\,dx\n$$\n\n\u304c\u6210\u7acb\u3059\u308b\u3068\u3044\u3046\u7d50\u679c\u3067\u3042\u3063\u305f. 
\u3053\u308c\u306e\u53f3\u8fba\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u5909\u5f62\u3067\u304d\u308b:\n\n$$\n\\begin{aligned}\n\\sum_{n\\in\\Z} \\hat{f}(n) &=\n\\sum_{n\\in\\Z} \\int_\\R f(x)e^{2\\pi i nx}\\,dx =\n\\int_\\R f(x)\\,dx + 2\\sum_{n=1}^\\infty\\int_\\R f(x)\\cos(2\\pi nx)\\,dx\n\\\\ &=\n\\int_\\R f(x)\\,dx - \\sum_{n=1}^\\infty\\int_\\R f'(x)\\frac{\\sin(2\\pi nx)}{\\pi n}\\,dx\n\\\\ &=\n\\int_\\R f(x)\\,dx + \\int_\\R \\left(-\\sum_{n=1}^\\infty\\frac{\\sin(2\\pi nx)}{\\pi n}\\right)f'(x)\\,dx\n\\end{aligned}\n$$\n\n2\u3064\u76ee\u306e\u7b49\u53f7\u3067\u306f $e^{2\\pi inx}+e^{-2\\pi inx}=2\\cos(2\\pi nx)$ \u3092\u7528\u3044, 3\u3064\u76ee\u306e\u7b49\u53f7\u3067\u306f\u90e8\u5206\u7a4d\u5206\u3092\u5b9f\u884c\u3057, 4\u3064\u76ee\u306e\u7b49\u53f7\u3067\u306f\u7121\u9650\u548c\u3068\u7a4d\u5206\u306e\u9806\u5e8f\u3092\u4ea4\u63db\u3057\u305f. \u305d\u308c\u3089\u306e\u64cd\u4f5c\u306f $f(x)$ \u304c\u6025\u6e1b\u5c11\u51fd\u6570\u3067\u3042\u308c\u3070\u5bb9\u6613\u306b\u6b63\u5f53\u5316\u3055\u308c\u308b. \n\n\u4e00\u65b9, Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e\n\n$$\nB_1(x-\\lfloor x\\rfloor) = x - \\lfloor x\\rfloor - \\frac{1}{2}\n$$\n\n\u3092\u4f7f\u3046\u5834\u5408\u304b\u3089, \n\n$$\n\\sum_{m\\in\\Z} f(m) =\n\\int_\\R f(x)\\,dx + \\int_\\R \\left(x - \\lfloor x\\rfloor - \\frac{1}{2}\\right) f'(x)\\,dx\n$$\n\n\u304c\u5c0e\u304b\u308c\u308b. \u3053\u308c\u306f\u90e8\u5206\u7a4d\u5206\u306b\u3088\u3063\u3066\u5f97\u3089\u308c\u308b\u6b21\u306e\u516c\u5f0f\u304b\u3089\u305f\u3060\u3061\u306b\u5c0e\u304b\u308c\u308b\u6613\u3057\u3044\u516c\u5f0f\u3067\u3042\u308b\u3053\u3068\u306b\u3082\u6ce8\u610f\u305b\u3088:\n\n$$\n\\begin{aligned}\n\\int_n^{n+1} \\left(x - n - \\frac{1}{2}\\right) f'(x)\\,dx &=\n\\left[\\left(x - n - \\frac{1}{2}\\right)f(x)\\right]_n^{n+1} - \\int_n^{n+1}f(x)\\,dx - n\n\\\\ &=\n\\frac{f(n+1)-f(n)}{2} - \\int_n^{n+1}f(x)\\,dx.\n\\end{aligned}\n$$\n\n\u4ee5\u4e0a\u306e2\u3064\u306e\u7d50\u679c\u3092\u6bd4\u8f03\u3059\u308b\u3068, Poisson\u306e\u548c\u516c\u5f0f\u3068Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e $B_1(x-\\lfloor x\\rfloor)$ \u3092\u4f7f\u3063\u305f\u5834\u5408\u306f, \n\n$$\nx - \\lfloor x\\rfloor - \\frac{1}{2} =\n-\\sum_{n=1}^\\infty\\frac{\\sin(2\\pi nx)}{\\pi n}\n\\tag{$*$}\n$$\n\n\u3068\u3044\u3046\u516c\u5f0f\u3067\u7d50\u3073\u4ed8\u3044\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b. \u3053\u306e\u516c\u5f0f\u3092\u8a8d\u3081\u308c\u3070, Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e $B_1(x-\\lfloor x\\rfloor)$ \u3092\u4f7f\u3063\u305f\u5834\u5408\u304b\u3089Poisson\u306e\u548c\u516c\u5f0f\u304c\u5c0e\u304b\u308c\u308b. \n\n\u516c\u5f0f($*$)\u306e\u5de6\u8fba\u306f\u3044\u308f\u3086\u308b**\u306e\u3053\u304e\u308a\u6ce2**\u3067\u3042\u308a, \u53f3\u8fba\u306f\u305d\u306eFourier\u7d1a\u6570\u3067\u3042\u308b. \u516c\u5f0f($*$)\u306fFourier\u7d1a\u6570\u8ad6\u306b\u304a\u3051\u308b\u975e\u5e38\u306b\u6709\u540d\u306a\u516c\u5f0f\u3067\u3042\u308a, \u672c\u8cea\u7684\u306b\u305d\u308c\u3068\u540c\u3058\u516c\u5f0f\u306fFourier\u7d1a\u6570\u8ad6\u306b\u3064\u3044\u3066\u66f8\u304b\u308c\u305f\u6587\u732e\u306b\u306f\u4f8b\u3068\u3057\u3066\u5fc5\u305a\u8f09\u3063\u3066\u3044\u308b\u3068\u8a00\u3063\u3066\u3088\u3044\u304f\u3089\u3044\u3067\u3042\u308b. 
(Fourier\u7d1a\u6570\u8ad6\u3088\u308a, \u516c\u5f0f($*$)\u306f $x$ \u304c\u6574\u6570\u3067\u306a\u3044\u3068\u304d\u306b\u306f\u5b9f\u969b\u306b\u6210\u7acb\u3057\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b.)\n\n\u3053\u306e\u3088\u3046\u306b, \u306e\u3053\u304e\u308a\u6ce2\u306eFourier\u7d1a\u6570\u5c55\u958b\u3068\u3044\u3046\u975e\u5e38\u306b\u7279\u6b8a\u306a\u516c\u5f0f\u306fPoisson\u306e\u548c\u516c\u5f0f\u3068\u3044\u3046\u4e00\u822c\u7684\u306a\u516c\u5f0f\u3092\u5c0e\u304f\u3060\u3051\u306e\u529b\u3092\u6301\u3063\u3066\u3044\u308b\u306e\u3067\u3042\u308b. \n\n**\u307e\u3068\u3081:** \u306e\u3053\u304e\u308a\u6ce2\u306eFourier\u7d1a\u6570\u5c55\u958b\u306f\u90e8\u5206\u7a4d\u5206\u3092\u901a\u3057\u3066Poisson\u306e\u548c\u516c\u5f0f\u3068\u672c\u8cea\u7684\u306b\u540c\u5024\u3067\u3042\u308b! $\\QED$\n\n\u3053\u306e\u7bc0\u3067\u89e3\u8aac\u3057\u305f\u3053\u3068\u306f\u6b21\u306e\u6587\u732e\u3067\u6307\u6458\u3055\u308c\u3066\u3044\u308b:\n\n* Tim Jameson, An elementary derivation of the Poisson summation formula\n\n\n```julia\nB_1(x) = x - 1/2\nb(x) = B_1(x - floor(x))\nS(N,x) = -sum(n->sin(2\u03c0*n*x)/(\u03c0*n), 1:N)\nx = -2:0.001:1.999\nN = 10\nplot(size=(400,200), ylim=(-0.6,1.2), legend=:top)\nplot!(x, b.(x), label=\"B_1(x-[x]) = x - [x] -1/2\")\nplot!(x, S.(N,x), label=\"partial sum of Fourier series (N=$N)\")\n```\n\n\n\n\n \n\n \n\n\n\n**\u88dc\u8db3:** \u3053\u306e\u30ce\u30fc\u30c8\u306e\u4e0a\u306e\u65b9\u306e\u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f $B_k(x-\\lfloor x\\rfloor)$ \u306eFourier\u7d1a\u6570\u5c55\u958b\u306e\u7bc0\u3092\u898b\u308c\u3070\u308f\u304b\u308b\u3088\u3046\u306b, \n\n$$\n\\sum_{n=1}^\\infty \\frac{\\cos(2\\pi nx)}{n^k}, \\quad\n\\sum_{n=1}^\\infty \\frac{\\sin(2\\pi nx)}{n^k}\n$$\n\n\u306e\u578b\u306eFourier\u7d1a\u6570\u306e\u53ce\u675f\u5148\u306f\u5e73\u884c\u79fb\u52d5\u3068\u5b9a\u6570\u500d\u306e\u9055\u3044\u3092\u9664\u3044\u3066\u5468\u671f\u7684Bernoulli\u591a\u9805\u5f0f\u306b\u306a\u308b. $\\QED$\n\n### \u53f0\u5f62\u516c\u5f0f\u3068Poisson\u306e\u548c\u516c\u5f0f\u306e\u95a2\u4fc2\n\n\u7c21\u5358\u306e\u305f\u3081 $f(x)$ \u306f $\\R$ \u4e0a\u306e\u6025\u6e1b\u5c11\u51fd\u6570\u3067\u3042\u308b\u3068\u3057, $a,b\\in\\Z$ \u304b\u3064 $a1$ \u306e\u3068\u304d(\u3088\u308a\u4e00\u822c\u306b\u306f $\\real s>1$ \u306e\u3068\u304d),\n\n$$\n\\zeta(s) = \\sum_{n=1}^\\infty \\frac{1}{n^s}\n$$\n\n\u306f\u7d76\u5bfe\u53ce\u675f\u3057\u3066\u3044\u308b\u306e\u3067\u3042\u3063\u305f. \u3053\u308c\u306bEuler-Maclaurin\u306e\u548c\u516c\u5f0f\n\n$$\n\\begin{aligned}\n&\n\\sum_{j=a}^b f(j) = \n\\int_a^b f(x)\\,dx + \\frac{f(a)+f(b)}{2} + \n\\sum_{k=2}^n \\frac{B_k}{k!} (f^{(k-1)}(b)-f^{(k-1)}(a)) + R_n,\n\\\\ &\nR_n = (-1)^{n-1}\\int_a^b \\frac{B_n(x-\\lfloor x\\rfloor)}{n!}f^{(n)}(x)\\,dx\n\\end{aligned}\n$$\n\n\u3092\u9069\u7528\u3057\u3066\u307f\u3088\u3046.\n\n### \u89e3\u6790\u63a5\u7d9a\n\n$\\real s > 1$ \u3067\u3042\u308b\u3068\u3057, $f(x)=x^{-s}$ \u3068\u304a\u304f. 
\u3053\u306e\u3068\u304d, \n\n$$\n\\begin{aligned}\n&\n\\int_a^\\infty f(x)\\,dx = \\int_1^\\infty x^{-s}\\,dx = \n\\left[\\frac{x^{-s+1}}{-s+1}\\right]_1^\\infty = \\frac{a^{-(s-1)}}{s-1}, \\qquad\nf(b)=b^{-s}\\to 0 \\quad(b\\to\\infty).\n\\\\ &\n\\frac{B_k}{k!}f^{(k-1)}(x) = \n\\frac{B_k}{k}\\binom{-s}{k-1} x^{-s-k+1}, \\quad\n\\frac{B_n(x-\\lfloor x\\rfloor)}{n!}f^{(n)}(x) = \n\\binom{-s}{n}B_n(x-\\lfloor x\\rfloor)x^{-s-n}\n\\end{aligned}\n$$\n\n\u306a\u306e\u3067, 2\u4ee5\u4e0a\u306e\u6574\u6570 $n$ \u306b\u3064\u3044\u3066,\n\n$$\n\\begin{aligned}\n&\n\\zeta(s) = \\frac{1}{s-1} + \\frac{1}{2} - \n\\sum_{k=2}^n \\frac{B_k}{k}\\binom{-s}{k-1} + R_n,\n\\\\ &\nR_n = (-1)^{n-1}\\binom{-s}{n}\\int_1^\\infty B_n(x-\\lfloor x\\rfloor)x^{-s-n}\\,dx.\n\\end{aligned}\n$$\n\n\u7a4d\u5206 $R_n$ \u306f $\\real s+n>1$ \u306a\u3089\u3070\u7d76\u5bfe\u53ce\u675f\u3057\u3066\u3044\u308b. \u3086\u3048\u306b, \u8907\u7d20\u5e73\u9762\u5168\u4f53\u306b $\\zeta(s)$ \u3092\u81ea\u7136\u306b\u62e1\u5f35\u3059\u308b\u65b9\u6cd5(\u89e3\u6790\u63a5\u7d9a\u3059\u308b\u65b9\u6cd5)\u304c\u5f97\u3089\u308c\u305f.\n\n$\\ds \\sum_{k=1}^\\infty \\frac{1}{n^s}$ \u305d\u306e\u3082\u306e\u3067\u306f\u306a\u304f, $n=a$ \u304b\u3089\u59cb\u307e\u308b\u7121\u9650\u548c $\\ds \\sum_{k=a}^\\infty \\frac{1}{n^s}=\\zeta(s)-\\sum_{n=1}^{a-1}\\frac{1}{n^s}$ \u306bEuler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092\u9069\u7528\u3059\u308b\u3068,\n\n$$\n\\begin{aligned}\n&\n\\zeta(s) = \\sum_{n=1}^{a-1} \\frac{1}{n^s} - \\frac{a^{1-s}}{1-s} + \n\\frac{1}{2a^s} - \\sum_{k=2}^n \\frac{B_k}{k a^{s+k-1}}\\binom{-s}{k-1} + R_{n,a},\n\\\\ &\nR_{n,a} = (-1)^{n-1}\\binom{-s}{n}\\int_a^\\infty B_n(x-\\lfloor x\\rfloor)x^{-s-n}\\,dx.\n\\end{aligned}\n$$\n\n\n```julia\n# \u4e0a\u306e\u516c\u5f0f\u306b\u304a\u3051\u308b \u03b6(s) - R_{n,a} \u306e\u51fd\u6570\u5316\n\n# \u03b6(s) - R_{n,a} = \u03a3_{m=1}^{a-1} m^{-s} - a^{1-s}/(1-s) + 1/(2a^s)\n# - \u03a3_{k=2}^n B_k/(k a^{s+k-1}) binom(-s,k-1) (k is even)\n#\nfunction ApproxZeta(a, n, s)\n ss = float(big(s))\n z = zero(ss)\n z += (a \u2264 1 ? zero(ss) : sum(m->m^(-ss), 1:a-1)) # \u03a3_{m=1}^{a-1} m^{-s}\n z += -a^(1-ss)/(1-ss) # -a^{1-s}/(1-s)\n n == 0 && return z\n z += 1/(2*a^ss) # 1/(2a^s)\n n == 1 && return z\n z -= sum(k -> BB(k)/(k*a^(ss+k-1))*binom(-ss,k-1), 2:2:n)\n # -\u03a3_{k=2}^n B_k/(k a^{s+k-1}) binom(-s,k-1) (k is even)\nend\n\nA = ApproxZeta(40, 80, big\"0.5\")\nZ = zeta(big\"0.5\")\n@show A\n@show Z;\n```\n\n A = -1.460354508809586812889499152515298012467229331012581490542886087825530529474572\n Z = -1.460354508809586812889499152515298012467229331012581490542886087825530529474503\n\n\n$\\real s > 0$ \u306e\u3068\u304d, \n\n$$\n\\frac{1}{2a^s} - \\sum_{k=2}^n \\frac{B_k}{k a^{s+k-1}}\\binom{-s}{k-1} + R_{n,a}\n$$\n\n\u306f $a\\to\\infty$ \u3067 $0$ \u306b\u53ce\u675f\u3059\u308b\u306e\u3067,\n\n$$\n\\zeta(s) = \\lim_{a\\to\\infty}\\left(\\sum_{n=1}^{a-1} \\frac{1}{n^s} - \\frac{a^{1-s}}{1-s}\\right)\n\\quad (\\real s > 0)\n$$\n\n\u304c\u6210\u7acb\u3059\u308b\u3053\u3068\u304c\u308f\u304b\u308b. 
\u3053\u308c\u306f, Dirichlet\u7d1a\u6570\u306e\u90e8\u5206\u548c $\\ds\\sum_{n=1}^{a-1}\\frac{1}{n^s}$ \u304b\u3089\u88dc\u6b63\u9805\n\n$$\n\\frac{a^{1-s}}{1-s}\n$$\n\n\u3092\u5f15\u304d\u53bb\u3063\u3066\u304b\u3089, Dirichlet\u7d1a\u6570\u306e\u7dcf\u548c\u3092\u53d6\u308c\u3070, $0 < \\real s < 1$ \u3067\u3082\u53ce\u675f\u3057\u3066, $\\zeta(s)$ \u306e\u6b63\u78ba\u306a\u5024\u304c\u5f97\u3089\u308c\u308b\u3053\u3068\u3092\u610f\u5473\u3057\u3066\u3044\u308b.\n\n\n```julia\n# \u4e0a\u306e\u7d50\u679c\u306e\u30d7\u30ed\u30c3\u30c8\n\nApproxZeta0(a, s) = sum(n->n^(-s), 1:a-1) - a^(1-s)/(1-s)\na = 100\ns = 0.05:0.01:0.95\n@time z = zeta.(s)\n@time w = ApproxZeta0.(a, s)\nplot(size=(400, 250), legend=:bottomleft, xlabel=\"s\")\nplot!(s, z, label=\"zeta(s)\", lw=2)\nplot!(s, w, label=\"Euler-Maclaurin sum for n=0, a=$a\", lw=2, ls=:dash)\n```\n\n 0.235033 seconds (507.61 k allocations: 26.628 MiB, 13.12% gc time)\n 0.108976 seconds (319.80 k allocations: 15.620 MiB)\n\n\n\n\n\n \n\n \n\n\n\n\n```julia\n# \u3055\u3089\u306b\u9805\u306e\u6570\u30921\u3064\u5897\u3084\u3057\u305f\u5834\u5408\u306e\u30d7\u30ed\u30c3\u30c8\n\n# \u03b6(s) - R_{1,a} = \u03a3_{n=1}^{a-1} n^{-s} - a^{1-s}/(1-s) + 1/(2a^s)\n#\nApproxZeta1(a, s) = sum(n->n^(-s), 1:a-1) - a^(1-s)/(1-s) + 1/(2*a^s)\n\ns = -0.95:0.01:0.5\na = 10^3\n@time z = zeta.(s)\n@time w = ApproxZeta1.(a,s)\nplot(size=(400, 250), legend=:bottomleft, xlabel=\"s\")\nplot!(s, z, label=\"zeta(s)\", lw=2)\nplot!(s, w, label=\"Euler-Maclaurin sum for n=1, a=$a\", lw=2, ls=:dash)\n```\n\n 0.000172 seconds (8 allocations: 1.563 KiB)\n 0.117856 seconds (313.70 k allocations: 15.673 MiB)\n\n\n\n\n\n \n\n \n\n\n\n\n```julia\n# \u3055\u3089\u306b\u4e00\u822c\u306e\u5834\u5408\u306e\u30d7\u30ed\u30c3\u30c8\n#\n# Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u3067 \u03b6(s) \u306e\u8ca0\u306e s \u3067\u306e\u5024\u3092\u3074\u3063\u305f\u308a\u8fd1\u4f3c\u3067\u304d\u3066\u3044\u308b\u3053\u3068\u304c\u308f\u304b\u308b.\n\n[(-m, zeta(-m), Float64(ApproxZeta(2, 17, -m))) for m = 0:12] |> display\n\nn = 10\ns = -1.5:0.05:0.5\na = 10\n@time z = zeta.(s)\n@time w = ApproxZeta.(a, n, s)\nP1 = plot(size=(400, 250), legend=:bottomleft, xlabel=\"s\")\nplot!(s, z, label=\"zeta(s)\", lw=2)\nplot!(s, w, label=\"Euler-Maclaurin sum for a=$a, n=$n\", lw=2, ls=:dash)\n\nn = 17\ns = -16:0.05:-2.0\na = 2\n@time z = zeta.(s)\n@time w = ApproxZeta.(a, n, s)\nP2 = plot(size=(400, 250), legend=:topright, xlabel=\"s\")\nplot!(s, z, label=\"zeta(s)\", lw=2)\nplot!(s, w, label=\"Euler-Maclaurin sum for a=$a, n=$n\", lw=2, ls=:dash)\n```\n\n\n 13-element Array{Tuple{Int64,Float64,Float64},1}:\n (0, -0.5, -0.5) \n (-1, -0.08333333333333338, -0.08333333333333333) \n (-2, -0.0, -1.2954252832641667e-77) \n (-3, 0.008333333333333345, 0.008333333333333333) \n (-4, -0.0, -3.454467422037778e-77) \n (-5, -0.0039682539682539715, -0.003968253968253968)\n (-6, -0.0, 0.0) \n (-7, 0.004166666666666668, 0.004166666666666667) \n (-8, -0.0, 0.0) \n (-9, -0.007575757575757582, -0.007575757575757576) \n (-10, -0.0, -4.421718300208356e-75) \n (-11, 0.0210927960927961, 0.021092796092796094) \n (-12, -0.0, 0.0) \n\n\n 0.000048 seconds (8 allocations: 800 bytes)\n 0.144020 seconds (387.92 k allocations: 19.316 MiB)\n 0.000267 seconds (8 allocations: 2.719 KiB)\n 0.069766 seconds (555.19 k allocations: 20.968 MiB, 17.44% gc time)\n\n\n\n\n\n \n\n \n\n\n\n\n```julia\ndisplay(P1)\n```\n\n\n \n\n \n\n\n\u4e0a\u3068\u4e0b\u306e\u30b0\u30e9\u30d5\u3092\u898b\u308c\u3070\u308f\u304b\u308b\u3088\u3046\u306b, 
Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306b\u3088\u3063\u3066\u8ca0\u306e\u5b9f\u6570\u3067\u306e $\\zeta$ \u51fd\u6570\u306e\u5024\u3092\u975e\u5e38\u306b\u3088\u304f\u8fd1\u4f3c\u3067\u304d\u3066\u3044\u308b. \u5b9f\u306f $\\zeta(s)$ \u3092\u5b9f\u90e8\u304c\u8ca0\u306e\u8907\u7d20\u6570\u307e\u3067\u62e1\u5f35\u3057\u3066\u3082\u3053\u306e\u8fd1\u4f3c\u306f\u3046\u307e\u304f\u884c\u3063\u3066\u3044\u308b.\n\n\n```julia\ndisplay(P2)\n```\n\n\n \n\n \n\n\n### \u03b6(2)\u306e\u8fd1\u4f3c\u8a08\u7b97\n\n$\\ds\\zeta(2)=\\sum_{n=1}^\\infty \\frac{1}{n^2}$ \u3092\u8a08\u7b97\u305b\u3088\u3068\u3044\u3046\u554f\u984c\u306f**Basel\u554f\u984c**\u3068\u547c\u3070\u308c\u3066\u3044\u308b\u3089\u3057\u3044. Basel\u554f\u984c\u306fEuler\u306b\u3088\u3063\u30661743\u5e74\u3053\u308d\u306b\u89e3\u304b\u308c\u305f\u3089\u3057\u3044. Euler\u304c\u3069\u306e\u3088\u3046\u306b\u8003\u3048\u305f\u304b\u306b\u3064\u3044\u3066\u306f\u6b21\u306e\u6587\u732e\u3092\u53c2\u7167\u305b\u3088.\n\n* \u6749\u672c\u654f\u592b, \u30d0\u30fc\u30bc\u30eb\u554f\u984c\u3068\u30aa\u30a4\u30e9\u30fc, 2007\u5e748\u670823\u65e5, \u6570\u7406\u89e3\u6790\u7814\u7a76\u6240\u8b1b\u7a76\u9332, \u7b2c1583\u5dfb, 2008\u5e74, pp.159-167\n\nEuler\u306f $\\zeta(2)$ \u306e\u8fd1\u4f3c\u5024\u3092\u81ea\u3089\u958b\u767a\u3057\u305fEuler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092\u4f7f\u3063\u3066\u7cbe\u5bc6\u306b\u8a08\u7b97\u3057\u305f\u3089\u3057\u3044.\n\n\u8fd1\u4f3c\u5f0f\n\n$$\n\\zeta(s) \\approx\n\\sum_{n=1}^{a-1} \\frac{1}{n^s} - \\frac{a^{1-s}}{1-s} + \n\\frac{1}{2a^s} - \\sum_{k=2}^n \\frac{B_k}{k a^{s+k-1}}\\binom{-s}{k-1} \n$$\n\n\u3092\u7528\u3044\u3066, $\\zeta(2)$ \u3092\u8a08\u7b97\u3057\u3066\u307f\u3088\u3046. 3\u4ee5\u4e0a\u306e\u5947\u6570 $n$ \u306b\u3064\u3044\u3066 $B_n=0$ \u3068\u306a\u308b\u306e\u3067, $n=2m$ \u306e\u3068\u304d, \u53f3\u8fba\u306e\u9805\u6570\u306f $a+m+1$ \u306b\u306a\u308b.\n\n\u4f8b\u3048\u3070, $a=10$, $m=9$ \u3068\u3057, 20\u9805\u306e\u548c\u3092\u53d6\u308b\u3068,\n\n$$\n\\zeta(2) \\approx 1.64493\\;40668\\;4749\\cdots\n$$\n\n\u3068\u306a\u308a, \u6b63\u78ba\u306a\u5024 $\\ds\\frac{\\pi^2}{6}=1.64493\\;40668\\;4822\\cdots$ \u3068\u5c0f\u6570\u70b9\u4ee5\u4e0b\u7b2c11\u6841\u307e\u3067\u4e00\u81f4\u3057\u3066\u3044\u308b. \n\nEuler\u306f\u5f8c\u306b $\\ds\\zeta(2)=\\frac{\\pi^2}{6}$ \u3092\u5f97\u308b. Euler\u306f\u7af6\u4e89\u76f8\u624b\u306b\u8b70\u8ad6\u306b\u53b3\u5bc6\u6027\u306b\u6b20\u3051\u308b\u3068\u3057\u3066\u69d8\u3005\u306a\u6279\u5224\u3092\u53d7\u3051\u305f\u306e\u3060\u304c, \u4ee5\u4e0a\u306e\u3088\u3046\u306a\u6570\u5024\u8a08\u7b97\u306e\u7d50\u679c\u3092\u77e5\u3063\u3066\u3044\u305f\u306e\u3067, \u6b63\u89e3\u3092\u5f97\u305f\u3068\u3044\u3046\u78ba\u4fe1\u306f\u5fae\u5875\u3082\u63fa\u3089\u304c\u306a\u304b\u3063\u305f\u3060\u308d\u3046\u3068\u601d\u308f\u308c\u308b.\n\n**\u6ce8\u610f:** \u8ad6\u7406\u7684\u306b\u53b3\u5bc6\u306a\u8a3c\u660e\u306e\u65b9\u6cd5\u304c\u767a\u9054\u3057\u305f\u73fe\u4ee3\u306b\u304a\u3044\u3066\u3082, \u4eba\u9593\u306f\u5e38\u306b\u8a3c\u660e\u3092\u9593\u9055\u3046\u53ef\u80fd\u6027\u304c\u3042\u308b. \u4eba\u9593\u304c\u884c\u3063\u305f\u8a3c\u660e\u306f\u7d76\u5bfe\u7684\u306b\u306f\u4fe1\u7528\u3067\u304d\u306a\u3044. 
\u3060\u304b\u3089, \u305f\u3068\u3048\u8a3c\u660e\u304c\u5b8c\u6210\u3057\u305f\u3068\u601d\u3063\u3066\u3044\u305f\u3068\u3057\u3066\u3082, \u53ef\u80fd\u306a\u3089\u3070\u6570\u5024\u8a08\u7b97\u306b\u3088\u3063\u3066\u8ad6\u7406\u7684\u306b\u53b3\u5bc6\u306a\u8a3c\u660e\u4ee5\u5916\u306e\u8a3c\u62e0\u3092\u4f5c\u3063\u3066\u3044\u305f\u65b9\u304c\u5b89\u5168\u3060\u3068\u601d\u308f\u308c\u308b. $\\QED$\n\n**\u6ce8\u610f:** \u6570\u5b66\u306e\u30ce\u30fc\u30c8\u3092\u4f5c\u308a\u306a\u304c\u3089, \u6c17\u8efd\u306b\u6570\u5024\u7684\u8a3c\u62e0\u3082\u540c\u6642\u306b\u5f97\u308b\u305f\u3081\u306e\u9053\u5177\u3068\u3057\u3066, \u7b46\u8005\u304c\u3053\u306e\u30ce\u30fc\u30c8\u4f5c\u6210\u306e\u305f\u3081\u306b\u7528\u3044\u3066\u3044\u308bJulia\u8a00\u8a9e\u3068Jupyter\u3068Nbextensions\u306eLive Markdown Preview\u306f\u3053\u308c\u3092\u66f8\u3044\u3066\u3044\u308b\u6642\u70b9\u3067\u76f8\u5f53\u306b\u512a\u79c0\u306a\u9053\u5177\u3067\u3042\u308b\u3088\u3046\u306b\u601d\u308f\u308c\u308b. $\\QED$\n\n\n```julia\n# 20\u9805\u306e\u548c\n\nN = 20\n[(m, N-m-1, 2m, ApproxZeta(N-m-1, 2m, 2) - big(\u03c0)^2/6) for m in 2:N\u00f72-1] |> display\n\nm = 9\na = N-m-1\nZ = big(\u03c0)^2/6\nA = ApproxZeta(a, m, 2)\n@show a,m\n@show Z\n@show A;\n```\n\n\n 8-element Array{Tuple{Int64,Int64,Int64,BigFloat},1}:\n (2, 17, 4, -5.77451793863474833797788940478358503699585407578399357681001619323664145261758e-11) \n (3, 16, 6, 4.808127352395625095013460112150325878389866054958153137408430487774776509324829e-13) \n (4, 15, 8, -8.630887513943044224615236465465206970650911046136527708026292186723865816966492e-15) \n (5, 14, 10, 3.116217978527385328054235573023871173466586797186396436897662720414552852297451e-16) \n (6, 13, 12, -2.200847274100542514575619216657515798396053843860275532661691594239630209744406e-17)\n (7, 12, 14, 3.035248943857815147777677383711316694019656935103319432181355248871820406062584e-18) \n (8, 11, 16, -8.335321043122531064769674746337938450627967961329547742403411230422546148897753e-19)\n (9, 10, 18, 4.746601814392005312714027578027970306539540935051342164737224161514796063021067e-19) \n\n\n (a, m) = (10, 9)\n Z = 1.644934066848226436472415166646025189218949901206798437735558229370007470403185\n A = 1.644934066847493071302595112118921642731166540690350214159737969261778785588307\n\n\n### s = 1\u3067\u306e\u03b6(s)\u306e\u5b9a\u6570\u9805\u304cEuler\u5b9a\u6570\u306b\u306a\u308b\u3053\u3068\n\n$\\zeta(s)=\\ds\\sum_{n=1}^\\infty \\frac{1}{n^s}$ \u306bEuler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092\u4f7f\u3063\u3066, 2\u4ee5\u4e0a\u306e $n$ \u306b\u3064\u3044\u3066\u6b21\u306e\u516c\u5f0f\u304c\u5f97\u3089\u308c\u308b\u306e\u3067\u3042\u3063\u305f:\n\n$$\n\\begin{aligned}\n&\n\\zeta(s) = \\frac{1}{s-1} + \\frac{1}{2} - \n\\sum_{k=2}^n \\frac{B_k}{k}\\binom{-s}{k-1} + R_n,\n\\\\ &\nR_n = (-1)^{n-1}\\binom{-s}{n}\\int_1^\\infty B_n(x-\\lfloor x\\rfloor)x^{-s-n}\\,dx.\n\\end{aligned}\n$$\n\n$n=1$ \u306e\u5834\u5408\u306b\u306f\n\n\\begin{aligned}\n\\sum_{j=a}^b f(j) &= \n\\int_a^b f(x)\\,dx + f(a) + \\int_a^b (x-\\lfloor x\\rfloor)f'(x)\\,dx\n\\\\ &=\n\\int_a^b f(x)\\,dx + f(a) + \\sum_{j=a}^{b-1}\\int_0^1 x f'(x+j)\\,dx\n\\end{aligned}\n\n\u3092 $f(x)=x^{-s}$, $f'(x)=-sx^{-s-1}$, $a=1$, $b=\\infty$ \u306e\u5834\u5408\u306b\u9069\u7528\u3057\u3066,\n\n$$\n\\zeta(s) = \n\\frac{1}{s-1} + 1 - s\\sum_{j=1}^\\infty\\int_0^1 \\frac{x}{(x+j)^{s+1}}\\,dx\n$$\n\n\u3092\u5f97\u308b. 
\u3057\u305f\u304c\u3063\u3066,\n\n$$\n\\lim_{s\\to 1}\\left(\\zeta(s)-\\frac{1}{s-1}\\right) =\n1 - \\sum_{j=1}^\\infty\\int_0^1 \\frac{x}{(x+j)^2}\\,dx.\n$$\n\n\u305d\u3057\u3066, $x=t-j$ \u3068\u7f6e\u63db\u3059\u308b\u3068, \n\n$$\n\\begin{align}\n-\\int_0^1\\frac{x}{(x+j)^2}\\,dx &= \n-\\int_j^{j+1}\\frac{-(t-j)}{t^2}\\,dt = \n-\\left[\\log t + \\frac{j}{t}\\right]_j^{j+1} \n\\\\ &=\n-\\log(j+1)+\\log j -\\frac{j}{j+1}+1 =\n\\frac{1}{j+1} + \\log j - \\log(j+1)\n\\end{align}\n$$\n\n\u306a\u306e\u3067, \u3053\u308c\u3092 $j=1$ \u304b\u3089 $j=N-1$ \u307e\u3067\u8db3\u3057\u4e0a\u3052\u308b\u3053\u3068\u306b\u3088\u3063\u3066,\n\n$$\n1 - \\sum_{j=1}^{n-1}\\int_0^1\\frac{x}{(x+j)^2}\\,dx =\n\\sum_{j=1}^N\\frac{1}{j} - \\log N.\n$$\n\n\u3053\u308c\u306e $N\\to\\infty$ \u3067\u306e\u6975\u9650\u306fEuler\u5b9a\u6570 $\\gamma=0.5772\\cdots$ \u306e\u5b9a\u7fa9\u3067\u3042\u3063\u305f. \u4ee5\u4e0a\u306b\u3088\u3063\u3066\u6b21\u304c\u793a\u3055\u308c\u305f:\n\n$$\n\\lim_{s\\to 1}\\left(\\zeta(s)-\\frac{1}{s-1}\\right) = \\gamma = 0.5772\\cdots.\n$$\n\n### \u8ca0\u306e\u6574\u6570\u306b\u304a\u3051\u308b\u30bc\u30fc\u30bf\u51fd\u6570\u306e\u7279\u6b8a\u5024\u306e\u8a08\u7b97\n\nEuler-Maclaurin\u306e\u548c\u516c\u5f0f: $3$ \u4ee5\u4e0a\u306e\u6574\u6570 $k$ \u306b\u3064\u3044\u3066 $B_k=0$ \u306a\u306e\u3067, \u4ee5\u4e0b\u306e\u516c\u5f0f\u3067 $k$ \u306f\u5076\u6570\u306e\u307f\u3092\u52d5\u304f\u3068\u3057\u3066\u3088\u3044:\n\n$$\n\\begin{aligned}\n&\n\\sum_{n=a}^b f(n) = \n\\int_a^b f(x)\\,dx + \\frac{f(a)+f(b)}{2} + \n\\sum_{k=2}^m \\frac{B_k}{k!}(f^{(k-1)}(b) - f^{(k-1)}(a)) + R_m,\n\\\\ &\nR_n = (-1)^{m-1}\\int_a^b \\frac{\\tilde{B}_m(x)}{m!} f^{(m)}(x)\\,dx.\n\\end{aligned}\n$$\n\n\u3053\u3053\u3067 $\\tilde{B}_m(x)=B_m(x-\\lfloor x\\rfloor)$ \u3068\u304a\u3044\u305f.\n\nEuler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092 $f(x)=n^{-s}$, $a=1$, $b=\\infty$ \u306e\u5834\u5408\u306b\u9069\u7528\u3059\u308b\u3053\u3068\u306b\u3088\u3063\u3066 $\\zeta(s)$ \u306f\u6b21\u306e\u5f62\u3067 $\\Re s > 1-m$ \u307e\u3067\u81ea\u7136\u306b\u5ef6\u9577(\u89e3\u6790\u63a5\u7d9a)\u3055\u308c\u308b\u306e\u3067\u3042\u3063\u305f:\n\n$$\n\\zeta(s) = \n\\frac{1}{s-1} + \\frac{1}{2} -\n\\frac{1}{1-s}\\sum_{k=2}^m \\binom{1-s}{k} B_k + \n(-1)^{m-1}\\int_a^b \\binom{-s}{m} \\tilde{B}_m(x) x^{-s-m}\\,dx.\n$$\n\n\u3053\u306e\u516c\u5f0f\u3068 $k\\geqq 2$ \u306e\u3068\u304d $\\ds\\binom{1}{k}=0$ \u3068\u306a\u308b\u3053\u3068\u3088\u308a, \n\n$$\n\\zeta(0) = \\frac{1}{0-1} + \\frac{1}{2} = -\\frac{1}{2}.\n$$\n\n$r$ \u306f\u6b63\u306e\u6574\u6570\u3067\u3042\u308b\u3068\u3059\u308b. \u3053\u306e\u3068\u304d, $m>r$ \u3068\u3059\u308b\u3068 $\\ds\\binom{r}{m}=0$ \u3068\u306a\u308b\u306e\u3067, $B_0=1$, $B_1=-1/2$ \u306a\u306e\u3067,\n\n$$\n\\begin{aligned}\n\\zeta(-r) &=\n-\\frac{1}{r+1} + \\frac{1}{2} -\n\\frac{1}{r+1}\\sum_{k=2}^{r+1} \\binom{m+1}{k} B_k\n\\\\ =&\n-\\frac{1}{r+1}\\sum_{k=0}^{r+1} \\binom{m+1}{k} B_k =\n-\\frac{B_{r+1}}{r+1}.\n\\end{aligned}\n$$\n\n\u6700\u5f8c\u306e\u7b49\u53f7\u3067, Bernoulli\u6570\u3092\u5e30\u7d0d\u7684\u306b\u8a08\u7b97\u3059\u308b\u305f\u3081\u306b\u4f7f\u3048\u308b\u516c\u5f0f $\\ds\\sum_{k=0}^r \\binom{r+1}{k}B_k=0$ \u3092\u7528\u3044\u305f. 
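
この公式自体は, 上の方のセルで定義した `binom` と `B` (Bernoulli数)を使えば直接確認できる(上のセルを実行済みであることを仮定した補足セルである). 次のセルでは $r=1,\ldots,10$ についてすべて $0$ になることを確かめている.


```julia
# 公式 Σ_{k=0}^r binom(r+1,k) B_k = 0 (r ≥ 1) の確認(補足セル)
# binom と B は上の方のセルで定義したものをそのまま使う
check(r) = sum(k -> binom(r+1, k)*B(k), 0:r)
@show [check(r) for r in 1:10];
```
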
\u4f8b\u3048\u3070, $r=1$ \u306e\u3068\u304d $B_0+2B_1=1+2(-1/2)=0$ \u3068\u306a\u308a, $r=2$ \u306e\u3068\u304d, $B_0+3B_1+3B_2=1+3(-1/2)+3(1/6)=0$ \u3068\u306a\u308b.\n\n\u4ee5\u4e0a\u306b\u3088\u3063\u3066\u6b21\u304c\u8a3c\u660e\u3055\u308c\u305f:\n\n$$\n\\zeta(0)=-\\frac{1}{2}, \\quad\n\\zeta(-r) = -\\frac{B_{r+1}}{r+1} \\quad (r=1,2,3,\\ldots).\n$$\n\n\u3053\u308c\u3089\u306e\u516c\u5f0f\u306f $B_n(1)=B_n+\\delta_{n,1}$, $B_1=-1/2$ \u3092\u4f7f\u3046\u3068, \n\n$$\n\\zeta(-r) = -\\frac{B_{r+1}(1)}{r+1} \\quad (r=0,1,2,\\ldots)\n$$\n\n\u306e\u5f62\u306b\u307e\u3068\u3081\u3089\u308c\u308b.\n\n### \u767a\u6563\u7d1a\u6570\u306e\u6709\u9650\u90e8\u5206\u3068 \u03b6(-r) \u306e\u95a2\u4fc2\n\n\u524d\u7bc0\u306e\u7d50\u679c $\\ds\\zeta(-r)=-\\frac{B_{r+1}(1)}{r+1}$ ($r=0,1,2,\\ldots$) \u306f\n\n$$\n\\begin{aligned}\n&\n1+1+1+1+\\cdots = -\\frac{1}{2},\n\\\\ &\n1+2+3+4+\\cdots = -\\frac{1}{12}\n\\end{aligned}\n$$\n\n\u306e\u3088\u3046\u306a\u5370\u8c61\u7684\u306a\u5f62\u5f0f\u3067\u66f8\u304b\u308c\u308b\u3053\u3068\u3082\u3042\u308b. \u305f\u3060\u3057, \u305d\u306e\u5834\u5408\u306b\u306f\u5de6\u8fba\u304c\u901a\u5e38\u306e\u7121\u9650\u548c\u3067\u306f\u306a\u304f, \u30bc\u30fc\u30bf\u51fd\u6570 $\\zeta(s)$ \u306e\u89e3\u6790\u63a5\u7d9a\u306e\u610f\u5473\u3067\u3042\u308b\u3053\u3068\u3092\u4e86\u89e3\u3057\u3066\u304a\u304b\u306a\u3051\u308c\u3070\u3044\u3051\u306a\u3044. \n\n\u5b9f\u306f\u3055\u3089\u306b\u89e3\u6790\u63a5\u7d9a\u3068\u3057\u3066\u7406\u89e3\u3059\u308b\u3060\u3051\u3067\u306f\u306a\u304f, \u300c\u5de6\u8fba\u306e\u767a\u6563\u3059\u308b\u7121\u9650\u548c\u304b\u3089\u9069\u5207\u306b\u7121\u9650\u5927\u3092\u5f15\u304d\u53bb\u308c\u3070\u53f3\u8fba\u306b\u7b49\u3057\u304f\u306a\u308b\u300d\u3068\u3044\u3046\u3088\u3046\u306a\u30bf\u30a4\u30d7\u306e\u547d\u984c\u3092\u3046\u307e\u304f\u4f5c\u308b\u3053\u3068\u3082\u3067\u304d\u308b. \u4ee5\u4e0b\u3067\u306f\u305d\u306e\u3053\u3068\u3092\u89e3\u8aac\u3057\u3088\u3046.\n\n\u4ee5\u4e0b, $\\eta$ \u306f\u975e\u8ca0\u306e\u5b9f\u6570\u306b\u5024\u3092\u6301\u3064 $\\R$ \u4e0a\u306e**\u6025\u6e1b\u5c11\u51fd\u6570**\u3067\u3042\u308b\u3068\u4eee\u5b9a\u3059\u308b. ($\\R$ \u4e0a\u306e\u6025\u6e1b\u5c11\u51fd\u6570\u3068\u306f $\\R$ \u4e0a\u306e $C^\\infty$ \u51fd\u6570\u3067\u305d\u308c\u81ea\u8eab\u304a\u3088\u3073\u305d\u306e\u3059\u3079\u3066\u306e\u968e\u6570\u306e\u5c0e\u51fd\u6570\u306b\u4efb\u610f\u306e\u591a\u9805\u5f0f\u51fd\u6570\u3092\u304b\u3051\u305f\u3082\u306e\u304c $|x|\\to\\infty$ \u3067 $0$ \u306b\u53ce\u675f\u3059\u308b\u3082\u306e\u306e\u3053\u3068\u3067\u3042\u308b.) \u3055\u3089\u306b, \n\n$$\n\\eta(0)=1, \\quad \\eta'(0)=0\n$$\n\n\u3068\u4eee\u5b9a\u3059\u308b. \u4f8b\u3048\u3070 $\\eta(x)=e^{-x^2}$ \u306f\u305d\u306e\u3088\u3046\u306a\u51fd\u6570\u306e\u4f8b\u306b\u306a\u3063\u3066\u3044\u308b.\n\n\u3053\u306e\u3068\u304d, $\\eta(x)$ \u304c\u6025\u6e1b\u5c11\u51fd\u6570\u3067\u3042\u308b\u3053\u3068\u3088\u308a, $N>0$ \u306e\u3068\u304d, \u7d1a\u6570\n\n$$\n\\sum_{n=1}^\\infty n^r \\eta(n/N) = 1^r\\eta(1/N) + 2^r\\eta(2/N) + 3^r\\eta(3/N) + \\cdots\n$$\n\n\u306f\u5e38\u306b\u7d76\u5bfe\u53ce\u675f\u3059\u308b. $r$ \u304c\u975e\u8ca0\u306e\u6574\u6570\u306e\u3068\u304d, $N\\to\\infty$ \u3068\u3059\u308b\u3068, \u3053\u306e\u7d1a\u6570\u306f\u767a\u6563\u7d1a\u6570 $1^r+2^r+3^r+\\cdots$ \u306b\u306a\u3063\u3066\u3057\u307e\u3046. 
\u4ee5\u4e0b\u306e\u76ee\u6a19\u306f, Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092\u4f7f\u3046\u3068, \u305d\u306e $N\\to\\infty$ \u3067\u306e\u767a\u6563\u90e8\u5206\u304c $CN^{r+1}$ ($C$ \u306f $\\eta$ \u3068 $r$ \u3067\u5177\u4f53\u7684\u306b\u6c7a\u307e\u308b\u5b9a\u6570) \u306e\u5f62\u306b\u307e\u3068\u307e\u308b\u3053\u3068\u3092\u793a\u3059\u3053\u3068\u3067\u3042\u308b. \u305d\u3057\u3066, \u6b8b\u3063\u305f\u6709\u9650\u90e8\u5206\u306f**\u5e38\u306b** $\\zeta(-r)$ \u306b\u53ce\u675f\u3059\u308b\u3053\u3068\u3082\u793a\u3055\u308c\u308b.\n\n$\\tilde{B}_n(x)=B_n(x-\\lfloor x\\rfloor)$ \u3068\u66f8\u304f\u3053\u3068\u306b\u3059\u308b.\n\n\u3053\u306e\u3068\u304d, $f(x)=\\eta(x/N)$ \u306bEuler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092\u9069\u7528\u3059\u308b\u3068, $f(0)=1$, $f'(0)=f(\\infty)=f'(\\infty)=0$ \u3088\u308a, \n$$\n\\begin{aligned}\n1+\\sum_{n=1}^\\infty\\eta(x/N) &= \n\\sum_{n=0}^\\infty\\eta(x/N) \n\\\\ &= \n\\int_0^\\infty\\eta(x/N)\\,dx + \\frac{1}{2} +B_2(f'(\\infty)-f'(0)) - \n\\int_0^\\infty\\frac{\\tilde{B}_2(x)}{2!}\\frac{1}{N^2}\\eta''(x/N)\\,dx\n\\\\ &=\nN\\int_0^\\infty\\eta(y)\\,dy + \\frac{1}{2} -\n\\frac{1}{N}\\int_0^\\infty\\frac{\\tilde{B}_2(Ny)}{2!}\\eta''(y)\\,dy.\n\\end{aligned}\n$$\n\n\u3086\u3048\u306b, $\\zeta(0)=-1/2$ \u3092\u4f7f\u3046\u3068,\n\n$$\n\\sum_{n=1}^\\infty\\eta(x/N) - N\\int_0^\\infty\\eta(y)\\,dy =\n\\zeta(0) + O(1/N).\n$$\n\n\u3053\u308c\u306f $N\\to\\infty$ \u3067\u767a\u6563\u7d1a\u6570 $1+1+1+1+\\cdots$ \u306b\u306a\u308b\u7121\u9650\u548c $\\ds \\sum_{n=1}^\\infty\\eta(x/N)$ \u304b\u3089, \u305d\u306e\u767a\u6563\u90e8\u5206 $\\ds N\\int_0^\\infty\\eta(y)\\,dy$ \u3092\u5f15\u304d\u53bb\u3063\u3066, $N\\to\\infty$ \u306e\u6975\u9650\u3092\u53d6\u308b\u3068, $\\zeta(0)$ \u306b\u53ce\u675f\u3059\u308b\u3053\u3068\u3092\u610f\u5473\u3057\u3066\u3044\u308b. \u3053\u308c\u304c\u6b32\u3057\u3044\u7d50\u679c\u306e1\u3064\u76ee\u3067\u3042\u308b.\n\n$r$ \u306f\u6b63\u306e\u6574\u6570\u3067\u3042\u308b\u3068\u3057, $f(x)=x^r\\eta(x/N)$ \u3068\u304a\u304f. \u305d\u306e\u3068\u304d, Leibnitz\u5247\n\n$$\n(\\varphi(x) \\psi(x))^{(m)} = \\sum_{i=0}^r \\binom{m}{i}\\varphi^{(i)}(x)\\psi^{(m-i)}(x)\n\\\\\n$$\n\n\u3092\u4f7f\u3046\u3068,\n\n$$\nf^{(r+2)}(x) = \\frac{1}{N^2}F(x/N), \\quad\nF(y) = \\binom{r+2}{0}y^r\\eta^{(r+2)}(y) + \\cdots + \\binom{r+2}{r}r!\\eta(y)\n$$\n\n\u305d\u306e $f(x)$ \u306bEuler-Maclaurin\u306e\u548c\u516c\u5f0f\u3092\u9069\u7528\u3059\u308b\u3068, $f^{(k)}(\\infty)=f^{(k)}(\\infty)=0$ \u304a\u3088\u3073,\n\n$$\nf(0) = f'(0) = \\cdots = f^{(r-1)}(0) = f^{(r+1)}(0) = 0, \\quad\nf^{(r)}(0) = r!\n$$\n\n\u3088\u308a, \n\n$$\n\\begin{aligned}\n\\sum_{n=1}^\\infty n^r\\eta(n/N) &=\n\\sum_{n=0}^\\infty f(n) =\n\\int_0^\\infty f(x)\\,dx - \\frac{B_{r+1}}{(r+1)!}r! 
- \\frac{B_{r+2}}{(r+2)!}0 + \n(-1)^{r+1}\\int_0^\\infty \\frac{\\tilde{B}_{r+2}(x)}{(r+2)!} f^{(r+2)}(x)\\,dx\n\\\\ &=\nN^{r+1}\\int_0^\\infty y^r\\eta(y)\\,dy - \\frac{B_{r+1}}{r+1} +\n(-1)^{r+1}\\frac{1}{N}\\int_0^\\infty \\frac{\\tilde{B}_{r+2}(Ny)}{(r+2)!} F(y)\\,dy\n\\\\ &=\nN^{r+1}\\int_0^\\infty y^r\\eta(y)\\,dy - \\frac{B_{r+1}}{r+1} + O(1/N).\n\\end{aligned}\n$$\n\n\u3086\u3048\u306b, $\\ds\\zeta(-r)=-\\frac{B_{r+1}}{r+1}$ \u3092\u4f7f\u3046\u3068,\n\n$$\n\\sum_{n=1}^\\infty n^r\\eta(n/N) - N^{r+1}\\int_0^\\infty y^r\\eta(y)\\,dy =\n\\zeta(-r) + O(1/N).\n$$\n\n\u3053\u308c\u306f $N\\to\\infty$ \u3067\u767a\u6563\u7d1a\u6570 $1^r+2^r+3^r+4^r+\\cdots$ \u306b\u306a\u308b\u7121\u9650\u548c $\\ds \\sum_{n=1}^\\infty n^r\\eta(x/N)$ \u304b\u3089, \u305d\u306e\u767a\u6563\u90e8\u5206 $\\ds N^{r+1}\\int_0^\\infty y^r\\eta(y)\\,dy$ \u3092\u5f15\u304d\u53bb\u3063\u3066, $N\\to\\infty$ \u306e\u6975\u9650\u3092\u53d6\u308b\u3068, $\\zeta(-r)$ \u306b\u53ce\u675f\u3059\u308b\u3053\u3068\u3092\u610f\u5473\u3057\u3066\u3044\u308b. \u3053\u308c\u304c\u6b32\u3057\u3044\u7d50\u679c\u3067\u3042\u308b.\n\n**\u6ce8\u610f:** \u4ee5\u4e0a\u306e\u8a08\u7b97\u306e\u30dd\u30a4\u30f3\u30c8\u306f, \u975e\u8ca0\u306e\u6025\u6e1b\u5c11\u51fd\u6570 $\\eta(x)$ \u3067 $\\eta(0)=1$, $\\eta'(0)=0$ \u3092\u6e80\u305f\u3059\u3082\u306e\u3067\u767a\u6563\u7d1a\u6570\u3092\u6b63\u5247\u5316\u3057\u3066\u5f97\u3089\u308c\u308b\u7d1a\u6570\u306e\u5834\u5408\u306b\u306f, Euler-Maclaurin\u306e\u548c\u516c\u5f0f\u306e\u300c\u9014\u4e2d\u306e\u9805\u300d\u304c\u307b\u3068\u3093\u3069\u6d88\u3048\u3066\u3057\u307e\u3046\u3053\u3068\u3067\u3042\u308b. $C N^{r+1}$ \u578b\u306e\u767a\u6563\u9805\u3068\u5b9a\u6570\u9805\u3068 $O(1/N)$ \u306e\u90e8\u5206\u306e3\u3064\u306e\u9805\u3057\u304b\u751f\u304d\u6b8b\u3089\u306a\u3044. $\\QED$\n\n**\u6ce8\u610f:** \u4ee5\u4e0a\u306e\u7d50\u679c\u306b\u95a2\u3059\u308b\u3088\u308a\u9032\u3093\u3060\u89e3\u8aac\u306b\u3064\u3044\u3066\u306f\u6b21\u306e\u30ea\u30f3\u30af\u5148\u3092\u53c2\u7167\u305b\u3088:\n\n* Terence Tao, The Euler-Maclaurin formula, Bernoulli numbers, the zeta function, and real-variable analytic continuation, Blog: What's new, 10 April, 2010.\n\n\u3053\u306e\u30d6\u30ed\u30b0\u8a18\u4e8b\u306f\u304b\u306a\u308a\u8aad\u307f\u6613\u3044. $\\QED$\n\n**\u554f\u984c:** \u4ee5\u4e0a\u306e\u7d50\u679c\u3092\u6570\u5024\u8a08\u7b97\u3067\u3082\u78ba\u8a8d\u3057\u3066\u307f\u3088. $\\QED$\n\n**\u30d2\u30f3\u30c8:** $\\eta(x)=e^{-x^2}$ \u306e\u5834\u5408\u3092\u8a66\u3057\u3066\u307f\u3088. \u305d\u306e\u3068\u304d,\n\n$$\n\\int_0^\\infty y^r\\eta(y)\\,dy = \n\\int_0^\\infty y^r e^{-y^2}\\,dy = \n\\frac{1}{2}\\Gamma\\left(\\frac{r+1}{2}\\right)\n$$\n\n\u3068\u306a\u3063\u3066\u3044\u308b. 
$\\QED$\n\n**\u89e3\u7b54\u4f8b:** \u6b21\u306e\u30ea\u30f3\u30af\u5148\u306e\u30ce\u30fc\u30c8\u3092\u898b\u3088.\n\n* \u9ed2\u6728\u7384, \u03b6(s) \u306e Re s \uff1c 1 \u3067\u306e\u69d8\u5b50 $\\QED$\n\n\n```julia\ny = symbols(\"y\", real=true)\nr = symbols(\"r\", positive=true)\nintegrate(y^r*exp(-y^2), (y, 0, oo))\n```\n\n\n\n\n\\begin{equation*}\\frac{\\Gamma\\left(\\frac{r}{2} + \\frac{1}{2}\\right)}{2}\\end{equation*}\n\n\n\n\n```julia\n\n```\n", "meta": {"hexsha": "67c37566ed2a68e9e13fdf3a6bda3d56f06ed915", "size": 794741, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "13 Euler-Maclaurin summation formula.ipynb", "max_stars_repo_name": "genkuroki/Calculus", "max_stars_repo_head_hexsha": "424ef53bf493242ce48c58ba39e43b8e601eb403", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 20, "max_stars_repo_stars_event_min_datetime": "2018-06-22T13:24:20.000Z", "max_stars_repo_stars_event_max_datetime": "2021-05-17T00:04:57.000Z", "max_issues_repo_path": "13 Euler-Maclaurin summation formula.ipynb", "max_issues_repo_name": "genkuroki/Calculus", "max_issues_repo_head_hexsha": "424ef53bf493242ce48c58ba39e43b8e601eb403", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "13 Euler-Maclaurin summation formula.ipynb", "max_forks_repo_name": "genkuroki/Calculus", "max_forks_repo_head_hexsha": "424ef53bf493242ce48c58ba39e43b8e601eb403", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2019-12-28T19:57:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-09-06T23:23:46.000Z", "avg_line_length": 102.3754991627, "max_line_length": 3580, "alphanum_fraction": 0.6436927251, "converted": true, "num_tokens": 30688, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4455295350395727, "lm_q2_score": 0.21469142916152645, "lm_q1q2_score": 0.09565137261131625}} {"text": "\n\nEst\u00e9 um notebook Colab contendo exerc\u00edcios de programa\u00e7\u00e3o em python, numpy e pytorch.\n\n## Coloque seu nome\n\n\n```python\nprint('Meu nome \u00e9: Fernanda Caldas')\n```\n\n Meu nome \u00e9: Fernanda Caldas\n\n\n# Parte 1:\n\n## Exerc\u00edcios de Processamento de Dados\n\nNesta parte pode-se usar as bibliotecas nativas do python como a `collections`, `re` e `random`. 
Tamb\u00e9m pode-se usar o NumPy.\n\n## Exerc\u00edcio 1.1\nCrie um dicion\u00e1rio com os `k` itens mais frequentes de uma lista.\n\nPor exemplo, dada a lista de itens `L=['a', 'a', 'd', 'b', 'd', 'c', 'e', 'a', 'b', 'e', 'e', 'a']` e `k=2`, o resultado deve ser um dicion\u00e1rio cuja chave \u00e9 o item e o valor \u00e9 a sua frequ\u00eancia: {'a': 4, 'e': 3}\n\n\n```python\nfrom collections import Counter\n\ndef top_k(L, k):\n return Counter(L).most_common(k)\n```\n\nMostre que sua implementa\u00e7\u00e3o est\u00e1 correta usando uma entrada com poucos itens:\n\n\n```python\nL = ['f', 'a', 'a', 'd', 'b', 'd', 'c', 'e', 'a', 'b', 'e', 'e', 'a', 'd']\nk = 3\nresultado = top_k(L=L, k=k)\nprint(f'resultado: {resultado}')\n```\n\n resultado: [('a', 4), ('d', 3), ('e', 3)]\n\n\nMostre que sua implementa\u00e7\u00e3o \u00e9 eficiente usando uma entrada com 10M de itens:\n\n\n```python\nimport random\nL = random.choices('abcdefghijklmnopqrstuvwxyz', k=10_000_000)\nk = 10000\n```\n\n\n```python\n%%timeit\nresultado = top_k(L=L, k=k)\n```\n\n 1 loop, best of 5: 538 ms per loop\n\n\n## Exerc\u00edcio 1.2\n\nEm processamento de linguagem natural, \u00e9 comum convertemos as palavras de um texto para uma lista de identificadores dessas palavras. Dado o dicion\u00e1rio `V` abaixo onde as chaves s\u00e3o palavras e os valores s\u00e3o seus respectivos identificadores, converta o texto `D` para uma lista de identificadores.\n\nPalavras que n\u00e3o existem no dicion\u00e1rio dever\u00e3o ser convertidas para o identificador do token `unknown`.\n\nO c\u00f3digo deve ser insens\u00edvel a mai\u00fasculas (case-insensitive).\n\nSe atente que pontua\u00e7\u00f5es (v\u00edrgulas, ponto final, etc) tamb\u00e9m s\u00e3o consideradas palavras.\n\n\n```python\n\"\"\"\nEu tinha conseguido fazer at\u00e9 a parte abaixo. \nN\u00e3o estava encontrando uma fun\u00e7\u00e3o para substituir o vetor K pelos identificadores, nem tornar case-insensitive.\n\"\"\"\n\nimport re\nD = 'Eu gosto de comer pizza.'\nK = re.findall(r\"[\\w']+|[.,!?;]\", D)\nprint(K)\n\"\"\"\nNo c\u00f3digo do aluno Andersson Andre\u00e9 Romero Deza, aprendi essa fun\u00e7\u00e3o \"get()\" que substitui os tokens pelos identificadores.\n\nO segundo par\u00e2metro (value) retorna o valor atribu\u00eddo a uma palavra que n\u00e3o esteja no dicion\u00e1rio. 
[https://www.w3schools.com/python/ref_dictionary_get.asp]\n\nNo mesmo c\u00f3digo, tamb\u00e9m encontrei a op\u00e7\u00e3o \"text.lower()\" que torna o c\u00f3digo case-insensitive ao deixar tudo min\u00fasculo.\n\nAgrade\u00e7o ao Andersson pela ajuda neste exerc\u00edcio.\n\"\"\"\n\ndef tokens_to_ids(text, vocabulary):\n import re\n \n K = re.findall(r\"[\\w']+|[.,!?;]\", text.lower())\n ids = []\n \n for k in K:\n ids.append(vocabulary.get(k, vocabulary['unknown']))\n \n return ids\n```\n\n ['Eu', 'gosto', 'de', 'comer', 'pizza', '.']\n\n\nMostre que sua implementa\u00e7\u00e3o esta correta com um exemplo pequeno:\n\n---\n\n\n\n\n```python\nV = {'eu': 1, 'de': 2, 'gosto': 3, 'comer': 4, '.': 5, 'unknown': -1}\nD = 'Eu gosto de comer pizza.'\n\nprint(tokens_to_ids(D, V))\n```\n\n [1, 3, 2, 4, -1, 5]\n\n\nMostre que sua implementa\u00e7\u00e3o \u00e9 eficiente com um exemplo grande:\n\n\n```python\nV = {'eu': 1, 'de': 2, 'gosto': 3, 'comer': 4, '.': 5, 'unknown': -1}\nD = ' '.join(1_000_000 * ['Eu gosto de comer pizza.'])\n```\n\n\n```python\n%%timeit\nresultado = tokens_to_ids(D, V)\n```\n\n 1 loop, best of 5: 2.62 s per loop\n\n\n## Exerc\u00edcio 1.3\n\nEm aprendizado profundo \u00e9 comum termos que lidar com arquivos muito grandes.\n\nDado um arquivo de texto onde cada item \u00e9 separado por `\\n`, escreva um programa que amostre `k` itens desse arquivo aleatoriamente.\n\nNota 1: Assuma amostragem de uma distribui\u00e7\u00e3o uniforme, ou seja, todos os itens tem a mesma probablidade de amostragem.\n\nNota 2: Assuma que o arquivo n\u00e3o cabe em mem\u00f3ria.\n\nNota 3: Utilize apenas bibliotecas nativas do python.\n\n\n```python\ndef sample(path: str, k: int):\n import random\n \n with open(path) as f:\n D = [line.rstrip('\\n') for line in f] #Fonte: https://stackoverflow.com/a/17570045\n \n return random.choices(D, k = k)\n```\n\nMostre que sua implementa\u00e7\u00e3o est\u00e1 correta com um exemplo pequeno:\n\n\n```python\nfilename = 'small.txt'\ntotal_size = 100\nn_samples = 10\n\nwith open(filename, 'w') as fout:\n fout.write('\\n'.join(f'line {i}' for i in range(total_size)))\n\nsamples = sample(path=filename, k=n_samples)\nprint(samples)\nprint(len(samples) == n_samples)\n```\n\n ['line 99', 'line 94', 'line 3', 'line 56', 'line 98', 'line 17', 'line 96', 'line 41', 'line 58', 'line 86']\n True\n\n\nMostre que sua implementa\u00e7\u00e3o \u00e9 eficiente com um exemplo grande:\n\n\n```python\nfilename = 'large.txt'\ntotal_size = 1_000_000\nn_samples = 10000\n\nwith open(filename, 'w') as fout:\n fout.write('\\n'.join(f'line {i}' for i in range(total_size)))\n```\n\n\n```python\n%%timeit\nsamples = sample(path=filename, k=n_samples)\nassert len(samples) == n_samples\n```\n\n 1 loop, best of 5: 256 ms per loop\n\n\n# Parte 2:\n\n## Exerc\u00edcios de Numpy\n\nNesta parte deve-se usar apenas a biblioteca NumPy. 
Aqui n\u00e3o se pode usar o PyTorch.\n\n## Exerc\u00edcio 2.1\n\nQuantos opera\u00e7\u00f5es de ponto flutuante (flops) de soma e de multiplica\u00e7\u00e3o tem a multiplica\u00e7\u00e3o matricial $AB$, sendo que a matriz $A$ tem tamanho $m \\times n$ e a matriz $B$ tem tamanho $n \\times p$?\n\nResposta:\n- n\u00famero de somas: $m\\cdot p\\cdot n$\n- n\u00famero de multiplica\u00e7\u00f5es: $m\\cdot p\\cdot (n-1)$\n\n## Exerc\u00edcio 2.2\n\nEm programa\u00e7\u00e3o matricial, n\u00e3o se faz o loop em cada elemento da matriz,\nmas sim, utiliza-se opera\u00e7\u00f5es matriciais.\n\nDada a matriz `A` abaixo, calcule a m\u00e9dia dos valores de cada linha sem utilizar la\u00e7os expl\u00edcitos.\n\nUtilize apenas a biblioteca numpy.\n\n\n```python\nimport numpy as np\nnp.set_printoptions(edgeitems=10, linewidth=180)\n```\n\n\n```python\nA = np.arange(24).reshape(4, 6)\nprint(A)\n```\n\n [[ 0 1 2 3 4 5]\n [ 6 7 8 9 10 11]\n [12 13 14 15 16 17]\n [18 19 20 21 22 23]]\n\n\n\n```python\nnp.sum(A, axis=1)/(A.shape[1])\n```\n\n\n\n\n array([ 2.5, 8.5, 14.5, 20.5])\n\n\n\n## Exerc\u00edcio 2.3\n\nSeja a matriz $C$ que \u00e9 a normaliza\u00e7\u00e3o da matriz $A$:\n$$ C(i,j) = \\frac{A(i,j) - A_{min}}{A_{max} - A_{min}} $$\n\nNormalizar a matriz `A` do exerc\u00edcio acima de forma que seus valores fiquem entre 0 e 1.\n\n\n```python\nfrom numpy import array\n\nC = (array(A) - np.amin(A))/(np.amax(A) - np.amin(A))\nC\n```\n\n\n\n\n array([[0. , 0.04347826, 0.08695652, 0.13043478, 0.17391304, 0.2173913 ],\n [0.26086957, 0.30434783, 0.34782609, 0.39130435, 0.43478261, 0.47826087],\n [0.52173913, 0.56521739, 0.60869565, 0.65217391, 0.69565217, 0.73913043],\n [0.7826087 , 0.82608696, 0.86956522, 0.91304348, 0.95652174, 1. ]])\n\n\n\n## Exerc\u00edcio 2.4\n\nModificar o exerc\u00edcio anterior de forma que os valores de cada *coluna* da matriz `A` sejam normalizados entre 0 e 1 independentemente dos valores das outras colunas.\n\n\n\n```python\nC = (array(A) - array(A.min(axis=0)))/(array(A.max(axis=0)) - array(A.min(axis=0)))\nC\n```\n\n\n\n\n array([[0. , 0. , 0. , 0. , 0. , 0. ],\n [0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333, 0.33333333],\n [0.66666667, 0.66666667, 0.66666667, 0.66666667, 0.66666667, 0.66666667],\n [1. , 1. , 1. , 1. , 1. , 1. ]])\n\n\n\n## Exerc\u00edcio 2.5\n\nModificar o exerc\u00edcio anterior de forma que os valores de cada *linha* da matriz `A` sejam normalizados entre 0 e 1 independentemente dos valores das outras linhas.\n\n\n\n```python\nC = ((array(A.T) - array((A.T).min(axis=0)))/(array((A.T).max(axis=0)) - array((A.T).min(axis=0)))).T\nC\n```\n\n\n\n\n array([[0. , 0.2, 0.4, 0.6, 0.8, 1. ],\n [0. , 0.2, 0.4, 0.6, 0.8, 1. ],\n [0. , 0.2, 0.4, 0.6, 0.8, 1. ],\n [0. , 0.2, 0.4, 0.6, 0.8, 1. ]])\n\n\n\n## Exerc\u00edcio 2.6\n\nA [fun\u00e7\u00e3o softmax](https://en.wikipedia.org/wiki/Softmax_function) \u00e9 bastante usada em apredizado de m\u00e1quina para converter uma lista de n\u00fameros para uma distribui\u00e7\u00e3o de probabilidade, isto \u00e9, os n\u00fameros ficar\u00e3o normalizados entre zero e um e sua soma ser\u00e1 igual \u00e0 um.\n\nImplemente a fun\u00e7\u00e3o softmax com suporte para batches, ou seja, o softmax deve ser aplicado a cada linha da matriz. Deve-se usar apenas a biblioteca numpy. Se atente que a exponencia\u00e7\u00e3o gera estouro de representa\u00e7\u00e3o quando os n\u00fameros da entrada s\u00e3o muito grandes. 
Tente corrigir isto.\n\n#### Resposta:\n\nEm [Stack Overflow](https://stackoverflow.com/a/34969389), \u00e9 sugerida a seguinte adapta\u00e7\u00e3o para n\u00fameros muito grandes:\n\n\\begin{equation}\n\\sigma(\\mathbf{z})_i = \\frac{e^{z_i - z_{max}}}{\\sum_{j=1}^K e^{z_j - z_{max}}}\n\\end{equation}\npois ter\u00edamos\n\n\\begin{equation}\n\\sigma(\\mathbf{z})_i = \\frac{e^{z_i}}{e^{z_{max}}\\sum_{j=1}^K e^{z_j} e^{-z_{max}}} = \\frac{e^{z_i}}{\\sum_{j=1}^K e^{z_j}}\n\\end{equation}\n\n\n```python\nimport numpy as np\nfrom numpy import array\n\n\ndef softmax(A):\n '''\n Aplica a fun\u00e7\u00e3o de softmax \u00e0 matriz `A`.\n\n Entrada:\n `A` \u00e9 uma matriz M x N, onde M \u00e9 o n\u00famero de exemplos a serem processados\n independentemente e N \u00e9 o tamanho de cada exemplo.\n \n Sa\u00edda:\n Uma matriz M x N, onde a soma de cada linha \u00e9 igual a um.\n '''\n aux = np.zeros(A.shape)\n mx = A.max(axis=1)\n den = np.sum(np.exp(array(A.T) - (A.T).max(axis=0)), axis=0)\n for i in range(A.shape[0]):\n aux[i] = np.exp(array(A[i,:]) - array(mx[i]))/array(den[i])\n \n return aux\n```\n\nMostre que sua implementa\u00e7\u00e3o est\u00e1 correta usando uma matriz pequena como entrada:\n\n\n```python\nA = np.array([[0.5, -1, 1000],\n [-2, 0, 0.5]])\nsoftmax(A)\n```\n\n\n\n\n array([[0. , 0. , 1. ],\n [0.04861082, 0.35918811, 0.59220107]])\n\n\n\nO c\u00f3digo a seguir verifica se sua implementa\u00e7\u00e3o do softmax est\u00e1 correta. \n- A soma de cada linha de A deve ser 1;\n- Os valores devem estar entre 0 e 1\n\n\n```python\nnp.allclose(softmax(A).sum(axis=1), 1) and softmax(A).min() >= 0 and softmax(A).max() <= 1\n```\n\n\n\n\n True\n\n\n\nMostre que sua implementa\u00e7\u00e3o \u00e9 eficiente usando uma matriz grande como entrada:\n\n\n```python\nA = np.random.uniform(low=-10, high=10, size=(128, 100_000))\n```\n\n\n```python\n%%timeit\nsoftmax(A)\n```\n\n 1 loop, best of 5: 593 ms per loop\n\n\n\n```python\nSM = softmax(A)\nnp.allclose(SM.sum(axis=1), 1) and SM.min() >= 0 and SM.max() <= 1\n```\n\n\n\n\n True\n\n\n\n## Exerc\u00edcio 2.7\n\nA codifica\u00e7\u00e3o one-hot \u00e9 usada para codificar entradas categ\u00f3ricas. 
\u00c9 uma codifica\u00e7\u00e3o onde apenas um bit \u00e9 1 e os demais s\u00e3o zero, conforme a tabela a seguir.\n\n| Decimal | Binary | One-hot\n| ------- | ------ | -------\n| 0 | 000 | 1 0 0 0 0 0 0 0\n| 1 | 001 | 0 1 0 0 0 0 0 0\n| 2 | 010 | 0 0 1 0 0 0 0 0\n| 3 | 011 | 0 0 0 1 0 0 0 0\n| 4 | 100 | 0 0 0 0 1 0 0 0\n| 5 | 101 | 0 0 0 0 0 1 0 0\n| 6 | 110 | 0 0 0 0 0 0 1 0\n| 7 | 111 | 0 0 0 0 0 0 0 1\n\nImplemente a fun\u00e7\u00e3o one_hot(y, n_classes) que codifique o vetor de inteiros y que possuem valores entre 0 e n_classes-1.\n\n\n\n```python\nimport sys\nnp.set_printoptions(suppress=True)\n\ndef one_hot(y, n_classes):\n A = ((10**(n_classes - y - 1)).astype(int)).astype(str)\n \n return np.char.zfill(A, n_classes)\n```\n\n\n```python\nN_CLASSES = 9\nN_SAMPLES = 10\ny = (np.random.rand((N_SAMPLES)) * N_CLASSES).astype(int)\nprint(y)\nprint(one_hot(y, N_CLASSES))\n```\n\n [3 8 5 5 6 7 6 7 2 4]\n ['000100000' '000000001' '000001000' '000001000' '000000100' '000000010' '000000100' '000000010' '001000000' '000010000']\n\n\nMostre que sua implementa\u00e7\u00e3o \u00e9 eficiente usando uma matriz grande como entrada:\n\n\n```python\nN_SAMPLES = 100_000\nN_CLASSES = 1_000\ny = (np.random.rand((N_SAMPLES)) * N_CLASSES).astype(int)\n```\n\n\n```python\n%%timeit\none_hot(y, N_CLASSES)\n```\n\n 1 loop, best of 5: 221 ms per loop\n\n\n## Exerc\u00edcio 2.8\n\nImplemente uma classe que normalize um array de pontos flutuantes `array_a` para a mesma m\u00e9dia e desvio padr\u00e3o de um outro array `array_b`, conforme exemplo abaixo:\n```\narray_a = np.array([-1, 1.5, 0])\narray_b = np.array([1.4, 0.8, 0.3, 2.5])\nnormalize = Normalizer(array_b)\nnormalized_array = normalize(array_a)\nprint(normalized_array) # Deve imprimir [0.3187798 2.31425165 1.11696854]\n```\n\nMostre que seu c\u00f3digo est\u00e1 correto com o exemplo abaixo:\n\n\n```python\narray_a = [-1, 1.5, 0]\narray_b = [1.4, 0.8, 0.3, 2.5]\nnormalize = Normalizer(array_b)\nnormalized_array = normalize(array_a)\nprint(normalized_array)\n```\n\n# Parte 3:\n\n## Exerc\u00edcios Pytorch: Grafo Computacional e Gradientes\n\nNesta parte pode-se usar quaisquer bibliotecas.\n\nUm dos principais fundamentos para que o PyTorch seja adequado para deep learning \u00e9 a sua habilidade de calcular o gradiente automaticamente a partir da express\u00f5es definidas. Essa facilidade \u00e9 implementada atrav\u00e9s do c\u00e1lculo autom\u00e1tico do gradiente e constru\u00e7\u00e3o din\u00e2mica do grafo computacional.\n\n## Grafo computacional\n\nSeja um exemplo simples de uma fun\u00e7\u00e3o de perda J dada pela Soma dos Erros ao Quadrado (SEQ - Sum of Squared Errors): \n$$ J = \\sum_i (x_i w - y_i)^2 $$\nque pode ser reescrita como:\n$$ \\hat{y_i} = x_i w $$\n$$ e_i = \\hat{y_i} - y_i $$\n$$ e2_i = e_i^2 $$\n$$ J = \\sum_i e2_i $$\n\nAs redes neurais s\u00e3o treinadas atrav\u00e9s da minimiza\u00e7\u00e3o de uma fun\u00e7\u00e3o de perda usando o m\u00e9todo do gradiente descendente. Para ajustar o par\u00e2metro $w$ precisamos calcular o gradiente $ \\frac{ \\partial J}{\\partial w} $. 
Usando a\nregra da cadeia podemos escrever:\n$$ \\frac{ \\partial J}{\\partial w} = \\frac{ \\partial J}{\\partial e2_i} \\frac{ \\partial e2_i}{\\partial e_i} \\frac{ \\partial e_i}{\\partial \\hat{y_i} } \\frac{ \\partial \\hat{y_i}}{\\partial w}$$ \n\n```\n y_pred = x * w\n e = y_pred - y\n e2 = e**2\n J = e2.sum()\n```\n\nAs quatro express\u00f5es acima, para o c\u00e1lculo do J podem ser representadas pelo grafo computacional visualizado a seguir: os c\u00edrculos s\u00e3o as vari\u00e1veis (tensores), os quadrados s\u00e3o as opera\u00e7\u00f5es, os n\u00fameros em preto s\u00e3o os c\u00e1lculos durante a execu\u00e7\u00e3o das quatro express\u00f5es para calcular o J (forward, predict). O c\u00e1lculo do gradiente, mostrado em vermelho, \u00e9 calculado pela regra da cadeia, de tr\u00e1s para frente (backward).\n\n\n\nPara entender melhor o funcionamento do grafo computacional com os tensores, recomenda-se leitura em:\n\nhttps://pytorch.org/docs/stable/notes/autograd.html\n\n\n```python\nimport torch\n```\n\n\n```python\ntorch.__version__\n```\n\n\n\n\n '1.10.0+cu111'\n\n\n\n**Tensor com atributo .requires_grad=True**\n\nQuando um tensor possui o atributo `requires_grad` como verdadeiro, qualquer express\u00e3o que utilizar esse tensor ir\u00e1 construir um grafo computacional para permitir posteriormente, ap\u00f3s calcular a fun\u00e7\u00e3o a ser derivada, poder usar a regra da cadeia e calcular o gradiente da fun\u00e7\u00e3o em termos dos tensores que possuem o atributo `requires_grad`.\n\n\n\n```python\ny = torch.arange(0, 8, 2).float()\ny\n```\n\n\n\n\n tensor([0., 2., 4., 6.])\n\n\n\n\n```python\nx = torch.arange(0, 4).float()\nx\n```\n\n\n\n\n tensor([0., 1., 2., 3.])\n\n\n\n\n```python\nw = torch.ones(1, requires_grad=True)\nw\n```\n\n\n\n\n tensor([1.], requires_grad=True)\n\n\n\n## C\u00e1lculo autom\u00e1tico do gradiente da fun\u00e7\u00e3o perda J\n\nSeja a express\u00e3o: $$ J = \\sum_i ((x_i w) - y_i)^2 $$\n\nQueremos calcular a derivada de $J$ em rela\u00e7\u00e3o a $w$.\n\n## Forward pass\n\nDurante a execu\u00e7\u00e3o da express\u00e3o, o grafo computacional \u00e9 criado. Compare os valores de cada parcela calculada com os valores em preto da figura ilustrativa do grafo computacional.\n\n\n```python\n# predict (forward)\ny_pred = x * w; print('y_pred =', y_pred)\n\n# c\u00e1lculo da perda J: loss\ne = y_pred - y; print('e =',e)\ne2 = e.pow(2) ; print('e2 =', e2)\nJ = e2.sum() ; print('J =', J)\n```\n\n y_pred = tensor([0., 1., 2., 3.], grad_fn=)\n e = tensor([ 0., -1., -2., -3.], grad_fn=)\n e2 = tensor([0., 1., 4., 9.], grad_fn=)\n J = tensor(14., grad_fn=)\n\n\n## Backward pass\n\nO `backward()` varre o grafo computacional a partir da vari\u00e1vel a ele associada (raiz) e calcula o gradiente para todos os tensores que possuem o atributo `requires_grad` como verdadeiro.\nObserve que os tensores que tiverem o atributo `requires_grad` ser\u00e3o sempre folhas no grafo computacional.\nO `backward()` destroi o grafo ap\u00f3s sua execu\u00e7\u00e3o. Esse comportamento \u00e9 padr\u00e3o no PyTorch. \n\nA t\u00edtulo ilustrativo, se quisermos depurar os gradientes dos n\u00f3s que n\u00e3o s\u00e3o folhas no grafo computacional, precisamos primeiro invocar `retain_grad()` em cada um desses n\u00f3s, como a seguir. 
Entretanto nos exemplos reais n\u00e3o h\u00e1 necessidade de verificar o gradiente desses n\u00f3s.\n\n\n```python\ne2.retain_grad()\ne.retain_grad()\ny_pred.retain_grad()\n```\n\nE agora calculamos os gradientes com o `backward()`.\n\nw.grad \u00e9 o gradiente de J em rela\u00e7\u00e3o a w.\n\n\n```python\nif w.grad: w.grad.zero_()\nJ.backward()\nprint(w.grad)\n```\n\n tensor([-28.])\n\n\nMostramos agora os gradientes que est\u00e3o grafados em vermelho no grafo computacional:\n\n\n```python\nprint(e2.grad)\nprint(e.grad)\nprint(y_pred.grad)\n```\n\n tensor([1., 1., 1., 1.])\n tensor([ 0., -2., -4., -6.])\n tensor([ 0., -2., -4., -6.])\n\n\n## Exerc\u00edcio 3.1\nCalcule o mesmo gradiente ilustrado no exemplo anterior usando a regra das diferen\u00e7as finitas, de acordo com a equa\u00e7\u00e3o a seguir, utilizando um valor de $\\Delta w$ bem pequeno.\n\n$$ \\frac{\\partial J}{\\partial w} = \\frac{J(w + \\Delta w) - J(w - \\Delta w)}{2 \\Delta w} $$\n\n\n```python\ndef J_func(w, x, y):\n J = torch.sum((x*w - y)**2)\n \n return J\n\ndef grad_J(w, dw, x, y):\n grad = (J_func(w + dw, x, y) - J_func(w - dw, x, y))/(2*dw)\n \n return grad\n\n# Calcule o gradiente usando a regra diferen\u00e7as finitas\n# Confira com o valor j\u00e1 calculado anteriormente\nx = torch.arange(0, 4).float()\ny = torch.arange(0, 8, 2).float()\nw = torch.ones(1)\ndw = 0.01*torch.ones(1)\ngrad = grad_J(w, dw, x, y)\nprint('grad=', grad)\n```\n\n grad= tensor([-28.0000])\n\n\n\n```python\nw\n```\n\n\n\n\n tensor([1.])\n\n\n\n\n```python\nx\n```\n\n\n\n\n tensor([0., 1., 2., 3.])\n\n\n\n## Exerc\u00edcio 3.2\n\nMinimizando $J$ pelo gradiente descendente\n\n$$ w_{k+1} = w_k - \\lambda \\frac {\\partial J}{\\partial w} $$\n\nSupondo que valor inicial ($k=0$) $w_0 = 1$, use learning rate $\\lambda = 0.01$ para calcular o valor do novo $w_{20}$, ou seja, fazendo 20 atualiza\u00e7\u00f5es de gradientes. Deve-se usar a fun\u00e7\u00e3o `J_func` criada no exerc\u00edcio anterior.\n\nConfira se o valor do primeiro gradiente est\u00e1 de acordo com os valores j\u00e1 calculado acima\n\n\n```python\nlearning_rate = 0.01\niteracoes = 20\n\nx = torch.arange(0, 4).float()\ny = torch.arange(0, 8, 2).float()\nw = torch.ones(1)\nJ = torch.ones(iteracoes)\ndw = 0.05*w\n\nfor i in range(iteracoes):\n print('i =', i)\n J[i] = J_func(w, x, y)\n print('J=', J[i])\n grad = grad_J(w, dw, x, y)\n print('grad =',grad)\n w = w - learning_rate*grad\n print('w =', w)\n\nimport matplotlib.pyplot as plt\n# Plote o gr\u00e1fico da loss J pela itera\u00e7\u00e3o i\nplt.plot(J)\n```\n\n## Exerc\u00edcio 3.3\n\nRepita o exerc\u00edcio 2 mas usando agora o calculando o gradiente usando o m\u00e9todo backward() do pytorch. Confira se o primeiro valor do gradiente est\u00e1 de acordo com os valores anteriores. Execute essa pr\u00f3xima c\u00e9lula duas vezes. Os valores devem ser iguais.\n\n\n\n```python\nlearning_rate = 0.01\niteracoes = 20\n\nx = torch.arange(0, 4).float()\ny = torch.arange(0, 8, 2).float()\nw = torch.ones(1, requires_grad=True)\n\nfor i in range(iteracoes):\n print('i =', i)\n J = J_func(w, x, y)\n print('J=', J)\n grad = ?\n print('grad =',grad)\n w = ?\n print('w =', w)\n\n# Plote aqui a loss pela itera\u00e7\u00e3o\n```\n\n##Exerc\u00edcio 3.4\n\nQuais s\u00e3o as restri\u00e7\u00f5es na escolha dos valores de $\\Delta w$ no c\u00e1lculo do gradiente por diferen\u00e7as finitas?\n\nResposta:\n\n##Exerc\u00edcio 3.5\n\nAt\u00e9 agora trabalhamos com $w$ contendo apenas um par\u00e2metro. 
Suponha agora que $w$ seja uma matriz com $N$ par\u00e2metros e que o custo para executar $(x_i w - y_i)^2$ seja $O(N)$.\n> a) Qual \u00e9 o custo computacional para fazer uma \u00fanica atualiza\u00e7\u00e3o (um passo de gradiente) dos par\u00e2metros de $w$ usando o m\u00e9todo das diferencas finitas?\n>\n> b) Qual \u00e9 o custo computacional para fazer uma \u00fanica atualiza\u00e7\u00e3o (um passo de gradiente) dos par\u00e2metros de $w$ usando o m\u00e9todo do backpropagation?\n\n\n\nResposta (justifique):\n\na)\n\nb)\n\n##Exerc\u00edcio 3.6\n\nQual o custo (entropia cruzada) esperado para um exemplo (uma amostra) no come\u00e7o do treinamento de um classificador inicializado aleatoriamente?\n\nA equa\u00e7\u00e3o da entropia cruzada \u00e9:\n$$L = - \\sum_{j=0}^{K-1} y_j \\log p_j, $$\nOnde:\n\n- K \u00e9 o n\u00famero de classes;\n\n- $y_j=1$ se $j$ \u00e9 a classe do exemplo (ground-truth), 0 caso contr\u00e1rio. Ou seja, $y$ \u00e9 um vetor one-hot;\n\n- $p_j$ \u00e9 a probabilidade predita pelo modelo para a classe $j$.\n\nA resposta tem que ser em fun\u00e7\u00e3o de uma ou mais das seguintes vari\u00e1veis:\n\n- K = n\u00famero de classes\n\n- B = batch size\n\n- D = dimens\u00e3o de qualquer vetor do modelo\n\n- LR = learning rate\n\nResposta:\n\nFim do notebook.\n", "meta": {"hexsha": "6a5a6fd6c567b964e0343fff833771a2af520aaf", "size": 66417, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ex01/Fernanda_Caldas/FernandaCaldas_Semana_1.ipynb", "max_stars_repo_name": "flych3r/IA025_2022S1", "max_stars_repo_head_hexsha": "8a5a92a0d22c3a602906bdc3b8c7eb8ae325e88b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-03-20T21:16:14.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-20T22:20:26.000Z", "max_issues_repo_path": "ex01/Fernanda_Caldas/FernandaCaldas_Semana_1.ipynb", "max_issues_repo_name": "flych3r/IA025_2022S1", "max_issues_repo_head_hexsha": "8a5a92a0d22c3a602906bdc3b8c7eb8ae325e88b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ex01/Fernanda_Caldas/FernandaCaldas_Semana_1.ipynb", "max_forks_repo_name": "flych3r/IA025_2022S1", "max_forks_repo_head_hexsha": "8a5a92a0d22c3a602906bdc3b8c7eb8ae325e88b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2022-03-16T15:39:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T14:04:34.000Z", "avg_line_length": 33.6118421053, "max_line_length": 9685, "alphanum_fraction": 0.5272896999, "converted": true, "num_tokens": 6876, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4035668680822513, "lm_q2_score": 0.2365162364457076, "lm_q1q2_score": 0.09545011679299543}} {"text": "```python\nfrom IPython.core.display import HTML\nHTML(\"\")\n```\n\n\n\n\n\n\n\n\n# Lecture 8, Optimality conditions \n\nWe are still studying the full problem\n\n$$\n\\begin{align} \\\n\\min \\quad &f(x)\\\\\n\\text{s.t.} \\quad & g_j(x) \\geq 0\\text{ for all }j=1,\\ldots,J\\\\\n& h_k(x) = 0\\text{ for all }k=1,\\ldots,K\\\\\n&x\\in \\mathbb R^n.\n\\end{align}\n$$\n\n## What aspects are important for optimality conditions?\n* Think about this for a while\n* You can first think about the case without constraints and, then, what should be added there\n\n\n## Optimality conditions for **unconstrained** optimization\n\n* (**Necessary condition**) Let $f$ be twice differentiable at $x^*\\in\\mathbb R^n$. If $x^*$ is a local minimizer, then $\\nabla f(x^*)=0$ and the Hessian matrix $H(x^*)$ is positively semidefinite.\n* (**Sufficient condition**) Let $f$ be twice continuously differentiable at $x^*\\in\\mathbb R^n$. If $\\nabla f(x^*)=0$ and $H(x^*)$ is positively definite, then $x^*$ is a strict local minimizer.\n\nIn order to identify which points are optimal, we want to define similar conditions as there are for unconstrained problems through the gradient:\n\n>If $x$ is a local optimum to function $f$, then $\\nabla f(x)=0$.\n\n## Karush-Kuhn-Tucker (KKT) conditions\n\n\n\n**Theorem (First order Karush-Kuhn-Tucker (KKT) Necessary Conditions)** \n\nLet $x^*$ be a local minimum for problem\n$$\n$$\n\\begin{align} \\\n\\min \\quad &f(x)\\\\\n\\text{s.t.} \\quad & g_j(x) \\geq 0\\text{ for all }j=1,\\ldots,J\\\\\n& h_k(x) = 0\\text{ for all }k=1,\\ldots,K\\\\\n&x\\in \\mathbb R^n.\n\\end{align}\n$$\n$$\n\nLet us assume that objective and constraint functions are continuosly differentiable at a point $x^*$ and assume that $x^*$ satisfies some regularity conditions (see e.g., https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions#Regularity_conditions_.28or_constraint_qualifications.29 ). Then there exists unique Lagrance multiplier vectors $\\mu^*=(\\mu_1^*,\\ldots,\\mu_J^*)$ and $\\lambda^* = (\\lambda^*_1,\\ldots,\\lambda_K^*)$ such that\n\n$$\n\\begin{align}\n&\\nabla_xL(x^*,\\mu^*,\\lambda^*) = 0\\\\\n&\\mu_j^*\\geq0,\\text{ for all }j=1,\\ldots,J \\text{ (also known as **Dual feasibility**)}\\\\\n&\\mu_j^*g_j(x^*)=0,\\text{for all }j=1,\\ldots,J,\n\\end{align}\n$$\n\nwhere $L$ is the *Lagrangian function* $$L(x,\\mu,\\lambda) = f(x)- \\sum_{j=1}^J\\mu_jg_j(x) -\\sum_{k=1}^K\\lambda_kh_k(x)$$.\n\n\n* Lagrangian Function can be viewed as a function aggregated the original objective function plus the **penalized terms on constraint violations**.\n\n## An example of constraint qualifications for inequality constraint problems\n\n\n**Definition (regular point)**\n\nA point $x^*\\in S$ is *regular* if the set of gradients of the active inequality constraints \n\n$$\n\\{\\nabla g_j(x^*) | \\text{ constraint } i \\text{ is active}\\}\n$$\n\nis linearly independent. This means that none of them can be expressed as a linear combination of the others. (*In a simple language one might say that they point to different directions; as an example you can think of the basis vectors of $\\mathbb R^n$*.)\n\nKKT conditions were developed independently by \n* William Karush:\"Minima of Functions of Several Variables with Inequalities as Side Constraints\". *M.Sc. Dissertation*, Dept. of Mathematics, Univ. of Chicago, 1939\n* Harold W. Kuhn & Albert W. 
Tucker: \"Nonlinear programming\", In: *Proceedings of 2nd Berkeley Symposium*, pp. 481\u2013492, 1951\n\nThe coefficients $\\mu$ and $\\lambda$ are called the *KKT multipliers*.\n\nThe first equality \n\n$$\n\\nabla_xL(x,\\mu,\\lambda) = 0\n$$\n\nis called the stationary rule and the requirement \n\n$$\n\\mu_j^*g_j(x)=0,\\text{for all }j=1,\\ldots,J\n$$\n\nis called the complementarity rule.\n\n### Note:\n\n* In some cases, the necessary conditions are also sufficient for optimality.\n\n* For example, the necessary conditions mentioned above are sufficient for optimality if $f$, $g_j$ and $h_k (\\forall j, k)$ are convex (in a minimization problem).\n\n## Example\n\nConsider the optimization problem\n\n$$\n\\begin{align}\n\\min &\\qquad (x_1^2+x^2_2+x^2_3)\\\\\n\\text{s.t}&\\qquad x_1+x_2+x_3-3\\geq 0.\n\\end{align}\n$$\n\nLet us verify the KKT necessary conditions for the local optimum $x^*=(1,1,1)$.\n\nWe can see that\n\n$$\nL(x,\\mu,\\lambda) = (x_1^2+x_2^2+x_3^2)-\\mu_1(x_1+x_2+x_3-3)\n$$\n\nand thus\n\n$$\n\\nabla_x L(x,\\mu,\\lambda) = (2x_1-\\mu_1,2x_2-\\mu_1,2x_3-\\mu_1)\n$$\n\nand if $\\nabla_x L([1,1,1],\\mu,\\lambda)=0$, then \n\n$$\n2-\\mu_1=0 $$\nwhich holds when $$\n\\mu_1=2.\n$$\n\nIn addition to this, we can see that $x^*_1+x^*_2+x^*_3-3= 0$. Thus, the completementarity rule holds even though $\\mu_1\\neq 0$.\n\n## Example 2\n\nLet us check the KKT conditions for a solution that is not a local optimum. Let us have $x^*=(0,1,1)$.\n\n$$\n\\nabla_x L(x,\\mu,\\lambda) = (2x_1-\\mu_1,2x_2-\\mu_1,2x_3-\\mu_1)\n$$\n\n\nWe can easily see that in this case, the conditions are \n\n$$\\left\\{\n\\begin{array}{c}\n-\\mu_1 = 0\\\\\n2-\\mu_1=0\n\\end{array}\n\\right.\n$$\n\nClearly, there does not exist a $\\mu_1\\in \\mathbb R$ such that these equalities would hold.\n\n## Example 3\n\nLet us check the KKT conditions for another solution that is not a local optimum. Let us have $x^*=(2,2,2)$.\n\n$$\n\\nabla_x L(x,\\mu,\\lambda) = (2x_1-\\mu_1,2x_2-\\mu_1,2x_3-\\mu_1)\n$$\n\n\nWe can easily see that in this case, the conditions are\n\n$$\n4-\\mu_1 = 0\n$$\n\nNow, $\\mu_1=4$ satisfies this equation. However, now\n\n$$\n\\mu_1(x^*_1+x^*_2+x^*_3-3)=4(6-3) = 12 \\neq 0.\n$$\n\nThus, the completementarity rule fails and the KKT conditions are not true.\n\n\n### Another example\n\nFormulate the KKT conditions for the following example:\n$$\nmin \ud835\udc53(\\mathbf{x}) = (\ud835\udc65_1 \u2212 3)^2 + (\ud835\udc65_2 \u2212 2)^2\\\\\ns.t. \\\\\n\ud835\udc65_1^2 + \ud835\udc65_2^2 \u2264 5,\\\\\n\ud835\udc65_1 + 2\ud835\udc65_2 = 4,\\\\\n\ud835\udc65_1, \ud835\udc65_2 \u2265 0\n$$\n\nCheck them for $x^* = (2,1)$\n\n* This part will be completed during the lecture by students (10 min).\n* You need to use both conditions to find the KKT multipliers. There are three inequality constraints (j=1,2,3) and one equality constraint. 
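After working the conditions out by hand, the multipliers at $x^* = (2,1)$ can also be cross-checked numerically. The sketch below is only an illustration (it assumes `sympy` is available, which is not otherwise used in these notes) and writes the constraints in the $g_j(x) \geq 0$, $h_k(x) = 0$ form used above.

```python
from sympy import symbols, solve

x1, x2, mu1, lam = symbols('x1 x2 mu1 lambda', real=True)

f = (x1 - 3)**2 + (x2 - 2)**2
g1 = 5 - x1**2 - x2**2       # x1^2 + x2^2 <= 5 written as g1(x) >= 0
h = x1 + 2*x2 - 4            # the equality constraint

# At x* = (2, 1): g2 = x1 = 2 > 0 and g3 = x2 = 1 > 0, so complementarity
# forces mu2 = mu3 = 0; g1(2, 1) = 0 is active, so mu1 may be nonzero.
L = f - mu1*g1 - lam*h
stationarity = [L.diff(x1), L.diff(x2)]
sol = solve([eq.subs({x1: 2, x2: 1}) for eq in stationarity], [mu1, lam])
print(sol)   # {mu1: 1/3, lambda: -2/3}; mu1 >= 0, so the KKT conditions hold
```

Because the objective is convex and the feasible set is convex (a convex quadratic constraint, nonnegativity bounds, and an affine equality), the necessary conditions are also sufficient here, so $x^* = (2,1)$ is optimal.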
\n\n### A reminder of KKT necessary conditions:\n$$\n\\begin{align}\n&\\nabla_xL(x^*,\\mu^*,\\lambda^*) = 0\\\\\n&\\mu_j^*\\geq0,\\text{ for all }j=1,\\ldots,J\\\\\n&\\mu_j^*g_j(x^*)=0,\\text{for all }j=1,\\ldots,J,\n\\end{align}\n$$\n\nwhere $L$ is the *Lagrangian function* $$L(x,\\mu,\\lambda) = f(x)- \\sum_{j=1}^J\\mu_jg_j(x) -\\sum_{k=1}^K\\lambda_kh_k(x)$$\n\n\n```python\n\n```\n\n## Geometric interpretation of the KKT conditions\n\n## Stationary rule\n\nConsider the *Lagrangian function* L as: $$L(x,\\mu,\\lambda) = f(x)- \\sum_{j=1}^J\\mu_jg_j(x) -\\sum_{k=1}^K\\lambda_kh_k(x)$$.\n\nThe stationary rule is:\n$$\n\\nabla_xL(x,\\mu,\\lambda) = 0\n$$\n\nThe stationary rule can be written as: There exist $\\mu,\\lambda'$ so that\n\n$$\n-\\nabla f(x) = -\\sum_{j=1}^K\\mu_j\\nabla g_j(x) + \\sum_{k=1}^K\\lambda'_k\\nabla h_k(x).\n$$\n\nNotice that we have slightly different $\\lambda'$.\n\nNow, remember that the $-\\nabla v(x)$ gives us the direction of reduction for a function $v$.\n\nThus, the above equation means that the direction of reduction of the function $-\\nabla f(x)$ is countered by the direction of the reduction of the inequality constraints $-\\nabla g_j(x)$ and the directions of either growth (or reduction, since $\\lambda'$ can be negative) of the equality constraints $\\nabla h_k(x)$.\n\n**This means that the function cannot get reduced without reducing the inequality constraints (making the solution infeasible, if already at the bound), or increasing or decreasing the equality constraints (making, thus, the solution again infeasible).**\n\n\n\n#### With just one inequality constraint this means that the negative gradients of $f$ and $g$ must point to the same direction.\n\n\n\n#### With equality constraints this means that the negative gradient of the objective function and the gradient of the equality constraint must either point to the same or opposite directions\n\n\n\n## Complementarity conditions\nAnother way of expressing complementarity condition\n\n$$\n\\mu_jg_j(x) = 0 \\text{ for all } j=1,\\ldots,J\n$$\n\nis to say that both $\\mu_j$ and $g_j(x)$ cannot be positive at the same time. 
Especially, if $\\mu_j>0$, then $g_j(x)=0$.\n\n**This means that if we want to use the gradient of a constraint for countering the reduction of the function, then the constraint must be at the boundary.**\n\n### Sufficient conditions:\n\n* The necessary conditions are sufficient for optimality if $f$, $g_j$ and $h_k (\\forall j, k)$ are convex (in a minimization problem).\n\n* In general, the necessary conditions are not sufficient for optimality and additional information is required, e.g., the Second Order Sufficient Conditions for smooth functions.\n\n\n### Second-order sufficient conditions (Projected Hessian is positive definite)\n\nFor a smooth, non-linear optimization problem, a second order sufficient condition is given as follows:\n\nIf $(x^*, \\mu^*, \\lambda^*$ be a constrained local minimum for the Lagrangian function\n\n$$L(x,\\mu,\\lambda) = f(x)- \\sum_{j=1}^J\\mu_jg_j(x) -\\sum_{k=1}^K\\lambda_kh_k(x)$$\n\nThen, \n\n$$ d^T \\nabla _{\\mathbf{xx}}^2L(x^*,\\mu^*,\\lambda^*) d > 0 \\text { (Hessian is positive definite) }$$ \n\n\nBut in constrained optimization we are **not interested in all d**.\n\nInstead, we are looking for the $d$ vectors that lies on the tangent space (active constraints).\n\n\n* i.e., $$ \\forall d \\neq 0 \\text{, } [\\nabla _{x}g_{j}(x^{*}),\\nabla _{x}h_{k}(x^{*})]^Td = 0 \\text{; } \\forall j, k.$$\n\n\n", "meta": {"hexsha": "8280c4a54a2b99ec8ed20852de256f0053c3ce1f", "size": 17224, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture 8, optimality conditions.ipynb", "max_stars_repo_name": "bshavazipour/TIES483-2022", "max_stars_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture 8, optimality conditions.ipynb", "max_issues_repo_name": "bshavazipour/TIES483-2022", "max_issues_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture 8, optimality conditions.ipynb", "max_forks_repo_name": "bshavazipour/TIES483-2022", "max_forks_repo_head_hexsha": "93dfabbfe1e953e5c5f83c44412963505ecf575a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2022-02-03T09:40:02.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-03T09:40:02.000Z", "avg_line_length": 26.8286604361, "max_line_length": 471, "alphanum_fraction": 0.5334417092, "converted": true, "num_tokens": 3000, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.2845759920814681, "lm_q2_score": 0.33458944125318596, "lm_q1q2_score": 0.09521612218460948}} {"text": "```python\n%matplotlib inline\n```\n\n\n\n# Importing data from MEG devices\n\n\nThis section describes how to read data for various MEG manufacturers.\n :depth: 2\n\n\n\nElekta NeuroMag (.fif)\n======================\n\nNeuromag Raw FIF files can be loaded using :func:`mne.io.read_raw_fif`.\n\nIf the data were recorded with MaxShield on and have not been processed\nwith MaxFilter, they may need to be loaded with\n``mne.io.read_raw_fif(..., allow_maxshield=True)``.\n\n\n\nArtemis123 (.bin)\n=================\nMEG data from the Artemis123 system can be read with\\\n:func:`mne.io.read_raw_artemis123`.\n\n\n\n4-D Neuroimaging / BTI data (dir)\n=================================\n\nMNE-Python provides :func:`mne.io.read_raw_bti` to read and convert 4D / BTI\ndata. This reader function will by default replace the original channel names,\ntypically composed of the letter `A` and the channel number with Neuromag.\nTo import the data, the following input files are mandatory:\n\n- A data file (typically c,rfDC)\n containing the recorded MEG time series.\n\n- A hs_file\n containing the digitizer data.\n\n- A config file\n containing acquisition information and metadata.\n\nBy default :func:`mne.io.read_raw_bti` assumes that these three files are located\nin the same folder.\n\n
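A minimal call might look like the sketch below; the file names are placeholders for an actual 4D/BTI recording rather than files referenced by this document.

```python
import mne

# Hypothetical paths to the three mandatory files described above
raw = mne.io.read_raw_bti(
    pdf_fname='c,rfDC',          # recorded MEG time series
    config_fname='config',       # acquisition information and metadata
    head_shape_fname='hs_file',  # digitizer data
    preload=False,
)
print(raw.info)
```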

**Note:** While reading the reference or compensation channels, the compensation weights are currently not processed. As a result, the :class:`mne.io.Raw` object and the corresponding fif file do not include information about the compensation channels and the weights to be applied to realize software gradient compensation. If the data saved in the Magnes system are already compensated, there will be a small error in the forward calculations, whose significance has not been evaluated carefully at this time.
\n\n\n\nCTF data (dir)\n==============\n\nThe function :func:`mne.io.read_raw_ctf` can be used to read CTF data.\n\nCTF Polhemus data\n-----------------\n\nThe function :func:`mne.channels.read_dig_polhemus_isotrak` can be used to read\nPolhemus data.\n\nApplying software gradient compensation\n---------------------------------------\n\nSince the software gradient compensation employed in CTF\nsystems is a reversible operation, it is possible to change the\ncompensation status of CTF data in the data files as desired. This\nsection contains information about the technical details of the\ncompensation procedure and a description of\n:func:`mne.io.Raw.apply_gradient_compensation`.\n\nThe raw instances returned by :func:`mne.io.read_raw_ctf` contain several\ncompensation matrices which are employed to suppress external disturbances\nwith help of the reference channel data. The reference sensors are\nlocated further away from the brain than the helmet sensors and\nare thus measuring mainly the external disturbances rather than magnetic\nfields originating in the brain. Most often, a compensation matrix\ncorresponding to a scheme nicknamed *Third-order gradient\ncompensation* is employed.\n\nLet us assume that the data contain $n_1$ MEG\nsensor channels, $n_2$ reference sensor\nchannels, and $n_3$ other channels.\nThe data from all channels can be concatenated into a single vector\n\n\\begin{align}x = [x_1^T x_2^T x_3^T]^T\\ ,\\end{align}\n\nwhere $x_1$, $x_2$,\nand $x_3$ are the data vectors corresponding\nto the MEG sensor channels, reference sensor channels, and other\nchannels, respectively. The data before and after compensation,\ndenoted here by $x_{(0)}$ and $x_{(k)}$, respectively,\nare related by\n\n\\begin{align}x_{(k)} = M_{(k)} x_{(0)}\\ ,\\end{align}\n\nwhere the composite compensation matrix is\n\n\\begin{align}M_{(k)} = \\begin{bmatrix}\n I_{n_1} & C_{(k)} & 0 \\\\\n 0 & I_{n_2} & 0 \\\\\n 0 & 0 & I_{n_3}\n \\end{bmatrix}\\ .\\end{align}\n\nIn the above, $C_{(k)}$ is a $n_1$ by $n_2$ compensation\ndata matrix corresponding to compensation \"grade\" $k$.\nIt is easy to see that\n\n\\begin{align}M_{(k)}^{-1} = \\begin{bmatrix}\n I_{n_1} & -C_{(k)} & 0 \\\\\n 0 & I_{n_2} & 0 \\\\\n 0 & 0 & I_{n_3}\n \\end{bmatrix}\\ .\\end{align}\n\nTo convert from compensation grade $k$ to $p$ one\ncan simply multiply the inverse of one compensate compensation matrix\nby another and apply the product to the data:\n\n\\begin{align}x_{(k)} = M_{(k)} M_{(p)}^{-1} x_{(p)}\\ .\\end{align}\n\nThis operation is performed by :meth:`mne.io.Raw.apply_gradient_compensation`.\n\n\n\nKIT MEG system data (.sqd)\n==========================\n\nMNE-Python includes the :func:`mne.io.read_raw_kit` and\n:func:`mne.read_epochs_kit` to read and convert KIT MEG data.\nThis reader function will by default replace the original channel names,\nwhich typically with index starting with zero, with ones with an index starting\nwith one.\n\nTo import continuous data, only the input .sqd or .con file is needed. 
For\nepochs, an Nx3 matrix containing the event number/corresponding trigger value\nin the third column is needed.\n\nThe following input files are optional:\n\n- A KIT marker file (mrk file) or an array-like containing the locations of\n the HPI coils in the MEG device coordinate system.\n These data are used together with the elp file to establish the coordinate\n transformation between the head and device coordinate systems.\n\n- A Polhemus points file (elp file) or an array-like\n containing the locations of the fiducials and the head-position\n indicator (HPI) coils. These data are usually given in the Polhemus\n head coordinate system.\n\n- A Polhemus head shape data file (hsp file) or an array-like\n containing locations of additional points from the head surface.\n These points must be given in the same coordinate system as that\n used for the elp file.\n\n\n
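Putting these inputs together, a call could look like the following sketch; all file names are placeholders, and the optional marker/Polhemus arguments can simply be omitted when those files are not available.

```python
import mne

# Hypothetical file names; mrk, elp and hsp are the optional inputs listed above
raw = mne.io.read_raw_kit(
    input_fname='recording.sqd',  # continuous KIT data (.sqd or .con)
    mrk='coils.mrk',              # HPI coil locations (device coordinates)
    elp='points.elp',             # fiducials + HPI coils (Polhemus)
    hsp='headshape.hsp',          # additional head-surface points
    stim='<',                     # little-endian trigger-channel assignment
    slope='-',                    # high-to-low transitions count as events
    stimthresh=1.0,               # threshold for accepting voltage changes
)
print(raw.info)
```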

**Note:** The output fif file will use the Neuromag head coordinate system convention, see `coordinate_systems`. A coordinate transformation between the Polhemus head coordinates and the Neuromag head coordinates is included.
\n\nBy default, KIT-157 systems assume the first 157 channels are the MEG channels,\nthe next 3 channels are the reference compensation channels, and channels 160\nonwards are designated as miscellaneous input channels (MISC 001, MISC 002,\netc.).\nBy default, KIT-208 systems assume the first 208 channels are the MEG channels,\nthe next 16 channels are the reference compensation channels, and channels 224\nonwards are designated as miscellaneous input channels (MISC 001, MISC 002,\netc.).\n\nIn addition, it is possible to synthesize the digital trigger channel (STI 014)\nfrom available analog trigger channel data by specifying the following\nparameters:\n\n- A list of trigger channels (stim) or default triggers with order: '<' | '>'\n Channel-value correspondence when converting KIT trigger channels to a\n Neuromag-style stim channel. By default, we assume the first eight\n miscellaneous channels are trigger channels. For '<', the largest values are\n assigned to the first channel (little endian; default). For '>', the largest\n values are assigned to the last channel (big endian). Can also be specified\n as a list of trigger channel indexes.\n- The trigger channel slope (slope) : '+' | '-'\n How to interpret values on KIT trigger channels when synthesizing a\n Neuromag-style stim channel. With '+', a positive slope (low-to-high)\n is interpreted as an event. With '-', a negative slope (high-to-low)\n is interpreted as an event.\n- A stimulus threshold (stimthresh) : float\n The threshold level for accepting voltage changes in KIT trigger\n channels as a trigger event.\n\nThe synthesized trigger channel data value at sample $k$ will\nbe:\n\n\\begin{align}s(k) = \\sum_{p = 1}^n {t_p(k) 2^{p - 1}}\\ ,\\end{align}\n\nwhere $t_p(k)$ are the thresholded\nfrom the input channel data d_p(k):\n\n\\begin{align}t_p(k) = \\Bigg\\{ \\begin{array}{l}\n 0 \\text{ if } d_p(k) \\leq t\\\\\n 1 \\text{ if } d_p(k) > t\n \\end{array}\\ .\\end{align}\n\nThe threshold value $t$ can\nbe adjusted with the ``stimthresh`` parameter.\n\n\n\nFieldTrip MEG/EEG data (.mat)\n=============================\n\nMNE-Python includes :func:`mne.io.read_raw_fieldtrip`, :func:`mne.read_epochs_fieldtrip` and :func:`mne.read_evoked_fieldtrip` to read data coming from FieldTrip.\n\nThe data is imported directly from a ``.mat`` file.\n\nThe ``info`` parameter can be explicitly set to ``None``. The import functions will still work but:\n\n#. All channel locations will be in head coordinates.\n#. Channel orientations cannot be guaranteed to be accurate.\n#. All channel types will be set to generic types.\n\nThis is probably fine for anything that does not need that information, but if you intent to do things like interpolation of missing channels, source analysis or look at the RMS pairs of planar gradiometers, you most likely run into problems.\n\nIt is **highly recommended** to provide the ``info`` parameter as well. The ``info`` dictionary can be extracted by loading the original raw data file with the corresponding MNE-Python functions::\n\n original_data = mne.io.read_raw_fiff('original_data.fif', preload=False)\n original_info = original_data.info\n data_from_ft = mne.read_evoked_fieldtrip('evoked_data.mat', original_info)\n\nThe imported data can have less channels than the original data. 
Only the information for the present ones is extracted from the ``info`` dictionary.\n\nAs of version 0.17, importing FieldTrip data has been tested on a variety of systems with the following results:\n\n+----------+-------------------+-------------------+-------------------+\n| System | Read Raw Data | Read Epoched Data | Read Evoked Data |\n+==========+===================+===================+===================+\n| BTI | Works | Untested | Untested |\n+----------+-------------------+-------------------+-------------------+\n| CNT | Data imported as | Data imported as | Data imported as |\n| | microvolts. | microvolts. | microvolts. |\n| | Otherwise fine. | Otherwise fine. | Otherwise fine. |\n+----------+-------------------+-------------------+-------------------+\n| CTF | Works | Works | Works |\n+----------+-------------------+-------------------+-------------------+\n| EGI | Mostly Ok. Data | Mostly Ok. Data | Mostly Ok. Data |\n| | imported as | imported as | imported as |\n| | microvolts. | microvolts. | microvolts. |\n| | FieldTrip does | FieldTrip does | FieldTrip does |\n| | not apply | not apply | not apply |\n| | calibration. | calibration. | calibration. |\n+----------+-------------------+-------------------+-------------------+\n| KIT | Does not work. | Does not work. | Does not work. |\n| | Channel names are | Channel names are | Channel names are |\n| | different in | different in | different in |\n| | MNE-Python and | MNE-Python and | MNE-Python and |\n| | FieldTrip. | FieldTrip. | FieldTrip. |\n+----------+-------------------+-------------------+-------------------+\n| Neuromag | Works | Works | Works |\n+----------+-------------------+-------------------+-------------------+\n| eximia | Works | Untested | Untested |\n+----------+-------------------+-------------------+-------------------+\n\nCreating MNE data structures from arbitrary data (from memory)\n==============================================================\n\nArbitrary (e.g., simulated or manually read in) raw data can be constructed\nfrom memory by making use of :class:`mne.io.RawArray`, :class:`mne.EpochsArray`\nor :class:`mne.EvokedArray` in combination with :func:`mne.create_info`.\n\nThis functionality is illustrated in `ex-array-classes`. 
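For instance, a minimal sketch with simulated data, made-up channel names, and an assumed sampling rate could look like this:

```python
import numpy as np
import mne

sfreq = 1000.0                                 # assumed sampling rate in Hz
data = np.random.randn(5, int(10 * sfreq))     # 5 simulated channels, 10 s
info = mne.create_info(ch_names=['SIM%02d' % i for i in range(5)],
                       sfreq=sfreq, ch_types='mag')
raw = mne.io.RawArray(data, info)
print(raw)
```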
Using 3rd party\nlibraries such as `NEO `__ in\ncombination with these functions abundant electrophysiological file formats can\nbe easily loaded into MNE.\n\n", "meta": {"hexsha": "780dc12dc538a529c01bb2a1bad798982f7c701b", "size": 13295, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "dev/_downloads/4b44551162dc4f8dda6c7f0d2af501fe/plot_10_reading_meg_data.ipynb", "max_stars_repo_name": "massich/mne-tools.github.io", "max_stars_repo_head_hexsha": "95650593ba0eca4ff8257ebcbdf05731038d8d4e", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "dev/_downloads/4b44551162dc4f8dda6c7f0d2af501fe/plot_10_reading_meg_data.ipynb", "max_issues_repo_name": "massich/mne-tools.github.io", "max_issues_repo_head_hexsha": "95650593ba0eca4ff8257ebcbdf05731038d8d4e", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "dev/_downloads/4b44551162dc4f8dda6c7f0d2af501fe/plot_10_reading_meg_data.ipynb", "max_forks_repo_name": "massich/mne-tools.github.io", "max_forks_repo_head_hexsha": "95650593ba0eca4ff8257ebcbdf05731038d8d4e", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 309.1860465116, "max_line_length": 12504, "alphanum_fraction": 0.6472358029, "converted": true, "num_tokens": 2938, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4649015713733885, "lm_q2_score": 0.20434189024594807, "lm_q1q2_score": 0.09499886587274975}} {"text": "```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\nfrom sympy import init_printing, Matrix, symbols\nfrom IPython.display import Image\nfrom warnings import filterwarnings\n```\n\n\n```python\ninit_printing(use_latex = 'mathjax')\nfilterwarnings('ignore')\n```\n\n# Projection matrices and least squares\n\n\n\n## Least squares\n\n* Consider from the previous lecture the three data point in the plane\n$$ ({t}_{i},{y}_{i}) =(1,1), (2,2),(3,2) $$\n* From this we need to construct a straight line\n* This could be helpful say in, statistics (remember, though in statistics we might have to get rid of statistical outliers)\n* Nonetheless (view image above) we note that we have a straight line in slope-intercept form\n$$ {y}={C}+{Dt} $$\n* On the line at *t* values of 1, 2, and 3 we will have\n$$ {y}_{1}={C}+{D}=1 \\\\ {y}_{2}={C}+{2D}=2 \\\\ {y}_{3}={C}+{3D}=2 $$\n* The actual *y* values at these *t* values are 1, 2, and 2, though\n* We are thus including an error of\n$$ \\delta{y} \\\\ { \\left( { e }_{ 1 } \\right) }^{ 2 }={ \\left[ \\left( C+D \\right) -1 \\right] }^{ 2 }\\\\ { \\left( { e }_{ 2 } \\right) }^{ 2 }={ \\left[ \\left( C+2D \\right) -2 \\right] }^{ 2 }\\\\ { \\left( { e }_{ 3 } \\right) }^{ 2 }={ \\left[ \\left( C+3D \\right) -2 \\right] }^{ 2 } $$\n* Since some are positive and some are negative (actual values below or above the line), we simply determine the square (which will always be positive)\n* Adding the (three in our example here) squares we have the sum total of the error (which is actuall just the sqautre of the distance between the line and actual *y* values)\n* The line will be the 
best fit when this error sum is at a minimum (hence *least squares*)\n* We can do this with calculus or with linear algebra\n* For calculus we take the partial derivatives of both unknowns and set to zero\n* For linear algebra we project orthogonally onto the columnspace (hence minimizing the error)\n * Note that the solution **b** does not exist in the columnspace (it is not a linear combination of the columns)\n\n### Calculus method\n\n* We'll create a function *f*(C,D) and successively take the partial derivatives of both variables and set it to zero\n* We fill then have two equation with two unknowns to solve (which is easy enough to do manually or by simple linear algebra and row reduction)\n\n\n```python\nC, D = symbols('C D')\n```\n\n\n```python\ne1_squared = ((C + D) - 1) ** 2\ne2_squared = ((C + 2 * D) - 2) ** 2\ne3_squared = ((C + 3 * D) - 2) ** 2\nf = e1_squared + e2_squared + e3_squared\nf\n```\n\n\n\n\n$$\\left(C + D - 1\\right)^{2} + \\left(C + 2 D - 2\\right)^{2} + \\left(C + 3 D - 2\\right)^{2}$$\n\n\n\n\n```python\nf.expand() # Expanding the expression\n```\n\n\n\n\n$$3 C^{2} + 12 C D - 10 C + 14 D^{2} - 22 D + 9$$\n\n\n\n* Doing the partial derivatives will be\n$$ f\\left( C,D \\right) =3{ C }^{ 2 }+12CD-10C+14{ D }^{ 2 }-22D+9\\\\ \\frac { \\partial f }{ \\partial C } =6C+12D-10=0\\\\ \\frac { \\partial f }{ \\partial D } =12C+28D-22=0 $$\n\n\n```python\nf.diff(C) # Taking the partial derivative with respect to C\n```\n\n\n\n\n$$6 C + 12 D - 10$$\n\n\n\n\n```python\nf.diff(D) # Taking the partial derivative with respect to D\n```\n\n\n\n\n$$12 C + 28 D - 22$$\n\n\n\n* Setting both equal to zero (and creating a simple augmented matrix) we get\n$$ 6C+12D-10=0\\\\ 12C+28D-22=0\\\\ \\therefore \\quad 6C+12D=10\\\\ \\therefore \\quad 12C+28D=22 $$\n\n\n```python\nA_augm = Matrix([[6, 12, 10], [12, 28, 22]])\nA_augm\n```\n\n\n\n\n$$\\left[\\begin{matrix}6 & 12 & 10\\\\12 & 28 & 22\\end{matrix}\\right]$$\n\n\n\n\n```python\nA_augm.rref() # Doing a Gauss-Jordan elimination to reduced row echelon form\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 0 & \\frac{2}{3}\\\\0 & 1 & \\frac{1}{2}\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}$$\n\n\n\n* We now have a solution\n$$ {y}=\\frac{2}{3} + \\frac{1}{2}{t}$$\n\n### Linear algebra\n\n* We note that we can construct the following\n$$ {C}+{1D}={1} \\\\ {C}+{2D}={2} \\\\ {C}+{3D}={2} \\\\ {C}\\begin{bmatrix} 1 \\\\ 1\\\\ 1 \\end{bmatrix}+{D}\\begin{bmatrix} 1 \\\\ 2 \\\\ 3 \\end{bmatrix}=\\begin{bmatrix} 1 \\\\ 2 \\\\ 2 \\end{bmatrix} \\\\ A\\underline { x } =\\underline { b } \\\\ \\begin{bmatrix} 1 & 1 \\\\ 1 & 2 \\\\ 1 & 3 \\end{bmatrix}\\begin{bmatrix} C \\\\ D \\end{bmatrix}=\\begin{bmatrix} 1 \\\\ 2 \\\\ 2 \\end{bmatrix} $$\n* **b** is not in the columnspace of A and we have to do orthogonal projection\n$$ { A }^{ T }A\\hat { x } ={ A }^{ T }\\underline { b } \\\\ \\hat { x } ={ \\left( { A }^{ T }A \\right) }^{ -1 }{ A }^{ T }\\underline { b } $$\n\n\n```python\nA = Matrix([[1, 1], [1, 2], [1, 3]])\nb = Matrix([1, 2, 2])\nA, b # Showing the two matrices\n```\n\n\n\n\n$$\\begin{pmatrix}\\left[\\begin{matrix}1 & 1\\\\1 & 2\\\\1 & 3\\end{matrix}\\right], & \\left[\\begin{matrix}1\\\\2\\\\2\\end{matrix}\\right]\\end{pmatrix}$$\n\n\n\n\n```python\nx_hat = (A.transpose() * A).inv() * A.transpose() * b\nx_hat\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{2}{3}\\\\\\frac{1}{2}\\end{matrix}\\right]$$\n\n\n\n* Again, we get the same values for C and D\n\n* Remember the following\n$$ \\underline{b} = 
\\underline{p}+\\underline{e} $$\n* **p** and **e** are perpendicular\n* Indeed **p** is in the columnspace of A and **e** is perpendicular to the columspace (or any vector in the columnspace)\n\n## Example problem\n\n### Example problem 1\n\n* Find the quadratic (second order polynomial) equation through the origin, with the following data points: (1,1), (2,5) and (-1,-2)\n\n#### Solution\n\n* Let's just think about a quadratic equation in *y* and *t*\n$$ {y}={c}_{1} +{C}{t}+{D}{t}^{2} $$\n* Through the origin (0,0) means *y* = 0 and *t* = 0, thus we have\n$$ {0}={c}_{1} +{C}{0}+{D}{0}^{2} \\\\ {c}_{1}=0 \\\\ {y}={C}{t}+{D}{t}^{2} $$\n\n* This gives us three equation for our three data points\n$$ C\\left( 1 \\right) +D{ \\left( 1 \\right) }^{ 2 }=1\\\\ C\\left( 2 \\right) +D{ \\left( 2 \\right) }^{ 2 }=5\\\\ C\\left( -1 \\right) +D{ \\left( -1 \\right) }^{ 2 }=-2\\\\ C\\begin{bmatrix} 1 \\\\ 2 \\\\ -1 \\end{bmatrix}+D\\begin{bmatrix} 1 \\\\ 4 \\\\ 1 \\end{bmatrix}=\\begin{bmatrix} 1 \\\\ 5 \\\\ -2 \\end{bmatrix}\\\\ A=\\begin{bmatrix} 1 & 1 \\\\ 2 & 4 \\\\ -1 & 1 \\end{bmatrix}\\\\ \\underline { x } =\\begin{bmatrix} C \\\\ D \\end{bmatrix}\\\\ \\underline { b } =\\begin{bmatrix} 1 \\\\ 5 \\\\ -2 \\end{bmatrix} $$\n\n* Clearly **b** is not in the columnspace of A and we have to project orthogonally onto the columnspace using\n$$ \\hat { x } ={ \\left( { A }^{ T }A \\right) }^{ -1 }{ A }^{ T }\\underline { b } $$\n\n\n```python\nA = Matrix([[1, 1], [2, 4], [-1, 1]])\nb = Matrix([1, 5, -2])\nx_hat = (A.transpose() * A).inv() * A.transpose() * b\nx_hat\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{41}{22}\\\\\\frac{5}{22}\\end{matrix}\\right]$$\n\n\n\n* Here's a simple plot of the equation\n\n\n```python\nimport matplotlib.pyplot as plt # The graph plotting module\nimport numpy as np # The numerical mathematics module\n%matplotlib inline\n```\n\n\n```python\nx = np.linspace(-2, 3, 100) # Creating 100 x-values\ny = (41 / 22) * x + (5 / 22) * x ** 2 # From the equation above\nplt.figure(figsize = (8, 6)) # Creating a plot of the indicated size\nplt.plot(x, y, 'b-') # Plot the equation above , in essence 100 little plots using small segmnets of blue lines\nplt.plot(1, 1, 'ro') # Plot the point in a red dot\nplt.plot(2, 5, 'ro')\nplt.plot(-1, -2, 'ro')\nplt.plot(0, 0, 'gs') # Plot the origin as a green square\nplt.show(); # Create the plot\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "f9dc1b5ec2f99f52a1dae06fbc11ed76d16a55a1", "size": 29457, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_16_Projection_matrices_and_least_squares.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_16_Projection_matrices_and_least_squares.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_16_Projection_matrices_and_least_squares.ipynb", "max_forks_repo_name": 
"okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 46.980861244, "max_line_length": 11756, "alphanum_fraction": 0.6730488509, "converted": true, "num_tokens": 3189, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.47657965106367595, "lm_q2_score": 0.19930799790404563, "lm_q1q2_score": 0.09498613609530993}} {"text": "# Basic Concepts\nThe bottom-up analysis of dynamic states of a network is based on network topology and kinetic theory describing the links in the network. In this chapter, we provide a primer for the basic concepts of dynamic analysis of network states. We also discuss basics of the kinetic theory that is needed to formulate and understand detailed dynamic models of biochemical reaction networks. \n\n## Properties of Dynamic States\nThe three key dynamic properties outlined in the introduction - time constants, aggregate variables and transitions - are detailed in this section. \n\n### Time scales \nA fundamental quantity in dynamic analysis is the _time constant_. A time constant is a measure of time span over which significant changes occur in a state variable. It is thus a scaling factor for time and determines where in the time scale spectrum one needs to focus attention when dealing with a particular process or event of interest. \n\nA general definition of a time constant is given by \n\n$$\\begin{equation} \\tau = \\frac{\\Delta x}{|dx/dt|_{avg}} \\tag{2.1} \\end{equation}$$\n\nwhere $\\Delta{x}$ is a characteristic change in the state variable $x$ of interest and $|dx/dt|_{avg}$\nis an estimate of the rate of change of the variable $x$. Notice the ratio between $\\Delta{x}$ and the average derivative has units of time, and the time constant characterizes the time span over which these changes in $x$ occur, see Figure 2.1. \n\n\n\n**Figure 2.1:** Illustration of the concept of a time constant, $\\tau$, and its estimation as $\\tau = \\Delta x\\ / |dx/dt|_{avg}.$\n\nIn a network, there are many time constants. In fact, there is a spectrum of time constants, $\\tau_1,\\ \\tau_2, \\dots \\tau_r$ where $r$ is the rank of the Jacobian matrix defining the dynamic dimensionality of the dynamic response of the network. This spectrum of time constants typically spans many orders of magnitude. The consequences of a well-separated set of time constants is a key concern in the analysis of network dynamics. \n\n### Forming aggregate variables through \"pooling\" \nOne important consequence of time scale hierarchy is the fact that we will have fast and slow events. If fast events are filtered out or ignored, one removes a dynamic degree of freedom from the dynamic description, thus reducing the dynamic dimension of a system. Removal of a dynamic dimension leads to \"coarse-graining\" of the dynamic description. Reduction in dynamic dimension results in the combination, or pooling, of variables into aggregate variables. \n\nA simple example can be obtained from upper glycolysis. 
The first three reactions of this pathway are: \n\n$$\\begin{equation} \\text{glucose}\\ \\underset{\\stackrel{\\frown}{ATP \\ ADP}}{\\stackrel{HK}{\\longrightarrow}} \\text{G6P} \\underset{\\text{fast}, \\tau_f}{\\stackrel{PGI}{\\leftrightharpoons}} \\text{F6P} \\underset{\\stackrel{\\frown}{ATP \\ ADP}}{\\stackrel{PFK}{\\longrightarrow}} \\text{FDP} \\tag{2.2} \\end{equation}$$\n\nThis schema includes the second step in glycolysis where glucose-6-phosphate (G6P) is converted to fructose-6-phosphate (F6P) by the phosphogluco-isomerase (PGI). Isomerases are highly active enzymes and have rate constants that tend to be fast. In this case, PGI has a much faster response time than the response time of the flanking kinases in this pathway, hexokinase (HK) and phosphofructokinase (PFK). If one considers a time period that is much greater than $\\tau_f$ (the time constant associated with PGI), this system is simplified to: \n\n$$\\begin{equation} \\underset{\\stackrel{\\frown}{ATP \\ ADP}}{\\stackrel{HK}{\\longrightarrow}} \\ \\underset{t \\gg \\tau_f}{\\text{HP}} \\ \\underset{\\stackrel{\\frown}{ATP \\ ADP}}{\\stackrel{PFK}{\\longrightarrow}} \\tag{2.3} \\end{equation}$$\n\nwhere HP = (G6P+F6P) is the hexosephosphate pool. At a slow time scale (i.e, long compared to $\\tau_f$), the isomerase reaction has effectively equilibrated, leading to the removal of its dynamics from the network. As a result, F6P and G6P become dynamically coupled and can be considered to be a single variable. HP is an example of an aggregate variable that results from pooling G6P and F6P into a single variable. Such aggregation of variables is a consequence of time-scale hierarchy in networks. Determining how to aggregate variables into meaningful quantities becomes an important consideration in the dynamic analysis of network states. Further examples of pooling variables are given in Section\u00a02.3. \n\n### Transitions \nThe dynamic analysis of a network comes down to examining its transient behavior as it moves from one state to another. \n\n\n\n**Figure 2.2:** Illustration of a transition from one state to another. (a) A simple transition. (b) A more complex set of transitions.\n\nOne type of transition, or _transient response,_ is illustrated in Figure\u00a02.2a, where a system is in a homeostatic state, labeled as state $\\text{#1}$, and is perturbed at time zero. Over some time period, as a result of the perturbation, it transitions into another homeostatic state (state $\\text{#2}$). We are interested in characteristics such as the time duration of this response, as well as looking at the dynamic states that the network exhibits during this transition. Complex types of transitions are shown in Figure\u00a02.2b. \n\nIt should be noted that when complex kinetic models are studied, there are two ways to perturb a system and induce a transient response. One is to instantaneously change the initial condition of one of the state variables (typically a concentration), and the second is to change the state of an environmental variable that represents an input to the system. The latter perturbation is the one that is biologically meaningful, whereas the former may be of some mathematical interest. \n\n### Visualizing dynamic states \nThere are several ways to graphically represent dynamic states: \n\n* First, we can represent them on a map (Figure\u00a02.3a). 
If we have a reaction or a compound map for a network of interest, we can simply draw it out on a computer screen and leave open spaces above the arrows and the concentrations into which we can write numerical values for these quantities. These quantities can then be displayed dynamically as the simulation proceeds, or by a graph showing the changes in the variable over time. This representation requires writing complex software to make such an interface. \n\n* A second, and probably more common, way of viewing dynamic states is to simply graph the state variables, $x$, as a function of time (Figure\u00a02.3b). Such graphs show how the variables move up and down, and on which time scales. Often, one uses a logarithmic scale for the y-axis, and that often delineates the different time constants on which a variable moves. \n\n* A third way to represent dynamic solutions is to plot two state variables against one another in a two-dimensional plot (Figure\u00a02.3c). This representation is known as a _phase portrait_. Plotting two variables against one another traces out a curve in this plane along which time is a parameter. At the beginning of the trajectory, time is zero, and at the end, time has gone to infinity. These phase portraits will be discussed in more detail in Chapter 3.\n\n\n\n**Figure 2.3:** Graphical representation of dynamic states.\n\n## Primer on Rate Laws\nThe reaction rates, $v_i$, are described mathematically using kinetic theory. In this section, we will discuss some of the fundamental concepts of kinetic theory that lead to their formation. \n\n### Elementary reactions \nThe fundamental events in chemical reaction networks are elementary reactions. There are two types of elemental reactions: \n\n$$\\begin{align} &\\text{linear} &x \\stackrel{v}{\\rightarrow} \\tag{2.4a} \\\\ &\\text{bi-linear} & x_1 + x_2 \\stackrel{v}{\\rightarrow} \\tag{2.4b} \\end{align}$$\n\nA special case of a bi-linear reaction is when $x_1$ is the same as $x_2$ in which case the reaction is second order. \n\nElementary reactions represent the irreducible events of chemical transformations, analogous to a base pair being the irreducible unit of DNA sequence. Note that rates, $v$, and concentrations, $x$, are non-negative variables, that is; \n\n$$\\begin{equation} x \\geq 0, \\ v \\geq 0 \\tag{2.5} \\end{equation}$$\n\n### Mass action kinetics \nThe fundamental assumption underlying the mathematical description of reaction rates is that they are proportional to the collision frequency of molecules taking part in a reaction. Most commonly, reactions are bi-linear, where two different molecules collide to produce a chemical transformation. The probability of a collision is proportional to the concentration of a chemical species in a 3-dimensional unconstrained domain. This proportionality leads to the elementary reaction rates: \n\n$$\\begin{align} \\text{linear} \\ \\ &v = kx \\ &\\text{where the units on}& \\ k \\ \\text{are time}^{-1} \\ \\text{and} \\tag{2.6a} \\\\ \\text{bi-linear} \\ \\ &v = kx_1x_2 \\ &\\text{where the units on}& \\ k \\ \\text{are time}^{-1}\\text{conc}^{-1} \\tag{2.6b} \\end{align}$$\n\n### Enzymes increase the probability of the 'right' collision \nNot all collisions of molecules have the same probability of producing a chemical reaction. Collisions at certain angles are more likely to produce a reaction than others. 
As illustrated in Figure 2.4, molecules bound to the surface of an enzyme can be oriented to produce collisions at certain angles, thus accelerating the reaction rate. The numerical values of the rate constants are thus genetically determined as the structure of a protein is encoded in the sequence of the DNA. Sequence variation in the underlying gene in a population leads to differences amongst the individuals that make up the population. Principles of enzyme catalysis are further discussed in Section\u00a05.1. \n\n\n\n**Figure 2.4:** A schematic showing how the binding sites of two molecules on an enzyme bring them together to collide at an optimal angle to produce a reaction. Panel A: Two molecules can collide at random and various angles in free solution. Only a fraction of the collisions lead to a chemical reaction. Panel B: Two molecules bound to the surface of an enzyme can only collide at a highly restricted angle, substantially enhancing the probability of a chemical reaction between the two compounds. Redrawn based on (Lowenstein, 2000).\n\n### Generalized mass action kinetics \nThe reaction rates may not be proportional to the concentration in certain circumstances, and we may have what are called _power-law kinetics_. The mathematical forms of the elementary rate laws are \n\n$$\\begin{align} v &= kx^a \\tag{2.7a} \\\\ v &= kx_1^ax_2^b \\tag{2.7b} \\end{align}$$\n\nwhere $a$ and $b$ can be greater or smaller than unity. In cases where a restricted geometry reduces the probability of collision relative to a geometrically-unrestricted case, the numerical values of $a$ and $b$ are less than unity, and vice versa. \n\n### Combining elementary reactions \nIn the analysis of chemical kinetics, the elementary reactions are often combined into reaction mechanisms. Following are two such examples: \n\n#### Reversible reactions:\nIf a chemical conversion is thermodynamically reversible, then the two opposite reactions can be combined as\n\n$$\\begin{equation} x_1 \\underset{v_{-}}{\\stackrel{v_+}{\\rightleftharpoons}} x_2 \\end{equation}$$\n\nThe net rate of the reaction can then be described by the difference between the forward and reverse reactions; \n\n$$\\begin{align} v_{net} &= v^+ - v^- = k^+x_1 - k^-x_2, \\tag{2.8a} \\\\ &K_{eq} = x_2 / x_1 = k^+/k^- \\tag{2.8b} \\end{align}$$\n\nwhere $K_{eq}$ is the equilibrium constant for the reaction. Note that $v_{net}$ can be positive or negative. Both $k^+$ and $k^-$ have units of reciprocal time. They are thus inverses of time constants. Similarly, a net reversible bi-linear reaction can be written as \n\n$$\\begin{equation} x_1 + x_2 \\underset{v_{-}}{\\stackrel{v_+}{\\rightleftharpoons}} x_3 \\end{equation}$$\n\nThe net rate of the reaction can then be described by \n\n$$\\begin{align} v_{net} &= v^+ - v^- = k^+x_1x_2 - k^-x_3, \\\\ &K_{eq} = x_3 / x_1x_2 = k^+/k^- \\end{align}$$\n\nwhere $K_{eq}$ is the equilibrium constant for the reaction. The units on the rate constant $(k^+)$ for a bi-linear reaction are reciprocal concentration per time (cf. Eq. (2.6b)). Note that we can also write this equation as \n\n$$\\begin{equation} v_{net} = k^+x_1x_2 - k^-x_3 = k^+(x_1x_2 - x_3/K_{eq}) \\end{equation}$$\n\nthat can be a convenient form as often the $K_{eq}$ is a known number with a thermodynamic basis, and thus only a numerical value for $k^+$ needs to be estimated. \n\n#### Converting enzymatic reaction mechanisms into rate laws: \nOften, more complex combinations of elementary reactions are analyzed. 
The classical irreversible Michaelis-Menten mechanism is comprised of three elementary reactions. \n\n$$\\begin{equation} S + E \\underset{v_{-1} = k_{-1}x}{\\stackrel{v_1 = k_1se}{\\rightleftharpoons}} X \\stackrel{v_2 = k_2x}{\\longrightarrow} E + P \\end{equation}$$\n\nwhere a substrate, $S$, binds to an enzyme to form a complex, $X$, that can break down to generate the product, $P$. The concentrations of the corresponding chemical species is denoted with the same lower case letter; i.e., $e=[E]$, etc. This reaction mechanism has two conservation quantities associated with it: one on the enzyme $e_{tot} = e + x$ and one on the substrate $s_{tot} = s+x+p$. \n\nA quasi-steady-state assumption (QSSA), $dx/dt=0$, is then applied to generate the classical rate law\n\n$$\\begin{equation} \\frac{ds}{dt} = \\frac{-v_ms}{K_m + s} \\tag{2.9} \\end{equation}$$\n\nthat describes the kinetics of this reaction mechanism. This expression is the best-known rate equation in enzyme kinetics. It has two parameters: the maximal reaction rate $v_m$, and the Michaelis-Menten constant $K_m = (k_{-1} + k_2)/k_1$. The use and applicability of kinetic assumptions to deriving rate laws for enzymatic reaction mechanisms is discussed in detail in Chapter 5. \n\nIt should be noted that the elimination of the elementary rates through the use of the simplifying kinetic assumptions _fundamentally changes_ the mathematical nature of the dynamic description from that of bi-linear equations to that of hyperbolic equations (i.e., Eq. 2.9) and, more generally, to ratios of polynomial functions. \n\n### Pseudo-first order rate constants (PERCs) \nThe effects of temperature, pH, enzyme concentrations, and other factors that influence the kinetics can often be accounted for in a condition specific numerical value of a constant that looks like a regular elementary rate constant, as in Eq (2.4). The advantage of having such constants is that it simplifies the network dynamic analysis. The disadvantage is that dynamic descriptions based on PERCs are condition specific. This issue is discussed in Parts 3 and 4 of the book. \n\n### The mass action ratio ($\\Gamma$) \nThe equilibrium relationship among reactants and products of a chemical reaction are familiar to the reader. For example, the equilibrium relationship for the PGI reaction (Eq. (2.8)) is \n\n$$\\begin{equation} K_{eq} = \\frac{[\\text{F6P}]_{eq}}{[\\text{G6P}]_{eq}} \\tag{2.10} \\end{equation}$$\n\nThis relationship is observed in a closed system after the reaction is allowed to proceed to equilibrium over a long time, $t \\rightarrow \\infty$, (which in practice has a meaning relative to the time constant of the reaction, $t \\gg \\tau_f$). \n\nHowever, in a cell, as shown in Eq. (2.2), the PGI reactions operate in an \"open\" environment, i.e., G6P is being produced and F6P is being consumed. The reaction reaches a steady state in a cell that will have concentration values that are different from the equilibrium value. The _mass action ratio_ for open systems, defined to be analogous to the equilibrium constant, is \n\n$$\\begin{equation} \\Gamma = \\frac{[\\text{F6P}]_{ss}}{[\\text{G6P}]_{ss}} \\tag{2.11} \\end{equation}$$\n\nThe mass action ratio is denoted by $\\Gamma$ in the literature. \n\n### 'Distance' from equilibrium \nThe numerical value of the ratio $\\Gamma / K_{eq}$ relative to unity can be used as a measure of how far a reaction is from equilibrium in a cell. Fast reversible reactions tend to be close to equilibrium in an open system. 
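To make this notion of "distance from equilibrium" concrete before it is folded into a rate expression, here is a minimal numerical sketch for a bi-linear reaction $x_1 + x_2 \rightleftharpoons x_3$; the equilibrium constant and steady-state concentrations below are made-up illustrative values, not numbers taken from the text or from any measurement:

```python
# Illustrative values only (assumed, arbitrary units) -- not data from the text.
K_eq = 10.0                           # assumed equilibrium constant for x1 + x2 <=> x3
x1_ss, x2_ss, x3_ss = 0.5, 0.2, 0.9   # assumed steady-state concentrations in the open system

gamma = x3_ss / (x1_ss * x2_ss)       # mass action ratio: the open-system analogue of K_eq
print(f"Gamma            = {gamma:.1f}")
print(f"Gamma / K_eq     = {gamma / K_eq:.2f}")   # close to 1 -> close to equilibrium
print(f"1 - Gamma / K_eq = {1 - gamma / K_eq:.2f}")
```

A ratio $\Gamma/K_{eq}$ close to unity, as in this sketch, is what one expects for a fast reversible reaction operating near equilibrium.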
For instance, the net reaction rate for a reversible bi-linear reaction (Eq. (2.2)) can be written as: \n\n$$\\begin{equation} v_{net} = k^+x_1x_2 - k^-x_3 = k^+x_1x_2(1 - \\Gamma/K_{eq}) \\end{equation}$$\n\nIf the reaction is \"fast\" then $(k^+x_1x_2)$ is a \"large\" number and thus $(1 - \\Gamma/K_{eq})$ tends to be a \"small\" number, since the net reaction rate is balanced relative to other reactions in the network. \n\n### Recap \nThese basic considerations of reaction rates and enzyme kinetic rate laws are described in much more detail in other standard sources, e.g., (Segal, 1975). In this text, we are not so concerned about the details of the mathematical form of the rate laws, but rather with the order-of-magnitude of the rate constants and how they influence the properties of the dynamic response. \n\n## More on Aggregate Variables\nPools, or aggregate variables, form as a result of well-separated time constants. Such pools can form in a hierarchical fashion. Aggregate variables can be physiologically significant, such as the total inventory of high-energy phosphate bonds, or the total inventory of particular types of redox equivalents. These important concepts are perhaps best illustrated through a simple example that should be considered a primer on a rather important and intricate subject matter. Formation of aggregate variables in complex models is seen throughout Parts III and IV of this text. \n\n\n\n**Figure 2.5:** The chemical transformations involved in the distribution of high-energy phosphate bonds among adenosines.\n\n### Distribution of high-energy phosphate among the adenylate phosphates \nIn Figure\u00a02.5 we show the skeleton structure of the transfer of high-energy phosphate bonds among the adenylates. In this figure we denote the use of ATP by $v_1$ and the synthesis of ATP from ADP by $v_2$, $v_5$ and $v_{-5}$ denote the reaction rates of adenylate kinase that distributes the high energy phosphate bonds among ATP, ADP, and AMP, through the reaction \n\n$$\\begin{equation} 2 \\text{ADP} \\leftrightharpoons \\text{ATP} + \\text{AMP} \\tag{2.12} \\end{equation}$$\n\nFinally, the synthesis of AMP and its degradation is denoted by $v_3$ and $v_4$, respectively. The dynamic mass balance equations that describe this schema are: \n\n$$\\begin{align} \\frac{d \\text{ATP}}{dt} &= -v_1 + v_2 + v_{5, net} \\tag{2.13a} \\\\ \\frac{d \\text{ADP}}{dt} &= v_1 - v_2 - 2 v_{5, net} \\tag{2.13b} \\\\ \\frac{d \\text{AMP}}{dt} &= v_3 - v_4 + v_{5, net} \\tag{2.13c} \\end{align} $$\n\nThe responsiveness of these reactions falls into three categories: $v_{5, net} (=v_5 - v_{-5})$ is a _fast_ reversible reaction, $v_1$ and $v_2$ have _intermediate_ time scales, and the kinetics of $v_3$ and $v_4$ are _slow_ and have large time constants associated with them. Based on this time scale decomposition, we can combine the three concentrations so that they lead to the elimination of the reactions of a particular response time category on the right hand side of (Eq. 2.13). These combinations are as follows: \n\n* First, we can eliminate all but the slow reactions by forming the sum of the adenosine phosphates. \n\n $$\\begin{equation} \\frac{d}{dt}(\\text{ATP} + \\text{ADP} + \\text{AMP}) = v_3 - v_4\\ \\text{(slow)} \\tag{2.14} \\end{equation}$$\n\n The only reaction rates that appear on the right hand side of the equation are $v_3$ and $v_4$, that are the slowest reactions in the system. 
Thus, the summation of ATP, ADP, and AMP is a pool or aggregate variable that is expected to exhibit the slowest dynamics in the system. \n\n\n* The second pooled variable of interest is the summation of 2ATP and ADP that represents the total number of high energy phosphate bonds found in the system at any given point in time: \n \n $$\\begin{equation} \\frac{d}{dt}(2 \\text{ATP} + \\text{ADP}) = -v_1 + v_2\\ \\text{(intermediate)} \\tag{2.15} \\end{equation}$$\n\n This aggregate variable is only moved by the reaction rates of intermediate response times, those of $v_1$ and $v_2$. \n\n\n* The third aggregate variable we can form is the sum of the energy carrying nucleotides which are \n\n $$\\begin{equation} \\frac{d}{dt}(\\text{ATP} + \\text{ADP}) = -v_{5, net}\\ \\text{(fast)} \\tag{2.16} \\end{equation}$$\n\n This summation will be the fastest aggregate variable in the system. \n\nNotice that by combining the concentrations in certain ways, we define aggregate variables that may move on distinct time scales in the simple model system, and, in addition, we can interpret these variables in terms of their metabolic physiological significance. However, in general, time scale decomposition is more complex as the concentrations that influence the rate laws may move on many time scales and the arguments in the rate law functions must be pooled as well. \n\n### Using ratios of aggregate variables to describe metabolic physiology \nWe can define an aggregate variable that represents the _capacity_ to carry high-energy phosphate bonds. That simply is the summation of $\\text{ATP} + \\text{ADP} + \\text{AMP}.$ This number multiplied by 2 would be the total number of high energy phosphate bonds that can be stored in this system. The second variable that we can define here would be the _occupancy_ of that capacity, $\\textit{2ATP + ADP}$, which is simply an enumeration of how much of that capacity is occupied by high-energy phosphate bonds. Notice that the occupancy variable has a conjugate pair, which would be the vacancy variable. The ratio of these two aggregate variables forms a charge \n\n$$\\begin{equation} \\text{charge} = \\frac{\\text{occupancy}}{\\text{capacity}} \\tag{2.17} \\end{equation}$$\n\ncalled the _energy charge,_ given by \n\n$$\\begin{equation} \\text{E.C} = \\frac{2 \\text{ATP} \\ + \\ \\text{ADP}}{2(\\text{ATP} \\ + \\ \\text{ADP} \\ + \\ \\text{AMP})} \\tag{2.18} \\end{equation}$$\n\nwhich is a variable that varies between 0 and 1. This quantity is the _energy charge_ defined by Daniel Atkinson\u00a0(Atkinson, 1968). In cells, the typical numerical range for this variable when measured is 0.80-0.90. \n\nIn a similar way, one can define other redox charges. For instance, the _catabolic redox charge_ on the NADH carrier can be defined as \n\n$$\\begin{equation} \\text{C.R.C} = \\frac{\\text{NADH}}{\\text{NADH} \\ + \\ \\text{NAD}} \\tag{2.19} \\end{equation}$$\n\nwhich simply is the fraction of the NAD pool that is in the reduced form of NADH. It typically has a low numerical value in cells, i.e., about 0.001-0.0025, and therefore this pool is typically discharged by passing the redox potential to the electron transfer system (ETS). The _anabolic redox charge_\n\n$$\\begin{equation} \\text{A.R.C} = \\frac{\\text{NADPH}}{\\text{NADPH} \\ + \\ \\text{NADP}} \\tag{2.20} \\end{equation}$$\n\nin contrast, tends to be in the range of 0.5 or higher, and thus this pool is charged and ready to drive biosynthetic reactions. 
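As a quick numerical illustration of these charge ratios (Eqs. (2.17)-(2.20)), the snippet below evaluates them for a set of assumed concentrations; the numbers are purely illustrative and are chosen only so that the results fall in the typical ranges quoted above:

```python
# Assumed, illustrative concentrations (mM); not measurements from the text.
ATP, ADP, AMP = 1.6, 0.4, 0.1
NADH, NAD = 0.004, 2.0
NADPH, NADP = 0.12, 0.08

capacity = 2 * (ATP + ADP + AMP)   # total possible high-energy phosphate bonds
occupancy = 2 * ATP + ADP          # high-energy phosphate bonds actually present

print(f"energy charge  E.C.   = {occupancy / capacity:.2f}")      # Eq. (2.18)
print(f"catabolic      C.R.C. = {NADH / (NADH + NAD):.4f}")        # Eq. (2.19)
print(f"anabolic       A.R.C. = {NADPH / (NADPH + NADP):.2f}")     # Eq. (2.20)
```

With these assumed values the energy charge comes out near 0.86, the catabolic redox charge near 0.002, and the anabolic redox charge near 0.6, i.e., in the ranges discussed above.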
Therefore, pooling variables together based on a time scale hierarchy and chemical characteristics can lead to aggregate variables that are physiologically meaningful. \n\nIn Chapter\u00a08 we further explore these fundamental concepts of time scale hierarchy. They are then used in Parts III and IV in interpreting the dynamic states of realistic biological networks. \n\n## Time Scale Decomposition\n### Reduction in dimensionality \nAs illustrated by the examples given in the previous section, most biochemical reaction networks are characterized by many time constants. Typically, these time constants are of very different orders of magnitude. The hierarchy of time constants can be represented by the time axis, Figure 2.6. Fast transients are characterized by the processes at the extreme left and slow transients at the extreme right. The process time scale, i.e., the time scale of interest, can be represented by a _window of observation_ on this time axis. Typically, we have three principal ranges of time constants of interest if we want to focus on a limited set of events taking place in a network. We can thus decompose the system response in time. To characterize network dynamics completely we would have to study all the time constants. \n\n\n\n**Figure 2.6:** Schematic illustration of network transients that overlap with the time span of observation. n, n + 1, ... represent the decadic order of time constants. \n\n### Three principal time constants \nOne can readily conceptualize this by looking at a three-dimensional linear system where the first time constant represents the fast motion, the second represents the time scale of interest, and the third is a slow motion, see Figure\u00a02.7. The general solution to a three-dimensional linear system is \n\n$$\\begin{align} \\textbf{x}(t) &=\\textbf{v}_1 \\langle \\textbf{u}_1, \\ \\textbf{x}_0 \\rangle \\ \\text{exp}(\\lambda_1 t) && \\text{fast} \\\\ &+\\textbf{v}_2 \\langle \\textbf{u}_2, \\ \\textbf{x}_0 \\rangle \\ \\text{exp}(\\lambda_2 t) && \\text{intermediate} \\\\ &+\\textbf{v}_3 \\langle \\textbf{u}_3, \\ \\textbf{x}_0 \\rangle \\ \\text{exp}(\\lambda_3 t) && \\text{slow} \\tag{2.21} \\end{align}$$\n\nwhere $\\textbf{v}_i$ are the _eigenvectors,_ $\\textbf{u}_i$ are the _eigenrows,_ and $\\boldsymbol{\\lambda}_i$ are the _eigenvalues_ of the Jacobian matrix. The eigenvalues are negative reciprocals of time constants. \n\nThe terms that have time constants faster than the observed window can be eliminated from the dynamic description as these terms are small. However, the mechanisms which have transients slower than the observed time exhibit high \"inertia\" and hardly move from their initial state and can be considered constants. \n\n\n\n**Figure 2.7:** A schematic of a decay comprised of three dynamic modes with well-separated time constants. \n\n#### Example: 3D motion simplifying to a 2D motion \nFigure\u00a02.8 illustrates a three-dimensional space where there is rapid motion into a slow two-dimensional subspace. The motion in the slow subspace is spanned by two \"slow\" eigenvectors, whereas the fast motion is in the direction of the \"fast\" eigenvector. \n\n\n\n**Figure 2.8:** Fast motion into a two-dimensional subspace.\n\n### Multiple time scales \nIn reality there are many more than three time scales in a realistic network. In metabolic systems there are typically many time scales and a hierarchical formation of pools, Figure\u00a02.9. The formation of such hierarchies will be discussed in Parts III and IV of the text. 
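As a minimal numerical sketch of the modal decomposition in Eq. (2.21), the code below builds an arbitrary 3x3 Jacobian with three well-separated, negative eigenvalues (the matrix is invented for illustration and does not correspond to any particular network) and recovers the associated time constants and modes:

```python
import numpy as np

# Illustrative only: an arbitrary, made-up 3x3 Jacobian with well-separated,
# negative, real eigenvalues (its triangular form makes them easy to see).
J = np.array([[-100.0,  5.0,  1.0],
              [   0.0, -1.0,  0.5],
              [   0.0,  0.0, -0.01]])

lam, V = np.linalg.eig(J)        # eigenvalues lambda_i and eigenvectors v_i (columns of V)
U = np.linalg.inv(V)             # rows of V^{-1} play the role of the eigenrows u_i
x0 = np.array([1.0, 1.0, 1.0])   # an arbitrary initial perturbation

# Time constants are the negative reciprocals of the eigenvalues.
for name, k in zip(("fast", "intermediate", "slow"), np.argsort(-np.abs(lam))):
    print(f"{name:12s}  lambda = {lam[k]:8.2f}   tau = {-1.0 / lam[k]:7.2f}")

# Eq. (2.21): x(t) is the sum of three modes  v_k <u_k, x0> exp(lambda_k t).
def x_of_t(t):
    return sum(V[:, k] * (U[k, :] @ x0) * np.exp(lam[k] * t) for k in range(3))

print(x_of_t(0.0))   # recovers x0 up to round-off
```

On a window of observation around the intermediate time constant, the fast mode has already decayed and the slow mode has hardly moved, which is the reduction in dimensionality described above.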
\n\n\n\n**Figure 2.9:** Multiple time scales in a metabolic network and the process of pool formation. This figure represents human folate metabolism. (a) A map of the folate network. (b) An illustration progressive pool formation. Beyond the first time scale pools form between CHF and CH2F; and 5MTHF, 10FTHF, SAM; and MET and SAH (these are abbreviations for the long, full names of these metabolites). DHF and THF form a pool beyond the second time scale. Beyond the third time scale CH2F/CHF join the 5MTHF/10FTHF/SAM pool. Beyond the fourth time scale HCY joins the MET/SAH pool. Ultimately, on time scales on the order of a minute and slower, interactions between the pools of folate carriers and methionine metabolites interact. Courtesy of Neema Jamshidi\u00a0(Jamshidi, 2008a).\n\n## Network Structure versus Dynamics\nThe stoichiometric matrix represents the topological structure of the network, and this structure has significant implications with respect to what dynamic states a network can take. Its null spaces give us information about pathways and pools. It also determines the structural features of the gradient matrix. Network topology can have a dominant effect on network dynamics. \n\n### The null spaces of the stoichiometric matrix \nAny matrix has a right and a left null space. The right null space, normally called just the null space, is defined by all vectors that give zero when post-multiplying that matrix: \n\n$$\\begin{equation} \\textbf{Sv}=0 \\tag{2.22} \\end{equation}$$\n\nThe null space thus contains all the steady state flux solutions for the network. The null space can be spanned by a set of basis vectors that are pathway vectors\u00a0(SB1). \n\nThe left null space is defined by all vectors that give zero when pre-multiplying that matrix: \n\n$$\\begin{equation} \\textbf{lS}=0 \\tag{2.23} \\end{equation}$$\n\nThese vectors $\\textbf{l}$ correspond to pools that are always conserved at all time scales. We will call them _time invariants_. Throughout the book we will look at these properties of the stoichiometric matrices that describe the networks being studied. \n\n\n\n**Figure 2.10:** A schematic showing how the structure of $\\textbf{S}$ and $\\textbf{G}$ form matrices that have non-zero elements in the same location if one of these matrices is transposed. The columns of $\\textbf{S}$ and the rows of $\\textbf{G}$ have similar but not identical vectors in an n-dimensional space. Note that this similarity only holds once the two opposing elementary reactions have been combined into a net reaction.\n\n### The structure of the gradient matrix \nWe will now examine some of the properties of $\\textbf{G}$. If a compound $x_i$ participates in reaction $v_j$, then the entry $s_{i,j}$ is non-zero. Thus, a net reaction \n\n$$\\begin{equation} x_i + x_{i + 1} \\stackrel{v_j}{\\leftrightharpoons} x_{i + 2} \\tag{2.24} \\end{equation}$$\n\nwith a net reaction rate \n\n$$\\begin{equation} v_j = v_j^+ - v_j^- \\tag{2.25} \\end{equation}$$\n\ngenerates three non-zero entries in $\\textbf{S}$: $s_{i,j}$, $s_{i + 1,j}$, and $s_{i + 2,j}$. Since compounds $x_i$, $x_{i + 1}$, and $x_{i + 2}$ influence reaction $v_j$, they will also generate non-zero elements in $\\textbf{G}$, see Figure\u00a02.10. 
Thus, non-zero elements generated by the reactions are: \n\n$$\\begin{equation} g_{j, i} = \\frac{\\partial v_j}{\\partial x_i}, \\ g_{j, i + 1} = \\frac{\\partial v_j}{\\partial x_{i + 1}}, \\ \\text{and} \\ g_{j, i + 2} = \\frac{\\partial v_j}{\\partial x_{i + 2}} \\tag{2.26} \\end{equation}$$\n\nIn general, every reaction in a network is a reversible reaction. Hence we have the the following relationships between the elements of $\\textbf{S}$ and $\\textbf{G}$: \n\n$$\\begin{align} \\text{if} \\ &s_{i, j} = 0 \\ \\text{then} \\ g_{j, i} = 0 \\\\ \\text{if} \\ &s_{i, j} \\ne 0 \\ \\text{then} \\ g_{j, i} \\ne 0 \\\\ \\text{if} \\ &s_{i, j} > 0 \\ \\text{then} \\ g_{j, i} < 0 \\\\ \\text{if} \\ &s_{i, j} < 0 \\ \\text{then} \\ g_{j, i} >0 \\end{align}$$\n\nNote that for the rare cases where a reaction is effectively irreversible, an element in $\\textbf{G}$ can become very small, but in principle finite.\n\nIt can thus be seen that \n\n$$\\begin{equation} -\\textbf{G}^T \\ \\tilde \\ \\ \\textbf{S} \\tag{2.27} \\end{equation}$$\n\nin the sense that both will have non-zero elements in the same location. These elements will have opposite signs. \n\n### Stoichiometric autocatalysis \nThe fundamental structure of most catabolic pathways in a cell is such that a compound is imported into a cell and then some property stored on cofactors is transferred to the compound and the molecule is thus \"charged\" with this property. This charged form is then degraded into a waste product that is secreted from the cell. During that degradation process, the property that the molecule was charged with is re-extracted from the compound, often in larger quantities than was used in the initial charging of the compound. This pathway structure is the cellular equivalent of \"it takes money to make money,\" and its basic network structure is in Figure\u00a02.11. \n\n\n\n**Figure 2.11:** The prototypic pathway structure for degradation of a carbon substrate.\n\nThis figure illustrates the import of a substrate, $S$, to a cell. It is charged with high-energy phosphate bonds to form an intermediate, $X$. $X$ is then subsequently degraded to a waste product, $W$, that is secreted. In the degradation process, ATP is recouped in a larger quantity than was used in the charging process. This means that there is a net production of ATP in the two steps, and that difference can be used to drive various load functions on metabolism. \n\nThe consequence of this schema is basically _stoichiometric autocatalysis_ that can lead to multiple steady states. The rate of formation of $\\text{ATP}$ from this schema as balanced by the load parameters is illustrated in Figure\u00a02.12. This figure shows that the $\\text{ATP}$ generation is 0 if all the adenosine phosphates are in the form of $\\text{ATP}$ because there is no $\\text{ADP}$ to drive the conversion of X to W. The $\\text{ATP}$ generation is also 0 if there is no $\\text{ATP}$ available, because $S$ cannot be charged to form $X$. The curve in between $\\text{ATP} = 0$ and $\\text{ATP} = \\text{ATP}_{max}$ will be positive. The $\\text{ATP}$ load, or use rate, will be a curve that grows with $\\text{ATP}$ concentration and is sketched here as a hyperbolic function. As shown, there are three intersections in this curve, with the upper stable steady-state being the physiological state of this system. This system can thus have multiple steady-states and this property is a consequence of the topological structure of this reaction network. 
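A small numerical sketch can make this multiple steady-state argument concrete; the generation and load curves below are arbitrary functions chosen only to reproduce the qualitative shapes described above (zero net ATP generation at $\text{ATP} = 0$ and at $\text{ATP}_{max}$, and a saturating, hyperbolic load), not rate laws taken from the text:

```python
import numpy as np

# Illustrative only: made-up curves that mimic the qualitative picture in Figure 2.12.
ATP_max = 10.0

def generation(atp):
    # net ATP production: zero at ATP = 0 and at ATP = ATP_max, positive in between
    return 0.8 * atp * (ATP_max - atp) / ATP_max

def load(atp):
    # ATP use rate, sketched as a saturating (hyperbolic) function of ATP
    return 1.2 * atp / (0.3 + atp)

atp = np.linspace(0.0, ATP_max, 100001)
balance = generation(atp) - load(atp)

# Steady states sit where generation and load intersect (the balance changes sign).
steady_states = atp[1:][np.diff(np.sign(balance)) != 0]
print("approximate steady states (ATP):", np.round(steady_states, 2))
```

With these assumed curves the script finds three intersections (one at ATP = 0 and two interior ones, roughly ATP = 1.5 and 8.2), the upper one playing the role of the stable, physiological state in the discussion above.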
\n\n### Network structure \nThe three topics discussed in this section show that the stoichiometric matrix has a dominant effect on integrated network functions and sets constraints on the dynamic states that a network can achieve. The numerical values of the elements of the gradient matrix determine which of these states are chosen. \n\n## Physico-Chemical Effects\nMolecules have other physico-chemical properties besides the collision rates that are used in kinetic theory. They also have osmotic properties and are electrically charged. Both of these features influence dynamic descriptions of biochemical reaction networks. \n\n### The constant volume assumption \nMost systems that we identify in systems biology correspond to some biological entity. Such entities may be an organelle like the nucleus or the mitochondria, or it may be the whole cell, as illustrated in Figure\u00a02.13. \n\nA compound, $x_i$, internal to the system, has a mass balance on the total amount per cell. We denote this quantity with an $M_i$. $M_i$ is a product of the volume per cell, $V$, and the concentration of the compound, $x_i$, which is amount per volume \n\n$$\\begin{equation} M_i = V \\ x_i \\tag{2.28} \\end{equation}$$\n\nThe time derivative of the amount per cell is given by: \n\n$$\\begin{equation} \\frac{M_i}{dt} = \\frac{d}{dt}(V \\ x_i) = V \\frac{d x_i}{dt} + x_i \\frac{dV}{dt} \\tag{2.29} \\end{equation}$$\n\n\n\n**Figure 2.13:** An illustration of a 'system' with a defined boundary, inputs and outputs, and an internal network of reactions. The $V$ volume of the system may change over time. $\\Pi$ denotes osmotic pressure, see (Eq. 2.32).\n\nThe time change of the amount $M_i$ per cell is thus dependent on two dynamic variables. One is $dx_i/dt$ which is the time change in the concentration of $x_i$, and the second is $dV/dt$ which is the change in volume with time. The volume is typically taken to be time invariant and therefore the term $dV/dt$ is equal to 0 and therefore results in a system that is of _constant volume_. In this case \n\n$$\\begin{equation} \\frac{d x_i}{dt} = \\frac{1}{V}\\frac{d M_i}{dt} \\tag{2.30} \\end{equation}$$\n\nThis constant volume assumption (recall Table\u00a01.2) needs to be carefully scrutinized when one builds kinetic models since volumes of cellular compartments tend to fluctuate and such fluctuations can be very important. Very few kinetic models in the current literature account for volume variation because it is mathematically challenging and numerically difficult to deal with. A few kinetic models have appeared, however, that do take volume fluctuations into account\u00a0(Joshi, 1989m and Klipp, 2005). \n\n### Osmotic balance \nMolecules come with osmotic pressure, electrical charge, and other properties, all of which impact the dynamic states of networks. For instance, in cells that do not have rigid walls, the osmotic pressure has to be balanced inside $(\\Pi_{in})$ and outside $(\\Pi_{out})$ of the cell (Figure\u00a02.13), i.e., \n\n$$\\begin{equation} \\Pi_{in} = \\Pi_{out} \\tag{2.31} \\end{equation}$$\n\nAt first approximation, osmotic pressure is proportional to the total solute concentration, \n\n$$\\begin{equation} \\Pi = R T \\sum_i x_i \\tag{2.32} \\end{equation}$$\n\nalthough some compounds are more osmotically-active than others and have osmotic coefficients that are not unity. 
The consequences are that if a reaction takes one molecule and splits it into two, the reaction comes with an increase in osmotic pressure that will impact the total solute concentration allowable inside the cell, as it needs to be balanced relative to that outside the cell. Osmotic balance equations are algebraic equations that are often complicated and therefore are often conveniently ignored in the formulation of a kinetic model. \n\n### Electroneutrality \nAnother constraint on dynamic network models is the accounting for electrical charge. Molecules tend to be charged positively or negatively. Elementary charges cannot be separated, and therefore the total number of positive and negative charges within a compartment must balance. Any import and export in and out of a compartment of a charged species has to be counterbalanced by the equivalent number of molecules of the opposite charge crossing the membrane. Typically, bilipid membranes are impermeable to cations, but permeable to anions. For instance, the deliberate displacement of sodium and potassium by the ATP-driven sodium potassium pump is typically balanced by chloride ions migrating in and out of a cell or a compartment leading to a state of electroneutrality both inside and outside the cell. The equations that describe electroneutrality are basically a summation of the charge, $z_i$, of a molecule multiplied by its concentration, \n\n$$\\begin{equation} \\sum_i z_ix_i = 0 \\tag{2.33} \\end{equation}$$\n\nand such terms are summed up over all the species in a compartment. That sum has to add up to 0 to maintain electroneutrality. Since that summation includes concentrations of species, it represents an algebraic equation that is a constraint on the allowable concentration states of a network.\n\n## Summary\n\n* Time constants are key quantities in dynamic analysis. Large biochemical reaction networks typically have a broad spectrum of time constants. \n\n* Well-separated time constants lead to pooling of variables to form aggregates. Aggregate variables represent a coarse-grained (i.e., lower dimensional) view of network dynamics and can lead to physiologically meaningful variables. \n\n* Elementary reactions and mass action kinetics are the irreducible events in dynamic descriptions of networks. Elementary reactions are often combined into reaction mechanisms from which rate laws are derived using simplifying assumptions. \n\n* Network structure has an overarching effect on network dynamics. Certain physico-chemical effects can as well. Thus topological analysis is useful, and so is a careful examination of the assumptions (recall Table\u00a01.2) that underlie the dynamic mass balances (Eq. (1.1)) for the system being modeled and simulated. \n\n$\\tiny{\\text{\u00a9 B. \u00d8. 
Palsson 2011;}\ \text{This publication is in copyright.}\\ \text{Subject to statutory exception and to the provisions of relevant collective licensing agreements,}\\ \text{no reproduction of any part may take place without the written permission of Cambridge University Press.}}$\n\n# Defining Custom Display Logic for Your Own Objects\n\n## Overview\n\nIn Python, objects can declare their textual representation using the `__repr__` method. IPython expands on this idea and allows objects to declare other, richer representations including:\n\n* HTML\n* JSON\n* PNG\n* JPEG\n* SVG\n* LaTeX\n\nThis Notebook shows how you can add custom display logic to your own classes, so that they can be displayed using these rich representations. There are two ways of accomplishing this:\n\n1. Implementing special display methods such as `_repr_html_`.\n2. Registering a display function for a particular type.\n\nIn this Notebook we show how both approaches work.\n\nBefore we get started, we will import the various display functions for displaying the different formats we will create.\n\n\n```python\nfrom IPython.display import display\nfrom IPython.display import (\n    display_html, display_jpeg, display_png,\n    display_javascript, display_svg, display_latex\n)\n```\n\n## Implementing special display methods\n\nThe main idea of the first approach is that you have to implement special display methods, one for each representation you want to use. 
Here is a list of the names of the special methods and the values they must return:\n\n* `_repr_html_`: return raw HTML as a string\n* `_repr_json_`: return raw JSON as a string\n* `_repr_jpeg_`: return raw JPEG data\n* `_repr_png_`: return raw PNG data\n* `_repr_svg_`: return raw SVG data as a string\n* `_repr_latex_`: return LaTeX commands in a string surrounded by \"$\".\n\n### Model Citizen: pandas\n\nA prominent example of a package that has IPython-aware rich representations of its objects is [pandas](http://pandas.pydata.org/).\n\nA pandas DataFrame has a rich HTML table representation,\nusing `_repr_html_`.\n\n\n\n```python\nimport io\nimport pandas\n```\n\n\n```python\n%%writefile data.csv\nDate,Open,High,Low,Close,Volume,Adj Close\n2012-06-01,569.16,590.00,548.50,584.00,14077000,581.50\n2012-05-01,584.90,596.76,522.18,577.73,18827900,575.26\n2012-04-02,601.83,644.00,555.00,583.98,28759100,581.48\n2012-03-01,548.17,621.45,516.22,599.55,26486000,596.99\n2012-02-01,458.41,547.61,453.98,542.44,22001000,540.12\n2012-01-03,409.40,458.24,409.00,456.48,12949100,454.53\n\n```\n\n\n```python\ndf = pandas.read_csv(\"data.csv\")\npandas.set_option('display.notebook_repr_html', False)\ndf\n```\n\nrich HTML can be activated via `pandas.set_option`.\n\n\n```python\npandas.set_option('display.notebook_repr_html', True)\ndf\n```\n\n\n```python\nlines = df._repr_html_().splitlines()\nprint(\"\\n\".join(lines[:20]))\n```\n\n### Exercise\n\nWrite a simple `Circle` Python class. Don't even worry about properties such as radius, position, colors, etc. To help you out use the following representations (remember to wrap them in Python strings):\n\nFor HTML:\n\n ○\n\nFor SVG:\n\n \n \n \n\nFor LaTeX (wrap with `$` and use a raw Python string):\n\n \\bigcirc\n\nAfter you write the class, create an instance and then use `display_html`, `display_svg` and `display_latex` to display those representations.\n\nTips : you can slightly tweek the representation to know from which `_repr_*_` method it came from. \nFor example in my solution the svg representation is blue, and the HTML one show \"`HTML`\" between brackets.\n\n### Solution\n\nHere is my simple `MyCircle` class:\n\n\n```python\n# %load ../../exercises/IPython Kernel/soln/mycircle.py\nclass MyCircle(object):\n\n def __init__(self, center=(0.0,0.0), radius=1.0, color='blue'):\n self.center = center\n self.radius = radius\n self.color = color\n\n def _repr_html_(self):\n return \"○ (html)\"\n\n def _repr_svg_(self):\n return \"\"\"\n \n \"\"\"\n \n def _repr_latex_(self):\n return r\"$\\bigcirc \\LaTeX$\"\n\n def _repr_javascript_(self):\n return \"alert('I am a circle!');\"\n\n```\n\nNow create an instance and use the display methods:\n\n\n```python\nc = MyCircle()\n```\n\n\n```python\ndisplay_html(c)\n```\n\n\n```python\ndisplay_svg(c)\n```\n\n\n```python\ndisplay_latex(c)\n```\n\n\n```python\ndisplay_javascript(c)\n```\n\n## Adding IPython display support to existing objects\n\nWhen you are directly writing your own classes, you can adapt them for display in IPython by following the above example. But in practice, we often need to work with existing code we can't modify. We now illustrate how to add these kinds of extended display capabilities to existing objects. 
To continue with our example above, we will add a PNG representation to our `Circle` class using Matplotlib.\n\n### Model citizen: sympy\n\n[SymPy](http://sympy.org) is another model citizen that defines rich representations of its object.\nUnlike pandas above, sympy registers display formatters via IPython's display formatter API, rather than declaring `_repr_mime_` methods.\n\n\n```python\nfrom sympy import Rational, pi, exp, I, symbols\nx, y, z = symbols(\"x y z\")\n```\n\n\n```python\nr = Rational(3,2)*pi + exp(I*x) / (x**2 + y)\nr\n```\n\nSymPy provides an `init_printing` function that sets up advanced $\\LaTeX$\nrepresentations of its objects.\n\n\n```python\nfrom sympy.interactive.printing import init_printing\ninit_printing()\nr\n```\n\nTo add a display method to an existing class, we must use IPython's display formatter API. Here we show all of the available formatters:\n\n\n```python\nip = get_ipython()\nfor mime, formatter in ip.display_formatter.formatters.items():\n print('%24s : %s' % (mime, formatter.__class__.__name__))\n\n```\n\nLet's grab the PNG formatter:\n\n\n```python\npng_f = ip.display_formatter.formatters['image/png']\n```\n\nWe will use the `for_type` method to register our display function.\n\n\n```python\npng_f.for_type?\n```\n\nAs the docstring describes, we need to define a function the takes the object as a parameter and returns the raw PNG data.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\n```\n\n\n```python\nclass AnotherCircle(object):\n def __init__(self, radius=1, center=(0,0), color='r'):\n self.radius = radius\n self.center = center\n self.color = color\n \n def __repr__(self):\n return \"<%s Circle with r=%s at %s>\" % (\n self.color,\n self.radius,\n self.center,\n )\n \nc = AnotherCircle()\nc\n```\n\n\n```python\nfrom IPython.core.pylabtools import print_figure\n\ndef png_circle(circle):\n \"\"\"Render AnotherCircle to png data using matplotlib\"\"\"\n fig, ax = plt.subplots()\n patch = plt.Circle(circle.center,\n radius=circle.radius,\n fc=circle.color,\n )\n ax.add_patch(patch)\n plt.axis('scaled')\n data = print_figure(fig, 'png')\n # We MUST close the figure, otherwise IPython's display machinery\n # will pick it up and send it as output, resulting in a double display\n plt.close(fig)\n return data\n```\n\n\n```python\nc = AnotherCircle()\nprint(repr(png_circle(c)[:10]))\n```\n\nNow we register the display function for the type:\n\n\n```python\npng_f.for_type(AnotherCircle, png_circle)\n```\n\nNow all `Circle` instances have PNG representations!\n\n\n```python\nc2 = AnotherCircle(radius=2, center=(1,0), color='g')\nc2\n```\n\n\n```python\ndisplay_png(c2)\n```\n\n## return the object\n\n\n```python\n# for demonstration purpose, I do the same with a circle that has no _repr_javascript method\nclass MyNoJSCircle(MyCircle):\n \n def _repr_javascript_(self):\n return\n\ncNoJS = MyNoJSCircle()\n```\n\nOf course you can now still return the object, and this will use compute all the representations, store them in the notebook and show you the appropriate one.\n\n\n```python\ncNoJS\n```\n\nOr just use `display(object)` if you are in a middle of a loop\n\n\n```python\nfor i in range(3):\n display(cNoJS)\n```\n\nAdvantage of using `display()` versus `display_*()` is that all representation will be stored in the notebook document and notebook file, they are then availlable for other frontends or post-processing tool like `nbconvert`.\n\nLet's compare `display()` vs `display_html()` for our circle in the Notebook Web-app and we'll see 
later the difference in nbconvert.\n\n\n```python\nprint(\"I should see a nice html circle in web-app, but\")\nprint(\"nothing if the format I'm viewing the notebook in\")\nprint(\"does not support html\")\ndisplay_html(cNoJS)\n```\n\n\n```python\nprint(\"Whatever the format I will see a representation\")\nprint(\"of my circle\")\ndisplay(cNoJS)\n```\n\n\n```python\nprint(\"Same if I return the object\")\ncNoJS\n```\n\n\n```python\nprint(\"But not if I print it\")\nprint(cNoJS)\n```\n\n## Cleanup\n\n\n```python\n!rm -f data.csv\n```\n\n\n```python\n\n```\n\n```R\noptions(warn=-1)\n# load library\nlibrary(reshape2) # for melt and cast functions\nlibrary(ggplot2) # for plotting functions\nlibrary(tm) # text mining library\n#install.packages(\"SnowballC\")\nlibrary(SnowballC)\n```\n\n## Question 1\n\n### 1. Derive expectation and maximization steps of hard-EM algorithm for document clustering\n\n- N is the total number of documents, K is the number of clusters.\n- {d1....dn} are the documents, with corresponding latent variables {z1...zn} where zn:=(zn1,....,znk) is the cluster assignment vector for the nth document, znk = 1 if the document belongs to cluster k and zero otherwise. \n- Parameters: Phi k is the cluster proportion, and mu k is the word proportion. 
Where the sum of phi of all clusters equal to 1 and the sum of word proportion of all words in cluster k equal to 1.\n\\begin{equation}\n\\sum_{k=1}^{K} \\varphi_{k}=1\n\\end{equation}\n\nand \n\n\\begin{equation}\n\\sum_{w \\in \\mathcal{A}} \\mu_{k, w}=1\n\\end{equation}\n\nThen the probability of observed documents is given by:\n\\begin{equation}\n\\begin{aligned}\np\\left(d_{1}, \\ldots, d_{N}\\right)=\\prod_{n=1}^{N} p\\left(d_{n}\\right) &=\\prod_{n=1}^{N} \\sum_{k=1}^{K} p\\left(z_{n, k}=1, d_{n}\\right) \\\\\n&=\\prod_{n=1}^{N} \\sum_{k=1}^{K}\\left(\\varphi_{k} \\prod_{w \\in \\mathcal{A}} \\mu_{k, w}^{c\\left(w, d_{n}\\right)}\\right)\n\\end{aligned}\n\\end{equation}\n\n\nApply log to above, then the log-likelihood is:\n\n\\begin{equation}\n\\begin{aligned}\n\\ln p\\left(d_{1}, \\ldots, d_{N}\\right)=\\sum_{n=1}^{N} \\ln p\\left(d_{n}\\right) &=\\sum_{n=1}^{N} \\ln \\sum_{k=1}^{K} p\\left(z_{n, k}=1, d_{n}\\right) \\\\\n&=\\sum_{n=1}^{N} \\ln \\sum_{k=1}^{K}\\left(\\varphi_{k} \\prod_{w \\in \\mathcal{A}} \\mu_{k, w}^{c\\left(w, d_{n}\\right)}\\right)\n\\end{aligned}\n\\end{equation}\n\n\nTo maximise the Likelihood of incomplete Data, we use EM algorithm.\n\nFirst, as the parameters are unknown, we initialize the starting values of parameters \u03b8. These values will be called \u03b8old, and the unknown parameters we want to estimate will be \u03b8new. \n\n\\begin{equation}\n\\theta^{\\text {old }}=\\left(\\boldsymbol{\\varphi}^{\\text {old }}, \\boldsymbol{\\mu}_{1}^{\\text {old }}, \\ldots, \\boldsymbol{\\mu}_{K}^{\\text {old }}\\right)\n\\end{equation}\n\nDefine Q function:\n\n\\begin{equation}\n\\begin{aligned}\nQ\\left(\\boldsymbol{\\theta}, \\boldsymbol{\\theta}^{\\text {old }}\\right) &:=\\sum_{n=1}^{N} \\sum_{k=1}^{K} p\\left(z_{n, k}=1 \\mid d_{n}, \\boldsymbol{\\theta}^{\\text {old }}\\right) \\ln p\\left(z_{n, k}=1, d_{n} \\mid \\boldsymbol{\\theta}\\right) \\\\\n&=\\sum_{n=1}^{N} \\sum_{k=1}^{K} p\\left(z_{n, k}=1 \\mid d_{n}, \\boldsymbol{\\theta}^{\\text {old }}\\right)\\left(\\ln \\varphi_{k}+\\sum_{w \\in \\mathcal{A}} c\\left(w, d_{n}\\right) \\ln \\mu_{k, w}\\right) \\\\\n&=\\sum_{n=1}^{N} \\sum_{k=1}^{K} \\gamma\\left(z_{n, k}\\right)\\left(\\ln \\varphi_{k}+\\sum_{w \\in \\mathcal{A}} c\\left(w, d_{n}\\right) \\ln \\mu_{k, w}\\right)\n\\end{aligned}\n\\end{equation}\n\nwhere \n\n\\begin{equation}\n\\gamma\\left(z_{n, k}\\right) = p\\left(z_{n, k}=1 \\mid d_{n}, \\boldsymbol{\\theta}^{\\text {old }}\\right)\n\\end{equation}\n\nare the responsability factors\n\nE step: \n1. calculate \u03b3(znk) based on estimated parameters\n\\begin{equation}\n\\gamma\\left(z_{n k}\\right):=p\\left(z_{n k}=1 \\mid \\boldsymbol{d}_{n}, \\boldsymbol{\\theta}^{\\text {old }}\\right)\n\\end{equation}\n\n2. for each document, find the cluster with the maximum probability. \n\n\\begin{equation}\nZ^{*}=\\operatorname{argmax}_{z} \\gamma\\left(z_{n, k}\\right)=\\operatorname{argmax}_{z} p\\left(z_{n, k}=1 \\mid d_{n}, \\theta^{\\text {old }}\\right)\n\\end{equation}\n\nM step:\n\nFor hard EM, there is no expectation on the latent variables, so :\n\n\\begin{equation}\n\\mathcal{Q}\\left(\\theta, \\theta^{\\text {old }}\\right)=\\sum_{n=1}^{N} \\ln p\\left(z_{n, k=Z^{*}}=1, d_{n} \\mid \\theta\\right)\n\\end{equation}\n\nFind: \n\\begin{equation}\n\\operatorname{argmax}_{\\theta} \\sum_{n=1}^{N}\\left(\\ln \\varphi_{k=Z^{*}}+\\sum_{w \\in \\mathcal{A}} c\\left(w, d_{n}\\right) \\ln \\mu_{k=Z^{*}, w}\\right)\n\\end{equation}\n\n1. 
Sub the z* calculated into the partial derivatives below and recalculate the estimations of the parametors, update the parameters.\n\n\n\\begin{equation}\n\\varphi_{k}=\\frac{N_{k}}{N} \\text { where } N_{k}:=\\sum_{n=1}^{N} \\gamma\\left(z_{n, k}\\right)\n\\end{equation}\n\nand \n\n\\begin{equation}\n\\mu_{k, w}=\\frac{\\sum_{n=1}^{\\prime} \\gamma\\left(z_{n, k}\\right) c\\left(w, d_{n}\\right)}{\\sum_{w^{\\prime} \\in \\mathcal{A}} \\sum_{n=1}^{N} \\gamma\\left(z_{n, k}\\right) c\\left(w^{\\prime}, d_{n}\\right)}\n\\end{equation}\n\nUse \\begin{equation}\n\\boldsymbol{\\theta}^{\\text {old }} \\leftarrow \\boldsymbol{\\theta}^{\\text {new }}\n\\end{equation} and repeat until converge\n\n\n### 2. Implement the hard-EM and soft-EM\n\n\n```R\n# Initialize parameters (theta_old function)\ntheta <- function(size, K, seed = 123456){\n set.seed(seed) # set seed\n phi.hat <- matrix(1/K,nrow = K, ncol=1) # assume all clusters have the same size (we will update this later on)\n mu.hat <- matrix(runif(K*size),nrow = K, ncol = size) # initiate Mu \n mu.hat <- prop.table(mu.hat, margin = 1) # normalization to ensure that sum of each row is 1\n \n return (list(\"phi.hat\" = phi.hat, \"mu.hat\" = mu.hat))\n}\n\n# Helper Function \n# This function is needed to prevent numerical overflow/underflow when working with small numbers\nlogSum <- function(v) {\n m = max(v)\n return ( m + log(sum(exp(v-m))))\n}\n```\n\n\n```R\n# train objective function\ntrain_obj <- function(theta_old, wf) { \n N <- dim(wf)[2] # number of documents\n K <- dim(theta_old$mu.hat)[1] # number of cluster\n \n nloglike = 0\n for (n in 1:N){\n lprob <- matrix(0,ncol = 1, nrow=K) \n for (k in 1:K){\n lprob[k,1] = sum(wf[,n] * log(theta_old$mu.hat[k,])) \n }\n nloglike <- nloglike - logSum(lprob + log(theta_old$phi))\n }\n \n return (nloglike)\n}\n```\n\n\n```R\n# EM function for document clustering(hard & soft)\nEM.step <- function(wf, K = 4, max.epoch=10, soft = TRUE, seed){ \n \n # Parameters Setting\n N <- ncol(wf) # number of documents\n W <- nrow(wf) # number of words i.e. 
vocabulary size\n theta_old = theta(W, K, seed = seed) # initialize parameters\n gamma <- matrix(,nrow=N, ncol=K) # empty posterior matrix\n \n # check initial values\n print(train_obj(theta_old,wf))\n # EM-step\n for(epoch in 1:max.epoch){\n \n \n # E step: \n for (n in 1:N){\n for (k in 1:K){\n ## calculate the posterior based on the estimated mu and rho in the \"log space\"\n gamma[n,k] <- log(theta_old$phi.hat[k]) + sum(wf[,n] * log(theta_old$mu.hat[k,])) \n }\n # normalisation to sum to 1 in the log space\n logZ = logSum(gamma[n,])\n gamma[n,] = gamma[n,] - logZ\n }\n \n # converting back from the log space \n gamma <- exp(gamma)\n \n # for hard EM, we want the k with the highest probability to be 1\n if(soft == FALSE){\n # hard assignments:\n max.prob <- gamma==apply(gamma, 1, max) # for each point find the cluster with the maximum (estimated) probability\n gamma[max.prob] <- 1 # assign each point to the cluster with the highest probability\n gamma[!max.prob] <- 0 # remove points from clusters with lower probabilites\n }\n \n \n # M step:\n # we need this matrix (same shape as the ) here because when calculating mean, \n # it can result in zero which leads to log calculation result in NaN, to avoid this issue, add a small number to avoid 0\n eps = matrix(1e-10, nrow = W, ncol = K)\n for (k in 1:K){\n ## recalculate the estimations:\n theta_old$phi.hat[k] <- sum(gamma[,k])/N # the cluster size\n theta_old$mu.hat[k,] <- ((wf%*%gamma[,k])+eps[,k])/sum((wf%*%gamma[,k])+eps[,k]) # new means (cluster cenroids)\n }\n # evaluate and compare likelihood\n print(train_obj(theta_old,wf))\n }\n \n # keep the final parameters and gamma (posterior matrix)\n return(list(\"theta\"= theta_old,\"posterior\"=gamma))\n}\n```\n\n### 3. run soft-EM and hard-EM on provided data with K = 4\n\n\n```R\n## read the file (each line of the text file is one document)\ntext <- readLines('./Task2A.txt')\n\n## the terms before '\\t' are the lables (the newsgroup names) and all the remaining text after '\\t' are the actual documents\ndocs <- strsplit(text, '\\t')\nrm(text) # just free some memory!\n\n# store the labels for evaluation\nlabels <- unlist(lapply(docs, function(x) x[1]))\n\n# store the unlabeled texts \ndocs <- data.frame(unlist(lapply(docs, function(x) x[2]))) \n```\n\n\n```R\n# preprocessing\ndocs$doc_id <- rownames(docs)\ncolnames(docs) <- c(\"text\",\"doc_id\")\n\n# create a corpus\ndocs <- DataframeSource(docs)\ncorp <- Corpus(docs)\n\n# Preprocessing:\ncorp <- tm_map(corp, removeWords, stopwords(\"english\")) # remove stop words \n#(the most common word in a language that can be find in any document)\ncorp <- tm_map(corp, removePunctuation) # remove punctuation\ncorp <- tm_map(corp, stemDocument) # perform stemming (reducing inflected and derived words to their root form)\ncorp <- tm_map(corp, removeNumbers) # remove all numbers\ncorp <- tm_map(corp, stripWhitespace) # remove redundant spaces \n\n# Create a matrix which its rows are the documents and colomns are the words. 
\n# Each number in Document Term Matrix shows the frequency of a word (column header) in a particular document (row title)\ndtm <- DocumentTermMatrix(corp)\n# reduce the sparsity of our dtm\ndtm <- removeSparseTerms(dtm, 0.90)\n\n# store word frequency into a matrix\nwf <- t(as.matrix(dtm))\n```\n\n\n```R\n# run soft and hard EM\nEM_soft <- EM.step(wf, K=4, max.epoch=15,soft = TRUE, seed = 123456) \nEM_hard <- EM.step(wf, K=4, max.epoch=15,soft = FALSE, seed = 123456) \n```\n\n    [1] 459718.3\n    [1] 425849.8\n    [1] 422287.2\n    [1] 420268.8\n    [1] 419696.7\n    [1] 419511.3\n    [1] 419432.7\n    [1] 419403\n    [1] 419385.8\n    [1] 419303.3\n    [1] 419291.5\n    [1] 419283\n    [1] 419274.8\n    [1] 419248.1\n    [1] 419226.4\n    [1] 419218.6\n    [1] 459718.3\n    [1] 425717.5\n    [1] 422187.1\n    [1] 420212.3\n    [1] 419695.3\n    [1] 419536.1\n    [1] 419450\n    [1] 419417.2\n    [1] 419402\n    [1] 419394.1\n    [1] 419387.3\n    [1] 419383.3\n    [1] 419382.4\n    [1] 419382.5\n    [1] 419382.5\n    [1] 419382.5\n\n\n### 4. Perform PCA on clustering\n\n\n```R\n##--- Cluster Visualization -------------------------------------------------\ncluster.viz <- function(counts, color.vector, title=' '){\n  # PCA\n  p.comp <- prcomp(counts, scale. = TRUE, center = TRUE)\n  # visualize\n  plot(p.comp$x, col=color.vector, pch=1, main=title)\n}\n```\n\n\n```R\n# visualization settings\noptions(repr.plot.width=18, repr.plot.height=10)\npar(mfrow=c(1,2))\n\n# Get the color.vector(labels) \nlabel.soft <- apply(EM_soft$posterior, 1, which.max)\nlabel.hard <- apply(EM_hard$posterior, 1, which.max)\n\n# normalize the count matrix for better visualization\ncounts <- scale(wf)\ncounts[is.nan(counts)] <- 0\n\n# visualize the clusters estimated by soft and hard EM\ncluster.viz(t(counts), label.soft, 'Estimated Clusters (Soft EM) - normalized')\ncluster.viz(t(counts), label.hard, 'Estimated Clusters (Hard EM) - normalized')\ncluster.viz(t(wf), label.soft, 'Estimated Clusters (Soft EM)')\ncluster.viz(t(wf), label.hard, 'Estimated Clusters (Hard EM)')\n# visualize the real clusters\ncluster.viz(t(counts), factor(labels), 'Real Clusters - normalized')\ncluster.viz(t(wf), factor(labels), 'Real Clusters')\n```\n
"avg_line_length": 461.703030303, "max_line_length": 91694, "alphanum_fraction": 0.9262064469, "converted": true, "num_tokens": 3765, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4804786780479071, "lm_q2_score": 0.1968262036430985, "lm_q1q2_score": 0.09457079413162413}} {"text": "
\n\n
\u00a0\u00a0\u00a0\u00a0\n\n\n
[mlcourse.ai](https://mlcourse.ai) - Open Machine Learning Course
\n\n
\n\nAuteur: [Alexey Natekin](https://www.linkedin.com/in/natekin/), fondateur d\u2019OpenDataScience, Machine Learning Evangelist. Traduit et \u00e9dit\u00e9 par [Olga Daykhovskaya](https://www.linkedin.com/in/odaykhovskaya/), [Anastasia Manokhina](https://www.linkedin.com/in/anastasiamanokhina/), [Egor Polusmak](https://www.linkedin.com/in/egor-polusmak/), [Yuanyuan Pao](https://www.linkedin.com/in/yuanyuanpao/) et [Ousmane Ciss\u00e9](https://github.com/oussou-dev). Ce mat\u00e9riel est soumis aux termes et conditions de la licence [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). L'utilisation gratuite est autoris\u00e9e \u00e0 des fins non commerciales.\n\n
\n \n
Th\u00e8me 10. Gradient Boosting
\n\n\n\n\n\nJusqu'\u00e0 pr\u00e9sent, nous avons couvert 9 sujets allant de l'analyse exploratoire de donn\u00e9es \u00e0 l'analyse de s\u00e9ries chronologiques en Python:\n\n1. [Analyse exploratoire de donn\u00e9es avec Pandas](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-1-exploratory-data-analysis-with-pandas-de57880f1a68)\n2. [Visualisation de donn\u00e9es avec Python](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-2-visual-data-analysis-in-python-846b989675cd)\n3. [Classification, arbres-de-d\u00e9cision et k plus proches voisins](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-3-classification-decision-trees-and-k-nearest-neighbors-8613c6b6d2cd)\n4. [Classification lin\u00e9aire et r\u00e9gression](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-4-linear-classification-and-regression-44a41b9b5220)\n5. [Bagging et random forest](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-5-ensemble-of-algorithms-and-random-forest-8e05246cbba7)\n6. [Feature Engineering and Feature Selection](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-6-feature-engineering-and-feature-selection-8b94f870706a)\n7. [Apprentissage non supervis\u00e9 : (ACP) Analyse en Composantes Principales et Clustering](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-7-unsupervised-learning-pca-and-clustering -db7879568417)\n8. [Vowpal Wabbit : Learning with Gigabytes of Data](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-8-vowpal-wabbit-fast-learning-with-gigabytes-of-data-60f750086237)\n9. [Analyse des s\u00e9ries temporelles en Python](https://medium.com/open-machine-learning-course/open-machine-learning-course-topic-9-time-series-analysis-in-python-a270cb05e0b3)\n\nAujourd'hui, nous allons examiner l'un des algorithmes d'apprentissage automatique les plus populaires et les plus pratiques : le Gradient Boosting.\n\n
Sommaire de l'article
Nous vous recommandons de lire cet article dans l'ordre décrit ci-dessous, mais n'hésitez pas à parcourir les différentes sections :

1. Introduction et histoire du Boosting
2. L'algorithme GBM
3. Fonctions de perte
4. Conclusion
5. Mission de démonstration
6. Ressources utiles
# 1. Introduction et histoire du Boosting

Presque tout le monde dans le domaine de l'apprentissage automatique a entendu parler du Gradient Boosting. De nombreux data scientists incluent cet algorithme dans leur boîte à outils en raison des bons résultats obtenus sur tout problème (inconnu) donné.

De plus, XGBoost est souvent la recette standard pour [gagner](https://github.com/dmlc/xgboost/blob/master/demo/README.md#usecases) les compétitions de [ML](http://blog.kaggle.com/tag/xgboost/). Il est si populaire que l'idée de superposer des XGBoosts est devenue un mème. De plus, le boosting est un composant important dans [de nombreux systèmes de recommandation](https://en.wikipedia.org/wiki/Learning_to_rank#Practical_usage_by_search_engines) ; parfois, il est même considéré comme une [marque](https://yandex.com/company/technologies/matrixnet/). Penchons-nous sur l'histoire et le développement du boosting.

Le boosting est né de [la question :](http://www.cis.upenn.edu/~mkearns/papers/boostnote.pdf) est-il possible d'obtenir un modèle fort à partir d'un grand nombre de modèles relativement faibles et simples ? Par « modèles faibles », nous n'entendons pas de simples modèles de base tels que les arbres de décision, mais des modèles aux performances médiocres, où médiocre signifie à peine meilleur que le hasard.

[Une réponse mathématique positive](http://www.cs.princeton.edu/~schapire/papers/strengthofweak.pdf) a été identifiée, mais il a fallu quelques années pour développer des algorithmes pleinement fonctionnels basés sur cette solution, par exemple AdaBoost. Ces algorithmes adoptent une approche dite « gourmande » : ils construisent d'abord une combinaison linéaire de modèles simples (algorithmes de base) en repondérant les données d'entrée. Ensuite, le modèle suivant (généralement un arbre de décision) est construit sur les objets précédemment mal prédits, auxquels des pondérations plus importantes ont été attribuées.

De nombreux cours d'apprentissage automatique étudient AdaBoost, l'ancêtre du GBM (Gradient Boosting Machine). Cependant, depuis la mise en relation d'AdaBoost avec le GBM, il est devenu évident qu'AdaBoost n'est qu'une variante particulière de GBM.

L'algorithme lui-même a une interprétation visuelle très claire et une intuition permettant de définir des poids. Jetons un coup d'œil au problème jouet de classification suivant, dans lequel nous allons scinder les données avec des arbres de profondeur 1 (également appelés « stumps ») à chaque itération d'AdaBoost. Pour les deux premières itérations, nous avons l'image suivante :

La taille du point correspond à son poids, attribué à une prédiction incorrecte. À chaque itération, nous pouvons constater que ces poids augmentent : les « stumps » ne peuvent pas faire face à ce problème. 
Cependant, si nous votons (de mani\u00e8re pond\u00e9r\u00e9e) pour les \"stumps\", nous obtiendrons les bonnes classifications :\n\n\n\nPseudocode:\n- Initialiser les poids des \u00e9chantillons $\\Large w_i^{(0)} = \\frac{1}{l}, i = 1, \\dots, l$.\n- pour tout $t = 1, \\dots, T$\n\u00a0\u00a0\u00a0\u00a0* Entra\u00eenez l'algo $\\Large b_t$, laissez $\\epsilon_t$ \u00eatre l\u2019erreur d\u2019entra\u00eenement.\n * $\\Large \\alpha_t = \\frac{1}{2}ln\\frac{1 - \\epsilon_t}{\\epsilon_t}$.\n\u00a0\u00a0\u00a0\u00a0* Mettez \u00e0 jour les poids des \u00e9chantillons : $\\Large w_i^{(t)} = w_i^{(t-1)} e^{-\\alpha_t y_i b_t(x_i)}, i = 1, \\dots, l$.\n * Normaliser les poids des \u00e9chantillons: $\\Large w_0^{(t)} = \\sum_{j = 1}^k w_j^{(t)}, w_i^{(t)} = \\frac{w_i^{(t)}}{w_0^{(t)}}, i = 1, \\dots, l$.\n- Retourne $\\sum_t^{T}\\alpha_tb_t$\n\n\n[Voici](https://www.youtube.com/watch?v=k4G2VCuOMMg) un exemple plus d\u00e9taill\u00e9 d'AdaBoost o\u00f9, en it\u00e9rant, nous pouvons voir que les poids augmentent, en particulier \u00e0 la fronti\u00e8re entre les classes.\n\nAdaBoost fonctionne bien, mais [le manque](https://www.cs.princeton.edu/courses/archive/spring07/cos424/papers/boosting-survey.pdf) d'explication de la raison de la r\u00e9ussite de l'algorithme a sem\u00e9 le doute. Certains consid\u00e9raient cela comme un super-algorithme, une solution miracle, mais d'autres \u00e9taient sceptiques et pensaient qu'AdaBoost \u00e9tait juste trop bien sur-appris\n\nLe probl\u00e8me de sur-apprentissage existait bel et bien, surtout lorsque les donn\u00e9es pr\u00e9sentaient de fortes valeurs aberrantes. Par cons\u00e9quent, dans ce type de probl\u00e8me, AdaBoost \u00e9tait instable. Heureusement, quelques professeurs du d\u00e9partement de statistique de Stanford, qui avaient cr\u00e9\u00e9 Lasso, Elastic Net et Random Forest, ont commenc\u00e9 \u00e0 \u00e9tudier l'algorithme. En 1999, Jerome Friedman a propos\u00e9 la g\u00e9n\u00e9ralisation du d\u00e9veloppement d\u2019algorithmes de Boosting - Gradient Boosting (Machine), \u00e9galement connu sous le nom de GBM. Avec ce travail, Friedman a mis en place la base statistique de nombreux algorithmes fournissant l'approche g\u00e9n\u00e9rale du Boosting pour l'optimisation dans l'espace fonctionnel.\n\nCART, bootstrap, et beaucoup d'autres algorithmes sont issus du d\u00e9partement des statistiques de Stanford. Ce faisant, leurs noms seront introduits dans les prochains manuels. Ces algorithmes sont tr\u00e8s pratiques et certains travaux r\u00e9cents ne sont pas encore largement adopt\u00e9s. Par exemple, consultez [glinternet](https://arxiv.org/abs/1308.2719).\n\nPas beaucoup d'enregistrements vid\u00e9o de Friedman sont disponibles. Bien qu'il y ait une tr\u00e8s int\u00e9ressante [interview](https://www.youtube.com/watch?v=8hupHmBVvb0) avec lui sur la cr\u00e9ation de CART et sur la fa\u00e7on dont ils ont r\u00e9solu les probl\u00e8mes de statistiques (similaires \u00e0 la data analysis et \u00e0 la data science aujourd'hui) il y a plus de 40 ans.\n\nIl existe \u00e9galement une excellente [conf\u00e9rence](https://www.youtube.com/watch?v=zBk3PK3g-Fc) de Hastie, une r\u00e9trospective sur l'analyse de donn\u00e9es fournie par l'un des cr\u00e9ateurs des m\u00e9thodes que nous utilisons tous les jours.\n\nEn g\u00e9n\u00e9ral, la recherche en ing\u00e9nierie et en algorithmes est pass\u00e9e d'une approche \u00e0 part enti\u00e8re \u00e0 la construction et \u00e0 l'\u00e9tude d'algorithmes. 
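À titre purement illustratif (ce code ne figure pas dans l'article original et le jeu de données ci-dessous est arbitraire), le pseudocode précédent peut s'esquisser en quelques lignes de Python, avec des « stumps » de scikit-learn comme algorithmes de base :

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=20):
    """Esquisse d'AdaBoost ; y doit être codé en {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # poids initiaux w_i = 1/l
    stumps, alphas = [], []
    for t in range(T):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        eps = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)  # erreur pondérée
        alpha = 0.5 * np.log((1 - eps) / eps)
        w = w * np.exp(-alpha * y * pred)        # mise à jour des poids
        w = w / w.sum()                          # normalisation
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(X, stumps, alphas):
    return np.sign(sum(a * s.predict(X) for s, a in zip(stumps, alphas)))

# petit exemple d'utilisation sur des données jouets (arbitraires)
rng = np.random.RandomState(0)
X = rng.randn(300, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
stumps, alphas = adaboost_fit(X, y)
print("précision d'entraînement :", np.mean(adaboost_predict(X, stumps, alphas) == y))
```

On y retrouve exactement les étapes du pseudocode : erreur pondérée $\epsilon_t$, coefficient $\alpha_t$, repondération puis normalisation des observations.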
D'un point de vue math\u00e9matique, il ne s'agit pas d'un gros changement : nous ajoutons (ou renfor\u00e7ons) des algorithmes faibles et \u00e9largissons notre ensemble avec des am\u00e9liorations progressives pour les parties des donn\u00e9es o\u00f9 le mod\u00e8le \u00e9tait inexact. Mais, cette fois, le prochain mod\u00e8le simple ne repose pas uniquement sur des objets repond\u00e9r\u00e9s, mais am\u00e9liore son approximation du gradient de la fonction objective globale. Ce concept ouvre grandement nos algorithmes \u00e0 l'imagination et aux extensions.\n\n\n\n## Histoire de la GBM\n\nPlus de 10 ans apr\u00e8s l\u2019introduction de la GBM, celle-ci est devenue un \u00e9l\u00e9ment essentiel de la bo\u00eete \u00e0 outils de la science des donn\u00e9es.\nGBM a \u00e9t\u00e9 \u00e9tendu pour s\u2019appliquer \u00e0 diff\u00e9rents probl\u00e8mes statistiques : GLMboost et GAMboost pour renforcer les mod\u00e8les GAM existants, CoxBoost pour les courbes de survie et RankBoost et LambdaMART pour le classement.\nDe nombreuses r\u00e9alisations de GBM sont \u00e9galement apparues sous diff\u00e9rents noms et sur diff\u00e9rentes plateformes : GBM stochastique, GBDT (arbres de d\u00e9cision boost\u00e9s par gradient), GBRT (arbres de r\u00e9gression boost\u00e9s par gradient), MART (arbres de r\u00e9gression additive multiples), etc. En outre, la communaut\u00e9 du ML (Machine Learning) \u00e9tait tr\u00e8s segment\u00e9e et dissoci\u00e9e, ce qui rendait difficile de savoir \u00e0 quel point la stimulation \u00e9tait devenue g\u00e9n\u00e9ralis\u00e9e.\n\nDans le m\u00eame temps, le boosting avait \u00e9t\u00e9 activement utilis\u00e9 dans le classement des recherches. Ce probl\u00e8me a \u00e9t\u00e9 r\u00e9\u00e9crit en termes de fonction de perte qui p\u00e9nalise les erreurs dans l'ordre de sortie. Il est donc devenu pratique de l'ins\u00e9rer simplement dans la GBM. AltaVista a \u00e9t\u00e9 l\u2019une des premi\u00e8res entreprises \u00e0 introduire le renforcement du classement. Bient\u00f4t, les id\u00e9es se r\u00e9pandent dans Yahoo, Yandex, Bing, etc. Une fois que cela s'est produit, le boosting est devenu l'un des principaux algorithmes utilis\u00e9s non seulement dans la recherche, mais \u00e9galement dans les technologies de base de l'industrie.\n\nLes comp\u00e9titions ML, en particulier Kaggle, ont jou\u00e9 un r\u00f4le majeur dans la popularisation du Boosting. \u00c0 pr\u00e9sent, les chercheurs disposaient d\u2019une plate-forme commune leur permettant de faire face \u00e0 diff\u00e9rents probl\u00e8mes de science des donn\u00e9es avec un grand nombre de participants du monde entier. Avec Kaggle, il est possible de tester de nouveaux algorithmes sur les donn\u00e9es r\u00e9elles, ce qui permet aux algorithmes de \"briller\", et de fournir des informations compl\u00e8tes sur le partage des r\u00e9sultats de performance de mod\u00e8le entre jeux de donn\u00e9es de comp\u00e9tition. C\u2019est exactement ce qui est arriv\u00e9 avec le Boosting quand il a \u00e9t\u00e9 utilis\u00e9 chez [Kaggle](http://blog.kaggle.com/2011/12/21/score-xavier-conort-on-coming-second-in-give-me-some-credit/) (voir les entretiens avec les gagnants de Kaggle \u00e0 partir de 2011 qui ont principalement utilis\u00e9 le boosting). La biblioth\u00e8que [XGBoost](https://github.com/dmlc/xgboost) a rapidement gagn\u00e9 en popularit\u00e9 apr\u00e8s son apparition. XGBoost n'est pas un nouvel algorithme unique. 
il s'agit simplement d'une r\u00e9alisation extr\u00eamement efficace de la GBM classique avec des heuristiques suppl\u00e9mentaires.\n\nCet algorithme a suivi le chemin tr\u00e8s typique des algorithmes ML aujourd'hui : probl\u00e8me math\u00e9matique et savoir-faire algorithmique pour des applications pratiques r\u00e9ussies et une adoption en masse des ann\u00e9es apr\u00e8s sa premi\u00e8re apparition.\n\n# 2. L'algorithme GBM\n### Probl\u00e8me\n\nNous allons r\u00e9soudre le probl\u00e8me de l'approximation des fonctions dans un contexte d'apprentissage supervis\u00e9 g\u00e9n\u00e9ral. Nous avons un ensemble de fonctionnalit\u00e9s $ \\large x $ et les variables cibles $\\large y, \\large \\left\\{ (x_i, y_i) \\right\\}_{i=1, \\ldots,n}$ que nous utilisons pour restaurer la d\u00e9pendance $\\large y = f(x) $. Nous restaurons la d\u00e9pendance en approximant $ \\large \\hat{f}(x) $ et en comprenant quelle approximation est la meilleure lorsque nous utilisons la fonction de perte $ \\large L(y,f) $, que nous voulons minimiser: $ \\large y \\approx \\hat{f}(x), \\large \\hat{f}(x) = \\underset{f(x)}{\\arg\\min} \\ L(y,f(x)) $.\n\n\n\nPour le moment, nous ne faisons aucune hypoth\u00e8se concernant le type de d\u00e9pendance $ \\large f(x) $, le mod\u00e8le de notre approximation $ \\large \\hat{f}(x) $ ou la distribution de la variable cible $ \\large y $. Nous nous attendons seulement \u00e0 ce que la fonction $ \\large L(y,f) $ soit diff\u00e9rentiable. Notre approche est tr\u00e8s g\u00e9n\u00e9rale : nous d\u00e9finissons $ \\large \\hat {f}(x) $ en minimisant la perte :\n$$ \\large \\hat{f}(x) = \\underset{f(x)}{\\arg\\min} \\ \\mathbb {E} _{x,y}[L(y,f(x))] $$\n\nMalheureusement, le nombre de fonctions $ \\large f(x) $ n'est pas simplement grand, mais son espace fonctionnel est infiniment dimensionnel. C'est pourquoi il est acceptable pour nous de limiter l'espace de recherche par une famille de fonctions $ \\large f(x, \\theta), \\theta \\in \\mathbb{R}^d $. Cela simplifie beaucoup l'objectif car nous avons maintenant une optimisation solvable des valeurs de param\u00e8tres:\n$ \\large \\hat{f}(x) = f(x, \\hat{\\theta}),$\n$$\\large \\hat{\\theta} = \\underset{\\theta}{\\arg\\min} \\ \\mathbb {E} _{x,y}[L(y,f(x,\\theta))] $$\n\nDes solutions analytiques simples pour trouver les param\u00e8tres optimaux $ \\large \\hat{\\theta} $ n'existant souvent pas, les param\u00e8tres sont g\u00e9n\u00e9ralement approxim\u00e9s de mani\u00e8re it\u00e9rative. Pour commencer, nous \u00e9crivons la fonction de perte empirique $ \\large L_{\\theta}(\\hat{\\theta}) $ qui nous permettra d\u2019\u00e9valuer nos param\u00e8tres en utilisant nos donn\u00e9es. De plus, \u00e9crivons notre approximation $ \\large \\hat{\\theta} $ pour un certain nombre d'it\u00e9rations $ \\large M $ sous forme de somme:\n$ \\large \\hat{\\theta} = \\sum_{i = 1}^M \\hat{\\theta_i}, \\\\\n\\large L_{\\theta}(\\hat{\\theta}) = \\sum_{i = 1}^N L(y_i,f(x_i, \\hat{\\theta}))$\n\nEnsuite, il ne reste plus qu'\u00e0 trouver un algorithme it\u00e9ratif appropri\u00e9 pour minimiser $\\large L_{\\theta}(\\hat{\\theta})$. La descente de gradient est l'option la plus simple et la plus fr\u00e9quemment utilis\u00e9e. Nous d\u00e9finissons le gradient comme \u00e9tant $\\large \\nabla L_{\\theta}(\\hat{\\theta})$ et y ajoutons nos \u00e9valuations it\u00e9ratives $\\large \\hat{\\theta_i}$ (puisque nous minimisons la perte, nous ajoutons le signe moins). 
Notre derni\u00e8re \u00e9tape consiste \u00e0 initialiser notre premi\u00e8re approximation $\\large \\hat{\\theta_0}$ et \u00e0 choisir le nombre d'it\u00e9rations $\\large M$. Passons en revue les \u00e9tapes de cet algorithme inefficace et na\u00eff pour approximer $\\large \\hat{\\theta}$:\n\n1. D\u00e9finir l'approximation initiale des param\u00e8tres $\\large \\hat{\\theta} = \\hat{\\theta_0}$\n2. Pour chaque it\u00e9ration $\\large t = 1, \\dots, M$, r\u00e9p\u00e9tez les \u00e9tapes 3 \u00e0 7:\n3. Calculer le gradient de la fonction de perte $\\large \\nabla L_{\\theta}(\\hat{\\theta})$ pour l'approximation courante $\\large \\hat{\\theta}$\n$\\large \\nabla L_{\\theta}(\\hat{\\theta}) = \\left[\\frac{\\partial L(y, f(x, \\theta))}{\\partial \\theta}\\right]_{\\theta = \\hat{\\theta}}$\n4. D\u00e9finir l'approximation it\u00e9rative actuelle $\\large \\hat{\\theta_t}$ en fonction du gradient calcul\u00e9\n$\\large \\hat{\\theta_t} \\leftarrow \u2212\\nabla L_{\\theta}(\\hat{\\theta})$\n5. Mettre \u00e0 jour l'approximation des param\u00e8tres $\\large \\hat{\\theta}$:\n$\\large \\hat{\\theta} \\leftarrow \\hat{\\theta} + \\hat{\\theta_t} = \\sum_{i = 0}^t \\hat{\\theta_i} $\n6. Enregistrer le r\u00e9sultat de l'approximation $\\large \\hat{\\theta}$:\n$\\large \\hat{\\theta} = \\sum_{i = 0}^M \\hat{\\theta_i} $\n7. Utiliser la fonction trouv\u00e9e $\\large \\hat{f}(x) = f(x, \\hat{\\theta})$\n\n\n\n### Descente de gradient fonctionnel\n\nImaginons une seconde que nous puissions am\u00e9liorer l'optimisation dans l'espace des fonctions et rechercher de mani\u00e8re it\u00e9rative les approximations $\\large \\hat{f}(x)$ en tant que fonctions elles-m\u00eames. Nous allons exprimer notre approximation comme une somme d'am\u00e9liorations incr\u00e9mentales, chacune \u00e9tant une fonction. Pour plus de commodit\u00e9, nous commencerons imm\u00e9diatement par la somme de l'approximation initiale $\\large \\hat{f_0}(x)$:\n$$\\large \\hat{f}(x) = \\sum_{i = 0}^M \\hat{f_i}(x)$$\n\nRien n'est encore arriv\u00e9. nous avons seulement d\u00e9cid\u00e9 de chercher notre approximation $\\large \\hat{f}(x)$ non pas comme un grand mod\u00e8le avec beaucoup de param\u00e8tres (par exemple, un r\u00e9seau de neurones), mais comme une somme de fonctions pr\u00e9tendant que nous nous d\u00e9pla\u00e7ons dans un espace fonctionnel.\n\nAfin d'accomplir cette t\u00e2che, nous devons limiter notre recherche \u00e0 une famille de fonctions $\\large \\hat{f}(x) = h(x, \\theta)$. Il y a quelques probl\u00e8mes ici - tout d\u2019abord, la somme des mod\u00e8les peut \u00eatre plus compliqu\u00e9e que n\u2019importe quel mod\u00e8le de cette famille; Deuxi\u00e8mement, l'objectif g\u00e9n\u00e9ral est toujours dans l'espace fonctionnel. Notons qu'\u00e0 chaque \u00e9tape, nous devrons choisir un coefficient optimal $\\large \\rho \\in \\mathbb{R}$. Pour l'\u00e9tape $\\large t$, le probl\u00e8me est le suivant:\n$$\\large \\hat{f}(x) = \\sum_{i = 0}^{t-1} \\hat{f_i}(x), \\\\\n\\large (\\rho_t,\\theta_t) = \\underset{\\rho,\\theta}{\\arg\\min} \\ \\mathbb {E} _{x,y}[L(y,\\hat{f}(x) + \\rho \\cdot h(x, \\theta))], \\\\\n\\large \\hat{f_t}(x) = \\rho_t \\cdot h(x, \\theta_t)$$\n\nC'est ici que la magie op\u00e8re. Nous avons d\u00e9fini tous nos objectifs en termes g\u00e9n\u00e9raux, comme si nous pouvions former tout type de mod\u00e8le $\\large h(x, \\theta)$ pour n\u2019importe quel type de fonction de perte $\\large L(y, f(x, \\theta))$. 
En pratique, c'est extr\u00eamement difficile, mais heureusement, il existe un moyen simple de r\u00e9soudre ce probl\u00e8me.\n\nConnaissant l'expression du gradient de la fonction de perte, nous pouvons calculer sa valeur sur nos donn\u00e9es. Donc, entra\u00eenons les mod\u00e8les de telle sorte que nos pr\u00e9dictions soient davantage corr\u00e9l\u00e9es \u00e0 ce gradient (avec un signe moins). En d'autres termes, nous allons utiliser les moindres carr\u00e9s pour corriger les pr\u00e9dictions avec ces r\u00e9sidus. Pour les t\u00e2ches de classification, de r\u00e9gression et de classement, nous minimiserons la diff\u00e9rence quadratique entre les pseudo-r\u00e9sidus $\\large r$ et nos pr\u00e9dictions. Pour l'\u00e9tape $\\large t$, le probl\u00e8me final se pr\u00e9sente comme suit:\n$$ \\large \\hat{f}(x) = \\sum_{i = 0}^{t-1} \\hat{f_i}(x), \\\\\n\\large r_{it} = -\\left[\\frac{\\partial L(y_i, f(x_i))}{\\partial f(x_i)}\\right]_{f(x)=\\hat{f}(x)}, \\quad \\mbox{for } i=1,\\ldots,n ,\\\\\n\\large \\theta_t = \\underset{\\theta}{\\arg\\min} \\ \\sum_{i = 1}^{n} (r_{it} - h(x_i, \\theta))^2, \\\\\n\\large \\rho_t = \\underset{\\rho}{\\arg\\min} \\ \\sum_{i = 1}^{n} L(y_i, \\hat{f}(x_i) + \\rho \\cdot h(x_i, \\theta_t))$$\n\n\n\n###\u00a0L'algorithme GBM classique de Friedman\n\nNous pouvons maintenant d\u00e9finir l\u2019algorithme GBM classique propos\u00e9 par Jerome Friedman en 1999. C\u2019est un algorithme supervis\u00e9 qui a les composants suivants:\n\n- ensemble de donn\u00e9es $\\large \\left\\{ (x_i, y_i) \\right\\}_{i=1, \\ldots,n}$;\n- nombre d'it\u00e9rations $\\large M$;\n- choix de la fonction de perte $\\large L(y, f)$ avec un gradient d\u00e9fini;\n- choix de la famille de fonctions des algorithmes de base $\\large h(x, \\theta)$ avec la proc\u00e9dure d'apprentissage;\n- hyperparam\u00e8tres suppl\u00e9mentaires $\\large h(x, \\theta)$ (par exemple, dans les arbres de d\u00e9cision, la profondeur de l\u2019arbre);\n\nLa seule chose qui reste est l'approximation initiale $\\large f_0(x)$. Pour simplifier, pour une approximation initiale, une valeur constante $\\large \\gamma$ est utilis\u00e9e. La valeur constante, ainsi que le coefficient optimal $\\large \\rho $, sont identifi\u00e9s via une recherche binaire ou un autre algorithme de recherche de ligne sur la fonction de perte initiale (et non un gradient). Nous avons donc notre algorithme GBM d\u00e9crit comme suit:\n\n1. Initialiser GBM avec une valeur constante $\\large \\hat{f}(x) = \\hat{f}_0, \\hat{f}_0 = \\gamma, \\gamma \\in \\mathbb{R}$\n$\\large \\hat{f}_0 = \\underset{\\gamma}{\\arg\\min} \\ \\sum_{i = 1}^{n} L(y_i, \\gamma)$\n2. Pour chaque it\u00e9ration $\\large t = 1, \\dots, M$, r\u00e9p\u00e9ter :\n3. Calculer les pseudo-r\u00e9sidus $\\large r_t$\n$\\large r_{it} = -\\left[\\frac{\\partial L(y_i, f(x_i))}{\\partial f(x_i)}\\right]_{f(x)=\\hat{f}(x)}, \\quad \\mbox{for } i=1,\\ldots,n$\n4. Construire le nouvel algorithme de base $\\large h_t(x)$ sous forme de r\u00e9gression sur les pseudo-r\u00e9sidus $\\large \\left\\{ (x_i, r_{it}) \\right\\}_{i=1, \\ldots,n}$\n5. Trouver le coefficient optimal $\\large \\rho_t $ \u00e0 $\\large h_t(x)$ concernant la fonction de perte initiale\n$\\large \\rho_t = \\underset{\\rho}{\\arg\\min} \\ \\sum_{i = 1}^{n} L(y_i, \\hat{f}(x_i) + \\rho \\cdot h(x_i, \\theta))$\n6. Enregistrer $\\large \\hat{f_t}(x) = \\rho_t \\cdot h_t(x)$\n7. 
Mettre à jour l'approximation courante $\large \hat{f}(x)$
$\large \hat{f}(x) \leftarrow \hat{f}(x) + \hat{f_t}(x) = \sum_{i = 0}^{t} \hat{f_i}(x)$
8. Composer le modèle final GBM $\large \hat{f}(x)$
$\large \hat{f}(x) = \sum_{i = 0}^M \hat{f_i}(x) $
9. Conquérir Kaggle et le reste du monde

### Exemple pas à pas : comment fonctionne la GBM

Voyons un exemple du fonctionnement de la GBM. Dans cet exemple jouet, nous allons restaurer une fonction bruitée $\large y = cos(x) + \epsilon, \epsilon \sim \mathcal{N}(0, \frac{1}{5}), x \in [-5,5]$.

Il s'agit d'un problème de régression avec une cible à valeur réelle. Nous allons donc choisir d'utiliser la fonction de perte d'erreur quadratique moyenne. Nous allons générer 300 paires d'observations et les approximer avec des arbres de décision de profondeur 2. Réunissons tout ce dont nous avons besoin pour utiliser GBM :
- Jeu de données $\large \left\{ (x_i, y_i) \right\}_{i=1, \ldots,300}$ ✓
- Nombre d'itérations $\large M = 3$ ✓;
- La fonction de perte d'erreur quadratique moyenne $\large L(y, f) = (y-f)^2$ ✓
- Le gradient de la perte $\large L_2$ n'est autre que le résidu $\large r = (y - f)$ ✓;
- Arbre de décision en tant qu'algorithme de base $\large h(x)$ ✓;
- Hyperparamètres des arbres de décision : la profondeur des arbres est égale à 2 ✓;

Pour l'erreur quadratique moyenne, l'initialisation $\large \gamma$ et les coefficients $\large \rho_t$ sont simples. Nous initialiserons GBM avec la valeur moyenne $\large \gamma = \frac{1}{n} \cdot \sum_{i = 1}^n y_i$ et définirons tous les coefficients $\large \rho_t$ à 1.

Nous allons lancer GBM et dessiner deux types de graphes : l'approximation courante $\large \hat{f}(x)$ (graphe bleu) et chaque arbre $\large \hat{f_t}(x)$ construit sur ses pseudo-résidus (graphe vert). Le numéro du graphique correspond au numéro d'itération :

À la deuxième itération, nos arbres ont retrouvé la forme de base de la fonction. Cependant, à la première itération, nous voyons que l'algorithme n'a construit que la « branche gauche » de la fonction ($\large x \in [-5, -4]$). Cela était dû au fait que nos arbres n'avaient tout simplement pas assez de profondeur pour construire une branche symétrique en une seule fois, et qu'ils se concentraient sur la branche gauche, qui présentait l'erreur la plus grande. Par conséquent, la branche de droite n'est apparue qu'après la deuxième itération.

Le reste du processus se déroule comme prévu : à chaque itération, nos pseudo-résidus ont diminué et GBM a approché de mieux en mieux la fonction d'origine. Cependant, par construction, les arbres ne peuvent pas approcher exactement une fonction continue, ce qui signifie que GBM n'est pas idéal dans cet exemple. Pour jouer avec les approximations de fonctions de GBM, vous pouvez utiliser la démo interactive impressionnante de ce blog, intitulée [Brilliantly wrong](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html).
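À titre indicatif (le code exact de l'article n'est pas reproduit ici ; la graine aléatoire et l'écart-type du bruit sont des choix arbitraires), ces trois itérations peuvent s'écrire ainsi avec des arbres de scikit-learn :

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)                   # graine arbitraire
x = np.linspace(-5, 5, 300).reshape(-1, 1)
y = np.cos(x).ravel() + rng.normal(0, 0.2, 300)  # fonction bruitée y = cos(x) + eps

M = 3
f_hat = np.full_like(y, y.mean())                # initialisation : gamma = moyenne de y
trees = []
for t in range(M):
    residuals = y - f_hat                        # pseudo-résidus de la perte quadratique
    tree = DecisionTreeRegressor(max_depth=2).fit(x, residuals)
    f_hat += tree.predict(x)                     # rho_t = 1 pour la perte quadratique
    trees.append(tree)
    print(f"itération {t + 1} : MSE = {np.mean((y - f_hat) ** 2):.3f}")
```

Sur les données d'entraînement, la MSE diminue d'une itération à l'autre, conformément au comportement décrit ci-dessus.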
# 3. Fonctions de perte

Si nous voulons résoudre un problème de classification au lieu d'une régression, qu'est-ce qui changerait ? Il suffit de choisir une fonction de perte appropriée, $\large L(y, f)$. C'est le point le plus important : il détermine exactement comment nous allons optimiser et quelles caractéristiques nous pouvons attendre du modèle final.

En règle générale, nous n'avons pas besoin de l'inventer nous-mêmes : les chercheurs l'ont déjà fait pour nous. Aujourd'hui, nous allons explorer les fonctions de perte pour les deux objectifs les plus courants : la régression $\large y \in \mathbb{R}$ et la classification binaire $\large y \in \left\{-1, 1\right\}$.

### Fonctions de perte liées à la régression

Commençons par un problème de régression pour $\large y \in \mathbb{R}$. Afin de choisir la fonction de perte appropriée, nous devons déterminer quelles propriétés de la distribution conditionnelle $\large (y|x)$ nous souhaitons restaurer. Les options les plus courantes sont :

- $\large L(y, f) = (y - f)^2$ a.k.a. perte $\large L_2$ ou perte gaussienne. C'est la moyenne conditionnelle classique, qui est le cas le plus simple et le plus courant. Si nous n'avons pas d'informations supplémentaires ou d'exigences de robustesse pour le modèle, nous pouvons utiliser la perte gaussienne.
- $\large L(y, f) = |y - f|$ a.k.a. perte $\large L_1$ ou perte laplacienne. Au premier abord, cette fonction ne semble pas différentiable, mais elle définit en réalité la médiane conditionnelle. Comme nous le savons, la médiane est robuste aux valeurs aberrantes, raison pour laquelle cette fonction de perte est meilleure dans certains cas. La pénalité pour les grands écarts n'est pas aussi lourde qu'avec $\large L_2$.
- $ \large \begin{equation} L(y, f) =\left\{ \begin{array}{@{}ll@{}} (1 - \alpha) \cdot |y - f|, & \text{if}\ y-f \leq 0 \\ \alpha \cdot |y - f|, & \text{if}\ y-f >0 \end{array}\right. \end{equation}, \alpha \in (0,1)$ a.k.a. perte $\large L_q$ ou perte quantile. Au lieu de la médiane, elle utilise des quantiles. Par exemple, $\large \alpha = 0.75$ correspond au quantile à 75 %. Nous pouvons voir que cette fonction est asymétrique et pénalise davantage les observations qui se trouvent du côté droit du quantile défini.
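Pour fixer les idées, ces fonctions de perte et leurs gradients par rapport à $f$ s'écrivent en quelques lignes (esquisse indicative, absente de l'article original) :

```python
import numpy as np

def l2_loss(y, f):
    return (y - f) ** 2

def l1_loss(y, f):
    return np.abs(y - f)

def quantile_loss(y, f, alpha=0.75):
    diff = y - f
    return np.where(diff > 0, alpha * diff, (1 - alpha) * (-diff))

# Gradients par rapport à f ; les pseudo-résidus en sont l'opposé
# (à un facteur 2 près pour la perte L2).
def l2_grad(y, f):
    return 2.0 * (f - y)

def quantile_grad(y, f, alpha=0.75):
    return np.where(y > f, -alpha, 1.0 - alpha)
```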
Utilisons la fonction de perte $\large L_q$ sur nos données. L'objectif est de restaurer le quantile conditionnel à 75 % du cosinus. Mettons tout en place pour GBM :
- Jeu de données $\large \left\{ (x_i, y_i) \right\}_{i=1, \ldots,300}$ ✓
- Un nombre d'itérations $\large M = 3$ ✓;
- Fonction de perte pour les quantiles $ \large \begin{equation} L_{0.75}(y, f) =\left\{ \begin{array}{@{}ll@{}} 0.25 \cdot |y - f|, & \text{if}\ y-f \leq 0 \\ 0.75 \cdot |y - f|, & \text{if}\ y-f >0 \end{array}\right. \end{equation} $ ✓;
- Gradient de $\large L_{0.75}(y, f)$ : fonction pondérée par $\large \alpha = 0.75$. Nous allons entraîner un modèle basé sur des arbres sur ces pseudo-résidus :
$\large r_{i} = -\left[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\right]_{f(x)=\hat{f}(x)} = \alpha I(y_i > \hat{f}(x_i) ) - (1 - \alpha)I(y_i \leq \hat{f}(x_i) ), \quad \mbox{for } i=1,\ldots,300$ ✓;
- Arbre de décision en tant qu'algorithme de base $\large h(x)$ ✓;
- Hyperparamètre des arbres : profondeur = 2 ✓;

Pour notre approximation initiale, nous prendrons le quantile nécessaire de $\large y$. Cependant, nous ne savons rien des coefficients optimaux $\large \rho_t$, nous allons donc utiliser la recherche linéaire (line search) standard. Les résultats sont les suivants :

Nous pouvons observer qu'à chaque itération, $\large r_{i} $ ne prend que deux valeurs possibles, mais GBM est tout de même capable de restaurer notre fonction initiale.

Les résultats globaux de GBM avec la fonction de perte quantile sont les mêmes que ceux obtenus avec la fonction de perte quadratique, décalés d'environ $\large 0.135$. Mais si nous utilisions le quantile à 90 %, nous n'aurions pas assez de données, car les classes seraient déséquilibrées. Nous devons nous en souvenir lorsque nous traitons des problèmes non standard.

*Quelques mots sur les fonctions de perte de régression*

Pour les tâches de régression, de nombreuses fonctions de perte ont été développées, certaines avec des propriétés supplémentaires. Par exemple, elles peuvent être robustes, comme la [fonction de perte de Huber](https://en.wikipedia.org/wiki/Huber_loss) : pour un petit nombre de valeurs aberrantes, la fonction de perte se comporte comme $\large L_2$, mais au-delà d'un seuil défini, elle devient $\large L_1$. Cela permet de réduire l'effet des valeurs aberrantes et de se concentrer sur l'image globale.

Nous pouvons illustrer cela avec l'exemple suivant. Les données sont générées à partir de la fonction $\large y = \frac{sin(x)}{x}$ avec ajout de bruit, un mélange de distributions normales et de distributions de Bernoulli. Nous montrons les fonctions sur les graphes A-D et le GBM correspondant sur les graphes F-H (le graphe E représente la fonction initiale) :

[Taille originale](https://habrastorage.org/web/130/05b/222/13005b222e8a4eb68c3936216c05e276.jpg).

Dans cet exemple, nous avons utilisé des splines comme algorithme de base. Vous voyez, il ne faut pas toujours que ce soit des arbres pour le Boosting ?

Nous pouvons clairement voir la différence entre les fonctions $\large L_2$, $\large L_1$ et la perte de Huber. Si nous choisissons des paramètres optimaux pour la perte de Huber, nous pouvons obtenir la meilleure approximation possible parmi toutes nos options. La différence est également visible dans les quantiles à 10 %, 50 % et 90 %.

Malheureusement, la fonction de perte de Huber n'est prise en charge que par très peu de bibliothèques / packages populaires ; h2o la supporte, mais pas XGBoost.
Elle est pertinente pour d'autres objets plus exotiques comme les [expectiles conditionnels](https://www.slideshare.net/charthur/quantile-and-expectile-regression), mais cela reste une connaissance intéressante.

### Fonctions de perte de classification

Maintenant, regardons le problème de classification binaire $\large y \in \left\{-1, 1\right\}$. Nous avons vu que GBM peut même optimiser des fonctions de perte non différentiables. Techniquement, il serait possible de traiter ce problème avec la perte de régression $\large L_2$, mais ce ne serait pas correct.

La distribution de la variable cible exige que nous utilisions la log-vraisemblance, il nous faut donc des fonctions de perte différentes, exprimées en fonction des cibles multipliées par les prédictions : $\large y \cdot f$. Les choix les plus courants sont les suivants :

- $\large L(y, f) = log(1 + exp(-2yf))$ a.k.a. perte logistique ou perte de Bernoulli. Elle a une propriété intéressante : elle pénalise même les classes correctement prédites, ce qui aide non seulement à optimiser la perte, mais également à écarter davantage les classes, même si toutes sont déjà correctement prédites.
- $\large L(y, f) = exp(-yf)$ a.k.a. perte AdaBoost. Le classique AdaBoost est équivalent à GBM avec cette fonction de perte. Conceptuellement, cette fonction est très similaire à la perte logistique, mais elle pénalise plus fortement (exponentiellement) les erreurs de prédiction.

Générons un nouveau jeu de données pour notre problème de classification. En guise de base, nous prendrons notre cosinus « bruité » et nous utiliserons la fonction signe pour les classes de la variable cible. Nos données ressemblent à ceci (un « jitter » est ajouté pour plus de clarté) :

Nous utiliserons la perte logistique pour voir ce que nous optimisons réellement. Donc, encore une fois, nous mettons en place ce que nous allons utiliser pour GBM :
- Jeu de données $\large \left\{ (x_i, y_i) \right\}_{i=1, \ldots,300}, y_i \in \left\{-1, 1\right\}$ ✓
- Nombre d'itérations $\large M = 3$ ✓;
- Perte logistique en tant que fonction de perte, dont le gradient est calculé de la manière suivante :
$\large r_{i} = \frac{2 \cdot y_i}{1 + exp(2 \cdot y_i \cdot \hat{f}(x_i)) }, \quad \mbox{for } i=1,\ldots,300$ ✓;
- Arbre de décision en tant qu'algorithme de base $\large h(x)$ ✓;
- Hyperparamètres des arbres de décision : la profondeur de l'arbre est égale à 2 ✓;

Cette fois, l'initialisation de l'algorithme est un peu plus difficile. Premièrement, nos classes sont déséquilibrées (63 % contre 37 %). Deuxièmement, il n'existe pas de formule analytique connue pour l'initialisation avec notre fonction de perte, nous devons donc rechercher $\large \hat{f_0} = \gamma$ par une recherche numérique :

Notre approximation initiale optimale est d'environ -0,273. Vous auriez pu deviner qu'elle serait négative, car il est plus rentable de tout prédire comme la classe la plus fréquente, mais il n'existe pas de formule pour la valeur exacte.
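Voici, à titre d'illustration, une esquisse minimale d'une telle recherche numérique avec scipy ; les données exactes de l'article ne sont pas reproduites ici, seule la proportion approximative des classes (63 % / 37 %) est reprise :

```python
import numpy as np
from scipy.optimize import minimize_scalar

# vecteur de classes jouet : ~63 % de -1 et ~37 % de +1 (proportions approximatives du texte)
y = np.array([-1] * 63 + [1] * 37, dtype=float)

def logistic_loss(gamma):
    """Perte logistique totale pour une prédiction constante f(x) = gamma."""
    return np.sum(np.log1p(np.exp(-2.0 * y * gamma)))

res = minimize_scalar(logistic_loss, bounds=(-3, 3), method="bounded")
print(f"gamma optimal ≈ {res.x:.3f}")   # proche de -0.27 pour ces proportions
```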
Maintenant, commen\u00e7ons enfin par GBM, et regardons ce qui se passe r\u00e9ellement sous le capot: \n\n\n\nL'algorithme a restaur\u00e9 avec succ\u00e8s la s\u00e9paration entre nos classes. Vous pouvez voir comment les zones \"inf\u00e9rieures\" se s\u00e9parent car les arbres sont plus confiants dans la pr\u00e9diction correcte de la classe n\u00e9gative et comment se forment les deux \u00e9tapes des classes mixtes. Il est clair que nous avons beaucoup d'observations correctement class\u00e9es et un certain nombre d'observations avec des erreurs importantes qui sont apparues en raison du bruit dans les donn\u00e9es.\n\n### Poids\n\nParfois, nous voulons une fonction de perte plus sp\u00e9cifique pour notre probl\u00e8me. Par exemple, dans les s\u00e9ries chronologiques financi\u00e8res, il se peut que nous souhaitions accorder plus de poids aux mouvements importants dans la s\u00e9rie chronologique; pour la pr\u00e9vision du taux de d\u00e9sabonnement, il est plus utile de pr\u00e9dire le d\u00e9sabonnement des clients ayant un LTV \u00e9lev\u00e9 (ou une valeur \u00e0 vie: combien d'argent un client rapportera-t-il \u00e0 l'avenir ?). \n\n\n\nLe guerrier statistique inventerait sa propre fonction de perte, en \u00e9crivant le gradient (pour un entra\u00eenement plus efficace, incluez le Hessian) et v\u00e9rifierait avec soin si cette fonction remplissait les propri\u00e9t\u00e9s requises. Cependant, il y a de fortes chances que quelqu'un commette une erreur quelque part, se heurte \u00e0 des difficult\u00e9s de calcul et consacre une quantit\u00e9 excessive de temps \u00e0 la recherche.\n\nAu lieu de cela, un instrument tr\u00e8s simple a \u00e9t\u00e9 invent\u00e9 (ce dont on se souvient rarement dans la pratique) : peser des observations et attribuer des fonctions de pond\u00e9ration. L'exemple le plus simple d'une telle pond\u00e9ration est la d\u00e9finition de pond\u00e9rations pour la balance de classes. En g\u00e9n\u00e9ral, si nous savons qu'un sous-ensemble de donn\u00e9es, tant dans les variables d'entr\u00e9e $\\large x$ que dans la variable cible $\\large y$, a une plus grande importance pour notre mod\u00e8le, nous leur attribuons simplement une pond\u00e9ration plus importante, $\\large w(x,y)$. L\u2019objectif principal est de satisfaire aux exigences g\u00e9n\u00e9rales en mati\u00e8re de poids: \n$$ \\large w_i \\in \\mathbb{R}, \\\\\n\\large w_i \\geq 0 \\quad \\mbox{for } i=1,\\ldots,n, \\\\\n\\large \\sum_{i = 1}^n w_i > 0 $$\n\nLes poids peuvent r\u00e9duire consid\u00e9rablement le temps pass\u00e9 \u00e0 ajuster la fonction de perte \u00e0 la t\u00e2che que nous r\u00e9solvons et encourager les exp\u00e9riences sur les propri\u00e9t\u00e9s des mod\u00e8les cibles. L'attribution de ces poids est enti\u00e8rement fonction de la cr\u00e9ativit\u00e9. Nous ajoutons simplement des poids scalaires:\n$$ \\large L_{w}(y,f) = w \\cdot L(y,f), \\\\\n\\large r_{it} = - w_i \\cdot \\left[\\frac{\\partial L(y_i, f(x_i))}{\\partial f(x_i)}\\right]_{f(x)=\\hat{f}(x)}, \\quad \\mbox{for } i=1,\\ldots,n$$\n\nIl est clair que, pour les poids arbitraires, nous ne connaissons pas les propri\u00e9t\u00e9s statistiques de notre mod\u00e8le. Lier les poids aux valeurs $\\large y$ peut souvent \u00eatre trop compliqu\u00e9. 
Par exemple, l'utilisation de poids proportionnels \u00e0 $\\large |y|$ dans la fonction de perte $\\large L_1$ n'est pas \u00e9quivalente \u00e0 la perte de $\\large L_2$ car le gradient ne prendra pas en compte les valeurs des pr\u00e9dictions elles-m\u00eames : $\\large \\hat{f}(x)$.\n\nNous mentionnons tout cela afin de mieux comprendre nos possibilit\u00e9s. Cr\u00e9ons des poids tr\u00e8s exotiques pour notre jeu de donn\u00e9es. Nous allons d\u00e9finir une fonction de poids fortement asym\u00e9trique comme suit :\n$$ \\large \\begin{equation} w(x) =\\left\\{ \\begin{array}{@{}ll@{}} 0.1, & \\text{if}\\ x \\leq 0 \\\\ 0.1 + |cos(x)|, & \\text{if}\\ x >0 \\end{array}\\right. \\end{equation} $$\n\n\n\nAvec ces poids, nous nous attendons \u00e0 obtenir deux propri\u00e9t\u00e9s : moins de d\u00e9tails pour les valeurs n\u00e9gatives de $\\large x$ et la forme de la fonction, similaire au cosinus initial. Nous reprenons les r\u00e9glages des autres GBM de notre exemple pr\u00e9c\u00e9dent avec une classification incluant la recherche par ligne pour les coefficients optimaux. Regardons ce que nous avons :\n\n\n\nNous avons atteint le r\u00e9sultat escompt\u00e9. Premi\u00e8rement, nous pouvons voir \u00e0 quel point les pseudo-r\u00e9sidus diff\u00e8rent fortement; \u00e0 l'it\u00e9ration initiale, ils ressemblent presque au cosinus d'origine. Deuxi\u00e8mement, la partie gauche du graphique de la fonction \u00e9tait souvent ignor\u00e9e au profit de la droite, qui avait des poids plus importants. Troisi\u00e8mement, la fonction que nous avons obtenue \u00e0 la troisi\u00e8me it\u00e9ration a re\u00e7u suffisamment d\u2019attention et a commenc\u00e9 \u00e0 ressembler au cosinus original (elle a \u00e9galement l\u00e9g\u00e8rement sur-appris)\n\nLes poids sont un outil puissant mais risqu\u00e9 que nous pouvons utiliser pour contr\u00f4ler les propri\u00e9t\u00e9s de notre mod\u00e8le. Si vous souhaitez optimiser votre fonction de perte, essayez d'abord de r\u00e9soudre un probl\u00e8me plus simple en ajoutant des pond\u00e9rations aux observations, \u00e0 votre discr\u00e9tion.\n\n# 4. Conclusion\n\nAujourd'hui, nous avons appris la th\u00e9orie derri\u00e8re le Gradient Boosting. GBM n'est pas seulement un algorithme sp\u00e9cifique, mais une m\u00e9thodologie commune pour la construction d'ensembles de mod\u00e8les. De plus, cette m\u00e9thodologie est suffisamment flexible et extensible - il est possible de former un grand nombre de mod\u00e8les en tenant compte de diff\u00e9rentes fonctions de perte avec une vari\u00e9t\u00e9 de fonctions de pond\u00e9ration.\n\nLa pratique et les comp\u00e9titions ML montrent que, dans les probl\u00e8mes classiques (\u00e0 l\u2019exception des images, de l\u2019audio et des donn\u00e9es tr\u00e8s \u00e9parses), la GBM est souvent l\u2019algorithme le plus efficace (pour ne pas mentionner les ensembles superpos\u00e9s et de haut niveau, o\u00f9 la GBM fait presque toujours partie int\u00e9grante). En outre, il existe de nombreuses adaptations de GBM [pour l'apprentissage par renforcement](https://arxiv.org/abs/1603.04119) (Minecraft, ICML 2016). En passant, l'algorithme Viola-Jones, qui est encore utilis\u00e9 en vision par ordinateur, est bas\u00e9 sur [AdaBoost](https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework#Learning_algorithm).\n\nDans cet article, nous avons volontairement omis de poser des questions sur la r\u00e9gularisation, la stochasticit\u00e9 et les hyper-param\u00e8tres de GBM. 
Ce n'est pas par hasard que nous avons utilis\u00e9 un petit nombre d'it\u00e9rations $\\large M = 3$. Si nous utilisions 30 arbres au lieu de 3 et formions le GBM comme d\u00e9crit, le r\u00e9sultat ne serait pas pr\u00e9visible :\n\n\n\n\n\n\n\n[D\u00e9mo interactive](http://arogozhnikov.github.io/2016/07/05/gradient_boosting_playground.html)\n\n# 5. Mission de d\u00e9monstration\nVotre t\u00e2che consiste \u00e0 battre les bases de r\u00e9f\u00e9rence dans le cadre de la comp\u00e9tition \"Flight delays\" de [Kaggle Inclass](https://www.kaggle.com/c/flight-delays-fall-2018). Vous recevez un [d\u00e9marreur CatBoost](https://www.kaggle.com/kashnitsky/mlcourse-ai-fall-2019-catboost-starter), l'astuce consiste \u00e0 proposer de bonnes caract\u00e9ristiques (features).\n\n# 6. Ressources utiles\n- Liste des cours [site](https://mlcourse.ai), [course repo](https://github.com/Yorko/mlcourse.ai), and YouTube [channel](https://www.youtube.com/watch?v=QKTuw4PNOsU&list=PLVlY_7IJCMJeRfZ68eVfEcu-UcN9BbwiX)\n- Course materials as a [Kaggle Dataset](https://www.kaggle.com/kashnitsky/mlcourse)\n- mlcourse.ai lectures on gradient boosting: [theory](https://youtu.be/g0ZOtzZqdqk) and [practice](https://youtu.be/V5158Oug4W8)\n- [Original article](https://statweb.stanford.edu/~jhf/ftp/trebst.pdf) about GBM from Jerome Friedman\n- \u201cGradient boosting machines, a tutorial\u201d, [paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3885826/) by Alexey Natekin, and Alois Knoll\n- [Chapter in Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf) from Hastie, Tibshirani, Friedman (page 337)\n- [Wiki](https://en.wikipedia.org/wiki/Gradient_boosting) article about Gradient Boosting\n- [Introduction to boosted trees (Xgboost docs)](https://xgboost.readthedocs.io/en/latest/tutorials/model.html)\n- [Video-lecture by Hastie](https://www.youtube.com/watch?v=wPqtzj5VZus) about GBM at h2o.ai conference\n- [CatBoost vs. Light GBM vs. 
XGBoost](https://towardsdatascience.com/catboost-vs-light-gbm-vs-xgboost-5f93620723db) on \"Towards Data Science\"\n- [Benchmarking and Optimization of\nGradient Boosting Decision Tree Algorithms](https://arxiv.org/abs/1809.04559), [XGBoost: Scalable GPU Accelerated Learning](https://arxiv.org/abs/1806.11248) - benchmarking CatBoost, Light GBM, and XGBoost (no 100% winner)\n", "meta": {"hexsha": "1c3155e06f78ea3a25926b7ff513edf7b08edcb1", "size": 52578, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "jupyter_french/topic10_boosting/topic10_gradient_boosting-fr_def.ipynb", "max_stars_repo_name": "salman394/AI-ml--course", "max_stars_repo_head_hexsha": "2ed3a1382614dd00184e5179026623714ccc9e8c", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "jupyter_french/topic10_boosting/topic10_gradient_boosting-fr_def.ipynb", "max_issues_repo_name": "salman394/AI-ml--course", "max_issues_repo_head_hexsha": "2ed3a1382614dd00184e5179026623714ccc9e8c", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "jupyter_french/topic10_boosting/topic10_gradient_boosting-fr_def.ipynb", "max_forks_repo_name": "salman394/AI-ml--course", "max_forks_repo_head_hexsha": "2ed3a1382614dd00184e5179026623714ccc9e8c", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 97.0073800738, "max_line_length": 2282, "alphanum_fraction": 0.6978013618, "converted": true, "num_tokens": 12763, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.41869690935568665, "lm_q2_score": 0.2254166210386804, "lm_q1q2_score": 0.09438124254629754}} {"text": "\n\n\n\n## Data-driven Design and Analyses of Structures and Materials (3dasm)\n\n## Lecture 1\n\n### Miguel A. Bessa | M.A.Bessa@tudelft.nl | Associate Professor\n\n## Introduction\n\n**What:** A lecture of the \"3dasm\" course\n\n**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)\n\n**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)\n\n**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.\n* If working offline: Go through this notebook and read the book.\n* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.\n* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.\n\n**Optional reference (the \"bible\" by the \"bishop\"... pun intended \ud83d\ude06) :** Bishop, Christopher M. *Pattern recognition and machine learning*. 
Springer Verlag, 2006.\n\n**References/resources to create this notebook:**\n* [Figure (Car stopping distance)](https://korkortonline.se/en/theory/reaction-braking-stopping/)\n* Snippets of code from this awesome [repo](https://github.com/gerdm/prml) by Gerardo Duran-Martin that replicates many figures in Bishop's book\n\nApologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.\n\n## **OPTION 1**. Run this notebook **locally in your computer**:\n1. Install miniconda3 [here](https://docs.conda.io/en/latest/miniconda.html)\n2. Open a command window and create a virtual environment called \"3dasm\":\n```\nconda create -n 3dasm python=3 numpy scipy jupyter nb_conda matplotlib pandas scikit-learn rise tensorflow -c conda-forge\n```\n3. Install [git](https://github.com/git-guides/install-git), open command window & clone the repository to your computer:\n```\ngit clone https://github.com/bessagroup/3dasm_course\n```\n4. Load jupyter notebook by typing in (anaconda) command window (it will open in your internet browser):\n```\nconda activate 3dasm\njupyter notebook\n```\n5. Open notebook (3dasm_course/Lectures/Lecture1/3dasm_Lecture1.ipynb)\n\n**Short note:** My personal environment also has other packages that help me while teaching.\n\n> conda install -n 3dasm -c conda-forge jupyter_contrib_nbextensions hide_code\n\nThen in the 3dasm conda environment:\n\n> jupyter nbextension install --py hide_code --sys-prefix\n>\n> jupyter nbextension enable --py hide_code\n>\n> jupyter serverextension enable --py hide_code\n>\n> jupyter nbextension enable splitcell/splitcell\n\n## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):\n\n1. go to https://colab.research.google.com\n2. login\n3. File > Open notebook\n4. click on Github (no need to login or authorize anything)\n5. paste the git link: https://github.com/bessagroup/3dasm_course\n6. click search and then click on the notebook (*3dasm_course/Lectures/Lecture1/3dasm_Lecture1.ipynb*)\n\n\n```python\n# Basic plotting tools needed in Python.\n\nimport matplotlib.pyplot as plt # import plotting tools to create figures\nimport numpy as np # import numpy to handle a lot of things!\n\n%config InlineBackend.figure_format = \"retina\" # render higher resolution images in the notebook\nplt.style.use(\"seaborn\") # style for plotting that comes from seaborn\nplt.rcParams[\"figure.figsize\"] = (8,4) # rescale figure size appropriately for slides\n```\n\n## Outline for today\n\n* Introduction\n - Taking a probabilistic perspective on machine learning\n* Basics of univariate statistics\n - Continuous random variables\n - Probabilities vs probability densities\n - Moments of a probability distribution\n* The mindblowing Bayes' rule\n - The rule that spawns almost every ML model (even when we don't realize it)\n\n**Reading material**: This notebook + Chapter 2 until Section 2.3\n\n## Get hyped about Artificial Intelligence...\n\n\n```python\nfrom IPython.display import display, YouTubeVideo, HTML\nYouTubeVideo('RNnZwvklwa8', width=512, height=288) # show that slides are interactive:\n # rescale video to 768x432 and back to 512x288\n```\n\n\n\n\n\n\n\n\n\n\n**Well...** This class *might* not make you break the world (yet!). 
Let's focus on the fundamentals:\n\n* Probabilistic perspective on machine learning\n* Supervised learning (especially regression)\n\n## Machine learning (ML)\n\n* **ML definition**: A computer program that learns from experience $E$ wrt tasks $T$ such that the performance $P$ at those tasks improves with experience $E$.\n\n* We'll treat ML from a **probabilistic perspective**:\n - Treat all unknown quantities as **random variables**\n \n* What are random variables?\n - Variables endowed with probability distributions!\n\n## The car stopping distance problem\n\n\n\n

\nCar stopping distance ${\\color{red}y}$ as a function of its velocity ${\\color{green}x}$ before it starts braking:\n\n${\\color{red}y} = {\\color{blue}z} x + \\frac{1}{2\\mu g} {\\color{green}x}^2 = {\\color{blue}z} x + 0.1 {\\color{green}x}^2$\n\n- ${\\color{blue}z}$ is the driver's reaction time (in seconds)\n- $\\mu$ is the road/tires coefficient of friction (assume $\\mu=0.5$)\n- $g$ is the acceleration of gravity (assume $g=10$ m/s$^2$).\n\n## The car stopping distance problem\n\n### How to obtain this formula?\n\n$y = d_r + d_{b}$\n\nwhere $d_r$ is the reaction distance, and $d_b$ is the braking distance.\n\n### Reaction distance $d_r$\n\n$d_r = z x$\n\nwith $z$ being the driver's reaction time, and $x$ being the velocity of the car at the start of braking.\n\n## The car stopping distance problem\n\n### Braking distance $d_b$\n\nKinetic energy of moving car:\n\n$E = \\frac{1}{2}m x^2$       where $m$ is the car mass.\n\nWork done by braking:\n\n$W = \\mu m g d_b$       where $\\mu$ is the coefficient of friction between the road and the tire, $g$ is the acceleration of gravity, and $d_b$ is the car braking distance.\n\nThe braking distance follows from $E=W$:\n\n$d_b = \\frac{1}{2\\mu g}x^2$\n\nTherefore, if we add the reacting distance $d_r$ to the braking distance $d_b$ we get the stopping distance $y$:\n\n$$y = d_r + d_b = z x + \\frac{1}{2\\mu g} x^2$$\n\n## The car stopping distance problem\n\n\n\n$y = {\\color{blue}z} x + 0.1 x^2$\n\nThe driver's reaction time ${\\color{blue}z}$ is a **random variable (rv)**\n\n* Every driver has its own reaction time $z$\n\n* Assume the distribution associated to $z$ is Gaussian with **mean** $\\mu_z=1.5$ seconds and **variance** $\\sigma_z^2=0.5$ seconds$^2$\n\n$$\nz \\sim \\mathcal{N}(\\mu_z=1.5,\\sigma_z^2=0.5^2)\n$$\n\nwhere $\\sim$ means \"sampled from\", and $\\mathcal{N}$ indicates a Gaussian **probability density function (pdf)**\n\n## Univariate Gaussian pdf \n\nThe gaussian pdf is defined as:\n\n$$\n \\mathcal{N}(z | \\mu_z, \\sigma_z^2) = \\frac{1}{\\sqrt{2\\pi\\sigma_z^2}}e^{-\\frac{1}{2\\sigma_z^2}(z - \\mu_z)^2}\n$$\n\nAlternatively, we can write it using the **precision** term $\\lambda_z := 1 / \\sigma_z^2$ instead of using $\\sigma_z^2$:\n\n$$\n \\mathcal{N}(z | \\mu_z, \\lambda_z^{-1}) = \\frac{\\lambda_z^{1/2}}{\\sqrt{2\\pi}}e^{-\\frac{\\lambda_z}{2}(z - \\mu_z)^2}\n$$\n\nAnyway, recall how this pdf looks like...\n\n\n```python\ndef norm_pdf(z, mu_z, sigma_z2): return 1 / np.sqrt(2 * np.pi * sigma_z2) * np.exp(-(z - mu_z)**2 / (2 * sigma_z2))\nzrange = np.linspace(-8, 4, 200) # create a list of 200 z points between z=-8 and z=4\nfig, ax = plt.subplots() # create a plot\nax.plot(zrange, norm_pdf(zrange, 0, 1), label=r\"$\\mu_z=0; \\ \\sigma_z^2=1$\") # plot norm_pdf(z|0,1)\nax.plot(zrange, norm_pdf(zrange, 1.5, 0.5**2), label=r\"$\\mu_z=1.5; \\ \\sigma_z^2=0.5^2$\") # plot norm_pdf(z|1.5,0.5^2)\nax.plot(zrange, norm_pdf(zrange, -1, 2**2), label=r\"$\\mu_z=-1; \\ \\sigma_z^2=2^2$\") # plot norm_pdf(z|-1,2^2)\nax.set_xlabel(\"z\", fontsize=20) # create x-axis label with font size 20\nax.set_ylabel(\"probability density\", fontsize=20) # create y-axis label with font size 20\nax.legend(fontsize=15) # create legend with font size 15\nax.set_title(\"Three different Gaussian pdfs\", fontsize=20); # create title with font size 20\n```\n\nThe green curve shows the Gaussian pdf of the rv $z$ **conditioned** on the mean $\\mu_z=1.5$ and variance $\\sigma_z^2=0.5^2$ for the car stopping distance problem.\n\n## Univariate Gaussian pdf 
\n\n$$\n p(z) = \\mathcal{N}(z | \\mu_z, \\sigma_z^2) = \\frac{1}{\\sqrt{2\\pi\\sigma_z^2}}e^{-\\frac{1}{2\\sigma_z^2}(z - \\mu_z)^2}\n$$\n\nThe output of this expression is the **PROBABILITY DENSITY** of $z$ **given** (or conditioned to) a particular $\\mu_z$ and $\\sigma_z^2$.\n\n* **Important**: Probability Density $\\neq$ Probability\n\nSo, what is a probability?\n\n## Probability\n\nThe probability of an event $A$ is denoted by $\\text{Pr}(A)$.\n\n* $\\text{Pr}(A)$ means the probability with which we believe event A is true\n\n* An event $A$ is a binary variable saying whether or not some state of the world holds.\n\nProbability is defined such that: $0 \\leq \\text{Pr}(A) \\leq 1$\n\nwhere $\\text{Pr}(A)=1$ if the event will definitely happen and $\\text{Pr}(A)=0$ if it definitely will not happen.\n\n## Joint probability\n\n**Joint probability** of two events: $\\text{Pr}(A \\wedge B)= \\text{Pr}(A, B)$\n\nIf $A$ and $B$ are **independent**: $\\text{Pr}(A, B)= \\text{Pr}(A) \\text{Pr}(B)$\n\nFor example, suppose $z_1$ and $z_2$ are chosen uniformly at random from the set $\\mathcal{Z} = \\{1, 2, 3, 4\\}$.\n\nLet $A$ be the event that $z_1 \\in \\{1, 2\\}$ and $B$ be the event that **another** rv denoted as $z_2 \\in \\{3\\}$.\n\nThen we have: $\\text{Pr}(A, B) = \\text{Pr}(A) \\text{Pr}(B) = \\frac{1}{2} \\cdot \\frac{1}{4}$.\n\n## Probability of a union of two events\n\nProbability of event $A$ or $B$ happening is: $\\text{Pr}(A \\vee B)= \\text{Pr}(A) + \\text{Pr}(B) - \\text{Pr}(A \\wedge B)$\n\nIf these events are mutually exclusive (they can't happen at the same time):\n\n$$\n\\text{Pr}(A \\vee B)= \\text{Pr}(A) + \\text{Pr}(B)\n$$\n\nFor example, suppose an rv denoted as $z_1$ is chosen uniformly at random from the set $\\mathcal{Z} = \\{1, 2, 3, 4\\}$.\n\nLet $A$ be the event that $z_1 \\in \\{1, 2\\}$ and $B$ be the event that the **same** rv $z_1 \\in \\{3\\}$.\n\nThen we have $\\text{Pr}(A \\vee B) = \\frac{2}{4} + \\frac{1}{4}$.\n\n## Conditional probability of one event given another\n\nWe define the **conditional probability** of event $B$ happening given that $A$ has occurred as follows:\n\n$$\n\\text{Pr}(B | A)= \\frac{\\text{Pr}(A,B)}{\\text{Pr}(A)}\n$$\n\nThis is not defined if $\\text{Pr}(A) = 0$, since we cannot condition on an impossible event.\n\n## Conditional independence of one event given another\n\nWe say that event $A$ is conditionally independent of event $B$ if we have $\\text{Pr}(A | B)= \\text{Pr}(A)$\n\nThis implies $\\text{Pr}(B|A) = \\text{Pr}(B)$. Hence, the joint probability becomes $\\text{Pr}(A, B) = \\text{Pr}(A) \\text{Pr}(B)$\n\nThe book uses the notation $A \\perp B$ to denote this property.\n\n## Coming back to our car stopping distance problem\n\n\n\n$y = {\\color{blue}z} x + 0.1 x^2$\n\nwhere $z$ is a **continuous** rv such that $z \\sim \\mathcal{N}(\\mu_z=1.5,\\sigma_z^2=0.5^2)$.\n\n* What is the probability of an event $Z$ defined by a reaction time $z \\leq 0.52$ seconds?\n\n$$\n\\text{Pr}(Z)=\\text{Pr}(z \\leq 0.52)= P(z=0.52)\n$$\n\nwhere $P(z)$ denotes the **cumulative distribution function (cdf)**. Note that cdf is denoted with a capital $P$.\n\nLikewise, we can compute the probability of being in any interval as follows:\n\n$\\text{Pr}(a \\leq z \\leq b)= P(z=b)-P(z=a)$\n\n* But how do we compute the cdf at a particular value $b$, e.g. 
$P(z=b)$?\n\n## Cdf's result from pdf's\n\nA pdf $p(z)$ is defined as the derivative of the cdf $P(z)$:\n\n$$\np(z)=\\frac{d}{d z}P(z)\n$$\n\nSo, given a pdf $p(z)$, we can compute the following probabilities:\n\n$$\\text{Pr}(z \\leq b)=\\int_{-\\infty}^b p(z) dz = P(b)$$\n$$\\text{Pr}(z \\geq a)=\\int_a^{\\infty} p(z) dz = 1 - P(a)$$\n$$\\text{Pr}(a \\leq z \\leq b)=\\int_a^b p(z) dz = P(b) - P(a)$$\n\n**IMPORTANT**: $\\int_{-\\infty}^{\\infty} p(z) dz = 1$\n\n### Some notes about pdf's\n\nThe integration to unity is important!\n\n$$\\int_{-\\infty}^{\\infty} p(z) dz = 1$$\n\n**Remember:** the integral of a pdf leads to a probability, and probabilities cannot be larger than 1.\n\nFor example, from this property we can derive the following:\n\n$$\n\\int_{-\\infty}^{\\infty} p(z) dz = \\int_{-\\infty}^{a} p(z) dz + \\int_{a}^{\\infty} p(z) dz\n$$\n\n$$\n\\Rightarrow \\text{Pr}(z \\geq a)= 1 - \\text{Pr}(z \\leq a) = 1 - \\text{P}(a) = 1 - \\int_{-\\infty}^a p(z) dz\n$$\n\nIn some cases we will work with probability distributions that are **unnormalized**, so this comment is important!\n\n* Being unnormalized means that the probability density of the distribution does not integrate to 1.\n* In this case, we cannot call such function a pdf, even though its output is a probability density.\n\n## Cdf's result from pdf's\n\nKey point?\n\n* Given a pdf $p(z)$, we can compute the probability of a continuous rv $z$ being in a finite interval as follows:\n\n$$\n\\text{Pr}(a \\leq z \\leq b)=\\int_a^b p(z) dz = P(b) - P(a)\n$$\n\nAs the size of the interval gets smaller, we can write\n\n$$\n\\text{Pr}\\left(z - \\frac{dz}{2} \\leq z \\leq z + \\frac{dz}{2}\\right) \\approx p(z) dz\n$$\n\nIntuitively, this says the probability of $z$ being in a small interval around $z$ is the density at $z$ times\nthe width of the interval.\n\n\n```python\nfrom scipy.stats import norm # import from scipy.stats the normal distribution\n\nzrange = np.linspace(-3, 3, 100) # 100 values for plot\nfig_std_norm, (ax1, ax2) = plt.subplots(1, 2) # create a plot with 2 subplots side-by-side\nax1.plot(zrange, norm.cdf(zrange, 0, 1), label=r\"$\\mu_z=0; \\ \\sigma_z=1$\") # plot cdf of standard normal\nax1.set_xlabel(\"z\", fontsize=20)\nax1.set_ylabel(\"probability\", fontsize=20)\nax1.legend(fontsize=15)\nax1.set_title(\"Standard Gaussian cdf\", fontsize=20)\n\nax2.plot(zrange, norm.pdf(zrange, 0, 1), label=r\"$\\mu_z=0; \\ \\sigma_z=1$\") # plot pdf of standard normal\nax2.set_xlabel(\"z\", fontsize=20)\nax2.set_ylabel(\"probability density\", fontsize=20)\nax2.legend(fontsize=15)\nax2.set_title(\"Standard Gaussian pdf\", fontsize=20)\nfig_std_norm.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)\n```\n\n## Note about scipy.stats\n\n[scipy](https://docs.scipy.org/doc/scipy/index.html) is an open-source software for mathematics, science, and engineering. It's brilliant and widely used for many things!\n\n**In particular**, [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html) is a simple module within scipy that has statistical functions and operations that are very useful. This way, we don't need to code all the functions ourselves. 
That's why we are using it to plot the cdf and pdf of the Gaussian distribution from now on, and we will use it for other things later.\n\n* In case you are interested, scipy.stats has a nice [tutorial](https://docs.scipy.org/doc/scipy/tutorial/stats.html)\n\n## Coming back to our car stopping distance problem\n\n\n\n$y = {\\color{blue}z} x + 0.1 x^2$\n\nwhere $z$ is a continuous rv such that $p(z)= \\mathcal{N}(z | \\mu_z=1.5,\\sigma_z^2=0.5^2)$.\n\n* What is the probability of an event $Z$ defined by a reaction time $z \\leq 0.52$ seconds?\n\n$$\n\\text{Pr}(Z) = \\text{Pr}(z \\leq 0.52) = P(z=0.52) = \\int_{-\\infty}^{0.52} p(z) dz\n$$\n\n\n```python\nPr_Z = norm.cdf(0.52, 1.5, 0.5) # using scipy norm.cdf(z=0.52 | mu_z=1.5, sigma_z=0.5)\n\nprint(\"The probability of event Z is: Pr(Z) = \",round(Pr_Z,3))\n```\n\n The probability of event Z is: Pr(Z) = 0.025\n\n\n\n```python\nz_value = 0.52 # z = 0.52 seconds\nzrange = np.linspace(0, 3, 200) # 200 values for plot\nfig_car_norm, (ax1, ax2) = plt.subplots(1, 2) # create subplot (two figures in 1)\nax1.plot(zrange, norm.cdf(zrange, 1.5, 0.5), label=r\"$\\mu_z=1.5; \\ \\sigma_z=0.5$\") # Figure 1 is cdf\nax1.plot(z_value, norm.cdf(z_value, 1.5, 0.5), 'r*',markersize=15, linewidth=2,\n label=u'$P(z=0.52~|~\\mu_z=1.5, \\sigma_z^2=0.5^2)$')\nax1.set_xlabel(\"z\", fontsize=20)\nax1.set_ylabel(\"probability\", fontsize=20)\nax1.legend(fontsize=15)\nax1.set_title(\"Gaussian cdf of $z$ for car problem\", fontsize=20)\nax2.plot(zrange, norm.pdf(zrange, 1.5, 0.5), label=r\"$\\mu_z=1.5; \\ \\sigma_z=0.5$\") # figure 2 is pdf\nax2.plot(z_value, norm.pdf(z_value, 1.5, 0.5), 'r*', markersize=15, linewidth=2,\n label=u'$p(z=0.52~|~\\mu_z=1.5, \\sigma_z^2=0.5^2)$')\nax2.set_xlabel(\"z\", fontsize=20)\nax2.set_ylabel(\"probability density\", fontsize=20)\nax2.legend(fontsize=15)\nax2.set_title(\"Gaussian pdf of $z$ for car problem\", fontsize=20)\nfig_car_norm.set_size_inches(25, 5) # scale figure to be wider (since there are 2 subplots)\n```\n\n### Why is the Gaussian distribution so widely used?\n\nSeveral reasons:\n\n1. It has two parameters which are easy to interpret, and which capture some of the most basic properties of a distribution, namely its mean and variance.\n2. The central limit theorem (Sec. 2.8.6 of the book) tells us that sums of independent random variables have an approximately Gaussian distribution, making it a good choice for modeling residual errors or \u201cnoise\u201d.\n3. The Gaussian distribution makes the least number of assumptions (has maximum entropy), subject to the constraint of having a specified mean and variance (Sec. 3.4.4 of the book); this makes it a good default choice in many cases.\n4. It has a simple mathematical form, which results in easy to implement, but often highly effective, methods.\n\n## Car stopping distance problem\n\n\n\n$y = {\\color{blue}z} x + 0.1 x^2$\n\nwhere $z$ is a continuous rv such that $z \\sim \\mathcal{N}(\\mu_z=1.5,\\sigma_z^2=0.5^2)$.\n\n* What is the **expected** value for the reaction time $z$?\n\nThis is not a trick question! It's the mean $\\mu_z$, of course!\n\n* But how do we compute the expected value for any distribution?\n\n## Moments of a distribution\n\n### First moment: Expected value or mean\n\nThe expected value (mean) of a distribution is the **first moment** of the distribution:\n\n$$\n\\mathbb{E}[z]= \\int_{\\mathcal{Z}}z p(z) dz\n$$\n\nwhere $\\mathcal{Z}$ indicates the support of the distribution (the $z$ domain). 
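\nAs a quick numerical sanity check, we can approximate this integral for the car-problem pdf $p(z)=\mathcal{N}(z | \mu_z=1.5, \sigma_z^2=0.5^2)$ with `np.trapz` and recover (approximately) $\mu_z$; the integration grid below is an arbitrary, but wide enough, choice:\n\n\n```python\nimport numpy as np\nfrom scipy.stats import norm # both already used above; re-imported here so the cell is self-contained\n\nzrange = np.linspace(-3, 6, 2000) # assumed wide enough to cover essentially all of the pdf mass\nE_z = np.trapz(zrange * norm.pdf(zrange, 1.5, 0.5), zrange) # first moment by trapezoidal integration\nprint(E_z) # comes out very close to mu_z = 1.5\n```\n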
\n\n* Often, $\\mathcal{Z}$ is omitted as it is usually between $-\\infty$ to $\\infty$\n* The expected value $\\mathbb{E}[z]$ is often denoted by $\\mu_z$\n\nAs you might expect (pun intended \ud83d\ude06), the expected value is a linear operator:\n\n$$\n\\mathbb{E}[az+b]= a\\mathbb{E}[z] + b\n$$\n\nwhere $a$ and $b$ are fixed variables (NOT rv's).\n\nAdditionally, for a set of $n$ rv's, one can show that the expectation of their sum is as follows:\n\n$\\mathbb{E}\\left[\\sum_{i=1}^n z_i\\right]= \\sum_{i=1}^n \\mathbb{E}[z_i]$\n\nIf they are **independent**, the expectation of their product is given by\n\n$\\mathbb{E}\\left[\\prod_{i=1}^n z_i\\right]= \\prod_{i=1}^n \\mathbb{E}[z_i]$\n\n## Moments of a distribution\n\n### Second moment (and relation to Variance)\n\nThe 2nd moment of a distribution $p(z)$ is:\n\n$$\n\\mathbb{E}[z^2]= \\int_{\\mathcal{Z}}z^2 p(z) dz\n$$\n\n#### Variance can be obtained from the 1st and 2nd moments\n\nThe variance is a measure of the \u201cspread\u201d of the distribution:\n\n$$\n\\mathbb{V}[z] = \\mathbb{E}[(z-\\mu_z)^2] = \\int (z-\\mu_z)^2 p(z) dz = \\mathbb{E}[z^2] - \\mu_z^2\n$$\n\n* It is often denoted by the square of the standard deviation, i.e. $\\sigma_z^2 = \\mathbb{V}[z] = \\mathbb{E}[(z-\\mu_z)^2]$\n\n#### Elaboration of the variance as a result of the first two moments of a distribution\n\n$$\n\\begin{align}\n\\mathbb{V}[z] & = \\mathbb{E}[(z-\\mu_z)^2] \\\\\n& = \\int (z-\\mu_z)^2 p(z) dz \\\\\n& = \\int z^2 p(z) dz + \\mu_z^2 \\int p(z) dz - 2\\mu_z \\int zp(z) dz \\\\\n& = \\mathbb{E}[z^2] - \\mu_z^2\n\\end{align}\n$$\n\nwhere $\\mu_z = \\mathbb{E}[z]$ is the first moment, and $\\mathbb{E}[z^2]$ is the second moment.\n\nTherefore, we can also write the second moment of a distribution as\n\n$$\\mathbb{E}[z^2] = \\sigma_z^2 + \\mu_z^2$$\n\n#### Variance and standard deviation properties\n\nThe standard deviation is defined as\n\n$ \\sigma_z = \\text{std}[z] = \\sqrt{\\mathbb{V}[z]}$\n\nThe variance of a shifted and scaled version of a random variable is given by\n\n$\\mathbb{V}[a z + b] = a^2\\mathbb{V}[z]$\n\nwhere $a$ and $b$ are fixed variables (NOT rv's).\n\nIf we have a set of $n$ independent rv's, the variance of their sum is given by the sum of their variances\n\n$$\n\\mathbb{V}\\left[\\sum_{i=1}^n z_i\\right] = \\sum_{i=1}^n \\mathbb{V}[z_i]\n$$\n\nThe variance of their product can also be derived, as follows:\n\n$$\n\\begin{align}\n\\mathbb{V}\\left[\\prod_{i=1}^n z_i\\right] & = \\mathbb{E}\\left[ \\left(\\prod_i z_i\\right)^2 \\right] - \\left( \\mathbb{E}\\left[\\prod_i z_i \\right]\\right)^2\\\\\n & = \\mathbb{E}\\left[ \\prod_i z_i^2 \\right] - \\left( \\prod_i\\mathbb{E}\\left[ z_i \\right]\\right)^2\\\\\n & = \\prod_i \\mathbb{E}\\left[ z_i^2 \\right] - \\prod_i\\left( \\mathbb{E}\\left[ z_i \\right]\\right)^2\\\\\n & = \\prod_i \\left( \\mathbb{V}\\left[ z_i \\right] +\\left( \\mathbb{E}\\left[ z_i \\right]\\right)^2 \\right)- \\prod_i\\left( \\mathbb{E}\\left[ z_i \\right]\\right)^2\\\\\n & = \\prod_i \\left( \\sigma_{z,\\,i}^2 + \\mu_{z,\\,i}^2 \\right)- \\prod_i\\mu_{z,\\,i}^2 \\\\\n\\end{align}\n$$\n\n## Note about higher-order moments\n\n* The $k$-th moment of a distribution $p(z)$ is defined as the expected value of the $k$-th power of $z$, i.e. 
$z^k$:\n\n$$\n\\mathbb{E}[z^k]= \\int_{\\mathcal{Z}}z^k p(z) dz\n$$\n\n## Mode of a distribution\n\nThe mode of an rv $z$ is the value of $z$ for which $p(z)$ is maximum.\n\nFormally, this is written as,\n\n$$ \\mathbf{z}^* = \\underset{z}{\\mathrm{argmax}}~p(z)$$\n\nIf the distribution is multimodal, this may not be unique:\n* That's why $\\mathbf{z}^*$ is in **bold**, to denote that in general it is a vector that is retrieved!\n* However, if the distribution is unimodal (one maximum), like the univariate Gaussian distribution, then it retrieves a scalar $z^*$\n\nNote that even if there is a unique mode, this point may not be a good summary of the distribution.\n\n## Mean vs mode for a non-symmetric distribution\n\n\n```python\n# 1. Create a gamma pdf with parameter a = 2.0\n\nfrom scipy.stats import gamma # import from scipy.stats the Gamma distribution\n\na = 2.0 # this is the only input parameter needed for this distribution\n\n# Define the support of the distribution (its domain) by using the\n# inverse of the cdf (called ppf) to get the lowest z of the plot that\n# corresponds to Pr = 0.01 and the highest z of the plot that corresponds\n# to Pr = 0.99:\nzrange = np.linspace(gamma.ppf(0.01, a), gamma.ppf(0.99, a), 200) \n\nmu_z, var_z = gamma.stats(2.0, moments='mv') # This computes the mean and variance of the pdf\n\nfig_gamma_pdf, ax = plt.subplots() # a trick to save the figure for later use\nax.plot(zrange, gamma.pdf(zrange, a), label=r\"$\\Gamma(z|a=2.0)$\")\nax.set_xlabel(\"z\", fontsize=20)\nax.set_ylabel(\"probability density\", fontsize=20)\nax.legend(fontsize=15)\nax.set_title(\"Gamma pdf for $a=2.0$\", fontsize=20)\nplt.close(fig_gamma_pdf) # do not plot the figure now. We will show it in a later cell\n```\n\n\n```python\n# 2. Plot the expected value (mean) for this pdf\nax.plot(mu_z, gamma.pdf(mu_z, a), 'r*', markersize=15, linewidth=2, label=u'$\\mu_z = \\mathbb{E}[z]$')\n```\n\n\n\n\n []\n\n\n\n\n```python\n# 3. Calculate the mode and plot it\nfrom scipy.optimize import minimize # import minimizer\n\n# Finding the maximum of the gamma pdf can be done by minimizing\n# the negative gamma pdf. So, we create a function that outputs\n# the negative of the gamma pdf given the parameter a=2.0:\ndef neg_gamma_given_a(z): return -gamma.pdf(z,a)\n\n# Use the default optimizer of scipy (L-BFGS) to find the\n# maximum (by minimizing the negative gamma pdf). 
Note\n# that we need to give an initial guess for the value of z,\n# so we can use, for example, z=mu_z:\nmode_z = minimize(neg_gamma_given_a,mu_z).x\n\nax.plot(mode_z, np.max(gamma.pdf(mode_z, a)),'g^', markersize=15,\n linewidth=2,label=u'mode $\\mathbf{z}^*=\\mathrm{argmax}~p(z)$')\nax.legend() # show legend\n```\n\n\n\n\n \n\n\n\n\n```python\n# Code to generate this Gamma distribution hidden during presentation (it's shown as notes)\n\nprint('The mean is ',mu_z) # print the mean calculated for this gamma pdf\nprint('The mode is approximately ',mode_z) # print the mode\nfig_gamma_pdf # show figure of this gamma pdf\n```\n\n## The amazing Bayes' rule\nBayesian inference definition:\n* Inference means \u201cthe act of passing from sample data to generalizations, usually with calculated degrees of certainty\u201d.\n* Bayesian is used to refer to inference methods that represent \u201cdegrees of certainty\u201d using probability theory, and which leverage Bayes\u2019 rule to update the degree of certainty given data.\n\n**Bayes\u2019 rule** is a formula for computing the probability distribution over possible values of an unknown (or hidden) quantity $z$ given some observed data $y$:\n\n$$\np(z|y) = \\frac{p(y|z) p(z)}{p(y)}\n$$\n\nBayes' rule follows automatically from the identity: $p(z|y) p(y) = p(y|z) p(z) = p(y,z) = p(z,y)$\n\n## The amazing Bayes' rule\n\n* I know... You don't find it very amazing (yet!).\n* Wait until you realize that almost all ML methods can be derived from this simple formula\n\n$$\np(z|y) = \\frac{p(y|z) p(z)}{p(y)}\n$$\n\n### See you next class\n\nHave fun!\n\n\n", "meta": {"hexsha": "4a1d17452ee091093759237660eba501b4e16d1c", "size": 551563, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lectures/Lecture1/3dasm_Lecture1.ipynb", "max_stars_repo_name": "shushu-qin/3dasm_course", "max_stars_repo_head_hexsha": "a53ce9f8d7c692a9b1356946ec11e60b35b7bbcd", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2022-02-07T18:45:48.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-23T21:45:27.000Z", "max_issues_repo_path": "Lectures/Lecture1/3dasm_Lecture1.ipynb", "max_issues_repo_name": "shushu-qin/3dasm_course", "max_issues_repo_head_hexsha": "a53ce9f8d7c692a9b1356946ec11e60b35b7bbcd", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lectures/Lecture1/3dasm_Lecture1.ipynb", "max_forks_repo_name": "shushu-qin/3dasm_course", "max_forks_repo_head_hexsha": "a53ce9f8d7c692a9b1356946ec11e60b35b7bbcd", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 6, "max_forks_repo_forks_event_min_datetime": "2022-02-07T18:45:49.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-25T19:30:17.000Z", "avg_line_length": 369.4326858674, "max_line_length": 159916, "alphanum_fraction": 0.9294550215, "converted": true, "num_tokens": 8363, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.29098086621490676, "lm_q2_score": 0.32423541204073586, "lm_q1q2_score": 0.09434630105316053}} {"text": "```python\n## This code cell will not be shown in the HTML version of this notebook\n#### some helpful imports ####\n# import autograd functionality\nimport autograd.numpy as np\n\n# import testing libraries\nimport sys\nsys.path.append('../')\nfrom mlrefined_libraries import time_series_lib as timelib\nfrom mlrefined_libraries import pid_lib as pidlib\n\n# import dataset path\ndatapath = '../datasets/'\n\n# import various other libraries e.g., for plotting, deep copying\nimport copy\nimport matplotlib.pyplot as plt\n\n# this is needed to compensate for %matplotl+ib notebook's tendancy to blow up images when plotted inline\nfrom matplotlib import rcParams\nrcParams['figure.autolayout'] = True\n%matplotlib notebook\n\n# autoreload function - so if anything behind the scenes is changeed those changes\n# are reflected in the notebook without having to restart the kernel\n%load_ext autoreload\n%autoreload 2\n```\n\n# Principles of PID control\n\nToggle code on and off in this presentation by clicking the 'Toggle code' button below.\n\n\n```python\nfrom IPython.display import display\nfrom IPython.display import HTML\nimport IPython.core.display as di # Example: di.display_html('

%s:

' % str, raw=True)\n\n# This line will hide code by default when the notebook is e\u00e5xported as HTML\ndi.display_html('', raw=True)\n\n# This line will add a button to toggle visibility of code blocks, for use with the HTML export version\ndi.display_html('''''', raw=True)\n```\n\n\n\n\n\n\n\n\n\n# The standard PID Control Model\n\n- with a trained Imitator or System Model in hand, we can now look at automatically controlling this Imitator to perform desired tasks\n\n\n- the simplest kind of behavior we make a system obey (like e.g., temperature control and cruise control): is to make the system match a series of **training** *set points* $x_1,\\,x_2,\\,...,x_T$ as closely as possible\n\n\n- examples of set points:\n - for cruise control / autonomous driving: speed levels to have the car drive\n - for temperature control: different temperature levels throughout the day\n - for an industrial process like water level: keep a certain level in a tank that can change throughout the day\n\n- this involves putting our Imitator model under the authority of a Controller (or - you can say - we pass our Imitator model through a Control model that selects optimal actions for it)\n\n\n- once trained the Control Model should be able to choose actions automatically so that the Imitator matches desired set points \n\n\n- that is, our Controller will choose actions $a_t$ for optimally for our Imitator so that its output $s_{t+1} = f_{\\text{imitator}}\\left(s_t,a_t\\right)$ matches the desired set points as closely as possible (as the system allows), or that \n\n$$s_t \\approx x_t \\,\\,\\,\\,\\, \\text{for} \\,\\,\\,\\,\\, t=2,...,T$$\n\n(often the initial state $s_1$ is determined by the problem, or set to a reasonable reference value) \n\n\n- once tuned properly a Control Model is often referred to as a *Control Law* or *Optimal Policy* \n\n- how does the Controller choose optimal actions to meet a series of set points? \n\n\n- *one way* is to train a Controller to choose actions optimally **looking backwards** to decide on the best choice of action in the present\n\n\n- this is based on the desire to always be correcting for previous mistakes / *historical errors*, or how well $s_t \\approx x_t$ previously\n\n\n- this most popular control approach is called *Proportional Integral Derivative or PID* ontrol \n\n\n- this is a simple *parameterized dynamic system with unlimited memory* that captures historical *error* between our sequence of training set points and the corresponding states of our system\n\n\n- as a graphical model it looks like this\n\n
*(figure: graphical model of the Controller and Imitator / System Model loop)*
\n\n- here we have used the *signed error* $e_t = x_t - s_t$ as historical feedback to the Controller Model: $f_{\\text{controller}}\\left(e_t\\right)$\n\n\n- using the signed error is a convention, you could use another (e.g., absolute value of the error)\n\n\n- we uses a linear combination of this (and its history and/or its derivatives) to learn how to choose actions optimally\n\n\n- the simplest parameterized controller (a *Proportional controller*) uses a linear combination of this (signed) error to determine the next action to take\n\n\\begin{equation}\nf_{\\text{controller}}\\left(e_t;\\Theta_{\\text{controller}}\\right) = a_t = w_0 + w_1e_t\n\\end{equation}\n\n\n- this is a function with weights we need to tune properly in order for the controller to properly choose actions\n\n\n- it is called a *Proportional controller* because the action $a_t$ is literally being made proportional to the signed error of the system at the prior state\n\n
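- in code, such a controller is just an affine function of the current signed error; here is a minimal sketch (the function name `P_controller` and the example weights are made-up illustrations, while the full `PID_controller` actually used in this notebook appears a few cells below)\n\n\n```python\n# minimal Proportional controller sketch: a_t = w0 + w1 * e_t\ndef P_controller(e_t, w):\n    # w[0] is a bias term, w[1] scales the current signed error e_t = x_t - s_t\n    return w[0] + w[1] * e_t\n\n# example call with made-up weights: an error of 2.0 maps to an action of 1.0\nprint(P_controller(e_t=2.0, w=[0.0, 0.5]))\n```\n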
\n\n- a common extension of this idea: use a summary statistic of the history of the error as well as well \n\n\n- this is usually chosen to be the *integral of the error*\n\n\\begin{equation}\nh^e_t = h^e_t + \\frac{1}{D}e_t\n\\end{equation}\n\nwhere $\\frac{1}{D}$ is the gap between steps, but in principle any dynamic system with unlimited memory will work\n\n\n- adding the integral (or history) of the error gives the controller *context*, as the integral summarizes how the error has changed in the past\n\n\n- this is what we mean when we say that a PID Controller 'looks backward' to decide on the best choice of action in the present\n\n- adding the history $h^e_t$ term to our parameterized action function / control model gives the parameterized update\n\n\\begin{equation}\na_t = w_0 + w_1e_t + w_2h^e_t\n\\end{equation}\n\n- this is a so-called *Proportional Integral* controller - one of the most common automatic controller used today in practice (for set-point matching automatic control problems)\n\n\n- notice: this history of the error is now an explicit input to our Controller \n\n\\begin{equation}\nf_{\\text{control}}\\left(e_t,h^e_t ; \\Theta_{\\text{controller}} \\right) = a_t = w_0 + w_1e_t + w_2h^e_t\n\\end{equation}\n\n- one final common addition: the derivative of the error: $\\frac{e_t - e_{t-1}}{D}$\n\n\n- proportional information derivative or *local difference* of the error can be added as well, tacking on another term as to the action update\n\n\n\\begin{equation}\na_t = w_0 + w_1e_t + w_2h_t + w_3\\frac{e_t - e_{t-1}}{D}\n\\end{equation}\n\n- using this update in a Control Model we have the so-called *Proportional Inegral Derivative* (PID) controller.\n\n\n- our Control Model now takes in two prior error terms \n\n# Tuning the weights of a PID Control Model\n\n- how do we tune these weights properly - so that our controller learns how to produce the best actions to lead our system model to match our training set points?\n\n\n- traditional (non machine-learning) approaches PID controller tuning involve \"voodoo\" and a lot of human trial and error \n\n\n- see e.g., the recommendations for tuning on stackoverflow and youtube (note here: $w_1 = K_p$, $w_2 = K_i$, and $w_3 = K_d$)\n\nhttps://robotics.stackexchange.com/questions/167/what-are-good-strategies-for-tuning-pid-loops\n\nhttps://www.youtube.com/watch?v=VVOi2dbtxC0&t=1050s\n\n\n- note: 'traditional' does not mean that 'old' - this is how many people tune their controllers today\n\n- why so much 'voo-doo' and human trial and error for PID tuning?\n\n\n- because the traditional way of doing PID control already involves one understand an enormous amount of specialized information\n\n - for Imitator / System modeling: the traditional way is to use 'first principles' differential equations modeling\n \n - this can lead to system models that are - by their very nature - highly unstable\n \n - this involves building up not only a steep mathematical knowledge stack, but expert knowledge in a particular domain (e.g., physics, chemistry, robotics, etc.,)\n \n \n- things often not included in traditional automatic control pedagogy\n\n - programming / basic CS\n - machine learning / deep learning basics (as an alternative to differential equations modeling)\n - mathematical optimization (although there is some emphasis on this for advanced students of automatic control, the approaches taken are very limiting = only special structures like QP are studied)\n\n# The ML perspective on PID tuning\n\n- but 'automatic parameter tuning' 
is the 'bread and butter' for machine learning folks - so what do we need to do to auto-tune our PID parameters?\n\n\n- Like everything else: we need to form a cost function (whose minimum recovers correctly tuned parameters)!\n\n\n- to design a proper cost, lets look at our entire controller pipeline- including our system model\n\n\n- to keep things simple we will perform these derivations using the simplest Proportional (P) controller, but everything that follows is the same for PI and PID controllers as well\n\n
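- in code, a single pass through this pipeline looks roughly like the sketch below; the one-line `toy_system` is only a stand-in for the trained Imitator, and every number in it is a made-up illustration\n\n\n```python\n# one pass through the controller pipeline for a single set point (all values are illustrative guesses)\nw = [0.0, 0.8] # P-controller weights, not tuned values\nx_t, x_next, s_t = 2.0, 2.0, 0.0 # current set point, next set point, current state\n\ndef toy_system(s_t, a_t):\n    # stand-in for f_imitator: a simple stable first-order response (an assumption, not the real model)\n    return 0.9 * s_t + 0.1 * a_t\n\ne_t = x_t - s_t # signed error fed to the controller\na_t = w[0] + w[1] * e_t # P-controller action\ns_next = toy_system(s_t, a_t) # next state from the (stand-in) system model\nsquared_error = (s_next - x_next) ** 2 # one term of the cost we are about to construct\nprint(a_t, s_next, squared_error)\n```\n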
\n\n- our basic Proportional controller takes in the current (signed) error $e_t = x_t - s_t$ and returns the action $a_t$ \n\n\n\\begin{equation}\nf_{\\text{control}}\\left(e_t\\right) = a_t\n\\end{equation}\n\n\n- we then feed this action into our Imitator model to get our next state $s_{t+1}$\n\n\n\\begin{equation}\nf_{\\text{imitator}}\\left(s_t, a_t \\right) = f_{\\text{imitator}}\\left(s_t,\\, f_{\\text{control}}\\left(e_t\\right) \\right) = s_{t+1} \n\\end{equation}\n\n\n- now ideally - we want these actions made *so that this next state matches the input set point*, that is\n\n\\begin{equation}\ns_{t+1} \\approx x_{t+1}\n\\end{equation}\n\n\n- in other words, we want the *error* $e_{t+1} = x_{t+1} - s_{t+1}$ to be small in magnitude\n\n- so why not tune $\\Theta_{\\text{controller}}$ to minimize the average e.g., squared error over the entire sequence of *training set points* $x_1,...,x_T$ \n\n\n\\begin{equation}\n\\frac{1}{T-1}\\sum_{t=1}^{T-1}\\left(e_{t+1}\\right)^2 = \\frac{1}{T-1}\\sum_{t=1}^T\\left(s_{t+1} - x_{t+1}\\right)^2\n\\end{equation}\n\n\n- we ignore the error on our initial state $s_1$ since we cannot adjust it (its given by the problem at hand! e.g., with cruise control the car starts with 0 velocity)\n\n\n- if unwravel the definition of $s_{t+1}$ and express it in terms of our system and control model this is equivalently\n\n\n\\begin{equation}\n\\frac{1}{T-1}\\sum_{t=1}^{T-1}\\left(f_{\\text{imitator}}\\left(s_t,\\, f_{\\text{control}}\\left(e_t;\\Theta_{\\text{controller}}\\right) \\right) - x_{t+1}\\right)^2\n\\end{equation}\n\n\n- note here: we are minimizing over the *Controller parameters* $\\Theta_{\\text{controller}}$, **not** the parameters of our Imitator model\n\n\n- the weights of our imitator have already been tuned as necessary to real action/state data (another way to think about it: they are regularized so that the system matches a real set of input/output data)\n\n\n- our goal in optimizing our Controller parameters is to solidify our *Optimal Control Law* so that we learn an entire sequence of optimal actions $a_1,\\,a_2,\\,...,a_{T-1}$\n\n\n- thus we are getting at our optimal set of actions indirectly (via a parameterized function)\n\n- Lets look at a simple implementation of a PID controller\n\n\n```python\n# a simple implementation of a PID controller\ndef PID_controller(e_t,h_t,d_t,w): \n # note here in terms of inputs\n # e_t = current error\n # h_t = integral of error\n # d_t = derivative of error\n return w[0] + w[1]*e_t + w[2]*h_t + w[3]*d_t\n```\n\n\n```python\n# loop for evaluating control model over all input/output action/state pairs\n# Our inputs here:\n# s_1 - the initial condition state\n# x - sequence of training set points\n# w - the control model parameters\ndef control_loop(x,w):\n # initialize key variables and containers\n s_t = copy.deepcopy(s_1)\n h_t = 0\n d_t = 0\n frac = 1/float(np.size(x))\n action_history = []\n state_history = [s_t]\n error_history = []\n \n # loop over training set points and run through controller, then \n # system models\n for t in range(np.size(x) - 1):\n # get current set point\n x_t = x[:,t]\n\n # update error\n e_t = x_t - s_t\n error_history.append(e_t)\n \n # update integral of error\n h_t = h_t + frac*e_t\n \n # update derivative of error \n if t > 0:\n d_t = frac*(error_history[-1] - error_history[-2])\n \n # send error, integral, and derivative to PID controller\n a_t = PID_controller(e_t,h_t,d_t,w)\n \n # clip a_t to match system specifications?\n \n # send action to system model\n s_t = 
tuned_system_model(s_t,a_t)\n \n # store state output, and actions (for plotting)\n state_history.append(s_t)\n action_history.append(a_t)\n\n # transition to arrays\n state_history = np.array(state_history)[np.newaxis,:]\n action_history = np.array(action_history)[np.newaxis,:]\n \n # return velocities and control history\n return state_history,action_history\n```\n\n\n```python\n# an implementation of the least squares cost for PID controller tuning\n# note here: s is an (1 x T) array and a an (1 x T-1) array\ndef least_squares(w,x):\n # system_loop - runs over all action-state pairs and produces entire\n # state prediction set\n state_history,action_history = control_loop(x,w)\n\n # compute least squares error between real and predicted states\n cost = np.sum((state_history[:,1:] - x[:,1:])**2)\n return cost/float(x.shape[1]-1)\n```\n\n#### Example: 1 Cruise control\n\n- a `Python` implementation of our `control_model` for the *cruise control* problem. Notice at each update step the action is clipped to lie in the range $[-50,100]$ - which is the angle of the pedal against the floor of the car.\n\n\n- because here we are using a 'true model' of the automobile we need to use a zero order optimization method - since we cannot compute the gradient of `system_model` with respect to our PID weights.\n\n\n```python\n# create tuned system model for the car\nind = np.argmin(mylib1.train_cost_histories[0]) \nw_best = mylib1.weight_histories[0][ind]\n# a_norm = mylib1.x_norm\n# s_norm = mylib1.y_norm\n# s_invnorm = mylib1.y_invnorm\n# a_invnorm = mylib1.x_invnorm\n# tuned_system_model = lambda state,action: s_invnorm(system_model(s_norm(state),a_norm(action),w_best))\n\ntuned_system_model = lambda state,action: system_model(state,action,w_best)\ns_1 = 0.0\n```\n\n\n```python\n# loop for evaluating control model over all input/output action/state pairs\n# Our inputs here:\n# s_1 - the initial condition state\n# x - sequence of training set points\n# w - the control model parameters\ndef control_loop(x,w):\n # initialize key variables and containers\n s_t = copy.deepcopy(s_1)\n h_t = 0\n d_t = 0\n frac = 1/float(np.size(x))\n action_history = []\n state_history = [s_t]\n error_history = []\n \n # loop over training set points and run through controller, then \n # system models\n for t in range(np.size(x) - 1):\n # get current set point\n x_t = x[:,t]\n\n # update error\n e_t = x_t - s_t\n error_history.append(e_t)\n \n # update integral of error\n h_t = h_t + frac*e_t\n \n # update derivative of error \n if t > 0:\n d_t = frac*(error_history[-1] - error_history[-2])\n \n # send error, integral, and derivative to PID controller\n a_t = PID_controller(e_t,h_t,d_t,w)\n \n # clip action range to realistic machine standard?\n # clip inputs to -50% to 100% for car\n if a_t >= 100.0:\n a_t = 100.0\n if a_t <= -50.0:\n a_t = -50.0\n \n # send action to system model\n s_t = tuned_system_model(s_t,a_t)\n\n # store state output, and actions (for plotting)\n state_history.append(s_t)\n action_history.append(a_t)\n\n # transition to arrays\n state_history = np.array(state_history)[np.newaxis,:]\n action_history = np.array(action_history)[np.newaxis,:]\n \n # return velocities and control history\n return state_history,action_history\n```\n\n\n```python\n# an implementation of the least squares cost for PID controller tuning\n# note here: s is an (1 x T) array and a an (1 x T-1) array\ndef least_absolute(w,x):\n # system_loop - runs over all action-state pairs and produces entire\n # state prediction set\n 
state_history,action_history = control_loop(x,w)\n\n # compute least squares error between real and predicted states\n cost = np.sum(np.abs(state_history[:,1:] - x[:,1:]))\n return cost/float(x.shape[1]-1)\n```\n\n\n```python\n# create an instance of the car simulator\ndemo = pidlib.car_simulator.MyCar()\n\n# create a training sequence of *set points* for trying out the true simulator, and for learning a controller\nx_car = demo.create_set_points()\n```\n\n\n```python\n# This code cell will not be shown in the HTML version of this notebook\n# initialize with input/output data\nmylib5 = pidlib.rnn_pid_lib.super_setup.Setup(x_car)\n\n# normalize?\nmylib5.preprocessing_steps(normalizer_name = 'none')\n\n# split into training and validation sets\nmylib5.make_train_val_split(train_portion = 1)\n\n# choose cost\nmylib5.choose_cost(control_loop,cost = least_absolute)\n\n# fit an optimization\nw = 0.1*np.random.randn(4,1)\n# mylib5.fit(max_its = 59,alpha_choice = 10**(0),optimizer = 'gradient_descent',w_init = w,verbose = False)\n\nmylib5.fit(max_its = 50,alpha_choice = 10**(0),optimizer = 'zero_order',w_init = w,verbose = False)\n\n# show cost function history\nmylib5.show_histories(start = 1)\n```\n\n\n \n\n\n\n\n\n\n\n```python\n# This code cell will not be shown in the HTML version of this notebook\n# Plot the standard normalized series and its training fit\npidlib.variable_order_plotters.plot_setpoint_train_val_sequences(mylib5)\n```\n\n\n \n\n\n\n\n\n\n- these actions are crazy when the desired speed ramps up and down - we can fix this by *regularizing*\n\n#### Example: 2 PID controller for two tank example\n\n\n```python\n# This code cell will not be shown in the HTML version of this notebook\ndef two_tank_control_model(x,w):\n #### simulate vehicle response to set points ####\n s_t = [0.0,0.0]\n action_history = []\n state_history = [s_t]\n h = 0.0\n for t in range(np.size(x) - 1):\n # get current set point\n x_t = x[:,t]\n\n # update error\n e_t = x_t - s_t[1]\n \n # update integral of error\n h = h + e_t*0.1\n\n # set action based on PI linear combination\n a_t = w[0] + w[1]*e_t + w[2]*h \n \n if t > 0:\n a_t += w[3]*(s_t[1] - state_history[-2][1])\n\n # clip inputs to -50% to 100%\n if a_t >= 100.0:\n a_t = 100.0\n if a_t <= 0.0:\n a_t = 0.0\n \n # cap off condition\n #if s_t[1] > x_t:\n # a_t = 0\n \n # run pid controller\n s_t = demo_3.tank_model(a_t,s_t)\n \n # store results\n action_history.append(a_t)\n state_history.append(s_t)\n\n # transition to arrays\n state_history = np.array(state_history).T\n action_history = np.array(action_history)[np.newaxis,:]\n \n # return velocities and control history\n return state_history,action_history\n```\n\n\n```python\n# This code cell will not be shown in the HTML version of this notebook\n# an implementation of the least squares cost function for linear regression\ndef least_squares(w,x):\n states,actions = control_model(x,w)\n # compute cost over batch\n cost = np.sum((states[1,1:] - x[:,1:])**2)\n return cost/float(x.shape[1]-1)\n\n# a compact least absolute deviations cost function\ndef least_absolute_deviations(w,x):\n states,actions = control_model(x,w)\n # compute cost over batch\n cost = np.sum(np.abs(states[1,1:] - x[:,1:]))\n return cost/float(x.shape[1]-1)\n```\n\n\n```python\n# This code cell will not be shown in the HTML version of this notebook\n# initialize with input/output data\nmylib6 = pidlib.rnn_pid_lib.super_setup.Setup(x_tank2)\n\n# split into training and validation sets\nmylib6.make_train_val_split(train_portion = 
1)\n\n# choose cost\ncontrol_model = two_tank_control_model\nmylib6.choose_cost(model = control_model,cost = least_absolute_deviations)\n\n# fit an optimization\nw = 0.1*np.random.randn(4,1)\nmylib6.fit(max_its = 10,alpha_choice = 'diminishing',optimizer = 'zero_order',w_init = w,verbose = False)\n\n# show cost function history\nmylib6.show_histories(start = 5)\n```\n\n\n \n\n\n\n\n\n\n\n```python\n# This code cell will not be shown in the HTML version of this notebook\n# plot results\npidlib.two_tank_plotter.plot_results(mylib6)\n```\n\n\n \n\n\n\n\n\n", "meta": {"hexsha": "5527d4c17765f1dded46c7025df3e60eb6b184cf", "size": 551316, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "presentations/pid_control/pid_control_part_1.ipynb", "max_stars_repo_name": "jermwatt/blog", "max_stars_repo_head_hexsha": "3dd0d464d7a17c1c7a6508f714edc938dc3c03e9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 14, "max_stars_repo_stars_event_min_datetime": "2019-04-17T23:55:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-08-08T02:18:49.000Z", "max_issues_repo_path": "presentations/pid_control/pid_control_part_1.ipynb", "max_issues_repo_name": "jermwatt/blog", "max_issues_repo_head_hexsha": "3dd0d464d7a17c1c7a6508f714edc938dc3c03e9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "presentations/pid_control/pid_control_part_1.ipynb", "max_forks_repo_name": "jermwatt/blog", "max_forks_repo_head_hexsha": "3dd0d464d7a17c1c7a6508f714edc938dc3c03e9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2019-04-10T22:46:27.000Z", "max_forks_repo_forks_event_max_datetime": "2020-11-06T09:16:30.000Z", "avg_line_length": 131.0473021155, "max_line_length": 178387, "alphanum_fraction": 0.8126301431, "converted": true, "num_tokens": 5339, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4843800842769844, "lm_q2_score": 0.19436780401867254, "lm_q1q2_score": 0.094147893291297}} {"text": "

\n\n\n# $\\LaTeX$\n\n>_LaTeX, which is pronounced \u00abLah-tech\u00bb or \u00abLay-tech\u00bb (to rhyme with \u00abblech\u00bb or \u00abBertolt Brecht\u00bb), is a document preparation system for high-quality typesetting. It is most often used for medium-to-large technical or scientific documents but it can be used for almost any form of publishing.\\\n $~~~~$ - https://www.latex-project.org/about/_\n\n\nThis git is designed to be a primer in using $\\LaTeX$ in Jupyter notebooks with the intention of improving presentation.\n\n$\\LaTeX$ is a powerful tool in making your markdowns cells look more professional and stand out. One can introduce $\\LaTeX$ into the Jupyter Notebooks markdown cells using imported libraries, however, Jupyter Notebook natively leverages JavaScripts library MathJax to allow rendering of special characters and formatting. Its intended to allow for displaying mathematical functions and text in a more legible format and is widely used in academia.\n\nThis notebook aims to act like a cheat sheet for commonly used formatting examples. As we get progress along in the notebook, explanations will be limited to expounding on usage idiosyncrasies of the function in question. In other words, sections later in the book rely on knowledge of previous sections.\n\n## Getting Started \nLets start with the basics. To insert $\\LaTeX$ formatted text in Jupyter notebook is enclose the text in '$' to get started. Eg: here is the letter 'A' before and after\n\nA : A\\\n\\\\$A\\\\$ : $A$\n\nEasy enough. However, if all we wanted to bold and italicize our text, we can do that with builtin markdown shortcuts. The real power of $\\LaTeX$ comes with introducing special characters. This is done using the escape character '\\'. In this example, to get the alpha symbol we prefix it with the backspace character.\n\n\\\\$alpha\\\\$ : $alpha$\\\n\\\\$\\\\alpha\\\\$ : $\\alpha$\n\n\n### Spacing\n\n$\\LaTeX$ doesn't care for white space and displays text with standardized spacing regardless of how many whitespace characters are there. To allow for better separation and control one can use '~'\n\n$A B$ (10 spaces separating the characters)\\\n$A~~B$ (2 '~' between characters)\n\n### Justification\n#### Centering\nBy default, Markdowns are left justified and so are $\\LaTeX$ entries. Enclose text in '\\$$' instead of a single '\\$' centers the $\\LaTeX$ text. eg:\n\n$$A = B$$\n\n#### Multiline formatting for math\nIf you are typing a multi line mathematical process or derivation, it can be convenient to wrap the entire block in a \\\\begin and \\\\end. This allows us to forgo the '$' at the beginning and end of each line. A few things to keep in mind,\n\n- Every line should end with a '\\\\\\\\'. \n- By default the block right aligned, however one can provide an ***&*** at every line to specify point of alignment\n\n\n\n$$\n\\begin{align}\nx &= 10 + 5 +2 \\\\\nx &= 10 + 7\\\\\nx &= 17\\\\\nx - 17 &= 0\\\\\n\\end{align}\n$$\n\n## The Greeks\nThe greek alphabets are all represented. Note that some letters have a capitalized version. Letters exempt are the ones which resemble their alphabet equivalent, such as a capital $\\alpha$ is ***A***. 
To access the capital variations, capitalize the first letter\n\n\n$$\n\\begin{align}\nAlpha&: \\alpha\\\\\nBeta&: \\beta\\\\\nGamma&: \\gamma~\\Gamma\\\\\nDelta&: \\delta~\\Delta\\\\\nEpsilon&: \\epsilon\\\\\nZeta&: \\zeta\\\\\nEta&: \\eta\\\\\nTheta&: \\theta~\\Theta\\\\\nIota&: \\iota\\\\\nKappa&: \\kappa\\\\\nLambda&: \\lambda~\\Lambda\\\\\nMu&: \\mu\\\\\nNu&: \\nu\\\\\nXi&: \\xi~\\Xi\\\\\nOmicron&: \\omicron\\\\\nPi&: \\pi~\\Pi\\\\\nSigma&: \\sigma~\\Sigma\\\\\nTau&: \\tau\\\\\nUpsilon&: \\upsilon~\\Upsilon\\\\\nPhi&: \\phi~\\Phi\\\\\nChi&: \\chi\\\\\nPsi&: \\psi~\\Psi\\\\\nOmega&: \\omega~\\Omega\n\\end{align}\n$$\n\n\n## Mathematical Symbols\nThere is a plethora of mathematical symbols available for use as well, however, due to their verbose nature, we won't be going over every single one. Here are a few divided by subsections\n\n### Superscript and Subscript\nSuperscipts and Subscripts are easily accessible using the '^' and '\\_' symbols. This symbol needs to be prefixed to the exponent or the subscript. \n\n$$ a_b $$ \n$$ x^2 $$ \n\n### Grouping\nGrouping allows us to group a set of symbols together so they are always presented together. Lets use a superscript to illustrate this example. By default the superscript symbol only captures the first character to superscript. So if I wanted to write anything a little more complicated I'd have to group characters together using the _{ }_ symbols. In the following example we get two very different outcomes depending upon whether we grouped y+z\n\n$$\n\\begin{align}\nNo~grouping &:x ^ y + z\\\\\nGrouping &:x ^ {y + z}\n\\end{align}\n$$\n\n### Sets & Probability\n\nOperators for showing set relationship\n\n$$\n\\begin{align}\nUnion &: \\cup\\\\\nIntersection &: \\cap\\\\\nSubset &: \\subset\\\\\nSuperset &: \\supset\\\\\n\\end{align}\n$$\n\n### Mathematical comparators\nThese symbols and their usage are fairly self explanatory. Some symbols which have a dedicated spot on the keyboard don't need any special effort. For eg:\n$$ = ~ < ~ >$$\n\nHowever, for others that do not have a dedicated keyboard spot they can be accessed with the escape character,\n\n$$\n\\begin{align}\n\\approx ~ \\leq~ \\geq \\\\\n\\equiv ~ \\ll~ \\gg \\\\\n\\neq ~ \\leq~ \\geq \\\\\n\\end{align}\n$$\n\n### Mathematical operators\n$$\\pm ~\\times~\\cdot$$\n\n### Fractions\nFractions can be presented using the _'\\\\dfrac'_ or _'\\\\tfrac'_ command. 
Depending upon need either can be utilized.\n\nDfrac: $\\dfrac x y$\n\nTfrac: $\\tfrac x y$\n\nThis can be combined with grouping to display complex math in a legible form\n\n$$ \\dfrac {x^{exp}} {y_{sub}^{exp}} $$\n\n## A few examples of using $\\LaTeX$\nLets use what we've learned so far and write some famous and some more obscure equations\n\n#### Einstein's mass-energy equivalence\n\n\n$$ E = mc^2$$\n\n#### Equations of motion\n\n$$\n\\begin{align}\nv &= u + at \\\\\ns &= ut + \\dfrac {a t^2}{2} \\\\\ns &= \\dfrac {(u+v)t}{2} \\\\\nv^2 &= u^2 + 2as\n\\end{align}\n$$\n\n#### Quadratic Equation\nFor, \n\n$$\n\\begin{align}\na x^2 + b x + c &= 0 \\\\ \nx &= \\dfrac {-b \\pm \\sqrt{b^2 - 4 ac}}{2a} \n\\end{align}\n$$\n\n#### Some calculus for good measure\n\n##### Derivation\n\n$$\n\\begin{align}\n \\dfrac{d}{dx} x^n &= n x^{n-1} \\\\ \n \\dfrac{d}{dx} (f(x)\\cdot g(x)) &= \\dfrac{d}{dx} (f(x))\\cdot g(x) + \\dfrac{d}{dx} (g(x))\\cdot f(x)\\\\\n\\end{align}\n$$\n\n##### Integration\n\n$$\n\\begin{align}\n \\int x^n \\cdot dx &= \\dfrac{x^{n+1}}{n+1} + C \\\\\n \\int f(x)\\cdot g'(x)\\cdot dx &= f(x)\\cdot g(x) - \\int g(x)\\cdot f'(x)\\cdot dx\n\\end{align}\n$$\n\n## Using in visualizations\n\nThese rules can also be applied to your visualizations if needed. All labels and titles can utilize $\\LaTeX$ formatting as long as they are inserted as _rstrings_. This allows the use of the backslash escape character. As an example, here we plot a sine function and its differentiation cosine. Take note of the title and legend and the x ticks\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt \n\n# Generate x values\nx = np.arange(0, 4*np.pi, 0.1); \n\n# Get y values for sine wave and its differentiation cosine\ny = np.sin(x) \ndy = np.cos(x)\n\n# Plot waves\nplt.figure(figsize = (10,8))\nplt.plot(x, y) \nplt.plot(x, dy)\n\n# Give a title for the sine wave plot\nplt.title(r'Plot of $sin(x)$ & $\\dfrac {d}{dx} sin(x)$') \n\n# Give x axis label for the plot\nplt.xlabel('x') \n\n# Give y axis label for the plot\nplt.ylabel('f (x)') \n\nplt.xticks(np.arange(0, 9*(np.pi/2),(np.pi)),\n [r'0$\\pi$', r'$1\\pi$', r'2$\\pi$',r'3$\\pi$', r'4$\\pi$'])\n\n\nplt.grid(True, alpha =0.2)\nplt.axhline(y=0, color='k')\n\nplt.legend([r'$sin(x)$',r'$\\dfrac {d}{dx} sin(x) = cos(x)$'],loc = 1)\nplt.show();\n```\n\nAs you can see the possibilities are limitless and learning and leveraging $\\LaTeX$ is a must to improve on your presentations\n", "meta": {"hexsha": "9dbb490d4952f370ce255e82d7019c438e1d4b6c", "size": 77387, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Jupyter_Workbook.ipynb", "max_stars_repo_name": "ssaeed85/UsingLatexInJupyterNB", "max_stars_repo_head_hexsha": "34006d0406415fd1b4b43d6edf5dd51e46985788", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Jupyter_Workbook.ipynb", "max_issues_repo_name": "ssaeed85/UsingLatexInJupyterNB", "max_issues_repo_head_hexsha": "34006d0406415fd1b4b43d6edf5dd51e46985788", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Jupyter_Workbook.ipynb", "max_forks_repo_name": "ssaeed85/UsingLatexInJupyterNB", "max_forks_repo_head_hexsha": "34006d0406415fd1b4b43d6edf5dd51e46985788", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, 
"max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 150.2660194175, "max_line_length": 59480, "alphanum_fraction": 0.8741132232, "converted": true, "num_tokens": 3449, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4960938294709195, "lm_q2_score": 0.18952108217423458, "lm_q1q2_score": 0.09402023942128886}} {"text": "\n\nGiulio Tesei
\nQDETAILSS WS3
\n2019-10-24, Lund University\n\n### LAYOUT\n

\n- What is a Jupyter notebook?\n

\n- How can it improve my workflow?\n

\n- How to get started\n

 \n- How to share my notebook to help other scientists reproduce my analysis\n\n### What is it?\nInteractive document that integrates:\n- code:\n - A long list of available programming languages:\n - Python, Java, R, Julia, Matlab, Octave, Scala, Spark, PHP, C#, C++, etc.\n

 \n- command-line tools:\n - copy / delete / move files with `cp` / `rm` / `mv`\n - navigate in the directory tree with `cd`\n - create a new folder with `mkdir` \n

\n- narrative text:\n - equations\n - tables\n - links\n

\n- visualizations\n\n### Code\n\nDocumentation accessible within the notebook.\n- How can I call this function?\n- Which arguments does it have?\n- What attributes does this object have?\n\n\n```python\nnames = ['marie_curie','amedeo_avogadro','rosalind_franklin']\nprint(type(names), names[0], type(names[0]))\n```\n\n marie_curie \n\n\n\n```python\nfor name in names:\n first_last = name.split('_')\n print('first_last is ',first_last)\n first = first_last[0]\n last = first_last[1]\n print(first.capitalize()+' '+last.swapcase())\n```\n\n first_last is ['marie', 'curie']\n Marie CURIE\n first_last is ['amedeo', 'avogadro']\n Amedeo AVOGADRO\n first_last is ['rosalind', 'franklin']\n Rosalind FRANKLIN\n\n\n### Command-Line Tools:\nNo need to use the terminal or file managers.\n- copy / delete / move files or folders with `cp` / `rm` / `mv`\n- navigate in the directory tree with `cd`\n- create a new folder with `mkdir` \n- check the pth of the current directory with `pwd`\n\n\n```python\n%pwd\n```\n\n\n\n\n '/Users/giulio/jc/qdetailss'\n\n\n\n\n```python\n%mkdir data\n%ls\n```\n\n \u001b[34maux\u001b[m\u001b[m/ \u001b[34mdata\u001b[m\u001b[m/ jupyter_slides.pdf\r\n custom.css jupyter.ipynb \u001b[34mreveal.js\u001b[m\u001b[m/\r\n\n\n\n```python\n%rm -r data\n%ls\n```\n\n \u001b[34maux\u001b[m\u001b[m/ jupyter.ipynb \u001b[34mreveal.js\u001b[m\u001b[m/\r\n custom.css jupyter_slides.pdf\r\n\n\n### Narrative Text\n[Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) markup language:\n- equations\n- tables\n- links\n\n```\n###### Free Induction Decay\n\nThe oscillating voltage, $V(t)$, has an initial amplitude $V(0)$ which freely decays in time, $t$. The dampened signal can be modeled as a sine function of frequency $\\nu$, decaying exponentially with decay constant $T_2$:\n\n\\begin{equation}\nV(t)=V(0)\\,\\exp{(-t/T_2)}\\,\\sin{(2 \\pi \\nu t)}.\n\\end{equation}\n\n| Variable | Description | Unit |\n|---------------| ---------------|-------|\n| $t$ | time | ms |\n| $V$ | voltage | V |\n| $\\nu$ | frequency | Hz |\n| $T_2$ | decay constant | ms |\n\n###### References\n1. [Wikipedia](http://tiny.cc/r9r0ez)\n2. [Merriam-Webster](http://tiny.cc/uas0ez)\n```\n\n##### Free Induction Decay\n\nThe oscillating voltage, $V(t)$, has an initial amplitude $V(0)$ which freely decays in time, $t$. The dampened signal can be modeled as a sine function of frequency $\\nu$, decaying exponentially with decay constant $T_2$:\n\n\\begin{equation}\nV(t)=V(0)\\,\\exp{(-t/T_2)}\\,\\sin{(2 \\pi \\nu t)}.\n\\end{equation}\n\n| Variable | Description | Unit |\n|---------------| ---------------|-------|\n| $t$ | time | ms |\n| $V$ | voltage | V |\n| $\\nu$ | frequency | Hz |\n| $T_2$ | decay constant | ms |\n\n###### References\n1. [Wikipedia](http://tiny.cc/r9r0ez)\n2. [Merriam-Webster](http://tiny.cc/uas0ez)\n\n### How can it improve my workflow?\nAll the steps of your data analysis and visualization in a single document. \n- Interactive data exploration and analysis\n

\n- Immediate access to documentation: learn coding, readily use new libraries!\n

\n- Facilitates iteration: \n - once the notebook is set up, the analysis can be repeated effortlessly with new variables / data sets\n

\n- A large set of freely available tools:\n - Python libraries for linear algebra, fitting data, plotting, handling tabular data, image analysis, bioinformatics, spectroscopy, molecular visualization\n

\n- [Examples](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks)\n\n\n\n```\n\n```\n\n\n```python\nimport numpy as np\nx = np.linspace(0,np.pi/2.,10)\nx\n```\n\n\n\n\n array([0. , 0.17453293, 0.34906585, 0.52359878, 0.6981317 ,\n 0.87266463, 1.04719755, 1.22173048, 1.3962634 , 1.57079633])\n\n\n\n\n```python\nnp.mean(x) # np.std() to compute the standard deviation\n```\n\n\n\n\n 0.7853981633974483\n\n\n\n\n```python\ny = np.cos(x) # np.sin(), np.tan(), np.log(), np.exp() etc.\ny\n```\n\n\n\n\n array([1.00000000e+00, 9.84807753e-01, 9.39692621e-01, 8.66025404e-01,\n 7.66044443e-01, 6.42787610e-01, 5.00000000e-01, 3.42020143e-01,\n 1.73648178e-01, 6.12323400e-17])\n\n\n\n\n\n\n```python\n!head -n 1 aux/pmf.dat\n```\n\n 1.525000000000000000e+01 2.086592750614395442e+01 1.224484592940475874e-01\r\n\n\n\n```python\n# Load data file\nx,y,z = np.loadtxt('aux/pmf.dat',unpack=True)\n```\n\n\n```python\n# Integrate along the given axis using the composite trapezoidal rule.\nnp.trapz(y,x)\n```\n\n\n\n\n 940.9132634027817\n\n\n\n\n```python\n# Return the derivative of an array.\ndy = np.gradient(y,x)\n# Save the gradient to a text file\nnp.savetxt('aux/gradient.dat',y)\n```\n\n\n\n\n```python\nimport matplotlib.pyplot as plt\nplt.rcParams.update({'figure.dpi': 70})\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nx = np.linspace(0,10*np.pi,200)\ny = np.cos(x)\nplt.plot(x,y)\nplt.ylabel('Cosine')\nplt.xlabel('Angle / rad')\nplt.show()\n```\n\n$$\nP(x) = \\frac{1}{\\sqrt{2 \\pi \\sigma}} \\exp{\\left ( \\frac{-(x-\\mu)^2}{2\\sigma^2} \\right)}\n$$\n\n\n```python\nx = np.linspace(0,9,1000)\nfor u in range(1,8):\n y = np.exp(-(u-x)**2/(2*0.2**2)) / np.sqrt(2*np.pi*0.2**2)\n plt.plot(x,y,label=str(u))\nplt.legend(frameon=False, title='$\\mu$ / nm')\nplt.xlim(0,9)\nplt.xlabel('Distance, $x$ / nm'); plt.ylabel('Probabilty, $P(x)$')\nplt.savefig('aux/normal.pdf') # png, jpg, eps\nplt.show()\n```\n\n\n```python\nplt.rcParams.update({'figure.dpi': 300})\n```\n\n\n```python\nimport matplotlib.image as mpimg\nimg = mpimg.imread('aux/protein.png')\nprint(img.shape)\nfig = plt.figure(figsize=(1.2, 1.2))\nplt.imshow(img, interpolation='bilinear')\nplt.axis('off')\nplt.show()\n```\n\n\n```python\nplt.rcParams.update({'figure.dpi': 75})\n```\n\n\n```python\nfrom matplotlib.collections import LineCollection\nx = np.linspace(0,10,1000)\ny = np.exp(-(2.6-x)**2) / np.sqrt(4*np.pi) + np.exp(-(7-x)**2/2) / np.sqrt(8*np.pi)\npoints = np.array([x, y]).T.reshape(-1, 1, 2)\nsegments = np.concatenate([points[:-1], points[1:]], axis=1)\nnorm = plt.Normalize(x.min(), x.max())\nlc = LineCollection(segments, cmap='plasma', norm=norm)\nlc.set_array(x); lc.set_linewidth(4); plt.gca().add_collection(lc)\nplt.ylabel(r'Probability, $P(x)$'); plt.xlabel(r'Distance, $x$ / nm')\nplt.xlim(0,10); plt.ylim(0,.3)\nplt.show()\n```\n\nQ: How can I plot a gradient-colored line?\n\nA: Google [\"matplotlib gradient color line\"](https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/multicolored_line.html)\n\nYou can google very specific questions and quickly find excellent answers, generally on matplolib.org or stackoverflow.com\n\n### Multiple Subplots\n\n\n```python\nfig, axes = plt.subplots(nrows=3,ncols=2,figsize=(7, 6))\nx = np.arange(10)\naxes[0,0].plot(x, x**2, 'bo')\naxes[2,1].plot(x, -x**2, 'r^')\naxes[2,1].yaxis.set_ticks_position('right') # yticks on the right side\n```\n\n\n```python\ndef plotQCM(): \n fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(nrows=2,ncols=2)\n\n colors = 
plt.rcParams['axes.prop_cycle'].by_key()['color']\n\n ax1 = plt.subplot2grid((11, 5), (5, 0), rowspan=6, colspan=3)\n ax3 = plt.subplot2grid((11, 5), (0, 0), rowspan=5, colspan=3)\n ax2 = plt.subplot2grid((11, 5), (5, 3), rowspan=6, colspan=2)\n ax4 = plt.subplot2grid((11, 5), (0, 3), rowspan=5, colspan=2)\n\n a = np.linspace(0,np.pi,100)\n\n for ax in [ax2,ax4]:\n for i,c in zip(range(1,13,2),colors):\n ax.plot(np.cos(a*i)+i*2,a,lw=2,color=c)\n ax.plot(-np.cos(a*i)+i*2,a,lw=2,ls=':',color=c)\n\n ax.set_xticks(np.arange(1,13,2)*2)\n ax.set_xticklabels(np.arange(1,13,2))\n ax.tick_params(axis='both',which='both',bottom=False,right=False,labelbottom=True,labelright=False,\n left=False,labelleft=False,pad=-.5)\n ax.set_frame_on(False)\n\n ax2.set_ylim(0-0.06/1.3*np.pi,np.pi+0.06/1.3*np.pi)\n ax4.set_ylim(0-0.06*np.pi,np.pi+0.06*np.pi)\n\n x0 = 2; y0 = 0; x1 = 1.7;\n\n c=7\n\n ax1.fill([x1,x1+2,x1+2.6,x1+.6], [y0,y0,y0+1,y0+1], colors[c], alpha=0.3, \n edgecolor=colors[c],ls='--',lw=0)\n ax1.fill([x1+.6,x1+2.6,x1+2.6,x1+.6], [y0+1,y0+1,y0+1.3,y0+1.3], colors[c], alpha=0.6, \n edgecolor=colors[c],ls='--',lw=0)\n\n ax1.fill([x0,x0+2,x0+2,x0], [y0+1,y0+1,y0+1.3,y0+1.3], colors[9], alpha=0.6, edgecolor=colors[9],ls='-',lw=0)\n ax1.fill([x0,x0+2,x0+2,x0], [y0,y0,y0+1,y0+1], colors[9], alpha=0.3, edgecolor=colors[9],ls='-',lw=0)\n\n ax3.fill([x1,x1+2,x1+2.6,x1+.6], [y0,y0,y0+1,y0+1], colors[c], alpha=0.3, \n edgecolor=colors[c],ls='--',lw=0)\n ax3.fill([x0,x0+2,x0+2,x0], [y0,y0,y0+1,y0+1], colors[9], alpha=0.3, edgecolor=colors[9],ls='-',lw=0)\n\n for ax in [ax1,ax3]: \n ax.axis('off')\n ax.plot([2.3,2.3],[0,1],marker='o',lw=0,color='k')\n ax.hlines(y=[0,1],xmin=[.9925,.9925],xmax=[2.3,2.3],lw=1,color='k')\n ax.annotate(r'$\\bigcirc$',xy=(1,0.5),fontsize=24,color='k',horizontalalignment='center',\n verticalalignment='center')\n ax.annotate(u'\\u223F',xy=(1,0.5),fontsize=16,color='k',horizontalalignment='center',\n verticalalignment='center')\n ax.vlines(x=[1,1],ymin=[0,0.61],ymax=[.4,1],lw=1,color='k')\n\n ax1.set_xlim(.7,4.5)\n ax3.set_xlim(.7,4.5)\n ax3.set_ylim(-.05,1.05)\n ax1.set_ylim(-.05,1.35)\n\n ax3.annotate('QCR', xy=(3,0.5),fontsize=14,color='k',horizontalalignment='center', verticalalignment='center')\n ax1.annotate('QCR', xy=(3,0.5),fontsize=14,color='k',horizontalalignment='center', verticalalignment='center')\n ax1.annotate('Film', xy=(3,1.15),fontsize=14,color='k',horizontalalignment='center', verticalalignment='center')\n\n plt.gcf().text(.6, 0.58, '$n$ = ', fontsize=12)\n plt.gcf().text(.6, 0.058, '$n$ = ', fontsize=12)\n\n plt.tight_layout(w_pad=2.5,h_pad=1)\n plt.show()\n```\n\n\n```python\ndef plotFourier(): \n fig, (ax1,ax2) = plt.subplots(nrows=1,ncols=2,figsize=(7, 3.5))\n \n colors = plt.rcParams['axes.prop_cycle'].by_key()['color']\n\n gamma = 2e-1\n fr = 5\n t = np.arange(0,10,.001)\n cos = np.cos(2*np.pi*fr*t)\n exp = np.exp(-t*2*np.pi*gamma)\n func = exp*cos\n ax1.plot(t,func,color=colors[0])\n ax1.set_xlim(0,4)\n\n fs = np.linspace(0,10,1000)\n curr = []\n for f in fs:\n curr.append( np.trapz( func*np.cos(2*np.pi*f*t),t) )\n curr = np.array(curr)\n ax2.plot(fs,curr,lw=2,color=colors[0]) \n ax2.yaxis.set_label_position('right'); ax2.yaxis.set_ticks_position('right')\n ax2.set_xlim(0,7)\n ax2.set_ylim(-.05,.55)\n\n ax1.hlines(y=-np.exp(-1),xmin=0,xmax=1/(2*np.pi*gamma))\n ax1.hlines(y=.7,xmin=4/5,xmax=1)\n ax1.hlines(y=.7,xmin=1,xmax=1.5,lw=1,linestyle=':')\n ax1.vlines(x=4/5,ymin=np.exp(-4/5*2*np.pi*gamma),ymax=.7,lw=1,linestyle=':')\n 
ax1.vlines(x=1,ymin=np.exp(-2*np.pi*gamma),ymax=.7,lw=1,linestyle=':')\n ax1.vlines(x=1/(2*np.pi*gamma)-.01,ymin=-np.exp(-1),ymax=-.6,lw=1,linestyle=':')\n ax1.annotate('1 / ( 2 $\\pi$ $\\Gamma_r$ )',xy=(.67,-.8),fontsize=16)\n ax2.hlines(y=curr.max()/2.,xmin=5,xmax=5+gamma)\n ax2.hlines(y=curr.max()/2.,xmin=5,xmax=5+gamma+.7,lw=1,linestyle=':')\n ax2.vlines(x=fr,ymin=curr.max()/2.,ymax=curr.max()+.05,lw=1,linestyle=':')\n ax2.annotate('$\\Gamma_r$',xy=(6,.185),fontsize=16)\n ax2.annotate(\"$f_r$\",xy=(fr-.2,curr.max()+.07),fontsize=16)\n ax1.annotate('1 / $f_r$',xy=(1.6,.65),fontsize=16)\n ax1.text(x=4.16,y=.1,s='FT',fontsize=16)\n ax1.text(x=4.13,y=-.05,s='\u27f6',fontsize=16)\n\n gamma = 4e-1\n fr = 2\n t = np.arange(0,10,.001)\n cos = np.cos(2*np.pi*fr*t)\n exp = np.exp(-t*2*np.pi*gamma)\n func = exp*cos\n ax1.plot(t,func,color=colors[3],lw=1)\n\n fs = np.linspace(0,10,1000)\n curr = []\n for f in fs:\n curr.append( np.trapz( func*np.cos(2*np.pi*f*t),t) )\n curr = np.array(curr)\n ax2.plot(fs,curr,lw=1,color=colors[3]) \n ax2.yaxis.set_label_position('right'); ax2.yaxis.set_ticks_position('right')\n\n ax2.hlines(y=curr.max()/2.,xmin=fr,xmax=fr+gamma)\n ax2.hlines(y=curr.max()/2.,xmin=fr,xmax=fr+gamma+.7,lw=1,linestyle=':')\n ax2.vlines(x=fr,ymin=curr.max()/2.,ymax=curr.max()+.05,lw=1,linestyle=':')\n ax2.annotate(\"$\\Gamma$\",xy=(3.2,.08),fontsize=16)\n ax2.annotate(\"$f$\",xy=(fr-.2,curr.max()+.07),fontsize=16)\n ax1.tick_params(axis='both',which='both',left=False,bottom=False,labelbottom=False,labelleft=False)\n ax2.tick_params(axis='both',which='both',bottom=False,right=False,labelbottom=False,labelright=False)\n ax1.set_xlabel('Time',labelpad=6)\n ax2.set_xlabel('Frequency',labelpad=6)\n ax1.set_ylabel('Current, $I(t)$',labelpad=6)\n ax2.set_ylabel('Current, $I(t)$',labelpad=8)\n\n fig.tight_layout(w_pad=3)\n plt.show()\n```\n\n\n```python\nplt.rcParams.update({'figure.dpi': 60})\n```\n\n\n```python\nplotQCM()\nplotFourier()\n```\n\n### [Jupyter Widgets](https://ipywidgets.readthedocs.io/en/latest/)\nGain control and visualize changes in the data!\n\n\n```python\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nfrom ipywidgets import interactive\n\ndef plot_cos_decay_FT(freq=1,gamma=.2):\n\n def cos_decay(time,freq,gamma):\n cos = np.cos(2*np.pi*freq*time)\n exp = np.exp(-time*2*np.pi*gamma)\n return exp*cos\n \n def FT(time,freq,gamma):\n fourier = []\n for f in np.linspace(0,freq*2,1000):\n cos = np.cos(2*np.pi*f*time)\n fourier.append( np.trapz( cos_decay(time,freq,gamma)*cos,time) )\n return np.array(fourier)\n\n time = np.arange(0,10,.001)\n\n fig = plt.figure(figsize=(3.5,4))\n ax = plt.axes()\n ax.plot(time,cos_decay(time,freq,gamma),color=plt.get_cmap('tab10')(3), lw=1)\n \n axins = inset_axes(ax, width='60%', height='30%', loc='upper right', borderpad=1.1)\n axins.plot(np.linspace(0,freq*2,1000),FT(time,freq,gamma),color=plt.get_cmap('tab10')(0), lw=1)\n \n axins.set_xlabel(r'Frequency, $f$',color=plt.get_cmap('tab10')(0),fontsize=10,labelpad=1)\n axins.set_ylabel(r'$I(f)$',fontsize=10,labelpad=1)\n \n ax.set_ylabel('$I(t)$',fontsize=12)\n ax.set_xlabel(r'Time, $t$',color=plt.get_cmap('tab10')(3),fontsize=12)\n ax.set_xlim(0,10)\n```\n\n\n```python\ninteractive_plot = interactive(plot_cos_decay_FT, freq=(1, 5, .1), gamma=(.08,.14,.01) )\ninteractive_plot.children[0].description=r'$f$' # slide bar\ninteractive_plot.children[1].description=r'$\\Gamma$' # slide bar\ninteractive_plot\n```\n\n\n interactive(children=(FloatSlider(value=1.0, description='$f$', 
max=5.0, min=1.0), FloatSlider(value=0.14, des\u2026\n\n\n\n\n\n```python\nimport pandas as pd\nfrom IPython.display import display\n```\n\nLibrary to handle tabular data: a convenient alternative to Excel!\n\n### Size-Exclusion Chromatography\nData from the purification of $\\alpha$-synuclein monomers kindly provided by **Veronica Lattanzi**\n \n\n\n\n```bash\n%%bash\nhead -n 22 aux/191923_d_alphasyn.txt\n```\n\n Run Name,20191023 dasyn NIST\r\n Run Date,12:10:53 PM 10-23-19\r\n Method Name,Increase_pumpA\r\n Export Format Version, 1.00\r\n Method ID, 2059\r\n Points/Second, 5.00\r\n Number of Records, 11780\r\n Offset from Run Start Time,00:00:00\r\n Run End Time,00:39:17\r\n Time,Second\r\n UV,AU,\r\n Conductivity,mS/cm\r\n Gradient Pump,\r\n Trace 3,\r\n Trace 4,\r\n Trace 5,\r\n Trace 6,\r\n GP Pressure,\r\n Volume,ml\r\n Fraction,\r\n Time,UV,Conductivity,Volume\r\n 0.0,-0.000730, 1.040, 0.0\r\n\n\n\n```python\ndf = pd.read_csv('aux/191923_d_alphasyn.txt',header=20,sep=',',index_col=0)\ndisplay(df.head(2))\n```\n\n\n
| Time | UV | Conductivity | Volume |
|------|-----------|------|-----|
| 0.0 | -0.000730 | 1.04 | 0.0 |
| 0.2 | -0.000724 | 1.04 | 0.0 |
\n\n\n\n```python\nfig = plt.figure(figsize=(6, 2.5))\nplt.plot(df.index/60, df['UV'])\nplt.ylabel('Absorbance at 276 nm'); plt.xlabel('Time / min')\n```\n\n\n```python\nfig = plt.figure(); ax1 = plt.axes()\nax1.plot(df.index/60, df['UV'])\nax2 = ax1.twinx() # creates a new subplot identical to x1, with invisible x-axis and y-axis on the r.h.s\nax2.plot(df.index/60, df['Conductivity'],color=plt.cm.tab10(3))\nax1.tick_params(axis='y',colors=plt.cm.tab10(0))\nax1.set_xlabel('Time / min')\nax1.set_ylabel('Absorbance at 276 nm',color=plt.cm.tab10(0))\nax2.set_ylabel('Conductivity / mS cm$^{-1}$',color=plt.cm.tab10(3))\nax2.tick_params(axis='y',colors=plt.cm.tab10(3))\nplt.savefig('aux/chromatogram.png')\n```\n\n\n```python\nfig = plt.figure(figsize=(6, 2.5))\n\nplt.plot(df.index/60, df['UV'])\nplt.xlim(17,27)\nt1 = 19.8; t2 = 23.5\nplt.vlines([t1,t2],ymin=0,ymax=.25,linestyle=':')\n\nplt.ylabel('Absorbance at 276 nm'); plt.xlabel('Time / min'); plt.show()\n\nt1 = 19.8*60; t2 = 23.5*60 # convertion to seconds\nabs_avg = np.mean(df.loc[t1:t2]['UV']); epsilon = 5960 ;path_length = 0.5\nprint('Monomer concentration:','{:.3f} \u03bcM'.format(abs_avg/epsilon/path_length*1e6))\n```\n\n### Data Scraping: Importing an HTML Table from [Sigma Aldrich](https://www.sigmaaldrich.com/life-science/metabolomics/learning-center/amino-acid-reference-chart.html)\n\n\n```python\nurl = \"https://www.sigmaaldrich.com/life-science/metabolomics/learning-center/amino-acid-reference-chart.html\"\ndf = pd.read_html(url, header=0, index_col=0, na_values='\u2013')[0]\ndf = df['Alanine':'Valine'] # select rows we are interested in\ndf = df.apply(pd.to_numeric,errors='ignore') # convert numbers from strings to numeric values\ndisplay( df.iloc[::3] ) # show every third amino acid\n```\n\n\n
| Name | Abbr. | Abbr..1 | Molecular Weight | Molecular Formula | Residue Formula | Residue Weight (-H2O) | pKa1 | pKb2 | pKx3 | pl4 |
|------|-------|---------|------------------|-------------------|-----------------|-----------------------|------|------|------|-----|
| Alanine | Ala | A | 89.10 | C3H7NO2 | C3H5NO | 71.08 | 2.34 | 9.69 | NaN | 6.00 |
| Aspartic acid | Asp | D | 133.11 | C4H7NO4 | C4H5NO3 | 115.09 | 1.88 | 9.60 | 3.65 | 2.77 |
| Glutamine | Gln | Q | 146.15 | C5H10N2O3 | C5H8N2O2 | 128.13 | 2.17 | 9.13 | NaN | 5.65 |
| Hydroxyproline | Hyp | O | 131.13 | C5H9NO3 | C5H7NO2 | 113.11 | 1.82 | 9.65 | NaN | NaN |
| Lysine | Lys | K | 146.19 | C6H14N2O2 | C6H12N2O | 128.18 | 2.18 | 8.95 | 10.53 | 9.74 |
| Proline | Pro | P | 115.13 | C5H9NO2 | C5H7NO | 97.12 | 1.99 | 10.60 | NaN | 6.30 |
| Threonine | Thr | T | 119.12 | C4H9NO3 | C4H7NO2 | 101.11 | 2.09 | 9.10 | NaN | 5.60 |
| Valine | Val | V | 117.15 | C5H11NO2 | C5H9NO | 99.13 | 2.32 | 9.62 | NaN | 5.96 |
\n\n\n\n```python\ndisplay( df['Arginine':'Glutamic acid'][['pKa1','pKb2','pKx3']] )\n```\n\n\n
| Name | pKa1 | pKb2 | pKx3 |
|------|------|------|------|
| Arginine | 2.17 | 9.04 | 12.48 |
| Asparagine | 2.02 | 8.80 | NaN |
| Aspartic acid | 1.88 | 9.60 | 3.65 |
| Cysteine | 1.96 | 10.28 | 8.18 |
| Glutamic acid | 2.19 | 9.67 | 4.25 |
\n\n\n\n```python\ndf['Arginine':'Glutamine']['Molecular Weight'].values\n```\n\n\n\n\n array([174.2 , 132.12, 133.11, 121.16, 147.13, 146.15])\n\n\n\n\n```python\ndf['Arginine':'Glutamine']['Molecular Weight'].values.mean()\n```\n\n\n\n\n 142.31166666666667\n\n\n\n\n```python\ndisplay(df[df['pl4']>7])\n```\n\n\n
| Name | Abbr. | Abbr..1 | Molecular Weight | Molecular Formula | Residue Formula | Residue Weight (-H2O) | pKa1 | pKb2 | pKx3 | pl4 |
|------|-------|---------|------------------|-------------------|-----------------|-----------------------|------|------|------|-----|
| Arginine | Arg | R | 174.20 | C6H14N4O2 | C6H12N4O | 156.19 | 2.17 | 9.04 | 12.48 | 10.76 |
| Histidine | His | H | 155.16 | C6H9N3O2 | C6H7N3O | 137.14 | 1.82 | 9.17 | 6.00 | 7.59 |
| Lysine | Lys | K | 146.19 | C6H14N2O2 | C6H12N2O | 128.18 | 2.18 | 8.95 | 10.53 | 9.74 |
\n\n\n\n```python\ndf[df['pl4']>7]['Molecular Weight'].values.mean()\n```\n\n\n\n\n 158.51666666666668\n\n\n\n\n```python\nnp.mean(df['Molecular Weight']-df['Residue Weight (-H2O)'])\n```\n\n\n\n\n 18.014545454545452\n\n\n\n[](https://jakevdp.github.io/PythonDataScienceHandbook/)\nhttps://jakevdp.github.io/PythonDataScienceHandbook/\n\n### [Jupyter Course in Lund](https://github.com/mlund/jupyter-course)\n\n#### Reproducible and Interactive Data Analysis and Modelling using Jupyter Notebooks (4 ECTS)\n\n- course developed by me, Caterina Doglioni, Mikael Lund and Benjamin Ragan-Kelley\n- [COMPUTE](http://cbbp.thep.lu.se/compute/Previous_courses.php) research school (Natural Science)\n- video lectures ([Intro & Widgets](https://api.kaltura.nordu.net/p/310/sp/31000/embedIframeJs/uiconf_id/23450585/partner_id/310/widget_id/0_vujap4by?iframeembed=true&playerId=kaltura_player_5bfdb69292c20&flashvars[playlistAPI.kpl0Id]=0_nc717bpa&flashvars[playlistAPI.autoContinue]=true&flashvars[playlistAPI.autoInsert]=true&flashvars[ks]=&flashvars[localizationCode]=en&flashvars[imageDefaultDuration]=30&flashvars[leadWithHTML5]=true&flashvars[forceMobileHTML5]=true&flashvars[nextPrevBtn.plugin]=true&flashvars[sideBarContainer.plugin]=true&flashvars[sideBarContainer.position]=left&flashvars[sideBarContainer.clickToClose]=true&flashvars[chapters.plugin]=true&flashvars[chapters.layout]=vertical&flashvars[chapters.thumbnailRotator]=false&flashvars[streamSelector.plugin]=true&flashvars[EmbedPlayer.SpinnerTarget]=videoHolder&flashvars[dualScreen.plugin]=true), [Libraries](https://www.youtube.com/playlist?list=PLto3nNV9nKZlXSWOAqmmn4J7csD4I6a2d), [ATLAS Dijet](https://api.kaltura.nordu.net/p/310/sp/31000/embedIframeJs/uiconf_id/23450585/partner_id/310/widget_id/0_hr5l2zj6?iframeembed=true&playerId=kaltura_player_5bfdb5d709908&flashvars[playlistAPI.kpl0Id]=0_pspvclw2&flashvars[playlistAPI.autoContinue]=true&flashvars[playlistAPI.autoInsert]=true&flashvars[ks]=&flashvars[localizationCode]=en&flashvars[imageDefaultDuration]=30&flashvars[leadWithHTML5]=true&flashvars[forceMobileHTML5]=true&flashvars[nextPrevBtn.plugin]=true&flashvars[sideBarContainer.plugin]=true&flashvars[sideBarContainer.position]=left&flashvars[sideBarContainer.clickToClose]=true&flashvars[chapters.plugin]=true&flashvars[chapters.layout]=vertical&flashvars[chapters.thumbnailRotator]=false&flashvars[streamSelector.plugin]=true&flashvars[EmbedPlayer.SpinnerTarget]=videoHolder&flashvars[dualScreen.plugin]=true)), hands-on sessions, and peer-reviewed project work \n- next event: December\u2013January\n- contact: \n - Ross Church: ross@astro.lu.se\n - Caterina Doglioni: caterina.doglioni@hep.lu.se\n - Mikael Lund: mikael.lund@teokem.lu.se\n\n\n\n[](http://www.rdkit.org/)\n\n\n```python\nfrom rdkit import Chem\nfrom rdkit.Chem.Draw import IPythonConsole\nm1 = Chem.MolFromSmiles('n1c2C(=O)NC(N)=Nc2ncc1CNc3ccc(cc3)C(=O)N[C@H](C(O)=O)CCC(O)=O')\nm1\n```\n\n\n```python\nfrom rdkit.Chem import Draw\nDraw.MolToFile(m1,'aux/folate.svg')\n```\n\n\n```python\nm1.GetNumAtoms()\n```\n\n\n\n\n 32\n\n\n\n\n```python\nChem.MolToSmiles(m1)\n```\n\n\n\n\n 'Nc1nc2ncc(CNc3ccc(C(=O)N[C@@H](CCC(=O)O)C(=O)O)cc3)nc2c(=O)[nH]1'\n\n\n\n\n```python\nm2 = Chem.AddHs(m1) # add hydrogens\nm2\n```\n\n\n```python\nfrom rdkit.Chem import AllChem\nChem.AllChem.EmbedMolecule(m2) # make it 3D using ETKDG method\nm2\n```\n\n\n```python\nimport nglview as nv\nview = nv.show_rdkit(m2)\nview\n```\n\n\n 
NGLWidget()\n\n\n\n```python\nprint(Chem.MolToMolBlock(m2),file=open('aux/folate.mol','w+'))\n```\n\n\n```python\n\nview = nv.show_file('aux/folate.mol')\nview\n```\n\n\n NGLWidget()\n\n\n\n\n\n```python\nimport mdtraj as md\ns = md.load('aux/4mqj.pdb')\nprint('Number of atoms:', s.n_atoms)\nprint('Number of residues:', s.n_residues)\nchains = [chain for chain in s.top.chains]\nn_chains = len(chains)\nprint('Number of chains:', n_chains)\ns14 = s.atom_slice(s.top.select('all and chainid < 4'))\nprint('Radius of gyration:',md.compute_rg(s14)[0],'nm')\n```\n\n Number of atoms: 10369\n Number of residues: 2386\n Number of chains: 24\n Radius of gyration: 2.3107479678330973 nm\n\n\n\n```python\nimport nglview as nv\nview = nv.show_pdbid('4mqj')\nview\n```\n\n\n NGLWidget()\n\n\n\n```python\nimport matplotlib as mpl\nview = nv.show_mdtraj(s)\nview.clear_representations(component=0)\nfor i in range(4):\n chain = [a.index for a in s.top.chain(i).atoms]\n view.add_representation('spacefill', selection=chain, color=mpl.colors.to_hex(plt.cm.tab20(i)))\nview\n```\n\n\n NGLWidget()\n\n\n\n```python\ndef viewColorScheme(molecule,dataframe):\n dataframe = dataframe.copy()\n dataframe['Abbr.'] = dataframe['Abbr.'].str.upper()\n dataframe.set_index('Abbr.', drop=True, inplace=True)\n dd = dataframe.dropna()\n dataframe = dataframe.fillna(0)\n preg = (dd['pKx3']-dd['pKx3'].min()) / (dd['pKx3'].max() - dd['pKx3'].min())\n colorscheme = pd.Series([mpl.colors.to_hex(c) for c in plt.cm.rainbow_r(preg)],index=preg.index)\n view = nv.show_mdtraj(molecule)\n view.clear_representations(component=0)\n for res in [res for chain in chains[:4] for res in chain.residues ]:\n atoms = [a.index for a in res.atoms]\n if dataframe.loc[res.name]['pKx3'] == 0:\n view.add_spacefill(selection=atoms, color='#ffffff')\n else:\n view.add_spacefill(selection=atoms, color=colorscheme[res.name])\n view.camera = 'orthographic'\n return view\n```\n\n\n```python\nviewColorScheme(s,df)\n```\n\n\n NGLWidget()\n\n\n### How to get started\n\nThe installation is simple and quick!\n\n- Install [miniconda](https://docs.conda.io/en/latest/miniconda.html)\n- miniconda is the light version of anaconda, a package manager that runs on Windows, Mac and Linux\n\n\n\n#### On Mac or Linux\n\n- Download the installation script for your operating system\n - using the terminal: `curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh`\n- Install by running the script: \n - type and enter `bash Miniconda3-latest-MacOSX-x86_64.sh`\n- Create a `conda` environment with Python 3.7 (the latest version):\n - type and enter `conda create -n myenv python`, myenv is the name of the environment (any name works)\n- Activate the environment:\n - `source activate myenv`\n- Install notebook, numpy, pandas, matplotlib, scipy:\n - `conda install notebook numpy pandas matplotlib scipy`\n- Install RDkit, mdtraj, nglview, ipywidgets\n - we need to specify the channel: `conda install -c conda-forge rdkit mdtraj nglview ipywidgets`\n- launch Jupyter notebook: `jupyter-notebook`\n\n#### On Windows\n\n- Download the installation executable for your operating system\n- Install by running the `.exe` file\n- Create a new `conda` environment with Python 3.7 (the latest version):\n - open the anaconda prompt from the start menu and navigate to the folder where the course material has been unzipped (e.g. 
using cd to change directory and dir to list files in a folder)\n - type: `conda create -n myenv python`, myenv is the name of the environment (any name works)\n- Activate the environment:\n - `activate myenv`\n- Install notebook, numpy, pandas, matplotlib, scipy:\n - `conda install notebook numpy pandas matplotlib scipy`\n- Install RDkit, mdtraj, nglview\n - we need to specify the channel: `conda install -c conda-forge rdkit mdtraj nglview ipywidgets`\n- launch Jupyter notebook: `jupyter-notebook`\n\n\n```python\nfrom IPython.display import IFrame\nIFrame(src='https://www.youtube.com/embed/HW29067qVWk', width=640, height=400)\n```\n\n\n\n\n\n\n\n\n\n\n### How to share my notebooks to help other scientists to reproduce my analyses\n\n- [Ten simple rules for writing and sharing computational analyses in Jupyter Notebooks](https://doi.org/10.1371/journal.pcbi.1007007)\n

\n- Saved as an HTML file and provided as Supporting Information\n

\n- It is important to provide the list of packages needed to run the notebook:\n - create a conda environment for every project\n - export the conda environment to a yml file: `conda env export > environment.yml`\n - other scientists can quickly reproduce your environment: `conda env create -f environment.yml`\n

\n- `notebook.ipynb` + data + `environment.yml` in a zip file as Supporting Information\n

\n- Example: [refnx: neutron and X-ray reflectometry analysis in Python](http://scripts.iucr.org/cgi-bin/paper?rg5158)\n\n### How to share my notebooks to help other scientists to reproduce my analyses\n\n- [Create a GitHub repository](https://help.github.com/en/github/getting-started-with-github/create-a-repo)\n

\n- Upload your notebook and `environment.yml`\n

\n- [myBinder](https://mybinder.readthedocs.io/en/latest/introduction.html) allows you to run the notebook in the repository on a server: no need to download and install \n

\n- Example: [refnx: neutron and X-ray reflectometry analysis in Python](https://github.com/refnx/refnx)\n\n[](https://reproducible-science-curriculum.github.io/sharing-RR-Jupyter/01-sharing-github/)\n
\n\n\n\nTo convert a notebook into slides in pdf format:\n1. `jupyter nbconvert --to slides jupyter.ipynb --post serve`\n2. `conda install nodejs`\n3. `npm install -g decktape`\n4. copy `reveal.js/` and `custom.css` in the notebook directory (Developer Tools in Chrome)\n5. modify `reveal.js/js/reveal.js` so that: width: \"90%\", height: \"90%\", margin: 0, minScale: 1 and maxScale: 1\n6. convert html to pdf: `decktape jupyter.slides.html jupyter.pdf`\n", "meta": {"hexsha": "024c502524dd4ef80791c814b513513079df18ac", "size": 399589, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "qdetailss/jupyter.ipynb", "max_stars_repo_name": "urania277/jupyter-course", "max_stars_repo_head_hexsha": "20060173e7355fc4726148f00b61404d2613b74b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2017-11-27T23:41:53.000Z", "max_stars_repo_stars_event_max_datetime": "2021-03-24T21:24:04.000Z", "max_issues_repo_path": "qdetailss/jupyter.ipynb", "max_issues_repo_name": "urania277/jupyter-course", "max_issues_repo_head_hexsha": "20060173e7355fc4726148f00b61404d2613b74b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2017-12-08T20:12:35.000Z", "max_issues_repo_issues_event_max_datetime": "2021-03-26T09:28:07.000Z", "max_forks_repo_path": "lectures/qdetailss/jupyter.ipynb", "max_forks_repo_name": "mlund/jupyter-course", "max_forks_repo_head_hexsha": "d2e12d153febc6848a1ed80a2f3f29973a3bea73", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2017-12-11T13:18:22.000Z", "max_forks_repo_forks_event_max_datetime": "2021-04-13T14:18:33.000Z", "avg_line_length": 160.3487158909, "max_line_length": 59136, "alphanum_fraction": 0.8836229226, "converted": true, "num_tokens": 11410, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4378234991142019, "lm_q2_score": 0.2146914090629578, "lm_q1q2_score": 0.09399694394570265}} {"text": "\n
\n GEOS 639 Geodetic Imaging \n\n Lab 5: Volcano Source Modeling Using InSAR -- [20 Points] \n\n
\n Franz J Meyer; University of Alaska Fairbanks
\n Due Date: April 14, 2022 \n
\n\n This lab will introduce you to the intersection between Geodetic Displacement data created using InSAR and Geophysical Modeling. Radar Remote Sensing can provide you with geodetic observations of surface displacement. Inverse Modeling helps you understand the physical causes behind an observed displacement. \n \nTo illuminate the handoff from geodesy to geophysics, this lab will show how to use InSAR observations to determine the most likely parameters of a volcanic magma source underneath Okmok volcano, Alaska. You will use a Mogi source model to describe the physics behind observed surface displacement at Okmok. We will again use our **Jupyter Notebook** framework implemented within the Amazon Web Services (AWS) cloud to work on this exercise.

\n\nThis Lab is part of the UAF course GEOS639 Geodetic Imaging. It will introduce the following data analysis concepts:\n\n- A Mogi Source Model describing volcanic source geometry and physics\n- How to use the \"grid search\" method to perform a pseudo-inversion of a Mogi source model \n- How to solve for the best fitting source parameters using modeling with InSAR data\n
\n
\n\n\n
\n THIS NOTEBOOK INCLUDES THREE HOMEWORK ASSIGNMENTS. \n
\n Complete all assignments to achieve full score.
\n\n To submit your homework, please download your completed Jupyter Notebook from the server both as a PDF (*.pdf) and as a Notebook file (*.ipynb) and submit them as a ZIP bundle via the GEOS 639 Canvas page. To download, please select the following options in the main menu of the notebook interface:\n\n
    \n
1. Save your notebook with all of its content by selecting File / Save and Checkpoint
2. To export in Notebook format, click the radio button next to the notebook file in the main Jupyter Hub browser tab. Once clicked, a download field will appear near the top of the page.
3. To export in PDF format, right-click on your browser window and print the browser content to PDF
\n\nContact me at fjmeyer@alaska.edu should you run into any problems.\n
\n
\n
\n\n\n```python\nimport url_widget as url_w\nnotebookUrl = url_w.URLWidget()\ndisplay(notebookUrl)\n```\n\n\n```python\nfrom IPython.display import Markdown\nfrom IPython.display import display\n\nnotebookUrl = notebookUrl.value\nuser = !echo $JUPYTERHUB_USER\nenv = !echo $CONDA_PREFIX\nif env[0] == '':\n env[0] = 'Python 3 (base)'\nif env[0] != '/home/jovyan/.local/envs/unavco':\n display(Markdown(f'WARNING:'))\n display(Markdown(f'This notebook should be run using the \"unavco\" conda environment.'))\n display(Markdown(f'It is currently using the \"{env[0].split(\"/\")[-1]}\" environment.'))\n display(Markdown(f'Select \"unavco\" from the \"Change Kernel\" submenu of the \"Kernel\" menu.'))\n display(Markdown(f'If the \"unavco\" environment is not present, use Create_OSL_Conda_Environments.ipynb to create it.'))\n display(Markdown(f'Note that you must restart your server after creating a new environment before it is usable by notebooks.'))\n```\n\n# 0. Importing Relevant Python Packages\n\n First step in any notebook is to import the required Python libraries into the Jupyter environment. In this notebooks we use the following scientific libraries:\n
    \n
1. NumPy is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays.
2. Matplotlib is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to non-Cartesian coordinates graphs.
\n
\nThe first step is to import all required python modules:\n\n\n```python\nimport os # for chdir, getcwd, path.basename, path.exists\nimport copy\nimport subprocess # for check_call\n\nimport matplotlib.pylab as plt # for add_subplot, cm.jet, colorbar, figure, grid, imshow, rcParams.update, savefig,\n # set_bad, set_clim, set_title, set_xlabel, set_ylabel\nimport numpy as np # for arange, arctan, concatenate, cos, fromfile, isnan, ma.masked_value, min, pi, power, reshape,\n # sqrt, square, sin, sum, tile, transpose, where, zeros \n```\n\nset up matplotlib plotting inside the notebook:\n\n\n```python\n%matplotlib inline\n```\n\n
\n\n# 1. Introduction to the Study Site: Okmok Volcano, Alaska\n\n Okmok is one of the more active volcanoes in Alaska\u2019s Aleutian Chain. Its last (confirmed) eruption was in the summer of 2008. Okmok is interesting from an InSAR perspective as it inflates and deflates heavily as magma moves around in its magmatic source located roughly 2.5 km underneath the surface. To learn more about Okmok volcano and its eruptive history, please visit the very informative site of the Alaska Volcano Observatory.\n\nThis lab uses a pair of C-band ERS-2 SAR images acquired on Aug 18, 2000 and Jul 19, 2002 to analyze the properties of a volcanic source that was responsible for an inflation of Okmok volcano of more than 3 cm near its summit. The figure to the right shows the Okmok surface displacement as measured by GPS data from field campaigns conducted in 2000 and 2002. The plots show that the displacement measured at the site is consistent with that created by an inflating point (Mogi) source.
\n\nThe primary goal of the problem set is to estimate values for four unknown model parameters describing a source process beneath a volcano. The lab uses real InSAR data from Okmok volcano, so you should get some sense for how remote sensing can be used to infer physical processes at volcanoes. We will assume that the source can be modeled as an inflating point source (a so-called Mogi source) and will use a grid-search method to find the source model parameters (3D source location and volume of magma influx) that best describe our InSAR-observed surface displacement.\n
\n
\n
\n\n# 2. Downloading and Visualizing the InSAR Data\n\n## 2.1 Download Data from AWS S3 Storage Bucket and Prep for Further Processing\n\nWe are using a pre-calculated displacement map created from C-band ERS-2 SAR images acquired on Aug 18, 2000 and Jul 19, 2002. We will pull the displacement map from an Amazon Web Services (AWS) S3 storage bucket: \n\nCreate and move to a directory in which to store our Lab 5 files:\"\n\n\n```python\npath = f\"{os.getcwd()}/lab_6_data\"\nif not os.path.exists(path):\n os.makedirs(path)\nos.chdir(path)\nprint(f\"Current working directory: {os.getcwd()}\")\n```\n\nDownload the displacement map from the AWS-S3 bucket:\n\n\n```python\ndisplacement_map_path = 's3://asf-jupyter-data-west/E451_20000818_20020719.unw'\ndisplacement_map = os.path.basename(displacement_map_path)\n!aws --region=us-west-2 --no-sign-request s3 cp $displacement_map_path $displacement_map\n```\n\nDefine some variables:\n\n\n```python\nsample = 1100\nline = 980\nposting = 40.0\nhalf_wave = 28.3\n```\n\nRead the dataset into the notebook, storing our observed displacement map in the variable \"observed_displacement_map\": \n\n\n```python\nif os.path.exists(displacement_map):\n with open (displacement_map, 'rb') as f: \n coh = np.fromfile(f, dtype='>f', count=-1)\n observed_displacement_map = np.reshape(coh, (line, sample))\n```\n\nNow we scale the measured and unwrapped InSAR phase into surface displacement in *cm* units and replace all ```nans``` with 0\n\n\n```python\nobserved_displacement_map = observed_displacement_map*half_wave/2.0/np.pi\nwhere_are_NaNs = np.isnan(observed_displacement_map)\nobserved_displacement_map[where_are_NaNs] = 0\n```\n\n Create a mask that removes invalid samples (low coherence) from the displacement map: \n\n\n```python\nobserved_displacement_map_m = np.ma.masked_where(observed_displacement_map==0, observed_displacement_map)\n```\n\n
\n\n## 2.2 Visualize The Surface Displacement Map\n\n We will visualize the displacement map both in units of [cm] and as a rewrapped interferogram.\n

\nWrite a function that calculates the bounding box.
\n\n\n```python\ndef extents(vector_component):\n delta = vector_component[1] - vector_component[0]\n return [vector_component[0] - delta/2, vector_component[-1] + delta/2]\n```\n\nCreate a directory in which to store the plots we are about to make, and move into it: \n\n\n```python\nos.chdir(path)\nproduct_path = 'plots'\nif not os.path.exists(product_path):\n os.makedirs(product_path)\nif os.path.exists(product_path) and os.getcwd() != f\"{path}/{product_path}\":\n os.chdir(product_path)\nprint(f\"Current working directory: {os.getcwd()}\")\n```\n\nWrite a plotting function:\n\n\n```python\ndef plot_model(infile, line, sample, posting, output_filename=None, dpi=72):\n # Calculate the bounding box\n extent_xvec = extents((np.arange(1, sample*posting, posting)) / 1000)\n extent_yvec = extents((np.arange(1, line*posting, posting)) / 1000)\n extent_xy = extent_xvec + extent_yvec\n \n plt.rcParams.update({'font.size': 14})\n inwrapped = (infile/10 + np.pi) % (2*np.pi) - np.pi\n cmap = copy.copy(plt.cm.get_cmap(\"jet\"))\n cmap.set_bad('white', 1.)\n \n # Plot displacement\n fig = plt.figure(figsize=(16, 8))\n ax1 = fig.add_subplot(1, 2, 1)\n im = ax1.imshow(infile, interpolation='nearest', cmap=cmap, extent=extent_xy, origin='upper')\n cbar = ax1.figure.colorbar(im, ax=ax1, orientation='horizontal')\n ax1.set_title(\"Displacement in look direction [mm]\")\n ax1.set_xlabel(\"Easting [km]\")\n ax1.set_ylabel(\"Northing [km]\")\n plt.grid()\n \n # Plot interferogram\n im.set_clim(-30, 30)\n ax2 = fig.add_subplot(1, 2, 2)\n im = ax2.imshow(inwrapped, interpolation='nearest', cmap=cmap, extent=extent_xy, origin='upper')\n cbar = ax2.figure.colorbar(im, ax=ax2, orientation='horizontal')\n ax2.set_title(\"Interferogram phase [rad]\")\n ax2.set_xlabel(\"Easting [km]\")\n ax2.set_ylabel(\"Northing [km]\")\n plt.grid()\n \n if output_filename:\n plt.savefig(output_filename, dpi=dpi)\n```\n\nCall plot_model() to plot our observed displacement map: \n\n\n```python\nplot_model(observed_displacement_map_m, line, sample, posting, output_filename='Okmok-inflation-observation.png', dpi=200)\n```\n\n
\n\n# 3. The Mogi Source Forward Model for InSAR Observations\n\n## 3.1 The Mogi Equation\n\nThe Mogi model provides the 3D ground displacement, $u(x,y,z)$, due to an inflating source at location $(x_s,y_s,z_s)$ with volume change $V$:\n\n\\begin{equation}\nu(x,y,z)=\\frac{1}{\\pi}(1-\\nu)\\cdot V\\Big(\\frac{x-x_s}{r(x,y,z)^3},\\frac{y-y_s}{r(x,y,z)^3},\\frac{z-z_s}{r(x,y,z)^3}\\Big)\n\\end{equation}\n
\n\\begin{equation}\nr(x,y,z)=\\sqrt{(x-x_s)^2+(y-y_s)^2+(z-z_s)^2}\n\\end{equation}\n\nwhere $r$ is the distance from the Mogi source to $(x,y,z)$, and $\\nu$ is the Poisson's ratio of the halfspace. The Poisson ratio describes how rocks react when put under stress (e.g., pressure). It is affected by temperature, the quantity of liquid to solid, and the composition of the soil material. In our problem, we will assume that $\\nu$ is fixed. \n
\n\n## 3.2 Projecting Mogi Displacement to InSAR Line-of-Sight\n\nIn our example, the $x$-axis points east, $y$ points north, and $z$ points up. However, in the code the input values for $z$ are assumed to be depth, such that the Mogi source is at depth $z_s > 0$. The observed interferogram is already corrected for the effect of topography, so the observations can be considered to be at $z = 0$.\n \n\nThe satellite \u201csees\u201d a projection of the 3D ground displacement, $u$, onto the look vector, $\\hat{L}$, which points from the satellite to the target. Therefore, we are actually interested in the (signed magnitude of the) projection of $u$ onto $\\hat{L}$ (right). This is given by\n\n\\begin{array}{lcl} proj_{\\hat{L}}u & = & (u^T\\hat{L})\\hat{L} \\\\ u^T\\hat{L} & = & u \\cdot \\hat{L} = |u||\\hat{L}|cos(\\alpha) = |u|cos(\\alpha) \\\\ & = & u_x\\hat{L}_x+ u_y\\hat{L}_y + u_z\\hat{L}_z \\end{array}\n\nwhere the look vector is given by $\\hat{L}=(sin(l) \\cdot cos(t), -sin(l) \\cdot sin(t), -cos(l))$, where $l$ is the look angle measured from the nadir direction and $t$ is the satellite track angle measured clockwise from geographic north. All vectors are represented in an east-north-up basis.\n\nOur forward model takes a Mogi source, $(x_s,y_s,z_s,V)$, and computes the look displacement at any given $(x, y, z)$ point. If we represent the ith point on our surface grid by $x_i = (x_i,y_i,z_i)$ the the displacement vector is $u_i = u(x_i, y_i, z_i)$, and the look displacement is\n\n\\begin{equation}\nd_i = u_i \\cdot \\hat{L}\n\\end{equation}\n\n\n\n## 3.3 Defining the Mogi Forward Model\n\nWe can now represent the Mogi forward problem as \n\n\\begin{equation}\ng(m) = d\n\\end{equation}\n\nwhere $g(\u00b7)$ describes the forward model in the very first equation in this notebook, $m$ is the (unknown) Mogi model, and $d$ is the predicted interferogram. The following code cells calculate the Mogi forward model according to the equations given above:\n\n\nWrite a function to calculate a forward model for a Mogi source. 
\n\n\n```python\ndef calc_forward_model_mogi(n1, e1, depth, delta_volume, northing, easting, plook):\n \n # This geophysical coefficient is needed to describe how pressure relates to volume change\n displacement_coefficient = (1e6*delta_volume*3)/(np.pi*4)\n \n # Calculating the horizontal distance from every point in the displacement map to the x/y source location\n d_mat = np.sqrt(np.square(northing-n1) + np.square(easting-e1))\n \n # denominator of displacement field for mogi source\n tmp_hyp = np.power(np.square(d_mat) + np.square(depth),1.5)\n \n # horizontal displacement\n horizontal_displacement = displacement_coefficient * d_mat / tmp_hyp\n \n # vertical displacement\n vertical_displacement = displacement_coefficient * depth / tmp_hyp\n \n # azimuthal angle\n azimuth = np.arctan2((easting-e1), (northing-n1))\n \n # compute north and east displacement from horizontal displacement and azimuth angle\n east_displacement = np.sin(azimuth) * horizontal_displacement\n north_displacement = np.cos(azimuth) * horizontal_displacement\n \n # project displacement field onto look vector\n temp = np.concatenate((east_displacement, north_displacement, vertical_displacement), axis=1)\n delta_range = temp.dot(np.transpose([plook]))\n delta_range = -1.0 * delta_range\n return delta_range\n```\n\nWrite a function to create simulated displacement data based on Mogi Source Model parameters: \n\n\n```python\ndef displacement_data_from_mogi(x, y, z, volume, iplot, imask):\n # Organizing model parameters\n bvc = [x, y, z, volume, 0, 0, 0, 0]\n bvc = np.asarray(bvc, dtype=object)\n bvc = np.transpose(bvc)\n \n # Setting acquisition parameters\n track = -13.3*np.pi / 180.0\n look = 23.0*np.pi / 180.0\n plook = [-np.sin(look)*np.cos(track), np.sin(look)*np.sin(track), np.cos(look)]\n \n # Defining easting and northing vectors\n northing = np.arange(0, (line)*posting, posting) / 1000\n easting = np.arange(0, (sample)*posting, posting) / 1000\n northing_mat = np.tile(northing, (sample, 1))\n easting_mat = np.transpose(np.tile(easting, (line, 1)))\n northing_vec = np.reshape(northing_mat, (line*sample, 1))\n easting_vec = np.reshape(easting_mat, (line*sample, 1))\n \n # Handing coordinates and model parameters over to the rngchg_mogi function\n calc_range = calc_forward_model_mogi(bvc[1], bvc[0], bvc[2], bvc[3], northing_vec, easting_vec, plook)\n \n # Reshaping surface displacement data derived via calc_forward_model_mogi()\n surface_displacement = np.reshape(calc_range, (sample,line))\n \n # return rotated surface displacement\n return np.transpose(np.fliplr(surface_displacement))\n```\n\n
\n\n## 3.4 Plotting The Mogi Forward Model\n\nThe cell below plots several Mogi forward models by varying some of the four main Mogi modeling parameters $(x_s,y_s,z_s,V)$.\n \nThe examples below fix the depth parameter to $z_s = 2.58 km$ and the volume change parameter to $volume = 0.0034 km^3$. We then vary the easting and northing parameters $x_s$ and $y_s$ to demonstrate how the model predictions vary when model parameters are changed.\n

\nRun the first example: \n\n\n```python\nplt.rcParams.update({'font.size': 14})\nextent_x = extents((np.arange(1, sample*posting, posting))/1000)\nextent_y = extents((np.arange(1, line*posting, posting))/1000)\nextent_xy = extent_x + extent_y\nxs = np.arange(18, 24.2, 0.4)\nys = np.arange(20, 24.2, 0.4)\n\nzs = 2.58;\nvolume = 0.0034;\nxa = [0, 7, 15]\nya = [0 ,5, 10]\n\nfig = plt.figure(figsize=(18, 18))\ncmap = copy.copy(plt.cm.get_cmap(\"jet\"))\nsubplot_index = 1\n\nfor k in xa:\n for l in ya: \n ax = fig.add_subplot(3, 3, subplot_index)\n predicted_displacement_map = displacement_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)\n predicted_displacement_map_m = np.ma.masked_where(observed_displacement_map==0, predicted_displacement_map)\n im = ax.imshow(predicted_displacement_map_m, interpolation='nearest', cmap=cmap, extent=extent_xy)\n cbar = ax.figure.colorbar(im, ax=ax, orientation='horizontal')\n plt.grid()\n im.set_clim(-30, 30)\n ax.plot(xs[k],ys[l], 'k*', markersize=25, markerfacecolor='w')\n ax.set_title('Source: X=%4.2fkm; Y=%4.2fkm' % (xs[k], ys[l]))\n ax.set_xlabel(\"Easting [km]\")\n ax.set_ylabel(\"Northing [km]\")\n subplot_index += 1\n \nplt.savefig('Model-samples-3by3.png', dpi=200, transparent='false')\n```\n\n
\n\n# Homework Assignment #1 \n\n
\n ASSIGNMENT #1: Experiment with the Mogi Forward Model -- [8 Points] \n\n To get a feeling for the Mogi forward model, please run the following forward model experiments using the Python Function displacement_data_from_mogi and plot the results (using the code cell above):\n\n
    \n
1. Run a reference simulation using the code cell above by specifying the following model parameters for source depth $z_s$ and volume change $V$: $z_{s1} = 2.5 km$; $V_1 = 0.01 km^3$. The script will visualize the resulting simulated surface displacement maps. Change the name of the output figure (last line of the script) to something that you will recognize later on (e.g., ReferenceRun.png). -- [2 Points]
2. Change the depth of the source by a factor of three ($z_{s2} = 7.5 km$) while leaving the other model parameters unchanged. Modify the name of the output figure in the last line of the script. Visualize the results. Discuss changes relative to the reference run. Describe how the strength and shape of the displacement signal have changed and provide a physical explanation. -- [2 Points]
3. Now change the source volume by a factor of three ($V_2 = 0.03 km^3$ – also reset the source depth to $z_{s1} = 2.5 km$). Visualize the results and compare them to the reference run. -- [2 Points]
4. Finally, change both source volume and depth by a factor of three ($z_{s2} = 7.5 km$ and $V_2 = 0.03 km^3$). Compare this result to the results of experiments 1–3. -- [2 Points]
\n\n
\n
\n\n
\n
\n Question 1.1 [2 Points]: Experiment no. 1: Perform the reference run in the code cell above using the source model parameters $z_{s1} = 2.5 km$; $V_1 = 0.01 km^3$. Plot the results. \n\nADD DISCUSSION HERE:\n\n
\n\n
\n
\n Question 1.2 [2 Points]: Experiment no. 2: Set source model parameters to $z_{s2} = 7.5 km$; $V_1 = 0.01 km^3$. Plot and discuss the results in comparison to reference run. \n\nADD DISCUSSION HERE:\n\n
\n\n
\n
\n Question 1.3 [2 Points]: Experiment no. 3: Set source model parameters to $V_2 = 0.03 km^3$ and $z_{s1} = 2.5 km$. Plot and discuss the results in comparison to reference run. \n\nADD DISCUSSION HERE:\n\n
\n\n
\n
\n Question 1.4 [2 Points]: Experiment no. 4: Set source model parameters to $V_2 = 0.03 km^3$ and $z_{s2} = 7.5 km$. Plot and discuss the results in comparison to reference run. \n\nADD DISCUSSION HERE:\n\n
\n
\n\nModify the example script below to answer questions 1.2 - 1.4: \n\n\n```python\nplt.rcParams.update({'font.size': 14})\nextent_x = extents((np.arange(1, sample*posting, posting))/1000)\nextent_y = extents((np.arange(1, line*posting, posting))/1000)\nextent_xy = extent_x + extent_y\nxs = np.arange(18, 24.2, 0.4)\nys = np.arange(20, 24.2, 0.4)\n\n# ------------ Change Variables HERE --------------- #\nzs = 2.58;\nvolume = 0.0034;\n# ------------------------------------------------- #\n\nxa = [0, 7, 15]\nya = [0 ,5, 10]\n\nfig = plt.figure(figsize=(18, 18))\ncmap = copy.copy(plt.cm.get_cmap(\"jet\"))\nsubplot_index = 1\n\nfor k in xa:\n for l in ya: \n ax = fig.add_subplot(3, 3, subplot_index)\n predicted_displacement_map = displacement_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)\n predicted_displacement_map_m = np.ma.masked_where(observed_displacement_map==0, predicted_displacement_map)\n im = ax.imshow(predicted_displacement_map_m, interpolation='nearest', cmap=cmap,extent=extent_xy)\n cbar = ax.figure.colorbar(im, ax=ax, orientation ='horizontal')\n plt.grid()\n im.set_clim(-30, 30)\n ax.plot(xs[k],ys[l], 'k*', markersize=25, markerfacecolor='w')\n ax.set_title(f\"Source: X={xs[k]:.2f}km; Y={ys[l]:.2f}km\")\n ax.set_xlabel(\"Easting [km]\")\n ax.set_ylabel(\"Northing [km]\")\n subplot_index += 1\n\n# CHANGE THE NAME OF THE IMAGE THAT IS BEING SAVED TO YOUR CLOUD INSTANCE!!\nplt.savefig('Model-samples-3by3.png', dpi=200, transparent='false')\n```\n\n
\n\n# 4. Solving the Inverse Model\n\n The inverse problem seeks to determine the optimal parameters $(\\hat{x_s},\\hat{y_s},\\hat{z_s},\\hat{V})$ of the Mogi model $m$ by minimizing the misfit between predictions, $g(m)$, and observations $d^{obs}$ according to\n \n\\begin{equation}\n\\sum{\\Big[g(m) - d^{obs}\\Big]^2}\n\\end{equation}\n\nThis equation describes misfit using the method of least-squares, a standard approach to approximate the solution of an overdetermined equation system. We will use a grid-search approach to find the set of model parameters that minimize the the misfit function. The approach is composed of the following processing steps: \n
    \n
1. Loop through the Mogi model parameters,
2. Calculate the forward model for each set of parameters,
3. Calculate the misfit $\sum{[g(m) - d^{obs}]^2}$, and
4. Find the parameter set that minimizes this misfit.
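The cell below is a compact, generic sketch of these four steps, included only to make the logic explicit. The names `forward_model`, `observations`, `x_grid`, and `y_grid` are placeholders rather than variables defined elsewhere in this notebook; the actual grid-search script used for Okmok follows in Section 4.2.

```python
import numpy as np

def grid_search(forward_model, observations, x_grid, y_grid):
    """Return the (x, y) pair minimizing the least-squares misfit to the observations."""
    best_misfit, best_xy = np.inf, None
    for x in x_grid:                                      # 1. loop over candidate model parameters
        for y in y_grid:
            predicted = forward_model(x, y)               # 2. evaluate the forward model g(m)
            misfit = np.nansum((predicted - observations) ** 2)  # 3. least-squares misfit
            if misfit < best_misfit:                      # 4. keep the parameter set with minimum misfit
                best_misfit, best_xy = misfit, (x, y)
    return best_xy, best_misfit
```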
\n
\n\n## 4.1 Experimenting with Misfit\n\nLet's look at the misfit $\\sum{[g(m) - d^{obs}]^2}$ for a number of different model parameter sets $(x_s,y_s,z_s,V)$: \n\n\n\n\n```python\nplt.rcParams.update({'font.size': 14})\nextent_x = extents((np.arange(1, sample*posting, posting))/1000)\nextent_y = extents((np.arange(1, line*posting, posting))/1000)\nextent_xy = extent_x + extent_y\nxs = np.arange(18, 24.2, 0.4)\nys = np.arange(20, 24.2, 0.4)\n\nzs = 2.58;\nvolume = 0.0034;\nxa = [0, 7, 15]\nya = [0 ,5, 10]\n\nfig = plt.figure(figsize=(18, 18))\ncmap = copy.copy(plt.cm.get_cmap(\"jet\"))\nsubplot_index = 1\n\nfor k in xa:\n for l in ya: \n ax = fig.add_subplot(3, 3, subplot_index)\n predicted_displacement_map = displacement_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)\n predicted_displacement_map_m = np.ma.masked_where(observed_displacement_map==0, predicted_displacement_map)\n im = ax.imshow(observed_displacement_map_m-predicted_displacement_map_m, interpolation='nearest', cmap=cmap, extent=extent_xy)\n cbar = ax.figure.colorbar(im, ax=ax, orientation='horizontal')\n plt.grid()\n im.set_clim(-30, 30)\n ax.plot(xs[k], ys[l], 'k*', markersize=25, markerfacecolor='w')\n ax.set_title('Source: X=%4.2fkm; Y=%4.2fkm' % (xs[k], ys[l]))\n ax.set_xlabel(\"Easting [km]\")\n ax.set_ylabel(\"Northing [km]\")\n subplot_index += 1\nplt.savefig('Misfit-samples-3by3.png', dpi=200, transparent='false')\n```\n\n
\n\n## 4.2 Running Grid-Search to find Best Fitting Model Parameter $(\\hat{x}_s,\\hat{y}_s)$\n\nThe following code cell runs a grid-search approach to find the best fitting Mogi source parameters for the 2000-2002 displacement event at Okmok. To keep things simple, we will fix the depth $z_s$ and volume change $V$ parameters close to their \"true\" values and search only for the correct east/north source location ($x_s,y_s$).\n

\nWrite a script using the grid-search approach in Python:\n\n\n```python\n# FIX Z AND dV, SEARCH OVER X AND Y\n\n# Setting up search parameters\nxs = np.arange(19, 22.2, 0.2)\nys = np.arange(21, 23.2, 0.2)\nzs = 2.58;\nvolume = 0.0034;\n\nnx = xs.size\nny = ys.size\nng = nx * ny;\n\nprint(f\"fixed z = {zs}km, dV = {volume}, searching over (x,y)\")\n\nmisfit = np.zeros((nx, ny))\nsubplot_index = 0\n\n# Commence grid-search for best model parameters\nfor k, xv in enumerate(xs):\n for l, yv in enumerate(ys):\n subplot_index += 1\n predicted_displacement_map = displacement_data_from_mogi(xs[k], ys[l], zs, volume, 0, 0)\n predicted_displacement_map_m = np.ma.masked_where(observed_displacement_map==0, predicted_displacement_map)\n misfit[k,l] = np.sum(np.square(observed_displacement_map_m - predicted_displacement_map_m))\n print(f\"Source {subplot_index:3d}/{ng:3d} is x = {xs[k]:.2f} km, y = {ys[l]:.2f} km\")\n\n# Searching for the minimum in the misfit matrix\nmmf = np.where(misfit == np.min(misfit))\nprint(f\"\\n----------------------------------------------------------------\")\nprint('Best fitting Mogi Source located at: X = %5.2f km; Y = %5.2f km' % (xs[mmf[0]], ys[mmf[1]]))\nprint(f\"----------------------------------------------------------------\")\n```\n\n
\n\n## 4.3 Plot and Inspect the Misfit Function\n\nThe code cell below plots the misfit function ($\\sum{[g(m) - d^{obs}]^2}$) describing the fit of different Mogi source parameterizations to the observed InSAR data. You should notice a clear minimum in the misfit plot at the location of the best fitting source location estimated above. \n \nYou may notice that, even for the best fitting solution, the misfit does not become zero. This could be due to other signals in the InSAR data (e.g., atmospheric effects or residual topography). Alternatively, it could also indicate that the observed displacement doesn't fully comply with Mogi theory. \n\n

\nPlot the misfit function ($\\sum{[g(m) - d^{obs}]^2}$):\n\n\n```python\nplt.rcParams.update({'font.size': 18})\nextent_xy = extents(xs) + extents(ys)\nfig = plt.figure(figsize=(10, 10))\ncmap = copy.copy(plt.cm.get_cmap(\"jet\"))\nax1 = fig.add_subplot(1, 1 ,1)\nim = ax1.imshow(np.transpose(misfit), origin='lower', interpolation='nearest', cmap=cmap, extent=extent_xy)\n# USE THIS COMMAND TO CHANGE COLOR SCALING: im.set_clim(-30, 30)\nax1.set_aspect('auto')\ncbar = ax1.figure.colorbar(im, ax=ax1, orientation='horizontal')\nax1.plot(xs[mmf[0]], ys[mmf[1]], 'k*', markersize=25, markerfacecolor='w')\nax1.set_title(\"Misfit Function for Mogi-Source Approximation\")\nax1.set_xlabel(\"Easting [km]\")\nax1.set_ylabel(\"Northing [km]\")\nplt.savefig('Misfit-function.png', dpi=200, transparent='false')\n```\n\n
\n\n## 4.4 Plot Best-Fitting Mogi Forward Model and Compare to Observations\n\nWith the best-fitting model parameters defined, you can now analyze how well the model fits the InSAR-observed surface displacement. The best way to do that is to look at both the observed and predicted displacement maps and compare their spatial patterns. Additionally, we will also plot the residuals (observed_displacement_map - predicted_displacement_map) to determine if there are additional signals in the data that are not modeled using Mogi theory. \n\n

\nCompare the observed and predicted displacement maps:\n\n\n```python\n# Calculate predicted displacement map for best-fitting Mogi parameters:\npredicted_displacement_map = displacement_data_from_mogi(xs[mmf[0]], ys[mmf[1]], zs, volume, 0, 0)\n\n# Mask the predicted displacement map to remove pixels incoherent in the observations:\npredicted_displacement_map_m = np.ma.masked_where(observed_displacement_map==0, predicted_displacement_map)\n\n# Plot observed displacement map\nplot_model(observed_displacement_map_m, line, sample, posting)\n\n# Plot simulated displacement map\nplot_model(predicted_displacement_map_m, line, sample, posting)\n\nplt.savefig('BestFittingMogiDefo.png', dpi=200, transparent='false')\n\n# Plot simulated displacement map without mask applied\nplot_model(predicted_displacement_map, line, sample, posting)\n```\n\nDetermine if there are additional signals in the data that are not modeled using Mogi theory:\n\n\n```python\n# Plot residual between observed and predicted displacement maps\nplot_model(observed_displacement_map_m-predicted_displacement_map_m, line, sample, posting)\nplt.savefig('Residuals-ObsMinusMogi.png', dpi=200, transparent='false')\n```\n\n# Homework Assignment #2 \n\n
\n ASSIGNMENT #2: Run 2nd Grid-Search to Find Model Parameters $(\\hat{z}_s,\\hat{V})$ -- [8 Points] \n\n For this second grid-search run, we now switch out the model parameters we are trying to estimate. We will assume that the lateral location of the Mogi source is now fixed to its estimated value ($\\hat{x}_s = 20.6 km$; $\\hat{y}_s = 21.8 km$). \n\nTo perform a grid search for the best fitting model parameters $\\hat{z}_s$ and $\\hat{V}$, please complete the following steps:\n\n
    \n
    \n
1. Using the previous grid-search script as a template, write a new grid-search script to search for the best fitting source model depth ($z_s$) and volume change ($V$). -- [3 Points]
2. Provide a plot of the misfit function and provide the best-fitting values for $\hat{z}_s$ and $\hat{V}$. When plotting the misfit function, put $z_s$ on the vertical axis. You may want to adjust the color scale, in order to better see the shape of the misfit function. -- [2 Points]
3. Compare the $z_s$ vs. $V$ misfit function (misfit function 2) to the $y_s$ vs. $x_s$ misfit function (misfit function 1). You should see that the shape of the function is different. Misfit function 1 is largely of circular shape while misfit function 2 appears elongated. Interpret this pattern. -- [3 Points]
\n\n
\n
\n\n
\n
\n Question 2.1 [3 Points]: Provide code to perform grid search over $z_s$ and $V$. \n\nPROVIDE SCRIPT BY MODIFYING THE CODE IN THE CODE CELL BELOW:\n\n
\n\n\n```python\n# !!!! MODIFY THIS SCRIPT TO PERFORM A GRID SEARCH OVER zs AND V: !!!!\n\n# Setting up search parameters\nxs = np.arange(19, 22.2, 0.2)\nys = np.arange(21, 23.2, 0.2)\nzs = 2.58;\nvolume = 0.0034;\n\nnx = xs.size\nny = ys.size\nng = nx * ny;\n\n#print('fixed z = ',zs,' km, dV = ',volume, ' searching over (x,y)')\nprint(f\"fixed z = {zs}km, dV = {volume}, searching over (x,y)\")\n\nmisfit=np.zeros((nx,ny))\nsubplot_index = 0\n\n# Commence grid-search for best model parameters\nfor k, xv in enumerate(xs):\n for l, yv in enumerate(ys):\n subplot_index = subplot_index+1\n predicted_displacement_map = displacement_data_from_mogi(xs[k],ys[l],zs,volume,0,0)\n predicted_displacement_map_m = np.ma.masked_where(observed_displacement_map == 0, predicted_displacement_map)\n misfit[k,l] = np.sum(np.square(observed_displacement_map_m - predicted_displacement_map_m))\n print(f\"Source {subplot_index:3d}/{ng:3d} is x = {xs[k]:.2f} km, y = {ys[l]:.2f} km\")\n\n# Searching for the minimum in the misfit matrix\nmmf = np.where(misfit == np.min(misfit))\nprint('')\nprint(f\"\\n----------------------------------------------------------------\")\nprint('Best fitting Mogi Source located at: X = %5.2f km; Y = %5.2f km' % (xs[mmf[0]], ys[mmf[1]]))\nprint(f\"----------------------------------------------------------------\")\n```\n\n
\n
\n Question 2.2-A [1 Points]: Provide the best fitting values for source depth ($\\hat{z}_s$) and volume change ($\\hat{V}$) according to your grid-search results. \n\nPROVIDE ESTIMATES FOR $\\hat{z}_s$ and $\\hat{V}$ HERE:\n\n
\n\n
\n
\n Question 2.2-B [1 Points]: Provide plot of $z_s$ on $V$ misfit function. \n\nPROVIDE PLOT HERE:\n\n
\n\n
\n
\n Question 2.3 [3 Points]: Compare the $z_s$ vs. $V$ misfit function (misfit function 2) to the initial $y_s$ vs. $x_s$ misfit function (misfit function 1). Interpret their difference in spatial pattern. \n\nPROVIDE DISCUSSION HERE:\n\n
\n\n
\n\n# Homework Assignment #3 \n\n
\n ASSIGNMENT #3: Error Discussion -- [4 Points] \n\n In a perfect world where data are noise-free and geophysical models perfectly represent reality, there should be a set of model parameters that reduces the misfit function to zero. In our case, however, the misfit function still shows large values, even for the best-fitting model parameters. Provide and explain three reasons why the differences between the model and the data are not zero.\n

\nPROVIDE DISCUSSION HERE:\n
\n
\n
\n\n# 5. Version Log\n\n GEOS 639 Geodetic Imaging - Version 1.3.3 - March 2022 \n
\n Version Changes:\n
    \n
- remove obsolete asf_notebook functions
- url_widget
- Adjust some of the language in the notebook
\n
\n
\n", "meta": {"hexsha": "088d12e38825fed81e22bdce2f05f60e86355693", "size": 53932, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Week-9/GEOS639-Lab5-VolcanoSourceModelingfromInSAR.ipynb", "max_stars_repo_name": "uafgeoteach/GEOS639-InSARGeoImaging", "max_stars_repo_head_hexsha": "2f0804f875fe3dbc4972c1dfc785dc585ebbd482", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2022-02-22T06:29:41.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-04T19:09:31.000Z", "max_issues_repo_path": "Week-9/GEOS639-Lab5-VolcanoSourceModelingfromInSAR.ipynb", "max_issues_repo_name": "uafgeoteach/GEOS639-InSARGeoImaging", "max_issues_repo_head_hexsha": "2f0804f875fe3dbc4972c1dfc785dc585ebbd482", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Week-9/GEOS639-Lab5-VolcanoSourceModelingfromInSAR.ipynb", "max_forks_repo_name": "uafgeoteach/GEOS639-InSARGeoImaging", "max_forks_repo_head_hexsha": "2f0804f875fe3dbc4972c1dfc785dc585ebbd482", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 37.4267869535, "max_line_length": 709, "alphanum_fraction": 0.5857375955, "converted": true, "num_tokens": 10736, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4148988457967689, "lm_q2_score": 0.22541660542786954, "lm_q1q2_score": 0.09352508941544874}} {"text": "# SNLP Assignment 3\n\nName 1: Sangeet Sagar
\nStudent id 1: 7009050
\nEmail 1: sasa00001@stud.uni-saarland.de
\n\n\nName 2: Nikhil Paliwal
\nStudent id 2: 7009915
\nEmail 2: nipa00002@stud.uni-saarland.de
\n\n**Instructions:** Read each question carefully.
\nMake sure you appropriately comment your code wherever required. Your final submission should contain the completed Notebook and the respective Python files for exercises 2 and 3. There is no need to submit the data files.
\nUpload the zipped folder in Teams. Make sure to click on \"Turn-in\" after you upload your submission, otherwise the assignment will not be considered as submitted. Only one member of the group should make the submisssion.\n\n---\n\n## Exercise 1: Entropy Intuition (2 points)\n\n### 1.1 (0.5 points)\n\nOrder the following three snippets by entropy (highest to lowest). Justify your answer (view it more intuitively rather than by using a specific character-level language model, though you would probably reach the same conclusion).\n\n```\n1: A B A A A A B B A A A B A B B B B B A\n2: A B A B A B A B A B A B A B A B A B A\n3: A B A A A B A B A B A B A B A B A B A\n```\n\n**Answer**
\nCorrect order: $2 > 3 > 1$
\n*Explanation*: Viewed intuitively, entropy is a measure of the randomness in a probability distribution. We can therefore compare the entropy of the above sequences by comparing their randomness. **2** has the highest degree of randomness because no two consecutive letters are the same. It is followed by **3**, which has a repetition of `A A A` at the beginning. Such repetitions are even more prevalent in **1**, hence it has the least entropy of the three.\n\n### 1.2 (0.5 point)\n\nWords in natural language do not have the maximum entropy given the available alphabet. This creates a redundancy (e.g. the word `maximum` could be uniquely replaced by `mxmm` and everyone would still understand). If the development of natural languages leads to somewhat optimal solutions, why is it beneficial to have such redundancies in communication?\n\nIf you're uncertain, please refer to this well-written article: [www-math.ucdenver.edu/~wcherowi/courses/m5410/m5410lc1.html](http://www-math.ucdenver.edu/~wcherowi/courses/m5410/m5410lc1.html).\n\n**Answer**
\nHaving redundancies diminishes the uncertainty in communication: the more information we receive about the intent of a message, the more certain we become about what the speaker is referring to, even when parts of the message are distorted or lost.\n\n### 1.3 (1 point)\n\n1. Assume you were given a perfect language model that would always assign probability of $1$ to the next word. What would be the cross-entropy on any text? Motivate your answer with formal derivation. (0.5 points)\n2. How does cross-entropy relate to perplexity? Is there a reason why one would be preferred over the other? (0.5 points)\n\n**Answer**
Cross-entropy:
$$ H(P, Q) = -\sum_{x \in X} P(x) \cdot \log Q(x) $$

1. In the given situation the model assigns $Q(x) = 1$ to every word that actually occurs. Hence,
$$ H(P, Q) = -\sum_{x \in X} P(x) \cdot \log(1) = 0 $$
Intuitively, if the probability of the next word is always 1, we are always certain about the next outcome, so there is no uncertainty and the cross-entropy is $0$.

2. Perplexity ($M$) is given as $M = 2^{H(P, Q)}$, i.e. it is the exponentiation of the cross-entropy. Generally, perplexity is preferred over cross-entropy because it is easier to interpret: it can be read as the average effective vocabulary size (branching factor) the model chooses from.


## Exercise 2: Harry Potter and the Measure of Uncertainty (4 points)

#### 2.1 (2.5 points)

Harry, Hermione, and Ron are trying to save the Philosopher's Stone. To do this, they have to cross a series of hurdles to reach the room where the stone is kept. Currently, they are trapped in a chamber whose exit is blocked by fire. On a table before them are 7 potions.

|P1|P2|P3|P4|P5|P6|P7|
|---|---|---|---|---|---|---|

Of these, 6 potions are poisons and only one is the antidote that will get them through the exit. Drinking the poison will not kill them, but will weaken them considerably. 

1. There is no way of knowing which potion is a poison and which an antidote. How many potions must they sample *on average* to pick the antidote? (1 point)

**Answer**<br>
\nWe have $X$ = no. of potion sampled before picking up an antidote.
\n\n$P(X=1) = \\frac{1}{7} \\quad \\quad \\quad$ (antidote is picked up in the first sampling)
\n$P(X=2) = \\frac{6}{7}\\cdot\\frac{1}{6} = \\frac{1}{7}\\quad$ (1 poison, 1 antidote)
\n$P(X=3) = \\frac{6}{7} \\cdot \\frac{5}{6} \\cdot \\frac{1}{5} = \\frac{1}{7} $ (2 poison, 1 antidote)
\nSimilarly,
$P(X=4) = P(X=5) = P(X=6) = P(X=7) = \frac{1}{7}$

$$ E[X] = \sum_{n=1}^{7} n\cdot\left(\frac{1}{7}\right)$$
$$ E[X] = \frac{1}{7}\sum_{n=1}^{7} n$$
$$ E[X] = 4$$

Therefore, they must sample 4 potions on average to pick the antidote.

Hermione notices a scroll lying near the potions. The scroll contains an intricate riddle written by Professor Snape that will help them determine which potion is the antidote. With the help of the clues provided, Hermione cleverly deduces that each potion can be the antidote with a certain probability. 

|P1|P2|P3|P4|P5|P6|P7|
|---|---|---|---|---|---|---|
|1/16|1/4|1/64|1/2|1/64|1/32|1/8|

2. In this situation, how many potions must they now sample *on average* to pick the antidote correctly? (1 point)

3. What is the most efficient sequence of potions they must sample to discover the antidote? Why do you claim that, in terms of how uncertain you are about guessing right? (0.5 point)

**Answer**<br>
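For the second part, a rough numerical estimate can be made with the short sketch below (not part of the original hand calculation); it assumes the potions are tried in decreasing order of probability, and also reports the entropy of Hermione's distribution for comparison:

```python
import numpy as np

# Antidote probabilities for P1..P7 from the riddle
p = np.array([1/16, 1/4, 1/64, 1/2, 1/64, 1/32, 1/8])

# Try the potions from most to least likely; the antidote is found at
# position r with probability p_(r), so E[#samples] = sum_r r * p_(r)
p_sorted = np.sort(p)[::-1]
expected_samples = sum((r + 1) * pr for r, pr in enumerate(p_sorted))

# Entropy of the distribution: the information-theoretic lower bound on
# the average number of yes/no questions needed
entropy = -(p * np.log2(p)).sum()

print(f"E[#potions sampled] = {expected_samples:.3f}")   # ~1.98
print(f"H(p) = {entropy:.3f} bits")                      # ~1.97
```

Both figures come out close to 2, so with the informed probabilities only about 2 potions need to be sampled on average, compared to 4 in the uniform case.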
**P4 > P2 > P7 > P1 > P6 > {P3, P5}**<br>
The sequence of potion sampling given above reduces the uncertainty as quickly as possible, because at each step we try the potion with the highest remaining probability of being the antidote first.


#### 2.2 (1.5 points)

1. Extend your logic from 2.1 to a Shannon's Game where you have to correctly guess the next word in a sentence. Assume that a word is any possible permutation and combination of 26 letters of the alphabet, and all the words have a length of at most *n*. 
How many guesses will one have to make to guess the correct word? (1 point)<br>
\n(**Hint**: Think of how many words can exist in this scenario)\n\n**Answer**
Let $k$ be the total number of possible words of length at most $n$:
$$ k = \sum_{m=1}^{n} 26^{m}$$

Summing this geometric progression,<br>
\n$$ k = \\frac{26}{25}(26^{n}-1)$$\n\nUsing similar logic form 2.1, we have
\n$E[X=1] = \\frac{1}{k} $ ; (expectation of a correct guess in the 1st sampling)
\n$E[X=2] = \\frac{k-1}{k} \\frac{1}{k-1} = \\frac{1}{k}$ ; (expectation of a correct guess in the 2nd sampling)
\nAnd so on,
\n$E[X=k] = \\frac{1}{k}$ ; (we sample as many times as we have totat number of words in the corpus)\n\n$$ E[x] = \\sum_{m=1}^{k} m\\cdot\\left(\\frac{1}{k}\\right)$$\n$$ E[x] = \\frac{1}{k}\\sum_{m=1}^{k} m$$\n$$ E[x] = \\frac{1}{k}\\cdot \\frac{k(k+1)}{2}$$\n\nTherefore, one has to make $\\frac{1}{k}\\cdot \\frac{k(k+1)}{2}$ (where $k = \\sum_{n=1}^{26} 26^{n}$) guesses to to guess the correct word.\n\n2. Why is the entropy lower in real-world languages? How do language models help to reduce the uncertainty of guessing the correct word? (2-3 sentences) (0.5 point)\n\n**Answer**
Entropy is lower for real-world languages because a language is defined by a particular set of rules (e.g. grammatical rules) and is constrained to follow them; as speakers or writers we follow specific sentence structures.<br>
A statistical language model is learned from raw text and predicts the probability of the next word given the words already present in the sequence. Hence, given a context, the LM has a restricted set of plausible choices for the next prediction and selects the one with maximum probability, thus reducing the uncertainty of guessing the correct word.

## Exercise 3: Kullback-Leibler Divergence (4 points)

Another metric (besides perplexity and cross-entropy) to compare two probability distributions is the Kullback-Leibler Divergence $D_{KL}$. It is defined as:

\begin{equation}
D_{KL}(P\|Q) = \sum_{x \in X}P(x) \cdot \log \frac{P(x)}{Q(x)}
\end{equation}

Where $P$ is the empirical or observed distribution, and $Q$ is the estimated distribution over a common probability space $X$. 
Answer the following questions:

#### 3.1. (0.5 points)

How is $D_{KL}$ related to Cross-Entropy? Derive a mathematical expression that describes the relationship. 

**Answer**<br>
\n$$ D_{KL}(P\\|Q) = \\sum_{x \\in X}P(x) \\cdot \\log \\frac{P(x)}{Q(x)} $$ \n$$ D_{KL}(P\\|Q) = -\\sum_{x \\in X}P(x)\\cdot \\log(Q(x)) + \\sum_{x \\in X}P(x)\\cdot \\log(Q(x)) $$ \n$$ D_{KL}(P\\|Q) = E_P[-\\log(Q)]- E_P[-\\log(P)]$$ \n$$ D_{KL}(P\\|Q) = H(P, Q)- H(P)$$ \n\nWhere:
\n$ H(P,Q)$ = cross entropy of distributions $P$ and $Q$
\n$ H(P)$ = entropy of distribution $P$\n\n#### 3.2. (0.5 points)\n\nIs minimizing $D_{KL}$ the same thing as minimizing Cross-Entropy? Support your answer using your answer to 1.\n\n\n\n**Answer**
Yes, minimizing cross-entropy is the same as minimizing $D_{KL}$, because $D_{KL}(P\|Q) = H(P, Q) - H(P)$ and the entropy $H(P)$ of the true distribution is a constant that does not depend on the estimate; changes in the estimated distribution are reflected only in the cross-entropy term.

#### 3.3 (3 points)

For a function $d$ to be considered a distance metric, the following three properties must hold:

$\forall x,y,z \in U:$

1. $d(x,y) = 0 \Leftrightarrow x = y$
2. $d(x,y) = d(y,x)$
3. $d(x,z) \le d(x,y) + d(y,z)$

Is $D_{KL}$ a distance metric? ($U$ in this case is the set of all distributions over the same possible states).
For each of the three points either prove that it holds for $D_{KL}$ or show a counterexample proving why it does not.

**Answer**<br>
\n1. Let $x=p$, $y=q$ \n$$ D(p \\|q) = H(p, q) - H(p) \\quad \\quad \\quad \\quad \\dots (1)$$\n$$ D(p \\|q) = H(p,p) - H(p)$$\n$$ D(p \\|q) = H(p) - H(p)$$\n$$ D(p \\|q) = 0$$\n$D_{KL}$ holds here.\n\n2. $$ D(q \\|p) = H(q, p) - H(q) \\quad \\quad \\quad \\quad \\dots (2)$$\nFrom 1 and 2, \n$$D(q \\|p) \\neq D(p \\|q)$$\n$D_{KL}$ does not hold here.
\nCounterexample:\n\\begin{align}\nD(x\\|y) &= \\frac{1}{3} \\log\\left(\\frac{1/3}{1/6}\\right) + \\frac{2}{3} \\log\\left(\\frac{2/3}{5/6}\\right) \\\\\nD(x\\|y) &= 0.035 \\\\\n\\\\\nD(y\\|x) &= \\frac{1}{6} \\log\\left(\\frac{1/6}{1/3}\\right) + \\frac{5}{6} \\log\\left(\\frac{5/6}{2/3}\\right) \\\\\nD(y\\|x) &= 0.03 \\\\\n\\end{align}\nHence\n$$D(x \\|y) \\neq D(y \\|x) $$\n\n3. $D_{KL}$ does not hold here.
Counterexample:
Sample space: $\{0, 1\}$<br>
\n$x(0) = \\frac{1}{3}$
\n$y(0) = \\frac{1}{6}$
\n$z(0) = \\frac{1}{12}$
\n\n$$ D(p\\|q) = \\sum p_i \\log\\left(\\frac{p_i}{q_i}\\right) $$\n\\begin{align}\nD(x\\|z) &= \\sum x_i \\log\\left(\\frac{x_i}{z_i}\\right) \\\\\nD(x\\|z) &= x(0) \\log\\left(\\frac{x(0)}{z(0)}\\right) + x(1) \\log\\left(\\frac{x(1)}{z(1)}\\right) \\\\\nD(x\\|z) &= \\frac{1}{3} \\log\\left(\\frac{1/3}{1/12}\\right) + \\frac{2}{3} \\log\\left(\\frac{2/3}{11/12}\\right) \\\\\nD(x\\|z) &= 0.108 \\\\\n\\\\\nD(x\\|y) &= \\frac{1}{3} \\log\\left(\\frac{1/3}{1/6}\\right) + \\frac{2}{3} \\log\\left(\\frac{2/3}{5/6}\\right) \\\\\nD(x\\|y) &= 0.035 \\\\\n\\\\\nD(y\\|z) &= \\frac{1}{6} \\log\\left(\\frac{1/6}{1/12}\\right) + \\frac{5}{6} \\log\\left(\\frac{5/6}{1/12}\\right) \\\\\nD(x\\|z) &= 0.015 \\\\\n\\end{align}\n\nHence,\n$$D(x\\|z) \\ge D(x\\|y) + D(x\\|y)$$\n\nTherefore, $D_{KL}$ is not a distance metric.\n\n## Bonus (1.5 points)\n\n1. Compute $D_{KL}(Q_1\\|P_1)$ for the following pair of sentences based on a unigram language model (word level).\n\n```\np1: to be or not to be\nq1: to be or to be or not or to be be be\n```\n\n Do so by implementing the function `dkl` in `bonus.py`. You will also have to calculate the distributions $P_1$, $Q_1$; for this, you can either reuse your code from the last assignment or implement a new function in `bonus.py`. (1 point)\n\n2. Suppose the sentences in 1. would be replaced by the following sequences of symbols. You can imagine them to be sequences of nucleobases in a [coding](https://en.wikipedia.org/wiki/Coding_region) region of a gene in your genome.\n\n```\np2: ACTGACACTGAC\nq2: ACTACTGACCCACTACTGACCC\n```\n\nLet $P_2$, $Q_2$ be the character-level unigram LMs derived from these sequences. What values will $D_{KL}(P_1\\|P_2)$, $D_{KL}(Q_1\\|Q_2)$ take? Does the quantity hold any information? Would computing $D_{KL}$ between distributions over two different natural languages hold any information? (0.5 points)\n\nNo mathematical explanation nor coding required for the second part.\n\n\n```python\nfrom importlib import reload\nimport bonus\nbonus = reload(bonus)\n\n# TODO: estimate LMs\nP = \nQ = \n\n# TODO: DKL\nprint(bonus.dkl(p,q))\n```\n", "meta": {"hexsha": "097ef77b504e34bca9a8b9536f3c50d171ac5fb9", "size": 17709, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "snlp/hw3/Assignment3.ipynb", "max_stars_repo_name": "sangeet2020/ss-21", "max_stars_repo_head_hexsha": "c2dbcf9668cb82b27a76e766a977483dd5fae0d4", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-02-13T21:07:49.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-13T21:07:49.000Z", "max_issues_repo_path": "snlp/hw3/Assignment3.ipynb", "max_issues_repo_name": "sangeet2020/ss-21", "max_issues_repo_head_hexsha": "c2dbcf9668cb82b27a76e766a977483dd5fae0d4", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "snlp/hw3/Assignment3.ipynb", "max_forks_repo_name": "sangeet2020/ss-21", "max_forks_repo_head_hexsha": "c2dbcf9668cb82b27a76e766a977483dd5fae0d4", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 39.6174496644, "max_line_length": 434, "alphanum_fraction": 0.5670562991, "converted": true, "num_tokens": 4096, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4687906266262437, "lm_q2_score": 0.19930799790404563, "lm_q1q2_score": 0.09343372122905962}} {"text": "\n\n\n

Escuela de Ciencias B\u00e1sicas, Tecnolog\u00eda e Ingenier\u00eda

\n
\n\n\n

ECBTI

\n
\n\n\n

Curso: M\u00e9todos Num\u00e9ricos

\n
\n\n\n

Unidad 1: Error

\n
\n\n\n

Febrero 28 de 2020

\n
\n\n\n***\n\n> **Tutor:** Carlos Alberto \u00c1lvarez Henao, I.C. D.Sc.\n\n> **skype:** carlos.alberto.alvarez.henao\n\n> **Herramienta:** [Jupyter](http://jupyter.org/)\n\n> **Kernel:** Python 3.7\n\n\n***\n\n***Comentario:*** estas notas est\u00e1n basadas en el curso del profesor [Kyle T. Mandli](https://github.com/mandli/intro-numerical-methods) (en ingl\u00e9s)\n\n# Fuentes de error\n\nLos c\u00e1lculos num\u00e9ricos, que involucran el uso de m\u00e1quinas (an\u00e1logas o digitales) presentan una serie de errores que provienen de diferentes fuentes:\n\n- del Modelo\n- de los datos\n- de truncamiento\n- de representaci\u00f3n de los n\u00fameros (punto flotante)\n- $\\ldots$\n\n***Meta:*** Categorizar y entender cada tipo de error y explorar algunas aproximaciones simples para analizarlas.\n\n# Error en el modelo y los datos\n\nErrores en la formulaci\u00f3n fundamental\n\n- Error en los datos: imprecisiones en las mediciones o incertezas en los par\u00e1metros\n\nInfortunadamente no tenemos control de los errores en los datos y el modelo de forma directa pero podemos usar m\u00e9todos que pueden ser m\u00e1s robustos en la presencia de estos tipos de errores.\n\n# Error de truncamiento\n\nLos errores surgen de la expansi\u00f3n de funciones con una funci\u00f3n simple, por ejemplo, $sin(x) \\approx x$ para $|x|\\approx0$.\n\n# Error de representaci\u00f3n de punto fotante\n\nLos errores surgen de aproximar n\u00fameros reales con la representaci\u00f3n en precisi\u00f3n finita de n\u00fameros en el computador.\n\n# Definiciones b\u00e1sicas\n\nDado un valor verdadero de una funci\u00f3n $f$ y una soluci\u00f3n aproximada $F$, se define:\n\n- Error absoluto\n\n$$e_a=|f-F|$$\n\n- Error relativo\n\n$$e_r = \\frac{e_a}{|f|}=\\frac{|f-F|}{|f|}$$\n\n\n\n# Notaci\u00f3n $\\text{Big}-\\mathcal{O}$\n\nsea $$f(x)= \\mathcal{O}(g(x)) \\text{ cuando } x \\rightarrow a$$\n\nsi y solo si\n\n$$|f(x)|\\leq M|g(x)| \\text{ cuando } |x-a| < \\delta \\text{ donde } M, a > 0$$\n\n\nEn la pr\u00e1ctica, usamos la notaci\u00f3n $\\text{Big}-\\mathcal{O}$ para decir algo sobre c\u00f3mo se pueden comportar los t\u00e9rminos que podemos haber dejado fuera de una serie. 
Veamos el siguiente ejemplo de la aproximaci\u00f3n de la serie de Taylor:\n\n***Ejemplo:***\n\nsea $f(x) = \\sin x$ con $x_0 = 0$ entonces\n\n$$T_N(x) = \\sum^N_{n=0} (-1)^{n} \\frac{x^{2n+1}}{(2n+1)!}$$\n\nPodemos escribir $f(x)$ como\n\n$$f(x) = x - \\frac{x^3}{6} + \\frac{x^5}{120} + \\mathcal{O}(x^7)$$\n\nEsto se vuelve m\u00e1s \u00fatil cuando lo vemos como lo hicimos antes con $\\Delta x$:\n\n$$f(x) = \\Delta x - \\frac{\\Delta x^3}{6} + \\frac{\\Delta x^5}{120} + \\mathcal{O}(\\Delta x^7)$$\n\n# Reglas para el error de propagaci\u00f3n basado en la notaci\u00f3n $\\text{Big}-\\mathcal{O}$\n\nEn general, existen dos teoremas que no necesitan prueba y se mantienen cuando el valor de $x$ es grande:\n\nSea\n\n$$\\begin{aligned}\n f(x) &= p(x) + \\mathcal{O}(x^n) \\\\\n g(x) &= q(x) + \\mathcal{O}(x^m) \\\\\n k &= \\max(n, m)\n\\end{aligned}$$\n\nEntonces\n\n$$\n f+g = p + q + \\mathcal{O}(x^k)\n$$\n\ny\n\n\\begin{align}\n f \\cdot g &= p \\cdot q + p \\mathcal{O}(x^m) + q \\mathcal{O}(x^n) + O(x^{n + m}) \\\\\n &= p \\cdot q + \\mathcal{O}(x^{n+m})\n\\end{align}\n\nDe otra forma, si estamos interesados en valores peque\u00f1os de $x$, $\\Delta x$, la expresi\u00f3n puede ser modificada como sigue:\n\n\\begin{align}\n f(\\Delta x) &= p(\\Delta x) + \\mathcal{O}(\\Delta x^n) \\\\\n g(\\Delta x) &= q(\\Delta x) + \\mathcal{O}(\\Delta x^m) \\\\\n r &= \\min(n, m)\n\\end{align}\n\nentonces\n\n$$\n f+g = p + q + O(\\Delta x^r)\n$$\n\ny\n\n\\begin{align}\n f \\cdot g &= p \\cdot q + p \\cdot \\mathcal{O}(\\Delta x^m) + q \\cdot \\mathcal{O}(\\Delta x^n) + \\mathcal{O}(\\Delta x^{n+m}) \\\\\n &= p \\cdot q + \\mathcal{O}(\\Delta x^r)\n\\end{align}\n\n***Nota:*** En este caso, supongamos que al menos el polinomio con $k=max(n,m)$ tiene la siguiente forma:\n\n$$\n p(\\Delta x) = 1 + p_1 \\Delta x + p_2 \\Delta x^2 + \\ldots\n$$\n\no\n\n$$\n q(\\Delta x) = 1 + q_1 \\Delta x + q_2 \\Delta x^2 + \\ldots\n$$\n\npara que $\\mathcal{O}(1)$ \n\n\nde modo que hay un t\u00e9rmino $\\mathcal{O}(1)$ que garantiza la existencia de $\\mathcal{O}(\\Delta x^r)$ en el producto final.\n\nPara tener una idea de por qu\u00e9 importa m\u00e1s la potencia en $\\Delta x$ al considerar la convergencia, la siguiente figura muestra c\u00f3mo las diferentes potencias en la tasa de convergencia pueden afectar la rapidez con la que converge nuestra soluci\u00f3n. Tenga en cuenta que aqu\u00ed estamos dibujando los mismos datos de dos maneras diferentes. Graficar el error como una funci\u00f3n de $\\Delta x$ es una forma com\u00fan de mostrar que un m\u00e9todo num\u00e9rico est\u00e1 haciendo lo que esperamos y muestra el comportamiento de convergencia correcto. Dado que los errores pueden reducirse r\u00e1pidamente, es muy com\u00fan trazar este tipo de gr\u00e1ficos en una escala log-log para visualizar f\u00e1cilmente los resultados. 
Tenga en cuenta que si un m\u00e9todo fuera realmente del orden $n$, ser\u00e1 una funci\u00f3n lineal en el espacio log-log con pendiente $n$.\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndx = np.linspace(1.0, 1e-4, 100)\n\nfig = plt.figure()\nfig.set_figwidth(fig.get_figwidth() * 2.0)\naxes = []\naxes.append(fig.add_subplot(1, 2, 1))\naxes.append(fig.add_subplot(1, 2, 2))\n\nfor n in range(1, 5):\n axes[0].plot(dx, dx**n, label=\"$\\Delta x^%s$\" % n)\n axes[1].loglog(dx, dx**n, label=\"$\\Delta x^%s$\" % n)\n\naxes[0].legend(loc=2)\naxes[1].set_xticks([10.0**(-n) for n in range(5)])\naxes[1].set_yticks([10.0**(-n) for n in range(16)])\naxes[1].legend(loc=4)\nfor n in range(2):\n axes[n].set_title(\"Crecimiento del Error vs. $\\Delta x^n$\")\n axes[n].set_xlabel(\"$\\Delta x$\")\n axes[n].set_ylabel(\"Error Estimado\")\n axes[n].set_title(\"Crecimiento de las diferencias\")\n axes[n].set_xlabel(\"$\\Delta x$\")\n axes[n].set_ylabel(\"Error Estimado\")\n\nplt.show()\n```\n\n# Error de truncamiento\n\n***Teorema de Taylor:*** Sea $f(x) \\in C^{m+1}[a,b]$ y $x_0 \\in [a,b]$, para todo $x \\in (a,b)$ existe un n\u00famero $c = c(x)$ que se encuentra entre $x_0$ y $x$ tal que\n\n$$ f(x) = T_N(x) + R_N(x)$$\n\ndonde $T_N(x)$ es la aproximaci\u00f3n del polinomio de Taylor\n\n$$T_N(x) = \\sum^N_{n=0} \\frac{f^{(n)}(x_0)\\times(x-x_0)^n}{n!}$$\n\ny $R_N(x)$ es el residuo (la parte de la serie que obviamos)\n\n$$R_N(x) = \\frac{f^{(n+1)}(c) \\times (x - x_0)^{n+1}}{(n+1)!}$$\n\nOtra forma de pensar acerca de estos resultados consiste en reemplazar $x - x_0$ con $\\Delta x$. La idea principal es que el residuo $R_N(x)$ se vuelve mas peque\u00f1o cuando $\\Delta x \\rightarrow 0$.\n\n$$T_N(x) = \\sum^N_{n=0} \\frac{f^{(n)}(x_0)\\times \\Delta x^n}{n!}$$\n\ny $R_N(x)$ es el residuo (la parte de la serie que obviamos)\n\n$$ R_N(x) = \\frac{f^{(n+1)}(c) \\times \\Delta x^{n+1}}{(n+1)!} \\leq M \\Delta x^{n+1}$$\n\n***Ejemplo 1:***\n\n$f(x) = e^x$ con $x_0 = 0$\n\nUsando esto podemos encontrar expresiones para el error relativo y absoluto en funci\u00f3n de $x$ asumiendo $N=2$.\n\nDerivadas:\n$$\\begin{aligned}\n f'(x) &= e^x \\\\\n f''(x) &= e^x \\\\ \n f^{(n)}(x) &= e^x\n\\end{aligned}$$\n\nPolinomio de Taylor:\n$$\\begin{aligned}\n T_N(x) &= \\sum^N_{n=0} e^0 \\frac{x^n}{n!} \\Rightarrow \\\\\n T_2(x) &= 1 + x + \\frac{x^2}{2}\n\\end{aligned}$$\n\nRestos:\n$$\\begin{aligned}\n R_N(x) &= e^c \\frac{x^{n+1}}{(n+1)!} = e^c \\times \\frac{x^3}{6} \\quad \\Rightarrow \\\\\n R_2(x) &\\leq \\frac{e^1}{6} \\approx 0.5\n\\end{aligned}$$\n\nPrecisi\u00f3n:\n$$\n e^1 = 2.718\\ldots \\\\\n T_2(1) = 2.5 \\Rightarrow e \\approx 0.2 ~~ r \\approx 0.1\n$$\n\n\u00a1Tambi\u00e9n podemos usar el paquete `sympy` que tiene la capacidad de calcular el polinomio de *Taylor* integrado!\n\n\n```python\nimport sympy\nx = sympy.symbols('x')\nf = sympy.symbols('f', cls=sympy.Function)\n\nf = sympy.exp(x)\nf.series(x0=0, n=5)\n```\n\n\n\n\n$\\displaystyle 1 + x + \\frac{x^{2}}{2} + \\frac{x^{3}}{6} + \\frac{x^{4}}{24} + O\\left(x^{5}\\right)$\n\n\n\nGraficando\n\n\n```python\nx = np.linspace(-1, 1, 100)\nT_N = 1.0 + x + x**2 / 2.0\nR_N = np.exp(1) * x**3 / 6.0\n\nplt.plot(x, T_N, 'r', x, np.exp(x), 'k', x, R_N, 'b')\nplt.plot(0.0, 1.0, 'o', markersize=10)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"$f(x)$, $T_N(x)$, $R_N(x)$\")\nplt.legend([\"$T_N(x)$\", \"$f(x)$\", \"$R_N(x)$\"], loc=2)\nplt.show()\n```\n\n***Ejemplo 2:***\n\nAproximar\n\n$$ f(x) = \\frac{1}{x} \\quad x_0 = 
1,$$\n\nusando $x_0 = 1$ para el tercer termino de la serie de Taylor.\n\n$$\\begin{aligned}\n f'(x) &= -\\frac{1}{x^2} \\\\\n f''(x) &= \\frac{2}{x^3} \\\\\n f^{(n)}(x) &= \\frac{(-1)^n n!}{x^{n+1}}\n\\end{aligned}$$\n\n$$\\begin{aligned}\n T_N(x) &= \\sum^N_{n=0} (-1)^n (x-1)^n \\Rightarrow \\\\\n T_2(x) &= 1 - (x - 1) + (x - 1)^2\n\\end{aligned}$$\n\n$$\\begin{aligned}\n R_N(x) &= \\frac{(-1)^{n+1}(x - 1)^{n+1}}{c^{n+2}} \\Rightarrow \\\\\n R_2(x) &= \\frac{-(x - 1)^{3}}{c^{4}}\n\\end{aligned}$$\n\n\n```python\nx = np.linspace(0.8, 2, 100)\nT_N = 1.0 - (x-1) + (x-1)**2\nR_N = -(x-1.0)**3 / (1.1**4)\n\nplt.plot(x, T_N, 'r', x, 1.0 / x, 'k', x, R_N, 'b')\nplt.plot(1.0, 1.0, 'o', markersize=10)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"$f(x)$, $T_N(x)$, $R_N(x)$\")\n\nplt.legend([\"$T_N(x)$\", \"$f(x)$\", \"$R_N(x)$\"], loc=8)\nplt.show()\n```\n\n# En esta celda haz tus comentarios\n\n\nEsta cosa con esta vaina quizas tal vez-.-.--\n\n\n\n\n\n\n\n\n\n## Error de punto flotante\n\nErrores surgen de aproximar n\u00fameros reales con n\u00fameros de precisi\u00f3n finita\n\n$$\\pi \\approx 3.14$$\n\no $\\frac{1}{3} \\approx 0.333333333$ en decimal, los resultados forman un n\u00famero finito de registros para representar cada n\u00famero.\n\n### Sistemas de punto flotante\n\nLos n\u00fameros en sistemas de punto flotante se representan como una serie de bits que representan diferentes partes de un n\u00famero. En los sistemas de punto flotante normalizados, existen algunas convenciones est\u00e1ndar para el uso de estos bits. En general, los n\u00fameros se almacenan dividi\u00e9ndolos en la forma\n\n$$F = \\pm d_1 . d_2 d_3 d_4 \\ldots d_p \\times \\beta^E$$\n\ndonde\n\n1. $\\pm$ es un bit \u00fanico y representa el signo del n\u00famero.\n\n\n2. $d_1 . d_2 d_3 d_4 \\ldots d_p$ es la *mantisa*. observe que, t\u00e9cnicamente, el decimal se puede mover, pero en general, utilizando la notaci\u00f3n cient\u00edfica, el decimal siempre se puede colocar en esta ubicaci\u00f3n. Los digitos $d_2 d_3 d_4 \\ldots d_p$ son llamados la *fracci\u00f3n* con $p$ digitos de precisi\u00f3n. Los sistemas normalizados espec\u00edficamente ponen el punto decimal en el frente y asume $d_1 \\neq 0$ a menos que el n\u00famero sea exactamente $0$.\n\n\n3. $\\beta$ es la *base*. Para el sistema binario $\\beta = 2$, para decimal $\\beta = 10$, etc.\n\n\n4. $E$ es el *exponente*, un entero en el rango $[E_{\\min}, E_{\\max}]$\n\nLos puntos importantes en cualquier sistema de punto flotante es\n\n1. Existe un conjunto discreto y finito de n\u00fameros representables.\n\n\n2. Estos n\u00fameros representables no est\u00e1n distribuidos uniformemente en la l\u00ednea real\n\n\n3. La aritm\u00e9tica en sistemas de punto flotante produce resultados diferentes de la aritm\u00e9tica de precisi\u00f3n infinita (es decir, matem\u00e1tica \"real\")\n\n### Propiedades de los sistemas de punto flotante\n\nTodos los sistemas de punto flotante se caracterizan por varios n\u00fameros importantes\n\n- N\u00famero normalizado reducido (underflow si est\u00e1 por debajo, relacionado con n\u00fameros sub-normales alrededor de cero)\n\n\n- N\u00famero normalizado m\u00e1s grande (overflow)\n\n\n- Cero\n\n\n- $\\epsilon$ o $\\epsilon_{mach}$\n\n\n- `Inf` y `nan`\n\n***Ejemplo: Sistema de juguete***\n\nConsidere el sistema decimal de 2 digitos de precisi\u00f3n (normalizado)\n\n$$f = \\pm d_1 . d_2 \\times 10^E$$\n\ncon $E \\in [-2, 0]$.\n\n**Numero y distribuci\u00f3n de n\u00fameros**\n\n\n1. 
Cu\u00e1ntos n\u00fameros pueden representarse con este sistema?\n\n\n2. Cu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n3. Cu\u00e1les son los l\u00edmites underflow y overflow?\n\nCu\u00e1ntos n\u00fameros pueden representarse con este sistema?\n\n$$f = \\pm d_1 . d_2 \\times 10^E ~~~ \\text{with} E \\in [-2, 0]$$\n\n$$2 \\times 9 \\times 10 \\times 3 + 1 = 541$$\n\nCu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n```python\nd_1_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]\nd_2_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nE_values = [0, -1, -2]\n\nfig = plt.figure(figsize=(10.0, 1.0))\naxes = fig.add_subplot(1, 1, 1)\n\nfor E in E_values:\n for d1 in d_1_values:\n for d2 in d_2_values:\n axes.plot( (d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)\n axes.plot(-(d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)\n \naxes.plot(0.0, 0.0, '+', markersize=20)\naxes.plot([-10.0, 10.0], [0.0, 0.0], 'k')\n\naxes.set_title(\"Distribuci\u00f3n de Valores\")\naxes.set_yticks([])\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"\")\naxes.set_xlim([-0.1, 0.1])\nplt.show()\n```\n\nCu\u00e1les son los l\u00edmites superior (overflow) e inferior (underflow)?\n\n- El menor n\u00famero que puede ser representado (underflow) es: $1.0 \\times 10^{-2} = 0.01$\n\n\n\n- El mayor n\u00famero que puede ser representado (overflow) es: $9.9 \\times 10^0 = 9.9$\n\n### Sistema Binario\n\nConsidere el sistema en base 2 de 2 d\u00edgitos de precisi\u00f3n\n\n$$f=\\pm d_1 . d_2 \\times 2^E \\quad \\text{with} \\quad E \\in [-1, 1]$$\n\n\n#### Numero y distribuci\u00f3n de n\u00fameros**\n\n\n1. Cu\u00e1ntos n\u00fameros pueden representarse con este sistema?\n\n\n2. Cu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n3. Cu\u00e1les son los l\u00edmites underflow y overflow?\n\nCu\u00e1ntos n\u00fameros pueden representarse en este sistema?\n\n\n$$f=\\pm d_1 . d_2 \\times 2^E ~~~~ \\text{con} ~~~~ E \\in [-1, 1]$$\n\n$$ 2 \\times 1 \\times 2 \\times 3 + 1 = 13$$\n\nCu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n```python\nd_1_values = [1]\nd_2_values = [0, 1]\nE_values = [1, 0, -1]\n\nfig = plt.figure(figsize=(10.0, 1.0))\naxes = fig.add_subplot(1, 1, 1)\n\nfor E in E_values:\n for d1 in d_1_values:\n for d2 in d_2_values:\n axes.plot( (d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)\n axes.plot(-(d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)\n \naxes.plot(0.0, 0.0, 'r+', markersize=20)\naxes.plot([-4.5, 4.5], [0.0, 0.0], 'k')\n\naxes.set_title(\"Distribuci\u00f3n de Valores\")\naxes.set_yticks([])\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"\")\naxes.set_xlim([-3.5, 3.5])\nplt.show()\n```\n\nCu\u00e1les son los l\u00edmites superior (*overflow*) e inferior (*underflow*)?\n\n- El menor n\u00famero que puede ser representado (*underflow*) es: $1.0 \\times 2^{-1} = 0.5$\n\n\n\n\n- El mayor n\u00famero que puede ser representado (*overflow*) es: $1.1 \\times 2^1 = 3$\n\nObserve que estos n\u00fameros son en sistema binario. \n\nUna r\u00e1pida regla de oro:\n\n$$2^3 2^2 2^1 2^0 . 2^{-1} 2^{-2} 2^{-3}$$\n\ncorresponde a\n\n8s, 4s, 2s, 1s . 
mitades, cuartos, octavos, $\\ldots$\n\n### Sistema real - IEEE 754 sistema binario de punto flotante\n\n#### Precisi\u00f3n simple\n\n- Almacenamiento total es de 32 bits\n\n\n- Exponente de 8 bits $\\Rightarrow E \\in [-126, 127]$\n\n\n- Fracci\u00f3n 23 bits ($p = 24$)\n\n\n```\ns EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF\n0 1 8 9 31\n```\n\nOverflow $= 2^{127} \\approx 3.4 \\times 10^{38}$\n\nUnderflow $= 2^{-126} \\approx 1.2 \\times 10^{-38}$\n\n$\\epsilon_{\\text{machine}} = 2^{-23} \\approx 1.2 \\times 10^{-7}$\n\n\n#### Precisi\u00f3n doble\n\n- Almacenamiento total asignado es 64 bits\n\n- Exponenete de 11 bits $\\Rightarrow E \\in [-1022, 1024]$\n\n- Fracci\u00f3n de 52 bits ($p = 53$)\n\n```\ns EEEEEEEEEE FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FF\n0 1 11 12 63\n```\nOverflow $= 2^{1024} \\approx 1.8 \\times 10^{308}$\n\nUnderflow $= 2^{-1022} \\approx 2.2 \\times 10^{-308}$\n\n$\\epsilon_{\\text{machine}} = 2^{-52} \\approx 2.2 \\times 10^{-16}$\n\n### Acceso de Python a n\u00fameros de la IEEE\n\nAccede a muchos par\u00e1metros importantes, como el epsilon de la m\u00e1quina\n\n```python\nimport numpy\nnumpy.finfo(float).eps\n```\n\n\n```python\nimport numpy\nnumpy.finfo(float).eps\n\nprint(numpy.finfo(numpy.float16))\nprint(numpy.finfo(numpy.float32))\nprint(numpy.finfo(float))\nprint(numpy.finfo(numpy.float128))\n```\n\n## Por qu\u00e9 deber\u00eda importarnos esto?\n\n- Aritm\u00e9tica de punto flotante no es conmutativa o asociativa\n\n\n- Errores de punto flotante compuestos, No asuma que la precisi\u00f3n doble es suficiente\n\n\n- Mezclar precisi\u00f3n es muy peligroso\n\n### Ejemplo 1: Aritm\u00e9tica simple\n\nAritm\u00e9tica simple $\\delta < \\epsilon_{\\text{machine}}$\n\n $$(1+\\delta) - 1 = 1 - 1 = 0$$\n\n $$1 - 1 + \\delta = \\delta$$\n\n### Ejemplo 2: Cancelaci\u00f3n catastr\u00f3fica\n\nMiremos qu\u00e9 sucede cuando sumamos dos n\u00fameros $x$ y $y$ cuando $x+y \\neq 0$. De hecho, podemos estimar estos l\u00edmites haciendo un an\u00e1lisis de error. Aqu\u00ed necesitamos presentar la idea de que cada operaci\u00f3n de punto flotante introduce un error tal que\n\n$$\n \\text{fl}(x ~\\text{op}~ y) = (x ~\\text{op}~ y) (1 + \\delta)\n$$\n\ndonde $\\text{fl}(\\cdot)$ es una funci\u00f3n que devuelve la representaci\u00f3n de punto flotante de la expresi\u00f3n encerrada, $\\text{op}$ es alguna operaci\u00f3n (ex. $+, -, \\times, /$), y $\\delta$ es el error de punto flotante debido a $\\text{op}$.\n\nDe vuelta a nuestro problema en cuesti\u00f3n. El error de coma flotante debido a la suma es\n\n$$\\text{fl}(x + y) = (x + y) (1 + \\delta).$$\n\n\nComparando esto con la soluci\u00f3n verdadera usando un error relativo tenemos\n\n$$\\begin{aligned}\n \\frac{(x + y) - \\text{fl}(x + y)}{x + y} &= \\frac{(x + y) - (x + y) (1 + \\delta)}{x + y} = \\delta.\n\\end{aligned}$$\n\nentonces si $\\delta = \\mathcal{O}(\\epsilon_{\\text{machine}})$ no estaremos muy preocupados.\n\nQue pasa si consideramos un error de punto flotante en la representaci\u00f3n de $x$ y $y$, $x \\neq y$, y decimos que $\\delta_x$ y $\\delta_y$ son la magnitud de los errores en su representaci\u00f3n. 
Asumiremos que esto constituye el error de punto flotante en lugar de estar asociado con la operaci\u00f3n en s\u00ed.\n\nDado todo esto, tendr\u00edamos\n\n$$\\begin{aligned}\n \\text{fl}(x + y) &= x (1 + \\delta_x) + y (1 + \\delta_y) \\\\\n &= x + y + x \\delta_x + y \\delta_y \\\\\n &= (x + y) \\left(1 + \\frac{x \\delta_x + y \\delta_y}{x + y}\\right)\n\\end{aligned}$$\n\nCalculando nuevamente el error relativo, tendremos\n\n$$\\begin{aligned}\n \\frac{x + y - (x + y) \\left(1 + \\frac{x \\delta_x + y \\delta_y}{x + y}\\right)}{x + y} &= 1 - \\left(1 + \\frac{x \\delta_x + y \\delta_y}{x + y}\\right) \\\\\n &= \\frac{x}{x + y} \\delta_x + \\frac{y}{x + y} \\delta_y \\\\\n &= \\frac{1}{x + y} (x \\delta_x + y \\delta_y)\n\\end{aligned}$$\n\nLo importante aqu\u00ed es que ahora el error depende de los valores de $x$ y $y$, y m\u00e1s importante a\u00fan, su suma. De particular preocupaci\u00f3n es el tama\u00f1o relativo de $x + y$. A medida que se acerca a cero en relaci\u00f3n con las magnitudes de $x$ y $y$, el error podr\u00eda ser arbitrariamente grande. Esto se conoce como ***cancelaci\u00f3n catastr\u00f3fica***.\n\n\n```python\ndx = numpy.array([10**(-n) for n in range(1, 16)])\nx = 1.0 + dx\ny = -numpy.ones(x.shape)\nerror = numpy.abs(x + y - dx) / (dx)\n\nfig = plt.figure()\nfig.set_figwidth(fig.get_figwidth() * 2)\n\naxes = fig.add_subplot(1, 2, 1)\naxes.loglog(dx, x + y, 'o-')\naxes.set_xlabel(\"$\\Delta x$\")\naxes.set_ylabel(\"$x + y$\")\naxes.set_title(\"$\\Delta x$ vs. $x+y$\")\n\naxes = fig.add_subplot(1, 2, 2)\naxes.loglog(dx, error, 'o-')\naxes.set_xlabel(\"$\\Delta x$\")\naxes.set_ylabel(\"$|x + y - \\Delta x| / \\Delta x$\")\naxes.set_title(\"Diferencia entre $x$ y $y$ vs. Error relativo\")\n\nplt.show()\n```\n\n### Ejemplo 3: Evaluaci\u00f3n de una funci\u00f3n\n\nConsidere la funci\u00f3n\n\n$$\n f(x) = \\frac{1 - \\cos x}{x^2}\n$$\n\ncon $x\\in[-10^{-4}, 10^{-4}]$. 
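Como ilustración adicional (un bosquejo que no forma parte del material original), la cancelación catastrófica en esta función puede evitarse reescribiéndola con la identidad $1 - \cos x = 2\sin^2(x/2)$; el comportamiento de la forma directa se analiza a continuación, y aquí solo se muestra la alternativa estable, asumiendo precisión simple para hacer visible el efecto:

```python
import numpy as np

x = np.float32(1e-4)

# Forma directa: 1 - cos(x) cancela casi todos los digitos significativos en float32
f_directa = (np.float32(1.0) - np.cos(x)) / x**2

# Forma equivalente sin cancelacion: (1 - cos x)/x^2 = 2 sin^2(x/2)/x^2
f_estable = np.float32(2.0) * np.sin(x / np.float32(2.0))**2 / x**2

print(f_directa, f_estable)   # la forma estable permanece cerca de 0.5
```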
\n\nTomando el l\u00edmite cuando $x \\rightarrow 0$ podemos ver qu\u00e9 comportamiento esperar\u00edamos ver al evaluar esta funci\u00f3n:\n\n$$\n \\lim_{x \\rightarrow 0} \\frac{1 - \\cos x}{x^2} = \\lim_{x \\rightarrow 0} \\frac{\\sin x}{2 x} = \\lim_{x \\rightarrow 0} \\frac{\\cos x}{2} = \\frac{1}{2}.\n$$\n\n\u00bfQu\u00e9 hace la representaci\u00f3n de punto flotante?\n\n\n```python\nx = numpy.linspace(-1e-3, 1e-3, 100, dtype=numpy.float32)\nerror = (0.5 - (1.0 - numpy.cos(x)) / x**2) / 0.5\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, error, 'o')\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"Error Relativo\")\n```\n\n### Ejemplo 4: Evaluaci\u00f3n de un Polinomio\n\n $$f(x) = x^7 - 7x^6 + 21 x^5 - 35 x^4 + 35x^3-21x^2 + 7x - 1$$\n\n\n```python\nx = numpy.linspace(0.988, 1.012, 1000, dtype=numpy.float16)\ny = x**7 - 7.0 * x**6 + 21.0 * x**5 - 35.0 * x**4 + 35.0 * x**3 - 21.0 * x**2 + 7.0 * x - 1.0\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, y, 'r')\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"y\")\naxes.set_ylim((-0.1, 0.1))\naxes.set_xlim((x[0], x[-1]))\nplt.show()\n```\n\n### Ejemplo 5: Evaluaci\u00f3n de una funci\u00f3n racional\n\nCalcule $f(x) = x + 1$ por la funci\u00f3n $$F(x) = \\frac{x^2 - 1}{x - 1}$$\n\n\u00bfCu\u00e1l comportamiento esperar\u00edas encontrar?\n\n\n```python\nx = numpy.linspace(0.5, 1.5, 101, dtype=numpy.float16)\nf_hat = (x**2 - 1.0) / (x - 1.0)\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, numpy.abs(f_hat - (x + 1.0)))\naxes.set_xlabel(\"$x$\")\naxes.set_ylabel(\"Error Absoluto\")\nplt.show()\n```\n\n## Combinaci\u00f3n de error\n\nEn general, nos debemos ocupar de la combinaci\u00f3n de error de truncamiento con el error de punto flotante.\n\n- Error de Truncamiento: errores que surgen de la aproximaci\u00f3n de una funci\u00f3n, truncamiento de una serie.\n\n$$\\sin x \\approx x - \\frac{x^3}{3!} + \\frac{x^5}{5!} + O(x^7)$$\n\n\n- Error de punto flotante: errores derivados de la aproximaci\u00f3n de n\u00fameros reales con n\u00fameros de precisi\u00f3n finita\n\n$$\\pi \\approx 3.14$$\n\no $\\frac{1}{3} \\approx 0.333333333$ en decimal, los resultados forman un n\u00famero finito de registros para representar cada n\u00famero.\n\n### Ejemplo 1:\n\nConsidere la aproximaci\u00f3n de diferencias finitas donde $f(x) = e^x$ y estamos evaluando en $x=1$\n\n$$f'(x) \\approx \\frac{f(x + \\Delta x) - f(x)}{\\Delta x}$$\n\nCompare el error entre disminuir $\\Delta x$ y la verdadera solucion $f'(1) = e$\n\n\n```python\ndelta_x = numpy.linspace(1e-20, 5.0, 100)\ndelta_x = numpy.array([2.0**(-n) for n in range(1, 60)])\nx = 1.0\nf_hat_1 = (numpy.exp(x + delta_x) - numpy.exp(x)) / (delta_x)\nf_hat_2 = (numpy.exp(x + delta_x) - numpy.exp(x - delta_x)) / (2.0 * delta_x)\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.loglog(delta_x, numpy.abs(f_hat_1 - numpy.exp(1)), 'o-', label=\"Unilateral\")\naxes.loglog(delta_x, numpy.abs(f_hat_2 - numpy.exp(1)), 's-', label=\"Centrado\")\naxes.legend(loc=3)\naxes.set_xlabel(\"$\\Delta x$\")\naxes.set_ylabel(\"Error Absoluto\")\nplt.show()\n```\n\n### Ejemplo 2:\n\nEval\u00fae $e^x$ con la serie de *Taylor*\n\n$$e^x = \\sum^\\infty_{n=0} \\frac{x^n}{n!}$$\n\npodemos elegir $n< \\infty$ que puede aproximarse $e^x$ en un rango dado $x \\in [a,b]$ tal que el error relativo $E$ satisfaga $E<8 \\cdot \\varepsilon_{\\text{machine}}$?\n\n\u00bfCu\u00e1l podr\u00eda ser una mejor manera de simplemente evaluar el polinomio de Taylor directamente por varios 
$N$?\n\n\n```python\nimport scipy.special\n\ndef my_exp(x, N=10):\n value = 0.0\n for n in range(N + 1):\n value += x**n / scipy.special.factorial(n)\n \n return value\n\nx = numpy.linspace(-2, 2, 100, dtype=numpy.float32)\nfor N in range(1, 50):\n error = numpy.abs((numpy.exp(x) - my_exp(x, N=N)) / numpy.exp(x))\n if numpy.all(error < 8.0 * numpy.finfo(float).eps):\n break\n\nprint(N)\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, error)\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"Error Relativo\")\nplt.show()\n```\n\n### Ejemplo 3: Error relativo\n\nDigamos que queremos calcular el error relativo de dos valores $x$ y $y$ usando $x$ como valor de normalizaci\u00f3n\n\n$$\n E = \\frac{x - y}{x}\n$$\ny\n$$\n E = 1 - \\frac{y}{x}\n$$\n\nson equivalentes. En precisi\u00f3n finita, \u00bfqu\u00e9 forma pidr\u00eda esperarse que sea m\u00e1s precisa y por qu\u00e9?\n\nEjemplo tomado de [blog](https://nickhigham.wordpress.com/2017/08/14/how-and-how-not-to-compute-a-relative-error/) posteado por Nick Higham*\n\nUsando este modelo, la definici\u00f3n original contiene dos operaciones de punto flotante de manera que\n\n$$\\begin{aligned}\n E_1 = \\text{fl}\\left(\\frac{x - y}{x}\\right) &= \\text{fl}(\\text{fl}(x - y) / x) \\\\\n &= \\left[ \\frac{(x - y) (1 + \\delta_+)}{x} \\right ] (1 + \\delta_/) \\\\\n &= \\frac{x - y}{x} (1 + \\delta_+) (1 + \\delta_/)\n\\end{aligned}$$\n\nPara la otra formulaci\u00f3n tenemos\n\n$$\\begin{aligned}\n E_2 = \\text{fl}\\left( 1 - \\frac{y}{x} \\right ) &= \\text{fl}\\left(1 - \\text{fl}\\left(\\frac{y}{x}\\right) \\right) \\\\\n &= \\left(1 - \\frac{y}{x} (1 + \\delta_/) \\right) (1 + \\delta_-)\n\\end{aligned}$$\n\nSi suponemos que todos las $\\text{op}$s tienen magnitudes de error similares, entonces podemos simplificar las cosas dejando que \n\n$$\n |\\delta_\\ast| \\le \\epsilon.\n$$\n\nPara comparar las dos formulaciones, nuevamente usamos el error relativo entre el error relativo verdadero $e_i$ y nuestras versiones calculadas $E_i$\n\nDefinici\u00f3n original\n\n$$\\begin{aligned}\n \\frac{e - E_1}{e} &= \\frac{\\frac{x - y}{x} - \\frac{x - y}{x} (1 + \\delta_+) (1 + \\delta_/)}{\\frac{x - y}{x}} \\\\\n &\\le 1 - (1 + \\epsilon) (1 + \\epsilon) = 2 \\epsilon + \\epsilon^2\n\\end{aligned}$$\n\nDefinici\u00f3n manipulada:\n\n$$\\begin{aligned}\n \\frac{e - E_2}{e} &= \\frac{e - \\left[1 - \\frac{y}{x}(1 + \\delta_/) \\right] (1 + \\delta_-)}{e} \\\\\n &= \\frac{e - \\left[e - \\frac{y}{x} \\delta_/) \\right] (1 + \\delta_-)}{e} \\\\\n &= \\frac{e - \\left[e + e\\delta_- - \\frac{y}{x} \\delta_/ - \\frac{y}{x} \\delta_/ \\delta_-)) \\right] }{e} \\\\\n &= - \\delta_- + \\frac{1}{e} \\frac{y}{x} \\left(\\delta_/ + \\delta_/ \\delta_- \\right) \\\\\n &= - \\delta_- + \\frac{1 -e}{e} \\left(\\delta_/ + \\delta_/ \\delta_- \\right) \\\\\n &\\le \\epsilon + \\left |\\frac{1 - e}{e}\\right | (\\epsilon + \\epsilon^2)\n\\end{aligned}$$\n\nVemos entonces que nuestro error de punto flotante depender\u00e1 de la magnitud relativa de $e$\n\n\n```python\n# Based on the code by Nick Higham\n# https://gist.github.com/higham/6f2ce1cdde0aae83697bca8577d22a6e\n# Compares relative error formulations using single precision and compared to double precision\n\nN = 501 # Note: Use 501 instead of 500 to avoid the zero value\nd = numpy.finfo(numpy.float32).eps * 1e4\na = 3.0\nx = a * numpy.ones(N, dtype=numpy.float32)\ny = [x[i] + numpy.multiply((i - numpy.divide(N, 2.0, dtype=numpy.float32)), d, dtype=numpy.float32) for i in range(N)]\n\n# Compute errors and 
\"true\" error\nrelative_error = numpy.empty((2, N), dtype=numpy.float32)\nrelative_error[0, :] = numpy.abs(x - y) / x\nrelative_error[1, :] = numpy.abs(1.0 - y / x)\nexact = numpy.abs( (numpy.float64(x) - numpy.float64(y)) / numpy.float64(x))\n\n# Compute differences between error calculations\nerror = numpy.empty((2, N))\nfor i in range(2):\n error[i, :] = numpy.abs((relative_error[i, :] - exact) / numpy.abs(exact))\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.semilogy(y, error[0, :], '.', markersize=10, label=\"$|x-y|/|x|$\")\naxes.semilogy(y, error[1, :], '.', markersize=10, label=\"$|1-y/x|$\")\n\naxes.grid(True)\naxes.set_xlabel(\"y\")\naxes.set_ylabel(\"Error Relativo\")\naxes.set_xlim((numpy.min(y), numpy.max(y)))\naxes.set_ylim((5e-9, numpy.max(error[1, :])))\naxes.set_title(\"Comparasi\u00f3n Error Relativo\")\naxes.legend()\nplt.show()\n```\n\nAlgunos enlaces de utilidad con respecto al punto flotante IEEE:\n\n- [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)\n\n\n- [IEEE 754 Floating Point Calculator](http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html)\n\n\n- [Numerical Computing with IEEE Floating Point Arithmetic](http://epubs.siam.org/doi/book/10.1137/1.9780898718072)\n\n## Operaciones de conteo\n\n- ***Error de truncamiento:*** *\u00bfPor qu\u00e9 no usar m\u00e1s t\u00e9rminos en la serie de Taylor?*\n\n\n- ***Error de punto flotante:*** *\u00bfPor qu\u00e9 no utilizar la mayor precisi\u00f3n posible?*\n\n### Ejemplo 1: Multiplicaci\u00f3n matriz - vector\n\nSea $A, B \\in \\mathbb{R}^{N \\times N}$ y $x \\in \\mathbb{R}^N$.\n\n1. Cuenta el n\u00famero aproximado de operaciones que tomar\u00e1 para calcular $Ax$\n\n2. Hacer lo mismo para $AB$\n\n***Producto Matriz-vector:*** Definiendo $[A]_i$ como la $i$-\u00e9sima fila de $A$ y $A_{ij}$ como la $i$,$j$-\u00e9sima entrada entonces\n\n$$\n A x = \\sum^N_{i=1} [A]_i \\cdot x = \\sum^N_{i=1} \\sum^N_{j=1} A_{ij} x_j\n$$\n\nTomando un caso en particular, siendo $N=3$, entonces la operaci\u00f3n de conteo es\n\n$$\n A x = [A]_1 \\cdot v + [A]_2 \\cdot v + [A]_3 \\cdot v = \\begin{bmatrix}\n A_{11} \\times v_1 + A_{12} \\times v_2 + A_{13} \\times v_3 \\\\\n A_{21} \\times v_1 + A_{22} \\times v_2 + A_{23} \\times v_3 \\\\\n A_{31} \\times v_1 + A_{32} \\times v_2 + A_{33} \\times v_3\n \\end{bmatrix}\n$$\n\nEsto son 15 operaciones (6 sumas y 9 multiplicaciones)\n\nTomando otro caso, siendo $N=4$, entonces el conteo de operaciones es:\n\n$$\n A x = [A]_1 \\cdot v + [A]_2 \\cdot v + [A]_3 \\cdot v = \\begin{bmatrix}\n A_{11} \\times v_1 + A_{12} \\times v_2 + A_{13} \\times v_3 + A_{14} \\times v_4 \\\\\n A_{21} \\times v_1 + A_{22} \\times v_2 + A_{23} \\times v_3 + A_{24} \\times v_4 \\\\\n A_{31} \\times v_1 + A_{32} \\times v_2 + A_{33} \\times v_3 + A_{34} \\times v_4 \\\\\n A_{41} \\times v_1 + A_{42} \\times v_2 + A_{43} \\times v_3 + A_{44} \\times v_4 \\\\\n \\end{bmatrix}\n$$\n\nEsto lleva a 28 operaciones (12 sumas y 16 multiplicaciones).\n\nGeneralizando, hay $N^2$ mutiplicaciones y $N(N-1)$ sumas para un total de \n\n$$\n \\text{operaciones} = N (N - 1) + N^2 = \\mathcal{O}(N^2).\n$$\n\n***Producto Matriz-Matriz ($AB$):*** Definiendo $[B]_j$ como la $j$-\u00e9sima columna de $B$ entonces\n\n$$\n (A B)_{ij} = \\sum^N_{i=1} \\sum^N_{j=1} [A]_i \\cdot [B]_j\n$$\n\nEl producto interno de dos vectores es representado por \n\n$$\n a \\cdot b = \\sum^N_{i=1} a_i b_i\n$$\n\nconduce a $\\mathcal{O}(3N)$ operaciones. 
Como hay $N^2$ entradas en la matriz resultante, tendr\u00edamos $\\mathcal{O}(N^3)$ operaciones\n\nExisten m\u00e9todos para realizar la multiplicaci\u00f3n matriz - matriz m\u00e1s r\u00e1pido. En la siguiente figura vemos una colecci\u00f3n de algoritmos a lo largo del tiempo que han podido limitar el n\u00famero de operaciones en ciertas circunstancias\n$$\n \\mathcal{O}(N^\\omega)\n$$\n\n\n### Ejemplo 2: M\u00e9todo de Horner para evaluar polinomios\n\nDado\n\n$$P_N(x) = a_0 + a_1 x + a_2 x^2 + \\ldots + a_N x^N$$ \n\no\n\n\n$$P_N(x) = p_1 x^N + p_2 x^{N-1} + p_3 x^{N-2} + \\ldots + p_{N+1}$$\n\nqueremos encontrar la mejor v\u00eda para evaluar $P_N(x)$\n\nPrimero considere dos v\u00edas para escribir $P_3$\n\n$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$\n\ny usando multiplicaci\u00f3n anidada\n\n$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$\n\nConsidere cu\u00e1ntas operaciones se necesitan para cada...\n\n$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$\n\n$$P_3(x) = \\overbrace{p_1 \\cdot x \\cdot x \\cdot x}^3 + \\overbrace{p_2 \\cdot x \\cdot x}^2 + \\overbrace{p_3 \\cdot x}^1 + p_4$$\n\nSumando todas las operaciones, en general podemos pensar en esto como una pir\u00e1mide\n\n\n\npodemos estimar de esta manera que el algoritmo escrito de esta manera tomar\u00e1 aproximadamente $\\mathcal{O}(N^2/2)$ operaciones para completar.\n\nMirando nuetros otros medios de evaluaci\u00f3n\n\n$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$\n\nAqu\u00ed encontramos que el m\u00e9todo es $\\mathcal{O}(N)$ (el 2 generalmente se ignora en estos casos). Lo importante es que la primera evaluaci\u00f3n es $\\mathcal{O}(N^2)$ y la segunda $\\mathcal{O}(N)$!\n\n### Algoritmo\n\n\nComplete la funci\u00f3n e implemente el m\u00e9todo de *Horner*\n\n```python\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]\n \n The value x should be a float.\n \"\"\"\n pass\n```\n\n\n```python\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]\n \n The value x should be a float.\n \"\"\"\n ### ADD CODE HERE\n pass\n```\n\n\n```python\n# Scalar version\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]\n \n The value x should be a float.\n \"\"\"\n \n y = p[0]\n for coefficient in p[1:]:\n y = y * x + coefficient\n \n return y\n\n# Vectorized version\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... 
+ p[n-1] x + p[n]\n \n The value x can by a NumPy ndarray.\n \"\"\"\n \n y = numpy.ones(x.shape) * p[0]\n for coefficient in p[1:]:\n y = y * x + coefficient\n \n return y\n\np = [1, -3, 10, 4, 5, 5]\nx = numpy.linspace(-10, 10, 100)\nplt.plot(x, eval_poly(p, x))\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "129885372fc1774cb414bc1497c4b632f8558a18", "size": 197004, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Cap_01_Error.ipynb", "max_stars_repo_name": "UNADCdD/M-todos-Num-ricos", "max_stars_repo_head_hexsha": "539838f7f72c365e515d6fe81e91296f6e8826ea", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-04T00:26:35.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-04T00:26:35.000Z", "max_issues_repo_path": "Cap_01_Error.ipynb", "max_issues_repo_name": "UNADCdD/M-todos-Num-ricos", "max_issues_repo_head_hexsha": "539838f7f72c365e515d6fe81e91296f6e8826ea", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Cap_01_Error.ipynb", "max_forks_repo_name": "UNADCdD/M-todos-Num-ricos", "max_forks_repo_head_hexsha": "539838f7f72c365e515d6fe81e91296f6e8826ea", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-09-08T01:36:50.000Z", "max_forks_repo_forks_event_max_datetime": "2020-09-08T01:36:50.000Z", "avg_line_length": 114.0069444444, "max_line_length": 49936, "alphanum_fraction": 0.8389981929, "converted": true, "num_tokens": 11499, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.38121955219593834, "lm_q2_score": 0.2450850131323717, "lm_q1q2_score": 0.0934311989562584}} {"text": "# Example of DOV search methods for CPT measurements (sonderingen)\n\n[](https://mybinder.org/v2/gh/DOV-Vlaanderen/pydov/master?filepath=docs%2Fnotebooks%2Fsearch_sonderingen.ipynb)\n\n## Use cases explained below\n* Get CPT measurements in a bounding box\n* Get CPT measurements with specific properties\n* Get CPT measurements in a bounding box based on specific properties\n* Select CPT measurements in a municipality and return depth\n* Get CPT measurements based on fields not available in the standard output dataframe\n* Get CPT measurements data, returning fields not available in the standard output dataframe\n* Get CPT measurements in a municipality and where groundwater related data are available\n\n\n```python\n%matplotlib inline\nimport inspect, sys\n```\n\n\n```python\nimport pydov\n```\n\n## Get information about the datatype 'Sondering'\n\n\n```python\nfrom pydov.search.sondering import SonderingSearch\nsondering = SonderingSearch()\n```\n\nA description is provided for the 'Sondering' datatype:\n\n\n```python\nsondering.get_description()\n```\n\n\n\n\n 'In DOV worden de resultaten van sonderingen ter beschikking gesteld. Bij het uitvoeren van de sondering wordt een sondeerpunt met conus bij middel van buizen statisch de grond ingedrukt. Continu of met bepaalde diepte-intervallen wordt de weerstand aan de conuspunt, de plaatselijke wrijvingsweerstand en/of de totale indringingsweerstand opgemeten. Eventueel kan aanvullend de waterspanning in de grond rond de conus tijdens de sondering worden opgemeten met een waterspanningsmeter. Het op diepte drukken van de sondeerbuizen gebeurt met een indrukapparaat. 
De nodige reactie voor het indrukken van de buizen wordt geleverd door een verankering en/of door het gewicht van de sondeerwagen. De totale indrukcapaciteit varieert van 25 kN tot 250 kN, afhankelijk van apparaat en opstellingswijze.'\n\n\n\nThe different fields that are available for objects of the 'Sondering' datatype can be requested with the get_fields() method:\n\n\n```python\nfields = sondering.get_fields()\n\n# print available fields\nfor f in fields.values():\n print(f['name'])\n```\n\n id\n sondeernummer\n pkey_sondering\n weerstandsdiagram\n meetreeks\n x\n y\n start_sondering_mtaw\n gemeente\n diepte_sondering_van\n diepte_sondering_tot\n datum_aanvang\n uitvoerder\n conus\n sondeermethode\n apparaat\n informele_stratigrafie\n formele_stratigrafie\n hydrogeologische_stratigrafie\n opdrachten\n datum_gw_meting\n diepte_gw_m\n z\n qc\n Qt\n fs\n u\n i\n\n\nYou can get more information of a field by requesting it from the fields dictionary:\n* *name*: name of the field\n* *definition*: definition of this field\n* *cost*: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.\n* *notnull*: whether the field is mandatory or not\n* *type*: datatype of the values of this field\n\n\n```python\nfields['diepte_sondering_tot']\n```\n\n\n\n\n {'name': 'diepte_sondering_tot',\n 'definition': 'Maximumdiepte van de sondering ten opzichte van het aanvangspeil, in meter.',\n 'type': 'float',\n 'notnull': False,\n 'query': True,\n 'cost': 1}\n\n\n\nOptionally, if the values of the field have a specific domain the possible values are listed as *values*:\n\n\n```python\nfields['conus']['values']\n```\n\n\n\n\n {'E': None, 'M1': None, 'M2': None, 'M4': None, 'U': None, 'onbekend': None}\n\n\n\n## Example use cases\n\n### Get CPT measurements in a bounding box\n\nGet data for all the CPT measurements that are geographically located within the bounds of the specified box.\n\nThe coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.\n\n\n```python\nfrom pydov.util.location import Within, Box\n\ndf = sondering.search(location=Within(Box(152999, 206930, 153050, 207935)))\ndf.head()\n```\n\n [000/001] .\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mzqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.21.62.06NaNNaNNaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.43.64.26NaNNaNNaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.62.63.46NaNNaNNaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN0.84.05.66NaNNaNNaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100KNNaNNaN1.03.06.53NaNNaNNaN
\n
\n\n\n\nThe dataframe contains one CPT measurement where multiple measurement points. The available data are flattened to represent unique attributes per row of the dataframe.\n\nUsing the *pkey_sondering* field one can request the details of this borehole in a webbrowser:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/1973-016812\n\n\n### Get CPT measurements with specific properties\n\nNext to querying CPT based on their geographic location within a bounding box, we can also search for CPT measurements matching a specific set of properties. For this we can build a query using a combination of the 'Sondering' fields and operators provided by the WFS protocol.\n\nA list of possible operators can be found below:\n\n\n```python\n[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]\n```\n\n\n\n\n ['PropertyIsBetween',\n 'PropertyIsEqualTo',\n 'PropertyIsGreaterThan',\n 'PropertyIsGreaterThanOrEqualTo',\n 'PropertyIsLessThan',\n 'PropertyIsLessThanOrEqualTo',\n 'PropertyIsLike',\n 'PropertyIsNotEqualTo',\n 'PropertyIsNull',\n 'SortProperty']\n\n\n\nIn this example we build a query using the *PropertyIsEqualTo* operator to find all CPT measuremetns that are within the community (gemeente) of 'Herstappe':\n\n\n```python\nfrom owslib.fes import PropertyIsEqualTo\n\nquery = PropertyIsEqualTo(propertyname='gemeente',\n literal='Elsene')\ndf = sondering.search(query=query)\n\ndf.head()\n```\n\n [000/029] .............................\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mzqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.03.3NaNNaNNaNNaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.12.9NaNNaNNaNNaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.22.7NaNNaNNaNNaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.32.4NaNNaNNaNNaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25KNNaN1.971.43.6NaNNaNNaNNaN
\n
\n\n\n\nOnce again we can use the *pkey_sondering* as a permanent link to the information of these CPT measurements:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/1974-016926\n https://www.dov.vlaanderen.be/data/sondering/1976-030150\n https://www.dov.vlaanderen.be/data/sondering/1971-023321\n https://www.dov.vlaanderen.be/data/sondering/1976-013900\n https://www.dov.vlaanderen.be/data/sondering/1976-014640\n https://www.dov.vlaanderen.be/data/sondering/1974-016927\n https://www.dov.vlaanderen.be/data/sondering/1992-000338\n https://www.dov.vlaanderen.be/data/sondering/1971-022777\n https://www.dov.vlaanderen.be/data/sondering/1980-024719\n https://www.dov.vlaanderen.be/data/sondering/1976-030128\n https://www.dov.vlaanderen.be/data/sondering/1971-023323\n https://www.dov.vlaanderen.be/data/sondering/1980-024720\n https://www.dov.vlaanderen.be/data/sondering/1971-022775\n https://www.dov.vlaanderen.be/data/sondering/1992-000339\n https://www.dov.vlaanderen.be/data/sondering/1992-000335\n https://www.dov.vlaanderen.be/data/sondering/1975-014063\n https://www.dov.vlaanderen.be/data/sondering/1971-023091\n https://www.dov.vlaanderen.be/data/sondering/1976-030148\n https://www.dov.vlaanderen.be/data/sondering/1976-030140\n https://www.dov.vlaanderen.be/data/sondering/1971-023320\n https://www.dov.vlaanderen.be/data/sondering/1971-023322\n https://www.dov.vlaanderen.be/data/sondering/1971-022776\n https://www.dov.vlaanderen.be/data/sondering/1976-013899\n https://www.dov.vlaanderen.be/data/sondering/1976-014638\n https://www.dov.vlaanderen.be/data/sondering/1975-014064\n https://www.dov.vlaanderen.be/data/sondering/1976-013898\n https://www.dov.vlaanderen.be/data/sondering/1992-000337\n https://www.dov.vlaanderen.be/data/sondering/1971-023319\n https://www.dov.vlaanderen.be/data/sondering/1992-000336\n\n\n### Get CPT measurements in a bounding box based on specific properties\n\nWe can combine a query on attributes with a query on geographic location to get the CPT measurements within a bounding box that have specific properties.\n\nThe following example requests the CPT measurements with a depth greater than or equal to 2000 meters within the given bounding box.\n\n(Note that the datatype of the *literal* parameter should be a string, regardless of the datatype of this field in the output dataframe.)\n\n\n```python\nfrom owslib.fes import PropertyIsGreaterThanOrEqualTo\n\nquery = PropertyIsGreaterThanOrEqualTo(\n propertyname='diepte_sondering_tot',\n literal='20')\n\ndf = sondering.search(\n location=Within(Box(200000, 211000, 205000, 214000)),\n query=query\n )\n\ndf.head()\n```\n\n [000/021] .....................\n\n\n\n\n\n
*(output: the first five rows of the resulting dataframe, with columns pkey_sondering, sondeernummer, x, y, start_sondering_mtaw, diepte_sondering_van, diepte_sondering_tot, datum_aanvang, uitvoerder, sondeermethode, apparaat, datum_gw_meting, diepte_gw_m, z, qc, Qt, fs, u and i; the rows shown belong to CPT GEO-10/095-S1, measured on 2010-08-30 by VO - Afdeling Geotechniek with a continu elektrisch 200kN - TRACK-TRUCK rig)*
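A note on the location filter used above: the `Box` corners are given as lower-left x, lower-left y, upper-right x and upper-right y, in the Belgian Lambert 72 coordinate system (EPSG:31370) that DOV uses for the x and y fields. The sketch below only illustrates this parameter order; it assumes `Box` and `Within` come from `pydov.util.location`, as in recent pydov versions, so check the import used earlier in this notebook for your installation.

```python
# Sketch of the bounding-box filter used above (assumed import path: pydov.util.location).
from pydov.util.location import Box, Within

# Corners in EPSG:31370 (Belgian Lambert 72): lower-left x, lower-left y,
# upper-right x, upper-right y, all in metres.
bbox = Box(200000, 211000, 205000, 214000)
location_filter = Within(bbox)  # select CPT measurements located within the box
```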
\n\n\n\nWe can look at one of the CPT measurements in a webbrowser using its *pkey_sondering*:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/2015-055496\n https://www.dov.vlaanderen.be/data/sondering/2008-077545\n https://www.dov.vlaanderen.be/data/sondering/2008-077592\n https://www.dov.vlaanderen.be/data/sondering/2007-049201\n https://www.dov.vlaanderen.be/data/sondering/2008-077566\n https://www.dov.vlaanderen.be/data/sondering/2009-000053\n https://www.dov.vlaanderen.be/data/sondering/2009-000052\n https://www.dov.vlaanderen.be/data/sondering/2008-077556\n https://www.dov.vlaanderen.be/data/sondering/2010-062407\n https://www.dov.vlaanderen.be/data/sondering/2008-077579\n https://www.dov.vlaanderen.be/data/sondering/2008-077580\n https://www.dov.vlaanderen.be/data/sondering/2008-077564\n https://www.dov.vlaanderen.be/data/sondering/2008-077581\n https://www.dov.vlaanderen.be/data/sondering/2008-077577\n https://www.dov.vlaanderen.be/data/sondering/2008-077591\n https://www.dov.vlaanderen.be/data/sondering/2009-000054\n https://www.dov.vlaanderen.be/data/sondering/2015-054995\n https://www.dov.vlaanderen.be/data/sondering/2015-054999\n https://www.dov.vlaanderen.be/data/sondering/2007-049200\n https://www.dov.vlaanderen.be/data/sondering/2008-077565\n https://www.dov.vlaanderen.be/data/sondering/2008-077557\n\n\n### Select CPT measurements in a municipality and return depth\n\nWe can limit the columns in the output dataframe by specifying the *return_fields* parameter in our search.\n\nIn this example we query all the CPT measurements in the city of Ghent and return their depth:\n\n\n```python\nquery = PropertyIsEqualTo(propertyname='gemeente',\n literal='Gent')\ndf = sondering.search(query=query,\n return_fields=('diepte_sondering_tot',))\ndf.head()\n```\n\n\n\n\n
|   | diepte_sondering_tot |
|---|----------------------|
| 0 | 2.7 |
| 1 | 1.4 |
| 2 | 7.6 |
| 3 | 11.5 |
| 4 | 18.6 |
```python
df.describe()
```

|       | diepte_sondering_tot |
|-------|----------------------|
| count | 3589.000000 |
| mean  | 18.509772 |
| std   | 8.498644 |
| min   | 1.000000 |
| 25%   | 11.400000 |
| 50%   | 18.800000 |
| 75%   | 24.600000 |
| max   | 52.600000 |
\n\n```python\nax = df.boxplot()\nax.set_title('Distribution depth CPT measurements in Ghent');\nax.set_ylabel(\"depth (m)\")\n```\n\n### Get CPT measurements based on fields not available in the standard output dataframe\n\nTo keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, these fields can still be used to select CPT measurements, as illustrated below.\n\nFor example, make a selection of the CPT measurements in the municipality of Antwerp that were performed with conus type 'U':\n\n\n```python\nfrom owslib.fes import And\n\nquery = And([PropertyIsEqualTo(propertyname='gemeente',\n literal='Antwerpen'),\n PropertyIsEqualTo(propertyname='conus', \n literal='U')]\n )\ndf = sondering.search(query=query,\n return_fields=('pkey_sondering', 'sondeernummer', 'x', 'y', 'diepte_sondering_tot', 'datum_aanvang'))\ndf.head()\n```
|   | pkey_sondering | sondeernummer | x | y | diepte_sondering_tot | datum_aanvang |
|---|----------------|---------------|---|---|----------------------|---------------|
| 0 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 29.70 | 1993-03-02 |
| 1 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-02/111-S1 | 150347.3 | 214036.4 | 29.95 | 2002-12-17 |
| 2 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-04/123-SKD4-E | 146437.7 | 222317.5 | 4.45 | 2004-07-12 |
| 3 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-04/123-SKD6-E | 146523.9 | 222379.7 | 7.40 | 2004-07-14 |
| 4 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-04/123-SKD5-E | 146493.4 | 222298.8 | 1.65 | 2004-07-16 |
\n\n### Get CPT data, returning fields not available in the standard output dataframe\n\nAs noted in the previous example, not all available fields are included in the default output dataframe, to keep its size limited. However, you can request any available field by including it in the *return_fields* parameter of the search:\n\n\n```python\nquery = And([PropertyIsEqualTo(propertyname='gemeente', literal='Gent'),\n PropertyIsEqualTo(propertyname='conus', literal='U')])\n\ndf = sondering.search(query=query,\n return_fields=('pkey_sondering', 'sondeernummer', 'diepte_sondering_tot',\n 'conus', 'x', 'y'))\n\ndf.head()\n```
|   | pkey_sondering | sondeernummer | diepte_sondering_tot | conus | x | y |
|---|----------------|---------------|----------------------|-------|---|---|
| 0 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SV | 33.80 | U | 110241.6 | 204692.2 |
| 1 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SI | 15.65 | U | 110062.5 | 205051.4 |
| 2 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SII | 26.50 | U | 110107.0 | 204965.3 |
| 3 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIII | 16.50 | U | 110152.4 | 204876.1 |
| 4 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIV | 16.70 | U | 110197.8 | 204787.0 |
```python
df
```

*(output: the full dataframe of 22 CPT measurements with conus type 'U' in Ghent, with the same columns as above; the sondeernummers range over GEO-94/020-SI up to -SXII, GEO-94/096-SVII(CPT7) to -SIX(CPT9), GEO-97/002-S1 to -S3 and GEO-01/162-S1 to -S5)*
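Before deciding which extra fields to request, it can help to list all fields the CPT search type exposes and whether they can be used in a query. The snippet below is a minimal sketch; it assumes the `get_fields()` helper available on pydov search objects in recent versions, and the exact keys of the returned field descriptions may differ between pydov releases.

```python
# Sketch: inspect the available fields of the CPT search type (assumes sondering.get_fields()
# exists in the installed pydov version; key names in the field descriptions may vary).
fields = sondering.get_fields()

for name, info in fields.items():
    # 'query' (if present) indicates whether the field can be used in a search query.
    print(name, info.get('query'))
```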
\n\n## Resistivity plot\n\nThe data behind the resistivity plots reported by the online DOV application (see for example [this report](https://www.dov.vlaanderen.be/zoeken-ocdov/proxy-sondering/sondering/1993-001275/rapport/identifygrafiek?outputformaat=PDF)) is also accessible with the pydov package. Querying the data for this specific _sondering_:\n\n\n```python\nquery = PropertyIsEqualTo(propertyname='pkey_sondering',\n literal='https://www.dov.vlaanderen.be/data/sondering/1993-001275')\ndf_sond = sondering.search(query=query)\n\ndf_sond.head()\n```\n\n [000/001] .
*(output: the first five rows for CPT GEO-93/023-SII-E at x 152740.0, y 215493.0, measured on 1993-03-02 by MVG - Afdeling Geotechniek with a continu elektrisch 200KN rig over a depth of 0.0 to 29.7 m, with the depth z and the measurement columns qc, Qt, fs, u and i)*
\n\nWe have the depth (`z`) available, together with the values measured at each depth for the following variables (the Dutch DOV terms are given in parentheses):\n\n* `qc`: measured cone resistance (conusweerstand), expressed in MPa.\n* `Qt`: measured total resistance (totale weerstand), expressed in kN.\n* `fs`: measured local friction (plaatselijke kleefweerstand), expressed in kPa.\n* `u`: measured pore water pressure (porienwaterspanning), expressed in kPa.\n* `i`: measured inclination (inclinatie), expressed in degrees.\n\nTo recreate the resistivity plot, we also need the `resistivity number` (wrijvingsgetal `rf`), see the [DOV documentation](https://www.dov.vlaanderen.be/page/sonderingen):\n\n\\begin{equation}\nR_f = \\frac{f_s}{q_c}\n\\end{equation}\n\n**Notice:** $f_s$ is provided in kPa and $q_c$ in MPa; with these units, $f_s/q_c/10$ gives $R_f$ as a percentage, which is what the cell below computes.\n\nAdding `rf` to the dataframe:\n\n\n```python\ndf_sond[\"rf\"] = df_sond[\"fs\"]/df_sond[\"qc\"]/10 \n```\n\nRecreate the resistivity plot:\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndef make_patch_spines_invisible(ax):\n ax.set_frame_on(True)\n ax.patch.set_visible(False)\n for sp in ax.spines.values():\n sp.set_visible(False)\n```\n\n\n```python\nfig, ax0 = plt.subplots(figsize=(8, 12))\n\n# Prepare the individual axis\nax_qc = ax0.twiny()\nax_fs = ax0.twiny()\nax_u = ax0.twiny()\nax_rf = ax0.twiny()\n\nfor i, ax in enumerate([ax_qc, ax_fs, ax_u]):\n ax.spines[\"top\"].set_position((\"axes\", 1+0.05*(i+1)))\n make_patch_spines_invisible(ax)\n ax.spines[\"top\"].set_visible(True)\n\n# Plot the data on the axis\ndf_sond.plot(x=\"rf\", y=\"z\", label=\"rf\", ax=ax_rf, color='purple', legend=False)\ndf_sond.plot(x=\"qc\", y=\"z\", label=\"qc (MPa)\", ax=ax_qc, color='black', legend=False)\ndf_sond.plot(x=\"fs\", y=\"z\", label=\"fs (kPa)\", ax=ax_fs, color='green', legend=False)\ndf_sond.plot(x=\"u\", y=\"z\", label=\"u (kPa)\", ax=ax_u, color='red', \n legend=False, xlim=(-100, 300)) # ! 
300 is hardocded here for the example\n\n# styling and configuration\nax_rf.xaxis.label.set_color('purple')\nax_fs.xaxis.label.set_color('green')\nax_u.xaxis.label.set_color('red')\n\nax0.axes.set_visible(False)\nax_qc.axes.yaxis.set_visible(False)\nax_fs.axes.yaxis.set_visible(False)\nfor i, ax in enumerate([ax_rf, ax_qc, ax_fs, ax_u, ax0]):\n ax.spines[\"right\"].set_visible(False)\n ax.spines[\"bottom\"].set_visible(False)\n ax.xaxis.label.set_fontsize(15)\n ax.xaxis.set_label_coords(-0.05, 1+0.05*i)\n ax.spines['left'].set_position(('outward', 10))\n ax.spines['left'].set_bounds(0, 30)\nax_rf.set_xlim(0, 46)\n\nax_u.set_title(\"Resistivity plot CPT measurement GEO-93/023-SII-E\", fontsize=12)\n\nax0.invert_yaxis()\nax_rf.invert_xaxis()\nax_u.set_ylabel(\"Depth(m)\", fontsize=12)\nfig.legend(loc='lower center', ncol=4)\nfig.tight_layout()\n```\n\n## Visualize locations\n\nUsing Folium, we can display the results of our search on a map.\n\n\n```python\n# import the necessary modules (not included in the requirements of pydov!)\nimport folium\nfrom folium.plugins import MarkerCluster\nfrom pyproj import Transformer\n```\n\n\n```python\n# convert the coordinates to lat/lon for folium\ndef convert_latlon(x1, y1):\n transformer = Transformer.from_crs(\"epsg:31370\", \"epsg:4326\", always_xy=True)\n x2,y2 = transformer.transform(x1, y1)\n return x2, y2\n\ndf['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y'])) \n# convert to list\nloclist = df[['lat', 'lon']].values.tolist()\n```\n\n\n```python\n# initialize the Folium map on the centre of the selected locations, play with the zoom until ok\nfmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=11)\nmarker_cluster = MarkerCluster().add_to(fmap)\nfor loc in range(0, len(loclist)):\n folium.Marker(loclist[loc], popup=df['sondeernummer'][loc]).add_to(marker_cluster)\nfmap\n\n```\n\n\n\n\n
Make this Notebook Trusted to load map: File -> Trust Notebook
\n\n\n", "meta": {"hexsha": "14705b56c007c8599d0395ef99a7fd03959d6513", "size": 200415, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_stars_repo_name": "rebot/pydov", "max_stars_repo_head_hexsha": "1d5f0080440f4e0f983c8087aed9aec1624ba906", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_issues_repo_name": "rebot/pydov", "max_issues_repo_head_hexsha": "1d5f0080440f4e0f983c8087aed9aec1624ba906", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_forks_repo_name": "rebot/pydov", "max_forks_repo_head_hexsha": "1d5f0080440f4e0f983c8087aed9aec1624ba906", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 86.2, "max_line_length": 81824, "alphanum_fraction": 0.7573784397, "converted": true, "num_tokens": 14744, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.194367818018817, "lm_q1q2_score": 0.09338959225673736}} {"text": "#
Econometrics HW_08
\n\n**
11510691 \u7a0b\u8fdc\u661f$\\DeclareMathOperator*{\\argmin}{argmin}\n\\DeclareMathOperator*{\\argmax}{argmax}\n\\DeclareMathOperator*{\\plim}{plim}\n\\newcommand{\\using}[1]{\\stackrel{\\mathrm{#1}}{=}}\n\\newcommand{\\ffrac}{\\displaystyle \\frac}\n\\newcommand{\\asim}{\\overset{\\text{a}}{\\sim}}\n\\newcommand{\\space}{\\text{ }}\n\\newcommand{\\bspace}{\\;\\;\\;\\;}\n\\newcommand{\\QQQ}{\\boxed{?\\:}}\n\\newcommand{\\void}{\\left.\\right.}\n\\newcommand{\\Tran}[1]{{#1}^{\\mathrm{T}}}\n\\newcommand{\\d}[1]{\\displaystyle{#1}}\n\\newcommand{\\CB}[1]{\\left\\{ #1 \\right\\}}\n\\newcommand{\\SB}[1]{\\left[ #1 \\right]}\n\\newcommand{\\P}[1]{\\left( #1 \\right)}\n\\newcommand{\\abs}[1]{\\left| #1 \\right|}\n\\newcommand{\\norm}[1]{\\left\\| #1 \\right\\|}\n\\newcommand{\\dd}{\\mathrm{d}}\n\\newcommand{\\Exp}{\\mathrm{E}}\n\\newcommand{\\RR}{\\mathbb{R}}\n\\newcommand{\\EE}{\\mathbb{E}}\n\\newcommand{\\NN}{\\mathbb{N}}\n\\newcommand{\\ZZ}{\\mathbb{Z}}\n\\newcommand{\\QQ}{\\mathbb{Q}}\n\\newcommand{\\AcA}{\\mathscr{A}}\n\\newcommand{\\FcF}{\\mathscr{F}}\n\\newcommand{\\Var}[2][\\,\\!]{\\mathrm{Var}_{#1}\\left[#2\\right]}\n\\newcommand{\\Avar}[2][\\,\\!]{\\mathrm{Avar}_{#1}\\left[#2\\right]}\n\\newcommand{\\Cov}[2][\\,\\!]{\\mathrm{Cov}_{#1}\\left(#2\\right)}\n\\newcommand{\\Corr}[2][\\,\\!]{\\mathrm{Corr}_{#1}\\left(#2\\right)}\n\\newcommand{\\I}[1]{\\mathrm{I}\\left( #1 \\right)}\n\\newcommand{\\N}[1]{\\mathcal{N} \\left( #1 \\right)}\n\\newcommand{\\ow}{\\text{otherwise}}\n\\void^\\dagger$
**\n\n## Question 5\n\n$\\P{1}$\n\n$\\bspace$The two set of standard errors are so close and there's really no needs to distinguish them from each other.\n\n$\\P{2}$\n\n$\\bspace$Holding all other variables fixed, the probability of smoking changes by about $-0.029\\times4=-0.116$.\n\n$\\P{3}$\n\n$$\\abs{\\ffrac{0.02} {2\\times 0.00026}}\\approx 38.46153846153846$$\n\n$\\P{4}$\n\n$\\bspace$Holding other factors in the equation fixed, the probability to smoke will decrease $0.101$ for a person in a state with restaurant smoking restrictions.\n\n$\\P{5}$\n\n$$\\begin{align}\\widehat{\\text{smoke}} &= 0.656-0.069\\times\\log\\P{67.44} + 0.012\\times\\log\\P{6500} - 0.029 \\times 16\\\\\n&\\bspace+ 0.0207\\times77 -0.00026\\times77^2 -0.101\\times0-0.026\\times 0\\\\\n&\\approx 0.0052\n\\end{align}$$\n\n## Question 6\n\n$\\P{1}$\n\n$\\bspace$The numerator has $k+1$ regressors, and that's the $df$ for it. For the denominator, its $df$ is $n-\\P{k-2}$\n\n$\\P{2}$\n\n$\\bspace$In BP test, there's one more regressor thus it's got a higher $R$-squared. In White test, the model has more restrictions and thus the $R$-squared will be higher.\n\n$\\P{3}$\n\n$\\bspace$For $t$ test statistic will be a little bit smaller however the change of $F$ statistic is unpredictable. The $\\text{SSR}$s will be larger while $df$s also do.\n\n$\\P{4}$\n\n$\\bspace$Collinearity. Since the estimated equation will be a linear combination of all variables.\n\n## Question 7\n\n$\\P{1}$\n\n$\\bspace$Since the two are uncorrelated, we have $\\Var{u_{i,e}} = \\Var{f_i} + \\Var{v_{i,e}} = \\sigma_f^2 + \\sigma_v^2$\n\n$\\P{2}$\n\n$$\\begin{align}\n\\Cov{u_{i,e},u_{i,g}} &= \\Cov{f_i + v_{i,e},f_i + v_{i,g}}\\\\\n&= \\Cov{f_i,f_i} + \\Cov{f_i,v_{i,g}} + \\Cov{v_{i,e},f_i} + \\Cov{v_{i,e},v_{i,g}}\\\\\n&= \\Cov{f_i,f_i} + 0 + 0 + 0 = \\Var{f_i}\n\\end{align}$$\n\n$\\P{3}$\n\n$$\\begin{align}\n\\Var{\\bar u_i} &= \\Var{\\ffrac{1} {m_i}\\sum_{e=1}^{m_i} u_{i,e}} \\\\\n&= \\Var{f_i + \\ffrac{1} {m_i}\\sum_{e=1}^{m_i} v_{i,e}}\\\\\n&= \\Var{f_i} + \\Var{\\ffrac{1} {m_i}\\sum_{e=1}^{m_i} v_{i,e}}\\\\\n&= \\sigma_f^2 + \\ffrac{\\sigma_v^2} {m_i}\n\\end{align}$$\n\n$\\P{4}$\n\n$\\bspace$From the weighted OLS method, our target is to find some specific weight so that $\\Var{\\bar u_i} = \\ffrac{\\sigma_f^2} {m_i}$. 
If we take the weight so that the data are simply averaged, then like the preceding problem, we failed.\n", "meta": {"hexsha": "c7a0be2eac61335b17ffedc98005f62b79ceca82", "size": 6099, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "FinMath/Econometrics/HW/HW_08.ipynb", "max_stars_repo_name": "XavierOwen/Notes", "max_stars_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2018-11-27T10:31:08.000Z", "max_stars_repo_stars_event_max_datetime": "2019-01-20T03:11:58.000Z", "max_issues_repo_path": "FinMath/Econometrics/HW/HW_08.ipynb", "max_issues_repo_name": "XavierOwen/Notes", "max_issues_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "FinMath/Econometrics/HW/HW_08.ipynb", "max_forks_repo_name": "XavierOwen/Notes", "max_forks_repo_head_hexsha": "d262a9103b29ee043aa198b475654aabd7a2818d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-07-14T19:57:23.000Z", "max_forks_repo_forks_event_max_datetime": "2020-07-14T19:57:23.000Z", "avg_line_length": 34.4576271186, "max_line_length": 249, "alphanum_fraction": 0.5158222659, "converted": true, "num_tokens": 1437, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49218813572079556, "lm_q2_score": 0.18952109132967757, "lm_q1q2_score": 0.09328003262132463}} {"text": "Probabilistic Programming\n=====\nand Bayesian Methods for Hackers \n========\n\n#####Version 0.1\nWelcome to *Bayesian Methods for Hackers*. The full Github repository is available at [github/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers](https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers). The other chapters can be found on the projects [homepage](camdavidsonpilon.github.io/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/). We hope you enjoy the book, and we encourage any contributions!\n\nChapter 1\n======\n***\n\nThe Philosophy of Bayesian Inference\n------\n \n> You are a skilled programmer, but bugs still slip into your code. After a particularly difficult implementation of an algorithm, you decide to test your code on a trivial example. It passes. You test the code on a harder problem. It passes once again. And it passes the next, *even more difficult*, test too! You are starting to believe that there may be no bugs in this code...\n\nIf you think this way, then congratulations, you already are thinking Bayesian! Bayesian inference is simply updating your beliefs after considering new evidence. A Bayesian can rarely be certain about a result, but he or she can be very confident. Just like in the example above, we can never be 100% sure that our code is bug-free unless we test it on every possible problem; something rarely possible in practice. Instead, we can test it on a large number of problems, and if it succeeds we can feel more *confident* about our code, but still not certain. Bayesian inference works identically: we update our beliefs about an outcome; rarely can we be absolutely sure unless we rule out all other alternatives. \n\n\n### The Bayesian state of mind\n\n\nBayesian inference differs from more traditional statistical inference by preserving *uncertainty*. 
At first, this sounds like a bad statistical technique. Isn't statistics all about deriving *certainty* from randomness? To reconcile this, we need to start thinking like Bayesians. \n\nThe Bayesian world-view interprets probability as measure of *believability in an event*, that is, how confident we are in an event occurring. In fact, we will see in a moment that this is the natural interpretation of probability. \n\nFor this to be clearer, we consider an alternative interpretation of probability: *Frequentist*, known as the more *classical* version of statistics, assume that probability is the long-run frequency of events (hence the bestowed title). For example, the *probability of plane accidents* under a frequentist philosophy is interpreted as the *long-term frequency of plane accidents*. This makes logical sense for many probabilities of events, but becomes more difficult to understand when events have no long-term frequency of occurrences. Consider: we often assign probabilities to outcomes of presidential elections, but the election itself only happens once! Frequentists get around this by invoking alternative realities and saying across all these realities, the frequency of occurrences defines the probability. \n\nBayesians, on the other hand, have a more intuitive approach. Bayesians interpret a probability as measure of *belief*, or confidence, of an event occurring. Simply, a probability is a summary of an opinion. An individual who assigns a belief of 0 to an event has no confidence that the event will occur; conversely, assigning a belief of 1 implies that the individual is absolutely certain of an event occurring. Beliefs between 0 and 1 allow for weightings of other outcomes. This definition agrees with the probability of a plane accident example, for having observed the frequency of plane accidents, an individual's belief should be equal to that frequency, excluding any outside information. Similarly, under this definition of probability being equal to beliefs, it is meaningful to speak about probabilities (beliefs) of presidential election outcomes: how confident are you candidate *A* will win?\n\nNotice in the paragraph above, I assigned the belief (probability) measure to an *individual*, not to Nature. This is very interesting, as this definition leaves room for conflicting beliefs between individuals. Again, this is appropriate for what naturally occurs: different individuals have different beliefs of events occurring, because they possess different *information* about the world. The existence of different beliefs does not imply that anyone is wrong. Consider the following examples demonstrating the relationship between individual beliefs and probabilities:\n\n- I flip a coin, and we both guess the result. We would both agree, assuming the coin is fair, that the probability of Heads is 1/2. Assume, then, that I peek at the coin. Now I know for certain what the result is: I assign probability 1.0 to either Heads or Tails (whichever it is). Now what is *your* belief that the coin is Heads? My knowledge of the outcome has not changed the coin's results. Thus we assign different probabilities to the result. \n\n- Your code either has a bug in it or not, but we do not know for certain which is true, though we have a belief about the presence or absence of a bug. \n\n- A medical patient is exhibiting symptoms $x$, $y$ and $z$. There are a number of diseases that could be causing all of them, but only a single disease is present. 
A doctor has beliefs about which disease, but a second doctor may have slightly different beliefs. \n\n\nThis philosophy of treating beliefs as probability is natural to humans. We employ it constantly as we interact with the world and only see partial truths, but gather evidence to form beliefs. Alternatively, you have to be *trained* to think like a frequentist. \n\nTo align ourselves with traditional probability notation, we denote our belief about event $A$ as $P(A)$. We call this quantity the *prior probability*.\n\nJohn Maynard Keynes, a great economist and thinker, said \"When the facts change, I change my mind. What do you do, sir?\" This quote reflects the way a Bayesian updates his or her beliefs after seeing evidence. Even — especially — if the evidence is counter to what was initially believed, the evidence cannot be ignored. We denote our updated belief as $P(A |X )$, interpreted as the probability of $A$ given the evidence $X$. We call the updated belief the *posterior probability* so as to contrast it with the prior probability. For example, consider the posterior probabilities (read: posterior beliefs) of the above examples, after observing some evidence $X$.:\n\n1\\. $P(A): \\;\\;$ the coin has a 50 percent chance of being Heads. $P(A | X):\\;\\;$ You look at the coin, observe a Heads has landed, denote this information $X$, and trivially assign probability 1.0 to Heads and 0.0 to Tails.\n\n2\\. $P(A): \\;\\;$ This big, complex code likely has a bug in it. $P(A | X): \\;\\;$ The code passed all $X$ tests; there still might be a bug, but its presence is less likely now.\n\n3\\. $P(A):\\;\\;$ The patient could have any number of diseases. $P(A | X):\\;\\;$ Performing a blood test generated evidence $X$, ruling out some of the possible diseases from consideration.\n\n\nIt's clear that in each example we did not completely discard the prior belief after seeing new evidence $X$, but we *re-weighted the prior* to incorporate the new evidence (i.e. we put more weight, or confidence, on some beliefs versus others). \n\nBy introducing prior uncertainty about events, we are already admitting that any guess we make is potentially very wrong. After observing data, evidence, or other information, we update our beliefs, and our guess becomes *less wrong*. This is the alternative side of the prediction coin, where typically we try to be *more right*. \n\n\n\n### Bayesian Inference in Practice\n\n If frequentist and Bayesian inference were programming functions, with inputs being statistical problems, then the two would be different in what they return to the user. The frequentist inference function would return a number, representing an estimate (typically a summary statistic like the sample average etc.), whereas the Bayesian function would return *probabilities*.\n\nFor example, in our debugging problem above, calling the frequentist function with the argument \"My code passed all $X$ tests; is my code bug-free?\" would return a *YES*. On the other hand, asking our Bayesian function \"Often my code has bugs. My code passed all $X$ tests; is my code bug-free?\" would return something very different: probabilities of *YES* and *NO*. The function might return:\n\n\n> *YES*, with probability 0.8; *NO*, with probability 0.2\n\n\n\nThis is very different from the answer the frequentist function returned. Notice that the Bayesian function accepted an additional argument: *\"Often my code has bugs\"*. This parameter is the *prior*. 
By including the prior parameter, we are telling the Bayesian function to include our belief about the situation. Technically this parameter in the Bayesian function is optional, but we will see excluding it has its own consequences. \n\n\n####Incorporating evidence\n\nAs we acquire more and more instances of evidence, our prior belief is *washed out* by the new evidence. This is to be expected. For example, if your prior belief is something ridiculous, like \"I expect the sun to explode today\", and each day you are proved wrong, you would hope that any inference would correct you, or at least align your beliefs better. Bayesian inference will correct this belief.\n\n\nDenote $N$ as the number of instances of evidence we possess. As we gather an *infinite* amount of evidence, say as $N \\rightarrow \\infty$, our Bayesian results (often) align with frequentist results. Hence for large $N$, statistical inference is more or less objective. On the other hand, for small $N$, inference is much more *unstable*: frequentist estimates have more variance and larger confidence intervals. This is where Bayesian analysis excels. By introducing a prior, and returning probabilities (instead of a scalar estimate), we *preserve the uncertainty* that reflects the instability of statistical inference of a small $N$ dataset. \n\nOne may think that for large $N$, one can be indifferent between the two techniques since they offer similar inference, and might lean towards the computational-simpler, frequentist methods. An individual in this position should consider the following quote by Andrew Gelman (2005)[1], before making such a decision:\n\n> Sample sizes are never large. If $N$ is too small to get a sufficiently-precise estimate, you need to get more data (or make more assumptions). But once $N$ is \"large enough,\" you can start subdividing the data to learn more (for example, in a public opinion poll, once you have a good estimate for the entire country, you can estimate among men and women, northerners and southerners, different age groups, etc.). $N$ is never enough because if it were \"enough\" you'd already be on to the next problem for which you need more data.\n\n### Are frequentist methods incorrect then? \n\n**No.**\n\nFrequentist methods are still useful or state-of-the-art in many areas. Tools such as least squares linear regression, LASSO regression, and expectation-maximization algorithms are all powerful and fast. Bayesian methods complement these techniques by solving problems that these approaches cannot, or by illuminating the underlying system with more flexible modeling.\n\n\n#### A note on *Big Data*\nParadoxically, big data's predictive analytic problems are actually solved by relatively simple algorithms [2][4]. Thus we can argue that big data's prediction difficulty does not lie in the algorithm used, but instead on the computational difficulties of storage and execution on big data. (One should also consider Gelman's quote from above and ask \"Do I really have big data?\" )\n\nThe much more difficult analytic problems involve *medium data* and, especially troublesome, *really small data*. Using a similar argument as Gelman's above, if big data problems are *big enough* to be readily solved, then we should be more interested in the *not-quite-big enough* datasets. \n\n\n### Our Bayesian framework\n\nWe are interested in beliefs, which can be interpreted as probabilities by thinking Bayesian. 
We have a *prior* belief in event $A$, beliefs formed by previous information, e.g., our prior belief about bugs being in our code before performing tests.\n\nSecondly, we observe our evidence. To continue our buggy-code example: if our code passes $X$ tests, we want to update our belief to incorporate this. We call this new belief the *posterior* probability. Updating our belief is done via the following equation, known as Bayes' Theorem, after its discoverer Thomas Bayes:\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{is proportional to } )\n\\end{align}\n\nThe above formula is not unique to Bayesian inference: it is a mathematical fact with uses outside Bayesian inference. Bayesian inference merely uses it to connect prior probabilities $P(A)$ with an updated posterior probabilities $P(A | X )$.\n\n##### Example: Mandatory coin-flip example\n\nEvery statistics text must contain a coin-flipping example, I'll use it here to get it out of the way. Suppose, naively, that you are unsure about the probability of heads in a coin flip (spoiler alert: it's 50%). You believe there is some true underlying ratio, call it $p$, but have no prior opinion on what $p$ might be. \n\nWe begin to flip a coin, and record the observations: either $H$ or $T$. This is our observed data. An interesting question to ask is how our inference changes as we observe more and more data? More specifically, what do our posterior probabilities look like when we have little data, versus when we have lots of data. \n\nBelow we plot a sequence of updating posterior probabilities as we observe increasing amounts of data (coin flips).\n\n\n```\n\"\"\"\nThe book uses a custom matplotlibrc file, which provides the unique styles for\nmatplotlib plots. If executing this book, and you wish to use the book's\nstyling, provided are two options:\n 1. Overwrite your own matplotlibrc file with the rc-file provided in the\n book's styles/ dir. See http://matplotlib.org/users/customizing.html\n 2. Also in the styles is bmh_matplotlibrc.json file. This can be used to\n update the styles in only this notebook. Try running the following code:\n\n import json\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\n\"\"\"\n\n# The code below can be passed over, as it is currently not important, plus it\n# uses advanced topics we have not covered yet. LOOK AT PICTURE, MICHAEL!\n%pylab inline\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n# For the already prepared, I'm using Binomial's conj. 
prior.\nfor k, N in enumerate(n_trials):\n sx = subplot(len(n_trials)/2, 2, k+1)\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials)-1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads))\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\",\n y=1.02,\n fontsize=14)\n\nplt.tight_layout()\n```\n\nThe posterior probabilities are represented by the curves, and our uncertainty is proportional to the width of the curve. As the plot above shows, as we start to observe data our posterior probabilities start to shift and move around. Eventually, as we observe more and more data (coin-flips), our probabilities will tighten closer and closer around the true value of $p=0.5$ (marked by a dashed line). \n\nNotice that the plots are not always *peaked* at 0.5. There is no reason it should be: recall we assumed we did not have a prior opinion of what $p$ is. In fact, if we observe quite extreme data, say 8 flips and only 1 observed heads, our distribution would look very biased *away* from lumping around 0.5 (with no prior opinion, how confident would you feel betting on a fair coin after observing 8 tails and 1 head). As more data accumulates, we would see more and more probability being assigned at $p=0.5$, though never all of it.\n\nThe next example is a simple demonstration of the mathematics of Bayesian inference. \n\n##### Example: Bug, or just sweet, unintended feature?\n\n\nLet $A$ denote the event that our code has **no bugs** in it. Let $X$ denote the event that the code passes all debugging tests. For now, we will leave the prior probability of no bugs as a variable, i.e. $P(A) = p$. \n\nWe are interested in $P(A|X)$, i.e. the probability of no bugs, given our debugging tests $X$. To use the formula above, we need to compute some quantities.\n\nWhat is $P(X | A)$, i.e., the probability that the code passes $X$ tests *given* there are no bugs? Well, it is equal to 1, for a code with no bugs will pass all tests. \n\n$P(X)$ is a little bit trickier: The event $X$ can be divided into two possibilities, event $X$ occurring even though our code *indeed has* bugs (denoted $\\sim A\\;$, spoken *not $A$*), or event $X$ without bugs ($A$). $P(X)$ can be represented as:\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\nWe have already computed $P(X|A)$ above. On the other hand, $P(X | \\sim A)$ is subjective: our code can pass tests but still have a bug in it, though the probability there is a bug present is reduced. Note this is dependent on the number of tests performed, the degree of complication in the tests, etc. Let's be conservative and assign $P(X|\\sim A) = 0.5$. Then\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\nThis is the posterior probability. What does it look like as a function of our prior, $p \\in [0,1]$? 
\n\n\n```\nfigsize(12.5, 4)\np = np.linspace(0, 1, 50)\nplt.plot(p, 2*p/(1+p), color=\"#348ABD\", lw=3)\n#plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"])\nplt.scatter(0.2, 2*(0.2)/1.2, s=140, c=\"#348ABD\")\nplt.xlim(0, 1)\nplt.ylim(0, 1)\nplt.xlabel(\"Prior, $P(A) = p$\")\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\")\nplt.title(\"Are there bugs in my code?\")\n```\n\nWe can see the biggest gains if we observe the $X$ tests passed when the prior probability, $p$, is low. Let's settle on a specific value for the prior. I'm a strong programmer (I think), so I'm going to give myself a realistic prior of 0.20, that is, there is a 20% chance that I write code bug-free. To be more realistic, this prior should be a function of how complicated and large the code is, but let's pin it at 0.20. Then my updated belief that my code is bug-free is 0.33. \n\nRecall that the prior is a probability: $p$ is the prior probability that there *are no bugs*, so $1-p$ is the prior probability that there *are bugs*.\n\nSimilarly, our posterior is also a probability, with $P(A | X)$ the probability there is no bug *given we saw all tests pass*, hence $1-P(A|X)$ is the probability there is a bug *given all tests passed*. What does our posterior probability look like? Below is a chart of both the prior and the posterior probabilities. \n\n\n\n```\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1./3, 2./3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\",\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0+0.25, .7+0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\",\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"])\nplt.title(\"Prior and Posterior probability of bugs present\")\nplt.ylabel(\"Probability\")\nplt.legend(loc=\"upper left\");\n```\n\nNotice that after we observed $X$ occur, the probability of bugs being absent increased. By increasing the number of tests, we can approach confidence (probability 1) that there are no bugs present.\n\nThis was a very simple example of Bayesian inference and Bayes rule. Unfortunately, the mathematics necessary to perform more complicated Bayesian inference only becomes more difficult, except for artificially constructed cases. We will later see that this type of mathematical analysis is actually unnecessary. First we must broaden our modeling tools. The next section deals with *probability distributions*. If you are already familiar, feel free to skip (or at least skim), but for the less familiar the next section is essential.\n\n_______\n\n##Probability Distributions\n\n\n**Let's quickly recall what a probability distribution is:** Let $Z$ be some random variable. Then associated with $Z$ is a *probability distribution function* that assigns probabilities to the different outcomes $Z$ can take. Graphically, a probability distribution is a curve where the probability of an outcome is proportional to the height of the curve. You can see examples in the first figure of this chapter. \n\nWe can divide random variables into three classifications:\n\n- **$Z$ is discrete**: Discrete random variables may only assume values on a specified list. Things like populations, movie ratings, and number of votes are all discrete random variables. 
Discrete random variables become more clear when we contrast them with...\n\n- **$Z$ is continuous**: Continuous random variable can take on arbitrarily exact values. For example, temperature, speed, time, color are all modeled as continuous variables because you can progressively make the values more and more precise.\n\n- **$Z$ is mixed**: Mixed random variables assign probabilities to both discrete and continuous random variables, i.e. it is a combination of the above two categories. \n\n###Discrete Case\nIf $Z$ is discrete, then its distribution is called a *probability mass function*, which measures the probability $Z$ takes on the value $k$, denoted $P(Z=k)$. Note that the probability mass function completely describes the random variable $Z$, that is, if we know the mass function, we know how $Z$ should behave. There are popular probability mass functions that consistently appear: we will introduce them as needed, but let's introduce the first very useful probability mass function. We say $Z$ is *Poisson*-distributed if:\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots $$\n\n$\\lambda$ is called a parameter of the distribution, and it controls the distribution's shape. For the Poisson distribution, $\\lambda$ can be any positive number. By increasing $\\lambda$, we add more probability to larger values, and conversely by decreasing $\\lambda$ we add more probability to smaller values. One can describe $\\lambda$ as the *intensity* of the Poisson distribution. \n\nUnlike $\\lambda$, which can be any positive number, the value $k$ in the above formula must be a non-negative integer, i.e., $k$ must take on values 0,1,2, and so on. This is very important, because if you wanted to model a population you could not make sense of populations with 4.25 or 5.612 members. \n\nIf a random variable $Z$ has a Poisson mass distribution, we denote this by writing\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\nOne useful property of the Poisson distribution is that its expected value is equal to its parameter, i.e.:\n\n$$E\\large[ \\;Z\\; | \\; \\lambda \\;\\large] = \\lambda $$\n\nWe will use this property often, so it's useful to remember. Below, we plot the probability mass distribution for different $\\lambda$ values. The first thing to notice is that by increasing $\\lambda$, we add more probability of larger values occurring. Second, notice that although the graph ends at 15, the distributions do not. They assign positive probability to every non-negative integer.\n\n\n```\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\")\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\")\n```\n\n###Continuous Case\nInstead of a probability mass function, a continuous random variable has a *probability density function*. This might seem like unnecessary nomenclature, but the density function and the mass function are very different creatures. An example of continuous random variable is a random variable with *exponential density*. 
The density function for an exponential random variable looks like this:\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1./l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1./l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\")\n```\n\n\n###But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. \n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. 
Judge my popularity as you wish.)\n\n\n\n```\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. 
Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```\nimport pymc as pm\n\nalpha = 1.0/count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nlambda_1 = pm.Exponential(\"lambda_1\", alpha)\nlambda_2 = pm.Exponential(\"lambda_2\", alpha)\n\ntau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. 
We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```\nprint \"Random output:\", tau.random(), tau.random(), tau.random()\n```\n\n Random output: 52 2 26\n\n\n\n```\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. \n\n\n```\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo*, which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. 
Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```\n### Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n [****************100%******************] 40000 of 40000 complete\n\n\n\n```\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```\nfigsize(12.5, 10)\n#histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data)-20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. 
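\n\nAs a rough numerical companion to these plots, the traces can be summarised directly. This is only a small sketch that re-uses the sample arrays defined above (day 45 is simply the day discussed in the text, and the exact numbers will differ slightly from one MCMC run to the next):\n\n\n```\n# a quick numerical summary of the posterior samples (values vary between runs)\nprint(np.percentile(lambda_1_samples, [5, 50, 95]))\nprint(np.percentile(lambda_2_samples, [5, 50, 95]))\n# fraction of posterior samples in which the switchpoint falls exactly on day 45\nprint((tau_samples == 45).mean())\n```\n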
\n\n###Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\")\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```\n#type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```\n#type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. 
That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```\n#type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. .\n- [2] Norvig, Peter. 2009. [*The Unreasonable Effectiveness of Data*](http://www.csee.wvu.edu/~gidoretto/courses/2011-fall-cp/reading/TheUnreasonable EffectivenessofData_IEEE_IS2009.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. \nPyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical \nSoftware, 35(4), pp. 1-81. \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. .\n\n\n```\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n```\n\n```\n", "meta": {"hexsha": "dea3e4b810865f8738e02a4a005d12cca4fb1df1", "size": 412277, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_stars_repo_name": "sielizondo/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_stars_repo_head_hexsha": "9e4c4efd6fbc21e7ff49a7147489f844b8a962f3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2015-01-22T23:03:58.000Z", "max_stars_repo_stars_event_max_datetime": "2015-10-06T15:37:24.000Z", "max_issues_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_issues_repo_name": "claudiamihai/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_issues_repo_head_hexsha": "9e4c4efd6fbc21e7ff49a7147489f844b8a962f3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_forks_repo_name": "claudiamihai/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_forks_repo_head_hexsha": "9e4c4efd6fbc21e7ff49a7147489f844b8a962f3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-07-01T11:35:00.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-01T11:35:00.000Z", "avg_line_length": 379.6289134438, "max_line_length": 109855, "alphanum_fraction": 0.9037491783, "converted": true, "num_tokens": 11110, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.24798742624020276, "lm_q2_score": 0.3738758227716966, "lm_q1q2_score": 0.09271650302259124}} {"text": "```python\nimport numpy as np\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n```\n\n##### Exercise 7.1\n\nWhy do you think a larger random walk task (19 states instead of 5) was used in the examples of this chapter? Would a smaller walk have shifted the advantage to a different value of n? How about the change in left-side outcome from 0 to -1? Would that have made any difference in the best value of n?\n\nA small random walk would truncate large n-step to their total returns since episodes will be shorter (i.e. 
large n would just result in alpha MC methods). Therefore we should expect the advantage at lower n for smaller random walks. \n\nWith values initialized at 0, if the left-most value terminated in 0 reward, we would need longer episodes for an agent to assign the correct values to the states left of center, since episodes that terminate to the left will not cause any updates initially, only the episodes that terminate to the right end with non-zero reward. Thus I would expect the best value of n to increase.\n\n---------\n\n##### Exercise 7.2\n\nWhy do you think on-line methods worked better than off-line methods on the example task?\n\nOff-line methods generally take random actions with some small probability $\\epsilon$. We would expect at least 1-2 random actions in an environment with a minimum of 10 states to termination, depending on $\\epsilon$ (assuming $\\epsilon$ is between 10-20%). Therefore, even after finding the optimal action-values, these random actions will attribute erroneous rewards to certain actions, leading to higher RMSEs compared to on-line methods; we also see that larger n is more optimal for off-line methods compared to on-line, presumably because larger n reduces noise from the $\\epsilon$ greedy actions.\n\n-----------\n\n##### Exercise 7.3\n\nIn the lower part of Figure 7.2, notice that the plot for n=3 is different from the others, dropping to low performance at a much lower value of $\\alpha$ than similar methods. In fact, the same was observed for n=5, n=7, and n=9. Can you explain why this might have been so? In fact, we are not sure ourselves.\n\nMy hypothesis is that odd values of n have higher RMSE because of the environment. It takes at a minimum, an odd number of steps to reach termination from the starting state. For off-line methods, even after finding the optimal action-values, an agent may still not terminate in an odd number of steps. Therefore my hypothesis is that odd n-step methods are more likely to cause erroneous updates to the $\\epsilon$ greedy actions compared to even n-step methods. A quick way to test this, would be to create a random-walk where an agent will terminate at a minimum in an even number of steps, and then to observe the same plots as in Figure 7.2. \n\n----------\n\n#### Exercise 7.4 \n\nThe parameter $\\lambda $ characterizes how fast the exponential weighting in Figure 7.4 falls off, and thus how far into the future the $\\lambda $-return algorithm looks in determining its backup. But a rate factor such as $\\lambda $ is sometimes an awkward way of characterizing the speed of the decay. For some purposes it is better to specify a time constant, or half-life. 
What is the equation relating $\\lambda $ and the half-life, $\\tau$, the time by which the weighting sequence will have fallen to half of its initial value?\n\nThe half life occurs when weighting drops in half:\n\n$ \\lambda^{n} = 0.5 $,\n\nwhich occurs at,\n$n = -ln(2) / ln(\\lambda) = \\tau$\n\n\n-----\nGetting (7.3) from the equation above it:\n\n$R_t^\\lambda = (1 - \\lambda) \\sum_{n=1}^\\infty \\lambda^{n-1} R^{(n)}_t$,\n\nafter $T-t-1$, we sum to infinity but with $R^{T-t-1}_t$, which is just the total return $R_t$, so:\n\n$R_t^\\lambda = (1 - \\lambda) \\sum_{n=1}^{T-t-1} \\lambda^{n-1} R^{(n)}_t + (1 - \\lambda) R_t \\sum_{n=T-t-1}^{\\infty} \\lambda^{n} $\n\nWe can remove $\\lambda^{T-t-1}$ from the last sum to get $ (1 - \\lambda) R_t \\lambda^{T-t-1} \\sum_{n=0}^\\infty \\lambda^n = (1 - \\lambda) R_t \\lambda^{T-t-1} \\frac{1}{1 - \\lambda}$, so that: \n\n$R_t^\\lambda = (1 - \\lambda) \\sum_{n=1}^{T-t-1} \\lambda^{n} R^{(n)}_t + \\lambda^{T-t-1} R_t $\n\n----------\n\n##### Exercise 7.5\n\nIn order to get TD($\\lambda$) to be equivalent to the $\\lambda$-return algorithm in the online case, the proposal is that $\\delta_t = r_{t+1} + \\gamma V_t(s_{t+1}) - V_{t-1}(s_t) $ and the n-step return is $R_t^{(n)} = r_{t+1} + \\dots + \\gamma^{n-1} r_{t+n} + \\gamma^n V_{t+n-1}(s_{t+n}) $. To show that this new TD method is equivalent to the $\\lambda$ return, it suffices to show that $\\Delta V_t(s_t)$ for the $\\lambda$ return is equivalent to the new TD with modified $\\delta_t$ and $R_t^{(n)}$.\n\nAs such, we expand the $\\lambda$ return:\n\n$\n\\begin{equation}\n\\begin{split}\n\\frac{1}{\\alpha} \\Delta V_t(s_t) =& -V_{t-1}(s_t) + R_t^\\lambda\\\\\n=& -V_{t-1}(s_t) + (1 - \\lambda) \\lambda^0 [r_{t+1} + \\gamma V_t(s_{t+1})] + (1-\\lambda) \\lambda^1 [r_{t+1} + \\gamma r_{t+2} + \\gamma^2 V_{t+1}(s_{t+2})] + \\dots\\\\\n=& -V_{t-1}(s_t) + (\\gamma \\lambda)^0 [r_{t+1} + \\gamma V_t(s_{t+1}) - \\gamma \\lambda V_t(s_{t+1})] + (\\gamma \\lambda)^1 [r_{t+2} + \\gamma V_{t+1}(s_{t+2}) - \\gamma \\lambda V_{t+1}(s_{t+2})] + \\dots\\\\\n=& (\\gamma \\lambda)^0 [r_{t+1} + \\gamma V_t(s_{t+1}) - V_{t-1}(s_t)] + (\\gamma \\lambda) [r_{t+2} + \\gamma V_{t+1}(s_{t+2}) - V_t(s_t+1)] + \\dots\\\\\n=& \\sum_{k=t}^\\infty (\\gamma \\lambda)^{k-t} \\delta_k\n\\end{split}\n\\end{equation}\n$\n\nwhere $\\delta_k = r_k + \\gamma V_k(s_{k+1}) - V_{k-1}(s_k)$ as defined in the problem. Therefore, for online TD as defined above, the $\\lambda$ return is exactly equivalent.\n\n\n-------------\n\n##### Exercise 7.6\n\nIn Example 7.5, suppose from state s the wrong action is taken twice before the right action is taken. If accumulating traces are used, then how big must the trace parameter $\\lambda $ be in order for the wrong action to end up with a larger eligibility trace than the right action?\n \nThe eligibility trace update is $e_t(s) \\leftarrow 1 + \\gamma \\lambda e_{t-1}(s)$ if $s = s_t$ and $e_t(s) \\leftarrow \\gamma \\lambda e_{t-1}(s)$ if $s \\neq s_t$. For two wrong actions, then one right action, $e_t(wrong) = (1 + \\gamma \\lambda) \\gamma \\lambda $, and $e_t(right) = 1$. 
If we want $e_t(wrong) \\gt e_t(right)$, we need $(1 + \\gamma \\lambda) \\gamma \\lambda \\gt 1$, or $\\gamma \\lambda \\gt \\frac{1}{2} (\\sqrt(5) - 1)$.\n\n-----------\n\n##### Exercise 7.7\n\n\n\n```python\nclass LoopyEnvironment(object):\n def __init__(self):\n self._terminal_state = 5\n self._state = 0\n self._num_actions = 2\n \n @property\n def state(self):\n return self._state\n \n @state.setter\n def state(self, state):\n assert isinstance(state, int)\n assert state >= 0 and state <= self._terminal_state\n self._state = state\n \n @property\n def terminal_state(self):\n return self._terminal_state\n\n def reinit_state(self):\n self._state = 0\n \n def get_states_list(self):\n return range(self._terminal_state + 1)\n \n def get_actions_list(self):\n return range(self._num_actions)\n \n def is_terminal_state(self):\n return self._state == self._terminal_state\n \n def take_action(self, action):\n \"\"\"\n action int: 0 or 1\n if action is 0 = wrong, then don't change the state\n if action is 1 = right, then go to the next state\n\n returns int: reward\n \"\"\"\n assert action in [0, 1]\n assert self.is_terminal_state() == False\n if action == 1:\n self._state += 1\n if self._state == self._terminal_state:\n return 1\n return 0\n```\n\n\n```python\nimport random\nfrom itertools import product\n\nclass SARSA_lambda(object):\n def __init__(self, environment):\n states = environment.get_states_list()\n actions = environment.get_actions_list()\n \n self.environment = environment\n self.state_actions = list(product(states, actions))\n self.Q = np.random.random([len(states), len(actions)])\n self.e = np.zeros([len(states), len(actions)])\n \n def _get_epsilon_greedy_action(self, epsilon, p):\n if random.random() <= epsilon:\n action = random.randint(0, len(p) - 1)\n return action\n actions = np.where(p == np.amax(p))[0]\n action = np.random.choice(actions)\n return action\n \n def learn(self, num_episodes=100, Lambda=.9, gamma=.9, epsilon=.05, alpha=0.05,\n replace_trace=False):\n \"\"\"\n Args:\n num_episodes (int): Number of episodes to train\n Lambda (float): TD(lambda) parameter \n (if lambda = 1 we have MC or if lambda = 0 we have 1-step TD)\n gamma (float): decay parameter for Bellman equation\n epsilon (float): epsilon greedy decisions\n alpha (float): determines how big should TD update be\n \n Returns:\n list (int): the number of time steps it takes for each episode to terminate\n \"\"\"\n \n time_steps = []\n for n in xrange(num_episodes):\n time_idx = 0\n self.e = self.e * 0\n self.environment.reinit_state()\n s = self.environment.state\n a = random.randint(0, self.Q.shape[1] - 1)\n while not self.environment.is_terminal_state():\n r = self.environment.take_action(a)\n time_idx += 1\n\n s_prime = self.environment.state\n a_prime = self._get_epsilon_greedy_action(epsilon, self.Q[s_prime, :])\n delta = r + gamma * self.Q[s_prime, a_prime] - self.Q[s, a]\n\n if replace_trace:\n self.e[s, a] = 1\n else:\n self.e[s, a] = self.e[s, a] + 1\n \n for s, a in self.state_actions:\n self.Q[s, a] = self.Q[s, a] + alpha * delta * self.e[s, a]\n self.e[s, a] = gamma * Lambda * self.e[s, a]\n \n s = s_prime\n a = a_prime\n \n time_steps.append(time_idx)\n return time_steps\n\n```\n\n\n```python\nenv = LoopyEnvironment()\ns = SARSA_lambda(env)\n```\n\nRun both the replace-trace and the SARSA($\\lambda$) regular trace methods for X episodes, and repeat N times. Get the average time length over all X episodes for each iteration for each alpha. 
In the environment in Figure 7.18, it takes at a minimum, 5 time steps to terminate. This is our baseline.\n\n\n```python\n\ndef get_results(replace_trace, num_trials, num_episodes):\n alphas = np.linspace(.2, 1, num=10)\n results = np.array([])\n for alpha in alphas:\n res = []\n for i in xrange(num_trials):\n sarsa_lambda = SARSA_lambda(env)\n t = sarsa_lambda.learn(num_episodes=num_episodes, alpha=alpha, \n replace_trace=replace_trace, gamma=0.9,\n epsilon=0.05, Lambda=0.9)\n res.append(np.mean(t))\n\n if results.shape[0] == 0:\n results = np.array([alpha, np.mean(res)])\n else:\n results = np.vstack([results, [alpha, np.mean(res)]])\n return results\n\nnum_trials = 100\nnum_episodes = 20\nreplace_trace = get_results(True, num_trials, num_episodes)\nregular_trace = get_results(False, num_trials, num_episodes)\n \n```\n\n\n```python\nplt.plot(replace_trace[:, 0], replace_trace[:, 1], label='replace')\nplt.plot(regular_trace[:, 0], regular_trace[:, 1], label='regular')\n\nplt.legend()\nplt.title('Exercise 7.7: First %d episodes averaged %d times' %(num_episodes, num_trials))\nplt.xlabel('alpha')\nplt.ylabel('Time-steps')\n```\n\nWe see that on average, the replace trace method for $\\gamma = 0.9$, $\\lambda=0.9$, $\\epsilon=0.05$ takes less time to terminate. With lower $\\gamma$, the advantage of replace-trace seems to disappear.\n\n-----------\n\n##### Exercise 7.8\n\nsarsa($\\lambda$) with replacing traces, has a backup which is equivalent to sarsa($\\lambda$) until the first repeated state-action pair. If we use the replace-trace formula in Figure 7.17, the replace-trace backup diagram terminates at the first repeated state-action pair. For the replace-trace formula in Figure 7.16, the backup diagram after the first repeated-state action pair is some hybrid of sarsa($\\lambda$) with weights changed only for the repeated state-actions. I'm not sure how to draw that.\n\n-------\n\n##### Exercise 7.9\n\nWrite pseudocode for an implementation of TD($\\lambda $) that updates only value estimates for states whose traces are greater than some small positive constant.\n \n\nYou can use a hash-map of traces to update, and if the update reduces the value of the trace below some constant, remove the trace from the hash-map. Traces get added to the hash-map as they get visited. If you want to write the pseudo code or real code, feel free to make a pull-request!\n\n-------\n\n##### Exercise 7.10\n\nProve that the forward and backward views of off-line TD($\\lambda $) remain equivalent under their new definitions with variable $\\lambda $ given in this section. Follow the example of the proof in Section 7.4.\n\n\nAs given in the book, the backward view is:\n\n$\n e_t(s)=\\left\\{\n \\begin{array}{ll}\n \\gamma \\lambda_t e_{t-1}(s), & \\mbox{ if } s \\neq s_t\\\\\n \\gamma \\lambda_t e_{t-1}(s) + 1, & \\mbox{ if } s = s_t\n \\end{array}\n \\right.\n$\n\nand the forward view is:\n\n$R_t^\\lambda = \\sum_{k=t+1}^{T-1} R_t^{(k-t)} (1 - \\lambda_k) \\prod_{i=t+1}^{k-1} \\lambda_i + R_t \\prod_{i=t+1}^{T-1} \\lambda_i$.\n\nThe proof is almost identical to 7.4. 
For the backward view we need to express the eligibility trace nonrecursively:\n\n$e_t(s) = \\gamma \\lambda_t e_{t-1}(s) + I_{ss_t} = \\gamma \\lambda_t [\\gamma \\lambda_{t-1} e_{t-2}(s) + I_{ss_{t-1}}] + I_{ss_t} = \\sum_{k=0}^t I_{ss_k}\\gamma^{t-k} \\prod_{i=k+1}^t \\lambda_i$\n\nso that the sum of all updates to a given state is:\n\n$\\sum_{t=0}^{T-1}\\alpha I_{ss_t} \\sum_{k=t}^{T-1} \\gamma^{k-t} \\prod_{i=t+1}^k \\lambda_i \\delta_k$\n\nwhich was obtained by following the same algebra as in 7.9 to 7.12.\n\n\nThe next step is to show that the sum of all updates of the forward view is equivalent to the previous equation above. We start with:\n\n\n$\n\\begin{equation}\n\\begin{split}\n\\frac{1}{\\alpha} \\Delta V_t(s_t) =& -V_{t}(s_t) + R_t^\\lambda\\\\\n=& -V_t(s_t) + (1 - \\lambda_{t+1}) [r_{t+1} + \\gamma V_t(s_{t+1})] + (1 - \\lambda_{t+2})\\lambda_{t+1} [r_{t+1} + \\gamma r_{t+2} + \\gamma^2 V_t(s_{t+2})] + \\dots\\\\\n=& -V_{t}(s_t) + [r_{t+1} + \\gamma V_t(s_{t+1}) - \\lambda_{t+1} \\gamma V_t(s_{t+1})] + \\gamma \\lambda_{t+1} [r_{t+2} + \\gamma V_t(s_{t+2}) - \\gamma \\lambda_{t+2} V_t(s_{t+2})] + \\dots\\\\\n=& [r_{t+1} + \\gamma V_t(s_{t+1}) - V_t(s_t)] + (\\gamma \\lambda_{t+1})[r_{t+2} + \\gamma V_t(s_{t+2}) - V_t(s_{t+1})] + (\\gamma^2 \\lambda_{t+1}\\lambda_{t+2}) \\delta_{t+3} + \\dots\\\\\n\\approx& \\sum_{k=t}^{T-1} \\gamma^{k-t} \\delta_k \\prod_{i=t+1}^{k} \\lambda_i\n\\end{split}\n\\end{equation}\n$\n\nwhich is equivalent to the backward case, and becomes an equality for offline updates.\n\n\n------\n\n** \"Eligibility traces are the first line of defense against both long-delayed rewards and non-Markov tasks.\"**\n\n\"In the future it may be possible to vary the trade-off between TD and Monte Carlo methods more finely by using variable $\\lambda $, but at present it is not clear how this can be done reliably and usefully.\"\n", "meta": {"hexsha": "504ccbb9859a7ad2f0ef2e2aef7ca83300954e43", "size": 44358, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/chapter7.ipynb", "max_stars_repo_name": "btaba/intro-to-rl", "max_stars_repo_head_hexsha": "b65860cd81ce43ac344d4f618a6364c000ea971b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 39, "max_stars_repo_stars_event_min_datetime": "2016-10-02T19:41:19.000Z", "max_stars_repo_stars_event_max_datetime": "2019-07-30T18:10:37.000Z", "max_issues_repo_path": "notebooks/chapter7.ipynb", "max_issues_repo_name": "btaba/intro-to-rl", "max_issues_repo_head_hexsha": "b65860cd81ce43ac344d4f618a6364c000ea971b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2017-07-08T08:17:06.000Z", "max_issues_repo_issues_event_max_datetime": "2017-08-03T01:38:33.000Z", "max_forks_repo_path": "notebooks/chapter7.ipynb", "max_forks_repo_name": "btaba/intro-to-rl", "max_forks_repo_head_hexsha": "b65860cd81ce43ac344d4f618a6364c000ea971b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 16, "max_forks_repo_forks_event_min_datetime": "2016-10-02T20:12:38.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-14T20:30:57.000Z", "avg_line_length": 91.4597938144, "max_line_length": 23488, "alphanum_fraction": 0.7602687227, "converted": true, "num_tokens": 4328, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4111108836623764, "lm_q2_score": 0.22541660542786957, "lm_q1q2_score": 0.09267121984962469}} {"text": "```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\nfrom sympy import init_printing, symbols, Matrix\nfrom warnings import filterwarnings\n```\n\n\n```python\ninit_printing(use_latex = 'mathjax')\nfilterwarnings('ignore')\n```\n\n# Orthogonal vectors and subspaces\n# Rowspace orthogonal to nullspace and columnspace to nullspace of AT\n# N(ATA) = N(A)\n\n## Orthogonal vectors\n\n* Two vectors are orthogonal if their dot product is zero\n* If they are written as column vectors **x** and **y**, their dot product is **x**T**y**\n * For orthogonal (perpendicular) vectors **x**T**y** = 0\n* From the Pythagorean theorem they are orthogonal if\n$$ { \\left\\| \\overline { x } \\right\\| }^{ 2 }+{ \\left\\| \\overline { y } \\right\\| }^{ 2 }={ \\left\\| \\overline { x } +\\overline { y } \\right\\| }^{ 2 }\\\\ { \\left\\| \\overline { x } \\right\\| }=\\sqrt { { x }_{ 1 }^{ 2 }+{ x }_{ 2 }^{ 2 }+\\dots +{ x }_{ b }^{ 2 } } $$\n\n* The length squared of a (column) vector **x** can be calculated by **x**T**x**\n* This achieves exactly the same as the sum of the squares of each element in the vector\n$$ { x }_{ 1 }^{ 2 }+{ x }_{ 2 }^{ 2 }+\\dots +{ x }_{ n }^{ 2 }$$\n\n* Following from the Pythagorean theorem we have\n$$ { \\left\\| \\overline { x } \\right\\| }^{ 2 }+{ \\left\\| \\overline { y } \\right\\| }^{ 2 }={ \\left\\| \\overline { x } +\\overline { y } \\right\\| }^{ 2 }\\\\ { \\underline { x } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } ={ \\left( \\underline { x } +\\underline { y } \\right) }^{ T }\\left( \\underline { x } +\\underline { y } \\right) \\\\ { \\underline { x } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } ={ \\underline { x } }^{ T }\\underline { x } +{ \\underline { x } }^{ T }\\underline { y } +{ \\underline { y } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } \\\\ \\because \\quad { \\underline { x } }^{ T }\\underline { y } ={ \\underline { y } }^{ T }\\underline { x } \\\\ { \\underline { x } }^{ T }\\underline { x } +{ \\underline { y } }^{ T }\\underline { y } ={ \\underline { x } }^{ T }\\underline { x } +2{ \\underline { x } }^{ T }\\underline { y } +{ \\underline { y } }^{ T }\\underline { y } \\\\ 2{ \\underline { x } }^{ T }\\underline { y } =0\\\\ { \\underline { x } }^{ T }\\underline { y } =0 $$\n* This states that the dot product of orthogonal vectors equal zero\n\n* The zero vector is orthogonal to all other similar dimensional vectors\n\n## Orthogonality of subspaces\n\n* Consider two subspaces *S* and *T*\n* To be orthogonal every vector in *S* must be orthogonal to any vector in *T*\n\n* Consider the *XY* and *YZ* planes in 3-space\n* They are not orthogonal, since many combinations of vectors (one in each plane) are not orthogonal\n* Vectors in the intersection, even though, one each from each plane can indeed be the same vector\n* We can say that any planes that intersect cannot be orthogonal to each other\n\n## Orthogonality of the rowspace and the nullspace\n\n* The nullspace contains vectors **x** such that A**x** = **0**\n* Now remembering that **x**T**y** = 0 for orthogonal column vectors and considering each row in A as a transposed column vector and **x** (indeed a column vector) and their product being zero meaning that they are 
orthogonal, we have:\n$$ \\begin{bmatrix} { { a }_{ 11 } } & { a }_{ 12 } & \\dots & { a }_{ 1n } \\\\ { a }_{ 21 } & { a }_{ 22 } & \\dots & { a }_{ 2n } \\\\ \\vdots & \\vdots & \\vdots & \\vdots \\\\ { a }_{ m1 } & { a }_{ m2 } & \\dots & { a }_{ mn } \\end{bmatrix}\\begin{bmatrix} { x }_{ 1 } \\\\ { x }_{ 2 } \\\\ \\vdots \\\\ { x }_{ n } \\end{bmatrix}=\\begin{bmatrix} 0 \\\\ 0 \\\\ \\vdots \\\\ 0 \\end{bmatrix}\\\\ \\begin{bmatrix} { a }_{ 11 } & { a }_{ 12 } & \\dots & { a }_{ 1n } \\end{bmatrix}\\begin{bmatrix} { x }_{ 1 } \\\\ { x }_{ 2 } \\\\ \\vdots \\\\ { x }_{ n } \\end{bmatrix}=0\\\\ \\dots $$\n\n* The rows (row vectors) in A are NOT the only vectors in the rowspace, since we also need to show that ALL linear combinations of them are also orthogonal to **x**\n* This is easy to see by the structure above\n\n## Orthogonality of the columnspace and the nullspace of AT\n\n* The proof is the same as above\n\n* The orthogonality of the rowspace and the nullspace is creating two orthogonal subspaces in ℝn\n* The orthogonality of the columnspace and the nullspace of AT is creating two orthogonal subspaces in ℝm\n\n* Note how the dimension add up to the degree of the space ℝ\n * The rowspace (a fundamental subspace in ℝn) is of dimension *r*\n * The dimension of the nullspace (a fundamental subspace in ℝn) is of dimension *n* - *r*\n * Addition of these dimensions gives us the dimension of the total space *n* as in ℝn\n * AND\n * The columnspace is of dimension *r* and the nullspace of AT is of dimension *m* - *r*, which adds to *m* as in ℝm\n\n* This means that two lines that may be orthogonal in ℝ3 cannot be two orthogonal subspaces of ℝ3 since the addition of the dimensions of these two subspaces (lines) is not 3 (as in ℝ3)\n\n* We call this complementarity, i.e. the nullspace and rowspace are orthogonal *complements* in ℝn\n\n## ATA\n\n* We know that\n * The result is square\n * The result is symmetric, i.e. 
(*n*×*m*)(*m*×*n*)=*n*×*n*\n * (ATA)T = ATATT = ATA\n\n* When A**x** = **b** is not solvable we use ATA**x** = AT**b**\n* **x** in the first instance did not have a solution, but after multiplying both side with AT, we hope that the second **x** has an solution, now called\n$$ {A}^{T}{A}\\hat{x} = {A}^{T}{b} $$\n\n\n* Consider the matrix below with *m* = 4 equation in *n* = 2 unknowns\n* The only **b** solutions must be linear combinations of the columnspace of A\n\n\n```python\nA = Matrix([[1, 1], [1, 2], [1, 5]])\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 1\\\\1 & 2\\\\1 & 5\\end{matrix}\\right]$$\n\n\n\n$$ {x}_{1} \\begin{bmatrix} 1 \\\\ 1 \\\\ 1 \\end{bmatrix} + {x}_{2} \\begin{bmatrix} 1 \\\\ 2 \\\\ 5 \\end{bmatrix} = \\begin{bmatrix} {b}_{1} \\\\ {b}_{2} \\\\ {b}_{3} \\end{bmatrix} $$\n\n\n```python\nA.transpose() * A\n```\n\n\n\n\n$$\\left[\\begin{matrix}3 & 8\\\\8 & 30\\end{matrix}\\right]$$\n\n\n\n* Note how the nullspace of ATA is equal to the nullspace of A\n\n\n```python\n(A.transpose() * A).nullspace() == A.nullspace()\n```\n\n\n\n\n True\n\n\n\n* The same goes for the rank\n\n\n```python\nA.rref(), (A.transpose() * A).rref()\n```\n\n\n\n\n$$\\begin{pmatrix}\\begin{pmatrix}\\left[\\begin{matrix}1 & 0\\\\0 & 1\\\\0 & 0\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}, & \\begin{pmatrix}\\left[\\begin{matrix}1 & 0\\\\0 & 1\\end{matrix}\\right], & \\begin{bmatrix}0, & 1\\end{bmatrix}\\end{pmatrix}\\end{pmatrix}$$\n\n\n\n* ATA is not always invertible\n* In fact it is only invertible if the nullspace of A only contains the zero vector (has independent columns)\n\n\n```python\n\n```\n", "meta": {"hexsha": "94607e4da580d6972243c6e49fa333b042126e17", "size": 15461, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_14_Orthogonality_of_vectors_and_subspaces.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_14_Orthogonality_of_vectors_and_subspaces.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_14_Orthogonality_of_vectors_and_subspaces.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 32.6181434599, "max_line_length": 1149, "alphanum_fraction": 0.4976392213, "converted": true, "num_tokens": 3019, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.476579651063676, "lm_q2_score": 0.19436782035217448, "lm_q1q2_score": 0.09263174800144658}} {"text": "#### 10.\n\n\n\n\n**HW Review:**\n\n

Diffraction and crystallography

\n\n\n

11.2 Describe the \u201cphase problem\u201d in X-ray crystallography, and at least one way the problem can be addressed (or at least circumvented to solve X-ray structures).

\n\n

See Page 420 for phase problem, See Page 421 for the way the problem can be addressed.

\n\n\n\n

11.20 Draw a set of points as a rectangular array based on unit cells of side a and b, and mark the planes with Miller indices (1,0,0), (0,1,0), (1,1,0), (1,2,0), (2,3,0), (4,1,0).\n

\n\nHere's an example...\n\n$$(1,2,0) = (k,h,l) \\implies (\\frac{a}{h},\\frac{b}{k},0) = (\\frac{a}{1},\\frac{b}{2},0) \\\\ \\implies 2\\times(\\frac{a}{1},\\frac{b}{2},0) = (2a,b,0)$$\n\n
\n\n\n\n
\n\n\n\n\n**Chapter 12**\n\n**12.12 A swimmer enters a gloomier world (in one sense) on diving to greater depths. Given that the mean molar absorption coefficient of seawater in the visible region is $6.2x10^{\u22125}$ $dm^{3}$ $mol^{\u22121}$ $cm^{\u22121}$, calculate the depth at which a diver will experience (a) half the surface intensity of light and (b) one-tenth that intensity.**\n\n\n### Derivation of Beer's Law:\n\nHere is an image of the situation we wish to model:\n\n\n\nThe density of particles, $\\rho$ and the absorption coefficient, $\\alpha$ multiplied by the intensity, I shown in the 1st order differential equation:\n$$ -\\frac{\\partial{I}}{\\partial{x}} = I \\alpha \\rho $$\n\nCombine like-terms to each side of the equation:\n\n$$\\int_{I_{0}}^{I} \\frac{\\partial{I}}{I} = -\\int_{0}^{x} \\alpha \\rho \\partial{x} $$\n\n\nWe know that $\\int \\frac{1}{x}dx = ln(x)$, so\n\n$$ln(\\frac{I}{I_{0}}) = - \\alpha \\rho x, $$\n\n\n-----------------\n\nTo get the general solution of the D.E we can take the exponential of both sides \n\n$$\\frac{I}{I_{0}} = e^{-\\alpha \\rho x} $$\n\n**General Solution to the D.E**:\n\n$$I (x) = I_{0} e^{-\\alpha \\rho x}$$\n\n--------------------------\n\nOtherwise, to continue deriving Beer's Law we can use the property of logarithms:\n\n$$-ln(\\frac{I}{I_{0}}) = ln(\\frac{I_{0}}{I}) = \\alpha \\rho x, $$\n\n\nand since we know the following\n\n$$log_{10}(x) = \\frac{ln(x)}{ln(10)},$$\n\nthen we can say\n\n$$ log_{10}(\\frac{I_{0}}{I}) = \\frac{\\alpha \\rho x}{ln(10)}$$\n\n\nFinally, we can say that $\\rho \\propto c$. We can also simplify further by saying $\\epsilon =\\frac{\\alpha}{ln(10)}$, which has units of $M^{-1}cm^{-1}$ and $x = b$, where b is in cm.\n\n$$ A = log(\\frac{I_{0}}{I}) = \\epsilon b c$$\n\nNow, solving for the path length $b$ gives the following expression with $c_{H_{2}O} = \\rho/MW$ and $I = 0.5I_{0}$.\n\n\n$$ b = \\frac{log(\\frac{I_{0}}{0.5I_{0}})}{\\epsilon (\\rho/MW)} = \\frac{0.301}{(6.2 x 10^{-5} dm^{3}. mol^{-1}.cm^{-1}) (55.5 mol.dm^{-3})} = 87 cm $$\n\n**Note**, since the information regarding salt water concentration is not provided in the question we approximated the concentration by with values for $H_{2}O$.\n\n\n\n\n\n**12.25 How many normal modes of vibration are there for (a) $NO_{2}$, (b) $N_{2}O$, (c) cyclohexane, and (d) hexane?**\n\nThere are $3N-6$ and $3N-5$ vibrational modes (in which N is the number of atoms in molecule) for non-linear and linear molecules; respectively.\n\n\n**(a)** $NO_{2}$, Non-linear; $3N-6 = 3(3)-6 = 3$\n\n**(b)** $N_{2}O$, linear; $3N-5 = 3(3)-5 = 4$\n\n**(c)** cyclohexane, non-linear; $3N-6 = 3(18)-6 = 48$\n\n**(d)** hexane, non-linear; $3N-6 = 3(20)-6 = 54$\n\n\n\n\n-----------------------------------------\n\n**SIDE NOTES:**\n\n### Rates of various processes\n\n| $\\text{Process}$ | $\\text{Timescales (s)}$ | $\\text{Radiative}$ | $\\text{Transition}$ |\n| :--: | :--: | :--: | :--: |\n| IC | $10^{-14}-10^{-11}$ | N | $S_{n} \\to S_{1}$ |\n| Vib Relax | $10^{-14}-10^{-11}$ | N | ${S_{n}}^{*} \\to S_{n}$ |\n| Abs | $10^{-15}$ | Y | $S_{0} \\to S_{n}$ |\n| Fluor | $10^{-9}-10^{-7}$ | Y | $S_{1} \\to S_{0}$ |\n| ISC | $10^{-8}-10^{-3}$ | N | $S_{1} \\to T_{1}$ |\n| Phos | $10^{-4}-10^{0}$ | Y | $T_{1} \\to S_{0}$ |\n\n\n- timescale of FRET are typically in ns \n\n\n-----------------------------------------\n\n\n\n**12.37 When benzophenone is illuminated with ultraviolet radiation, it is excited into a singlet state. 
This singlet changes rapidly into a triplet, which phosphoresces. Triethylamine acts as a quencher for the triplet. In an experiment in methanol as solvent, the phosphorescence intensity Iphos varied with amine concentration as shown below. A time-resolved laser spectroscopy experiment had also shown that the half-life of the fluorescence in the absence of quencher is 29 ms. What is the value of $k_{Q}$?**\n\n\n| $Species$ | $\\text{}$ | $\\text{}$ | $\\text{}$ |\n| :--: | :--: | :--: | :--: |\n| $[Q]/(mol\\space dm^{\u22123})$ | 0.0010 | 0.0050 | 0.0100 |\n| $I_{phos}/(A.U.)$ | 0.41 | 0.25 | 0.16|\n\n\nFirst, we need to write out the mechanism that is given in the question:\n\n>When benzophenone is illuminated with ultraviolet radiation, it is excited into a singlet state. \n\n$$ M + h\\nu_{i} \\rightarrow M^{*} \\tag{1}$$\n\n>This singlet changes rapidly into a triplet, which phosphoresces.\n\n$$ M^{*} \\rightarrow M + h\\nu_{phos} \\tag{2}$$\n\n>Triethylamine acts as a quencher for the triplet.\n\n$$ M^{*} + Q \\rightarrow M + Q \\tag{3}$$\n\n\n
\n\nTo model this process, we apply the steady state approximation on $[M^{*}]$ to obtain $I_{phos}$... (Do this to get your own \"stern-volmer\" equation that models what the questions provides).\n\n**Steady State** is an assumption that the rate of (production/destruction) is equal to zero i.e., at equilibrium. \n\n$$\\frac{d[M^{*}]}{dt} = I_{abs} - k_{Q}[Q][M^{*}]-k_{phos}[M^{*}]=0$$\n\n$$ \\implies (-k_{Q}[Q]-k_{phos})[M^{*}] = -I_{abs} \\implies [M^{*}] = \\frac{I_{abs}}{k_{Q}[Q]+k_{phos}},$$\n\nand we know that $I_{phos} = k_{phos}[M^{*}]$, so \n\n$$ I_{phos} = k_{phos} \\frac{I_{abs}}{k_{Q}[Q]+k_{phos}}$$\n\nWe can take the inverse of $I_{phos}$ to get the equation in the form of a line:\n\n$$ \\frac{1}{I_{phos}} = \\frac{1}{I_{abs}} + \\frac{k_{Q}[Q]}{k_{phos}I_{abs}}$$\n\nNow, we plot the data that was given and extract the slope...\n\n\n```python\n%matplotlib inline\nimport plot as p\nimport numpy as np\nQ = np.array([0.0010,0.0050, 0.0100])\nIphos = np.array([0.41, 0.25, 0.16])\nx,y = Q,1/Iphos\np.simple_plot(x,y,xlabel=r'$[Q]$',ylabel=r'${I_{phos}}^{-1}$',Type='scatter',color=False,fig_size=(8,4),\n fit=True, order=1, annotate_text=r\"$slope=k_{Q}/(k_{phos}I_{abs})$\",annotate_x=-0.005, annotate_y=5.5)\n```\n\nTherefore, the linear fit gives:\n$$I_{phos}^{-1}=(424.5302 dm^{3} mol )[Q]+(1.966), $$\n\nwhere $\\frac{k_{Q}}{k_{phos}I_{abs}} = 424.5302 dm^{3} mol $. \n\nTherefore,\n\n$$k_{Q} = \\frac{(24.5302 dm^{3} mol)(2.39x10^{4} s^{-1})}{1.97} = 5.2x10^{6} dm^{3} mol^{-1} s^{-1} $$\n\n\n\n\n\n\n\n\n\n\n\n\n#### [Jump to table of contents.](#Table-of-Contents:)\n\n
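Returning to 12.37 for a moment: if the custom `plot` helper used above is not available, the same straight-line fit (and the final rate constant) can be reproduced with NumPy alone. This is only a quick numerical check; the data are the ones tabulated in the problem, and $k_{phos} = 2.39x10^{4} s^{-1}$ is simply the value quoted above:\n\n\n```python\nimport numpy as np\n\nQ = np.array([0.0010, 0.0050, 0.0100])    # [Q] / mol dm^-3, from the table above\nIphos = np.array([0.41, 0.25, 0.16])      # I_phos / A.U., from the table above\nslope, intercept = np.polyfit(Q, 1.0/Iphos, 1)  # fit 1/I_phos = intercept + slope*[Q]\nprint(slope, intercept)                   # ~424.5 dm^3 mol^-1 and ~1.97\nk_phos = 2.39e4                           # s^-1, the value used in the text above\nprint(slope*k_phos/intercept)             # k_Q ~ 5.2e6 dm^3 mol^-1 s^-1\n```\n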
\n\n\n\n

What kind of information can be obtained using FRET spectroscopy? What is the distance dependence of the FRET effect?

\n\n

F\u00f6rster resonance energy transfer (FRET) spectroscopy is useful for studying processes that involve inter- and intra-molecular energy transfer, and it can be used to measure distances (roughly 1 to 9 nm) in biological systems. It is well suited to following conformational changes and to measuring bulk, ensemble-averaged distances; single-molecule FRET instead builds histograms of binned FRET distances, ultimately revealing distinct states. See Pages 500, 501 for more information.

\n\n
\n\n**12.39 The F\u00f6rster theory of resonance energy transfer and the basis for the FRET technique can be tested by performing fluorescence measurements on a series of compounds in which an energy donor and an energy acceptor are covalently linked by a rigid molecular linker of variable and known length. L. Stryer and R.P. Haugland, Proc. Natl. Acad. Sci. USA 58, 719 (1967), collected the following data on a family of compounds with the general composition dansyl-(l-prolyl)n-naphthyl, in which the distance R between the naphthyl donor and the dansyl acceptor was varied by increasing the number of prolyl units in the linker:**\n\n\n| $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ | $\\text{}$ |\n| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | \n| $R/nm$ | 1.2 | 1.5 | 1.8 | 2.8 | 3.1 | 3.4 | 3.7 | 4.0 | 4.3 | 4.6 |\n| $\\eta_{T}$ | 0.99 | 0.94 | 0.97 | 0.82 | 0.74 | 0.65 | 0.40 | 0.28 | 0.24 | 0.16 |\n\n\n\n**Are the data described adequately by the F\u00f6rster theory (eqns 12.26 and 12.27)? If so, what is the value of $R_{0}$ for the naphthyl\u2013dansyl pair?**\n\n\n
F\u00f6rster theory:
\n\nStates that the efficiency of resonance energy transfer is related to the distance $R$ between donor-acceptor pairs by\n\n$$\\eta_{T} = \\frac{{R_{0}}^{6}}{{R_{0}}^{6} + {R}^{6}}, $$\n\nwhere $R_{0}$ is the distance at which $50 \\%$ of the energy is transfered from donor to acceptor, and $R$ is the distance between donor and acceptor.\n\nFirst, we need to rearrange the F\u00f6rster theory equation into a linearized form. \n\n$$ \\frac{1}{\\eta_{T}} = \\frac{{R_{0}}^{6} + {R}^{6}}{{R_{0}}^{6}} = 1 + (\\frac{R}{R_{0}})^{6}$$\n\nNow, we are able to plot the data:\n\n\n\n\n```python\n%matplotlib inline\nimport plot as p\nimport numpy as np\nR = np.array([1.2, 1.5, 1.8, 2.8, 3.1, 3.4, 3.7, 4.0, 4.3, 4.6])\nnT = np.array([0.99, 0.94, 0.97, 0.82, 0.74, 0.65, 0.40, 0.28, 0.24, 0.16])\nx,y = R**6,1/nT\np.simple_plot(x,y,xlabel=r'$(R/(nm))^{6}$',ylabel=r'${\\eta_{T}}^{-1}$',Type='scatter',\n color=False,fig_size=(8,4),fit=True, order=1)\n```\n\nUsing the slope of the line $y=0.000550*x+(0.971320)$, where the slope is $0.000550 = (\\frac{1}{R_{0}})^{6}$.\n\n$$R_{0} = (\\frac{1}{0.000550 nm^{-6}})^{1/6} = 3.5 nm$$\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "8d990f9f13cff07ca8ee44c0d5c933429653b990", "size": 69020, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "CHEM3405_Physical_Chemistry_Bio/HW_Review_03-25-20.ipynb", "max_stars_repo_name": "robraddi/tu_chem", "max_stars_repo_head_hexsha": "18b8247d6c00e33f15f040a57a32b5fc2372137a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-04-29T04:26:42.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-29T04:26:42.000Z", "max_issues_repo_path": "CHEM3405_Physical_Chemistry_Bio/HW_Review_03-25-20.ipynb", "max_issues_repo_name": "robraddi/tu_chem", "max_issues_repo_head_hexsha": "18b8247d6c00e33f15f040a57a32b5fc2372137a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "CHEM3405_Physical_Chemistry_Bio/HW_Review_03-25-20.ipynb", "max_forks_repo_name": "robraddi/tu_chem", "max_forks_repo_head_hexsha": "18b8247d6c00e33f15f040a57a32b5fc2372137a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2020-12-03T17:47:05.000Z", "max_forks_repo_forks_event_max_datetime": "2020-12-03T17:47:05.000Z", "avg_line_length": 162.4, "max_line_length": 26628, "alphanum_fraction": 0.8598377282, "converted": true, "num_tokens": 3985, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3522017956470284, "lm_q2_score": 0.26284183159693775, "lm_q1q2_score": 0.09257336505959532}} {"text": "# Practical Session 1: Data exploration and regression algorithms\n\n*Notebook by Ekaterina Kochmar*\n\n## 0.1. Dataset\n\nThe California House Prices Dataset is originally obtained from the StatLib repository. This dataset contains the collected information on the variables (e.g., median income, number of households, precise geographical position) using all the block groups in California from the 1990 Census. A block group is the smallest geographical unit for which the US Census Bureau publishes sample data, and on average it includes $1425.5$ individuals living in a geographically compact area. 
The [original data](http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html) contains $20640$ observations on $9$ variables, with the *median house value* being the dependent variable (or *target attribute*). The [modified dataset](https://www.kaggle.com/camnugent/california-housing-prices) from Aurelien Geron, *Hands-On Machine Learning with Scikit-Learn and TensorFlow* contains an additional categorical variable.\n\nFor more information on the original data, please refer to Pace, R. Kelley and Ronald Barry, *Sparse Spatial Autoregressions*, Statistics and Probability Letters, 33 (1997) 291-297. For the information on the modified dataset, please refer to Aurelien Geron, *Hands-On Machine Learning with Scikit-Learn and TensorFlow*, O\u2032Reilly (2017), ISBN: 978-1491962299.\n\n## 0.2. Understanding your task\n\nYou are given a dataset that contains a range of attributes describing the houses in California. Your task is to predict the median price of a house based on its attributes. That is, you should train a machine learning (ML) algorithm on the available data, and the next time you get new information on some housing in California, you can use your trained algorithm to predict its price.\n\nThe questions to ask yourself before starting a new ML project:\n- Does the task suggest a supervised or an unsupervised approach?\n- Are you trying to predict a discrete or a continuous value?\n- Which ML algorithm is most suitable?\n\nTry to answer these questions before you start working on this task, using the following hints:\n- *Supervised* approaches rely on the availability of target label annotation in data; examples include regression and classification approaches. *Unsupervised* approaches don't use annotated data; clustering is a good example of such approach.\n- *Discrete* variables are associated with classes and imply classification approach. *Continuous* variables are associated with regression.\n\n## 0.3. Machine Learning check-list\n\nIn a typical ML project, you need to:\n\n- Get the dataset\n- Understand the data, the attributes and their correlations\n- Split the data into training and test set\n- Apply normalisation, scaling and other transformations to the attributes if needed\n- Build a machine learning model\n- Evaluate the model and investigate the errors\n- Tune your model to improve performance\n\nThis practical will show you how to implement the above steps.\n\n## 0.4. Prerequisites\n\nSome of you might have used Jupiter notebooks with the following libraries before in the [CL 1A Scientific Computing course](https://www.cl.cam.ac.uk/teaching/1920/SciComp/materials.html).\n\nTo run the notebooks on your machine, check if `Python 3` is installed. In addition, you will need the following libraries:\n\n- `Pandas` for easy data uploading and manipulation. Check installation instructions at https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html\n- `Matplotlib`: for visualisations. Check installation instructions at https://matplotlib.org/users/installing.html\n- `NumPy` and `SciPy`: for scietinfic programming. Check installation instruction at https://www.scipy.org/install.html\n- `Scikit-learn`: for machine learning algorithms. Check installation instructions at http://scikit-learn.org/stable/install.html\n\nAlternatively, a number of these libraries can be installed in one go through [Anaconda](https://www.anaconda.com/products/individual) distribution. \n\n## 0.5. 
Learning objectives\n\nIn this practical you will learn how to:\n\n- upload and explore a dataset\n- visualise and explore the correlations between the variables\n- structure a machine learning project\n- select the training and test data in a random and in a stratified way\n- handle missing values\n- handle categorical values\n- implement a custom data transformer\n- build a machine learning pipeline\n- implement a regression algorithm\n- evaluate a regression algorithm performance\n\nIn addition, you will learn about such common machine learning concepts as:\n- data scaling and normalisation\n- overfitting and underfitting\n- cross-validation\n- hyperparameter setting with grid search\n\n\n## Step 1: Uploading and inspecting the data\n\nFirst let's upload the dataset using `Pandas` and defining a function pointing to the location of the `housing.csv` file:\n\n\n```python\nimport pandas as pd\nimport os\n\ndef load_data(housing_path):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pd.read_csv(csv_path)\n```\n\nNow, let's run `load_data` using the path where you stored your `housing.csv` file. This function will return a `Pandas` DataFrame object containing all the data. It is always a good idea to take a quick look into the uploaded dataset and make sure you understand the data you are working with. For example, you can check the top rows of the uploaded data and get the general information about the dataset using `Pandas` functionality as follows:\n\n\n```python\nhousing = load_data(\"housing/\")\nhousing.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n| | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value | ocean_proximity |\n| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |\n| 0 | -122.23 | 37.88 | 41.0 | 880.0 | 129.0 | 322.0 | 126.0 | 8.3252 | 452600.0 | NEAR BAY |\n| 1 | -122.22 | 37.86 | 21.0 | 7099.0 | 1106.0 | 2401.0 | 1138.0 | 8.3014 | 358500.0 | NEAR BAY |\n| 2 | -122.24 | 37.85 | 52.0 | 1467.0 | 190.0 | 496.0 | 177.0 | 7.2574 | 352100.0 | NEAR BAY |\n| 3 | -122.25 | 37.85 | 52.0 | 1274.0 | 235.0 | 558.0 | 219.0 | 5.6431 | 341300.0 | NEAR BAY |\n| 4 | -122.25 | 37.85 | 52.0 | 1627.0 | 280.0 | 565.0 | 259.0 | 3.8462 | 342200.0 | NEAR BAY |\n
\n
\n\n\n\nRemember that each row in this table represents a block group (housing district), and each column an attribute. How many attributes does the dataset contain? \n\nAnother way to get the summary information about the number of instances and attributes in the dataset is using `info` function. It also shows each attribute's type and number of non-null values:\n\n\n```python\nhousing.info()\n```\n\n \n RangeIndex: 20640 entries, 0 to 20639\n Data columns (total 10 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 20640 non-null float64\n 1 latitude 20640 non-null float64\n 2 housing_median_age 20640 non-null float64\n 3 total_rooms 20640 non-null float64\n 4 total_bedrooms 20433 non-null float64\n 5 population 20640 non-null float64\n 6 households 20640 non-null float64\n 7 median_income 20640 non-null float64\n 8 median_house_value 20640 non-null float64\n 9 ocean_proximity 20640 non-null object \n dtypes: float64(9), object(1)\n memory usage: 1.6+ MB\n\n\nBefore proceeding further, think about the following: \n- How is the data represented? \n- What do the attribute types suggest? \n- Are there any missing values in the dataset? If so, should you do anything about them? \n\nYou must have worked with numerical values before, and the data types like `float64` should look familiar. However, *ocean\\_proximity* attribute has values of a different type. You can inspect the values of a particular attribute in the DataFrame using the following code:\n\n\n```python\nhousing[\"ocean_proximity\"].value_counts()\n```\n\n\n\n\n <1H OCEAN 9136\n INLAND 6551\n NEAR OCEAN 2658\n NEAR BAY 2290\n ISLAND 5\n Name: ocean_proximity, dtype: int64\n\n\n\nThe above suggests that the values are categorical: there are $5$ categories that define ocean proximity. ML algorithms prefer to work with numerical data, besides all the other attributes are represented using numbers. Keep that in mind, as this suggests that you will need to cast the categorical data as numerical.\n\nFor now, let's have a general overview of the attributes and distribution of their values (note *ocean_proximity* is excluded from this summary):\n\n\n```python\nhousing.describe()\n```\n\n\n\n\n

|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value |
|---|-----------|----------|--------------------|-------------|----------------|------------|------------|---------------|--------------------|
| count | 20640.000000 | 20640.000000 | 20640.000000 | 20640.000000 | 20433.000000 | 20640.000000 | 20640.000000 | 20640.000000 | 20640.000000 |
| mean | -119.569704 | 35.631861 | 28.639486 | 2635.763081 | 537.870553 | 1425.476744 | 499.539680 | 3.870671 | 206855.816909 |
| std | 2.003532 | 2.135952 | 12.585558 | 2181.615252 | 421.385070 | 1132.462122 | 382.329753 | 1.899822 | 115395.615874 |
| min | -124.350000 | 32.540000 | 1.000000 | 2.000000 | 1.000000 | 3.000000 | 1.000000 | 0.499900 | 14999.000000 |
| 25% | -121.800000 | 33.930000 | 18.000000 | 1447.750000 | 296.000000 | 787.000000 | 280.000000 | 2.563400 | 119600.000000 |
| 50% | -118.490000 | 34.260000 | 29.000000 | 2127.000000 | 435.000000 | 1166.000000 | 409.000000 | 3.534800 | 179700.000000 |
| 75% | -118.010000 | 37.710000 | 37.000000 | 3148.000000 | 647.000000 | 1725.000000 | 605.000000 | 4.743250 | 264725.000000 |
| max | -114.310000 | 41.950000 | 52.000000 | 39320.000000 | 6445.000000 | 35682.000000 | 6082.000000 | 15.000100 | 500001.000000 |

To make sure you understand the structure of the dataset, try answering the following questions:
- How can you interpret the values in the table above?
- What do the percentiles (e.g., $25\%$ or $50\%$) tell you about the distribution of values in this dataset (you can select one particular attribute to explain)?
- How are the missing values handled?

Remember that you can always refer to the [`Pandas`](https://pandas.pydata.org/pandas-docs/stable/reference/index.html) documentation.

Another good way to get an overview of the values distribution is to plot histograms. This time, you'll need to use `matplotlib`:


```python
%matplotlib inline
# so that the plots will be displayed in the notebook
import matplotlib.pyplot as plt

housing.hist(bins=50, figsize=(20,15))
plt.show()
```

Two observations about these plots are worth making:
- the *median_income*, *housing_median_age* and the *median_house_value* have been capped by the team that collected the data: that is, the values for the *median_income* are scaled by dividing the income by \$10000 and capped so that they range between $[0.4999, 15.0001]$, with incomes below and above those bounds clipped to $0.4999$ and $15.0001$ respectively; similarly, the *housing_median_age* values have been capped to the range $[1, 52]$ years, and the *median_house_value* to the range $[14999, 500001]$. Data manipulations like these are not unusual in data science, but it's good to be aware of how the data is represented;
- several other attributes are "tail heavy": they have a long distribution tail with many decreasingly rare values to the right of the mean. In practice that means that you might consider using the logarithms of these values rather than the absolute values.

## Step 2: Splitting the data into training and test sets

In this practical, you are working with a dataset that has been collected and thoroughly labelled in the past. Each instance has a predefined set of values and the correct price label assigned to it. After training the ML model on this dataset you hope to be able to predict the prices for new houses, not contained in this dataset, based on their characteristics such as geographical position, median income, number of rooms and so on. How can you check in advance whether your model is good at making such predictions?

The answer is: you set part of your dataset, called the *test set*, aside and use it to evaluate the performance of your model only. You train and tune your model using the rest of the dataset, the *training set*, and evaluate the performance of the model trained this way on the test set. Since the model doesn't see the test set during training, this performance should give you a reasonable estimate of how well it would perform on new data. Traditionally, you split the data into an $80\%$ training set and a $20\%$ test set, making sure that the test instances are selected randomly so that you don't end up with a biased selection leading to over-optimistic or over-pessimistic results on your test set.

For example, you can select your test set as the code below shows. To ensure random selection of the test items, use `np.random.permutation`. However, if you want to ensure that you have a stable test set, and the same test instances get selected from the dataset in a random fashion in different runs of the program, select a random seed, e.g. 
using `np.random.seed(42)`.


```python
import numpy as np
np.random.seed(42)

def split_train_test(data, test_ratio):
    # shuffle the row indices and split them into a test and a training part
    shuffled_indices = np.random.permutation(len(data))
    test_set_size = int(len(data) * test_ratio)
    test_indices = shuffled_indices[:test_set_size]
    train_indices = shuffled_indices[test_set_size:]
    return data.iloc[train_indices], data.iloc[test_indices]

train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set), "training instances +", len(test_set), "test instances")
```

    16512 training instances + 4128 test instances


Note that `scikit-learn` provides similar functionality to the code above with its `train_test_split` function. Moreover, you can pass it several datasets with the same number of rows each, and it will split them into training and test sets on the same indices (you might find it useful if you need to pass in a separate DataFrame with labels):


```python
from sklearn.model_selection import train_test_split

train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
print(len(train_set), "training instances +", len(test_set), "test instances")
```

    16512 training instances + 4128 test instances


So far, you have been selecting your test set using random sampling methods. If your data is representative of the task at hand, this should help ensure that the results of the model testing are informative. However, if your dataset is not very large and the data is skewed on some of the attributes or on the target label (as is often the case with real-world data), random sampling might introduce a sampling bias. *Stratified sampling* is a technique that helps make sure that the distributions of the instance attributes or labels in the training and the test sets are similar, meaning that the proportion of instances drawn from each *stratum* in the dataset is similar in the training and test data.

Sampling bias may express itself both in the distribution of labels and in the distribution of the attribute values. For instance, take a look at the *median_income* attribute value distribution. Suppose for now (and you might find a confirmation of that later in the practical) that this attribute is predictive of the house price; however, its values are unevenly distributed across the range of $[0.4999, 15.0001]$ with a very long tail. If random sampling doesn't select enough instances for each *stratum* (each range of incomes), the estimate of the under-represented strata's importance will be biased.

First, to limit the number of income categories (strata), particularly at the long tail, let's apply further binning to the income values: e.g., you can divide the income by $1.5$, round up the values using `ceil` to have discrete categories (bins), and merge all the categories greater than $5$ into category $5$. The latter can be achieved using `Pandas`' `where` functionality, keeping the original values when they are smaller than $5$ and converting them to $5$ otherwise:


```python
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)

housing["income_cat"].hist()
plt.show()
```

Now you have a much smaller number of categories of income, with the instances more evenly distributed, so you can hope to get enough data to represent the tail.
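As an aside, the same binning can be expressed more directly with `Pandas`' `cut` function. The sketch below is only an equivalent alternative to the `ceil`/`where` code above (the bin edges and labels are assumptions chosen to mirror the divide-by-$1.5$, cap-at-$5$ scheme), so there is no need to run it in addition:


```python
# equivalent income binning with pd.cut: ceil(income / 1.5) capped at 5 corresponds
# to the right-inclusive bins (0, 1.5], (1.5, 3], (3, 4.5], (4.5, 6], (6, inf)
housing["income_cat"] = pd.cut(housing["median_income"],
                               bins=[0., 1.5, 3.0, 4.5, 6.0, np.inf],
                               labels=[1, 2, 3, 4, 5])
```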
Next, let's split the dataset into training and test sets making sure both contain similar proportion of instances from each income category. You can do that using `scikit-learn`'s `StratifiedShuffleSplit` specifying the condition on which the data should be stratified (in this case, income category):\n\n\n```python\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\nsplit = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\nfor train_index, test_index in split.split(housing, housing[\"income_cat\"]):\n strat_train_set = housing.loc[train_index]\n strat_test_set = housing.loc[test_index]\n```\n\nLet's compare the distribution of the income values in the randomly selected train and test sets and the stratified train and test sets against the full dataset. To better understand the effect of random sampling versus stratified sampling, let's also estimate the error that would be introduced in the data by such splits:\n\n\n```python\ntrain_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)\n\ndef income_cat_proportions(data):\n return data[\"income_cat\"].value_counts() / len(data)\n\ncompare_props = pd.DataFrame({\n \"Overall\": income_cat_proportions(housing),\n \"Stratified tr\": income_cat_proportions(strat_train_set),\n \"Random tr\": income_cat_proportions(train_set),\n \"Stratified ts\": income_cat_proportions(strat_test_set),\n \"Random ts\": income_cat_proportions(test_set),\n})\ncompare_props[\"Rand. tr %error\"] = 100 * compare_props[\"Random tr\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Rand. ts %error\"] = 100 * compare_props[\"Random ts\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Strat. tr %error\"] = 100 * compare_props[\"Stratified tr\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Strat. ts %error\"] = 100 * compare_props[\"Stratified ts\"] / compare_props[\"Overall\"] - 100\n\ncompare_props.sort_index()\n```\n\n\n\n\n

|   | Overall | Stratified tr | Random tr | Stratified ts | Random ts | Rand. tr %error | Rand. ts %error | Strat. tr %error | Strat. ts %error |
|---|---------|---------------|-----------|---------------|-----------|-----------------|-----------------|------------------|------------------|
| 1.0 | 0.039826 | 0.039850 | 0.039729 | 0.039729 | 0.040213 | -0.243309 | 0.973236 | 0.060827 | -0.243309 |
| 2.0 | 0.318847 | 0.318859 | 0.317466 | 0.318798 | 0.324370 | -0.433065 | 1.732260 | 0.003799 | -0.015195 |
| 3.0 | 0.350581 | 0.350594 | 0.348595 | 0.350533 | 0.358527 | -0.566611 | 2.266446 | 0.003455 | -0.013820 |
| 4.0 | 0.176308 | 0.176296 | 0.178537 | 0.176357 | 0.167393 | 1.264084 | -5.056334 | -0.006870 | 0.027480 |
| 5.0 | 0.114438 | 0.114402 | 0.115673 | 0.114583 | 0.109496 | 1.079594 | -4.318374 | -0.031753 | 0.127011 |
\n\n\n\nAs you can see, the distributions in the stratified training and test sets are much closer to the original distribution of categories as well as being much closer to each other. \n\nNote, that to help you split the data, you had to introduce a new category \u2013 *income_cat* \u2013 which contains the same information as the original attribute *median_income* binned in a different way:\n\n\n```python\nstrat_train_set.info()\n```\n\n \n Int64Index: 16512 entries, 17606 to 15775\n Data columns (total 11 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null float64\n 1 latitude 16512 non-null float64\n 2 housing_median_age 16512 non-null float64\n 3 total_rooms 16512 non-null float64\n 4 total_bedrooms 16354 non-null float64\n 5 population 16512 non-null float64\n 6 households 16512 non-null float64\n 7 median_income 16512 non-null float64\n 8 median_house_value 16512 non-null float64\n 9 ocean_proximity 16512 non-null object \n 10 income_cat 16512 non-null float64\n dtypes: float64(10), object(1)\n memory usage: 1.5+ MB\n\n\nBefore proceeding further let's remove the *income_cat* attribute so the data is back to its original state. Here is how you can do that:\n\n\n```python\nfor set_ in (strat_train_set, strat_test_set):\n set_.drop(\"income_cat\", axis=1, inplace=True)\n\nstrat_train_set.info()\n```\n\n \n Int64Index: 16512 entries, 17606 to 15775\n Data columns (total 10 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null float64\n 1 latitude 16512 non-null float64\n 2 housing_median_age 16512 non-null float64\n 3 total_rooms 16512 non-null float64\n 4 total_bedrooms 16354 non-null float64\n 5 population 16512 non-null float64\n 6 households 16512 non-null float64\n 7 median_income 16512 non-null float64\n 8 median_house_value 16512 non-null float64\n 9 ocean_proximity 16512 non-null object \n dtypes: float64(9), object(1)\n memory usage: 1.4+ MB\n\n\n## Step 3: Exploring the attributes\n\nThe next step is to look more closely into the attributes and gain insights into the data. In particular, you should try to answer the following questions: \n- Which attributes look most informative? \n- How do they correlate with each other and the target label?\n- Is any further normalisation or scaling needed?\n\nThe most informative ways in which you can answer the questions above are by *visualising* the data and by *collecting additional statistics* on the attributes and their relations to each other.\n\nFirst, remember that from now on you're only looking into and gaining insights from the training data. You will use the test data at the evaluation step only, thus ensuring no data leakage between the training and test sets occurs and the results on the test set are a fair evaluation of your algorithm's performance. Let's make a copy of the training set that you can experiment with without a danger of overwriting or changing the original data: \n\n\n```python\nhousing = strat_train_set.copy()\n```\n\n### Visualisations\n\nThe first two attributes describe the geographical position of the houses. Let's apply further visualisations and look into the geographical area that is covered: for that, use a scatter plot plotting longitude against latitude coordinates. 
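A minimal first version of this plot, with no extra options, might look as follows:


```python
# a plain scatter plot of the district locations
housing.plot(kind='scatter', x='longitude', y='latitude')
plt.show()
```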
To make the scatter plot more informative, use the `alpha` option to highlight high-density points:


```python
housing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.2)
```

You can experiment with `alpha` values to get a better understanding, but it should be obvious from these plots that the areas in the south and along the coast of California are more densely populated (roughly corresponding to the Bay Area, Los Angeles, San Diego, and the Central Valley).

As a quick sanity check, you can also confirm how the *ocean_proximity* categories are distributed in the stratified training set:


```python
# distribution of the ocean_proximity categories in the (stratified) training set
housing["ocean_proximity"].value_counts()
```




    <1H OCEAN     7276
    INLAND        5263
    NEAR OCEAN    2124
    NEAR BAY      1847
    ISLAND           2
    Name: ocean_proximity, dtype: int64



Now, what does geographical position suggest about the housing prices? In the following code, the size of the circles represents the size of the population, and the color represents the price, ranging from blue for low prices to red for high prices (this color scheme is specified by the preselected `cmap` type):


```python
housing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.5,
             s=housing["population"]/100, label="population", figsize=(10,7),
             c=housing["median_house_value"], cmap=plt.get_cmap("jet"), colorbar=True,
             )
plt.legend()
```

This plot suggests that the housing prices depend on the proximity to the ocean and on the population size. What does this suggest about the informativeness of the attributes for your ML task?

### Correlations

Let's also look into how the attributes correlate with each other:


```python
corr_matrix = housing.corr()
corr_matrix
```


|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value |
|---|-----------|----------|--------------------|-------------|----------------|------------|------------|---------------|--------------------|
| longitude | 1.000000 | -0.924478 | -0.105848 | 0.048871 | 0.076598 | 0.108030 | 0.063070 | -0.019583 | -0.047432 |
| latitude | -0.924478 | 1.000000 | 0.005766 | -0.039184 | -0.072419 | -0.115222 | -0.077647 | -0.075205 | -0.142724 |
| housing_median_age | -0.105848 | 0.005766 | 1.000000 | -0.364509 | -0.325047 | -0.298710 | -0.306428 | -0.111360 | 0.114110 |
| total_rooms | 0.048871 | -0.039184 | -0.364509 | 1.000000 | 0.929379 | 0.855109 | 0.918392 | 0.200087 | 0.135097 |
| total_bedrooms | 0.076598 | -0.072419 | -0.325047 | 0.929379 | 1.000000 | 0.876320 | 0.980170 | -0.009740 | 0.047689 |
| population | 0.108030 | -0.115222 | -0.298710 | 0.855109 | 0.876320 | 1.000000 | 0.904637 | 0.002380 | -0.026920 |
| households | 0.063070 | -0.077647 | -0.306428 | 0.918392 | 0.980170 | 0.904637 | 1.000000 | 0.010781 | 0.064506 |
| median_income | -0.019583 | -0.075205 | -0.111360 | 0.200087 | -0.009740 | 0.002380 | 0.010781 | 1.000000 | 0.687160 |
| median_house_value | -0.047432 | -0.142724 | 0.114110 | 0.135097 | 0.047689 | -0.026920 | 0.064506 | 0.687160 | 1.000000 |

Since you are trying to predict the house value, the last column in this table is the most informative. Let's make the output clearer:


```python
corr_matrix["median_house_value"].sort_values(ascending=False)
```




    median_house_value    1.000000
    median_income         0.687160
    total_rooms           0.135097
    housing_median_age    0.114110
    households            0.064506
    total_bedrooms        0.047689
    population           -0.026920
    longitude            -0.047432
    latitude             -0.142724
    Name: median_house_value, dtype: float64



This makes it clear that the *median_income* is most strongly positively correlated with the price. There is a small positive correlation of the price with *total_rooms* and *housing_median_age*, and a small negative correlation with *latitude*, which suggests that the prices go up with the increase in income, number of rooms and house age, and go down as you go north. `Pandas`' `scatter_matrix` function allows you to visualise the correlation of attributes with each other (note that since the correlation of an attribute with itself would result in a straight line, `Pandas` plots a histogram instead; that's what you see along the diagonal):


```python
from pandas.plotting import scatter_matrix
# If the above returns an error, use the following:
#from pandas.tools.plotting import scatter_matrix

attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age", "latitude"]
scatter_matrix(housing[attributes], figsize=(12,8))
```

These plots confirm that the income attribute is the most promising one for predicting house prices, so let's zoom in on this attribute:


```python
housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=0.3)
```

There are a couple of observations to be made about this plot:
- The correlation is indeed quite strong: the values follow the upward trend and are not too dispersed otherwise;
- You can clearly see a line around $500000$ which covers the full range of income values and is due to the fact that the house prices above that value were capped in the original dataset. However, the plot suggests that there are also some other, less obvious groups of values, most visible around $350000$ and $450000$, that also cover a range of different income values. Since your ML algorithm will learn to reproduce such data quirks, you might consider looking into these matters further and removing these districts from your dataset (after all, in any real-world application one can expect a certain amount of noise in the data, and cleaning the data is one of the steps in any practical application).

The next thing to notice is that a number of attributes from the original dataset, including *total_rooms*, *total_bedrooms* and *population*, do not actually describe each house in particular but rather represent the cumulative counts for *all households* in the block group. At the same time, the task at hand requires you to predict the house price for *each individual household*. In addition, an attribute that measures the proportion of bedrooms against the total number of rooms might be informative. 
Therefore, the following transformed attributes might be more useful for the prediction:


```python
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_household"] = housing["total_bedrooms"] / housing["households"]
housing["bedrooms_per_rooms"] = housing["total_bedrooms"] / housing["total_rooms"]
housing["population_per_household"] = housing["population"] / housing["households"]
```

A good way to check whether these transformations have any effect on the task is to check the attribute correlations again:


```python
corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
```




    median_house_value          1.000000
    median_income               0.687160
    rooms_per_household         0.146285
    total_rooms                 0.135097
    housing_median_age          0.114110
    households                  0.064506
    total_bedrooms              0.047689
    population_per_household   -0.021985
    population                 -0.026920
    bedrooms_per_household     -0.043343
    longitude                  -0.047432
    latitude                   -0.142724
    bedrooms_per_rooms         -0.259984
    Name: median_house_value, dtype: float64



You can see that the number of rooms per household is more strongly correlated with the house price than the raw *total_rooms* count: the more rooms, the more expensive the house. The bedroom-to-room ratio, in turn, is more strongly correlated with the price than either the number of rooms or the number of bedrooms in the household; since the correlation is negative, the lower the bedroom-to-room ratio, the more expensive the property.

## Step 4: Data preparation and transformations for machine learning algorithms

Now you are almost ready to implement a regression algorithm for the task at hand. However, there are a couple of other things to address, in particular:
- handle missing values if there are any;
- convert all attribute values (e.g. categorical, textual) into numerical format;
- scale / normalise the feature values if necessary.

First, let's separate the labels you're trying to predict (*median_house_value*) from the attributes in the dataset that you will use as *features*. The following code will keep a copy of the labels and the rest of the attributes separate (note that `drop()` will create a copy of the data and will not affect `strat_train_set` itself):


```python
housing = strat_train_set.drop("median_house_value", axis=1)  # drop makes a copy!
housing_labels = strat_train_set["median_house_value"].copy()
```

You can add the transformed features that you found useful before by defining an additional helper function as shown below. Then you can run `add_features(housing)` to add the features:


```python
def add_features(data):
    # add the transformed features that you found useful before
    data["rooms_per_household"] = data["total_rooms"] / data["households"]
    data["bedrooms_per_household"] = data["total_bedrooms"] / data["households"]
    data["bedrooms_per_rooms"] = data["total_bedrooms"] / data["total_rooms"]
    data["population_per_household"] = data["population"] / data["households"]

# add_features(housing)
```

You will learn shortly how to implement your own *data transformers* and will then be able to re-implement the addition of these features as a data transformer.

### Handling missing values

In Step 1 above, when you took a quick look into the dataset, you might have noticed that all attributes but one have $20640$ values in the dataset; *total_bedrooms* has $20433$, so some values are missing. 
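Before deciding what to do about them, it is worth confirming which attributes are affected. A quick check, using only the `Pandas` calls already imported above:


```python
# count the missing (NaN) entries per attribute in the training set
housing.isnull().sum()
```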
ML algorithms cannot deal with missing values, so you'll need to decide how to replace these values. There are three possible solutions:\n\n1. remove the corresponding housing blocks from the dataset (i.e., remove the rows in the dataset)\n2. remove the whole attribute (i.e., remove the column)\n3. set the missing values to some predefined value (e.g., zero value, the mean, the median, the most frequent value of the attribute, etc.)\n\nThe following `Pandas` functionality will help you implement each of these options:\n\n\n```python\nhousing\n```\n\n\n\n\n

|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | ocean_proximity |
|---|-----------|----------|--------------------|-------------|----------------|------------|------------|---------------|-----------------|
| 17606 | -121.89 | 37.29 | 38.0 | 1568.0 | 351.0 | 710.0 | 339.0 | 2.7042 | <1H OCEAN |
| 18632 | -121.93 | 37.05 | 14.0 | 679.0 | 108.0 | 306.0 | 113.0 | 6.4214 | <1H OCEAN |
| 14650 | -117.20 | 32.77 | 31.0 | 1952.0 | 471.0 | 936.0 | 462.0 | 2.8621 | NEAR OCEAN |
| 3230 | -119.61 | 36.31 | 25.0 | 1847.0 | 371.0 | 1460.0 | 353.0 | 1.8839 | INLAND |
| 3555 | -118.59 | 34.23 | 17.0 | 6592.0 | 1525.0 | 4459.0 | 1463.0 | 3.0347 | <1H OCEAN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 6563 | -118.13 | 34.20 | 46.0 | 1271.0 | 236.0 | 573.0 | 210.0 | 4.9312 | INLAND |
| 12053 | -117.56 | 33.88 | 40.0 | 1196.0 | 294.0 | 1052.0 | 258.0 | 2.0682 | INLAND |
| 13908 | -116.40 | 34.09 | 9.0 | 4855.0 | 872.0 | 2098.0 | 765.0 | 3.2723 | INLAND |
| 11159 | -118.01 | 33.82 | 31.0 | 1960.0 | 380.0 | 1356.0 | 356.0 | 4.0625 | <1H OCEAN |
| 15775 | -122.45 | 37.77 | 52.0 | 3095.0 | 682.0 | 1269.0 | 639.0 | 3.5750 | NEAR BAY |

16512 rows × 9 columns

```python
## option 1: remove the rows with missing total_bedrooms values
housing.dropna(subset=["total_bedrooms"])
## option 2: remove the whole attribute (column)
# housing.drop("total_bedrooms", axis=1)
## option 3: replace the missing values with the median
# median = housing["total_bedrooms"].median()
# housing["total_bedrooms"].fillna(median, inplace=True)

# note: as written, none of the calls above modify `housing` in place,
# so displaying it below still shows the original data
housing


# I would have chosen to replace rather than drop them!
# I think replacing will cause more harm than use?
```


|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | ocean_proximity |
|---|-----------|----------|--------------------|-------------|----------------|------------|------------|---------------|-----------------|
| 17606 | -121.89 | 37.29 | 38.0 | 1568.0 | 351.0 | 710.0 | 339.0 | 2.7042 | <1H OCEAN |
| 18632 | -121.93 | 37.05 | 14.0 | 679.0 | 108.0 | 306.0 | 113.0 | 6.4214 | <1H OCEAN |
| 14650 | -117.20 | 32.77 | 31.0 | 1952.0 | 471.0 | 936.0 | 462.0 | 2.8621 | NEAR OCEAN |
| 3230 | -119.61 | 36.31 | 25.0 | 1847.0 | 371.0 | 1460.0 | 353.0 | 1.8839 | INLAND |
| 3555 | -118.59 | 34.23 | 17.0 | 6592.0 | 1525.0 | 4459.0 | 1463.0 | 3.0347 | <1H OCEAN |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 6563 | -118.13 | 34.20 | 46.0 | 1271.0 | 236.0 | 573.0 | 210.0 | 4.9312 | INLAND |
| 12053 | -117.56 | 33.88 | 40.0 | 1196.0 | 294.0 | 1052.0 | 258.0 | 2.0682 | INLAND |
| 13908 | -116.40 | 34.09 | 9.0 | 4855.0 | 872.0 | 2098.0 | 765.0 | 3.2723 | INLAND |
| 11159 | -118.01 | 33.82 | 31.0 | 1960.0 | 380.0 | 1356.0 | 356.0 | 4.0625 | <1H OCEAN |
| 15775 | -122.45 | 37.77 | 52.0 | 3095.0 | 682.0 | 1269.0 | 639.0 | 3.5750 | NEAR BAY |

16512 rows × 9 columns

Although all three options are possible, keep in mind that in the first two cases you are throwing away either some valuable attributes (e.g., as you've seen earlier, *bedrooms_per_rooms* correlates well with the label you're trying to predict) or a number of valuable training examples. Option 3, therefore, looks more promising. Note that in that case you estimate the mean or median based on the training set only (as, in general, your ML algorithm has access to the training data only during the training phase), and then store the mean / median values to replace the missing values in the test set (or any new dataset, for that matter). In addition, you might want to calculate and store the mean / median values for all attributes, since in a real-life application you can never be sure whether any of the attributes will have missing values in the future.

Here is how you can calculate and store the median values using `sklearn` (note that you'll need to exclude the `ocean_proximity` attribute from this calculation since it has non-numerical values):


```python
# for earlier versions of sklearn use:
#from sklearn.preprocessing import Imputer
#imputer = Imputer(strategy="median")

from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy="median")
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
```




    SimpleImputer(add_indicator=False, copy=True, fill_value=None,
                  missing_values=nan, strategy='median', verbose=0)



You can check the median values stored in the `imputer` as follows:


```python
imputer.statistics_
```




    array([-1.1849e+02,  3.4260e+01,  2.9000e+01,  2.1270e+03,  4.3500e+02,
            1.1660e+03,  4.0900e+02,  3.5348e+00,  1.7970e+05])



and also make sure that they exactly coincide with the median values for all numerical attributes:


```python
housing_num.median().values
```




    array([-1.1849e+02,  3.4260e+01,  2.9000e+01,  2.1270e+03,  4.3500e+02,
            1.1660e+03,  4.0900e+02,  3.5348e+00,  1.7970e+05])



Finally, let's replace the missing values in the training data. The `transform` call below fills each missing entry with the corresponding median stored in the imputer and returns a plain `NumPy` array, which is then wrapped back into a DataFrame:


```python
X = imputer.transform(housing_num)
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.info()
```

    <class 'pandas.core.frame.DataFrame'>
    RangeIndex: 20640 entries, 0 to 20639
    Data columns (total 9 columns):
     #   Column              Non-Null Count  Dtype
    ---  ------              --------------  -----
     0   longitude           20640 non-null  float64
     1   latitude            20640 non-null  float64
     2   housing_median_age  20640 non-null  float64
     3   total_rooms         20640 non-null  float64
     4   total_bedrooms      20640 non-null  float64
     5   population          20640 non-null  float64
     6   households          20640 non-null  float64
     7   median_income       20640 non-null  float64
     8   median_house_value  20640 non-null  float64
    dtypes: float64(9)
    memory usage: 1.4 MB


### Handling textual and categorical attributes

Another aspect of the dataset that should be handled is the textual / categorical values of the *ocean_proximity* attribute. ML algorithms prefer working with numerical data, so let's use `sklearn`'s functionality and cast the categorical values as numerical values as follows:


```python
from sklearn.preprocessing import LabelEncoder

encoder = LabelEncoder()
housing_cat_encoded = encoder.fit_transform(housing["ocean_proximity"])
housing_cat_encoded
```




    array([3, 3, 3, ..., 1, 1, 1])



The code above mapped the categories to numerical values. 
You can check what the numerical values correspond to in the original data using:\n\n\n```python\nencoder.classes_\n```\n\n\n\n\n array(['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN'],\n dtype=object)\n\n\n\nOne problem with the encoding above is that the ML algorithm will automatically assume that the numerical values that are close to each other encode similar concepts, which for this data is not quite true: for example, value $0$ corresponding to *$<$1H OCEAN* category is actually most similar to values $3$ and $4$ (*NEAR BAY* and *NEAR OCEAN*) and not to value $1$ (*INLAND*).\n\nAn alternative to this encoding is called *one-hot encoding* and it runs as follows: for each category, it creates a separate binary attribute which is set to $1$ (hot) when the category coincides with the attribute, and $0$ (cold) otherwise. So, for instance, *$<$1H OCEAN* will be encoded as a one-hot vector $[1, 0, 0, 0, 0]$ and *NEAR OCEAN* will be encoded as $[0, 0, 0, 0, 1]$. The following `sklearn`'s functionality allows to convert categorical values into one-hot vectors:\n\n\n```python\nfrom sklearn.preprocessing import OneHotEncoder\n\nencoder = OneHotEncoder()\n# fit_transform expects a 2D array, but housing_cat_encoded is a 1D array.\n# Reshape it using NumPy's reshape functionality where -1 simply means \"unspecified\" dimension \nhousing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))\nhousing_cat_1hot\n```\n\n\n\n\n <20640x5 sparse matrix of type ''\n \twith 20640 stored elements in Compressed Sparse Row format>\n\n\n\nNote that the data format above says that the output is a sparse matrix. This means that the data structure only stores the location of the non-zero elements, rather than the full set of vectors which are mostly full of zeros. You can check the [documentation on sparse matrices](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html) if you'd like to learn more. If you'd like to see how the encoding looks like you can also convert it back into a dense NumPy array using:\n\n\n```python\nhousing_cat_1hot.toarray()\n```\n\n\n\n\n array([[0., 0., 0., 1., 0.],\n [0., 0., 0., 1., 0.],\n [0., 0., 0., 1., 0.],\n ...,\n [0., 1., 0., 0., 0.],\n [0., 1., 0., 0., 0.],\n [0., 1., 0., 0., 0.]])\n\n\n\nThe steps above, including casting text categories to numerical categories and then converting them into 1-hot vectors, can be performed using `sklearn`'s `LabelBinarizer`:\n\n\n```python\nfrom sklearn.preprocessing import LabelBinarizer\n\nencoder = LabelBinarizer()\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\n\n\n\n array([[0, 0, 0, 1, 0],\n [0, 0, 0, 1, 0],\n [0, 0, 0, 1, 0],\n ...,\n [0, 1, 0, 0, 0],\n [0, 1, 0, 0, 0],\n [0, 1, 0, 0, 0]])\n\n\n\nThe above produces dense array as an output, so if you'd like to have a sparse matrix instead you can specify it in the `LabelBinarizer` constructor:\n\n\n```python\nencoder = LabelBinarizer(sparse_output=True)\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\n\n\n\n <20640x5 sparse matrix of type ''\n \twith 20640 stored elements in Compressed Sparse Row format>\n\n\n\n### Data transformers\n\nA useful functionality of `sklearn` is [data transformers](http://scikit-learn.org/stable/data_transforms.html): you will see them used in preprocessing very often. For example, you have just used one to impute the missing values. In addition, you can implement your own custom data transformers. 
In general, a transformer class needs to implement three methods:\n- a constructor method;\n- a `fit` method that learns parameters (e.g. mean and standard deviation for a normalization transformer) or returns `self`; and\n- a `transform` method that applies the learned transformation to the new data.\n\nWhenever you see `fit_transform` method, it means that the method uses an optimised combination of `fit` and `transform`. Here is how you can implement a data transformer that will convert categorical values into 1-hot vectors:\n\n\n```python\nfrom sklearn.base import TransformerMixin # TransformerMixin allows you to use fit_transform method\n\nclass CustomLabelBinarizer(TransformerMixin):\n def __init__(self, *args, **kwargs):\n self.encoder = LabelBinarizer(*args, **kwargs)\n def fit(self, X, y=0):\n self.encoder.fit(X)\n return self\n def transform(self, X, y=0):\n return self.encoder.transform(X)\n```\n\nSimilarly, here is how you can wrap up adding new transformed features like bedroom-to-room ratio with a data transformer:\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin \n# BaseEstimator allows you to drop *args and **kwargs from you constructor\n# and, in addition, allows you to use methods set_params() and get_params()\n\nrooms_id, bedrooms_id, population_id, household_id = 3, 4, 5, 6\n\nclass CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__(self, add_bedrooms_per_rooms = True): # note no *args and **kwargs used this time\n self.add_bedrooms_per_rooms = add_bedrooms_per_rooms\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_id] / X[:, household_id]\n bedrooms_per_household = X[:, bedrooms_id] / X[:, household_id]\n population_per_household = X[:, population_id] / X[:, household_id]\n if self.add_bedrooms_per_rooms:\n bedrooms_per_rooms = X[:, bedrooms_id] / X[:, rooms_id]\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household, bedrooms_per_rooms]\n else:\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household]\n \nattr_adder = CombinedAttributesAdder()\nhousing_extra_attribs = attr_adder.transform(housing.values)\n# print(housing_extra_attribs.info)\n```\n\nIf you'd like to explore the new attributes, you can convert the `housing_extra_attribs` into a `Pandas` DataFrame and apply the functionality as before:\n\n\n```python\nhousing_extra_attribs = pd.DataFrame(housing_extra_attribs, columns=list(housing.columns)+\n [\"rooms_per_household\", \"bedrooms_per_household\", \n \"population_per_household\", \"bedrooms_per_rooms\"])\nprint(housing.info)\n\nhousing_extra_attribs.info()\n\n```\n\n \n \n RangeIndex: 16512 entries, 0 to 16511\n Data columns (total 13 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null object\n 1 latitude 16512 non-null object\n 2 housing_median_age 16512 non-null object\n 3 total_rooms 16512 non-null object\n 4 total_bedrooms 16512 non-null object\n 5 population 16512 non-null object\n 6 households 16512 non-null object\n 7 median_income 16512 non-null object\n 8 ocean_proximity 16512 non-null object\n 9 rooms_per_household 16512 non-null object\n 10 bedrooms_per_household 16512 non-null object\n 11 population_per_household 16512 non-null object\n 12 bedrooms_per_rooms 16512 non-null object\n dtypes: object(13)\n memory usage: 1.6+ MB\n\n\n\n```python\nhousing_extra_attribs.info()\n```\n\n### Feature scaling\n\nFinally, ML 
algorithms do not typically perform well when the feature values cover significantly different ranges of values. For example, in the dataset at hand, the income ranges from $0.4999$ to $15.0001$, while the population ranges from $3$ to $35682$. Taken at the same scale, these values are not directly comparable. The data transformation that should be applied to these values is called *feature scaling*.

One of the most common ways to scale the data is to apply *min-max scaling* (also often referred to as *normalisation*). Min-max scaling puts all values on the scale of $[0, 1]$, making the ranges directly comparable. For that, you need to subtract the minimum from the actual value and divide by the difference between the maximum and minimum values, i.e.:

\begin{equation}
f_{scaled} = \frac{f - F_{min}}{F_{max} - F_{min}}
\end{equation}

where $f \in F$ is the actual feature value of a feature type $F$, and $F_{min}$ and $F_{max}$ are the minimum and maximum values for the feature of type $F$.

Another common approach is *standardisation*, which subtracts the mean value (so the standardised values have a zero mean) and divides by the standard deviation (so the standardised values have unit variance). Standardisation does not impose a specific range on the values and is more robust to outliers: e.g., a noisy input or an incorrect income value of $100$ (when the rest of the values lie within the range of $[0.4999, 15.0001]$) would introduce a significant skew in the data after min-max scaling, while it would affect the standardised values much less. At the same time, standardisation does not bind values to the same range of $[0, 1]$, which might be problematic for some algorithms.

`Scikit-learn` has implementations for `MinMaxScaler`, `StandardScaler`, as well as [other scaling approaches](http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler), e.g.:


```python
from sklearn.preprocessing import StandardScaler, MinMaxScaler

scaler = StandardScaler()
housing_tr_scaled = scaler.fit_transform(housing_tr)
```

### Putting all the data transformations together

Another useful functionality of `sklearn` is pipelines. These allow you to stack several separate transformations together. For example, you can apply the numerical transformations such as missing value handling and data scaling as follows:


```python
from sklearn.pipeline import Pipeline

num_pipeline = Pipeline([
        #('imputer', Imputer(strategy="median")),
        ('imputer', SimpleImputer(strategy="median")),
        ('std_scaler', StandardScaler()),
    ])

housing_num_tr = num_pipeline.fit_transform(housing_num)
housing_num_tr.shape
```

Pipelines are useful because they help combine several steps together, so that the output of one data transformer (e.g., `Imputer`) is passed on as input to the next one (e.g., `StandardScaler`), and so you don't need to worry about the intermediate steps. Besides, it makes the code look more concise and readable. However:
- the code above doesn't handle categorical values;
- we started with `Pandas` DataFrames because they are useful for data uploading and inspection, but the `Pipeline` expects `NumPy` arrays as input, and at the moment, `sklearn`'s `Pipeline` cannot handle `Pandas` DataFrames.

In fact, there is a way around the two issues above. 
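(As an aside, and not part of the original pipeline built here: if you are on a recent `scikit-learn` version, 0.20 or later, the built-in `ColumnTransformer` can apply different transformations to different DataFrame columns directly, which sidesteps both issues without custom code. The sketch below is only illustrative; it reuses `num_pipeline` and `housing_num` defined above, and the transformer names are assumptions. The practical itself continues with a custom selector transformer.)


```python
# Optional alternative sketch (scikit-learn >= 0.20): apply the numerical pipeline
# to the numerical columns and one-hot encode the categorical column,
# working straight from the DataFrame.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

full_pipeline_alt = ColumnTransformer([
    ("num", num_pipeline, list(housing_num)),
    ("cat", OneHotEncoder(), ["ocean_proximity"]),
])
# housing_prepared_alt = full_pipeline_alt.fit_transform(housing)
```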
Let's implement another custom data transformer that will allow you to select specific attributes from a `Pandas` DataFrame:\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin\n\n# Create a class to select numerical or categorical columns \n# since Scikit-Learn doesn't handle DataFrames yet\nclass DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.attribute_names].values\n```\n\nThe transformer above allows you to select a predefined set of attributes from a DataFrame, dropping the rest and converting the selected ones into a `NumPy` array. This is quite useful because now you can select the numerical attributes and apply one set of transformations to them, and then select categorical attributes and apply another set of transformation to them, i.e.:\n\n\n```python\nnum_attribs = list(housing_num)\ncat_attribs = [\"ocean_proximity\"]\n\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n #('imputer', Imputer(strategy=\"median\")),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n```\n\nFinally, to merge the output of the two separate data transformers back together, you can use `sklearn`'s `FeatureUnion` functionality: it runs the two pipelines' `fit` methods and the two `transform` methods in parallel, and then concatenates the output. I.e.:\n\n\n```python\nfrom sklearn.pipeline import FeatureUnion\n\nfull_pipeline = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\n\nhousing = strat_train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n\nhousing_prepared = full_pipeline.fit_transform(housing)\nprint(housing_prepared.shape)\nhousing_prepared\n```\n\n## Step 5: Implementation, evaluation and fine-tuning of a regression model\n\nNow that you've explored and prepared the data, you can implement a regression model to predict the house prices on the test set. \n\n### Training and evaluating the model\n\nLet's train a [Linear Regression](http://scikit-learn.org/stable/modules/linear_model.html) model first. During training, a Linear Regression model tries to find the optimal set of weights $w=(w_{1}, w_{2}, ..., w_{n})$ for the features (attributes) $X=(x_{1}, x_{2}, ..., x_{n})$ by minimising the residual sum of squares between the responses predicted by such linear approximation $Xw$ and the observed responses $y$ in the dataset, i.e. 
trying to solve:\n\n\\begin{equation}\nmin_{w} ||Xw - y||_{2}^{2}\n\\end{equation}\n\n\n```python\nfrom sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)\n```\n\nFirst, let's try the model on some instances from the training set itself:\n\n\n```python\nsome_data = housing.iloc[:5]\nsome_labels = housing_labels.iloc[:5]\n# note the use of transform, as you'd like to apply already learned (fitted) transformations to the data\nsome_data_prepared = full_pipeline.transform(some_data)\n\nprint(\"Predictions:\", list(lin_reg.predict(some_data_prepared)))\nprint(\"Actual labels:\", list(some_labels))\n```\n\nThe above shows that the model is able to predict some price values, however they don't seem to be very accurate. How can you measure the performance of your model in a more comprehensive way?\n\nTypically, the output of the regression model is measured in terms of the error in prediction. There are two error measures that are commonly used. *Root Mean Square Error (RMSE)* measures the average deviation of the model's prediction from the actual label, but note that it gives a higher weight for large errors:\n\n\\begin{equation}\nRMSE(X, h) = \\sqrt{\\frac{1}{m} \\sum_{i=1}^{m} (h(x^{(i)}) - y^{(i)})^{2}}\n\\end{equation}\n\nwhere $m$ is the number of instances, $h$ is the model (hypothesis), $X$ is the matrix containing all feature values, $x^{(i)}$ is the feature vector describing instance $i$, and $y^{(i)}$ is the actual label for instance $i$.\n\nBecause *RMSE* is highly influenced by the outliers (i.e., large errors), in some situations *Mean Absolute Error (MAE)* is preferred. You may note that its estimation is somewhat similar to the estimation of *RMSE*:\n\n\\begin{equation}\nMAE(X, h) = \\frac{1}{m} \\sum_{i=1}^{m} |h(x^{(i)}) - y^{(i)}|\n\\end{equation}\n\nLet's measure the performance of the linear regression model using these error estimations:\n\n\n```python\nfrom sklearn.metrics import mean_squared_error\n\nhousing_predictions = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse\n```\n\nGiven that the majority of the districts' housing values lie somewhere between $[\\$100000, \\$300000]$ an estimation error of over \\\\$68000 is very high. This shows that the regression model *underfits* the training data: it doesn't capture the patterns in the training data well enough because it lacks the descriptive power either due to the features not providing enough information to make a good prediction or due to the model itself being not complex enough. The ways to fix this include:\n- using more features and/or more informative features, for example applying log to some of the existing features to address the long tail distributions;\n- using more complex models;\n- reducing the constraints on the model.\n\nThe model that you used above is not constrained (or, *regularised* \u2013 more on this in later lectures), so you should try using more powerful models or work on the feature set.\n\nFor example, *polynomial regression* models the relationship between the $X$ and $y$ as an $n$-th degree polynomial. Polynomial regression extends simple linear regression by constructing polynomial features from the existing ones. For simplicity, assume that your data has only $2$ features rather than $8$, i.e. $X=[x_{1}, x_{2}]$. 
The linear regression model above tries to learn the coefficients (weights) $w=[w_{0}, w_{1}, w_{2}]$ for the linear prediction (a plane) $\hat{y} = w_{0} + w_{1}x_{1} + w_{2}x_{2}$ that minimises the residual sum of squares between the prediction and the actual label, as you've seen above.

If you want to fit a paraboloid to the data instead of a plane, you can combine the features in second-order polynomials, so that the model looks like this:

\begin{equation}
\hat{y} = w_{0} + w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{1}x_{2} + w_{4}x_{1}^2 + w_{5}x_{2}^2
\end{equation}

This time, the model tries to learn an optimal set of weights $w=[w_{0}, ..., w_{5}]$ (note that $w_{0}$ is called the intercept).

Note that polynomial regression still employs a linear model. For instance, you can define a new variable $z = [x_1, x_2, x_1x_2, x_1^2, x_2^2]$ and rewrite the polynomial above as:

\begin{equation}
\hat{y} = w_{0} + w_{1}z_{0} + w_{2}z_{1} + w_{3}z_{2} + w_{4}z_{3} + w_{5}z_{4}
\end{equation}

For that reason, polynomial regression in `sklearn` is handled at the `preprocessing` step: first the polynomial features are computed from the original features, and then the same `LinearRegression` model as above is applied. For instance, use second- and third-order polynomials and compare the results (feel free to use higher-order polynomials, though keep in mind that as the complexity of the model increases, so does the processing time, the number of weights to be learned, and the chance that the model *overfits* to the training data). For more information, refer to the `sklearn` [documentation](http://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html):


```python
from sklearn.preprocessing import PolynomialFeatures

model = Pipeline([('poly', PolynomialFeatures(degree=3)),
                  ('linear', LinearRegression())])

model = model.fit(housing_prepared, housing_labels)
housing_predictions = model.predict(housing_prepared)
lin_mse = mean_squared_error(housing_labels, housing_predictions)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
```

How does the performance of the polynomial regression model compare to the first-order linear regression? You should see that the performance improves as the complexity of the feature space increases. However, note that the more complex the model becomes, the more accurately it learns to replicate the training data, and the less likely it is to generalise to new patterns, e.g. in the test data. This phenomenon of learning to replicate the patterns from the training data too closely is called *overfitting*, and it is the opposite of *underfitting*, when the model does not learn enough about the patterns in the training data due to its simplicity.

Just to give you a flavor of the problem, here is an example of a complex model from the `sklearn` suite called `DecisionTreeRegressor` (Decision Trees are outside the scope of this course, so don't worry if this looks unfamiliar to you; `sklearn` has implementations for a wide range of ML algorithms, so do check the [documentation](http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html) if you want to learn more). Note that the `DecisionTreeRegressor` learns to predict the values in the training data perfectly well (resulting in an error of $0$!), 
which usually means that it won't work well on new data; e.g., check this later on the test data:


```python
from sklearn.tree import DecisionTreeRegressor

tree_reg = DecisionTreeRegressor()
tree_reg = tree_reg.fit(housing_prepared, housing_labels)
housing_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(housing_labels, housing_predictions)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
```

### Learning to better evaluate your model using cross-validation

Obviously, one of the problems with the overfitting above is caused by the fact that you're training and testing on the same (training) set (remember that you should do all model tuning and optimisation on the training data, and only then apply the best model to the test data). So how can you measure the level of overfitting *before* you apply this model to the test data?

There are two possible solutions. You can either reapply the `train_test_split` function from Step 2 to set aside part of the training set as a *development* (or *validation*) set, and then train the model on the smaller training set and tune it on the development set, before applying your best model to the test set. Or you can use *cross-validation*.

With the *k-fold cross-validation* strategy, the training data gets randomly split into $k$ distinct subsets (*folds*). The model then gets trained $k$ times, in each run being tested on a different fold and trained on the other $k-1$ folds. That way, the algorithm is evaluated on each data point in the training set, but during training it is not exposed to the data points that it gets tested on later. The result is an array of $k$ evaluation scores ($10$ in the code below, since `cv=10`), which can be averaged for better understanding and model comparison, i.e.:


```python
from sklearn.model_selection import cross_val_score

def analyse_cv(model):
    scores = cross_val_score(model, housing_prepared, housing_labels,
                             scoring="neg_mean_squared_error", cv=10)

    # cross-validation expects a utility function (greater is better)
    # rather than a cost function (lower is better), so the scores returned
    # are negative as they are the opposite of MSE
    sqrt_scores = np.sqrt(-scores)
    print("Scores:", sqrt_scores)
    print("Mean:", sqrt_scores.mean())
    print("Standard deviation:", sqrt_scores.std())

analyse_cv(tree_reg)
```

This shows that the `DecisionTreeRegressor` model does not actually perform well when tested on a set different from the one it was trained on. What about the other models? E.g.:


```python
analyse_cv(lin_reg)
```

Let's try one more model, [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html), which builds many Decision Trees (similar to the above) on random subsets of the features. Models of this type are called *ensemble learning* models and they are very powerful because they benefit from combining the decisions of multiple algorithms:


```python
from sklearn.ensemble import RandomForestRegressor

forest_reg = RandomForestRegressor()
analyse_cv(forest_reg)
```

### Fine-tuning the model

Some learning algorithms have *hyperparameters*: the parameters of the algorithms that should be set up prior to training and don't get changed during training. Such hyperparameters are usually specified for the `sklearn` algorithms in brackets, so you can always check the list of parameters specified in the documentation. 
For example, whether the [`LinearRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) model should calculate the intercept or not should be set prior to training and does not depend on the training itself; the same applies to the number of helper algorithms (decision trees) that should be combined in a [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) for the final prediction. `RandomForestRegressor` has $16$ parameters, so if you want to find the *best* setting of the hyperparameters for `RandomForestRegressor`, it will take you a long time to try out all possible combinations.

The code below shows how the best hyperparameter setting can be found automatically for an `sklearn` ML algorithm using the `GridSearchCV` functionality. Let's use the example of `RandomForestRegressor` and focus on two specific hyperparameters: the number of helper algorithms (decision trees in the forest, or `n_estimators`) and the number of features each tree considers when looking for the most informative splits (`max_features`):


```python
from sklearn.model_selection import GridSearchCV

# specify the range of hyperparameter values for the grid search to try out
param_grid = {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}

forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
                           scoring="neg_mean_squared_error")
grid_search.fit(housing_prepared, housing_labels)

grid_search.best_params_
```

You can also monitor the intermediate results as shown below. Note also that if the best results are achieved with the maximum value for each of the parameters specified for exploration, you might want to keep experimenting with even higher values to see if the results improve any further:


```python
cv_results = grid_search.cv_results_
for mean_score, params in zip(cv_results["mean_test_score"], cv_results["params"]):
    print(np.sqrt(-mean_score), params)
```

One more insight you can gain from the best estimator is the importance of each feature (expressed in the weight the best estimator learned to assign to each of the features). 
Here is how you can do that:


```python
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
```

If you also want to display the feature names, you can do that as follows:


```python
extra_attribs = ['rooms_per_household', 'bedrooms_per_household', 'population_per_household', 'bedrooms_per_rooms']
cat_one_hot_attribs = ['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN']
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
sorted(zip(feature_importances, attributes), reverse=True)
```

How do these compare with the insights you gained earlier (e.g., during data exploration in Step 1, or during attribute exploration in Step 3)?


### At last, evaluating your best model on the test set!

Finally, let's take the best model you built and tuned on the training set and apply it to the test set:


```python
final_model = grid_search.best_estimator_

X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set["median_house_value"].copy()

X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)

final_mse = mean_squared_error(y_test, final_predictions)
final_rmse = np.sqrt(final_mse)

final_rmse
```

**It seems to be very slow to run the RandomForestRegressor. How can I speed this up?**

# Assignments

**For the tick session**:

## 1.
Familiarise yourself with the code in this practical. During the tick session, be prepared to discuss the different steps and answer questions (as well as ask questions yourself).

## 2.
Experiment with the different steps in the ML pipeline:
- try dropping less informative features from the feature set and test whether it improves performance
- use other options in preprocessing: e.g., different imputer strategies, min-max scaling rather than standardisation, feature scaling vs. no feature scaling, and compare the results
- evaluate the performance of the simple linear regression model on the test set. What is the `final_rmse` for this model?
- estimate the feature importance weights with the simple linear regression model (if unsure how to extract the feature weights, check the [documentation](http://scikit-learn.org/stable/modules/linear_model.html)). How do these compare to (1) the feature importance weights of the best estimator, and (2) the feature correlation scores with the target value from Step 3?
- [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), as opposed to the `GridSearchCV` used in the practical, does not try out every combination of parameter values. Instead it only tries a fixed number of parameter settings sampled from the specified distributions. As a result, it allows you to try out a wider range of parameter values in a less expensive way than `GridSearchCV`. Apply `RandomizedSearchCV` and compare the best estimator results.

Finally, if you want to have more practice with regression tasks, you can **work on the following optional task**:

## 3. (Optional)

Use the bike sharing dataset (`./bike_sharing/bike_hour.csv`, check `./bike_sharing/Readme.txt` for the description), apply the ML steps and gain insights from the data. What data transformations should be applied? Which attributes are most predictive? What additional attributes can be introduced? 
Which regression model performs best?

When dropping the NA values I got 48120.666286373504 as the final value, and somehow get the same 48120.666286373504 when replacing them with the median?


```python
import pandas as pd
import os

def load_data(housing_path):
    """Read housing.csv from the given directory into a pandas DataFrame."""
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)

housing = load_data("housing/")  # pandas DataFrame
```


```python
from sklearn.model_selection import train_test_split

# split into training and test sets
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
print(len(train_set), "training instances +", len(test_set), "test instances")
```

    13209 training instances + 3303 test instances


```python
housing = strat_train_set.copy()
```


```python
corr_matrix = housing.corr()
corr_matrix
```
|                    | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value |
|--------------------|-----------|----------|--------------------|-------------|----------------|------------|------------|---------------|--------------------|
| longitude          | 1.000000 | -0.924478 | -0.105848 | 0.048871 | 0.076598 | 0.108030 | 0.063070 | -0.019583 | -0.047432 |
| latitude           | -0.924478 | 1.000000 | 0.005766 | -0.039184 | -0.072419 | -0.115222 | -0.077647 | -0.075205 | -0.142724 |
| housing_median_age | -0.105848 | 0.005766 | 1.000000 | -0.364509 | -0.325047 | -0.298710 | -0.306428 | -0.111360 | 0.114110 |
| total_rooms        | 0.048871 | -0.039184 | -0.364509 | 1.000000 | 0.929379 | 0.855109 | 0.918392 | 0.200087 | 0.135097 |
| total_bedrooms     | 0.076598 | -0.072419 | -0.325047 | 0.929379 | 1.000000 | 0.876320 | 0.980170 | -0.009740 | 0.047689 |
| population         | 0.108030 | -0.115222 | -0.298710 | 0.855109 | 0.876320 | 1.000000 | 0.904637 | 0.002380 | -0.026920 |
| households         | 0.063070 | -0.077647 | -0.306428 | 0.918392 | 0.980170 | 0.904637 | 1.000000 | 0.010781 | 0.064506 |
| median_income      | -0.019583 | -0.075205 | -0.111360 | 0.200087 | -0.009740 | 0.002380 | 0.010781 | 1.000000 | 0.687160 |
| median_house_value | -0.047432 | -0.142724 | 0.114110 | 0.135097 | 0.047689 | -0.026920 | 0.064506 | 0.687160 | 1.000000 |
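For a quicker read of this matrix, you can pull out just the correlations with the target value and sort them (a minimal sketch, assuming `corr_matrix` as computed above):

```python
# sort all attributes by their linear correlation with the target
corr_matrix["median_house_value"].sort_values(ascending=False)
```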
\n\n\n\n\n```python\nhousing = strat_train_set.drop(\"median_house_value\", axis=1) #drop makes a copy!\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n```\n\n\n```python\nfrom sklearn.impute import SimpleImputer\n\nimputer = SimpleImputer(strategy=\"median\")\nhousing_num = housing.drop(\"ocean_proximity\", axis=1)\nimputer.fit(housing_num)\n```\n\n\n\n\n SimpleImputer(add_indicator=False, copy=True, fill_value=None,\n missing_values=nan, strategy='median', verbose=0)\n\n\n\n\n```python\nX = imputer.transform(housing_num)\nhousing_tr = pd.DataFrame(X, columns=housing_num.columns)\n# housing_tr.info()\n```\n\n\n```python\nfrom sklearn.preprocessing import LabelBinarizer\n\nencoder = LabelBinarizer(sparse_output=True)\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\n\n\n\n <16512x5 sparse matrix of type ''\n \twith 16512 stored elements in Compressed Sparse Row format>\n\n\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin \n# BaseEstimator allows you to drop *args and **kwargs from you constructor\n# and, in addition, allows you to use methods set_params() and get_params()\n\nrooms_id, bedrooms_id, population_id, household_id = 3, 4, 5, 6\n\nclass CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__(self, add_bedrooms_per_rooms = True): # note no *args and **kwargs used this time\n self.add_bedrooms_per_rooms = add_bedrooms_per_rooms\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_id] / X[:, household_id]\n bedrooms_per_household = X[:, bedrooms_id] / X[:, household_id]\n population_per_household = X[:, population_id] / X[:, household_id]\n if self.add_bedrooms_per_rooms:\n bedrooms_per_rooms = X[:, bedrooms_id] / X[:, rooms_id]\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household, bedrooms_per_rooms]\n else:\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household]\n \nattr_adder = CombinedAttributesAdder()\nhousing_extra_attribs = attr_adder.transform(housing.values)\n# print(housing_extra_attribs.info)\n```\n\n\n```python\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\n\nscaler = StandardScaler()\nhousing_tr_scaled = scaler.fit_transform(housing_tr)\n```\n\n\n```python\nfrom sklearn.base import TransformerMixin # TransformerMixin allows you to use fit_transform method\n\nclass CustomLabelBinarizer(TransformerMixin):\n def __init__(self, *args, **kwargs):\n self.encoder = LabelBinarizer(*args, **kwargs)\n def fit(self, X, y=0):\n self.encoder.fit(X)\n return self\n def transform(self, X, y=0):\n return self.encoder.transform(X)\n```\n\n\n```python\n\nfrom sklearn.pipeline import Pipeline\nnum_attribs = list(housing_num)\ncat_attribs = [\"ocean_proximity\"]\n\nfrom sklearn.base import BaseEstimator, TransformerMixin\n\nclass DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.attribute_names].values\n\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n #('imputer', Imputer(strategy=\"median\")),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', 
CustomLabelBinarizer()),\n ])\n```\n\n\n```python\nfrom sklearn.pipeline import FeatureUnion\n\nfull_pipeline = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\n\nhousing = strat_train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n\nhousing_prepared = full_pipeline.fit_transform(housing)\nprint(housing_prepared.shape)\nhousing_prepared\n```\n\n (16512, 17)\n\n\n\n\n\n array([[-1.15604281, 0.77194962, 0.74333089, ..., 0. ,\n 0. , 0. ],\n [-1.17602483, 0.6596948 , -1.1653172 , ..., 0. ,\n 0. , 0. ],\n [ 1.18684903, -1.34218285, 0.18664186, ..., 0. ,\n 0. , 1. ],\n ...,\n [ 1.58648943, -0.72478134, -1.56295222, ..., 0. ,\n 0. , 0. ],\n [ 0.78221312, -0.85106801, 0.18664186, ..., 0. ,\n 0. , 0. ],\n [-1.43579109, 0.99645926, 1.85670895, ..., 0. ,\n 1. , 0. ]])\n\n\n\n\n```python\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.linear_model import LinearRegression\n\n\n\n\n# specify the range of hyperparameter values for the grid search to try out \nparam_grid = {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}\n\nforest_reg = RandomForestRegressor()\ngrid_search = GridSearchCV(forest_reg, param_grid, cv=5,\n scoring=\"neg_mean_squared_error\")\ngrid_search.fit(housing_prepared, housing_labels)\n\ngrid_search.best_params_\n```\n", "meta": {"hexsha": "ab37f436b1a87057095d871947c9628acfeb7077", "size": 804022, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DSPNP_practical1/.ipynb_checkpoints/DSPNP_notebook1-checkpoint.ipynb", "max_stars_repo_name": "marcus800/cl-datasci-pnp-2021", "max_stars_repo_head_hexsha": "aea4a1e1aaeac895c595d67f328485157f1e2b39", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "DSPNP_practical1/.ipynb_checkpoints/DSPNP_notebook1-checkpoint.ipynb", "max_issues_repo_name": "marcus800/cl-datasci-pnp-2021", "max_issues_repo_head_hexsha": "aea4a1e1aaeac895c595d67f328485157f1e2b39", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DSPNP_practical1/.ipynb_checkpoints/DSPNP_notebook1-checkpoint.ipynb", "max_forks_repo_name": "marcus800/cl-datasci-pnp-2021", "max_forks_repo_head_hexsha": "aea4a1e1aaeac895c595d67f328485157f1e2b39", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 221.4326631782, "max_line_length": 333824, "alphanum_fraction": 0.888566482, "converted": true, "num_tokens": 24895, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.44167300566462564, "lm_q2_score": 0.20946968873287058, "lm_q1q2_score": 0.09251710701828052}} {"text": "# Homework 5\n## Due Date: Tuesday, October 3rd at 11:59 PM\n\n# Problem 1\nWe discussed documentation and testing in lecture and also briefly touched on code coverage. You must write tests for your code for your final project (and in life). 
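A test in this context is just a function whose assertions `pytest` can collect and run; a minimal illustrative sketch (not the lecture's `roots` tests) looks like this:

```python
# contents of an illustrative test file, e.g. test_example.py
def add(x, y):
    return x + y

def test_add():
    # pytest collects functions named test_* and runs their assertions
    assert add(2, 3) == 5
```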
There is a nice way to automate the testing process called continuous integration (CI).\n\nThis problem will walk you through the basics of CI and show you how to get up and running with some CI software.\n\n### Continuous Integration\nThe idea behind continuous integration is to automate away the testing of your code.\n\nWe will be using it for our projects.\n\nThe basic workflow goes something like this:\n\n1. You work on your part of the code in your own branch or fork\n2. On every commit you make and push to GitHub, your code is automatically tested on a fresh machine on Travis CI. This ensures that there are no specific dependencies on the structure of your machine that your code needs to run and also ensures that your changes are sane\n3. Now you submit a pull request to `master` in the main repo (the one you're hoping to contribute to). The repo manager creates a branch off `master`. \n4. This branch is also set to run tests on Travis. If all tests pass, then the pull request is accepted and your code becomes part of master.\n\nWe use GitHub to integrate our roots library with Travis CI and Coveralls. Note that this is not the only workflow people use. Google git..github..workflow and feel free to choose another one for your group.\n\n### Part 1: Create a repo\nCreate a public GitHub repo called `cs207test` and clone it to your local machine.\n\n**Note:** No need to do this in Jupyter.\n\n### Part 2: Create a roots library\nUse the example from lecture 7 to create a file called `roots.py`, which contains the `quad_roots` and `linear_roots` functions (along with their documentation).\n\nAlso create a file called `test_roots.py`, which contains the tests from lecture.\n\nAll of these files should be in your newly created `cs207test` repo. **Don't push yet!!!**\n\n\n```python\n\n```\n\n### Part 3: Create an account on Travis CI and Start Building\n\n#### Part A:\nCreate an account on Travis CI and set your `cs207test` repo up for continuous integration once this repo can be seen on Travis.\n\n#### Part B:\nCreate an instruction to Travis to make sure that\n\n1. python is installed\n2. its python 3.5\n3. pytest is installed\n\nThe file should be called `.travis.yml` and should have the contents:\n```yml\nlanguage: python\npython:\n - \"3.5\"\nbefore_install:\n - pip install pytest pytest-cov\nscript:\n - pytest\n```\n\nYou should also create a configuration file called `setup.cfg`:\n```cfg\n[tool:pytest]\naddopts = --doctest-modules --cov-report term-missing --cov roots\n```\n\n#### Part C:\nPush the new changes to your `cs207test` repo.\n\nAt this point you should be able to see your build on Travis and if and how your tests pass.\n\n### Part 4: Coveralls Integration\nIn class, we also discussed code coverage. Just like Travis CI runs tests automatically for you, Coveralls automatically checks your code coverage. One minor drawback of Coveralls is that it can only work with public GitHub accounts. 
However, this isn't too big of a problem since your projects will be public.\n\n#### Part A:\nCreate an account on [`Coveralls`](https://coveralls.zendesk.com/hc/en-us), connect your GitHub, and turn Coveralls integration on.\n\n#### Part B:\nUpdate your the `.travis.yml` file as follows:\n```yml\nlanguage: python\npython:\n - \"3.5\"\nbefore_install:\n - pip install pytest pytest-cov\n - pip install coveralls\nscript:\n - py.test\nafter_success:\n - coveralls\n```\n\nBe sure to push the latest changes to your new repo.\n\n### Part 5: Update README.md in repo\nYou can have your GitHub repo reflect the build status on Travis CI and the code coverage status from Coveralls. To do this, you should modify the `README.md` file in your repo to include some badges. Put the following at the top of your `README.md` file:\n\n```\n[](https://travis-ci.org/dsondak/cs207testing.svg?branch=master)\n\n[](https://coveralls.io/github/dsondak/cs207testing?branch=master)\n```\n\nOf course, you need to make sure that the links are to your repo and not mine. You can find embed code on the Coveralls and Travis CI sites.\n\n---\n\n# Problem 2\nWrite a Python module for reaction rate coefficients. Your module should include functions for constant reaction rate coefficients, Arrhenius reaction rate coefficients, and modified Arrhenius reaction rate coefficients. Here are their mathematical forms:\n\\begin{align}\n &k_{\\textrm{const}} = k \\tag{constant} \\\\\n &k_{\\textrm{arr}} = A \\exp\\left(-\\frac{E}{RT}\\right) \\tag{Arrhenius} \\\\\n &k_{\\textrm{mod arr}} = A T^{b} \\exp\\left(-\\frac{E}{RT}\\right) \\tag{Modified Arrhenius}\n\\end{align}\n\nTest your functions with the following paramters: $A = 10^7$, $b=0.5$, $E=10^3$. Use $T=10^2$.\n\nA few additional comments / suggestions:\n* The Arrhenius prefactor $A$ is strictly positive\n* The modified Arrhenius parameter $b$ must be real \n* $R = 8.314$ is the ideal gas constant. It should never be changed (except to convert units)\n* The temperature $T$ must be positive (assuming a Kelvin scale)\n* You may assume that units are consistent\n* Document each function!\n* You might want to check for overflows and underflows\n\n**Recall:** A Python module is a `.py` file which is not part of the main execution script. The module contains several functions which may be related to each other (like in this problem). Your module will be importable via the execution script. For example, suppose you have called your module `reaction_coeffs.py` and your execution script `kinetics.py`. 
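For concreteness, a minimal sketch of what such a `reaction_coeffs.py` module might contain is shown below (the function names and signatures are illustrative assumptions, not a required interface):

```python
"""reaction_coeffs.py -- reaction rate coefficients (illustrative sketch)."""
import numpy as np

R = 8.314  # ideal gas constant; never changed except to convert units


def const(k):
    """Constant reaction rate coefficient."""
    return k


def arrhenius(A, E, T):
    """Arrhenius coefficient k = A * exp(-E / (R * T))."""
    if A <= 0:
        raise ValueError("Arrhenius prefactor A must be positive.")
    if T <= 0:
        raise ValueError("Temperature T must be positive (Kelvin scale).")
    return A * np.exp(-E / (R * T))


def mod_arrhenius(A, b, E, T):
    """Modified Arrhenius coefficient k = A * T**b * exp(-E / (R * T))."""
    if A <= 0:
        raise ValueError("Arrhenius prefactor A must be positive.")
    if T <= 0:
        raise ValueError("Temperature T must be positive (Kelvin scale).")
    return A * T**b * np.exp(-E / (R * T))
```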
Inside of `kinetics.py` you will write something like:\n```python\nimport reaction_coeffs\n# Some code to do some things\n# :\n# :\n# :\n# Time to use a reaction rate coefficient:\nreaction_coeffs.const() # Need appropriate arguments, etc\n# Continue on...\n# :\n# :\n# :\n```\nBe sure to include your module in the same directory as your execution script.\n\n---\n\n# Problem 3\nWrite a function that returns the **progress rate** for a reaction of the following form:\n\\begin{align}\n \\nu_{A} A + \\nu_{B} B \\longrightarrow \\nu_{C} C.\n\\end{align}\nOrder your concentration vector so that \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}\n\\end{align}\n\nTest your function with\n\\begin{align}\n \\nu_{i} = \n \\begin{bmatrix}\n 2.0 \\\\\n 1.0 \\\\\n 0.0\n \\end{bmatrix}\n \\qquad \n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\ \n 2.0 \\\\ \n 3.0\n \\end{bmatrix}\n \\qquad \n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n---\n# Problem 4\nWrite a function that returns the **progress rate** for a system of reactions of the following form:\n\\begin{align}\n \\nu_{11}^{\\prime} A + \\nu_{21}^{\\prime} B \\longrightarrow \\nu_{31}^{\\prime\\prime} C \\\\\n \\nu_{12}^{\\prime} A + \\nu_{32}^{\\prime} C \\longrightarrow \\nu_{22}^{\\prime\\prime} B + \\nu_{32}^{\\prime\\prime} C\n\\end{align}\nNote that $\\nu_{ij}^{\\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\\nu_{ij}^{\\prime\\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. Therefore, in this convention, I have ordered my vector of concentrations as \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}.\n\\end{align}\n\nTest your function with \n\\begin{align}\n \\nu_{ij}^{\\prime} = \n \\begin{bmatrix}\n 1.0 & 2.0 \\\\\n 2.0 & 0.0 \\\\\n 0.0 & 2.0\n \\end{bmatrix}\n \\qquad\n \\nu_{ij}^{\\prime\\prime} = \n \\begin{bmatrix}\n 0.0 & 0.0 \\\\\n 0.0 & 1.0 \\\\\n 2.0 & 1.0\n \\end{bmatrix}\n \\qquad\n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\\n 2.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad\n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n---\n# Problem 5\nWrite a function that returns the **reaction rate** of a system of irreversible reactions of the form:\n\\begin{align}\n \\nu_{11}^{\\prime} A + \\nu_{21}^{\\prime} B &\\longrightarrow \\nu_{31}^{\\prime\\prime} C \\\\\n \\nu_{32}^{\\prime} C &\\longrightarrow \\nu_{12}^{\\prime\\prime} A + \\nu_{22}^{\\prime\\prime} B\n\\end{align}\n\nOnce again $\\nu_{ij}^{\\prime}$ represents the stoichiometric coefficient of reactant $i$ in reaction $j$ and $\\nu_{ij}^{\\prime\\prime}$ represents the stoichiometric coefficient of product $i$ in reaction $j$. 
In this convention, I have ordered my vector of concentrations as \n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n \\left[A\\right] \\\\\n \\left[B\\right] \\\\\n \\left[C\\right]\n \\end{bmatrix}\n\\end{align}\n\nTest your function with \n\\begin{align}\n \\nu_{ij}^{\\prime} = \n \\begin{bmatrix}\n 1.0 & 0.0 \\\\\n 2.0 & 0.0 \\\\\n 0.0 & 2.0\n \\end{bmatrix}\n \\qquad\n \\nu_{ij}^{\\prime\\prime} = \n \\begin{bmatrix}\n 0.0 & 1.0 \\\\\n 0.0 & 2.0 \\\\\n 1.0 & 0.0\n \\end{bmatrix}\n \\qquad\n \\mathbf{x} = \n \\begin{bmatrix}\n 1.0 \\\\\n 2.0 \\\\\n 1.0\n \\end{bmatrix}\n \\qquad\n k = 10.\n\\end{align}\n\nYou must document your function and write some tests in addition to the one suggested. You choose the additional tests, but you must have at least one doctest in addition to a suite of unit tests.\n\n---\n# Problem 6\nPut parts 3, 4, and 5 in a module called `chemkin`.\n\nNext, pretend you're a client who needs to compute the reaction rates at three different temperatures ($T = \\left\\{750, 1500, 2500\\right\\}$) of the following system of irreversible reactions:\n\\begin{align}\n 2H_{2} + O_{2} \\longrightarrow 2OH + H_{2} \\\\\n OH + HO_{2} \\longrightarrow H_{2}O + O_{2} \\\\\n H_{2}O + O_{2} \\longrightarrow HO_{2} + OH\n\\end{align}\n\nThe client also happens to know that reaction 1 is a modified Arrhenius reaction with $A_{1} = 10^{8}$, $b_{1} = 0.5$, $E_{1} = 5\\times 10^{4}$, reaction 2 has a constant reaction rate parameter $k = 10^{4}$, and reaction 3 is an Arrhenius reaction with $A_{3} = 10^{7}$ and $E_{3} = 10^{4}$.\n\nYou should write a script that imports your `chemkin` module and returns the reaction rates of the species at each temperature of interest given the following species concentrations:\n\n\\begin{align}\n \\mathbf{x} = \n \\begin{bmatrix}\n H_{2} \\\\\n O_{2} \\\\\n OH \\\\\n HO_{2} \\\\\n H_{2}O\n \\end{bmatrix} = \n \\begin{bmatrix}\n 2.0 \\\\\n 1.0 \\\\\n 0.5 \\\\\n 1.0 \\\\\n 1.0\n \\end{bmatrix}\n\\end{align}\n\nYou may assume that these are elementary reactions.\n\n---\n# Problem 7\nGet together with your project team, form a GitHub organization (with a descriptive team name), and give the teaching staff access. You can have has many repositories as you like within your organization. However, we will grade the repository called **`cs207-FinalProject`**.\n\nWithin the `cs207-FinalProject` repo, you must set up Travis CI and Coveralls. 
Make sure your `README.md` file includes badges indicating how many tests are passing and the coverage of your code.\n", "meta": {"hexsha": "be4153510f671c72bfd4f0cd211f4750b3e1c842", "size": 15815, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "HW5/.ipynb_checkpoints/HW5-checkpoint.ipynb", "max_stars_repo_name": "filip-michalsky/CS207_Systems_Development", "max_stars_repo_head_hexsha": "4790c3101e3037d7741565198e814637e34eaff9", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "HW5/.ipynb_checkpoints/HW5-checkpoint.ipynb", "max_issues_repo_name": "filip-michalsky/CS207_Systems_Development", "max_issues_repo_head_hexsha": "4790c3101e3037d7741565198e814637e34eaff9", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "HW5/.ipynb_checkpoints/HW5-checkpoint.ipynb", "max_forks_repo_name": "filip-michalsky/CS207_Systems_Development", "max_forks_repo_head_hexsha": "4790c3101e3037d7741565198e814637e34eaff9", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.2004830918, "max_line_length": 424, "alphanum_fraction": 0.5685741385, "converted": true, "num_tokens": 3223, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.29746994260479465, "lm_q2_score": 0.31069438321455395, "lm_q1q2_score": 0.09242224034246543}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"font.size\"] = 18\n```\n\n# Radiation Interatctions with Matter\n\n### Learning Objectives\n\n- Define uncollided flux\n- Define linear interaction coefficient\n- Apply linear interaction coefficients to a slab problem\n- Identify the units of intensity, flux density, fluence, reaction rate\n- Compare linear interaction coefficient and cross section\n- Calculate uncollided flux in a medium \n- Calculate mean free path of a particle in a medium\n- Define the half thickness in a medium\n- Apply the concept of buildup factor to attenuation in a slab\n- Define microscopic cross section\n- Calculate macroscopic cross sections, given a microscopic cross section\n- Calculate the mass interaction coefficients of mixtures\n- Calculate flux density\n- Calculate Reaction Rate Density\n- Recognize the dependence of flux on energy, position, and time\n- Define radiation fluence\n- Calculate uncollided flux density from isotropic point sources\n- Apply the Kelin-Nishina formula to Compton Scattering\n- Compare energy dependence of photon interaction cross sections\n- Describe energy dependence of neutron interaction cross sections\n- Recognize the comparative range of heavy vs. light particles \n- Recognize the comparative range of charged particles\n\n## Linear Interaction Coefficient\n\n- The interaction of radiation with matter is always statistical in nature, and, therefore, must be described in probabilistic terms. \n\nConsider a particle travelling through a homogeneous material.\n\n\\begin{align}\nP_i(\\Delta x) &= \\mbox{probability the particle, causes a reaction of type i in distance }\\Delta x\\\\\n\\end{align}\n\nEmpirically, we find that this probability becomes constant as $\\Delta x \\longrightarrow 0$. 
Thus:\n\n\n\\begin{align}\n\\mu_i &= \\lim_{\\Delta x \\rightarrow 0}\\frac{P_i(\\Delta x)}{\\Delta x}\\\\\n\\end{align}\n\nFacts about $\\mu_\ud835\udc56$:\n\n- $\\mu_i$ is an *intrinsic* property of the material for a given incident particle and interaction. \n- $\\mu_i$ is independent of the path length traveled prior to the interaction. \n- $\\mu_i$ may represent many types of interaction (scattering: $\\mu_s$, absorption: $\\mu_a$, ...)\n- $\\mu_i$ typically depends on particle energy\n\n\nThe probability, per unit path length, that a neutral particle undergoes some sort of reaction, is the sum of the probabilities, per unit path length of travel, for each type :\n\n\\begin{align}\n\\mu_t(E) = \\sum_i \\mu_i(E)\n\\end{align}\n\n## Think Pair Share:\n\nWhat are the units of the linear interaction coefficient?\n\n### Attenuation of Uncollided Flux\n\nImagine a plane of neutral particles strike a slab of some material, normal to the surface. \n\nWe can describe this using $\\mu_t$ or, equivalently, the macroscopic total cross section $\\Sigma_t$. \n\n\n\\begin{align}\nI(x) &= I_0e^{-\\mu_t x}\\\\\nI(x) &= I_0e^{-\\Sigma_t x}\\\\\n\\end{align}\n\nwhere\n\n\\begin{align}\n I(x) &= \\mbox{uncollided intensity at distance x}\\\\\n I_0 &= \\mbox{initial uncollided intensity}\\\\\n \\mu_t &= \\mbox{total linear interaction coefficient} \\\\\n \\Sigma_t &= \\mbox{macroscopic total cross section} \\\\\n x &= \\mbox{distance into material [m]}\\\\\n\\end{align}\n\n\n\n```python\nimport math\ndef attenuation(distance, initial=100, sig_t=1):\n \"\"\"This function describes neutron attenuation into the slab\"\"\"\n return initial*math.exp(-sig_t*distance)\n\n```\n\nRather than intensity, one can find the probability density:\n\nWe have a strong analogy between decay and attenuation, as above. 
In the case of decay the probability of decay in a time interval dt is:\n\n\\begin{align}\nP(t)dt &= \\lambda e^{-\\lambda t}dt\\\\\n &= \\mbox{probability of decay in interval dt}\n\\end{align}\n\nFrom this, one can find the mean lifetime of a neutron before decay:\n\n\\begin{align}\n\\bar{t} &= \\int_0^\\infty t'P(t')dt'\\\\\n &= \\int_0^\\infty t'\\lambda e^{-\\lambda t'}dt'\\\\ \n &= \\frac{1}{\\lambda}\n\\end{align}\n\nIn the case of attenuation:\n\\begin{align}\nP(x)dx &= \\Sigma_te^{-\\Sigma_tx}dx\n\\end{align}\n\nSuch that: \n\n\\begin{align}\nP(x)dx &= \\mu_t e^{-\\mu_t x}dx\\\\\n &= \\Sigma_t e^{-\\Sigma_t x}dx\\\\\n &= \\mbox{probability of interaction in interval dx}\n\\end{align}\n\n\nSo, the mean free path is:\n\n\\begin{align}\n\\bar{l} &= \\int_0^\\infty x'P(x')dx'\\\\\n &= \\int_0^\\infty x'\\Sigma_te^{-\\Sigma_t x'}dx'\\\\ \n &= \\frac{1}{\\Sigma_t}\n\\end{align}\n\n\nOr, equivalently in $\\mu_t$ notation:\n\n\\begin{align}\n\\bar{x} &= \\int_0^\\infty x'P(x')dx'\\\\\n &= \\int_0^\\infty x'\\mu_te^{-\\mu_t x'}dx'\\\\ \n &= \\frac{1}{\\mu_t}\n\\end{align}\n\n\n\n```python\ndef prob_dens(distance, initial=100, sig_t=1):\n return sig_t*attenuation(distance, initial=100, sig_t=1)\n\n```\n\n\n```python\nsig_t = 0.2\ni_0 = 100\n\n# This code plots attenuation\nimport numpy as np\nz = np.arange(24)\ny = np.arange(24)\nx = np.arange(24)\nfor h in range(0,24):\n x[h] = h\n y[h] = attenuation(h, initial=i_0, sig_t=sig_t)\n z[h] = prob_dens(h, initial=i_0, sig_t=sig_t)\n\n# creates a figure and axes with matplotlib\nfig, ax = plt.subplots()\nscatter = plt.scatter(x, y, color='blue', s=y*20, alpha=0.4) \nax.plot(x, y, color='red') \nax.plot(x, z, color='green') \n\n\n# adds labels to the plot\nax.set_ylabel('Percent of Neutrons')\nax.set_xlabel('Distance into slab')\nax.set_title('Attenuation')\n\n# adds tooltips\nimport mpld3\nlabels = ['{0}% intensity'.format(i) for i in y]\ntooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)\nmpld3.plugins.connect(fig, tooltip)\n\nmpld3.display()\n```\n\n\n\n\n\n\n\n\n
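As a quick numerical check on the mean-free-path result above (a minimal sketch reusing the `sig_t = 0.2` value from the plot), one can sample interaction distances from $P(x)$ and compare the sample mean with $1/\Sigma_t$:

```python
import numpy as np

sig_t = 0.2  # macroscopic total cross section [1/cm], as in the plot above

# interaction distances are exponentially distributed with mean 1/sig_t
rng = np.random.default_rng(42)
paths = rng.exponential(scale=1.0 / sig_t, size=1_000_000)

print(paths.mean())   # close to 5.0
print(1.0 / sig_t)    # 5.0, the mean free path
```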
\n\n\n\n\n## Half-thickness\n\nIn another analog to decay, the **half-thickness** of a material is the distance required for half of the incident radiation to interact with a medium:\n\n\\begin{align}\n\\frac{I(x_{1/2})}{I(0)} &= e^{-\\mu_t x_{1/2}}\\\\\n\\implies x_{1/2} &= \\frac{\\ln{2}}{\\mu_t}\n\\end{align}\n\n## Think pair share: \nWhat is the concept in the context of decay that is analogous to the half-thickness?\n\n\n## Microscopic Cross Sections\n\n- The microscopic cross section $\\sigma_i$ is the likelihood of the event per unit area. \n- The macroscopic cross section $\\Sigma_i$ is the likelihood of the event per unit area of a certain density of target isotopes.\n- The macroscopid cross section $\\Sigma_i$ is equivalent to the linear interaction coefficient $\\mu_i$, but we tend to use $\\Sigma_i$ in nuclear interactions, reserving $\\mu_i$ for photon interactions.\n\n\\begin{align}\n\\mu_i &= \\mbox{linear interaction coefficient}\\\\\n\\Sigma_i &= \\mbox{macroscopic cross section}\\\\\\\\\n &= \\sigma_i N\\\\\n &= \\sigma_i \\frac{\\rho N_a}{A}\\\\\n \\mbox{where }& \\\\\n N &= \\mbox{atom density of medium}\\\\\n \\rho &= \\mbox{mass density of the medium}\\\\\n N_a &= \\mbox{Avogadro's number}\\\\\n A &= \\mbox{atomic weight of the medium}\n\\end{align}\n\n\n\n```python\ndef macroscopic_xs(micro, N):\n \"\"\"Returns the macroscopic cross section [cm^2] or [barns]\n \n Parameters\n ----------\n micro: double\n microscopic cross section [cm^2] or [barns]\n N: double\n atom density in the medium [atoms/cm^3]\n \"\"\"\n return micro*N\n```\n\n\n```python\ndef NA():\n \"\"\"Returns Avogadro's number \n 6.022x10^23 atoms per mole\n \"\"\"\n return 6.022E23\n\ndef num_dens_from_rho(rho, na, a):\n \"\"\"The atomic number density. \n That is, the concentration of atoms or molecules per unit volume (V)\n \n Parameters\n -----------\n rho : double\n material density (in units like g/cm^3 or kg/m^3) of the sample\n na : double\n Avogadro's number\n a : double\n The atomic or molecular weight of the atom or molecule of interest \n \"\"\"\n return rho*na/a\n```\n\n## Example: \nImagine a beam of neutrons striking a body of water, $H_2O$. Many will be absorbed by the hydrogen in the water, particularly $^1H$. \n\n\n```python\n# Find the macroscpic absorption cross section \n# of the 1H in H2O\nsig_1h = 0.333 # barns\n\n# First, molecular density of water\nrho_h2o = 1 # g/cm^3\na_h2o = 18.0153 # g/mol\nn_h2o = num_dens_from_rho(rho_h2o, NA(), a_h2o) # molecules water / cm^3\nn_h2o_barn = n_h2o/10**(24) # 10^24 molecules water / cm^3\nprint('n_h2o [1/cm^3] = ', n_h2o)\nprint('n_h2o [10^(24)/cm^3] = ', n_h2o_barn)\n\n# Now, there are two Hydrogens in each molecule of water, so:\nmacroscopic_h1 = macroscopic_xs(sig_1h, 2*n_h2o_barn)\nprint('absorption in water from 1H = ', macroscopic_h1)\n```\n\n n_h2o [1/cm^3] = 3.342714248444378e+22\n n_h2o [10^(24)/cm^3] = 0.033427142484443784\n absorption in water from 1H = 0.02226247689463956\n\n\n### Mixtures\nIn a medium that is a mixture of isotopes (e.g. $H_2O$), we can calculate the total macroscopic cross section based on individual microscopic cross sections and number densities for each component of the mixture. 
We may need to include information about relative isotopic abundances (f).\n\nFor the same problem as above (neutrons striking a body of water) we can calculate the absorption by *all* isotopes in the $H_2O$.\n\n\n\\begin{align}\n\\mu^{H_2O} \\equiv \\Sigma^{H_2O} &= N^1\\sigma_a^1 + N^2\\sigma_a^2 + N^{16}\\sigma_a^{16}\n+ N^{17}\\sigma_a^{17} + N^{18}\\sigma_a^{18}\\\\\n&= f^1N^H\\sigma_a^1 + f^2N^H\\sigma_a^2 + f^{16}N^O\\sigma_a^{16} + f^{17}N^O\\sigma_a^{17} + f^{18}N^O\\sigma_a^{18}\n\\end{align}\n\nSuperscripts 1, 2, 16, 17, and 18 indicate isotopes $^1H$, $^2H$, $^{16}O$,$^{17}O$, and $^{18}O$. \n\n\\begin{align}\nN^H = 2N^{H_2O}\\\\\nN^{O} = N^{H_2O}\\\\\nN^{H_2O} = \\frac{\\rho^{H_2O}N_a}{A^{H_2O}}\n\\end{align}\n\nThus:\n\\begin{align}\n\\mu^{H_2O} \\equiv \\Sigma^{H_2O} &= N^{H_2O}\\left[2f^1\\sigma_a^1 + 2f^2\\sigma_a^2 + f^{16}\\sigma_a^{16} + f^{17}\\sigma_a^{17} + f^{18}\\sigma_a^{18}\\right]\n\\end{align}\n\n\n\n```python\n# We need a lot of data\n\n# Abundances\nf_1 = 0.99985\nf_2 = 0.00015\nf_16 = 0.99756\nf_17 = 0.00039\nf_18 = 0.00205\n\n# Then, microscopic absorption cross sections\nsig_1 = 0.333\nsig_2 = 0.000506\nsig_16 = 0.000190\nsig_17 = 0.239\nsig_18 = 0.000160\n\nmacroscopic_h2o = n_h2o_barn*(2*f_1*sig_1 \n + 2*f_2*sig_2\n + f_16*sig_16\n + f_17*sig_17 \n + f_18*sig_18) \nprint('absorption in water from all isos = ', macroscopic_h2o,\"\\n\",\n 'while absorption in water from 1H = ', macroscopic_h1,\"\\n\",\n 'Thus, absorption in water is mostly from 1H.')\n```\n\n absorption in water from all isos = 0.02226860496564809 \n while absorption in water from 1H = 0.02226247689463956 \n Thus, absorption in water is mostly from 1H.\n\n\n### Reaction Rates\n\n- The microscopic cross section is just the likelihood of the event per unit area. \n- The macroscopic cross section is just the likelihood of the event per unit area of a certain density of target isotopes.\n- The reaction rate is the macroscopic cross section times the flux of incident neutrons.\n\n\\begin{align}\nR_{i,j}(\\vec{r}) &= N_j(\\vec{r})\\int dE \\phi(\\vec{r},E)\\sigma_{i,j}(E)\\\\\nR_{i,j}(\\vec{r}) &= \\mbox{reactions of type i involving isotope j } [reactions/cm^3s]\\\\\nN_j(\\vec{r}) &= \\mbox{number of nuclei participating in the reactions } [\\#/cm^3]\\\\\nE &= \\mbox{energy} [MeV]\\\\\n\\phi(\\vec{r},E)&= \\mbox{flux of neutrons with energy E at position i } [\\#/cm^2s]\\\\\n\\sigma_{i,j}(E)&= \\mbox{cross section } [cm^2]\\\\\n\\end{align}\n\n\nThis can be written more simply as $R_x = \\Sigma_x I N$, where I is intensity of the neutron flux.\n\n\nUsing flux notation, the density of ith type of neutron interaction with isotope j, per unit time is:\n\n\n\\begin{align}\nR_{i,j}(\\vec{r}) = \\Sigma_{i,j}\\phi(\\vec{r})\n\\end{align}\n\n### Reaction Rate Example: Fission Source term\n\nAn example of an important use of reaction rates is the source of neutrons in a reactor are the neutrons from fission. 
\n\n\\begin{align}\ns &=\\nu \\Sigma_f \\phi\n\\end{align}\n\nwhere\n\n\\begin{align}\ns &= \\mbox{neutrons available for next generation of fissions}\\\\\n\\nu &= \\mbox{the number born per fission}\\\\\n\\Sigma_f &= \\mbox{the number of fissions in the material}\\\\\n\\phi &= \\mbox{initial neutron flux}\n\\end{align}\n\nThis can also be written as:\n\n\\begin{align}\ns =& \\nu\\Sigma_f\\phi\\\\\n =& \\nu\\frac{\\Sigma_f}{\\Sigma_{a,fuel}}\\frac{\\Sigma_{a,fuel}}{\\Sigma_a}{\\Sigma_a} \\phi\\\\\n =& \\eta f {\\Sigma_a} \\phi\\\\\n\\eta =& \\frac{\\nu\\Sigma_f}{\\Sigma_{a,fuel}} \\\\\n =& \\mbox{number of neutrons produced }\\\\\n & \\mbox{ per neutron absorbed by the fuel}\\\\\n =& \\mbox{\"neutron reproduction factor\"}\\\\\nf =& \\frac{\\Sigma_{a,fuel}}{\\Sigma_a} \\\\\n =& \\mbox{number of neutrons absorbed in the fuel}\\\\\n &\\mbox{ per neutron absorbed anywhere}\\\\\n =&\\mbox{\"fuel utilization factor\"}\\\\\n\\end{align}\n\nThis absorption and flux term at the end seeks to capture the fact that some of the neutrons escape. However, if we assume an infinite reactor, we know that all the neutrons are eventually absorbed in either the fuel or the coolant, so we can normalize by $\\Sigma_a\\phi$ and therefore:\n\n\n\\begin{align}\nk_\\infty &= \\frac{\\eta f \\Sigma_a\\phi}{\\Sigma_a \\phi}\\\\\n&= \\eta f\n\\end{align}\n\n## Flux density from Point Source\nFinding $\\phi(\\vec{r}0$ generally requires *particle transport calculations.*\n\nHowever, in some simple practical situations, the flux density can be approximated by the flux density of uncollided source particles.\n\n### Point Source in Vacuum\n\nConsider a source of particles:\n\n- it emits $S_p$ particles per unit time\n- all particles have energy E\n- and they are emitted radially outward into an infinite vacuum\n- isotropically (equally in all directions)\n- from a single point in space\n\n### Think-pair share: \n\n- How many interactions occur?\n\n\n### At a radius r: \nBecause the source is isotropic, each unit area on an imaginary spherical shell of radius $r$ has the same number of particles crossing it. Thus:\n\n\\begin{align}\n\\phi^o(r) &= \\mbox{uncollided flux at radius r in any direction}\\\\\n&= \\frac{S_p}{4\\pi r^2}\n\\end{align}\n\n\n```python\ndef phi_o_r(r, s):\n \"\"\"Returns the uncolided flux at radius r\n due to an isotropic point source in a vacuum\n \n Parameters\n -----------\n r : double\n radius away from the point [length]\n s : double\n point source strength [particles/time]\n \"\"\"\n return s/(4*math.pi*pow(r,2))\n```\n\n\n```python\ns=200\n\nplt.plot(range(1,10), [phi_o_r(r, s) for r in range(1,10)])\n```\n\nThe plot above, this $1/r^2$ reduction in flux and reaction rate, is occaisionally called \"geometric attenuation\".\n\n\n```python\n# The below IFrame displays Page 189 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n# Please take note of Figure 7.2\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/211\", width=1000, height=500)\n\n```\n\n\n\n\n\n\n\n\n\n\n## Point Source in an Attenuating Medium\nSo, the unollided flux is \n\\begin{align}\n\\phi^o(r) &= \\frac{S_p}{4\\pi r^2}\n\\end{align}\n\n### A small volume\n\nAt a distance r, we place a homogeneous mass with a volume $\\Delta V_d$. 
The interaction rate $R_d$ in the mass is: \n\n\\begin{align}\n&R^o(r)=\\mu_d(E)\\Delta V_d\\frac{S_p}{4\\pi r^2}\\\\\n\\mbox{where}&\\\\\n&\\mu_d(E)=\\mbox{linear interaction coefficient in the volume}\n\\end{align}\n\n### An inifinite volume\n\nFrom this, we can imagine the point source embeeded in an infinite medium of this material. A detector is at distance r in the volume:\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{-\\mu r}\\\\\n\\mbox{where}&\\\\\n&e^{-\\mu r}=\\mbox{material attenuation}\n\\end{align}\n\n### A slab shield\n\nImagine a slab shield, thickness t, at a distance r, between the point source and a detector.\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{-\\mu t}\\\\\n\\mbox{where}&\\\\\n&t=\\mbox{thickness of the slab}\n\\end{align}\n\nIf it were made of a series of materials $i$, with coefficients $\\mu_i$, and thicknesses $t_i$:\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{\\sum_i -\\mu_i t_i}\\\\\n\\mbox{where}&\\\\\n&\\mu_i=\\mbox{linear interaction coefficient of ith slab}\\\\\n&t_i=\\mbox{thickness of ith slab}\n\\end{align}\n\n### Heterogeneous Medium\n\nAn arbitrary heterogeneous medium can be described as having an interaction coefficient $\\mu(\\vec{r})$ at any point $\\vec{r}$ in the medium, a funciton of position in the medium.\n\n\\begin{align}\n&\\phi^o(r) = \\frac{S_p}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s) ds\\right]}\\\\\n\\end{align}\n\n## Polyenergetic Point Source\n\n- Previous examples assume a **monoenergetic** point source (particles of a single energy, E). \n- But, a single source can emit particles at several discrete energies, or even a continuum of energies.\n\nLet's define some variables:\n\n\\begin{align}\nf_i &= \\mbox{fraction of the source emitted with energy }E_i\\\\\nE_i &= \\mbox{discrete energy of }f_iS_p\\mbox{ particles}\\\\\nS_p &= \\mbox{still the number of particles emitted from the point source}\n\\end{align}\n\nThe total interaction rate caused by uncollided particles streaming through a small volume mass at distance r from the source is the following, **for some set of i discrete energies**.\n\n\\begin{align}\nR^o(r)=\\sum_i\\frac{S_p f_i\\mu_d(E_i) \\Delta V_d}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s,E_i) ds\\right]}\\\\\n\\end{align}\n\nIf the source emits a continuum of energies, it's best to define the fraction $f_i$ as a differential probability:\n\n\\begin{align}\nN(E)dE\\mbox{the probability that a source particle is emitted with energy in dE about E}\n\\end{align}\n\n\nWith this definition, the sum over discrete energies becomes an integral.\n\n\\begin{align}\nR^o(r)=\\int_o^\\infty \\left[\\frac{S_p N(E)\\mu_d(E) \\Delta V_d}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s,E) ds\\right]}\\right]dE\\\\\n\\end{align}\n\nPlease note, you may see many nuclear texts list the dE first in the integral... don't be bamboozled. This is equivalent to the above:\n\n\\begin{align}\nR^o(r)=\\int_o^\\infty dE\\frac{S_p N(E)\\mu_d(E) \\Delta V_d}{4\\pi r^2}e^{\\left[-\\int_0^r \\mu(s,E) ds\\right]}\n\\end{align}\n\n\n### Example 7.4 from your book (Shultis & Faw)\n\nA point source with an activity of 500 Ci emits 2-MeV photons with a frequency of 70% per decay. \n\n\\begin{align}\nS_p = 500 Ci\\\\\nf_2 = 0.7\\\\\n\\end{align}\n\nWhat is the flux density of 2-MeV photons 1 meter from the source? 
\n\n\n\n```python\ns_p = 500 # Ci\nf_2 = 0.7 # fraction emitted at 2MeV\nmu = 1.0/187.0 # mean free path of 2MeV photon in air is 187m\n\n# first, convert S_p is in number of particles per decay (Bq)\nbq_to_ci = 3.7e10 # Bq/Ci\ns_p = s_p*bq_to_ci \n\n# Now, find uncollided flux of 2MeV photons at 1 m\nr = 1.0 #m\ns = s_p*f_2 # just want 2MeV photons\nphi = phi_o_r(r, s)\nprint(\"Uncollided flux is : \", phi)\n\n# Uh oh, we forgot the material attenuation!\nphi = phi_o_r(r, s)*math.exp(-mu*r)\nprint(\"Uncollided flux with attenuation is : \", phi)\n```\n\n Uncollided flux is : 1030528256520.0223\n Uncollided flux with attenuation is : 1025032118881.2917\n\n\n### Think Pair Share\n\nWhat are the units of $\\phi^o$, above?\n\n\n# Photon Interactions\n\n**Recall:** \n \n\\begin{align}\nc &= \\mbox{speed of light}\\\\ \n &=2.9979\\times10^8\\left[\\frac{m}{s}\\right]\\\\\nE &= \\mbox{photon energy}\\\\\n &=h\\nu\\\\\n &=\\frac{hc}{\\lambda}\\\\\nh &= \\mbox{Planck's constant}\\\\\n &= 6.62608\\times10^{\u221234} [J\\cdot s] \\\\\n\\nu &=\\mbox{photon frequency}\\\\\n\\lambda &= \\mbox{photon wavelength}\n\\end{align}\n\n**Nota bene:**\n- **10eV - 20MeV** photons are important in radiation sheilding\n- At **10eV - 20MeV**, only photoelectric effect, pair production, and Compton Scattering are significant\n\n\n
Figure from: \"Radiation Interactions with Tissue.\" Radiology Key. Jan 8 2016.
Figure from: Cullen, D. E. 1994. \"Photon and Electron Interaction Databases and Their Use in Medical Applications.\" UCRL-JC--117419. Lawrence Livermore National Lab. http://inis.iaea.org/Search/search.aspx?orig_q=RN:26035330.
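As a small numerical illustration of the $E = h\nu = hc/\lambda$ relations listed under **Recall** above (constants as given there):

```python
h = 6.62608e-34      # Planck's constant [J*s]
c = 2.9979e8         # speed of light [m/s]
eV = 1.602177e-19    # J per eV

wavelength = 1.0e-9  # 1 nm, chosen only for illustration
E = h * c / wavelength
print(E / eV)        # ~1240 eV, i.e. a soft X-ray photon
```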
\n\n\n\n## Klein Nishina\n\nThe total Compton cross section, per atom with Z electrons, based on the free-electron approximation, is given by the well-known Klein-Nishina formula [Evans 1955]:\n\n\\begin{align}\n\\sigma_c(E) =\\pi Zr_e^2\\lambda\\left[(-2\\lambda - 2\\lambda^2)\\ln{\\left(1+\\frac{2}{\\lambda}\\right)} + \\frac{2(1+9\\lambda + 8\\lambda^2 + 2\\lambda^3)}{(\\lambda + 2)^2}\\right]\n\\end{align}\n\nHere $\\lambda \\equiv \\frac{m_ec^2}{E}$, a dimensionless quantity, and $r_e$ is the classical electron radius. The value of $r_e$ is given by:\n\n\\begin{align}\nr_e &\\equiv \\frac{e^2}{4\\pi\\epsilon_om_ec^2}\\\\\n&= 2.8179\\times10^{-13}cm\n\\end{align}\n\n\n### Think pair share:\nConceptually, in the above equation:\n\n- what is $r_e$?\n- what is $e$?\n- what is $\\epsilon_o$?\n- what is $m_ec^2$?\n\n\n\n### Total Photon Cross Section\nVarious types of incoherent scattering, including Compton, are actually present in that intermediate energy range. It is occaisionally important to correct for all types of incoherent scattering, but it can typically be assumed to be primarily Compton scattering. \n\nFor photons, then $\\mu$ becomes:\n\n\\begin{align} \n\\mu(E)&\\equiv N\\left[\\sigma_{ph}(E) + \\sigma_{inc}(E) + \\sigma_{pp}(E)\\right]\\\\\n &\\simeq N\\left[\\sigma_{ph}(E) + \\sigma_{c}(E) + \\sigma_{pp}(E)\\right]\\\\\n N &= \\mbox{atom density}\\\\\n &= \\frac{\\rho N_a}{A} \n\\end{align}\n\nIt is common to denote this as the total mass interaction coefficient:\n\n\\begin{align}\n\\frac{\\mu}{\\rho} &= \\frac{N_a}{A}\\left[\\sigma_{ph}(E) + \\sigma_{c}(E) + \\sigma_{pp}(E)\\right]\\\\\n&= \\frac{N_a}{A}\\left[\\frac{\\mu_{ph}(E)}{\\rho} + \\frac{\\mu_{c}(E)}{\\rho} + \\frac{\\mu_{pp}(E)}{\\rho}\\right]\n\\end{align}\n\n## Neutron Interactions\n\nPhotons tend to interact with electrons in a target atom. **Neutrons tend to interact with the nucleus.**\n\nNeutron cross sections:\n\n- Vary rapidly with the incident neutron energy,\n- Vary erratically from one element to another \n- Even vary dramatically between isotopes of the same element.\n\nThere are lots of sources of neutron cross sections. The best place to start is the Brookhaven National Laboratory National Nuclear Data Center [https://www.nndc.bnl.gov/](https://www.nndc.bnl.gov/).\n\nYour book has a clever table (7.1) listing some of the data needed for high and low energy interaction calculations. These include:\n\n- Elastic scattering cross sections \n- Angular distribution of elastically scattered neutrons \n- Inelastic scattering cross sections \n- Angular distribution of inelastically scattered neutrons \n- Gamma-photon yields from inelastic neutron scattering \n- Resonance absorption cross sections \n- Thermal-averaged absorption cross sections \n- Yield of neutron-capture gamma photons\n- Fission cross sections and associated gamma-photon and neutron yields\n\n\n# Total cross sections\n\n**For light nuclei** ($A<25$) and $E<1keV$, the cross section typically varies as:\n\n\\begin{align}\n\\sigma_t = \\sigma_1 + \\frac{\\sigma_2}{\\sqrt{E}}\n\\end{align}\n\n**For solids** at energies less than about 0.01 eV, Bragg cutoffs apply. 
These are energies below which no coherent scattering is possible from the material's crystalline planes.\n\n\n**For heavy nuclei**, the total cross section has a $\\frac{1}{\\sqrt{E}}$ behavior with low energy, narrow resonances and high energy broad resonances:\n\n\\begin{align}\n\\sigma_t \\propto \\frac{1}{\\sqrt{E}}\n\\end{align}\n\n\n```python\n# The below IFrame displays Page 200 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n# Please take note of Figure 7.2\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/222\", width=1000, height=1000)\n\n```\n\n\n\n\n\n\n\n\n\n\n### Recall fission cross sections :\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "7d8ae7a205160cc3915380a384e7ed8542fcff3e", "size": 68860, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "rad_interactions/00-rad-int-matter.ipynb", "max_stars_repo_name": "katyhuff/npr247", "max_stars_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-12-17T06:07:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:14:51.000Z", "max_issues_repo_path": "rad_interactions/00-rad-int-matter.ipynb", "max_issues_repo_name": "katyhuff/npr247", "max_issues_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-29T17:27:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T17:46:50.000Z", "max_forks_repo_path": "rad_interactions/00-rad-int-matter.ipynb", "max_forks_repo_name": "katyhuff/npr247", "max_forks_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-08-25T20:00:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T03:05:26.000Z", "avg_line_length": 58.5544217687, "max_line_length": 11764, "alphanum_fraction": 0.6192564624, "converted": true, "num_tokens": 7157, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.5, "lm_q2_score": 0.184767510648, "lm_q1q2_score": 0.092383755324}} {"text": "```javascript\n%%javascript\nMathJax.Hub.Config({TeX: { equationNumbers: { autoNumber: \"AMS\" } }});\n```\n\n\n \n\n\n# Maximum Entropy Principle for inference methods\n\nThe inference method based on the maximum entropy principle (**MaxEnt principle**) \nasserts that the most suitable probability distribution compatible with a given set of constraints is the one with the largest entropy [Jaynes1957a, Jaynes1957b].\nThis method is considered as a powerful estimation technique in a wide range of probabilistic models since it brings a solution to the **universal problem** of trying to stract information from partial or incomplete data --which is usually what we have to work with--. That is the reason why it finds applications in various fields of research, beyond statistical physics, as biology [De Martino2018] and ecology [Tang2021], being also usefull for analyzing and understanding complex social or economic systems [Golan1997,Scharfenaker2020], and make financial predictions [Benedetto2015]. 
Problems arising from these systems are characterized by having many degrees of freedom and non-trivial interaction patterns between individual subsystems. Hence it must be dealt with\ninductive inference problems due to the insufficient amount of experimental data and the incomplete nature of the information that can be extracted from them. \n\nMoreover, the MaxEnt principle has been proven to be useful for a reasonable estimation of quantum states from incomplete data [Buzek2000,Goncalves2013,Gupta2021], where the amount of experimental resourses and time consuming make quantum tomography impractical even for an intermediate number of qubits, and therefore, approaches to validate quantum processing on these quantum devices are needed.\n\n[De Martino2018] De Martino A, De Martino D. An introduction to the maximum entropy approach and its application to inference problems in biology. Heliyon. 2018 Apr 13;4(4):e00596. https://doi.org/10.1016/j.heliyon.2018.e00596 \n\n[Tang2021] Maximum Entropy Modeling to Predict the Impact of Climate Change on Pine Wilt Disease in China, Xinggang Tang, Yingdan Yuan, Xiangming Li and Jinchi Zhang, Front. Plant Sci., 23 April 2021. https://doi.org/10.3389/fpls.2021.652500.\n\n[Golan1997] A. Golan, G. Judge, and D. Miller, *Maximum Entropy\nEconometrics: Robust Estimation with Limited Data* (John Wiley and Sons, Chichester, United Kingdom,\n1997).\n\n[Scharfenaker2020] Scharfenaker, E., Yang, J. Maximum entropy economics. Eur. Phys. J. Spec. Top. 229, 1577\u20131590 (2020). https://doi.org/10.1140/epjst/e2020-000029-4\n\n[Benedetto2015] A maximum entropy method to assess the predictability of financial and commodity prices, F.Benedetto, G.Giunta, L.Mastroeni, Digital Signal Processing\nVolume 46, November 2015, Pages 19-31. https://doi.org/10.1016/j.dsp.2015.08.001\n\n[Jaynes1957a] E. T. Jaynes, Information theory and statistical mechanics, Physical Review **106**, 620 (1957).\n\n[Jaynes1957b] E. T. Jaynes, Information theory and statistical mechanics. II, Physical Review **108**, 171 (1957).\n\n[Buzek2000] V. Buzek and G. Drobny, Quantum tomography via the maxent principle, Journal of Modern Optics47, 2823 (2000). https://doi.org/10.1080/09500340008232199\n\n[Goncalves2013] D. Goncalves, C. Lavor, M. Gomes-Ruggiero, A. Cesario,R. Vianna, and T. Maciel, Quantum state tomographywith incomplete data: Maximum entropy and variationalquantum tomography, Phys. Rev. A87, 052140 (2013). https://doi.org/10.1103/PhysRevA.87.052140\n\n\n[Gupta2021] Maximal Entropy Approach for Quantum State Tomography, Rishabh Gupta, Rongxin Xia, Raphael D. Levine, and Sabre Kais, PRX QUANTUM **2**, 010318 (2021). https://doi.org/10.1103/PRXQuantum.2.010318\n\n\n# Mathematical problem\n\nLet $X$ be a random variable in a sample space $\\Omega = \\{x_1, \\ldots, x_k\\}$ with **unknown** probabilities \\\\(p_i=P(X=x_i), x_i \u2208 \\Omega\\\\) and $\\sum_{i=1}^k p_i=1$. Mathematically, the MaxEnt formalism with \\\\(m\\\\) constraints on the expectations values $E[g_j]=\\alpha_j$ of functions $g_j(x_i)$, can be expressed as a constrained optimization problem \n\n\\begin{equation}\n\\mathrm{max}\\;S(X)\\;\\;\n\\mathrm{s.t.}\n\\sum_{i=1}^k p_i g_j(x_i)=\\alpha_j, ~j=1,\\dots,m \\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;(1)\\nonumber\n\\end{equation}\n\nwere $S(X)$ is the entropy of the random variable $X$. Within the information theory, $S$ is usually taken as the Shannon entroypy and then the principle gives the less biased distribution, consistent with the available data. 
In such a case $S(X)= H \\equiv \u2212k\n\\sum_{i=1}^k p_i \\mathrm{log}(p_i)$. The resulting maximum entropy probability is given by:\n\n\\begin{equation}\np_i =\n\\frac{1}\n{Z(\u03bb_1, . . . , \u03bb_m)}\nexp \\left[\u2212\u03bb_1 g_1(x_i) \u2212 \u00b7 \u00b7 \u00b7 \u2212 \u03bb_m g_m(x_i)\\right],\\;\\;\\;\\;\\;\\;\\;\\;\\;(2)\\nonumber\n\\end{equation}\n\nwith $Z(\u03bb_1, . . . , \u03bb_m) = \\sum_{i=1}^k exp [\u2212\u03bb_1g_1(x_i) \u2212 \u00b7 \u00b7 \u00b7 \u2212 \u03bb_mg_m(x_i)]$ and $\u03bb_m$ is the Lagrangian multiplier for the $m$-th constraint given by the relation\n\\begin{equation}\\label{classical lagrange multipliers}\n\\alpha_{j}= \\frac{\\partial}{\\partial\\lambda_{j}}\\ln Z,\\quad 1\\leq j \\leq n.\\;\\;\\;\\;\\;\\;(3)\\nonumber\n\\end{equation}.\n\nHence, to find the MaxEnt probability distribution is considered a hard task due to the nonlinearities in the reconstruction algorithm. In fact, the relations in Ec. (3) represents a system of nonlinear differential equations to be solve. \n\n# MaxEnt inference in Biology \n\n## Inference of gene interaction networks\n\n \n\n**Fig. 1** Inference of gene interaction networks from empirical expression data. Figure extracted from \"Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns\", T.R. Lezon, J.R. Banavar, M. Cieplak, A. Maritan, N.V. Fedoroff, Proc. Natl. Acad. Sci. 103 (50) (2006) 19033\u201319038.\n\n\n\n\n\n\n# MaxEnt inference in Ecology\n\n## Predicting the distribution of pine species and the impact of climate change on forest diseases\n\n\n\n**Fig. 2** Habitat suitability maps showing the ocurrence of *P. desinflora* by 2050 and 2070 under two distinct climate change scenarios in China. Figure extracted from \"Maximum Entropy Modeling to Predict the Impact of Climate Change on Pine Wilt Disease in China\", Xinggang Tang, Yingdan Yuan, Xiangming Li and Jinchi Zhang, Front. Plant Sci., 23 April 2021.\n\n\n# Solving MaxEnt as a QUBO problem\n\nThe Quadratic Unconstrained Binary Optimization (QUBO) [Kochenberger2014] is a model for representing a wide range of combinatorial optimization problems. Moreover, due to its close connection to Ising models, QUBO constitutes a central problem class for Adiabatic quantum computation feasible to be solved through quantum annealing.\n\nIf $f_Q(x)=x^TQx$ is a quadratic polinomial over binary variables $x_i\\in B=\\{0,1\\}$, where $Q\\in \\mathbb {R} ^{n\\times n}$ is a symmetric $n\\times n$ matrix, the QUBO problem consists of finding a binary vector $x^{*}$ that minimize $f_Q$.\n\n\n## Goal\n\n\nWe have redefined the MaxEnt problem as a QUBO problem $\\left(P^TQP+C^TP\\right)$. For this purpose, the first step is to find an appropriate entropy function $S$ and codified the variables to be obtained as a result of the minimization process, in our case the probabilities $p_i$, as a binary vector. Expanding the Shannon's entropy $H$ to first order in the distribution $P=(p_1,p_1,\\dots,p_k)^T$, we obtain the quadratic entropy\n\n\\begin{equation}\nH(P) \\approx \u2212k\n\\sum_{i=1}^k p_i \\mathrm{log}(p_i)=2 - 2 P^TP,\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; (4)\\nonumber\n\\label{entropy}\n\\end{equation}\n\nwere we have take the value $k=2$ according to a random variable $X\\in\\Omega=\\{0,1\\}$.\nThen, we are interesting in to find the probability distribution $P$ that satisfies the constraints in Eq. 
$\\left(1\\right)$ and maximizes the quadratic entropy $\\left(4\\right)$.\n\n$1.$ We define the cost function $f(P)$ as:\n\\begin{equation}\nf(P) = -H(P) + \\sum_{j= 0}^{m} (G_j^T P - \\alpha_j) ^2.\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; (5)\\nonumber\n\\end{equation}\n\nwhere $G_j=(g_j(x_1),g_j(x_2),\\dots,g_j(x_k))^T$ is the vector which contains the image of the function $g_j\\in \\{g_1,\\dots,g_m\\}$. After trivial algebra manipulation Eq. $(5)$ is reduced to \n\n\\begin{align}\nf(P)=-2 + \\sum_{j= 0}^{m} \\alpha_j^2 + \\left(\\sum_{j= 0}^{m}(-2) \\alpha_j G_j^T\\right) P + P^T \\left( 2I_k +\\sum_{j= 0}^{m} G_j G_j^T\\right) P \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; (6)\\nonumber\n \\end{align}\n\n\n\n\nThen, $f(P)$ can be rewritten as\n\\begin{equation}\nf(P) = P^T Q P + C^T P + cte, \\;\\;\\;\\;\\;\\;\\;\\;\\;\\; (7)\\nonumber\n\\end{equation}\n\nwith $C^T = \\sum_{j= 0}^{m}(-2) \\alpha_j G_j^T$, and $Q = 2 I_n +\\sum_{j= 0}^{m} G_j G_j^T$.\n\n$2.$ Now we will express each entry $p_i$ of the probability distribution in a binary basis, i.e., \n\\begin{equation}\np_i = \\sum_{k=1}^d \\frac{a_{ik}}{2^k},\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; (8)\\nonumber\n\\end{equation}\nwhere each $a_{ik}$ can be $0$ or $1$. For a given $d$ we have a restriction of the values that we able to represent $(0 \\leq p_i \\leq 1- 1/2^d$ with precision $1/2^d)$. Then,\n\n\\begin{align}\np_i &= (\\frac{1}{2}, \\ldots,\\frac{1}{2^d}) (a_{i1}, \\ldots, a_{id})^T\\;\\;\\;\\;\\;\\;\\;\\;\\;\\; (9)\\nonumber\\\\\n\\mathrm{or}\\;\\; P & = S a,\\nonumber\n\\end{align}\n\nwith $S$ an adequate matrix that performs the transformation.\n\nFinally, we obtain the cost function in a suitable form to be solve as a QUBO problem\n\n\\begin{equation}\nf(p) = cte + C^T S a + a^T S^TQS a.\n\\end{equation}\n\n[Kochenberger2014] Kochenberger, Gary; Hao, Jin-Kao (2014). \"The unconstrained binary quadratic programming problem: a survey\" (PDF). Journal of Combinatorial Optimization. **28**: 58\u201381. http://doi:10.1007/s10878-014-9734-0. \n\n\n# A puzzlelike problem\n\n\n\n\n\n\n\nThe following code finds the MaxEnt probability distribution given the appearance frequencies of the faces of a die as constraints. 
\n\n\n\n\n```python\nimport numpy as np\nimport function as f \n```\n\n\n```python\n#Example 1: Dice without constraints\nnb = 6 #Number of bits \nlam = 0.001 #Optimization constant\nnumreads = 10000 #number of reads\ny0= np.array ([[1.0],[1.0],[1.0],[1.0],[1.0], [1.0] ])\nalpha = np.array ([1.0 ])\ny = [y0]\nx,p,cost, cost_bin = f.solution(y, alpha, nb, lam,numreads )\nprint('Binary solution: ', x)\nprint('Probability: ', p, 'Sum: ', sum(p))\nprint('Cost: ', cost[0])\n```\n\n Binary solution: [0 0 1 0 1 0 0 0 1 1 0 0 0 0 1 1 0 0 0 0 1 0 1 1 0 0 1 0 0 1 0 0 1 0 1 0]\n Probability: [0.15625 0.1875 0.1875 0.171875 0.140625 0.15625 ] Sum: 1.0\n Cost: -0.9996630859375\n\n\n\n\n\n```python\n#Example 2: Dice with fair mean value\nnb = 6 #Number of bits \nlam = 0.001 #Optimization constant\nnumreads = 10000 #number of reads\ny0= np.array ([[1.0],[1.0],[1.0],[1.0],[1.0], [1.0] ])\ny1= np.array ([[1.0],[2.0],[3.0],[4.0],[5.0], [6.0] ])\nalpha = np.array ([1.0, 3.5 ])\ny = [y0, y1]\nx,p,cost, cost_bin = f.solution(y, alpha, nb, lam,numreads )\nprint('Binary solution: ', x)\nprint('Probability: ', p, 'Sum: ', sum(p))\nprint('Cost: ', cost[0])\n```\n\n Binary solution: [0 0 1 0 1 0 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 1 0 0 1 0 1 0]\n Probability: [0.15625 0.171875 0.171875 0.171875 0.171875 0.15625 ] Sum: 1.0\n Cost: -13.249666015625\n\n\n\n\n\n```python\n#Example 3: Loaded dice\nnb = 8 #Number of bits \nlam = 0.0001 #Optimization constant\nnumreads = 50000 #number of reads\ny0= np.array ([[1.0],[1.0],[1.0],[1.0],[1.0], [1.0] ])\ny1= np.array ([[1.0],[2.0],[3.0],[4.0],[5.0], [6.0] ])\nalpha = np.array ([1.0, 6 ])\ny = [y0, y1]\nx,p,cost, cost_bin = f.solution(y, alpha, nb, lam,numreads )\nprint('Binary solution: ', x)\nprint('Probability: ', p, 'Sum: ', sum(p))\nprint('Cost: ', cost[0])\n```\n\n Binary solution: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0\n 0 1 0 1 1 1 1 1 0 1 0]\n Probability: [0. 0. 0. 
0.0234375 0.0078125 0.9765625] Sum: 1.0078125\n Cost: -36.99968707275391\n\n\n\n\n\n```python\n#Example 4: Dice with one face with fixed probability\nnb = 8 #Number of bits \nlam = 0.01 #Optimization constant\nnumreads = 10000 #number of reads\ny0= np.array ([[1.0],[1.0],[1.0],[1.0],[1.0], [1.0] ])\ny1= np.array ([[0.0],[1.0],[0.0],[0.0],[0.0], [0.0] ])\nalpha = np.array ([1.0, 0.8 ])\ny = [y0, y1]\nx,p,cost, cost_bin = f.solution(y, alpha, nb, lam,numreads )\nprint('Binary solution: ', x)\nprint('Probability: ', p, 'Sum: ', sum(p))\nprint('Cost: ', cost[0])\n```\n\n Binary solution: [0 0 0 0 1 1 0 0 1 1 0 0 1 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 1 0 1 1 0 0 0 0 1\n 0 1 0 0 0 0 0 1 1 0 0]\n Probability: [0.046875 0.78515625 0.0390625 0.04296875 0.0390625 0.046875 ] Sum: 1.0\n Cost: -1.6272644042968751\n\n\n\n\n\n```python\n#Example 5: Dice with two faces summing a fixed probability\nnb = 8 #Number of bits \nlam = 0.01 #Optimization constant\nnumreads = 10000 #number of reads\ny0= np.array ([[1.0],[1.0],[1.0],[1.0],[1.0], [1.0] ])\ny1= np.array ([[1.0],[1.0],[0.0],[0.0],[0.0], [0.0] ])\nalpha = np.array ([1.0, 0.7 ])\ny = [y0, y1]\nx,p,cost, cost_bin = f.solution(y, alpha, nb, lam,numreads )\nprint('Binary solution: ', x)\nprint('Probability: ', p, 'Sum: ', sum(p))\nprint('Cost: ', cost[0])\n```\n\n Binary solution: [0 1 0 1 1 0 0 1 0 1 0 1 1 0 0 1 0 0 0 1 0 0 1 1 0 0 0 1 0 0 1 1 0 0 0 1 0\n 1 0 0 0 0 0 1 0 1 0 0]\n Probability: [0.34765625 0.34765625 0.07421875 0.07421875 0.078125 0.078125 ] Sum: 1.0\n Cost: -1.4846789550781248\n\n\n\n", "meta": {"hexsha": "9cd9e363adb14ce1e09bd3a3054ae4cb1e468cf1", "size": 19812, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Quantum Vision/QuantumVision.ipynb", "max_stars_repo_name": "stared/Hackathon2021", "max_stars_repo_head_hexsha": "69e2ba4345b311e62d09d02f6953b25614229e12", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 18, "max_stars_repo_stars_event_min_datetime": "2021-07-26T13:45:16.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-26T09:15:23.000Z", "max_issues_repo_path": "Quantum Vision/QuantumVision.ipynb", "max_issues_repo_name": "stared/Hackathon2021", "max_issues_repo_head_hexsha": "69e2ba4345b311e62d09d02f6953b25614229e12", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2021-07-26T19:33:30.000Z", "max_issues_repo_issues_event_max_datetime": "2021-07-28T08:32:20.000Z", "max_forks_repo_path": "Quantum Vision/QuantumVision.ipynb", "max_forks_repo_name": "stared/Hackathon2021", "max_forks_repo_head_hexsha": "69e2ba4345b311e62d09d02f6953b25614229e12", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 35, "max_forks_repo_forks_event_min_datetime": "2021-07-26T13:10:40.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-31T05:23:48.000Z", "avg_line_length": 43.4473684211, "max_line_length": 781, "alphanum_fraction": 0.5747022007, "converted": true, "num_tokens": 4950, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.399811640739795, "lm_q2_score": 0.23091975234373585, "lm_q1q2_score": 0.09232440506377616}} {"text": "
\n
\n
\n

Natural Language Processing For Everyone

\n

Text Representation

\n

Bruno Gon\u00e7alves
\n www.data4sci.com
\n @bgoncalves, @data4sci

\n
\n\nIn this lesson we will see in some details how we can best represent text in our application. Let's start by importing the modules we will be using:\n\n\n```python\nimport string\nfrom collections import Counter\nfrom pprint import pprint\nimport gzip\n\nimport matplotlib\nimport matplotlib.pyplot as plt \nimport numpy as np\n\nimport watermark\n\n%matplotlib inline\n%load_ext watermark\n```\n\nList out the versions of all loaded libraries\n\n\n```python\n%watermark -n -v -m -g -iv\n```\n\n Python implementation: CPython\n Python version : 3.8.5\n IPython version : 7.19.0\n \n Compiler : Clang 10.0.0 \n OS : Darwin\n Release : 20.3.0\n Machine : x86_64\n Processor : i386\n CPU cores : 16\n Architecture: 64bit\n \n Git hash: 842cbaa9fb86ca89575a80bfaea9a8abcdb598ac\n \n matplotlib: 3.3.2\n numpy : 1.20.1\n json : 2.0.9\n watermark : 2.1.0\n \n\n\nSet the default style\n\n\n```python\nplt.style.use('./d4sci.mplstyle')\n```\n\nWe choose a well known nursery rhyme, that has the added distinction of having been the first audio ever recorded, to be the short snippet of text that we will use in our examples:\n\n\n```python\ntext = \"\"\"Mary had a little lamb, little lamb,\n little lamb. 'Mary' had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, MARY went. Everywhere\n that mary went,\n The lamb was sure to go\"\"\"\n```\n\n## Tokenization\n\nThe first step in any analysis is to tokenize the text. What this means is that we will extract all the individual words in the text. For the sake of simplicity, we will assume that our text is well formed and that our words are delimited either by white space or punctuation characters.\n\n\n```python\nprint(string.punctuation)\n```\n\n !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~\n\n\n\n```python\ndef extract_words(text):\n temp = text.split() # Split the text on whitespace\n text_words = []\n\n for word in temp:\n # Remove any punctuation characters present in the beginning of the word\n while word[0] in string.punctuation:\n word = word[1:]\n\n # Remove any punctuation characters present in the end of the word\n while word[-1] in string.punctuation:\n word = word[:-1]\n\n # Append this word into our list of words.\n text_words.append(word.lower())\n \n return text_words\n```\n\nAfter this step we now have our text represented as an array of individual, lowercase, words:\n\n\n```python\ntext_words = extract_words(text)\nprint(text_words)\n```\n\n ['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb', 'mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went', 'everywhere', 'that', 'mary', 'went', 'the', 'lamb', 'was', 'sure', 'to', 'go']\n\n\nAs we saw during the video, this is a wasteful way to represent text. 
We can be much more efficient by representing each word by a number\n\n\n```python\nword_dict = {}\nword_list = []\nvocabulary_size = 0\ntext_tokens = []\n\nfor word in text_words:\n # If we are seeing this word for the first time, create an id for it and added it to our word dictionary\n if word not in word_dict:\n word_dict[word] = vocabulary_size\n word_list.append(word)\n vocabulary_size += 1\n \n # add the token corresponding to the current word to the tokenized text.\n text_tokens.append(word_dict[word])\n```\n\nWhen we were tokenizing our text, we also generated a dictionary **word_dict** that maps words to integers and a **word_list** that maps each integer to the corresponding word.\n\n\n```python\nprint(\"Word list:\", word_list, \"\\n\\n Word dictionary:\")\npprint(word_dict)\n```\n\n Word list: ['mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'went', 'the', 'sure', 'to', 'go'] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThese two datastructures already proved their usefulness when we converted our text to a list of tokens.\n\n\n```python\nprint(text_tokens)\n```\n\n [0, 1, 2, 3, 4, 3, 4, 3, 4, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0, 14, 0, 14, 0, 14, 12, 13, 0, 14, 15, 4, 7, 16, 17, 18]\n\n\nUnfortunately, while this representation is convenient for memory reasons it has some severe limitations. Perhaps the most important of which is the fact that computers naturally assume that numbers can be operated on mathematically (by addition, subtraction, etc) in a way that doesn't match our understanding of words.\n\n## One-hot encoding\n\nOne typical way of overcoming this difficulty is to represent each word by a one-hot encoded vector where every element is zero except the one corresponding to a specific word.\n\n\n```python\ndef one_hot(word, word_dict):\n \"\"\"\n Generate a one-hot encoded vector corresponding to *word*\n \"\"\"\n \n vector = np.zeros(len(word_dict))\n vector[word_dict[word]] = 1\n \n return vector\n```\n\nSo, for example, the word \"fleece\" would be represented by:\n\n\n```python\nprint(vocabulary_size)\nprint(len(word_dict))\n```\n\n 19\n 19\n\n\n\n```python\nfleece_hot = one_hot(\"fleece\", word_dict)\nprint(fleece_hot)\n```\n\n [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\n\nThis vector has every element set to zero, except element 6, since:\n\n\n```python\nprint(word_dict[\"fleece\"])\nfleece_hot[6] == 1\n```\n\n 6\n\n\n\n\n\n True\n\n\n\n\n```python\nprint(fleece_hot.sum())\n```\n\n 1.0\n\n\n## Bag of words\n\nWe can now use the one-hot encoded vector for each word to produce a vector representation of our original text, by simply adding up all the one-hot encoded vectors:\n\n\n```python\ntext_vector1 = np.zeros(vocabulary_size)\n\nfor word in text_words:\n hot_word = one_hot(word, word_dict)\n text_vector1 += hot_word\n \nprint(text_vector1)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 1.]\n\n\nIn practice, we can also easily skip the encoding step at the word level by using the *word_dict* defined above:\n\n\n```python\ntext_vector = np.zeros(vocabulary_size)\n\nfor word in text_words:\n text_vector[word_dict[word]] += 1\n \nprint(text_vector)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 
1.]\n\n\nNaturally, this approach is completely equivalent to the previous one and has the added advantage of being more efficient in terms of both speed and memory requirements.\n\nThis is known as the __bag of words__ representation of the text. It should be noted that these vectors simply contains the number of times each word appears in our document, so we can easily tell that the word *mary* appears exactly 6 times in our little nursery rhyme.\n\n\n```python\ntext_vector[word_dict[\"mary\"]]\n```\n\n\n\n\n 6.0\n\n\n\nA more pythonic (and efficient) way of producing the same result is to use the standard __Counter__ module:\n\n\n```python\nword_counts = Counter(text_words)\npprint(word_counts)\n```\n\n Counter({'mary': 6,\n 'lamb': 5,\n 'little': 4,\n 'went': 4,\n 'had': 2,\n 'a': 2,\n 'was': 2,\n 'everywhere': 2,\n 'that': 2,\n 'whose': 1,\n 'fleece': 1,\n 'white': 1,\n 'as': 1,\n 'snow': 1,\n 'and': 1,\n 'the': 1,\n 'sure': 1,\n 'to': 1,\n 'go': 1})\n\n\nFrom which we can easily generate the __text_vector__ and __word_dict__ data structures:\n\n\n```python\nitems = list(word_counts.items())\n\n# Extract word dictionary and vector representation\nword_dict2 = dict([[items[i][0], i] for i in range(len(items))])\ntext_vector2 = [items[i][1] for i in range(len(items))]\n```\n\n\n```python\nword_counts['mary']\n```\n\n\n\n\n 6\n\n\n\nAnd let's take a look at them:\n\n\n```python\ntext_vector\n```\n\n\n\n\n array([6., 2., 2., 4., 5., 1., 1., 2., 1., 1., 1., 1., 2., 2., 4., 1., 1.,\n 1., 1.])\n\n\n\n\n```python\nprint(\"Text vector:\", text_vector2, \"\\n\\nWord dictionary:\")\npprint(word_dict2)\n```\n\n Text vector: [6, 2, 2, 4, 5, 1, 1, 2, 1, 1, 1, 1, 2, 2, 4, 1, 1, 1, 1] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThe results using this approach are slightly different than the previous ones, because the words are mapped to different integer ids but the corresponding values are the same:\n\n\n```python\nfor word in word_dict.keys():\n if text_vector[word_dict[word]] != text_vector2[word_dict2[word]]:\n print(\"Error!\")\n```\n\nAs expected, there are no differences!\n\n## Term Frequency\n\nThe bag of words vector representation introduced above relies simply on the frequency of occurence of each word. Following a long tradition of giving fancy names to simple ideas, this is known as __Term Frequency__.\n\nIntuitively, we expect the the frequency with which a given word is mentioned should correspond to the relevance of that word for the piece of text we are considering. For example, **Mary** is a pretty important word in our little nursery rhyme and indeed it is the one that occurs the most often:\n\n\n```python\nsorted(items, key=lambda x:x[1], reverse=True)\n```\n\n\n\n\n [('mary', 6),\n ('lamb', 5),\n ('little', 4),\n ('went', 4),\n ('had', 2),\n ('a', 2),\n ('was', 2),\n ('everywhere', 2),\n ('that', 2),\n ('whose', 1),\n ('fleece', 1),\n ('white', 1),\n ('as', 1),\n ('snow', 1),\n ('and', 1),\n ('the', 1),\n ('sure', 1),\n ('to', 1),\n ('go', 1)]\n\n\n\nHowever, it's hard to draw conclusions from such a small piece of text. Let us consider a significantly larger piece of text, the first 100 MB of the english Wikipedia from: http://mattmahoney.net/dc/textdata. 
For the sake of convenience, text8.gz has been included in this repository in the **data/** directory. We start by loading it's contents into memory as an array of words:\n\n\n```python\ndata = []\n\nfor line in gzip.open(\"data/text8.gz\", 'rt'):\n data.extend(line.strip().split())\n```\n\nNow let's take a look at the first 50 words in this large corpus:\n\n\n```python\ndata[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'as',\n 'a',\n 'term',\n 'of',\n 'abuse',\n 'first',\n 'used',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'the',\n 'diggers',\n 'of',\n 'the',\n 'english',\n 'revolution',\n 'and',\n 'the',\n 'sans',\n 'culottes',\n 'of',\n 'the',\n 'french',\n 'revolution',\n 'whilst',\n 'the',\n 'term',\n 'is',\n 'still',\n 'used',\n 'in',\n 'a',\n 'pejorative',\n 'way',\n 'to',\n 'describe',\n 'any',\n 'act',\n 'that',\n 'used',\n 'violent',\n 'means',\n 'to',\n 'destroy',\n 'the']\n\n\n\nAnd the top 10 most common words\n\n\n```python\ncounts = Counter(data)\n\nsorted_counts = sorted(list(counts.items()), key=lambda x: x[1], reverse=True)\n\nfor word, count in sorted_counts[:10]:\n print(word, count)\n```\n\n the 1061396\n of 593677\n and 416629\n one 411764\n in 372201\n a 325873\n to 316376\n zero 264975\n nine 250430\n two 192644\n\n\nSurprisingly, we find that the most common words are not particularly meaningful. Indeed, this is a common occurence in Natural Language Processing. The most frequent words are typically auxiliaries required due to gramatical rules.\n\nOn the other hand, there is also a large number of words that occur very infrequently as can be easily seen by glancing at the word freqency distribution.\n\n\n```python\ndist = Counter(counts.values())\ndist = list(dist.items())\ndist.sort(key=lambda x:x[0])\ndist = np.array(dist)\n\nnorm = np.dot(dist.T[0], dist.T[1])\n\nplt.loglog(dist.T[0], dist.T[1]/norm)\nplt.xlabel(\"count\")\nplt.ylabel(\"P(count)\")\nplt.title(\"Word frequency distribution\")\nplt.gcf().set_size_inches(11, 8)\n```\n\n## Stopwords\n\nOne common technique to simplify NLP tasks is to remove what are known as Stopwords, words that are very frequent but not meaningful. 
If we simply remove the most common 100 words, we significantly reduce the amount of data we have to consider while losing little information.\n\n\n```python\nstopwords = set([word for word, count in sorted_counts[:100]])\n\nclean_data = []\n\nfor word in data:\n if word not in stopwords:\n clean_data.append(word)\n\nprint(\"Original size:\", len(data))\nprint(\"Clean size:\", len(clean_data))\nprint(\"Reduction:\", 1-len(clean_data)/len(data))\n```\n\n Original size: 17005207\n Clean size: 9006229\n Reduction: 0.470384041782026\n\n\n\n```python\nclean_data[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'term',\n 'abuse',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'diggers',\n 'english',\n 'revolution',\n 'sans',\n 'culottes',\n 'french',\n 'revolution',\n 'whilst',\n 'term',\n 'still',\n 'pejorative',\n 'way',\n 'describe',\n 'any',\n 'act',\n 'violent',\n 'means',\n 'destroy',\n 'organization',\n 'society',\n 'taken',\n 'positive',\n 'label',\n 'self',\n 'defined',\n 'anarchists',\n 'word',\n 'anarchism',\n 'derived',\n 'greek',\n 'without',\n 'archons',\n 'ruler',\n 'chief',\n 'king',\n 'anarchism',\n 'political',\n 'philosophy',\n 'belief',\n 'rulers']\n\n\n\nWow, our dataset size was reduced almost in half!\n\nIn practice, we don't simply remove the most common words in our corpus but rather a manually curate list of stopwords. Lists for dozens of languages and applications can easily be found online.\n\n## Term Frequency/Inverse Document Frequency\n\nOne way of determining of the relative importance of a word is to see how often it appears across multiple documents. Words that are relevant to a specific topic are more likely to appear in documents about that topic and much less in documents about other topics. On the other hand, less meaningful words (like **the**) will be common across documents about any subject.\n\nTo measure the document frequency of a word we will need to have multiple documents. For the sake of simplicity, we will treat each sentence of our nursery rhyme as an individual document:\n\n\n```python\nprint(text)\n```\n\n Mary had a little lamb, little lamb,\n little lamb. 'Mary' had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, MARY went. 
Everywhere\n that mary went,\n The lamb was sure to go\n\n\n\n```python\ncorpus_text = text.split('.')\ncorpus_words = []\n\nfor document in corpus_text:\n doc_words = extract_words(document)\n corpus_words.append(doc_words)\n```\n\nNow our corpus is represented as a list of word lists, where each list is just the word representation of the corresponding sentence:\n\n\n```python\nprint(len(corpus_words))\n```\n\n 4\n\n\n\n```python\npprint(corpus_words)\n```\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\nLet us now calculate the number of documents in which each word appears:\n\n\n```python\ndocument_count = {}\n\nfor document in corpus_words:\n word_set = set(document)\n \n for word in word_set:\n document_count[word] = document_count.get(word, 0) + 1\n\npprint(document_count)\n```\n\n {'a': 2,\n 'and': 1,\n 'as': 1,\n 'everywhere': 2,\n 'fleece': 1,\n 'go': 1,\n 'had': 2,\n 'lamb': 3,\n 'little': 2,\n 'mary': 4,\n 'snow': 1,\n 'sure': 1,\n 'that': 2,\n 'the': 1,\n 'to': 1,\n 'was': 2,\n 'went': 2,\n 'white': 1,\n 'whose': 1}\n\n\nAs we can see, the word __Mary__ appears in all 4 of our documents, making it useless when it comes to distinguish between the different sentences. On the other hand, words like __white__ which appear in only one document are very discriminative. Using this approach we can define a new quantity, the ___Inverse Document Frequency__ that tells us how frequent a word is across the documents in a specific corpus:\n\n\n```python\ndef inv_doc_freq(corpus_words):\n number_docs = len(corpus_words)\n \n document_count = {}\n\n for document in corpus_words:\n word_set = set(document)\n\n for word in word_set:\n document_count[word] = document_count.get(word, 0) + 1\n \n IDF = {}\n \n for word in document_count:\n IDF[word] = np.log(number_docs/document_count[word])\n \n return IDF\n```\n\nWhere we followed the convention of using the logarithm of the inverse document frequency. This has the numerical advantage of avoiding to have to handle small fractional numbers. 
\n\nWe can easily see that the IDF gives a smaller weight to the most common words and a higher weight to the less frequent:\n\n\n```python\ncorpus_words\n```\n\n\n\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\n\n\n```python\nIDF = inv_doc_freq(corpus_words)\n\npprint(IDF)\n```\n\n {'a': 0.6931471805599453,\n 'and': 1.3862943611198906,\n 'as': 1.3862943611198906,\n 'everywhere': 0.6931471805599453,\n 'fleece': 1.3862943611198906,\n 'go': 1.3862943611198906,\n 'had': 0.6931471805599453,\n 'lamb': 0.28768207245178085,\n 'little': 0.6931471805599453,\n 'mary': 0.0,\n 'snow': 1.3862943611198906,\n 'sure': 1.3862943611198906,\n 'that': 0.6931471805599453,\n 'the': 1.3862943611198906,\n 'to': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'went': 0.6931471805599453,\n 'white': 1.3862943611198906,\n 'whose': 1.3862943611198906}\n\n\nAs expected **Mary** has the smallest weight of all words 0, meaning that it is effectively removed from the dataset. You can consider this as a way of implicitly identify and remove stopwords. In case you do want to keep even the words that appear in every document, you can just add a 1. to the argument of the logarithm above:\n\n\\begin{equation}\n\\log\\left[1+\\frac{N_d}{N_d\\left(w\\right)}\\right]\n\\end{equation}\n\nWhen we multiply the term frequency of each word by it's inverse document frequency, we have a good way of quantifying how relevant a word is to understand the meaning of a specific document.\n\n\n```python\ndef tf_idf(corpus_words):\n IDF = inv_doc_freq(corpus_words)\n \n TFIDF = []\n \n for document in corpus_words:\n TFIDF.append(Counter(document))\n \n for document in TFIDF:\n for word in document:\n document[word] = document[word]*IDF[word]\n \n return TFIDF\n```\n\n\n```python\ntf_idf(corpus_words)\n```\n\n\n\n\n [Counter({'mary': 0.0,\n 'had': 0.6931471805599453,\n 'a': 0.6931471805599453,\n 'little': 2.0794415416798357,\n 'lamb': 0.8630462173553426}),\n Counter({'mary': 0.0,\n 'had': 0.6931471805599453,\n 'a': 0.6931471805599453,\n 'little': 0.6931471805599453,\n 'lamb': 0.28768207245178085,\n 'whose': 1.3862943611198906,\n 'fleece': 1.3862943611198906,\n 'was': 0.6931471805599453,\n 'white': 1.3862943611198906,\n 'as': 1.3862943611198906,\n 'snow': 1.3862943611198906}),\n Counter({'and': 1.3862943611198906,\n 'everywhere': 0.6931471805599453,\n 'that': 0.6931471805599453,\n 'mary': 0.0,\n 'went': 2.0794415416798357}),\n Counter({'everywhere': 0.6931471805599453,\n 'that': 0.6931471805599453,\n 'mary': 0.0,\n 'went': 0.6931471805599453,\n 'the': 1.3862943611198906,\n 'lamb': 0.28768207245178085,\n 'was': 0.6931471805599453,\n 'sure': 1.3862943611198906,\n 'to': 1.3862943611198906,\n 'go': 1.3862943611198906})]\n\n\n\nNow we finally have a vector representation of each of our documents that takes the informational contributions of each word into account. Each of these vectors provides us with a unique representation of each document, in the context (corpus) in which it occurs, making it posssible to define the similarity of two documents, etc.\n\n## Porter Stemmer\n\nThere is still, however, one issue with our approach to representing text. 
Since we treat each word as a unique token and completely independently from all others, for large documents we will end up with many variations of the same word such as verb conjugations, the corresponding adverbs and nouns, etc. \n\nOne way around this difficulty is to use stemming algorithm to reduce words to their root (or stem) version. The most famous Stemming algorithm is known as the **Porter Stemmer** and was introduced by Martin Porter in 1980 [Program 14, 130 (1980)](https://dl.acm.org/citation.cfm?id=275705)\n\nThe algorithm starts by defining consonants (C) and vowels (V):\n\n\n```python\nV = set('aeiouy')\nC = set('bcdfghjklmnpqrstvwxz')\n```\n\nThe stem of a word is what is left of that word after a speficic ending has been removed. A function to do this is easy to implement:\n\n\n```python\ndef get_stem(suffix, word):\n \"\"\"\n Extract the stem of a word\n \"\"\"\n \n if word.lower().endswith(suffix.lower()): # Case insensitive comparison\n return word[:-len(suffix)]\n\n return None\n```\n\nIt also defines words (or stems) to be sequences of vowels and consonants of the form:\n\n\\begin{equation}\n[C](VC)^m[V]\n\\end{equation}\n\nwhere $m$ is called the **measure** of the word and [] represent optional sections. \n\n\n```python\ndef measure(orig_word):\n \"\"\"\n Calculate the \"measure\" m of a word or stem, according to the Porter Stemmer algorthim\n \"\"\"\n \n word = orig_word.lower()\n\n optV = False\n optC = False\n VC = False\n\n m = 0\n pos = 0\n\n # We can think of this implementation as a simple finite state machine\n # looks for sequences of vowels or consonants depending of the state\n # in which it's in, while keeping track of how many VC sequences it\n # has encountered.\n # The presence of the optional V and C portions is recorded in the\n # optV and optC booleans.\n \n # We're at the initial state.\n # gobble up all the optional consonants at the beginning of the word\n while pos < len(word) and word[pos] in C:\n pos += 1\n optC = True\n\n while pos < len(word):\n # Now we know that the next state must be a vowel\n while pos < len(word) and word[pos] in V:\n pos += 1\n optV = True\n\n # Followed by a consonant\n while pos < len(word) and word[pos] in C:\n pos += 1\n optV = False\n \n # If a consonant was found, then we matched VC\n # so we should increment m by one. Otherwise, \n # optV remained true and we simply had a dangling\n # V sequence.\n if not optV:\n m += 1\n\n return m\n```\n\nLet's consider a simple example. The word __crepusculars__ should have measure 4:\n\n[cr] (ep) (usc) (ul) (ars)\n\nand indeed it does.\n\n\n```python\nword = \"crepusculars\"\nprint(measure(word))\n```\n\n 4\n\n\n(agr) = (VC)\n\n\n```python\nword = \"agr\"\nprint(measure(word))\n```\n\n 1\n\n\nThe Porter algorithm sequentially applies a series of transformation rules over a series of 5 steps (step 1 is divided in 3 substeps and step 5 in 2). The rules are only applied if a certain condition is true. 
\n\nIn addition to possibily specifying a requirement on the measure of a word, conditions can make use of different boolean functions as well: \n\n\n```python\ndef ends_with(char, stem):\n \"\"\"\n Checks the ending of the word\n \"\"\"\n return stem[-1] == char\n\ndef double_consonant(stem):\n \"\"\"\n Checks the ending of a word for a double consonant\n \"\"\"\n if len(stem) < 2:\n return False\n\n if stem[-1] in C and stem[-2] == stem[-1]:\n return True\n\n return False\n\ndef contains_vowel(stem):\n \"\"\"\n Checks if a word contains a vowel or not\n \"\"\"\n return len(set(stem) & V) > 0 \n```\n\nFinally, we define a function to apply a specific rule to a word or stem:\n\n\n```python\ndef apply_rule(condition, suffix, replacement, word):\n \"\"\"\n Apply Porter Stemmer rule.\n if \"condition\" is True replace \"suffix\" by \"replacement\" in \"word\"\n \"\"\"\n \n stem = get_stem(suffix, word)\n\n if stem is not None and condition is True:\n # Remove the suffix\n word = stem\n\n # Add the replacement suffix, if any\n if replacement is not None:\n word += replacement\n\n return word\n```\n\nNow we can see how rules can be applied. For example, this rule, from step 1b is successfully applied to __pastered__:\n\n\n```python\nword = \"plastered\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n 'plaster'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n 'plaster'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n True\n\n\n\nWhile try applying the same rule to **bled** will fail to pass the condition resulting in no change.\n\n\n```python\nword = \"bled\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n 'bled'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n 'bl'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n False\n\n\n\nFor a more complex example, we have, in Step 4:\n\n\n```python\nword = \"adoption\"\nsuffix = \"ion\"\nstem = get_stem(suffix, word)\napply_rule(measure(stem) > 1 and (ends_with(\"s\", stem) or ends_with(\"t\", stem)), suffix, None, word)\n```\n\n\n\n\n 'adopt'\n\n\n\n\n```python\nends_with(\"t\", stem)\n```\n\n\n\n\n True\n\n\n\n\n```python\nends_with(\"s\", stem)\n```\n\n\n\n\n False\n\n\n\n\n```python\nmeasure(stem)\n```\n\n\n\n\n 2\n\n\n\nIn total, the Porter Stemmer algorithm (for the English language) applies several dozen rules (see https://tartarus.org/martin/PorterStemmer/def.txt for a complete list). Implementing all of them is both tedious and error prone, so we abstain from providing a full implementation of the algorithm here. High quality implementations can be found in all major NLP libraries such as [NLTK](http://www.nltk.org/howto/stem.html).\n\nThe dificulties of defining matching rules to arbitrary text cannot be fully resolved without the use of Regular Expressions (typically implemented as Finite State Machines like our __measure__ implementation above), a more advanced topic that is beyond the scope of this course.\n\n
\n \n
\n", "meta": {"hexsha": "c0c2ea1e85d1b8b88f08dc8e8cd911d0ddc91c68", "size": 246123, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "1. Text Representation.ipynb", "max_stars_repo_name": "joshuagladwin/NLP", "max_stars_repo_head_hexsha": "ae641e141a1604bbe1639a2ded4ed2424660eab0", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-09T12:03:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-09T12:03:23.000Z", "max_issues_repo_path": "1. Text Representation.ipynb", "max_issues_repo_name": "joshuagladwin/NLP", "max_issues_repo_head_hexsha": "ae641e141a1604bbe1639a2ded4ed2424660eab0", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1. Text Representation.ipynb", "max_forks_repo_name": "joshuagladwin/NLP", "max_forks_repo_head_hexsha": "ae641e141a1604bbe1639a2ded4ed2424660eab0", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 123.0, "max_line_length": 195472, "alphanum_fraction": 0.8735997855, "converted": true, "num_tokens": 7771, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4186969093556867, "lm_q2_score": 0.22000708951749934, "lm_q1q2_score": 0.09211628841731687}} {"text": "```python\nGodfrey Beddard 'Applying Maths in the Chemical & Biomolecular Sciences an example-based approach' Chapter 9\n```\n\n\n```python\n# import all python add-ons etc that will be needed later on\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sympy import *\nfrom scipy.integrate import quad\ninit_printing() # allows printing of SymPy results in typeset maths format\nplt.rcParams.update({'font.size': 14}) # set font size for plots\n```\n\n# 7 Convolution\n\n## 7.1 Motivation and concept\n\nInstruments measure everything: for example, mass, energy, number of particles, wavelength of light, voltage, current, and images. However, every instrument distorts the data to a greater or lesser extent, and obviously we try to make these distortions insignificant but this is not always possible. In cases when a detector may not respond quickly enough to an event, when very wide slits have to be used in a spectrometer to detect a weak signal, or an electronic circuit does not respond in a linear manner to the input voltage, a distortion to the data is unavoidable. The effect is to _convolute_ the ideal response, as defined by the physics behind the experiment, with the instrumental response. Fortunately Fourier transforms can usually be used to unravel the effect of convolution, however, in some circumstances this may not be possible.\n\n**(i)** To be specific, suppose that the lifetime of electronically excited atoms or molecules is to be measured by exciting them with a pulse of light and their fluorescence measured as it decays with time. This fluorescence could be observed with a photodiode or photomultiplier, whose output voltage is measured with an oscilloscope. Before doing this experiment, two questions have to be answered; \n>(a) Is the laser used to excite the molecules of short enough duration that the molecules or atoms can be excited quickly enough before any significant number can decay back to the ground state? 
\n\n>(b) Is the detection equipment (photodiode, oscilloscope) used able to respond quickly enough to measure the decaying fluorescence properly? \n\n\n\nFigure 24. Top: A signal representing the ideal response of an experiment to a sudden impulse. Middle: The actual stimulation used in the experiment represented as the instrument response. Bottom: The measured signal, the convolution of the two upper curves.\n____\n\nIf either one or both of these conditions cannot be met, then the data will be distorted by the relatively slow response of the instrument. The convolution curve in fig 24 shows how this distortion affects some data. In this figure, the top curve is the ideal decay of the excited state, but it could represent any ideal response. This behaviour would be observed if the molecules could be excited with an infinitesimally narrow laser pulse and measured with a photo-detector with an unlimited time response. The second curve is the actual shape of the laser pulse, and/or detector response, and is the 'instrument response' drawn on the same timescale. Clearly, this has a width and a rise and decay time that is not so very different to that of the ideal response. The lower curve is the convolution of the ideal response with the instrument response, and is what would be measured experimentally and clearly has characteristics of both curves. A log plot of the data would show that only at long times does the convoluted response have the same slope as the ideal one. It makes no difference if the instrument response consists of a slow 'driving force' for the experiment, in this case a long-lived light-pulse, or a slowly responding detector or both, because the effect producing the convolution is the same. Fortunately, convolution can be calculated easily and rapidly using Fourier transforms.\n\n\nFigure 25. The convolution of a narrow spectral line with a wide slit in a spectrometer.\n\n____\n\n**(ii)** As a second example, consider measuring the width or position of one particular spectral line, such as from a star or a sample of molecules in the lab. The spectrometer has slits on its entrance and exit and these, with the number of grooves in the grating, control the resolution of the spectrometer. Typically, this is $0.1$ nm/mm of slit width for a moderately good spectrometer and 1 nm/mm for a general purpose one. If the slits cannot be closed to more than $0.1$ mm, then the resolution of the general purpose instrument will be approximately 0.1 nm and a narrow spectral line will appear to have this value even it is many times narrower. This is because the grating is rotated while measuring the spectrum and the spectral line is swept across the slits. The effect is to sequentially place a spectral line at all possible points, and hence wavelengths, across the slit. A signal is recorded at all these wavelengths rather than being measured only at its proper one, and the response measured is the convolution of the ideal width of the spectral line with the instrument response, which is the finite width of the slit. In many instruments, a CCD camera measures all wavelengths simultaneously, and a slit is not needed nor is the grating scanned. However, the same reasoning applies because the individual elements of the camera have a finite width, which therefore act as individual slits.\n\n**(iii)** A final use of convolution is to smooth data. Because convoluting one function with another involves integration, this has the effect of summing or averaging. 
The rolling or moving average method (Section 10.4) is in effect a convolution, and effectively smooths spiky data.\n\nIn the next sections, a convolution will be calculated by direct summation and by a Fourier transform. Convolution is related to the auto- and cross-correlations and these will also be described. How to go about estimating the true response from the convoluted response in real, that is experimental data, i.e. reversing the effects of convolution, is discussed in chapter 13 on numerical methods. This is usually done using iterative, non-linear least-squares methods, (See 13.6.7), because when using real data, which always contains noise, it is found that reverse transforming the convolution often results in a calculated ideal response that is so noisy as to be useless.\n\n\n\nFigure 26. Curves show the instrument response, as a series of impulses (dashed), which produce a response ($w$) at each point on its profile not all of which are shown. These are then added together in this time delayed manner, to produce the convoluted response.\n____\n\n## 7.2 How convolution works\n\nTo understand how convolution works, suppose that the overall instrument response is made up of a series of $\\delta$-function impulses. These can be infinitesimally narrow light pulses that excite a molecule. Suppose these impulses are made at ever shorter time intervals, then the effect is that of smoothly exciting the molecule. Each of the impulses elicits an ideal response but because there are many of them, their responses must be added together. The result is the convolution; the effect is shown in Fig. 26. It is always assumed in the convolution that the response is linear with the impulse, which simply means that doubling the impulse doubles the response and so forth.\n\nThe light pulses occur at each point in the dashed curve, Fig. 26. The response from each impulse is the decaying solid curve. To calculate the overall response at any given point along the x-axis, the effect of all previous impulses must be added into the calculation. Suppose that the pulse exciting the sample has a shape given by some function $f$, the ideal experimental response $w$, and the convolution $C$. The terms can be written down at each time if it is assumed, for the present, that the impulses are discrete and the data is represented as a series of points at times $1, 2, 3$, and so forth; $f$(6), for example, represents the value of $f$ at the sixth time position. The first point of the impulse is $f$(1) and this produces the response\n\n$$\\displaystyle f (1)[w(1) + w(2) + w(3) + \\cdots]$$\n\nThe second and third impulses produce\n\n$\\displaystyle f(2)[w(1) + w(2) + w(3) + \\cdots]$ and $f(3)[w(1) + w(2) + w(3) + \\cdots]$.\n\nThe convolution is the sum of these terms at times 1, 2, 3, and so on therefore;\n\n$$\\displaystyle\\begin{align}\nC(1)& = f (1)w(1)\\\\\nC(2)& = f (1)w(2) + f (2)w(1)\\\\\nC(3) &= f (1)w(3) + f (2)w(2) + f (3)w(1)\\\\\nC(4)& = f (1)w(4) + f (2)w(3) + f (3)w(2) + f (4)w(1)\\\\\n\\end{align}$$\n\nThese sums are shown in Fig. 27 by adding the products of $f$ and $w$ vertically. Clearly, only where both $f$ and $w$ are not zero, will this product have a value. The symmetry in these sums soon becomes apparent, each being the product of one series running to the right, and the other to the left; for instance, look at $C$(4). 
The name convolution arises from just this effect; the word also means 'folded' and this is shown in the form of the series where each function is folded back onto the other. Convolution is also the distribution of one function in accordance with a 'law' specified by another function (Steward 1987) because the whole of one function $w$, is multiplied with each ordinate of the other $f$, and the results added. The ideal response (the 'one function') is distributed, i.e. spread out according to the law or shape of the driving function $f$.\n\n\n\n\nFigure 27. Diagram showing the notation used to calculate a convolution.\n\n## 7.3 Convolution by summation\n\nWritten as a summation, the convolution at point $k$ is\n\n$$\\displaystyle C(k) = \\sum_{i=0}^k f(i)w(k - i ) \\tag{32}$$\n\nThis sum evaluates just one point; to calculate the whole convolution, the index $k$ must now be varied from 1 to $n$, which is the number of data points, making a double summation. One reason Fourier transforms are used to calculate convolutions is that the fast Fourier transform algorithm, FFT, is far quicker on the computer than calculating the convolution as a double summation, particularly for a large number of data points.\n\nThe algorithm to calculate the summation has a double loop to calculate all values of $k$ and to perform the summation in eqn. 32. The two functions used are those that produced Fig. 24, which are $\\displaystyle f(t) = e^{-t/100}$ and $\\displaystyle w(t) = e^{-(t-100)^2/1000}$, and $2^{10}$ points will be also be used to mimic the data produced by an instrument.\n\nFirst, because the data is discrete, arrays $f$ and $w$ are made; to hold the data points. Then two loops are made, one changes $k$ from 1 to $n$ the and inside one calculates $C(k)$. The indices are arranged as in equation 32. The variable $s$ accumulates the sum as the inner do loop progresses. This is a relatively slow calculation because of the double loop.\n\n\n```python\ndef do_convolution(f,w): # do by double summation \n # Sigma f(n-m)g(m) ; c(0) = f(0)w(0), c(1) = f(0)w(1) + f(1)w(0) etc \n n = len(f)\n c = [0.0 for i in range(n)]\n for k in range(n):\n s = 0.0\n for i in range(k):\n s = s + f[i]*w[k-i]\n pass\n c[k] = s\n return c\n\nn = 2**10\nf = [ np.exp(-i/100.0) for i in range(n)]\nw = [ np.exp(-(i-100)**2/1e3) for i in range(n)]\nt = [i for i in range(n)]\n\nC = do_convolution(f,w)\nmxc = max(C) # use to normalse\nplt.plot(t,C/mxc,color='red',label='C , convolution '+r'$f\\otimes w$')\nplt.plot(t,f,color='black',label='f(x)')\nplt.plot(t,w,color='blue',label='w(x)')\nplt.xlim([0,n])\nplt.legend()\nplt.show()\n```\n\n## 7.4 Convolution by Fourier transform\n\nThe convolution can also become an integral, by supposing that the points are separated by an infinitesimal amount, and therefore, the change $sum \\rightarrow \\int $ is allowable. The integral form of the convolution at time $u$, is\n\n$$\\displaystyle C(u)=\\int_0^\\infty f(t)w(u-t)dt \\tag{33}$$\n\nwhich represents the response at time $u$ to an impulse delivered at time $t$. The limits to the integral are often represented as $\\pm \\infty$. If the signal is zero at times less than zero, then the lower limit can be made zero as illustrated. The convolution integral is frequently written as,\n\n$$\\displaystyle C(t) = f (t) \\otimes w(t) \\qquad \\text{ or } \\qquad C = f \\otimes w. \\tag{34} $$\n\nThe convolution is performed by Fourier transforming functions $f$ and $w$ separately, multiplying the transforms together and then inverse transforming. 
The symbol $\\otimes$ represents all these calculations because the result is returned in the time domain. Sometimes, the convolution is written only as a conversion into the frequency domain as\n\n$$\\displaystyle f(t)\\otimes w(t) = \\sqrt{2\\pi} F(\\omega)W(\\omega)$$\n\nwhere $F$ and $W$ are the respective transforms of $f$ and $w$, $\\omega$ being angular frequency. Thus convolution in 'normal' space is multiplication in 'Fourier' space. \n\nIf $T$ represents the Fourier transform and $T[\\cdots]^{-1}$ the inverse transform the convolution is formally written as\n\n$$\\displaystyle C = T[T( f )T(w)]^{-1} \\tag{35} $$\n\nwhich is the same as equation 34. If the equations describing $f$ and $w$ are known, an exponential and a Gaussian for example, then the Fourier transform integral of each can be calculated as described in Section 6, the product of these multiplied and the inverse transform integral then calculated. The result is the convolution of the two functions. \n\nAs an example, consider convoluting a square pulse with two delta functions. Their convolution will produce two square pulses centred on the two delta functions, because, as the pulse is swept past the two deltas, only at their overlap will their product have a finite value. Three stages of the convolution are shown at the top of Fig. 28, and the result is shown below this.\n\n\n\nFigure 28. Convolution as Fourier transforms.\n____\n\nNext, the convolution is evaluated using Fourier transforms. The transforms of the two delta functions and the pulse have already been calculated, and are shown in Fig. 29. This product of the two transforms is then reverse transformed and two square pulses are produced.\n\nThis last convolution is, incidentally, another way of describing the interference due to a double slit, and if many delta functions are used then this describes the effect of a diffraction grating on light waves.\n\nThe data needed in a convolution is frequently a list of numbers because it comes from an experiment and in this case a numerical method has to be used to do the transform, which is then called a Discrete Fourier Transform. This is described further in Section 9, but here is an example some code to illustrate convolution using discrete Fourier transforms.\n\n\n\nFigure 29. Left: The two waveforms are the Fourier transform of a square pulse (top) and two delta functions (lower). When these are multiplied together and reverse transformed two pulses are produced which is the convolution of the delta functions and the single square pulse. The same method has been used to make Fig. 24, even though the functions differ.\n\n\n```python\n# convolution by fourier transform\n\nn = 2**10\nf = [ np.exp(-i/100.0) for i in range(n)]\nw = [ np.exp(-(i-100)**2/1e3) for i in range(n)]\nt = [i for i in range(n)]\n\nF = np.fft.rfft(f) # use rfft as input in only real \nW = np.fft.rfft(w)\nC = np.fft.irfft(F*W)\n\nmxc = max(C) # use to normalse\n\nplt.plot(t,C/mxc,color='red',label='C , convolution '+r'$f\\otimes w$')\nplt.plot(t,f,color='black',label='f')\nplt.plot(t,w,color='blue',label='w')\nplt.plot()\nplt.xlim([0,n])\nplt.legend()\nplt.show()\n```\n\n## 7.5 A warning\n\nFinally a warning about using Fourier transforms to perform convolution. The transform assumes that the function being transformed is periodic, this means that if the signal is not of the same size, such as zero, at its start and end there is a frequency associated with changing from end to start so that this will appear as an artefact in the convolution. 
THis occurs because the transform assumes that the signal is periodic. This does not arise in the case of the summation method and even though this may be slower to calculate, it is more robust. The difference is shown in the next figure 29A. On the left is shown the summation based convolution calculation using an exponential, with lifetime of 10000, and a Gaussian and on the right using the Fourier transform method. All is not lost, however, because by padding the data with zeros to double its length the correct result can be obtained. \n\n\n\nFigure 29A. The figure shows the difference between the correct convolution done by summation ( red curve left ) and the artefact introduced by using the Fourier method ( red curve right ) this is produced when the functions are not the same, preferably zero, at the end of the data.\n\n\n## 8 Autocorrelation and cross-correlation\n\n\nA correlation is a function that measures the similarity of one set of data to another. A cross-correlation is formed if the data are dissimilar, an autocorrelation if there is only one set of data. The data might be a voltage from a detector, it might be an image or residuals from fitting a set of data. In Fig. 30 part of a noisy sinusoidal curve is shown in black and labelled 1. The second curve (2, red) is displaced only a little from the first and is clearly only slightly different; the third (3, grey) which is displaced by more is clearly different from the first as it is positive at large $x$ when the first curve is negative. The right-hand figure shows the autocorrelation of the curve (1) shown on the left, and as this is an oscillating curve, the autocorrelation also oscillates but eventually reaches zero. The oscillation is a result of the fact that a sinusoidal curve is similar to itself after each period, and the autocorrelation measures this similarity by increasing and decreasing. The autocorrelation is also less noisy that the data because it involves summing or integrating over many data points. \n\nA random signal with an average of zero will have an autocorrelation that averages to zero at all points except the first, whereas the autocorrelation of an exponential and similar functions will be not be zero, but decay away in some manner. The autocorrelation is a likened to a measure of the 'memory' a function has, that is, how similar one part of the data is with an earlier or later part. A zero average random signal has no memory because it is random, and each point is independent of its predecessor; this is not true of any other signal. The correlation is therefore a process by which we can compare patterns in data. In data analysis, the residuals, which are the difference between the data and a fitted function, should be random if the fit is correct; the shape of the autocorrelation is therefore a way of testing this.\n\n\n\nFigure 30. A sketch showing the first $120$ points of a set of noisy data of $250$ points. The data is still somewhat similar to itself when displaced by only a few points but much less so, when displaced by many, dashed grey curve. The autocorrelation of all the data is shown on the right. Notice also how as autocorrelation integrates the data, the noise is reduced.\n____\n\nIn ultra-fast (femtosecond) laser spectroscopy, autocorrelations are used to measure the length of the laser pulse because no electronic device is fast enough to do this, as they are limited to a time resolution of a few tens of picoseconds at best, but laser pulses can be less than $10$ fs in duration. 
In single molecule spectroscopy, the correlation of the number of fluorescent photons detected in a given time interval is used to determine the diffusion coefficient of the molecules. In the study of the electronically excited states of molecules, the correlation of time resolved spectra, recorded as the molecule moves on its potential energy surface, is a measure of excited state and solvent dynamics.\n\nThe correlation function is similar to, but different from, convolution. The autocorrelation is always symmetrical about zero displacement or lag, the cross-correlation is not. In the convolution the two functions $f$ and $w$ are folded on one another, the first point of $f$ multiplying the last of $w$ and so on, until the last point of $f$ multiplies the first of $w$, equation 31. In the auto- and cross-correlation, one function is also moved past the other and the sum of the product of each term is made but with the indices running in the _same direction_, both increasing. \n\nA cross-correlation is shown in Fig. 31 using a triangle and a rectangle, each with a base line, and for clarity, defined with only six points. The first term in auto- or cross-correlation $A$ occurs when point $f$(6) overlaps with $w$(1), when $f$ is to the far left of $w$. The position at $-5$ to the left is shown in the figure as $A$(-5). The middle term in the correlation is at zero displacement, or lag, and there is total overlap of the two shapes and the correlation is at a maximum. The figure on the right shows the last overlap, consisting of just one point in common between the two shapes. There are six terms in the summation of $A$(0) down to one in each of $A$(-5) and $A$(5). The zero lag term is\n\n$$\\displaystyle A(0) = f (1)w(1) + f (2)w(2) + \\cdots + f (6)w(6)$$\n\nThe next term has one point displacement between $f$ and $w$ and five terms are summed,\n\n$$A(1) = f (1)w(2) + f (2)w(3) + f (3)w(4) + f (4)w(5) + f (5)w(6)$$\n\nWith two points displaced, there are four terms\n\n$$\\displaystyle A(2) = f (1)w(3) + f (2)w(4) + f (3)w(5) + f (4)w(6) $$\n\nand so forth for the other terms. The last overlap is\n\n$$\\displaystyle A(5) = f (1)w(6) \\tag{36}$$\n\nOn the negative side, the indices are interchanged, $f$ for $w$ and vice versa, and the first (far left) term is\n$A(-5) = f (6)w(1)$ and similarly for the other terms. There are 11 terms in all or, in general $2n - 1$, for data of $n$ points. In an autocorrelation, $f$ and $w$ are the same function and therefore the autocorrelation must be symmetrical and only terms from zero to five are needed, the others being known by symmetry.\n\n\n\nFigure 31. A pictorial description of cross-correlation of the signals (functions) $w$ and $f$.\n____\n\nThe formula for the autocorrelation for $n$ data points is\n\n$$\\displaystyle A_a(k)=\\sum_{i=0}^{n-k}f(i)f(k+i) \\qquad k=0,1,\\cdots \\rightarrow \\cdots n \\tag{37}$$\n\nwhere the first value of the displacement $k$ is zero, and the last $n$, and both functions are now labelled $f$. Very often the autocorrelation is normalized; this means dividing by $\\sum f(i)^2$, \n\n$$\\displaystyle A_a(k)=\\frac{\\sum\\limits_{i=0}^{n-k}f(i)f(k+i)}{\\sum f(i)^2} \\tag{38}$$\n\nThese last two formulae produce just half of the autocorrelation. 
To produce the full correlation, symmetrical about zero lag, the mirror image of equation (37) must be added as points $-n \\to -1$ to the left-hand part of the data.\n\nThe cross-correlation uses a similar formula\n\n$$\\displaystyle A_c(k)=\\sum\\limits_{i=0}^{n-k} f(i)w(k+i) \\qquad k=-n+1,\\cdots 0, \\cdots n-1 \\tag{39}$$\n\nbut now $k$ always ranges from $-n + 1 \\to n - 1$. This distinction is crucial, otherwise the whole of the cross-correlation is not calculated.\n\nIn calculating a correlation as a summation with a computer, as with a convolution, each term in the correlation is a sum, so this means that two nested 'loops' are needed to calculate the whole function; one loop sums each individual term, the other calculates the sum, $A(k)$.\n\nSome authors define the correlation up to a maximum of $n$ in the summation, not $n - k$. There is, however, a pitfall in doing this because, if the correlation is not zero above half the length of the data, then this folds round and what is calculated is the sum of the correlation plus its mirror image. The way to avoid this is to add $n$ zeros to the data and the summation continued until $2n$. This should be done routinely if Fourier transforms are used to calculate the correlation.\n\nCorrelations and convolution are not restricted to digitized data but apply also to normal functions. Written as an integral, the cross-correlation of a real, i.e. not complex, function is\n\n$$\\displaystyle A_c =\\int_{-\\infty}^{\\infty}f(t)w(u+t)dt \\tag{40}$$\n\nand the autocorrelation of $f$,\n\n$$\\displaystyle A_c =\\int_{-\\infty}^{\\infty}f(t)f(u+t)dt \\tag{41}$$\n\nNotice that the sign in the second term is positive in the correlation but negative in a convolution, equation (33). If the function contains a complex number, then the conjugate is always placed on the left,\n\n$$\\displaystyle A_c =\\int_{-\\infty}^{\\infty}f(t)^*f(u+t)dt \\tag{41}$$\n\nThe normalised autocorrelation is \n\n$$\\displaystyle G(u) = \\frac{\\int\\limits_{-\\infty}^{\\infty}f(t)^*f(u+t)dt}{\\int\\limits_{-\\infty}^{\\infty}f(t)^2dt} =\\frac{\\langle f(t)\\,f(u+t)\\rangle}{\\langle f(t)^2\\rangle} \\tag{42}$$\n\nand the bracket notation indicates that these are average value. The denominator is the normalization term and is also the value of the numerator with $u = 0$.\n\n## 8.1 Calculating an autocorrelation\n\n**(i)** If the function is periodic then the integration limits should cover one period. The normalized autocorrelation of a cosine $A\\cos(2\\pi\\nu t + \\varphi)$, where the period is $T = 1/\\nu$ and $\\varphi$ is the phase, is calculated as\n\n$$\\displaystyle G(u) = \\frac{\\int\\limits_0^T \\cos(2\\pi \\nu t+\\varphi)\\cos(2\\pi \\nu (u+t)+\\varphi)dt}{\\int\\limits_0^T \\cos^2(2\\pi \\nu t+\\varphi)dt}$$\n\nand the result will be independent of the phase. The normalisation integral is a standard one and can be looked up or converted to an exponential form to simplify integration. The result is $\\displaystyle \\int_0^T \\cos(2\\pi t/T+\\varphi)^2dt = T/2$. The other integral can similarly be calculated. Using SymPy, this is\n\n\n```python\nt,phi,T, u = symbols('t phi T u',positive =True)\n\nf01 = cos(2*pi*t/T+phi)*cos(2*pi*t/T+phi+2*pi*u/T )\n\nG = integrate(f01,(t,0,T),conds='none') # slow calculation\nsimplify(G)\n```\n\nfrom which it is seen that the normalised autocorrelation is also a cosine $\\displaystyle G(u) = \\cos(2\\pi \\frac{u}{T})$. 
If the initial cosine is written as $\\cos(\\omega t + \\varphi)$ then the period $T = 2\\pi/\\omega$.\n\nIf the trigonometric function is a complex exponential $\\displaystyle Ae^{-i(\\omega t+\\varphi)}$ rather than a sine or cosine then the complex conjugate of the function is taken in both of the autocorrelation integrals. The normalization could not be simpler $\\int_0^Tdt = T$. The correlation is also a very straightforward integral;\n\n$$\\displaystyle G(u)=\\frac{1}{T}\\int\\limits_0^T e^{-i\\omega t+\\varphi}e^{i\\omega (u+t)+\\varphi}dt =\\frac{1}{T}\\int\\limits_0^T e^{i\\omega u}dt=e^{i\\omega u}$$\n\nUsing the Euler relationship, $\\displaystyle e^{-i\\theta} = \\cos(\\theta) + i \\sin(\\theta)$, the real or imaginary parts of the function give the cosine or sine result respectively.\n\n**(ii)** If the function is not periodic, then the limits must be determined by the function being used. The normalized autocorrelation $A(u)$ of the function $f(t) = e^{-at}$, when $t \\ge 0$ and $f (t) = 0$ when $t \\lt 0$, will be calculated, and also its full width at half-maximum, fwhm. The integration limits can be changed from those in equation (42) because the function is zero for $t \\lt 0$ and the lower limit can be zero. The normalization, using equation (42), is\n\n$$\\displaystyle \\int_{-\\infty}^{\\infty} f(t)^2dt=\\int_0^\\infty e^{-2at}dt = \\frac{1}{2a}$$\n\nand the autocorrelation\n\n$$\\displaystyle \\int_{-\\infty}^{\\infty} f(t)f(u+t)dt=\\int_0^\\infty e^{-at}e^{-a(u+t)}dt =e^{-au}\\int_0^\\infty e^{-2at}dt=\\frac{e^{-au}}{2a}$$\n\nImportantly, the autocorrelation must be an even function because it is symmetrical thus it is $\\displaystyle A(u) = \\frac{e^{-a|u|}}{ 2a}$ therefore, the value of $u$ must always be positive. The normalised autocorrelation is $\\displaystyle A(u)=e^{-a|u|}$. The $|u|$ does not follow from the mathematics; it is imposed by our knowledge of symmetry of the function.\n\nAs a check, at $u = 0,\\, A(0) = 1$, which is correct and the function is even or symmetrical about its y-axis, or, $u = 0$. The _fwhm_ is calculated when $\\displaystyle A(u_h) = 0.5 = e^{-a|u_h|}$ or $\\displaystyle |u_h|=a^{-1}\\ln(2)$ and thus _fwhm_ is $\\displaystyle 2a^{-1}\\ln(2)$. This is twice as wide in this instance as the initial function.\n\n**(iii)** The duration of a short laser pulse is often measured as an autocorrelation with an optical correlator. If the intensity profile $I$ of the short laser pulse is a Gaussian centred at zero $\\displaystyle I = e^{-2(t/a)^2}$, it is possible to calculate the width of its normalized autocorrelation. If the calculated autocorrelation shape is compared with an experimentally measured one, an estimation of the laser pulse's duration can be made. The optical correlator to do this measurement is a Michelson interferometer; the path length in one arm is changed relative to the other so that one pulse is moved past the other in time. The pulses are combined in a frequency doubling crystal, and a signal is detected only when the pulses overlap. \n\nTo achieve this, the doubled frequency, which is in the ultraviolet part of the spectrum, is separated from the fundamental wavelength by a filter. The size of the signal vs the distance the mirror moves, which is proportional to time, is the autocorrelation see Fig. 32. \n\n\n\nFigure 32. 
Schematic of an optical autocorrelator used to measure the duration of pico- and femtosecond laser pulses.\n____\n\nThe pulse is centred at zero delay and (theoretically) extends from $-\\infty$ to $\\infty$, which are the integration limits of the autocorrelation, equation (42). The autocorrelation integral is\n\n$$\\displaystyle A(u)=\\int\\limits_{-\\infty}^{\\infty} e^{-t^2/a^2}e^{-(u+t)^2/a^2}dt = a\\sqrt{\\frac{\\pi}{2}} e^{-u^2/(2a^2)}$$\n\nand the calculation with SymPy is\n\n\n```python\nt, u, a =symbols('t u a',positive=True)\nf01= exp(-(t/a)**2)*exp(-((u+t)**2)/a**2)\nG= simplify(integrate(f01, (t,-oo,oo), conds='none')) # oo is infinity\nG.doit()\n```\n\nThe normalization integration can be looked up but need not be worked out because it is the value of autocorrelation when $u$ = 0. The normalization equation is therefore $\\displaystyle \\int e^{-2t^2/a^2}dt=a\\sqrt{\\pi /2}$.\n\nThe normalized autocorrelation $G(u)$ is also a Gaussian, with a value $\\displaystyle G(u)=e^{-u^2/(2a^2)}$.\n\nThe _fwhm_ of this function is calculated when $G(u)=1/2$ and is $a\\sqrt{2\\ln(2)}$ and that of the original pulse is $a\\sqrt{\\ln(2)}$ therefore, the autocorrelation is $\\sqrt{2} \\approx$ 1.414 times wider than the pulse. Knowing this factor provides a convenient way of measuring the duration of a short laser pulse assuming it has a Gaussian profile.\n\n**(iv)** The randomness or otherwise of the autocorrelation of the residuals obtained from fitting real data to a model (theory) is important when determining the 'goodness of fit'. The function is now a set of data points not an equation. The data in Fig. 33 shows the autocorrelation of a random sequence of values where the mean is 0 (left) and $1/2$ (right). When the mean is zero, only the first point has a value not essentially zero. When the mean is $1/2$, there is a correlation between each point, and this decreases as the separation between points increases. Since the mean is $1/2$ (or any value not zero), this means that each point is related to all the others, because, besides random fluctuations, they all have the same underlying value. Their correlation becomes less the further they are separated. \n\nThe normalized autocorrelation of any line $y$ = constant, is a sloping straight line starting at $1$ and ending at $0$. This is to be expected, because at zero displacement the line is overlapped with itself, whereas at the maximum displacement, only one term remains, see equation (36), and this value is small. In Fig. 33, the random noise has a large correlation at zero displacement because the whole trace must be perfectly correlated with itself; its value is 1 but only because the autocorrelation is normalized.\n\nIn calculating the autocorrelation of residuals from a set of fitted data, the mean value of the data is always subtracted first to prevent this sloping effect on the autocorrelation shown on the right of Fig. 9.33. 
Of course, if after doing this the autocorrelation is still sloping, then it clearly is not equally distributed about zero and the model used to describe the data may not be correct.\n\nThe autocorrelation calculation is shown below.\n\n\n```python\ndef do_autoc(f,w): # correlation call as (w,w) for autocorrelation ac(k)= sum_i=0^{n-k} f(i)w(k+i) /norm\n n = len(w)\n ac = [0.0 for i in range(n)]\n sf = sum([f[i]**2 for i in range(n)])\n sw = sum([w[i]**2 for i in range(n)])\n normfw = np.sqrt(sf*sw)\n for k in range(n-1):\n s = 0.0\n for i in range(n-k):\n s = s + f[i]*w[k+i]\n ac[k] = s\n \n return ac/normfw\n#-------------\n\nfig1= plt.figure(figsize=(8.0,4.0))\nax0 = fig1.add_subplot(1,2,1)\nax1 = fig1.add_subplot(1,2,2)\n\nn = 250\ns = [ np.random.rand() for i in range(n)]\nt0= [i for i in range(n)]\n\nss = sum(s)/n # get average \ns0 = [s[i] - ss for i in range(n)] # subtract average\n\nax0.plot(t0, do_autoc(s0,s0),color='blue')\nax0.axhline(0,color='black',linewidth=1)\nax0.set_xlabel('x')\nax0.set_title('autocorrelation, av = 0')\nax0.set_yticks([-0.5,0.0,0.5,1])\n\nax1.plot(t0, do_autoc(s,s),color='blue')\nax1.axhline(0,color='black',linewidth=1)\nax1.set_xlabel('x')\nax1.set_title('autocorrelation, av = 0.5')\nax1.set_yticks([-0.5,0.0,0.5,1])\nplt.tight_layout()\n\nplt.show()\n```\n\nFig. 33 Normalized autocorrelations of $250$ random numbers with an average of $0$ (left) and an average of $1/2$ (right). Only the right-hand half of the autocorrelation is calculated and plotted. The left-hand part is the exact mirror image.\n\n_____\n\n## 8.2 Autocorrelation of fluctuating and noisy signals\n\nThe autocorrelation of noise is now considered, and in the next section this will lead to understanding the shape of a spectroscopic transition and this is illustrated with NMR. Any experimental measurement is accompanied by noise. When measuring the properties of single atoms, molecules, or photons, considerable fluctuations in their measured values are expected and many events have to be averaged to obtain a precise result. The measured property might be energy, velocity, the number of photons in a given period measured by a photodiode, or the current in a transistor or diode when this is so small that discrete charge events are recorded. This latter noise is called _shot noise_. If you could hear shot noise, the effect would be rather similar to the sound of heavy rain falling on a car's roof. \n\nThere is thermal noise in all resistors in electrical circuits that causes fluctuations in the current. These fluctuations are caused by the thermal motion of the many electrons as they pass through the inhomogeneous material forming the body of the resistor. The frequency of the noise measured on an oscilloscope is determined by the frequency with which the circuitry responds and therefore depends on the capacitance, resistance, and inductance. This generally produces noise with a spread of frequencies of about equal amplitude, except for multiples of mains frequency and those of switched-mode power supplies, and is called _white noise_. At low frequencies, the amplitude of the noise increases in direct proportion to $1/f$ where $f$ is frequency and is therefore called '$1/f$' noise. The origin of $1/f$ noise is not fully understood.\n\nOn the macroscopic scale, random noise also accompanies experimental measurements. Measuring the amount of any of the many trace gases, such as CO$_2$, IO$_2$, and NOx, in the atmosphere using optical techniques is an inherently noisy process. 
This is due to the continuous and erratic motion of air packets along the line of sight during the measurement and from one measurement to another. The frequency of the noise is, however, mostly limited to the speed at which the air changes. \n\nIn the laboratory, all sorts of noise sources can affect an experiment; mostly these are due to voltage or current ripple in DC power supplies. In sensitive laser experiments, noise can be caused by dust particles in the air, vibrations of the building and from the air flow coming from air conditioning units. Atomic force microscopes have in the past needed to be suspended inside a sound proof box by elastic bungee ropes, to avoid adding noise to the measurements from the vibrations of the building and from nearby traffic. \n\nIn an attempt to reduce noise a Fourier transform and an autocorrelation of the signal will provide information about the frequencies present, and how quickly they change, or alternatively, how long the noise persists, and hence the possible source. The transform can also be used to remove noise as illustrated in Section 10.\n\nSuppose that the noise on a measurement is represented by some fluctuating signal $f(t)$, the frequency of which is determined by the nature of the experiment and by the measuring apparatus. This signal will be represented by a general Fourier series similar to that in Section 1.1 but where $T$ is the period over which a measurement is made and the summation starts from zero as this makes the resulting equations simpler,\n\n$$\\displaystyle f(t)=\\sum\\limits_{n=0}^\\infty a_n\\cos \\left(\\frac{2\\pi n t}{T}\\right)+\\sum\\limits_{n=0}^\\infty b_n\\sin\\left(\\frac{2\\pi n t}{T}\\right)$$\n\nFollowing Davidson (1962, chapter 14), the time average of $f$ and $f^2$ is the respective integral divided by the time interval $T$. The average $\\langle f \\rangle$ is zero because the noise is random, but the average of $f^2$ is not; the integral is\n\n$$\\displaystyle \\langle f^2 \\rangle =\\frac{1}{T}\\int\\limits_0^T\\left [\\sum\\limits_{n=0}^\\infty a_n\\cos \\left(\\frac{2\\pi n t}{T}\\right)+\\sum\\limits_{n=0}^\\infty b_n\\sin\\left(\\frac{2\\pi n t}{T}\\right) \\right]^2 dt$$\n\nwhich simplifies considerably because of the orthogonality of the cosine integrals such as $\\displaystyle \\int \\cos(2\\pi \\frac{nt}{T})\\sin(2\\pi \\frac{mt}{T})dt=0$, $n$ and $m$ being integers, and the result is very simple;\n\n$$\\displaystyle \\langle f^2\\rangle = \\frac{1}{2}\\sum_n(a_n^2+b_n^2) $$\n\nThis expression can also represent the average of many measurements if the coefficients $a$ and $b$ themselves represent average values. This means that the _ergodic hypothesis_ (or ergodic condition) applies, i.e. for a stationary system each part comprising the ensemble (of particles) will pass through all values accessible to it, given a sufficiently long time. Thus the time average is the same for all parts of the ensemble. This also means that the time average is the equivalent to the ensemble average. To explain further; the word 'stationary' means that there is no preferred origin for the measurement, thus any time period over which measurements are made is just as good as any other. The ensemble average is taken over all coordinates of a system at a fixed time. The time average considers just a part of the ensemble averaged over a sufficiently long time. 
If the ergodic hypothesis applies these averages are equal.\n\nThe variance (the square of the standard deviation) on the signal is $\\sigma^2=\\langle f^2\\rangle - \\langle f\\rangle^2 $ and in this case the standard deviation is $\\sqrt{\\langle f^2\\rangle}$ and is the determined only by the amplitudes $a$, $b$ of the noise. The energy in the noise is $a^2 + b^2$. \n\nThe autocorrelation of $f(x)$ is \n\n$$\\displaystyle A(u) =\\langle f(t)f(u+t)\\rangle =\\frac{1}{T}\\int_0^Tf(t)f(t+u)dt$$\n\nwhich looks quite complicated when the substitution for $f$ is made. However, using the formulas for $\\sin(A + B), \\cos(A + B)$ and the orthogonality rules, a remarkably simple result is produced:\n\n$$\\displaystyle A(u)=\\frac{1}{2}\\sum_n\\left(a_n^2+b_n^2\\right)\\cos\\left(\\frac{2\\pi nu}{T} \\right) \\tag{43}$$\n\nwhich is an oscillating signal that will repeat itself with a period $T$.\n\n## 8.3 Wiener\u2013Khinchin relations\n\nThe autocorrelation (equation (43)) is related to the energy or power in a given signal. For example, with electromagnetic radiation the energy is the square of the amplitude $E$ of the electric field, the field is given by the constants $a$ and $b$ thus $a^2 + b^2$ represents the energy. This is also true of a sound wave in a fluid where the energy is proportional to the square of the oscillating pressure. There are other examples; the power dissipated in a resistor is proportional to the current squared and the kinetic energy of a molecule is proportional to the square of the velocity. Thus, in general if the signal is $f$, $\\langle f^2\\rangle$ represents the average energy or power. The period $T$ (equation (43)) is somewhat arbitrary and can reasonably take on any value; therefore, it is possible to define $n/T \\equiv \\nu_n$ as a frequency. The amount of power $P$ in a small frequency interval from $\\nu$ to $\\nu + \\nu + \\delta \\nu$ is therefore $\\displaystyle P(\\nu)d\\nu = \\frac{1}{2}\\left(a_\\nu^2 + b_\\nu^2\\right)$ and the autocorrelation can be written as an integral over frequencies rather than a summation over index $n$. This effectively means that there are so many terms in the sum that it can be changed into an integral without any significant error, and doing this produces the autocorrelation;\n\n$$\\displaystyle A(u)=\\int_{v=0}^\\infty P(\\nu)\\cos(2\\pi\\nu u)d\\nu \\tag{44}$$\n\nComparing this equation with a Fourier transform equation, the power spectrum is\n\n$$\\displaystyle P(\\nu) = 4\\int_{u=0}^\\infty A(u)\\cos(2\\pi\\nu u)d\\nu \\tag{45}$$\n\nand these two equations are known as the _Wiener - Khinchin_ relationships: the power spectrum $P(\\nu)$ and autocorrelation $A(u)$ form a Fourier transform pair. Very often the transform pair involve time and frequency, in which case the changes $u\\to t$ and $P(\\nu) \\to J(\\nu)$ are commonly made. In NMR and other spectroscopies $J(\\omega)$ is called the spectral density.\n\nThe power spectrum is proportional to what we would normally observe in a spectroscopic experiment, as the change in the signal vs frequency. The width of the signal is determined by the autocorrelation and this is determined by the noise. If the noise is due to a random process then it is often found that the autocorrelation decays exponentially as $\\displaystyle e^{-t/\\tau}$ with rate constant $k=1/\\tau$. 
In this case the power spectrum $J(\\nu)$ is\n\n$$\\displaystyle J(\\nu) =4\\int_{u=0}^\\infty e^{-u/\\tau}\\cos(2\\pi \\nu u)du = \\frac{4\\tau}{1+(2\\pi\\nu \\tau)^2} \\tag{46}$$\n\nand the integral is most easily evaluated by converting the cosine to its exponential form. \n\nThe nature of the random processes contributing to the power spectrum is now considered using NMR as an example. The nuclear spin angular momentum in a molecule remains in fixed precessing motion governed by the external magnetic field, but the molecules themselves also undergo random rotational diffusion due to thermal agitation when in solution. This random motion causes the nuclear spin to experience a fluctuating magnetic field in addition to the applied external field. Therefore, those nuclei undergoing NMR transitions experience this fluctuating field and its effect is to return the nuclear spin population to equilibrium with a lifetime called T1 (Sanders & Hunter 1987; Levitt 2001). The timescale of these fluctuations is of the order of tens of picoseconds because this is the timescale of molecular rotation. ( Translational diffusion is far slower ). The molecular rotation rate constant and hence frequency is similar to that of the NMR transition frequency (Larmor frequency) and therefore rotational diffusion can greatly influence the return to equilibrium of the nuclear spins and can dominate both the T1 and T2 decay processes. Loss of spin coherence is characterized by the lifetime T2. \n\nMolecular translational diffusion is far slower than rotation and so causes magnetic field fluctuations at a far lower frequency than the NMR transition and is therefore less important for T1 processes. Similarly, vibrational motion is too high to influence the NMR transition. Large molecules in a viscous solvent have a sluggish response and a small rotational diffusion coefficient, and long rotational relaxation times, and _vice versa_. However, while different solvents and molecules of different sizes will change the frequency of the random magnetic field fluctuations, the timescale remains comparable to that of the NMR transition. In proteins, while overall rotation can be slow, approximately tens of nanoseconds, faster local motion of residues called 'wobbling in a cone' motion still occurs. \n\nThe autocorrelation of rotational diffusion can be shown to be an exponentially decaying function with a lifetime $\\tau$ proportional to the reciprocal of the rotational diffusion coefficient. Fig. 34 shows the spectral density calculated for different rotational relaxation times. The coupling of the magnetic field fluctuations is most effective when $1/2\\pi\\tau$ is close to the Larmor frequency and therefore molecules of different sizes will be affected differently.\n\nWhen plotted on a linear scale the spectral density of a slowly decaying exponential autocorrelation, equation 46, is a narrow function centred at zero frequency, whereas the rapidly decaying autocorrelation has the same shape but is wide. Zero frequency here means the transition frequency, see fig 34. The line-width is a consequence of the time-energy or time-frequency uncertainty, causing a wide spectral line when processes are rapid and vice versa. When plotted on a linear - log scale the power spectrum is constant over a wide range of low frequencies, and this is called 'white noise'. It rapidly decreases, centred about the frequency $1/2\\pi\\tau$ as is shown in the figure. 
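To visualise this behaviour yourself, a few lines of `matplotlib` are enough to sketch $J(\nu)$ of equation (46) for several correlation times. This is only a sketch, not necessarily the code used to produce Fig. 34; the frequency range and the normalisation to $J(0)=1$ are arbitrary choices made here for clarity.


```python
import numpy as np
import matplotlib.pyplot as plt

nu = np.logspace(8, 12, 400)                        # frequency /Hz
for tau_ps in (1, 10, 100):                         # correlation times /ps
    tau = tau_ps*1e-12
    J = 4*tau/(1 + (2*np.pi*nu*tau)**2)             # equation (46)
    plt.semilogx(nu, J/(4*tau), label=str(tau_ps) + ' ps')   # normalized to J(0) = 1
plt.xlabel('frequency /Hz')
plt.ylabel(r'$J(\nu)/J(0)$')
plt.legend()
plt.show()
```
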
If the noise were completely random, the power spectrum would be constant at all frequencies.\n\nThe Weiner - Khinchin theorem also shows that the autocorrelation of the signal $f$ is the squared modulus of its Fourier transform $g(k)$. Apart from a constant of proportionality, this is\n\n$$\\displaystyle A(u) =\\int_{-\\infty}^\\infty f^*(t)f(u+t)dt = |g(t)|^2$$\n\nBecause the squared modulus of the Fourier transform is produced, the autocorrelation has lost all phase information so it is not possible to invert or reverse $g(k)$ to produce the original function $f$. Thus, in the NMR case, it is not possible to measure the spectral density, which is proportional the shape of the NMR transition, and then work backwards to obtain the function that produced this shape. All that can be done is to generate a model of the interactions, such as rotational diffusion, and, for example, by a non-linear, least-squares method fit this theoretical model to the data.\n\n\n\nFigure 34. Left: Power spectra (or spectral density) vs. frequency for a signal that has an exponential autocorrelation function, the decay lifetimes of the exponentials are from $1 \\to 100$ ps. The density of the fluctuation in the noise is almost constant at lower frequencies and this is called 'white noise'. \n", "meta": {"hexsha": "92c3d470b7af4ccf19fe2e7688ace3602a6304fd", "size": 122862, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapter-9/fourier-D.ipynb", "max_stars_repo_name": "subblue/applied-maths-in-chem-book", "max_stars_repo_head_hexsha": "e3368645412fcc974e2b12d7cc584aa96e8eb2b4", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "chapter-9/fourier-D.ipynb", "max_issues_repo_name": "subblue/applied-maths-in-chem-book", "max_issues_repo_head_hexsha": "e3368645412fcc974e2b12d7cc584aa96e8eb2b4", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "chapter-9/fourier-D.ipynb", "max_forks_repo_name": "subblue/applied-maths-in-chem-book", "max_forks_repo_head_hexsha": "e3368645412fcc974e2b12d7cc584aa96e8eb2b4", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 186.7203647416, "max_line_length": 21356, "alphanum_fraction": 0.838705214, "converted": true, "num_tokens": 11758, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.46101677931231594, "lm_q2_score": 0.19930799790404563, "lm_q1q2_score": 0.09188433128490893}} {"text": "\n \n \n
\n Run\n in Google Colab\n \n View source on GitHub\n
\n\n\n```python\n%matplotlib inline\n```\n\n\nReinforcement Learning (DQN) Tutorial\n=====================================\n**Author**: `Adam Paszke `_\n\n\nThis tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent\non the CartPole-v0 task from the `OpenAI Gym `__.\n\n**Task**\n\nThe agent has to decide between two actions - moving the cart left or\nright - so that the pole attached to it stays upright. You can find an\nofficial leaderboard with various algorithms and visualizations at the\n`Gym website `__.\n\n.. figure:: /_static/img/cartpole.gif\n :alt: cartpole\n\n cartpole\n\nAs the agent observes the current state of the environment and chooses\nan action, the environment *transitions* to a new state, and also\nreturns a reward that indicates the consequences of the action. In this\ntask, the environment terminates if the pole falls over too far.\n\nThe CartPole task is designed so that the inputs to the agent are 4 real\nvalues representing the environment state (position, velocity, etc.).\nHowever, neural networks can solve the task purely by looking at the\nscene, so we'll use a patch of the screen centered on the cart as an\ninput. Because of this, our results aren't directly comparable to the\nones from the official leaderboard - our task is much harder.\nUnfortunately this does slow down the training, because we have to\nrender all the frames.\n\nStrictly speaking, we will present the state as the difference between\nthe current screen patch and the previous one. This will allow the agent\nto take the velocity of the pole into account from one image.\n\n**Packages**\n\n\nFirst, let's import needed packages. Firstly, we need\n`gym `__ for the environment\n(Install using `pip install gym`).\nWe'll also use the following from PyTorch:\n\n- neural networks (``torch.nn``)\n- optimization (``torch.optim``)\n- automatic differentiation (``torch.autograd``)\n- utilities for vision tasks (``torchvision`` - `a separate\n package `__).\n\n\n\n\n\n```python\nimport gym\nimport math\nimport random\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom collections import namedtuple\nfrom itertools import count\nfrom PIL import Image\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nimport torchvision.transforms as T\n\n\nenv = gym.make('CartPole-v0').unwrapped\n\n# set up matplotlib\nis_ipython = 'inline' in matplotlib.get_backend()\nif is_ipython:\n from IPython import display\n\nplt.ion()\n\n# if gpu is to be used\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n```\n\nReplay Memory\n-------------\n\nWe'll be using experience replay memory for training our DQN. It stores\nthe transitions that the agent observes, allowing us to reuse this data\nlater. By sampling from it randomly, the transitions that build up a\nbatch are decorrelated. It has been shown that this greatly stabilizes\nand improves the DQN training procedure.\n\nFor this, we're going to need two classses:\n\n- ``Transition`` - a named tuple representing a single transition in\n our environment\n- ``ReplayMemory`` - a cyclic buffer of bounded size that holds the\n transitions observed recently. 
It also implements a ``.sample()``\n method for selecting a random batch of transitions for training.\n\n\n\n\n\n```python\nTransition = namedtuple('Transition',\n ('state', 'action', 'next_state', 'reward'))\n\n\nclass ReplayMemory(object):\n\n def __init__(self, capacity):\n self.capacity = capacity\n self.memory = []\n self.position = 0\n\n def push(self, *args):\n \"\"\"Saves a transition.\"\"\"\n if len(self.memory) < self.capacity:\n self.memory.append(None)\n self.memory[self.position] = Transition(*args)\n self.position = (self.position + 1) % self.capacity\n\n def sample(self, batch_size):\n return random.sample(self.memory, batch_size)\n\n def __len__(self):\n return len(self.memory)\n```\n\nNow, let's define our model. But first, let quickly recap what a DQN is.\n\nDQN algorithm\n-------------\n\nOur environment is deterministic, so all equations presented here are\nalso formulated deterministically for the sake of simplicity. In the\nreinforcement learning literature, they would also contain expectations\nover stochastic transitions in the environment.\n\nOur aim will be to train a policy that tries to maximize the discounted,\ncumulative reward\n$R_{t_0} = \\sum_{t=t_0}^{\\infty} \\gamma^{t - t_0} r_t$, where\n$R_{t_0}$ is also known as the *return*. The discount,\n$\\gamma$, should be a constant between $0$ and $1$\nthat ensures the sum converges. It makes rewards from the uncertain far\nfuture less important for our agent than the ones in the near future\nthat it can be fairly confident about.\n\nThe main idea behind Q-learning is that if we had a function\n$Q^*: State \\times Action \\rightarrow \\mathbb{R}$, that could tell\nus what our return would be, if we were to take an action in a given\nstate, then we could easily construct a policy that maximizes our\nrewards:\n\n\\begin{align}\\pi^*(s) = \\arg\\!\\max_a \\ Q^*(s, a)\\end{align}\n\nHowever, we don't know everything about the world, so we don't have\naccess to $Q^*$. But, since neural networks are universal function\napproximators, we can simply create one and train it to resemble\n$Q^*$.\n\nFor our training update rule, we'll use a fact that every $Q$\nfunction for some policy obeys the Bellman equation:\n\n\\begin{align}Q^{\\pi}(s, a) = r + \\gamma Q^{\\pi}(s', \\pi(s'))\\end{align}\n\nThe difference between the two sides of the equality is known as the\ntemporal difference error, $\\delta$:\n\n\\begin{align}\\delta = Q(s, a) - (r + \\gamma \\max_a Q(s', a))\\end{align}\n\nTo minimise this error, we will use the `Huber\nloss `__. The Huber loss acts\nlike the mean squared error when the error is small, but like the mean\nabsolute error when the error is large - this makes it more robust to\noutliers when the estimates of $Q$ are very noisy. We calculate\nthis over a batch of transitions, $B$, sampled from the replay\nmemory:\n\n\\begin{align}\\mathcal{L} = \\frac{1}{|B|}\\sum_{(s, a, s', r) \\ \\in \\ B} \\mathcal{L}(\\delta)\\end{align}\n\n\\begin{align}\\text{where} \\quad \\mathcal{L}(\\delta) = \\begin{cases}\n \\frac{1}{2}{\\delta^2} & \\text{for } |\\delta| \\le 1, \\\\\n |\\delta| - \\frac{1}{2} & \\text{otherwise.}\n \\end{cases}\\end{align}\n\nQ-network\n^^^^^^^^^\n\nOur model will be a convolutional neural network that takes in the\ndifference between the current and previous screen patches. It has two\noutputs, representing $Q(s, \\mathrm{left})$ and\n$Q(s, \\mathrm{right})$ (where $s$ is the input to the\nnetwork). 
In effect, the network is trying to predict the *quality* of\ntaking each action given the current input.\n\n\n\n\n\n```python\nclass DQN(nn.Module):\n\n def __init__(self):\n super(DQN, self).__init__()\n self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2)\n self.bn1 = nn.BatchNorm2d(16)\n self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)\n self.bn2 = nn.BatchNorm2d(32)\n self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2)\n self.bn3 = nn.BatchNorm2d(32)\n self.head = nn.Linear(448, 2)\n\n def forward(self, x):\n x = F.relu(self.bn1(self.conv1(x)))\n x = F.relu(self.bn2(self.conv2(x)))\n x = F.relu(self.bn3(self.conv3(x)))\n return self.head(x.view(x.size(0), -1))\n```\n\nInput extraction\n^^^^^^^^^^^^^^^^\n\nThe code below are utilities for extracting and processing rendered\nimages from the environment. It uses the ``torchvision`` package, which\nmakes it easy to compose image transforms. Once you run the cell it will\ndisplay an example patch that it extracted.\n\n\n\n\n\n```python\nresize = T.Compose([T.ToPILImage(),\n T.Resize(40, interpolation=Image.CUBIC),\n T.ToTensor()])\n\n# This is based on the code from gym.\nscreen_width = 600\n\n\ndef get_cart_location():\n world_width = env.x_threshold * 2\n scale = screen_width / world_width\n return int(env.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART\n\n\ndef get_screen():\n screen = env.render(mode='rgb_array').transpose(\n (2, 0, 1)) # transpose into torch order (CHW)\n # Strip off the top and bottom of the screen\n screen = screen[:, 160:320]\n view_width = 320\n cart_location = get_cart_location()\n if cart_location < view_width // 2:\n slice_range = slice(view_width)\n elif cart_location > (screen_width - view_width // 2):\n slice_range = slice(-view_width, None)\n else:\n slice_range = slice(cart_location - view_width // 2,\n cart_location + view_width // 2)\n # Strip off the edges, so that we have a square image centered on a cart\n screen = screen[:, :, slice_range]\n # Convert to float, rescare, convert to torch tensor\n # (this doesn't require a copy)\n screen = np.ascontiguousarray(screen, dtype=np.float32) / 255\n screen = torch.from_numpy(screen)\n # Resize, and add a batch dimension (BCHW)\n return resize(screen).unsqueeze(0).to(device)\n\n\nenv.reset()\nplt.figure()\nplt.imshow(get_screen().cpu().squeeze(0).permute(1, 2, 0).numpy(),\n interpolation='none')\nplt.title('Example extracted screen')\nplt.show()\n```\n\nTraining\n--------\n\nHyperparameters and utilities\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nThis cell instantiates our model and its optimizer, and defines some\nutilities:\n\n- ``select_action`` - will select an action accordingly to an epsilon\n greedy policy. Simply put, we'll sometimes use our model for choosing\n the action, and sometimes we'll just sample one uniformly. The\n probability of choosing a random action will start at ``EPS_START``\n and will decay exponentially towards ``EPS_END``. ``EPS_DECAY``\n controls the rate of the decay.\n- ``plot_durations`` - a helper for plotting the durations of episodes,\n along with an average over the last 100 episodes (the measure used in\n the official evaluations). 
The plot will be underneath the cell\n containing the main training loop, and will update after every\n episode.\n\n\n\n\n\n```python\nBATCH_SIZE = 128\nGAMMA = 0.999\nEPS_START = 0.9\nEPS_END = 0.05\nEPS_DECAY = 200\nTARGET_UPDATE = 10\n\npolicy_net = DQN().to(device)\ntarget_net = DQN().to(device)\ntarget_net.load_state_dict(policy_net.state_dict())\ntarget_net.eval()\n\noptimizer = optim.RMSprop(policy_net.parameters())\nmemory = ReplayMemory(10000)\n\n\nsteps_done = 0\n\n\ndef select_action(state):\n global steps_done\n sample = random.random()\n eps_threshold = EPS_END + (EPS_START - EPS_END) * \\\n math.exp(-1. * steps_done / EPS_DECAY)\n steps_done += 1\n if sample > eps_threshold:\n with torch.no_grad():\n return policy_net(state).max(1)[1].view(1, 1)\n else:\n return torch.tensor([[random.randrange(2)]], device=device, dtype=torch.long)\n\n\nepisode_durations = []\n\n\ndef plot_durations():\n plt.figure(2)\n plt.clf()\n durations_t = torch.tensor(episode_durations, dtype=torch.float)\n plt.title('Training...')\n plt.xlabel('Episode')\n plt.ylabel('Duration')\n plt.plot(durations_t.numpy())\n # Take 100 episode averages and plot them too\n if len(durations_t) >= 100:\n means = durations_t.unfold(0, 100, 1).mean(1).view(-1)\n means = torch.cat((torch.zeros(99), means))\n plt.plot(means.numpy())\n\n plt.pause(0.001) # pause a bit so that plots are updated\n if is_ipython:\n display.clear_output(wait=True)\n display.display(plt.gcf())\n```\n\nTraining loop\n^^^^^^^^^^^^^\n\nFinally, the code for training our model.\n\nHere, you can find an ``optimize_model`` function that performs a\nsingle step of the optimization. It first samples a batch, concatenates\nall the tensors into a single one, computes $Q(s_t, a_t)$ and\n$V(s_{t+1}) = \\max_a Q(s_{t+1}, a)$, and combines them into our\nloss. By defition we set $V(s) = 0$ if $s$ is a terminal\nstate. We also use a target network to compute $V(s_{t+1})$ for\nadded stability. 
The target network has its weights kept frozen most of\nthe time, but is updated with the policy network's weights every so often.\nThis is usually a set number of steps but we shall use episodes for\nsimplicity.\n\n\n\n\n\n```python\ndef optimize_model():\n if len(memory) < BATCH_SIZE:\n return\n transitions = memory.sample(BATCH_SIZE)\n # Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for\n # detailed explanation).\n batch = Transition(*zip(*transitions))\n\n # Compute a mask of non-final states and concatenate the batch elements\n non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,\n batch.next_state)), device=device, dtype=torch.uint8)\n non_final_next_states = torch.cat([s for s in batch.next_state\n if s is not None])\n state_batch = torch.cat(batch.state)\n action_batch = torch.cat(batch.action)\n reward_batch = torch.cat(batch.reward)\n\n # Compute Q(s_t, a) - the model computes Q(s_t), then we select the\n # columns of actions taken\n state_action_values = policy_net(state_batch).gather(1, action_batch)\n\n # Compute V(s_{t+1}) for all next states.\n next_state_values = torch.zeros(BATCH_SIZE, device=device)\n next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()\n # Compute the expected Q values\n expected_state_action_values = (next_state_values * GAMMA) + reward_batch\n\n # Compute Huber loss\n loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))\n\n # Optimize the model\n optimizer.zero_grad()\n loss.backward()\n for param in policy_net.parameters():\n param.grad.data.clamp_(-1, 1)\n optimizer.step()\n```\n\nBelow, you can find the main training loop. At the beginning we reset\nthe environment and initialize the ``state`` Tensor. Then, we sample\nan action, execute it, observe the next screen and the reward (always\n1), and optimize our model once. When the episode ends (our model\nfails), we restart the loop.\n\nBelow, `num_episodes` is set small. 
You should download\nthe notebook and run lot more epsiodes.\n\n\n\n\n\n```python\nnum_episodes = 50\nfor i_episode in range(num_episodes):\n # Initialize the environment and state\n env.reset()\n last_screen = get_screen()\n current_screen = get_screen()\n state = current_screen - last_screen\n for t in count():\n # Select and perform an action\n action = select_action(state)\n _, reward, done, _ = env.step(action.item())\n reward = torch.tensor([reward], device=device)\n\n # Observe new state\n last_screen = current_screen\n current_screen = get_screen()\n if not done:\n next_state = current_screen - last_screen\n else:\n next_state = None\n\n # Store the transition in memory\n memory.push(state, action, next_state, reward)\n\n # Move to the next state\n state = next_state\n\n # Perform one step of the optimization (on the target network)\n optimize_model()\n if done:\n episode_durations.append(t + 1)\n plot_durations()\n break\n # Update the target network\n if i_episode % TARGET_UPDATE == 0:\n target_net.load_state_dict(policy_net.state_dict())\n\nprint('Complete')\nenv.render()\nenv.close()\nplt.ioff()\nplt.show()\n```\n", "meta": {"hexsha": "0f9a97371c8bdbff42a6378461ab4c2bc272fc11", "size": 25709, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "reinforcement-learning/pytorch/deep_q_learning.ipynb", "max_stars_repo_name": "notebookexplore/NotebookExplore", "max_stars_repo_head_hexsha": "63d8db772e482cf003eb9696729984f916ec453f", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 21, "max_stars_repo_stars_event_min_datetime": "2020-02-13T05:42:56.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T21:32:26.000Z", "max_issues_repo_path": "reinforcement-learning/pytorch/deep_q_learning.ipynb", "max_issues_repo_name": "notebookexplore/NotebookExplore", "max_issues_repo_head_hexsha": "63d8db772e482cf003eb9696729984f916ec453f", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reinforcement-learning/pytorch/deep_q_learning.ipynb", "max_forks_repo_name": "notebookexplore/NotebookExplore", "max_forks_repo_head_hexsha": "63d8db772e482cf003eb9696729984f916ec453f", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 5, "max_forks_repo_forks_event_min_datetime": "2020-02-16T07:58:14.000Z", "max_forks_repo_forks_event_max_datetime": "2021-01-03T23:57:30.000Z", "avg_line_length": 39.1308980213, "max_line_length": 165, "alphanum_fraction": 0.5060095686, "converted": true, "num_tokens": 3998, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4649015565456542, "lm_q2_score": 0.19682620128743877, "lm_q1q2_score": 0.09150480734749852}} {"text": "# Practical Session 1: Data exploration and regression algorithms\n\n*Notebook by Ekaterina Kochmar*\n\n## 0.1. Dataset\n\nThe California House Prices Dataset is originally obtained from the StatLib repository. This dataset contains the collected information on the variables (e.g., median income, number of households, precise geographical position) using all the block groups in California from the 1990 Census. A block group is the smallest geographical unit for which the US Census Bureau publishes sample data, and on average it includes $1425.5$ individuals living in a geographically compact area. 
The [original data](http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html) contains $20640$ observations on $9$ variables, with the *median house value* being the dependent variable (or *target attribute*). The [modified dataset](https://www.kaggle.com/camnugent/california-housing-prices) from Aurelien Geron, *Hands-On Machine Learning with Scikit-Learn and TensorFlow* contains an additional categorical variable.\n\nFor more information on the original data, please refer to Pace, R. Kelley and Ronald Barry, *Sparse Spatial Autoregressions*, Statistics and Probability Letters, 33 (1997) 291-297. For the information on the modified dataset, please refer to Aurelien Geron, *Hands-On Machine Learning with Scikit-Learn and TensorFlow*, O\u2032Reilly (2017), ISBN: 978-1491962299.\n\n## 0.2. Understanding your task\n\nYou are given a dataset that contains a range of attributes describing the houses in California. Your task is to predict the median price of a house based on its attributes. That is, you should train a machine learning (ML) algorithm on the available data, and the next time you get new information on some housing in California, you can use your trained algorithm to predict its price.\n\nThe questions to ask yourself before starting a new ML project:\n- Does the task suggest a supervised or an unsupervised approach?\n- Are you trying to predict a discrete or a continuous value?\n- Which ML algorithm is most suitable?\n\nTry to answer these questions before you start working on this task, using the following hints:\n- *Supervised* approaches rely on the availability of target label annotation in data; examples include regression and classification approaches. *Unsupervised* approaches don't use annotated data; clustering is a good example of such approach.\n- *Discrete* variables are associated with classes and imply classification approach. *Continuous* variables are associated with regression.\n\n## 0.3. Machine Learning check-list\n\nIn a typical ML project, you need to:\n\n- Get the dataset\n- Understand the data, the attributes and their correlations\n- Split the data into training and test set\n- Apply normalisation, scaling and other transformations to the attributes if needed\n- Build a machine learning model\n- Evaluate the model and investigate the errors\n- Tune your model to improve performance\n\nThis practical will show you how to implement the above steps.\n\n## 0.4. Prerequisites\n\nSome of you might have used Jupiter notebooks with the following libraries before in the [CL 1A Scientific Computing course](https://www.cl.cam.ac.uk/teaching/1920/SciComp/materials.html).\n\nTo run the notebooks on your machine, check if `Python 3` is installed. In addition, you will need the following libraries:\n\n- `Pandas` for easy data uploading and manipulation. Check installation instructions at https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html\n- `Matplotlib`: for visualisations. Check installation instructions at https://matplotlib.org/users/installing.html\n- `NumPy` and `SciPy`: for scietinfic programming. Check installation instruction at https://www.scipy.org/install.html\n- `Scikit-learn`: for machine learning algorithms. Check installation instructions at http://scikit-learn.org/stable/install.html\n\nAlternatively, a number of these libraries can be installed in one go through [Anaconda](https://www.anaconda.com/products/individual) distribution. \n\n## 0.5. 
Learning objectives\n\nIn this practical you will learn how to:\n\n- upload and explore a dataset\n- visualise and explore the correlations between the variables\n- structure a machine learning project\n- select the training and test data in a random and in a stratified way\n- handle missing values\n- handle categorical values\n- implement a custom data transformer\n- build a machine learning pipeline\n- implement a regression algorithm\n- evaluate a regression algorithm performance\n\nIn addition, you will learn about such common machine learning concepts as:\n- data scaling and normalisation\n- overfitting and underfitting\n- cross-validation\n- hyperparameter setting with grid search\n\n\n## Step 1: Uploading and inspecting the data\n\nFirst let's upload the dataset using `Pandas` and defining a function pointing to the location of the `housing.csv` file:\n\n\n```python\nimport pandas as pd\nimport os\n\ndef load_data(housing_path):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pd.read_csv(csv_path)\n```\n\nNow, let's run `load_data` using the path where you stored your `housing.csv` file. This function will return a `Pandas` DataFrame object containing all the data. It is always a good idea to take a quick look into the uploaded dataset and make sure you understand the data you are working with. For example, you can check the top rows of the uploaded data and get the general information about the dataset using `Pandas` functionality as follows:\n\n\n```python\nhousing = load_data(\"housing/\")\nhousing.head()\n```\n\n\n\n\n
|   | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value | ocean_proximity |
|---|-----------|----------|--------------------|-------------|----------------|------------|------------|---------------|--------------------|-----------------|
| 0 | -122.23   | 37.88    | 41.0               | 880.0       | 129.0          | 322.0      | 126.0      | 8.3252        | 452600.0           | NEAR BAY        |
| 1 | -122.22   | 37.86    | 21.0               | 7099.0      | 1106.0         | 2401.0     | 1138.0     | 8.3014        | 358500.0           | NEAR BAY        |
| 2 | -122.24   | 37.85    | 52.0               | 1467.0      | 190.0          | 496.0      | 177.0      | 7.2574        | 352100.0           | NEAR BAY        |
| 3 | -122.25   | 37.85    | 52.0               | 1274.0      | 235.0          | 558.0      | 219.0      | 5.6431        | 341300.0           | NEAR BAY        |
| 4 | -122.25   | 37.85    | 52.0               | 1627.0      | 280.0          | 565.0      | 259.0      | 3.8462        | 342200.0           | NEAR BAY        |
\n\n\n\nRemember that each row in this table represents a block group (housing district), and each column an attribute. How many attributes does the dataset contain? \n\nAnother way to get the summary information about the number of instances and attributes in the dataset is using `info` function. It also shows each attribute's type and number of non-null values:\n\n\n```python\nhousing.info()\n```\n\n \n RangeIndex: 20640 entries, 0 to 20639\n Data columns (total 10 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 20640 non-null float64\n 1 latitude 20640 non-null float64\n 2 housing_median_age 20640 non-null float64\n 3 total_rooms 20640 non-null float64\n 4 total_bedrooms 20433 non-null float64\n 5 population 20640 non-null float64\n 6 households 20640 non-null float64\n 7 median_income 20640 non-null float64\n 8 median_house_value 20640 non-null float64\n 9 ocean_proximity 20640 non-null object \n dtypes: float64(9), object(1)\n memory usage: 1.6+ MB\n\n\nBefore proceeding further, think about the following: \n- How is the data represented? \n- What do the attribute types suggest? \n- Are there any missing values in the dataset? If so, should you do anything about them? \n\nYou must have worked with numerical values before, and the data types like `float64` should look familiar. However, *ocean\\_proximity* attribute has values of a different type. You can inspect the values of a particular attribute in the DataFrame using the following code:\n\n\n```python\nhousing[\"ocean_proximity\"].value_counts()\n```\n\n\n\n\n <1H OCEAN 9136\n INLAND 6551\n NEAR OCEAN 2658\n NEAR BAY 2290\n ISLAND 5\n Name: ocean_proximity, dtype: int64\n\n\n\nThe above suggests that the values are categorical: there are $5$ categories that define ocean proximity. ML algorithms prefer to work with numerical data, besides all the other attributes are represented using numbers. Keep that in mind, as this suggests that you will need to cast the categorical data as numerical.\n\nFor now, let's have a general overview of the attributes and distribution of their values (note *ocean_proximity* is excluded from this summary):\n\n\n```python\nhousing.describe()\n```\n\n\n\n\n
|       | longitude   | latitude  | housing_median_age | total_rooms  | total_bedrooms | population   | households  | median_income | median_house_value |
|-------|-------------|-----------|--------------------|--------------|----------------|--------------|-------------|---------------|--------------------|
| count | 20640.000000 | 20640.000000 | 20640.000000    | 20640.000000 | 20433.000000   | 20640.000000 | 20640.000000 | 20640.000000 | 20640.000000       |
| mean  | -119.569704 | 35.631861 | 28.639486          | 2635.763081  | 537.870553     | 1425.476744  | 499.539680  | 3.870671      | 206855.816909      |
| std   | 2.003532    | 2.135952  | 12.585558          | 2181.615252  | 421.385070     | 1132.462122  | 382.329753  | 1.899822      | 115395.615874      |
| min   | -124.350000 | 32.540000 | 1.000000           | 2.000000     | 1.000000       | 3.000000     | 1.000000    | 0.499900      | 14999.000000       |
| 25%   | -121.800000 | 33.930000 | 18.000000          | 1447.750000  | 296.000000     | 787.000000   | 280.000000  | 2.563400      | 119600.000000      |
| 50%   | -118.490000 | 34.260000 | 29.000000          | 2127.000000  | 435.000000     | 1166.000000  | 409.000000  | 3.534800      | 179700.000000      |
| 75%   | -118.010000 | 37.710000 | 37.000000          | 3148.000000  | 647.000000     | 1725.000000  | 605.000000  | 4.743250      | 264725.000000      |
| max   | -114.310000 | 41.950000 | 52.000000          | 39320.000000 | 6445.000000    | 35682.000000 | 6082.000000 | 15.000100     | 500001.000000      |
\n\n\n\nTo make sure you understand the structure of the dataset, try answering the following questions: \n- How can you interpret the values in the table above?\n- What do the percentiles (e.g., $25\\%$ or $50\\%$) tell you about the distribution of values in this dataset (you can select one particular attribute to explain)? \n- How are the missing values handled?\n\nRemember that you can always refer to [`Pandas`](https://pandas.pydata.org/pandas-docs/stable/reference/index.html) documentation.\n\nAnother good way to get an overview of the values distribution is to plot histograms. This time, you'll need to use `matplotlib`:\n\n\n```python\n%matplotlib inline \n#so that the plot will be displayed in the notebook\nimport matplotlib.pyplot as plt\n\nhousing.hist(bins=50, figsize=(20,15))\nplt.show()\n```\n\nTwo observations about this graphs are worth making:\n- the *median_income*, *housing_median_age* and the *median_house_value* have been capped by the team that collected the data: that is, the values for the *median_income* are scaled by dividing the income by \\\\$10000 and capped so that they range between $[0.4999, 15.0001]$ with the incomes lower than $0.4999$ and higher than $15.0001$ binned together; similarly, the *housing_median_age* values have been scaled and binned to range between $[1, 52]$ years and the *median_house_value* \u2013 to range between $[14999, 500001]$. Data manipulations like these are not unusual in data science but it's good to be aware of how the data is represented;\n- several other attributes are \"tail heavy\" \u2013 they have a long distribution tail with many decreasingly rare values to the right of the mean. In practice that means that you might consider using the logarithms of these values rather than the absolute values.\n\n## Step 2: Splitting the data into training and test sets\n\nIn this practical, you are working with a dataset that has been collected and thoroughly labelled in the past. Each instance has a predefined set of values and the correct price label assigned to it. After training the ML model on this dataset you hope to be able to predict the prices for new houses, not contained in this dataset, based on their characteristics such as geographical position, median income, number of rooms and so on. How can you check in advance whether your model is good in making such predictions?\n\nThe answer is: you set part of your dataset, called *test set*, aside and use it to evaluate the performance of your model only. You train and tune your model using the rest of the dataset \u2013 *training set* \u2013 and evaluate the performance of the model trained this way on the test set. Since the model doesn't see the test set during training, this perfomance should give you a reasonable estimate of how well it would perform on new data. Traditionally, you split the data into $80\\%$ training and $20\\%$ test set, making sure that the test instances are selected randomly so that you don't end up with some biased selection leading to over-optimistic or over-pessimistic results on your test set.\n\nFor example, you can select your test set as the code below shows. To ensure random selection of the test items, use `np.random.permutation`. However, if you want to ensure that you have a stable test set, and the same test instances get selected from the dataset in a random fashion in different runs of the program, select a random seed, e.g. 
using `np.random.seed(42)`.\n\n\n```python\nimport numpy as np\nnp.random.seed(42)\n\ndef split_train_test(data, test_ratio): \n shuffled_indices = np.random.permutation(len(data))\n test_set_size = int(len(data) * test_ratio)\n test_indices = shuffled_indices[:test_set_size]\n train_indices = shuffled_indices[test_set_size:]\n return data.iloc[train_indices], data.iloc[test_indices]\n\ntrain_set, test_set = split_train_test(housing, 0.2)\nprint(len(train_set), \"training instances +\", len(test_set), \"test instances\")\n```\n\n 16512 training instances + 4128 test instances\n\n\nNote that `scikit-learn` provides a similar functionality to the code above with its `train_test_split` function. Morevoer, you can pass it several datasets with the same number of rows each, and it will split them into training and test sets on the same indices (you might find it useful if you need to pass in a separate DataFrame with labels):\n\n\n```python\nfrom sklearn.model_selection import train_test_split\n\ntrain_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)\nprint(len(train_set), \"training instances +\", len(test_set), \"test instances\")\n```\n\n 16512 training instances + 4128 test instances\n\n\nSo far, you have been selecting your test set using random sampling methods. If your data is representative of the task at hand, this should help ensure that the results of the model testing are informative. However, if your dataset is not very large and the data is skewed on some of the attributes or on the target label (as is often the case with the real-world data), random sampling might introduce a sampling bias. *Stratified sampling* is a technique that helps make sure that the distributions of the instance attributes or labels in the training and the test sets are similar, meaning that the proportion of instances drawn from each *stratum* in the dataset is similar in the training and test data.\n\nSampling bias may express itself both in the distribution of labels and in the distribution of the attribute values. For instance, take a look at the *median_income* attribute value distribution. Suppose for now (and you might find a confirmation to that later in the practical) that this attribute is predictive of the house price, however its values are unevenly distributed across the range of $[0.4999, 15.0001]$ with a very long tail. If random sampling doesn't select enough instances for each *stratum* (each range of incomes) the estimate of the under-represented strata's importance will be biased. \n\nFirst, to limit the number of income categories (strata), particularly at the long tail, let's apply further binning to the income values: e.g., you can divide the income by $1.5$, round up the values using `ceil` to have discrete categories (bins), and merge all the categories greater than $5$ into category $5$. The latter can be achieved using `Pandas`' `where` functionality, keeping the original values when they are smaller than $5$ and converting them to $5$ otherwise:\n\n\n```python\nhousing[\"income_cat\"] = np.ceil(housing[\"median_income\"] / 1.5)\nhousing[\"income_cat\"].where(housing[\"income_cat\"] < 5, 5.0, inplace = True)\n\nhousing[\"income_cat\"].hist()\nplt.show()\n```\n\nNow you have a much smaller number of categories of income, with the instances more evenly distributed, so you can hope to get enough data to represent the tail. 
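As an aside, an equivalent way of creating these categories, not used in the rest of this practical, is `Pandas`' `cut` function, which bins the raw incomes directly; the bin edges below reproduce the same five strata as the `ceil`/`where` recipe above:


```python
housing["income_cat"] = pd.cut(housing["median_income"],
                               bins=[0.0, 1.5, 3.0, 4.5, 6.0, np.inf],
                               labels=[1, 2, 3, 4, 5])
housing["income_cat"].value_counts()
```

Either version can be used; what matters is that every stratum ends up with enough instances to be sampled reliably.
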
Next, let's split the dataset into training and test sets making sure both contain similar proportion of instances from each income category. You can do that using `scikit-learn`'s `StratifiedShuffleSplit` specifying the condition on which the data should be stratified (in this case, income category):\n\n\n```python\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\nsplit = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\nfor train_index, test_index in split.split(housing, housing[\"income_cat\"]):\n strat_train_set = housing.loc[train_index]\n strat_test_set = housing.loc[test_index]\n```\n\nLet's compare the distribution of the income values in the randomly selected train and test sets and the stratified train and test sets against the full dataset. To better understand the effect of random sampling versus stratified sampling, let's also estimate the error that would be introduced in the data by such splits:\n\n\n```python\ntrain_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)\n\ndef income_cat_proportions(data):\n return data[\"income_cat\"].value_counts() / len(data)\n\ncompare_props = pd.DataFrame({\n \"Overall\": income_cat_proportions(housing),\n \"Stratified tr\": income_cat_proportions(strat_train_set),\n \"Random tr\": income_cat_proportions(train_set),\n \"Stratified ts\": income_cat_proportions(strat_test_set),\n \"Random ts\": income_cat_proportions(test_set),\n})\ncompare_props[\"Rand. tr %error\"] = 100 * compare_props[\"Random tr\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Rand. ts %error\"] = 100 * compare_props[\"Random ts\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Strat. tr %error\"] = 100 * compare_props[\"Stratified tr\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Strat. ts %error\"] = 100 * compare_props[\"Stratified ts\"] / compare_props[\"Overall\"] - 100\n\ncompare_props.sort_index()\n```\n\n\n\n\n
| income_cat | Overall | Stratified tr | Random tr | Stratified ts | Random ts | Rand. tr %error | Rand. ts %error | Strat. tr %error | Strat. ts %error |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| 1.0 | 0.039826 | 0.039850 | 0.039729 | 0.039729 | 0.040213 | -0.243309 | 0.973236 | 0.060827 | -0.243309 |\n| 2.0 | 0.318847 | 0.318859 | 0.317466 | 0.318798 | 0.324370 | -0.433065 | 1.732260 | 0.003799 | -0.015195 |\n| 3.0 | 0.350581 | 0.350594 | 0.348595 | 0.350533 | 0.358527 | -0.566611 | 2.266446 | 0.003455 | -0.013820 |\n| 4.0 | 0.176308 | 0.176296 | 0.178537 | 0.176357 | 0.167393 | 1.264084 | -5.056334 | -0.006870 | 0.027480 |\n| 5.0 | 0.114438 | 0.114402 | 0.115673 | 0.114583 | 0.109496 | 1.079594 | -4.318374 | -0.031753 | 0.127011 |
\n\n\n\nAs you can see, the distributions in the stratified training and test sets are much closer to the original distribution of categories as well as being much closer to each other. \n\nNote, that to help you split the data, you had to introduce a new category \u2013 *income_cat* \u2013 which contains the same information as the original attribute *median_income* binned in a different way:\n\n\n```python\nstrat_train_set.info()\n```\n\n \n Int64Index: 16512 entries, 17606 to 15775\n Data columns (total 11 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null float64\n 1 latitude 16512 non-null float64\n 2 housing_median_age 16512 non-null float64\n 3 total_rooms 16512 non-null float64\n 4 total_bedrooms 16354 non-null float64\n 5 population 16512 non-null float64\n 6 households 16512 non-null float64\n 7 median_income 16512 non-null float64\n 8 median_house_value 16512 non-null float64\n 9 ocean_proximity 16512 non-null object \n 10 income_cat 16512 non-null float64\n dtypes: float64(10), object(1)\n memory usage: 1.5+ MB\n\n\nBefore proceeding further let's remove the *income_cat* attribute so the data is back to its original state. Here is how you can do that:\n\n\n```python\nfor set_ in (strat_train_set, strat_test_set):\n set_.drop(\"income_cat\", axis=1, inplace=True)\n\nstrat_train_set.info()\n```\n\n \n Int64Index: 16512 entries, 17606 to 15775\n Data columns (total 10 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null float64\n 1 latitude 16512 non-null float64\n 2 housing_median_age 16512 non-null float64\n 3 total_rooms 16512 non-null float64\n 4 total_bedrooms 16354 non-null float64\n 5 population 16512 non-null float64\n 6 households 16512 non-null float64\n 7 median_income 16512 non-null float64\n 8 median_house_value 16512 non-null float64\n 9 ocean_proximity 16512 non-null object \n dtypes: float64(9), object(1)\n memory usage: 1.4+ MB\n\n\n## Step 3: Exploring the attributes\n\nThe next step is to look more closely into the attributes and gain insights into the data. In particular, you should try to answer the following questions: \n- Which attributes look most informative? \n- How do they correlate with each other and the target label?\n- Is any further normalisation or scaling needed?\n\nThe most informative ways in which you can answer the questions above are by *visualising* the data and by *collecting additional statistics* on the attributes and their relations to each other.\n\nFirst, remember that from now on you're only looking into and gaining insights from the training data. You will use the test data at the evaluation step only, thus ensuring no data leakage between the training and test sets occurs and the results on the test set are a fair evaluation of your algorithm's performance. Let's make a copy of the training set that you can experiment with without a danger of overwriting or changing the original data: \n\n\n```python\nhousing = strat_train_set.copy()\n```\n\n### Visualisations\n\nThe first two attributes describe the geographical position of the houses. Let's apply further visualisations and look into the geographical area that is covered: for that, use a scatter plot plotting longitude against latitude coordinates. 
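\n\nA bare-bones version of such a plot might look as follows (assuming `matplotlib.pyplot` is available as `plt`, as imported earlier in the notebook):\n\n\n```python\n# plain scatter plot of the geographical coordinates, one point per district\nhousing.plot(kind='scatter', x='longitude', y='latitude')\nplt.show()\n```\n\nWith this many districts the points overlap heavily, which makes it hard to see where most of them are concentrated. 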
To make the scatter plot more informative, use `alpha` option to highlight high density points:\n\n\n```python\nhousing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.2)\n```\n\nYou can experiment with `alpha` values to get a better understanding, but it should be obvious from these plots that the areas in the south and along the coast of California are more densely populated (roughly corresponding to the Bay Area, Los Angeles, San Diego, and the Central Valley). \n\nNow, what does geographical position suggest about the housing prices? In the following code, the size of the circles represents the size of the population, and the color represents the price, ranging from blue for low prices to red for high prices (this color scheme is specified by the preselected `cmap` type):\n\n\n```python\nhousing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.5,\n s=housing[\"population\"]/100, label=\"population\", figsize=(10,7), \n c=housing[\"median_house_value\"], cmap=plt.get_cmap(\"jet\"), colorbar=\"True\",\n )\nplt.legend()\n```\n\nThis plot suggests that the housing prices depend on the proximity to the ocean and on the population size. What does this suggest about the informativeness of the attributes for your ML task?\n\n### Correlations\n\nLet's also look into how the attributes correlate with each other:\n\n\n```python\ncorr_matrix = housing.corr()\ncorr_matrix\n```\n\n\n\n\n
|  | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | median_house_value |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| longitude | 1.000000 | -0.924478 | -0.105848 | 0.048871 | 0.076598 | 0.108030 | 0.063070 | -0.019583 | -0.047432 |\n| latitude | -0.924478 | 1.000000 | 0.005766 | -0.039184 | -0.072419 | -0.115222 | -0.077647 | -0.075205 | -0.142724 |\n| housing_median_age | -0.105848 | 0.005766 | 1.000000 | -0.364509 | -0.325047 | -0.298710 | -0.306428 | -0.111360 | 0.114110 |\n| total_rooms | 0.048871 | -0.039184 | -0.364509 | 1.000000 | 0.929379 | 0.855109 | 0.918392 | 0.200087 | 0.135097 |\n| total_bedrooms | 0.076598 | -0.072419 | -0.325047 | 0.929379 | 1.000000 | 0.876320 | 0.980170 | -0.009740 | 0.047689 |\n| population | 0.108030 | -0.115222 | -0.298710 | 0.855109 | 0.876320 | 1.000000 | 0.904637 | 0.002380 | -0.026920 |\n| households | 0.063070 | -0.077647 | -0.306428 | 0.918392 | 0.980170 | 0.904637 | 1.000000 | 0.010781 | 0.064506 |\n| median_income | -0.019583 | -0.075205 | -0.111360 | 0.200087 | -0.009740 | 0.002380 | 0.010781 | 1.000000 | 0.687160 |\n| median_house_value | -0.047432 | -0.142724 | 0.114110 | 0.135097 | 0.047689 | -0.026920 | 0.064506 | 0.687160 | 1.000000 |
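\n\nIf you prefer a visual summary of these numbers, you can also render the correlation matrix as a colour-coded image. This is an optional sketch (it assumes `matplotlib.pyplot` is imported as `plt`):\n\n\n```python\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(figsize=(8, 6))\ncax = ax.matshow(corr_matrix, cmap='coolwarm')   # colour-code the correlation values\nfig.colorbar(cax)\nax.set_xticks(range(len(corr_matrix.columns)))\nax.set_yticks(range(len(corr_matrix.columns)))\nax.set_xticklabels(corr_matrix.columns, rotation=90)\nax.set_yticklabels(corr_matrix.columns)\nplt.show()\n```\n\n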
\n\n\n\nSince you are trying to predict the house value, the last column in this table is the most informative. Let's make the output clearer:\n\n\n```python\ncorr_matrix[\"median_house_value\"].sort_values(ascending=False)\n```\n\n\n\n\n median_house_value 1.000000\n median_income 0.687160\n total_rooms 0.135097\n housing_median_age 0.114110\n households 0.064506\n total_bedrooms 0.047689\n population -0.026920\n longitude -0.047432\n latitude -0.142724\n Name: median_house_value, dtype: float64\n\n\n\nThis makes it clear that the *median_income* is most strongly positively correlated with the price. There is small positive correlation of the price with *total_rooms* and *housing_median_age*, and small negative correlation with *latitude*, which suggests that the prices go up with the increase in income, number of rooms and house age, and go down when you go north. `Pandas`' `scatter_matrix` function allows you to visualise the correlation of attributes with each other (note that since the correlation of an attribute with itself will result in a straight line, `Pandas` uses a histogram instead \u2013 that's what you see along the diagonal):\n\n\n```python\nfrom pandas.plotting import scatter_matrix\n# If the above returns an error, use the following:\n#from pandas.tools.plotting import scatter_matrix\n\nattributes = [\"median_house_value\", \"median_income\", \"total_rooms\", \"housing_median_age\", \"latitude\"]\nscatter_matrix(housing[attributes], figsize=(12,8))\n```\n\nThese plots confirm that the income attribute is the most promising one for predicting house prices, so let's zoom in on this attribute:\n\n\n```python\nhousing.plot(kind=\"scatter\", x=\"median_income\", y=\"median_house_value\", alpha=0.3)\n```\n\nThere are a couple of observations to be made about this plot:\n- The correlation is indeed quite strong: the values follow the upward trend and are not too dispersed otherwise;\n- You can clearly see a line around $500000$ which covers a full range of income values and is due to the fact that the house prices above that value were capped in the original dataset. However, the plot suggests that there are also some other less obvious groups of values, most visible around $350000$ and $450000$, that also cover a range of different income values. Since your ML algorithm will learn to reproduce such data quirks, you might consider looking into these matters further and removing these districts from your dataset (after all, in any real-world application, one can expect a certain amount of noise in the data and clearing the data is one of the steps in any practical application). \n\nThe next thing to notice is that a number of attributes from the original dataset, including *total_rooms*, \t*total_bedrooms* and *population*, do not actually describe each house in particular but rather represent the cumulative counts for *all households* in the block group. At the same time, the task at hand requires you to predict the house price for *each individual household*. In addition, an attribute that measures the proportion of bedrooms against the total number of rooms might be informative. 
Therefore, the following transformed attributes might be more useful for the prediction:\n\n\n```python\nhousing[\"rooms_per_household\"] = housing[\"total_rooms\"] / housing[\"households\"]\nhousing[\"bedrooms_per_household\"] = housing[\"total_bedrooms\"] / housing[\"households\"]\nhousing[\"bedrooms_per_rooms\"] = housing[\"total_bedrooms\"] / housing[\"total_rooms\"]\nhousing[\"population_per_household\"] = housing[\"population\"] / housing[\"households\"]\n```\n\nA good way to check whether these transformations have any effect on the task is to check attributes correlations again:\n\n\n```python\ncorr_matrix = housing.corr()\ncorr_matrix[\"median_house_value\"].sort_values(ascending=False)\n```\n\n\n\n\n median_house_value 1.000000\n median_income 0.687160\n rooms_per_household 0.146285\n total_rooms 0.135097\n housing_median_age 0.114110\n households 0.064506\n total_bedrooms 0.047689\n population_per_household -0.021985\n population -0.026920\n bedrooms_per_household -0.043343\n longitude -0.047432\n latitude -0.142724\n bedrooms_per_rooms -0.259984\n Name: median_house_value, dtype: float64\n\n\n\nYou can see that the number of rooms per household is more strongly correlated with the house price \u2013 the more rooms the more expensive the house, while the proportion of bedrooms is more strongly correlated with the price than either the number of rooms or bedrooms in the household \u2013 since the correlation is negative, the lower the bedroom-to-room ratio, the more expensive the property.\n\n## Step 4: Data preparation and transformations for machine learning algorithms\n\nNow you are almost ready to implement a regression algorithm for the task at hand. However, there are a couple of other things to address, in particular:\n- handle missing values if there are any;\n- convert all attribute values (e.g. categorical, textual) into numerical format;\n- scale / normalise the feature values if necessary.\n\nFirst, let's separate the labels you're trying to predict (*median_house_value*) from the attributes in the dataset that you will use as *features*. The following code will keep a copy of the labels and the rest of the attributes separate (note that `drop()` will create a copy of the data and will not affect `strat_train_set` itself): \n\n\n```python\nhousing = strat_train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n```\n\nYou can add the transformed features that you found useful before with the additional function as shown below. Then you can run `add_features(housing)` to add the features:\n\n\n```python\ndef add_features(data):\n # add the transformed features that you found useful before\n data[\"rooms_per_household\"] = data[\"total_rooms\"] / data[\"households\"]\n data[\"bedrooms_per_household\"] = data[\"total_bedrooms\"] / data[\"households\"]\n data[\"bedrooms_per_rooms\"] = data[\"total_bedrooms\"] / data[\"total_rooms\"]\n data[\"population_per_household\"] = data[\"population\"] / data[\"households\"]\n \n# add_features(housing)\n```\n\nYou will learn shortly about how to implement your own *data transformers* and will be able to re-implement addition of these features as a data transfomer.\n\n### Handling missing values\n\nIn Step 1 above, when you took a quick look into the dataset, you might have noticed that all attributes but one have $20640$ values in the dataset; *total_bedrooms* has $20433$, so some values are missing. 
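\n\nYou can confirm this directly by counting the null entries per column. Note that the working copy `housing` at this point only holds the training portion of the data, so the counts will be smaller than the full-dataset figures quoted above:\n\n\n```python\n# number of missing values in each column of the training data\nhousing.isnull().sum()\n```\n\n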
ML algorithms cannot deal with missing values, so you'll need to decide how to replace these values. There are three possible solutions:\n\n1. remove the corresponding housing blocks from the dataset (i.e., remove the rows in the dataset)\n2. remove the whole attribute (i.e., remove the column)\n3. set the missing values to some predefined value (e.g., zero value, the mean, the median, the most frequent value of the attribute, etc.)\n\nThe following `Pandas` functionality will help you implement each of these options:\n\n\n```python\n## option 1:\n# housing.dropna(subset=[\"total_bedrooms\"])\n## option 2:\n# housing.drop(\"total_bedrooms\", axis=1)\n# option 3:\nmedian = housing[\"total_bedrooms\"].median()\nhousing[\"total_bedrooms\"].fillna(median, inplace=True)\n```\n\nAlthough, all three options are possible, keep in mind that in the first two cases you are throwing away either some valuable attributes (e.g., as you've seen earlier, *bedrooms_per_rooms* correlates well with the label you're trying to predict) or a number of valuable training examples. Option 3, therefore, looks more promising. Note, that for that you estimate a mean or median based on the training set only (as, in general, your ML algorithm has access to the training data only during the training phase), and then store the mean / median values to replace the missing values in the test set (or any new dataset, to that effect). In addition, you might want to calculate and store the mean / median values for all attributes as in a real-life application you can never be sure if any of the attributes will have missing values in the future.\n\nHere is how you can calculate and store median values using `sklearn` (note that you'll need to exclude `ocean_proximity` attribute from this calculation since it has non-numerical values):\n\n\n```python\n# for earlier versions of sklearn use:\n#from sklearn.preprocessing import Imputer \n#imputer = Imputer(strategy=\"median\")\n\nfrom sklearn.impute import SimpleImputer\n\nimputer = SimpleImputer(strategy=\"median\")\nhousing_num = housing.drop(\"ocean_proximity\", axis=1)\nimputer.fit(housing_num)\n```\n\n\n\n\n SimpleImputer(strategy='median')\n\n\n\nYou can check the median values stored in the `imputer` as follows:\n\n\n```python\nimputer.statistics_\n```\n\n\n\n\n array([-118.51 , 34.26 , 29. , 2119.5 , 433. , 1164. ,\n 408. , 3.5409])\n\n\n\nand also make sure that they exactly coincide with the median values for all numerical attributes:\n\n\n```python\nhousing_num.median().values\n```\n\n\n\n\n array([-118.51 , 34.26 , 29. , 2119.5 , 433. , 1164. ,\n 408. , 3.5409])\n\n\n\nFinally, let's replace the missing values in the training data:\n\n\n```python\nX = imputer.transform(housing_num)\nhousing_tr = pd.DataFrame(X, columns=housing_num.columns)\nhousing_tr.info()\n```\n\n \n RangeIndex: 16512 entries, 0 to 16511\n Data columns (total 8 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null float64\n 1 latitude 16512 non-null float64\n 2 housing_median_age 16512 non-null float64\n 3 total_rooms 16512 non-null float64\n 4 total_bedrooms 16512 non-null float64\n 5 population 16512 non-null float64\n 6 households 16512 non-null float64\n 7 median_income 16512 non-null float64\n dtypes: float64(8)\n memory usage: 1.0 MB\n\n\n### Handling textual and categorical attributes\n\nAnother aspect of the dataset that should be handled is the textual / categorical values of the *ocean_proximity* attribute. 
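\n\nBefore encoding it, it is worth taking a quick look at which categories this attribute contains and how often each one occurs, for example:\n\n\n```python\nhousing[\"ocean_proximity\"].value_counts()\n```\n\n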
ML algorithms prefer working with numerical data, so let's use `sklearn`'s functionality and cast the categorical values as numerical values as follows:\n\n\n```python\nfrom sklearn.preprocessing import LabelEncoder\n\nencoder = LabelEncoder()\nhousing_cat_encoded = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_encoded\n```\n\n\n\n\n array([0, 0, 4, ..., 1, 0, 3])\n\n\n\nThe code above mapped the categories to numerical values. You can check what the numerical values correspond to in the original data using:\n\n\n```python\nencoder.classes_\n```\n\n\n\n\n array(['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN'],\n dtype=object)\n\n\n\nOne problem with the encoding above is that the ML algorithm will automatically assume that the numerical values that are close to each other encode similar concepts, which for this data is not quite true: for example, value $0$ corresponding to *$<$1H OCEAN* category is actually most similar to values $3$ and $4$ (*NEAR BAY* and *NEAR OCEAN*) and not to value $1$ (*INLAND*).\n\nAn alternative to this encoding is called *one-hot encoding* and it runs as follows: for each category, it creates a separate binary attribute which is set to $1$ (hot) when the category coincides with the attribute, and $0$ (cold) otherwise. So, for instance, *$<$1H OCEAN* will be encoded as a one-hot vector $[1, 0, 0, 0, 0]$ and *NEAR OCEAN* will be encoded as $[0, 0, 0, 0, 1]$. The following `sklearn`'s functionality allows to convert categorical values into one-hot vectors:\n\n\n```python\nfrom sklearn.preprocessing import OneHotEncoder\n\nencoder = OneHotEncoder()\n# fit_transform expects a 2D array, but housing_cat_encoded is a 1D array.\n# Reshape it using NumPy's reshape functionality where -1 simply means \"unspecified\" dimension \nhousing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))\nhousing_cat_1hot\n```\n\n\n\n\n <16512x5 sparse matrix of type ''\n \twith 16512 stored elements in Compressed Sparse Row format>\n\n\n\nNote that the data format above says that the output is a sparse matrix. This means that the data structure only stores the location of the non-zero elements, rather than the full set of vectors which are mostly full of zeros. You can check the [documentation on sparse matrices](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html) if you'd like to learn more. 
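\n\nAs a side note, in recent versions of `scikit-learn` (0.20 and later) `OneHotEncoder` can be fitted directly on the string categories, so the intermediate `LabelEncoder` step is not strictly necessary. A minimal sketch of this shortcut (the variable name `housing_cat_1hot_direct` is used here only for illustration):\n\n\n```python\ncat_encoder = OneHotEncoder()\n# note the double brackets: the encoder expects a 2D input, i.e. a one-column DataFrame\nhousing_cat_1hot_direct = cat_encoder.fit_transform(housing[[\"ocean_proximity\"]])\ncat_encoder.categories_\n```\n\n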
If you'd like to see how the encoding looks like you can also convert it back into a dense NumPy array using:\n\n\n```python\nhousing_cat_1hot.toarray()\n```\n\n\n\n\n array([[1., 0., 0., 0., 0.],\n [1., 0., 0., 0., 0.],\n [0., 0., 0., 0., 1.],\n ...,\n [0., 1., 0., 0., 0.],\n [1., 0., 0., 0., 0.],\n [0., 0., 0., 1., 0.]])\n\n\n\nThe steps above, including casting text categories to numerical categories and then converting them into 1-hot vectors, can be performed using `sklearn`'s `LabelBinarizer`:\n\n\n```python\nfrom sklearn.preprocessing import LabelBinarizer\n\nencoder = LabelBinarizer()\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\n\n\n\n array([[1, 0, 0, 0, 0],\n [1, 0, 0, 0, 0],\n [0, 0, 0, 0, 1],\n ...,\n [0, 1, 0, 0, 0],\n [1, 0, 0, 0, 0],\n [0, 0, 0, 1, 0]])\n\n\n\nThe above produces dense array as an output, so if you'd like to have a sparse matrix instead you can specify it in the `LabelBinarizer` constructor:\n\n\n```python\nencoder = LabelBinarizer(sparse_output=True)\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\n\n\n\n <16512x5 sparse matrix of type ''\n \twith 16512 stored elements in Compressed Sparse Row format>\n\n\n\n### Data transformers\n\nA useful functionality of `sklearn` is [data transformers](http://scikit-learn.org/stable/data_transforms.html): you will see them used in preprocessing very often. For example, you have just used one to impute the missing values. In addition, you can implement your own custom data transformers. In general, a transformer class needs to implement three methods:\n- a constructor method;\n- a `fit` method that learns parameters (e.g. mean and standard deviation for a normalization transformer) or returns `self`; and\n- a `transform` method that applies the learned transformation to the new data.\n\nWhenever you see `fit_transform` method, it means that the method uses an optimised combination of `fit` and `transform`. 
Here is how you can implement a data transformer that will convert categorical values into 1-hot vectors:\n\n\n```python\nfrom sklearn.base import TransformerMixin # TransformerMixin allows you to use fit_transform method\n\nclass CustomLabelBinarizer(TransformerMixin):\n def __init__(self, *args, **kwargs):\n self.encoder = LabelBinarizer(*args, **kwargs)\n def fit(self, X, y=0):\n self.encoder.fit(X)\n return self\n def transform(self, X, y=0):\n return self.encoder.transform(X)\n```\n\nSimilarly, here is how you can wrap up adding new transformed features like bedroom-to-room ratio with a data transformer:\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin \n# BaseEstimator allows you to drop *args and **kwargs from you constructor\n# and, in addition, allows you to use methods set_params() and get_params()\n\nrooms_id, bedrooms_id, population_id, household_id = 3, 4, 5, 6\n\nclass CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__(self, add_bedrooms_per_rooms = True): # note no *args and **kwargs used this time\n self.add_bedrooms_per_rooms = add_bedrooms_per_rooms\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_id] / X[:, household_id]\n bedrooms_per_household = X[:, bedrooms_id] / X[:, household_id]\n population_per_household = X[:, population_id] / X[:, household_id]\n if self.add_bedrooms_per_rooms:\n bedrooms_per_rooms = X[:, bedrooms_id] / X[:, rooms_id]\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household, bedrooms_per_rooms]\n else:\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household]\n \nattr_adder = CombinedAttributesAdder()\nhousing_extra_attribs = attr_adder.transform(housing.values)\nhousing_extra_attribs\n```\n\n\n\n\n array([[-121.89, 37.29, 38.0, ..., 1.0353982300884956, 2.094395280235988,\n 0.22385204081632654],\n [-121.93, 37.05, 14.0, ..., 0.9557522123893806,\n 2.7079646017699117, 0.15905743740795286],\n [-117.2, 32.77, 31.0, ..., 1.0194805194805194, 2.0259740259740258,\n 0.24129098360655737],\n ...,\n [-116.4, 34.09, 9.0, ..., 1.1398692810457516, 2.742483660130719,\n 0.1796086508753862],\n [-118.01, 33.82, 31.0, ..., 1.0674157303370786, 3.808988764044944,\n 0.19387755102040816],\n [-122.45, 37.77, 52.0, ..., 1.0672926447574336,\n 1.9859154929577465, 0.22035541195476574]], dtype=object)\n\n\n\nIf you'd like to explore the new attributes, you can convert the `housing_extra_attribs` into a `Pandas` DataFrame and apply the functionality as before:\n\n\n```python\nhousing_extra_attribs = pd.DataFrame(housing_extra_attribs, columns=list(housing.columns)+\n [\"rooms_per_household\", \"bedrooms_per_household\", \n \"population_per_household\", \"bedrooms_per_rooms\"])\nhousing_extra_attribs.head()\n```\n\n\n\n\n
|  | longitude | latitude | housing_median_age | total_rooms | total_bedrooms | population | households | median_income | ocean_proximity | rooms_per_household | bedrooms_per_household | population_per_household | bedrooms_per_rooms |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| 0 | -121.89 | 37.29 | 38 | 1568 | 351 | 710 | 339 | 2.7042 | <1H OCEAN | 4.62537 | 1.0354 | 2.0944 | 0.223852 |\n| 1 | -121.93 | 37.05 | 14 | 679 | 108 | 306 | 113 | 6.4214 | <1H OCEAN | 6.00885 | 0.955752 | 2.70796 | 0.159057 |\n| 2 | -117.2 | 32.77 | 31 | 1952 | 471 | 936 | 462 | 2.8621 | NEAR OCEAN | 4.22511 | 1.01948 | 2.02597 | 0.241291 |\n| 3 | -119.61 | 36.31 | 25 | 1847 | 371 | 1460 | 353 | 1.8839 | INLAND | 5.23229 | 1.05099 | 4.13598 | 0.200866 |\n| 4 | -118.59 | 34.23 | 17 | 6592 | 1525 | 4459 | 1463 | 3.0347 | <1H OCEAN | 4.50581 | 1.04238 | 3.04785 | 0.231341 |
\n\n\n\n\n```python\nhousing_extra_attribs.info()\n```\n\n \n RangeIndex: 16512 entries, 0 to 16511\n Data columns (total 13 columns):\n # Column Non-Null Count Dtype \n --- ------ -------------- ----- \n 0 longitude 16512 non-null object\n 1 latitude 16512 non-null object\n 2 housing_median_age 16512 non-null object\n 3 total_rooms 16512 non-null object\n 4 total_bedrooms 16512 non-null object\n 5 population 16512 non-null object\n 6 households 16512 non-null object\n 7 median_income 16512 non-null object\n 8 ocean_proximity 16512 non-null object\n 9 rooms_per_household 16512 non-null object\n 10 bedrooms_per_household 16512 non-null object\n 11 population_per_household 16512 non-null object\n 12 bedrooms_per_rooms 16512 non-null object\n dtypes: object(13)\n memory usage: 1.6+ MB\n\n\n### Feature scaling\n\nFinally, ML algorithms do not typically perform well when the feature values cover significantly different ranges of values. For example, in the dataset at hand, the income ranges from $0.4999$ to $15.0001$, while population ranges from $3$ to $35682$. Taken at the same scale, these values are not directly comparable. The data transformation that should be applied to these values is called *feature scaling*.\n\nOne of the most common ways to scale the data is to apply *min-max scaling* (also often referred to as *normalisaton*). Min-max scaling puts all values on the scale of $[0, 1]$ making the ranges directly comparable. For that, you need to subtract the min from the actual value and divide by the difference between the maximum and minimum values, i.e.:\n\n\\begin{equation}\nf_{scaled} = \\frac{f - F_{min}}{F_{max} - F_{min}}\n\\end{equation}\n\nwhere $f \\in F$ is the actual feature value of a feature type $F$, and $F_{min}$ and $F_{max}$ are the minumum and maximum values for the feature of type $F$.\n\nAnother common approach is *standardisation*, which subtracts the mean value (so the standardised values have a zero mean) and divides by the variance (so the standardised values have unit variance). Standardisation does not impose a specific range on the values and is more robust to the outliers: i.e., a noisy input or an incorrect income value of $100$ (when the rest of the values lie within the range of $[0.4999, 15.0001]$) will introduce a significant skew in the data after min-max scaling. At the same time, standardisation does not bind values to the same range of $[0, 1]$, which might be problematic for some algorithms.\n\n`Scikit-learn` has an implementation for the `MinMaxScaler`, `StandardScaler`, as well as [other scaling approaches](http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler), i.e.:\n\n\n```python\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\n\nscaler = StandardScaler()\nhousing_tr_scaled = scaler.fit_transform(housing_tr)\n```\n\n### Putting all the data transformations together\n\nAnother useful functionality of `sklearn` is pipelines. These allow you to stack several separate transformations together. 
For example, you can apply the numerical transformations such as missing values handling and data scaling as follows:\n\n\n```python\nfrom sklearn.pipeline import Pipeline\n\nnum_pipeline = Pipeline([\n #('imputer', Imputer(strategy=\"median\")),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('std_scaler', StandardScaler()),\n])\n\nhousing_num_tr = num_pipeline.fit_transform(housing_num)\nhousing_num_tr.shape\n```\n\n\n\n\n (16512, 8)\n\n\n\nPipelines are useful because they help combining several steps together, so that the output of one data transformer (e.g., `Imputer`) is passed on as an input to the next one (e.g., `StandardScaler`) and so you don't need to worry about the intermediate steps. Besides, it makes the code look more concise and readable. However:\n- the code above doesn't handle categorical values;\n- we started with `Pandas` DataFrames because they are useful for data uploading and inspection, but the `Pipeline` expects `NumPy` arrays as input, and at the moment, `sklearn`'s `Pipeline` cannot handle `Pandas` DataFrames.\n\nIn fact, there is a way around the two issues above. Let's implement another custom data transformer that will allow you to select specific attributes from a `Pandas` DataFrame:\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin\n\n# Create a class to select numerical or categorical columns \n# since Scikit-Learn doesn't handle DataFrames yet\nclass DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.attribute_names].values\n```\n\nThe transformer above allows you to select a predefined set of attributes from a DataFrame, dropping the rest and converting the selected ones into a `NumPy` array. This is quite useful because now you can select the numerical attributes and apply one set of transformations to them, and then select categorical attributes and apply another set of transformation to them, i.e.:\n\n\n```python\nnum_attribs = list(housing_num)\ncat_attribs = [\"ocean_proximity\"]\n\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n #('imputer', Imputer(strategy=\"median\")),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n```\n\nFinally, to merge the output of the two separate data transformers back together, you can use `sklearn`'s `FeatureUnion` functionality: it runs the two pipelines' `fit` methods and the two `transform` methods in parallel, and then concatenates the output. I.e.:\n\n\n```python\nfrom sklearn.pipeline import FeatureUnion\n\nfull_pipeline = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\n\nhousing = strat_train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n\nhousing_prepared = full_pipeline.fit_transform(housing)\nprint(housing_prepared.shape)\nhousing_prepared\n```\n\n (16512, 17)\n\n\n\n\n\n array([[-1.15604281, 0.77194962, 0.74333089, ..., 0. ,\n 0. , 0. ],\n [-1.17602483, 0.6596948 , -1.1653172 , ..., 0. ,\n 0. , 0. ],\n [ 1.18684903, -1.34218285, 0.18664186, ..., 0. ,\n 0. , 1. ],\n ...,\n [ 1.58648943, -0.72478134, -1.56295222, ..., 0. ,\n 0. , 0. 
],\n [ 0.78221312, -0.85106801, 0.18664186, ..., 0. ,\n 0. , 0. ],\n [-1.43579109, 0.99645926, 1.85670895, ..., 0. ,\n 1. , 0. ]])\n\n\n\n## Step 5: Implementation, evaluation and fine-tuning of a regression model\n\nNow that you've explored and prepared the data, you can implement a regression model to predict the house prices on the test set. \n\n### Training and evaluating the model\n\nLet's train a [Linear Regression](http://scikit-learn.org/stable/modules/linear_model.html) model first. During training, a Linear Regression model tries to find the optimal set of weights $w=(w_{1}, w_{2}, ..., w_{n})$ for the features (attributes) $X=(x_{1}, x_{2}, ..., x_{n})$ by minimising the residual sum of squares between the responses predicted by such linear approximation $Xw$ and the observed responses $y$ in the dataset, i.e. trying to solve:\n\n\\begin{equation}\nmin_{w} ||Xw - y||_{2}^{2}\n\\end{equation}\n\n\n```python\nfrom sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)\n```\n\n\n\n\n LinearRegression()\n\n\n\nFirst, let's try the model on some instances from the training set itself:\n\n\n```python\nsome_data = housing.iloc[:5]\nsome_labels = housing_labels.iloc[:5]\n# note the use of transform, as you'd like to apply already learned (fitted) transformations to the data\nsome_data_prepared = full_pipeline.transform(some_data)\n\nprint(\"Predictions:\", list(lin_reg.predict(some_data_prepared)))\nprint(\"Actual labels:\", list(some_labels))\n```\n\n Predictions: [209255.56837821114, 316024.2524890248, 209614.66475986046, 58638.55109778934, 186723.33486566014]\n Actual labels: [286600.0, 340600.0, 196900.0, 46300.0, 254500.0]\n\n\nThe above shows that the model is able to predict some price values, however they don't seem to be very accurate. How can you measure the performance of your model in a more comprehensive way?\n\nTypically, the output of the regression model is measured in terms of the error in prediction. There are two error measures that are commonly used. *Root Mean Square Error (RMSE)* measures the average deviation of the model's prediction from the actual label, but note that it gives a higher weight for large errors:\n\n\\begin{equation}\nRMSE(X, h) = \\sqrt{\\frac{1}{m} \\sum_{i=1}^{m} (h(x^{(i)}) - y^{(i)})^{2}}\n\\end{equation}\n\nwhere $m$ is the number of instances, $h$ is the model (hypothesis), $X$ is the matrix containing all feature values, $x^{(i)}$ is the feature vector describing instance $i$, and $y^{(i)}$ is the actual label for instance $i$.\n\nBecause *RMSE* is highly influenced by the outliers (i.e., large errors), in some situations *Mean Absolute Error (MAE)* is preferred. You may note that its estimation is somewhat similar to the estimation of *RMSE*:\n\n\\begin{equation}\nMAE(X, h) = \\frac{1}{m} \\sum_{i=1}^{m} |h(x^{(i)}) - y^{(i)}|\n\\end{equation}\n\nLet's measure the performance of the linear regression model using these error estimations:\n\n\n```python\nfrom sklearn.metrics import mean_squared_error\n\nhousing_predictions = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse\n```\n\n\n\n\n 68226.59728659761\n\n\n\nGiven that the majority of the districts' housing values lie somewhere between $[\\$100000, \\$300000]$ an estimation error of over \\\\$68000 is very high. 
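\n\nFor completeness, you can also report the *MAE* defined above; `scikit-learn` provides `mean_absolute_error` for this, applied here to the same predictions:\n\n\n```python\nfrom sklearn.metrics import mean_absolute_error\n\nlin_mae = mean_absolute_error(housing_labels, housing_predictions)\nlin_mae\n```\n\n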
This shows that the regression model *underfits* the training data: it doesn't capture the patterns in the training data well enough because it lacks the descriptive power either due to the features not providing enough information to make a good prediction or due to the model itself being not complex enough. The ways to fix this include:\n- using more features and/or more informative features, for example applying log to some of the existing features to address the long tail distributions;\n- using more complex models;\n- reducing the constraints on the model.\n\nThe model that you used above is not constrained (or, *regularised* \u2013 more on this in later lectures), so you should try using more powerful models or work on the feature set.\n\nFor example, *polynomial regression* models the relationship between the $X$ and $y$ as an $n$-th degree polynomial. Polynomial regression extends simple linear regression by constructing polynomial features from the existing ones. For simplicity, assume that your data has only $2$ features rather than $8$, i.e. $X=[x_{1}, x_{2}]$. The linear regression model above tries to learn the coefficients (weights) $w=[w_{0}, w_{1}, w_{3}]$ for the linear prediction (a plane) $\\hat{y} = w_{0} + w_{1}x_{1} + w_{2}x_{2}$ that minimises the residual sum of squares between the prediction and actual label as you've seen above. \n\nIf you want to fit a paraboloid to the data instead of a plane, you can combine the features in second-order polynomials, so that the model looks like this: \n\n\\begin{equation}\n\\hat{y} = w_{0} + w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{1}x_{2} + w_{4}x_{1}^2 + w_{5}x_{2}^2\n\\end{equation}\n\nThis time, the model tries to learn an optimal set of weights $w=[w_{0}, ..., w_{5}]$ (note that $w_{0}$ is called an intercept).\n\nNote that polynomial regression still employs a linear model. For instance, you can define a new variable $z = [x_1, x_2, x_1x_2, x_1^2, x_2^2]$ and rewrite the polynomial above as:\n\n\\begin{equation}\n\\hat{y} = w_{0} + w_{1}z_{0} + w_{2}z_{1} + w_{3}z_{2} + w_{4}z_{3} + w_{5}z_{4}\n\\end{equation}\n\nFor that reason, the polynomial regression in `sklearn` is addressed at the `preprocessing` steps \u2013 that is, first the second-order polynomials are estimated on the features, and then the same `LinearRegression` model as above is applied. For instance, use a second- and third-order polynomials and compare the results (feel free to use higher order polynomials, though keep in mind that as the complexity of the model increases, so does the processing time, the number of weights to be learned, and the chance that the model *overfits* to the training data). For more information, refer to `sklearn` [documentation](http://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html):\n\n\n```python\nfrom sklearn.preprocessing import PolynomialFeatures\n\nmodel = Pipeline([('poly', PolynomialFeatures(degree=3)),\n ('linear', LinearRegression())])\n\nmodel = model.fit(housing_prepared, housing_labels)\nhousing_predictions = model.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse\n```\n\n\n\n\n 51339.09311598264\n\n\n\nHow does the performance of the polynomial regression model compare to the first-order linear regression? You see that the performance improves as the complexity of the feature space increases. 
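\n\nIf you want to carry out the comparison suggested above, a second-order model can be built in exactly the same way; only the `degree` parameter changes. Here is a sketch (separate variable names are used so that the third-order results above are not overwritten):\n\n\n```python\nmodel_deg2 = Pipeline([('poly', PolynomialFeatures(degree=2)),\n                       ('linear', LinearRegression())])\n\nmodel_deg2 = model_deg2.fit(housing_prepared, housing_labels)\nhousing_predictions_deg2 = model_deg2.predict(housing_prepared)\nnp.sqrt(mean_squared_error(housing_labels, housing_predictions_deg2))\n```\n\nOn the training data you would expect its RMSE to fall between that of the plain linear model and that of the third-order model, since the corresponding feature sets are nested. 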
However, note that the more complex the model becomes, the more accurately it learns to replicate the training data, and the less likely it is to generalise to new patterns, e.g. to those in the test data. This phenomenon of learning to replicate the patterns from the training data too closely is called *overfitting*, and it is the opposite of *underfitting*, where the model does not learn enough about the patterns in the training data due to its simplicity.\n\nJust to give you a flavour of the problem, here is an example of a complex model from the `sklearn` suite called `DecisionTreeRegressor` (Decision Trees are outside of the scope of this course, so don't worry if this looks unfamiliar to you. `sklearn` has implementations for a wide range of ML algorithms, so do check the [documentation](http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html) if you want to learn more). Note that the `DecisionTreeRegressor` learns to predict the values in the training data perfectly well (resulting in an error of $0$!), which usually means that it won't work well on new data \u2013 e.g., check this later on the test data:\n\n\n```python\nfrom sklearn.tree import DecisionTreeRegressor\n\ntree_reg = DecisionTreeRegressor()\ntree_reg = tree_reg.fit(housing_prepared, housing_labels)\nhousing_predictions = tree_reg.predict(housing_prepared)\ntree_mse = mean_squared_error(housing_labels, housing_predictions)\n# take the square root to report RMSE rather than MSE\ntree_rmse = np.sqrt(tree_mse)\ntree_rmse\n```\n\n\n\n\n    0.0\n\n\n\n### Learning to better evaluate your model using cross-validation\n\nObviously, one of the problems with the overfitting above is caused by the fact that you're training and testing on the same (training) set (remember that you should do all model tuning and optimisation on the training data, and only then apply the best model to the test data). So how can you measure the level of overfitting *before* you apply this model to the test data?\n\nThere are two possible solutions. You can either reapply the `train_test_split` function from Step 2 to set aside part of the training set as a *development* (or *validation*) set, and then train the model on the smaller training set and tune it on the development set, before applying your best model to the test set. Or you can use *cross-validation*.\n\nWith the *K-fold cross-validation* strategy, the training data gets randomly split into $k$ distinct subsets (*folds*). The model then gets trained $k$ times, in each run being tested on a different fold and trained on the remaining $k-1$ folds (in the code below, $k=10$). That way, the algorithm is evaluated on each data point in the training set, but during training it is not exposed to the data points that it gets tested on later. 
The result is an array of $10$ evaluation scores, which can be averaged for better understanding and model comparison, i.e.:\n\n\n```python\nfrom sklearn.model_selection import cross_val_score\n \ndef analyse_cv(model): \n scores = cross_val_score(model, housing_prepared, housing_labels,\n scoring = \"neg_mean_squared_error\", cv=10)\n\n # cross-validation expects utility function (greater is better)\n # rather than cost function (lower is better), so the scores returned\n # are negative as they are the opposite of MSE\n sqrt_scores = np.sqrt(-scores) \n print(\"Scores:\", sqrt_scores)\n print(\"Mean:\", sqrt_scores.mean())\n print(\"Standard deviation:\", sqrt_scores.std())\n \nanalyse_cv(tree_reg)\n```\n\n Scores: [71302.97621239 68039.17701236 72733.72316834 71776.24020398\n 70702.05438268 74411.24933951 71645.9789824 70345.21765236\n 77351.99235982 70137.6290673 ]\n Mean: 71844.62383811326\n Standard deviation: 2428.901622738753\n\n\nThis shows that the `DecisionTreeRegression` model does not actually perform well when tested on a set different from the one it was trained on. What about the other models? E.g.:\n\n\n```python\nanalyse_cv(lin_reg)\n```\n\n Scores: [66400.11538513 66561.82084573 67510.6874652 74900.77582974\n 67509.87374136 70884.73634886 64791.38470292 68141.40160344\n 70934.13138413 67393.71765602]\n Mean: 68502.86449625254\n Standard deviation: 2789.502396552837\n\n\nLet's try one more model \u2013 [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) that implements many Decision Trees (similar to above) on random subsets of the features. This type of models are called *ensemble learning* models and they are very powerful because they benefit from combining the decisions of multiple algorithms:\n\n\n```python\nfrom sklearn.ensemble import RandomForestRegressor\n\nforest_reg = RandomForestRegressor()\nanalyse_cv(forest_reg)\n```\n\n Scores: [50064.14969512 47706.34684052 50239.42228771 52645.72403144\n 49822.38774802 53567.01564887 49201.06035738 47859.43987529\n 53254.23210646 50436.53102518]\n Mean: 50479.630961598814\n Standard deviation: 1969.2058038348039\n\n\n### Fine-tuning the model\n\nSome learning algorithms have *hyperparameters* \u2013 the parameters of the algorithms that should be set up prior to training and don't get changed during training. Such hyperparameters are usually specified for the `sklearn` algorithms in brackets, so you can always check the list of parameters specified in the documentation. For example, whether the [`LinearRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) model should calculate the intercept or not should be set prior to training and does not depend on the training itself, and so does the number of helper algorithms (decision trees) that should be combined in a [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) for the final prediction. `RandomForestRegressor` has $16$ parameters, so if you want to find the *best* setting of the hyperparametes for `RandomForestRegressor`, it will take you a long time to try out all possible combinations.\n\nThe code below shows you how the best hyperparameter setting can be automatically found for an `sklearn` ML algorithm using a `GridSearch` functionality. 
Let's use the example of `RandomForestRegressor` and focus on specific hyperparameters: the number of helper algorithms (decision trees in the forest, or `n_estimators`) and the number of features the regressor considers in order to find the most informative subsets of instances to each of the helper algorithms (`max_features`):\n\n\n```python\nfrom sklearn.model_selection import GridSearchCV\n\n# specify the range of hyperparameter values for the grid search to try out \nparam_grid = {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}\n\nforest_reg = RandomForestRegressor()\ngrid_search = GridSearchCV(forest_reg, param_grid, cv=5,\n scoring=\"neg_mean_squared_error\")\ngrid_search.fit(housing_prepared, housing_labels)\n\ngrid_search.best_params_\n```\n\n\n\n\n {'max_features': 6, 'n_estimators': 30}\n\n\n\nYou can also monitor the intermediate results as shown below. Note also that if the best results are achieved with the maximum value for each of the parameters specified for exploration, you might want to keep experimenting with even higher values to see if the results improve any further:\n\n\n```python\ncv_results = grid_search.cv_results_\nfor mean_score, params in zip(cv_results[\"mean_test_score\"], cv_results[\"params\"]):\n print(np.sqrt(-mean_score), params)\n```\n\n 65442.56255758722 {'max_features': 2, 'n_estimators': 3}\n 56419.15564006979 {'max_features': 2, 'n_estimators': 10}\n 53740.7564750999 {'max_features': 2, 'n_estimators': 30}\n 60534.30928224138 {'max_features': 4, 'n_estimators': 3}\n 53328.88679847037 {'max_features': 4, 'n_estimators': 10}\n 50942.234922637035 {'max_features': 4, 'n_estimators': 30}\n 58855.46860401188 {'max_features': 6, 'n_estimators': 3}\n 52567.16946461185 {'max_features': 6, 'n_estimators': 10}\n 50345.67100807773 {'max_features': 6, 'n_estimators': 30}\n 58967.3949333711 {'max_features': 8, 'n_estimators': 3}\n 52212.05654437531 {'max_features': 8, 'n_estimators': 10}\n 50345.45130515203 {'max_features': 8, 'n_estimators': 30}\n\n\nOne more insight you can gain from the best estimator is the importance of each feature (expressed in the weight the best estimator learned to assign to each of the features). 
Here is how you can do that:\n\n\n```python\nfeature_importances = grid_search.best_estimator_.feature_importances_\nfeature_importances\n```\n\n\n\n\n array([7.03704835e-02, 6.35046538e-02, 4.16614744e-02, 1.61778280e-02,\n 1.35451741e-02, 1.43944952e-02, 1.30420492e-02, 3.47772051e-01,\n 6.10278343e-02, 2.08328874e-02, 1.05853691e-01, 5.39136792e-02,\n 7.20488240e-03, 1.64644734e-01, 3.46425049e-05, 3.23336373e-03,\n 2.78607657e-03])\n\n\n\nIf you also want to display the feature names, you can do that as follows:\n\n\n```python\nextra_attribs = ['rooms_per_household', 'bedrooms_per_household', 'population_per_household', 'bedrooms_per_rooms']\ncat_one_hot_attribs = ['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN']\nattributes = num_attribs + extra_attribs + cat_one_hot_attribs\nsorted(zip(feature_importances, attributes), reverse=True)\n```\n\n\n\n\n [(0.347772051379105, 'median_income'),\n (0.16464473359058016, 'INLAND'),\n (0.10585369057933745, 'population_per_household'),\n (0.07037048351061218, 'longitude'),\n (0.06350465384430061, 'latitude'),\n (0.061027834274864405, 'rooms_per_household'),\n (0.05391367920754573, 'bedrooms_per_rooms'),\n (0.04166147439601101, 'housing_median_age'),\n (0.02083288739395782, 'bedrooms_per_household'),\n (0.016177828017537574, 'total_rooms'),\n (0.014394495230388975, 'population'),\n (0.013545174129594096, 'total_bedrooms'),\n (0.01304204924214857, 'households'),\n (0.0072048823952477635, '<1H OCEAN'),\n (0.0032333637318216315, 'NEAR BAY'),\n (0.0027860765720088385, 'NEAR OCEAN'),\n (3.464250493825699e-05, 'ISLAND')]\n\n\n\nHow do these compare with the insights you gained earlier (e.g., during data exploration in Step 1, or during attribute exporation in Step 3)?\n\n\n### At last, evaluating your best model on the test set!\n\nFinally, let's take the best model you built and tuned on the training set and apply in to the test set:\n\n\n```python\nfinal_model = grid_search.best_estimator_\n\nX_test = strat_test_set.drop(\"median_house_value\", axis=1)\ny_test = strat_test_set[\"median_house_value\"].copy()\n\nX_test_prepared = full_pipeline.transform(X_test)\nfinal_predictions = final_model.predict(X_test_prepared)\n\nfinal_mse = mean_squared_error(y_test, final_predictions)\nfinal_rmse = np.sqrt(final_mse)\n\nfinal_rmse\n```\n\n\n\n\n 48120.666286373504\n\n\n\n# Assignments\n\n**For the tick session**:\n\n## 1. \nFamiliarise yourself with the code in this practical. 
During the tick session, be prepared to discuss the different steps and answer questions (as well as ask questions yourself).\n\n## 2.\nExperiment with the different steps in the ML pipeline:\n- try dropping less informative features from the feature set and test whether it improves performance\n\n\n```python\ndef analyse_cv_new(model, housing_prepared, housing_labels): \n scores = cross_val_score(model, housing_prepared, housing_labels,\n scoring = \"neg_mean_squared_error\", cv=10)\n\n # cross-validation expects utility function (greater is better)\n # rather than cost function (lower is better), so the scores returned\n # are negative as they are the opposite of MSE\n sqrt_scores = np.sqrt(-scores) \n print(\"Scores:\", sqrt_scores)\n print(\"Mean:\", sqrt_scores.mean())\n print(\"Standard deviation:\", sqrt_scores.std())\n\nanalyse_cv_new(lin_reg, housing_prepared, housing_labels)\n```\n\n Scores: [66400.11538513 66561.82084573 67510.6874652 74900.77582974\n 67509.87374136 70884.73634886 64791.38470292 68141.40160344\n 70934.13138413 67393.71765602]\n Mean: 68502.86449625254\n Standard deviation: 2789.502396552837\n\n\n\n```python\ntotal_bedrooms_id, population_id, households_id = 4, 5, 6\nhousing_prepared_new = np.delete(housing_prepared, [total_bedrooms_id, population_id, households_id], axis=1)\nanalyse_cv_new(lin_reg, housing_prepared_new, housing_labels)\n```\n\n Scores: [68341.00047502 70016.22953188 71567.93792938 72810.29800432\n 71143.49586681 73744.20325876 67859.34345896 71423.46352447\n 73943.76494902 70905.890316 ]\n Mean: 71175.56273146225\n Standard deviation: 1939.0329703057362\n\n\n- use other options in preprocessing: e.g., different imputer strategies, min-max rather than standardisation for scaling, feature scaling vs. no feature scaling, and compare the results\n\n\n```python\nnum_pipeline_new = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', SimpleImputer(strategy=\"mean\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n\nfull_pipeline_new = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline_new),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\nhousing_prepared_new = full_pipeline_new.fit_transform(housing)\nanalyse_cv_new(lin_reg, housing_prepared_new, housing_labels)\n```\n\n Scores: [66516.75393928 66631.06952871 67615.86467157 74913.20313627\n 67534.23635142 70940.21464109 65046.37854943 68166.47297387\n 71029.59283166 67433.19883791]\n Mean: 68582.69854612317\n Standard deviation: 2751.9381938658416\n\n\n\n```python\nnum_pipeline_new = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', SimpleImputer(strategy=\"mean\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', MinMaxScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n\nfull_pipeline_new = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline_new),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\nhousing_prepared_new = full_pipeline_new.fit_transform(housing)\nanalyse_cv_new(lin_reg, housing_prepared_new, housing_labels)\n```\n\n Scores: [66516.75393928 66631.06952871 67615.86467157 74913.20313627\n 67534.23635142 70940.21464109 65046.37854943 68166.47297387\n 71029.59283166 67433.19883791]\n Mean: 68582.69854612318\n Standard deviation: 
2751.9381938658476\n\n\n\n```python\nnum_pipeline_new = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', SimpleImputer(strategy=\"mean\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n\nfull_pipeline_new = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline_new),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\nhousing_prepared_new = full_pipeline_new.fit_transform(housing)\nanalyse_cv_new(lin_reg, housing_prepared_new, housing_labels)\n```\n\n Scores: [66516.75393928 66631.06952871 67615.86467157 74913.20313627\n 67534.23635142 70940.21464109 65046.37854943 68166.47297387\n 71029.59283166 67433.19883791]\n Mean: 68582.69854612331\n Standard deviation: 2751.938193866012\n\n\n\n```python\nnum_pipeline_new = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', SimpleImputer(strategy=\"most_frequent\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n\nfull_pipeline_new = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline_new),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\nhousing_prepared_new = full_pipeline_new.fit_transform(housing)\nanalyse_cv_new(lin_reg, housing_prepared_new, housing_labels)\n```\n\n Scores: [66264.25432041 66519.31819408 67785.23415589 74900.96620437\n 67476.65729311 70830.92177914 64388.85775551 68130.08856578\n 70811.60742081 67368.98562333]\n Mean: 68447.68913124381\n Standard deviation: 2837.5874876297885\n\n\n- evaluate the performance of the simple linear regression model on the test set. What is the `final_rmse` for this model?\n\n\n```python\n# final_model = grid_search.best_estimator_\nfinal_rmse\n```\n\n\n\n\n 48120.666286373504\n\n\n\n\n```python\nX_test = strat_test_set.drop(\"median_house_value\", axis=1)\ny_test = strat_test_set[\"median_house_value\"].copy()\n\nX_test_prepared = full_pipeline.transform(X_test)\nlin_reg_predictions = lin_reg.predict(X_test_prepared)\n\nlin_reg_mse = mean_squared_error(y_test, lin_reg_predictions)\nlin_reg_rmse = np.sqrt(lin_reg_mse)\n\nlin_reg_rmse\n```\n\n\n\n\n 66947.71053632068\n\n\n\n- estimate different feature importance weights with the simple linear regression model (if unsure how to extract the feature weights, check [documentation](http://scikit-learn.org/stable/modules/linear_model.html)). 
How do these compare to the (1) feature importance weights with the best estimator, and (2) feature correlation scores with the target value from Step 3?\n\n\n```python\nlin_reg_feature_importances = lin_reg.coef_\nsorted(zip(lin_reg_feature_importances, attributes), key=lambda importance: abs(importance[0]), reverse=True)\n```\n\n\n\n\n [(111141.19494268733, 'ISLAND'),\n (73190.50276486577, 'median_income'),\n (-57129.13357507744, 'latitude'),\n (-56098.57475829553, 'longitude'),\n (-54728.951389406924, 'INLAND'),\n (-46450.15548364255, 'population'),\n (45746.7921736657, 'households'),\n (31271.586491276645, 'rooms_per_household'),\n (-24833.115865396365, 'bedrooms_per_household'),\n (-22903.26946272087, 'NEAR BAY'),\n (22817.9599524631, 'bedrooms_per_rooms'),\n (-18442.4266798994, '<1H OCEAN'),\n (-15066.547410660214, 'NEAR OCEAN'),\n (14043.79310322378, 'housing_median_age'),\n (6873.222029285127, 'total_bedrooms'),\n (1088.019155080726, 'population_per_household'),\n (-1037.0634223499717, 'total_rooms')]\n\n\n\n\n```python\n# feature_importances = grid_search.best_estimator_.feature_importances_\nsorted(zip(feature_importances, attributes), reverse=True) \n```\n\n\n\n\n [(0.347772051379105, 'median_income'),\n (0.16464473359058016, 'INLAND'),\n (0.10585369057933745, 'population_per_household'),\n (0.07037048351061218, 'longitude'),\n (0.06350465384430061, 'latitude'),\n (0.061027834274864405, 'rooms_per_household'),\n (0.05391367920754573, 'bedrooms_per_rooms'),\n (0.04166147439601101, 'housing_median_age'),\n (0.02083288739395782, 'bedrooms_per_household'),\n (0.016177828017537574, 'total_rooms'),\n (0.014394495230388975, 'population'),\n (0.013545174129594096, 'total_bedrooms'),\n (0.01304204924214857, 'households'),\n (0.0072048823952477635, '<1H OCEAN'),\n (0.0032333637318216315, 'NEAR BAY'),\n (0.0027860765720088385, 'NEAR OCEAN'),\n (3.464250493825699e-05, 'ISLAND')]\n\n\n\n\n```python\n# corr_matrix = housing.corr()\ncorr_matrix[\"median_house_value\"].sort_values(ascending=False)\n```\n\n\n\n\n median_house_value 1.000000\n median_income 0.687160\n rooms_per_household 0.146285\n total_rooms 0.135097\n housing_median_age 0.114110\n households 0.064506\n total_bedrooms 0.047689\n population_per_household -0.021985\n population -0.026920\n bedrooms_per_household -0.043343\n longitude -0.047432\n latitude -0.142724\n bedrooms_per_rooms -0.259984\n Name: median_house_value, dtype: float64\n\n\n\n- [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), as opposed to the `GridSearchCV` used in the practical, does not try out each parameter values combination. Instead it only tries a fixed number of parameter settings sampled from the specified distributions. As a result, it allows you to try out a wider range of parameter values in a less expensive way than `GridSearchCV`. 
Apply `RandomizedSearchCV` and compare the best estimator results.\n\n\n```python\nfrom sklearn.model_selection import RandomizedSearchCV\n\nparam = {'n_estimators': range(1, 50), 'max_features': range(1, 10)}\n\nrandom_search= RandomizedSearchCV(forest_reg, param, n_iter=10, cv=5, scoring=\"neg_mean_squared_error\")\nrandom_search.fit(housing_prepared, housing_labels)\n\nrandom_search.best_params_\n```\n\n\n\n\n {'n_estimators': 42, 'max_features': 7}\n\n\n\n\n```python\n# grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring=\"neg_mean_squared_error\")\ngrid_search.best_params_\n```\n\n\n\n\n {'max_features': 6, 'n_estimators': 30}\n\n\n\nFinally, if you want to have more practice with regression tasks, you can **work on the following optional task**:\n\n## 3. (Optional)\n\nUse the bike sharing dataset (`./bike_sharing/bike_hour.csv`, check `./bike_sharing/Readme.txt` for the description), apply the ML steps and gain insights from the data. What data transformations should be applied? Which attributes are most predictive? What additional attributes can be introduced? Which regression model performs best?\n\n\n```python\n\n```\n", "meta": {"hexsha": "ee643ff331d3592f7757f518a78c2e9ceefb36ab", "size": 790049, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Practical 1 - Linear Regression/DSPNP_notebook1.ipynb", "max_stars_repo_name": "VictorZXY/datasci-pnp-practicals", "max_stars_repo_head_hexsha": "0913c887a17c25e4995067eaf29bb8f278f270d3", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Practical 1 - Linear Regression/DSPNP_notebook1.ipynb", "max_issues_repo_name": "VictorZXY/datasci-pnp-practicals", "max_issues_repo_head_hexsha": "0913c887a17c25e4995067eaf29bb8f278f270d3", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Practical 1 - Linear Regression/DSPNP_notebook1.ipynb", "max_forks_repo_name": "VictorZXY/datasci-pnp-practicals", "max_forks_repo_head_hexsha": "0913c887a17c25e4995067eaf29bb8f278f270d3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 230.6712408759, "max_line_length": 333912, "alphanum_fraction": 0.9001985953, "converted": true, "num_tokens": 23532, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.476579651063676, "lm_q2_score": 0.19193279569159502, "lm_q1q2_score": 0.09147126479837617}} {"text": "##### Copyright 2021 The TF-Agents Authors.\n\n\n```\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n```\n\n# RL \u548c\u6df1\u5ea6 Q \u7f51\u7edc\u7b80\u4ecb\n\n\n \n \n \n \n
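\n\nAs a possible starting point for the optional bike-sharing task above, the sketch below loads the hourly data and cross-validates a baseline model. It is only a sketch: the column names (`cnt`, `casual`, `registered`, `instant`, `dteday`) are assumptions based on the usual UCI bike-sharing layout, so check `./bike_sharing/Readme.txt` and the CSV header before relying on them.\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split, cross_val_score\nfrom sklearn.ensemble import RandomForestRegressor\n\nbikes = pd.read_csv(\"./bike_sharing/bike_hour.csv\")\n\n# 'cnt' is the hourly rental count to predict; 'casual' and 'registered' sum to 'cnt',\n# so they would leak the target and are dropped together with the row identifiers.\ny = bikes[\"cnt\"]\nX = bikes.drop(columns=[\"cnt\", \"casual\", \"registered\", \"instant\", \"dteday\"])\n\n# keep a held-out test set aside for the final evaluation\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\nforest = RandomForestRegressor(n_estimators=30, random_state=42)\nscores = cross_val_score(forest, X_train, y_train,\n                         scoring=\"neg_mean_squared_error\", cv=5)\nprint(\"CV RMSE:\", np.sqrt(-scores).mean())\n```\n\nCategorical attributes such as `season` and `weathersit` are integer-coded in this file, so one-hot encoding them (as the housing `cat_pipeline` does) is a natural first transformation to compare against this baseline.\n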
\u5728 TensorFlow.org \u4e0a\u67e5\u770b\n \u5728 Google Colab \u8fd0\u884c\n \u5728 Github \u4e0a\u67e5\u770b\u6e90\u4ee3\u7801\n \u4e0b\u8f7d\u7b14\u8bb0\u672c
\n\n## Introduction\n\nReinforcement Learning (RL) is a general framework where agents learn to perform actions in an environment so as to maximize a reward. The two main components are the environment, which represents the problem to be solved, and the agent, which represents the learning algorithm.\n\nThe agent and environment continuously interact with each other: at each time step the agent takes an action on the environment based on its *policy* $\\pi(a_t|s_t)$, where $s_t$ is the current observation from the environment, and receives a reward $r_{t+1}$ and the next observation $s_{t+1}$ from the environment. The goal is to improve the policy so as to maximize the sum of rewards (the return).\n\nNote: it is important to distinguish between the `state` of the environment and the `observation`, which is the part of the environment `state` that the agent can see. For example, in a poker game the environment state consists of the cards belonging to all the players and the community cards, but the agent can observe only its own cards and a few community cards. In most literature these terms are used interchangeably, and the observation is also denoted $s$.\n\n\n\nThis is a very general framework and can model a variety of sequential decision-making problems such as games, robotics, etc.\n\n\n## The Cartpole Environment\n\nThe Cartpole environment is one of the most well known classic reinforcement learning problems (the *\"Hello, World!\"* of RL). A pole is attached to a cart, which can move along a frictionless track. The pole starts upright and the goal is to prevent it from falling over by controlling the cart.\n\n- The observation from the environment $s_t$ is a 4D vector representing the position and velocity of the cart, and the angle and angular velocity of the pole.\n- The agent can control the system by taking one of two actions $a_t$: push the cart right (+1) or left (-1).\n- A reward $r_{t+1} = 1$ is provided for every time step that the pole remains upright. The episode ends when any of the following is true:\n    - the pole tips over beyond some angle limit\n    - the cart moves outside the edges of the world\n    - 200 time steps pass.\n\nThe goal of the agent is to learn a policy $\\pi(a_t|s_t)$ that maximizes the sum of rewards in an episode, $\\sum_{t=0}^{T} \\gamma^t r_t$. Here $\\gamma$ 
is a discount factor in $[0, 1]$ that discounts future rewards relative to immediate rewards. This parameter helps us focus the policy, making it care more about obtaining rewards quickly.\n\n\n## The DQN Agent\n\nThe [DQN (Deep Q-Network) algorithm](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) was developed by DeepMind in 2015. By combining reinforcement learning with deep neural networks at scale, it was able to solve a wide range of Atari games, some of them to a superhuman level. The algorithm was developed by enhancing a classic RL algorithm called Q-Learning with deep neural networks and a technique called *experience replay*.\n\n### Q-Learning\n\nQ-Learning is based on the notion of a Q-function. The Q-function (a.k.a. the state-action value function) of a policy $\\pi$, $Q^{\\pi}(s, a)$, measures the expected return, or discounted sum of rewards, obtained from state $s$ by taking action $a$ first and following policy $\\pi$ thereafter. We define the optimal Q-function $Q^*(s, a)$ as the maximum return that can be obtained starting from observation $s$, taking action $a$ and following the optimal policy thereafter. The optimal Q-function obeys the following *Bellman* optimality equation:\n\n$\\begin{equation}Q^\\ast(s, a) = \\mathbb{E}[ r + \\gamma \\max_{a'} Q^\\ast(s', a') ]\\end{equation}$\n\nThis means that the maximum return from state $s$ and action $a$ is the sum of the immediate reward $r$ and the return (discounted by $\\gamma$) obtained by following the optimal policy thereafter until the end of the episode (i.e., the maximum reward from the next state $s'$). The expectation is computed over the distribution of immediate rewards $r$ and of the possible next states $s'$.\n\nThe basic idea behind Q-Learning is to use the Bellman optimality equation as an iterative update, $Q_{i+1}(s, a) \\leftarrow \\mathbb{E}\\left[ r + \\gamma \\max_{a'} Q_{i}(s', a')\\right]$, and it can be shown that this converges to the optimal $Q$-function, i.e. $Q_i \\rightarrow Q^*$ as $i \\rightarrow \\infty$ (see the [DQN paper](https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf)).\n
\n### Deep Q-Learning\n\nFor most problems, it is impractical to represent the $Q$-function as a table containing a value for every combination of $s$ and $a$. Instead, we train a function approximator, such as a neural network with parameters $\\theta$, to estimate the Q-values, i.e. $Q(s, a; \\theta) \\approx Q^*(s, a)$. This can be done by minimizing the following loss at each step $i$:\n\n$\\begin{equation}L_i(\\theta_i) = \\mathbb{E}_{s, a, r, s'\\sim \\rho(.)} \\left[ (y_i - Q(s, a; \\theta_i))^2 \\right]\\end{equation}$, where $y_i = r + \\gamma \\max_{a'} Q(s', a'; \\theta_{i-1})$.\n\nHere, $y_i$ is called the TD (temporal difference) target, and $y_i - Q$ is called the TD error. $\\rho$ represents the behaviour distribution, i.e. the distribution over transitions ${s, a, r, s'}$ collected from the environment.\n\nNote that the parameters from the previous iteration $\\theta_{i-1}$ are fixed and not updated. In practice we use a snapshot of the network parameters from a few iterations ago instead of the last iteration. This copy is called the *target network*.\n\nQ-Learning is an *off-policy* algorithm that learns about the greedy policy $a = \\max_{a} Q(s, a; \\theta)$ while using a different behaviour policy for acting in the environment and collecting data. This behaviour policy is usually an $\\epsilon$-greedy policy that selects the greedy action with probability $1-\\epsilon$ and a random action with probability $\\epsilon$, to ensure good coverage of the state-action space.\n\n### Experience Replay\n\nTo avoid computing the full expectation in the DQN loss, we can minimize it using stochastic gradient descent. If the loss is computed using just the last transition ${s, a, r, s'}$, this reduces to standard Q-Learning.\n\nThe Atari DQN work introduced a technique called experience replay to make the network updates more stable. At each time step of data collection, the transitions are added to a circular buffer called the *replay buffer*. Then, during training, instead of using just the latest transition to compute the loss and its gradient, we compute them using a mini-batch of transitions sampled from the replay buffer. This has two advantages: better data efficiency by reusing each transition in many updates, and better stability from using uncorrelated transitions in a batch.\n\n\n## 
DQN on Cartpole in TF-Agents\n\nTF-Agents provides all the components necessary to train a DQN agent, such as the agent itself, the environment, policies, networks, replay buffers, data collection loops and metrics. These components are implemented as Python functions or TensorFlow graph ops, and wrappers are provided for converting between them. Additionally, TF-Agents supports TensorFlow 2.0 mode, which lets us use TF in imperative mode.\n\nNext, take a look at the [tutorial for training a DQN agent on the Cartpole environment with TF-Agents](https://github.com/tensorflow/agents/blob/master/docs/tutorials/1_dqn_tutorial.ipynb).\n", "meta": {"hexsha": "40bf4fe906140ab7b559b3c22f8af5268f15664d", "size": 7208, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "site/zh-cn/agents/tutorials/0_intro_rl.ipynb", "max_stars_repo_name": "RedContritio/docs-l10n", "max_stars_repo_head_hexsha": "f69a7c0d2157703a26cef95bac34b39ac0250373", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2022-03-29T22:32:18.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-29T22:32:18.000Z", "max_issues_repo_path": "site/zh-cn/agents/tutorials/0_intro_rl.ipynb", "max_issues_repo_name": "Juanita-cortez447/docs-l10n", "max_issues_repo_head_hexsha": "edaba1f2b5e329857860db1e937cb1333b6e3f31", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "site/zh-cn/agents/tutorials/0_intro_rl.ipynb", "max_forks_repo_name": "Juanita-cortez447/docs-l10n", "max_forks_repo_head_hexsha": "edaba1f2b5e329857860db1e937cb1333b6e3f31", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 45.3333333333, "max_line_length": 275, "alphanum_fraction": 0.5688124306, "converted": true, "num_tokens": 2875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.47657965106367595, "lm_q2_score": 0.19193279338050545, "lm_q1q2_score": 0.0914712636969579}}
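\n\nTo make the moving parts described in the DQN notes above concrete, here is a minimal, framework-free sketch (plain NumPy, not the TF-Agents API) of how an $\\epsilon$-greedy behaviour policy, a replay buffer and the TD-target update fit together. A linear Q-function stands in for the deep network and the environment loop that fills the buffer is omitted, so treat it as an illustration of the equations rather than a working agent; for that, see the TF-Agents tutorial linked above.\n\n```python\nimport random\nfrom collections import deque\n\nimport numpy as np\n\nn_actions, obs_dim = 2, 4             # CartPole: push left/right, 4-D observation\ngamma, epsilon, lr = 0.99, 0.1, 1e-3\nW = np.zeros((obs_dim, n_actions))    # linear Q-function standing in for the deep network\n\ndef q_values(s):\n    return s @ W                      # Q(s, a) for every action a\n\ndef select_action(s):\n    # epsilon-greedy behaviour policy: random with probability epsilon, greedy otherwise\n    if random.random() < epsilon:\n        return random.randrange(n_actions)\n    return int(np.argmax(q_values(s)))\n\nreplay_buffer = deque(maxlen=10_000)  # circular buffer of (s, a, r, s_next, done) transitions\n\ndef train_step(batch_size=32):\n    # sample an uncorrelated mini-batch and take a gradient step on the squared TD error\n    batch = random.sample(replay_buffer, batch_size)\n    for s, a, r, s_next, done in batch:\n        target = r if done else r + gamma * np.max(q_values(s_next))  # TD target y\n        td_error = target - q_values(s)[a]                            # y - Q(s, a)\n        W[:, a] += lr * td_error * s\n```\n\nSampling uniformly from the buffer is what decorrelates the transitions within a batch, which is exactly the stability argument made for experience replay above; a separate, periodically copied target network (omitted here) would be used to compute the TD target in a full DQN.\n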
We hope you enjoy the book, and we encourage any contributions!\n\n\u7b2c1\u7ae0\n======\n***\n\n\u30d9\u30a4\u30ba\u63a8\u8ad6\u306e\u8003\u3048\u65b9\n------\n\n\n> \u3042\u306a\u305f\u306f\u512a\u79c0\u306a\u30d7\u30ed\u30b0\u30e9\u30de\u30fc\u3060\uff0e\u3057\u304b\u3057\uff0c\u3060\u308c\u3057\u3082\u66f8\u3044\u305f\u30b3\u30fc\u30c9\u306b\u30d0\u30b0\u306f\u3042\u308b\uff0e\u5b9f\u88c5\u3059\u308b\u306e\u304c\u975e\u5e38\u306b\u96e3\u3057\u3044\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306e\u30b3\u30fc\u30c9\u3092\u306a\u3093\u3068\u304b\u66f8\u3044\u305f\u5f8c\uff0c\u305d\u306e\u30b3\u30fc\u30c9\u304c\u6b63\u3057\u3044\u304b\u3069\u3046\u304b\u3092\u7c21\u5358\u306a\u4f8b\u984c\u3067\u30c6\u30b9\u30c8\u3057\u3088\u3046\u3068\u8003\u3048\u305f\uff0eOK\uff0c\u30c6\u30b9\u30c8\u306b\u30d1\u30b9\u3057\u305f\uff0e\u6b21\u306b\u3082\u3063\u3068\u96e3\u3057\u3044\u554f\u984c\u3067\u30b3\u30fc\u30c9\u3092\u30c6\u30b9\u30c8\u3057\u305f\uff0e\u4eca\u5ea6\u3082\u30d1\u30b9\u3057\u305f\uff0e\u305d\u3057\u3066\uff0c\u306a\u3093\u3068*\u3082\u3063\u3068\u3082\u3063\u3068\u96e3\u3057\u3044\u554f\u984c*\u3067\u3082\uff0c\u30d1\u30b9\u3057\u305f\uff01\u3000\u3060\u304b\u3089\u3042\u306a\u305f\u306f\uff0c\u3053\u306e\u30b3\u30fc\u30c9\u306b\u306f\u30d0\u30b0\u306f\u306a\u3044\u304b\u3082\u3057\u308c\u306a\u3044\u3068\u601d\u3044\u59cb\u3081\u3066\u3057\u307e\u3063\u305f\uff0e\uff0e\uff0e\n\n\n\u3082\u3057\u3053\u306e\u3088\u3046\u306b\u8003\u3048\u305f\u3053\u3068\u304c\u3042\u308b\u306e\u306a\u3089\uff0c\u304a\u3081\u3067\u3068\u3046\uff01\u3000\u3042\u306a\u305f\u306f\u30d9\u30a4\u30ba\u7684\u306b\u8003\u308b\u4eba\u306e\u4ef2\u9593\u5165\u308a\u3060\uff0e\u30d9\u30a4\u30ba\u63a8\u8ad6\u3068\u306f\uff0c\n\u65b0\u3057\u3044\u8a3c\u62e0\u304c\u5f97\u3089\u308c\u308b\u305f\u3073\u306b\u81ea\u5206\u306e\u8003\u3048\u3092\u6539\u3081\u308b\u3068\u3044\u3046\u3082\u306e\u3067\u3042\u308b\uff0e\n\u30d9\u30a4\u30ba\u7684\u306b\u8003\u3048\u308b\u4eba\u306f\uff0c\u3042\u308b\u7d50\u679c\u304c\u5fc5\u305a\u8d77\u3053\u308b\u3068\u306f\u8003\u3048\u306a\u3044\uff0e\u305f\u3076\u3093\u8d77\u3053\u308b\u3060\u308d\u3046\u3068\u8003\u3048\u308b\u306e\u3067\u3042\u308b\uff0e\n\u4e0a\u306e\u4f8b\u306e\u3088\u3046\u306b\uff0c\u666e\u901a\u306f\uff0c\u30d7\u30ed\u30b0\u30e9\u30e0\u306b100%\u307e\u3063\u305f\u304f\u30d0\u30b0\u304c\u306a\u3044\u3068\u306f\u8003\u3048\u306a\u3044\uff0e\n\u306a\u3044\u3068\u8a00\u3044\u5207\u308b\u306b\u306f\uff0c\u5b9f\u969b\u306b\u306f\u3042\u308a\u3048\u306a\u3044\u3088\u3046\u306a\u5834\u5408\u3082\u542b\u3081\u3066\uff0c\u3059\u3079\u3066\u306e\u5834\u5408\u306b\u3064\u3044\u3066\u30c1\u30a7\u30c3\u30af\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3060\u308d\u3046\uff0e\n\u305d\u308c\u3088\u308a\u3082\uff0c\u305f\u304f\u3055\u3093\u306e\u554f\u984c\u306b\u305f\u3044\u3057\u3066\u30c6\u30b9\u30c8\u3057\u3066\uff0c\u305d\u308c\u3089\u5168\u3066\u306b\u30d1\u30b9\u3057\u305f\u3089\uff0c\n\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u30d0\u30b0\u306f\u300c\u305f\u3076\u3093\u306a\u3044\u3060\u308d\u3046\u300d\u3068\u601d\u3046\u306e\u3067\u3042\u308b\uff0e\u3057\u304b\u3057\u300c\u307e\u3063\u305f\u304f\u306a\u3044\u300d\u3068\u306f\u8a00\u3044\u5207\u308c\u306a\u3044\uff0e\n\u30d9\u30a4\u30ba\u63a8\u8ad6\u3082\u540c\u3058\u3067\u3042\u308b\uff0e\u60c5\u5831\u304c\u5f97\u3089\u308c\u305f\u3089\u4fe1\u5ff5\u3092\u66f4\u65b0\u3059\u308b\uff0e\u3059\u3079\u3066\u306e\u53ef\u80fd\u306a\u5834\u5408\u3092\u30c1\u30a7\u30c3\u30af\u3057\u306a\u3051\u308c\u3070\uff0c\u7d76\u5bfe\u306b\uff0c\u3068
\u306f\u8a00\u308f\u306a\u3044\u306e\u3067\u3042\u308b\uff0e\n\n\n\n### \u8003\u3048\u65b9\u306e\u30d9\u30a4\u30ba\u7684\u306a\u8003\u3048\u65b9\n\n\n\u30d9\u30a4\u30b9\u63a8\u8ad6\u304c\u4f1d\u7d71\u7684\u306a\u7d71\u8a08\u7684\u63a8\u8ad6\u3068\u7570\u306a\u308b\u306e\u306f\uff0c\n\u300c\u4e0d\u78ba\u5b9f\u300d\u306a\u3082\u306e\u306f\u4e0d\u78ba\u5b9f\u306a\u307e\u307e\u306b\u3059\u308b\u3068\u3044\u3046\u70b9\u3067\u3042\u308b\uff0e\n\u4e0d\u78ba\u5b9f\u306a\u307e\u307e\u3068\u3044\u3046\u3053\u3068\u306b\uff0c\u6700\u521d\u306f\u30c0\u30e1\u306a\u65b9\u6cd5\u3060\u3068\u601d\u3046\u304b\u3082\u3057\u308c\u306a\u3044\uff0e\n\u7d71\u8a08\u3068\u306f\u30e9\u30f3\u30c0\u30e0\u306a\u73fe\u8c61\u304b\u3089\u78ba\u5b9f\u3055\u3092\u5f15\u304d\u51fa\u3059\u3082\u306e\u3067\u306f\u306a\u304b\u3063\u305f\u306e\u304b\uff1f\n\u3053\u308c\u3092\u7406\u89e3\u3059\u308b\u306b\u306f\uff0c\u30d9\u30a4\u30ba\u7684\u306b\u8003\u3048\u308b\u3053\u3068\u304c\u5fc5\u8981\u306b\u306a\u308b\uff0e\n\n\n\n\u30d9\u30a4\u30ba\u7684\u306a\u8003\u3048\u65b9\uff0c\u3064\u307e\u308a\u30d9\u30a4\u30ba\u4e3b\u7fa9(Bayesian)\u3067\u306f\uff0c\n\u78ba\u7387\u3092\u300c\u3042\u308b\u51fa\u6765\u4e8b\u304c\u3069\u306e\u304f\u3089\u3044\u4fe1\u983c\u3067\u304d\u308b\u304b\u300d\u3092\u8868\u3059\u6307\u6a19\u3068\u89e3\u91c8\u3059\u308b\uff0e\n\u3064\u307e\u308a\uff0c\u3042\u308b\u4e8b\u8c61\u304c\u751f\u3058\u308b\u3068\u3044\u3046\u3053\u3068\u3092\uff0c\n\u3069\u306e\u304f\u3089\u3044\u78ba\u304b\u3060\u3068\u601d\u3063\u3066\u3044\u308b\u306e\u304b\u8868\u3059\u3082\u306e\u3068\u8003\u3048\u308b\uff0e\n\u3059\u3050\u3042\u3068\u3067\u898b\u308b\u3088\u3046\u306b\uff0c\u5b9f\u969b\u306b\u3053\u308c\u304c\u78ba\u7387\u3092\u89e3\u91c8\u3059\u308b\u81ea\u7136\u306a\u65b9\u6cd5\u3067\u3042\u308b\uff0e\n\n\n\u3053\u306e\u89e3\u91c8\u3092\u3082\u3063\u3068\u5206\u304b\u308a\u3084\u3059\u304f\u3059\u308b\u305f\u3081\u306b\uff0c\n\u78ba\u7387\u306e\u3082\u3046\u4e00\u3064\u306e\u89e3\u91c8\u3092\u8003\u3048\u3066\u307f\u3088\u3046\uff0e\n\u305d\u308c\u306f\u300c\u983b\u5ea6\u4e3b\u7fa9\u300d(*Frequentist*)\u3068\u3044\u3046\u3082\u306e\u3067\u3042\u308a\uff0c\n\u3082\u3063\u3068*\u53e4\u5178\u7684*\u306a\u7d71\u8a08\u5b66\u3067\u3042\u308b\uff0e\n\u983b\u5ea6\u4e3b\u7fa9\u3067\u306f\uff0c\u78ba\u7387\u3092\u300c\u9577\u671f\u9593\u306b\u304a\u3051\u308b\u4e8b\u8c61\u306e\u983b\u5ea6\u300d\u3068\u307f\u306a\u3059\n\uff08\u3060\u304b\u3089\u300c\u983b\u5ea6\u4e3b\u7fa9\u300d\u3068\u3044\u3046\u540d\u524d\u3067\u547c\u3070\u308c\u3066\u3044\u308b\uff09\uff0e\n\u305f\u3068\u3048\u3070\uff0c*\u98db\u884c\u6a5f\u4e8b\u6545\u306e\u78ba\u7387*\u3092\u983b\u5ea6\u4e3b\u7fa9\u3067\u8003\u3048\u308c\u3070\uff0c\n\u300c\u9577\u671f\u9593\u306b\u304a\u3051\u308b\u98db\u884c\u6a5f\u4e8b\u6545\u306e\u983b\u5ea6\u300d\u306b\u306a\u308b\uff0e\n\u3053\u306e\u8003\u3048\u65b9\u306f\uff0c\u591a\u304f\u306e\u5834\u5408\uff0c\u4e8b\u8c61\u306e\u78ba\u7387\u3068\u3057\u3066\u610f\u5473\u304c\u3042\u308b\uff0e\n\u3057\u304b\u3057\u9577\u671f\u9593\u306b\u308f\u305f\u3063\u3066\u4e8b\u8c61\u304c\u767a\u751f\u3057\u306a\u3044\u3088\u3046\u306a\u5834\u5408\u306b\u306f\uff0c\n\u7406\u89e3\u3059\u308b\u3053\u3068\u304c\u96e3\u3057\u304f\u306a\u308b\uff0e\n\u4f8b\u3048\u3070\u5927\u7d71\u9818\u9078\u6319\u306e\u7d50\u679c\u306e\u78ba\u7387\u3092\u8a08\u7b97\u3057\u3088\u3046\u3068\u3057\u3066\u3082\uff0c\n\u3042\u308b\u7279\u5b9a\u306e\u9078\u6319\u306f1\u56de\u304d\u308a\u3057\u304b\u884c\u308f\u308c\u306a\u3044\u306e\u3060\uff01\n\u983b\u5ea6\u4e3b\u7fa9\
u3067\u3053\u306e\u554f\u984c\u3092\u907f\u3051\u308b\u306b\u306f\uff0c\n\u4ed6\u306e\u3059\u3079\u3066\u306e\u9078\u6319\u3082\u8003\u616e\u3057\u3066\uff0c\n\u3053\u308c\u3089\u306e\u767a\u751f\u3059\u308b\u983b\u5ea6\u3067\u78ba\u7387\u3092\u5b9a\u7fa9\u3059\u308b\u3053\u3068\u306b\u306a\u308b\uff0e\n\n\n\n\u4e00\u65b9\u306e\u30d9\u30a4\u30ba\u4e3b\u7fa9\u3067\u306f\uff0c\u3082\u3063\u3068\u76f4\u611f\u7684\u306b\u8003\u3048\u308b\uff0e\n\u30d9\u30a4\u30ba\u4e3b\u7fa9\u3067\u306f\uff0c\u78ba\u7387\u3092\uff0c\n\u3042\u308b\u4e8b\u8c61\u304c\u767a\u751f\u3059\u308b\u4fe1\u5ff5(*belief*)\n\u3082\u3057\u304f\u306f\u78ba\u4fe1(confidence)\u306e\u5ea6\u5408\u3044\u3068\u307f\u306a\u3059\uff0e\n\u78ba\u7387\u3068\u306f\uff0c\u601d\u3063\u3066\u3044\u308b\u3053\u3068\u3092\u8981\u7d04\u3057\u305f\u3082\u306e\u3067\u3042\u308b\u3060\u3051\u306a\u306e\u3067\u3042\u308b\uff0e\n\u3042\u308b\u4eba\u304c\uff0c\u3042\u308b\u4e8b\u8c61\u306e\u4fe1\u5ff5\u30920\u3060\u3068\u601d\u3063\u3066\u3044\u308b\u5834\u5408\uff0c\n\u305d\u306e\u4e8b\u8c61\u304c\u767a\u751f\u3059\u308b\u3068\u306f\u8003\u3048\u3066\u3044\u306a\u3044\u3053\u3068\u306b\u306a\u308b\uff0e\n\u53cd\u5bfe\u306b\uff0c\u3042\u308b\u4e8b\u8c61\u306e\u4fe1\u5ff5\u30921\u3060\u3068\u601d\u3063\u3066\u3044\u308b\u5834\u5408\uff0c\n\u305d\u306e\u4e8b\u8c61\u304c\u5fc5\u305a\u767a\u751f\u3059\u308b\u3068\u8003\u3048\u3066\u3044\u308b\u3053\u3068\u306b\u306a\u308b\uff0e\n\u4fe1\u5ff5\u30920\u304b\u30891\u306e\u5b9f\u6570\u5024\u3067\u8868\u305b\u3070\uff0c\u305d\u308c\u3092\u4f7f\u3063\u3066\n\u4ed6\u306e\u7d50\u679c\u306b\u91cd\u307f\u3092\u4ed8\u3051\u308b\u3053\u3068\u304c\u3067\u304d\u308b\uff0e\n\u3053\u306e\u5b9a\u7fa9\u3092\u4f7f\u3048\u3070\uff0c\u98db\u884c\u6a5f\u4e8b\u6545\u306e\u78ba\u7387\u306e\u4f8b\u3092\n\u3046\u307e\u304f\u8868\u73fe\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\uff0e\n\u98db\u884c\u6a5f\u4e8b\u6545\u306e\u983b\u5ea6\u304c\u5f97\u3089\u308c\u305f\u6642\u306b\uff0c\u4ed6\u306e\u60c5\u5831\u304c\u4f55\u3082\u306a\u3051\u308c\u3070\uff0c\n\u3042\u308b\u4eba\u306e\u4fe1\u5ff5\u306f\u305d\u306e\u983b\u5ea6\u306b\u4e00\u81f4\u3059\u308b\u3079\u304d\u3067\u3042\u308b\uff0e\n\u540c\u69d8\u306b\uff0c\u78ba\u7387\u304c\u4fe1\u5ff5\u3067\u3042\u308b\u3068\u3044\u3046\u5b9a\u7fa9\u3092\u4f7f\u3048\u3070\uff0c\n\u5927\u7d71\u9818\u9078\u6319\u306e\u7d50\u679c\u306e\u78ba\u7387\uff08\u4fe1\u5ff5\uff09\u3068\u3044\u3046\u3082\u306e\u3092\u8003\u3048\u3066\u3082\u3088\u3044\u3060\u308d\u3046\uff0e\n\u3064\u307e\u308a\uff0c\u3042\u308b\u5019\u88dc\u8005A\u304c\u5f53\u9078\u3059\u308b\u3053\u3068\u3092\u3069\u306e\u304f\u3089\u3044\u78ba\u4fe1\u3057\u3066\u3044\u308b\u306e\u304b\uff0c\n\u3092\u8868\u3057\u3066\u3044\u308b\u306e\u3067\u3042\u308b\uff0e\n\n\n\n\u4e0a\u306e\u30d1\u30e9\u30b0\u30e9\u30d5\u3067\u306f\uff0c\u4e00\u822c\u7684\u306a\u4fe1\u5ff5\uff08\u78ba\u7387\uff09\u3067\u306f\u306a\u304f\uff0c\n\u300c\u3042\u308b\u4eba\u300d\u306e\u4fe1\u5ff5\uff08\u78ba\u7387\uff09\u3068\u3044\u3046\u3082\u306e\u3092\u8aac\u660e\u3057\u3066\u3044\u305f\u3053\u3068\u306b\u6ce8\u610f\u3057\u3066\u6b32\u3057\u3044\uff0e\n\u3053\u308c\u304c\u9762\u767d\u3044\u306e\u306f\uff0c\u4eba\u306b\u3088\u3063\u3066\u4fe1\u5ff5\u304c\u9055\u3046\u3068\u3044\u3046\u3053\u3068\u3092\u5b9a\u7fa9\u304c\u8a31\u3057\u3066\u3044\u308b\u3053\u3068\u306b\u306a\u308b\u70b9\u3067\u3042\u308b\uff0e\n\u3053\u308c\u306f\u65e5\u5e38\u3067\u306f\u898b\u304b\u3051\u308b\uff0e\u3053\u306e\u4e16\u754c\u306b\u3064\u3044\u3066\u6301\u3063\u3066\u3044\u308b\u60c5\u583
1\u306f\u4eba\u305d\u308c\u305e\u308c\u9055\u3046\u306e\u3067\uff0c\n\u3042\u308b\u4eba\u306e\u4fe1\u5ff5\u306f\u5225\u306e\u4eba\u306e\u4fe1\u5ff5\u3068\u306f\u9055\u3046\u306e\u3067\u3042\u308b\uff0e\n\u4fe1\u5ff5\u304c\u9055\u3063\u3066\u3044\u308b\u3068\u3044\u3046\u3053\u3068\u306f\uff0c\u300c\u8ab0\u304b\u304c\u9593\u9055\u3063\u3066\u3044\u308b\u300d\u3068\u3044\u3046\u3053\u3068\u3067\u306f\u306a\u3044\uff0e\n\u4ee5\u4e0b\u306e\u4f8b\u3067\uff0c\u5404\u500b\u4eba\u304c\u6301\u3063\u3066\u3044\u308b\u4fe1\u5ff5\u3068\u78ba\u7387\u3068\u306e\u95a2\u4fc2\u3092\u8003\u3048\u3066\u307f\u3066\u307f\u3088\u3046\uff0e\n\n\n\n- \u79c1\u304c\u30b3\u30a4\u30f3\u3092\u6295\u3052\u3066\uff0c\u8868\u304c\u51fa\u308b\u304b\u88cf\u304c\u51fa\u308b\u304b\uff0c\u308f\u305f\u3057\u3068\u3042\u306a\u305f\u304c\u8ced\u3051\u3066\u3044\u308b\u3068\u3057\u3088\u3046\uff0e\u30a4\u30ab\u30b5\u30de\u306e\u30b3\u30a4\u30f3\u3067\u306a\u3051\u308c\u3070\uff0c\u8868\u304c\u51fa\u308b\u78ba\u7387\u306f1/2\u3067\u3042\u308b\uff0e\u3053\u308c\u306f\u308f\u305f\u3057\u3082\u3042\u306a\u305f\u3082\u540c\u610f\u3057\u3066\u3044\u308b\uff0e\u3053\u3053\u3067\uff0c\u79c1\u3060\u3051\u304c\u30b3\u30a4\u30f3\u306e\u7d50\u679c\u3092\u9664\u3044\u3066\u307f\u305f\u3068\u3059\u308b\uff0e\u305d\u3046\u3059\u308b\u3068\uff0c\u79c1\u306b\u3068\u3063\u3066\u306f\u8868\u304b\u88cf\u306e\u78ba\u7387\u306e\u3069\u3061\u3089\u304b\u304c1.0\u306b\u306a\u308b\uff08\u30b3\u30a4\u30f3\u306e\u7d50\u679c\u306b\u3088\u308b\uff09\uff0e\u3067\u306f\u300c\u30b3\u30a4\u30f3\u304c\u8868\u3067\u3042\u308b\u300d\u306b\u3064\u3044\u3066\uff0c\u3042\u306a\u305f\u306e\u4fe1\u5ff5\u306f\u3069\u3046\u3060\u308d\u3046\uff1f\u3000\u79c1\u304c\u30b3\u30a4\u30f3\u306e\u7d50\u679c\u3092\u77e5\u3063\u3066\u3082\uff0c\u30b3\u30a4\u30f3\u306e\u7d50\u679c\u306f\u5909\u308f\u3089\u306a\u3044\uff0e\u79c1\u3068\u3042\u306a\u305f\u306e\u4fe1\u5ff5\u306f\uff0c\u540c\u3058\u3067\u306f\u306a\u304f\u306a\u3063\u3066\u3057\u307e\u3063\u305f\uff0e\n- \u3042\u306a\u305f\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u306f\u30d0\u30b0\u304c\u3042\u308b\u304b\u3082\u3057\u308c\u306a\u3044\u3057\uff0c\u306a\u3044\u304b\u3082\u3057\u308c\u306a\u3044\uff0e\u3042\u306a\u305f\u3082\u79c1\u3082\uff0c\u3069\u3061\u3089\u304c\u6b63\u3057\u3044\u306e\u304b\u5206\u304b\u3089\u306a\u3044\u304c\uff0c\u30d0\u30b0\u304c\u3042\u308b\u306e\u304b\u306a\u3044\u306e\u304b\u306b\u3064\u3044\u3066\u306e\u4fe1\u5ff5\u306f\u6301\u3063\u3066\u3044\u308b\uff0e\n- 
\u75c5\u9662\u306b\u304a\u3044\u3066\uff0c\u3042\u308b\u60a3\u8005\u304cx\uff0cy\uff0cz\u3068\u3044\u3046\u75c7\u72b6\u3092\u81ea\u899a\u3057\u3066\u3044\u308b\uff0e\u305d\u308c\u3089\u306e\u75c7\u72b6\u3092\u767a\u751f\u3059\u308b\u75c5\u6c17\u306f\u305f\u304f\u3055\u3093\u3042\u308b\u304c\uff0c\u3069\u308c\u304b\u4e00\u3064\u306e\u75c5\u6c17\u304c\u539f\u56e0\u3067\u3042\u308b\uff0e\u3042\u308b\u533b\u5e2b\u306f\u305d\u306e\u539f\u56e0\u304c\u3042\u308b\u75c5\u6c17\u3060\u308d\u3046\u3068\u3044\u3046\u601d\u3063\u3066\u3044\u308b\u304c\uff0c\u5225\u306e\u533b\u5e2b\u306f\u3059\u3053\u3057\u9055\u3063\u305f\u539f\u56e0\u3092\u601d\u3063\u3066\u3044\u308b\u304b\u3082\u3057\u308c\u306a\u3044\uff0e\n\n\n\n\u4eba\u9593\u306b\u3068\u3063\u3066\u306f\uff0c\u78ba\u7387\u3092\u4fe1\u5ff5\u306e\u3088\u3046\u306b\u6271\u3046\u3068\u3044\u3046\u3053\u3068\u306f\u81ea\u7136\u306a\u3084\u308a\u65b9\u3067\u3042\u308b\uff0e\u3053\u306e\u4e16\u306e\u4e2d\u3067\u751f\u304d\u3066\u3044\u304f\u305f\u3081\u306b\uff0c\u3044\u3064\u3082\u3053\u306e\u3088\u3046\u306a\u3084\u308a\u65b9\u3092\u3057\u3066\u3044\u308b\u3057\uff0c\u771f\u5b9f\u3068\u3044\u3046\u3082\u306e\u304c\u5b8c\u5168\u3067\u306f\u306a\u3044\u3068\u3044\u3046\u4f8b\u3082\u305f\u304f\u3055\u3093\u898b\u3066\u3044\u308b\uff0e\u305d\u308c\u3067\u3082\u4fe1\u5ff5\u304b\u3089\u4f55\u304b\u60c5\u5831\u304c\u3048\u3089\u308c\u306a\u3044\u304b\u3068\u8003\u3048\u3066\u3082\u3044\u308b\uff0e\u3082\u3057\u983b\u5ea6\u4e3b\u7fa9\u306e\u3088\u3046\u306b\u8003\u3048\u308b\u3068\u3059\u308b\u306a\u3089\uff0c\u304b\u306a\u308a\u8a13\u7df4\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\u3060\u308d\u3046\uff0e\n\n\n\n\u5f93\u6765\u306e\u78ba\u7387\u8ad6\u306e\u8a18\u6cd5\u306b\u5f93\u3063\u3066\uff0c\n\u3042\u308b\u4e8b\u8c61$A$\u304c\u751f\u3058\u308b\u3068\u3044\u3046\u4fe1\u5ff5\u3092$P(A)$\u3068\u8868\u3057\uff0c\n\u4e8b\u524d\u78ba\u7387(*prior probability*)\u3068\u547c\u3076\u3053\u3068\u306b\u3059\u308b\uff0e\n\n\n\n\u5049\u5927\u306a\u7d4c\u6e08\u5b66\u8005\u3067\u3042\u308a\u601d\u60f3\u5bb6\u3067\u3042\u308b\u30b8\u30e7\u30f3\u30fb\u30e1\u30a4\u30ca\u30fc\u30c9\u30fb\u30b1\u30a4\u30f3\u30ba\u66f0\u304f\uff0c\n\u300c\u4e8b\u5b9f\u304c\u5909\u308f\u3063\u305f\u306a\u3089\u3070\uff0c\u308f\u305f\u3057\u306f\u8003\u3048\u3092\u6539\u3081\u308b\uff0e\u3042\u306a\u305f\u306f\u3069\u3046\u3057\u307e\u3059\u304b\uff1f\u300d\n\u3053\u308c\u306f\u8a3c\u62e0\u304c\u5f97\u3089\u308c\u305f\u5f8c\u306b\u4fe1\u5ff5\u3092\u66f4\u65b0\u3059\uff0c\u30d9\u30a4\u30ba\u4e3b\u7fa9\u7684\u306a\u3084\u308a\u65b9\u3067\u3042\u308b\uff0e\n\u3082\u3057\uff0c\u305d\u306e\u8a3c\u62e0\u304c\u6700\u521d\u306b\u601d\u3063\u3066\u3044\u305f\u4fe1\u5ff5\u3068\u76f8\u53cd\u3059\u308b\u3053\u3068\u3067\u3042\u3063\u305f\u3068\u3057\u3066\u3082\uff0c\n\u8a3c\u62e0\u3092\u7121\u8996\u3059\u308b\u3053\u3068\u306f\u3067\u304d\u306a\u3044\uff0e\n\u3053\u306e\u66f4\u65b0\u3055\u308c\u305f\u4fe1\u5ff5\u3092$P(A |X )$\u3068\u8868\u3057\uff0c\n\u8a3c\u62e0$X$\u304c\u4e0e\u3048\u3089\u308c\u305f\u6642\u306e$A$\u306e\u78ba\u7387\u3067\u3042\u308b\uff0c\u3068\u89e3\u91c8\u3059\u308b\uff0e\n\u4e8b\u524d\u78ba\u7387\u306b\u5bfe\u5fdc\u3057\u3066\uff0c\u3053\u306e\u66f4\u65b0\u3055\u308c\u305f\u4fe1\u5ff5\u3092\u4e8b\u5f8c\u78ba\u7387(*posterior 
probability*)\u3068\u547c\u3076\uff0e\n\u4f8b\u3048\u3070\u4e0a\u8a18\u306e\u4f8b\u3067\u306f\uff0c\u8a3c\u62e0$X$\u304c\u5f97\u3089\u308c\u305f\u5f8c\u306e\u4e8b\u5f8c\u78ba\u7387\uff08\u4e8b\u5f8c\u4fe1\u5ff5\u3068\u8a00\u3063\u3066\u3082\u3088\u3044\uff09\u306f\u6b21\u306e\u3088\u3046\u306b\u306a\u308b\uff0e\n\n\n\n1\\. $P(A): \\;\\;$ \u30b3\u30a4\u30f3\u306e\u8868\u304c\u51fa\u308b\u78ba\u7387\u306f50\u30d1\u30fc\u30bb\u30f3\u30c8\u3067\u3042\u308b\uff0e$P(A | X):\\;\\;$ \u30b3\u30a4\u30f3\u306e\u7d50\u679c\u3092\u307f\u3066\u8868\u304c\u51fa\u3066\u3044\u305f\u3068\u3059\u308b\uff0e\u3053\u306e\u60c5\u5831\u3092$X$\u3068\u3059\u308b\uff0e\u660e\u3089\u304b\u306b\uff0c\u8868\u306e\u4e8b\u5f8c\u78ba\u7387\u306f1.0\u3067\uff0c\u88cf\u306e\u4e8b\u5f8c\u78ba\u7387\u306f0.0\u3067\u3042\u308b\uff0e\n\n2\\. $P(A): \\;\\;$ \u3053\u306e\u8907\u96d1\u3067\u5de8\u5927\u306a\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u306f\u305f\u3076\u3093\u30d0\u30b0\u304c\u3042\u308b\uff0e$P(A | X): \\;\\;$ \u30d7\u30ed\u30b0\u30e9\u30e0\u306f\u3059\u3079\u3066\u306e\u30c6\u30b9\u30c8$X$\u306b\u30d1\u30b9\u3057\u305f\uff0e\u305f\u3076\u3093\u30d0\u30b0\u304c\u3042\u308b\u304b\u3082\u3057\u308c\u306a\u3044\u304c\uff0c\u305d\u306e\u53ef\u80fd\u6027\u306f\u975e\u5e38\u306b\u5c0f\u3055\u3044\u3060\u308d\u3046\uff0e\n\n3\\. $P(A):\\;\\;$ \u3042\u308b\u60a3\u8005\u304c\u75c5\u6c17\u306b\u304b\u304b\u3063\u3066\u3044\u308b\uff0e$P(A | X):\\;\\;$ \u8840\u6db2\u691c\u67fb\u306e\u7d50\u679c\uff0c$X$\u3068\u3044\u3046\u8a3c\u62e0\u304c\u5f97\u3089\u308c\u305f\u306e\u3067\uff0c\u3044\u304f\u3064\u304b\u306e\u75c5\u6c17\u306e\u53ef\u80fd\u6027\u306f\u6392\u9664\u3057\u3066\u3082\u3088\u3044\u3060\u308d\u3046\uff0e\n\n\n\n\u3053\u308c\u3089\u306e\u4f8b\u304b\u3089\u660e\u3089\u304b\u306a\u3088\u3046\u306b\uff0c\u65b0\u3057\u3044\u8a3c\u62e0$X$\u304c\u5f97\u3089\u308c\u305f\u3068\u3057\u3066\u3082\uff0c\n\u4e8b\u524d\u306e\u4fe1\u5ff5\u3092\u5b8c\u5168\u306b\u5426\u5b9a\u3059\u308b\u3053\u3068\u306f\u306a\u304f\uff0c\u65b0\u3057\u3044\u8a3c\u62e0\u3092\u4e8b\u524d\u78ba\u7387\u306e\u91cd\u307f\u3068\u3057\u3066\u4f7f\u3063\u3066\u3044\u308b\n\uff08\u3064\u307e\u308a\uff0c\u3042\u308b\u4fe1\u5ff5\u306b\u306f\u3088\u308a\u5927\u304d\u3044\u91cd\u307f\uff0c\u3064\u307e\u308a\u78ba\u4fe1\u5ea6\u3092\u4e0e\u3048\u308b\u306e\u3067\u3042\u308b\uff09\uff0e\n\n\n\u4e8b\u524d\u306b\u3042\u308b\u4e8b\u8c61\u304c\u3069\u306e\u304f\u3089\u3044\u751f\u3058\u308b\u306e\u304b\u3068\u3044\u3046\u3053\u3068\u3092\u8003\u3048\u3066\u3082\uff0c\u305d\u308c\u306f\u975e\u5e38\u306b\u4e0d\u78ba\u5b9f\u3067\u3042\u308b\uff0e\n\u3057\u305f\u304c\u3063\u3066\uff0c\u3069\u3093\u306a\u7d50\u679c\u3092\u4e88\u60f3\u3057\u305f\u3068\u3057\u3066\u3082\uff0c\u9593\u9055\u3063\u3066\u3044\u308b\u53ef\u80fd\u6027\u304c\u9ad8\u3044\uff0e\n\u30c7\u30fc\u30bf\u3084\u8a3c\u62e0\u3084\u60c5\u5831\u304c\u5f97\u3089\u308c\u308c\u3070\uff0c\u4fe1\u5ff5\u3092\u66f4\u65b0\u3057\u3066\uff0c\n\u9593\u9055\u3063\u3066\u3044\u308b\u53ef\u80fd\u6027\u304c\u3082\u3063\u3068\u5c11\u306a\u3044\u4e88\u6e2c\u3092\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3088\u3046\u306b\u306a\u308b\uff0e\n\u30b3\u30a4\u30f3\u306e\u88cf\u8868\u3092\u4e88\u6e2c\u3059\u308b\u3053\u3068\u4f8b\u3067\u306f\uff0c\u6b63\u3057\u3044\u4e88\u6e2c\u304c\u3067\u304d\u308b\u3088\u3046\u306b\u306a\u308b\uff0e\n\n\n\n### 
\u5b9f\u7528\u7684\u306a\u30d9\u30a4\u30ba\u63a8\u8ad6\n\n\n\u983b\u5ea6\u4e3b\u7fa9\u3068\u30d9\u30a4\u30b9\u4e3b\u7fa9\u304c\u30d7\u30ed\u30b0\u30e9\u30df\u30f3\u30b0\u8a00\u8a9e\u306e\u95a2\u6570\u3060\u3063\u305f\u3089\uff0c\u7d71\u8a08\u7684\u306a\u554f\u984c\u3092\u5165\u529b\u3059\u308b\u3068\uff0c\u30e6\u30fc\u30b6\u30fc\u306b\u8fd4\u3055\u308c\u308b\u7d50\u679c\u306f\u540c\u3058\u3067\u306f\u306a\u3044\u3060\u308d\u3046\uff0e\n\u983b\u5ea6\u4e3b\u7fa9\u306e\u63a8\u8ad6\u95a2\u6570\u306e\u623b\u308a\u5024\u306f\uff0c\u63a8\u5b9a\u5024\u3092\u8868\u3059\u6570\u5024\u3067\u3042\u308b\uff08\u6a19\u672c\u5e73\u5747\u306a\u3069\u306e\u8981\u7d04\u7d71\u8a08\u91cf\u3067\u3042\u308b\u3053\u3068\u304c\u591a\u3044\uff09\uff0e\n\u4e00\u65b9\u3067\u30d9\u30a4\u30ba\u4e3b\u7fa9\u306e\u63a8\u8ad6\u95a2\u6570\u306f\uff0c\u300c\u78ba\u7387\u300d\u3092\u8fd4\u3059\uff0e\n\n\n\u4f8b\u3048\u3070\u30c7\u30d0\u30c3\u30b0\u306e\u4f8b\u984c\u3067\u3042\u308c\u3070\uff0c\u983b\u5ea6\u4e3b\u7fa9\u95a2\u6570\u306e\u5f15\u6570\u306b\n\u300c\u3053\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u306f\u30c6\u30b9\u30c8$X$\u306e\u3059\u3079\u3066\u3092\u30d1\u30b9\u3057\u305f\u3093\u3060\uff0e\u3053\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u306f\u30d0\u30b0\u304c\u306a\u3044\u304b\u306a\uff1f\u300d\u3092\u6e21\u3059\u3068\uff0c\n\u623b\u308a\u5024\u306f\u300c\u30d0\u30b0\u306f\u3042\u308a\u307e\u305b\u3093\u300d\u3060\u308d\u3046\uff0e\n\u3057\u304b\u3057\u30d9\u30a4\u30ba\u4e3b\u7fa9\u95a2\u6570\u306b\n\u300c\u30d7\u30ed\u30b0\u30e9\u30e0\u3092\u66f8\u304f\u3068\u3044\u3064\u3082\u30d0\u30b0\u304c\u3042\u308b\u3093\u3060\uff0e\n\u3053\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u306f\u30c6\u30b9\u30c8$X$\u306e\u3059\u3079\u3066\u3092\u30d1\u30b9\u3057\u305f\u3093\u3060\uff0e\u3053\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u306f\u30d0\u30b0\u304c\u306a\u3044\u304b\u306a\uff1f\u300d\u3068\u3044\u3046\u5f15\u6570\u3092\u6e21\u3059\u3068\uff0c\n\u300c\u30d0\u30b0\u306f\u3042\u308a\u307e\u305b\u3093\u300d\u3068\u300c\u30d0\u30b0\u304c\u3042\u308a\u307e\u3059\u300d\u306e\u7b54\u3048\u306e\u305d\u308c\u305e\u308c\u306b\u78ba\u7387\u304c\u8fd4\u3055\u308c\u308b\uff0e\n\n\n\n> 
\u30d0\u30b0\u304c\u306a\u3044\u78ba\u7387\u306f0.8\uff0c\u30d0\u30b0\u304c\u3042\u308b\u78ba\u7387\u306f0.2\u3067\u3059\uff0e\n\n\n\u3053\u306e\u623b\u308a\u5024\u306f\u983b\u5ea6\u4e3b\u7fa9\u95a2\u6570\u306e\u623b\u308a\u5024\u3068\u306f\u307e\u3063\u305f\u304f\u9055\u3046\u3082\u306e\u3067\u3042\u308b\uff0e\n\u30d9\u30a4\u30ba\u4e3b\u7fa9\u95a2\u6570\u306f\u5f15\u6570\u306b\u300c\u30d7\u30ed\u30b0\u30e9\u30e0\u3092\u66f8\u304f\u3068\u3044\u3064\u3082\u30d0\u30b0\u304c\u3042\u308b\u3093\u3060\u300d\u3068\u3044\u3046\u60c5\u5831\u3092\u8ffd\u52a0\u3057\u3066\u3044\u308b\u3053\u3068\u306b\u6c17\u304c\u3064\u3044\u3066\u307b\u3057\u3044\uff0e\n\u3053\u308c\u304c**\u4e8b\u524d\u60c5\u5831**(prior)\u3067\u3042\u308b\uff0e\n\u3053\u306e\u4e8b\u524d\u60c5\u5831\u30d1\u30e9\u30e1\u30fc\u30bf\u3092\u5f15\u6570\u306b\u4e0e\u3048\u308b\u3053\u3068\u3067\uff0c\u4eca\u306e\u72b6\u6cc1\u306b\u3064\u3044\u3066\u306e\u4fe1\u5ff5\u3092\u30d9\u30a4\u30ba\u4e3b\u7fa9\u95a2\u6570\u306b\u4f1d\u3048\u3066\u3044\u308b\uff0e\n\u3053\u308c\u3092\u4e0e\u3048\u308b\u304b\u3069\u3046\u304b\u306f\u30e6\u30fc\u30b6\u30fc\u306e\u81ea\u7531\u3060\u304c\uff0c\u4e0e\u3048\u306a\u3044\u5834\u5408\u306b\u306f\u5225\u306e\u7d50\u679c\u304c\u5f97\u3089\u308c\u308b\u3053\u3068\u306b\u306a\u308b\uff0e\n\u305d\u306e\u4f8b\u306f\u5f8c\u3067\u898b\u308b\u3053\u3068\u306b\u3057\u3088\u3046\uff0e\n\n\n#### \u8a3c\u62e0\u3092\u53d6\u308a\u5165\u308c\u308b\n\n\n\u8a3c\u62e0\u3092\u305f\u304f\u3055\u3093\u624b\u306b\u5165\u308c\u308b\u3053\u3068\u304c\u3067\u304d\u308c\u3070\uff0c\u4e8b\u524d\u306e\u4fe1\u5ff5\u306f\uff0c\u305d\u306e\u591a\u6570\u306e\u8a3c\u62e0\u306b\u304b\u304d\u6d88\u3055\u308c\u3066\u3057\u307e\u3046\uff0e\n\u3053\u308c\u306f\u60f3\u50cf\u3067\u304d\u308b\u3060\u308d\u3046\uff0e\n\u4f8b\u3048\u3070\uff0c\u3042\u306a\u305f\u304c\u300c\u4eca\u65e5\uff0c\u592a\u967d\u304c\u7206\u767a\u3059\u308b\u3093\u3058\u3083\u306a\u3044\u304b\u300d\u3068\u3044\u3046\u4e8b\u524d\u4fe1\u5ff5\u3092\u6301\u3063\u3066\u3044\u305f\u3068\u3059\u308c\u3070\uff0c\u65e5\u306b\u65e5\u306b\u305d\u306e\u4fe1\u5ff5\u306f\u63fa\u3089\u3044\u3067\u3044\u304d\uff0c\n\u305d\u3057\u3066\uff0c\u3069\u3093\u306a\u63a8\u8ad6\u3067\u3082\u3044\u3044\u304b\u3089\u81ea\u5206\u306e\u9593\u9055\u3044\u3092\u6b63\u3057\u3066\u304f\u308c\uff0c\u5c11\u306a\u304f\u3068\u3082\u3053\u306e\u4fe1\u5ff5\u3092\u3082\u3063\u3068\u30de\u30b7\u306a\u3082\u306e\u306b\u3057\u3066\u304f\u308c\uff0c\n\u3068\u601d\u3046\u3088\u3046\u306b\u306a\u308b\uff08\u304b\u3082\u3057\u308c\u306a\u3044\uff09\uff0e\n\u305d\u3057\u3066\u30d9\u30a4\u30ba\u63a8\u8ad6\u306f\uff0c\u305d\u306e\u4fe1\u5ff5\u3092\u6b63\u3057\u3066\u304f\u308c\u308b\uff0e\n\n\n$N$\u3092\u624b\u306b\u5165\u308b\u8a3c\u62e0\u306e\u6570\u3068\u3059\u308b\uff0e\u3082\u3057\u7121\u9650\u500b\u306e\u8a3c\u62e0\u304c\u624b\u306b\u5165\u308c\u3070\uff0c\u3064\u307e\u308a$N \\rightarrow 
\\infty$\u306a\u3089\u3070\uff0c\n\u30d9\u30a4\u30ba\u63a8\u8ad6\u306e\u7d50\u679c\u306f\u983b\u5ea6\u4e3b\u7fa9\u306e\u7d50\u679c\u3068\uff08\u591a\u304f\u306e\u5834\u5408\uff09\u4e00\u81f4\u3059\u308b\uff0e\n\u3057\u305f\u304c\u3063\u3066$N$\u304c\u5927\u304d\u304f\u306a\u308c\u3070\uff0c\u7d71\u8a08\u7684\u63a8\u8ad6\u306f\u5ba2\u89b3\u7684\u306a\u3082\u306e\u306b\u306a\u308b\uff0e\n\u53cd\u5bfe\u306b$N$\u304c\u5c0f\u3055\u3051\u308c\u3070\uff0c\u63a8\u8ad6\u306f*\u4e0d\u5b89\u5b9a*\u306a\u3082\u306e\u306b\u306a\u308b\uff0e\n\u983b\u5ea6\u4e3b\u7fa9\u306e\u63a8\u5b9a\u5024\u306f\u5206\u6563\u3082\u4fe1\u983c\u533a\u9593\u3082\u5927\u304d\u304f\u306a\u308b\uff0e\n\u305d\u3093\u306a\u6642\u306b\u306f\u30d9\u30a4\u30ba\u63a8\u8ad6\u306e\u51fa\u756a\u3067\u3042\u308b\uff0e\n\u4e8b\u524d\u5206\u5e03\u3092\u5f15\u6570\u306b\u3068\u308a\uff0c\u7d50\u679c\u306b\uff08\u63a8\u5b9a\u5024\u3067\u306f\u306a\u304f\uff09\u78ba\u7387\u3092\u51fa\u529b\u3059\u308b\uff0e\n\u3053\u308c\u306f\uff0c$N$\u306e\u5c0f\u3055\u3044\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u306b\u5bfe\u3059\u308b\u7d71\u8a08\u7684\u63a8\u8ad6\u306e\u4e0d\u5b89\u5b9a\u3055\u3092\u53cd\u6620\u3057\u305f\uff0c\u4e0d\u78ba\u5b9f\u3055\u3092\u8868\u3059\u3082\u306e\u306b\u306a\u3063\u3066\u3044\u308b\uff0e\n\n\n\n$N$\u304c\u975e\u5e38\u306b\u5927\u304d\u3044\u5834\u5408\u306f\uff0c\u983b\u5ea6\u4e3b\u7fa9\u3068\u30d9\u30a4\u30ba\u4e3b\u7fa9\u306f\u4f3c\u305f\u3088\u3046\u306a\u63a8\u8ad6\u7d50\u679c\u3092\u51fa\u3057\u3066\u304f\u308b\u306e\u3067\uff0c\u4e8c\u3064\u306e\u533a\u5225\u306f\u3064\u304b\u306a\u304f\u306a\u308b\u3060\u308d\u3046\uff0e\n\u305d\u306e\u305f\u3081\uff0c\u5c11\u306a\u3044\u8a08\u7b97\u3067\u6e08\u3080\u983b\u5ea6\u4e3b\u7fa9\u3092\u7528\u3044\u305f\u304f\u306a\u308b\u304b\u3082\u3057\u308c\u306a\u3044\uff0e\n\u3082\u3057\u305d\u3093\u306a\u72b6\u6cc1\u306b\u3042\u308b\u306e\u3067\u3042\u308c\u3070\uff0c\u305d\u3046\u3059\u308b\u524d\u306b\u4ee5\u4e0b\u306e\n[Andrew Gelman (2005)][1]\u306e\u6587\u7ae0\u3092\u8aad\u3093\u3067\u307b\u3057\u3044\uff0e\n\n\n\n> 
\u30b5\u30f3\u30d7\u30eb\u6570\u304c\u5927\u304d\u3044\u5834\u5408\uff0c\u3068\u3044\u3046\u3082\u306e\u306f\u5b58\u5728\u3057\u306a\u3044\uff0e\u3082\u3057$N$\u304c\u5c0f\u3055\u3059\u304e\u3066\u5341\u5206\u306b\u6b63\u78ba\u306a\u63a8\u5b9a\u5024\u3092\u5f97\u308b\u3053\u3068\u304c\u3067\u304d\u306a\u3044\u306e\u3067\u3042\u308c\u3070\uff0c\u30c7\u30fc\u30bf\u3092\u3082\u3063\u3068\u5897\u3084\u3059\uff08\u3082\u3057\u304f\u306f\u3082\u3063\u3068\u591a\u304f\u306e\u4eee\u5b9a\u3092\u4f7f\u3046\uff09\u5fc5\u8981\u304c\u3042\u308b\uff0e\u3057\u304b\u3057\uff0c\u3082\u3057$N$\u304c\u300c\u5341\u5206\u306b\u5927\u304d\u3044\u300d\u306e\u3067\u3042\u308c\u3070\uff0c\u30c7\u30fc\u30bf\u3092\u5206\u5272\u3057\u3066\u3082\u3063\u3068\u591a\u304f\u306e\u60c5\u5831\u3092\u5f97\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3060\u308d\u3046\uff08\u4f8b\u3048\u3070\u4e16\u8ad6\u8abf\u67fb\u306e\u5834\u5408\u306b\u306f\uff0c\u5168\u56fd\u533a\u3067\u306e\u826f\u3044\u63a8\u5b9a\u5024\u304c\u5f97\u3089\u308c\u305f\u3089\uff0c\u6b21\u306f\u7537\u5973\u5225\uff0c\u5730\u57df\u5225\uff0c\u5e74\u9f62\u5225\u306e\u63a8\u5b9a\u5024\u3092\u5f97\u308b\u3053\u3068\u3082\u3067\u304d\u308b\u3060\u308d\u3046\uff09\uff0e$N$\u304c\u5341\u5206\u3067\u3042\u308b\u3053\u3068\u306f\u306a\u3044\uff0e\u3082\u3057\u300c\u5341\u5206\u300d\u3060\u3068\u3057\u305f\u3089\uff0c\u3042\u306a\u305f\u306f\u3082\u3046\u3059\u3067\u306b\u3082\u3063\u3068\u591a\u304f\u306e\u30c7\u30fc\u30bf\u3092\u5fc5\u8981\u3068\u3059\u308b\u6b21\u306e\u554f\u984c\u306b\u53d6\u308a\u7d44\u3093\u3067\u3044\u308b\u306e\u3060\uff0e\n\n\n\n\n### \u3058\u3083\u3042\u983b\u5ea6\u4e3b\u7fa9\u306f\u9593\u9055\u3063\u3066\u3044\u308b\u306e\uff1f\n\n\n\n**\u9593\u9055\u3063\u3066\u306f\u3044\u306a\u3044\uff0e**\n\n\n\u983b\u5ea6\u4e3b\u7fa9\u306e\u65b9\u6cd5\u306f\u4eca\u3067\u3082\u591a\u304f\u306e\u5206\u91ce\u3067\u6709\u7528\u3067\u3042\u308a\uff0c\u6700\u5148\u7aef\u3067\u4f7f\u308f\u308c\u308c\u3044\u308b\uff0e\n\u6700\u5c0f\u4e8c\u4e57\u56de\u5e30\u3084lasso\u56de\u5e30\uff0cEM\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306a\u3069\u306e\u30c4\u30fc\u30eb\u306f\u3069\u308c\u3082\u512a\u308c\u3066\u3044\u3066\u51e6\u7406\u3082\u901f\u3044\uff0e\n\u30d9\u30a4\u30ba\u4e3b\u7fa9\u306e\u624b\u6cd5\u306f\uff0c\u305d\u308c\u3089\u306e\u624b\u6cd5\u3092\u88dc\u3046\u3082\u306e\u3067\u3042\u308b\uff0e\n\u305d\u308c\u3089\u306e\u624b\u6cd5\u304c\u9069\u7528\u3067\u304d\u306a\u3044\u554f\u984c\u3092\u89e3\u3044\u305f\u308a\uff0c\n\u3082\u3063\u3068\u67d4\u8edf\u306a\u30e2\u30c7\u30eb\u5316\u3067\u96a0\u308c\u305f\u69cb\u9020\u3092\u89e3\u304d\u660e\u304b\u3057\u305f\u308a\u3059\u308b\u306e\u3067\u3042\u308b\uff0e\n\n\n### 
\u300c\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u300d\u306b\u3064\u3044\u3066\n\n\n\u9006\u8aac\u7684\u306b\u805e\u3053\u3048\u308b\u304b\u3082\u3057\u308c\u306a\u3044\u304c\uff0c\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u3067\u4e88\u6e2c\u3057\u305f\u308a\u89e3\u6790\u3057\u305f\u308a\u3059\u308b\u554f\u984c\u306b\u306f\uff0c\n\u5b9f\u969b\u306b\u306f\u6bd4\u8f03\u7684\u5358\u7d14\u306a\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u304c\u4f7f\u308f\u308c\u3066\u3044\u308b[2][4]\uff0e\n\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u3092\u7528\u3044\u305f\u4e88\u6e2c\u306e\u96e3\u3057\u3055\u306f\uff0c\u30a2\u30eb\u30b4\u30ea\u30ba\u30e0\u306b\u3042\u308b\u306e\u3067\u306f\u306a\u3044\uff0e\n\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u3092\u4fdd\u5b58\u3057\u8aad\u307f\u51fa\u3059\u30b9\u30c8\u30ec\u30fc\u30b8\u3084\n\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u306b\u5bfe\u3057\u3066\u5b9f\u884c\u3059\u308b\u6642\u306e\u8a08\u7b97\u91cf\u304c\u5927\u5909\u306a\u306e\u3067\u3042\u308b\uff0e\n\uff08\u4e0a\u8ff0\u306eGelman\u306e\u6587\u7ae0\u3092\u8aad\u3093\u3067\u300c\u81ea\u5206\u306f\u672c\u5f53\u306b\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u3092\u6301\u3063\u3066\u3044\u308b\u306e\u3060\u308d\u3046\u304b\uff1f\u300d\u3068\u8003\u3048\u3066\u307f\u3066\u307b\u3057\u3044\uff09\n\n\n\u89e3\u6790\u3059\u308b\u306e\u304c\u3082\u3063\u3068\u96e3\u3057\u3044\u554f\u984c\u306f\uff0c\u300c\u30df\u30c7\u30a3\u30a2\u30e0\u306a\u30c7\u30fc\u30bf\u300d\u306e\u5834\u5408\u3067\u3042\u308a\uff0c\n\u7279\u306b\u554f\u984c\u3068\u306a\u308b\u306e\u306f\u300c\u30b9\u30e2\u30fc\u30eb\u30c7\u30fc\u30bf\u300d\u306e\u5834\u5408\u3067\u3042\u308b\uff0e\nGelman\u306e\u6587\u7ae0\u3092\u501f\u308a\u308b\u306a\u3089\uff0c\u30d3\u30c3\u30b0\u30c7\u30fc\u30bf\u306e\u554f\u984c\u304c\u300c\u5341\u5206\u306b\u30d3\u30c3\u30b0\u300d\u3067\u5b9f\u969b\u306b\u306f\u89e3\u3051\u306a\u3044\u306e\u3067\u3042\u308c\u3070\uff0c\n\u300c\u305d\u308c\u307b\u3069\u5341\u5206\u306b\u30d3\u30c3\u30b0\u3067\u306f\u306a\u3044\u300d\u30c7\u30fc\u30bf\u3092\u6271\u3048\u3070\u3088\u3044\u306e\u3067\u3042\u308b\uff0e\n\n\n\n### 
\u3053\u3053\u3067\u306e\u30d9\u30a4\u30ba\u63a8\u8ad6\u306e\u67a0\u7d44\u307f\n\n\n\n\u8a08\u7b97\u3059\u308b\u3079\u304d\u4fe1\u5ff5\u306f\uff0c\u30d9\u30a4\u30ba\u7684\u306b\u8003\u3048\u305f\u78ba\u7387\u3068\u89e3\u91c8\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\uff0e\n\u3053\u3053\u3067\uff0c\u3042\u308b\u4e8b\u8c61$A$\u306b\u3064\u3044\u3066\u300c\u4e8b\u524d\u300d\u4fe1\u5ff5\u3092\u6301\u3063\u3066\u3044\u308b\u3068\u3057\u3088\u3046\n\uff08\u4f8b\u3048\u3070\uff0c\u30c6\u30b9\u30c8\u3092\u5b9f\u884c\u3059\u308b\u524d\u306b\uff0c\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u30d0\u30b0\u304c\u3042\u308a\u305d\u3046\u304b\u3069\u3046\u304b\u306b\u3064\u3044\u3066\u306e\u4fe1\u5ff5\uff09\uff0e\n\n\n\n\u6b21\u306f\uff0c\u5f97\u3089\u308c\u305f\u8a3c\u62e0\u3092\u4f7f\u304a\u3046\uff0e\u30d0\u30b0\u3042\u308a\u30d7\u30ed\u30b0\u30e9\u30e0\u306e\u4f8b\u3092\u4f7f\u3048\u3070\uff0c\n\u30d7\u30ed\u30b0\u30e9\u30e0\u306f\u30c6\u30b9\u30c8$X$\u306b\u30d1\u30b9\u3057\u305f\u306e\u3067\uff0c\u305d\u306e\u60c5\u5831\u3092\u53d6\u308a\u5165\u308c\u3066\u4fe1\u5ff5\u3092\u66f4\u65b0\u3057\u305f\u3044\uff0e\n\u3053\u306e\u66f4\u65b0\u3055\u308c\u305f\u65b0\u3057\u3044\u4fe1\u5ff5\u3092\u300c\u4e8b\u5f8c\u300d\u4fe1\u5ff5\u3068\u547c\u3076\u3053\u3068\u306b\u3059\u308b\uff0e\n\u4ee5\u4e0b\u306e\u5f0f\u3092\u4f7f\u3048\u3070\uff0c\u4fe1\u5ff5\u3092\u66f4\u65b0\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\uff0e\n\u3053\u306e\u5f0f\u306f\uff0c\u767a\u898b\u8005\u306e\u30c8\u30fc\u30de\u30b9\u30fb\u30d9\u30a4\u30ba\u306b\u3061\u306a\u3093\u3067\uff0c\u30d9\u30a4\u30ba\u306e\u5b9a\u7406\u3068\u547c\u3070\u308c\u3066\u3044\u308b\uff0e\n\n\n\\begin{align}\n P( A | X ) = & \\frac{ P(X | A) P(A) } {P(X) } \\\\\\\\[5pt]\n& \\propto P(X | A) P(A)\\;\\; (\\propto \\text{\u306f\u6bd4\u4f8b\u3092\u8868\u3059} )\n\\end{align}\n\n\n\n\u3053\u306e\u516c\u5f0f\u306f\u30d9\u30a4\u30ba\u63a8\u8ad6\u3060\u3051\u306e\u3082\u306e\u3067\u306f\u306a\u3044\uff0e\u30d9\u30a4\u30ba\u63a8\u8ad6\u4ee5\u5916\u3067\u3082\u4f7f\u308f\u308c\u3066\u3044\u308b\u6570\u5b66\u7684\u4e8b\u5b9f\u3067\u3042\u308b\uff0e\n\u30d9\u30a4\u30ba\u63a8\u8ad6\u3067\u306f\uff0c\u5358\u306b\u3053\u306e\u5f0f\u3092\u4f7f\u3063\u3066\n\u521d\u671f\u306e\u4e8b\u524d\u78ba\u7387$P(A)$\u3068\u66f4\u65b0\u5f8c\u306e\u4e8b\u5f8c\u78ba\u7387$P(A | X)$\u3092\u7d50\u3073\u3064\u3051\u3066\u3044\u308b\u3060\u3051\u3067\u3042\u308b\uff0e\n\n\n\n\n##### 
\u4f8b\u984c\uff1a\u3060\u308c\u3082\u304c\u4e00\u5ea6\u306f\u3084\u308b\u300c\u30b3\u30a4\u30f3\u6295\u3052\u300d\u306e\u554f\u984c\n\n\n\u7d71\u8a08\u5b66\u306e\u30c6\u30ad\u30b9\u30c8\u3067\u3042\u308c\u3070\uff0c\u30b3\u30a4\u30f3\u6295\u3052\u306e\u554f\u984c\u3092\u6271\u3063\u3066\u3044\u306a\u3044\u672c\u306f\u306a\u3044\uff0e\n\u3061\u3087\u3063\u3068\u5909\u308f\u3063\u305f\u3084\u308a\u65b9\u3067\u3053\u306e\u554f\u984c\u3092\u6271\u3063\u3066\u307f\u3088\u3046\uff0e\n\u3042\u306a\u305f\u306f\uff0c\u30b3\u30a4\u30f3\u306e\u8868\u304c\u51fa\u308b\u78ba\u7387\u304c\u3088\u304f\u308f\u304b\u3089\u306a\u3044\u3068\u3059\u308b\uff08\u672c\u5f53\u306f50%\uff09\uff0e\n\u4f55\u3089\u304b\u306e\u6bd4\u7387\uff08\u3053\u3053\u3067\u306f$p$\u3068\u3059\u308b\uff09\u3067\u8868\u88cf\u304c\u3067\u308b\u3068\u3044\u3046\u3053\u3068\u306b\u3064\u3044\u3066\u306f\u4fe1\u3058\u3066\u3044\u308b\u304c\uff0c\n\u305d\u306e$p$\u304c\u3069\u306e\u304f\u3089\u3044\u306a\u306e\u304b\u306b\u3064\u3044\u3066\u306f\uff0c\u307e\u3063\u305f\u304f\u60c5\u5831\u3092\u6301\u3063\u3066\u3044\u306a\u3044\uff0e\n\n\n\u3067\u306f\u30b3\u30a4\u30f3\u6295\u3052\u3092\u521d\u3081\u3066\uff0c\u8868$H$\u304c\u51fa\u305f\u306e\u304b\u88cf$T$\u304c\u51fa\u305f\u306e\u304b\u3092\u8a18\u9332\u3059\u308b\u3053\u3068\u306b\u3059\u308b\uff0e\n\u3053\u3053\u3067\u3061\u3087\u3063\u3068\u8003\u3048\u3066\u307f\u3088\u3046\uff0e\n\u30c7\u30fc\u30bf\u304c\u5897\u3048\u308b\u306b\u3064\u308c\u3066\uff0c\u63a8\u8ad6\u7d50\u679c\u306f\u3069\u306e\u3088\u3046\u306b\u5909\u308f\u3063\u3066\u3044\u304f\u306e\u3060\u308d\u3046\u304b\uff1f\n\u3082\u3063\u3068\u6b63\u78ba\u306b\u8a00\u3048\u3070\uff0c\u30c7\u30fc\u30bf\u304c\u5c11\u306a\u3044\u6642\u3068\u30c7\u30fc\u30bf\u304c\u591a\u3044\u6642\u3068\u3067\uff0c\u4e8b\u5f8c\u78ba\u7387\u306f\u3069\u306e\u3088\u3046\u306b\u9055\u3046\u306e\u3060\u308d\u3046\u304b\uff1f\n\n\n\u4ee5\u4e0b\u306e\u30b3\u30fc\u30c9\u306f\uff0c\n\uff08\u30b3\u30a4\u30f3\u6295\u3052\u306e\uff09\u30c7\u30fc\u30bf\u304c\u5897\u3048\u308b\u305f\u3073\u306b\u66f4\u65b0\u3055\u308c\u308b\u4e8b\u5f8c\u78ba\u7387\u306e\u7cfb\u5217\u3092\u30d7\u30ed\u30c3\u30c8\u3059\u308b\u3082\u306e\u3067\u3042\u308b\uff0e\n\n\n\n\n\n```\n\"\"\"\n\u672c\u66f8\u3067\u306fmatplotlib\u306e\u30b0\u30e9\u30d5\u306e\u30b9\u30bf\u30a4\u30eb\u3092\u5909\u66f4\u3059\u308b\u305f\u3081\u306b\uff0cmatplotlibrc\u30d5\u30a1\u30a4\u30eb\u3092\u30ab\u30b9\u30bf\u30de\u30a4\u30ba\u3057\u3066\u3044\u308b\uff0e\n\u672c\u66f8\u3092\u5b9f\u884c\u3057\u3066\uff0c\u672c\u66f8\u306e\u30b9\u30bf\u30a4\u30eb\u3092\u4f7f\u3044\u305f\u3044\u306e\u3067\u3042\u308c\u3070\uff0c\u4ee5\u4e0b\u306e2\u3064\u306e\u65b9\u6cd5\u304c\u3042\u308b\uff0e\n 1. \u672c\u66f8\u306e style/ \u30c7\u30a3\u30ec\u30af\u30c8\u30ea\u306b\u3042\u308brc\u30d5\u30a3\u30a2\u30eb\u3067\uff0c\u81ea\u5206\u306e\u74b0\u5883\u306ematplotlibrc\u3092\u66f8\u304d\u63db\u3048\u308b\uff0e\n http://matplotlib.org/users/customizing.html\u3092\u53c2\u7167\uff0e\n 2. 
\u30b9\u30bf\u30a4\u30eb\u306fbmh_matplotlibrc.json\u30d5\u30a1\u30a4\u30eb\u306b\u3082\u3042\u308b\uff0e\u3053\u308c\u3092\u4f7f\u3063\u3066\u4ee5\u4e0b\u306e\u30b3\u30fc\u30c9\u3092\u5b9f\u884c\u3059\u308c\u3070\uff0c\n \u672c\u66f8\u306b\u3060\u3051\u30b9\u30bf\u30a4\u30eb\u3092\u9069\u7528\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\uff0e\n import json\n s = json.load( open(\"../styles/bmh_matplotlibrc.json\") )\n matplotlib.rcParams.update(s)\n\"\"\"\n\n# \u4ee5\u4e0b\u306e\u30b3\u30fc\u30c9\u306f\u8aad\u307f\u98db\u3070\u3057\u3066\u69cb\u308f\u306a\u3044\uff0e\u3053\u3053\u3067\u306f\u3042\u307e\u308a\u91cd\u8981\u3067\u306f\u306a\u3044\u3057\uff0c\n# \u307e\u3060\u8aac\u660e\u3057\u3066\u3044\u306a\u3044\u9032\u3093\u3060\u5185\u5bb9\u3082\u542b\u3093\u3067\u3044\u308b\uff0e\u305d\u306e\u4e0b\u306e\u30b0\u30e9\u30d5\u3092\u898b\u3066\uff01\n%matplotlib inline\nfrom IPython.core.pylabtools import figsize\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfigsize(11, 9)\n\nimport scipy.stats as stats\n\ndist = stats.beta\nn_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]\ndata = stats.bernoulli.rvs(0.5, size=n_trials[-1])\nx = np.linspace(0, 1, 100)\n\n \n# \u5206\u304b\u3063\u3066\u3044\u308b\u4eba\u3078\uff1a\u3053\u3053\u3067\u306f\u4e8c\u9805\u5206\u5e03\u306e\u5171\u5f79\u4e8b\u524d\u5206\u5e03\u3092\u4f7f\u3063\u3066\u3044\u308b\uff0e\nfor k, N in enumerate(n_trials):\n sx = plt.subplot(len(n_trials) / 2, 2, k + 1)\n # u\"$p$, \u8868\u304c\u51fa\u308b\u78ba\u7387\"\n plt.xlabel(\"$p$, probability of heads\") \\\n if k in [0, len(n_trials) - 1] else None\n plt.setp(sx.get_yticklabels(), visible=False)\n heads = data[:N].sum()\n y = dist.pdf(x, 1 + heads, 1 + N - heads)\n plt.plot(x, y, label=\"observe %d tosses,\\n %d heads\" % (N, heads)) # u\"%d\u56de\u6295\u3052\u3066, \\n \u8868\u306f%d\u56de\"\n plt.fill_between(x, 0, y, color=\"#348ABD\", alpha=0.4)\n plt.vlines(0.5, 0, 4, color=\"k\", linestyles=\"--\", lw=1)\n\n leg = plt.legend()\n leg.get_frame().set_alpha(0.4)\n plt.autoscale(tight=True)\n\n\nplt.suptitle(\"Bayesian updating of posterior probabilities\", # u\"\u30d9\u30a4\u30ba\u63a8\u8ad6\u306b\u3088\u308b\u4e8b\u5f8c\u78ba\u7387\u306e\u66f4\u65b0\"\n y=1.02,\n 
fontsize=14)\n\nplt.tight_layout()\n```\n\n\u4e8b\u5f8c\u78ba\u7387\u306f\u66f2\u7dda\u3067\u8868\u3055\u308c\u3066\u3044\u308b\uff0e\n\u4e0d\u78ba\u5b9f\u3055\u306f\uff0c\u3053\u306e\u66f2\u7dda\u306e\u5e83\u304c\u308a\u5177\u5408\u306b\u6bd4\u4f8b\u3057\u3066\u3044\u308b\uff0e\n\u4e0a\u306e\u30b0\u30e9\u30d5\u3092\u898b\u308c\u3070\u5206\u304b\u308b\u3088\u3046\u306b\uff0c\n\u30c7\u30fc\u30bf\u304c\u5897\u3048\u308b\u305f\u3073\u306b\u4e8b\u5f8c\u78ba\u7387\u306e\u66f2\u7dda\u306f\u53f3\u3078\u5de6\u3078\u3068\u52d5\u304d\u56de\u308b\uff0e\n\u6700\u7d42\u7684\u306b\uff0c\u30c7\u30fc\u30bf\u304c\u305f\u304f\u3055\u3093\u624b\u306b\u5165\u308c\u3070\uff08\u305f\u304f\u3055\u3093\u30b3\u30a4\u30f3\u3092\u6295\u3052\u305f\u3089\uff09\uff0c\n\u4e8b\u5f8c\u78ba\u7387\u66f2\u7dda\u306f\uff0c\u771f\u306e\u78ba\u7387\u3067\u3042\u308b$p=0.5$\u306b\u6b21\u7b2c\u306b\u96c6\u307e\u3063\u3066\u304f\u308b\uff0e\n\n\n\u306a\u304a\uff0c\u66f2\u7dda\u306e\u30d4\u30fc\u30af\u306e\u4f4d\u7f6e\u306f0.5\u3067\u306f\u306a\u3044\u3057\uff0c\u305d\u3046\u3067\u3042\u308b\u7406\u7531\u3082\u306a\u3044\uff0e$p$\u306e\u5024\u306b\u3064\u3044\u3066\u306f\u4f55\u3082\u77e5\u3089\u306a\u3044\uff0c\u3068\u3044\u3046\u524d\u63d0\u3060\u3063\u305f\u306e\u3060\u304b\u3089\uff0e\n\u5b9f\u969b\u306e\u3068\u3053\uff52\uff0c\u30b3\u30a4\u30f3\u6295\u3052\u306e\u7d50\u679c\u304c\u6975\u7aef\u306a\uff0c\u305f\u3068\u3048\u30708\u56de\u6295\u3052\u3066\u8868\u304c1\u56de\u3057\u304b\u306a\u304b\u3063\u305f\u3088\u3046\u306a\u5834\u5408\u306b\u306f\uff0c\n\u4e8b\u5f8c\u78ba\u7387\u66f2\u7dda\u306e\u30d4\u30fc\u30af\u306f0.5\u304b\u3089\u975e\u5e38\u306b\u96e2\u308c\u3066\u3044\u308b\u3060\u308d\u3046\n\uff08\u4e8b\u524d\u60c5\u5831\u304c\u306a\u3044\u306e\u3060\u304b\u3089\uff0c8\u56de\u6295\u3052\u3066\u8868\u304c1\u56de\u3057\u304b\u51fa\u306a\u3044\u30b3\u30a4\u30f3\u306b\u30a4\u30ab\u30b5\u30de\u306f\u306a\u3044\u3068\uff0c\u3069\u306e\u304f\u3089\u3044\u78ba\u4fe1\u3067\u304d\u308b\u3060\u308d\u3046\uff1f\uff09\uff0e\n\u3082\u3063\u3068\u30c7\u30fc\u30bf\u304c\u5897\u3048\u308c\u3070\uff0c\u78ba\u7387\u306f\u3082\u3063\u3068$p=0.5$\u306b\u8fd1\u304f\u306a\u308b\u3060\u308d\u3046\uff0e\n\n\n\u6b21\u306e\u4f8b\u3067\uff0c\u6570\u5b66\u304c\u30d9\u30a4\u30ba\u63a8\u8ad6\u3067\u3069\u306e\u3088\u3046\u306b\u4f7f\u308f\u308c\u308b\u306e\u304b\u898b\u3066\u307f\u3088\u3046\uff0e\n\n\n##### \u4f8b\u984c\uff1a\u30d0\u30b0\u304b\uff0c\u4ed5\u69d8\u304b\uff1f\n\n\n\u300c\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u306f\u30d0\u30b0\u304c\u306a\u3044\u300d\u3068\u3044\u3046\u4e8b\u8c61\u3092$A$\u3068\u3059\u308b\uff0e\n\u300c\u3053\u306e\u30d7\u30ed\u30b0\u30e9\u30e0\u304c\u3059\u3079\u3066\u306e\u30c7\u30d0\u30c3\u30b0\u30c6\u30b9\u30c8\u306b\u30d1\u30b9\u3059\u308b\u300d\u3068\u3044\u3046\u4e8b\u8c61\u3092$X$\u3068\u3059\u308b\uff0e\n\u3068\u308a\u3042\u3048\u305a\uff0c\u30d0\u30b0\u304c\u306a\u3044\u3068\u3044\u3046\u4e8b\u524d\u78ba\u7387$P(A)$\u3092\u5909\u6570$p$\u306b\u3057\u3066\u304a\u3053\u3046\uff0e\n\u3064\u307e\u308a$P(A) = p$\u3068\u3059\u308b\uff0e\n\n\n\u4eca\u304b\u3089\u8003\u3048\u308b\u306e\u306f\u3053\u306e$P(A|X)$\u3060\uff0e\n\u3064\u307e\u308a\uff0c\u300c\u30c7\u30d0\u30c3\u30b0\u30c6\u30b9\u30c8$X$\u3092\u30d1\u30b9\u3057\u305f\u6642\u306b\uff0c\u30d0\u30b0\u304c\u306a\u3044\u300d\u78ba\u7387\u3067\u3042\u308b\uff0e\n\u4e0a\u306e\u516c\u5f0f\u3092\u4f7f\u3046\u305f\u3081\u306b\uff0c\u3044\u304f\u3064\u304b\u8a08\u7b97\u3057\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\uff0e\n\n\n\u3067\u306f$P(X | 
A)$\u3068\u306f\u4f55\u3060\u308d\u3046\uff1f\u3000\u3053\u308c\u306f\uff0c\u300c\u30d0\u30b0\u304c\u306a\u3044\u6642\u306b\u3059\u3079\u3066\u306e\u30c6\u30b9\u30c8$X$\u3092\u30d1\u30b9\u3059\u308b\u300d\u78ba\u7387\u3067\u3042\u308b\uff0e\u660e\u3089\u304b\u306b\uff0c\u3053\u308c\u306f1\u3067\u3042\u308b\uff0e\u30d0\u30b0\u304c\u306a\u3051\u308c\u3070\uff0c\u3069\u3093\u306a\u30c6\u30b9\u30c8\u306b\u3082\u30d1\u30b9\u3059\u308b\u304b\u3089\u3060\uff0e\n\n\n\u305d\u308c\u3088\u308a\u3082\u5384\u4ecb\u306a\u306e\u306f$P(X)$\u3067\u3042\u308b\uff0e\u4e8b\u8c61$X$\u304c\u8d77\u304d\u308b\u53ef\u80fd\u6027\u306f2\u3064\u3042\u308b\uff0e\n\u5b9f\u306f\u30d0\u30b0\u304c\u3042\u308b\uff08\u3053\u308c\u3092$\\sim A$\u3084$\\lnot A$\u3068\u66f8\u3044\u3066*not $A$*\u3068\u8aad\u3080\uff09\u306b\u3082\u95a2\u308f\u3089\u305a\u4e8b\u8c61$X$\u304c\u8d77\u3053\u3063\u3066\u3044\u308b\u306e\u304b\uff0c\u305d\u308c\u3068\u3082\u30d0\u30b0\u304c\u306a\u3044\u304b\u3089\u4e8b\u8c61$X$\u304c\u8d77\u304d\u3066\u3044\u308b\u306e\u304b\uff0c\u3067\u3042\u308b\uff0e\n\u3059\u308b\u3068\uff0c$P(X)$\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u89e3\u91c8\u3067\u304d\u308b\uff0e\n\n\n\\begin{align}\nP(X ) & = P(X \\text{ and } A) + P(X \\text{ and } \\sim A) \\\\\\\\[5pt]\n & = P(X|A)P(A) + P(X | \\sim A)P(\\sim A)\\\\\\\\[5pt]\n& = P(X|A)p + P(X | \\sim A)(1-p)\n\\end{align}\n\n\u3059\u3067\u306b$P(X|A)$\u306f\u8a08\u7b97\u3057\u3066\u3042\u308b\uff0e\u3057\u304b\u3057$P(X | \\sim A)$\u3092\u3069\u3046\u3059\u308b\u304b\u306f\uff0c\u4e3b\u89b3\u7684\u3067\u3042\u308b\uff0e\n\u30d7\u30ed\u30b0\u30e9\u30e0\u306f\u30c6\u30b9\u30c8\u3092\u30d1\u30b9\u3057\u305f\u304c\uff0c\u305d\u308c\u3067\u3082\u30d0\u30b0\u304c\u3042\u308b\u306e\u3060\uff0e\n\u3057\u304b\u3057\u30d0\u30b0\u304c\u3042\u308b\u78ba\u7387\u306f\u5c0f\u3055\u304f\u306a\u3063\u3066\u3044\u308b\uff0e\n\u3053\u308c\u306f\u5b9f\u884c\u3057\u305f\u30c6\u30b9\u30c8\u306e\u6570\u3084\uff0c\u30c6\u30b9\u30c8\u304c\u3069\u308c\u3060\u3051\u7cbe\u5de7\u306a\u306e\u304b\u306b\u3082\u4f9d\u5b58\u3059\u308b\uff0e\n\u3053\u3053\u3067\u306f\u63a7\u3048\u3081\u306b\u8003\u3048\u3066\uff0c$P(X|\\sim A) = 0.5$\u3068\u3057\u3088\u3046\uff0e\u3059\u308b\u3068\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u306a\u308b\uff0e\n\n\\begin{align}\nP(A | X) & = \\frac{1\\cdot p}{ 1\\cdot p +0.5 (1-p) } \\\\\\\\\n& = \\frac{ 2 p}{1+p}\n\\end{align}\n\u3053\u308c\u304c\u4e8b\u5f8c\u78ba\u7387\u3067\u3042\u308b\uff0e\u3053\u308c\u3092\u4e8b\u524d\u78ba\u7387\u30d1\u30e9\u30e1\u30fc\u30bf$p \\in [0,1]$\u306e\u95a2\u6570\u3068\u3057\u3066\u307f\u305f\u3089\uff0c\n\u3069\u3093\u306a\u5f62\u3092\u3057\u3066\u3044\u308b\u3060\u308d\u3046\uff1f\n\n\n\n```\nfigsize(12.5, 4) # \u30b0\u30e9\u30d5\u306e\u7e26\u6a2a\u30b5\u30a4\u30ba\u309212.5:4\u306b\u3059\u308b\np = np.linspace(0, 1, 50) # 0\u304b\u30891\u307e\u3067\u309250\u70b9\u306b\u5206\u5272\nplt.plot(p, 2 * p / (1 + p), color=\"#348ABD\", lw=3) # \u4e8b\u5f8c\u78ba\u7387\u3092\u30d7\u30ed\u30c3\u30c8\uff0e\u8272\u306f\u9752\u7cfb\uff0c\u7dda\u5e45\u306f3\n# plt.fill_between(p, 2*p/(1+p), alpha=.5, facecolor=[\"#A60628\"]) # \u30b3\u30e1\u30f3\u30c8\u3092\u5916\u3057\u3066\u8a66\u3057\u3066\u307f\u3088\u3046\nplt.scatter(0.2, 2 * (0.2) / 1.2, s=140, c=\"#348ABD\") # p=0.2\u306e\u3068\u3053\u308d\u306b\u70b9\u3092\u63cf\u753b\uff0e\u8272\u306f\u9752\u7cfb\uff0c\u30b5\u30a4\u30ba\u306f140\nplt.xlim(0, 1) # x\u8ef8\u306e\u7bc4\u56f2\u3092(0,1)\u306b\u8a2d\u5b9a\nplt.ylim(0, 1) # 
y\u8ef8\u306e\u7bc4\u56f2\u3092(0,1)\u306b\u8a2d\u5b9a\nplt.xlabel(\"Prior, $P(A) = p$\") # x\u8ef8\u30e9\u30d9\u30eb u\"\u4e8b\u524d\u78ba\u7387$P(A) = p$\"\nplt.ylabel(\"Posterior, $P(A|X)$, with $P(A) = p$\") # y\u8ef8\u30e9\u30d9\u30eb u\"$P(A) = p$\u306e\u6642\u306e\u4e8b\u5f8c\u78ba\u7387$P(A|X)$\"\nplt.title(\"Are there bugs in my code?\") # \u30b0\u30e9\u30d5\u306e\u30bf\u30a4\u30c8\u30eb u\"\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u30d0\u30b0\u304c\u3042\u308b\u304b\uff1f\"\n```\n\n\u4e8b\u524d\u78ba\u7387$p$\u304c\u5c0f\u3055\u3044\u6642\u306f\uff0c\u30c6\u30b9\u30c8$X$\u306b\u30d1\u30b9\u3057\u305f\u3068\u3044\u3046\u8a3c\u62e0\u304c\u975e\u5e38\u306b\u52b9\u3044\u3066\u3044\u308b\uff0e\n\u3053\u3053\u3067\u4e8b\u524d\u78ba\u7387\u306e\u5024\u3092\u4e00\u3064\u6c7a\u3081\u3066\u307f\u3088\u3046\uff0e\n\u79c1\u306f\u512a\u79c0\u306a\u30d7\u30ed\u30b0\u30e9\u30de\u30fc\u306a\u306e\u3067\uff08\u81ea\u5206\u3067\u306f\u305d\u3046\u601d\u3063\u3066\u3044\u308b\uff09\uff0c0.20\u3067\u3082\u73fe\u5b9f\u7684\u3060\u308d\u3046\uff0e\n\u3064\u307e\u308a\uff0c20%\u306e\u78ba\u7387\u3067\u30d0\u30b0\u306e\u306a\u3044\u30d7\u30ed\u30b0\u30e9\u30e0\u3092\u66f8\u304f\u3053\u3068\u304c\u3067\u304d\u308b\uff0c\u3068\u3044\u3046\u308f\u3051\u3060\uff0e\n\u3082\u3063\u3068\u3082\u73fe\u5b9f\u7684\u306b\u306f\uff0c\u3053\u306e\u4e8b\u524d\u78ba\u7387\u306f\n\u30d7\u30ed\u30b0\u30e9\u30e0\u304c\u3069\u308c\u3060\u3051\u8907\u96d1\u3067\u5927\u898f\u6a21\u306a\u306e\u304b\u306b\u3082\u3088\u308b\u306e\u3060\u304c\uff0c\n\u3068\u308a\u3042\u3048\u305a0.20\u3068\u3057\u3066\u304a\u3053\u3046\uff0e\n\u3059\u308b\u3068\uff0c\u30d7\u30ed\u30b0\u30e9\u30e0\u306b\u306f\u30d0\u30b0\u304c\u306a\u3044\u3068\u3044\u3046\u66f4\u65b0\u3055\u308c\u305f\u4fe1\u5ff5\u306f0.33\u3068\u306a\u308b\uff0e\n\n\n\u3053\u3053\u3067\u4e8b\u524d\u78ba\u7387\u306f\u78ba\u7387\u3067\u3042\u308b\uff0c\u3068\u3044\u3046\u3053\u3068\u3092\u601d\u3044\u51fa\u3057\u3066\u304a\u3053\u3046\uff0e\n$p$\u306f\u30d0\u30b0\u304c\u306a\u3044\u4e8b\u524d\u78ba\u7387\u3067\uff0c$1-p$\u306f\u30d0\u30b0\u304c\u3042\u308b\u4e8b\u524d\u78ba\u7387\u3067\u3042\u308b\n\n\n\u540c\u69d8\u306b\uff0c\u4e8b\u5f8c\u78ba\u7387\u3082\u78ba\u7387\u3067\u3042\u308b\uff0e\n$P(A | X)$\u306f\u300c\u3059\u3079\u3066\u306e\u30c6\u30b9\u30c8\u3092\u30d1\u30b9\u3057\u3066\u30d0\u30b0\u304c\u306a\u3044\u300d\u78ba\u7387\uff0c\n$1-P(A|X)$\u306f\u300c\u3059\u3079\u3066\u306e\u30c6\u30b9\u30c8\u3092\u30d1\u30b9\u3057\u3066\u30d0\u30b0\u304c\u3042\u308b\u300d\u78ba\u7387\u3067\u3042\u308b\uff0e\n\u3053\u306e\u4e8b\u5f8c\u78ba\u7387\u306f\u3069\u3093\u306a\u5024\u3060\u308d\u3046\u304b\uff1f\n\u4ee5\u4e0b\u306e\u30b0\u30e9\u30d5\u306f\uff0c\u4e8b\u524d\u78ba\u7387\u3068\u4e8b\u5f8c\u78ba\u7387\u3092\u8a08\u7b97\u3057\u305f\u3082\u306e\u3067\u3042\u308b\uff0e\n\n\n\n```\nfigsize(12.5, 4)\ncolours = [\"#348ABD\", \"#A60628\"]\n\nprior = [0.20, 0.80]\nposterior = [1. / 3, 2. 
/ 3]\nplt.bar([0, .7], prior, alpha=0.70, width=0.25,\n color=colours[0], label=\"prior distribution\", # u\"\u4e8b\u524d\u78ba\u7387\"\n lw=\"3\", edgecolor=colours[0])\n\nplt.bar([0 + 0.25, .7 + 0.25], posterior, alpha=0.7,\n width=0.25, color=colours[1],\n label=\"posterior distribution\", # u\"\u4e8b\u5f8c\u78ba\u7387\"\n lw=\"3\", edgecolor=colours[1])\n\nplt.xticks([0.20, .95], [\"Bugs Absent\", \"Bugs Present\"]) # u\"\u30d0\u30b0\u304c\u306a\u3044\", u\"\u30d0\u30b0\u304c\u3042\u308b\nplt.title(\"Prior and Posterior probability of bugs present\") # u\"\u30d0\u30b0\u304c\u3042\u308b\u4e8b\u524d\u78ba\u7387\u3068\u4e8b\u5f8c\u78ba\u7387\"\nplt.ylabel(\"Probability\") # u\"\u78ba\u7387\"\nplt.legend(loc=\"upper left\"); # \u51e1\u4f8b\u306e\u4f4d\u7f6e\u306f\u5de6\u4e0a\n```\n\n\u8a3c\u62e0\u3068\u306a\u308b\u4e8b\u8c61$X$\u304c\u5f97\u3089\u308c\u305f\u5f8c\u3067\u306f\uff0c\u30d0\u30b0\u304c\u306a\u3044\u3068\u3044\u3046\u78ba\u7387\u304c\u5927\u304d\u304f\u306a\u3063\u3066\u3044\u308b\uff0e\n\u30c6\u30b9\u30c8\u306e\u6570\u3092\u5897\u3084\u305b\u3070\uff0c\u30d0\u30b0\u304c\u306a\u3044\uff08\u78ba\u73871\uff09\u3068\u3044\u3046\u3053\u3068\u3092\u78ba\u4fe1\u3067\u304d\u308b\u3060\u308d\u3046\uff0e\n\n\n\u3053\u308c\u306f\u30d9\u30a4\u30ba\u63a8\u8ad6\u3068\u30d9\u30a4\u30ba\u5247\u306e\u975e\u5e38\u306b\u5358\u7d14\u306a\u4f8b\u3067\u3042\u308b\uff0e\n\u6b8b\u5ff5\u306a\u304c\u3089\uff0c\u3082\u3046\u5c11\u3057\u8907\u96d1\u306a\u30d9\u30a4\u30ba\u63a8\u8ad6\u3092\u5b9f\u884c\u3059\u308b\u305f\u3081\u306e\u6570\u5b66\u306f\uff0c\n\u975e\u5e38\u306b\u8abf\u6574\u3055\u308c\u305f\u4f8b\u984c\u3067\u306a\u3051\u308c\u3070\uff0c\u3082\u3063\u3068\u3082\u3063\u3068\u96e3\u3057\u304f\u306a\u3063\u3066\u3057\u307e\u3046\uff0e\n\u3042\u3068\u3067\u898b\u308b\u3088\u3046\u306b\uff0c\u3053\u306e\u624b\u306e\u6570\u5b66\u7684\u306a\u89e3\u6790\u306f\u5b9f\u969b\u306b\u306f\u5fc5\u8981\u306a\u3044\uff0e\n\u3044\u308d\u3044\u308d\u306a\u30e2\u30c7\u30eb\u5316\u306e\u305f\u3081\u306e\u30c4\u30fc\u30eb\u3092\u77e5\u308b\u307b\u3046\u304c\u5148\u3067\u3042\u308b\uff0e\n\u6b21\u306e\u7bc0\u306f\u300c\u78ba\u7387\u5206\u5e03\u300d\u3092\u6271\u3046\uff0e\u3082\u3057\u3088\u304f\u77e5\u3063\u3066\u3044\u308b\u306a\u3089\uff0c\n\u8aad\u307f\u98db\u3070\u3057\u3066\uff08\u3082\u3057\u304f\u306f\u659c\u3081\u8aad\u307f\u3057\u3066\uff09\u3082\u3088\u3044\uff0e\n\u3088\u304f\u77e5\u3089\u306a\u3051\u308c\u3070\uff0c\u975e\u5e38\u306b\u91cd\u8981\u306a\u306e\u3067\u3088\u304f\u7406\u89e3\u3057\u3066\u307b\u3057\u3044\uff0e\n\n\n\n_______\n\n## \u78ba\u7387\u5206\u5e03\n\n\n\n**\u78ba\u7387\u5206\u5e03\u3068\u306f\u4f55\u304b\u3092\u7c21\u5358\u306b\u304a\u3055\u3089\u3044\u3057\u3088\u3046\uff0e**\n$Z$\u3092\u78ba\u7387\u5909\u6570\u3068\u3059\u308b\uff0e\n$Z$\u304c\u53d6\u308b\u5024\u305d\u308c\u305e\u308c\u306b\u78ba\u7387\u3092\u4e0e\u3048\u308b\u306e\u304c\u78ba\u7387\u5206\u5e03\u3067\u3042\u308b\uff0e\n\u30b0\u30e9\u30d5\u3067\u63cf\u3051\u3070\uff0c\u78ba\u7387\u5206\u5e03\u306f\u66f2\u7dda\u3067\u66f8\u304f\u3053\u3068\u304c\u3067\u304d\u3066\uff0c\n\u305d\u306e\u66f2\u7dda\u306e\u9ad8\u3055\u306b\u6bd4\u4f8b\u3057\u3066\u78ba\u7387\u304c\u5927\u304d\u304f\u306a\u308b\uff0e\n\u3059\u3067\u306b\u3053\u306e\u7ae0\u306e\u6700\u521d\u306e\u56f3\u3067\uff0c\u78ba\u7387\u5206\u5e03\u306e\u66f2\u7dda\u306e\u4f8b\u3092\u8aac\u660e\u3057\u3066\u3044\u308b\uff0e\n\n\n\u78ba\u7387\u5909\u6570\u306b\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b3\u7a2e\u985e\u3042\u308b\uff0e\n\n\n- 
**$Z$\u304c\u96e2\u6563\u306e\u5834\u5408**\uff1a\u96e2\u6563\u78ba\u7387\u5909\u6570\u306f\uff0c\u4e0e\u3048\u3089\u308c\u305f\u5024\u306e\u30ea\u30b9\u30c8\u306e\u4e2d\u306e\u3069\u308c\u304b\u4e00\u3064\u306e\u5024\u3092\u53d6\u308b\uff0e\u4eba\u53e3\uff0c\u6620\u753b\u306e\u8a55\u4fa1\uff0c\u5f97\u7968\u6570\u306a\u3069\u306f\u96e2\u6563\u78ba\u7387\u5909\u6570\u306e\u4f8b\u3067\u3042\u308b\uff0e\u96e2\u6563\u78ba\u7387\u5909\u6570\u306f\uff0c\u4ee5\u4e0b\u306e\u9023\u7d9a\u306e\u5834\u5408\u3068\u5bfe\u6bd4\u3059\u308b\u3068\u5206\u304b\u308a\u3084\u3059\u3044\uff0e\n- **$Z$\u304c\u9023\u7d9a\u306e\u5834\u5408**\uff1a\u9023\u7d9a\u78ba\u7387\u5909\u6570\u306f\u4efb\u610f\u7cbe\u5ea6\u306e\u5024\u3092\u53d6\u308b\uff0e\u4f8b\u3048\u3070\u6e29\u5ea6\uff0c\u901f\u5ea6\uff0c\u6642\u9593\uff0c\u8272\u306a\u3069\u306f\u9023\u7d9a\u78ba\u7387\u5909\u6570\u3067\u30e2\u30c7\u30eb\u5316\u3055\u308c\u308b\uff0e\u3053\u308c\u3089\u306e\u5024\u306f\uff0c\u3044\u304f\u3089\u3067\u3082\u7cbe\u5ea6\u3088\u304f\u6307\u5b9a\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u304b\u3089\u3060\uff0e\n- **$Z$\u304c\u6df7\u5408\u578b\u306e\u5834\u5408**\uff1a\u6df7\u5408\u578b\u306e\u78ba\u7387\u5909\u6570\u306f\uff0c\u96e2\u6563\u3068\u9023\u7d9a\u306e\u3069\u3061\u3089\u306e\u5024\u3082\u53d6\u308b\uff0e\u4e0a\u306e2\u3064\u306e\u30bf\u30a4\u30d7\u306e\u7d44\u307f\u5408\u308f\u305b\u3067\u3042\u308b\uff0e\n\n\n\n\n###\u96e2\u6563\u306e\u5834\u5408\n\n\n\u3082\u3057$Z$\u304c\u96e2\u6563\u306a\u3089\uff0c\u78ba\u7387\u5206\u5e03\u306f*\u78ba\u7387\u8cea\u91cf\u95a2\u6570*\u3068\u547c\u3070\u308c\u308b\uff0e\n\u3053\u308c\u306f$Z$\u304c\u5024$k$\u3092\u53d6\u308b\u78ba\u7387\u3092$P(Z=k)$\u3067\u8868\u3059\uff0e\n\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u306f\u78ba\u7387\u5909\u6570$Z$\u3092\u5b8c\u5168\u306b\u6c7a\u5b9a\u3059\u308b\uff0e\u3064\u307e\u308a\n\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u304c\u5206\u304b\u308c\u3070$Z$\u304c\u3069\u306e\u3088\u3046\u306b\u632f\u308b\u821e\u3046\u306e\u304b\u304c\u5206\u304b\u308b\u306e\u3067\u3042\u308b\uff0e\n\u3053\u306e\u5148\u3088\u304f\u767b\u5834\u3059\u308b\u6709\u540d\u306a\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u304c\u3044\u304f\u3064\u304b\u3042\u308b\u304c\uff0c\n\u5fc5\u8981\u306b\u5fdc\u3058\u3066\u7d39\u4ecb\u3059\u308b\uff0e\u4e00\u756a\u6700\u521d\u306b\u7d39\u4ecb\u3059\u308b\u6709\u7528\u306a\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u306f\uff0c\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u3067\u3042\u308b\uff0e\n$Z$\u306e\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u304c\u4ee5\u4e0b\u306e\u5f0f\u306e\u6642\uff0c$Z$\u306f\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u306b\u5f93\u3046\u3068\u8a00\u3046\uff0e\n\n\n$$P(Z = k) =\\frac{ \\lambda^k e^{-\\lambda} }{k!}, \\; \\; k=0,1,2, \\dots 
$$\n\n$\\lambda$\u306f\u5206\u5e03\u306e\u30d1\u30e9\u30e1\u30fc\u30bf\u3067\uff0c\u5206\u5e03\u306e\u5f62\u72b6\u3092\u6c7a\u3081\u308b\uff0e\n\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u306e\u5834\u5408\uff0c$\\lambda$\u306f\u4efb\u610f\u306e\u6b63\u306e\u5b9f\u6570\u3067\u3042\u308b\uff0e\n$\\lambda$\u3092\u5927\u304d\u304f\u3059\u308b\u3068\u5927\u304d\u306a\u5024\u306e\u78ba\u7387\u304c\u9ad8\u304f\u306a\u308a\uff0c\n$\\lambda$\u3092\u5c0f\u3055\u304f\u3059\u308b\u3068\u5c0f\u3055\u306a\u5024\u306e\u78ba\u7387\u304c\u9ad8\u304f\u306a\u308b\uff0e\n\u305d\u306e\u305f\u3081\uff0c$\\lambda$\u306f\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u306e\u300c\u5f37\u5ea6\u300d\u3068\u8003\u3048\u3066\u3082\u3088\u3044\uff0e\n\n\n\u4efb\u610f\u306e\u6b63\u306e\u5b9f\u6570\u3067\u3042\u308b$\\lambda$\u3068\u306f\u7570\u306a\u308a\uff0c\u4e0a\u306e\u5f0f\u3067\u306e$k$\u306e\u5024\u306f\u6b63\u306e\u6574\u6570\uff0c\u3064\u307e\u308a0, 1, 2, ... \u3067\u306a\u3051\u308c\u3070\u306a\u3089\u306a\u3044\uff0e\u3053\u308c\u306f\u91cd\u8981\u306a\u4e8b\u3067\u3042\u308b\uff0e\u4eba\u6570\u3092\u30e2\u30c7\u30eb\u5316\u3057\u3088\u3046\u3068\u3059\u308b\u306a\u3089\uff0c4.25\u4eba\u3068\u304b5.612\u4eba\u3068\u3044\u3046\u306e\u306f\u610f\u5473\u304c\u7121\u3044\u306e\u3060\u304b\u3089\uff0e\n\n\n\u3082\u3057\u78ba\u7387\u5909\u6570$Z$\u304c\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u306b\u5f93\u3046\u306a\u3089\uff0c\u305d\u308c\u3092\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u66f8\u304f\uff0e\n\n\n$$Z \\sim \\text{Poi}(\\lambda) $$\n\n\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u306e\u4fbf\u5229\u306a\u6027\u8cea\u306e\u4e00\u3064\u306f\uff0c\u671f\u5f85\u5024\u304c\u5206\u5e03\u30d1\u30e9\u30e1\u30fc\u30bf\u306b\u7b49\u3057\u3044\u3068\u3044\u3046\u3053\u3068\u3067\u3042\u308b\uff0e\n\n\n$$E[ \\;Z\\; | \\; \\lambda \\;] = \\lambda $$\n\n\n\u3053\u306e\u5148\u3053\u306e\u6027\u8cea\u3092\u5229\u7528\u3059\u308b\u306e\u3067\u899a\u3048\u3066\u304a\u3044\u3066\u307b\u3057\u3044\uff0e\n\u4ee5\u4e0b\u306e\u30b0\u30e9\u30d5\u306f\uff0c\u3044\u304f\u3064\u304b\u306e$\\lambda$\u306e\u5024\u306b\u3064\u3044\u3066\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u3092\u30d7\u30ed\u30c3\u30c8\u3057\u305f\u3082\u306e\u3067\u3042\u308b\uff0e\n\u3053\u306e\u30b0\u30e9\u30d5\u3067\u6ce8\u610f\u3057\u3066\u307b\u3057\u3044\u3053\u3068\u306f2\u3064\uff0e1\u3064\u76ee\u306f$\\lambda$\u3092\u5927\u304d\u304f\u3059\u308c\u3070\uff0c\n\u5927\u304d\u306a\u5024\u306e\u78ba\u7387\u304c\u9ad8\u304f\u306a\u308b\u3053\u3068\uff0e2\u3064\u76ee\u306f\uff0c\u30b0\u30e9\u30d5\u306e\u6a2a\u8ef8\u306f15\u3067\u7d42\u308f\u3063\u3066\u3044\u308b\u304c\uff0c\n\u5206\u5e03\u306f\u305d\u3046\u3067\u306f\u306a\u3044\uff0e\u3059\u3079\u3066\u306e\u6b63\u306e\u6574\u6570\u306b\u5bfe\u3057\u3066\u78ba\u7387\u304c\u5272\u308a\u5f53\u3066\u3089\u308c\u3066\u3044\u308b\uff0e\n\n\n\n```\nfigsize(12.5, 4)\n\nimport scipy.stats as stats\na = np.arange(16)\npoi = stats.poisson\nlambda_ = [1.5, 4.25]\ncolours = [\"#348ABD\", \"#A60628\"]\n\nplt.bar(a, poi.pmf(a, lambda_[0]), color=colours[0],\n label=\"$\\lambda = %.1f$\" % lambda_[0], alpha=0.60,\n edgecolor=colours[0], lw=\"3\")\n\nplt.bar(a, poi.pmf(a, lambda_[1]), color=colours[1],\n label=\"$\\lambda = %.1f$\" % lambda_[1], alpha=0.60,\n edgecolor=colours[1], lw=\"3\")\n\nplt.xticks(a + 0.4, a)\nplt.legend()\nplt.ylabel(\"probability of $k$\") # u\"$k$\u306e\u78ba\u7387\"\nplt.xlabel(\"$k$\")\nplt.title(\"Probability mass function of a Poisson random variable; differing \\\n$\\lambda$ values\") # 
u\"\u3044\u304f\u3064\u304b\u306e$\\lambda$\u306b\u5bfe\u3059\u308b\u30dd\u30ef\u30bd\u30f3\u5206\u5e03\u306e\u78ba\u7387\u8cea\u91cf\u95a2\u6570\"\n```\n\n###\u9023\u7d9a\u306e\u5834\u5408\n\n\n\u9023\u7d9a\u78ba\u7387\u5909\u6570\u306f\uff0c\u78ba\u7387\u8cea\u91cf\u95a2\u6570\u3067\u306f\u306a\u304f\u78ba\u7387\u5bc6\u5ea6\u5206\u5e03\u95a2\u6570\u3067\u8868\u3055\u308c\u308b\uff0e\n\u3053\u308c\u306f\u5358\u306a\u308b\u540d\u524d\u306e\u554f\u984c\u306e\u3088\u3046\u306b\u898b\u3048\u308b\u304b\u3082\u3057\u308c\u306a\u3044\u304c\uff0c\n\u5bc6\u5ea6\u95a2\u6570\u3068\u8cea\u91cf\u95a2\u6570\u306f\u307e\u3063\u305f\u304f\u9055\u3046\u3082\u306e\u306a\u306e\u3067\u3042\u308b\uff0e\n\u9023\u7d9a\u78ba\u7387\u5909\u6570\u306e\u4f8b\u306f\uff0c\u4ee5\u4e0b\u306e\u5f0f\u3067\u8868\u3055\u308c\u308b\u6307\u6570\u5206\u5e03\u3067\u3042\u308b\uff0e\n\n\n$$f_Z(z | \\lambda) = \\lambda e^{-\\lambda z }, \\;\\; z\\ge 0$$\n\nLike a Poisson random variable, an exponential random variable can take on only non-negative values. But unlike a Poisson variable, the exponential can take on *any* non-negative values, including non-integral values such as 4.25 or 5.612401. This property makes it a poor choice for count data, which must be an integer, but a great choice for time data, temperature data (measured in Kelvins, of course), or any other precise *and positive* variable. The graph below shows two probability density functions with different $\\lambda$ values. \n\nWhen a random variable $Z$ has an exponential distribution with parameter $\\lambda$, we say *$Z$ is exponential* and write\n\n$$Z \\sim \\text{Exp}(\\lambda)$$\n\nGiven a specific $\\lambda$, the expected value of an exponential random variable is equal to the inverse of $\\lambda$, that is:\n\n$$E[\\; Z \\;|\\; \\lambda \\;] = \\frac{1}{\\lambda}$$\n\n\n```\na = np.linspace(0, 4, 100)\nexpo = stats.expon\nlambda_ = [0.5, 1]\n\nfor l, c in zip(lambda_, colours):\n plt.plot(a, expo.pdf(a, scale=1. / l), lw=3,\n color=c, label=\"$\\lambda = %.1f$\" % l)\n plt.fill_between(a, expo.pdf(a, scale=1. / l), color=c, alpha=.33)\n\nplt.legend()\nplt.ylabel(\"PDF at $z$\")\nplt.xlabel(\"$z$\")\nplt.ylim(0, 1.2)\nplt.title(\"Probability density function of an Exponential random variable;\\\n differing $\\lambda$\");\n```\n\n\n###But what is $\\lambda \\;$?\n\n\n**This question is what motivates statistics**. In the real world, $\\lambda$ is hidden from us. We see only $Z$, and must go backwards to try and determine $\\lambda$. The problem is difficult because there is no one-to-one mapping from $Z$ to $\\lambda$. Many different methods have been created to solve the problem of estimating $\\lambda$, but since $\\lambda$ is never actually observed, no one can say for certain which method is best! \n\nBayesian inference is concerned with *beliefs* about what $\\lambda$ might be. Rather than try to guess $\\lambda$ exactly, we can only talk about what $\\lambda$ is likely to be by assigning a probability distribution to $\\lambda$.\n\nThis might seem odd at first. After all, $\\lambda$ is fixed; it is not (necessarily) random! How can we assign probabilities to values of a non-random variable? Ah, we have fallen for our old, frequentist way of thinking. Recall that under Bayesian philosophy, we *can* assign probabilities if we interpret them as beliefs. And it is entirely acceptable to have *beliefs* about the parameter $\\lambda$. 
\n\n\n\n##### Example: Inferring behaviour from text-message data\n\nLet's try to model a more interesting example, one that concerns the rate at which a user sends and receives text messages:\n\n> You are given a series of daily text-message counts from a user of your system. The data, plotted over time, appears in the chart below. You are curious to know if the user's text-messaging habits have changed over time, either gradually or suddenly. How can you model this? (This is in fact my own text-message data. Judge my popularity as you wish.)\n\n\n\n```\nfigsize(12.5, 3.5)\ncount_data = np.loadtxt(\"data/txtdata.csv\")\nn_count_data = len(count_data)\nplt.bar(np.arange(n_count_data), count_data, color=\"#348ABD\")\nplt.xlabel(\"Time (days)\")\nplt.ylabel(\"count of text-msgs received\")\nplt.title(\"Did the user's texting habits change over time?\")\nplt.xlim(0, n_count_data);\n```\n\nBefore we start modeling, see what you can figure out just by looking at the chart above. Would you say there was a change in behaviour during this time period? \n\nHow can we start to model this? Well, as we have conveniently already seen, a Poisson random variable is a very appropriate model for this type of *count* data. Denoting day $i$'s text-message count by $C_i$, \n\n$$ C_i \\sim \\text{Poisson}(\\lambda) $$\n\nWe are not sure what the value of the $\\lambda$ parameter really is, however. Looking at the chart above, it appears that the rate might become higher late in the observation period, which is equivalent to saying that $\\lambda$ increases at some point during the observations. (Recall that a higher value of $\\lambda$ assigns more probability to larger outcomes. That is, there is a higher probability of many text messages having been sent on a given day.)\n\nHow can we represent this observation mathematically? Let's assume that on some day during the observation period (call it $\\tau$), the parameter $\\lambda$ suddenly jumps to a higher value. So we really have two $\\lambda$ parameters: one for the period before $\\tau$, and one for the rest of the observation period. In the literature, a sudden transition like this would be called a *switchpoint*:\n\n$$\n\\lambda = \n\\begin{cases}\n\\lambda_1 & \\text{if } t \\lt \\tau \\cr\n\\lambda_2 & \\text{if } t \\ge \\tau\n\\end{cases}\n$$\n\n\nIf, in reality, no sudden change occurred and indeed $\\lambda_1 = \\lambda_2$, then the $\\lambda$s posterior distributions should look about equal.\n\nWe are interested in inferring the unknown $\\lambda$s. To use Bayesian inference, we need to assign prior probabilities to the different possible values of $\\lambda$. What would be good prior probability distributions for $\\lambda_1$ and $\\lambda_2$? Recall that $\\lambda$ can be any positive number. As we saw earlier, the *exponential* distribution provides a continuous density function for positive numbers, so it might be a good choice for modeling $\\lambda_i$. But recall that the exponential distribution takes a parameter of its own, so we'll need to include that parameter in our model. Let's call that parameter $\\alpha$.\n\n\\begin{align}\n&\\lambda_1 \\sim \\text{Exp}( \\alpha ) \\\\\\\n&\\lambda_2 \\sim \\text{Exp}( \\alpha )\n\\end{align}\n\n$\\alpha$ is called a *hyper-parameter* or *parent variable*. In literal terms, it is a parameter that influences other parameters. Our initial guess at $\\alpha$ does not influence the model too strongly, so we have some flexibility in our choice. 
A good rule of thumb is to set the exponential parameter equal to the inverse of the average of the count data. Since we're modeling $\\lambda$ using an exponential distribution, we can use the expected value identity shown earlier to get:\n\n$$\\frac{1}{N}\\sum_{i=0}^N \\;C_i \\approx E[\\; \\lambda \\; |\\; \\alpha ] = \\frac{1}{\\alpha}$$ \n\nAn alternative, and something I encourage the reader to try, would be to have two priors: one for each $\\lambda_i$. Creating two exponential distributions with different $\\alpha$ values reflects our prior belief that the rate changed at some point during the observations.\n\nWhat about $\\tau$? Because of the noisiness of the data, it's difficult to pick out a priori when $\\tau$ might have occurred. Instead, we can assign a *uniform prior belief* to every possible day. This is equivalent to saying\n\n\\begin{align}\n& \\tau \\sim \\text{DiscreteUniform(1,70) }\\\\\\\\\n& \\Rightarrow P( \\tau = k ) = \\frac{1}{70}\n\\end{align}\n\nSo after all this, what does our overall prior distribution for the unknown variables look like? Frankly, *it doesn't matter*. What we should understand is that it's an ugly, complicated mess involving symbols only a mathematician could love. And things will only get uglier the more complicated our models become. Regardless, all we really care about is the posterior distribution.\n\nWe next turn to PyMC, a Python library for performing Bayesian analysis that is undaunted by the mathematical monster we have created. \n\n\nIntroducing our first hammer: PyMC\n-----\n\nPyMC is a Python library for programming Bayesian analysis [3]. It is a fast, well-maintained library. The only unfortunate part is that its documentation is lacking in certain areas, especially those that bridge the gap between beginner and hacker. One of this book's main goals is to solve that problem, and also to demonstrate why PyMC is so cool.\n\nWe will model the problem above using PyMC. This type of programming is called *probabilistic programming*, an unfortunate misnomer that invokes ideas of randomly-generated code and has likely confused and frightened users away from this field. The code is not random; it is probabilistic in the sense that we create probability models using programming variables as the model's components. Model components are first-class primitives within the PyMC framework. \n\nB. Cronin [5] has a very motivating description of probabilistic programming:\n\n> Another way of thinking about this: unlike a traditional program, which only runs in the forward directions, a probabilistic program is run in both the forward and backward direction. It runs forward to compute the consequences of the assumptions it contains about the world (i.e., the model space it represents), but it also runs backward from the data to constrain the possible explanations. In practice, many probabilistic programming systems will cleverly interleave these forward and backward operations to efficiently home in on the best explanations.\n\nBecause of the confusion engendered by the term *probabilistic programming*, I'll refrain from using it. Instead, I'll simply say *programming*, since that's what it really is. \n\nPyMC code is easy to read. The only novel thing should be the syntax, and I will interrupt the code to explain individual sections. 
Simply remember that we are representing the model's components ($\\tau, \\lambda_1, \\lambda_2$ ) as variables:\n\n\n```\nimport pymc as pm\n\nalpha = 1.0 / count_data.mean() # Recall count_data is the\n # variable that holds our txt counts\nwith pm.Model() as model:\n lambda_1 = pm.Exponential(\"lambda_1\", alpha)\n lambda_2 = pm.Exponential(\"lambda_2\", alpha)\n\nwith model:\n tau = pm.DiscreteUniform(\"tau\", lower=0, upper=n_count_data)\n```\n\nIn the code above, we create the PyMC variables corresponding to $\\lambda_1$ and $\\lambda_2$. We assign them to PyMC's *stochastic variables*, so-called because they are treated by the back end as random number generators. We can demonstrate this fact by calling their built-in `random()` methods.\n\n\n```\nprint \"Random output:\", tau.random(), tau.random(), tau.random()\n```\n\n\n```\n@pm.deterministic\ndef lambda_(tau=tau, lambda_1=lambda_1, lambda_2=lambda_2):\n out = np.zeros(n_count_data)\n out[:tau] = lambda_1 # lambda before tau is lambda1\n out[tau:] = lambda_2 # lambda after (and including) tau is lambda2\n return out\n```\n\nThis code creates a new function `lambda_`, but really we can think of it as a random variable: the random variable $\\lambda$ from above. Note that because `lambda_1`, `lambda_2` and `tau` are random, `lambda_` will be random. We are **not** fixing any variables yet.\n\n`@pm.deterministic` is a decorator that tells PyMC this is a deterministic function. That is, if the arguments were deterministic (which they are not), the output would be deterministic as well. \n\n\n```\nobservation = pm.Poisson(\"obs\", lambda_, value=count_data, observed=True)\n\nmodel = pm.Model([observation, lambda_1, lambda_2, tau])\n```\n\nThe variable `observation` combines our data, `count_data`, with our proposed data-generation scheme, given by the variable `lambda_`, through the `value` keyword. We also set `observed = True` to tell PyMC that this should stay fixed in our analysis. Finally, PyMC wants us to collect all the variables of interest and create a `Model` instance out of them. This makes our life easier when we retrieve the results.\n\nThe code below will be explained in Chapter 3, but I show it here so you can see where our results come from. One can think of it as a *learning* step. The machinery being employed is called *Markov Chain Monte Carlo* (MCMC), which I also delay explaining until Chapter 3. This technique returns thousands of random variables from the posterior distributions of $\\lambda_1, \\lambda_2$ and $\\tau$. We can plot a histogram of the random variables to see what the posterior distributions look like. 
Below, we collect the samples (called *traces* in the MCMC literature) into histograms.\n\n\n```\n# Mysterious code to be explained in Chapter 3.\nmcmc = pm.MCMC(model)\nmcmc.sample(40000, 10000, 1)\n```\n\n\n```\nlambda_1_samples = mcmc.trace('lambda_1')[:]\nlambda_2_samples = mcmc.trace('lambda_2')[:]\ntau_samples = mcmc.trace('tau')[:]\n```\n\n\n```\nfigsize(12.5, 10)\n# histogram of the samples:\n\nax = plt.subplot(311)\nax.set_autoscaley_on(False)\n\nplt.hist(lambda_1_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_1$\", color=\"#A60628\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.title(r\"\"\"Posterior distributions of the variables\n $\\lambda_1,\\;\\lambda_2,\\;\\tau$\"\"\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_1$ value\")\n\nax = plt.subplot(312)\nax.set_autoscaley_on(False)\nplt.hist(lambda_2_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $\\lambda_2$\", color=\"#7A68A6\", normed=True)\nplt.legend(loc=\"upper left\")\nplt.xlim([15, 30])\nplt.xlabel(\"$\\lambda_2$ value\")\n\nplt.subplot(313)\nw = 1.0 / tau_samples.shape[0] * np.ones_like(tau_samples)\nplt.hist(tau_samples, bins=n_count_data, alpha=1,\n label=r\"posterior of $\\tau$\",\n color=\"#467821\", weights=w, rwidth=2.)\nplt.xticks(np.arange(n_count_data))\n\nplt.legend(loc=\"upper left\")\nplt.ylim([0, .75])\nplt.xlim([35, len(count_data) - 20])\nplt.xlabel(r\"$\\tau$ (in days)\")\nplt.ylabel(\"probability\");\n```\n\n### Interpretation\n\nRecall that Bayesian methodology returns a *distribution*. Hence we now have distributions to describe the unknown $\\lambda$s and $\\tau$. What have we gained? Immediately, we can see the uncertainty in our estimates: the wider the distribution, the less certain our posterior belief should be. We can also see what the plausible values for the parameters are: $\\lambda_1$ is around 18 and $\\lambda_2$ is around 23. The posterior distributions of the two $\\lambda$s are clearly distinct, indicating that it is indeed likely that there was a change in the user's text-message behaviour.\n\nWhat other observations can you make? If you look at the original data again, do these results seem reasonable? \n\nNotice also that the posterior distributions for the $\\lambda$s do not look like exponential distributions, even though our priors for these variables were exponential. In fact, the posterior distributions are not really of any form that we recognize from the original model. But that's OK! This is one of the benefits of taking a computational point of view. If we had instead done this analysis using mathematical approaches, we would have been stuck with an analytically intractable (and messy) distribution. Our use of a computational approach makes us indifferent to mathematical tractability.\n\nOur analysis also returned a distribution for $\\tau$. Its posterior distribution looks a little different from the other two because it is a discrete random variable, so it doesn't assign probabilities to intervals. We can see that near day 45, there was a 50% chance that the user's behaviour changed. Had no change occurred, or had the change been gradual over time, the posterior distribution of $\\tau$ would have been more spread out, reflecting that many days were plausible candidates for $\\tau$. By contrast, in the actual results we see that only three or four days make any sense as potential transition points. 
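As a quick way to read numbers off these posteriors (assuming the `lambda_1_samples`, `lambda_2_samples` and `tau_samples` arrays produced by the MCMC cell above), we can summarize the samples directly; this is only a sketch of the kind of query the next section expands on.

```
# Sketch: simple numerical summaries of the posterior samples obtained above.
import numpy as np

print("posterior mean of lambda_1: %.2f" % lambda_1_samples.mean())
print("posterior mean of lambda_2: %.2f" % lambda_2_samples.mean())

# Fraction of posterior samples in which the rate increased at the switchpoint;
# a value near 1 supports a real change in texting behaviour.
print("P(lambda_2 > lambda_1 | data) = %.3f" %
      (lambda_2_samples > lambda_1_samples).mean())

# The most frequently sampled switchpoint day (approximate posterior mode).
print("most frequent tau sample: day %d" %
      np.bincount(tau_samples.astype(int)).argmax())
```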
\n\n###Why would I want samples from the posterior, anyways?\n\n\nWe will deal with this question for the remainder of the book, and it is an understatement to say that it will lead us to some amazing results. For now, let's end this chapter with one more example.\n\nWe'll use the posterior samples to answer the following question: what is the expected number of texts at day $t, \\; 0 \\le t \\le 70$ ? Recall that the expected value of a Poisson variable is equal to its parameter $\\lambda$. Therefore, the question is equivalent to *what is the expected value of $\\lambda$ at time $t$*?\n\nIn the code below, let $i$ index samples from the posterior distributions. Given a day $t$, we average over all possible $\\lambda_i$ for that day $t$, using $\\lambda_i = \\lambda_{1,i}$ if $t \\lt \\tau_i$ (that is, if the behaviour change has not yet occurred), else we use $\\lambda_i = \\lambda_{2,i}$. \n\n\n```\nfigsize(12.5, 5)\n# tau_samples, lambda_1_samples, lambda_2_samples contain\n# N samples from the corresponding posterior distribution\nN = tau_samples.shape[0]\nexpected_texts_per_day = np.zeros(n_count_data)\nfor day in range(0, n_count_data):\n # ix is a bool index of all tau samples corresponding to\n # the switchpoint occurring prior to value of 'day'\n ix = day < tau_samples\n # Each posterior sample corresponds to a value for tau.\n # for each day, that value of tau indicates whether we're \"before\"\n # (in the lambda1 \"regime\") or\n # \"after\" (in the lambda2 \"regime\") the switchpoint.\n # by taking the posterior sample of lambda1/2 accordingly, we can average\n # over all samples to get an expected value for lambda on that day.\n # As explained, the \"message count\" random variable is Poisson distributed,\n # and therefore lambda (the poisson parameter) is the expected value of\n # \"message count\".\n expected_texts_per_day[day] = (lambda_1_samples[ix].sum()\n + lambda_2_samples[~ix].sum()) / N\n\n\nplt.plot(range(n_count_data), expected_texts_per_day, lw=4, color=\"#E24A33\",\n label=\"expected number of text-messages received\")\nplt.xlim(0, n_count_data)\nplt.xlabel(\"Day\")\nplt.ylabel(\"Expected # text-messages\")\nplt.title(\"Expected number of text-messages received\")\nplt.ylim(0, 60)\nplt.bar(np.arange(len(count_data)), count_data, color=\"#348ABD\", alpha=0.65,\n label=\"observed texts per day\")\n\nplt.legend(loc=\"upper left\");\n```\n\nOur analysis shows strong support for believing the user's behavior did change ($\\lambda_1$ would have been close in value to $\\lambda_2$ had this not been true), and that the change was sudden rather than gradual (as demonstrated by $\\tau$'s strongly peaked posterior distribution). We can speculate what might have caused this: a cheaper text-message rate, a recent weather-to-text subscription, or perhaps a new relationship. (In fact, the 45th day corresponds to Christmas, and I moved away to Toronto the next month, leaving a girlfriend behind.)\n\n\n##### Exercises\n\n1\\. Using `lambda_1_samples` and `lambda_2_samples`, what is the mean of the posterior distributions of $\\lambda_1$ and $\\lambda_2$?\n\n\n```\n# type your code here.\n```\n\n2\\. What is the expected percentage increase in text-message rates? `hint:` compute the mean of `lambda_1_samples/lambda_2_samples`. Note that this quantity is very different from `lambda_1_samples.mean()/lambda_2_samples.mean()`.\n\n\n```\n# type your code here.\n```\n\n3\\. What is the mean of $\\lambda_1$ **given** that we know $\\tau$ is less than 45. 
That is, suppose we have been given new information that the change in behaviour occurred prior to day 45. What is the expected value of $\\lambda_1$ now? (You do not need to redo the PyMC part. Just consider all instances where `tau_samples < 45`.)\n\n\n```\n# type your code here.\n```\n\n### References\n\n\n- [1] Gelman, Andrew. N.p.. Web. 22 Jan 2013. [N is never large enough](http://andrewgelman.com/2005/07/31/n_is_never_larg/).\n- [2] Norvig, Peter. 2009. [The Unreasonable Effectiveness of Data](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf).\n- [3] Patil, A., D. Huard and C.J. Fonnesbeck. 2010. \nPyMC: Bayesian Stochastic Modelling in Python. Journal of Statistical \nSoftware, 35(4), pp. 1-81. \n- [4] Jimmy Lin and Alek Kolcz. Large-Scale Machine Learning at Twitter. Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data (SIGMOD 2012), pages 793-804, May 2012, Scottsdale, Arizona.\n- [5] Cronin, Beau. \"Why Probabilistic Programming Matters.\" 24 Mar 2013. Google, Online Posting to Google . Web. 24 Mar. 2013. .\n\n\n```\nfrom IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n", "meta": {"hexsha": "a09cdf407514ceb34a5d8e8c945596cf14d66816", "size": 341367, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_stars_repo_name": "tttamaki/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_stars_repo_head_hexsha": "8a9fa5070c84021a29e3cad9fd5769f84cce0542", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2015-05-24T17:01:37.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-09T11:33:11.000Z", "max_issues_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_issues_repo_name": "tttamaki/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_issues_repo_head_hexsha": "8a9fa5070c84021a29e3cad9fd5769f84cce0542", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Chapter1_Introduction/Chapter1_Introduction.ipynb", "max_forks_repo_name": "tttamaki/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers", "max_forks_repo_head_hexsha": "8a9fa5070c84021a29e3cad9fd5769f84cce0542", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 238.8852344297, "max_line_length": 95083, "alphanum_fraction": 0.8602618296, "converted": true, "num_tokens": 21357, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.31405054499180746, "lm_q2_score": 0.2909808662149068, "lm_q1q2_score": 0.0913826996169797}} {"text": "```python\nfrom IPython.display import Image \nImage('../../../python_for_probability_statistics_and_machine_learning.jpg')\n```\n\n\n\n\n \n\n \n\n\n\n[Python for Probability, Statistics, and Machine Learning](https://www.springer.com/fr/book/9783319307152)\n\n\n```python\nfrom __future__ import division\n%pylab inline\n```\n\n Populating the interactive namespace from numpy and matplotlib\n\n\nWe considered Maximum Likelihood Estimation (MLE) and Maximum A-Posteriori\n(MAP) estimation and in each case we started out with a probability density\nfunction of some kind and we further assumed that the samples were identically\ndistributed and independent (iid). The idea behind robust statistics\n[[maronna2006robust]](#maronna2006robust) is to construct estimators that can survive the\nweakening of either or both of these assumptions. More concretely, suppose you\nhave a model that works great except for a few outliers. The temptation is to\njust ignore the outliers and proceed. Robust estimation methods provide a\ndisciplined way to handle outliers without cherry-picking data that works for\nyour favored model.\n\n### The Notion of Location\n\nThe first notion we need is *location*, which is a generalization of the idea\nof *central value*. Typically, we just use an estimate of the mean for this,\nbut we will see later why this could be a bad idea. The general idea of\nlocation satisfies the following requirements Let $X$ be a random variable with\ndistribution $F$, and let $\\theta(X)$ be some descriptive measure of $F$. Then\n$\\theta(X)$ is said to be a measure of *location* if for any constants *a* and\n*b*, we have the following:\n\n\n
\n\n$$\n\\begin{equation}\n\\theta(X+b) = \\theta(X) +b \n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation}\n\\theta(-X) = -\\theta(X) \n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation}\nX \\ge 0 \\Rightarrow \\theta(X) \\ge 0 \n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \\\n\\theta(a X) = a\\theta(X)\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\n The first condition is called *location equivariance* (or *shift-invariance* in\nsignal processing lingo). The fourth condition is called *scale equivariance*,\nwhich means that the units that $X$ is measured in should not effect the value\nof the location estimator. These requirements capture the intuition of\n*centrality* of a distribution, or where most of the\nprobability mass is located.\n\nFor example, the sample mean estimator is $ \\hat{\\mu}=\\frac{1}{n}\\sum X_i $. The first\nrequirement is obviously satisfied as $ \\hat{\\mu}=\\frac{1}{n}\\sum (X_i+b) = b +\n\\frac{1}{n}\\sum X_i =b+\\hat{\\mu}$. Let us consider the second requirement:$\n\\hat{\\mu}=\\frac{1}{n}\\sum -X_i = -\\hat{\\mu}$. Finally, the last requirement is\nsatisfied with $ \\hat{\\mu}=\\frac{1}{n}\\sum a X_i =a \\hat{\\mu}$.\n\n### Robust Estimation and Contamination\n\nNow that we have the generalized location of centrality embodied in the\n*location* parameter, what can we do with it? Previously, we assumed that our samples\nwere all identically distributed. The key idea is that the samples might be\nactually coming from a *single* distribution that is contaminated by another nearby\ndistribution, as in the following:\n\n$$\nF(X) = \\epsilon G(X) + (1-\\epsilon)H(X)\n$$\n\n where $ \\epsilon $ randomly toggles between zero and one. This means\nthat our data samples $\\lbrace X_i \\rbrace$ actually derived from two separate\ndistributions, $ G(X) $ and $ H(X) $. We just don't know how they are mixed\ntogether. What we really want is an estimator that captures the location of $\nG(X) $ in the face of random intermittent contamination by $ H(X)$. For\nexample, it may be that this contamination is responsible for the outliers in a\nmodel that otherwise works well with the dominant $F$ distribution. It can get\neven worse than that because we don't know that there is only one contaminating\n$H(X)$ distribution out there. There may be a whole family of distributions\nthat are contaminating $G(X)$. This means that whatever estimators we construct\nhave to be derived from a more generalized family of distributions instead of\nfrom a single distribution, as the maximum-likelihood method assumes. This is\nwhat makes robust estimation so difficult --- it has to deal with *spaces* of\nfunction distributions instead of parameters from a particular probability\ndistribution.\n\n### Generalized Maximum Likelihood Estimators\n\nM-estimators are generalized maximum likelihood estimators. Recall that for\nmaximum likelihood, we want to maximize the likelihood function as in the\nfollowing:\n\n$$\nL_{\\mu}(x_i) = \\prod f_0(x_i-\\mu)\n$$\n\n and then to find the estimator $\\hat{\\mu}$ so that\n\n$$\n\\hat{\\mu} = \\arg \\max_{\\mu} L_{\\mu}(x_i)\n$$\n\n So far, everything is the same as our usual maximum-likelihood\nderivation except for the fact that we don't assume a specific $f_0$ as the\ndistribution of the $\\lbrace X_i\\rbrace$. Making the definition of\n\n$$\n\\rho = -\\log f_0\n$$\n\n we obtain the more convenient form of the likelihood product and the\noptimal $\\hat{\\mu}$ as\n\n$$\n\\hat{\\mu} = \\arg \\min_{\\mu} \\sum \\rho(x_i-\\mu)\n$$\n\n If $\\rho$ is differentiable, then differentiating this with respect\nto $\\mu$ gives\n\n\n
\n\n$$\n\\begin{equation}\n\\sum \\psi(x_i-\\hat{\\mu}) = 0 \n\\label{eq:muhat} \\tag{5}\n\\end{equation}\n$$\n\n with $\\psi = \\rho^\\prime$, the first derivative of $\\rho$ , and for technical reasons we will assume that\n$\\psi$ is increasing. So far, it looks like we just pushed some definitions\naround, but the key idea is we want to consider general $\\rho$ functions that\nmay not be maximum likelihood estimators for *any* distribution. Thus, our\nfocus is now on uncovering the nature of $\\hat{\\mu}$.\n\n### Distribution of M-estimates\n\nFor a given distribution $F$, we define $\\mu_0=\\mu(F)$ as the solution to the\nfollowing\n\n$$\n\\mathbb{E}_F(\\psi(x-\\mu_0))= 0\n$$\n\n It is technical to show, but it turns out that $\\hat{\\mu} \\sim\n\\mathcal{N}(\\mu_0,\\frac{v}{n})$ with\n\n$$\nv = \\frac{\\mathbb{E}_F(\\psi(x-\\mu_0)^2)}{(\\mathbb{E}_F(\\psi^\\prime(x-\\mu_0)))^2}\n$$\n\n Thus, we can say that $\\hat{\\mu}$ is asymptotically normal with asymptotic\nvalue $\\mu_0$ and asymptotic variance $v$. This leads to the efficiency ratio\nwhich is defined as the following:\n\n$$\n\\texttt{Eff}(\\hat{\\mu})= \\frac{v_0}{v}\n$$\n\n where $v_0$ is the asymptotic variance of the MLE and measures how\nnear $\\hat{\\mu}$ is to the optimum. In other words, this provides a sense of\nhow much outlier contamination costs in terms of samples. For example, if for\ntwo estimates with asymptotic variances $v_1$ and $v_2$, we have $v_1=3v_2$,\nthen first estimate requires three times as many observations to obtain the\nsame variance as the second. Furthermore, for the sample mean (i.e.,\n$\\hat{\\mu}=\\frac{1}{n} \\sum X_i$) with $F=\\mathcal{N}$, we have $\\rho=x^2/2$\nand $\\psi=x$ and also $\\psi'=1$. Thus, we have $v=\\mathbb{V}(x)$.\nAlternatively, using the sample median as the estimator for the location, we\nhave $v=1/(4 f(\\mu_0)^2)$. Thus, if we have $F=\\mathcal{N}(0,1)$, for the\nsample median, we obtain $v={2\\pi}/{4} \\approx 1.571$. This means that the\nsample median takes approximately 1.6 times as many samples to obtain the same\nvariance for the location as the sample mean. The sample median is \nfar more immune to the effects of outliers than the sample mean, so this \ngives a sense of how much this robustness costs in samples.\n\n** M-Estimates as Weighted Means.** One way to think about M-estimates is a\nweighted means. Operationally, this\nmeans that we want weight functions that can circumscribe the\ninfluence of the individual data points, but, when taken as a whole,\nstill provide good estimated parameters. Most of the time, we have $\\psi(0)=0$ and $\\psi'(0)$ exists so\nthat $\\psi$ is approximately linear at the origin. Using the following\ndefinition:\n\n$$\nW(x) = \\begin{cases}\n \\psi(x)/x & \\text{if} \\: x \\neq 0 \\\\\\\n \\psi'(x) & \\text{if} \\: x =0 \n \\end{cases}\n$$\n\n We can write our Equation ref{eq:muhat} as follows:\n\n\n
\n\n$$\n\\begin{equation}\n\\sum W(x_i-\\hat{\\mu})(x_i-\\hat{\\mu}) = 0 \n\\label{eq:Wmuhat} \\tag{6}\n\\end{equation}\n$$\n\n Solving this for $\\hat{\\mu} $ yields the following,\n\n$$\n\\hat{\\mu} = \\frac{\\sum w_{i} x_i}{\\sum w_{i}}\n$$\n\n where $w_{i}=W(x_i-\\hat{\\mu})$. This is not practically useful\nbecause the $w_i$ contains $\\hat{\\mu}$, which is what we are trying to solve\nfor. The question that remains is how to pick the $\\psi$ functions. This is\nstill an open question, but the Huber functions are a well-studied choice.\n\n### Huber Functions\n\nThe family of Huber functions is defined by the following:\n\n$$\n\\rho_k(x ) = \\begin{cases}\n x^2 & \\mbox{if } |x|\\leq k \\\\\\\n 2 k |x|-k^2 & \\mbox{if } |x| > k\n \\end{cases}\n$$\n\n with corresponding derivatives $2\\psi_k(x)$ with\n\n$$\n\\psi_k(x ) = \\begin{cases}\n x & \\mbox{if } \\: |x| \\leq k \\\\\\\n \\text{sgn}(x)k & \\mbox{if } \\: |x| > k\n \\end{cases}\n$$\n\n where the limiting cases $k \\rightarrow \\infty$ and $k \\rightarrow 0$\ncorrespond to the mean and median, respectively. To see this, take\n$\\psi_{\\infty} = x$ and therefore $W(x) = 1$ and thus the defining Equation\nref{eq:Wmuhat} results in\n\n$$\n\\sum_{i=1}^{n} (x_i-\\hat{\\mu}) = 0\n$$\n\n and then solving this leads to $\\hat{\\mu} = \\frac{1}{n}\\sum x_i$.\nNote that choosing $k=0$ leads to the sample median, but that is not so\nstraightforward to solve for. Nonetheless, Huber functions provide a way\nto move between two extremes of estimators for location (namely, \nthe mean vs. the median) with a tunable parameter $k$. \nThe $W$ function corresponding to Huber's $\\psi$ is the following:\n\n$$\nW_k(x) = \\min\\Big{\\lbrace} 1, \\frac{k}{|x|} \\Big{\\rbrace}\n$$\n\n [Figure](#fig:Robust_Statistics_0001) shows the Huber weight\nfunction for $k=2$ with some sample points. The idea is that the computed\nlocation, $\\hat{\\mu}$ is computed from Equation ref{eq:Wmuhat} to lie somewhere\nin the middle of the weight function so that those terms (i.e., *insiders*)\nhave their values fully reflected in the location estimate. The black circles\nare the *outliers* that have their values attenuated by the weight function so\nthat only a fraction of their presence is represented in the location estimate.\n\n\n\n
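As a minimal sketch (assuming `numpy` and `matplotlib.pyplot` are available, as in the other code cells of this chapter), the following simply evaluates the Huber weight $W_k(x) = \min \lbrace 1, k/|x| \rbrace$ for $k=2$, the curve described in the figure caption below.

```python
# Sketch: evaluate and plot the Huber weight function W_k(x) for k=2.
import numpy as np
import matplotlib.pyplot as plt

k = 2.0
x = np.linspace(-8, 8, 401)
# Guard against dividing by zero at x=0, where W_k(0) = psi_k'(0) = 1.
W = np.minimum(1.0, k / np.maximum(np.abs(x), 1e-12))

plt.plot(x, W, lw=3)
plt.xlabel("$x$")
plt.ylabel("$W_2(x)$")
plt.title("Huber weight function with $k=2$")
```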
**Figure (fig:Robust_Statistics_0001).** This shows the Huber weight function, $W_2(x)$ and some cartoon data points that are insiders or outsiders as far as the robust location estimate is concerned.
\n\n\n\n\n\n### Breakdown Point\n\nSo far, our discussion of robustness has been very abstract. A more concrete\nconcept of robustness comes from the breakdown point. In the simplest terms,\nthe breakdown point describes what happens when a single data point in an\nestimator is changed in the most damaging way possible. For example, suppose we\nhave the sample mean, $\\hat{\\mu}=\\sum x_i/n$, and we take one of the $x_i$\npoints to be infinite. What happens to this estimator? It also goes infinite.\nThis means that the breakdown point of the estimator is 0%. On the other hand,\nthe median has a breakdown point of 50%, meaning that half of the data for\ncomputing the median could go infinite without affecting the median value. The median\nis a *rank* statistic that cares more about the relative ranking of the data\nthan the values of the data, which explains its robustness.\n\nThe simpliest but still formal way to express the breakdown point is to\ntake $n$ data points, $\\mathcal{D} = \\lbrace (x_i,y_i) \\rbrace$. Suppose $T$\nis a regression estimator that yields a vector of regression coefficients,\n$\\boldsymbol{\\theta}$,\n\n$$\nT(\\mathcal{D}) = \\boldsymbol{\\theta}\n$$\n\n Likewise, consider all possible corrupted samples of the data\n$\\mathcal{D}^\\prime$. The maximum *bias* caused by this contamination is\nthe following:\n\n$$\n\\texttt{bias}_{m} = \\sup_{\\mathcal{D}^\\prime} \\Vert T(\\mathcal{D^\\prime})-T(\\mathcal{D}) \\Vert\n$$\n\n where the $\\sup$ sweeps over all possible sets of $m$ contaminated samples.\nUsing this, the breakdown point is defined as the following:\n\n$$\n\\epsilon_m = \\min \\Big\\lbrace \\frac{m}{n} \\colon \\texttt{bias}_{m} \\rightarrow \\infty \\Big\\rbrace\n$$\n\n For example, in our least-squares regression, even one point at\ninfinity causes an infinite $T$. Thus, for least-squares regression,\n$\\epsilon_m=1/n$. In the limit $n \\rightarrow \\infty$, we have $\\epsilon_m\n\\rightarrow 0$.\n\n### Estimating Scale\n\nIn robust statistics, the concept of *scale* refers to a measure of the\ndispersion of the data. Usually, we use the\nestimated standard deviation for this, but this has a terrible breakdown point.\nEven more troubling, in order to get a good estimate of location, we have to\neither somehow know the scale ahead of time, or jointly estimate it. None of\nthese methods have easy-to-compute closed form solutions and must be computed\nnumerically.\n\nThe most popular method for estimating scale is the *median absolute deviation*\n\n$$\n\\texttt{MAD} = \\texttt{Med} (\\vert \\mathbf{x} - \\texttt{Med}(\\mathbf{x})\\vert)\n$$\n\n In words, take the median of the data $\\mathbf{x}$ and\nthen subtract that median from the data itself, and then take the median of the\nabsolute value of the result. Another good dispersion estimate is the *interquartile range*,\n\n$$\n\\texttt{IQR} = x_{(n-m+1)} - x_{(n)}\n$$\n\n where $m= [n/4]$. The $x_{(n)}$ notation means the $n^{th}$ data\nelement after the data have been sorted. Thus, in this notation,\n$\\texttt{max}(\\mathbf{x})=x_{(n)}$. In the case where $x \\sim\n\\mathcal{N}(\\mu,\\sigma^2)$, then $\\texttt{MAD}$ and $\\texttt{IQR}$ are constant\nmultiples of $\\sigma$ such that the normalized $\\texttt{MAD}$ is the following,\n\n$$\n\\texttt{MADN}(x) = \\frac{\\texttt{MAD} }{0.675}\n$$\n\n The number comes from the inverse CDF of the normal distribution\ncorresponding to the $0.75$ level. 
Given the complexity of the\ncalculations, *jointly* estimating both location and scale is a purely\nnumerical matter. Fortunately, the Statsmodels module has many of these\nready to use. Let's create some contaminated data in the following code,\n\n\n```python\nimport statsmodels.api as sm\nfrom scipy import stats\ndata=np.hstack([stats.norm(10,1).rvs(10),stats.norm(0,1).rvs(100)])\n```\n\n These data correspond to our model of contamination that we started\nthis section with. As shown in the histogram in [Figure](#fig:Robust_Statistics_0002), there are two normal distributions, one\ncentered neatly at zero, representing the majority of the samples, and another\ncoming less regularly from the normal distribution on the right. Notice that\nthe group of infrequent samples on the right separates the mean and median\nestimates (vertical dotted and dashed lines). In the absence of the\ncontaminating distribution on the right, the standard deviation for this data\nshould be close to one. However, the usual non-robust estimate for standard\ndeviation (`np.std`) comes out to approximately three. Using the\n$\\texttt{MADN}$ estimator (`sm.robust.scale.mad(data)`) we obtain approximately\n1.25. Thus, the robust estimate of dispersion is less moved by the presence of\nthe contaminating distribution.\n\n\n\n
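As a minimal check of the numbers quoted above (reusing the `data` array from the previous cell together with the `numpy` and `statsmodels` imports already in scope), the two dispersion estimates can be printed side by side.

```python
# Sketch: compare the non-robust and robust scale estimates for the
# contaminated sample created above.
print("sample standard deviation (np.std):   %.2f" % np.std(data))
print("normalized MAD (sm.robust.scale.mad): %.2f" % sm.robust.scale.mad(data))
```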
**Figure (fig:Robust_Statistics_0002).** Histogram of sample data. Notice that the group of infrequent samples on the right separates the mean and median estimates indicated by the vertical lines.
\n\n\n\n\n\nThe generalized maximum likelihood M-estimation extends to joint\nscale and location estimation using Huber functions. For example,\n\n\n```python\nhuber = sm.robust.scale.Huber()\nloc,scl=huber(data)\n```\n\n which implements Huber's *proposal two* method of joint estimation of\nlocation and scale. This kind of estimation is the key ingredient to robust\nregression methods, many of which are implemented in Statsmodels in\n`statsmodels.formula.api.rlm`. The corresponding documentation has more\ninformation.\n", "meta": {"hexsha": "b452d76173f6c4a864c7634fd4b8f88d787c14df", "size": 140181, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "chapters/statistics/notebooks/Robust_Statistics.ipynb", "max_stars_repo_name": "nsydn/Python-for-Probability-Statistics-and-Machine-Learning", "max_stars_repo_head_hexsha": "d3e0f8ea475525a694a975dbfd2bf80bc2967cc6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 570, "max_stars_repo_stars_event_min_datetime": "2016-05-05T19:08:27.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T05:09:19.000Z", "max_issues_repo_path": "chapters/statistics/notebooks/Robust_Statistics.ipynb", "max_issues_repo_name": "crlsmcl/https-github.com-unpingco-Python-for-Probability-Statistics-and-Machine-Learning", "max_issues_repo_head_hexsha": "6fd69459a28c0b76b37fad79b7e8e430d09a86a5", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2016-05-12T22:18:58.000Z", "max_issues_repo_issues_event_max_datetime": "2019-11-06T14:37:06.000Z", "max_forks_repo_path": "chapters/statistics/notebooks/Robust_Statistics.ipynb", "max_forks_repo_name": "crlsmcl/https-github.com-unpingco-Python-for-Probability-Statistics-and-Machine-Learning", "max_forks_repo_head_hexsha": "6fd69459a28c0b76b37fad79b7e8e430d09a86a5", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 276, "max_forks_repo_forks_event_min_datetime": "2016-05-27T01:42:05.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-27T11:20:27.000Z", "avg_line_length": 184.4486842105, "max_line_length": 114721, "alphanum_fraction": 0.8922179183, "converted": true, "num_tokens": 4580, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3557748935136303, "lm_q2_score": 0.2568319856991699, "lm_q1q2_score": 0.09137437236301638}} {"text": "```python\nfrom IPython.display import HTML\n\nHTML('''\n
''')\n```\n\n\n\n\n\n
\n\n\n\n\n```javascript\n%%javascript\n MathJax.Hub.Config({\n TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n });\n```\n\n\n \n\n\n\n```python\nfrom IPython.display import HTML\n\nHTML('''\n\n\n''')\n```\n\n\n\n\n\n\n\n\n\n\n\n# Benchmark Problem 5: Stokes Flow\n\n\n```python\nfrom IPython.display import HTML\n\nHTML('''{% include jupyter_benchmark_table.html num=\"[5]\" revision=0 %}''')\n```\n\n\n\n\n{% include jupyter_benchmark_table.html num=\"[5]\" revision=0 %}\n\n\n\nSee the Overleaf document entitled [\"Phase Field Benchmark Problems for Dendritic Growth and Linear Elasticity\"][overleaf] for more details about the benchmark problems. Furthermore, read [the extended essay][benchmarks] for a discussion about the need for benchmark problems.\n\n[benchmarks]: ../ \n[overleaf]: https://www.overleaf.com/read/nqjkdwyybvdz\n\n# Overview\n\nFlow of a liquid can be incorporated into phase field models, so we present this benchmark problem for incompressible fluid flow through a channel (the flow of many liquids can be modeled as incompressible). The flow of a fluid can generally be modeled via the Navier-Stokes equations. When length scales are small, fluid velocities are low, and/or viscosity is large, such that the Reynolds number $Re<<1$, inertial forces are small compared with viscous forces, resulting in a simplification of Navier-Stokes flow to Stokes flow. \n\n# Model Formulation\n\n## Governing Equations\n\nIn this problem, two variables are used: the fluid velocity, $\\textbf{u}$, which is a vector field, and the fluid pressure, $p$, which is a scalar field. The Stokes momentum equation is given as\n\n\\begin{equation}\n-\\mu \\nabla^{2} \\textbf{u} + \\nabla p - \\rho \\textbf{g} = 0,\n\\end{equation}\n\nwhere $\\rho$ is the density, assumed constant in this problem, $\\mu$ is the dynamic viscosity, and $\\textbf{g}$ is the acceleration due to gravity. To fully describe fluid flow, the momentum balance equation is supplemented with the continuity equation for mass flow,\n\n\\begin{equation}\n\\frac{d\\rho}{dt}+\\nabla\\cdot\\left(\\rho{\\mathbf u}\\right)=0;\n\\end{equation}\n\nthis simplifies to \n\\begin{equation}\n\\nabla \\cdot \\textbf{u} = 0\n\\end{equation}\n\nfor incompressible flow. Use $\\rho=100$, $\\mu=1$, and $\\textbf{g}=(0,-0.001)$.\n\n## Domain\n\nIn this problem, we consider flow in a 2D channel (a) without and (b) with an obstruction. The computational domain for case (b) is shown below with inlet boundary condition indicated by arrows for the Stokes flow benchmark problem with an obstruction (case (b)). The domain and boundary conditions, etc., for case (a) are the same as that in case (b), but without the obstruction. \n\n### Figure 1: Domain for variation (b)\n\n\n\n\n## Boundary Conditions\n\nAll solid surfaces, including the boundary for the obstruction, have no-slip boundary conditions, that is, $u_x=u_y=0$. The inlet velocity, on the left boundary, follows a parabolic profile described by \n\n\\begin{equation}\nu_x(0,y) = -0.001(y-3)^2+0.009.\n\\end{equation}\n\nThe outlet velocity (on the right boundary) is left to the solver to determine, as is the pressure over the entire domain. However, we specify that the pressure at point (30, 6) is zero. Finally, the obstruction is described by an ellipse centered at (7, 2.5). The semi-major axis (in the *y*-direction) is $a=1.5$, and the semi-minor axis (in the *x*-direction) is $b=1$. 
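As a quick, non-authoritative illustration of the boundary data above (this is not a solver and not part of the benchmark submission), the following NumPy sketch evaluates the prescribed inlet profile and marks grid points covered by the elliptical obstruction of case (b). It assumes the channel occupies $0 \le x \le 30$ and $0 \le y \le 6$, which is consistent with the inlet profile vanishing at $y=0$ and $y=6$ and with the reference pressure point at (30, 6); the grid resolution is an arbitrary choice for illustration.

```python
import numpy as np

def inlet_velocity(y):
    # Parabolic inlet profile u_x(0, y) from the problem statement
    return -0.001*(y - 3.0)**2 + 0.009

y = np.linspace(0.0, 6.0, 7)
print(inlet_velocity(y))  # zero at the walls y=0 and y=6, maximum 0.009 at y=3

def inside_obstruction(x, y, xc=7.0, yc=2.5, a=1.5, b=1.0):
    # Ellipse centred at (7, 2.5) with semi-major axis a=1.5 in y and semi-minor axis b=1 in x
    return ((x - xc)/b)**2 + ((y - yc)/a)**2 <= 1.0

X, Y = np.meshgrid(np.linspace(0.0, 30.0, 301), np.linspace(0.0, 6.0, 61))
mask = inside_obstruction(X, Y)
print(mask.sum(), 'of', mask.size, 'grid points fall inside the obstruction')
```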
\n\n## Submission Guidelines\n\nBoth variation (a) and (b) should be run to steady state.\nPlease submit the steady state pressure and the steady state velocity fields for both variation (a) and (b) along the $x=7$ and $y=5$ cut planes.\n\nThis will require two CSV or JSON files for each variation. Please,\n\n - link to your first CSV or JSON file, labeled x_cut_plane in the upload form; the columns or keys should be named y, velocity_x, velocity_y and pressure\n \n - link to your second CSV or JSON file, labeled y_cut_plane in the upload form; the columns or keys should be named x, velocity_x, velocity_y and pressure\n\nFurther data to upload can include images of the pressure and velocity fields at steady state. These are not required, but will help others view your work.\n\n\n", "meta": {"hexsha": "7992965f251b6ddb036a6400a31d2208088badc1", "size": 8721, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "benchmarks/benchmark5-hackathon.ipynb", "max_stars_repo_name": "wd15/chimad-phase-field", "max_stars_repo_head_hexsha": "b8ead2ef666201b500033052d0a4efb55796c2da", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "benchmarks/benchmark5-hackathon.ipynb", "max_issues_repo_name": "wd15/chimad-phase-field", "max_issues_repo_head_hexsha": "b8ead2ef666201b500033052d0a4efb55796c2da", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 6, "max_issues_repo_issues_event_min_datetime": "2015-02-06T16:45:52.000Z", "max_issues_repo_issues_event_max_datetime": "2017-12-12T17:39:56.000Z", "max_forks_repo_path": "benchmarks/benchmark5-hackathon.ipynb", "max_forks_repo_name": "wd15/chimad-phase-field", "max_forks_repo_head_hexsha": "b8ead2ef666201b500033052d0a4efb55796c2da", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 33.286259542, "max_line_length": 539, "alphanum_fraction": 0.5744754042, "converted": true, "num_tokens": 1214, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.41489884579676883, "lm_q2_score": 0.22000710486009023, "lm_q1q2_score": 0.09128069387354013}} {"text": "```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('classic')\n%matplotlib inline\n```\n\n# Class 5: Managing Data with Pandas \n\nPandas is a Python library for managing datasets. Documentation and examples are available on the website for Pandas: http://pandas.pydata.org/. \n\nIn this Notebook, we'll make use of a dataset containing long-run averages of inflation, money growth, and real GDP. The dataset is available here: https://raw.githubusercontent.com/letsgoexploring/economic-data/master/quantity-theory/csv/quantity_theory_data.csv (Python code to generate the dataset: https://github.com/letsgoexploring/economic-data). 
Recall that the quantity theory of money implies the following linear relationship between the long-run rate of money growth, the long-run rate of inflation, and the long-run rate of real GDP growth in a country:\n\n\\begin{align}\n\\text{inflation} & = \\text{money growth} - \\text{real GDP growth},\n\\end{align}\n\nGenerally, we treat real GDP growth and money supply growth as exogenous so this is a theory about the determination of inflation.\n\n### Import Pandas\n\n\n```python\n# Import the Pandas module as pd\n\n```\n\n### Import data from a csv file\n\nPandas has a function called `read_csv()` for reading data from a csv file into a Pandas `DataFrame` object.\n\n\n```python\n# Import quantity theory data into a Pandas DataFrame called 'df' with country names as the index.\n\n# Directly from internet\n\n\n# From current working directory\n# df = pd.read_csv('qtyTheoryData.csv')\n```\n\n\n```python\n# Print the first 5 rows\n\n```\n\n\n```python\n# Print the last 10 rows\n\n```\n\n\n```python\n# Print the type of variable 'df'\n\n```\n\n### Properties of `DataFrame` objects\n\nLike entries in a spreadsheet file, elements in a `DataFrame` object have row (or *index*) and column coordinates. Column names are always strings. Index elements can be integers, strings, or dates.\n\n\n```python\n# Print the columns of df\n\n```\n\n\n```python\n# Create a new variable called 'money' equal to the 'money growth' column and print\n\n\n```\n\n\n```python\n# Print the type of the variable money\n\n```\n\nA Pandas `Series` stores one column of data. Like a `DataFrame`, a `Series` object has an index. Note that `money` has the same index as `df`. Instead of having a column, the `Series` has a `name` attribute.\n\n\n```python\n# Print the name of the 'money' variable\n\n```\n\nSelect multiple columns of a `DataFrame` by puting the desired column names in a set a of square brackets (i.e., in a `list`).\n\n\n```python\n# Print the first 5 rows of just the inflation, money growth, and gdp growth columns\n\n```\n\nAs mentioned, the set of row coordinates is the index. Unless specified otherwise, Pandas automatically assigns an integer index starting at 0 to rows of the `DataFrame`.\n\n\n```python\n# Print the index of 'df'\n\n```\n\nNote that in the index of the `df` is the numbers 0 through 177. We could have specified a different index when we imported the data using `read_csv()`. For example, suppose we want to the country names to be the index of `df`. Since country names are in the first column of the data file, we can pass the argument `index_col=0` to `read_csv()`\n\n\n```python\n# Import quantity theory data into a Pandas DataFrame called 'df' with country names as the index.\n\n\n# Print first 5 rows of df\n\n```\n\nUse the `loc` attribute to select rows of the `DataFrame` by index *values*.\n\n\n```python\n# Create a new variable called 'usa_row' equal to the 'United States' row and print\n\n\n```\n\nUse `iloc` attribute to select row based on integer location (starting from 0).\n\n\n```python\n# Create a new variable called 'third_row' equal to the third row in the DataFrame and print\n\n\n```\n\nThere are several ways to return a single element of a Pandas `DataFrame`. For example, here are three that we want to return the value of inflation for the United States from the DataFrame `df`:\n\n1. `df.loc['United States','inflation']`\n2. `df.loc['United States']['inflation']`\n3. 
`df['inflation']['United States']`\n\nThe first method points directly to the element in the `df` while the second and third methods return *copies* of the element. That means that you can modify the value of inflation for the United States by running:\n\n df.loc['United States','inflation'] = new_value\n \nBut running either:\n\n df.loc['United States']['inflation'] = new_value\n \nor:\n\n df['inflation']['United States'] = new_value\n\nwill return an error.\n\n\n```python\n# Print the inflation rate of the United States (By index and column together)\n\n```\n\n\n```python\n# Print the inflation rate of the United States (first by index, then by column)\n\n```\n\n\n```python\n# Print the inflation rate of the United States (first by column, then by index)\n\n```\n\nNew columns are easily created as functions of existing columns.\n\n\n```python\n# Create a new column called 'difference' equal to the money growth column minus \n# the inflation column and print the modified DataFrame\n\n\n```\n\n\n```python\n# Print the average difference between money growth and inflation\n\n```\n\n\n```python\n# Remove the following columns from the DataFrame: 'iso code','observations','difference'\n\n\n# Print the modified DataFrame\n\n```\n\n### Methods\n\nA Pandas `DataFrame` has a bunch of useful methods defined for it. `describe()` returns some summary statistics.\n\n\n```python\n# Print the summary statistics for 'df'\n\n```\n\nThe `corr()` method returns a `DataFrame` containing the correlation coefficients of the specified `DataFrame`.\n\n\n```python\n# Create a variable called 'correlations' containg the correlation coefficients for columns in 'df'\n\n\n# Print the correlation coefficients\n\n```\n\n\n```python\n# Print the correlation coefficient for inflation and money growth\n\n\n# Print the correlation coefficient for inflation and real GDP growth\n\n\n# Print the correlation coefficient for money growth and real GDP growth\n\n```\n\n`sort_values()` returns a copy of the original `DataFrame` sorted along the given column. The optional argument `ascending` is set to `True` by default, but can be changed to `False` if you want to print the lowest first.\n\n\n```python\n# Print rows for the countries with the 10 lowest inflation rates\n\n```\n\n\n```python\n# Print rows for the countries with the 10 highest inflation rates\n\n```\n\nNote that `sort_values` and `sort_index` return *copies* of the original `DataFrame`. If, in the previous example, we had wanted to actually modify `df`, we would have need to explicitly overwrite it:\n\n df = df.sort_index(ascending=False)\n\n\n```python\n# Print first 10 rows with the index sorted in descending alphabetical order\n\n```\n\n### Quick plotting example\n\nConstruct a graph that visually confirms the quantity theory of money by making a scatter plot with average money growth on the horizontal axis and average inflation on the vertical axis. Set the marker size `s` to 50 and opacity (`alpha`) 0.25. Add a 45 degree line, axis labels, and a title. 
Lower and upper limits for the horizontal and vertical axes should be -0.2 and 1.2.\n\n\n```python\n# Create data for 45 degree line\n\n\n# Create figure and axis\n\n\n# Plot 45 degree line and create legend in lower right corner\n\n\n# Scatter plot of data inflation against money growth\n\n\n\n```\n\n### Exporting a `DataFrame` to csv\n\nUse the DataFrame method `to_csv()` to export DataFrame to a csv file.\n\n\n```python\n# Export the DataFrame 'df' to a csv file called 'modified_data.csv'.\n\n```\n", "meta": {"hexsha": "9993d1db4568d21f88e55919eb7d07cc34633343", "size": 17807, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lecture Notebooks/Econ126_Class_05_blank.ipynb", "max_stars_repo_name": "t-hdd/econ126", "max_stars_repo_head_hexsha": "17029937bd6c40e606d145f8d530728585c30a1d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Lecture Notebooks/Econ126_Class_05_blank.ipynb", "max_issues_repo_name": "t-hdd/econ126", "max_issues_repo_head_hexsha": "17029937bd6c40e606d145f8d530728585c30a1d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lecture Notebooks/Econ126_Class_05_blank.ipynb", "max_forks_repo_name": "t-hdd/econ126", "max_forks_repo_head_hexsha": "17029937bd6c40e606d145f8d530728585c30a1d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 35.5429141717, "max_line_length": 586, "alphanum_fraction": 0.4332565845, "converted": true, "num_tokens": 1693, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.44939264921326716, "lm_q2_score": 0.2018132198600306, "lm_q1q2_score": 0.0906933775191587}} {"text": "# Funciones de utilidad y aversi\u00f3n al riesgo\n\n\n\nEn el m\u00f3dulo anterior aprendimos \n- qu\u00e9 es un portafolio, c\u00f3mo medir su rendimiento esperado y su volatilidad; \n- un portafolio de activos riesgosos tiene menos riesgo que la suma ponderada de los riesgos individuales,\n- y que esto se logra mediante el concepto de diversificaci\u00f3n;\n- la diversificaci\u00f3n elimina el riesgo idiosincr\u00e1tico, que es el que afecta a cada compa\u00f1\u00eda en particular,\n- sin embargo, el riesgo de mercado no se puede eliminar porque afecta a todos por igual.\n- Finalmente, aprendimos conceptos importantes como frontera de m\u00ednima varianza, portafolios eficientes y el portafolio de m\u00ednima varianza, que son claves en el problema de selecci\u00f3n \u00f3ptima de portafolios.\n\nMuy bien, sin embargo, para plantear el problema de selecci\u00f3n \u00f3ptima de portafolios necesitamos definir la funci\u00f3n que vamos a optimizar: funci\u00f3n de utilidad.\n\n**Objetivos:**\n- \u00bfC\u00f3mo tomamos decisiones seg\u00fan los economistas?\n- \u00bfC\u00f3mo toman decisiones los inversionistas?\n- \u00bfQu\u00e9 son las funciones de utilidad?\n\n*Referencia:*\n- Notas del curso \"Portfolio Selection and Risk Management\", Rice University, disponible en Coursera.\n___\n\n## 1. 
Introducci\u00f3n\n\nLa teor\u00eda econ\u00f3mica comienza con una suposici\u00f3n muy importante: \n- **cada individuo act\u00faa para obtener el mayor beneficio posible con los recursos disponibles**.\n- En otras palabras, **maximizan su propia utilidad**\n\n\u00bfQu\u00e9 es utilidad?\n- Es un concepto relacionado con la felicidad, pero m\u00e1s amplio.\n- Por ejemplo, yo obtengo utilidad de lavar mis dientes o comer sano. Ninguna de las dos me brindan felicidad, pero lo primero mantendr\u00e1 mis dientes sanos y en el largo plazo, lo segundo probablemente contribuir\u00e1 a una buena vejez.\n\nLos economistas no se preocupan en realidad por lo que nos da utilidad, sino simplemente que cada uno de nosotros tiene sus propias preferencias.\n- Por ejemplo, a mi me gusta el caf\u00e9, el f\u00fatbol, los perros, la academia, viajar, entre otros.\n- Ustedes tienen sus propias preferencias tambi\u00e9n.\n\nLa vida es compleja y con demasiada incertidumbre. Debemos tomar decisiones a cada momento, y estas decisiones involucran ciertos \"trade-off\".\n- Por ejemplo, normalmente tenemos una compensaci\u00f3n entre utilidad hoy contra utilidad en el futuro.\n- Debemos balancear nuestro consumo hoy contra nuestro consumo luego.\n- Por ejemplo, ustedes gastan cerca de cuatro horas a la semana viniendo a clases de portafolios, porque esperan que esto contribuya a mejorar su nivel de vida en el futuro.\n\nDe manera que los economistas dicen que cada individuo se comporta como el siguiente optimizador:\n\n\\begin{align}\n\\max & \\quad\\text{Utilidad}\\\\\n\\text{s. a.} & \\quad\\text{Recursos disponibles}\n\\end{align}\n\n\u00bfQu\u00e9 tiene que ver todo esto con el curso?\n- En este m\u00f3dulo desarrollaremos herramientas para describir las preferencias de los inversionistas cuando se encuentran con decisiones de riesgo y rendimiento.\n- Veremos como podemos medir la actitud frente al riesgo, \u00bfcu\u00e1nto te gusta o disgusta el riesgo?\n- Finalmente, veremos como podemos formular el problema de maximizar la utilidad de un inversionista para tomar la decisi\u00f3n de inversi\u00f3n \u00f3ptima.\n___\n\n## 2. Funciones de utilidad.\n\n\u00bfC\u00f3mo tomamos decisiones?\nPor ejemplo:\n- Ustedes tienen que decidir si venir a clase o quedarse en su casa viendo Netflix, o ir al gimnasio.\n- Tienen que decidir entre irse de fiesta cada fin, o ahorrar para salir de vacaciones.\n\nEn el caso de un portafolio, la decisi\u00f3n que se debe tomar es **\u00bfcu\u00e1to riesgo est\u00e1s dispuesto a tomar por qu\u00e9 cantidad de rendimiento?**\n\n**\u00bfC\u00f3mo evaluar\u00edas el \"trade-off\" entre tener cetes contra una estrategia muy riesgosa con un posible alt\u00edsimo rendimiento?**\n\nDe manera que veremos como tomamos decisiones cuando tenemos distintas posibilidades. Espec\u00edficamente, hablaremos acerca de las **preferencias**, como los economistas usan dichas preferencias para explicar las decisiones y los \"trade-offs\" en dichas decisiones.\n\nUsamos las **preferencias** para describir las decisiones que tomamos. Las preferencias nos dicen c\u00f3mo un individuo eval\u00faa los \"trade-offs\" entre distintas elecciones.\n\nPor definici\u00f3n, las preferencias son \u00fanicas para cada individuo. 
En el problema de selecci\u00f3n de portafolios:\n- las preferencias que dictan cu\u00e1nto riesgo est\u00e1s dispuesto a asumir por cu\u00e1nto rendimiento, son espec\u00edficas para cada uno de ustedes.\n- Sus respuestas a esa pregunta pueden ser muy distintas, porque tenemos distintas preferencias.\n\nAhora, nosotros no podemos *cuantificar* dichas preferencias.\n- Por esto usamos el concepto de utilidad, para medir qu\u00e9 tan satisfecho est\u00e1 un individuo con sus elecciones.\n- As\u00ed que podemos pensar en la utilidad como un indicador num\u00e9rico que describe las preferencias,\n- o un \u00edndice que nos ayuda a clasificar diferentes decisiones.\n- En t\u00e9rminos simples, **la utilidad nos ayuda a transmitir a n\u00fameros la noci\u00f3n de c\u00f3mo te sientes**;\n- mientras m\u00e1s utilidad, mejor te sientes.\n\n**Funci\u00f3n de utilidad**: manera sistem\u00e1tica de asignar una medida o indicador num\u00e9rico para clasificar diferentes escogencias.\n\nEl n\u00famero que da una funci\u00f3n de utilidad no tiene significado alguno. Simplemente es una manera de clasificar diferentes decisiones.\n\n**Ejemplo.**\n\nPodemos escribir la utilidad de un inversionista como funci\u00f3n de la riqueza,\n\n$$U(W).$$\n\n- $U(W)$ nos da una medida de qu\u00e9 tan satisfechos estamos con el nivel de riqueza que tenemos. \n- $U(W)$ no es la riqueza como tal, sino que la funci\u00f3n de utilidad traduce la cantidad de riqueza en un \u00edndice num\u00e9rico subjetivo.\n\n\u00bfC\u00f3mo lucir\u00eda gr\u00e1ficamente una funci\u00f3n de utilidad de riqueza $U(W)$?\n\n Ver en el tablero \n- \u00bfQu\u00e9 caracteristicas debe tener?\n- \u00bfC\u00f3mo es su primera derivada?\n- \u00bfC\u00f3mo es su segunda derivada?\n- Tiempos buenos: riqueza alta (\u00bfc\u00f3mo es la primera derivada ac\u00e1?)\n- Tiempos malos: poca riqueza (\u00bfc\u00f3mo es la primera derivada ac\u00e1?)\n\n## 3. Aversi\u00f3n al riesgo\n\nUna dimensi\u00f3n importante en la toma de decisiones en finanzas y econom\u00eda es la **incertidumbre**. Probablemente no hay ninguna decisi\u00f3n en econom\u00eda que no involucre riesgo.\n\n- A la mayor\u00eda de las personas no les gusta mucho el riesgo.\n- De hecho, estudios del comportamiento humano de cara al riesgo, sugieren fuertemente que los seres humanos somos aversos al riesgo.\n- Por ejemplo, la mayor\u00eda de hogares poseen seguros para sus activos.\n- As\u00ed, cuando planteamos el problema de selecci\u00f3n \u00f3ptima de portafolios, suponemos que el inversionista es averso al riesgo.\n\n\u00bfQu\u00e9 significa esto en t\u00e9rminos de preferencias? 
\u00bfC\u00f3mo lo medimos?\n \n- Como seres humanos, todos tenemos diferentes genes y preferencias, y esto aplica tambi\u00e9n a la actitud frente al riesgo.\n- Por tanto, la aversi\u00f3n al riesgo es clave en c\u00f3mo describimos las preferencias de un inversinista.\n- Individuos con un alto grado de aversi\u00f3n al riesgo valorar\u00e1n la seguridad a un alto precio, mientras otros no tanto.\n- De manera que alguien con alta aversi\u00f3n al riesgo, no querr\u00e1 enfrentarse a una situaci\u00f3n con resultado incierto y querr\u00e1 pagar una gran prima de seguro para eliminar dicho riesgo.\n- O equivalentemente, una persona con alta aversi\u00f3n al riesgo requerir\u00e1 una compensaci\u00f3n alta si se decide a asumir ese riesgo.\n\nEl **grado de aversi\u00f3n al riesgo** mide qu\u00e9 tanto un inversionista prefiere un resultado seguro a un resultado incierto.\n\nLo opuesto a aversi\u00f3n al riesgo es **tolerancia al riesgo**.\n \n Ver en el tablero gr\u00e1ficamente, c\u00f3mo se explica la aversi\u00f3n al riesgo desde las funciones de utilidad. \n\n**Conclusi\u00f3n:** la concavidad en la funci\u00f3n de utilidad dicta qu\u00e9 tan averso al riesgo es el individuo.\n\n### \u00bfC\u00f3mo medimos el grado de aversi\u00f3n al riesgo de un individuo?\n\n\u00bfSaben cu\u00e1l es su coeficiente de aversi\u00f3n al riesgo? Podemos estimarlo.\n\nSuponga que se puede participar en la siguiente loter\u00eda:\n- usted puede ganar $\\$1000$ con $50\\%$ de probabilidad, o\n- puede ganar $\\$500$ con $50\\%$ de probabilidad.\n\nEs decir, de entrada usted tendr\u00e1 $\\$500$ seguros pero tambi\u00e9n tiene la posibilidad de ganar $\\$1000$.\n\n\u00bfCu\u00e1nto estar\u00edas dispuesto a pagar por esta oportunidad?\n\nBien, podemos relacionar tu respuesta con tu coeficiente de aversi\u00f3n al riesgo.\n\n| Coeficiente de aversi\u00f3n al riesgo | Cantidad que pagar\u00edas |\n| --------------------------------- | --------------------- |\n| 0 | 750 |\n| 0.5 | 729 |\n| 1 | 707 |\n| 2 | 667 |\n| 3 | 632 |\n| 4 | 606 |\n| 5 | 586 |\n| 10 | 540 |\n| 15 | 525 |\n| 20 | 519 |\n| 50 | 507 |\n\nLa mayor\u00eda de la gente est\u00e1 dispuesta a pagar entre $\\$540$ (10) y $\\$707$ (1). Es muy raro encontrar coeficientes de aversi\u00f3n al riesgo menores a 1. Esto est\u00e1 soportado por una gran cantidad de encuestas.\n\n- En el mundo financiero, los consultores financieros utilizan cuestionarios para medir el coeficiente de aversi\u00f3n al riesgo.\n\n**Ejemplo.** Describir en t\u00e9rminos de aversi\u00f3n al riesgo las siguientes funciones de utilidad que dibujar\u00e9 en el tablero.\n___\n\n# Anuncios\n\n## 1. Quiz la siguiente clase.\n\n## 2. Tarea 5 entrega 2 para hoy, lunes 22 de Junio.\n\n\n\n
\nCreated with Jupyter by Esteban Jim\u00e9nez Rodr\u00edguez.\n
\n", "meta": {"hexsha": "4610afeff1a17ce1ffebdfb0e78e5c4dcf7a911c", "size": 13487, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Modulo3/Clase10_FuncionesUtilidad.ipynb", "max_stars_repo_name": "memoglez3/porinvv2020", "max_stars_repo_head_hexsha": "14068e8c149cd624f5e58c32186f6065fbd5e13d", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Modulo3/Clase10_FuncionesUtilidad.ipynb", "max_issues_repo_name": "memoglez3/porinvv2020", "max_issues_repo_head_hexsha": "14068e8c149cd624f5e58c32186f6065fbd5e13d", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Modulo3/Clase10_FuncionesUtilidad.ipynb", "max_forks_repo_name": "memoglez3/porinvv2020", "max_forks_repo_head_hexsha": "14068e8c149cd624f5e58c32186f6065fbd5e13d", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 42.9522292994, "max_line_length": 272, "alphanum_fraction": 0.6175576481, "converted": true, "num_tokens": 2481, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.34864513533394575, "lm_q2_score": 0.2598256322295121, "lm_q1q2_score": 0.09058694271188626}} {"text": "# KVLCC2 Ikeda estimators\n\n# Purpose\nThe are a lot of different ways to implement Ikeda's method. This notebook is creating a lot of different estimators and saving this to pkl files.\n\n# Methodology\nBuild the estimators and save them\n\n# Setup\n\n\n```python\n# %load imports.py\n\"\"\"\nThese is the standard setup for the notebooks.\n\"\"\"\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport sys\nsys.path.append(\"../../\")\n\nimport pandas as pd\npd.options.display.max_rows = 999\npd.options.display.max_columns = 999\npd.set_option(\"display.max_columns\", None)\nimport numpy as np\nimport os\nimport matplotlib.pyplot as plt\nfrom collections import OrderedDict\nimport copy\nfrom sklearn.pipeline import Pipeline\nfrom rolldecayestimators.transformers import CutTransformer, LowpassFilterDerivatorTransformer, ScaleFactorTransformer, OffsetTransformer\nfrom rolldecayestimators.direct_estimator_cubic import EstimatorQuadraticB, EstimatorCubic\nfrom rolldecayestimators.ikeda_estimator import IkedaQuadraticEstimator\nimport src.equations as equations\nimport rolldecayestimators.lambdas as lambdas\nfrom rolldecayestimators.substitute_dynamic_symbols import lambdify\nimport rolldecayestimators.symbols as symbols\nimport sympy as sp\n\nfrom sympy.physics.vector.printing import vpprint, vlatex\nfrom IPython.display import display, Math, Latex\n\nfrom sklearn.metrics import r2_score\nimport shipflowmotionshelpers.shipflowmotionshelpers as helpers\nimport src.visualization.visualize as visualize\nimport scipy\nfrom copy import deepcopy\nimport joblib\n```\n\n Duplicate key in file WindowsPath('C:/Users/maa/.matplotlib/stylelib/paper.mplstyle'), line 462 ('figure.figsize : 5, 3 ## figure size in inches')\n Duplicate key in file WindowsPath('C:/Users/maa/.matplotlib/stylelib/paper.mplstyle'), line 463 ('figure.dpi : 100 ## figure dots per inch')\n\n\n\n```python\nimport joblib\nfrom src.helpers import get_ikeda, calculate_ikeda, get_estimator_variation, get_data_variation , get_variation, hatify\nfrom rolldecayestimators 
import fit_on_amplitudes\nfrom copy import deepcopy\nimport rolldecayestimators.ikeda as ikeda_classes\nimport rolldecayestimators.ikeda_speed\nimport scipy\nimport rolldecayestimators.ikeda_speed\nimport src.helpers\nfrom pyscores2.runScores2 import Calculation\nfrom pyscores2.indata import Indata\nfrom pyscores2.output import OutputFile\nimport src.visualization.visualize as visualize\nfrom reports import mdl_results\nfrom notebook_helpers import load_time_series_fnpf\n\nimport reports.examples.FNPF\n```\n\n## Load data from FNPF:\n\n\n```python\ndf_parameters = pd.read_csv('../../data/processed/roll decay KVLCC2/fnpf_parameters.csv', index_col=0)\ndf_parameters.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
BIXYIXZIYYIYZIZZKXXKXYKYYKZZLPPSVXCGYCGZCGaccxactob1cb1crb1lb1qb2cb2crb2lb2qb3cb3crb3lb3qb4cb4crb4lb4qb5cb5crb5lb5qb6cb6crb6lb6qbdensbodyclevclevelcurvdensdensidofadofactordopadopaddingdopodopowerdowndownsdownstdtfilefile_path_tsfnformfreegraviheaveheelhullinterk1ndk2ndk3ndk4ndk5ndk6ndkxxkyylevelevellppmmaxtimaxtimemeshnamenpxpnpypnstepnthrpitchpowepowerreflereflenrnrollrud1rud2rud3rud4rud5rud6rudtsideslimstrestrengthsurgeswaytatftfnhitfnhightfnlotfnlowtitltitletrimupstupstrvm_swlinwlmeyawymaxyminzcgconvencountersid
kvlcc2_rolldecay_0kn0.8530.00.026.0550.026.0550.3417.5561.1771.1774.7065.9810.9932.5190.00.2740.0501.00.01.00.00.00.01.00.00.00.01.00.00.00.01.00.000000.000000.01.00.00.00.01.00.00.00.60.36.06.00.000021000.01000.050.050.02.02.02.02.01.00-5.01.000.02..C:\\Dev\\Prediction-of-roll-damping-using-fully-...1.472020e-070.230.39.806650.00.00.0020.000000e+000.10.10.00.00.00.10.3411851.17656.06.04.706993.42180.0180.00.000000e+00TRAN40.040.030.032.00.02.02.04.7064.7063.957280e+0010.00.00.00.00.00.00.00.01.0030.00.500.500.00.00.30590.30594.04.00.50.5KVLCC2KVLCC20.01.005.00.0000010.0000010.0000010.05.0-5.00.2735NaNNaN21338.0
kvlcc2_rolldecay_15-5kn_const_large20.8530.00.026.0550.026.0550.3417.5561.1771.1774.7065.9810.9932.5190.00.2740.0251.00.01.00.00.00.01.00.00.00.01.00.00.00.01.00.000000.000000.01.00.00.00.01.00.00.00.60.36.06.00.000021000.01000.01.01.02.02.01.01.00.70-5.00.700.02..C:\\Dev\\Prediction-of-roll-damping-using-fully-...1.423410e-010.230.39.806650.00.00.0020.000000e+000.10.10.00.00.00.10.3411851.17656.06.04.706993.42200.0200.00.000000e+00TRAN40.040.030.032.00.02.02.04.7064.7063.826600e+0610.00.00.00.00.00.00.00.00.7030.00.050.050.00.00.30590.30594.04.00.50.5KVLCC2KVLCC20.00.705.00.9669760.0000010.0000010.05.0-5.00.2735NaNNaN21340.0
kvlcc2_rolldecay_15-5kn_ikeda_dev0.8530.00.026.0550.026.0550.3417.5561.1771.1774.7065.9810.9932.5190.00.2740.0501.00.01.00.00.00.01.00.00.00.01.00.00.00.01.06.072172.743710.01.00.00.00.01.00.00.00.60.36.06.00.000041000.01000.00.00.02.02.01.01.00.25-2.00.25NaN..C:\\Dev\\Prediction-of-roll-damping-using-fully-...1.423410e-010.200.39.806650.00.00.0021.000000e-070.10.10.00.00.00.10.3411851.17656.06.04.706993.42600.0600.01.000000e-07TRAN24.024.030.06.00.02.02.04.7064.7063.826600e+0610.00.00.00.00.00.00.00.00.2530.01.001.000.00.00.30590.30594.04.00.50.5KVLCC2KVLCC20.00.252.00.9669760.0000010.0000010.02.0-2.00.27350.00010.021340.0
\n
\n\n\n\n## Load MDL results\n\n\n```python\ndf_rolldecays = mdl_results.df_rolldecays\n```\n\n## Bilge radius\n\n\n```python\nscale_factor = df_rolldecays.iloc[0].scale_factor\nlpp = df_rolldecays.iloc[0].lpp/scale_factor\n\nRs_data = [\n [lpp*scale_factor,40], \n [290,15.21],\n [225,2.4],\n [129,2.4],\n [45,8.48],\n [0,40], \n ] # Measured on full scale geometry\n\n\ndf_Rs = pd.DataFrame(data=Rs_data, columns=['x','R_b'])\ndf_Rs['R_b']/=scale_factor\ndf_Rs['x']/=scale_factor\ndf_Rs['station'] = df_Rs['x']/lpp*20\ndf_Rs.sort_values(by='station', inplace=True)\n\nstations = np.arange(0,21,1)\ndf_Rs_interp = pd.DataFrame(index=stations)\n\ndf_Rs_interp['R_b'] = np.interp(stations,df_Rs['station'].values,df_Rs['R_b'].values)\n```\n\n\n```python\ndf_areas = pd.read_csv('../../data/interim/kvlcc_areas.csv', sep=';', index_col=0)\ndf_areas.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
areaxtbr_b
no
013.826612-5.4950002.0011.6385776.636080
1123.85130610.15993218.2527.89352242.367175
2428.21140928.05128420.8041.82428445.369454
3683.70916543.70621620.8050.28251441.080696
4917.89506661.59756820.8056.15923234.146143
\n
\n\n\n\n\n```python\ndf_areas_model = df_areas.copy()\ndf_areas_model['area']/=(scale_factor**2)\ndf_areas_model['x']/=(scale_factor)\ndf_areas_model['t']/=(scale_factor)\ndf_areas_model['b']/=(scale_factor)\ndf_areas_model['r_b']/=(scale_factor)\n\n\n```\n\n\n```python\nfig,ax=plt.subplots()\ndf_Rs_interp.plot(y='R_b', label='manually',ax=ax)\ndf_areas_model.plot(y='r_b', label='points',ax=ax)\nax.legend()\n```\n\n\n```python\nc_r_tree = joblib.load('../../models/C_r_tree.pkl')\n\ndef predict_C_r(sigma, a_1, a_3):\n \n X = np.array([sigma,a_1,a_3]).T\n \n return c_r_tree.predict(X)\n \n```\n\n\n```python\nrun_paths={\n 21338 : {\n 'scores_indata_path':'../../models/KVLCC2_speed.IN',\n 'scores_outdata_path':'../../data/interim/KVLCC2_speed.out',\n 'roll_decay_model':'../../models/KVLCC2_21338.pkl',\n 'motions_file_paths': ['kvlcc2_rolldecay_0kn'],\n 'combined_motions_ikeda': ['kvlcc2_rolldecay_0kn'], ## hybrid model with motions and Ikeda\n \n },\n 21340 : {\n 'scores_indata_path':'../../models/KVLCC2_speed.IN',\n 'scores_outdata_path':'../../data/interim/KVLCC2_speed.out',\n 'roll_decay_model':'../../models/KVLCC2_21340.pkl',\n #'motions_file_paths': ['kvlcc2_rolldecay_15-5kn'],\n #'combined_motions_ikeda': ['kvlcc2_rolldecay_15-5kn'], ## hybrid model with motions and Ikeda\n 'motions_file_paths': ['kvlcc2_rolldecay_15-5kn_const_large2'],\n 'combined_motions_ikeda': ['kvlcc2_rolldecay_15-5kn_const_large2'], ## hybrid model with motions and Ikeda\n \n }\n}\n```\n\n## Build Ikeda estimators:\n\n\n```python\nruns = OrderedDict()\n\nfor run_id, run in run_paths.items():\n \n mdl_meta_data = df_rolldecays.loc[run_id]\n runs[run_id] = new_run = {\n 'ikedas':OrderedDict(),\n }\n ikedas = new_run['ikedas']\n \n ## Common data:\n scale_factor = mdl_meta_data.scale_factor\n indata_file_path=run['scores_indata_path']\n output_file_path=run['scores_outdata_path']\n motions_file_path=run['motions_file_paths'][0] # Assuming same parameters\n parameters = df_parameters.loc[motions_file_path]\n \n ## Load ScoresII results\n indata = Indata()\n indata.open(indataPath=indata_file_path)\n output_file = OutputFile(filePath=output_file_path)\n \n V = mdl_meta_data.ship_speed*1.852/3.6/np.sqrt(scale_factor)\n \n if not mdl_meta_data.BKL:\n BKL=0\n else:\n BKL=mdl_meta_data.BKL/scale_factor\n \n if not mdl_meta_data.BKB:\n BKB = 0\n else:\n BKB=mdl_meta_data.BKB/scale_factor\n \n \n kg=mdl_meta_data.kg/scale_factor\n \n \n ## Various Ikeda models:\n \n # Regular ikeda (ikeda bilge radius approx.)\n name = 'ikeda'\n ikedas[name] = {}\n ikedas[name]['estimator'] = ikeda_classes.Ikeda.load_scoresII(V=V, w=None, fi_a=None, indata=indata, output_file=output_file, \n scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg)\n \n # ikeda (bilge radius from CAD)\n name = 'ikeda_r'\n ikedas[name] = {}\n R_b = df_Rs_interp['R_b'].values\n ikedas[name]['estimator'] = ikeda_classes.IkedaR.load_scoresII(V=V, w=None, fi_a=None, indata=indata, output_file=output_file, \n scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg, R_b=R_b)\n \n # ikeda (bilge radius from CAD)\n name = 'ikeda_s'\n ikedas[name] = {}\n #R_b = df_Rs_interp['R_b'].values\n R_b = df_areas_model['r_b'].values\n \n ikedas[name]['estimator'] = ikeda_classes.IkedaR.load_scoresII(V=V, w=None, fi_a=None, indata=indata, output_file=output_file, \n scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg, R_b=R_b)\n \n # Same as Ikeda class but with mandatory wetted surface.\n name = 'ikeda_s'\n ikedas[name] = {}\n S_f = parameters.S\n \n ikedas[name]['estimator'] = 
ikeda_classes.IkedaS.load_scoresII(V=V, w=None, fi_a=None, indata=indata, output_file=output_file, \n scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg, S_f=S_f)\n \n # Same as Ikeda class but with mandatory wetted surface and bilge radius from CAD.\n name = 'ikeda_r_s'\n ikedas[name] = {}\n S_f = parameters.S\n \n ikedas[name]['estimator'] = ikeda_classes.IkedaR.load_scoresII(V=V, w=None, fi_a=None,\n indata=indata, output_file=output_file, \n scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg, S_f=S_f, R_b=R_b)\n \n # Same as Ikeda eddy damping for barge.\n #name = 'ikeda_barge'\n #ikedas[name] = {}\n # \n #ikedas[name]['estimator'] = ikeda_classes.IkedaBarge.load_scoresII(V=V, w=None, fi_a=None, indata=indata, output_file=output_file, \n # scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg)\n \n \n # Same as Ikeda manual C_r.\n name = 'ikeda_C_r'\n ikedas[name] = {}\n \n #ikedas[name]['estimator'] = estimator = ikeda_classes.IkedaCr.load_scoresII(V=V, w=None, fi_a=None,\n # indata=indata, output_file=output_file, \n # scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg, S_f=S_f, R_b=R_b)\n # Note no S_f!\n ikedas[name]['estimator'] = estimator = ikeda_classes.IkedaCr.load_scoresII(V=V, w=None, fi_a=None,\n indata=indata, output_file=output_file, \n scale_factor=scale_factor, BKL=BKL, BKB=BKB, kg=kg, R_b=R_b)\n\n a, a_1, a_3, sigma_s, H = estimator.calculate_sectional_lewis_coefficients()\n estimator.C_r = predict_C_r(sigma=sigma_s, a_1=a_1, a_3=a_3)\n \n\n```\n\n c:\\python36-64\\lib\\re.py:212: FutureWarning: split() requires a non-empty pattern match.\n return _compile(pattern, flags).split(string, maxsplit)\n c:\\python36-64\\lib\\re.py:212: FutureWarning: split() requires a non-empty pattern match.\n return _compile(pattern, flags).split(string, maxsplit)\n\n\n## Saving Ikeda estimators:\n\n\n```python\nfor id,run in runs.items():\n for ikeda_name, ikeda in run['ikedas'].items():\n \n file_name = '%s_%s.pkl' % (id,ikeda_name)\n joblib.dump(ikeda['estimator'], '../../models/%s' % file_name)\n \n```\n\n## Load time series from FNPF\n\n\n```python\ntime_series = load_time_series_fnpf(names=df_parameters.index)\n```\n\n## Load FNPF models\n\n\n```python\nmotion_models, df_results_motions = reports.examples.FNPF.get_models_and_results()\n```\n\n c:\\dev\\prediction-of-roll-damping-using-fully-nonlinear-potential-flow-and-ikedas-method\\venv\\lib\\site-packages\\sklearn\\base.py:334: UserWarning: Trying to unpickle estimator Pipeline from version 0.24.1 when using version 0.23.2. This might lead to breaking code or invalid results. 
Use at your own risk.\n UserWarning)\n\n\n\n```python\nfor run_id, run in run_paths.items():\n \n mdl_meta_data = df_rolldecays.loc[run_id]\n \n new_run = runs[run_id]\n \n ## MDL:\n model_mdl = joblib.load(run['roll_decay_model'])\n estimator_mdl = model_mdl['estimator']\n estimator_mdl.calculate_amplitudes_and_damping()\n new_run['model_mdl']=model_mdl\n new_run['estimator_mdl']=estimator_mdl\n \n scale_factor = mdl_meta_data.scale_factor\n new_run['meta_data'] = meta_data={\n 'Volume':mdl_meta_data.Volume/(scale_factor**3),\n 'GM':mdl_meta_data.gm/scale_factor,\n 'rho':mdl_meta_data.rho,\n 'g':mdl_meta_data.g,\n 'beam':mdl_meta_data.beam/scale_factor,\n }\n \n new_run['results'] = estimator_mdl.result_for_database(meta_data=meta_data)\n results = new_run['results']\n \n # Prediction\n new_run['df_model'] = get_estimator_variation(estimator = estimator_mdl, results=results, meta_data=meta_data)\n \n # Model tests\n new_run['df'] = get_data_variation(estimator = estimator_mdl, results=results, meta_data=meta_data)\n phi_a = new_run['df']['phi_a']\n \n ## Motions\n new_run['motions'] = OrderedDict()\n for motions_file_path in run.get('motions_file_paths',[]):\n motion_file = new_run['motions'][motions_file_path] = {}\n \n motion_file['parameters'] = parameters = df_parameters.loc[motions_file_path]\n \n motion_file['X'] = X = time_series[motions_file_path]\n \n \n motion_file['model'] = model = motion_models[motions_file_path]\n #assert model.score() > 0.90\n \n motion_file['meta_data'] = meta_data ={\n 'Volume':parameters.V,\n 'GM':mdl_meta_data.gm/mdl_meta_data.scale_factor,\n 'rho':parameters.dens,\n 'g':parameters.gravi,\n 'beam':parameters.B,\n }\n \n results = model.result_for_database(meta_data=meta_data)\n if not 'B_3' in results:\n results['B_3'] = 0\n \n motion_file['results'] = results\n model.calculate_amplitudes_and_damping()\n \n # Prediction\n motion_file['df_model'] = get_estimator_variation(estimator = model, results = results, meta_data=meta_data)\n \n # Simulation\n motion_file['df'] = get_data_variation(estimator = model, results = results, meta_data=meta_data)\n \n \n ## Ikeda\n for ikeda_name, ikeda in new_run['ikedas'].items():\n \n omega0=new_run['results']['omega0']\n #phi_a=new_run['results']['phi_a']\n ikeda_estimator = ikeda['estimator']\n ikeda['df'] = results = ikeda_estimator.calculate(w=omega0, fi_a=phi_a)\n \n results['phi_a'] = phi_a\n results.set_index('phi_a', inplace=True)\n \n ## Convert to dimensional damping [Nm/s]\n ikeda['meta_data'] = meta_data = new_run['meta_data']\n result_ = src.helpers.unhat(df=results, Disp=meta_data['Volume'], beam=meta_data['beam'], g=meta_data['g'], rho=meta_data['rho'])\n ikeda['df'] = results = pd.concat((results,result_), axis=1)\n \n ## Feed the results into a quadratic model:\n output = fit_on_amplitudes.fit_quadratic(y=results['B_44'], phi_a=results.index, omega0=omega0, \n B_1_0=new_run['results']['B_1'], \n B_2_0=new_run['results']['B_2'], \n )\n \n parameters = {\n 'B_1A': output['B_1'] / new_run['results']['A_44'],\n 'B_2A': output['B_2'] / new_run['results']['A_44'],\n 'B_3A': 0,\n 'C_1A': estimator_mdl.parameters['C_1A'],\n 'C_3A': estimator_mdl.parameters['C_3A'],\n 'C_5A': estimator_mdl.parameters['C_5A'],\n }\n ikeda['model'] = EstimatorCubic.load(**parameters, X=estimator_mdl.X)\n \n \n ikeda['results'] = ikeda['model'].result_for_database(meta_data=meta_data)\n ikeda['df_model'] = get_estimator_variation(estimator = ikeda['model'], results = ikeda['results'], meta_data=new_run['meta_data'])\n \n ## 
Combined model:\n new_run['combined_models'] = combined_models = {}\n combined_motions_ikedas = run.get('combined_motions_ikeda',[])\n for combined_motions_ikeda in combined_motions_ikedas:\n \n combined_models[combined_motions_ikeda] = combined_model = {}\n \n combined_model['motions'] = model_motions = new_run['motions'][combined_motions_ikeda]\n combined_model['ikedas'] = OrderedDict()\n \n for ikeda_name, ikeda in new_run['ikedas'].items():\n \n combined_model['ikedas'][ikeda_name] = combined_model_ikeda = {}\n \n df = ikeda['df']\n df_motions = pd.DataFrame()\n df_motions['phi_a'] = df.index.copy()\n df_motions = get_variation(X_amplitudes=df_motions, results = model_motions['results'], meta_data=model_motions['meta_data'])\n df_motions.set_index('phi_a', inplace=True)\n \n columns_visc = ['B_L','B_F','B_E','B_BK']\n df_combined = df[columns_visc].copy()\n df_combined['B_W'] = df_motions['B_e']\n df_combined['B'] = df_combined.sum(axis=1)\n combined_model_ikeda['df'] = df_combined\n \n ## Feed the results into a cubic model:\n output = fit_on_amplitudes.fit_quadratic(y=df_combined['B'], phi_a=df_combined.index, omega0=omega0, \n B_1_0=new_run['results']['B_1'], \n B_2_0=new_run['results']['B_2'], \n )\n \n parameters = {\n 'B_1A': output['B_1'] / new_run['results']['A_44'],\n 'B_2A': output['B_2'] / new_run['results']['A_44'],\n 'B_3A': 0,\n 'C_1A': estimator_mdl.parameters['C_1A'],\n 'C_3A': estimator_mdl.parameters['C_3A'],\n 'C_5A': estimator_mdl.parameters['C_5A'],\n }\n combined_model_ikeda['model'] = EstimatorCubic.load(**parameters, X=estimator_mdl.X)\n combined_model_ikeda['results'] = combined_model_ikeda['model'].result_for_database(meta_data=meta_data)\n combined_model_ikeda['df_model'] = get_estimator_variation(estimator = combined_model_ikeda['model'], results = combined_model_ikeda['results'], meta_data=new_run['meta_data'])\n \n\n```\n\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n 
gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n c:\\dev\\rolldecay-estimators\\rolldecayestimators\\ikeda_speed.py:595: RuntimeWarning: invalid value encountered in sqrt\n gamma=sqrt(pi)*f3*(rmax+2*M/H*sqrt(B0**2*A0**2))/((2*Ts*(1-OG/Ts)*sqrt(H0_prim*sigma_prim)))\n\n\n## Save Hybrid models\n\n\n```python\nfor id, run in runs.items():\n \n for key, combined_model in run['combined_models'].items():\n for ikeda_name, ikeda in combined_model['ikedas'].items():\n pipeline = Pipeline([('estimator',ikeda['model'])])\n file_name = '%i_%s_%s.pkl' % (id,key,ikeda_name) \n joblib.dump(pipeline, '../../models/%s' % file_name)\n```\n", "meta": {"hexsha": "a30f57316bc56da4082f07029b78a96f16917f54", "size": 80628, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "reports/ISOPE_outline/00.7_KVLCC2_ikeda_estimators.ipynb", "max_stars_repo_name": "rddaz2013/Prediction-of-roll-motion-using-fully-nonlinear-potential-flow-and-Ikedas-method", "max_stars_repo_head_hexsha": "ac0a27e31d64edc8ae8912b6ed10005029868c90", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "reports/ISOPE_outline/00.7_KVLCC2_ikeda_estimators.ipynb", "max_issues_repo_name": "rddaz2013/Prediction-of-roll-motion-using-fully-nonlinear-potential-flow-and-Ikedas-method", "max_issues_repo_head_hexsha": "ac0a27e31d64edc8ae8912b6ed10005029868c90", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "reports/ISOPE_outline/00.7_KVLCC2_ikeda_estimators.ipynb", "max_forks_repo_name": "rddaz2013/Prediction-of-roll-motion-using-fully-nonlinear-potential-flow-and-Ikedas-method", "max_forks_repo_head_hexsha": "ac0a27e31d64edc8ae8912b6ed10005029868c90", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, 
"max_forks_repo_forks_event_min_datetime": "2021-06-05T15:38:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-06-05T15:38:54.000Z", "avg_line_length": 53.1496374423, "max_line_length": 21540, "alphanum_fraction": 0.5798605943, "converted": true, "num_tokens": 11691, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4960938294709195, "lm_q2_score": 0.18242552158390102, "lm_q1q2_score": 0.09050017559578734}} {"text": "# This Jupyter notebook illustrates how to read data in from an external file \n## [notebook provides a simple illustration, users can easily use these examples to modify and customize for their data storage scheme and/or preferred workflows] \n\n\n###Motion Blur Filtering: A Statistical Approach for Extracting Confinement Forces & Diffusivity from a Single Blurred Trajectory\n\n#####Author: Chris Calderon\n\nCopyright 2015 Ursa Analytics, Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\");\nYou may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0\n\n\n\n### Cell below loads the required modules and packages\n\n\n```python\n%matplotlib inline \n#command above avoids using the \"dreaded\" pylab flag when launching ipython (always put magic command above as first arg to ipynb file)\nimport matplotlib.font_manager as font_manager\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as spo\nimport findBerglundVersionOfMA1 #this module builds off of Berglund's 2010 PRE parameterization (atypical MA1 formulation)\nimport MotionBlurFilter\nimport Ursa_IPyNBpltWrapper\n\n```\n\n##Now that required modules packages are loaded, set parameters for simulating \"Blurred\" OU trajectories. Specific mixed continuous/discrete model:\n\n\\begin{align}\ndr_t = & ({v}-{\\kappa} r_t)dt + \\sqrt{2 D}dB_t \\\\\n\\psi_{t_i} = & \\frac{1}{t_E}\\int_{t_{i}-t_E}^{t_i} r_s ds + \\epsilon^{\\mathrm{loc}}_{t_i}\n\\end{align}\n\n###In above equations, parameter vector specifying model is: $\\theta = (\\kappa,D,\\sigma_{\\mathrm{loc}},v)$\n\n\n###Statistically exact discretization of above for uniform time spacing $\\delta$ (non-uniform $\\delta$ requires time dependent vectors and matrices below):\n\n\\begin{align}\nr_{t_{i+1}} = & A + F r_{t_{i}} + \\eta_{t_i} \\\\\n\\psi_{t_i} = & H_A + H_Fr_{t_{i-1}} + \\epsilon^{\\mathrm{loc}}_{t_i} + \\epsilon^{\\mathrm{mblur}}_{t_i} \\\\\n\\epsilon^{\\mathrm{loc}}_{t_i} + & \\epsilon^{\\mathrm{mblur}}_{t_i} \\sim \\mathcal{N}(0,R_i) \\\\\n\\eta_i \\sim & \\mathcal{N}(0,Q) \\\\\nt_{i-1} = & t_{i}-t_E \\\\\n C = & cov(\\epsilon^{\\mathrm{mblur}}_{t_i},\\eta_{t_{i-1}}) \\ne 0\n\\end{align}\n\n\n####Note: Kalman Filter (KF) and Motion Blur Filter (MBF) codes estimate $\\sqrt(2D)$ directly as \"thermal noise\" parameter\n\n### For situations where users would like to read data in from external source, many options exist. \n\n####In cell below, we show how to read in a text file and process the data assuming the text file contains two columns: One column with the 1D measurements and one with localization standard deviation vs. time estimates. Code chunk below sets up some default variables (tunable values indicated by comments below). Note that for multivariate signals, chunks below can readily be modified to process x/y or x/y/z measurements separately. 
Future work will address estimating 2D/3D models with the MBF (computational [not theoretical] issues exists in this case); however, the code currently provides diagnostic information to determine if unmodeled multivariate interaction effects are important (see main paper and Calderon, Weiss, Moerner, PRE 2014)\n\n### Plot examles from other notebooks can be used to explore output within this notbook or another. Next, a simple example of \"Batch\" processing is illustrated.\n\n\n```python\nfilenameBase='./ExampleData/MyTraj_' #assume all trajectory files have this prefix (adjust file location accordingly)\n\nN=20 #set the number of trajectories to read. \ndelta = 25./1000. #user must specify the time (in seconds) between observations. code provided assumes uniform continuous illumination and \n#NOTE: in this simple example, all trajectories assumed to be collected with exposure time delta input above\n\n\n\n#now loop over trajectories and store MLE results\nresBatch=[] #variable for storing MLE output\n\n#loop below just copies info from cell below (only difference is file to read is modified on each iteration of the loop)\nfor i in range(N):\n \n filei = filenameBase + str(i+1) + '.txt'\n print ''\n print '^'*100\n print 'Reading in file: ', filei\n #first load the sample data stored in text file. here we assume two columns of numerica data (col 1 are measurements)\n data = np.loadtxt(filei)\n (T,ncol)=data.shape\n #above we just used a simple default text file reader; however, any means of extracting the data and\n #casting it to a Tx2 array (or Tx1 if no localization accuracy info available) will work.\n\n\n\n ymeas = data[:,0]\n locStdGuess = data[:,1] #if no localization info avaible, just set this to zero or a reasonable estimate of localization error [in nm]\n\n Dguess = 0.1 #input a guess of the local diffusion coefficient of the trajecotry to seed the MLE searches (need not be accurate)\n velguess = np.mean(np.diff(ymeas))/delta #input a guess of the velocity of the trajecotry to seed the MLE searches (need not be accurate)\n\n MA=findBerglundVersionOfMA1.CostFuncMA1Diff(ymeas,delta) #construct an instance of the Berglund estimator\n res = spo.minimize(MA.evalCostFuncVel, (np.sqrt(Dguess),np.median(locStdGuess),velguess), method='nelder-mead')\n\n #output Berglund estimation result.\n print 'Berglund MLE',res.x[0]*np.sqrt(2),res.x[1],res.x[-1]\n print '-'*100\n\n #obtain crude estimate of mean reversion parameter. 
see Calderon, PRE (2013)\n kappa1 = np.log(np.sum(ymeas[1:]*ymeas[0:-1])/(np.sum(ymeas[0:-1]**2)-T*res.x[1]**2))/-delta\n\n #construct an instance of the MBF estimator\n BlurF = MotionBlurFilter.ModifiedKalmanFilter1DwithCrossCorr(ymeas,delta,StaticErrorEstSeq=locStdGuess)\n #use call below if no localization info avaible\n # BlurF = MotionBlurFilter.ModifiedKalmanFilter1DwithCrossCorr(ymeas,delta)\n\n parsIG=np.array([np.abs(kappa1),res.x[0]*np.sqrt(2),res.x[1],res.x[-1]]) #kick off MLE search with \"warm start\" based on simpler model\n #kick off nonlinear cost function optimization given data and initial guess\n resBlur = spo.minimize(BlurF.evalCostFunc,parsIG, method='nelder-mead')\n \n print 'parsIG for Motion Blur filter',parsIG\n print 'Motion Blur MLE result:',resBlur\n\n #finally evaluate diagnostic statistics at MLE just obtained\n loglike,xfilt,pit,Shist =BlurF.KFfilterOU1d(resBlur.x) \n\n print np.mean(pit),np.std(pit)\n print 'crude assessment of model: check above mean is near 0.5 and std is approximately',np.sqrt(1/12.)\n print 'statements above based on generalized residual U[0,1] shape' \n print 'other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.'\n \n #finally just store the MLE of the MBF in a list\n resBatch.append(resBlur.x)\n\n```\n\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_1.txt\n Berglund MLE 0.424450138266 0.0410840106778 -0.000253509776551\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 6.01896509e-01 4.24450138e-01 4.10840107e-02 -2.53509777e-04]\n Motion Blur MLE result: status: 0\n nfev: 196\n success: True\n fun: -1.1526156274680164\n x: array([ 9.88732097e-01, 4.30329177e-01, 1.69786977e-02,\n -1.88788339e-04])\n message: 'Optimization terminated successfully.'\n nit: 110\n 0.512506169502 0.289227348189\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_2.txt\n Berglund MLE 0.403080320506 0.0421133323903 0.0315952570332\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.79926789 0.40308032 0.04211333 0.03159526]\n Motion Blur MLE result: status: 0\n nfev: 299\n success: True\n fun: -1.1760118286776404\n x: array([ 1.44062784, 0.41103507, 0.01766368, -0.04607467])\n message: 'Optimization terminated successfully.'\n nit: 172\n 0.496990426074 0.29206459027\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_3.txt\n Berglund MLE 0.425243165442 0.0383252438668 0.0339861519423\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.27321687 0.42524317 0.03832524 
0.03398615]\n Motion Blur MLE result: status: 0\n nfev: 312\n success: True\n fun: -1.1942812161329239\n x: array([ 1.37933008, 0.44563642, 0.01178971, 0.52399413])\n message: 'Optimization terminated successfully.'\n nit: 186\n 0.50078852219 0.291016376117\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_4.txt\n Berglund MLE 0.387088460983 0.0390914442611 0.00401072737933\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.29576551 0.38708846 0.03909144 0.00401073]\n Motion Blur MLE result: status: 0\n nfev: 429\n success: True\n fun: -1.2220005538177285\n x: array([ 1.20270778, 0.40031886, 0.01525121, 0.37603661])\n message: 'Optimization terminated successfully.'\n nit: 258\n 0.501678044756 0.287044032779\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_5.txt\n Berglund MLE 0.373653923964 0.0414090910471 -0.0271831915793\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.60942087 0.37365392 0.04140909 -0.02718319]\n Motion Blur MLE result: status: 0\n nfev: 315\n success: True\n fun: -1.1972364793651142\n x: array([ 1.42686492, 0.4006258 , 0.0179909 , -0.15973117])\n message: 'Optimization terminated successfully.'\n nit: 187\n 0.505548331825 0.28907161488\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_6.txt\n Berglund MLE 0.405658929477 0.0410255848183 -0.0527407580382\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.88813023 0.40565893 0.04102558 -0.05274076]\n Motion Blur MLE result: status: 0\n nfev: 278\n success: True\n fun: -1.1785934866396908\n x: array([ 2.206071 , 0.43743055, 0.01557288, -0.2198948 ])\n message: 'Optimization terminated successfully.'\n nit: 161\n 0.49752884216 0.290010436927\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_7.txt\n Berglund MLE 0.440296244194 0.037794781385 0.0555249133931\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion 
Blur filter [ 0.51073785 0.44029624 0.03779478 0.05552491]\n Motion Blur MLE result: status: 0\n nfev: 296\n success: True\n fun: -1.1646850756274711\n x: array([ 1.00900231, 0.45080595, 0.01384808, 0.0596462 ])\n message: 'Optimization terminated successfully.'\n nit: 171\n 0.500138244772 0.285738276794\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_8.txt\n Berglund MLE 0.388193789739 0.0433344124496 -0.0232538599371\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 1.23208956 0.38819379 0.04333441 -0.02325386]\n Motion Blur MLE result: status: 0\n nfev: 318\n success: True\n fun: -1.1901666108672013\n x: array([ 2.23788936, 0.40683237, 0.01763311, -0.06170199])\n message: 'Optimization terminated successfully.'\n nit: 185\n 0.500831978254 0.28767904269\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_9.txt\n Berglund MLE 0.428119184149 0.0368338621282 0.0631101005882\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.42504155 0.42811918 0.03683386 0.0631101 ]\n Motion Blur MLE result: status: 0\n nfev: 313\n success: True\n fun: -1.1921480409010281\n x: array([ 1.0874416 , 0.43535992, 0.0134389 , 0.19952 ])\n message: 'Optimization terminated successfully.'\n nit: 177\n 0.499882963813 0.286378046611\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_10.txt\n Berglund MLE 0.403241526019 0.0448782500646 0.0326597583831\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.36721914 0.40324153 0.04487825 0.03265976]\n Motion Blur MLE result: status: 0\n nfev: 295\n success: True\n fun: -1.1280944693617625\n x: array([ 0.88575554, 0.40813484, 0.02201115, 0.19055693])\n message: 'Optimization terminated successfully.'\n nit: 173\n 0.499027710082 0.290215367984\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_11.txt\n Berglund MLE 0.446022076218 0.0386925793552 0.0317446002903\n 
----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.75851698 0.44602208 0.03869258 0.0317446 ]\n Motion Blur MLE result: status: 0\n nfev: 364\n success: True\n fun: -1.1477995850212523\n x: array([ 1.34146215, 0.45217896, 0.01547886, 0.11062506])\n message: 'Optimization terminated successfully.'\n nit: 217\n 0.498350123323 0.28489583658\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_12.txt\n Berglund MLE 0.433878819812 0.0405257997338 0.0194618189645\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.57081066 0.43387882 0.0405258 0.01946182]\n Motion Blur MLE result: status: 0\n nfev: 320\n success: True\n fun: -1.1258718069898108\n x: array([ 1.0535438 , 0.44151737, 0.01909119, 0.0921696 ])\n message: 'Optimization terminated successfully.'\n nit: 189\n 0.4993722138 0.286382226048\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_13.txt\n Berglund MLE 0.442289936266 0.0413708496844 0.0900189405225\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.53984449 0.44228994 0.04137085 0.09001894]\n Motion Blur MLE result: status: 0\n nfev: 281\n success: True\n fun: -1.1194332404998935\n x: array([ 1.44682073, 0.44948337, 0.01867071, 0.17022423])\n message: 'Optimization terminated successfully.'\n nit: 167\n 0.498968531464 0.286556318588\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_14.txt\n Berglund MLE 0.434387236759 0.0392930335766 0.0256938157463\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.57814585 0.43438724 0.03929303 0.02569382]\n Motion Blur MLE result: status: 0\n nfev: 287\n success: True\n fun: -1.1622391321135688\n x: array([ 0.99021595, 0.44116143, 0.01484123, -0.01121947])\n message: 'Optimization terminated successfully.'\n nit: 167\n 0.503103615691 0.292303358037\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_15.txt\n Berglund MLE 
0.411721173737 0.0381990349157 0.0153091535058\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.60076698 0.41172117 0.03819903 0.01530915]\n Motion Blur MLE result: status: 0\n nfev: 310\n success: True\n fun: -1.1694922158999403\n x: array([ 1.11385547, 0.42298358, 0.01809222, 0.09968667])\n message: 'Optimization terminated successfully.'\n nit: 191\n 0.493928160614 0.283653076911\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_16.txt\n Berglund MLE 0.449853292681 0.0346580553668 -0.014228133955\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.88913235 0.44985329 0.03465806 -0.01422813]\n Motion Blur MLE result: status: 0\n nfev: 189\n success: True\n fun: -1.1617781756413692\n x: array([ 1.49678266, 0.46607595, 0.01368101, -0.01265549])\n message: 'Optimization terminated successfully.'\n nit: 104\n 0.502248368185 0.285265534615\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_17.txt\n Berglund MLE 0.405791875716 0.0429526801662 0.0281841646573\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.79922712 0.40579188 0.04295268 0.02818416]\n Motion Blur MLE result: status: 0\n nfev: 311\n success: True\n fun: -1.1566192303985554\n x: array([ 1.80755174, 0.41859545, 0.01897457, -0.21484235])\n message: 'Optimization terminated successfully.'\n nit: 182\n 0.498866022839 0.288461118604\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_18.txt\n Berglund MLE 0.480290453713 0.033150036359 -0.0363831128213\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.44079153 0.48029045 0.03315004 -0.03638311]\n Motion Blur MLE result: status: 0\n nfev: 286\n success: True\n fun: -1.1667980253486265\n x: array([ 0.90486109, 0.4872516 , 0.00819091, 0.11628986])\n message: 'Optimization terminated successfully.'\n nit: 170\n 0.498456742145 0.288303457635\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n 
Reading in file: ./ExampleData/MyTraj_19.txt\n Berglund MLE 0.401125045744 0.0394428992004 -0.00560612519608\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 0.75923837 0.40112505 0.0394429 -0.00560613]\n Motion Blur MLE result: status: 0\n nfev: 358\n success: True\n fun: -1.182656245292518\n x: array([ 1.41280844, 0.41718142, 0.01732866, -0.11048628])\n message: 'Optimization terminated successfully.'\n nit: 218\n 0.499503636338 0.288565336397\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n Reading in file: ./ExampleData/MyTraj_20.txt\n Berglund MLE 0.4678828153 0.0341188814665 0.000390470606951\n ----------------------------------------------------------------------------------------------------\n parsIG for Motion Blur filter [ 4.71922369e-01 4.67882815e-01 3.41188815e-02 3.90470607e-04]\n Motion Blur MLE result: status: 0\n nfev: 419\n success: True\n fun: -1.1464820600970622\n x: array([ 1.01456235, 0.46829891, 0.01419429, 0.20915194])\n message: 'Optimization terminated successfully.'\n nit: 245\n 0.502468629673 0.293260000307\n crude assessment of model: check above mean is near 0.5 and std is approximately 0.288675134595\n statements above based on generalized residual U[0,1] shape\n other hypothesis tests outlined which can use PIT sequence above outlined/referenced in paper.\n\n\n\n```python\n#Summarize the results of the above N simulations \n#\n\nresSUM=np.array(resBatch)\nprint 'Blur medians',np.median(resSUM[:,0]),np.median(resSUM[:,1]),np.median(resSUM[:,2]),np.median(resSUM[:,3])\nprint 'means',np.mean(resSUM[:,0]),np.mean(resSUM[:,1]),np.mean(resSUM[:,2]),np.mean(resSUM[:,3])\nprint 'std',np.std(resSUM[:,0]),np.std(resSUM[:,1]),np.std(resSUM[:,2]),np.std(resSUM[:,3])\n\nprint '^'*100 ,'\\n\\n'\n```\n\n Blur medians 1.27208496571 0.436395237852 0.0162757902936 0.0759079000468\n means 1.32234434672 0.434561850166 0.0160360992158 0.0655553122287\n std 0.380912845778 0.0234327797792 0.00299196795637 0.182354637945\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \n \n \n\n", "meta": {"hexsha": "dc08bbfa73e58107dc2047442569df1d5cf9c008", "size": 31740, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "src/Example2_ReadInFileIllustration.ipynb", "max_stars_repo_name": "calderoc/MotionBlurFilter", "max_stars_repo_head_hexsha": "86786c2a7956421b93690ac9beeb9f3366fbdf7e", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "src/Example2_ReadInFileIllustration.ipynb", "max_issues_repo_name": "calderoc/MotionBlurFilter", "max_issues_repo_head_hexsha": "86786c2a7956421b93690ac9beeb9f3366fbdf7e", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": 1, "max_issues_repo_issues_event_min_datetime": "2017-09-28T10:08:00.000Z", "max_issues_repo_issues_event_max_datetime": "2017-09-28T10:08:00.000Z", "max_forks_repo_path": "src/Example2_ReadInFileIllustration.ipynb", "max_forks_repo_name": "calderoc/MotionBlurFilter", "max_forks_repo_head_hexsha": 
"86786c2a7956421b93690ac9beeb9f3366fbdf7e", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 55.7820738137, "max_line_length": 761, "alphanum_fraction": 0.5403276623, "converted": true, "num_tokens": 7687, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49609382947091946, "lm_q2_score": 0.18242551713899047, "lm_q1q2_score": 0.09050017339069463}} {"text": "```python\n\n```\n\nLike in the previous notebook, we'll use example data for this notebook. The notebook assumes it's stored in a folder next to the notebook called `data`. You can download these data from Canvas (in which case you'll have to unzip it in the `data` folder), or the course GitHub page. \n\nWhen you upload these data to CoLab, make sure that you're pointing the data in the right direction. If you've saved the data to your `Colab Notebooks/data` folder, that is the following location: `/content/drive/My Drive/Colab Notebooks/data/`. The code below tries to set it up so that you don't need to worry about it.\n\n\n```python\n# this will ask you to authenticate with Google\nfrom google.colab import drive\ndrive.mount('/content/drive')\n\nimport os\nos.chdir('/content/drive/My Drive/Colab Notebooks/')\n\n## we'll also have to install the library to access nifti files:\n!pip install nibabel\n```\n\n# Neuroimaging week 4: Preprocessing\nThis week's lab is about preprocessing of fMRI data. Preprocessing of (f)MRI data is quite a complex topic, involving techniques from signal-processing (high-pass filtering), linear algebra/statistics (prewhitening/autocorrelation correction), and optimization (image registration). Additionally, there is ongoing discussion about which preprocessing steps are necessary/sufficient and what preprocessing parameters are optimal. As always, this depends on your specific research question, the type and quality of your data, and the type of analysis.\n\nIn this week, we'll discuss a couple (not all!) of preprocessing steps that are common in univariate fMRI analyses. We won't discuss distortion-correction (\"unwarping\") and registration procedures; please use the videos, the book, and the lecture slides to understand these concepts.\n\n### Contents\n1. Introduction: the t-value (yet again)\n2. Temporal filtering\n3. Prewhitening/autocorrelation correction\n4. Spatial filtering\n5. Outlier censoring\n6. Motion correction and motion filtering\n\n### What you'll learn\nSpecifically, after this lab, you'll ...\n\n- be able to explain the influence of preprocessing on the measured effects using the t-value formula\n- understand (the advantage of) temporal filtering from both the time-domain and frequency-domain\n- understand the necessity of prewhitening given the assumptions of the GLM\n- understand the advantage of spatial filtering (smoothing)\n- under how to handle outliers\n- explain the necessity of motion correction and the advantage of motion regression\n- know how to implement the concepts above in Python\n\n**Estimated time needed to complete**: 8 hours
\n**No Deadline**\n\n## 1. Introduction\nAs we said before, preprocessing is a topic that almost warrants its own course. Nonetheless, we'll try to show you (and let you practice with) some of the most common and important preprocessing operations. Additionally, we'll introduce the concept of the fast fourier transform, which allows us to analyze our signal in the frequency domain, which helps to understand several preprocessing steps, such as temporal filtering.\n\n### 1.1. The (conceptual) t-value formula -- yet again\nThe previous two weeks, you have learned that, essentially, we want to find large effects (calculated as t-values) of our contrasts by optimizing various parts of the t-value formula. Conceptually, the t-value formula can be written as:\n\n\\begin{align}\nt\\mathrm{-value} = \\frac{\\mathrm{effect}}{\\mathrm{uncertainty}} = \\frac{\\mathrm{effect}}{\\sqrt{\\mathrm{noise} \\cdot \\mathrm{design\\ variance}}} = \\frac{\\mathbf{c}\\hat{\\beta}}{\\sqrt{\\hat{\\sigma}^{2}\\mathbf{c}(X'X')^{-1}\\mathbf{c}'}}\n\\end{align}\n\nLast week you've learned that by ensuring low design variance (through *high* predictor variance and *low* predictor correlations) leads to larger normalized effects (higher t-values). This week, we will discuss the other term of the denominator of this formula: the noise (also called the residual variance or unexplained variance), which is defined in the t-value formula as follows:\n\n\\begin{align}\n\\mathrm{noise} = \\frac{SSE}{\\mathrm{DF}} = \\frac{\\sum_{i=1}^{N}(y_{i} - \\hat{y_{i}})^{2}}{N - P}\n\\end{align}\n\nThrough preprocessing, we aim to reduce the difference between our prediction ($\\hat{y}$) and our true signal ($y$), thus reducing the noise-term of the formula and thereby optimizing our normalized effects.\n\n### 1.2. The two approaches of preprocessing\n\nBasically, there are *two* ways to preprocess your data:\n1. Manipulating the signal ($y$) **directly** *before* fitting your GLM-model;\n2. Including \"noise predictors\" in your design ($X$) when fitting your model;\n\nOften, preprocessing steps can be done both by method 1 (manipulating the signal directly) and by method 2 (including noise predictors). For example, one of the videos showed that you could apply a high-pass filter by applying a \"gaussian weighted running line smoother\" (the method FSL employs) *directly* on the signal (method 1) **or** you could add \"low-frequency (drift) predictors\" to the design matrix (method 2; in the video they used a 'discrete cosine basis set'; the SPM method). In practice, both methods often yield very similar resuls. The most important thing to understand is that both methods are trying to accomplish the same goal: reduce the noise term of the model.\n\nFirst, we will discuss how temporal and spatial filtering can *directly* filter the signal (method 1) to reduce error. Later in the tutorial, we will discuss including adding outlier-predictors and motion-predictors to the design to reduce noise (method 2). \n\n## 2. Temporal filtering\nIn this section, we will discuss how temporal filtering of the voxel signals may greatly reduce the error term. Along the way, we will also explain how we can look at the representation of the signal in the frequency domain (using the fourier transform) to give us an idea about the nature of the noise components in our data.\n\n### 2.1. A short primer on the frequency domain and the fourier transform\nThus far, we've always looked at our fMRI-signal as activity that varies across **time**. 
In other words, we're always looking at the signal in the *time domain*. However, there is also a way to look at a signal in the *frequency domain* (also called 'spectral domain') through transforming the signal using the *Fourier transform*. \n\nBasically, the fourier transform calculates to which degree sine waves of different frequencies are present in your signal. If a sine wave of a certain frequency (let's say 2 hertz) is (relatively) strongly present in your signal, it will have a (relatively) high *power* in the frequency domain. Thus, looking at the frequency domain of a signal can tell you something about the frequencies of the (different) sources underlying your signal.\n\nThis may sound quite abstract, so let's look at some examples.\n\n\n```python\n# start with importing the python packages we'll need \nimport numpy as np\nfrom numpy.linalg import lstsq\nimport nibabel as nib\nimport matplotlib.pyplot as plt\nimport matplotlib.pyplot as plt\n%matplotlib inline \n\ndef double_gamma(x, lag=6, a2=12, b1=0.9, b2=0.9, c=0.35, scale=True):\n\n a1 = lag\n d1 = a1 * b1 \n d2 = a2 * b2 \n hrf = np.array([(t/(d1))**a1 * np.exp(-(t-d1)/b1) - c*(t/(d2))**a2 * np.exp(-(t-d2)/b2) for t in x])\n \n if scale:\n hrf = (1 - hrf.min()) * (hrf - hrf.min()) / (hrf.max() - hrf.min()) + hrf.min()\n return hrf\n\ndef create_sine_wave(timepoints, frequency=1,amplitude=1, phase=0):\n return amplitude * np.sin(2*np.pi*frequency*timepoints + phase)\n```\n\nSine waves are oscillating signals that have (for our purposes) two important characteristics: their *frequency* and their *amplitude*. Frequency reflects how fast a signal is oscillating (how many cycles it completes in a given time period) and the amplitude is the (absolute) height of the peaks and troughs of the signal. To illustrate this, we generate a couple of sine-waves (with a sampling rate of 500 Hz, i.e., 500 samples per second) with different amplitudes and frequencies, which we plot below:\n\n\n```python\nmax_time = 5\nsampling_rate = 500\ntimepoints = np.arange(0, max_time, 1.0 / sampling_rate)\n\namplitudes = np.arange(1, 4)\nfrequencies = np.arange(1, 4)\nsines = []\n\nfig, axes = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(13, 8))\nfor i, amp in enumerate(amplitudes):\n \n for ii, freq in enumerate(frequencies):\n this_ax = axes[i, ii]\n \n if ii == 0:\n this_ax.set_ylabel('Activity (A.U.)')\n \n if i == 2:\n this_ax.set_xlabel('Time (seconds)')\n \n sine = create_sine_wave(timepoints, frequency=freq, amplitude=amp) \n sines.append((sine, amp, freq))\n this_ax.plot(timepoints, sine)\n this_ax.set_xlim(0, 5)\n this_ax.set_title('Sine with amp = %i and freq = %i' % (amp, freq))\n this_ax.set_ylim(-3.5, 3.5)\n\nfig.tight_layout()\n```\n\nAs you can see, the signals vary in their amplitude (from 1 to 3) and their frequency (from 1 - 3). Make sure you understand these characteristics! Now, we are going to use the fast fourier transform to plot the same signals in the *frequency domain*. We're not going to use a function to compute the FFT-transformation, but we're going to use a function that computes the \"power spectrum density\" directly (which makes life a little bit easier): the `periodogram` function from `scipy.signal`:\n\n\n```python\nfrom scipy.signal import periodogram\n```\n\nNow, the `periodogram` function takes two arguments, the signal and the sampling frequency (the sampling rate in Hz with which you recorded the signal), and returns both the reconstructed frequencies and their associated power values. 
An example:\n\n```python\nfreqs, power = periodogram(some_signal, 1000) # sampling_rate = 1000 Hz\n```\n\nWe'll use the `periodogram` function to plot the 9 sine-waves (from the previous plot) again, but this time in the frequency domain:\n\n\n```python\nfig, axes = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(13, 8))\n\nfor i, ax in enumerate(axes.flatten()):\n sine, amp, freq = sines[i]\n title = 'Sine with amp = %i and freq = %i' % (amp, freq)\n freq, power = periodogram(sine, sampling_rate)\n ax.plot(freq, power)\n ax.set_xlim(0, 4)\n ax.set_xticks(np.arange(5))\n ax.set_ylim(0, 25)\n \n if i > 5:\n ax.set_xlabel('Frequency (Hz)')\n \n if i % 3 == 0:\n ax.set_ylabel('Power')\n ax.set_title(title)\n \nfig.tight_layout()\n```\n\nAs you can see, the frequency domain correctly 'identifies' the amplitudes and frequencies from the signals. But the real 'power' from fourier transforms is that they can reconstruct a signal in *multiple underlying oscillatory sources*. Let's see how that works. We're going to load in a time-series recorded for 5 seconds of which we don't know the underlying oscillatory sources. First, we'll plot the signal in the time-domain:\n\n\n```python\nmystery_signal = np.load('data/mystery_signal.npy')\nplt.figure(figsize=(15, 5))\nplt.plot(np.arange(0, 5, 0.001), mystery_signal)\nplt.title('Time domain', fontsize=25)\nplt.xlim(0, 5)\nplt.xlabel('Time (sec.)', fontsize=15)\nplt.ylabel('Activity (A.U.)', fontsize=15)\nplt.show()\n```\n\n
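To make the idea of "multiple underlying oscillatory sources" a bit more concrete, below is a small optional sketch (our own addition, reusing the `create_sine_wave` helper, `timepoints`, and `sampling_rate` defined earlier): a signal built from two known sine waves shows up as two separate peaks in its periodogram.

```python
# Optional sketch: sum two sine waves with known frequencies/amplitudes and
# check that the periodogram recovers a peak at each frequency.
composite = (create_sine_wave(timepoints, frequency=2, amplitude=1) +
             create_sine_wave(timepoints, frequency=5, amplitude=3))

freq_c, power_c = periodogram(composite, sampling_rate)

plt.figure(figsize=(15, 4))
plt.plot(freq_c, power_c)
plt.xlim(0, 8)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Power')
plt.title('Periodogram of a 2 Hz sine plus a 5 Hz sine')
plt.show()
```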
\n**ToDo**\n
\n\nIt's hard to see which frequencies (and corresponding amplitudes) are present in this 'mystery signal'. Get the frequencies and power of the signal using the `periodogram` function (you have to deduce the sampling rate of the signal yourself! It is *not* the variable `sampling_rate` from before). Set the x-limit of the x-axis to (0, 8) (`plt.xlim(0, 8)`). Also, give the plot appropriate labels for the axes.\n\n\n```python\nfreqs, pows = periodogram(mystery_signal, fs=mystery_signal.size / 5)\nplt.figure(figsize=(15, 5))\nplt.plot(freqs, pows)\nplt.xlim(0, 8)\nplt.ylabel('Power')\nplt.xlabel('Frequency (Hz)')\n```\n\nNow you know that you can use visualization of the signal in the frequency domain to help you understand from which underlying frequencies your signal is built up. Unfortunately, real fMRI data is not so 'clean' as the simulated sine waves we have used here, but the frequency representation of the fMRI signal can still tell us a lot about the nature and contributions of different (noise- and signal-related) sources!\n\n### 2.2. Frequency characteristics of fMRI data\nNow, we will load a (much noisier) voxel signal and the corresponding design-matrix (which has just one predictor apart from the intercept). The signal was measured with a TR of 2 seconds and contains 200 volumes (timepoints). The predictor reflects an experiment in which we showed 20 stimuli in intervals of 20 seconds (i.e., one stimulus every 20 seconds).\n\nWe'll plot both the signal ($y$) and the design-matrix ($X$; without intercept):\n\n\n```python\nwith np.load('data/drift_data.npz') as data:\n X, sig = data['X'], data['sig']\n\nplt.figure(figsize=(15, 8))\nplt.subplot(2, 1, 1)\nplt.plot(sig)\nplt.xlim(0, 200)\nplt.title('Signal in time domain', fontsize=20)\nplt.ylabel('Activity (a.u.)', fontsize=15)\n\nplt.subplot(2, 1, 2)\nplt.plot(np.arange(sig.size), X[:, 1], c='tab:orange')\nplt.title('Predictor in time domain', fontsize=20)\nplt.xlabel('Time (TR)', fontsize=15)\nplt.xlim(0, 200)\nplt.ylabel('Activity (a.u.)', fontsize=15)\nplt.ylim(-0.5, 1.5)\nplt.tight_layout()\n\n```\n\n
\n**ToDo (2 points)**\n
\n\nRun linear regression using the variable `X` (which already contains an intercept) to explain the variable `sig`. Calculate the model's MSE, and store this in a variable named `mse_no_filtering`. Then, in the next code-cell, plot the signal and the predicted signal ($\\hat{y}$) in a single figure. Give the axes sensible labels and add a legend.\n\n**Tip** (feel free to ignore): This tutorial, you'll be asked to compute t-values, R-squared, and MSE of several models quite some times. To make your life easier, you could (but certainly don't have to!) write a function that runs, for example, linear regression and returns the R-squared, given a design (X) and signal (y). For example, this function could look like:\n\n```python\ndef compute_mse(X, y):\n # you implement the code here (run lstsq, calculate yhat, etc.)\n ...\n # and finally, after you've computed the model's MSE, return it\n return r_squared\n```\n\nIf you're ambitious, you can even write a single function that calculates t-values, MSE, and R-squared. This could look something like this:\n\n```python\ndef compute_all_statistics(X, y, cvec):\n # Implement everything you want to know and return it\n ...\n return t_value, MSE, r_squared # and whatever else you've computed!\n```\n\nDoing this will save you a lot of time and may prevent you from making unneccesary mistakes (like overwriting variables, typos, etc.). Lazy programmers are the best programmers!\n\n(Note: writing these functions is *optional*!)\n\n\n```python\nb = np.linalg.lstsq(X, sig, rcond=None)[0]\nyhat = X.dot(b)\nmse_no_filtering = .....\n\n\nassert(np.round(mse_no_filtering, 3) == 2.187)\n\n```\n\n\n```python\n# Implement your y/yhat plot here\n\nplt.figure(figsize=(15, 5))\nplt.plot(sig)\nplt.plot(yhat)\nplt.xlim(0, 200)\nplt.title(\"Signal and predicted signal\", fontsize=20)\nplt.xlabel(\"Time (TR)\", fontsize=15)\nplt.ylabel(\"Activation (A.U.)\", fontsize=15)\nplt.legend(['Signal', 'Predicted signal'], fontsize=15)\n```\n\n
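If you want to follow the tip above, a minimal sketch of the suggested helper could look like the cell below. Note the assumption: "MSE" is taken to be the noise term from section 1.1, i.e. the sum of squared errors divided by the degrees of freedom (N - P); if a different definition is intended, adjust the denominator accordingly.

```python
# Hypothetical helper (sketch): fit the GLM with OLS and return the noise term
# as defined in section 1.1 (SSE / (N - P)).
def compute_mse(X, y):
    betas = np.linalg.lstsq(X, y, rcond=None)[0]
    y_hat = X.dot(betas)
    sse = np.sum((y - y_hat) ** 2)
    dof = y.size - X.shape[1]
    return sse / dof

# For example: compute_mse(X, sig)
```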
\n**ToThink**\n
\n\nIn your plot above, you should see that the fit of your model is \"off\" due to some low frequency \"drift\". Name two potential causes of drift. \n\n1. Slowly decreasing homogeneity of the magnetic field (e.g. due to subject movement)\n2. Increasing thermal noise (i.e., the scanner warms up)\n\nNote: just \"subject movement\" or respiration/cardiac signal is *NOT* correct.\n\n### 2.3. High-pass filtering of fMRI data\nFrom the previous ToDo, you probably noticed that the fit of the predictor to the model was not very good. The cause for this is the slow 'drift' - a low-frequency signal - that prevents the model from a good fit. Using a high-pass filter - meaning that you *remove* the low-frequency signals and thus *pass only the high frequencies* - can, for this reason, improve the model fit. But before we go on with actually high-pass filtering the signal, let's take a look at the frequency domain representation of our voxel signal: \n\n\n```python\nplt.figure(figsize=(17, 5))\nTR = 2\nsampling_frequency = 1 / TR # our sampling rate is 0.5, because our TR is 2 sec!\nfreq, power = periodogram(sig, fs=0.5)\nplt.plot(freq, power)\nplt.xlim(0, freq.max())\nplt.xlabel('Frequency (Hz)', fontsize=15)\nplt.ylabel('Power (dB)', fontsize=15)\nplt.axvline(x=0.01,color='r',ls='dashed', lw=2)\n\n```\n\nIn the frequency-domain plot above, you can clearly see a low-frequency drift component at frequencies approximately below 0.01 Hz (i.e., left of the dashed red line).\n\n
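If you prefer a number over eyeballing the plot, the small optional check below (our own addition) quantifies how much of the total power sits below the 0.01 Hz cutoff indicated by the dashed line.

```python
# Fraction of the total spectral power located below the 0.01 Hz cutoff
below_cutoff = freq < 0.01
print("Fraction of power below 0.01 Hz: %.2f" % (power[below_cutoff].sum() / power.sum()))
```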
\n**ToThink**\n
\n\nApart from the low frequency drift component around 0.01 Hz, there is also a component visible at 0.05 Hz. What does this component represent? Please explain (concisely).\n\n**This reflects our expected response to the stimuli! We presented a stimulus every 20 seconds (10 TRs), which represents a frequency of 0.05!**\n\nNow, let's get rid of that pesky low frequency drift that messes up our model! There is no guideline on how to choose the cutoff of your high-pass filter, but most recommend to use a cutoff of 100 seconds (i.e., of 0.01 Hz). This means that any oscillation slower than 100 seconds (one cycle in 100 seconds) is removed from your signal.\n\nAnyway, as you've seen in the videos, there are many different ways to high-pass your signal (e.g., frequency-based filtering methods vs. time-based filtering methods). Here, we demonstrate a time-based 'gaussian running line smoother', which is used in FSL. As you've seen in the videos, this high-pass filter is estimated by convolving a gaussian \"kernel\" with the signal (taking the element-wise product and summing the values) across time, which is schematically visualized in the image below:\n\n\n\nOne implementation of this filter is included in the scipy \"ndimage\" subpackage. Let's import it\\*:\n\n---\n\\***Note**: if you're going to restart your kernel during the lab for whatever reason, make sure to re-import this package to avoid `NameErrors` (i.e., the error that you get when you call a function that isn't imported).\n\n\n```python\nfrom scipy.ndimage import gaussian_filter\n```\n\nThe `gaussian_filter` function takes two mandatory input: some kind of (n-dimensional) signal and a cutoff, \"sigma\", that refers to the width of the gaussian filter in standard deviations. \"What? We decided to define our cutoff in seconds (or, equivalently, Hz), right?\", you might think. For some reason neuroimaging packages seem to define cutoff for their temporal filters in **seconds** while more 'low-level' filter implementations (such as in scipy) define cutoffs (of gaussian filters) in **the width of the gaussial filter**, i.e., **sigma**. Fortunately, there is an easy way to (approximately) convert a cutoff in seconds to a cutoff in sigma, given a particular TR (in seconds):\n\n\\begin{align}\n\\sigma \\approx \\frac{\\mathrm{cutoff}_{sec}}{2 \\cdot \\mathrm{TR}_{sec}}\n\\end{align}\n\n
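Because this conversion will come back a few times, it can be convenient to wrap it in a tiny helper; a minimal sketch (the function name is our own) is shown below.

```python
# Approximate conversion from a high-pass cutoff in seconds to the sigma
# (in volumes) expected by gaussian_filter, given the TR in seconds.
def cutoff_sec_to_sigma(cutoff_sec, tr_sec):
    return cutoff_sec / (2 * tr_sec)

# For example, a 100-second cutoff with a TR of 2 seconds gives sigma = 25:
print(cutoff_sec_to_sigma(100, 2))
```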
\n**ToDo**\n
\n\nSuppose I acquire some fMRI data (200 volumes) with a sampling frequency of 0.25 Hz and I would like to apply a high-pass filter of 80 seconds. What sigma should I choose? Calculate sigma and store it in a variable named `sigma_todo`.\n\n\n```python\nsigma_todo = ......\n\nnp.testing.assert_equal(sigma_todo, 10)\n\n```\n\nImportantly, the gaussian filter does not return the filtered signal itself, but the estimated low-frequency component of the data. As such, to filter the signal, we have to subtract this low-frequency component from the original signal to get the filtered signal! \n\nBelow, we estimate the low-frequency component using the high-pass filter first and plot it together with the original signal, which shows that it accurately captures the low-frequency drift (upper plot). Then, we subtract the low-frequency component from the original signal to create the filtered signal, and plot it together with the original signal to highlight the effect of filtering (lower plot):\n\n\n```python\nTR = 2.0\nsigma_hp = 100 / (2 * TR) \nfilt = gaussian_filter(sig, sigma_hp)\n\nplt.figure(figsize=(17, 10))\n\nplt.subplot(2, 1, 1)\nplt.plot(sig, lw=2)\nplt.plot(filt, lw=4)\nplt.xlim(0, 200)\nplt.legend(['Original signal', 'Low-freq component'], fontsize=20)\nplt.title(\"Estimated low-frequency component using HP-filter\", fontsize=25)\nplt.ylabel(\"Activation (A.U.)\", fontsize=20)\n\nfilt_sig = sig - filt\n\nplt.subplot(2, 1, 2)\nplt.plot(sig, lw=2)\nplt.plot(filt_sig, lw=2, c='tab:green')\nplt.xlim(0, 200)\nplt.legend(['Original signal', 'Filtered signal'], fontsize=20)\nplt.title(\"Effect of high-pass filtering\", fontsize=25)\nplt.xlabel(\"Time (TR)\", fontsize=20)\nplt.ylabel(\"Activation (A.U.)\", fontsize=20)\n\nplt.tight_layout()\nplt.show()\n```\n\nThe signal looks much better, i.e., it doesn't display much drift anymore. But let's check this by plotting the original and filtered signal in the frequency domain:\n\n\n```python\nplt.figure(figsize=(17, 5))\nfreq, power = periodogram(sig, fs=0.5)\nplt.plot(freq, power, lw=2)\n\nfreq, power = periodogram(filt_sig, fs=0.5)\nplt.plot(freq, power, lw=2)\nplt.xlim(0, freq.max())\nplt.ylabel('Power (dB)', fontsize=15)\nplt.xlabel('Frequency (Hz)', fontsize=15)\nplt.title(\"The effect of high-pass filtering in the frequency domain\", fontsize=20)\nplt.legend([\"Original signal\", \"Filtered signal\"], fontsize=15)\nplt.show()\n```\n\nSweet! It seems that the high-pass filtering worked as expected! But does it really improve our model fit?\n\n
\n**ToDo**\n
\n\n\nWe've claimed several times that high-pass filtering improves model fit, but is that really the case in our case? To find out, fit the same design (variable `X`) on the filtered signal (variable `filt_sig`) using linear regression. Calculate MSE and store it in the variable `mse_with_filter`.\n\n\n```python\n# your code here, ending in \n\nmse_with_filter = ....\n\nassert(np.round(mse_with_filter, 3) == 0.971)\n```\n\n
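For reference, one possible way to complete the cell above is sketched below; it assumes the same MSE definition as before (SSE divided by N - P).

```python
# Possible completion (sketch): refit the design on the filtered signal and
# compute the noise term as SSE / (N - P).
b_filt = np.linalg.lstsq(X, filt_sig, rcond=None)[0]
yhat_filt = X.dot(b_filt)
mse_with_filter = np.sum((filt_sig - yhat_filt) ** 2) / (filt_sig.size - X.shape[1])
print(mse_with_filter)
```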
\n**ToDo**\n
\n\nSo far, we've filtered only a single (simulated) voxel timeseries. Normally, you want to temporally filter *all* your voxels in your 4D fMRI data, of course. Below, we load in such a 4D fMRI file (`data_4d`), which has $50$ timepoints and ($40 \\cdot 40 \\cdot 19 = $) $30400$ voxels.\n\nFor this ToDo, you need to apply the high-pass filter (i.e., the `gaussian_filter` function; use sigma=25) on each and every voxel separately, which means that you need to loop through all voxels (which amounts to three nested for-loops across all three spatial dimensions). Below, we've already loaded in the data and have written the three nested for-loops. Now it's up to you to filter the signal in the inner-most loop and store it in the pre-allocated `data_4d_filt` variable (the loop may take a couple of seconds!).\n\nThere is a test-cell that you can use to test your implementation.\n\n\n```python\ndata_4d = nib.load('data/unfiltered_data_ds.nii.gz').get_data()\nprint(\"Shape of the original 4D fMRI scan: %s\" % (data_4d.shape,))\n\n# Here, we pre-allocate a matrix of the same shape as data_4d, in which\n# you need to store the filtered timeseries\ndata_4d_filt = np.zeros(data_4d.shape)\n\n# Start loop across X-dimension\nfor i in range(data_4d.shape[0]):\n\n # Start loop across Y-dimension\n for ii in range(data_4d.shape[1]):\n \n # Start loop across Z-dimension\n for iii in range(data_4d.shape[2]):\n # Filter the timeseries for voxel at location X=i, Y=ii, Z=iii and store it\n # using an appropriate index in the pre-allocated variable data_4d_filt!\n \n # YOUR CODE BEGINS HERE\n filtered_signal = gaussian_filter(data_4d[i, ii, iii, :], sigma=25)\n data_4d_filt[i, ii, iii, :] = data_4d[i, ii, iii, :] - filtered_signal\n # YOUR CODE ENDS HERE\n```\n\n\n```python\nnp.testing.assert_array_almost_equal(data_4d_filt, np.load('data/answer_filt_4d.npy'))\n```\n\n### 3. Autocorrelation and prewhitening \nAs you (should) have seen in the previous ToDo, the model fit increases tremendously after high-pass filtering! This surely is the most important reason why you should apply a high-pass filter. But there is another important reason: high-pass filters reduce the signal's autocorrelation! \n\n\"Sure, but why should we care about autocorrelation?\", you might think? Well this has to with the estimation of the standard error of our model, i.e., $\\hat{\\sigma}^{2}\\mathbf{c}(X'X)^{-1}\\mathbf{c}'$. As you've seen in the videos, the Gauss-Markov theorem states that in order for OLS to yield valid estimates (including estimates of the parameters' standard errors) *the errors (residuals) have a mean of 0, have 0 covariance (i.e., are uncorrelated), and have equal variance*. \n\nLet's go through these three assumptions step by step. We'll use the previously filtered signal for this.\n\n#### 3.1. Assumption of zero-mean of the residuals\nFirst, let's check whether the mean of the residuals is zero:\n\n\n```python\nb = np.linalg.lstsq(X, filt_sig, rcond=None)[0]\ny_hat = X.dot(b)\nresids = filt_sig - y_hat\nmean_resids = resids.mean()\nprint(\"Mean of residuals: %3.f\" % mean_resids)\n```\n\n
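As a small optional illustration (our own addition), the sketch below refits the same model without the intercept column; nothing then forces the mean of the residuals towards zero, which hints at the answer to the question below.

```python
# Optional: drop the intercept column and refit; the residual mean is then no
# longer forced to (approximately) zero by the model.
X_no_icept = X[:, 1:]
b_ni = np.linalg.lstsq(X_no_icept, filt_sig, rcond=None)[0]
resids_ni = filt_sig - X_no_icept.dot(b_ni)
print("Mean of residuals without intercept: %.3f" % resids_ni.mean())
```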
\n**ToThink**\n
\n\nWhat component of the design-matrix ($X$) ensures that the mean of the residuals is zero? Explain (concisely) why.\n\n*The intercept! This predictor makes sure that any constant variance ('offset') is modeled and thus cannot occur in the residuals.*\n\n\n#### 3.2. Equal variance of the residuals\nAlright, sweet - the first assumption seems valid for our data. Now, the next two assumptions -- about equal variance of the residuals and no covariance between residuals -- are trickier to understand and deal with. In the book (and videos), these assumptions are summarized in a single mathemtical statement: the covariance-matrix of the residuals should be equal to the identity-matrix ($\\mathbf{I}$) scaled by the noise-term ($\\hat{\\sigma}^{2}$). Or, put in a formula:\n\n\\begin{align}\n\\mathrm{cov}[\\epsilon] = \\hat{\\sigma}^{2}\\mathbf{I}\n\\end{align}\n\nThis sounds difficult, so let's break it down. First off all, the covariance matrix of the residuals is always a symmetric matrix of shape $N \\times N$, in which the *diagonal represents the variances* and the *off-diagonal represents the covariances*. For example, at index $[i, i]$, the value represents the variance of the residual at timepoint $i$. At index $[i, j]$, the value represents the covariance between the residuals at timepoints $i$ and $j$. \n\nIn OLS, we assume that the covariance matrix of the residuals ($\\mathrm{cov}[\\epsilon]$) equals the \nidentity-matrix ($\\mathbf{I}$) times the noise-term ($\\hat{\\sigma}^{2}$). The identity-matrix is simply a matrix with all zeros except for the diagonal, which contains ones. For example, the identity-matrix for a residual-array of length $8$ looks like:\n\n\n```python\nidentity_mat = np.eye(8) # makes an 'eye'dentity matrix\nprint(identity_mat)\n```\n\nNow, suppose we calculated that the noise-term of a model explaining this hypothetical signal of length $8$ equals 2.58 ($\\hat{\\sigma}^{2} = 2.58$). Then, OLS *assumes* the covariance matrix of the residuals equals the identity-matrix times the noise-term:\n\n\n```python\nnoise_term = 2.58\nassumed_cov_resid = noise_term * identity_mat\nprint(assumed_cov_resid)\n```\n\nIn other words, this assumption about the covariance matrix of the residuals states that the *variance across residuals (the diagonal of the matrix) should be equal* and the *covariance between residuals (the off-diagonal values of the matrix) should be 0* (in the population).\n\nNow, we won't explicitly calculate the covariance matrix of the residuals (which is usually estimated using techniques that fall beyond the scope of this course); however, we *do* want you to understand *conceptually* how fMRI data might invalidate the assumptions about the covariance matrix of the residuals and how fMRI analyses deal with this (i.e., using prewhitening, which is explained later). \n\nSo, let's check *visually* whether the assumption of equal variance of our residuals roughly holds for our (simulated) fMRI data. Now, when we consider this assumption in the context of our fMRI data, the assumption of \"equal variance of the residuals\" (also called homoskedasticity) means that we assume that the \"error\" in the model is equally big across our timeseries data. 
In other words, the mis-modelling (error) should be constant over time.\n\nLet's check this for our data:\n\n\n```python\nplt.figure(figsize=(15, 5))\nplt.plot(resids, marker='.')\nplt.xlim(0, 200)\nplt.xlabel(\"Time (TR)\", fontsize=15)\nplt.ylabel(\"Activation (A.U.)\", fontsize=15)\nplt.title(\"Residuals\", fontsize=20)\nplt.axhline(0, ls='--', c='black')\nplt.show()\n```\n\n
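Besides eyeballing the plot, a rough numerical check (our own addition) is to compare the residual variance in the first and second half of the run; roughly similar values are consistent with the equal-variance assumption, while a large difference would suggest otherwise.

```python
# Rough homoskedasticity check: residual variance in each half of the run
half = resids.size // 2
print("Residual variance, first half:  %.3f" % resids[:half].var())
print("Residual variance, second half: %.3f" % resids[half:].var())
```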
\n**ToThink**\n
\n\nWhat could cause unequal variance in the residuals of an fMRI signal, *given that autocorrelation (i.e. low-frequency components) are filtered out appropriately*? In other words, can you think of something that might cause larger (or smaller) errors across the duration of an fMRI run?\n\n\n*One reason could be that the noise becomes larger (the signal becomes weaker) due to increasing inhomogeneities of the magnetic field caused by, for example, subject movement. Another cause could be that subjects stop paying attention or do other things that might increase the noise over time. Note: things that cause drift do not necessarily lead to unequal variance across time!*\n\n#### 3.3. Zero covariance between residuals\nThe last assumption of zero covariance between residuals (corresponding to the assumption of all zeros on the off-diagonal elements of the covariance-matrix of the residuals) basically refers to the assumption that *there is no autocorrelation (correlation in time) in the residuals*. In other words, knowing the residual at timepoint $i$ does not tell you anything about the residual at timepoint $i+1$ (they are *independent*). \n\nTake for example the residuals of our unfiltered signal from before (I emphasized the drift a little bit more below), which looked like:\n\n\n```python\nold_sig = sig + np.arange(-2, 2, 0.02)[::-1]\nb = np.linalg.lstsq(X, old_sig, rcond=None)[0]\nresids_new = old_sig - X.dot(b)\n\nplt.figure(figsize=(15, 8))\nplt.subplot(2, 1, 1)\nplt.plot(resids_new, marker='.')\nplt.axhline(0, ls='--', c='black')\nplt.xlim(0, 200)\nplt.xlabel(\"Time (TR)\", fontsize=15)\nplt.title('Residuals (containing unmodelled drift!)', fontsize=20)\nplt.ylabel('Activity (a.u.)', fontsize=15)\nplt.show()\n```\n\nIn the above plot, reflecting the residuals of a signal in which the drift is obviously not modelled (and is thus contained in the residuals), there is strong autocorrelation: given the slow drift (decreasing values over time) we in fact *do know something about the residual at timepoint $i+1$ given the residual at timepoint $i$, namely that it is likely that the residual at timpoint $i+1$ is __lower__ than the residual at timepoint $i$*! As such, drift is a perfect example of something that (if not modelled) causes autocorrelation in the residuals (i.e. covariance between residuals)! In other words, autocorrelation (e.g. caused by drift) will cause the values of the covariance matrix of the residuals at the indices $[i, i+1]$ to be non-zero, violating the third assumption of Gauss-Markov's theorem!\n\n\n
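To put a number on the autocorrelation visible in the plot above, the quick check below (our own addition) computes the correlation between the residuals and the residuals shifted by one timepoint. Note that this is the plain lag-1 correlation coefficient, not the covariance definition used in the ToDo that follows.

```python
# Lag-1 autocorrelation of the drift-contaminated residuals
lag1_corr = np.corrcoef(resids_new[:-1], resids_new[1:])[0, 1]
print("Lag-1 autocorrelation: %.2f" % lag1_corr)
```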
\n**ToDo**\n
\n\nWe stated that autocorrelation captures the information that you have of the residual at timepoint $i+1$ given that you know the residual at timepoint $i$. One way to estimate this (\"lag 1\") dependence is to calculate the covariance of the residuals with the residuals \"shifted\" by one \"lag\". In general, the autocorrelation for the \nresiduals $\\epsilon$ with lag $\\tau$ is calculated as:\n\n\\begin{align}\n\\mathrm{cov}[\\epsilon_{i}, \\epsilon_{i+\\tau}] = \\frac{1}{N-\\tau-1}\\sum_{i=1}^{N-\\tau}(\\epsilon_{i}\\cdot\\epsilon_{i+\\tau})\n\\end{align}\n\nJeanette Mumford explains how to do this quite clearly in her [video on prewhitening](https://www.youtube.com/watch?v=4VSzZKO0k_w) (around minute 10). For this ToDo, calculate the \"lag-1\" covariance ($\\tau = 1$) between the residuals (i.e., using the variable `resids_new`) and store this in a variable named `lag1_cov`.\n\n\n```python\n# your code here:\n\ntau = ...\nlag1_cov = ...\n```\n\n\n```python\n# testing your answer\nnp.testing.assert_almost_equal(lag1_cov, np.load('data/lag1_cov.npy'))\n```\n\n### 3.4. Accounting for autocorrelation: prewhitening\nSo, in summary, if the covariance matrix of your residuals does not equal the identity-matrix scaled by the noise-term ($\\mathrm{cov}[\\epsilon] = \\hat{\\sigma}^{2}\\mathbf{I}$), all statistics (beta-parameters, standard errors, t-values, p-values) from the GLM might be biased (usually inflated). \n\nUnfortunately, even after high-pass filtering (which corrects for *most* but not *all* autocorrelation), the covariance matrix of the residuals of fMRI timeseries usually do no conform to the Markov-Gauss assumptions of equal variance and zero covariance. Fortunately, some methods have been developed by statisticians that transform the data such that the OLS assumptions hold again. One such technique is called *prewhitening*. \n\nWe won't discuss the mathematics of prewhitening, but you have to understand how it works conceptually.\n\nSuppose you have a signal of 20 timepoints (an irrealistically low number, but just ignore that). Now, suppose you have estimated the covariance matrix of the residuals of this signal after modelling (how this covariance-mtarix is calculated is not important for now) - let call this matrix $\\mathbf{V}$, which is an $N \\times N$ matrix ($N$ referring to the number of timepoints of your signal). Now, suppose you take a look at it and you notice that it looks faaaaar from the identity-matrix ($\\mathbf{I}$) that we need for OLS.\n\nFor example, you might see this:\n\n\n```python\nfrom scipy.linalg import toeplitz\n\nN = 20\n\n# Some magic to create a somewhat realistic covariance matrix\ntmp = toeplitz(np.arange(N)).astype(float)\ntmp[np.diag_indices_from(tmp)] += np.arange(0.1, 0.6, 0.025)[::-1]\n\n# V represents the covariance matrix\nV = 1 / tmp\n\nplt.figure(figsize=(10, 5))\nplt.subplot(1, 2, 1)\nplt.imshow(V, vmax=5, cmap='gray')\nplt.axis('off')\nplt.title(\"V (actual covariance matrix)\", fontsize=15)\nplt.subplot(1, 2, 2)\nplt.imshow(np.eye(N), vmax=5, cmap='gray')\nplt.title(\"Identity-matrix (assumed matrix)\", fontsize=15)\nplt.axis('off')\nplt.tight_layout()\n```\n\nWell, shit. We have both unequal variance (different values on the diagonal) *and* non-zero covariance (some non-zero values on the off-diagonal). So, what to do now? Well, we can use the technique of prewhitening to make sure our observed covariance matrix ($\\mathbf{V}$) will be \"converted\" to the identity matrix! 
Basically, this amounts to plugging in some extra terms to formula for ordinary least squares. As you might have seen in the book/videos, the *original* OLS solution (i.e., how OLS finds the beta-parameters is as follows):\n\n\\begin{align}\n\\hat{\\beta} = (X'X)^{-1}X'y\n\\end{align}\n\nNow, given that we've estimated our covariance matrix of the residuals, $\\mathbf{V}$, we can rewrite the OLS solution such that it prewhitens the data (and thus the covariance matrix of the residuals will approximate $\\hat{\\sigma}^{2}\\mathbf{I}$) as follows:\n\n\\begin{align}\n\\hat{\\beta} = (X'V^{-1}X)^{-1}X'V^{-1}y\n\\end{align}\n\nThen, accordingly, the standard-error of any contrast of the estimated beta-parameters becomes:\n\n\\begin{align}\nSE_{\\mathbf{c}\\hat{\\beta}} = \\sqrt{\\hat{\\sigma}^{2} \\cdot \\mathbf{c}(X'V^{-1}X)^{-1}\\mathbf{c}'}\n\\end{align}\n\nThis \"modification\" of OLS is also called \"generalized least squares\" (GLS) and is central to univariate fMRI analyses! You *don't* have to understand how this works mathematically; again, you should only understand *why* prewhitening makes sure that our data behaves according to the assumptions of the Gauss-Markov theorem.\n\n(Fortunately for us, there is usually an option to 'turn on' prewhitening in existing software packages, so we don't have to do it ourselves. But it is important to actually turn it on whenever you want to meaningfully and in an unbiased way interpret your statistics in fMRI analyses!)\n\n\n
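To make the two formulas above a bit more tangible, here is a minimal sketch (our own, not the official course implementation) that translates the GLS beta estimate and the GLS design variance of a contrast into code; how the noise term is best estimated under GLS is left open here.

```python
# Sketch of generalized least squares, assuming a design matrix X (N x P),
# a signal y (N,), an estimated residual covariance matrix V (N x N),
# and a contrast vector cvec (P,).
def gls_betas(X, y, V):
    V_inv = np.linalg.inv(V)
    XtVinvX = X.T.dot(V_inv).dot(X)
    return np.linalg.inv(XtVinvX).dot(X.T).dot(V_inv).dot(y)

def gls_design_variance(X, V, cvec):
    V_inv = np.linalg.inv(V)
    XtVinvX = X.T.dot(V_inv).dot(X)
    return cvec.dot(np.linalg.inv(XtVinvX)).dot(cvec)
```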
\n**ToDo (optional!)**\n
\n\nIf you want to practice your linear algebra/programming skills, you can (_optionally!_) do this ToDo: given the target signal (`some_sig`), design-matrix (`some_X`), and the (hypothetical) covariance-matrix of the residuals from before (the variable `V`), calculate the beta-parameters using the prewhitened version of OLS (i.e., 'generalized least squares'; the formula above). Also, calculate the t-value of the contrast `[0, 1]` given the appropriate (GLS) computation of the standard-error. Store your results in the variables `betas_gls` and `tval_gls`, respectively.\n\n\n\n\n```python\n# Implement your ToDo here!\nsome_sig = sig[:20] # y\nsome_X = X[:20, :] # X\nc_vec = np.array([0, 1]) # the contrast you should use\n\nbetas_gls = ...\nsigma_hat = ...\ndesvar = ...\ntval_gls = ...\n\n```\n\n\n```python\nnp.testing.assert_array_almost_equal(betas_gls, np.array([1.845, 1.385]), decimal=3)\nnp.testing.assert_almost_equal(tval_gls, 1.139, decimal=3)\n```\n\n## 4. Spatial filtering (smoothing)\nAlright, so in the previous two sections we've looked at how to filter our signal in the time-domain through high-pass filtering and prewhitening. Importantly, these operations are *performed on the timeseries of each voxel separately* (like you did in a previous ToDo)! Essentially, given our 4D fMRI data ($X \\times Y \\times Z \\times T$), temporal filtering as we discussed here is only applied to the fourth dimension (the time-dimension, $T$).\n\nIn addition to temporal filtering, many people also apply a form of *spatial* filtering to their fMRI data, which is thus an operation applied to the *spatial* dimensions ($X$, $Y$, and $Z$) of our 4D data. The most common spatial filtering operation -- spatial smoothing -- is usually implemented using a \"3D gaussian (low-pass) smoothing kernel\". Sounds familiar? Well, it should, because it's essentially the same type of filter as we used for our temporal high-pass filter! Only this time, it's not a 1-dimensional gaussian (\"kernel\") that does the high-pass filtering, but it's a 3-dimensional gaussian that does *low-pass* filtering. Just like the temporal gaussian-weighted running line smoother is applied across time, we apply the 3D gaussian across space (i.e., the 3 spatial dimensions, $X$, $Y$, and $Z$). **Note that we don't advocate this at all in modern neuroimaging practice, but understanding the operation is important.**\n\n\nThe figure below schematically visualizes the process\\*:\n\n\n\n(Note that we show the spatial data in 2D, simply because it's easier to visualize, but in reality this is always 3D!)\n\nIn fact, because both (high-pass) temporal filtering and (low-pass) spatial filtering in most fMRI applications depend on the same \"gaussian filtering\" principle, we can even use the same Python function: `gaussian_filter`! However, as we mentioned before, spatial smoothing in fMRI is used as a *low-pass filter*, which means that the (spatial!) frequencies *higher* than a certain cutoff are filtered out, while the (spatial!) frequencies *lower* than this cut-off are *passed*. Therefore, we don't need to subtract the output of the `gaussian_filter` function from the (spatial) data! (If we did that, then we would effectively be high-pass filtering the data!)\n\n---\n\\* 3D gaussian figure copied from [this website](https://blog.philippklaus.de/2012/10/creating-a-gaussian-window-in-3d-using-matlab).\n\n### 4.1. 
Smoothing of fMRI data\nBefore we go on and demonstrate smoothing on fMRI data, we need to determine the sigma of our (3D) gaussian kernel that we'll use to smooth the data. Annoyingly, smoothing kernels in the fMRI literature (and software packages!) are usually not reported in terms of sigma, but as \"full-width half maximum\" (FWHM), which refers to the width of the gaussian at half the maximum height:\n\n\n\nFor example, you might read in papers something like \"We smoothed our data with a gaussian kernel with a FWHM of 3 millimeter\". Fortunately, we can easily convert FWHM values to sigmas, using:\n\n\\begin{align}\n\\sigma_{kernel} \\approx \\frac{\\mathrm{FWHM}_{mm}}{2.355 \\cdot \\mathrm{voxel\\ size}_{mm}}\n\\end{align}\n\nSo, for example, if I want to smooth at FWHM = 6 mm and my voxels are 3 mm in size (assuming equal size in the three spatial dimensions, which is common), my sigma becomes:\n\n\\begin{align}\n\\sigma_{kernel} \\approx \\frac{6}{2.355 \\cdot 3} \\approx 0.42\n\\end{align}\n\nThe entire conversion process is a bit annoying, but it's simply necessary due to conventions in the fMRI literature/software packages.\n\nAnyway, having dealt with the conversion issue, let's look at an example. We'll use the 4D fMRI data from before (the `data_4d` variable) and we'll extract a single 3D volume which we'll smooth using the `gaussian_filter` function. This particular data has a voxel-size of $6\\ mm^{3}$; given that we want to smooth at an FWHM of, let's say, 10 millimeter, we need a sigma of ($\\frac{10}{2.355 \\cdot 6} \\approx $) $0.7$.\n\n\n```python\nvol = data_4d[:, :, :, 20] # We'll pick the 21st volume (Python is 0-indexed, remember?)\n\nfwhm = 10\nvoxelsize = 6\n\nsigma = fwhm / (2.355 * voxelsize)\nsmoothed_vol = gaussian_filter(vol, sigma=sigma)\n\n# Let's plot both the unsmoothed and smoothed volume\nplt.figure(figsize=(10, 5))\nplt.subplot(1, 2, 1)\nplt.imshow(vol[:, :, 10], cmap='gray') # And we'll pick the 11th axial slice to visualize\nplt.axis('off')\nplt.title(\"Unsmoothed volume\\n\", fontsize=15)\n\nplt.subplot(1, 2, 2)\nplt.imshow(smoothed_vol[:, :, 10], cmap='gray')\nplt.axis('off')\nplt.title('Smoothed volume\\n($\\sigma = %.1f; FWHM = %s_{mm}$)' % (sigma, fwhm), fontsize=15)\nplt.show()\n```\n\n
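\nAs a side note: the 2.355 in the conversion above is not an arbitrary constant; it is (approximately) $2\\sqrt{2\\ln 2}$, the exact factor relating the FWHM of a gaussian to its standard deviation. A quick sanity check:\n\n```python\n# FWHM = 2 * sqrt(2 * ln 2) * sigma for a gaussian, hence the 2.355 used above\nimport numpy as np\nprint(2 * np.sqrt(2 * np.log(2)))  # ~2.3548\n```\n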
\n**ToDo**\n
\n\nNow, in the above example we only smoothed a single volume, but in your analyses you would of course smooth all volumes in your fMRI run! (Just like with temporal filtering you need to filter the timeseries of all voxels separately.) In this ToDo, you need to loop through all (50) volumes from the `data_4d` variable and smooth them separately. Store the smoothed data in the already pre-allocated variable `data_4d_smoothed`. Use a sigma of 0.7 for the gaussian filter.\n\n\n\n```python\n# Implement your ToDo here\n\ndata_4d_smoothed = np.zeros(data_4d.shape)\n\nfor i in range(data_4d.shape[-1]):\n ....\n\n```\n\n\n```python\nnp.testing.assert_array_almost_equal(data_4d_smoothed, np.load('data/smoothed_data_todo.npy'))\n```\n\n
\n**ToThink**\n
\n\nSince the `gaussian_filter` works for any $N$-dimensional array, one could argue that you don't have to loop through all volumes and apply a 3D filter, but you could equally well skip the loop and use a 4D filter straightaway. Explain (concisely) why this is a bad idea (for fMRI data).\n\n\n*This is a bad idea because you have two opposing goals of spatial and temporal filtering: you want to high-pass the temporal dimension while low-pass the spatial dimensions. Applying a single 4D spatial (low-pass) filter will unintentionally also low-pass the temporal dimension.*\n\n## 5. Dealing with outliers\nAs we've seen, high-pass filtering and spatial smoothing are operations that are applied on the signal directly (before model fitting) to reduce the noise term. Another way to reduce the noise term is to include *noise regressors* (also called 'nuisance variables/regressors') in the design matrix. As such, we can subdivide our design matrix into \"predictors of interest\" (which are included to model the task/stimuli) and \"noise predictors\". Or, to reformulate our linear regression equation:\n\n\\begin{align}\ny = X_{interest}\\beta_{interest} + X_{noise}\\beta_{noise} + \\epsilon\n\\end{align}\n\nImportantly, the difference between $X_{noise}$ and $\\epsilon$ is that the $X_{noise}$ term refers to noise-related activity that you *are able to model* while the $\\epsilon$ term refers to the noise that you *can't model* (this is often called the \"irreducible noise/error\" term). \n\n### 5.1. Using noise-predictors for \"despiking\"\nThis technique of adding noise-predictors to the design-matrix is sometimes used to model 'gradient artifacts', which are also called 'spikes' (which you've heard about in one of the videos for this week). This technique is also sometimes called \"despiking\". These spikes reflect sudden large intensity increases in the signal across the entire brain that likely reflect scanner instabilities. One way to deal with these artifacts is to \"censor\" bad timepoints (containing the spike) in your signal using a noise-predictor.\n\nBut what defines a 'spike'/bad timepoint? One way is to average the timeseries across all voxels, generating one single 'global signal' (which makes sense because spikes usually affect measurements across the entire brain), and then apply a z-transform of this global signal, and subsequently identify spikes as any value above a certain threshold. For example, setting this threshold at 5 means that any activity value more than 5 standard deviations from the mean global signal intensity is defined as a spike.\n\nBefore explaining how to include these spikes in the design-matrix, let's take a look at the example signal that we're going to use for this section:\n\n\n```python\nwith np.load('data/spike_data.npz') as spike_data:\n global_signal = spike_data['global_signal']\n spike_sig = spike_data['spike_sig']\n pred = spike_data['pred']\n\nplt.figure(figsize=(15, 5))\nplt.plot(global_signal)\nplt.xlabel(\"Time (TR)\", fontsize=20)\nplt.ylabel(\"Activity (A.U.)\", fontsize=20)\nplt.title(\"Global signal (i.e., average across all voxels)\", fontsize=25)\nplt.xlim(0, 500)\nplt.show()\n```\n\nAs you can see from the plot, there are two apparent 'spikes' in the data (around $t=30$ and $t=300$)!\n\n\n
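\nTo make the thresholding idea concrete before you apply it yourself, here is a toy illustration on a synthetic signal (not the notebook's `global_signal`; the names below are made up):\n\n```python\n# Toy despiking illustration: z-score a synthetic signal and flag values > 5 SD\nimport numpy as np\n\nrng = np.random.default_rng(1)\ntoy_signal = rng.normal(size=200)\ntoy_signal[[40, 120]] += 20                    # inject two artificial 'spikes'\n\ntoy_z = (toy_signal - toy_signal.mean()) / toy_signal.std()\ntoy_spikes = toy_z > 5                         # boolean array marking spike timepoints\nprint(np.where(toy_spikes)[0])                 # -> [ 40 120]\n```\n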
\n**ToDo**\n
\n\nNow, in order to identify spikes, we need to identify timepoints that have an activity-values that differ more than (let's say) 5 standard deviations from the mean activity value. As such, you need to 'z-score' the activity values: subtract the mean value from each individual value and divide each of the resulting 'demeaned' values by the standard deviation ($\\mathrm{std}$) of the values. In other words, the z-transform of any signal $s$ with mean $\\bar{s}$ is defined as:\n\n\\begin{align}\nz(s) = \\frac{(s - \\bar{s})}{\\mathrm{std}(s)}\n\\end{align}\n\nImplement this z-score transform for the variable `global_signal` and store it in the variable `zscored_global_signal`. Then, identify any values above 5 in the `zscored_global_signal` and store these in a variable named `identified_spikes` which thus should contain a numpy array with boolean values in which the timepoints with spikes should be `True` and timepoints without spikes should be `False`. \n\n(Hint: your `identified_spikes` variable should contain 2 spikes.)\n\n\n```python\n# Implement your ToDo here\n\n```\n\n\n```python\nnp.testing.assert_almost_equal(zscored_global_signal,\n (global_signal - global_signal.mean()) / global_signal.std())\n\nnp.testing.assert_almost_equal(identified_spikes,\n ((global_signal - global_signal.mean()) / global_signal.std() > 5))\n\n```\n\nAlright, if you've done the ToDo correctly, you should have found that there are two spikes in the data at timepoints $t = 30$ and $t = 308$. Now, to remove the influence of these spikes, we can include two regressors that model the influence of these timepoints in our design-matrix. These regressors simply contain all zeros except for at the timepoint of the spike, where it contains a 1. Let's create these regressors: \n\n\n```python\nspike_regressor_1 = np.zeros((500, 1))\nspike_regressor_1[29] = 1 # remember, Python has 0-based indexing!\nspike_regressor_2 = np.zeros((500, 1))\nspike_regressor_2[307] = 1\n\nplt.figure(figsize=(15, 5))\nplt.plot(spike_regressor_1 + 0.01) # The 0.01 is for visualization purposes only\nplt.plot(spike_regressor_2)\nplt.legend(['Spike regressor 1', 'Spike regressor 2'])\nplt.xlabel(\"Time (A.U.)\", fontsize=20)\nplt.ylabel(\"Activity (A.U.)\", fontsize=20)\nplt.title(\"Spike regressors\", fontsize=25)\nplt.xlim(0, 500)\nplt.show()\n```\n\n
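\nA small aside on *why* such a regressor \"censors\" a timepoint: because the regressor is zero everywhere except at the spike, OLS can use its beta to fit that single timepoint exactly, so the spike no longer leaks into the residuals (or into the other betas). A toy demonstration (synthetic data, made-up names):\n\n```python\n# Including a one-hot regressor for timepoint t makes OLS fit that timepoint exactly\nimport numpy as np\n\nrng = np.random.default_rng(2)\nn = 50\ny_toy = rng.normal(size=n)\ny_toy[10] += 30                               # a large spike at t = 10\n\nspike_col = np.zeros((n, 1))\nspike_col[10] = 1\nX_toy = np.hstack([np.ones((n, 1)), spike_col])\n\nbeta_toy = np.linalg.lstsq(X_toy, y_toy, rcond=None)[0]\nresid_toy = y_toy - X_toy @ beta_toy\nprint(resid_toy[10])                          # ~0: the spike is absorbed by its regressor\n```\n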
\n**ToThink**\n
\n\nWhy do you think we do not convolve the spike regressors with an HRF (or basis set)? Write your answer in the text-cell below.\n\n\n*Because spikes are not \"activity\" related to neural activity! As such, we do not expect a BOLD-response and are thus not incorporating our \"hypothesis\" of a HRF-shaped response in our model.*\n\nNow, let's plot a signal from a single voxel and the associated (hypothetical) stimulus-predictor.\n\n\n\n\n\n```python\nplt.figure(figsize=(15, 5))\nplt.plot(spike_sig)\nplt.plot(pred)\nplt.xlabel(\"Time (A.U.)\", fontsize=20)\nplt.ylabel(\"Activity (A.U.)\", fontsize=20)\nplt.title(\"Signal + stimulus predictor\", fontsize=25)\nplt.xlim(0, 500)\nplt.legend(['Signal', 'Stimulus-predictor'])\nplt.show()\n```\n\nAs you can see, the spikes that we found in the global signal are also (as expected) present in the signal of this particular voxel.\n\n
\n**ToDo**\n
\n\nCalculate the t-value of the stimulus-predictor-against-baseline contrast in the model with only the stimulus-predictor (don't forget to stack an intercept) and store it in the variable `tval_stimonly` (use `spike_sig` as your target, i.e., $y$). Then, add the two spike-predictors to the design-matrix (use `np.hstack`), which should have 4 columns afterwards (1 intercept, 1 stimulus predictor, 2 spike predictors). Then, calculate the t-value of the stimulus-predictor-against-baseline contrast for the extended model; store the t-value in a variable named `tval_with_spike_preds`.\n\n\n```python\n# Implement your ToDo here\n\n```\n\n\n```python\n# test for part one\nc = np.array([0, 1])\nX_s = np.hstack((np.ones((pred.size, 1)), pred))\nb_s = np.linalg.lstsq(X_s, spike_sig, rcond=None)[0]\nsigmahat_s = np.sum((spike_sig - X_s.dot(b_s)) ** 2) / (X_s.shape[0] - X_s.shape[1])\ndesvar_s = c.dot(np.linalg.inv(X_s.T.dot(X_s))).dot(c.T)\nanswer_stimonly = c.dot(b_s) / np.sqrt(sigmahat_s * desvar_s)\nnp.testing.assert_almost_equal(answer_stimonly, tval_stimonly, decimal=3)\n```\n\n\n```python\n# test for part two\nc = np.array([0, 1, 0, 0])\nX_sp = np.hstack((np.ones((pred.size, 1)), pred, spike_regressor_1, spike_regressor_2))\nb_sp = np.linalg.lstsq(X_sp, spike_sig, rcond=None)[0]\nsigmahat_sp = np.sum((spike_sig - X_sp.dot(b_sp)) ** 2) / (X_sp.shape[0] - X_sp.shape[1])\ndesvar_sp = c.dot(np.linalg.inv(X_sp.T.dot(X_sp))).dot(c.T)\nanswer_with_spike_preds = c.dot(b_sp) / np.sqrt(sigmahat_sp * desvar_sp)\nnp.testing.assert_almost_equal(answer_with_spike_preds, tval_with_spike_preds, decimal=3)\n```\n\nIf you've done the ToDo correctly, you saw that the t-value from the model with the spike-predictors was much bigger! This is, of course, because the noise term got much smaller (and thus increasing the denominator of the t-value formula).\n\n\n## 6. Motion preprocessing\nThis method of preprocessing through including noise predictors, like the spike-predictors in the previous section, is also often used in the context of motion filtering. Basically, in this process you'd like to remove all 'activity' in voxels that are correlated with motion. By including these motion predictors in our model, we make sure that variance in the signal caused my motion is accounted for and does *not* end up in our error term (i.e. the model's residuals/unexplained variance). \n\nHowever, these 'motion predictors' do not magically appear, but are a result of a previous step in addressing motion in fMRI. Basically, \"motion preprocessing\" consists of two parts:\n\n1. Motion correction (realignment);\n2. Motion filtering (denoising)\n\nIn the first part, the functional image is - volume by volume - realigned such that all volumes are in the same orientation (i.e. the location of each voxel is constant over time). This realignment is done by translating (in three directions) and rotating (in three directions) each volume to match a 'reference volume' (usually the first or middle volume of each file). \n\nThen, in a second step, we use those 'realignment parameters' (i.e. how much each subject moved over time) in our design matrix as 'noise predictors' that aim to improve model fit by explaining variance related to movement. Let's take a look at these motion parameters.\n\nLet's take a closer look at both of these operations (motion correction/realignment and filtering/denoising).\n\n\n### 6.1. Motion realignment\nArguably the largest source of noise/unwanted effects in fMRI data is subject motion. 
When subjects move in the scanner, the homogeneity of the magnetic field is decreased, voxels are shifted in space from timepoint to timepoint, and spurious (de)activity may occur (during modeling). As such, it is *very* important to deal with motion appropriately. By far the *most important thing* you can do to reduce the effect of motion is simply to try to make the subject move as little as possible!\n\nBut even if you have a subject that lies as still as possible in the scanner, some subject motion is unavoidable (due to e.g. breathing). As such, you always want to do motion realignment. This process works as follows:\n\n1. Pick a \"reference volume\" within your fMRI run (e.g. the first or middle volume)\n2. For each of the other volumes, try to rotate (\"twist\" around axes) and translate (move left/right/up/down/forward/backward) such that it matches the reference volume as well as possible\n\nThis process of translating and rotating is often called \"rigid body transformation\", which aims to \"register\" each volume to the reference volume using 6 parameters: $3$ (translation in $X, Y, Z$ directions) $+\\ 3$ (rotation across $X, Y, Z$ axes). \n\nFortunately, you don't have to find these 6 parameters yourself for each volume; there are optimization algorithms that do the rotating/translating and matching to the reference volume for you (this type of registration algorithm that we use for motion realignment is similar to the algorithms used for spatial normalization from functional --> T1 --> standard space). These algorithms usually start out with random values for these six parameters, for which they calculate the \"mismatch\" (also called \"cost\"; for example in terms of the \"correlation distance\" between the realigned volume and the reference volume, $1 - \\rho(vol, ref\\_vol)$). Then, they adjust the parameters such that the \"mismatch\" decreases. This parameter adjustment is iterated until the \"mismatch\" is below some threshold.\n\nTo get a better intuition of this process, you're actually going to manually do motion realignment in the next ToDo!\n\n\n
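\nBefore you do, here is a toy illustration of the \"minimise the mismatch\" idea: a brute-force search over integer translations of a synthetic 2D image, using the 1 - correlation cost described above. (Real registration tools optimise over continuous rotation/translation parameters rather than exhaustively searching shifts, so this is *not* their actual algorithm.)\n\n```python\n# Brute-force 2D realignment sketch: find the integer shift that minimises 1 - correlation\nimport numpy as np\n\nrng = np.random.default_rng(0)\nref_img = rng.normal(size=(32, 32))\nmoved_img = np.roll(np.roll(ref_img, 3, axis=0), -2, axis=1)   # 'subject moved' by (3, -2) voxels\n\ndef corr_cost(img, ref):\n    return 1 - np.corrcoef(img.ravel(), ref.ravel())[0, 1]\n\nshifts = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]\nbest_shift = min(shifts, key=lambda s: corr_cost(np.roll(np.roll(moved_img, -s[0], axis=0), -s[1], axis=1), ref_img))\nprint(best_shift)   # (3, -2): the translation we injected above, recovered by minimising the cost\n```\n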
\n**ToDo**\n
\n\nIn this ToDo, you are going to try to manually register two brain images to a reference image using translation (we leave out rotation for the simplicity of the example). For this exercise, we are going to use a 2D brain image (instead of normal 3D volume), simply because it's easier to plot. As such, you can only tweak two parameters: translation in the up/down direction and translation in the left/right direction.\n\nBelow, we load in a very short fMRI run with 3 brain images (in 2D), `motion_data`. Then, we provide you with a function that mimicks a motion realigment algorithm, but which you have to tweak yourself manually. We set the reference volume to the middle image (`ref_image=2`). You start out with all the translation parameters set to 0. Run the cell below once to see the initial mismatch between the three volume.\n\nAfter running the cell below, you should see an image with three brains: the red one represents the reference image (the middle image/second image); the blue one represents the first image and the green one represents the third image (in time). Now, to translate the first image (green) one voxel upwards and the third image (blue) one voxel downwards, set the `translate_up` parameter as follows:\n\n```python\ntranslate_up = [1, -1]\n```\n\nEquivalently, if you want to translate the first image (green) two voxels to the left and the second image (blue) three voxels to the right, set the `translate_left` parameter as follows:\n\n```python\ntranslate_left = [2, -3]\n```\n\nTry different values for these translation parameters until you you minimize the \"mismatch\" of the brain images relative to the reference image (i.e., until all three images overlay perfectly)!\n\n\n```python\nfrom scipy.stats import pearsonr\n\n\ndef translate_volumes(data, ref_image=2, translation_up=None, translation_left=None):\n \n ref_im = data[:, :, (ref_image - 1)] # assuming data is 3D\n other_ims = np.dstack((data[:, :, :(ref_image -1)],\n data[:, :, (ref_image):]))\n \n for i, up in enumerate(translation_up):\n other_ims[:, :, i] = np.roll(other_ims[:, :, i], -up, axis=0)\n \n for i, left in enumerate(translation_left):\n other_ims[:, :, i] = np.roll(other_ims[:, :, i], -left, axis=1)\n \n plt.imshow(ref_im, cmap='Reds')\n plt.axis('off')\n \n for i in range(other_ims.shape[-1]):\n \n if other_ims.shape[-1] > 2:\n plt.imshow(other_ims[:, :, i], cmap='Greens', alpha=0.5)\n else:\n if i == 1:\n plt.imshow(other_ims[:, :, i], cmap='Greens', alpha=0.5)\n else:\n plt.imshow(other_ims[:, :, i], cmap='Blues', alpha=0.5)\n plt.axis('off')\n corrdist = 1 - pearsonr(other_ims[:, :, i].ravel(),\n ref_im.ravel())[0]\n print(\"Cost (1 - corr) image %i: %.3f\" % (i + 1, corrdist))\n\nmotion_data = np.load('data/motion_data.npy')\n\n# Tweak the parameters translate_up and translate_left to minimize the mismatches\ntranslate_up = [0, 0]\ntranslate_left = [0, 0]\n\ntranslate_volumes(motion_data, ref_image=2, translation_up=translate_up, translation_left=translate_left)\n\ntranslate_up = .....\ntranslate_left = .....\n```\n\n\n```python\nassert(translate_up == [-8, 9])\nassert(translate_left == [-5, 3])\n```\n\n### 6.2 Motion filtering\nEven after motion realignment, your data is still 'contaminated' by motion. This is because movement itself influences the measured activity. For example, suppose that you measure a single voxel in someone's brain; then, this person moves his/her hea\u2020d 2 centimeters. 
Now, we can do motion realignment to make sure we measure the same voxel before and after the movement, but *this does not change the fact that this particular voxel was originally measured at two different locations*. It could be that after the movement, the voxel was actually a little bit closer to the headcoil, which results in a (slight) increase in signal compared to before the movement (this is also known as 'spin history effects').\n\nIdeally, you want to account for these interactions between motion and the measured activity. One way to do this is through \"motion filtering\", of which one popular approach is to simply add the 6 realignment parameters (rotation and translation in 3 directions) to the design-matrix ($X$)! In other words, we treat the motion realignment parameters as \"nuisance regressors\" that aim to explain activity that is related to motion.\n\nAlright, let's load some realignment parameters (6 in total) from an fMRI run of 200 volumes. We'll plot them below:\n\n\n```python\n\"\"\" This data has been motion-corrected using the FSL tool 'MCFLIRT', which outputs a file\nending in *.par that contains the 6 motion parameters (rotation/translation in 3 directions each).\nWe'll load in this file and plot these motion parameters. \"\"\"\n\nmotion_params = np.loadtxt('data/mc/unfiltered_data_mcf.par')\nrotation_params = motion_params[:, :3]\ntranslation_params = motion_params[:, 3:]\n\nplt.figure(figsize=(15, 7))\nplt.subplot(2, 1, 1)\nplt.title('Rotation', fontsize=20)\nplt.plot(rotation_params)\nplt.xlim(0, 199)\nplt.legend(['x', 'y', 'z'])\nplt.ylabel('Rotation in radians', fontsize=15)\n\nplt.subplot(2, 1, 2)\nplt.title('Translation', fontsize=20)\nplt.plot(translation_params)\nplt.legend(['x', 'y', 'z'])\nplt.ylabel('Translation in mm', fontsize=15)\nplt.xlim(0, 199)\nplt.xlabel('Time (TR)', fontsize=15)\nplt.tight_layout()\nplt.show()\n```\n\nLooking at the plots, you could say this is pretty good data! Apart from some movement around volume 85 - 90, the participant didn't move a lot. \n\n
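\nTo put a rough number on \"didn't move a lot\", you can summarise the realignment parameters directly (this assumes the cell above, which defines `rotation_params` and `translation_params`, has been run):\n\n```python\n# Maximum absolute excursion of each realignment parameter over the run\nprint('Max |rotation| per axis (radians):', np.abs(rotation_params).max(axis=0))\nprint('Max |translation| per axis (mm): ', np.abs(translation_params).max(axis=0))\n```\n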
\n**ToThink**\n
\n\nLooking at the plots above, can you deduce which volume was used as a reference volume? \n\n
\n**ToDo**\n
\n\nFor this ToDo, you have to compare two models and the resulting (normalized) effects (t-values): a model *without* the six motion parameters and a model *with* the six motion parameters.\n\nWe provide you with a design-matrix (`X`) with an intercept and a single stimulus-predictor and the signal of a single voxel (`sig`). Calculate the t-value for the contrast of the predictor-of-interest against baseline for both the original design-matrix (only intercept + predictor-of-interest) and the design-matrix extended with the six motion parameters (which thus has 8 predictors). To help you a little bit, we broke down this ToDo in multiple steps, each in a different cell.\n\n\n```python\n# First, we'll load the data\nwith np.load('data/data_last_todo.npz') as data_last_todo:\n X = data_last_todo['X']\n sig = data_last_todo['sig']\n\nprint(\"Shape of original X: %s\" % (X.shape,))\nprint(\"Shape of signal: %s\" % (sig.shape,))\n```\n\nNow, in the cell below, calculate the t-value corresponding to the contrast against baseline of our predictor-of-interest. Save this in a variable named `tvalue_simple_model`.\n\n\n```python\n# Calculate the t-value here\n\n```\n\nThen, in the cell below, make a new (extended) design-matrix by stacking the motion parameters (variable `motion_params`) to the original design-matrix (`X`). \n\n\n```python\n# Create here the extended design matrix with motion parameters (call it e.g. X_ext)\n\n```\n\nLastly, calculate the t-value of the predictor-against-baseline for the extended model (i.e., the design-matrix with motion predictors). Store the t-value in a variable named `tvalue_extended_model`. \n\n\n```python\n# Calculate the t-value of the extended model here\n\n```\n\n\n```python\nnp.testing.assert_almost_equal(tvalue_simple_model, 12.399, decimal=3)\nnp.testing.assert_almost_equal(tvalue_extended_model, 10.213, decimal=3)\n\n```\n\n
\n**ToThink**\n
\n\nIf you did the above ToDo correctly, you should have found that the t-value of the extended model (with motion parameters) is actually **_smaller_** than the simple model (without motion parameters) ... What caused the t-value of *this predictor* to become smaller when the motion-parameters were included in your design-matrix?\n\n\n*If your stimulus predictor is correlated to the model parameters, then the increased design-variance will actually lead to lower t-values! Note that it's not the increase in degrees of freedom that caused a smaller t-value, because the noise term is actually lower when the motion parameters are included!*\n\n\n```python\n\n```\n", "meta": {"hexsha": "8744cc1f6c55d4ae79bf48cdaf480c4f24e4ea0d", "size": 87276, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/nuisances.ipynb", "max_stars_repo_name": "tknapen/brainimaging_VU", "max_stars_repo_head_hexsha": "41c4aec5d676f42410f30a216b49a7814ae508d7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 8, "max_stars_repo_stars_event_min_datetime": "2021-01-31T13:42:02.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-18T23:19:23.000Z", "max_issues_repo_path": "notebooks/nuisances.ipynb", "max_issues_repo_name": "tknapen/brainimaging_VU", "max_issues_repo_head_hexsha": "41c4aec5d676f42410f30a216b49a7814ae508d7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/nuisances.ipynb", "max_forks_repo_name": "tknapen/brainimaging_VU", "max_forks_repo_head_hexsha": "41c4aec5d676f42410f30a216b49a7814ae508d7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-02-01T18:36:22.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-17T18:08:05.000Z", "avg_line_length": 53.2495424039, "max_line_length": 957, "alphanum_fraction": 0.6026628168, "converted": true, "num_tokens": 15924, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.2782568056728001, "lm_q2_score": 0.32423539245106087, "lm_q1q2_score": 0.09022070458949892}} {"text": "```python\n%%capture\n## compile PyRoss for this notebook\nimport os\nowd = os.getcwd()\nos.chdir('../../')\n%run setup.py install\nos.chdir(owd)\n```\n\n\n```python\n%matplotlib inline\nimport numpy as np\nfrom matplotlib import pyplot as plt\nimport pyross\nimport time \nimport seaborn as sns\nimport pandas as pd\nfrom matplotlib.pyplot import cm\n```\n\nIn this notebook we consider a control protocol consisting of an initial lockdown, which is then partly released. For our numerical study we generate synthetic data using the stochastic SIIR model.\n\nWhile we use the UK age structure and contact matrix, we used here simulated data.\n\n**Summary:**\n\n1. We load the age structure and contact matrix for Denmark. The contact matrix is generally given as\n\\begin{equation}\n C = C_{H} + C_{W} + C_{S} + C_{O},\n\\end{equation}\nwhere the four terms denote the number of contacts at home, work, school, and all other remaining contacts.\n2. We define the other model parameters of the SIIR model **(these are not fitted to any real data)**.\n3. We define a \"lockdown-protocol\":\n Withing a certain time range, a lockdown is imposed (shcool closure). The contact matrix is reduced to \n \\begin{equation}\n C = C_{H} \n \\end{equation} \n\nWe want to see an impact if the school reopen, when do we see a change in the number of people infected ? 
Which age group is the most infected ?\n\n\n## Get the contact Matrices for UK\n\n\n```python\nM=16 # number of age groups\n\n# load age structure data\nmy_data = np.genfromtxt('../../data/age_structures/UK.csv', delimiter=',', skip_header=1)\naM, aF = my_data[:, 1], my_data[:, 2]\n\n# set age groups\nNi=aM+aF; Ni=Ni[0:M]; N=np.sum(Ni)\n```\n\n\n```python\ndf1= pd.DataFrame({'Female':aF, 'Age':['0-4','5-9','10-14','15-19','20-24','25-29','30-34','35-39','40-44','45-49','50-54','55-59','60-64','65-69','70-74','75-79','80-84','85-89','90-94','95-99','100+'], 'Sex':['F']*21})\ndf2 = pd.DataFrame({'Male':aM, 'Age':['0-4','5-9','10-14','15-19','20-24','25-29','30-34','35-39','40-44','45-49','50-54','55-59','60-64','65-69','70-74','75-79','80-84','85-89','90-94','95-99','100+'], 'Sex':['M']*21})\ndf3 = pd.concat([df1, df2], join='inner')\ndf3['number'] = np.concatenate((aF,aM))\n```\n\nC is the sum of contributions from contacts at home, workplace, schools and all other public spheres. Using superscripts $H$, $W$, $S$ and $O$ for each of these, we write the contact matrix as\n$$\nC_{ij} = C^H_{ij} + C^W_{ij} + C^S_{ij} + C^O_{ij}\n$$\n\nWe read in these contact matrices from the data sets provided in the paper *Projecting social contact matrices in 152 countries using contact surveys and demographic data* by Prem et al, sum them to obtain the total contact matrix. We also read in the age distribution of UK obtained from the *Population pyramid* website.\n\n\n```python\n# Get individual contact matrices\nCH, CW, CS, CO = pyross.contactMatrix.UK()\n\n# By default, home, work, school, and others contribute to the contact matrix\nC = CH + CW + CS + CO\n\n# Illustrate the individual contact matrices:\nfig,aCF = plt.subplots(2,2);\naCF[0][0].pcolor(CH, cmap=plt.cm.get_cmap('GnBu', 10));\naCF[0][1].pcolor(CW, cmap=plt.cm.get_cmap('GnBu', 10));\naCF[1][0].pcolor(CS, cmap=plt.cm.get_cmap('GnBu', 10));\naCF[1][1].pcolor(CO, cmap=plt.cm.get_cmap('GnBu', 10));\n```\n\n## Covid19 data \n\n\n```python\n# Get the latest data from Johns Hopkins University\n!git clone https://github.com/CSSEGISandData/COVID-19\n```\n\n fatal: destination path 'COVID-19' already exists and is not an empty directory.\r\n\n\n\n```python\ncases = pd.read_csv('COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv')\ncases.head()\n```\n\n\n\n\n
\n\n| | Province/State | Country/Region | Lat | Long | 1/22/20 | ... | 6/2/20 |\n|---|---|---|---|---|---|---|---|\n| 0 | NaN | Afghanistan | 33.0000 | 65.0000 | 0 | ... | 16509 |\n| 1 | NaN | Albania | 41.1533 | 20.1683 | 0 | ... | 1164 |\n| 2 | NaN | Algeria | 28.0339 | 1.6596 | 0 | ... | 9626 |\n| 3 | NaN | Andorra | 42.5063 | 1.5218 | 0 | ... | 844 |\n| 4 | NaN | Angola | -11.2027 | 17.8739 | 0 | ... | 86 |\n\n5 rows × 137 columns (only the first and last date columns are shown here)\n\n
\n\n\n\n\n```python\ndeaths = pd.read_csv('COVID-19/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')\ndeaths.shape\n```\n\n\n\n\n (266, 137)\n\n\n\n\n```python\ncases[cases['Country/Region']=='United Kingdom']\n```\n\n\n\n\n
\n\n| | Province/State | Country/Region | Lat | Long | 1/22/20 | ... | 6/2/20 |\n|---|---|---|---|---|---|---|---|\n| 217 | Bermuda | United Kingdom | 32.3078 | -64.7505 | 0 | ... | 141 |\n| 218 | Cayman Islands | United Kingdom | 19.3133 | -81.2546 | 0 | ... | 151 |\n| 219 | Channel Islands | United Kingdom | 49.3723 | -2.3644 | 0 | ... | 560 |\n| 220 | Gibraltar | United Kingdom | 36.1408 | -5.3536 | 0 | ... | 172 |\n| 221 | Isle of Man | United Kingdom | 54.2361 | -4.5481 | 0 | ... | 336 |\n| 222 | Montserrat | United Kingdom | 16.7425 | -62.1874 | 0 | ... | 11 |\n| 223 | NaN | United Kingdom | 55.3781 | -3.4360 | 0 | ... | 277985 |\n| 248 | Anguilla | United Kingdom | 18.2206 | -63.0686 | 0 | ... | 3 |\n| 249 | British Virgin Islands | United Kingdom | 18.4207 | -64.6400 | 0 | ... | 8 |\n| 250 | Turks and Caicos Islands | United Kingdom | 21.6940 | -71.7979 | 0 | ... | 12 |\n| 257 | Falkland Islands (Malvinas) | United Kingdom | -51.7963 | -59.5236 | 0 | ... | 13 |\n\n11 rows × 137 columns (only the first and last date columns are shown here)\n\n
\n\n\n\n\n```python\ncols = cases.columns.tolist() \ncase = cases.loc[223,][4:]\ndeath = deaths.loc[223,][4:]\n```\n\n\n```python\nplt.figure(figsize=(20,10))\nsns.set(font_scale=3) # crazy big\nplt.legend(fontsize='x-large', title_fontsize='1000')\nsns.set_style(style='white')\nplt.legend(fontsize='x-large', title_fontsize='10000')\nsns.scatterplot(np.arange(len(death)),death, label='death');\nsns.scatterplot(x=np.arange(len(case)), y=case, label='case');\nplt.ylabel('')\nplt.title('')\nplt.xticks(np.arange(0, 115,15), ('1/22', '2/6', '2/21', '3/16', '3/31', '4/15','4/30', '04/05'));\nplt.savefig('UKcovid19.png', format='png', dpi=200)\n```\n\n## Deterministic SIR model for UK\n\nUsing this code : https://github.com/rajeshrinet/pyross/blob/master/examples/deterministic/ex03-age-structured-SIR-for-India.ipynb\n\nAssume that the population has been partitioned into $i=1,\\ldots, M$ age groups and that we have available the $M\\times M$ contact matrix $C_{ij}$. We assume all initial cases are symptomatic, and remain so.\n\nSee SIR model : pyross/deterministic.pyx\n\n\n```python\n# Generate class with contact matrix for SIR model with UK contact structure\ngenerator = pyross.contactMatrix.SIR(CH, CW, CS, CO)\n```\n\nThe infection parameter $\\beta$ is unknown, so we fit it to the case data till 25th March. \n\n\n```python\n## Parameters of the model (random)\n\nbeta = 0.01546692 # infection rate assumed intrinsic to the pathogen\ngIa = 1./7 # recovery rate of asymptomatic infectives (7 days)\ngIs = 1./7 # recovery rate of symptomatic infectives \nalpha = 0. # fraction of asymptomatic infectives\nfsa = 1 # the self-isolation parameter \n \n \n# initial conditions \nIs_0 = np.zeros((M)); Is_0[0:15]=200\nIa_0 = np.zeros((M)) # no asymptomatic infectives\nR_0 = np.zeros((M))\nS_0 = Ni - (Ia_0 + Is_0 + R_0)\n```\n\n\n```python\n# matrix for linearised dynamics\nL0 = np.zeros((M, M))\nL = np.zeros((2*M, 2*M))\n\nfor i in range(M):\n for j in range(M):\n L0[i,j]=C[i,j]*Ni[i]/Ni[j]\n\nL[0:M, 0:M] = alpha*beta/gIs*L0\nL[0:M, M:2*M] = fsa*alpha*beta/gIs*L0\nL[M:2*M, 0:M] = ((1-alpha)*beta/gIs)*L0\nL[M:2*M, M:2*M] = fsa*((1-alpha)*beta/gIs)*L0\n\n\nr0 = np.max(np.linalg.eigvals(L))\nprint(\"The basic reproductive ratio for these parameters is\", r0)\n```\n\n The basic reproductive ratio for these parameters is (1.264513426078052+0j)\n\n\n\n```python\n# instantiate model\nparameters = {'alpha':alpha,'beta':beta, 'gIa':gIa,'gIs':gIs,'fsa':fsa}\nmodel = pyross.deterministic.SIR(parameters, M, Ni)\n```\n\n\n```python\n# the contact structure is independent of time \ndef contactMatrix(t):\n return C\n```\n\n\n```python\n# time_points to solve the ode (using odeint) Ti = 0 by default \nTf=350; Nf=3500; #Tf is the final day np.linspace(Ti, Tf, Nf) \n```\n\n\n```python\n# run model\ndata=model.simulate(S_0, Ia_0, Is_0, contactMatrix, Tf, Nf)\n```\n\n\n```python\ndata['X'].shape # 48 because 16 groups * 4 equations \n```\n\n\n\n\n (3500, 48)\n\n\n\n\n```python\nt = data['t']; IC = np.zeros((Nf))\nfor i in range(M):\n IC += data['X'][:,2*M+i]\n```\n\n\n```python\nindex_max = np.argmax(IC)\n```\n\n\n```python\nfig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')\nplt.rcParams.update({'font.size': 12})\nplt.plot(t, IC, '-', lw=4, color='#A60628', label='forecast', alpha=0.8)\nplt.axvline(x=index_max/10, ymin=0, ymax=175000, color='#A60628')\nday, cases = np.array(np.arange(1,Tf)), np.array(case[0:Tf])\nplt.plot(cases, 'o-', lw=4, color='#348ABD', ms=5, label='data', 
alpha=0.5)\nplt.legend(fontsize=15, loc='upper left'); plt.grid() \nplt.autoscale(enable=True, axis='x', tight=True)\nplt.ylabel('Infected individuals');\nplt.title('No measure');\nplt.savefig('FullmatrixC.png', format='png', dpi=200)\n```\n\n\n```python\nSC = np.zeros((Nf))\nfor i in range(M):\n SC += data.get('X')[:,0*M+i]\n IC += data.get('X')[:,2*M+i]\n\nfig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')\nplt.rcParams.update({'font.size': 22})\n\nplt.plot(t, SC*10**(-6), '-', lw=4, color='#348ABD', label='susceptible', alpha=0.8,)\nplt.fill_between(t, 0, SC*10**(-6), color=\"#348ABD\", alpha=0.3)\n\nplt.plot(t, IC*10**(-6), '-', lw=4, color='#A60628', label='infected', alpha=0.8)\nplt.fill_between(t, 0, IC*10**(-6), color=\"#A60628\", alpha=0.3)\n\n\nplt.plot(cases*10**(-6), 'ro-', lw=4, color='dimgrey', ms=16, label='data', alpha=0.5)\n\nplt.legend(fontsize=26); plt.grid() \nplt.autoscale(enable=True, axis='x', tight=True)\nplt.ylabel('Individuals (millions)')\nplt.xticks(np.arange(0, Tf, 90), ('22/01', '30/04' ));\nplt.savefig('C-SIRNomesure.png', format='png', dpi=200)\n```\n\n\n```python\n# matrix for linearised dynamics\nL0 = np.zeros((M, M))\nL = np.zeros((2*M, 2*M))\nxind=[np.argsort(IC)[-1]]\n\nrr = np.zeros((Tf))\n\nfor tt in range(Tf):\n Si = np.array((data['X'][tt*10,0:M])).flatten()\n for i in range(M):\n for j in range(M):\n L0[i,j]=C[i,j]*Si[i]/Ni[j]\n L[0:M, 0:M] = alpha*beta/gIs*L0\n L[0:M, M:2*M] = fsa*alpha*beta/gIs*L0\n L[M:2*M, 0:M] = ((1-alpha)*beta/gIs)*L0\n L[M:2*M, M:2*M] = fsa*((1-alpha)*beta/gIs)*L0\n\n rr[tt] = np.real(np.max(np.linalg.eigvals(L)))\n \n \nfig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')\nplt.rcParams.update({'font.size': 22})\n\nplt.plot(t[::10], rr, 'o', lw=4, color='#A60628', label='suscetible', alpha=0.8,)\nplt.fill_between(t, 0, t*0+1, color=\"dimgrey\", alpha=0.2); plt.ylabel('Basic reproductive ratio')\nplt.ylim(np.min(rr)-.1, np.max(rr)+.1)\nplt.xticks(np.arange(0, Tf, 90), ('22/01', '30/04' ));\nplt.savefig('C-R0Nomesure.png', format='png', dpi=200)\n```\n\n\n```python\nfig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')\nplt.rcParams.update({'font.size': 22})\n\nplt.bar(np.arange(16),data.get('X')[0,0:M]*10**(-6), label='susceptible (initial)', alpha=0.8)\nplt.bar(np.arange(16),data.get('X')[-1,0:M]*10**(-6), label='susceptible (final)', alpha=0.8)\n\nplt.xticks(np.arange(-0.4, 16.45, 3.95), ('0', '20', '40', '60', '80'));\nplt.xlim(-0.45, 15.45); plt.ylabel('Individuals (millions)'); plt.xlabel('Age')\nplt.legend(fontsize=22); plt.axis('tight')\nplt.autoscale(enable=True, axis='x', tight=True)\n\nplt.savefig('C-indsusNomesure.png', format='png', dpi=200)\n```\n\n### Mortality \n\nWe extract the number of susceptibles remaining in each age group, and the difference with the initial number of susceptibles is the total number that are infected. 
We multiply this with mortality data from China to obtain mortality estimates.\n\n\n\n\n```python\nMM = np.array((0,0,.0,1,1,1,1,1,1,3.5,3.5,3.5,3.5,6,6,14.2)) \n```\n\n\n```python\nfig = plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')\nplt.rcParams.update({'font.size': 22})\n\nm1 = .01*MM*(data.get('X')[0,0:M]-data['X'][-1,0:M])\nplt.bar(np.arange(16),m1*10**(-6), label='susceptible (final)', alpha=0.8)\n\nplt.axis('tight'); plt.xticks(np.arange(-0.4, 16.45, 3.95), ('0', '20', '40', '60', '80'));\nplt.xlim(-0.45, 15.45); plt.ylabel('Mortality (millions)'); plt.xlabel('Age')\n\nplt.autoscale(enable=True, axis='x', tight=True)\n\nplt.savefig('C-mortalityNomesure.png', format='png', dpi=200)\n\n```\n\n##\u00a0Non Pharmaceutical intervention\n\n# School closure\n\nFriday, March 20 in UK\n\n\n### Change the day to open again schools\n\n\n```python\ndayclosure = 58\ndayopen1 = dayclosure+60\ndayopen2 = dayclosure+80\ndayopen3 = dayclosure+200\n\n```\n\n\n```python\nmodel = pyross.deterministic.SIR(parameters, M, Ni)\n```\n\n\n```python\n# the contact matrix is time-dependent\ndef contactMatrix1(t):\n if there.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n# Examples: \n# Factored form: 1/(x**2*(x**2 + 1))\n# Expanded form: 1/(x**4+x**2)\n\nimport sympy as sym\nfrom IPython.display import Latex, display, Markdown, Javascript, clear_output\nfrom ipywidgets import widgets, Layout # Interactivity module\n```\n\n## Decomposizione in fratti semplici\n\nQuando si utilizza la trasformata di Laplace per l'analisi dei sistemi, la trasformata di Laplace del segnale di uscita si ottiene come prodotto della funzione di trasferimento del sistema per la trasformata di Laplace del segnale di ingresso. Il risultato di questa moltiplicazione solitamente \u00e8 abbastanza complesso da interpretare. Per eseguire la trasformata inversa di Laplace, si esegue prima la decomposizione in fratti semplici. Questo esempio dimostra questa procedura.\n\n---\n\n### Come usare questo notebook?\nAlterna tra l'opzione *Input da funzione* o *Input da coefficienti polinomiali*.\n\n1. *Input da funzione*:\n * Esempio: per inserire la funzione $\\frac{1}{x^2(x^2 + 1)}$ (formato fattorizzato) digitare 1/(x\\*\\*2\\*(x\\*\\*2 + 1)); per inserire la stessa funzione nella forma espansa ($\\frac{1}{x^4+x^2}$) digitare 1/(x\\*\\*4+x\\*\\*2).\n\n2. 
*Input da coefficienti polinomiali*:\n * Usa i cursori per selezionare l'ordine del numeratore e del denominatore della funzione razionale di interesse.\n * Inserisci i coefficienti sia del numeratore che del denominatore nelle caselle di testo dedicate e fai clic su *Conferma*.\n\n\n```python\n## System selector buttons\nstyle = {'description_width': 'initial'}\ntypeSelect = widgets.ToggleButtons(\n options=[('Input da funzione', 0), ('Input da coefficienti polinomiali', 1),],\n description='Select: ',style={'button_width':'230px'})\n\nbtnReset=widgets.Button(description=\"Reset\")\n\n# function\ntextbox=widgets.Text(description=('Inserisci la funzione:'),style=style)\nbtnConfirmFunc=widgets.Button(description=\"Conferma\") # ex btnConfirm\n\n# poly\nbtnConfirmPoly=widgets.Button(description=\"Conferma\") # ex btn\n\ndisplay(typeSelect)\n\ndef on_button_clickedReset(ev):\n display(Javascript(\"Jupyter.notebook.execute_cells_below()\"))\n\ndef on_button_clickedFunc(ev):\n eq = sym.sympify(textbox.value)\n\n if eq==sym.factor(eq):\n display(Markdown('La funzione $%s$ \u00e8 scritta in forma fattorizzata. ' %sym.latex(eq) + 'La sua forma espansa \u00e8 $%s$.' %sym.latex(sym.expand(eq))))\n \n else:\n display(Markdown('La funzione $%s$ \u00e8 scritta in forma espansa. ' %sym.latex(eq) + 'La sua forma fattorizzata \u00e8 $%s$.' %sym.latex(sym.factor(eq))))\n \n display(Markdown('Il risultato della decomposizione in fratti semplici \u00e8: $%s$' %sym.latex(sym.apart(eq)) + '.'))\n display(btnReset)\n \ndef transfer_function(num,denom):\n num = np.array(num, dtype=np.float64)\n denom = np.array(denom, dtype=np.float64)\n len_dif = len(denom) - len(num)\n if len_dif<0:\n temp = np.zeros(abs(len_dif))\n denom = np.concatenate((temp, denom))\n transferf = np.vstack((num, denom))\n elif len_dif>0:\n temp = np.zeros(len_dif)\n num = np.concatenate((temp, num))\n transferf = np.vstack((num, denom))\n return transferf\n\ndef f(orderNum, orderDenom):\n global text1, text2\n text1=[None]*(int(orderNum)+1)\n text2=[None]*(int(orderDenom)+1)\n display(Markdown('2. Inserisci i coefficienti del numeratore.'))\n for i in range(orderNum+1):\n text1[i]=widgets.Text(description=(r'a%i'%(orderNum-i)))\n display(text1[i])\n display(Markdown('3. 
Inserisci i coefficienti del denominatore.')) \n for j in range(orderDenom+1):\n text2[j]=widgets.Text(description=(r'b%i'%(orderDenom-j)))\n display(text2[j])\n global orderNum1, orderDenom1\n orderNum1=orderNum\n orderDenom1=orderDenom\n\ndef on_button_clickedPoly(btn):\n clear_output()\n global num,denom\n enacbaNum=\"\"\n enacbaDenom=\"\"\n num=[None]*(int(orderNum1)+1)\n denom=[None]*(int(orderDenom1)+1)\n for i in range(int(orderNum1)+1):\n if text1[i].value=='' or text1[i].value=='Please insert a coefficient':\n text1[i].value='Please insert a coefficient'\n else:\n try:\n num[i]=int(text1[i].value)\n except ValueError:\n if text1[i].value!='' or text1[i].value!='Please insert a coefficient':\n num[i]=sym.var(text1[i].value)\n \n for i in range (len(num)-1,-1,-1):\n if i==0:\n enacbaNum=enacbaNum+str(num[len(num)-i-1])\n elif i==1:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x+\"\n elif i==int(len(num)-1):\n enacbaNum=enacbaNum+str(num[0])+\"*x**\"+str(len(num)-1)\n else:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x**\"+str(i) \n \n for j in range(int(orderDenom1)+1):\n if text2[j].value=='' or text2[j].value=='Please insert a coefficient':\n text2[j].value='Please insert a coefficient'\n else:\n try:\n denom[j]=int(text2[j].value)\n except ValueError:\n if text2[j].value!='' or text2[j].value!='Please insert a coefficient':\n denom[j]=sym.var(text2[j].value)\n \n for i in range (len(denom)-1,-1,-1):\n if i==0:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])\n elif i==1:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])+\"*x\"\n elif i==int(len(denom)-1):\n enacbaDenom=enacbaDenom+str(denom[0])+\"*x**\"+str(len(denom)-1)\n else:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])+\"*x**\"+str(i)\n \n funcSym=sym.sympify('('+enacbaNum+')/('+enacbaDenom+')')\n\n DenomSym=sym.sympify(enacbaDenom)\n NumSym=sym.sympify(enacbaNum)\n DenomSymFact=sym.factor(DenomSym);\n funcFactSym=NumSym/DenomSymFact;\n \n if DenomSym==sym.expand(enacbaDenom):\n if DenomSym==DenomSymFact:\n display(Markdown('La funzione di interesse \u00e8: $%s$. Il numeratore non pu\u00f2 essere fattorizzato.' %sym.latex(funcSym)))\n else:\n display(Markdown('La funzione di interesse \u00e8: $%s$. Il numeratore non pu\u00f2 essere fattorizzato. La funzione con il denominatore fattorizzato \u00e8: $%s$.' %(sym.latex(funcSym), sym.latex(funcFactSym))))\n\n if sym.apart(funcSym)==funcSym:\n display(Markdown('La decomposizione in fratti semplici non pu\u00f2 essere eseguita.'))\n else:\n display(Markdown('Il risultato della decomposizione in fratti semplici \u00e8: $%s$' %sym.latex(sym.apart(funcSym)) + '.'))\n \n btnReset.on_click(on_button_clickedReset)\n display(btnReset)\n \ndef partial_frac(index):\n\n if index==0:\n x = sym.Symbol('x') \n display(widgets.HBox((textbox, btnConfirmFunc)))\n btnConfirmFunc.on_click(on_button_clickedFunc)\n btnReset.on_click(on_button_clickedReset)\n \n elif index==1:\n display(Markdown('1. 
Definisci l\\'ordine del numeratore (orderNum) e del denominatore (orderDenom).'))\n widgets.interact(f, orderNum=widgets.IntSlider(min=0,max=10,step=1,value=0),\n orderDenom=widgets.IntSlider(min=0,max=10,step=1,value=0));\n btnConfirmPoly.on_click(on_button_clickedPoly)\n display(btnConfirmPoly) \n\ninput_data=widgets.interactive_output(partial_frac,{'index':typeSelect})\ndisplay(input_data)\n```\n\n\n ToggleButtons(description='Select: ', options=(('Input da funzione', 0), ('Input da coefficienti polinomiali',\u2026\n\n\n\n Output()\n\n", "meta": {"hexsha": "453e79b91cb996434c12dcdccfa43563d3d4d30c", "size": 12122, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_it/examples/02/TD-09-Decomposizione-in-fratti-semplici.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_it/examples/02/.ipynb_checkpoints/TD-09-Decomposizione-in-fratti-semplici-checkpoint.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_it/examples/02/.ipynb_checkpoints/TD-09-Decomposizione-in-fratti-semplici-checkpoint.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 37.88125, "max_line_length": 487, "alphanum_fraction": 0.5364626299, "converted": true, "num_tokens": 2279, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.34510525748676846, "lm_q2_score": 0.2598256322295121, "lm_q1q2_score": 0.08966719171222819}} {"text": "\n\n# Qcamp - Terra\n## IBMQ\n### Donny Greenberg, Kevin Krsulich and Thomas Alexander\n\n\n# Gameplan\n\n* Basics\n * What is Terra?\n * Teleportation\n * QPE a few ways\n* Browsing device info\n* Tips and tricks\n* Learning More, Resources\n\n# What is Terra?\n\nTerra\u2019s core service is the compilation and execution of Quantum circuits for arbitrary backends, and shipping jobs to backends\n * It includes operations for circuit construction, including loading QASM\n * Terra can take the same circuit object and compile and run it on any Quantum hardware or simulator\n * Local simulators are included in Terra and Aer\n * Terra has IBM Q API connections built in - it will send your job to your desired backend and collect the results\n\nKeep in mind:\n\n * Terra is not a language per se, but more of a large piece of infrastructure.\n * Qiskit is very much a work in progress. It is changing rapidly to converge toward the needs of its users. 
We welcome development suggestions and help!\n * See our Github (https://github.com/Qiskit/qiskit-terra).\n\nIn the future, Terra will include:\n\n * OpenPulse, pulse level control of IBM Quantum Hardware (find Thomas and ask him about it!)\n * More sophisticated circuit builder interface for constructing and composing large circuits (find Kevin and ask him about it!)\n\nBut first, install Terra:\n\n\n```python\n!pip install qiskit\n```\n\n Requirement already satisfied: qiskit in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (0.7.0)\n Requirement already satisfied: qiskit-terra<0.8,>=0.7 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit) (0.7.0)\n Requirement already satisfied: qiskit-aer<0.2,>=0.1 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit) (0.1.0)\n Requirement already satisfied: networkx>=2.2 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (2.2)\n Requirement already satisfied: psutil>=5 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (5.4.8)\n Requirement already satisfied: requests-ntlm>=1.1.0 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (1.1.0)\n Requirement already satisfied: scipy!=0.19.1,>=0.19 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (1.1.0)\n Requirement already satisfied: pillow>=4.2.1 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (5.3.0)\n Requirement already satisfied: numpy>=1.13 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (1.15.4)\n Requirement already satisfied: sympy>=1.3 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (1.3)\n Requirement already satisfied: ply>=3.10 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (3.11)\n Requirement already satisfied: jsonschema<2.7,>=2.6 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (2.6.0)\n Requirement already satisfied: marshmallow<3,>=2.16.3 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (2.17.0)\n Requirement already satisfied: marshmallow-polyfield<4,>=3.2 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (3.2)\n Requirement already satisfied: requests>=2.19 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from qiskit-terra<0.8,>=0.7->qiskit) (2.20.1)\n Requirement already satisfied: decorator>=4.3.0 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from networkx>=2.2->qiskit-terra<0.8,>=0.7->qiskit) (4.3.0)\n Requirement already satisfied: ntlm-auth>=1.0.2 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from requests-ntlm>=1.1.0->qiskit-terra<0.8,>=0.7->qiskit) (1.2.0)\n Requirement already satisfied: cryptography>=1.3 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from requests-ntlm>=1.1.0->qiskit-terra<0.8,>=0.7->qiskit) (2.4.1)\n Requirement already satisfied: mpmath>=0.19 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from 
sympy>=1.3->qiskit-terra<0.8,>=0.7->qiskit) (1.0.0)\n Requirement already satisfied: idna<2.8,>=2.5 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from requests>=2.19->qiskit-terra<0.8,>=0.7->qiskit) (2.7)\n Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from requests>=2.19->qiskit-terra<0.8,>=0.7->qiskit) (3.0.4)\n Requirement already satisfied: urllib3<1.25,>=1.21.1 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from requests>=2.19->qiskit-terra<0.8,>=0.7->qiskit) (1.24.1)\n Requirement already satisfied: certifi>=2017.4.17 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from requests>=2.19->qiskit-terra<0.8,>=0.7->qiskit) (2018.11.29)\n Requirement already satisfied: cffi!=1.11.3,>=1.7 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-terra<0.8,>=0.7->qiskit) (1.11.5)\n Requirement already satisfied: asn1crypto>=0.21.0 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-terra<0.8,>=0.7->qiskit) (0.24.0)\n Requirement already satisfied: six>=1.4.1 in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-terra<0.8,>=0.7->qiskit) (1.11.0)\n Requirement already satisfied: pycparser in /Users/talexander/anaconda3/envs/qiskit/lib/python3.6/site-packages (from cffi!=1.11.3,>=1.7->cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-terra<0.8,>=0.7->qiskit) (2.19)\n\n\n# Structural Elements\n\nLet's start building circuits and get acquainted with Terra.\n\n\n```python\n# Housekeeping: uncomment this to suppress deprecation warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n```\n\n\n```python\nfrom qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit\nimport numpy as np\n```\n\n\n```python\n# Create a Quantum Register with 3 qubits\nqr = QuantumRegister(3)\n\n# Create a Classical Register with 3 bits\n# Only necessary if you want to do measurement!\ncr = ClassicalRegister(3)\n\n# Create a Quantum Circuit acting on the qr and cr register\ncircuit = QuantumCircuit(qr, cr)\n```\n\n# QuantumCircuits are the primary unit of computation in Terra\n* QuantumCircuits are backend agnostic\n* They contain:\n * name - for referencing the circuit later (e.g. in the results object)\n * data - a list of gates in the circuit\n * regs - the QuantumRegisters and ClassicalRegisters in the gates of the circuit\n\n# Gates! There are many.\n\nQiskit supports many gates. They are located in the `qiskit/extensions/standard` directory, but are loaded behind the scenes so you don\u2019t need to import them one by one.\n\nGates are technically objects, but in practice you're likely to use them in the form of static functions on the circuit object. 
More info on gates [here](https://github.com/Qiskit/qiskit-tutorial/blob/master/qiskit/terra/summary_of_quantum_operations.ipynb).\n\nThe basis gateset of the IBM Q devices is `{id, u1, u2, u3, cx}`.\n\nAfter we add some gates, we can print our circuit's Qasm:\n\n\n```python\n# Hadamard gate on qubit 0\ncircuit.h(qr[0])\n\n# CNOT (Controlled-NOT) gate from qubit 0 to qubit 1\ncircuit.cx(qr[0], qr[1])\n\nprint(circuit.qasm())\n```\n\n OPENQASM 2.0;\n include \"qelib1.inc\";\n qreg q2[3];\n creg c2[3];\n h q2[0];\n cx q2[0],q2[1];\n \n\n\nWe can also use the CircuitDrawer to visualize the circuit:\n\n\n```python\ncircuit.draw(output='latex')\n```\n\nNow, we have enough to run the circuit. Let's import a backend, in this case a simulator, and run the circuit.\n\n\n```python\nfrom qiskit import Aer, execute\nqasm_backend = Aer.get_backend('qasm_simulator')\n\njob = execute(circuit, qasm_backend)\n\nresult = job.result()\nresult.get_counts(circuit)\n```\n\n\n\n\n {'000': 1024}\n\n\n\nWhoops! We forgot to measure. Let's do that.\n\n\n```python\ncircuit.measure(qr, cr)\ncircuit.draw()\n```\n\n\n\n\n
            \u250c\u2500\u2500\u2500\u2510        \u250c\u2500\u2510\nq2_0: |0>\u2500\u2500\u2500\u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2524M\u251c\n            \u2514\u2500\u2500\u2500\u2518\u250c\u2500\u2534\u2500\u2510\u250c\u2500\u2510\u2514\u2565\u2518\nq2_1: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2524M\u251c\u2500\u256b\u2500\n         \u250c\u2500\u2510     \u2514\u2500\u2500\u2500\u2518\u2514\u2565\u2518 \u2551 \nq2_2: |0>\u2524M\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256b\u2500\u2500\u256b\u2500\n         \u2514\u2565\u2518           \u2551  \u2551 \n c2_0: 0 \u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2569\u2550\n          \u2551            \u2551    \n c2_1: 0 \u2550\u256c\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\n          \u2551                 \n c2_2: 0 \u2550\u2569\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n                            
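\n\nAs a quick aside, Aer also ships a statevector simulator (more on backends later), which is handy for sanity-checking the pre-measurement state. A minimal sketch, assuming a fresh two-qubit Bell circuit so that the measurements we just added don't get in the way:\n\n\n```python\nfrom qiskit import QuantumRegister, QuantumCircuit, Aer, execute\n\n# A small Bell circuit with no measurements\nq = QuantumRegister(2)\nbell = QuantumCircuit(q)\nbell.h(q[0])\nbell.cx(q[0], q[1])\n\n# The statevector simulator returns the final complex amplitudes\nsv_backend = Aer.get_backend('statevector_simulator')\nstate = execute(bell, sv_backend).result().get_statevector(bell)\nprint(state)  # expect amplitude 1/sqrt(2) on |00> and |11>\n```\n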
\n\n\n\n```python\njob = execute(circuit, qasm_backend)\n\nresult = job.result()\nresult.get_counts(circuit)\n```\n\n\n\n\n {'000': 483, '011': 541}\n\n\n\nNote that from version 0.6 onwards, Terra treats the rightmost qubit as qubit 0.\n\n# Qobjs and DAGs\n\nWe actually skipped a few steps that happen under the hood during `execute`, but you'll often ignore these for algorithm development. \n\nExecute calls `compile` to convert the circuit into a `Qobj`, which is a backend-specific object. While doing so, `compile` also calls the `transpiler`, which converts the circuit into a `Directed Acyclic Graph` (`DAG`) of gates, and optimizes the `DAG` for the target execution backend (`DAGs` are much easier to optimize than circuits). We're going to breeze through these for now.\n\n# Compilation Settings - a good picture of Terra\u2019s robustness\n\n```\ndef compile(circuits, backend,\n            config=None, basis_gates=None,\n            coupling_map=None, initial_layout=None,\n            shots=1024, pass_manager=None, memory=False):\n```\n\n* circuits (QuantumCircuit or list[QuantumCircuit]): circuits to compile\n* backend (BaseBackend or str): a backend for which to compile\n* config (dict): dictionary of parameters (e.g. noise) used by the runner - more info [here](https://github.com/Qiskit/qiskit-terra/blob/9149076d16dd98552077e389b21ed2f953d96b2e/src/qasm-simulator-cpp/README.md#config-settings)\n* basis_gates (str): comma-separated basis gate set to compile to\n* coupling_map (list): coupling map (perhaps custom) representing physical qubit connectivity\n* initial_layout (list): user-specified mapping of logical to physical qubits\n* shots (int): number of repetitions of each circuit, for sampling\n* pass_manager (PassManager): a pass manager for the transpiler pipeline\n* memory (bool): if True, per-shot measurement bitstrings are returned as well\n\n(A short usage sketch of calling `compile` directly appears just after the next circuit drawing below.)\n\n# Computational Flow\n\nLet's review our running count of Terra's core objects:\n* QuantumRegister, ClassicalRegister\n* QuantumCircuit\n* Gate\n* Backend\n* DAG\n* Qobj\n* Job, result\n\nAnd the computational flow of Terra is:\n* Gates are added to QuantumCircuits\n* QuantumCircuits are transpiled into DAGs, DAGs are compiled into Qobjs\n* Qobjs are sent to backends\n* Backends return results\n\nSo far we've run a very vanilla Bell state. Let's do some more interesting things.\n\n\n```python\nqr = QuantumRegister(3)\ncr = ClassicalRegister(3)\ncircuit = QuantumCircuit(qr, cr)\ncircuit.ry(np.pi/2, qr[0])\ncircuit.h(qr[1])\ncircuit.cx(qr[1], qr[2])\ncircuit.barrier()\n\ncircuit.cx(qr[0], qr[1])\ncircuit.h(qr[0])\ncircuit.barrier()\n\ncircuit.measure(qr[0], cr[0])\ncircuit.measure(qr[1], cr[1])\n\ncircuit.draw()\n```\n\n\n\n\n
                   \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2591      \u250c\u2500\u2500\u2500\u2510 \u2591    \u250c\u2500\u2510\nq3_0: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 Ry(1.5708) \u251c\u2500\u2591\u2500\u2500\u2500\u25a0\u2500\u2500\u2524 H \u251c\u2500\u2591\u2500\u2500\u2500\u2500\u2524M\u251c\n         \u250c\u2500\u2500\u2500\u2510     \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2591 \u250c\u2500\u2534\u2500\u2510\u2514\u2500\u2500\u2500\u2518 \u2591 \u250c\u2500\u2510\u2514\u2565\u2518\nq3_1: |0>\u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524M\u251c\u2500\u256b\u2500\n         \u2514\u2500\u2500\u2500\u2518\u250c\u2500\u2534\u2500\u2510               \u2591 \u2514\u2500\u2500\u2500\u2518      \u2591 \u2514\u2565\u2518 \u2551 \nq3_2: |0>\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u256b\u2500\u2500\u256b\u2500\n              \u2514\u2500\u2500\u2500\u2518               \u2591            \u2591  \u2551  \u2551 \n c3_0: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2569\u2550\n                                                  \u2551    \n c3_1: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\n\n c3_2: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n                                                       
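\n\nAs flagged in the Compilation Settings section above, here is a short, hedged sketch of driving `compile` by hand instead of letting `execute` do it. This assumes the Terra 0.7-era top-level `compile` helper (later releases split this into `transpile` plus `assemble`) and reuses the `qasm_backend` defined earlier:\n\n\n```python\nfrom qiskit import compile as terra_compile  # alias to avoid shadowing Python's builtin compile\n\n# Turn the circuit into a backend-specific Qobj ourselves\nqobj = terra_compile(circuit, qasm_backend, shots=2048)\n\n# Backends run Qobjs, not circuits\njob = qasm_backend.run(qobj)\nprint(job.result().get_counts(circuit))\n```\n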
\n\n\n\nQiskit allows conditional gates in simulation, but not on the real quantum hardware (yet).\n\n\n```python\ncircuit.z(qr[2]).c_if(cr, 1)\ncircuit.x(qr[2]).c_if(cr, 2)\ncircuit.y(qr[2]).c_if(cr, 3)  # Note that ZX = iY\ncircuit.measure(qr[2], cr[2])\ncircuit.draw()\n```\n\n\n\n\n
                   \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2591      \u250c\u2500\u2500\u2500\u2510 \u2591    \u250c\u2500\u2510                        \nq3_0: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 Ry(1.5708) \u251c\u2500\u2591\u2500\u2500\u2500\u25a0\u2500\u2500\u2524 H \u251c\u2500\u2591\u2500\u2500\u2500\u2500\u2524M\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n         \u250c\u2500\u2500\u2500\u2510     \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2591 \u250c\u2500\u2534\u2500\u2510\u2514\u2500\u2500\u2500\u2518 \u2591 \u250c\u2500\u2510\u2514\u2565\u2518                        \nq3_1: |0>\u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524M\u251c\u2500\u256b\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n         \u2514\u2500\u2500\u2500\u2518\u250c\u2500\u2534\u2500\u2510               \u2591 \u2514\u2500\u2500\u2500\u2518      \u2591 \u2514\u2565\u2518 \u2551 \u250c\u2500\u2500\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2500\u2500\u2510\u250c\u2500\u2510\nq3_2: |0>\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u256b\u2500\u2500\u256b\u2500\u2524  Z  \u251c\u2524  X  \u251c\u2524  Y  \u251c\u2524M\u251c\n              \u2514\u2500\u2500\u2500\u2518               \u2591            \u2591  \u2551  \u2551 \u251c\u2500\u2500\u2534\u2500\u2500\u2524\u251c\u2500\u2500\u2534\u2500\u2500\u2524\u251c\u2500\u2500\u2534\u2500\u2500\u2524\u2514\u2565\u2518\n c3_0: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2569\u2550\u2561     \u255e\u2561     \u255e\u2561     \u255e\u2550\u256c\u2550\n                                                  \u2551    \u2502     \u2502\u2502     \u2502\u2502     \u2502 \u2551 \n c3_1: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2561 = 1 \u255e\u2561 = 2 \u255e\u2561 = 3 \u255e\u2550\u256c\u2550\n                                                       \u2502     \u2502\u2502     \u2502\u2502     \u2502 \u2551 \n c3_2: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561     \u255e\u2561     \u255e\u2561     \u255e\u2550\u2569\u2550\n                                                       
\u2514\u2500\u2500\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2500\u2500\u2518   
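\n\nA quick side check of the `ZX = iY` identity noted in the comment above, using nothing but NumPy:\n\n\n```python\nimport numpy as np\n\nX = np.array([[0, 1], [1, 0]], dtype=complex)\nY = np.array([[0, -1j], [1j, 0]])\nZ = np.array([[1, 0], [0, -1]], dtype=complex)\n\n# The matrix identity Z @ X == i * Y: when both the X and Z corrections are\n# needed (cr == 3), a single Y is equivalent up to a global phase\nprint(np.allclose(Z @ X, 1j * Y))  # True\n```\n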
\n\n\n\nYou can find a more in-depth guide to teleportation in Anna Phan's notebook on the topic, [here](https://github.com/Qiskit/qiskit-tutorial/blob/master/community/terra/qis_intro/teleportation_superdensecoding.ipynb).\n\nLet's see what pops out.\n\n\n```python\njob = execute(circuit, qasm_backend)\n\nresult = job.result()\nresult.get_counts(circuit)\n```\n\n\n\n\n {'001': 128,\n '101': 124,\n '110': 132,\n '111': 137,\n '010': 119,\n '000': 138,\n '100': 115,\n '011': 131}\n\n\n\nMaybe this isn't accurate enough to tell what state ended up on qubit 2. Let's increase the number of shots.\n\n\n```python\njob = execute(circuit, qasm_backend, shots = 10000)\n\nresult = job.result()\nresult.get_counts(circuit)\n```\n\n\n\n\n {'001': 1205,\n '101': 1224,\n '110': 1289,\n '111': 1219,\n '010': 1315,\n '000': 1245,\n '100': 1208,\n '011': 1295}\n\n\n\nNow let's visualize those results as a histogram.\n\n\n```python\nfrom qiskit.tools.visualization import plot_histogram\n```\n\n\n```python\nplot_histogram(result.get_counts(circuit))\n```\n\nAnd now, calculating the percentage of shots with |0> measured on qubit 2 (but qubit 0 in our results).\n\n\n```python\ncounts = result.get_counts(circuit)\nqubit3_p0 = sum([v for k, v in counts.items() if k[0]=='0'])/10000\nqubit3_p0\n```\n\n\n\n\n 0.506\n\n\n\nOur probability of finding 0 is 50%, which is correct: Ry(pi/2) puts qubit 0 in the |+> state, and teleportation moves that state onto qubit 2. \n\nBut how do we know that we're not in the |-> state, or any other state along the equator of the Bloch sphere? Let's use a Hadamard to see whether our phase is correct. We'll need to delete the final measurement and add the Hadamard to do this.\n\n\n```python\ndel(circuit.data[-1])\ncircuit.h(qr[2])\ncircuit.measure(qr[2], cr[2])\ncircuit.draw(line_length=200)\n```\n\n\n\n\n
                   \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2591      \u250c\u2500\u2500\u2500\u2510 \u2591    \u250c\u2500\u2510                             \nq3_0: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 Ry(1.5708) \u251c\u2500\u2591\u2500\u2500\u2500\u25a0\u2500\u2500\u2524 H \u251c\u2500\u2591\u2500\u2500\u2500\u2500\u2524M\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n         \u250c\u2500\u2500\u2500\u2510     \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2591 \u250c\u2500\u2534\u2500\u2510\u2514\u2500\u2500\u2500\u2518 \u2591 \u250c\u2500\u2510\u2514\u2565\u2518                             \nq3_1: |0>\u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524M\u251c\u2500\u256b\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n         \u2514\u2500\u2500\u2500\u2518\u250c\u2500\u2534\u2500\u2510               \u2591 \u2514\u2500\u2500\u2500\u2518      \u2591 \u2514\u2565\u2518 \u2551 \u250c\u2500\u2500\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2500\u2500\u2510\u250c\u2500\u2500\u2500\u2510\u250c\u2500\u2510\nq3_2: |0>\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u256b\u2500\u2500\u256b\u2500\u2524  Z  \u251c\u2524  X  \u251c\u2524  Y  \u251c\u2524 H \u251c\u2524M\u251c\n              \u2514\u2500\u2500\u2500\u2518               \u2591            \u2591  \u2551  \u2551 \u251c\u2500\u2500\u2534\u2500\u2500\u2524\u251c\u2500\u2500\u2534\u2500\u2500\u2524\u251c\u2500\u2500\u2534\u2500\u2500\u2524\u2514\u2500\u2500\u2500\u2518\u2514\u2565\u2518\n c3_0: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\u2550\u2569\u2550\u2561     \u255e\u2561     \u255e\u2561     \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\n                                                  \u2551    \u2502     \u2502\u2502     \u2502\u2502     \u2502      \u2551 \n c3_1: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\u2550\u2550\u2550\u2561 = 1 \u255e\u2561 = 2 \u255e\u2561 = 3 \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u256c\u2550\n                                                       \u2502     \u2502\u2502     \u2502\u2502     \u2502      \u2551 \n c3_2: 0 
\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2561     \u255e\u2561     \u255e\u2561     \u255e\u2550\u2550\u2550\u2550\u2550\u2550\u2569\u2550\n                                                       \u2514\u2500\u2500\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2500\u2500\u2518        
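\n\nWhy the Hadamard works as a phase check (a side note): H maps |+> to |0> and |-> to |1>, so if the teleported state really is |+> we should measure 0 on that qubit every time; |-> would give 1 every time, and other equatorial states would give a mix. A tiny NumPy check of that mapping:\n\n\n```python\nimport numpy as np\n\nH = np.array([[1, 1], [1, -1]]) / np.sqrt(2)\nplus = np.array([1, 1]) / np.sqrt(2)\nminus = np.array([1, -1]) / np.sqrt(2)\n\nprint(np.round(H @ plus, 6))   # [1. 0.] -> always measure 0\nprint(np.round(H @ minus, 6))  # [0. 1.] -> always measure 1\n```\n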
\n\n\n\n```python\nshots = 100000\njob = execute(circuit, qasm_backend, shots=shots)\n\nresult = job.result()\nresult.get_counts(circuit)\ncounts = result.get_counts(circuit)\nqubit3_p0 = sum([v for k, v in counts.items() if k[0]=='0'])/100000\nqubit3_p0\n```\n\n\n\n\n 1.0\n\n\n\nLooks like our final state is indeed |+>, because our P(|0>) = 100%. \n\n# Example: Phase Estimation\n\nLet's see if we can work out a small phase estimation function, and sanity check it on simulators before trying it on the quantum hardware. \n\n- We'll start with a QFT, which comes directly from Terra. \n- We're going to use the Pauli X as our unitary, which simplifies things a lot\n- I'll then define a function to give me my circuit\n\n\n```python\ndef qft(circ, q, n):\n    \"\"\"n-qubit QFT on q in circ.\"\"\"\n    for j in range(n):\n        for k in range(j):\n            circ.cu1(np.pi / float(2**(j - k)), q[j], q[k])\n        circ.h(q[j])\n```\n\n\n```python\n# Takes in a circuit with an operator on qubit n and appends the qpe circuit\ndef x_qpe(circ, q, n):\n    for i in range(n-1):\n        circ.h(q[i])\n    for j in range(0, n-1, 2):  # Only place a CX^n on every other qubit, because CX^n = I for n even\n        circ.cx(q[j], q[n-1])\n    circ.barrier()\n    qft(circ, q, n-1)\n```\n\nPlay around with the ancilla number, the operator, the initial state, etc., see what happens!\n\n\n```python\n# n-1 is the number of ancilla\nn = 4\nqr = QuantumRegister(n)\ncr = ClassicalRegister(n)\ncircuit = QuantumCircuit(qr, cr)\ncircuit.rx(np.pi/2, qr[n-1])\ncircuit.barrier()\nx_qpe(circuit, qr, n)\n```\n\n\n```python\ncircuit.draw(line_length=200)\n```\n\n\n\n\n
                        \u2591           \u250c\u2500\u2500\u2500\u2510           \u2591 \u250c\u2500\u2500\u2500\u2510                                     \nq4_0: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 H \u251c\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524 H \u251c\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n                        \u2591      \u250c\u2500\u2500\u2500\u2510\u2514\u2500\u2500\u2500\u2518  \u2502        \u2591 \u2514\u2500\u2500\u2500\u2518 \u25021.5708  \u2502       \u250c\u2500\u2500\u2500\u2510              \nq4_1: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2524 H \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 H \u251c\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n                        \u2591 \u250c\u2500\u2500\u2500\u2510\u2514\u2500\u2500\u2500\u2518       \u2502        \u2591                \u25020.7854 \u2514\u2500\u2500\u2500\u2518 \u25021.5708 \u250c\u2500\u2500\u2500\u2510\nq4_2: |0>\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2591\u2500\u2524 H \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u25a0\u2500\u2500\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25a0\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 H \u251c\n         \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2591 \u2514\u2500\u2500\u2500\u2518          \u250c\u2500\u2534\u2500\u2510\u250c\u2500\u2534\u2500\u2510 \u2591                                      \u2514\u2500\u2500\u2500\u2518\nq4_3: |0>\u2524 Rx(1.5708) \u251c\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 X \u251c\u2524 X \u251c\u2500\u2591\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n         \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2591                \u2514\u2500\u2500\u2500\u2518\u2514\u2500\u2500\u2500\u2518 \u2591                                           \n c4_0: 0 
\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n\n c4_1: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n\n c4_2: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n\n c4_3: 0 \u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\u2550\n                                                                                                
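\n\nBefore running anything, a quick classical reminder of what an ideal, textbook phase estimation of X should report (a side note on the math, not a claim about the simplified circuit above): the eigenvalues of X are +1 and -1, i.e. eigenphases 0 and 1/2, and Rx(pi/2)|0> has equal overlap with the two X eigenstates |+> and |->, so we would expect the ancilla readout to point at phase 0 about half the time and phase 1/2 the other half. A tiny NumPy check of the eigenphases:\n\n\n```python\nimport numpy as np\n\nX = np.array([[0, 1], [1, 0]])\neigvals = np.linalg.eigvals(X)\n\n# Phases phi such that each eigenvalue equals exp(2*pi*i*phi)\nphases = np.angle(eigvals) / (2 * np.pi) % 1\nprint(eigvals)  # [ 1. -1.] (order may vary)\nprint(phases)   # [0.  0.5]\n```\n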
\n\n\n\nNow that we have our basic algorithm, let's start trying to test and validate it in **quantum execution environments**.\n\n# Interlude: Backends\n\nQiskit offers connectors into execution `providers`, each with several `backends`:\n\n* BasicAer: Terra's built-in suite of pure-python simulators\n* Aer: Qiskit's suite of high-performance simulators\n* IBMQ: IBM's Quantum devices, and an HPC simulator \n\n\n```python\nfrom qiskit import IBMQ, Aer, BasicAer\n```\n\n# Simulators\n\n* qasm_simulator - a shot-based simulator\n    * Input: a Qobj and execution config\n    * Output: a results object containing a dictionary with basis states and shots per state\n    * {\u201800\u2019: 425, \u201801\u2019: 267, \u201811\u2019: 90}\n    * You can specify a random seed so the probabilistic measurement and noise stays the same\n\n# Simulators\n* statevector_simulator - This is the qasm_simulator with a snapshot at the end\n    * Returns a result object containing a dictionary of basis states with complex amplitudes for each\n* unitary_simulator - Returns the unitary matrix of your circuit!\n* ibmq_qasm_simulator - a public simulator on an HPC machine run by IBM (Note, this is under the IBMQ `provider`)\n\n# Aer vs BasicAer\n- Aer is fast\n- BasicAer is slow\n\n* `aer.noise` - includes sophisticated noise models, which you can find more info about [here](https://qiskit.org/documentation/aer/device_noise_simulation.html)\n* `pip install qiskit` comes with binaries for many platforms so you shouldn\u2019t need to compile cpp (but if you do, check out the [Terra contributing file on github](https://github.com/Qiskit/qiskit-terra/blob/master/.github/CONTRIBUTING.rst) for make instructions.)\n\n\n```python\nbackend = Aer.get_backend(\"qasm_simulator\")\nprint(backend)\n```\n\n qasm_simulator\n\n\n## Ok - Back to Phase Estimation:\n\n\n```python\nqasm_backend = Aer.get_backend('qasm_simulator')\n```\n\nDon't forget to measure! Recall that we don't measure our |u> qubit.\n\n\n```python\ncircuit.barrier()\nfor i in range(n-1):\n    circuit.measure(qr[i], cr[i])\n```\n\n\n```python\nshots = 10000\njob = execute(circuit, qasm_backend, shots = shots)\n\nresult = job.result()\ncounts = result.get_counts(circuit)\n```\n\n\n```python\nplot_histogram(counts)\n```\n\nWe seem to be getting an ok answer, so let's try running on the **Quantum hardware**.\n\n# The IBMQ Provider: Executing on Quantum Hardware\n\n\nTo do this, you'll either get your Q Network API token and URL from the [console](https://q-console.mybluemix.net/) (if you are a member of the Q Network), or you'll need to get an IBM Q Experience API token from the [Q Experience accounts page](https://quantumexperience.ng.bluemix.net/qx/account/advanced).\n\n\n```python\n# IBMQ.enable_account('')\n# uncomment this ^^^ and insert your API key. 
Add a 'url' argument for Q Network users\n\n# Or you can use:\nIBMQ.load_accounts()\n\nprint(\"Available backends:\")\nIBMQ.backends(filters= lambda b: b.hub is None)\n```\n\n Available backends:\n\n\n\n\n\n [,\n ,\n ,\n ,\n ,\n ]\n\n\n\n\n```python\nq_backend = IBMQ.get_backend('ibmqx4')\n```\n\n## Back again to our circuit:\n\n\n```python\nshots = 8192 # Number of shots to run the program (experiment); maximum is 8192 shots.\njob_exp = execute(circuit, q_backend, shots = shots)\n```\n\n\n```python\n# Check the job status\njob_exp.status()\n```\n\n\n\n\n \n\n\n\nFYI, you can also retrieve an old job by its job_id.\n\n\n```python\njobID = job_exp.job_id()\n\nprint('JOB ID: {}'.format(jobID))\n\njob_get=q_backend.retrieve_job(jobID)\njob_get.result().get_counts(circuit)\n```\n\n JOB ID: 5c72d443c426dc0062a3525c\n\n\n\n\n\n {'0110': 343,\n '0000': 2415,\n '0010': 447,\n '0101': 1604,\n '0100': 1792,\n '0011': 566,\n '0111': 412,\n '0001': 613}\n\n\n\nNote that I increase the timeout and wait time considerably - this is often necessary. The defaults can be too short.\n\n\n```python\n# We recommend increasing the timeout to 30 minutes to avoid timeout errors when the queue is long.\nresult_real = job_exp.result(timeout=3600, wait=5)\ncounts = result_real.get_counts(circuit)\nplot_histogram(counts)\n```\n\n# Visualizing Devices, and Pulling Device Info\n\n- Terra has some neat built-in Jupyter magics for browsing device information, such as:\n - qubit error\n - job queues for public devices\n - coupling maps\n\n- More info [here](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/jupyter/jupyter_backend_tools.ipynb).\n\nYou can view the raw properties data for any backend like this:\n\n\n```python\nq_backend.properties()\n```\n\n\n\n\n BackendProperties(backend_name='ibmq_16_melbourne', backend_version='1.0.0', gates=[Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[0]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.001657744694001706)], qubits=[0]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.003315489388003412)], qubits=[0]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[1]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.007657438332263289)], qubits=[1]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.015314876664526578)], qubits=[1]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[2]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.003534263772314583)], qubits=[2]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.007068527544629166)], qubits=[2]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[3]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), 
name='gate_error', unit='', value=0.0011586761277807)], qubits=[3]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0023173522555614)], qubits=[3]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[4]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.001929259572122588)], qubits=[4]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.003858519144245176)], qubits=[4]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[5]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.002771458801925586)], qubits=[5]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.005542917603851172)], qubits=[5]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[6]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0011505196550350427)], qubits=[6]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0023010393100700854)], qubits=[6]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[7]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.001926840787433881)], qubits=[7]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.003853681574867762)], qubits=[7]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[8]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0021222280974397822)], qubits=[8]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0042444561948795645)], qubits=[8]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[9]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0035423748381594455)], qubits=[9]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.007084749676318891)], qubits=[9]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[10]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.001913741446806283)], qubits=[10]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, 
tzinfo=tzutc()), name='gate_error', unit='', value=0.003827482893612566)], qubits=[10]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[11]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.00253643788613922)], qubits=[11]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.00507287577227844)], qubits=[11]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[12]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0033615212439866426)], qubits=[12]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.006723042487973285)], qubits=[12]), Gate(gate='u1', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.0)], qubits=[13]), Gate(gate='u2', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.005804811579962099)], qubits=[13]), Gate(gate='u3', parameters=[Nduv(date=datetime.datetime(2019, 2, 25, 7, 31, 1, tzinfo=tzutc()), name='gate_error', unit='', value=0.011609623159924198)], qubits=[13]), Gate(gate='cx', name='CX1_0', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 26, 7, tzinfo=tzutc()), name='gate_error', unit='', value=0.05827277445210416)], qubits=[1, 0]), Gate(gate='cx', name='CX1_2', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 29, 20, tzinfo=tzutc()), name='gate_error', unit='', value=0.03846091469791432)], qubits=[1, 2]), Gate(gate='cx', name='CX2_3', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 32, 52, tzinfo=tzutc()), name='gate_error', unit='', value=0.04739205412562961)], qubits=[2, 3]), Gate(gate='cx', name='CX4_3', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 36, 11, tzinfo=tzutc()), name='gate_error', unit='', value=0.02860996641163832)], qubits=[4, 3]), Gate(gate='cx', name='CX4_10', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 39, 28, tzinfo=tzutc()), name='gate_error', unit='', value=0.031374925181012675)], qubits=[4, 10]), Gate(gate='cx', name='CX5_4', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 42, 44, tzinfo=tzutc()), name='gate_error', unit='', value=0.051453686904960494)], qubits=[5, 4]), Gate(gate='cx', name='CX5_6', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 46, 7, tzinfo=tzutc()), name='gate_error', unit='', value=0.056018111963786504)], qubits=[5, 6]), Gate(gate='cx', name='CX5_9', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 50, 2, tzinfo=tzutc()), name='gate_error', unit='', value=0.1661321750914002)], qubits=[5, 9]), Gate(gate='cx', name='CX6_8', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 53, 25, tzinfo=tzutc()), name='gate_error', unit='', value=0.03190285369377577)], qubits=[6, 8]), Gate(gate='cx', name='CX7_8', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 8, 56, 52, tzinfo=tzutc()), name='gate_error', unit='', value=0.030998414283986614)], qubits=[7, 8]), Gate(gate='cx', name='CX9_8', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 9, 0, 7, tzinfo=tzutc()), name='gate_error', unit='', 
value=0.054620229393257336)], qubits=[9, 8]), Gate(gate='cx', name='CX9_10', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 9, 4, 10, tzinfo=tzutc()), name='gate_error', unit='', value=0.09468962960456251)], qubits=[9, 10]), Gate(gate='cx', name='CX11_3', parameters=[Nduv(date=datetime.datetime(2019, 2, 17, 9, 34, 13, tzinfo=tzutc()), name='gate_error', unit='', value=0.08470997965114974)], qubits=[11, 3]), Gate(gate='cx', name='CX11_10', parameters=[Nduv(date=datetime.datetime(2019, 2, 17, 9, 27, 43, tzinfo=tzutc()), name='gate_error', unit='', value=0.035970559672938496)], qubits=[11, 10]), Gate(gate='cx', name='CX11_12', parameters=[Nduv(date=datetime.datetime(2019, 2, 17, 9, 30, 59, tzinfo=tzutc()), name='gate_error', unit='', value=0.03696622318009893)], qubits=[11, 12]), Gate(gate='cx', name='CX12_2', parameters=[Nduv(date=datetime.datetime(2019, 2, 23, 9, 22, 46, tzinfo=tzutc()), name='gate_error', unit='', value=0.08406631659128154)], qubits=[12, 2]), Gate(gate='cx', name='CX13_1', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 9, 21, 56, tzinfo=tzutc()), name='gate_error', unit='', value=0.24332932218355438)], qubits=[13, 1]), Gate(gate='cx', name='CX13_12', parameters=[Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='gate_error', unit='', value=0.07046310537371045)], qubits=[13, 12])], general=[], last_update_date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), qubits=[[Nduv(date=datetime.datetime(2019, 2, 23, 7, 31, 25, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=58.03116760943428), Nduv(date=datetime.datetime(2019, 2, 24, 7, 29, 49, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=14.724030341325745), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=5.10007609747442), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.06919999999999993)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=59.83234286514375), Nduv(date=datetime.datetime(2019, 2, 24, 7, 30, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=107.86811061878761), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=5.23868683687039), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.14939999999999998)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=78.83230634480451), Nduv(date=datetime.datetime(2019, 2, 24, 7, 31, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=102.16011660048183), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=5.032936880275877), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.04349999999999998)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=80.71074437501859), Nduv(date=datetime.datetime(2019, 2, 24, 7, 32, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=93.47298373503907), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.896169943144527), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.2086)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', 
value=44.873050065955645), Nduv(date=datetime.datetime(2019, 2, 24, 7, 29, 49, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=31.237964501113378), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=5.027220338176452), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.023900000000000032)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=22.34664027378627), Nduv(date=datetime.datetime(2019, 2, 24, 7, 30, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=40.959721679585265), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=5.067154635274023), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.09240000000000004)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=67.06584621261001), Nduv(date=datetime.datetime(2019, 2, 24, 7, 31, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=57.56186597322083), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.923808354943549), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.03509999999999991)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=37.321197970352195), Nduv(date=datetime.datetime(2019, 2, 24, 7, 32, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=35.41966924608856), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.974518925104361), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.04590000000000005)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=52.68491800085742), Nduv(date=datetime.datetime(2019, 2, 24, 7, 29, 49, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=68.95434334639211), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.739784526128049), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.04410000000000003)], [Nduv(date=datetime.datetime(2019, 2, 20, 7, 23, 13, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=33.78727044438043), Nduv(date=datetime.datetime(2019, 2, 24, 7, 31, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=29.148849628536166), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.963352994843123), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.1129)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=56.6604397300763), Nduv(date=datetime.datetime(2019, 2, 24, 7, 30, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=71.82073067161548), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.945087549654646), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.04590000000000005)], [Nduv(date=datetime.datetime(2019, 2, 22, 7, 35, 6, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=5.62745350040824), Nduv(date=datetime.datetime(2019, 2, 22, 7, 
38, 20, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=21.6457911319178), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=5.005975975723903), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.38339999999999996)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=66.84149299149782), Nduv(date=datetime.datetime(2019, 2, 24, 7, 30, 51, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=86.94911909117845), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.760146191276317), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.03410000000000002)], [Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 35, tzinfo=tzutc()), name='T1', unit='\u00b5s', value=25.15747019319897), Nduv(date=datetime.datetime(2019, 2, 24, 7, 29, 49, tzinfo=tzutc()), name='T2', unit='\u00b5s', value=30.64437222118539), Nduv(date=datetime.datetime(2019, 2, 24, 9, 25, 54, tzinfo=tzutc()), name='frequency', unit='GHz', value=4.968480671260165), Nduv(date=datetime.datetime(2019, 2, 24, 7, 28, 11, tzinfo=tzutc()), name='readout_error', unit='', value=0.1149)]])\n\n\n\n# A Prettier Device Overview\n\n\n```python\nfrom qiskit.tools.jupyter import *\n```\n\n\n\n# Diving into a Specific Backend\n\n\n```python\n%qiskit_backend_monitor q_backend\n```\n\n\n VBox(children=(HTML(value=\"

\n\n## Learning more\n\nThe [qiskit-tutorial](https://github.com/Qiskit/qiskit-tutorial) repo on GitHub has dozens of thoughtful and sophisticated tutorials. \n- We highly recommend going through both the \u201c[qiskit/](https://github.com/Qiskit/qiskit-tutorial/tree/master/qiskit)\u201d directory and the \u201c[community/](https://github.com/Qiskit/qiskit-tutorial/tree/master/community)\u201d directory. \n- We learn new things every time we look through them, and reference them regularly.\n- If you have any questions, come find us or one of the other IBM Q members!\n\n# Review - Quantum Algorithm Building Blocks\n\nFour major building blocks of quantum algorithms:\n\n* Quantum Fourier Transform\n    * Period-finding and phase\u2194norm swapping\n    * Speedup from $O(2^n)$ to $O(n^2)$\n    * E.g. Shor\u2019s algorithm, Quantum Phase Estimation\n* Hamiltonian Evolution\n    * Applying a Hamiltonian to an initial state over an arbitrary time period\n    * Exponential speedup (mostly, with complicated factors)\n    * E.g. HHL, QAOA, QPE\n* Unstructured Search (Grover\u2019s)\n    * Search for a state (string) exhibiting a binary condition (e.g. satisfy my 3SAT problem\u2026)\n    * Speedup of O(\u221an)\n* Variational Optimization\n    * Prepare a quantum state using a parameterized short circuit, use a classical optimizer to optimize parameters toward some desired quality evaluated on the QC (e.g. binary classification)\n    * Speedups vary, usually no guaranteed speedup, but good for NISQ machines\n    * E.g. VQE, VSVM, QAOA\n\n# Quantum Fourier Transform\n\nWe've used it above and it is straightforward to implement, but it is not very intuitive as a building block, and I recommend the [tutorial dedicated to it](https://github.com/Qiskit/qiskit-tutorial/blob/master/community/terra/qis_adv/fourier_transform.ipynb) by Anna Phan. I also highly recommend 3Blue1Brown's video on the [continuous Fourier transform](https://www.youtube.com/watch?v=spUNpyF58BY).\n\n# Hamiltonian Evolution\n\nThis is trickier; we're working on it. For now, the best way to learn about this in Qiskit is in the [Aqua operator class](https://github.com/Qiskit/aqua/blob/master/qiskit_aqua/operator.py#L1119), which includes lots of evolution logic.\n\n# Grover\u2019s Algorithm\n\nPretty straightforward in Terra. See [this notebook](https://github.com/Qiskit/qiskit-tutorial/blob/master/community/algorithms/grover_algorithm.ipynb) by Giacomo Nannicini and Rudy Raymond.\n\n\n\n# Variational Optimization\n\nThis also doesn't have a standalone tutorial, but the [Aqua VQE](https://github.com/Qiskit/aqua/blob/master/qiskit_aqua/algorithms/adaptive/vqe/vqe.py) is a straightforward, well-engineered example of variational optimization. The [Aqua Variational SVM](https://github.com/Qiskit/aqua/blob/master/qiskit_aqua/algorithms/adaptive/qsvm/qsvm_variational.py) is also a good example.\n\n# Learning More - A Longer Course\n\n[This doc](https://docs.google.com/document/d/1WoUQky2NXdbrdGkxaUA28VE7W3fryTQG6ezn8Fw-l4E/edit) details a longer course to fluency in Quantum Programming.\n\n# Time Permitting: Transpilation and the DAGCircuit\n\nThe transpiler is the workhorse of Terra. It\u2019s how we keep circuits backend agnostic and compilable for arbitrary quantum hardware. 
The transpiler in Terra 0.6 was not transparent or extensible enough for increasingly sophisticated transpilation methods, so we tore it down and rewrote it to be much more robust.\n\nThe transpiler now transpiles circuits into circuits, rather than into DAGCircuits. This is much more transparent, and allows the end user to view and understand what individual transpiler passes are doing to their circuit. Here's a sample circuit that won't fit nicely on IBM's hardware (our QPE circuit had nearest-neighbor connections, so these qubit remappers won't do much):\n\n\n```python\nfrom qiskit.transpiler import PassManager\nfrom qiskit.transpiler.passes import BasicSwap, CXCancellation, LookaheadSwap, StochasticSwap\nfrom qiskit.transpiler import transpile\nfrom qiskit.mapper import CouplingMap\n```\n\n\n```python\nqr = QuantumRegister(7, 'q')\ntpl_circuit = QuantumCircuit(qr)\ntpl_circuit.h(qr[3])\ntpl_circuit.cx(qr[0], qr[6])\ntpl_circuit.cx(qr[6], qr[0])\ntpl_circuit.cx(qr[0], qr[1])\ntpl_circuit.cx(qr[3], qr[1])\ntpl_circuit.cx(qr[3], qr[0])\ntpl_circuit.draw()\n```\n\n# Swap mapping\n\nThe most naive thing we can do is simply move qubits around greedily with swaps. Let\u2019s see how the BasicSwap pass does here:\n\n\n```python\ncoupling = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]\n\nsimulator = BasicAer.get_backend('qasm_simulator')\ncoupling_map = CouplingMap(couplinglist=coupling)\npass_manager = PassManager()\npass_manager.append([BasicSwap(coupling_map=coupling_map)])\nbasic_circ = transpile(tpl_circuit, simulator, pass_manager=pass_manager)\nbasic_circ.draw()\n```\n\nNot great. Let\u2019s try Sven Jandura's LookaheadSwap, submitted for the 2018 Qiskit Developer Challenge. Sven\u2019s swap pass was merged into Terra, and we will have two more passes from other winners of the Qiskit Developer Challenge soon! We\u2019re constructing a diverse set of passes, many user-contributed, to meet the wide-ranging needs and mapping scenarios of circuits in the wild.\n\n\n```python\npass_manager = PassManager()\npass_manager.append([LookaheadSwap(coupling_map=coupling_map)])\nlookahead_circ = transpile(tpl_circuit, simulator, pass_manager=pass_manager)\nlookahead_circ.draw()\n```\n\nBetter! One more try with the StochasticSwap:\n\n\n```python\npass_manager = PassManager()\npass_manager.append([StochasticSwap(coupling_map=coupling_map)])\nstoch_circ = transpile(tpl_circuit, simulator, pass_manager=pass_manager)\nstoch_circ.draw()\n```\n\nEven better, but still more room to go. Right now this all happens behind the scenes for many users, but we hope that these tools make digging into transpilation much more accessible to those attempting to squeeze as much performance as possible out of their experiments on hardware.\n\n# Transpiling for Real Hardware\n\nFinally, let's see what the default transpiler does to our circuit to be able to run on a real backend. Note that this will include unrolling into the {U, CX} basis, including the swaps.\n\n\n```python\ntok_circ = transpile(tpl_circuit, backend=q_backend)\ntok_circ.draw(line_length=200)\n```\n\n# Modelling Noise in Aer Based on a Device's Properties\n\nNow that you have these properties, you might want to create a noise model for the qasm_simulator which closely resembles this device. A new feature in Aer allows you to do just that. 
Much of the content below is drawn from [this notebook](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/aer/device_noise_simulation.ipynb).\n\nFirst, let's pick a backend:\n\n\n```python\nIBMQ.backends(filters=lambda b: b.hub is None)\n```\n\n\n\n\n [,\n ,\n ,\n ,\n ,\n ]\n\n\n\nNow, we need to pull the device information:\n\n\n```python\ndevice = IBMQ.get_backend('ibmq_16_melbourne')\nproperties = device.properties()\ncoupling_map = device.configuration().coupling_map\n```\n\nNow, let's construct the device noise model.\n\nNote: The devices don't currently provide gate times, so we will manually provide them for the gates we are interested in, using the optional gate_times argument for basic_device_noise_model.\n\n\n```python\nfrom qiskit.providers.aer import noise\n```\n\n\n```python\n# List of gate times for the ibmq_16_melbourne device\n# Note that the None parameter for u1, u2, u3 is because gate\n# times are the same for all qubits\ngate_times = [\n    ('u1', None, 0), ('u2', None, 50), ('u3', None, 100),\n    ('cx', [1, 0], 678),  # I can add gate times for specific couplings, or all couplings\n    ('cx', [], 600)\n]\n\n# Construct the noise model from backend properties\n# and custom gate times\nnoise_model = noise.device.\\\n    basic_device_noise_model(properties,\n                             gate_times=gate_times)\nprint(noise_model)\n```\n\n NoiseModel:\n Instructions with noise: ['measure', 'u2', 'cx', 'u3']\n Specific qubit errors: [('u2', [0]), ('u2', [1]), ('u2', [2]), ('u2', [3]), ('u2', [4]), ('u2', [5]), ('u2', [6]), ('u2', [7]), ('u2', [8]), ('u2', [9]), ('u2', [10]), ('u2', [11]), ('u2', [12]), ('u2', [13]), ('u3', [0]), ('u3', [1]), ('u3', [2]), ('u3', [3]), ('u3', [4]), ('u3', [5]), ('u3', [6]), ('u3', [7]), ('u3', [8]), ('u3', [9]), ('u3', [10]), ('u3', [11]), ('u3', [12]), ('u3', [13]), ('cx', [1, 0]), ('cx', [1, 2]), ('cx', [2, 3]), ('cx', [4, 3]), ('cx', [4, 10]), ('cx', [5, 4]), ('cx', [5, 6]), ('cx', [5, 9]), ('cx', [6, 8]), ('cx', [7, 8]), ('cx', [9, 8]), ('cx', [9, 10]), ('cx', [11, 3]), ('cx', [11, 10]), ('cx', [11, 12]), ('cx', [12, 2]), ('cx', [13, 1]), ('cx', [13, 12]), ('measure', [0]), ('measure', [1]), ('measure', [2]), ('measure', [3]), ('measure', [4]), ('measure', [5]), ('measure', [6]), ('measure', [7]), ('measure', [8]), ('measure', [9]), ('measure', [10]), ('measure', [11]), ('measure', [12]), ('measure', [13])]\n\n\nNow, let's use this model to simulate our QPE circuit. Note, this can take a few minutes to run.\n\n\n```python\nshots = 1000\nbasis_gates = noise_model.basis_gates\n\n# Select the QasmSimulator from the Aer provider\nsimulator = Aer.get_backend('qasm_simulator')\n\n# Execute noisy simulation and get counts\nresult_noise = execute(circuit, simulator,\n                       shots=shots,\n                       noise_model=noise_model,\n                       coupling_map=coupling_map,\n                       basis_gates=basis_gates).result()\ncounts_noise = result_noise.get_counts(circuit)\nplot_histogram(counts_noise, title=\"Counts for QPE circuit with depolarizing noise model\")\n```\n\n\n```python\n# And now our phase estimate:\nangles = np.array([v*int(k, 2) for k, v in counts_noise.items()]) / shots / 2**(n-1)\nres = 2*sum(angles)\nnp.around(res, decimals=5)\n```\n\nThis is actually worse than we get from the device! 
More tuning to do here.\n\n\n# **CS224W - Colab 3**\n\nIn Colab 2 we constructed GNN models by using PyTorch Geometric's built-in GCN layer, `GCNConv`. In this Colab we will go a step deeper and implement the **GraphSAGE** ([Hamilton et al. (2017)](https://arxiv.org/abs/1706.02216)) layer directly. Then we will run our models on the CORA dataset, which is a standard citation network benchmark dataset.\n\n**Note**: Make sure to **sequentially run all the cells in each section** so that the intermediate variables / packages will carry over to the next cell.\n\nHave fun and good luck on Colab 3 :)\n\n# Device\nWe recommend using a GPU for this Colab.\n\nPlease click `Runtime` and then `Change runtime type`. 
Then set the `hardware accelerator` to **GPU**.\n\n## Installation\n\n\n```python\n# Install torch geometric\nimport os\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html\n !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html\n !pip install torch-geometric\n !pip install -q git+https://github.com/snap-stanford/deepsnap.git\n```\n\n Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html\n Collecting torch-scatter\n Downloading https://data.pyg.org/whl/torch-1.9.0%2Bcu111/torch_scatter-2.0.8-cp37-cp37m-linux_x86_64.whl (10.4 MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 10.4 MB 12.5 MB/s \n \u001b[?25hInstalling collected packages: torch-scatter\n Successfully installed torch-scatter-2.0.8\n Looking in links: https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html\n Collecting torch-sparse\n Downloading https://data.pyg.org/whl/torch-1.9.0%2Bcu111/torch_sparse-0.6.12-cp37-cp37m-linux_x86_64.whl (3.7 MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.7 MB 12.2 MB/s \n \u001b[?25hRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torch-sparse) (1.4.1)\n Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.7/dist-packages (from scipy->torch-sparse) (1.19.5)\n Installing collected packages: torch-sparse\n Successfully installed torch-sparse-0.6.12\n Collecting torch-geometric\n Downloading torch_geometric-2.0.1.tar.gz (308 kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 308 kB 14.4 MB/s \n \u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.19.5)\n Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (4.62.3)\n Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.4.1)\n Requirement already satisfied: networkx in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.6.3)\n Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.22.2.post1)\n Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.23.0)\n Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (1.1.5)\n Collecting rdflib\n Downloading rdflib-6.0.2-py3-none-any.whl (407 kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 407 kB 63.7 MB/s \n \u001b[?25hRequirement already satisfied: googledrivedownloader in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (0.4)\n Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (2.11.3)\n Requirement already satisfied: pyparsing in /usr/local/lib/python3.7/dist-packages (from torch-geometric) 
(2.4.7)\n Collecting yacs\n Downloading yacs-0.1.8-py3-none-any.whl (14 kB)\n Requirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from torch-geometric) (3.13)\n Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->torch-geometric) (2.0.1)\n Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->torch-geometric) (2018.9)\n Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->torch-geometric) (2.8.2)\n Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas->torch-geometric) (1.15.0)\n Collecting isodate\n Downloading isodate-0.6.0-py2.py3-none-any.whl (45 kB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 45 kB 3.5 MB/s \n \u001b[?25hRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from rdflib->torch-geometric) (57.4.0)\n Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (2.10)\n Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (3.0.4)\n Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (1.24.3)\n Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->torch-geometric) (2021.5.30)\n Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn->torch-geometric) (1.0.1)\n Building wheels for collected packages: torch-geometric\n Building wheel for torch-geometric (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for torch-geometric: filename=torch_geometric-2.0.1-py3-none-any.whl size=513822 sha256=0508097be7556c0ef204e9968e6883b06a3417e5d4502507cd4808897763bf9b\n Stored in directory: /root/.cache/pip/wheels/78/3d/42/20589db73c66b5109fb93a0c5743edfd6ab5ca820a52afacfc\n Successfully built torch-geometric\n Installing collected packages: isodate, yacs, rdflib, torch-geometric\n Successfully installed isodate-0.6.0 rdflib-6.0.2 torch-geometric-2.0.1 yacs-0.1.8\n Building wheel for deepsnap (setup.py) ... \u001b[?25l\u001b[?25hdone\n\n\n\n```python\nimport torch_geometric\ntorch_geometric.__version__\n```\n\n\n\n\n '2.0.1'\n\n\n\n# 1) GNN Layers\n\n## Implementing Layer Modules\n\nIn Colab 2, we implemented a GCN model for node and graph classification tasks. However, for that notebook we took advantage of PyG's built in GCN module. For Colab 3, we provide a build upon a general Graph Neural Network Stack, into which we will be able to plugin our own module implementations: GraphSAGE and GAT.\n\nWe will then use our layer implemenations to complete node classification on the CORA dataset, a standard citation network benchmark. In this dataset, nodes correspond to documents and edges correspond to undirected citations. Each node or document in the graph is assigned a class label and features based on the documents binarized bag-of-words representation. Specifically, the Cora graph has 2708 nodes, 5429 edges, 7 prediction classes, and 1433 features per node. 
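\nBefore building any layers, it can help to sanity-check these dataset statistics directly. The short sketch below is purely illustrative (it assumes the `torch_geometric` installation from the cells above, and it downloads Cora on first use); note that PyG stores each undirected citation as two directed edges, so the edge count it reports is roughly double the 5429 citations quoted above.\n\n\n```python\n# Illustrative sketch: a quick peek at the Cora dataset used for training later in this notebook.\nfrom torch_geometric.datasets import Planetoid\n\ncora_dataset = Planetoid(root='/tmp/cora', name='Cora')  # downloads the data on first use\ncora_graph = cora_dataset[0]  # Cora is a single graph\n\nprint(cora_dataset.num_classes, cora_dataset.num_node_features)  # expect 7 classes and 1433 features\nprint(cora_graph.num_nodes, cora_graph.num_edges)  # nodes, and edges counted once per direction\n```\n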
\n\n## GNN Stack Module\n\nBelow is the implementation of a general GNN stack, where we can plug in any GNN layer, such as **GraphSage**, **GAT**, etc. This module is provided for you. Your implementations of the **GraphSage** and **GAT** (Colab 4) layers will function as components in the GNNStack Module.\n\n\n```python\nimport torch\nimport torch_scatter\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport torch_geometric.nn as pyg_nn\nimport torch_geometric.utils as pyg_utils\n\nfrom torch import Tensor\nfrom typing import Union, Tuple, Optional\nfrom torch_geometric.typing import (OptPairTensor, Adj, Size, NoneType,\n                                    OptTensor)\n\nfrom torch.nn import Parameter, Linear\nfrom torch_sparse import SparseTensor, set_diag\nfrom torch_geometric.nn.conv import MessagePassing\nfrom torch_geometric.utils import remove_self_loops, add_self_loops, softmax\n\nclass GNNStack(torch.nn.Module):\n    def __init__(self, input_dim, hidden_dim, output_dim, args, emb=False):\n        super(GNNStack, self).__init__()\n        conv_model = self.build_conv_model(args.model_type)\n        self.convs = nn.ModuleList()\n        self.convs.append(conv_model(input_dim, hidden_dim))\n        assert (args.num_layers >= 1), 'Number of layers is not >=1'\n        for l in range(args.num_layers-1):\n            self.convs.append(conv_model(args.heads * hidden_dim, hidden_dim))\n\n        # post-message-passing\n        self.post_mp = nn.Sequential(\n            nn.Linear(args.heads * hidden_dim, hidden_dim), nn.Dropout(args.dropout),\n            nn.Linear(hidden_dim, output_dim))\n\n        self.dropout = args.dropout\n        self.num_layers = args.num_layers\n\n        self.emb = emb\n\n    def build_conv_model(self, model_type):\n        if model_type == 'GraphSage':\n            return GraphSage\n        elif model_type == 'GAT':\n            # When applying GAT with num heads > 1, you need to modify the\n            # input and output dimension of the conv layers (self.convs),\n            # to ensure that the input dim of the next layer is num heads\n            # multiplied by the output dim of the previous layer.\n            # HINT: In case you want to play with multiheads, you need to change the for-loop that builds up self.convs to be\n            # self.convs.append(conv_model(hidden_dim * num_heads, hidden_dim)),\n            # and also the first nn.Linear(hidden_dim * num_heads, hidden_dim) in post-message-passing.\n            return GAT\n\n    def forward(self, data):\n        x, edge_index, batch = data.x, data.edge_index, data.batch\n\n        for i in range(self.num_layers):\n            x = self.convs[i](x, edge_index)\n            x = F.relu(x)\n            x = F.dropout(x, p=self.dropout, training=self.training)\n\n        x = self.post_mp(x)\n\n        if self.emb == True:\n            return x\n\n        return F.log_softmax(x, dim=1)\n\n    def loss(self, pred, label):\n        return F.nll_loss(pred, label)\n```\n\n## Creating Our Own Message Passing Layer\n\nNow let's start implementing our own message passing layers! Working through this part will help us become acutely familiar with the behind-the-scenes work of implementing PyTorch message passing layers, allowing us to build our own GNN models. To do so, we will work with and implement three critical functions needed to define a PyG message passing layer: `forward`, `message`, and `aggregate`.\n\nBefore diving head first into the coding details, let us quickly review the key components of the message passing process. To do so, we will focus on a single round of message passing with respect to a single central node $x$. Before message passing, $x$ is associated with a feature vector $x^{l-1}$, and the goal of message passing is to update this feature vector as $x^l$. 
To do so, we implement the following steps: 1) each neighboring node $v$ passes its current message $v^{l-1}$ across the edge $(x, v)$; 2) for the node $x$, we aggregate all of the messages of the neighboring nodes (for example through a sum or mean); and 3) we transform the aggregated information by, for example, applying linear and non-linear transformations. Altogether, the message passing process is applied such that every node $u$ in our graph updates its embedding by acting as the central node $x$ in steps 1-3 described above. \n\nNow, extending this process to a full message passing layer: the job of a message passing layer is to update the current feature representation or embedding of each node in a graph by propagating and transforming information within the graph. Overall, the general paradigm of a message passing layer is: 1) pre-processing -> 2) **message passing** / propagation -> 3) post-processing. \n\nThe `forward` function that we will implement for our message passing layer captures this execution logic. Namely, the `forward` function handles the pre- and post-processing of node features / embeddings, as well as initiating message passing by calling the `propagate` function. \n\n\nThe `propagate` function encapsulates the message passing process! It does so by calling three important functions: 1) `message`, 2) `aggregate`, and 3) `update`. Our implementation will vary slightly from this, as we will not explicitly implement `update`, but instead place the logic for updating node embeddings after message passing and within the `forward` function. To be more specific, after information is propagated (message passing), we can further transform the node embeddings output by `propagate`. Therefore, the output of `forward` is exactly the node embeddings after one GNN layer.\n\nLastly, before starting to implement our own layer, let us dig a bit deeper into each of the functions described above:\n\n1. \n\n```\ndef propagate(edge_index, x=(x_i, x_j), extra=(extra_i, extra_j), size=size):\n```\nCalling `propagate` initiates the message passing process. Looking at the function parameters, we highlight a couple of key parameters. \n\n - `edge_index` is passed to the forward function and captures the edge structure of the graph.\n - `x=(x_i, x_j)` represents the node features that will be used in message passing. In order to explain why we pass the tuple `(x_i, x_j)`, we first look at how our edges are represented. For every edge $(i, j) \\in \\mathcal{E}$, we can differentiate $i$ as the central node ($x_{central}$) and $j$ as the neighboring node ($x_{neighbor}$). \n \n Taking the example of message passing above, for a central node $u$ we will aggregate and transform all of the messages associated with the nodes $v$ s.t. $(u, v) \\in \\mathcal{E}$ (i.e. $v \\in \\mathcal{N}_{u}$). Thus we see that the subscripts `_i` and `_j` allow us to specifically differentiate features associated with central nodes (i.e. nodes receiving message information) and neighboring nodes (i.e. nodes passing messages). \n\n This is definitely a somewhat confusing concept; however, one key thing to remember / wrap your head around is that depending on the perspective, a node $x$ acts as a central node or a neighboring node. In fact, in undirected graphs we store both edge directions (i.e. $(i, j)$ and $(j, i)$). From the central node perspective, `x_i`, x is collecting neighboring information to update its embedding. 
From a neighboring node perspective, `x_j`, x is passing its message information along the edge connecting it to a different central node.\n\n - `extra=(extra_i, extra_j)` represents additional information that we can associate with each node beyond its current feature embedding. In fact, we can include as many additional parameters of the form `param=(param_i, param_j)` as we would like. Again, we highlight that indexing with `_i` and `_j` allows us to differentiate central and neighboring nodes. \n\n The output of the `propagate` function is a matrix of node embeddings after the message passing process and has shape $[N, d]$.\n\n2. \n```\ndef message(x_j, ...):\n```\nThe `message` function is called by `propagate` and constructs the messages from\nneighboring nodes $j$ to central nodes $i$ for each edge $(i, j)$ in *edge_index*. This function can take any argument that was initially passed to `propagate`. Furthermore, we can again differentiate central nodes and neighboring nodes by appending `_i` or `_j` to the variable name, e.g. `x_i` and `x_j`. Looking more specifically at the variables, we have:\n\n - `x_j` represents a matrix of feature embeddings for all neighboring nodes passing their messages along their respective edge (i.e. all nodes $j$ for edges $(i, j) \\in \\mathcal{E}$). Thus, its shape is $[|\\mathcal{E}|, d]$!\n - In implementing GAT we will see how to access additional variables passed to `propagate`.\n\n Critically, we see that the output of the `message` function is a matrix of neighboring node embeddings ready to be aggregated, having shape $[|\\mathcal{E}|, d]$.\n\n3. \n```\ndef aggregate(self, inputs, index, dim_size = None):\n```\nLastly, the `aggregate` function is used to aggregate the messages from neighboring nodes. Looking at the parameters we highlight:\n\n - `inputs` represents a matrix of the messages passed from neighboring nodes (i.e. the output of the `message` function).\n - `index` has one entry per row of `inputs` and tells us the central node corresponding to each of the rows / messages $j$ in the `inputs` matrix. Thus, `index` tells us which rows / messages to aggregate for each central node.\n\n The output of `aggregate` is of shape $[N, d]$.\n\n\nFor additional resources refer to the PyG documentation for implementing custom message passing layers: https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html\n\n## GraphSage Implementation\n\nFor our first GNN layer, we will implement the well-known GraphSage ([Hamilton et al. (2017)](https://arxiv.org/abs/1706.02216)) layer! \n\nFor a given *central* node $v$ with current embedding $h_v^{l-1}$, the message passing update rule to transform $h_v^{l-1} \\rightarrow h_v^l$ is as follows: \n\n\\begin{equation}\nh_v^{(l)} = W_l\\cdot h_v^{(l-1)} + W_r \\cdot AGG(\\{h_u^{(l-1)}, \\forall u \\in N(v) \\})\n\\end{equation}\n\nwhere $W_l$ and $W_r$ are learnable weight matrices and the nodes $u$ are *neighboring* nodes. Additionally, we use mean aggregation for simplicity:\n\n\\begin{equation}\nAGG(\\{h_u^{(l-1)}, \\forall u \\in N(v) \\}) = \\frac{1}{|N(v)|} \\sum_{u\\in N(v)} h_u^{(l-1)}\n\\end{equation}\n\nOne thing to note is that we're adding a **skip connection** to our GraphSage implementation through the term $W_l\\cdot h_v^{(l-1)}$. \n\nBefore implementing this update rule, we encourage you to think about how different parts of the formulas above correspond with the functions outlined earlier: 1) `forward`, 2) `message`, and 3) `aggregate`. 
As a hint, we are given what the aggregation function is (i.e. mean aggregation)! Now the question remains, what are the messages passed by each neighbor nodes and when do we call the `propagate` function? \n\nNote: in this case the message function or messages are actually quite simple. Additionally, remember that the `propagate` function encapsulates the operations of / the outputs of the combined `message` and `aggregate` functions.\n\n\nLastly, $\\ell$-2 normalization of the node embeddings is applied after each iteration.\n\n\nFor the following questions, DON'T refer to any existing implementations online.\n\n\n```python\nclass GraphSage(MessagePassing):\n \n def __init__(self, in_channels, out_channels, normalize = True,\n bias = False, **kwargs): \n super(GraphSage, self).__init__(**kwargs)\n\n self.in_channels = in_channels\n self.out_channels = out_channels\n self.normalize = normalize\n\n self.lin_l = None\n self.lin_r = None\n\n ############################################################################\n # TODO: Your code here! \n # Define the layers needed for the message and update functions below.\n # self.lin_l is the linear transformation that you apply to embedding \n # for central node.\n # self.lin_r is the linear transformation that you apply to aggregated \n # message from neighbors.\n # Don't forget the bias!\n # Our implementation is ~2 lines, but don't worry if you deviate from this.\n\n ############################################################################\n\n self.reset_parameters()\n\n def reset_parameters(self):\n self.lin_l.reset_parameters()\n self.lin_r.reset_parameters()\n\n def forward(self, x, edge_index, size = None):\n \"\"\"\"\"\"\n\n out = None\n\n ############################################################################\n # TODO: Your code here! \n # Implement message passing, as well as any post-processing (our update rule).\n # 1. Call the propagate function to conduct the message passing.\n # 1.1 See the description of propagate above or the following link for more information: \n # https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html\n # 1.2 We will only use the representation for neighbor nodes (x_j), so by default\n # we pass the same representation for central and neighbor nodes as x=(x, x). \n # 2. Update our node embedding with skip connection from the previous layer.\n # 3. If normalize is set, do L-2 normalization (defined in \n # torch.nn.functional)\n #\n # Our implementation is ~5 lines, but don't worry if you deviate from this.\n\n ############################################################################\n\n return out\n\n def message(self, x_j):\n\n out = None\n\n ############################################################################\n # TODO: Your code here! \n # Implement your message function here.\n # Hint: Look at the formulation of the mean aggregation function, focusing on \n # what message each neighboring node passes.\n #\n # Our implementation is ~1 lines, but don't worry if you deviate from this.\n\n ############################################################################\n\n return out\n\n def aggregate(self, inputs, index, dim_size = None):\n\n out = None\n\n # The axis along which to index number of nodes.\n node_dim = self.node_dim\n\n ############################################################################\n # TODO: Your code here! 
\n # Implement your aggregate function here.\n # See here as how to use torch_scatter.scatter: \n # https://pytorch-scatter.readthedocs.io/en/latest/functions/scatter.html#torch_scatter.scatter\n #\n # Our implementation is ~1 lines, but don't worry if you deviate from this.\n\n\n ############################################################################\n\n return out\n\n```\n\n## Building Optimizers\n\nThis function has been implemented for you. **For grading purposes please use the default Adam optimizer**, but feel free to play with other types of optimizers on your own.\n\n\n```python\nimport torch.optim as optim\n\ndef build_optimizer(args, params):\n weight_decay = args.weight_decay\n filter_fn = filter(lambda p : p.requires_grad, params)\n if args.opt == 'adam':\n optimizer = optim.Adam(filter_fn, lr=args.lr, weight_decay=weight_decay)\n elif args.opt == 'sgd':\n optimizer = optim.SGD(filter_fn, lr=args.lr, momentum=0.95, weight_decay=weight_decay)\n elif args.opt == 'rmsprop':\n optimizer = optim.RMSprop(filter_fn, lr=args.lr, weight_decay=weight_decay)\n elif args.opt == 'adagrad':\n optimizer = optim.Adagrad(filter_fn, lr=args.lr, weight_decay=weight_decay)\n if args.opt_scheduler == 'none':\n return None, optimizer\n elif args.opt_scheduler == 'step':\n scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=args.opt_decay_step, gamma=args.opt_decay_rate)\n elif args.opt_scheduler == 'cos':\n scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=args.opt_restart)\n return scheduler, optimizer\n```\n\n## Training and Testing\n\nHere we provide you with the functions to train and test. **Please do not modify this part for grading purposes.**\n\n\n```python\nimport time\n\nimport networkx as nx\nimport numpy as np\nimport torch\nimport torch.optim as optim\nfrom tqdm import trange\nimport pandas as pd\nimport copy\n\nfrom torch_geometric.datasets import TUDataset\nfrom torch_geometric.datasets import Planetoid\nfrom torch_geometric.data import DataLoader\n\nimport torch_geometric.nn as pyg_nn\n\nimport matplotlib.pyplot as plt\n\n\ndef train(dataset, args):\n \n print(\"Node task. 
test set size:\", np.sum(dataset[0]['test_mask'].numpy()))\n print()\n test_loader = loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=False)\n\n # build model\n model = GNNStack(dataset.num_node_features, args.hidden_dim, dataset.num_classes, \n args)\n scheduler, opt = build_optimizer(args, model.parameters())\n\n # train\n losses = []\n test_accs = []\n best_acc = 0\n best_model = None\n for epoch in trange(args.epochs, desc=\"Training\", unit=\"Epochs\"):\n total_loss = 0\n model.train()\n for batch in loader:\n opt.zero_grad()\n pred = model(batch)\n label = batch.y\n pred = pred[batch.train_mask]\n label = label[batch.train_mask]\n loss = model.loss(pred, label)\n loss.backward()\n opt.step()\n total_loss += loss.item() * batch.num_graphs\n total_loss /= len(loader.dataset)\n losses.append(total_loss)\n\n if epoch % 10 == 0:\n test_acc = test(test_loader, model)\n test_accs.append(test_acc)\n if test_acc > best_acc:\n best_acc = test_acc\n best_model = copy.deepcopy(model)\n else:\n test_accs.append(test_accs[-1])\n \n return test_accs, losses, best_model, best_acc, test_loader\n\ndef test(loader, test_model, is_validation=False, save_model_preds=False, model_type=None):\n test_model.eval()\n\n correct = 0\n # Note that Cora is only one graph!\n for data in loader:\n with torch.no_grad():\n # max(dim=1) returns values, indices tuple; only need indices\n pred = test_model(data).max(dim=1)[1]\n label = data.y\n\n mask = data.val_mask if is_validation else data.test_mask\n # node classification: only evaluate on nodes in test set\n pred = pred[mask]\n label = label[mask]\n\n if save_model_preds:\n print (\"Saving Model Predictions for Model Type\", model_type)\n\n data = {}\n data['pred'] = pred.view(-1).cpu().detach().numpy()\n data['label'] = label.view(-1).cpu().detach().numpy()\n\n df = pd.DataFrame(data=data)\n # Save locally as csv\n df.to_csv('CORA-Node-' + model_type + '.csv', sep=',', index=False)\n \n correct += pred.eq(label).sum().item()\n\n total = 0\n for data in loader.dataset:\n total += torch.sum(data.val_mask if is_validation else data.test_mask).item()\n\n return correct / total\n \nclass objectview(object):\n def __init__(self, d):\n self.__dict__ = d\n\n```\n\n## Let's Start the Training!\n\nWe will be working on the CORA dataset on node-level classification.\n\nThis part is implemented for you. 
**For grading purposes, please do not modify the default parameters.** However, feel free to play with different configurations just for fun!\n\n**Submit your best accuracy and loss on Gradescope.**\n\n\n```python\nif 'IS_GRADESCOPE_ENV' not in os.environ:\n    for args in [\n        {'model_type': 'GraphSage', 'dataset': 'cora', 'num_layers': 2, 'heads': 1, 'batch_size': 32, 'hidden_dim': 32, 'dropout': 0.5, 'epochs': 500, 'opt': 'adam', 'opt_scheduler': 'none', 'opt_restart': 0, 'weight_decay': 5e-3, 'lr': 0.01},\n    ]:\n        args = objectview(args)\n        for model in ['GraphSage']:\n            args.model_type = model\n\n            # Match the dimension.\n            if model == 'GAT':\n                args.heads = 2\n            else:\n                args.heads = 1\n\n            if args.dataset == 'cora':\n                dataset = Planetoid(root='/tmp/cora', name='Cora')\n            else:\n                raise NotImplementedError(\"Unknown dataset\")\n            test_accs, losses, best_model, best_acc, test_loader = train(dataset, args)\n\n            print(\"Maximum test set accuracy: {0}\".format(max(test_accs)))\n            print(\"Minimum loss: {0}\".format(min(losses)))\n\n            # Run test for our best model to save the predictions!\n            test(test_loader, best_model, is_validation=False, save_model_preds=True, model_type=model)\n            print()\n\n            plt.title(dataset.name)\n            plt.plot(losses, label=\"training loss\" + \" - \" + args.model_type)\n            plt.plot(test_accs, label=\"test accuracy\" + \" - \" + args.model_type)\n            plt.legend()\n            plt.show()\n\n```\n\n## Question 1.1: What is the maximum accuracy obtained on the test set for GraphSage? (10 points)\n\nRunning the cell above will show the results of your best model and save your best model's predictions to a file named *CORA-Node-GraphSage.csv*. \n\nAs we have seen before, you can view this file by clicking on the *Folder* icon on the left side panel. When you submit your assignment, you will have to download this file and attach it to your submission.\n\n\n```\n# this mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)\n\n# enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 
'cs231n/assignments/assignment3/'\nFOLDERNAME = \"CS231n/assignment2\"\nassert FOLDERNAME is not None, \"[!] Enter the foldername.\"\n\n# now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# this downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content\n```\n\n Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly&response_type=code\n \n Enter your authorization code:\n \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n Mounted at /content/drive\n /content/drive/My Drive/CS231n/assignment2/cs231n/datasets\n /content\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. \nOne idea along these lines is batch normalization which was proposed by [1] in 2015.\n\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(' means: ', x.mean(axis=axis))\n print(' stds: ', x.std(axis=axis))\n print() \n```\n\n =========== You can safely ignore the message below if you are NOT working on ConvolutionalNetworks.ipynb ===========\n \tYou will need to compile a Cython extension for a portion of this assignment.\n \tThe instructions to do this will be given in a section of the notebook below.\n \tThere will be an option for Colab users and another for Jupyter (local) users.\n\n\n\n```\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n## Batch normalization: forward\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n# Now means should be close to beta and stds close to gamma\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.99520433e-17 6.93889390e-17 8.32667268e-19]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531428 1.01238373 0.97819988]\n \n\n\n## Batch normalization: backward\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n#You should expect to see relative errors between 1e-13 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.6674604875341426e-09\n dgamma error: 7.417225040694815e-13\n dbeta error: 2.379446949959628e-12\n\n\n## Batch normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
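\nAs a concrete reminder of what such a simplification looks like, here is a small illustrative sketch of the sigmoid case mentioned above (plain NumPy, with made-up variable names that are not part of the assignment code). Backpropagating step by step through the exp/add/divide operations gives exactly the same gradient as the compact paper-derived formula $\\frac{\\partial L}{\\partial x} = \\frac{\\partial L}{\\partial y} \\, y(1-y)$; your goal in this section is to find the analogous compact expressions for batch normalization.\n\n\n```\n# Illustrative sketch: two equivalent ways to backprop through a sigmoid.\nimport numpy as np\n\nx = np.random.randn(5)\ndout = np.random.randn(5)      # upstream gradient dL/dy\n\n# (1) backprop through the computation graph: t = -x, e = exp(t), s = 1 + e, y = 1 / s\ne = np.exp(-x)\ny = 1.0 / (1.0 + e)\nds = -dout / (1.0 + e) ** 2    # through y = 1 / s\nde = ds                        # through s = 1 + e\ndt = de * e                    # through e = exp(t)\ndx_graph = -dt                 # through t = -x\n\n# (2) the simplified formula derived on paper\ndx_formula = dout * y * (1.0 - y)\n\nprint(np.max(np.abs(dx_graph - dx_formula)))  # should be ~0 (up to floating point error)\n```\n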
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hart part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. \n\nYou should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 1.8400087424475466e-12\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 1.73x\n\n\n## Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. 
Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\nHINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.2611955101340957\n W1 relative error: 1.10e-04\n W2 relative error: 3.11e-06\n W3 relative error: 4.05e-10\n b1 relative error: 4.44e-08\n b2 relative error: 2.22e-08\n b3 relative error: 1.01e-10\n beta1 relative error: 7.33e-09\n beta2 relative error: 1.89e-09\n gamma1 relative error: 6.96e-09\n gamma2 relative error: 2.41e-09\n \n Running check with reg = 3.14\n Initial loss: 6.996533220108303\n W1 relative error: 1.98e-06\n W2 relative error: 2.29e-06\n W3 relative error: 2.79e-08\n b1 relative error: 5.55e-09\n b2 relative error: 2.22e-08\n b3 relative error: 2.10e-10\n beta1 relative error: 6.65e-09\n beta2 relative error: 3.39e-09\n gamma1 relative error: 6.27e-09\n gamma2 relative error: 5.28e-09\n\n\n# Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340974\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.313000; val_acc: 0.266000\n (Epoch 2 / 10) train acc: 0.396000; val_acc: 0.280000\n (Epoch 3 / 10) train acc: 0.485000; val_acc: 
0.316000\n (Epoch 4 / 10) train acc: 0.524000; val_acc: 0.318000\n (Epoch 5 / 10) train acc: 0.595000; val_acc: 0.341000\n (Epoch 6 / 10) train acc: 0.640000; val_acc: 0.321000\n (Epoch 7 / 10) train acc: 0.689000; val_acc: 0.341000\n (Epoch 8 / 10) train acc: 0.669000; val_acc: 0.299000\n (Epoch 9 / 10) train acc: 0.791000; val_acc: 0.340000\n (Epoch 10 / 10) train acc: 0.779000; val_acc: 0.305000\n \n Solver without batch norm:\n (Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696062\n (Epoch 6 / 10) train acc: 0.536000; val_acc: 0.346000\n (Iteration 121 / 200) loss: 1.550785\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.310000\n (Iteration 141 / 200) loss: 1.436308\n (Epoch 8 / 10) train acc: 0.622000; val_acc: 0.342000\n (Iteration 161 / 200) loss: 1.000868\n (Epoch 9 / 10) train acc: 0.654000; val_acc: 0.328000\n (Iteration 181 / 200) loss: 0.925456\n (Epoch 10 / 10) train acc: 0.726000; val_acc: 0.335000\n\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 / 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', 
label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Batch normalization and batch size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second layer will plot training accuracy and validation set accuracy over time.\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n np.random.seed(231)\n # Try training a very deep net with batchnorm\n hidden_dims = [100, 100, 100, 100, 100]\n num_train = 1000\n small_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n }\n n_epochs=10\n weight_scale = 2e-2\n batch_sizes = [5,10,50]\n lr = 10**(-3.5)\n solver_bsize = batch_sizes[0]\n\n print('No normalization: batch size = ',solver_bsize)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n solver = Solver(model, small_data,\n num_epochs=n_epochs, batch_size=solver_bsize,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n solver.train()\n \n bn_solvers = []\n for i in range(len(batch_sizes)):\n b_size=batch_sizes[i]\n print('Normalization: batch size = ',b_size)\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n bn_solver = Solver(bn_model, small_data,\n num_epochs=n_epochs, batch_size=b_size,\n update_rule='adam',\n optim_config={\n 'learning_rate': lr,\n },\n verbose=False)\n bn_solver.train()\n bn_solvers.append(bn_solver)\n \n return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n No normalization: batch size = 5\n Normalization: batch size = 5\n Normalization: batch size = 10\n Normalization: batch size = 50\n\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. 
In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\n[FILL THIS IN]\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. \n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n# Means should be close to zero and stds close to one\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n# Now means should be close to beta and stds close to gamma\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [-0.58416774 0.03223092 -0.29106935 0.84300618]\n stds: [0.42903404 1.0673565 1.17954475 0.38429471]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] 
)\n means: [3.24749679 5.09669275 4.12679194 7.52901853]\n stds: [1.28710213 3.2020695 3.53863425 1.15288413]\n \n\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n#You should expect to see relative errors between 1e-12 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.6674604875341426e-09\n dgamma error: 7.417225040694815e-13\n dbeta error: 2.379446949959628e-12\n\n\n# Layer Normalization and batch size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. Using it in a very deep network\n2. Having a very small dimension of features\n3. 
Having a high regularization term\n\n\n## Answer:\n[FILL THIS IN]\n\n", "meta": {"hexsha": "da6ca3eb4b69d38ea648d799cbca134c15adcbc5", "size": 358159, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "assignment2/BatchNormalization.ipynb", "max_stars_repo_name": "moustafa-7/CS231n", "max_stars_repo_head_hexsha": "d06494d940f07c814b9225cc8feb9350d06ba14b", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "assignment2/BatchNormalization.ipynb", "max_issues_repo_name": "moustafa-7/CS231n", "max_issues_repo_head_hexsha": "d06494d940f07c814b9225cc8feb9350d06ba14b", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 9, "max_issues_repo_issues_event_min_datetime": "2021-02-02T22:57:04.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-12T00:42:58.000Z", "max_forks_repo_path": "assignment2/BatchNormalization.ipynb", "max_forks_repo_name": "moustafa-7/CS231n", "max_forks_repo_head_hexsha": "d06494d940f07c814b9225cc8feb9350d06ba14b", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 358159.0, "max_line_length": 358159, "alphanum_fraction": 0.9233413093, "converted": true, "num_tokens": 9306, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.32423538592116924, "lm_q2_score": 0.27512971193602087, "lm_q1q2_score": 0.08920678832795585}} {"text": "```python\nfrom IPython.core.display import HTML\ncss_file = '../style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n# Introduction to matrices\n\n## Preamble\n\nBefore we start our journey into linear algebra, we take a quick look at creating matrices using the `sympy` package. As always, we start off by initializing LaTex printing using the `init_printing()` function.\n\n\n```python\nfrom sympy import init_printing\ninit_printing()\n```\n\n## Representing matrices\n\nMatrices are represented as $m$ rows of values, spread over $n$ columns, to make up an $m \\times n$ array or grid. The `sympy` package contains the `Matrix()` function to create these objects.\n\n\n```python\nfrom sympy import Matrix\n```\n\nExpression (1) depicts a $4 \\times 3$ matrix of integer values. We can recreate this using the `Matrix()` function. This is a matrix. A matrix has a dimension, which lists, in order, the number of rows and the number of columns. The matrix in (1) has dimension $3 \\times 3$.\n\n$$\\begin{bmatrix} 1 && 2 && 3 \\\\ 4 && 5 && 6 \\\\ 7 && 8 && 9 \\\\ 10 && 11 && 12 \\end{bmatrix} \\tag{1}$$\n\nThe values are entered as a list of list, with each sublist containing a row of values.\n\n\n```python\nmatrix_1 = Matrix([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9],\n [10, 11, 12]])\nmatrix_1\n```\n\nBy using the `type()` function we can inspect the object type of which `matrix_1` is an instance.\n\n\n```python\ntype(matrix_1)\n```\n\n\n\n\n sympy.matrices.dense.MutableDenseMatrix\n\n\n\nWe note that it is a `MutableDenseMatrix`. Mutable refers to the fact that we can change the values in the matrix and dense refers to the fact that there are not an abundance of zeros in the data.\n\n## Shape\n\nThe `.shape()` method calculates the number of rows and columns of a matrix.\n\n\n```python\nmatrix_1.shape\n```\n\n## Accessing values in rows and columns\n\nThe `.row()` and `.col()` methods give us access to the values in a matrix. 
Remember that Python indexing starts at $0$, such that the first row (in the mathematical representation) is the zeroth row in `python`.\n\n\n```python\nmatrix_1.row(0) # The first row\n```\n\n\n```python\nmatrix_1.col(0) # The first column\n```\n\nThe `-1` value gives us access to the last row or column.\n\n\n```python\nmatrix_1.row(-1)\n```\n\nEvery element in a matrix is indexed, with a row and column number. In (2), we see a $3 \\times 4$ matrix with the index of every element. Note we place both values together, without a comma separating them.\n\n$$\\begin{pmatrix} a_{11} && a_{12} && a_{13} && a_{14} \\\\ a_{21} && a_{22} && a_{23} && a_{24} \\\\ a_{31} && a_{32} && a_{33} && a_{34} \\end{pmatrix} \\tag{2}$$\n\nSo, if we wish to find the element in the first row and the first column in our `matrix_1` variable (which holds a `sympy` matrix object), we will use `0,0` and not `1,1`. The _indexing_ (using the _address_ of each element) is done by using square brackets.\n\n\n```python\n# Repriting matrix_1\nmatrix_1\n```\n\n\n```python\nmatrix_1[0,0]\n```\n\nLet's look at the element in the second row and third column, which is $6$.\n\n\n```python\nmatrix_1[1,2]\n```\n\nWe can also span a few rows and column. Below, we index the first two rows. This is done by using the colon, `:`, symbol. The last number (after the colon is excluded, such that `0:2` refers to the zeroth and first row indices.\n\n\n```python\nmatrix_1[0:2,0:4]\n```\n\nWe can also specify the actual rows or columns, by placing them in square brackets (creating a list). Below, we also use the colon symbol on is won. This denotes the selection of all values. So, we have the first and third rows (mathematically) or the zeroth and second `python` row index, and all the columns.\n\n\n```python\nmatrix_1[[0,2],:]\n```\n\n## Deleting and inserting rows\n\nRow and column can be inserted into or deleted from a matrix using the `.row_insert()`, `.col_insert()`, `.row_del()`, and `.col_del()` methods. \n\nLet's have a look at where these inserted and deletions take place.\n\n\n```python\nmatrix_1.row_insert(1, Matrix([[10, 20, 30]])) # Using row 1\n```\n\nWe note that the row was inserted as row 1.\n\nIf we call the matrix again, we note that the changes were not permanent.\n\n\n```python\nmatrix_1\n```\n\nWe have to overwrite the computer variable to make the changes permanent or alternatively create a new computer variable. (This is contrary to the current documentation.)\n\n\n```python\nmatrix_2 = matrix_1.row_insert(1, Matrix([[10, 20, 30]]))\n```\n\n\n```python\nmatrix_2\n```\n\n\n```python\nmatrix_3 = matrix_1.row_del(1) # Permanently deleting the second row\nmatrix_3 # A bug in the code currently returns a NoneType object\n```\n\n## Useful matrix constructors\n\nThere are a few special matrices that can be constructed using `sympy` functions. 
The zero matrix of size $n \\times n$ can be created with the `zeros()` function and the $n \\times n$ identity matrix (more on this later) can be created with the `eye()` function.\n\n\n```python\nfrom sympy import zeros, eye\n```\n\n\n```python\nzeros(5) # A 5x5 matrix of all zeros\nzeros(5)\n```\n\n\n```python\neye(4) # A 4x4 identity matrix\n```\n\nThe `diag()` function creates a diagonal matrix (which is square) with specified values along the main axis (top-left to bottom-right) and zeros everywhere else.\n\n\n```python\nfrom sympy import diag\n```\n\n\n```python\ndiag(1, 2, 3, 4, 5)\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "5eaccf8118cd64ebc9c05a237a2eac45b3612dd1", "size": 53753, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Python/3. Computational Sciences and Mathematics/Linear Algebra/0.0 Start Here/0.3 Introduction_to_matrices.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Python/3. Computational Sciences and Mathematics/Linear Algebra/0.0 Start Here/0.3 Introduction_to_matrices.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Python/3. Computational Sciences and Mathematics/Linear Algebra/0.0 Start Here/0.3 Introduction_to_matrices.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 61.2919042189, "max_line_length": 4072, "alphanum_fraction": 0.7707290756, "converted": true, "num_tokens": 2079, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.5, "lm_q2_score": 0.17781086729958678, "lm_q1q2_score": 0.08890543364979339}} {"text": "# Upload Notebook for Examples\n\nThis notebook is designed to provide examples of different types of outputs that can be used to test the JupyterLab frontend and other Jupyter frontends.\n\n\n```python\nfrom IPython.display import display\nfrom IPython.display import (\n HTML, Image, Latex, Math, Markdown, SVG\n)\n```\n\n## Text\n\nPlain text:\n\n\n```python\ntext = \"\"\"Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam urna\nlibero, dictum a egestas non, placerat vel neque. In imperdiet iaculis fermentum. \nVestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia \nCurae; Cras augue tortor, tristique vitae varius nec, dictum eu lectus. Pellentesque \nid eleifend eros. In non odio in lorem iaculis sollicitudin. In faucibus ante ut \narcu fringilla interdum. Maecenas elit nulla, imperdiet nec blandit et, consequat \nut elit.\"\"\"\nprint(text)\n```\n\n Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam urna\n libero, dictum a egestas non, placerat vel neque. In imperdiet iaculis fermentum. 
\n Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia \n Curae; Cras augue tortor, tristique vitae varius nec, dictum eu lectus. Pellentesque \n id eleifend eros. In non odio in lorem iaculis sollicitudin. In faucibus ante ut \n arcu fringilla interdum. Maecenas elit nulla, imperdiet nec blandit et, consequat \n ut elit.\n\n\nText as output:\n\n\n```python\ntext\n```\n\n\n\n\n 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nullam urna\\nlibero, dictum a egestas non, placerat vel neque. In imperdiet iaculis fermentum. \\nVestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia \\nCurae; Cras augue tortor, tristique vitae varius nec, dictum eu lectus. Pellentesque \\nid eleifend eros. In non odio in lorem iaculis sollicitudin. In faucibus ante ut \\narcu fringilla interdum. Maecenas elit nulla, imperdiet nec blandit et, consequat \\nut elit.'\n\n\n\nStandard error:\n\n\n```python\nimport sys; print('this is stderr', file=sys.stderr)\n```\n\n this is stderr\n\n\n## HTML\n\n\n```python\ndiv = HTML('
<div style=\"width:100px;height:100px;background:grey;\"></div>')\ndiv\n```\n\n\n\n\n```python\nfor i in range(3):\n print(10**10)\n display(div)\n```\n\n 10000000000\n\n\n 10000000000\n\n\n 10000000000\n\n
\n\n\n## Markdown\n\n\n```python\nmd = Markdown(\"\"\"\n### Subtitle\n\nThis is some *markdown* text with math $F=ma$.\n\n\"\"\")\nmd\n```\n\n\n\n\n\n### Subtitle\n\nThis is some *markdown* text with math $F=ma$.\n\n\n\n\n\n\n```python\ndisplay(md)\n```\n\n\n\n### Subtitle\n\nThis is some *markdown* text with math $F=ma$.\n\n\n\n\n## LaTeX\n\nExamples LaTeX in a markdown cell:\n\n\n\\begin{align}\n\\nabla \\times \\vec{\\mathbf{B}} -\\, \\frac1c\\, \\frac{\\partial\\vec{\\mathbf{E}}}{\\partial t} & = \\frac{4\\pi}{c}\\vec{\\mathbf{j}} \\\\ \\nabla \\cdot \\vec{\\mathbf{E}} & = 4 \\pi \\rho \\\\\n\\nabla \\times \\vec{\\mathbf{E}}\\, +\\, \\frac1c\\, \\frac{\\partial\\vec{\\mathbf{B}}}{\\partial t} & = \\vec{\\mathbf{0}} \\\\\n\\nabla \\cdot \\vec{\\mathbf{B}} & = 0\n\\end{align}\n\n\n```python\nmath = Latex(\"$F=ma$\")\nmath\n```\n\n\n\n\n$F=ma$\n\n\n\n\n```python\nmaxwells = Latex(r\"\"\"\n\\begin{align}\n\\nabla \\times \\vec{\\mathbf{B}} -\\, \\frac1c\\, \\frac{\\partial\\vec{\\mathbf{E}}}{\\partial t} & = \\frac{4\\pi}{c}\\vec{\\mathbf{j}} \\\\ \\nabla \\cdot \\vec{\\mathbf{E}} & = 4 \\pi \\rho \\\\\n\\nabla \\times \\vec{\\mathbf{E}}\\, +\\, \\frac1c\\, \\frac{\\partial\\vec{\\mathbf{B}}}{\\partial t} & = \\vec{\\mathbf{0}} \\\\\n\\nabla \\cdot \\vec{\\mathbf{B}} & = 0\n\\end{align}\n\"\"\")\nmaxwells\n```\n\n\n\n\n\n\\begin{align}\n\\nabla \\times \\vec{\\mathbf{B}} -\\, \\frac1c\\, \\frac{\\partial\\vec{\\mathbf{E}}}{\\partial t} & = \\frac{4\\pi}{c}\\vec{\\mathbf{j}} \\\\ \\nabla \\cdot \\vec{\\mathbf{E}} & = 4 \\pi \\rho \\\\\n\\nabla \\times \\vec{\\mathbf{E}}\\, +\\, \\frac1c\\, \\frac{\\partial\\vec{\\mathbf{B}}}{\\partial t} & = \\vec{\\mathbf{0}} \\\\\n\\nabla \\cdot \\vec{\\mathbf{B}} & = 0\n\\end{align}\n\n\n\n\n## SVG\n\n\n```python\nsvg_source = \"\"\"\n\n \n\n\"\"\"\nsvg = SVG(svg_source)\nsvg\n```\n\n\n\n\n \n\n \n\n\n\n\n```python\nfor i in range(3):\n print(10**i)\n display(svg)\n```\n\n 10000000000\n\n\n\n \n\n \n\n\n 10000000000\n\n\n\n \n\n \n\n\n 10000000000\n\n\n\n \n\n \n\n", "meta": {"hexsha": "809a1be74fbb9bf72735f17547f847046e705c4c", "size": 11595, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "galata/test/galata/notebooks/simple_test.ipynb", "max_stars_repo_name": "agoose77/jupyterlab", "max_stars_repo_head_hexsha": "93c79b6e26bb982ee6ec66ec3e24d8839e68fba0", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 11496, "max_stars_repo_stars_event_min_datetime": "2016-10-12T21:02:20.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-31T17:09:23.000Z", "max_issues_repo_path": "galata/test/galata/notebooks/simple_test.ipynb", "max_issues_repo_name": "SkyN9ne/jupyterlab", "max_issues_repo_head_hexsha": "89e271df36031557c52c82197dad43826ec7fa62", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": 10587, "max_issues_repo_issues_event_min_datetime": "2016-10-12T21:22:34.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-31T22:44:58.000Z", "max_forks_repo_path": "galata/test/galata/notebooks/simple_test.ipynb", "max_forks_repo_name": "andrewfulton9/jupyterlab", "max_forks_repo_head_hexsha": "f07934a8d564ea5a89ec3b8d681a29115a5d4547", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2612, "max_forks_repo_forks_event_min_datetime": "2016-10-13T12:56:28.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-30T17:03:04.000Z", "avg_line_length": 23.7116564417, "max_line_length": 516, "alphanum_fraction": 0.4938335489, "converted": true, "num_tokens": 1443, "lm_name": "Qwen/Qwen-72B", 
"lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48047867804790706, "lm_q2_score": 0.18476751738161779, "lm_q1q2_score": 0.0887768524977134}} {"text": "Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Yves Dubief, 2016. NSF for support via NSF-CBET award #1258697.\nThe following cell should always be the first coding cell of your python notebooks\n\n\n```python\n\nstudent_id = raw_input('Please enter your NETID (e.g. ydubief)')\nprint(student_id)\nassignment_name = 'HW1_'+student_id\n\n```\n\n Please enter your NETID (e.g. ydubief)ydubief\n ydubief\n\n\n\n```python\n\"\"\"\nimporting the necessary libraries, do not modify\n\"\"\"\n%matplotlib inline \n# plots graphs within the notebook\n%config InlineBackend.figure_format='svg' # not sure what this does, may be default images to svg format\n\nfrom IPython.display import display,Image, Latex\nfrom __future__ import division\nfrom sympy.interactive import printing\nprinting.init_printing(use_latex='mathjax')\n\n\nfrom IPython.display import display,Image, Latex\n\nfrom IPython.display import clear_output\n\nimport SchemDraw as schem\nimport SchemDraw.elements as e\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport math\nimport scipy.constants as sc\n\nimport sympy as sym\n\nfrom IPython.core.display import HTML\ndef header(text):\n raw_html = '

<h2>' + str(text) + '</h2>'\n return raw_html\n\ndef box(text):\n raw_html = '<div style=\"border:1px solid black; padding:1em;\">'+str(text)+'</div>'\n return HTML(raw_html)\n\ndef nobox(text):\n raw_html = '<div>'+str(text)+'</div>
'\n return HTML(raw_html)\n\ndef addContent(raw_html):\n global htmlContent\n htmlContent += raw_html\n \nclass PDF(object):\n def __init__(self, pdf, size=(200,200)):\n self.pdf = pdf\n self.size = size\n\n def _repr_html_(self):\n return '<iframe src={0} width={1[0]} height={1[1]}></iframe>'.format(self.pdf, self.size)\n\n def _repr_latex_(self):\n return r'\\includegraphics[width=1.0\\textwidth]{{{0}}}'.format(self.pdf)\n\nclass ListTable(list):\n \"\"\" Overridden list class which takes a 2-dimensional list of \n the form [[1,2,3],[4,5,6]], and renders an HTML Table in \n IPython Notebook. \"\"\"\n \n def _repr_html_(self):\n html = [\"<table>\"]\n for row in self:\n html.append(\"<tr>\")\n \n for col in row:\n html.append(\"<td>{0}</td>\".format(col))\n \n html.append(\"</tr>\")\n html.append(\"</table>
\")\n return ''.join(html)\n \nfont = {'family' : 'serif',\n 'color' : 'black',\n 'weight' : 'normal',\n 'size' : 18,\n }\n\nfrom scipy.constants.constants import C2K\nfrom scipy.constants.constants import K2C\nfrom scipy.constants.constants import F2K\nfrom scipy.constants.constants import K2F\nfrom scipy.constants.constants import C2F\nfrom scipy.constants.constants import F2C\n```\n\n

# Heat loss through a single-pane window

\n\nThe rear window of an automobile is defogged by attaching a thin, transparent, film-type heating element to its inner surface. By electrically heating this element, a uniform heat flux may be established at the inner surface. \n\n(a)\tFor 4-mm-thick window glass, determine the electrical power required per unit window area to maintain an inner surface temperature of $15^\\circ \udbff\udc20C$ when the interior air temperature and convection coefficient are $T_{\\infty.i}= 25^\\circ \udbff\udc20C$ and $h_i=10 W/m^2.K$, while the exterior (ambient) air temperature and convection coefficient are $T_{\\infty.o}=\udbff\udc15-10^\\circ \udbff\udc20C$ and $h_o=65 W/m^2.K$.\n\n(b) In practice $T\udbff\udc1d_{\\infty.o}$ and $h_o$ vary according to weather conditions and car speed. For values of $h_o=2,20,65,100 W/m^2.K$, determine and plot the electrical power requirement as a function of $T\udbff\udc1d_{\\infty.o}$ for \udbff\udc15$-30\\leq\udbff\udc26 T\udbff\udc1d_{\\infty.o}\\leq 0^\\circ \udbff\udc20C$. From your results, what can you conclude about the need for heater operation at low values of ho? How is this conclusion affected by the value of $T\udbff\udc1d_{\\infty.o}$? If h \udbff\udc36 V n, where V is the vehicle speed and n is a positive exponent, how does the vehicle speed affect the need for heater operation?\n\nThe thermal conductivity of this glass is $1.4 W/m.K$\n\n\n## Assumptions\n\nSteady state, 1D conduction, thermal resistance of the heating element is negligible. Negligible heat transfer by radiation.\n\n## Parameters\n\n\n```python\nL =0.004 #m\n\nk_glass = 1.4 #W/m.K thermal conductivity of glass\n\nT_inf_in = 25 #C\nT_inf_out = -10 #C\nh_in = 65.\nh_out = 65.\nT_s_i = 15 #C\n```\n\n\n```python\n!ipython nbconvert --to html ME144-HW1.ipynb --output $assignment_name\n```\n\n [NbConvertApp] Converting notebook Problem-Template0.ipynb to html\n [NbConvertApp] Writing 319091 bytes to ydubief.html\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "dcabbddd028a908c6138dddffb9f07cc7a3e7f30", "size": 7346, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": ".ipynb_checkpoints/ME144-HW1-checkpoint.ipynb", "max_stars_repo_name": "CarlGriffinsteed/UVM-ME144-Heat-Transfer", "max_stars_repo_head_hexsha": "9c477449d6ba5d6a9ee7c57f1c0ed4aab0ce4cca", "max_stars_repo_licenses": ["CC-BY-3.0"], "max_stars_count": 7, "max_stars_repo_stars_event_min_datetime": "2017-06-02T20:31:22.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-05T13:52:33.000Z", "max_issues_repo_path": ".ipynb_checkpoints/ME144-HW1-checkpoint.ipynb", "max_issues_repo_name": "CarlGriffinsteed/UVM-ME144-Heat-Transfer", "max_issues_repo_head_hexsha": "9c477449d6ba5d6a9ee7c57f1c0ed4aab0ce4cca", "max_issues_repo_licenses": ["CC-BY-3.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": ".ipynb_checkpoints/ME144-HW1-checkpoint.ipynb", "max_forks_repo_name": "CarlGriffinsteed/UVM-ME144-Heat-Transfer", "max_forks_repo_head_hexsha": "9c477449d6ba5d6a9ee7c57f1c0ed4aab0ce4cca", "max_forks_repo_licenses": ["CC-BY-3.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2019-01-24T17:43:41.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-25T18:08:34.000Z", "avg_line_length": 30.4813278008, "max_line_length": 580, "alphanum_fraction": 0.5601687993, "converted": true, "num_tokens": 1255, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.3886180267058489, "lm_q2_score": 0.22815650216092534, "lm_q1q2_score": 0.08866572964988756}} {"text": "```python\n%matplotlib inline\n```\n\n\nWord Embeddings: Encoding Lexical Semantics\n===========================================\n\nWord embeddings are dense vectors of real numbers, one per word in your\nvocabulary. In NLP, it is almost always the case that your features are\nwords! But how should you represent a word in a computer? You could\nstore its ascii character representation, but that only tells you what\nthe word *is*, it doesn't say much about what it *means* (you might be\nable to derive its part of speech from its affixes, or properties from\nits capitalization, but not much). Even more, in what sense could you\ncombine these representations? We often want dense outputs from our\nneural networks, where the inputs are $|V|$ dimensional, where\n$V$ is our vocabulary, but often the outputs are only a few\ndimensional (if we are only predicting a handful of labels, for\ninstance). How do we get from a massive dimensional space to a smaller\ndimensional space?\n\nHow about instead of ascii representations, we use a one-hot encoding?\nThat is, we represent the word $w$ by\n\n\\begin{align}\\overbrace{\\left[ 0, 0, \\dots, 1, \\dots, 0, 0 \\right]}^\\text{|V| elements}\\end{align}\n\nwhere the 1 is in a location unique to $w$. Any other word will\nhave a 1 in some other location, and a 0 everywhere else.\n\nThere is an enormous drawback to this representation, besides just how\nhuge it is. It basically treats all words as independent entities with\nno relation to each other. What we really want is some notion of\n*similarity* between words. Why? Let's see an example.\n\nSuppose we are building a language model. Suppose we have seen the\nsentences\n\n* The mathematician ran to the store.\n* The physicist ran to the store.\n* The mathematician solved the open problem.\n\nin our training data. Now suppose we get a new sentence never before\nseen in our training data:\n\n* The physicist solved the open problem.\n\nOur language model might do OK on this sentence, but wouldn't it be much\nbetter if we could use the following two facts:\n\n* We have seen mathematician and physicist in the same role in a sentence. Somehow they\n have a semantic relation.\n* We have seen mathematician in the same role in this new unseen sentence\n as we are now seeing physicist.\n\nand then infer that physicist is actually a good fit in the new unseen\nsentence? This is what we mean by a notion of similarity: we mean\n*semantic similarity*, not simply having similar orthographic\nrepresentations. It is a technique to combat the sparsity of linguistic\ndata, by connecting the dots between what we have seen and what we\nhaven't. This example of course relies on a fundamental linguistic\nassumption: that words appearing in similar contexts are related to each\nother semantically. This is called the `distributional\nhypothesis `__.\n\n\nGetting Dense Word Embeddings\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nHow can we solve this problem? That is, how could we actually encode\nsemantic similarity in words? Maybe we think up some semantic\nattributes. For example, we see that both mathematicians and physicists\ncan run, so maybe we give these words a high score for the \"is able to\nrun\" semantic attribute. 
Think of some other attributes, and imagine\nwhat you might score some common words on those attributes.\n\nIf each attribute is a dimension, then we might give each word a vector,\nlike this:\n\n\\begin{align}q_\\text{mathematician} = \\left[ \\overbrace{2.3}^\\text{can run},\n \\overbrace{9.4}^\\text{likes coffee}, \\overbrace{-5.5}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\n\\begin{align}q_\\text{physicist} = \\left[ \\overbrace{2.5}^\\text{can run},\n \\overbrace{9.1}^\\text{likes coffee}, \\overbrace{6.4}^\\text{majored in Physics}, \\dots \\right]\\end{align}\n\nThen we can get a measure of similarity between these words by doing:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = q_\\text{physicist} \\cdot q_\\text{mathematician}\\end{align}\n\nAlthough it is more common to normalize by the lengths:\n\n\\begin{align}\\text{Similarity}(\\text{physicist}, \\text{mathematician}) = \\frac{q_\\text{physicist} \\cdot q_\\text{mathematician}}\n {\\| q_\\text{physicist} \\| \\| q_\\text{mathematician} \\|} = \\cos (\\phi)\\end{align}\n\nWhere $\\phi$ is the angle between the two vectors. That way,\nextremely similar words (words whose embeddings point in the same\ndirection) will have similarity 1. Extremely dissimilar words should\nhave similarity -1.\n\n\nYou can think of the sparse one-hot vectors from the beginning of this\nsection as a special case of these new vectors we have defined, where\neach word basically has similarity 0, and we gave each word some unique\nsemantic attribute. These new vectors are *dense*, which is to say their\nentries are (typically) non-zero.\n\nBut these new vectors are a big pain: you could think of thousands of\ndifferent semantic attributes that might be relevant to determining\nsimilarity, and how on earth would you set the values of the different\nattributes? Central to the idea of deep learning is that the neural\nnetwork learns representations of the features, rather than requiring\nthe programmer to design them herself. So why not just let the word\nembeddings be parameters in our model, and then be updated during\ntraining? This is exactly what we will do. We will have some *latent\nsemantic attributes* that the network can, in principle, learn. Note\nthat the word embeddings will probably not be interpretable. That is,\nalthough with our hand-crafted vectors above we can see that\nmathematicians and physicists are similar in that they both like coffee,\nif we allow a neural network to learn the embeddings and see that both\nmathematicians and physicists have a large value in the second\ndimension, it is not clear what that means. They are similar in some\nlatent semantic dimension, but this probably has no interpretation to\nus.\n\n\nIn summary, **word embeddings are a representation of the *semantics* of\na word, efficiently encoding semantic information that might be relevant\nto the task at hand**. You can embed other things too: part of speech\ntags, parse trees, anything! The idea of feature embeddings is central\nto the field.\n\n\nWord Embeddings in Pytorch\n~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nBefore we get to a worked example and an exercise, a few quick notes\nabout how to use embeddings in Pytorch and in deep learning programming\nin general. Similar to how we defined a unique index for each word when\nmaking one-hot vectors, we also need to define an index for each word\nwhen using embeddings. These will be keys into a lookup table. 
That is,\nembeddings are stored as a $|V| \\times D$ matrix, where $D$\nis the dimensionality of the embeddings, such that the word assigned\nindex $i$ has its embedding stored in the $i$'th row of the\nmatrix. In all of my code, the mapping from words to indices is a\ndictionary named word\\_to\\_ix.\n\nThe module that allows you to use embeddings is torch.nn.Embedding,\nwhich takes two arguments: the vocabulary size, and the dimensionality\nof the embeddings.\n\nTo index into this table, you must use torch.LongTensor (since the\nindices are integers, not floats).\n\n\n\n\n\n```python\n# Author: Robert Guthrie\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\ntorch.manual_seed(1)\n```\n\n\n```python\nword_to_ix = {\"hello\": 0, \"world\": 1}\nembeds = nn.Embedding(2, 5) # 2 words in vocab, 5 dimensional embeddings\nlookup_tensor = torch.tensor([word_to_ix[\"hello\"]], dtype=torch.long)\nhello_embed = embeds(lookup_tensor)\nprint(hello_embed)\n```\n\nAn Example: N-Gram Language Modeling\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nRecall that in an n-gram language model, given a sequence of words\n$w$, we want to compute\n\n\\begin{align}P(w_i | w_{i-1}, w_{i-2}, \\dots, w_{i-n+1} )\\end{align}\n\nWhere $w_i$ is the ith word of the sequence.\n\nIn this example, we will compute the loss function on some training\nexamples and update the parameters with backpropagation.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2\nEMBEDDING_DIM = 10\n# We will use Shakespeare Sonnet 2\ntest_sentence = \"\"\"When forty winters shall besiege thy brow,\nAnd dig deep trenches in thy beauty's field,\nThy youth's proud livery so gazed on now,\nWill be a totter'd weed of small worth held:\nThen being asked, where all thy beauty lies,\nWhere all the treasure of thy lusty days;\nTo say, within thine own deep sunken eyes,\nWere an all-eating shame, and thriftless praise.\nHow much more praise deserv'd thy beauty's use,\nIf thou couldst answer 'This fair child of mine\nShall sum my count, and make my old excuse,'\nProving his beauty by succession thine!\nThis were to be new made when thou art old,\nAnd see thy blood warm when thou feel'st it cold.\"\"\".split()\n# we should tokenize the input, but we will ignore that for now\n# build a list of tuples. Each tuple is ([ word_i-2, word_i-1 ], target word)\ntrigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])\n for i in range(len(test_sentence) - 2)]\n# print the first 3, just so you can see what they look like\nprint(trigrams[:3])\n\nvocab = set(test_sentence)\nword_to_ix = {word: i for i, word in enumerate(vocab)}\n\n\nclass NGramLanguageModeler(nn.Module):\n\n def __init__(self, vocab_size, embedding_dim, context_size):\n super(NGramLanguageModeler, self).__init__()\n self.embeddings = nn.Embedding(vocab_size, embedding_dim)\n self.linear1 = nn.Linear(context_size * embedding_dim, 128)\n self.linear2 = nn.Linear(128, vocab_size)\n\n def forward(self, inputs):\n embeds = self.embeddings(inputs).view((1, -1))\n out = F.relu(self.linear1(embeds))\n out = self.linear2(out)\n log_probs = F.log_softmax(out, dim=1)\n return log_probs\n\n\nlosses = []\nloss_function = nn.NLLLoss()\nmodel = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)\noptimizer = optim.SGD(model.parameters(), lr=0.001)\n\nfor epoch in range(10):\n total_loss = 0\n for context, target in trigrams:\n\n # Step 1. 
Prepare the inputs to be passed to the model (i.e, turn the words\n # into integer indices and wrap them in tensors)\n context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)\n\n # Step 2. Recall that torch *accumulates* gradients. Before passing in a\n # new instance, you need to zero out the gradients from the old\n # instance\n model.zero_grad()\n\n # Step 3. Run the forward pass, getting log probabilities over next\n # words\n log_probs = model(context_idxs)\n\n # Step 4. Compute your loss function. (Again, Torch wants the target\n # word wrapped in a tensor)\n loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))\n\n # Step 5. Do the backward pass and update the gradient\n loss.backward()\n optimizer.step()\n\n # Get the Python number from a 1-element Tensor by calling tensor.item()\n total_loss += loss.item()\n losses.append(total_loss)\nprint(losses) # The loss decreased every iteration over the training data!\n```\n\nExercise: Computing Word Embeddings: Continuous Bag-of-Words\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe Continuous Bag-of-Words model (CBOW) is frequently used in NLP deep\nlearning. It is a model that tries to predict words given the context of\na few words before and a few words after the target word. This is\ndistinct from language modeling, since CBOW is not sequential and does\nnot have to be probabilistic. Typcially, CBOW is used to quickly train\nword embeddings, and these embeddings are used to initialize the\nembeddings of some more complicated model. Usually, this is referred to\nas *pretraining embeddings*. It almost always helps performance a couple\nof percent.\n\nThe CBOW model is as follows. Given a target word $w_i$ and an\n$N$ context window on each side, $w_{i-1}, \\dots, w_{i-N}$\nand $w_{i+1}, \\dots, w_{i+N}$, referring to all context words\ncollectively as $C$, CBOW tries to minimize\n\n\\begin{align}-\\log p(w_i | C) = -\\log \\text{Softmax}(A(\\sum_{w \\in C} q_w) + b)\\end{align}\n\nwhere $q_w$ is the embedding for word $w$.\n\nImplement this model in Pytorch by filling in the class below. Some\ntips:\n\n* Think about which parameters you need to define.\n* Make sure you know what shape each operation expects. Use .view() if you need to\n reshape.\n\n\n\n\n\n```python\nCONTEXT_SIZE = 2 # 2 words to the left, 2 to the right\nraw_text = \"\"\"We are about to study the idea of a computational process.\nComputational processes are abstract beings that inhabit computers.\nAs they evolve, processes manipulate other abstract things called data.\nThe evolution of a process is directed by a pattern of rules\ncalled a program. People create programs to direct processes. In effect,\nwe conjure the spirits of the computer with our spells.\"\"\".split()\n\n# By deriving a set from `raw_text`, we deduplicate the array\nvocab = set(raw_text)\nvocab_size = len(vocab)\n\nword_to_ix = {word: i for i, word in enumerate(vocab)}\ndata = []\nfor i in range(2, len(raw_text) - 2):\n context = [raw_text[i - 2], raw_text[i - 1],\n raw_text[i + 1], raw_text[i + 2]]\n target = raw_text[i]\n data.append((context, target))\nprint(data[:5])\n\n\nclass CBOW(nn.Module):\n\n def __init__(self):\n pass\n\n def forward(self, inputs):\n pass\n\n# create your model and train. 
here are some functions to help you make\n# the data ready for use by your module\n\n\ndef make_context_vector(context, word_to_ix):\n idxs = [word_to_ix[w] for w in context]\n return torch.tensor(idxs, dtype=torch.long)\n\n\nmake_context_vector(data[0][0], word_to_ix) # example\n```\n", "meta": {"hexsha": "10fae07aaca6519ef8a6807256b777a518f9b56a", "size": 15718, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/_downloads/3161e5aef42e3f09c479534ca90f74ea/word_embeddings_tutorial.ipynb", "max_stars_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_stars_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-12-05T05:16:44.000Z", "max_stars_repo_stars_event_max_datetime": "2019-12-05T05:16:44.000Z", "max_issues_repo_path": "docs/_downloads/3161e5aef42e3f09c479534ca90f74ea/word_embeddings_tutorial.ipynb", "max_issues_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_issues_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "docs/_downloads/3161e5aef42e3f09c479534ca90f74ea/word_embeddings_tutorial.ipynb", "max_forks_repo_name": "leejh1230/PyTorch-tutorials-kr", "max_forks_repo_head_hexsha": "ebbf44b863ff96c597631e28fc194eafa590c9eb", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 155.6237623762, "max_line_length": 7338, "alphanum_fraction": 0.7009161471, "converted": true, "num_tokens": 3360, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4843800842769844, "lm_q2_score": 0.18242551713899047, "lm_q1q2_score": 0.08836328736605667}} {"text": "```\n# this mounts your Google Drive to the Colab VM.\nfrom google.colab import drive\ndrive.mount('/content/drive', force_remount=True)\n\n# enter the foldername in your Drive where you have saved the unzipped\n# assignment folder, e.g. 'cs231n/assignments/assignment3/'\nFOLDERNAME = 'colab/cs231n/assignments/assignment2/'\nassert FOLDERNAME is not None, \"[!] 
Enter the foldername.\"\n\n# now that we've mounted your Drive, this ensures that\n# the Python interpreter of the Colab VM can load\n# python files from within it.\nimport sys\nsys.path.append('/content/drive/My Drive/{}'.format(FOLDERNAME))\n\n# this downloads the CIFAR-10 dataset to your Drive\n# if it doesn't already exist.\n%cd drive/My\\ Drive/$FOLDERNAME/cs231n/datasets/\n!bash get_datasets.sh\n%cd /content\n```\n\n Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n \n Enter your authorization code:\n \u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\u00b7\n Mounted at /content/drive\n /content/drive/My Drive/colab/cs231n/assignments/assignment2/cs231n/datasets\n /content\n\n\n# Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. \nOne idea along these lines is batch normalization which was proposed by [1] in 2015.\n\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However, even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\n\nThe authors of [1] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [1] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\n\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. 
To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n\n[1] [Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.](https://arxiv.org/abs/1502.03167)\n\n\n```\n# As usual, a bit of setup\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\ndef print_mean_std(x,axis=0):\n print(' means: ', x.mean(axis=axis))\n print(' stds: ', x.std(axis=axis))\n print() \n```\n\n\n```\n# Load the (preprocessed) CIFAR10 data.\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)\n```\n\n X_train: (49000, 3, 32, 32)\n y_train: (49000,)\n X_val: (1000, 3, 32, 32)\n y_val: (1000,)\n X_test: (1000, 3, 32, 32)\n y_test: (1000,)\n\n\n## Batch normalization: forward\nIn the file `cs231n/layers.py`, implement the batch normalization forward pass in the function `batchnorm_forward`. Once you have done so, run the following to test your implementation.\n\nReferencing the paper linked to above in [1] may be helpful!\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint_mean_std(a,axis=0)\n\ngamma = np.ones((D3,))\nbeta = np.zeros((D3,))\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\n# Now means should be close to beta and stds close to gamma\nprint('After batch normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=0)\n```\n\n Before batch normalization:\n means: [ -2.3814598 -13.18038246 1.91780462]\n stds: [27.18502186 34.21455511 37.68611762]\n \n After batch normalization (gamma=1, beta=0)\n means: [5.32907052e-17 7.04991621e-17 1.85962357e-17]\n stds: [0.99999999 1. 1. ]\n \n After batch normalization (gamma= [1. 2. 3.] , beta= [11. 12. 13.] )\n means: [11. 12. 
13.]\n stds: [0.99999999 1.99999999 2.99999999]\n \n\n\n\n```\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\n\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\n\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint_mean_std(a_norm,axis=0)\n```\n\n After batch normalization (test-time):\n means: [-0.03927354 -0.04349152 -0.10452688]\n stds: [1.01531427 1.01238373 0.97819987]\n \n\n\n## Batch normalization: backward\nNow implement the backward pass for batch normalization in the function `batchnorm_backward`.\n\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\n\nOnce you have finished, run the following to numerically check your backward pass.\n\n\n```\n# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n#You should expect to see relative errors between 1e-13 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.7029258328157158e-09\n dgamma error: 7.420414216247087e-13\n dbeta error: 2.8795057655839487e-12\n\n\n## Batch normalization: alternative backward\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For example, you can derive a very simple formula for the sigmoid function's backward pass by simplifying gradients on paper.\n\nSurprisingly, it turns out that you can do a similar simplification for the batch normalization backward pass too! 
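(To recall the sigmoid example mentioned above: with $\sigma(x) = \frac{1}{1+e^{-x}}$, a bit of algebra on paper gives $\frac{d\sigma}{dx} = \sigma(x)\left(1 - \sigma(x)\right)$, so the backward pass can reuse the cached forward output rather than backpropagating through every intermediate node. The goal here is an analogous shortcut for batch normalization.)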
\n\nIn the forward pass, given a set of inputs $X=\\begin{bmatrix}x_1\\\\x_2\\\\...\\\\x_N\\end{bmatrix}$, \n\nwe first calculate the mean $\\mu$ and variance $v$.\nWith $\\mu$ and $v$ calculated, we can calculate the standard deviation $\\sigma$ and normalized data $Y$.\nThe equations and graph illustration below describe the computation ($y_i$ is the i-th element of the vector $Y$).\n\n\\begin{align}\n& \\mu=\\frac{1}{N}\\sum_{k=1}^N x_k & v=\\frac{1}{N}\\sum_{k=1}^N (x_k-\\mu)^2 \\\\\n& \\sigma=\\sqrt{v+\\epsilon} & y_i=\\frac{x_i-\\mu}{\\sigma}\n\\end{align}\n\n\n\nThe meat of our problem during backpropagation is to compute $\\frac{\\partial L}{\\partial X}$, given the upstream gradient we receive, $\\frac{\\partial L}{\\partial Y}.$ To do this, recall the chain rule in calculus gives us $\\frac{\\partial L}{\\partial X} = \\frac{\\partial L}{\\partial Y} \\cdot \\frac{\\partial Y}{\\partial X}$.\n\nThe unknown/hart part is $\\frac{\\partial Y}{\\partial X}$. We can find this by first deriving step-by-step our local gradients at \n$\\frac{\\partial v}{\\partial X}$, $\\frac{\\partial \\mu}{\\partial X}$,\n$\\frac{\\partial \\sigma}{\\partial v}$, \n$\\frac{\\partial Y}{\\partial \\sigma}$, and $\\frac{\\partial Y}{\\partial \\mu}$,\nand then use the chain rule to compose these gradients (which appear in the form of vectors!) appropriately to compute $\\frac{\\partial Y}{\\partial X}$.\n\nIf it's challenging to directly reason about the gradients over $X$ and $Y$ which require matrix multiplication, try reasoning about the gradients in terms of individual elements $x_i$ and $y_i$ first: in that case, you will need to come up with the derivations for $\\frac{\\partial L}{\\partial x_i}$, by relying on the Chain Rule to first calculate the intermediate $\\frac{\\partial \\mu}{\\partial x_i}, \\frac{\\partial v}{\\partial x_i}, \\frac{\\partial \\sigma}{\\partial x_i},$ then assemble these pieces to calculate $\\frac{\\partial y_i}{\\partial x_i}$. \n\nYou should make sure each of the intermediary gradient derivations are all as simplified as possible, for ease of implementation. \n\nAfter doing so, implement the simplified batch normalization backward pass in the function `batchnorm_backward_alt` and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\n\n\n```\nnp.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))\n```\n\n dx difference: 6.284600172572596e-13\n dgamma difference: 0.0\n dbeta difference: 0.0\n speedup: 2.77x\n\n\n## Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your `FullyConnectedNet` in the file `cs231n/classifiers/fc_net.py`. 
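For reference, if you push the algebra all the way through in this notation (with upstream gradient $\frac{\partial L}{\partial y_i}$ and normalized values $y_i$), one common fully simplified form is

\begin{align}
\frac{\partial L}{\partial x_i} = \frac{1}{N\sigma}\left(N\frac{\partial L}{\partial y_i} - \sum_{k=1}^N \frac{\partial L}{\partial y_k} - y_i\sum_{k=1}^N \frac{\partial L}{\partial y_k}\,y_k\right)
\end{align}

where the learnable scale $\gamma$ contributes an extra multiplicative factor once the shift/scale step is included; use it only as a check on the derivation you work out yourself.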
Modify your implementation to add batch normalization.\n\nConcretely, when the `normalization` flag is set to `\"batchnorm\"` in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\n\nHINT: You might find it useful to define an additional helper layer similar to those in the file `cs231n/layer_utils.py`. If you decide to do so, do it in the file `cs231n/classifiers/fc_net.py`.\n\n\n```\nnp.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\n# You should expect losses between 1e-4~1e-10 for W, \n# losses between 1e-08~1e-10 for b,\n# and losses between 1e-08~1e-09 for beta and gammas.\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n normalization='batchnorm')\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()\n```\n\n Running check with reg = 0\n Initial loss: 2.2611955101340957\n W1 relative error: 1.10e-04\n W2 relative error: 2.85e-06\n W3 relative error: 4.05e-10\n b1 relative error: 2.22e-07\n b2 relative error: 2.22e-08\n b3 relative error: 1.01e-10\n beta1 relative error: 7.33e-09\n beta2 relative error: 1.89e-09\n gamma1 relative error: 6.96e-09\n gamma2 relative error: 1.96e-09\n \n Running check with reg = 3.14\n Initial loss: 6.996533220108303\n W1 relative error: 1.98e-06\n W2 relative error: 2.28e-06\n W3 relative error: 1.11e-08\n b1 relative error: 1.38e-08\n b2 relative error: 7.99e-07\n b3 relative error: 1.73e-10\n beta1 relative error: 6.65e-09\n beta2 relative error: 3.48e-09\n gamma1 relative error: 8.80e-09\n gamma2 relative error: 5.28e-09\n\n\n# Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\nprint('Solver with batch norm:')\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True,print_every=20)\nbn_solver.train()\n\nprint('\\nSolver without batch norm:')\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()\n```\n\n Solver with batch norm:\n (Iteration 1 / 200) loss: 2.340975\n (Epoch 0 / 10) train acc: 0.107000; val_acc: 0.115000\n (Epoch 1 / 10) train acc: 0.314000; val_acc: 0.266000\n (Iteration 21 / 200) loss: 2.039365\n (Epoch 2 / 10) train acc: 0.389000; val_acc: 0.279000\n (Iteration 
41 / 200) loss: 2.036704\n (Epoch 3 / 10) train acc: 0.501000; val_acc: 0.322000\n (Iteration 61 / 200) loss: 1.776305\n (Epoch 4 / 10) train acc: 0.521000; val_acc: 0.311000\n (Iteration 81 / 200) loss: 1.285794\n (Epoch 5 / 10) train acc: 0.607000; val_acc: 0.310000\n (Iteration 101 / 200) loss: 1.277616\n (Epoch 6 / 10) train acc: 0.667000; val_acc: 0.344000\n (Iteration 121 / 200) loss: 1.074345\n (Epoch 7 / 10) train acc: 0.675000; val_acc: 0.320000\n (Iteration 141 / 200) loss: 1.133021\n (Epoch 8 / 10) train acc: 0.716000; val_acc: 0.310000\n (Iteration 161 / 200) loss: 0.798814\n (Epoch 9 / 10) train acc: 0.805000; val_acc: 0.323000\n (Iteration 181 / 200) loss: 0.996323\n (Epoch 10 / 10) train acc: 0.804000; val_acc: 0.300000\n \n Solver without batch norm:\n (Iteration 1 / 200) loss: 2.302332\n (Epoch 0 / 10) train acc: 0.129000; val_acc: 0.131000\n (Epoch 1 / 10) train acc: 0.283000; val_acc: 0.250000\n (Iteration 21 / 200) loss: 2.041970\n (Epoch 2 / 10) train acc: 0.316000; val_acc: 0.277000\n (Iteration 41 / 200) loss: 1.900473\n (Epoch 3 / 10) train acc: 0.373000; val_acc: 0.282000\n (Iteration 61 / 200) loss: 1.713156\n (Epoch 4 / 10) train acc: 0.390000; val_acc: 0.310000\n (Iteration 81 / 200) loss: 1.662209\n (Epoch 5 / 10) train acc: 0.434000; val_acc: 0.300000\n (Iteration 101 / 200) loss: 1.696062\n (Epoch 6 / 10) train acc: 0.536000; val_acc: 0.346000\n (Iteration 121 / 200) loss: 1.550785\n (Epoch 7 / 10) train acc: 0.530000; val_acc: 0.310000\n (Iteration 141 / 200) loss: 1.436308\n (Epoch 8 / 10) train acc: 0.622000; val_acc: 0.342000\n (Iteration 161 / 200) loss: 1.000868\n (Epoch 9 / 10) train acc: 0.654000; val_acc: 0.328000\n (Iteration 181 / 200) loss: 0.925455\n (Epoch 10 / 10) train acc: 0.726000; val_acc: 0.335000\n\n\nRun the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.\n\n\n```\ndef plot_training_history(title, label, baseline, bn_solvers, plot_fn, bl_marker='.', bn_marker='.', labels=None):\n \"\"\"utility function for plotting training history\"\"\"\n plt.title(title)\n plt.xlabel(label)\n bn_plots = [plot_fn(bn_solver) for bn_solver in bn_solvers]\n bl_plot = plot_fn(baseline)\n num_bn = len(bn_plots)\n for i in range(num_bn):\n label='with_norm'\n if labels is not None:\n label += str(labels[i])\n plt.plot(bn_plots[i], bn_marker, label=label)\n label='baseline'\n if labels is not None:\n label += str(labels[0])\n plt.plot(bl_plot, bl_marker, label=label)\n plt.legend(loc='lower center', ncol=num_bn+1) \n\n \nplt.subplot(3, 1, 1)\nplot_training_history('Training loss','Iteration', solver, [bn_solver], \\\n lambda x: x.loss_history, bl_marker='o', bn_marker='o')\nplt.subplot(3, 1, 2)\nplot_training_history('Training accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.train_acc_history, bl_marker='-o', bn_marker='-o')\nplt.subplot(3, 1, 3)\nplot_training_history('Validation accuracy','Epoch', solver, [bn_solver], \\\n lambda x: x.val_acc_history, bl_marker='-o', bn_marker='-o')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n# Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\n\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. 
The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.\n\n\n```\nnp.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers_ws = {}\nsolvers_ws = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization='batchnorm')\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers_ws[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers_ws[weight_scale] = solver\n```\n\n Running weight scale 1 / 20\n Running weight scale 2 / 20\n Running weight scale 3 / 20\n Running weight scale 4 / 20\n Running weight scale 5 / 20\n Running weight scale 6 / 20\n Running weight scale 7 / 20\n Running weight scale 8 / 20\n Running weight scale 9 / 20\n Running weight scale 10 / 20\n Running weight scale 11 / 20\n Running weight scale 12 / 20\n Running weight scale 13 / 20\n Running weight scale 14 / 20\n Running weight scale 15 / 20\n Running weight scale 16 / 20\n Running weight scale 17 / 20\n Running weight scale 18 / 20\n Running weight scale 19 / 20\n Running weight scale 20 / 20\n\n\n\n```\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers_ws[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers_ws[ws].train_acc_history))\n \n best_val_accs.append(max(solvers_ws[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers_ws[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers_ws[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers_ws[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', 
label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()\n```\n\n## Inline Question 1:\nDescribe the results of this experiment. How does the scale of weight initialization affect models with/without batch normalization differently, and why?\n\n## Answer:\nOnce the weight initialization scale grows beyond a certain point, both models fail to learn. This is because some features become distorted by the batchnorm layer when we scale the weights too much. \n\n\n# Batch normalization and batch size\nWe will now run a small experiment to study the interaction of batch normalization and batch size.\n\nThe first cell will train 6-layer networks both with and without batch normalization using different batch sizes. The second cell will plot training accuracy and validation set accuracy over time.\n\n\n```\ndef run_batchsize_experiments(normalization_mode):\n    np.random.seed(231)\n    # Try training a very deep net with batchnorm\n    hidden_dims = [100, 100, 100, 100, 100]\n    num_train = 1000\n    small_data = {\n      'X_train': data['X_train'][:num_train],\n      'y_train': data['y_train'][:num_train],\n      'X_val': data['X_val'],\n      'y_val': data['y_val'],\n    }\n    n_epochs=10\n    weight_scale = 2e-2\n    batch_sizes = [5,10,50]\n    lr = 10**(-3.5)\n    solver_bsize = batch_sizes[0]\n\n    print('No normalization: batch size = ',solver_bsize)\n    model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=None)\n    solver = Solver(model, small_data,\n                    num_epochs=n_epochs, batch_size=solver_bsize,\n                    update_rule='adam',\n                    optim_config={\n                      'learning_rate': lr,\n                    },\n                    verbose=False)\n    solver.train()\n\n    bn_solvers = []\n    for i in range(len(batch_sizes)):\n        b_size=batch_sizes[i]\n        print('Normalization: batch size = ',b_size)\n        bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, normalization=normalization_mode)\n        bn_solver = Solver(bn_model, small_data,\n                           num_epochs=n_epochs, batch_size=b_size,\n                           update_rule='adam',\n                           optim_config={\n                             'learning_rate': lr,\n                           },\n                           verbose=False)\n        bn_solver.train()\n        bn_solvers.append(bn_solver)\n\n    return bn_solvers, solver, batch_sizes\n\nbatch_sizes = [5,10,50]\nbn_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('batchnorm')\n```\n\n    No normalization: batch size = 5\n    Normalization: batch size = 5\n    Normalization: batch size = 10\n    Normalization: batch size = 50\n\n\n\n```\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n              lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Batch Normalization)','Epoch', solver_bsize, bn_solvers_bsize, \\\n              lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 2:\nDescribe the results of this experiment. What does this imply about the relationship between batch normalization and batch size? Why is this relationship observed?\n\n## Answer:\nIf the batch size is not large enough, adding batch normalization actually hurts performance, because the per-batch estimates of the mean and variance become noisy. As the batch size increases, the normalization begins to work as intended. 
We can check that batch normalization can be used as a kind of regularization when we compare both baseline and norm with 5 in batch size.\n\n# Layer Normalization\nBatch normalization has proved to be effective in making networks easier to train, but the dependency on batch size makes it less useful in complex networks which have a cap on the input batch size due to hardware limitations. \n\nSeveral alternatives to batch normalization have been proposed to mitigate this problem; one such technique is Layer Normalization [2]. Instead of normalizing over the batch, we normalize over the features. In other words, when using Layer Normalization, each feature vector corresponding to a single datapoint is normalized based on the sum of all terms within that feature vector.\n\n[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. \"Layer Normalization.\" stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)\n\n## Inline Question 3:\nWhich of these data preprocessing steps is analogous to batch normalization, and which is analogous to layer normalization?\n\n1. Scaling each image in the dataset, so that the RGB channels for each row of pixels within an image sums up to 1.\n2. Scaling each image in the dataset, so that the RGB channels for all pixels within an image sums up to 1. \n3. Subtracting the mean image of the dataset from each image in the dataset.\n4. Setting all RGB values to either 0 or 1 depending on a given threshold.\n\n## Answer:\nanalogous to batch normalization: 3 \\\nanalogous to layer normalization: 2\n\n\n# Layer Normalization: Implementation\n\nNow you'll implement layer normalization. This step should be relatively straightforward, as conceptually the implementation is almost identical to that of batch normalization. One significant difference though is that for layer normalization, we do not keep track of the moving moments, and the testing phase is identical to the training phase, where the mean and variance are directly calculated per datapoint.\n\nHere's what you need to do:\n\n* In `cs231n/layers.py`, implement the forward pass for layer normalization in the function `layernorm_forward`. \n\nRun the cell below to check your results.\n* In `cs231n/layers.py`, implement the backward pass for layer normalization in the function `layernorm_backward`. \n\nRun the second cell below to check your results.\n* Modify `cs231n/classifiers/fc_net.py` to add layer normalization to the `FullyConnectedNet`. When the `normalization` flag is set to `\"layernorm\"` in the constructor, you should insert a layer normalization layer before each ReLU nonlinearity. 
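A minimal self-contained sketch of this normalize-over-features idea in plain numpy is shown below. The function name and signature here are only for illustration and do not follow the `layernorm_forward` interface in the starter code, which also takes an `ln_param` argument and returns a cache for the backward pass:\n\n\n```\nimport numpy as np\n\ndef layernorm_sketch(x, gamma, beta, eps=1e-5):\n    # x has shape (N, D): every row (one datapoint) is normalized over its D features\n    mu = x.mean(axis=1, keepdims=True)       # per-row mean, shape (N, 1)\n    var = x.var(axis=1, keepdims=True)       # per-row variance, shape (N, 1)\n    x_hat = (x - mu) / np.sqrt(var + eps)    # each row now has mean ~0 and std ~1\n    return gamma * x_hat + beta              # learnable per-feature scale and shift\n\nx = 10.0 * np.random.randn(4, 3) + 5.0\nout = layernorm_sketch(x, gamma=np.ones(3), beta=np.zeros(3))\nprint(out.mean(axis=1))   # close to zero for every row\nprint(out.std(axis=1))    # close to one for every row\n```\n\nSince nothing in this computation depends on the other datapoints in the batch, no running averages are needed and the test-time behavior is identical to training.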
\n\nRun the third cell below to run the batch size experiment on layer normalization.\n\n\n```\n# Check the training-time forward pass by checking means and variances\n# of features both before and after layer normalization \n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 =4, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before layer normalization:')\nprint_mean_std(a,axis=1)\n\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\n# Means should be close to zero and stds close to one\nprint('After layer normalization (gamma=1, beta=0)')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n\ngamma = np.asarray([3.0,3.0,3.0])\nbeta = np.asarray([5.0,5.0,5.0])\n# Now means should be close to beta and stds close to gamma\nprint('After layer normalization (gamma=', gamma, ', beta=', beta, ')')\na_norm, _ = layernorm_forward(a, gamma, beta, {'mode': 'train'})\nprint_mean_std(a_norm,axis=1)\n```\n\n Before layer normalization:\n means: [-59.06673243 -47.60782686 -43.31137368 -26.40991744]\n stds: [10.07429373 28.39478981 35.28360729 4.01831507]\n \n After layer normalization (gamma=1, beta=0)\n means: [ 4.81096644e-16 -7.40148683e-17 2.22044605e-16 -5.92118946e-16]\n stds: [0.99999995 0.99999999 1. 0.99999969]\n \n After layer normalization (gamma= [3. 3. 3.] , beta= [5. 5. 5.] )\n means: [5. 5. 5. 5.]\n stds: [2.99999985 2.99999998 2.99999999 2.99999907]\n \n\n\n\n```\n# Gradient check layernorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nln_param = {}\nfx = lambda x: layernorm_forward(x, gamma, beta, ln_param)[0]\nfg = lambda a: layernorm_forward(x, a, beta, ln_param)[0]\nfb = lambda b: layernorm_forward(x, gamma, b, ln_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = layernorm_forward(x, gamma, beta, ln_param)\ndx, dgamma, dbeta = layernorm_backward(dout, cache)\n\n#You should expect to see relative errors between 1e-12 and 1e-8\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))\n```\n\n dx error: 1.4336158494902849e-09\n dgamma error: 4.519489546032799e-12\n dbeta error: 2.276445013433725e-12\n\n\n# Layer Normalization and batch size\n\nWe will now run the previous batch size experiment with layer normalization instead of batch normalization. Compared to the previous experiment, you should see a markedly smaller influence of batch size on the training history!\n\n\n```\nln_solvers_bsize, solver_bsize, batch_sizes = run_batchsize_experiments('layernorm')\n\nplt.subplot(2, 1, 1)\nplot_training_history('Training accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.train_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\nplt.subplot(2, 1, 2)\nplot_training_history('Validation accuracy (Layer Normalization)','Epoch', solver_bsize, ln_solvers_bsize, \\\n lambda x: x.val_acc_history, bl_marker='-^', bn_marker='-o', labels=batch_sizes)\n\nplt.gcf().set_size_inches(15, 10)\nplt.show()\n```\n\n## Inline Question 4:\nWhen is layer normalization likely to not work well, and why?\n\n1. 
Using it in a very deep network\n2. Having a very small dimension of features\n3. Having a high regularization term\n\n\n## Answer:\n[FILL THIS IN]\n\n
Many of the equations and ways of reasoning about the\nunderlying laws of motion and pertinent forces, shape our approaches and understanding\nof the scientific method and discourse, as well as the way we develop our insights\nand deeper understanding about physical systems.\n\n## From Continuous to Discretized Approaches\n\nThere is a wealth of\nwell-tested (from both a physics point of view and a pedagogical\nstandpoint) exercises and problems which can be solved\nanalytically. However, many of these problems represent idealized and\nless realistic situations. The large majority of these problems are\nsolved by paper and pencil and are traditionally aimed\nat what we normally refer to as continuous models from which we may find an analytical solution. As a consequence,\nwhen teaching mechanics, it implies that we can seldomly venture beyond an idealized case\nin order to develop our understandings and insights about the\nunderlying forces and laws of motion.\n\nWe aim at changing this here by introducing throughout the course what\nwe will call a **computational path**, where with computations we mean\nsolving scientific problems with all possible tools and means, from\nplain paper an pencil exercises, via symbolic calculations to writing\na code and running a program to solve a specific\nproblem. Mathematically this normally means that we move from a\ncontinuous problem to a discretized one. This appproach enables us to\nsolve a much broader class of problems.\nIn mechanics this means, since we often rephrase the physical problems in terms of differential equations, that we can in most settings reuse the same program with some minimal changes.\n\n## Space, Time, Motion, Reference Frames and Reminder on vectors and other mathematical quantities\n\nOur studies will start with the motion of different types of objects\nsuch as a falling ball, a runner, a bicycle etc etc. It means that an\nobject's position in space varies with time.\nIn order to study such systems we need to define\n\n* choice of origin\n\n* choice of the direction of the axes\n\n* choice of positive direction (left-handed or right-handed system of reference)\n\n* choice of units and dimensions\n\nThese choices lead to some important questions such as\n\n* is the physics of a system independent of the origin of the axes?\n\n* is the physics independent of the directions of the axes, that is are there privileged axes?\n\n* is the physics independent of the orientation of system?\n\n* is the physics independent of the scale of the length?\n\n## Dimension, units and labels\n\nThroughout this course we will use the standardized SI units. The standard unit for length is thus one meter 1m, for mass\none kilogram 1kg, for time one second 1s, for force one Newton 1kgm/s$^2$ and for energy 1 Joule 1kgm$^2$s$^{-2}$.\n\nWe will use the following notations for various variables (vectors are always boldfaced in these lecture notes):\n* position $\\boldsymbol{r}$, in one dimention we will normally just use $x$,\n\n* mass $m$,\n\n* time $t$,\n\n* velocity $\\boldsymbol{v}$ or just $v$ in one dimension,\n\n* acceleration $\\boldsymbol{a}$ or just $a$ in one dimension,\n\n* momentum $\\boldsymbol{p}$ or just $p$ in one dimension,\n\n* kinetic energy $K$,\n\n* potential energy $V$ and\n\n* frequency $\\omega$.\n\nMore variables will be defined as we need them.\n\n## Dimensions and Units\n\nIt is also important to keep track of dimensionalities. Don't mix this\nup with a chosen unit for a given variable. 
We mark the dimensionality\nin these lectures as $[a]$, where $a$ is the quantity we are\ninterested in. Thus\n\n* $[\\boldsymbol{r}]=$ length\n\n* $[m]=$ mass\n\n* $[K]=$ energy\n\n* $[t]=$ time\n\n* $[\\boldsymbol{v}]=$ length over time\n\n* $[\\boldsymbol{a}]=$ length over time squared\n\n* $[\\boldsymbol{p}]=$ mass times length over time\n\n* $[\\omega]=$ 1/time\n\n## Scalars, Vectors and Matrices\n\nA scalar is something with a value that is independent of coordinate\nsystem. Examples are mass, or the relative time between events. A\nvector has magnitude and direction. Under rotation, the magnitude\nstays the same but the direction changes. Scalars have no spatial\nindex, whereas a three-dimensional vector has 3 indices, e.g. the\nposition $\\boldsymbol{r}$ has components $r_1,r_2,r_3$, which are often\nreferred to as $x,y,z$.\n\nThere are several categories of changes of coordinate system. The\nobserver can translate the origin, might move with a different\nvelocity, or might rotate her/his coordinate axes. For instance, a\nparticle's position vector changes when the origin is translated, but\nits velocity does not. When you study relativity you will find that\nquantities you thought of as scalars, such as time or an electric\npotential, are actually parts of four-dimensional vectors and that\nchanges of the velocity of the reference frame act in a similar way to\nrotations.\n\nIn addition to vectors and scalars, there are matrices, which have two\nindices. One also has objects with 3 or four indices. These are called\ntensors of rank $n$, where $n$ is the number of indices. A matrix is a\nrank-two tensor. The Levi-Civita symbol, $\\epsilon_{ijk}$ used for\ncross products of vectors, is a tensor of rank three.\n\n## Definitions of Vectors\n\nIn these lectures we will use boldfaced lower-case letters to label a\nvector. A vector $\\boldsymbol{a}$ in three dimensions is thus defined as\n\n$$\n\\boldsymbol{a} =(a_x,a_y, a_z),\n$$\n\nand using the unit vectors (see below) in a cartesian system we have\n\n$$\n\\boldsymbol{a} = a_x\\boldsymbol{e}_1+a_y\\boldsymbol{e}_2+a_z\\boldsymbol{e}_3,\n$$\n\nwhere the unit vectors have magnitude $\\vert\\boldsymbol{e}_i\\vert = 1$ with\n$i=1=x$, $i=2=y$ and $i=3=z$. Some authors use letters\n$\\boldsymbol{i}=\\boldsymbol{e}_1$, $\\boldsymbol{j}=\\boldsymbol{e}_2$ and $\\boldsymbol{k}=\\boldsymbol{e}_3$.\n\n## Other ways to define a Vector\n\nAlternatively, you may also encounter the above vector as\n\n$$\n\\boldsymbol{a} = a_1\\boldsymbol{e}_1+a_2\\boldsymbol{e}_2+a_3\\boldsymbol{e}_3.\n$$\n\nHere we have used that $a_1=a_x$, $a_2=a_y$ and $a_3=a_z$. Such a\nnotation is sometimes more convenient if we wish to represent vector\noperations in a mathematically more compact way, see below here. We may also find this useful if we want the different\ncomponents to represent other coordinate systems that the Cartesian one. A typical example would be going from a Cartesian representation to a spherical basis. We will encounter such cases many times in this course. \n\nWe use lower-case letters for vectors and upper-case letters for matrices. 
Vectors and matrices are always boldfaced.\n\n## Polar Coordinates\n\nAs an example, consider a two-dimensional Cartesian system with a vector $\\boldsymbol{r}=(x,y)$.\nOur vector is then written as\n\n$$\n\\boldsymbol{r} = x\\boldsymbol{e}_1+y\\boldsymbol{e}_2.\n$$\n\nTransforming to polar coordinates with the radius $\\rho\\in [0,\\infty)$\nand the angle $\\phi \\in [0,2\\pi]$ we have the familiar transformations\n\n$$\nx = \\rho \\cos{\\phi} \\hspace{0.5cm} y = \\rho \\sin{\\phi},\n$$\n\nand the inverse relations\n\n$$\n\\rho =\\sqrt{x^2+y^2} \\hspace{0.5cm} \\phi = \\mathrm{arctan}(\\frac{y}{x}).\n$$\n\nWe can rewrite the vector $\\boldsymbol{a}$ in terms of $\\rho$ and $\\phi$ as\n\n$$\n\\boldsymbol{a} = \\rho \\cos{\\phi}\\boldsymbol{e}_1+\\rho \\sin{\\phi}\\boldsymbol{e}_2,\n$$\n\nand we define the new unit vectors as $\\boldsymbol{e}'_1=\\cos{\\phi}\\boldsymbol{e}_1$ and $\\boldsymbol{e}'_2=\\sin{\\phi}\\boldsymbol{e}_2$, we have\n\n$$\n\\boldsymbol{a}' = \\rho\\boldsymbol{e}'_1+\\rho \\boldsymbol{e}'_2.\n$$\n\nBelow we will show that the norms of this vector in a Cartesian basis and a Polar basis are equal.\n\n## Unit Vectors\n\nAlso known as basis vectors, unit vectors point in the direction of\nthe coordinate axes, have unit norm, and are orthogonal to one\nanother. Sometimes this is referred to as an orthonormal basis,\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{e}_i\\cdot\\boldsymbol{e}_j=\\delta_{ij}=\\begin{bmatrix}\n1 & 0 & 0\\\\\n0& 1 & 0\\\\\n0 & 0 & 1\n\\end{bmatrix}.\n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\nHere, $\\delta_{ij}$ is unity when $i=j$ and is zero otherwise. This is\ncalled the unit matrix, because you can multiply it with any other\nmatrix and not change the matrix. The **dot** denotes the dot product,\n$\\boldsymbol{a}\\cdot\\boldsymbol{b}=a_1b_1+a_2b_2+a_3b_3=|a||b|\\cos\\theta_{ab}$. Sometimes\nthe unit vectors are called $\\hat{x}$, $\\hat{y}$ and\n$\\hat{z}$.\n\n## Our definition of unit vectors\n\nVectors can be decomposed in terms of unit vectors,\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{r}=r_1\\hat{e}_1+r_2\\hat{e}_2+r_3\\hat{e}_3.\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nThe vector components $r_1$, $r_2$ and $r_3$ might be\ncalled $x$, $y$ and $z$ for a displacement. Another way to write this is to define the vector $\\boldsymbol{r}=(x,y,z)$.\n\nSimilarly, for the velocity we will use in this course the components $\\boldsymbol{v}=(v_x,v_y,v_z)$. The acceleration is then given by $\\boldsymbol{a}=(a_x,a_y,a_z)$.\n\n## More definitions, repeated indices\n\nAs mentioned above, repeated indices imply sums.\nThis means that when you encounter an expression like the one on the left-hand side here, it actually stands for a sum (right-hand side)\n\n$$\nx_iy_i=\\sum_i x_iy_i=\\boldsymbol{x}\\cdot\\boldsymbol{y}.\n$$\n\nWe will in our lectures seldom use this notation and rather spell out the summations. This implied summation over indices is normally called the [Einstein summation convention](https://en.wikipedia.org/wiki/Einstein_notation).\n\n## Vector Operations, Scalar Product (or dot product)\n\nFor two vectors $\\boldsymbol{a}$ and $\\boldsymbol{b}$ we have\n\n$$\n\\begin{eqnarray*}\n\\boldsymbol{a}\\cdot\\boldsymbol{b}&=&\\sum_ia_ib_i=|a||b|\\cos\\theta_{ab},\\\\\n|a|&\\equiv& \\sqrt{\\boldsymbol{a}\\cdot\\boldsymbol{a}},\n\\end{eqnarray*}\n$$\n\nor with a norm-2 notation\n\n$$\n|a|\\equiv \\vert\\vert \\boldsymbol{a}\\vert\\vert_2=\\sqrt{\\sum_i a_i^2}.\n$$\n\nNot all of you are familiar with linear algebra. Numerically we will always deal with arrays, and the dot product is then given by the product of the transposed vector with the other vector, that is we have\n\n$$\n\\boldsymbol{a}^T\\boldsymbol{b}=\\sum_i a_ib_i=|a||b|\\cos\\theta_{ab}.\n$$\n\nThe superscript $T$ represents the transposition operation.\n\n## Digression, Linear Algebra Notation for Vectors\n\nAs an example, consider a three-dimensional velocity defined by a vector $\\boldsymbol{v}=(v_x,v_y,v_z)$. 
For those of you familiar with linear algebra, we would write this quantity as\n\n$$\n\\boldsymbol{v}=\\begin{bmatrix} v_x\\\\ v_y \\\\ v_z \\end{bmatrix},\n$$\n\nand the transpose as\n\n$$\n\\boldsymbol{v}^T=\\begin{bmatrix} v_x & v_y &v_z \\end{bmatrix}.\n$$\n\nThe norm is\n\n$$\n\\boldsymbol{v}^T\\boldsymbol{v}=v_x^2+v_y^2+v_z^2,\n$$\n\nas it should.\n\nSince we will use Python as a programming language throughout this course, the above vector, using the package **numpy** (see discussions below), can be written as\n\n\n```python\nimport numpy as np\n# Define the values of vx, vy and vz\nvx = 0.0\nvy = 1.0\nvz = 0.0\nv = np.array([vx, vy, vz])\nprint(v)\n# The print the transpose of v\nprint(v.T)\n```\n\nTry to figure out how to calculate the norm with **numpy**.\nWe will come back to **numpy** in the examples below.\n\n## Norm of a transformed Vector\n\nAs an example, consider our transformation of a two-dimensional Cartesian vector $\\boldsymbol{r}$ to polar coordinates.\nWe had\n\n$$\n\\boldsymbol{r} = x\\boldsymbol{e}_1+y\\boldsymbol{e}_2.\n$$\n\nTransforming to polar coordinates with the radius $\\rho\\in [0,\\infty)$\nand the angle $\\phi \\in [0,2\\pi]$ we have\n\n$$\nx = \\rho \\cos{\\phi} \\hspace{0.5cm} y = \\rho \\sin{\\phi}.\n$$\n\nWe can write this\n\n$$\n\\boldsymbol{r} = \\begin{bmatrix} x \\\\ y \\end{bmatrix}= \\begin{bmatrix} \\rho \\cos{\\phi} \\\\ \\rho \\sin{\\phi} \\end{bmatrix}.\n$$\n\nThe norm in Cartesian coordinates is $\\boldsymbol{r}\\cdot\\boldsymbol{r}=x^2+y^2$ and\nusing Polar coordinates we have\n$\\rho^2(\\cos{\\phi})^2+\\rho^2(\\cos{\\phi})^2=\\rho^2$, which shows that\nthe norm is conserved since we have $\\rho = \\sqrt{x^2+y^2}$. A\ntransformation to a new basis should not change the norm.\n\n## Vector Product (or cross product) of vectors $\\boldsymbol{a}$ and $\\boldsymbol{b}$\n\n$$\n\\begin{eqnarray*}\n\\boldsymbol{c}&=&\\boldsymbol{a}\\times\\boldsymbol{b},\\\\\nc_i&=&\\epsilon_{ijk}a_jb_k.\n\\end{eqnarray*}\n$$\n\nHere $\\epsilon$ is the third-rank anti-symmetric tensor, also known as\nthe Levi-Civita symbol. It is $\\pm 1$ only if all three indices are\ndifferent, and is zero otherwise. The choice of $\\pm 1$ depends on\nwhether the indices are an even or odd permutation of the original\nsymbols. The permutation $xyz$ or $123$ is considered to be $+1$. Its elements are\n\n$$\n\\begin{eqnarray}\n\\epsilon_{ijk}&=&-\\epsilon_{ikj}=-\\epsilon_{jik}=-\\epsilon_{kji}\\\\\n\\nonumber\n\\epsilon_{123}&=&\\epsilon_{231}=\\epsilon_{312}=1,\\\\\n\\nonumber\n\\epsilon_{213}&=&\\epsilon_{132}=\\epsilon_{321}=-1,\\\\\n\\nonumber\n\\epsilon_{iij}&=&\\epsilon_{iji}=\\epsilon_{jii}=0.\n\\end{eqnarray}\n$$\n\n## More on cross-products\n\nYou may have met cross-products when studying magnetic\nfields. Because the matrix is anti-symmetric, switching the $x$ and\n$y$ axes (or any two axes) flips the sign. If the coordinate system is\nright-handed, meaning the $xyz$ axes satisfy\n$\\hat{x}\\times\\hat{y}=\\hat{z}$, where you can point along the $x$ axis\nwith your extended right index finger, the $y$ axis with your\ncontracted middle finger and the $z$ axis with your extended\nthumb. Switching to a left-handed system flips the sign of the vector\n$\\boldsymbol{c}=\\boldsymbol{a}\\times\\boldsymbol{b}$.\n\nNote that\n$\\boldsymbol{a}\\times\\boldsymbol{b}=-\\boldsymbol{b}\\times\\boldsymbol{a}$. 
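As a quick numerical sanity check of these properties, we can use **numpy**'s `np.cross`, `np.dot` and `np.linalg.norm` functions (the latter is also one way to compute the vector norm mentioned earlier):\n\n\n```python\nimport numpy as np\n\na = np.array([1.0, 2.0, 3.0])\nb = np.array([-1.0, 0.5, 2.0])\nc = np.cross(a, b)\n\n# antisymmetry: a x b = -(b x a)\nprint(np.allclose(c, -np.cross(b, a)))\n# c is orthogonal to both a and b\nprint(np.dot(c, a), np.dot(c, b))\n# |c| equals |a| |b| sin(theta_ab)\ncostheta = np.dot(a, b)/(np.linalg.norm(a)*np.linalg.norm(b))\nprint(np.linalg.norm(c), np.linalg.norm(a)*np.linalg.norm(b)*np.sqrt(1.0 - costheta**2))\n```\n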
The vector $\\boldsymbol{c}$ is\nperpendicular to both $\\boldsymbol{a}$ and $\\boldsymbol{b}$ and the magnitude of\n$\\boldsymbol{c}$ is given by\n\n$$\n|c|=|a||b|\\sin{\\theta_{ab}}.\n$$\n\n## Pseudo-vectors\n\nVectors obtained by the cross product of two real vectors are called\npseudo-vectors because the assignment of their direction can be\narbitrarily flipped by defining the Levi-Civita symbol to be based on\nleft-handed rules. Examples are the magnetic field and angular\nmomentum. If the direction of a real vector prefers the right-handed\nover the left-handed direction, that constitutes a violation of\nparity. For instance, one can polarize the spins (angular momentum) of\nnuclei with a magnetic field so that the spins preferentially point\nalong the direction of the magnetic field. This does not violate\nparity because both are pseudo-vectors. Now assume these polarized\nnuclei decay and that electrons are one of the products. If these\nelectrons prefer to exit the decay parallel vs. antiparallel to the\npolarizing magnetic field, this constitutes parity violation because\nthe direction of the outgoing electron momenta are a real vector. This\nis precisely what is observed in weak decays.\n\n## Differentiation of a vector with respect to a scalar\n\nFor example, the\nacceleration $\\boldsymbol{a}$ is given by the change in velocity per unit time, $\\boldsymbol{a}=d\\boldsymbol{v}/dt$\nwith components\n\n$$\na_i = (d\\boldsymbol{v}/dt)_i=\\frac{dv_i}{dt}.\n$$\n\nHere $i=x,y,z$ or $i=1,2,3$ if we are in three dimensions.\n\n## Gradient operator $\\nabla$\n\nThis represents the derivatives $\\partial/\\partial\nx$, $\\partial/\\partial y$ and $\\partial/\\partial z$. An often used shorthand is $\\partial_x=\\partial/\\partial_x$.\n\nThe gradient of a scalar function of position and time\n$\\Phi(x,y,z)=\\Phi(\\boldsymbol{r},t)$ is given by\n\n$$\n\\boldsymbol{\\nabla}~\\Phi,\n$$\n\nwith components $i$\n\n$$\n(\\nabla\\Phi(x,y,z,t))_i=\\partial/\\partial r_i\\Phi(\\boldsymbol{r},t)=\\partial_i\\Phi(\\boldsymbol{r},t).\n$$\n\nNote that the gradient is a vector.\n\nTaking the dot product of the gradient with a vector, normally called the divergence,\nwe have\n\n$$\n\\mathrm{div} \\boldsymbol{a}, \\nabla\\cdot\\boldsymbol{a}=\\sum_i \\partial_i a_i.\n$$\n\nNote that the divergence is a scalar.\n\n## The curl\n\nThe **curl** of a vector is defined as\n$\\nabla\\times\\boldsymbol{a}$,\n\n$$\n{\\rm\\bf curl}~\\boldsymbol{a},\n$$\n\nwith components\n\n$$\n(\\boldsymbol{\\nabla}\\times\\boldsymbol{a})_i=\\epsilon_{ijk}\\partial_j a_k(\\boldsymbol{r},t).\n$$\n\n## The Laplacian\n\nThe Laplacian is referred to as $\\nabla^2$ and is defined as\n\n$$\n\\boldsymbol{\\nabla}^2=\\boldsymbol{\\nabla}\\cdot\\boldsymbol{\\nabla}=\\frac{\\partial^2}{\\partial x^2}+\\frac{\\partial^2}{\\partial y^2}+\\frac{\\partial^2}{\\partial z^2}.\n$$\n\nQuestion: is the Laplacian a scalar or a vector?\n\n## Some identities\n\nHere we simply state these, but you may wish to prove a few. 
They are useful for this class and will be essential when you study electromagnetism.\n\n$$\n\\begin{eqnarray}\n\\boldsymbol{a}\\cdot(\\boldsymbol{b}\\times\\boldsymbol{c})&=&\\boldsymbol{b}\\cdot(\\boldsymbol{c}\\times\\boldsymbol{a})=\\boldsymbol{c}\\cdot(\\boldsymbol{a}\\times\\boldsymbol{b})\\\\\n\\nonumber\n\\boldsymbol{a}\\times(\\boldsymbol{b}\\times\\boldsymbol{c})&=&(\\boldsymbol{a}\\cdot\\boldsymbol{c})\\boldsymbol{b}-(\\boldsymbol{a}\\cdot\\boldsymbol{b})\\boldsymbol{c}\\\\\n\\nonumber\n(\\boldsymbol{a}\\times\\boldsymbol{b})\\cdot(\\boldsymbol{c}\\times\\boldsymbol{d})&=&(\\boldsymbol{a}\\cdot\\boldsymbol{c})(\\boldsymbol{b}\\cdot\\boldsymbol{d})\n-(\\boldsymbol{a}\\cdot\\boldsymbol{d})(\\boldsymbol{b}\\cdot\\boldsymbol{c})\n\\end{eqnarray}\n$$\n\n## More useful relations\n\nUsing the fact that multiplication of reals is distributive we can show that\n\n$$\n\\boldsymbol{a}(\\boldsymbol{b}+\\boldsymbol{c})=\\boldsymbol{a}\\boldsymbol{b}+\\boldsymbol{a}\\boldsymbol{c},\n$$\n\nSimilarly we can also show that (using product rule for differentiating reals)\n\n$$\n\\frac{d}{dt}(\\boldsymbol{a}\\boldsymbol{b})=\\boldsymbol{a}\\frac{d\\boldsymbol{b}}{dt}+\\boldsymbol{b}\\frac{d\\boldsymbol{a}}{dt}.\n$$\n\nWe can repeat these operations for the cross products and show that they are distribuitive\n\n$$\n\\boldsymbol{a}\\times(\\boldsymbol{b}+\\boldsymbol{c})=\\boldsymbol{a}\\times\\boldsymbol{b}+\\boldsymbol{a}\\times\\boldsymbol{c}.\n$$\n\nWe have also that\n\n$$\n\\frac{d}{dt}(\\boldsymbol{a}\\times\\boldsymbol{b})=\\boldsymbol{a}\\times\\frac{d\\boldsymbol{b}}{dt}+\\boldsymbol{b}\\times\\frac{d\\boldsymbol{a}}{dt}.\n$$\n\n## Gauss's Theorem\n\nFor an integral over a volume $V$ confined by a surface $S$, Gauss's theorem gives\n\n$$\n\\int_V dv~\\nabla\\cdot\\boldsymbol{A}=\\int_Sd\\boldsymbol{S}\\cdot\\boldsymbol{A}.\n$$\n\nFor a closed path $C$ which carves out some area $S$,\n\n$$\n\\int_C d\\boldsymbol{\\ell}\\cdot\\boldsymbol{A}=\\int_Sd\\boldsymbol{s} \\cdot(\\nabla\\times\\boldsymbol{A})\n$$\n\n## and Stokes's Theorem\n\nStoke's law can be understood by considering a small rectangle,\n$-\\Delta x\n\n Relations Name matrix elements \n\n\n $A = A^{T}$ symmetric $a_{ij} = a_{ji}$ \n $A = \\left (A^{T} \\right )^{-1}$ real orthogonal $\\sum_k a_{ik} a_{jk} = \\sum_k a_{ki} a_{kj} = \\delta_{ij}$ \n $A = A^{ * }$ real matrix $a_{ij} = a_{ij}^{ * }$ \n $A = A^{\\dagger}$ hermitian $a_{ij} = a_{ji}^{ * }$ \n $A = \\left (A^{\\dagger} \\right )^{-1}$ unitary $\\sum_k a_{ik} a_{jk}^{ * } = \\sum_k a_{ki}^{ * } a_{kj} = \\delta_{ij}$ \n\n\n\n## Some famous Matrices\n\n * Diagonal if $a_{ij}=0$ for $i\\ne j$\n\n * Upper triangular if $a_{ij}=0$ for $i > j$\n\n * Lower triangular if $a_{ij}=0$ for $i < j$\n\n * Upper Hessenberg if $a_{ij}=0$ for $i > j+1$\n\n * Lower Hessenberg if $a_{ij}=0$ for $i < j+1$\n\n * Tridiagonal if $a_{ij}=0$ for $|i -j| > 1$\n\n * Lower banded with bandwidth $p$: $a_{ij}=0$ for $i > j+p$\n\n * Upper banded with bandwidth $p$: $a_{ij}=0$ for $i < j+p$\n\n * Banded, block upper triangular, block lower triangular....\n\n## More Basic Matrix Features\n\n**Some Equivalent Statements.**\n\nFor an $N\\times N$ matrix $\\mathbf{A}$ the following properties are all equivalent\n\n * If the inverse of $\\mathbf{A}$ exists, $\\mathbf{A}$ is nonsingular.\n\n * The equation $\\mathbf{Ax}=0$ implies $\\mathbf{x}=0$.\n\n * The rows of $\\mathbf{A}$ form a basis of $R^N$.\n\n * The columns of $\\mathbf{A}$ form a basis of $R^N$.\n\n * $\\mathbf{A}$ is a product of elementary 
matrices.\n\n * $0$ is not eigenvalue of $\\mathbf{A}$.\n\n## Rotations\n\nHere, we use rotations as an example of matrices and their operations. One can consider a different orthonormal basis $\\hat{e}'_1$, $\\hat{e}'_2$ and $\\hat{e}'_3$. The same vector $\\boldsymbol{r}$ mentioned above can also be expressed in the new basis,\n\n\n
\n\n$$\n\\begin{equation}\n\\boldsymbol{r}=r'_1\\hat{e}'_1+r'_2\\hat{e}'_2+r'_3\\hat{e}'_3.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nEven though it is the same vector, the components have changed. Each\nnew unit vector $\\hat{e}'_i$ can be expressed as a linear sum of the\nprevious vectors,\n\n\n
\n\n$$\n\\begin{equation}\n\\hat{e}'_i=\\sum_j U_{ij}\\hat{e}_j,\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nand the matrix $U$ can be found by taking the dot product of both sides with $\\hat{e}_k$,\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\nonumber\n\\hat{e}_k\\cdot\\hat{e}'_i&=&\\sum_jU_{ij}\\hat{e}_k\\cdot\\hat{e}_j\\\\\n\\label{eq:lambda_angles} \\tag{5}\n\\hat{e}_k\\cdot\\hat{e}'_i&=&\\sum_jU_{ij}\\delta_{jk}=U_{ik}.\n\\end{eqnarray}\n$$\n\n## More on the matrix $U$\n\nThus, the matrix lambda has components $U_{ij}$ that are equal to the\ncosine of the angle between new unit vector $\\hat{e}'_i$ and the old\nunit vector $\\hat{e}_j$.\n\n\n
\n\n$$\n\\begin{equation}\nU = \\begin{bmatrix}\n\\hat{e}'_1\\cdot\\hat{e}_1& \\hat{e}'_1\\cdot\\hat{e}_2& \\hat{e}'_1\\cdot\\hat{e}_3\\\\\n\\hat{e}'_2\\cdot\\hat{e}_1& \\hat{e}'_2\\cdot\\hat{e}_2& \\hat{e}'_2\\cdot\\hat{e}_3\\\\\n\\hat{e}'_3\\cdot\\hat{e}_1& \\hat{e}'_3\\cdot\\hat{e}_2& \\hat{e}'_3\\cdot\\hat{e}_3\n\\end{bmatrix},~~~~~U_{ij}=\\hat{e}'_i\\cdot\\hat{e}_j=\\cos\\theta_{ij}.\n\\label{_auto5} \\tag{6}\n\\end{equation}\n$$\n\n## Properties of the matrix $U$\n\nNote that the matrix is not symmetric, $U_{ij}\\ne U_{ji}$. One can also look at the inverse transformation, by switching the primed and unprimed coordinates,\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\label{eq:inverseU} \\tag{7}\n\\hat{e}_i&=&\\sum_jU^{-1}_{ij}\\hat{e}'_j,\\\\\n\\nonumber\nU^{-1}_{ij}&=&\\hat{e}_i\\cdot\\hat{e}'_j=U_{ji}.\n\\end{eqnarray}\n$$\n\nThe definition of transpose of a matrix, $M^{t}_{ij}=M_{ji}$, allows one to state this as\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\label{eq:transposedef} \\tag{8}\nU^{-1}&=&U^{t}.\n\\end{eqnarray}\n$$\n\n## Tensors\n\nA tensor obeying Eq. ([8](#eq:transposedef)) defines what is known as\na unitary, or orthogonal, transformation.\n\nThe matrix $U$ can be used to transform any vector to the new basis. Consider a vector\n\n$$\n\\begin{eqnarray}\n\\boldsymbol{r}&=&r_1\\hat{e}_1+r_2\\hat{e}_2+r_3\\hat{e}_3\\\\\n\\nonumber\n&=&r'_1\\hat{e}'_1+r'_2\\hat{e}'_2+r'_3\\hat{e}'_3.\n\\end{eqnarray}\n$$\n\nThis is the same vector expressed as a sum over two different sets of\nbasis vectors. The coefficients $r_i$ and $r'_i$ represent components\nof the same vector. The relation between them can be found by taking\nthe dot product of each side with one of the unit vectors,\n$\\hat{e}_i$, which gives\n\n$$\n\\begin{eqnarray}\nr_i&=&\\sum_j \\hat{e}_i\\cdot\\hat{e}'_j~r'_j.\n\\end{eqnarray}\n$$\n\nUsing Eq. ([7](#eq:inverseU)) one can see that the transformation of $r$ can be also written in terms of $U$,\n\n\n
\n\n$$\n\\begin{eqnarray}\n\\label{eq:rotateR} \\tag{9}\nr_i&=&\\sum_jU^{-1}_{ij}~r'_j.\n\\end{eqnarray}\n$$\n\nThus, the matrix that transforms the coordinates of the unit vectors,\nEq. ([7](#eq:inverseU)) is the same one that transforms the\ncoordinates of a vector, Eq. ([9](#eq:rotateR)).\n\n## Rotation matrix\n\nAs a small exercise, find the rotation matrix $U$ for finding the\ncomponents in the primed coordinate system given from those in the\nunprimed system, given that the unit vectors in the new system are\nfound by rotating the coordinate system by and angle $\\phi$ about the\n$z$ axis.\n\nIn this case\n\n$$\n\\begin{eqnarray*}\n\\hat{e}'_1&=&\\cos\\phi \\hat{e}_1-\\sin\\phi\\hat{e}_2,\\\\\n\\hat{e}'_2&=&\\sin\\phi\\hat{e}_1+\\cos\\phi\\hat{e}_2,\\\\\n\\hat{e}'_3&=&\\hat{e}_3.\n\\end{eqnarray*}\n$$\n\nBy inspecting Eq. ([5](#eq:lambda_angles)), we get\n\n$$\nU=\\left(\\begin{array}{ccc}\n\\cos\\phi&-\\sin\\phi&0\\\\\n\\sin\\phi&\\cos\\phi&0\\\\\n0&0&1\\end{array}\\right).\n$$\n\n## Unitary Transformations\n\nUnder a unitary transformation $U$ (or basis transformation) scalars\nare unchanged, whereas vectors $\\boldsymbol{r}$ and matrices $M$ change as\n\n$$\n\\begin{eqnarray}\nr'_i&=&U_{ij}~ r_j, ~~({\\rm sum~inferred})\\\\\n\\nonumber\nM'_{ij}&=&U_{ik}M_{km}U^{-1}_{mj}.\n\\end{eqnarray}\n$$\n\nPhysical quantities with no spatial indices are scalars (or\npseudoscalars if they depend on right-handed vs. left-handed\ncoordinate systems), and are unchanged by unitary\ntransformations. This includes quantities like the trace of a matrix,\nthe matrix itself had indices but none remain after performing the\ntrace.\n\n$$\n\\begin{eqnarray}\n{\\rm Tr} M&\\equiv& M_{ii}.\n\\end{eqnarray}\n$$\n\nBecause there are no remaining indices, one expects it to be a scalar. Indeed one can see this,\n\n$$\n\\begin{eqnarray}\n{\\rm Tr} M'&=&U_{ij}M_{jm}U^{-1}_{mi}\\\\\n\\nonumber\n&=&M_{jm}U^{-1}_{mi}U_{ij}\\\\\n\\nonumber\n&=&M_{jm}\\delta_{mj}\\\\\n\\nonumber\n&=&M_{jj}={\\rm Tr} M.\n\\end{eqnarray}\n$$\n\nA similar example is the determinant of a matrix, which is also a scalar.\n\n## Numerical Elements\n\nNumerical algorithms call for approximate discrete models and much of\nthe development of methods for continuous models are nowadays being\nreplaced by methods for discrete models in science and industry,\nsimply because **much larger classes of problems can be addressed** with\ndiscrete models, often by simpler and more generic methodologies.\n\nAs we will see throughout this course, when properly scaling the equations at hand,\ndiscrete models open up for more advanced abstractions and the possibility to\nstudy real life systems, with the added bonus that we can explore and\ndeepen our basic understanding of various physical systems\n\nAnalytical solutions are as important as before. In addition, such\nsolutions provide us with invaluable benchmarks and tests for our\ndiscrete models. Such benchmarks, as we will see below, allow us \nto discuss possible sources of errors and their behaviors. 
And\nfinally, since most of our models are based on various algorithms from\nnumerical mathematics, we have a unique oppotunity to gain a deeper\nunderstanding of the mathematical approaches we are using.\n\nWith computing and data science as important elements in essentially\nall aspects of a modern society, we could then try to define Computing as\n**solving scientific problems using all possible tools, including\nsymbolic computing, computers and numerical algorithms, and analytical\npaper and pencil solutions**. \nComputing provides us with the tools to develope our own understanding of the scientific method by enhancing algorithmic thinking.\n\n## Computations and the Scientific Method\n\nThe way we will teach this course reflects this definition of\ncomputing. The course contains both classical paper and pencil\nexercises as well as computational projects and exercises. The hope is\nthat this will allow you to explore the physics of systems governed by\nthe degrees of freedom of classical mechanics at a deeper level, and\nthat these insights about the scientific method will help you to\ndevelop a better understanding of how the underlying forces and\nequations of motion and how they impact a given system.\n\nFurthermore,\nby introducing various numerical methods via computational projects\nand exercises, we aim at developing your competences and skills about\nthese topics.\n\n## Computational Competences\n\nThese competences will enable you to\n\n* understand how algorithms are used to solve mathematical problems,\n\n* derive, verify, and implement algorithms,\n\n* understand what can go wrong with algorithms,\n\n* use these algorithms to construct reproducible scientific outcomes and to engage in science in ethical ways, and\n\n* think algorithmically for the purposes of gaining deeper insights about scientific problems.\n\nAll these elements are central for maturing and gaining a better understanding of the modern scientific process *per se*.\n\nThe power of the scientific method lies in identifying a given problem\nas a special case of an abstract class of problems, identifying\ngeneral solution methods for this class of problems, and applying a\ngeneral method to the specific problem (applying means, in the case of\ncomputing, calculations by pen and paper, symbolic computing, or\nnumerical computing by ready-made and/or self-written software). This\ngeneric view on problems and methods is particularly important for\nunderstanding how to apply available, generic software to solve a\nparticular problem.\n\n*However, verification of algorithms and understanding their limitations requires much of the classical knowledge about continuous models.*\n\n## A well-known example to illustrate many of the above concepts\n\nBefore we venture into a reminder on Python and mechanics relevant applications, let us briefly outline some of the\nabovementioned topics using an example many of you may have seen before in for example CMSE201. \nA simple algorithm for integration is the Trapezoidal rule. \nIntegration of a function $f(x)$ by the Trapezoidal Rule is given by following algorithm for an interval $x \\in [a,b]$\n\n$$\n\\int_a^b(f(x) dx = \\frac{1}{2}\\left [f(a)+2f(a+h)+\\dots+2f(b-h)+f(b)\\right] +O(h^2),\n$$\n\nwhere $h$ is the so-called stepsize defined by the number of integration points $N$ as $h=(b-a)/(n)$.\nPython offers an extremely versatile programming environment, allowing for\nthe inclusion of analytical studies in a numerical program. 
Here we show an\nexample code with the **trapezoidal rule**. We use also **SymPy** to evaluate the exact value of the integral and compute the absolute error\nwith respect to the numerically evaluated one of the integral\n$\\int_0^1 dx x^2 = 1/3$.\nThe following code for the trapezoidal rule allows you to plot the relative error by comparing with the exact result. By increasing to $10^8$ points one arrives at a region where numerical errors start to accumulate.\n\n\n```python\n%matplotlib inline\n\nfrom math import log10\nimport numpy as np\nfrom sympy import Symbol, integrate\nimport matplotlib.pyplot as plt\n# function for the trapezoidal rule\ndef Trapez(a,b,f,n):\n h = (b-a)/float(n)\n s = 0\n x = a\n for i in range(1,n,1):\n x = x+h\n s = s+ f(x)\n s = 0.5*(f(a)+f(b)) +s\n return h*s\n# function to compute pi\ndef function(x):\n return x*x\n# define integration limits\na = 0.0; b = 1.0;\n# find result from sympy\n# define x as a symbol to be used by sympy\nx = Symbol('x')\nexact = integrate(function(x), (x, a, b))\n# set up the arrays for plotting the relative error\nn = np.zeros(9); y = np.zeros(9);\n# find the relative error as function of integration points\nfor i in range(1, 8, 1):\n npts = 10**i\n result = Trapez(a,b,function,npts)\n RelativeError = abs((exact-result)/exact)\n n[i] = log10(npts); y[i] = log10(RelativeError);\nplt.plot(n,y, 'ro')\nplt.xlabel('n')\nplt.ylabel('Relative error')\nplt.show()\n```\n\n## Analyzing the above example\n\nThis example shows the potential of combining numerical algorithms\nwith symbolic calculations, allowing us to\n\n* Validate and verify their algorithms. \n\n* Including concepts like unit testing, one has the possibility to test and test several or all parts of the code.\n\n* Validation and verification are then included *naturally* and one can develop a better attitude to what is meant with an ethically sound scientific approach.\n\n* The above example allows the student to also test the mathematical error of the algorithm for the trapezoidal rule by changing the number of integration points. The students get **trained from day one to think error analysis**. \n\n* With a Jupyter notebook you can keep exploring similar examples and turn them in as your own notebooks.\n\n## Python practicalities, Software and needed installations\n\nWe will make extensive use of Python as programming language and its\nmyriad of available libraries. You will find\nJupyter notebooks invaluable in your work. \n\nIf you have Python installed (we strongly recommend Python3) and you feel\npretty familiar with installing different packages, we recommend that\nyou install the following Python packages via **pip** as \n\n1. pip install numpy scipy matplotlib ipython scikit-learn mglearn sympy pandas pillow \n\nFor Python3, replace **pip** with **pip3**.\n\nFor OSX users we recommend, after having installed Xcode, to\ninstall **brew**. Brew allows for a seamless installation of additional\nsoftware via for example \n\n1. brew install python3\n\nFor Linux users, with its variety of distributions like for example the widely popular Ubuntu distribution,\nyou can use **pip** as well and simply install Python as \n\n1. 
sudo apt-get install python3 (or python for pyhton2.7)\n\netc etc.\n\n## Python installers\n\nIf you don't want to perform these operations separately and venture\ninto the hassle of exploring how to set up dependencies and paths, we\nrecommend two widely used distrubutions which set up all relevant\ndependencies for Python, namely \n\n* [Anaconda](https://docs.anaconda.com/), \n\nwhich is an open source\ndistribution of the Python and R programming languages for large-scale\ndata processing, predictive analytics, and scientific computing, that\naims to simplify package management and deployment. Package versions\nare managed by the package management system **conda**. \n\n* [Enthought canopy](https://www.enthought.com/product/canopy/) \n\nis a Python\ndistribution for scientific and analytic computing distribution and\nanalysis environment, available for free and under a commercial\nlicense.\n\nFurthermore, [Google's Colab](https://colab.research.google.com/notebooks/welcome.ipynb) is a free Jupyter notebook environment that requires \nno setup and runs entirely in the cloud. Try it out!\n\n## Useful Python libraries\nHere we list several useful Python libraries we strongly recommend (if you use anaconda many of these are already there)\n\n* [NumPy](https://www.numpy.org/) is a highly popular library for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays\n\n* [The pandas](https://pandas.pydata.org/) library provides high-performance, easy-to-use data structures and data analysis tools \n\n* [Xarray](http://xarray.pydata.org/en/stable/) is a Python package that makes working with labelled multi-dimensional arrays simple, efficient, and fun!\n\n* [Scipy](https://www.scipy.org/) (pronounced \u201cSigh Pie\u201d) is a Python-based ecosystem of open-source software for mathematics, science, and engineering. \n\n* [Matplotlib](https://matplotlib.org/) is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.\n\n* [Autograd](https://github.com/HIPS/autograd) can automatically differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures, and it can even take derivatives of derivatives of derivatives\n\n* [SymPy](https://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. \n\n* [scikit-learn](https://scikit-learn.org/stable/) has simple and efficient tools for machine learning, data mining and data analysis\n\n* [TensorFlow](https://www.tensorflow.org/) is a Python library for fast numerical computing created and released by Google\n\n* [Keras](https://keras.io/) is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano\n\n* And many more such as [pytorch](https://pytorch.org/), [Theano](https://pypi.org/project/Theano/) etc \n\nYour jupyter notebook can easily be\nconverted into a nicely rendered **PDF** file or a Latex file for\nfurther processing. For example, convert to latex as\n\n pycod jupyter nbconvert filename.ipynb --to latex \n\n\nAnd to add more versatility, the Python package [SymPy](http://www.sympy.org/en/index.html) is a Python library for symbolic mathematics. 
It aims to become a full-featured computer algebra system (CAS) and is entirely written in Python.\n\n## Numpy examples and Important Matrix and vector handling packages\n\nThere are several central software libraries for linear algebra and eigenvalue problems. Several of the more\npopular ones have been wrapped into ofter software packages like those from the widely used text **Numerical Recipes**. The original source codes in many of the available packages are often taken from the widely used\nsoftware package LAPACK, which follows two other popular packages\ndeveloped in the 1970s, namely EISPACK and LINPACK. We describe them shortly here.\n\n * LINPACK: package for linear equations and least square problems.\n\n * LAPACK:package for solving symmetric, unsymmetric and generalized eigenvalue problems. From LAPACK's website it is possible to download for free all source codes from this library. Both C/C++ and Fortran versions are available.\n\n * BLAS (I, II and III): (Basic Linear Algebra Subprograms) are routines that provide standard building blocks for performing basic vector and matrix operations. Blas I is vector operations, II vector-matrix operations and III matrix-matrix operations. Highly parallelized and efficient codes, all available for download from .\n\n## Numpy and arrays\n\n[Numpy](http://www.numpy.org/) provides an easy way to handle arrays in Python. The standard way to import this library is as\n\n\n```python\nimport numpy as np\n```\n\nHere follows a simple example where we set up an array of ten elements, all determined by random numbers drawn according to the normal distribution,\n\n\n```python\nn = 10\nx = np.random.normal(size=n)\nprint(x)\n```\n\nWe defined a vector $x$ with $n=10$ elements with its values given by the Normal distribution $N(0,1)$.\nAnother alternative is to declare a vector as follows\n\n\n```python\nimport numpy as np\nx = np.array([1, 2, 3])\nprint(x)\n```\n\nHere we have defined a vector with three elements, with $x_0=1$, $x_1=2$ and $x_2=3$. Note that both Python and C++\nstart numbering array elements from $0$ and on. This means that a vector with $n$ elements has a sequence of entities $x_0, x_1, x_2, \\dots, x_{n-1}$. We could also let (recommended) Numpy to compute the logarithms of a specific array as\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4, 7, 8]))\nprint(x)\n```\n\nIn the last example we used Numpy's unary function $np.log$. This function is\nhighly tuned to compute array elements since the code is vectorized\nand does not require looping. We normaly recommend that you use the\nNumpy intrinsic functions instead of the corresponding **log** function\nfrom Python's **math** module. The looping is done explicitely by the\n**np.log** function. The alternative, and slower way to compute the\nlogarithms of a vector would be to write\n\n\n```python\nimport numpy as np\nfrom math import log\nx = np.array([4, 7, 8])\nfor i in range(0, len(x)):\n x[i] = log(x[i])\nprint(x)\n```\n\nWe note that our code is much longer already and we need to import the **log** function from the **math** module. \nThe attentive reader will also notice that the output is $[1, 1, 2]$. Python interprets automagically our numbers as integers (like the **automatic** keyword in C++). 
This happens because the array was created from the integers 4, 7 and 8, so it is given an integer data type, and the in-place assignment `x[i] = log(x[i])` truncates each logarithm back to an integer. To change this we could define our array elements to be double precision numbers as\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4, 7, 8], dtype = np.float64))\nprint(x)\n```\n\nor simply write them as double precision numbers (Python uses 64 bits as default for floating point type variables), that is\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4.0, 7.0, 8.0]))\nprint(x)\n```\n\nTo check the number of bytes (remember that one byte contains eight bits), you can simply use the **itemsize** functionality (the array $x$ is actually an object which inherits the functionalities defined in Numpy) as\n\n\n```python\nimport numpy as np\nx = np.log(np.array([4.0, 7.0, 8.0]))\nprint(x.itemsize)\n```\n\n## Matrices in Python\n\nHaving defined vectors, we are now ready to try out matrices. We can\ndefine a $3 \\times 3$ real matrix $\\hat{A}$ as (recall that we use\nlowercase letters for vectors and uppercase letters for matrices)\n\n\n```python\nimport numpy as np\nA = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))\nprint(A)\n```\n\nIf we use the **shape** function we would get $(3, 3)$ as output, verifying that our matrix is a $3\\times 3$ matrix. We can slice the matrix and print for example the first column (Python organizes matrix elements in row-major order, see below) as\n\n\n```python\nimport numpy as np\nA = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))\n# print the first column; row-major order and elements start with 0\nprint(A[:,0])\n```\n\nWe can continue this way by printing out other columns or rows. The example here prints out the second row\n\n\n```python\nimport numpy as np\nA = np.log(np.array([ [4.0, 7.0, 8.0], [3.0, 10.0, 11.0], [4.0, 5.0, 7.0] ]))\n# print the second row; row-major order and elements start with 0\nprint(A[1,:])\n```\n\nNumpy contains many other functionalities that allow us to slice, subdivide etc etc arrays. We strongly recommend that you look up the [Numpy website for more details](http://www.numpy.org/). Useful functions when defining a matrix are the **np.zeros** function which declares a matrix of a given dimension and sets all elements to zero\n\n\n```python\nimport numpy as np\nn = 10\n# define a matrix of dimension 10 x 10 and set all elements to zero\nA = np.zeros( (n, n) )\nprint(A)\n```\n\nor initializing all elements to one\n\n\n```python\nimport numpy as np\nn = 10\n# define a matrix of dimension 10 x 10 and set all elements to one\nA = np.ones( (n, n) )\nprint(A)\n```\n\nor as uniformly distributed random numbers (see the material on random number generators in the statistics part)\n\n\n```python\nimport numpy as np\nn = 10\n# define a matrix of dimension 10 x 10 and set all elements to random numbers with x \\in [0, 1]\nA = np.random.rand(n, n)\nprint(A)\n```\n\n## Meet the Pandas\n

Figure 1:

\n\n\nAnother useful Python package is\n[pandas](https://pandas.pydata.org/), which is an open source library\nproviding high-performance, easy-to-use data structures and data\nanalysis tools for Python. **pandas** stands for panel data, a term borrowed from econometrics and is an efficient library for data analysis with an emphasis on tabular data.\n\n**pandas** has two major classes, the **DataFrame** class with\ntwo-dimensional data objects and tabular data organized in columns and\nthe class **Series** with a focus on one-dimensional data objects. Both\nclasses allow you to index data easily as we will see in the examples\nbelow. **pandas** allows you also to perform mathematical operations on\nthe data, spanning from simple reshapings of vectors and matrices to\nstatistical operations.\n\nThe following simple example shows how we can, in an easy way make\ntables of our data. Here we define a data set which includes names,\nplace of birth and date of birth, and displays the data in an easy to\nread way. We will see repeated use of **pandas**, in particular in\nconnection with classification of data.\n\n\n```python\nimport pandas as pd\nfrom IPython.display import display\ndata = {'First Name': [\"Frodo\", \"Bilbo\", \"Aragorn II\", \"Samwise\"],\n 'Last Name': [\"Baggins\", \"Baggins\",\"Elessar\",\"Gamgee\"],\n 'Place of birth': [\"Shire\", \"Shire\", \"Eriador\", \"Shire\"],\n 'Date of Birth T.A.': [2968, 2890, 2931, 2980]\n }\ndata_pandas = pd.DataFrame(data)\ndisplay(data_pandas)\n```\n\nIn the above we have imported **pandas** with the shorthand **pd**, the latter has become the standard way we import **pandas**. We make then a list of various variables\nand reorganize the above lists into a **DataFrame** and then print out a neat table with specific column labels as *Name*, *place of birth* and *date of birth*.\nDisplaying these results, we see that the indices are given by the default numbers from zero to three.\n**pandas** is extremely flexible and we can easily change the above indices by defining a new type of indexing as\n\n\n```python\ndata_pandas = pd.DataFrame(data,index=['Frodo','Bilbo','Aragorn','Sam'])\ndisplay(data_pandas)\n```\n\nThereafter we display the content of the row which begins with the index **Aragorn**\n\n\n```python\ndisplay(data_pandas.loc['Aragorn'])\n```\n\nWe can easily append data to this, for example\n\n\n```python\nnew_hobbit = {'First Name': [\"Peregrin\"],\n 'Last Name': [\"Took\"],\n 'Place of birth': [\"Shire\"],\n 'Date of Birth T.A.': [2990]\n }\ndata_pandas=data_pandas.append(pd.DataFrame(new_hobbit, index=['Pippin']))\ndisplay(data_pandas)\n```\n\nHere are other examples where we use the **DataFrame** functionality to handle arrays, now with more interesting features for us, namely numbers. We set up a matrix \nof dimensionality $10\\times 5$ and compute the mean value and standard deviation of each column. 
Similarly, we can perform mathematial operations like squaring the matrix elements and many other operations.\n\n\n```python\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display\nnp.random.seed(100)\n# setting up a 10 x 5 matrix\nrows = 10\ncols = 5\na = np.random.randn(rows,cols)\ndf = pd.DataFrame(a)\ndisplay(df)\nprint(df.mean())\nprint(df.std())\ndisplay(df**2)\n```\n\nThereafter we can select specific columns only and plot final results\n\n\n```python\ndf.columns = ['First', 'Second', 'Third', 'Fourth', 'Fifth']\ndf.index = np.arange(10)\n\ndisplay(df)\nprint(df['Second'].mean() )\nprint(df.info())\nprint(df.describe())\n\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\ndf.cumsum().plot(lw=2.0, figsize=(10,6))\nplt.show()\n\n\ndf.plot.bar(figsize=(10,6), rot=15)\nplt.show()\n```\n\nWe can produce a $4\\times 4$ matrix\n\n\n```python\nb = np.arange(16).reshape((4,4))\nprint(b)\ndf1 = pd.DataFrame(b)\nprint(df1)\n```\n\nand many other operations. \n\nThe **Series** class is another important class included in\n**pandas**. You can view it as a specialization of **DataFrame** but where\nwe have just a single column of data. It shares many of the same\nfeatures as **DataFrame**. As with **DataFrame**, most operations are\nvectorized, achieving thereby a high performance when dealing with\ncomputations of arrays, in particular labeled arrays. As we will see\nbelow it leads also to a very concice code close to the mathematical\noperations we may be interested in. For multidimensional arrays, we\nrecommend strongly\n[xarray](http://xarray.pydata.org/en/stable/). **xarray** has much of\nthe same flexibility as **pandas**, but allows for the extension to\nhigher dimensions than two.\n\n## Introduction to Git and GitHub/GitLab and similar\n\n[Git](https://git-scm.com/) is a distributed version-control system\nfor tracking changes in any set of files, originally designed for\ncoordinating work among programmers cooperating on source code during\nsoftware development.\n\nThe [reference document and videos here](https://git-scm.com/doc)\ngive you an excellent introduction to the **git**.\n\nWe believe you will find version-control software very useful in your work.\n\n## GitHub, GitLab and many other\n\n[GitHub](https://github.com/), [GitLab](https://about.gitlab.com/), [Bitbucket](https://bitbucket.org/product?&aceid=&adposition=&adgroup=92266806717&campaign=1407243017&creative=414608923671&device=c&keyword=bitbucket&matchtype=e&network=g&placement=&ds_kids=p51241248597&ds_e=GOOGLE&ds_eid=700000001551985&ds_e1=GOOGLE&gclid=Cj0KCQiA6Or_BRC_ARIsAPzuer_yrxzs-R8KDVdF0-DduJR9hTBYcjdE8L9_CkA9eyz8XT7-3bFGOpQaAqe2EALw_wcB&gclsrc=aw.ds) and other are code hosting platforms for\nversion control and collaboration. They let you and others work\ntogether on projects from anywhere.\n\nAll teaching material related to this course is open and freely\navailable via the GitHub site of the course. The video here gives a\nshort intro to\n[GitHub](https://www.youtube.com/watch/w3jLJU7DT5E?reload=9).\n\nSee also the [overview video on Git and GitHub](https://mediaspace.msu.edu/media/t/1_8mgx3cyf).\n\n## Useful Git and GitHub links\n\nThese are a couple references that we have found useful (git commands, markdown, GitPages):\n* \n\n* \n\n* \n\n## Useful IDEs and text editors\n\nWhen dealing with homeworks, at some point you would need to use an\neditor, or an integrated development envinroment (IDE). 
As an IDE, we\nwould like to recommend **anaconda** since we end up using\njupyter-notebooks. **anaconda** runs on all known operating systems.\n\nIf you prefer editing **Python** codes, there are several excellent cross-platform editors.\nIf you are in a Windows environment, **word** is the classical text editor.\n\nThere is however a wealth of text editors and/ord IDEs that run on all operating\nsystems and functions well with Python. Some of the more popular ones are\n\n* [Atom](https://atom.io/)\n\n* [Sublime](https://www.sublimetext.com/)\n\n## Our first Physics encounter\n\nWe start studying the problem of a falling object and use this to introduce numerical aspects.\n\n## Falling baseball in one dimension\n\nWe anticipate the mathematical model to come and assume that we have a\nmodel for the motion of a falling baseball without air resistance.\nOur system (the baseball) is at an initial height $y_0$ (which we will\nspecify in the program below) at the initial time $t_0=0$. In our program example here we will plot the position in steps of $\\Delta t$ up to a final time $t_f$. \nThe mathematical formula for the position $y(t)$ as function of time $t$ is\n\n$$\ny(t) = y_0-\\frac{1}{2}gt^2,\n$$\n\nwhere $g=9.80665=0.980655\\times 10^1$m/s${}^2$ is a constant representing the standard acceleration due to gravity.\nWe have here adopted the conventional standard value. This does not take into account other effects, such as buoyancy or drag.\nFurthermore, we stop when the ball hits the ground, which takes place at\n\n$$\ny(t) = 0= y_0-\\frac{1}{2}gt^2,\n$$\n\nwhich gives us a final time $t_f=\\sqrt{2y_0/g}$. \n\nAs of now we simply assume that we know the formula for the falling object. Afterwards, we will derive it.\n\n## Our Python Encounter\n\nWe start with preparing folders for storing our calculations, figures and if needed, specific data files we use as input or output files.\n\n\n```python\n# Common imports\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n#in case we have an input file we wish to read in\n#infile = open(data_path(\"MassEval2016.dat\"),'r')\n```\n\nYou could also define a function for making our plots. You\ncan obviously avoid this and simply set up various **matplotlib**\ncommands every time you need them. 
You may however find it convenient\nto collect all such commands in one function and simply call this\nfunction.\n\n\n```python\nfrom pylab import plt, mpl\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\n\ndef MakePlot(x,y, styles, labels, axlabels):\n plt.figure(figsize=(10,6))\n for i in range(len(x)):\n plt.plot(x[i], y[i], styles[i], label = labels[i])\n plt.xlabel(axlabels[0])\n plt.ylabel(axlabels[1])\n plt.legend(loc=0)\n```\n\nThereafter we start setting up the code for the falling object.\n\n\n```python\n%matplotlib inline\nimport matplotlib.patches as mpatches\n\ng = 9.80655 #m/s^2\ny_0 = 10.0 # initial position in meters\nDeltaT = 0.1 # time step\n# final time when y = 0, t = sqrt(2*10/g)\ntfinal = np.sqrt(2.0*y_0/g)\n#set up arrays \nt = np.arange(0,tfinal,DeltaT)\ny =y_0 -g*.5*t**2\n# Then make a nice printout in table form using Pandas\nimport pandas as pd\nfrom IPython.display import display\ndata = {'t[s]': t,\n 'y[m]': y\n }\nRawData = pd.DataFrame(data)\ndisplay(RawData)\nplt.style.use('ggplot')\nplt.figure(figsize=(8,8))\nplt.scatter(t, y, color = 'b')\nblue_patch = mpatches.Patch(color = 'b', label = 'Height y as function of time t')\nplt.legend(handles=[blue_patch])\nplt.xlabel(\"t[s]\")\nplt.ylabel(\"y[m]\")\nsave_fig(\"FallingBaseball\")\nplt.show()\n```\n\nHere we used **pandas** (see below) to systemize the output of the position as function of time.\n\n## Average quantities\nWe define now the average velocity as\n\n$$\n\\overline{v}(t) = \\frac{y(t+\\Delta t)-y(t)}{\\Delta t}.\n$$\n\nIn the code we have set the time step $\\Delta t$ to a given value. We could define it in terms of the number of points $n$ as\n\n$$\n\\Delta t = \\frac{t_{\\mathrm{final}-}t_{\\mathrm{initial}}}{n}.\n$$\n\nSince we have discretized the variables, we introduce the counter $i$ and let $y(t)\\rightarrow y(t_i)=y_i$ and $t\\rightarrow t_i$\nwith $i=0,1,\\dots, n$. This gives us the following shorthand notations that we will use for the rest of this course. We define\n\n$$\ny_i = y(t_i),\\hspace{0.2cm} i=0,1,2,\\dots,n.\n$$\n\nThis applies to other variables which depend on say time. Examples are the velocities, accelerations, momenta etc.\nFurthermore we use the shorthand\n\n$$\ny_{i\\pm 1} = y(t_i\\pm \\Delta t),\\hspace{0.12cm} i=0,1,2,\\dots,n.\n$$\n\n## Compact equations\nWe can then rewrite in a more compact form the average velocity as\n\n$$\n\\overline{v}_i = \\frac{y_{i+1}-y_{i}}{\\Delta t}.\n$$\n\nThe velocity is defined as the change in position per unit time.\nIn the limit $\\Delta t \\rightarrow 0$ this defines the instantaneous velocity, which is nothing but the slope of the position at a time $t$.\nWe have thus\n\n$$\nv(t) = \\frac{dy}{dt}=\\lim_{\\Delta t \\rightarrow 0}\\frac{y(t+\\Delta t)-y(t)}{\\Delta t}.\n$$\n\nSimilarly, we can define the average acceleration as the change in velocity per unit time as\n\n$$\n\\overline{a}_i = \\frac{v_{i+1}-v_{i}}{\\Delta t},\n$$\n\nresulting in the instantaneous acceleration\n\n$$\na(t) = \\frac{dv}{dt}=\\lim_{\\Delta t\\rightarrow 0}\\frac{v(t+\\Delta t)-v(t)}{\\Delta t}.\n$$\n\n**A note on notations**: When writing for example the velocity as $v(t)$ we are then referring to the continuous and instantaneous value. A subscript like\n$v_i$ refers always to the discretized values.\n\n## A differential equation\nWe can rewrite the instantaneous acceleration as\n\n$$\na(t) = \\frac{dv}{dt}=\\frac{d}{dt}\\frac{dy}{dt}=\\frac{d^2y}{dt^2}.\n$$\n\nThis forms the starting point for our definition of forces later. 
It is a famous second-order differential equation. If the acceleration is constant we can now recover the formula for the falling ball we started with.\nThe acceleration can depend on the position and the velocity. To be more formal we should then write the above differential equation as\n\n$$\n\\frac{d^2y}{dt^2}=a(t,y(t),\\frac{dy}{dt}).\n$$\n\nWith given initial conditions for $y(t_0)$ and $v(t_0)$ we can then\nintegrate the above equation and find the velocities and positions at\na given time $t$.\n\nIf we multiply with mass, we have one of the famous expressions for Newton's second law,\n\n$$\nF(y,v,t)=m\\frac{d^2y}{dt^2}=ma(t,y(t),\\frac{dy}{dt}),\n$$\n\nwhere $F$ is the force acting on an object with mass $m$. We see that it also has the right dimension, mass times length divided by time squared.\nWe will come back to this soon.\n\n## Integrating our equations\n\nFormally we can then, starting with the acceleration (suppose we have measured it, how could we do that?)\ncompute say the height of a building. To see this we perform the following integrations from an initial time $t_0$ to a given time $t$\n\n$$\n\\int_{t_0}^t dt' a(t') = \\int_{t_0}^t dt' \\frac{dv}{dt'} = v(t)-v(t_0),\n$$\n\nor as\n\n$$\nv(t)=v(t_0)+\\int_{t_0}^t dt' a(t').\n$$\n\nWhen we know the velocity as function of time, we can find the position as function of time starting from the defintion of velocity as the derivative with respect to time, that is we have\n\n$$\n\\int_{t_0}^t dt' v(t') = \\int_{t_0}^t dt' \\frac{dy}{dt'} = y(t)-y(t_0),\n$$\n\nor as\n\n$$\ny(t)=y(t_0)+\\int_{t_0}^t dt' v(t').\n$$\n\nThese equations define what is called the integration method for\nfinding the position and the velocity as functions of time. There is\nno loss of generality if we extend these equations to more than one\nspatial dimension.\n\n## Constant acceleration case, the velocity\nLet us compute the velocity using the constant value for the acceleration given by $-g$. We have\n\n$$\nv(t)=v(t_0)+\\int_{t_0}^t dt' a(t')=v(t_0)+\\int_{t_0}^t dt' (-g).\n$$\n\nUsing our initial time as $t_0=0$s and setting the initial velocity $v(t_0)=v_0=0$m/s we get when integrating\n\n$$\nv(t)=-gt.\n$$\n\nThe more general case is\n\n$$\nv(t)=v_0-g(t-t_0).\n$$\n\nWe can then integrate the velocity and obtain the final formula for the position as function of time through\n\n$$\ny(t)=y(t_0)+\\int_{t_0}^t dt' v(t')=y_0+\\int_{t_0}^t dt' v(t')=y_0+\\int_{t_0}^t dt' (-gt'),\n$$\n\nWith $y_0=10$m and $t_0=0$s, we obtain the equation we started with\n\n$$\ny(t)=10-\\frac{1}{2}gt^2.\n$$\n\n## Computing the averages\nAfter this mathematical background we are now ready to compute the mean velocity using our data.\n\n\n```python\n# Now we can compute the mean velocity using our data\n# We define first an array Vaverage\nn = np.size(t)\nVaverage = np.zeros(n)\nfor i in range(1,n-1):\n Vaverage[i] = (y[i+1]-y[i])/DeltaT\n# Now we can compute the mean accelearatio using our data\n# We define first an array Aaverage\nn = np.size(t)\nAaverage = np.zeros(n)\nAaverage[0] = -g\nfor i in range(1,n-1):\n Aaverage[i] = (Vaverage[i+1]-Vaverage[i])/DeltaT\ndata = {'t[s]': t,\n 'y[m]': y,\n 'v[m/s]': Vaverage,\n 'a[m/s^2]': Aaverage\n }\nNewData = pd.DataFrame(data)\ndisplay(NewData[0:n-2])\n```\n\n\n
|    | t[s] | y[m]      | v[m/s]     | a[m/s^2] |
|---:|-----:|----------:|-----------:|---------:|
|  0 |  0.0 | 10.000000 |   0.000000 | -9.80655 |
|  1 |  0.1 |  9.950967 |  -1.470982 | -9.80655 |
|  2 |  0.2 |  9.803869 |  -2.451638 | -9.80655 |
|  3 |  0.3 |  9.558705 |  -3.432292 | -9.80655 |
|  4 |  0.4 |  9.215476 |  -4.412948 | -9.80655 |
|  5 |  0.5 |  8.774181 |  -5.393602 | -9.80655 |
|  6 |  0.6 |  8.234821 |  -6.374258 | -9.80655 |
|  7 |  0.7 |  7.597395 |  -7.354913 | -9.80655 |
|  8 |  0.8 |  6.861904 |  -8.335567 | -9.80655 |
|  9 |  0.9 |  6.028347 |  -9.316222 | -9.80655 |
| 10 |  1.0 |  5.096725 | -10.296878 | -9.80655 |
| 11 |  1.1 |  4.067037 | -11.277533 | -9.80655 |
| 12 |  1.2 |  2.939284 | -12.258187 | -9.80655 |
\n\n\nNote that we don't print the last values!\n\n## Including Air Resistance in our model\n\nIn our discussions till now of the falling baseball, we have ignored\nair resistance and simply assumed that our system is only influenced\nby the gravitational force. We will postpone the derivation of air\nresistance till later, after our discussion of Newton's laws and\nforces.\n\nFor our discussions here it suffices to state that the accelerations is now modified to\n\n$$\n\\boldsymbol{a}(t) = -g +D\\boldsymbol{v}(t)\\vert v(t)\\vert,\n$$\n\nwhere $\\vert v(t)\\vert$ is the absolute value of the velocity and $D$ is a constant which pertains to the specific object we are studying.\nSince we are dealing with motion in one dimension, we can simplify the above to\n\n$$\na(t) = -g +Dv^2(t).\n$$\n\nWe can rewrite this as a differential equation\n\n$$\na(t) = \\frac{dv}{dt}=\\frac{d^2y}{dt^2}= -g +Dv^2(t).\n$$\n\nUsing the integral equations discussed above we can integrate twice\nand obtain first the velocity as function of time and thereafter the\nposition as function of time.\n\nFor this particular case, we can actually obtain an analytical\nsolution for the velocity and for the position. Here we will first\ncompute the solutions analytically, thereafter we will derive Euler's\nmethod for solving these differential equations numerically.\n\n## Analytical solutions\n\nFor simplicity let us just write $v(t)$ as $v$. We have\n\n$$\n\\frac{dv}{dt}= -g +Dv^2(t).\n$$\n\nWe can solve this using the technique of separation of variables. We\nisolate on the left all terms that involve $v$ and on the right all\nterms that involve time. We get then\n\n$$\n\\frac{dv}{g -Dv^2(t) }= -dt,\n$$\n\nWe scale now the equation to the left by introducing a constant\n$v_T=\\sqrt{g/D}$. This constant has dimension length/time. Can you\nshow this?\n\nNext we integrate the left-hand side (lhs) from $v_0=0$ m/s to $v$ and\nthe right-hand side (rhs) from $t_0=0$ to $t$ and obtain\n\n$$\n\\int_{0}^v\\frac{dv'}{g -D(v')^2(t) }= \\frac{v_T}{g}\\mathrm{arctanh}(\\frac{v}{v_T}) =-\\int_0^tdt' = -t.\n$$\n\nWe can reorganize these equations as\n\n$$\nv_T\\mathrm{arctanh}(\\frac{v}{v_T}) =-gt,\n$$\n\nwhich gives us $v$ as function of time\n\n$$\nv(t)=v_T\\tanh{-(\\frac{gt}{v_T})}.\n$$\n\n## Finding the final height\nWith the velocity we can then find the height $y(t)$ by integrating yet another time, that is\n\n$$\ny(t)=y(t_0)+\\int_{t_0}^t dt' v(t')=\\int_{0}^t dt'[v_T\\tanh{-(\\frac{gt'}{v_T})}].\n$$\n\nThis integral is trickier but we can look it up in a table over \nknown integrals and we get\n\n$$\ny(t)=y(t_0)-\\frac{v_T^2}{g}\\log{[\\cosh{(\\frac{gt}{v_T})}]}.\n$$\n\nAlternatively we could have used the symbolic Python package **Sympy** (example will be inserted later). 
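As a minimal sketch of that deferred **Sympy** example (assuming **Sympy** is installed, and using the same notation as above with $v_T=\sqrt{g/D}$), we can let the computer confirm that the analytical expressions do satisfy the differential equation:


```python
import sympy as sp

t, g, D, y0 = sp.symbols('t g D y_0', positive=True)
vT = sp.sqrt(g / D)                                  # scaling constant v_T
v = -vT * sp.tanh(g * t / vT)                        # analytical velocity
y = y0 - vT**2 / g * sp.log(sp.cosh(g * t / vT))     # analytical position

# both residuals should print 0
print(sp.simplify(sp.diff(v, t) - (-g + D * v**2)))  # checks dv/dt = -g + D v^2
print(sp.simplify(sp.diff(y, t) - v))                # checks dy/dt = v
```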
\n\nIn most cases however, we need to revert to numerical solutions.\n\n## Our first attempt at solving differential equations\n\nHere we will try the simplest possible approach to solving the second-order differential \nequation\n\n$$\na(t) =\\frac{d^2y}{dt^2}= -g +Dv^2(t).\n$$\n\nWe rewrite it as two coupled first-order equations (this is a standard approach)\n\n$$\n\\frac{dy}{dt} = v(t),\n$$\n\nwith initial condition $y(t_0)=y_0$ and\n\n$$\na(t) =\\frac{dv}{dt}= -g +Dv^2(t),\n$$\n\nwith initial condition $v(t_0)=v_0$.\n\nMany of the algorithms for solving differential equations start with simple Taylor equations.\nIf we now Taylor expand $y$ and $v$ around a value $t+\\Delta t$ we have\n\n$$\ny(t+\\Delta t) = y(t)+\\Delta t \\frac{dy}{dt}+\\frac{\\Delta t^2}{2!} \\frac{d^2y}{dt^2}+O(\\Delta t^3),\n$$\n\nand\n\n$$\nv(t+\\Delta t) = v(t)+\\Delta t \\frac{dv}{dt}+\\frac{\\Delta t^2}{2!} \\frac{d^2v}{dt^2}+O(\\Delta t^3).\n$$\n\nUsing the fact that $dy/dt = v$ and $dv/dt=a$ and keeping only terms up to $\\Delta t$ we have\n\n$$\ny(t+\\Delta t) = y(t)+\\Delta t v(t)+O(\\Delta t^2),\n$$\n\nand\n\n$$\nv(t+\\Delta t) = v(t)+\\Delta t a(t)+O(\\Delta t^2).\n$$\n\n## Discretizing our equations\n\nUsing our discretized versions of the equations with for example\n$y_{i}=y(t_i)$ and $y_{i\\pm 1}=y(t_i+\\Delta t)$, we can rewrite the\nabove equations as (and truncating at $\\Delta t$)\n\n$$\ny_{i+1} = y_i+\\Delta t v_i,\n$$\n\nand\n\n$$\nv_{i+1} = v_i+\\Delta t a_i.\n$$\n\nThese are the famous Euler equations (forward Euler).\n\nTo solve these equations numerically we start at a time $t_0$ and simply integrate up these equations to a final time $t_f$,\nThe step size $\\Delta t$ is an input parameter in our code.\nYou can define it directly in the code below as\n\n\n```python\nDeltaT = 0.1\n```\n\nWith a given final time **tfinal** we can then find the number of integration points via the **ceil** function included in the **math** package of Python\nas\n\n\n```python\n#define final time, assuming that initial time is zero\nfrom math import ceil\ntfinal = 0.5\nn = ceil(tfinal/DeltaT)\nprint(n)\n```\n\n 5\n\n\nThe **ceil** function returns the smallest integer not less than the input in say\n\n\n```python\nx = 21.15\nprint(ceil(x))\n```\n\n 22\n\n\nwhich in the case here is 22.\n\n\n```python\nx = 21.75\nprint(ceil(x))\n```\n\n 22\n\n\nwhich also yields 22. The **floor** function in the **math** package\nis used to return the closest integer value which is less than or equal to the specified expression or value.\nCompare the previous result to the usage of **floor**\n\n\n```python\nfrom math import floor\nx = 21.75\nprint(floor(x))\n```\n\n 21\n\n\nAlternatively, we can define ourselves the number of integration(mesh) points. In this case we could have\n\n\n```python\nn = 10\ntinitial = 0.0\ntfinal = 0.5\nDeltaT = (tfinal-tinitial)/(n)\nprint(DeltaT)\n```\n\n 0.05\n\n\nSince we will set up one-dimensional arrays that contain the values of\nvarious variables like time, position, velocity, acceleration etc, we\nneed to know the value of $n$, the number of data points (or\nintegration or mesh points). With $n$ we can initialize a given array\nby setting all elelements to zero, as done here\n\n\n```python\n# define array a\na = np.zeros(n)\nprint(a)\n```\n\n [0. 0. 0. 0. 0. 0. 0. 0. 0. 
0.]\n\n\n## Code for implementing Euler's method\nIn the code here we implement this simple Eurler scheme choosing a value for $D=0.0245$ m/s.\n\n\n```python\n# Common imports\nimport numpy as np\nimport pandas as pd\nfrom math import *\nimport matplotlib.pyplot as plt\nimport os\n\n# Where to save the figures and data files\nPROJECT_ROOT_DIR = \"Results\"\nFIGURE_ID = \"Results/FigureFiles\"\nDATA_ID = \"DataFiles/\"\n\nif not os.path.exists(PROJECT_ROOT_DIR):\n os.mkdir(PROJECT_ROOT_DIR)\n\nif not os.path.exists(FIGURE_ID):\n os.makedirs(FIGURE_ID)\n\nif not os.path.exists(DATA_ID):\n os.makedirs(DATA_ID)\n\ndef image_path(fig_id):\n return os.path.join(FIGURE_ID, fig_id)\n\ndef data_path(dat_id):\n return os.path.join(DATA_ID, dat_id)\n\ndef save_fig(fig_id):\n plt.savefig(image_path(fig_id) + \".png\", format='png')\n\n\ng = 9.80655 #m/s^2\nD = 0.00245 #m/s\nDeltaT = 0.1\n#set up arrays \ntfinal = 0.5\nn = ceil(tfinal/DeltaT)\n# define scaling constant vT\nvT = sqrt(g/D)\n# set up arrays for t, a, v, and y and we can compare our results with analytical ones\nt = np.zeros(n)\na = np.zeros(n)\nv = np.zeros(n)\ny = np.zeros(n)\nyanalytic = np.zeros(n)\n# Initial conditions\nv[0] = 0.0 #m/s\ny[0] = 10.0 #m\nyanalytic[0] = y[0]\n# Start integrating using Euler's method\nfor i in range(n-1):\n # expression for acceleration\n a[i] = -g + D*v[i]*v[i]\n # update velocity and position\n y[i+1] = y[i] + DeltaT*v[i]\n v[i+1] = v[i] + DeltaT*a[i]\n # update time to next time step and compute analytical answer\n t[i+1] = t[i] + DeltaT\n yanalytic[i+1] = y[0]-(vT*vT/g)*log(cosh(g*t[i+1]/vT))\n if ( y[i+1] < 0.0):\n break\na[n-1] = -g + D*v[n-1]*v[n-1]\ndata = {'t[s]': t,\n 'y[m]': y-yanalytic,\n 'v[m/s]': v,\n 'a[m/s^2]': a\n }\nNewData = pd.DataFrame(data)\ndisplay(NewData)\n#finally we plot the data\nfig, axs = plt.subplots(3, 1)\naxs[0].plot(t, y, t, yanalytic)\naxs[0].set_xlim(0, tfinal)\naxs[0].set_ylabel('y and exact')\naxs[1].plot(t, v)\naxs[1].set_ylabel('v[m/s]')\naxs[2].plot(t, a)\naxs[2].set_xlabel('time[s]')\naxs[2].set_ylabel('a[m/s^2]')\nfig.tight_layout()\nsave_fig(\"EulerIntegration\")\nplt.show()\n```\n\nTry different values for $\\Delta t$ and study the difference between the exact solution and the numerical solution.\n\n## Simple extension, the Euler-Cromer method\n\nThe Euler-Cromer method is a simple variant of the standard Euler\nmethod. We use the newly updated velocity $v_{i+1}$ as an input to the\nnew position, that is, instead of\n\n$$\ny_{i+1} = y_i+\\Delta t v_i,\n$$\n\nand\n\n$$\nv_{i+1} = v_i+\\Delta t a_i,\n$$\n\nwe use now the newly calculate for $v_{i+1}$ as input to $y_{i+1}$, that is \nwe compute first\n\n$$\nv_{i+1} = v_i+\\Delta t a_i,\n$$\n\nand then\n\n$$\ny_{i+1} = y_i+\\Delta t v_{i+1},\n$$\n\nImplementing the Euler-Cromer method yields a simple change to the previous code. 
We only need to change the following line in the loop over time\nsteps\n\n\n```python\nfor i in range(n-1):\n # more codes in between here\n v[i+1] = v[i] + DeltaT*a[i]\n y[i+1] = y[i] + DeltaT*v[i+1]\n # more code\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "39e8bcefb89d61a57b648f4daf9e5720dcdbe6bf", "size": 174790, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_stars_repo_name": "Shield94/Physics321", "max_stars_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_issues_repo_name": "Shield94/Physics321", "max_issues_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/pub/week2/ipynb/week2.ipynb", "max_forks_repo_name": "Shield94/Physics321", "max_forks_repo_head_hexsha": "9875a3bf840b0fa164b865a3cb13073aff9094ca", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 38.8508557457, "max_line_length": 27680, "alphanum_fraction": 0.6425710853, "converted": true, "num_tokens": 21232, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO\n\n", "lm_q1_score": 0.26894142136999516, "lm_q2_score": 0.32766830082071396, "lm_q1q2_score": 0.08812357856061397}} {"text": "```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n# Shallow Water Equations\n\nAs a simple example of solving the Riemann problem for a nonlinear system we look at the *shallow water* equations. These are a simplification of the Navier-Stokes equations, reduced here to one spatial dimension $x$, which determine the height $h(x, t)$ of the water with respect to some reference location, and its velocity $u(x, t)$. In the simplest case (where the bed of the channel is flat, and the gravitational constant is renormalised to $1$) these can be written in the conservation law form\n$$\n \\partial_t \\begin{pmatrix} h \\\\ h u \\end{pmatrix} + \\partial_x \\begin{pmatrix} hu \\\\ h u^2 + \\tfrac{1}{2} h^2 \\end{pmatrix} = {\\bf 0}.\n$$\nThe *conserved variables* ${\\bf q} = (q_1, q_2)^T = (h, h u)^T$ are effectively the total mass and momentum of the fluid.\n\n## Quasilinear form\n\nAs seen in the [theory lesson](Lesson_Theory.ipynb), to construct the solution we need the eigenvalues and eigenvectors of the Jacobian matrix. 
We can construct them directly, by first noting that, in terms of the conserved variables, \n$$\n {\\bf f} = \\begin{pmatrix} f_1 \\\\ f_2 \\end{pmatrix} = \\begin{pmatrix} q_2 \\\\ \\frac{q_2^2}{q_1} + \\frac{q_1^2}{2} \\end{pmatrix}.\n$$\nTherefore the Jacobian is\n$$ \\frac{\\partial {\\bf f}}{\\partial {\\bf q}} = \\begin{pmatrix} 0 & 1 \\\\ -u^2 + h & 2 u \\end{pmatrix}.\n$$\nThe eigenvalues and eigenvectors follow immediately as\n$$\n\\begin{align}\n \\lambda_{1} & = u - \\sqrt{h}, & \\lambda_{2} & = u + \\sqrt{h}, \\\\\n {\\bf r}_{1} & = \\begin{pmatrix} 1 \\\\ u - \\sqrt{h} \\end{pmatrix}, & {\\bf r}_{2} & = \\begin{pmatrix} 1 \\\\ u + \\sqrt{h} \\end{pmatrix} .\n\\end{align}\n$$\nHere we have followed the standard convention $\\lambda_1 \\le \\lambda_2 \\le \\dots \\le \\lambda_N$.\n\nAn alternative approach that may be considerably easier to apply for more complex problems is to write down a different quasilinear form of the equation, which in this case is in terms of the *primitive variables* ${\\bf w} = (h, u)^T$,\n$$\n \\partial_t \\begin{pmatrix} h \\\\ u \\end{pmatrix} + \\begin{pmatrix} u & h \\\\ 1 & u \\end{pmatrix} \\partial_x \\begin{pmatrix} h \\\\ u \\end{pmatrix} = {\\bf 0}.\n$$\nThe general form here would be written \n$$\n \\partial_t {\\bf w} + B({\\bf w}) \\partial_x {\\bf w} = {\\bf 0}.\n$$\nIt is straightforward to check that \n$$\n B = \\left( \\frac{\\partial {\\bf q}}{\\partial {\\bf w}} \\right)^{-1} \\frac{\\partial {\\bf f}}{\\partial {\\bf w}} = \\left( \\frac{\\partial {\\bf q}}{\\partial {\\bf w}} \\right)^{-1} \\frac{\\partial {\\bf f}}{\\partial {\\bf q}} \\left( \\frac{\\partial {\\bf q}}{\\partial {\\bf w}} \\right).\n$$\nThus $B$ is *similar* to the Jacobian, so must have the same eigenvalues, which is straightforward to check. We also have that\n$$\n\\begin{align}\n B \\left\\{ \\left( \\frac{\\partial {\\bf q}}{\\partial {\\bf w}} \\right)^{-1} {\\bf r} \\right\\} = \\lambda \\left\\{ \\left( \\frac{\\partial {\\bf q}}{\\partial {\\bf w}} \\right)^{-1} {\\bf r} \\right\\},\n\\end{align}\n$$\nshowing that the eigenvectors of the Jacobian can be straightforwardly found from the eigenvectors of $B$, which for the shallow water case are\n$$\n\\begin{align}\n {\\bf \\hat{r}}_1 &= \\begin{pmatrix} -\\sqrt{h} \\\\ 1 \\end{pmatrix} & {\\bf \\hat{r}}_2 &= \\begin{pmatrix} \\sqrt{h} \\\\ 1 \\end{pmatrix}.\n\\end{align}\n$$\n\n## Rarefaction waves\n\nThe solution across a continuous rarefaction wave is given by the solution of the ordinary differential equation\n$$\n \\partial_{\\xi} {\\bf q} = \\frac{{\\bf r}}{{\\bf r} \\cdot \\partial_{{\\bf q}} \\lambda}\n$$\nwhere $\\lambda, {\\bf r}$ are the eigenvalues and eigenvectors of the Jacobian matrix. Note that we can change variables to get the (physically equivalent) relation differential equation\n$$\n \\partial_{\\xi} {\\bf w} = \\frac{{\\bf r}}{{\\bf r} \\cdot \\partial_{{\\bf w}} \\lambda}\n$$\nwhere now the eigenvectors are those of the appropriate matrix for the quasilinear form for ${\\bf w}$. Where ${\\bf w}$ are the primitive variables as above, the matrix is $B$ and the eigenvectors given by ${\\bf \\hat{r}}$ as above.\n\nFor the shallow water equations we will solve this equation for the primitive variables for the first wave only - symmetry gives the other wave straightforwardly. 
Starting from\n$$\n \\lambda_1 = u - \\sqrt{h}, \\qquad {\\bf \\hat{r}}_1 = \\begin{pmatrix} -\\sqrt{h} \\\\ 1 \\end{pmatrix}\n$$\nwe have\n$$\n \\partial_{{\\bf w}} \\lambda_1 = \\begin{pmatrix} -\\frac{1}{2 \\sqrt{h}} \\\\ 1 \\end{pmatrix}\n$$\nand hence\n$$\n {\\bf \\hat{r}}_1 \\cdot \\partial_{{\\bf w}} \\lambda_1 = \\frac{3}{2}\n$$\nfrom which we have\n$$\n \\partial_{\\xi} \\begin{pmatrix} h \\\\ u \\end{pmatrix} = \\frac{2}{3} \\begin{pmatrix} -\\sqrt{h} \\\\ 1 \\end{pmatrix}.\n$$\n\nThis is straightforwardly integrated to get\n$$\n \\begin{pmatrix} h \\\\ u \\end{pmatrix} = \\begin{pmatrix} \\left( c_1 - \\frac{\\xi}{3} \\right)^2 \\\\ \\frac{2}{3} \\xi + c_2 \\end{pmatrix}. \n$$\n\nTo fix the integration constants $c_{1,2}$ we need to say which state the solution is starting from. As we are looking at the left wave, we expect it to start from the left state ${\\bf w}_l = (h_l, u_l)^T$. The left state will connect to the rarefaction wave when the characteristic speeds match, i.e. when $\\xi = \\xi_l = \\lambda_1 = u_l - \\sqrt{h_l}$. Therefore we have\n$$\n \\begin{pmatrix} h_l \\\\ u_l \\end{pmatrix} = \\begin{pmatrix} \\left( c_1 - \\frac{\\xi_l}{3} \\right)^2 \\\\ \\frac{2}{3} \\xi_l + c_2 \\end{pmatrix},\n$$\nfrom which we determine\n$$\n c_1 = \\frac{1}{3} \\xi_l + \\sqrt{h_l}, \\qquad c_2 = u_l - \\frac{2}{3} \\xi_l.\n$$\n\nThis gives the final solution\n$$\n \\begin{pmatrix} h \\\\ u \\end{pmatrix} = \\begin{pmatrix} \\left( \\frac{\\xi_l - \\xi}{3} + \\sqrt{h_l} \\right)^2 \\\\ \\frac{2}{3} (\\xi - \\xi_l) + u_l \\end{pmatrix}. \n$$\n\n### Rarefaction examples\n\nLet us look at all points that can be connected to a certain state by a rarefaction. We do this in the *phase plane*, which is the $(h, u)$ plane. The \"known state\" will be given by a marker, and all states along the rarefaction curve given by the line, sometimes known as an integral curve.\n\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n\n```python\nhl = np.linspace(0.1, 10.1)\nul = np.linspace(-1.0, 1.0)\nHL, UL = np.meshgrid(hl, ul)\nXIL = UL - np.sqrt(HL)\nxi_min = np.min(XIL)\nxi_max = np.max(XIL)\nh_min = np.min(hl)\nh_max = np.max(hl)\nu_min = np.min(ul)\nu_max = np.max(ul)\nxi = np.linspace(xi_min, xi_max)\n```\n\n\n```python\ndef plot_sw_rarefaction(hl, ul):\n \"Plot the rarefaction curve through the state (hl, ul)\"\n \n xil = ul - np.sqrt(hl)\n h = ((xil - xi) / 3.0 + np.sqrt(hl))**2\n u = 2.0 * (xi - xil) / 3.0 + ul\n \n fig = plt.figure(figsize=(12,8))\n ax = fig.add_subplot(111)\n ax.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 3)\n ax.plot(h, u, 'k--', linewidth = 2)\n ax.set_xlabel(r\"$h$\")\n ax.set_ylabel(r\"$u$\")\n dh = h_max - h_min\n du = u_max - u_min\n ax.set_xbound(h_min - 0.1 * dh, h_max + 0.1 * dh)\n ax.set_ybound(u_min - 0.1 * du, u_max + 0.1 * du)\n fig.tight_layout()\n```\n\n\n```python\nfrom ipywidgets import interactive, FloatSlider\n\ninteractive(plot_sw_rarefaction, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.0))\n```\n\n\n

\n\n\n\nThere is a problem with this: we haven't checked if the states on the curve can be *physically* connected to this point. That is, we haven't checked how the characteristic speed changes along the curve.\n\nHere it is obvious: we know that the characteristics must spread across the rarefaction, so $\\lambda$ must increase, and as $\\xi = \\lambda$ we must have the characteristic coordinate increasing.\n\n\n```python\ndef plot_sw_rarefaction_physical(hl, ul):\n \"Plot the rarefaction curve through the state (hl, ul)\"\n \n xil = ul - np.sqrt(hl)\n xi_physical = np.linspace(xil, xi_max)\n xi_unphysical = np.linspace(xi_min, xil)\n h_physical = ((xil - xi_physical) / 3.0 + np.sqrt(hl))**2\n u_physical = 2.0 * (xi_physical - xil) / 3.0 + ul\n h_unphysical = ((xil - xi_unphysical) / 3.0 + np.sqrt(hl))**2\n u_unphysical = 2.0 * (xi_unphysical - xil) / 3.0 + ul\n \n \n fig = plt.figure(figsize=(12,8))\n ax = fig.add_subplot(111)\n ax.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 3)\n ax.plot(h_physical, u_physical, 'k-', linewidth = 2, label=\"Physical\")\n ax.plot(h_unphysical, u_unphysical, 'k--', linewidth = 2, label=\"Unphysical\")\n ax.set_xlabel(r\"$h$\")\n ax.set_ylabel(r\"$u$\")\n dh = h_max - h_min\n du = u_max - u_min\n ax.set_xbound(h_min - 0.1 * dh, h_max + 0.1 * dh)\n ax.set_ybound(u_min - 0.1 * du, u_max + 0.1 * du)\n ax.legend()\n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_rarefaction_physical, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.0))\n```\n\n\n

\n\n\n\nWe see that along the physical part of the rarefaction curve the height $h$ decreases.\n\nInstead of writing the solution in terms of the similarity coordinate $\\xi$ we can instead write the solution in terms of any other single parameter. It is useful to write it in terms of the height, which can be done simply by re-arranging the equations giving $u$ and $h$ in terms of $\\xi$. So, a state with height $h_m$ to the right of the state $(h_l, u_l)$ can be connected across a rarefaction if\n$$\n u_m = u_l + 2 \\left( \\sqrt{h_l} - \\sqrt{h_m} \\right).\n$$\n\nIn this form we will look at the characteristic curves and the behaviour in state space to cross-check.\n\n\n```python\ndef plot_sw_rarefaction_physical_characteristics(hl, ul, hm):\n \"Plot the rarefaction curve through the state (hl, ul) finishing at (hm, um)\"\n \n um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))\n \n h_maximum = np.max([h_max, hl, hm])\n h_minimum = np.min([h_min, hl, hm])\n u_maximum = np.max([u_max, ul, um])\n u_minimum = np.min([u_min, ul, um])\n dh = h_maximum - h_minimum\n du = u_maximum - u_minimum\n xi_min = u_minimum - np.sqrt(h_maximum)\n xi_max = u_maximum - np.sqrt(h_minimum)\n \n xil = ul - np.sqrt(hl)\n xim = um - np.sqrt(hm)\n xi_physical = np.linspace(xil, xi_max)\n xi_unphysical = np.linspace(xi_min, xil)\n h_physical = ((xil - xi_physical) / 3.0 + np.sqrt(hl))**2\n u_physical = 2.0 * (xi_physical - xil) / 3.0 + ul\n h_unphysical = ((xil - xi_unphysical) / 3.0 + np.sqrt(hl))**2\n u_unphysical = 2.0 * (xi_unphysical - xil) / 3.0 + ul\n \n \n fig = plt.figure(figsize=(12,8))\n ax1 = fig.add_subplot(121)\n ax1.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 3, label=r\"$(h_l, u_l)$\")\n ax1.plot(hm, um, 'b+', markersize = 16, markeredgewidth = 3, label=r\"$(h_m, u_m)$\")\n ax1.plot(h_physical, u_physical, 'k-', linewidth = 2, label=\"Physical\")\n ax1.plot(h_unphysical, u_unphysical, 'k--', linewidth = 2, label=\"Unphysical\")\n ax1.set_xlabel(r\"$h$\")\n ax1.set_ylabel(r\"$u$\")\n ax1.set_xbound(h_minimum - 0.1 * dh, h_maximum + 0.1 * dh)\n ax1.set_ybound(u_minimum - 0.1 * du, u_maximum + 0.1 * du)\n ax1.legend()\n \n ax2 = fig.add_subplot(122)\n left_edge = np.min([-1.0, -1.0 - xil])\n right_edge = np.max([1.0, 1.0 - xim])\n x_start_points_l = np.linspace(left_edge, 0.0, 20)\n x_start_points_r = np.linspace(0.0, right_edge, 20)\n x_end_points_l = x_start_points_l + xil\n x_end_points_r = x_start_points_r + xim\n \n for xs, xe in zip(x_start_points_l, x_end_points_l):\n ax2.plot([xs, xe], [0.0, 1.0], 'b-')\n for xs, xe in zip(x_start_points_r, x_end_points_r):\n ax2.plot([xs, xe], [0.0, 1.0], 'g-')\n \n # Rarefaction wave\n if (xim > xil):\n xi = np.linspace(xil, xim, 11)\n x_end_rarefaction = xi\n for xe in x_end_rarefaction:\n ax2.plot([0.0, xe], [0.0, 1.0], 'r--')\n else:\n x_fill = [x_end_points_l[-1], x_start_points_l[-1], x_end_points_r[0]]\n t_fill = [1.0, 0.0, 1.0]\n ax2.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n ax2.set_xbound(-1.0, 1.0)\n ax2.set_ybound(0.0, 1.0)\n ax2.set_xlabel(r\"$x$\")\n ax2.set_ylabel(r\"$t$\")\n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_rarefaction_physical_characteristics, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.0), \n hm = FloatSlider(min = 0.1, max = 10.0, value = 0.5))\n```\n\n\n

\n\n\n\nWe clearly see that only if $h_m < h_l$ do the characteristics spread as they should for a rarefaction. This is, in fact, already given by results above: we showed that $\\partial_{\\xi} h \\propto -\\sqrt{h}$. As the height $h$ is positive, this means that as $\\xi$ increase across the rarefaction, the height must decrease.\n\n## All rarefaction solution\n\nThe above exercise assumed we knew the left state and found all right states connecting it by a rarefaction. Now we assume we know both left *and* right states, and assume they connect to a central state, *both* along rarefactions.\n\nFirst, we need to find which states will connect to the right state across a rarefaction.\n\n### Exercise\n\nRepeat the above calculations for states connecting to a known right state. That is, show that, given the right state $(h_r, u_r)$, the left state that connects to it across a rarefaction satisfies\n$$\n \\begin{pmatrix} h \\\\ u \\end{pmatrix} = \\begin{pmatrix} \\left( -\\frac{\\xi_r - \\xi}{3} + \\sqrt{h_r} \\right)^2 \\\\ \\frac{2}{3} (\\xi - \\xi_r) + u_r \\end{pmatrix}. \n$$\nor equivalently, given $h_m$, that\n$$\n u_m = u_r - 2 \\left( \\sqrt{h_r} - \\sqrt{h_m} \\right).\n$$\nAlso check that $h$ decreases across the rarefaction, so for a physical solution $h_m < h_r$.\n\nThen we can plot the curve of all states that can be connected to $(h_l, u_l)$ across a left rarefaction, and the curve of all states that can be connected to $(h_r, u_r)$ across a right rarefaction. *If* they intersect along the *physical* part of the curve, then we have the solution to the Riemann problem. Clearly this only occurs if $h_m < h_l$ *and* $h_m < h_r$.\n\nIn this case (and note that this is a special case!) we can solve it analytically. We note that, using our *assumption* that both curves are rarefactions, we have that\n$$\n\\begin{align}\n u_m & = u_l + 2 \\left( \\sqrt{h_l} - \\sqrt{h_m} \\right) \\\\\n & = u_r - 2 \\left( \\sqrt{h_r} - \\sqrt{h_m} \\right)\n\\end{align}\n$$\nTherefore we have\n$$\n h_m = \\frac{1}{16} \\left( u_l - u_r + 2 \\left( \\sqrt{h_l} + \\sqrt{h_r} \\right) \\right)^2.\n$$\n\n\n```python\ndef plot_sw_all_rarefaction(hl, ul, hr, ur):\n \"Plot the all rarefaction solution curve for states (hl, ul) and (hr, ur)\"\n \n hm = (ul - ur + 2.0 * (np.sqrt(hl) + np.sqrt(hr)))**2 / 16.0\n um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))\n \n h_maximum = np.max([h_max, hl, hr, hm])\n h_minimum = np.min([h_min, hl, hr, hm])\n u_maximum = np.max([u_max, ul, ur, um])\n u_minimum = np.min([u_min, ul, ur, um])\n dh = h_maximum - h_minimum\n du = u_maximum - u_minimum\n xil_min = u_minimum - np.sqrt(h_maximum)\n xil_max = u_maximum - np.sqrt(h_minimum)\n xir_min = u_minimum + np.sqrt(h_minimum)\n xir_max = u_maximum + np.sqrt(h_maximum)\n \n xil = ul - np.sqrt(hl)\n xilm = um - np.sqrt(hm)\n xil_physical = np.linspace(xil, xil_max)\n xil_unphysical = np.linspace(xil_min, xil)\n hl_physical = ((xil - xil_physical) / 3.0 + np.sqrt(hl))**2\n ul_physical = 2.0 * (xil_physical - xil) / 3.0 + ul\n hl_unphysical = ((xil - xil_unphysical) / 3.0 + np.sqrt(hl))**2\n ul_unphysical = 2.0 * (xil_unphysical - xil) / 3.0 + ul\n \n xir = ur + np.sqrt(hr)\n xirm = um + np.sqrt(hm)\n xir_unphysical = np.linspace(xir, xir_max)\n xir_physical = np.linspace(xir_min, xir)\n hr_physical = (-(xir - xir_physical) / 3.0 + np.sqrt(hr))**2\n ur_physical = 2.0 * (xir_physical - xir) / 3.0 + ur\n hr_unphysical = (-(xir - xir_unphysical) / 3.0 + np.sqrt(hr))**2\n ur_unphysical = 2.0 * (xir_unphysical - xir) / 3.0 + ur\n \n fig 
= plt.figure(figsize=(12,8))\n ax1 = fig.add_subplot(111)\n if (hm < np.min([hl, hr])):\n ax1.plot(hm, um, 'b+', markersize = 16, markeredgewidth = 3, \n label=r\"$(h_m, u_m)$, physical solution\")\n else:\n ax1.plot(hm, um, 'b+', markersize = 16, markeredgewidth = 3, \n label=r\"$(h_m, u_m)$, not physical solution\")\n ax1.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 3, label=r\"$(h_l, u_l)$\")\n ax1.plot(hr, ur, 'go', markersize = 16, markeredgewidth = 3, label=r\"$(h_r, u_r)$\")\n ax1.plot(hl_physical, ul_physical, 'k-', linewidth = 2, label=\"Physical (left)\")\n ax1.plot(hl_unphysical, ul_unphysical, 'k--', linewidth = 2, label=\"Unphysical (left)\")\n ax1.plot(hr_physical, ur_physical, 'c-', linewidth = 2, label=\"Physical (right)\")\n ax1.plot(hr_unphysical, ur_unphysical, 'c--', linewidth = 2, label=\"Unphysical (right)\")\n ax1.set_xlabel(r\"$h$\")\n ax1.set_ylabel(r\"$u$\")\n ax1.set_xbound(h_minimum - 0.1 * dh, h_maximum + 0.1 * dh)\n ax1.set_ybound(u_minimum - 0.1 * du, u_maximum + 0.1 * du)\n ax1.legend()\n \n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_all_rarefaction, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = -0.5), \n hr = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ur = FloatSlider(min = -1.0, max = 1.0, value = 0.5))\n```\n\n\n
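Since the two-rarefaction middle state is available in closed form, we can also evaluate it directly for a given pair of states. The small sketch below (the helper name `all_rarefaction_middle_state` is ours, chosen only for illustration) uses the default slider values $h_l = h_r = 1$, $u_l = -0.5$, $u_r = 0.5$:


```python
def all_rarefaction_middle_state(hl, ul, hr, ur):
    "Middle state (hm, um), assuming both waves are rarefactions"
    hm = (ul - ur + 2.0 * (np.sqrt(hl) + np.sqrt(hr)))**2 / 16.0
    um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))
    return hm, um

hm, um = all_rarefaction_middle_state(1.0, -0.5, 1.0, 0.5)
# hm = 0.5625 < min(hl, hr) and um = 0, so both waves really are rarefactions here
print(hm, um)
```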

\n\n\n\nGiven the central state and the relation along rarefaction curves, we can then construct the characteristics and the solution in terms of the similarity coordinate (which, given a time $t$, gives the solution as a function of $x$).\n\n\n```python\ndef plot_sw_all_rarefaction_solution(hl, ul, hr, ur):\n \"Plot the all rarefaction solution curve for states (hl, ul) and (hr, ur)\"\n \n hm = (ul - ur + 2.0 * (np.sqrt(hl) + np.sqrt(hr)))**2 / 16.0\n um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))\n \n xi1l = ul - np.sqrt(hl)\n xi1m = um - np.sqrt(hm)\n xi1r = ur - np.sqrt(hr)\n hl_raref = np.linspace(hl, hm, 20)\n ul_raref = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hl_raref))\n xil_raref = ul_raref - np.sqrt(hl_raref)\n \n xi2r = ur + np.sqrt(hr)\n xi2m = um + np.sqrt(hm)\n xi2l = ul + np.sqrt(hl)\n hr_raref = np.linspace(hm, hr)\n ur_raref = ur - 2.0 * (np.sqrt(hr) - np.sqrt(hr_raref))\n xir_raref = ur_raref + np.sqrt(hr_raref)\n \n xi_min = np.min([-1.0, xi1l, xi1m, xi2r, xi2m])\n xi_max = np.max([1.0, xi1l, xi1m, xi2r, xi2m])\n d_xi = xi_max - xi_min\n h_max = np.max([hl, hr, hm])\n h_min = np.min([hl, hr, hm])\n d_h = h_max - h_min\n u_max = np.max([ul, ur, um])\n u_min = np.min([ul, ur, um])\n d_u = u_max - u_min\n \n xi = np.array([xi_min - 0.1 * d_xi, xi1l])\n h = np.array([hl, hl])\n u = np.array([ul, ul])\n xi = np.append(xi, xil_raref)\n h = np.append(h, hl_raref)\n u = np.append(u, ul_raref)\n xi = np.append(xi, [xi1m, xi2m])\n h = np.append(h, [hm, hm])\n u = np.append(u, [um, um])\n xi = np.append(xi, xir_raref)\n h = np.append(h, hr_raref)\n u = np.append(u, ur_raref)\n xi = np.append(xi, [xi2r, xi_max + 0.1 * d_xi])\n h = np.append(h, [hr, hr])\n u = np.append(u, [ur, ur])\n \n fig = plt.figure(figsize=(12,8))\n ax1 = fig.add_subplot(221)\n if (hm < np.min([hl, hr])):\n ax1.plot(xi, h, 'b-', label = \"Physical solution\")\n else:\n ax1.plot(xi, h, 'r--', label = \"Unphysical solution\")\n ax1.set_ybound(h_min - 0.1 * d_h, h_max + 0.1 * d_h)\n ax1.set_xlabel(r\"$\\xi$\")\n ax1.set_ylabel(r\"$h$\")\n ax1.legend()\n ax2 = fig.add_subplot(222)\n if (hm < np.min([hl, hr])):\n ax2.plot(xi, u, 'b-', label = \"Physical solution\")\n else:\n ax2.plot(xi, u, 'r--', label = \"Unphysical solution\")\n ax2.set_ybound(u_min - 0.1 * d_u, u_max + 0.1 * d_u)\n ax2.set_xlabel(r\"$\\xi$\")\n ax2.set_ylabel(r\"$u$\")\n ax2.legend()\n \n ax3 = fig.add_subplot(223)\n left_end = np.min([-1.0, 1.1*xi1l])\n right_end = np.max([1.0, 1.1*xi2r])\n left_edge = left_end - xi1l\n right_edge = right_end - xi1r\n x1_start_points_l = np.linspace(np.min([left_edge, left_end]), 0.0, 20)\n x1_start_points_r = np.linspace(0.0, np.max([right_edge, right_end]), 20)\n x1_end_points_l = x1_start_points_l + xi1l\n t1_end_points_r = np.ones_like(x1_start_points_r)\n \n # Look for intersections\n t1_end_points_r = np.minimum(t1_end_points_r, x1_start_points_r / (xi2r - xi1r))\n x1_end_points_r = x1_start_points_r + xi1r * t1_end_points_r\n # Note: here we are cheating, and using the characteristic speed of the middle state, \n # ignoring howo it varies across the rarefaction\n x1_final_points_r = x1_end_points_r + (1.0 - t1_end_points_r) * xi1m\n \n for xs, xe in zip(x1_start_points_l, x1_end_points_l):\n ax3.plot([xs, xe], [0.0, 1.0], 'b-')\n for xs, xe, te in zip(x1_start_points_r, x1_end_points_r, t1_end_points_r):\n ax3.plot([xs, xe], [0.0, te], 'g-')\n for xs, xe, ts in zip(x1_end_points_r, x1_final_points_r, t1_end_points_r):\n ax3.plot([xs, xe], [ts, 1.0], 'g-')\n \n # Highlight the edges of both rarefactions\n 
ax3.plot([0.0, xi1l], [0.0, 1.0], 'r-', linewidth=2)\n ax3.plot([0.0, xi1m], [0.0, 1.0], 'r-', linewidth=2)\n ax3.plot([0.0, xi2m], [0.0, 1.0], 'r-', linewidth=2)\n ax3.plot([0.0, xi2r], [0.0, 1.0], 'r-', linewidth=2)\n \n # Rarefaction wave\n if (xi1l < xi1m):\n xi = np.linspace(xi1l, xi1m, 11)\n x_end_rarefaction = xi\n for xe in x_end_rarefaction:\n ax3.plot([0.0, xe], [0.0, 1.0], 'r--')\n else:\n x_fill = [xi1l, 0.0, xi1m]\n t_fill = [1.0, 0.0, 1.0]\n ax3.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n ax3.set_xlabel(r\"$x$\")\n ax3.set_ylabel(r\"$t$\")\n ax3.set_title(\"1-characteristics\")\n ax3.set_xbound(left_end, right_end)\n \n ax4 = fig.add_subplot(224)\n left_end = np.min([-1.0, 1.1*xi1l])\n right_end = np.max([1.0, 1.1*xi2r])\n left_edge = left_end - xi2l\n right_edge = right_end - xi2r\n x2_start_points_l = np.linspace(np.min([left_edge, left_end]), 0.0, 20)\n x2_start_points_r = np.linspace(0.0, np.max([right_edge, right_end]), 20)\n x2_end_points_r = x2_start_points_r + xi2r\n t2_end_points_l = np.ones_like(x2_start_points_l)\n \n # Look for intersections\n t2_end_points_l = np.minimum(t2_end_points_l, x2_start_points_l / (xi1l - xi2r))\n x2_end_points_l = x2_start_points_l + xi2r * t2_end_points_l\n # Note: here we are cheating, and using the characteristic speed of the middle state, \n # ignoring howo it varies across the rarefaction\n x2_final_points_l = x2_end_points_l + (1.0 - t2_end_points_l) * xi2m\n \n for xs, xe in zip(x2_start_points_r, x2_end_points_r):\n ax4.plot([xs, xe], [0.0, 1.0], 'g-')\n for xs, xe, te in zip(x2_start_points_l, x2_end_points_l, t2_end_points_l):\n ax4.plot([xs, xe], [0.0, te], 'b-')\n for xs, xe, ts in zip(x2_end_points_l, x2_final_points_l, t2_end_points_l):\n ax4.plot([xs, xe], [ts, 1.0], 'b-')\n \n # Highlight the edges of both rarefactions\n ax4.plot([0.0, xi1l], [0.0, 1.0], 'r-', linewidth=2)\n ax4.plot([0.0, xi1m], [0.0, 1.0], 'r-', linewidth=2)\n ax4.plot([0.0, xi2m], [0.0, 1.0], 'r-', linewidth=2)\n ax4.plot([0.0, xi2r], [0.0, 1.0], 'r-', linewidth=2)\n \n # Rarefaction wave\n if (xi2r > xi2m):\n xi = np.linspace(xi2m, xi2r, 11)\n x_end_rarefaction = xi\n for xe in x_end_rarefaction:\n ax4.plot([0.0, xe], [0.0, 1.0], 'r--')\n else:\n x_fill = [xi2m, 0.0, xi2r]\n t_fill = [1.0, 0.0, 1.0]\n ax4.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n ax4.set_xlabel(r\"$x$\")\n ax4.set_ylabel(r\"$t$\")\n ax4.set_title(\"2-characteristics\")\n ax4.set_xbound(left_end, right_end)\n \n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_all_rarefaction_solution, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = -0.5), \n hr = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ur = FloatSlider(min = -1.0, max = 1.0, value = 0.5))\n```\n\n\n
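Before moving on, note that the two-rarefaction construction is only self-consistent when the resulting $h_m$ lies below both $h_l$ and $h_r$. Reversing the velocities of the example above, so that the two streams move towards each other, already violates this, which is why we need the shock waves discussed next. A quick check, using only the formula for $h_m$ derived above:


```python
# two streams moving towards each other: h_l = h_r = 1, u_l = 0.5, u_r = -0.5
hm_check = (0.5 - (-0.5) + 2.0 * (np.sqrt(1.0) + np.sqrt(1.0)))**2 / 16.0
# hm_check = 1.5625 > h_l = h_r = 1: the all-rarefaction assumption fails for colliding streams
print(hm_check)
```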

\n\n\n\n## Shocks\n\nWe note that the [general theory](Lesson_Theory.ipynb) tells us that across a shock the Rankine-Hugoniot conditions\n$$\n V_s \\left[ {\\bf q} \\right] = \\left[ {\\bf f}({\\bf q}) \\right]\n$$\nmust be satisfied.\n\nFor the shallow water equations we will start, as with the rarefaction case, by assuming we know the left state ${\\bf q}_l = (h_l, u_l)$, and work out which states ${\\bf q}_m$ can be connected to it across a shock. \n\nNote here that the procedure is *identical* for the right state as the direction does not matter. However, there will be multiple solutions, and checking which is physically correct does require checking whether the left or the right state is known\n\nWriting out the conditions in full we see that\n$$\n\\begin{align}\n V_s \\left( h_m - h_l \\right) & = h_m u_m - h_l u_l \\\\\n V_s \\left( h_m u_m - h_l u_l \\right) & = h_m u_m^2 + \\tfrac{1}{2} h_m^2 - h_l u_l^2 - \\tfrac{1}{2} h_l^2\n\\end{align}\n$$\n\nEliminating the shock speed $V_s$ gives, using the second equation,\n$$\n u_m^2 - (2 u_l) u_m + \\left[ u_l^2 - \\tfrac{1}{2} \\left( h_l - h_m \\right) \\left( \\frac{h_l}{h_m} - \\frac{h_m}{h_l} \\right) \\right] = 0.\n$$\nThis has the solutions (assuming that $h_m$ is known!)\n$$\n u_m = u_l \\pm \\sqrt{\\tfrac{1}{2} \\left( h_l - h_m \\right) \\left( \\frac{h_l}{h_m} - \\frac{h_m}{h_l} \\right)}.\n$$\n\nWe can again use the Rankine-Hugoniot relations to find the shock speed.\n$$\n V_s = u_l \\pm \\frac{h_m}{h_m - h_l} \\sqrt{\\tfrac{1}{2} \\left( h_l - h_m \\right) \\left( \\frac{h_l}{h_m} - \\frac{h_m}{h_l} \\right)}.\n$$\n\nWe should at this point find which sign is appropriate. Comparing the shock speeds against the characteristic speed will show that\n\n* we need $h_m > h_l$ for the wave to be a shock, and\n* we take the negative sign if connected to a left state, and the positive if connected to a right state.\n\nHowever, we can see this by plotting the *Hugoniot locus*: the curve of all states that can be connected to $(h_l, u_l)$ across a shock.\n\n\n```python\ndef plot_sw_shock_physical(hl, ul):\n \"Plot the shock curve through the state (hl, ul)\"\n \n h = np.linspace(h_min, h_max, 500)\n u_negative = ul - np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))\n u_positive = ul + np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))\n \n vs_negative = ul - h / (h - hl) * np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))\n vs_positive = ul + h / (h - hl) * np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))\n \n xi1_negative = u_negative - np.sqrt(h) \n xi1_positive = u_positive - np.sqrt(h)\n xi2_negative = u_negative + np.sqrt(h) \n xi2_positive = u_positive + np.sqrt(h)\n \n xi1_l = ul - np.sqrt(hl)\n xi2_l = ul + np.sqrt(hl)\n \n h1_physical = h[np.logical_and(xi1_negative <= vs_negative, xi1_l >= vs_negative)]\n u1_physical = u_negative[np.logical_and(xi1_negative <= vs_negative, xi1_l >= vs_negative)]\n h2_physical = h[np.logical_and(xi2_positive >= vs_positive, xi2_l <= vs_positive)]\n u2_physical = u_positive[np.logical_and(xi2_positive >= vs_positive, xi2_l <= vs_positive)]\n h1_unphysical = h[np.logical_or(xi1_negative >= vs_negative, xi1_l <= vs_negative)]\n u1_unphysical = u_negative[np.logical_or(xi1_negative >= vs_negative, xi1_l <= vs_negative)]\n h2_unphysical = h[np.logical_or(xi2_positive <= vs_positive, xi2_l >= vs_positive)]\n u2_unphysical = u_positive[np.logical_or(xi2_positive <= vs_positive, xi2_l >= vs_positive)]\n \n fig = plt.figure(figsize=(12,8))\n ax = fig.add_subplot(111)\n ax.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 
3)\n ax.plot(h1_physical, u1_physical, 'b-', linewidth = 2, \n label=\"Physical, 1-shock\")\n ax.plot(h1_unphysical, u1_unphysical, 'b--', linewidth = 2, \n label=\"Unphysical, 1-shock\")\n ax.plot(h2_physical, u2_physical, 'g-', linewidth = 2, \n label=\"Physical, 2-shock\")\n ax.plot(h2_unphysical, u2_unphysical, 'g--', linewidth = 2, \n label=\"Unphysical, 2-shock\")\n ax.plot(h[::5], u_negative[::5], 'co', markersize = 12, markeredgewidth = 2, alpha = 0.3,\n label=\"Negative branch\")\n ax.plot(h[::5], u_positive[::5], 'ro', markersize = 12, markeredgewidth = 2, alpha = 0.3,\n label=\"Positive branch\")\n ax.set_xlabel(r\"$h$\")\n ax.set_ylabel(r\"$u$\")\n dh = h_max - h_min\n du = u_max - u_min\n ax.set_xbound(h_min, h_max)\n ax.set_ybound(u_min, u_max)\n ax.legend()\n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_shock_physical, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.0))\n```\n\n\n
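As a quick sanity check (not part of the original notebook), the snippet below verifies numerically, for one hand-picked state on the negative branch, that the jump in the flux really equals $V_s$ times the jump in the conserved variables. It uses the same nondimensionalisation ($g = 1$) as the rest of this lesson; the values of `hl`, `ul` and `hm` are arbitrary choices.

```python
import numpy as np

# Not part of the original notebook: Rankine-Hugoniot check for one state on
# the negative branch of the Hugoniot locus, with the nondimensional g = 1.
hl, ul = 1.0, 0.0      # known left state (arbitrary choice)
hm = 2.0               # chosen h_m > h_l, so the jump is an admissible 1-shock

um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))
vs = ul - hm / (hm - hl) * np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))

dq = np.array([hm - hl, hm * um - hl * ul])                            # [q]
df = np.array([hm * um - hl * ul,
               hm * um**2 + 0.5 * hm**2 - hl * ul**2 - 0.5 * hl**2])   # [f(q)]

print(vs * dq)   # V_s [q]
print(df)        # [f(q)]; both vectors agree to rounding error
```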


\n\n\n\nWe see from these results, as claimed above, that\n\n* we need $h_m > h_l$ (or $h_m > h_r$) for the wave to be a shock, and\n* we take the negative sign if connected to a left state, and the positive if connected to a right state.\n\n## All shock solution\n\nWhen we assumed the solution contained two rarefactions it was possible to write the full solution in closed form. If we assume the solution contains two shocks then it is not possible to do this. However, it is straightforward to find the solution numerically. \n\nWe assume the left state ${\\bf w}_l = (h_l, u_l)$ and the right state ${\\bf w}_r = (h_r, u_r)$ are known, and that they both connect to the central state ${\\bf w}_m = (h_m, u_m)$ through shocks. We know that\n$$\n\\begin{align}\n u_m & = u_l - \\sqrt{\\tfrac{1}{2} \\left( h_l - h_m \\right) \\left( \\frac{h_l}{h_m} - \\frac{h_m}{h_l} \\right)}, \\\\\n u_m & = u_r + \\sqrt{\\tfrac{1}{2} \\left( h_r - h_m \\right) \\left( \\frac{h_r}{h_m} - \\frac{h_m}{h_r} \\right)}.\n\\end{align}\n$$\nWe schematically write these equations as\n$$\n\\begin{align}\n u_m & = \\phi_l \\left( h_m; {\\bf w}_l \\right), \\\\\n u_m & = \\phi_r \\left( h_m; {\\bf w}_r \\right),\n\\end{align}\n$$\nto indicate that the velocity in the central state, $u_m$, can be written as a function of the single unknown $h_m$ and known data.\n\nWe immediately see that $h_m$ is a root of the nonlinear equation\n$$\n \\phi \\left( h_m; {\\bf w}_l, {\\bf w}_r \\right) = \\phi_l \\left( h_m; {\\bf w}_l \\right) - \\phi_r \\left( h_m; {\\bf w}_r \\right) = 0.\n$$\n\nFinding the roots of scalar nonlinear equations is a standard problem in numerical methods, with methods such as bisection, Newton-Raphson and more being well-known. `scipy` provides a number of standard algorithms - here we will use the recommended `brentq` method.\n\nNote that as soon as we have numerically determined $h_m$ then either formula above gives $u_m$, and the shock speeds follow.\n\n\n```python\ndef plot_sw_all_shock(hl, ul, hr, ur):\n \"Plot the all shock solution curve for states (hl, ul) and (hr, ur)\"\n \n from scipy.optimize import brentq\n \n def phi(hstar):\n \"Function defining the root\"\n \n phi_l = ul - np.sqrt(0.5 * (hl - hstar) * (hl / hstar - hstar / hl))\n phi_r = ur + np.sqrt(0.5 * (hr - hstar) * (hr / hstar - hstar / hr))\n \n return phi_l - phi_r\n \n # There is a solution only in the physical case. 
\n physical_solution = True\n try:\n hm = brentq(phi, np.max([hl, hr]), 10.0 * h_max)\n except ValueError:\n physical_solution = False\n hm = hl\n um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))\n \n h = np.linspace(h_min, h_max, 500)\n u_negative = ul - np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))\n u_positive = ur + np.sqrt(0.5 * (hr - h) * (hr / h - h / hr))\n \n h_maximum = np.max([h_max, hl, hr, hm])\n h_minimum = np.min([h_min, hl, hr, hm])\n u_maximum = np.max([u_max, ul, ur, um])\n u_minimum = np.min([u_min, ul, ur, um])\n dh = h_maximum - h_minimum\n du = u_maximum - u_minimum\n xil_min = u_minimum - np.sqrt(h_maximum)\n xil_max = u_maximum - np.sqrt(h_minimum)\n xir_min = u_minimum + np.sqrt(h_minimum)\n xir_max = u_maximum + np.sqrt(h_maximum)\n \n vs_negative = ul - h / (h - hl) * np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))\n vs_positive = ur + h / (h - hr) * np.sqrt(0.5 * (hr - h) * (hr / h - h / hr))\n \n xi1_negative = u_negative - np.sqrt(h) \n xi1_positive = u_positive - np.sqrt(h)\n xi2_negative = u_negative + np.sqrt(h) \n xi2_positive = u_positive + np.sqrt(h)\n \n xi1_l = ul - np.sqrt(hl)\n xi2_r = ur + np.sqrt(hr)\n \n h1_physical = h[np.logical_and(xi1_negative <= vs_negative, xi1_l >= vs_negative)]\n u1_physical = u_negative[np.logical_and(xi1_negative <= vs_negative, xi1_l >= vs_negative)]\n h2_physical = h[np.logical_and(xi2_positive >= vs_positive, xi2_r <= vs_positive)]\n u2_physical = u_positive[np.logical_and(xi2_positive >= vs_positive, xi2_r <= vs_positive)]\n h1_unphysical = h[np.logical_or(xi1_negative >= vs_negative, xi1_l <= vs_negative)]\n u1_unphysical = u_negative[np.logical_or(xi1_negative >= vs_negative, xi1_l <= vs_negative)]\n h2_unphysical = h[np.logical_or(xi2_positive <= vs_positive, xi2_r >= vs_positive)]\n u2_unphysical = u_positive[np.logical_or(xi2_positive <= vs_positive, xi2_r >= vs_positive)]\n \n fig = plt.figure(figsize=(12,8))\n ax = fig.add_subplot(111)\n ax.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 3, label = r\"${\\bf w}_l$\")\n ax.plot(hr, ur, 'r+', markersize = 16, markeredgewidth = 3, label = r\"${\\bf w}_r$\")\n if physical_solution:\n ax.plot(hm, um, 'ro', markersize = 16, markeredgewidth = 3, label = r\"${\\bf w}_m$\")\n ax.plot(h1_physical, u1_physical, 'b-', linewidth = 2, \n label=\"Physical, 1-shock\")\n ax.plot(h1_unphysical, u1_unphysical, 'b--', linewidth = 2, \n label=\"Unphysical, 1-shock\")\n ax.plot(h2_physical, u2_physical, 'g-', linewidth = 2, \n label=\"Physical, 2-shock\")\n ax.plot(h2_unphysical, u2_unphysical, 'g--', linewidth = 2, \n label=\"Unphysical, 2-shock\")\n ax.set_xlabel(r\"$h$\")\n ax.set_ylabel(r\"$u$\")\n ax.set_xbound(h_minimum - 0.1 * dh, h_maximum + 0.1 * dh)\n ax.set_ybound(u_minimum - 0.1 * du, u_maximum + 0.1 * du)\n ax.legend()\n \n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_all_shock, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.2), \n hr = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ur = FloatSlider(min = -1.0, max = 1.0, value = -0.2))\n```\n\n\n
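For reference, here is the root-finding step on its own (not part of the original notebook), applied to the colliding-flow data used as the slider defaults, $h_l = h_r = 1$, $u_l = 0.2$, $u_r = -0.2$. The bracket passed to `brentq` is an ad-hoc assumption; the plotting function above brackets the root using its plot limits instead.

```python
import numpy as np
from scipy.optimize import brentq

# Not part of the original notebook: middle state of the all-shock solution for
# the default slider values (colliding flow), hl = hr = 1, ul = 0.2, ur = -0.2.
hl, ul, hr, ur = 1.0, 0.2, 1.0, -0.2

def phi(hstar):
    "Difference between the two expressions for u_m as functions of h_m."
    phi_l = ul - np.sqrt(0.5 * (hl - hstar) * (hl / hstar - hstar / hl))
    phi_r = ur + np.sqrt(0.5 * (hr - hstar) * (hr / hstar - hstar / hr))
    return phi_l - phi_r

hm = brentq(phi, max(hl, hr), 100.0)   # ad-hoc bracket
um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))
print(hm, um)   # hm sits a little above 1; um is 0 by symmetry
```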


\n\n\n\nFinally, we can plot the solution in physical space.\n\n\n```python\ndef plot_sw_all_shock_solution(hl, ul, hr, ur):\n \"Plot the all shock solution for states (hl, ul) and (hr, ur)\"\n \n from scipy.optimize import brentq\n \n def phi(hstar):\n \"Function defining the root\"\n \n phi_l = ul - np.sqrt(0.5 * (hl - hstar) * (hl / hstar - hstar / hl))\n phi_r = ur + np.sqrt(0.5 * (hr - hstar) * (hr / hstar - hstar / hr))\n \n return phi_l - phi_r\n \n # There is a solution only in the physical case. \n physical_solution = True\n try:\n hm = brentq(phi, np.max([hl, hr]), 10.0 * h_max)\n except ValueError:\n physical_solution = False\n hm = hl\n um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))\n \n xi1l = ul - np.sqrt(hl)\n xi1m = um - np.sqrt(hm)\n xi1r = ur - np.sqrt(hr)\n if physical_solution:\n vsl = ul - hm / (hm - hl) * np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))\n else:\n vsl = xi1l\n \n xi2r = ur + np.sqrt(hr)\n xi2m = um + np.sqrt(hm)\n xi2l = ul + np.sqrt(hl)\n if physical_solution:\n vsr = ur + hm / (hm - hr) * np.sqrt(0.5 * (hr - hm) * (hr / hm - hm / hr))\n else:\n vsr = xi2r\n \n xi_min = np.min([-1.0, xi1l, xi1m, xi2r, xi2m])\n xi_max = np.max([1.0, xi1l, xi1m, xi2r, xi2m])\n d_xi = xi_max - xi_min\n h_maximum = np.max([hl, hr, hm])\n h_minimum = np.min([hl, hr, hm])\n d_h = h_maximum - h_minimum\n u_maximum = np.max([ul, ur, um])\n u_minimum = np.min([ul, ur, um])\n d_u = u_maximum - u_minimum\n \n xi = np.array([xi_min - 0.1 * d_xi, vsl, vsl, vsr, vsr, xi_max + 0.1 * d_xi])\n h = np.array([hl, hl, hm, hm, hr, hr])\n u = np.array([ul, ul, um, um, ur, ur])\n \n fig = plt.figure(figsize=(12,8))\n ax1 = fig.add_subplot(221)\n if (hm > np.max([hl, hr])):\n ax1.plot(xi, h, 'b-', label = \"Physical solution\")\n else:\n ax1.plot(xi, h, 'r--', label = \"Unphysical solution\")\n ax1.set_ybound(h_minimum - 0.1 * d_h, h_maximum + 0.1 * d_h)\n ax1.set_xlabel(r\"$\\xi$\")\n ax1.set_ylabel(r\"$h$\")\n ax1.legend()\n ax2 = fig.add_subplot(222)\n if (hm > np.max([hl, hr])):\n ax2.plot(xi, u, 'b-', label = \"Physical solution\")\n else:\n ax2.plot(xi, u, 'r--', label = \"Unphysical solution\")\n ax2.set_ybound(u_minimum - 0.1 * d_u, u_maximum + 0.1 * d_u)\n ax2.set_xlabel(r\"$\\xi$\")\n ax2.set_ylabel(r\"$u$\")\n ax2.legend()\n \n ax3 = fig.add_subplot(223)\n left_end = np.min([-1.0, 1.1*xi1l])\n right_end = np.max([1.0, 1.1*xi2r])\n left_edge = left_end - xi1l\n right_edge = right_end - xi1r\n x1_start_points_l = np.linspace(np.min([left_edge, left_end]), 0.0, 20)\n x1_start_points_r = np.linspace(0.0, np.max([right_edge, right_end]), 20)\n t1_end_points_l = np.ones_like(x1_start_points_l)\n t1_end_points_r = np.ones_like(x1_start_points_r)\n \n # Look for intersections\n t1_end_points_l = np.minimum(t1_end_points_l, x1_start_points_l / (vsl - xi1l))\n x1_end_points_l = x1_start_points_l + xi1l * t1_end_points_l\n t1_end_points_r = np.minimum(t1_end_points_r, x1_start_points_r / (vsr - xi1r))\n x1_end_points_r = x1_start_points_r + xi1r * t1_end_points_r\n # Note: here we are cheating, and using the characteristic speed of the middle state, \n # ignoring how it varies across the rarefaction\n t1_final_points_r = np.ones_like(x1_start_points_r)\n t1_final_points_r = np.minimum(t1_final_points_r, \n (x1_end_points_r - t1_end_points_r * xi1m) / (vsl - xi1m))\n x1_final_points_r = x1_end_points_r + (t1_final_points_r - t1_end_points_r) * xi1m\n \n for xs, xe, te in zip(x1_start_points_l, x1_end_points_l, t1_end_points_l):\n ax3.plot([xs, xe], [0.0, te], 'b-')\n for xs, xe, te in 
zip(x1_start_points_r, x1_end_points_r, t1_end_points_r):\n ax3.plot([xs, xe], [0.0, te], 'g-')\n for xs, xe, ts, te in zip(x1_end_points_r, x1_final_points_r, t1_end_points_r, \n t1_final_points_r):\n ax3.plot([xs, xe], [ts, te], 'g-')\n \n # Highlight the shocks\n ax3.plot([0.0, vsl], [0.0, 1.0], 'r-', linewidth=2)\n ax3.plot([0.0, vsr], [0.0, 1.0], 'r-', linewidth=2)\n \n # Unphysical shock\n if not physical_solution:\n x_fill = []\n if xi1l < xi1m:\n x_fill = [xi1l, 0.0, xi1m]\n elif xi1l < vsl:\n x_fill = [xi1l, 0.0, vsl]\n elif vsl < xi1m:\n x_fill = [vsl, 0.0, xi1m]\n if len(x_fill) > 0:\n t_fill = [1.0, 0.0, 1.0]\n ax3.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n x_fill = []\n if xi2r > xi2m:\n x_fill = [xi2m, 0.0, xi2r]\n elif xi2m < vsr:\n x_fill = [xi2m, 0.0, vsr]\n elif vsr < xi2r:\n x_fill = [vsr, 0.0, xi2r]\n if len(x_fill) > 0:\n t_fill = [1.0, 0.0, 1.0]\n ax3.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n ax3.set_xlabel(r\"$x$\")\n ax3.set_ylabel(r\"$t$\")\n ax3.set_title(\"1-characteristics\")\n ax3.set_xbound(left_end, right_end)\n \n ax4 = fig.add_subplot(224)\n left_end = np.min([-1.0, 1.1*xi1l])\n right_end = np.max([1.0, 1.1*xi2r])\n left_edge = left_end - xi2l\n right_edge = right_end - xi2r\n x2_start_points_l = np.linspace(np.min([left_edge, left_end]), 0.0, 20)\n x2_start_points_r = np.linspace(0.0, np.max([right_edge, right_end]), 20)\n x2_end_points_r = x2_start_points_r + xi2r\n t2_end_points_l = np.ones_like(x2_start_points_l)\n t2_end_points_r = np.ones_like(x2_start_points_r)\n \n # Look for intersections\n t2_end_points_r = np.minimum(t2_end_points_r, x2_start_points_r / (vsr - xi2r))\n x2_end_points_r = x2_start_points_r + xi2r * t2_end_points_r\n t2_end_points_l = np.minimum(t2_end_points_l, x2_start_points_l / (vsl - xi2l))\n x2_end_points_l = x2_start_points_l + xi2l * t2_end_points_l\n # Note: here we are cheating, and using the characteristic speed of the middle state, \n # ignoring how it varies across the rarefaction\n t2_final_points_l = np.ones_like(x2_start_points_l)\n t2_final_points_l = np.minimum(t2_final_points_l, \n (x2_end_points_l - t2_end_points_l * xi2m) / (vsr - xi2m))\n x2_final_points_l = x2_end_points_l + (t2_final_points_l - t2_end_points_l) * xi2m\n \n for xs, xe, te in zip(x2_start_points_r, x2_end_points_r, t2_end_points_r):\n ax4.plot([xs, xe], [0.0, te], 'b-')\n for xs, xe, te in zip(x2_start_points_l, x2_end_points_l, t2_end_points_l):\n ax4.plot([xs, xe], [0.0, te], 'g-')\n for xs, xe, ts, te in zip(x2_end_points_l, x2_final_points_l, t2_end_points_l, \n t2_final_points_l):\n ax4.plot([xs, xe], [ts, te], 'g-')\n \n # Highlight the shocks\n ax4.plot([0.0, vsl], [0.0, 1.0], 'r-', linewidth=2)\n ax4.plot([0.0, vsr], [0.0, 1.0], 'r-', linewidth=2)\n \n # Unphysical shock\n if not physical_solution:\n x_fill = []\n if xi1l < xi1m:\n x_fill = [xi1l, 0.0, xi1m]\n elif xi1l < vsl:\n x_fill = [xi1l, 0.0, vsl]\n elif vsl < xi1m:\n x_fill = [vsl, 0.0, xi1m]\n if len(x_fill) > 0:\n t_fill = [1.0, 0.0, 1.0]\n ax4.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n x_fill = []\n if xi2r > xi2m:\n x_fill = [xi2m, 0.0, xi2r]\n elif xi2m < vsr:\n x_fill = [xi2m, 0.0, vsr]\n elif vsr < xi2r:\n x_fill = [vsr, 0.0, xi2r]\n if len(x_fill) > 0:\n t_fill = [1.0, 0.0, 1.0]\n ax4.fill_between(x_fill, t_fill, 1.0, facecolor = 'red', alpha = 0.5)\n \n ax4.set_xlabel(r\"$x$\")\n ax4.set_ylabel(r\"$t$\")\n ax4.set_title(\"2-characteristics\")\n ax4.set_xbound(left_end, right_end)\n \n 
fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_all_shock_solution, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.2), \n hr = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ur = FloatSlider(min = -1.0, max = 1.0, value = -0.2))\n```\n\n\n
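The physical/unphysical split used in these plots can also be checked directly against the Lax entropy condition for the 1-wave, $\lambda_1({\bf q}_m) < V_s < \lambda_1({\bf q}_l)$. The sketch below (not part of the original notebook, again with $g = 1$) confirms that the condition holds when $h_m > h_l$ and fails when $h_m < h_l$.

```python
import numpy as np

# Not part of the original notebook: Lax entropy condition for a 1-shock,
# lambda_1(q_m) < V_s < lambda_1(q_l), with g = 1.
def one_shock(hl, ul, hm):
    "Middle velocity and shock speed on the negative (1-shock) branch."
    root = np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))
    return ul - root, ul - hm / (hm - hl) * root

hl, ul = 1.0, 0.0
for hm in (2.0, 0.5):                  # hm > hl is physical, hm < hl is not
    um, vs = one_shock(hl, ul, hm)
    lam_l = ul - np.sqrt(hl)           # 1-characteristic speed on the left
    lam_m = um - np.sqrt(hm)           # 1-characteristic speed in the middle
    print(hm, lam_m < vs < lam_l)      # True only for hm > hl
```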


\n\n\n\n## Full solution\n\nThe all shock solution illustrates how the full solution can be obtained. We know that \n\n1. the central state ${\\bf w}_m$ will be connected to the known states ${\\bf w}_{l, r}$ across waves that are either shocks or rarefactions,\n2. if $h_m > h_{l, r}$ then the wave will be a shock, otherwise it will be a rarefaction, and\n3. given $h_m$ and the known data, we can compute $u_m$ for either a shock or a rarefaction.\n\nSo, using the results above, we can find the full solution to the Riemann problem by solving the nonlinear algebraic root-finding problem\n$$\n \\Phi \\left( h_m ; {\\bf w}_l, {\\bf w}_r \\right) = 0,\n$$\nwhere\n$$\n \\Phi \\left( h_m ; {\\bf w}_l, {\\bf w}_r \\right) = \\Phi_l \\left( h_m ; {\\bf w}_l \\right) - \\Phi_r \\left( h_m ; {\\bf w}_r \\right),\n$$\nand\n$$\n\\begin{align}\n \\Phi_l & = u_m \\left( h_m ; {\\bf w}_l \\right) & \\Phi_r & = u_m \\left( h_m ; {\\bf w}_r \\right) \\\\\n & = \\begin{cases} u_l + 2 \\left( \\sqrt{h_l} - \\sqrt{h_m} \\right) & h_l > h_m \\\\ u_l - \\sqrt{\\tfrac{1}{2} \\left( h_l - h_m \\right) \\left( \\frac{h_l}{h_m} - \\frac{h_m}{h_l} \\right)} & h_l < h_m \\end{cases} & & = \\begin{cases} u_r - 2 \\left( \\sqrt{h_r} - \\sqrt{h_m} \\right) & h_r > h_m \\\\ u_r + \\sqrt{\\tfrac{1}{2} \\left( h_r - h_m \\right) \\left( \\frac{h_r}{h_m} - \\frac{h_m}{h_r} \\right)} & h_r < h_m \\end{cases}.\n\\end{align}\n$$\n\n\n```python\ndef plot_sw_Riemann_curves(hl, ul, hr, ur):\n \"Plot the solution curves for states (hl, ul) and (hr, ur)\"\n \n from scipy.optimize import brentq\n \n def phi(hstar):\n \"Function defining the root\"\n \n if hl < hstar:\n phi_l = ul - np.sqrt(0.5 * (hl - hstar) * (hl / hstar - hstar / hl))\n else:\n phi_l = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hstar))\n if hr < hstar:\n phi_r = ur + np.sqrt(0.5 * (hr - hstar) * (hr / hstar - hstar / hr))\n else:\n phi_r = ur - 2.0 * (np.sqrt(hr) - np.sqrt(hstar))\n \n return phi_l - phi_r\n \n hm = brentq(phi, 0.1 * h_min, 10.0 * h_max)\n if hl < hm:\n um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))\n else:\n um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))\n \n h_maximum = np.max([h_max, hl, hr, hm])\n h_minimum = np.min([h_min, hl, hr, hm])\n u_maximum = np.max([u_max, ul, ur, um])\n u_minimum = np.min([u_min, ul, ur, um])\n dh = h_maximum - h_minimum\n du = u_maximum - u_minimum\n \n # Now plot the rarefaction and shock curves as appropriate\n # Here we only plot the physical pieces.\n \n h1_shock = np.linspace(hl, h_max)\n u1_shock = ul - np.sqrt(0.5 * (hl - h1_shock) * (hl / h1_shock - h1_shock / hl))\n h2_shock = np.linspace(hr, h_max)\n u2_shock = ur + np.sqrt(0.5 * (hr - h2_shock) * (hr / h2_shock - h2_shock / hr))\n \n h1_rarefaction = np.linspace(h_min, hl)\n u1_rarefaction = ul + 2.0 * (np.sqrt(hl) - np.sqrt(h1_rarefaction))\n h2_rarefaction = np.linspace(h_min, hr)\n u2_rarefaction = ur - 2.0 * (np.sqrt(hr) - np.sqrt(h2_rarefaction))\n \n fig = plt.figure(figsize=(12,8))\n ax = fig.add_subplot(111)\n ax.plot(hl, ul, 'rx', markersize = 16, markeredgewidth = 3, label = r\"${\\bf w}_l$\")\n ax.plot(hr, ur, 'r+', markersize = 16, markeredgewidth = 3, label = r\"${\\bf w}_r$\")\n ax.plot(hm, um, 'ro', markersize = 16, markeredgewidth = 3, label = r\"${\\bf w}_m$\")\n ax.plot(h1_shock, u1_shock, 'b-', linewidth = 2, \n label=\"1-shock\")\n ax.plot(h1_rarefaction, u1_rarefaction, 'b-.', linewidth = 2, \n label=\"1-rarefaction\")\n ax.plot(h2_shock, u2_shock, 'g-', linewidth = 2, \n label=\"2-shock\")\n ax.plot(h2_rarefaction, u2_rarefaction, 
'g-.', linewidth = 2, \n label=\"2-rarefaction\")\n ax.set_xlabel(r\"$h$\")\n ax.set_ylabel(r\"$u$\")\n ax.set_xbound(h_minimum - 0.1 * dh, h_maximum + 0.1 * dh)\n ax.set_ybound(u_minimum - 0.1 * du, u_maximum + 0.1 * du)\n ax.legend()\n \n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_Riemann_curves, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.2), \n hr = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ur = FloatSlider(min = -1.0, max = 1.0, value = -0.2))\n```\n\n\n
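To separate the algebra from the plotting, a minimal stand-alone version of the middle-state solver is sketched below (not part of the original notebook). It uses the piecewise $\Phi$ defined above with $g = 1$; the bracket `[1e-6, 100]` is an ad-hoc assumption standing in for the notebook's plot limits.

```python
import numpy as np
from scipy.optimize import brentq

# Not part of the original notebook: stand-alone middle-state solver built on
# the piecewise Phi defined above (g = 1).
def sw_middle_state(hl, ul, hr, ur):
    "Return (h_m, u_m) for the shallow water Riemann problem."
    def phi(h):
        if hl < h:
            phi_l = ul - np.sqrt(0.5 * (hl - h) * (hl / h - h / hl))  # 1-shock
        else:
            phi_l = ul + 2.0 * (np.sqrt(hl) - np.sqrt(h))             # 1-rarefaction
        if hr < h:
            phi_r = ur + np.sqrt(0.5 * (hr - h) * (hr / h - h / hr))  # 2-shock
        else:
            phi_r = ur - 2.0 * (np.sqrt(hr) - np.sqrt(h))             # 2-rarefaction
        return phi_l - phi_r

    hm = brentq(phi, 1.0e-6, 100.0)    # ad-hoc bracket
    if hl < hm:
        um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))
    else:
        um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))
    return hm, um

print(sw_middle_state(2.0, 0.0, 1.0, 0.0))   # a small dam-break-type problem
```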


\n\n\n\nFinally, we can plot the solution in physical space.\n\n\n```python\ndef plot_sw_Riemann_solution(hl, ul, hr, ur):\n \"Plot the Riemann problem solution for states (hl, ul) and (hr, ur)\"\n \n from scipy.optimize import brentq\n \n def phi(hstar):\n \"Function defining the root\"\n \n if hl < hstar:\n phi_l = ul - np.sqrt(0.5 * (hl - hstar) * (hl / hstar - hstar / hl))\n else:\n phi_l = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hstar))\n if hr < hstar:\n phi_r = ur + np.sqrt(0.5 * (hr - hstar) * (hr / hstar - hstar / hr))\n else:\n phi_r = ur - 2.0 * (np.sqrt(hr) - np.sqrt(hstar))\n \n return phi_l - phi_r\n \n left_raref = False\n left_shock = False\n right_raref = False\n right_shock = False\n \n hm = brentq(phi, 0.1 * h_min, 10.0 * h_max)\n if hl < hm:\n um = ul - np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))\n else:\n um = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hm))\n \n h_maximum = np.max([h_max, hl, hr, hm])\n h_minimum = np.min([h_min, hl, hr, hm])\n u_maximum = np.max([u_max, ul, ur, um])\n u_minimum = np.min([u_min, ul, ur, um])\n dh = h_maximum - h_minimum\n du = u_maximum - u_minimum\n \n xi1l = ul - np.sqrt(hl)\n xi1m = um - np.sqrt(hm)\n xi1r = ur - np.sqrt(hr)\n if hm > hl:\n left_shock = True\n vsl = ul - hm / (hm - hl) * np.sqrt(0.5 * (hl - hm) * (hl / hm - hm / hl))\n else:\n left_raref = True\n hl_raref = np.linspace(hl, hm, 20)\n ul_raref = ul + 2.0 * (np.sqrt(hl) - np.sqrt(hl_raref))\n xil_raref = ul_raref - np.sqrt(hl_raref)\n \n xi2r = ur + np.sqrt(hr)\n xi2m = um + np.sqrt(hm)\n xi2l = ul + np.sqrt(hl)\n if hm > hr:\n right_shock = True\n vsr = ur + hm / (hm - hr) * np.sqrt(0.5 * (hr - hm) * (hr / hm - hm / hr))\n else:\n right_raref = True\n hr_raref = np.linspace(hm, hr)\n ur_raref = ur - 2.0 * (np.sqrt(hr) - np.sqrt(hr_raref))\n xir_raref = ur_raref + np.sqrt(hr_raref)\n \n xi_min = np.min([-1.0, xi1l, xi1m, xi2r, xi2m])\n xi_max = np.max([1.0, xi1l, xi1m, xi2r, xi2m])\n d_xi = xi_max - xi_min\n h_maximum = np.max([hl, hr, hm])\n h_minimum = np.min([hl, hr, hm])\n d_h = h_maximum - h_minimum\n u_maximum = np.max([ul, ur, um])\n u_minimum = np.min([ul, ur, um])\n d_u = u_maximum - u_minimum\n \n xi = np.array([xi_min - 0.1 * d_xi])\n h = np.array([hl])\n u = np.array([ul])\n if left_shock:\n xi = np.append(xi, [vsl, vsl])\n h = np.append(h, [hl, hm])\n u = np.append(u, [ul, um])\n else:\n xi = np.append(xi, xil_raref)\n h = np.append(h, hl_raref)\n u = np.append(u, ul_raref)\n if right_shock:\n xi = np.append(xi, [vsr, vsr])\n h = np.append(h, [hm, hr])\n u = np.append(u, [um, ur])\n else:\n xi = np.append(xi, xir_raref)\n h = np.append(h, hr_raref)\n u = np.append(u, ur_raref)\n xi = np.append(xi, [xi_max + 0.1 * d_xi])\n h = np.append(h, [hr])\n u = np.append(u, [ur])\n \n fig = plt.figure(figsize=(12,8))\n ax1 = fig.add_subplot(221)\n ax1.plot(xi, h, 'b-', label = \"True solution\")\n ax1.set_ybound(h_minimum - 0.1 * d_h, h_maximum + 0.1 * d_h)\n ax1.set_xlabel(r\"$\\xi$\")\n ax1.set_ylabel(r\"$h$\")\n ax1.legend()\n ax2 = fig.add_subplot(222)\n ax2.plot(xi, u, 'b-', label = \"True solution\")\n ax2.set_ybound(u_minimum - 0.1 * d_u, u_maximum + 0.1 * d_u)\n ax2.set_xlabel(r\"$\\xi$\")\n ax2.set_ylabel(r\"$u$\")\n ax2.legend()\n \n ax3 = fig.add_subplot(223)\n left_end = np.min([-1.0, 1.1*xi1l])\n right_end = np.max([1.0, 1.1*xi2r])\n left_edge = left_end - xi1l\n right_edge = right_end - xi1r\n x1_start_points_l = np.linspace(np.min([left_edge, left_end]), 0.0, 20)\n x1_start_points_r = np.linspace(0.0, np.max([right_edge, right_end]), 20)\n t1_end_points_l = 
np.ones_like(x1_start_points_l)\n t1_end_points_r = np.ones_like(x1_start_points_r)\n \n # Look for intersections\n if left_shock:\n t1_end_points_l = np.minimum(t1_end_points_l, x1_start_points_l / (vsl - xi1l))\n x1_end_points_l = x1_start_points_l + xi1l * t1_end_points_l\n if right_shock:\n t1_end_points_r = np.minimum(t1_end_points_r, x1_start_points_r / (vsr - xi1r))\n else:\n t1_end_points_r = np.minimum(t1_end_points_r, x1_start_points_r / (xi2r - xi1r))\n x1_end_points_r = x1_start_points_r + xi1r * t1_end_points_r\n # Note: here we are cheating, and using the characteristic speed of the middle state, \n # ignoring how it varies across the rarefaction\n t1_final_points_r = np.ones_like(x1_start_points_r)\n if left_shock:\n t1_final_points_r = np.minimum(t1_final_points_r, \n (x1_end_points_r - t1_end_points_r * xi1m) / \n (vsl - xi1m))\n x1_final_points_r = x1_end_points_r + (t1_final_points_r - t1_end_points_r) * xi1m\n \n for xs, xe, te in zip(x1_start_points_l, x1_end_points_l, t1_end_points_l):\n ax3.plot([xs, xe], [0.0, te], 'b-')\n for xs, xe, te in zip(x1_start_points_r, x1_end_points_r, t1_end_points_r):\n ax3.plot([xs, xe], [0.0, te], 'g-')\n for xs, xe, ts, te in zip(x1_end_points_r, x1_final_points_r, t1_end_points_r, \n t1_final_points_r):\n ax3.plot([xs, xe], [ts, te], 'g-')\n \n # Highlight the waves\n if left_shock:\n ax3.plot([0.0, vsl], [0.0, 1.0], 'r-', linewidth=2)\n else:\n ax3.plot([0.0, xi1l], [0.0, 1.0], 'r-', linewidth=2)\n ax3.plot([0.0, xi1m], [0.0, 1.0], 'r-', linewidth=2)\n xi = np.linspace(xi1l, xi1m, 11)\n x_end_rarefaction = xi\n for xe in x_end_rarefaction:\n ax3.plot([0.0, xe], [0.0, 1.0], 'r--')\n if right_shock:\n ax3.plot([0.0, vsr], [0.0, 1.0], 'r-', linewidth=2)\n else:\n ax3.plot([0.0, xi2m], [0.0, 1.0], 'r-', linewidth=2)\n ax3.plot([0.0, xi2r], [0.0, 1.0], 'r-', linewidth=2)\n \n ax3.set_xlabel(r\"$x$\")\n ax3.set_ylabel(r\"$t$\")\n ax3.set_title(\"1-characteristics\")\n ax3.set_xbound(left_end, right_end)\n \n ax4 = fig.add_subplot(224)\n left_end = np.min([-1.0, 1.1*xi1l])\n right_end = np.max([1.0, 1.1*xi2r])\n left_edge = left_end - xi2l\n right_edge = right_end - xi2r\n x2_start_points_l = np.linspace(np.min([left_edge, left_end]), 0.0, 20)\n x2_start_points_r = np.linspace(0.0, np.max([right_edge, right_end]), 20)\n x2_end_points_r = x2_start_points_r + xi2r\n t2_end_points_l = np.ones_like(x2_start_points_l)\n t2_end_points_r = np.ones_like(x2_start_points_r)\n \n # Look for intersections\n if right_shock:\n t2_end_points_r = np.minimum(t2_end_points_r, x2_start_points_r / (vsr - xi2r))\n x2_end_points_r = x2_start_points_r + xi2r * t2_end_points_r\n if left_shock:\n t2_end_points_l = np.minimum(t2_end_points_l, x2_start_points_l / (vsl - xi2l))\n else:\n t2_end_points_l = np.minimum(t2_end_points_l, x2_start_points_l / (xi1l - xi2l))\n x2_end_points_l = x2_start_points_l + xi2l * t2_end_points_l\n # Note: here we are cheating, and using the characteristic speed of the middle state, \n # ignoring how it varies across the rarefaction\n t2_final_points_l = np.ones_like(x2_start_points_l)\n if right_shock:\n t2_final_points_l = np.minimum(t2_final_points_l, \n (x2_end_points_l - t2_end_points_l * xi2m) / \n (vsr - xi2m))\n x2_final_points_l = x2_end_points_l + (t2_final_points_l - t2_end_points_l) * xi2m\n \n for xs, xe, te in zip(x2_start_points_r, x2_end_points_r, t2_end_points_r):\n ax4.plot([xs, xe], [0.0, te], 'b-')\n for xs, xe, te in zip(x2_start_points_l, x2_end_points_l, t2_end_points_l):\n ax4.plot([xs, xe], [0.0, te], 'g-')\n 
for xs, xe, ts, te in zip(x2_end_points_l, x2_final_points_l, t2_end_points_l, \n t2_final_points_l):\n ax4.plot([xs, xe], [ts, te], 'g-')\n \n # Highlight the waves\n if left_shock:\n ax4.plot([0.0, vsl], [0.0, 1.0], 'r-', linewidth=2)\n else:\n ax4.plot([0.0, xi1l], [0.0, 1.0], 'r-', linewidth=2)\n ax4.plot([0.0, xi1m], [0.0, 1.0], 'r-', linewidth=2)\n if right_shock:\n ax4.plot([0.0, vsr], [0.0, 1.0], 'r-', linewidth=2)\n else:\n ax4.plot([0.0, xi2m], [0.0, 1.0], 'r-', linewidth=2)\n ax4.plot([0.0, xi2r], [0.0, 1.0], 'r-', linewidth=2)\n xi = np.linspace(xi2m, xi2r, 11)\n x_end_rarefaction = xi\n for xe in x_end_rarefaction:\n ax4.plot([0.0, xe], [0.0, 1.0], 'r--')\n \n ax4.set_xlabel(r\"$x$\")\n ax4.set_ylabel(r\"$t$\")\n ax4.set_title(\"2-characteristics\")\n ax4.set_xbound(left_end, right_end)\n \n fig.tight_layout()\n```\n\n\n```python\ninteractive(plot_sw_Riemann_solution, \n hl = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ul = FloatSlider(min = -1.0, max = 1.0, value = 0.2), \n hr = FloatSlider(min = 0.1, max = 10.0, value = 1.0), \n ur = FloatSlider(min = -1.0, max = 1.0, value = -0.2))\n```\n\n\n
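The comments in the code above note that the characteristic plots "cheat" by using the middle-state speed across a rarefaction. For completeness, here is a short sketch (not part of the original notebook) of how the state actually varies inside a 1-rarefaction fan, using the constancy of $u + 2\sqrt{h}$ across the wave and $\xi = u - \sqrt{h}$ inside the fan, again with $g = 1$.

```python
import numpy as np

# Not part of the original notebook: the state inside a 1-rarefaction fan.
# Across the fan u + 2*sqrt(h) is constant; inside it xi = u - sqrt(h) (g = 1).
def state_in_1_fan(xi, hl, ul):
    c = (ul + 2.0 * np.sqrt(hl) - xi) / 3.0   # sqrt(h) at similarity variable xi
    return c**2, xi + c                       # (h, u) inside the fan

# At the head of the fan, xi = ul - sqrt(hl), the left state is recovered.
print(state_in_1_fan(-np.sqrt(2.0), 2.0, 0.0))
```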


\n\n\n", "meta": {"hexsha": "c330800252791e09635d45654e049cbbe6d7f06a", "size": 93906, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Lesson_04_Shallow_Water.ipynb", "max_stars_repo_name": "IanHawke/RiemannPython", "max_stars_repo_head_hexsha": "57d6e372861a9c89b15755fb1d6ff9ea8116f6e2", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 9, "max_stars_repo_stars_event_min_datetime": "2015-08-24T01:24:34.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-08T18:26:24.000Z", "max_issues_repo_path": "Lesson_04_Shallow_Water.ipynb", "max_issues_repo_name": "IanHawke/RiemannPython", "max_issues_repo_head_hexsha": "57d6e372861a9c89b15755fb1d6ff9ea8116f6e2", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Lesson_04_Shallow_Water.ipynb", "max_forks_repo_name": "IanHawke/RiemannPython", "max_forks_repo_head_hexsha": "57d6e372861a9c89b15755fb1d6ff9ea8116f6e2", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 3, "max_forks_repo_forks_event_min_datetime": "2018-07-31T17:41:21.000Z", "max_forks_repo_forks_event_max_datetime": "2019-07-11T13:50:22.000Z", "avg_line_length": 44.0459662289, "max_line_length": 510, "alphanum_fraction": 0.5244073861, "converted": true, "num_tokens": 21853, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.399811640739795, "lm_q2_score": 0.22000710486009023, "lm_q1q2_score": 0.0879614015685248}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n## Krmiljenje povratne zveze stanj - zmogljivost krmiljenja\n\nZa sistem:\n\n$$\n\\dot{x}=\\underbrace{\\begin{bmatrix}-0.5&1\\\\0&-0.1\\end{bmatrix}}_{A}x+\\underbrace{\\begin{bmatrix}0\\\\1\\end{bmatrix}}_{B}u\n$$\n\nna\u010drtuj krmilnik tako, da bo prva spremenljivka stanja sistema sledila referen\u010dni kora\u010dni funkciji brez odstopka v stacionarnem \u010dasu s \u010dasom ustalitve (odziv naj dose\u017ee 95% kon\u010dne vrednosti) kraj\u0161im od 1 s.\n\nZ namenom zagotovitve zgornjim zahtevam dodamo fiktivno spremenljivko stanja $x_3$ z dinamiko $\\dot{x_3}=x_1-x_{1r}$, kjer $x_{1r}$ predstavlja referen\u010dni signal, tako da, \u010de je raz\u0161irjen sistem asimptoti\u010dno stabilen, potem konvergira nova spremenljivka stanja $x_3$ k vrednosti 0, kar zagotavlja, da gre $x_1$ k vrednosti $x_{1r}$.\n\nRaz\u0161irjen sistem lahko popi\u0161emo z naslednjimi ena\u010dbami:\n\n$$\n\\dot{x}_a=\\underbrace{\\begin{bmatrix}-0.5&1&0\\\\0&-0.1&0\\\\1&0&0\\end{bmatrix}}_{A_a}x_a+\\underbrace{\\begin{bmatrix}0\\\\1\\\\0\\end{bmatrix}}_{B_a}u+\\underbrace{\\begin{bmatrix}0\\\\0\\\\-1\\end{bmatrix}}_{B_{\\text{ref}}}x_{1r}\n$$\n\nin naslednjo spoznavnostno matriko:\n\n$$\n\\begin{bmatrix}B_a&A_aB_a&A_a^2B_a\\end{bmatrix} = \\begin{bmatrix}0&1&-0.6\\\\1&-0.1&0.01\\\\0&0&1\\end{bmatrix}\n$$\n\nKer $\\text{rank}=3$ je raz\u0161irjen sistem vodljiv.\n\nZ namenom zagotovitve druge zahteve, je mo\u017ena re\u0161itev ta, da s prilagajanjem polov dose\u017eemo, da ima sistem dominanten pol pri $-3$ rad/s (opomba: $e^{\\lambda t}=e^{-3t}$ pri $t=1$ s zna\u0161a $0.4978..<0.05$). 
Izbrana pola sta tako $\\lambda_1=-3\\,\\text{in}\\,\\lambda_2=\\lambda_3=-30$, s pripadajo\u010do matriko oja\u010danja $K_a=\\begin{bmatrix}1048.75&62.4&2700\\end{bmatrix}$.\n\nZaprtozan\u010dni sistem lahko tako zapi\u0161emo kot:\n\n$$\n\\dot{x}_a=(A_a-B_aK_a)x_a+B_av+B_{\\text{ref}}x_{1r}=\\begin{bmatrix}-0.5&1&0\\\\-1048.75&-62.5&-2700\\\\1&0&0\\end{bmatrix}x_a+\\begin{bmatrix}0\\\\1\\\\0\\end{bmatrix}v+\\begin{bmatrix}0\\\\0\\\\-1\\end{bmatrix}x_{1r}\n$$\n\n### Kako upravljati s tem interaktivnim primerom?\nPreizkusi razli\u010dne re\u0161itve s spreminjanjem oja\u010danja $K$ ali neposrednim dolo\u010danjem vrednosti zaprtozan\u010dnih lastnih vrednosti.\n\n\n```python\n%matplotlib inline\nimport control as control\nimport numpy\nimport sympy as sym\nfrom IPython.display import display, Markdown\nimport ipywidgets as widgets\nimport matplotlib.pyplot as plt\n\n\n#print a matrix latex-like\ndef bmatrix(a):\n \"\"\"Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)\n\n :a: numpy array\n :returns: LaTeX bmatrix as a string\n \"\"\"\n if len(a.shape) > 2:\n raise ValueError('bmatrix can at most display two dimensions')\n lines = str(a).replace('[', '').replace(']', '').splitlines()\n rv = [r'\\begin{bmatrix}']\n rv += [' ' + ' & '.join(l.split()) + r'\\\\' for l in lines]\n rv += [r'\\end{bmatrix}']\n return '\\n'.join(rv)\n\n\n# Display formatted matrix: \ndef vmatrix(a):\n if len(a.shape) > 2:\n raise ValueError('bmatrix can at most display two dimensions')\n lines = str(a).replace('[', '').replace(']', '').splitlines()\n rv = [r'\\begin{vmatrix}']\n rv += [' ' + ' & '.join(l.split()) + r'\\\\' for l in lines]\n rv += [r'\\end{vmatrix}']\n return '\\n'.join(rv)\n\n\n#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !\nclass matrixWidget(widgets.VBox):\n def updateM(self,change):\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.M_[irow,icol] = self.children[irow].children[icol].value\n #print(self.M_[irow,icol])\n self.value = self.M_\n\n def dummychangecallback(self,change):\n pass\n \n \n def __init__(self,n,m):\n self.n = n\n self.m = m\n self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))\n self.value = self.M_\n widgets.VBox.__init__(self,\n children = [\n widgets.HBox(children = \n [widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]\n ) \n for j in range(n)\n ])\n \n #fill in widgets and tell interact to call updateM each time a children changes value\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].value = self.M_[irow,icol]\n self.children[irow].children[icol].observe(self.updateM, names='value')\n #value = Unicode('example@example.com', help=\"The email value.\").tag(sync=True)\n self.observe(self.updateM, names='value', type= 'All')\n \n def setM(self, newM):\n #disable callbacks, change values, and reenable\n self.unobserve(self.updateM, names='value', type= 'All')\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].unobserve(self.updateM, names='value')\n self.M_ = newM\n self.value = self.M_\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].value = self.M_[irow,icol]\n for irow in range(0,self.n):\n for icol in range(0,self.m):\n self.children[irow].children[icol].observe(self.updateM, names='value')\n self.observe(self.updateM, names='value', type= 'All') \n\n 
#self.children[irow].children[icol].observe(self.updateM, names='value')\n\n \n#overlaod class for state space systems that DO NOT remove \"useless\" states (what \"professor\" of automatic control would do this?)\nclass sss(control.StateSpace):\n def __init__(self,*args):\n #call base class init constructor\n control.StateSpace.__init__(self,*args)\n #disable function below in base class\n def _remove_useless_states(self):\n pass\n```\n\n\n```python\n# Preparatory cell\n\nA = numpy.matrix('-0.5 1 0; 0 -0.1 0; 1 0 0')\nB = numpy.matrix('0; 1; 0')\nBr = numpy.matrix('0; 0; -1')\nC = numpy.matrix('1 0 0')\nX0 = numpy.matrix('0; 0; 0')\nK = numpy.matrix([1048.75,62.4,2700])\n\nAw = matrixWidget(3,3)\nAw.setM(A)\nBw = matrixWidget(3,1)\nBw.setM(B)\nBrw = matrixWidget(3,1)\nBrw.setM(Br)\nCw = matrixWidget(1,3)\nCw.setM(C)\nX0w = matrixWidget(3,1)\nX0w.setM(X0)\nKw = matrixWidget(1,3)\nKw.setM(K)\n\n\neig1c = matrixWidget(1,1)\neig2c = matrixWidget(2,1)\neig3c = matrixWidget(1,1)\neig1c.setM(numpy.matrix([-3])) \neig2c.setM(numpy.matrix([[-30],[0]]))\neig3c.setM(numpy.matrix([-30]))\n```\n\n\n```python\n# Misc\n\n#create dummy widget \nDW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))\n\n#create button widget\nSTART = widgets.Button(\n description='Test',\n disabled=False,\n button_style='', # 'success', 'info', 'warning', 'danger' or ''\n tooltip='Test',\n icon='check'\n)\n \ndef on_start_button_clicked(b):\n #This is a workaround to have intreactive_output call the callback:\n # force the value of the dummy widget to change\n if DW.value> 0 :\n DW.value = -1\n else: \n DW.value = 1\n pass\nSTART.on_click(on_start_button_clicked)\n\n# Define type of method \nselm = widgets.Dropdown(\n options= ['Nastavi K', 'Nastavi lastne vrednosti'],\n value= 'Nastavi K',\n description='',\n disabled=False\n)\n\n# Define the number of complex eigenvalues for the observer\nselc = widgets.Dropdown(\n options= ['brez kompleksnih lastnih vrednosti', 'dve kompleksni lastni vrednosti'],\n value= 'brez kompleksnih lastnih vrednosti',\n description='Lastne vrednosti:',\n disabled=False\n)\n\n#define type of ipout \nselu = widgets.Dropdown(\n options=['impulzna funkcija', 'kora\u010dna funkcija', 'sinusoidna funkcija', 'kvadratni val'],\n value='impulzna funkcija',\n description='Vhod:',\n disabled=False,\n style = {'description_width': 'initial','button_width':'180px'}\n)\n# Define the values of the input\nu = widgets.FloatSlider(\n value=1,\n min=0,\n max=20.0,\n step=0.1,\n description='Referenca:',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.1f',\n)\nperiod = widgets.FloatSlider(\n value=0.5,\n min=0.01,\n max=4,\n step=0.01,\n description='Perioda: ',\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.2f',\n)\n```\n\n\n```python\n# Support functions\n\ndef eigen_choice(selc):\n if selc == 'brez kompleksnih lastnih vrednosti':\n eig1c.children[0].children[0].disabled = False\n eig2c.children[1].children[0].disabled = True\n eigc = 0\n if selc == 'dve kompleksni lastni vrednosti':\n eig1c.children[0].children[0].disabled = True\n eig2c.children[1].children[0].disabled = False\n eigc = 2\n return eigc\n\ndef method_choice(selm):\n if selm == 'Nastavi K':\n method = 1\n selc.disabled = True\n if selm == 'Nastavi lastne vrednosti':\n method = 2\n selc.disabled = False\n return method\n```\n\n\n```python\ndef main_callback(Aw, Bw, Brw, X0w, K, eig1c, eig2c, eig3c, u, period, 
selm, selc, selu, DW):\n A, B, Br = Aw, Bw, Brw \n sols = numpy.linalg.eig(A)\n eigc = eigen_choice(selc)\n method = method_choice(selm)\n \n if method == 1:\n sol = numpy.linalg.eig(A-B*K)\n if method == 2:\n if eigc == 0:\n K = control.acker(A, B, [eig1c[0,0], eig2c[0,0], eig3c[0,0]])\n Kw.setM(K) \n if eigc == 2:\n K = control.acker(A, B, [eig1c[0,0], \n numpy.complex(eig2c[0,0],eig2c[1,0]), \n numpy.complex(eig2c[0,0],-eig2c[1,0])])\n Kw.setM(K)\n sol = numpy.linalg.eig(A-B*K)\n print('Lastne vrednosti sistema so:',round(sols[0][0],4),',',round(sols[0][1],4),'in',round(sols[0][2],4))\n print('Lastne vrednosti krmiljenega sistema so:',round(sol[0][0],4),',',round(sol[0][1],4),'in',round(sol[0][2],4))\n \n sys = sss(A-B*K,Br,C,0)\n T = numpy.linspace(0, 6, 1000)\n \n if selu == 'impulzna funkcija': #selu\n U = [0 for t in range(0,len(T))]\n U[0] = u\n T, yout, xout = control.forced_response(sys,T,U,X0w)\n if selu == 'kora\u010dna funkcija':\n U = [u for t in range(0,len(T))]\n T, yout, xout = control.forced_response(sys,T,U,X0w)\n if selu == 'sinusoidna funkcija':\n U = u*numpy.sin(2*numpy.pi/period*T)\n T, yout, xout = control.forced_response(sys,T,U,X0w)\n if selu == 'kvadratni val':\n U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))\n T, yout, xout = control.forced_response(sys,T,U,X0w)\n \n fig = plt.figure(num='Simulacija', figsize=(16,10))\n \n fig.add_subplot(211)\n plt.title('Odziv prve spremenljivke stanj')\n plt.ylabel('$X_1$ vs ref')\n plt.plot(T,xout[0],T,U,'r--')\n plt.xlabel('$t$ [s]')\n plt.legend(['$x_1$','Referenca'])\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \n fig.add_subplot(212)\n poles, zeros = control.pzmap(sys,Plot=False)\n plt.title('Diagram polov in ni\u010del')\n plt.ylabel('Im')\n plt.plot(numpy.real(poles),numpy.imag(poles),'rx',numpy.real(zeros),numpy.imag(zeros),'bo')\n plt.xlabel('Re')\n plt.axvline(x=0,color='black',linewidth=0.8)\n plt.axhline(y=0,color='black',linewidth=0.8)\n plt.grid()\n \nalltogether = widgets.VBox([widgets.HBox([selm, \n selc, \n selu]),\n widgets.Label(' ',border=3),\n widgets.HBox([widgets.Label('K:',border=3), Kw, \n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('Lastne vrednosti:',border=3), \n eig1c, \n eig2c, \n eig3c,\n widgets.Label(' ',border=3),\n widgets.Label(' ',border=3),\n widgets.Label('X0:',border=3), X0w]),\n widgets.Label(' ',border=3),\n widgets.HBox([u, \n period, \n START]),\n widgets.Label(' ',border=3),\n widgets.HBox([widgets.Label('Dinami\u010dna matrika Aa:',border=3),\n Aw,\n widgets.Label('Vhodna matrika Ba:',border=3),\n Bw,\n widgets.Label('Referen\u010dna matrika Br:',border=3),\n Brw])])\nout = widgets.interactive_output(main_callback, {'Aw':Aw, 'Bw':Bw, 'Brw':Brw, 'X0w':X0w, 'K':Kw, 'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, \n 'u':u, 'period':period, 'selm':selm, 'selc':selc, 'selu':selu, 'DW':DW})\nout.layout.height = '640px'\ndisplay(out, alltogether)\n```\n\n\n Output(layout=Layout(height='640px'))\n\n\n\n VBox(children=(HBox(children=(Dropdown(options=('Nastavi K', 'Nastavi lastne vrednosti'), value='Nastavi K'), \u2026\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "45aeac29c77d827def00160b588980967680b7be", "size": 19921, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_si/examples/04/SS-31-Krmiljenje_povratne_zveze_stanj_zmogljivost.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", 
"max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_si/examples/04/SS-31-Krmiljenje_povratne_zveze_stanj_zmogljivost.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_si/examples/04/SS-31-Krmiljenje_povratne_zveze_stanj_zmogljivost.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 38.6815533981, "max_line_length": 381, "alphanum_fraction": 0.4848150193, "converted": true, "num_tokens": 4195, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3998116264369279, "lm_q2_score": 0.2200070895174993, "lm_q1q2_score": 0.0879613922876462}} {"text": "# Optimizaci\u00f3n media-varianza\n\n\n\n\nLa **teor\u00eda de portafolios** es una de los avances m\u00e1s importantes en las finanzas modernas e inversiones.\n- Apareci\u00f3 por primera vez en un [art\u00edculo corto](https://www.math.ust.hk/~maykwok/courses/ma362/07F/markowitz_JF.pdf) llamado \"Portfolio Selection\" en la edici\u00f3n de Marzo de 1952 de \"the Journal of Finance\".\n- Escrito por un desconocido estudiante de la Universidad de Chicago, llamado Harry Markowitz.\n- Escrito corto (s\u00f3lo 14 p\u00e1ginas), poco texto, f\u00e1cil de entender, muchas gr\u00e1ficas y unas cuantas referencias.\n- No se le prest\u00f3 mucha atenci\u00f3n hasta los 60s.\n\nFinalmente, este trabajo se convirti\u00f3 en una de las m\u00e1s grandes ideas en finanzas, y le di\u00f3 a Markowitz el Premio Noble casi 40 a\u00f1os despu\u00e9s.\n- Markowitz estaba incidentalmente interesado en los mercados de acciones e inversiones.\n- Estaba m\u00e1s bien interesado en entender c\u00f3mo las personas tomaban sus mejores decisiones cuando se enfrentaban con \"trade-offs\".\n- Principio de conservaci\u00f3n de la miseria. O, dir\u00edan los instructores de gimnasio: \"no pain, no gain\".\n- Si queremos m\u00e1s de algo, tenemos que perder en alg\u00fan otro lado.\n- El estudio de este fen\u00f3meno era el que le atra\u00eda a Markowitz.\n\nDe manera que nadie se hace rico poniendo todo su dinero en la cuenta de ahorros. La \u00fanica manera de esperar altos rendimientos es si se toma bastante riesgo. Sin embargo, riesgo significa tambi\u00e9n la posibilidad de perder, tanto como ganar.\n\nPero, \u00bfqu\u00e9 tanto riesgo es necesario?, y \u00bfhay alguna manera de minimizar el riesgo mientras se maximizan las ganancias?\n- Markowitz b\u00e1sicamente cambi\u00f3 la manera en que los inversionistas pensamos acerca de esas preguntas.\n- Alter\u00f3 completamente la pr\u00e1ctica de la administraci\u00f3n de inversiones.\n- Incluso el t\u00edtulo de su art\u00edculo era innovador. 
Portafolio: una colecci\u00f3n de activos en lugar de tener activos individuales.\n- En ese tiempo, un portafolio se refer\u00eda a una carpeta de cuero.\n- En el resto de este m\u00f3dulo, no ocuparemos de la parte anal\u00edtica de la teor\u00eda de portafolios, la cual puede ser resumida en dos frases:\n - No pain, no gain.\n - No ponga todo el blanquillo en una sola bolsa.\n \n\n**Objetivos:**\n- \u00bfQu\u00e9 es la l\u00ednea de asignaci\u00f3n de capital?\n- \u00bfQu\u00e9 es el radio de Sharpe?\n- \u00bfC\u00f3mo deber\u00edamos asignar nuestro capital entre un activo riesgoso y un activo libre de riesgo?\n\n*Referencia:*\n- Notas del curso \"Portfolio Selection and Risk Management\", Rice University, disponible en Coursera.\n___ \n\n## 1. L\u00ednea de asignaci\u00f3n de capital\n\n### 1.1. Motivaci\u00f3n\n\nEl proceso de construcci\u00f3n de un portafolio tiene entonces los siguientes dos pasos:\n1. Escoger un portafolio de activos riesgosos.\n2. Decidir qu\u00e9 tanto de tu riqueza invertir\u00e1s en el portafolio y qu\u00e9 tanto invertir\u00e1s en activos libres de riesgo.\n\nAl paso 2 lo llamamos **decisi\u00f3n de asignaci\u00f3n de activos**.\n\nPreguntas importantes:\n1. \u00bfQu\u00e9 es el portafolio \u00f3ptimo de activos riesgosos?\n - \u00bfCu\u00e1l es el mejor portafolio de activos riesgosos?\n - Es un portafolio eficiente en media-varianza.\n2. \u00bfQu\u00e9 es la distribuci\u00f3n \u00f3ptima de activos?\n - \u00bfC\u00f3mo deber\u00edamos distribuir nuestra riqueza entre el portafolo riesgoso \u00f3ptimo y el activo libre de riesgo?\n - Concepto de **l\u00ednea de asignaci\u00f3n de capital**.\n - Concepto de **radio de Sharpe**.\n\nDos suposiciones importantes:\n- Funciones de utilidad media-varianza.\n- Inversionista averso al riesgo.\n\nLa idea sorprendente que saldr\u00e1 de este an\u00e1lisis, es que cualquiera que sea la actitud del inversionista de cara al riesgo, el mejor portafolio de activos riesgosos es id\u00e9ntico para todos los inversionistas.\n\nLo que nos importar\u00e1 a cada uno de nosotros en particular, es simplemente la desici\u00f3n \u00f3ptima de asignaci\u00f3n de activos.\n___\n\n### 1.2. L\u00ednea de asignaci\u00f3n de capital\n\nSean:\n- $r_s$ el rendimiento del activo riesgoso,\n- $r_f$ el rendimiento libre de riesgo, y\n- $w$ la fracci\u00f3n invertida en el activo riesgoso.\n\n Realizar deducci\u00f3n de la l\u00ednea de asignaci\u00f3n de capital en el tablero.\n\n**Tres doritos despu\u00e9s...**\n\n#### L\u00ednea de asignaci\u00f3n de capital (LAC):\n$E[r_p]$ se relaciona con $\\sigma_p$ de manera af\u00edn. Es decir, mediante la ecuaci\u00f3n de una recta:\n\n$$E[r_p]=r_f+\\frac{E[r_s-r_f]}{\\sigma_s}\\sigma_p.$$\n\n- La pendiente de la LAC es el radio de Sharpe $\\frac{E[r_s-r_f]}{\\sigma_s}=\\frac{E[r_s]-r_f}{\\sigma_s}$,\n- el cual nos dice qu\u00e9 tanto rendimiento obtenemos por unidad de riesgo asumido en la tenencia del activo (portafolio) riesgoso.\n\nAhora, la pregunta es, \u00bfd\u00f3nde sobre esta l\u00ednea queremos estar?\n___\n\n### 1.3. 
Resolviendo para la asignaci\u00f3n \u00f3ptima de capital\n\nRecapitulando de la clase pasada, tenemos las curvas de indiferencia: **queremos estar en la curva de indiferencia m\u00e1s alta posible, que sea tangente a la LAC**.\n\n Ver en el tablero.\n\nAnal\u00edticamente, el problema es\n\n$$\\max_{w} \\quad E[U(r_p)]\\equiv\\max_{w} \\quad E[r_p]-\\frac{1}{2}\\gamma\\sigma_p^2,$$\n\ndonde los puntos $(\\sigma_p,E[r_p])$ se restringen a estar en la LAC, esto es $E[r_p]=r_f+\\frac{E[r_s-r_f]}{\\sigma_s}\\sigma_p$ y $\\sigma_p=w\\sigma_s$. Entonces el problema anterior se puede escribir de la siguiente manera:\n\n$$\\max_{w} \\quad r_f+wE[r_s-r_f]-\\frac{1}{2}\\gamma w^2\\sigma_s^2.$$\n\n Encontrar la $w$ que maximiza la anterior expresi\u00f3n en el tablero.\n\n**Tres doritos despu\u00e9s...**\n\nLa soluci\u00f3n es entonces:\n\n$$w^\\ast=\\frac{E[r_s-r_f]}{\\gamma\\sigma_s^2}.$$\n\nDe manera intuitiva:\n- $w^\\ast\\propto E[r_s-r_f]$: a m\u00e1s exceso de rendimiento que se obtenga del activo riesgoso, m\u00e1s querremos invertir en \u00e9l.\n- $w^\\ast\\propto \\frac{1}{\\gamma}$: mientras m\u00e1s averso al riesgo seas, menos querr\u00e1s invertir en el activo riesgoso.\n- $w^\\ast\\propto \\frac{1}{\\sigma_s^2}$: mientras m\u00e1s riesgoso sea el activo, menos querr\u00e1s invertir en \u00e9l.\n___\n\n## 2. Ejemplo de asignaci\u00f3n \u00f3ptima de capital: acciones y billetes de EU\n\nPongamos algunos n\u00fameros con algunos datos, para ilustrar la derivaci\u00f3n que acabamos de hacer.\n\nEn este caso, consideraremos:\n- **Portafolio riesgoso**: mercado de acciones de EU (representados en alg\u00fan \u00edndice de mercado como el S&P500).\n- **Activo libre de riesgo**: billetes del departamento de tesorer\u00eda de EU (T-bills).\n\nTenemos los siguientes datos:\n\n$$E[r_{US}]=11.9\\%,\\quad \\sigma_{US}=19.15\\%, \\quad r_f=1\\%.$$\n\nRecordamos que podemos escribir la expresi\u00f3n de la LAC como:\n\n\\begin{align}\nE[r_p]&=r_f+\\left[\\frac{E[r_{US}-r_f]}{\\sigma_{US}}\\right]\\sigma_p\\\\\n &=0.01+\\text{S.R.}\\sigma_p,\n\\end{align}\n\ndonde $\\text{S.R}=\\frac{0.119-0.01}{0.1915}\\approx0.569$ es el radio de Sharpe (\u00bfqu\u00e9 es lo que es esto?).\n\nGrafiquemos la LAC con estos datos reales:\n\n\n```python\n# Importamos librer\u00edas que vamos a utilizar\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n```\n\n\n```python\n# Datos\nErus, sus, rf = 0.119, 0.1915, 0.01\n# Radio de Sharpe para este activo\nSR = (Erus-rf)/sus\n# Vector de volatilidades del portafolio\nsp = np.linspace(0, 0.5, 100)\n# LAC\nErp = rf+SR*sp\n```\n\n\n```python\n# Gr\u00e1fica\nplt.figure(figsize=(10,6))\nplt.plot(sp, Erp, lw='3', label='LAC')\nplt.plot(0, rf, 'o', ms=10, label='Libre de riesgo')\nplt.plot(sus, Erus, 'o', ms=10, label='Portafolio riesgoso')\nplt.axhline(y=Erus, color='gray')\nplt.axvline(x=sus, color='gray')\nplt.axhline(y=0, color='k')\nplt.axvline(x=0, color='k')\nplt.grid()\nplt.xlabel('Volatility $\\sigma_p$')\nplt.ylabel('Expected return $E[r_p]$')\nplt.legend(loc='best')\n```\n\nBueno, y \u00bfen qu\u00e9 punto de esta l\u00ednea querr\u00edamos estar?\n- Pues ya vimos que depende de tus preferencias.\n- En particular, de tu actitud de cara al riesgo, medido por tu coeficiente de aversi\u00f3n al riesgo.\n\nSoluci\u00f3n al problema de asignaci\u00f3n \u00f3ptima de capital:\n\n$$\\max_{w} \\quad E[U(r_p)]$$\n\n$$w^\\ast=\\frac{E[r_s-r_f]}{\\gamma\\sigma_s^2}$$\n\nDado que ya tenemos datos, podemos intentar para varios coeficientes de aversi\u00f3n 
al riesgo:\n\n\n```python\n# importar pandas\nimport pandas as pd\n```\n\n\n```python\n# Crear un DataFrame con los pesos, rendimiento\n# esperado y volatilidad del portafolio \u00f3ptimo \n# entre los activos riesgoso y libre de riesgo\n# cuyo \u00edndice sean los coeficientes de aversi\u00f3n\n# al riesgo del 1 al 10 (enteros)\ng = np.arange(1, 11)\nwopt = (Erus-rf)/(g*sus**2)\nsp = wopt*sus\nErp = rf+(Erus-rf)/sus*sp\ndata = pd.DataFrame(index=g, columns=['$w_{opt}$', '$E[r_p]$', '$\\sigma_p$'])\ndata.index.name = '$\\gamma$'\ndata['$w_{opt}$'] = wopt\ndata['$E[r_p]$'] = Erp\ndata['$\\sigma_p$'] = sp\ndata\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
| $\gamma$ | $w_{opt}$ | $E[r_p]$ | $\sigma_p$ |
|---:|---:|---:|---:|
| 1 | 2.972275 | 0.333978 | 0.569191 |
| 2 | 1.486137 | 0.171989 | 0.284595 |
| 3 | 0.990758 | 0.117993 | 0.189730 |
| 4 | 0.743069 | 0.090994 | 0.142298 |
| 5 | 0.594455 | 0.074796 | 0.113838 |
| 6 | 0.495379 | 0.063996 | 0.094865 |
| 7 | 0.424611 | 0.056283 | 0.081313 |
| 8 | 0.371534 | 0.050497 | 0.071149 |
| 9 | 0.330253 | 0.045998 | 0.063243 |
| 10 | 0.297227 | 0.042398 | 0.056919 |
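Como comprobación rápida de la primera fila de la tabla (no incluida en el notebook original): para $\gamma=1$,

$$w^\ast=\frac{0.119-0.01}{1\cdot 0.1915^2}\approx 2.9723,\qquad \sigma_p=w^\ast\sigma_s\approx 0.5692,\qquad E[r_p]=r_f+\text{S.R.}\,\sigma_p=0.01+0.5692\cdot 0.5692\approx 0.334,$$

que coincide con los valores mostrados (en esta fila $\text{S.R.}$ y $\sigma_p$ coinciden numéricamente por casualidad).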
\n
\n\n\n\n\u00bfC\u00f3mo se interpreta $w^\\ast>1$?\n- Cuando $01$, tenemos $1-w^\\ast<0$. Lo anterior implica una posici\u00f3n corta en el activo libre de riesgo (suponiendo que se puede) y una posici\u00f3n larga (de m\u00e1s del 100%) en el mercado de activos: apalancamiento.\n\n# Anuncios parroquiales.\n\n## 1. Quiz la siguiente clase.\n## 2. [Calificaciones](https://docs.google.com/spreadsheets/d/18-SDXpkuN6LULO16_1VPHPksrP-QCEpj7OLxiaSpQ3U/edit?usp=sharing)\n\n\n\n
\nCreated with Jupyter by Esteban Jim\u00e9nez Rodr\u00edguez.\n
\n", "meta": {"hexsha": "b3e81c0ac9d004a06bfe74a477cfd65b512eeac1", "size": 46801, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Modulo3/Clase13_OptimizacionMediaVarianza.ipynb", "max_stars_repo_name": "PiedrasAyala95/PorInv2018-2", "max_stars_repo_head_hexsha": "8f5eb1648989728f21d01720c85827d9478211ab", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2018-08-27T16:54:10.000Z", "max_stars_repo_stars_event_max_datetime": "2018-08-27T16:54:10.000Z", "max_issues_repo_path": "Modulo3/Clase13_OptimizacionMediaVarianza.ipynb", "max_issues_repo_name": "PiedrasAyala95/PorInv2018-2", "max_issues_repo_head_hexsha": "8f5eb1648989728f21d01720c85827d9478211ab", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Modulo3/Clase13_OptimizacionMediaVarianza.ipynb", "max_forks_repo_name": "PiedrasAyala95/PorInv2018-2", "max_forks_repo_head_hexsha": "8f5eb1648989728f21d01720c85827d9478211ab", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 88.1374764595, "max_line_length": 29000, "alphanum_fraction": 0.8026751565, "converted": true, "num_tokens": 3491, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49218813572079556, "lm_q2_score": 0.17781087383497526, "lm_q1q2_score": 0.08751640250372206}} {"text": "```python\nfrom IPython.core.display import HTML, Image\ncss_file = 'style.css'\nHTML(open(css_file, 'r').read())\n```\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n```python\nfrom sympy import init_printing, Matrix, symbols\nfrom IPython.display import Image\nfrom warnings import filterwarnings\n```\n\n\n```python\ninit_printing(use_latex = 'mathjax')\nfilterwarnings('ignore')\n```\n\n# Projections onto subspaces\n\n## Geometry in the plane\n\n* Projection of a vector onto another (in the plane)\n* Consider the orthogonal projection of **b** onto **a**\n\n\n```python\nImage(filename = 'Orthogonal projection in the plane.png')\n```\n\n* Note that **p** falls on a line, which is a subspace of the plane ℝ2\n* Remember from the previous lecture that orthogonal subspaces have A**x** = **0**\n* Note that **p** is some scalar multiple of **a**\n* With **a** perpendicular to **e** and **e** = **b** - x**a**\n* Thus we have **a**T(**b** - x**a**) = 0 and x**a**T**a** = **a**T**b**\n* Since **a**T**a** is a number we can simplify\n$$ x=\\frac { { \\underline { a } }^{ T }\\underline { b } }{ { \\underline { a } }^{ T }\\underline { a } } $$\n\n* We also have **p** = **a**x\n$$ \\underline { p } =\\underline { a } x=\\underline { a } \\frac { { \\underline { a } }^{ T }\\underline { b } }{ { \\underline { a } }^{ T }\\underline { a } } $$\n\n* This equation is helpful\n * Doubling (or any other scalar multiple of) **b** doubles (or scalar multiplies) **p**\n * Doubling (or scalar multiple of) **a** has no effect\n\n* Eventually we are looking for proj**p** = P**b**, where P is the projection matrix\n$$ \\underline { p } =P\\underline { b } \\\\ P=\\frac { 1 }{ { \\underline { a } }^{ T }\\underline { a } } \\underline { a } { \\underline { a } }^{ T } $$\n\n* Properties of the projection matrix P\n * The columnspace of P (C(P)) is the line which contains **a**\n * The rank is 1, rank(P) = 1\n * P is symmetrix, i.e. 
PT = P\n * Applying the projection matrix a second time (i.e. P2) nothing changes, thus P2 = P\n\n## Why project?\n\n(projecting onto more than a one-dimensional line)\n\n* Because A**x** = **b** may not have a solution\n * **b** may not be in the columnspace\n * May have more equations than unknowns\n* Solve for the closest vector in the columnspace\n * This is done by solving for **p** instead, where **p** is the projection of **b** onto the columnsapce of A\n$$ A\\hat { x } =\\underline { p } $$\n\n* Now we have to get **b** orthogonally project (as **p**) onto the column(sub)space\n* This is done by calculating two bases vectors for the plane that contains **p**, i.e. **a**1 and **a**2\n\n* Going way back to the graph up top we note that **e** is perpendicular to the plane\n* So, we have:\n$$ A\\hat { x } =\\underline { p } $$\n* We know that both **a**1 and **a**2 is perpendicular to **e**, so:\n$$ { a }_{ 1 }^{ T }\\underline { e } =0;\\quad { a }_{ 2 }^{ T }\\underline { e } =0\\\\ \\because \\quad \\underline { e } =\\underline { b } -\\underline { p } \\\\ \\because \\quad \\underline { p } =A\\hat { x } \\\\ { a }_{ 1 }^{ T }\\left( \\underline { b } -A\\hat { x } \\right) =0;\\quad { a }_{ 2 }^{ T }\\left( \\underline { b } -A\\hat { x } \\right) =0 $$\n\n* We know that from ...\n$$ \\begin{bmatrix} { a }_{ 1 }^{ T } \\\\ { a }_{ 2 }^{ T } \\end{bmatrix}\\left( \\underline { b } -A\\hat { x } \\right) =\\begin{bmatrix} 0 \\\\ 0 \\end{bmatrix}\\\\ { A }^{ T }\\left( \\underline { b } -A\\hat { x } \\right) =0 $$\n* ... **e** must be in the nullspace of AT\n* Which is right because from the previous lecture the nullspace of AT is orthogonal to the columnspace of A\n\n* Simplifying the last equations we have\n$$ {A}^{T}{A} \\hat{x} = {A}^{T}{b} $$\n\n* Just look back at the plane example in ℝ2 example we started with\n* Simplifying things back to a column vector **a** instead of a matrix subspace A in this last equation does give us what we had in ℝ2\n\n* Solving this we have\n$$ \\hat { x } ={ \\left( { A }^{ T }A \\right) }^{ -1 }{ A }^{ T }\\underline { b } $$\n\n* Which leaves us with\n$$ \\underline { p } =A\\hat { x } \\\\ \\underline { p } =A{ \\left( { A }^{ T }A \\right) }^{ -1 }{ A }^{ T }\\underline { b } $$\n\n* Making the projection matrix P\n$$ P=A{ \\left( { A }^{ T }A \\right) }^{ -1 }{ A }^{ T } $$\n\n* Just note that for a square invertible matrix A, P is the identity matrix\n* Most of the time A is not square (and thus invertible) so we have to leave the equation as it is\n* Also, note that PT = P and P2 = P\n\n## Applications\n\n### Least squares\n\n* Given a set of data points in two dimensions, i.e. with variables (*t*,*b*)\n* We need to fit them onto the best line\n* So, as an example consider the points (1,1), (2,2), (3,2)\n\n* A best line in this instance means a straight line in the form\n$$ {b}={C}+{D}{t} $$\n* Using the three points above we get three equations\n$$ {C}+{D}=1 \\\\ {C}+{2D} = 2 \\\\ {C}+{3D}=2 $$\n\n* If the line goes through all points, we would give a solution\n* Instead we have the following\n$$ \\begin{bmatrix} 1 & 1 \\\\ 1 & 2 \\\\ 1 & 3 \\end{bmatrix}\\begin{bmatrix} C \\\\ D \\end{bmatrix}=\\begin{bmatrix} 1 \\\\ 2 \\\\ 2 \\end{bmatrix} $$\n* Three equation, two unknowns, no solution, **so** solve ...\n$$ { A }^{ T }A\\hat { x } ={ A }^{ T }b $$\n* ... 
which for the solution is\n$$ \\hat { x } ={ \\left( { A }^{ T }A \\right) }^{ -1 }{ A }^{ T }b $$\n\n\n```python\nA = Matrix([[1, 1], [1, 2], [1, 3]])\nA\n```\n\n\n\n\n$$\\left[\\begin{matrix}1 & 1\\\\1 & 2\\\\1 & 3\\end{matrix}\\right]$$\n\n\n\n\n```python\nb = Matrix([1, 2, 2])\nb\n```\n\n\n\n\n$$\\left[\\begin{matrix}1\\\\2\\\\2\\end{matrix}\\right]$$\n\n\n\n\n```python\n(A.transpose() * A).inv() * A.transpose() * b\n```\n\n\n\n\n$$\\left[\\begin{matrix}\\frac{2}{3}\\\\\\frac{1}{2}\\end{matrix}\\right]$$\n\n\n\n* Thus, the solution is:\n$$ b=\\frac { 2 }{ 3 } +\\frac { 1 }{ 2 } t $$\n\n\n```python\n\n```\n", "meta": {"hexsha": "5faadba48dcd7c34506408bd9a02876726b9e256", "size": 26716, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_15_Projection_onto_subspaces.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_15_Projection_onto_subspaces.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Linear Algebra/0.0 MIT-18.06 - Jupyter/Lecture_15_Projection_onto_subspaces.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 53.219123506, "max_line_length": 12416, "alphanum_fraction": 0.7032115586, "converted": true, "num_tokens": 2611, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4610167793123159, "lm_q2_score": 0.18952109361853836, "lm_q1q2_score": 0.08737240419176645}} {"text": "```python\n%matplotlib inline\n```\n\n\nLink Prediction using Graph Neural Networks\n===========================================\n\nIn the [introduction](1_introduction.ipynb), you have already learned\nthe basic workflow of using GNNs for node classification,\ni.e.\u00a0predicting the category of a node in a graph. This tutorial will\nteach you how to train a GNN for link prediction, i.e.\u00a0predicting the\nexistence of an edge between two arbitrary nodes in a graph.\n\nBy the end of this tutorial you will be able to\n\n- Build a GNN-based link prediction model.\n- Train and evaluate the model on a small DGL-provided dataset.\n\n\n\n```python\nimport dgl\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport itertools\nimport numpy as np\nimport scipy.sparse as sp\n```\n\n Using backend: pytorch\n\n\nOverview of Link Prediction with GNN\n------------------------------------\n\nMany applications such as social recommendation, item recommendation,\nknowledge graph completion, etc., can be formulated as link prediction,\nwhich predicts whether an edge exists between two particular nodes. 
This\ntutorial shows an example of predicting whether a citation relationship,\neither citing or being cited, between two papers exists in a citation\nnetwork.\n\nThis tutorial formulates the link prediction problem as a binary classification\nproblem as follows:\n\n- Treat the edges in the graph as *positive examples*.\n- Sample a number of non-existent edges (i.e.\u00a0node pairs with no edges\n between them) as *negative* examples.\n- Divide the positive examples and negative examples into a training\n set and a test set.\n- Evaluate the model with any binary classification metric such as Area\n Under Curve (AUC).\n\n
\n \n**Note**: The practice comes from\n [SEAL](https://papers.nips.cc/paper/2018/file/53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf),\n although the model here does not use their idea of node labeling.\n\n
\n\nIn some domains such as large-scale recommender systems or information\nretrieval, you may favor metrics that emphasize good performance of\ntop-K predictions. In these cases you may want to consider other metrics\nsuch as mean average precision, and use other negative sampling methods,\nwhich are beyond the scope of this tutorial.\n\nLoading graph and features\n--------------------------\n\nFollowing the [introduction](1_introduction.ipynb), this tutorial\nfirst loads the Cora dataset.\n\n\n\n\n\n```python\nimport dgl.data\n\ndataset = dgl.data.CoraGraphDataset()\ng = dataset[0]\n```\n\n NumNodes: 2708\n NumEdges: 10556\n NumFeats: 1433\n NumClasses: 7\n NumTrainingSamples: 140\n NumValidationSamples: 500\n NumTestSamples: 1000\n Done loading data from cached files.\n\n\nPrepare training and testing sets\n---------------------------------\n\nThis tutorial randomly picks 10% of the edges for positive examples in\nthe test set, and leave the rest for the training set. It then samples\nthe same number of edges for negative examples in both sets.\n\n\n\n\n\n```python\n# Split edge set for training and testing\nu, v = g.edges()\n\neids = np.arange(g.number_of_edges())\neids = np.random.permutation(eids)\ntest_size = int(len(eids) * 0.1)\ntrain_size = g.number_of_edges() - test_size\ntest_pos_u, test_pos_v = u[eids[:test_size]], v[eids[:test_size]]\ntrain_pos_u, train_pos_v = u[eids[test_size:]], v[eids[test_size:]]\n\n# Find all negative edges and split them for training and testing\nadj = sp.coo_matrix((np.ones(len(u)), (u.numpy(), v.numpy())))\nadj_neg = 1 - adj.todense() - np.eye(g.number_of_nodes())\nneg_u, neg_v = np.where(adj_neg != 0)\n\nneg_eids = np.random.choice(len(neg_u), g.number_of_edges() // 2)\ntest_neg_u, test_neg_v = neg_u[neg_eids[:test_size]], neg_v[neg_eids[:test_size]]\ntrain_neg_u, train_neg_v = neg_u[neg_eids[test_size:]], neg_v[neg_eids[test_size:]]\n```\n\nWhen training, you will need to remove the edges in the test set from\nthe original graph. You can do this via ``dgl.remove_edges``.\n\n
\n \n**Note**: ``dgl.remove_edges`` works by creating a subgraph from the\n original graph, resulting in a copy and therefore could be slow for\n large graphs. If so, you could save the training and test graph to\n disk, as you would do for preprocessing.\n\n
\n\n\n\n\n\n```python\ntrain_g = dgl.remove_edges(g, eids[:test_size])\n```\n\nDefine a GraphSAGE model\n------------------------\n\nThis tutorial builds a model consisting of two\n[GraphSAGE](https://arxiv.org/abs/1706.02216) layers, each computes\nnew node representations by averaging neighbor information. DGL provides\n``dgl.nn.SAGEConv`` that conveniently creates a GraphSAGE layer.\n\n\n\n\n\n```python\nfrom dgl.nn import SAGEConv\n\n# ----------- 2. create model -------------- #\n# build a two-layer GraphSAGE model\nclass GraphSAGE(nn.Module):\n def __init__(self, in_feats, h_feats):\n super(GraphSAGE, self).__init__()\n self.conv1 = SAGEConv(in_feats, h_feats, 'mean')\n self.conv2 = SAGEConv(h_feats, h_feats, 'mean')\n \n def forward(self, g, in_feat):\n h = self.conv1(g, in_feat)\n h = F.relu(h)\n h = self.conv2(g, h)\n return h\n```\n\nThe model then predicts the probability of existence of an edge by\ncomputing a score between the representations of both incident nodes\nwith a function (e.g.\u00a0an MLP or a dot product), which you will see in\nthe next section.\n\n\\begin{align}\\hat{y}_{u\\sim v} = f(h_u, h_v)\\end{align}\n\n\n\n\nPositive graph, negative graph, and ``apply_edges``\n---------------------------------------------------\n\nIn previous tutorials you have learned how to compute node\nrepresentations with a GNN. However, link prediction requires you to\ncompute representation of *pairs of nodes*.\n\nDGL recommends you to treat the pairs of nodes as another graph, since\nyou can describe a pair of nodes with an edge. In link prediction, you\nwill have a *positive graph* consisting of all the positive examples as\nedges, and a *negative graph* consisting of all the negative examples.\nThe *positive graph* and the *negative graph* will contain the same set\nof nodes as the original graph. This makes it easier to pass node\nfeatures among multiple graphs for computation. As you will see later,\nyou can directly fed the node representations computed on the entire\ngraph to the positive and the negative graphs for computing pair-wise\nscores.\n\nThe following code constructs the positive graph and the negative graph\nfor the training set and the test set respectively.\n\n\n\n\n\n```python\ntrain_pos_g = dgl.graph((train_pos_u, train_pos_v), num_nodes=g.number_of_nodes())\ntrain_neg_g = dgl.graph((train_neg_u, train_neg_v), num_nodes=g.number_of_nodes())\n\ntest_pos_g = dgl.graph((test_pos_u, test_pos_v), num_nodes=g.number_of_nodes())\ntest_neg_g = dgl.graph((test_neg_u, test_neg_v), num_nodes=g.number_of_nodes())\n```\n\nThe benefit of treating the pairs of nodes as a graph is that you can\nuse the ``DGLGraph.apply_edges`` method, which conveniently computes new\nedge features based on the incident nodes\u2019 features and the original\nedge features (if applicable).\n\nDGL provides a set of optimized builtin functions to compute new\nedge features based on the original node/edge features. 
For example,\n``dgl.function.u_dot_v`` computes a dot product of the incident nodes\u2019\nrepresentations for each edge.\n\n\n\n\n\n```python\nimport dgl.function as fn\n\nclass DotPredictor(nn.Module):\n def forward(self, g, h):\n with g.local_scope():\n g.ndata['h'] = h\n # Compute a new edge feature named 'score' by a dot-product between the\n # source node feature 'h' and destination node feature 'h'.\n g.apply_edges(fn.u_dot_v('h', 'h', 'score'))\n # u_dot_v returns a 1-element vector for each edge so you need to squeeze it.\n return g.edata['score'][:, 0]\n```\n\nYou can also write your own function if it is complex.\nFor instance, the following module produces a scalar score on each edge\nby concatenating the incident nodes\u2019 features and passing it to an MLP.\n\n\n\n\n\n```python\nclass MLPPredictor(nn.Module):\n def __init__(self, h_feats):\n super().__init__()\n self.W1 = nn.Linear(h_feats * 2, h_feats)\n self.W2 = nn.Linear(h_feats, 1)\n\n def apply_edges(self, edges):\n \"\"\"\n Computes a scalar score for each edge of the given graph.\n\n Parameters\n ----------\n edges :\n Has three members ``src``, ``dst`` and ``data``, each of\n which is a dictionary representing the features of the\n source nodes, the destination nodes, and the edges\n themselves.\n\n Returns\n -------\n dict\n A dictionary of new edge features.\n \"\"\"\n h = torch.cat([edges.src['h'], edges.dst['h']], 1)\n return {'score': self.W2(F.relu(self.W1(h))).squeeze(1)}\n\n def forward(self, g, h):\n with g.local_scope():\n g.ndata['h'] = h\n g.apply_edges(self.apply_edges)\n return g.edata['score']\n```\n\n
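As a quick sanity check (an illustrative addition, not part of the original tutorial), you can confirm on a tiny hand-built graph that the builtin ``u_dot_v`` score produced by ``DotPredictor`` matches a manual dot product of the endpoint features; the toy graph and random features below are made up purely for this check.\n\n\n```python\n# Illustrative sanity check: the builtin u_dot_v score should equal a manual\n# dot product of the source and destination node features for each edge.\ntoy_g = dgl.graph(([0, 1, 2], [1, 2, 0]), num_nodes=3)\ntoy_h = torch.randn(3, 16)\n\nbuiltin_scores = DotPredictor()(toy_g, toy_h)\nmanual_scores = (toy_h[[0, 1, 2]] * toy_h[[1, 2, 0]]).sum(dim=1)\n\nprint(torch.allclose(builtin_scores, manual_scores))  # expected: True\n```\n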
\n \n**Note**: The builtin functions are optimized for both speed and memory.\n We recommend using builtin functions whenever possible.\n\n
\n\n
\n \n**Note**: If you have read the [message passing\n tutorial](3_message_passing.ipynb), you will notice that the\n argument ``apply_edges`` takes has exactly the same form as a message\n function in ``update_all``.\n\n
\n\n\n\n\nTraining loop\n-------------\n\nAfter you defined the node representation computation and the edge score\ncomputation, you can go ahead and define the overall model, loss\nfunction, and evaluation metric.\n\nThe loss function is simply binary cross entropy loss.\n\n\\begin{align}\\mathcal{L} = -\\sum_{u\\sim v\\in \\mathcal{D}}\\left( y_{u\\sim v}\\log(\\hat{y}_{u\\sim v}) + (1-y_{u\\sim v})\\log(1-\\hat{y}_{u\\sim v})) \\right)\\end{align}\n\nThe evaluation metric in this tutorial is AUC.\n\n\n\n\n\n```python\nmodel = GraphSAGE(train_g.ndata['feat'].shape[1], 16)\n# You can replace DotPredictor with MLPPredictor.\n#pred = MLPPredictor(16)\npred = DotPredictor()\n\ndef compute_loss(pos_score, neg_score):\n scores = torch.cat([pos_score, neg_score])\n labels = torch.cat([torch.ones(pos_score.shape[0]), torch.zeros(neg_score.shape[0])])\n return F.binary_cross_entropy_with_logits(scores, labels)\n\ndef compute_auc(pos_score, neg_score):\n scores = torch.cat([pos_score, neg_score]).numpy()\n labels = torch.cat(\n [torch.ones(pos_score.shape[0]), torch.zeros(neg_score.shape[0])]).numpy()\n return roc_auc_score(labels, scores)\n```\n\nThe training loop goes as follows:\n\n
\n \n**Note**: This tutorial does not include evaluation on a validation\n set. In practice you should save and evaluate the best model based on\n performance on the validation set.\n\n
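A minimal sketch of that practice (an illustrative addition, not part of the original tutorial) is a small helper that scores a held-out edge split with the current model; the names ``validate``, ``val_pos_g`` and ``val_neg_g`` are hypothetical, and such validation graphs would have to be carved out of the training edges in the same way as ``train_pos_g`` and ``train_neg_g`` above.\n\n\n```python\nfrom sklearn.metrics import roc_auc_score\n\ndef validate(model, predictor, graph, features, pos_graph, neg_graph):\n    # Hypothetical helper: AUC of the current model on a held-out edge split.\n    # pos_graph / neg_graph are validation graphs built like train_pos_g above.\n    with torch.no_grad():\n        h = model(graph, features)\n        return compute_auc(predictor(pos_graph, h), predictor(neg_graph, h))\n\n# Inside the epoch loop you could then keep the best parameters, e.g.\n#     auc = validate(model, pred, train_g, train_g.ndata['feat'], val_pos_g, val_neg_g)\n#     if auc > best_auc:\n#         best_auc, best_state = auc, copy.deepcopy(model.state_dict())  # needs `import copy`\n```\n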
\n\n\n\n\n\n```python\n# ----------- 3. set up loss and optimizer -------------- #\n# in this case, loss will in training loop\noptimizer = torch.optim.Adam(itertools.chain(model.parameters(), pred.parameters()), lr=0.01)\n\n# ----------- 4. training -------------------------------- #\nall_logits = []\nfor e in range(100):\n # forward\n h = model(train_g, train_g.ndata['feat'])\n pos_score = pred(train_pos_g, h)\n neg_score = pred(train_neg_g, h)\n loss = compute_loss(pos_score, neg_score)\n \n # backward\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \n if e % 5 == 0:\n print('In epoch {}, loss: {}'.format(e, loss))\n\n# ----------- 5. check results ------------------------ #\nfrom sklearn.metrics import roc_auc_score\nwith torch.no_grad():\n pos_score = pred(test_pos_g, h)\n neg_score = pred(test_neg_g, h)\n print('AUC', compute_auc(pos_score, neg_score))\n```\n\n In epoch 0, loss: 0.6184065937995911\n In epoch 5, loss: 0.6056914925575256\n In epoch 10, loss: 0.5802127122879028\n In epoch 15, loss: 0.5393418073654175\n In epoch 20, loss: 0.48020118474960327\n In epoch 25, loss: 0.4126580059528351\n In epoch 30, loss: 0.36391153931617737\n In epoch 35, loss: 0.32281294465065\n In epoch 40, loss: 0.2892597019672394\n In epoch 45, loss: 0.2589336931705475\n In epoch 50, loss: 0.23045368492603302\n In epoch 55, loss: 0.2066962718963623\n In epoch 60, loss: 0.18129807710647583\n In epoch 65, loss: 0.1579950898885727\n In epoch 70, loss: 0.1354110985994339\n In epoch 75, loss: 0.11393584311008453\n In epoch 80, loss: 0.0939987450838089\n In epoch 85, loss: 0.07589612156152725\n In epoch 90, loss: 0.0597052127122879\n In epoch 95, loss: 0.04581739008426666\n AUC 0.8605017856741763\n\n", "meta": {"hexsha": "0a94df041f0fd74e09154b4e86d9d5e69d993249", "size": 18259, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "4_link_predict.ipynb", "max_stars_repo_name": "Geniussh/WSDM21-Hands-on-Tutorial", "max_stars_repo_head_hexsha": "5343d54376940ea7b1e608aa110b922f2a5a0ce8", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 13, "max_stars_repo_stars_event_min_datetime": "2021-03-08T07:27:01.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-09T10:17:00.000Z", "max_issues_repo_path": "4_link_predict.ipynb", "max_issues_repo_name": "Geniussh/WSDM21-Hands-on-Tutorial", "max_issues_repo_head_hexsha": "5343d54376940ea7b1e608aa110b922f2a5a0ce8", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "4_link_predict.ipynb", "max_forks_repo_name": "Geniussh/WSDM21-Hands-on-Tutorial", "max_forks_repo_head_hexsha": "5343d54376940ea7b1e608aa110b922f2a5a0ce8", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 9, "max_forks_repo_forks_event_min_datetime": "2021-03-04T08:19:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-23T20:28:44.000Z", "avg_line_length": 33.4413919414, "max_line_length": 187, "alphanum_fraction": 0.5619146722, "converted": true, "num_tokens": 3194, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.48438008427698437, "lm_q2_score": 0.18010667068774938, "lm_q1q2_score": 0.08724008432657912}} {"text": "# EOSC 576 Problems\n\n\n```python\n__author__ = 'Yingkai (Kyle) Sha'\n__email__ = 'yingkai@eos.ubc.ca'\n```\n\n\n```python\nfrom IPython.core.display import HTML\nHTML(open(\"../custom.css\", \"r\").read())\n```\n\n\n\n\n\n\n\n\n\n\n```python\nimport numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\n% matplotlib inline\n```\n\n#Content\n 1. [**Chapter 4 - Organic Matter Production**](#Chapter-4---Organic-Matter-Production)\n 1. [**Chapter 10 - Carbon Cycle, CO2, Climate**](#Chapter-10---Carbon-Cycle,-CO2,-Climate)\n\n# Chapter 4 - Organic Matter Production\n\n**4.10** Assume the composition of organic matter is $(CH_2)_{30}(CH_2O)_{76}(NH_3)_{16}(H_3PO_4)$\n\n(a) Calculate the C:N:P stoichiometric ratio of this organic matter\n\n*Ans:*\n$$\nC:N:P = 106:16:1 \n$$\n\n(b) Calculate the amount of $O_2$ that would be required to oxidize this material if $H_3PO_4$, $HNO_3$, $H_2O$, and $CO_2$ are the oxidation products of phosphorus, nitrogen, hydrogen, and carbon, respectively. Give the full equation for the oxidation reaction. ...\n\n*Ans:* Since organic matter has $C:N:P = 106:16:1$, 1mol organic reactant finally becomes 106mol $CO_2$, 16mol $HNO_3$, 1mol $H_3PO_4$. Then we add $H_2O$ to balance hydrogen, we will get: \n $$\n (CH_2)_{30}(CH_2O)_{76}(NH_3)_{16}(H_3PO_4) + 193O_2 \\longrightarrow 106CO_2 + 16HNO_3 + H_3PO_4 + 122H_2O\n $$\n\n(c) Suppose water upwelling to the surface has a total carbon concentration of $2000\\ mmol/m^3$, an oxygen concentration of $160\\ mmol/m^3$, a nitrate concentration of $5\\ mmol/m^3$, and a phosphate concentration of $1\\ mmol/m^3$. \n \n * Which of these nutrients is likely to limit production if the light supply is adequate and there is NO nitrogen fixation? \n * Which of the elements will limit production if nitrogen fixation is allowed? \n * In each case, calculate the concentration of the remaining nutrients after the limiting nutrient is exhausted. \n \n *Ans:* photosynthesis consume nutrients in the ratio of $C:N:P = 106:16:1$. So if there is no nitrogen fixation, nitrate is the main source of $N$ and it is the limiting nutrient, when nitrate runs out, we still have $1 - 5/16 = 0.6875\\ mmol/m^3$ phosphate.\n \n *Ans:* If nitrogen fixation is allowed, then atmospheric bi-nitrogen could also be a source of $N$ and this time phosphate is the limiting nutrient. The concentration of the remaining nutrients depends on the intensity of nitrogen fixation relative to photosynthesis.\n \n \n \n\n\n\n\n**4.11** Nitrate may serve as the terminal electron acceptor (i.e., oxidant) for the remineralization\nof organic matter if oxygen is not available. The nitrate loses its oxygen and is converted to\ndissolved N2, in the process of which it gains electrons. This is referred to as *denitrification*.\n\n(a) Write a balanced equation for the oxidation of the organic matter in problem 4.10 by\ndenitrification. Assume that the organic matter reacts with nitrate in the form $HNO_3$, and\nthat all the nitrogen present in both the organic matter and nitrate is converted to $N_2$. All\nother oxidation products are as in problem 4.10 (b)\n\n*Ans:*\n$$\n(CH_2)_{30}(CH_2O)_{76}(NH_3)_{16}(H_3PO_4) + 107HNO_3 \\longrightarrow 106CO_2 + 61.5N_2 + H_3PO_4 + 185H_2O\n$$\n\n(b) What fraction of the $N_2$ in (a) comes from nitrate?\n\n*Ans:*\n$$\n107/(61.5*2) = 0.8699\n$$\n\n**4.14** ... 
In this problem, you are to estimate the diurnally (24 hr) averaged light supply function $\\gamma_P(I_0)$ at the surface of the ocean, which we will define as\n$\\left<\\gamma_P(I_0)\\right>$. Assume that $I_n = 1000\\ W/m^2$, and that the diurnal variation of the irradiance function $f(\\tau)$ is given as a triangular function that increases linearly from 0 at 6 AM to 1 at noon, then back to 0 at 6 PM. Do this in two steps:\n\n(a) Starting with (4.2.13), give an equation for the surface irradiance, $I_0$ for the first 6 hours\nof daylight in terms of the time $t$ in hours, with $t$ set to 0 at daybreak. Assume that the\nfraction of photosynthetically active radiation (PAR) is $f_{PAR} = 0.4$ and that the cloud cover\ncoefficient $f(C) = 0.8$.\n\n*Ans*: Eq. (4.2.13) is\n\n$$\n I_0 = f_{PAR}\\cdot f(C) \\cdot f(\\tau) \\cdot I_n\n$$\n\nbased on the knowns, we have ($t$ in hours):\n\n\\begin{equation}\n I_0 = \\left\\{\n \\begin{array}{c}\n 320 \\times \\left(\\frac{1}{6}t-1\\right) \\qquad 6 < t < 12 \\\\\n 320 \\times \\left( 3-\\frac{1}{6}t\\right) \\qquad 12 < t < 18 \\\\\n 0 \\qquad 0 < t < 6, \\qquad 18 < t <24\n \\end{array}\n \\right.\n \\end{equation}\n\n(b) Calculate $\\left<\\gamma_P(I_0)\\right>$. Use the `Platt and Jassby` formulation (4.2.16). To calculate $I_k$ from\n(4.2.17), use for $V_P$ the typical $V_{max} = 1.4$ given in the text, and the representative value for $\\alpha$ of $0.025$. Solve the problem analytically by stepwise integration over the 24 hours of the day.\n\n*Ans*:\nBased on Eq. (4.2.17)\n$$\n I_k = \\frac{V_P}{\\alpha} = 56\\ W/m^2\n$$\n\nThen based on Eq. (4.2.16)\n$$\n \\gamma_P(I_0) = \\frac{I_0}{\\sqrt{I_k^2 + I_0^2}}\n$$\nSo we have:\n$$\n \\left<\\gamma_P(I_0)\\right> = \\frac1{24}\\int_0^{24}{\\gamma_P(I_0)dt}\n$$\nHere we solve it numerically:\n\n\n```python\nt = np.linspace(0, 24, 100)\nhit1 = (t>6)&(t<=12)\nhit2 = (t>12)&(t<=18)\nI0 = np.zeros(np.size(t))\nI0[hit1] = 320 * ((1./6) * t[hit1] - 1)\nI0[hit2] = 320 * (3 - (1./6) * t[hit2])\nIk = 56\nrI0 = I0/np.sqrt(Ik**2 + I0**2)\n```\n\n\n```python\nfig=plt.figure(figsize=(11, 5))\nax1=plt.subplot2grid((1, 2), (0, 0), colspan=1, rowspan=1)\nax2=plt.subplot2grid((1, 2), (0, 1), colspan=1, rowspan=1)\nax1.plot(t, I0, 'k-', linewidth=3); ax1.grid(); \nax1.set_xlabel('t in hours', fontweight=12)\nax1.set_ylabel('$I_0$', fontweight=12)\nax1.set_xlim(0, 24); ax1.set_ylim(0, 320)\nax2.plot(t, rI0, 'k-', linewidth=3); ax2.grid(); \nax2.set_xlabel('t in hours', fontweight=12)\nax2.set_ylabel('$\\gamma_P(I_0)$', fontweight=12)\nax2.set_xlim(0, 24); ax2.set_ylim(0, 1)\n```\n\n\n```python\ndelta_t = t[1]-t[0]\nresult = (1./24) * np.sum(rI0*delta_t)\nprint('Daily average of rI0 is: {}'.format(result))\n```\n\n Daily average of rI0 is: 0.420146160518\n\n\nSo light limits is important.\n\n**4.15** In this problem, you are to find the depth at which the diurnally averaged light supply $\\left<\\gamma_P\\left(I(z)\\right)\\right>$ crosses the threshold necessary for phytoplankton to achieve the minimum concentration at which zooplankton can survive, $0.60\\ mmol/m^3$. Use the temperature dependent growth rate given by the `Eppley relationship` (4.2.8) for a temperature of $10^\\circ C$, a mortality rate $\\lambda_P$ of $0.05\\ d^{-1}$, and a nitrate half-saturation constant $K_N$ of $0.1\\ mmol/m^3$. Assume that the total nitrate concentration $N_T$ is $10\\ mmol/m^3$. 
Do this in two steps:\n\n(a) Find the minimum light supply function $\\gamma_P(I)$ that is required in order for phytoplankton\nto cross the threshold concentration (assume zooplankton concentration $Z = 0$)\n\n*Ans:*\nThe steady state of phytoplankton in N-P-Z model:\n$$\nSMS(P) = 0 = V_{max}\\gamma_P(N)\\gamma_P(I) - \\lambda_P\n$$\n\nAnd now we try to solve light limits $\\gamma_P(I)$.\n\nThe threshold of phytoplankton $P = 0.60\\ mmol/m^3$, so we have the concentration of nutrient:\n$$\nN = N_T - P - Z = 9.4\\ mmol/m^3\n$$\nThen calling Eq. 4.2.11., nutrient limits is:\n$$\n\\gamma_P(N) = \\frac{N}{K_N+N} = 0.99\n$$\nFor the maximum growth rate, we have Eq. 4.2.8:\n$$\nV_{max} = V_P(T) = ab^{cT} = 0.6*1.066^{10} = 0.637\n$$\nThus the minimum light supply function is:\n$$\n\\gamma_P(I) = \\frac{\\lambda_P}{V_{max}\\gamma(N)} = 0.079\n$$\n\n(b) Assuming that $\\gamma_P(I)$ from (a) is equal to the diurnal average $\\left<\\gamma_P\\left(I(z)\\right)\\right>$, at what depth $H$ in\nthe ocean will the diurnally averaged light supply function cross the threshold you\nestimated in (a)? Assume that P is constant with depth and use a total attenuation\ncoefficient of $0.12\\ m^{-1}$.\n\n*Ans:*\n\nHere I borrowed 2 values from problem **4.14** $\\alpha = 0.025$, and $I_0 = 1000$.\n\nBased on Eq. (4.2.16), Eq. (4.2.17):\n\n$$\nI = \\frac{\\gamma_P(I)I_k}{\\sqrt{1-\\gamma_P(I)^2}}, \\qquad\\ I_k = \\frac{V_P}{\\alpha}\n$$\n\nFor the critical depth, growth equals to death, $V_P = \\lambda_P=0.05$, and we get $I = 0.1584$\n\nThen from Beer's Law:\n\n$$\nI = I_0\\exp(-KH), \\qquad\\ K=0.12\n$$\n\nSo we have:\n\n$$\nH = -\\frac1K\\ln\\frac{I}{I_0} = 72.92\\ m \n$$\n\nThis is the deepest place for zooplankton to survive, and phytoplankton has a concentration of $60\\ mmol/m^3$. \n\n#Chapter 10 - Carbon Cycle, CO2, Climate\n\n**10.4** Explain why the surface ocean concentration of anthropogenic CO2 is higher\nin low latitudes than it is in high latitudes. Why is it higher in the Atlantic\nthan in the Pacific ?\n\n*Ans:*\n\nThe basic idea is the variation of buffering factor $\\gamma_{DIC}$ is more important than the solubility of $\\mathrm{CO_2}$\n\nIf we integrate eq(10.2.16) begin with *Anthropocene*, $C_{ant}$ is a function of $\\gamma_{DIC}$:\n$$\n C_{ant}(t) = \\int_{t=t_\\pi}^{t_0}{\\frac{\\partial DIC}{\\partial t}dt} = \\frac1{\\gamma_{DIC}}\\frac{DIC}{pCO_2^{oc}}\\left(\\left.pCO_2^{atm}\\right|_{t_0}^{t_\\pi}\\right)\n$$\n\n * Tropics has low $\\gamma_{DIC}$ so high accumulated $C_{ant}$ takeup;\n * High-latitude regions has high $\\gamma_{DIC}$ so ...\n * Atlantic has a lower $\\gamma_{DIC}$ than Pacific due to its high *Alk* (see eq(10.2.11))\n\n**10.5** How long will it take for a pulse of $\\mathrm{CO_2}$ emitted into the atmosphere to be reduced to 50%, 20%, 10%, and 1% of its original value? For each answer list? 
which process is the primary one responsible for the removal of $\\mathrm{CO_2}$ from the atmosphere at the point in time the threshold is crossed.\n\n*Ans:*\n\nWe have many choices of impulse response functions (IRF), a simple one used by IPCC-SAR is: \n$$\nIRF = A_0 + \\sum_{i=1}^5{A_i\\exp\\left(-\\frac{t}{\\tau_i}\\right)}\n$$\n$A_i$ and $\\tau_i$ are empirical values, $t$ for \"year\" (details here)\n\n\n```python\ndef IRF_IPCC(A, tau, t):\n IRF = A[0]*np.ones(t.shape)\n for i in range(5):\n IRF = IRF + A[i+1]*np.exp(-1*t/tau[i])\n return IRF\n```\n\n\n```python\nA_std = np.array([0.1369, 0.1298, 0.1938, 0.2502, 0.2086, 0.0807])\ntau_std = np.array([371.6, 55.7, 17.01, 4.16, 1.33])\nt = np.linspace(0, 500, 501)\nIRF = IRF_IPCC(A_std, tau_std, t)\n```\n\n\n```python\nfig = plt.figure(figsize=(10, 4)); ax = fig.gca();ax.grid()\nplt.plot(t, IRF, 'k-', linewidth=3.5)\nax.set_title('IRF v.s. time', fontsize=14)\n```\n\n\n```python\nhit = np.flipud(t)[np.searchsorted(np.flipud(IRF), [0.5, 0.2])]\nprint('Time to reduced to 50% is {} year, to 20% is {} year'.format(hit[0], hit[1]))\n```\n\n Time to reduced to 50% is 16.0 year, to 20% is 276.0 year\n\n\nFor 50%, DIC buffering is dominate. For 20%, it costs 276 yr and DIC buffering is nearly saturate (see Fig.10.2.3), and $\\mathrm{CaCO_3}$ buffering begin to dominate.\n\n**10.8** Explain the apparent paradox that the tropical Pacific is viewed as being a\nlarge sink for anthropogenic $\\mathrm{CO_2}$, despite the fact that it is a region of net\noutgassing of $\\mathrm{CO_2}$.\n\n*Ans:*\n\nAccording to **10.4** we know that tropical ocean takes up more $Ant_{C}$ because it has a lower $\\gamma_{DIC}$. The outgassing in tropical Pacific is due to the upwelling and inefficient biological pump, these are the business of natural carbon (and since natural carbon cycle is in equilibrium, this outgassing is balanced by some other downwelling regions). \n\n**10.13**\n", "meta": {"hexsha": "765acb9e12488e74fb7e7bb7f010d9541b077d3c", "size": 67478, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "EOSC_576/EOSC_576_Problems.ipynb", "max_stars_repo_name": "yingkaisha/Homework", "max_stars_repo_head_hexsha": "fff00fb5a41513e0edf2b1f8d8a74687a1db7120", "max_stars_repo_licenses": ["Unlicense"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-02-17T23:19:36.000Z", "max_stars_repo_stars_event_max_datetime": "2021-02-17T23:19:36.000Z", "max_issues_repo_path": "EOSC_576/EOSC_576_Problems.ipynb", "max_issues_repo_name": "yingkaisha/homework", "max_issues_repo_head_hexsha": "fff00fb5a41513e0edf2b1f8d8a74687a1db7120", "max_issues_repo_licenses": ["Unlicense"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "EOSC_576/EOSC_576_Problems.ipynb", "max_forks_repo_name": "yingkaisha/homework", "max_forks_repo_head_hexsha": "fff00fb5a41513e0edf2b1f8d8a74687a1db7120", "max_forks_repo_licenses": ["Unlicense"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 60.5727109515, "max_line_length": 621, "alphanum_fraction": 0.7277927621, "converted": true, "num_tokens": 4361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4263216071250873, "lm_q2_score": 0.20434189993684582, "lm_q1q2_score": 0.08711536718406988}} {"text": "```javascript\n%%javascript\n$('#appmode-leave').hide();\n$('#copy-binder-link').hide();\n$('#visit-repo-link').hide();\n```\n\n# Water anomalies\nDespite its simple molecular structure, water is an exceptionally complicated fluid.\nMany of its properties do not follow the trends obeyed by other liquids and are often referred to as water anomalies.\nScientists often report more than 50 _anomalous_ properties of water, and some of the most well-known examples are\n1. Water has an unusually high melting point for a molecule of such a low molecular weight.\n2. Water has an unusually high boiling point for a molecule of such a low molecular weight.\n3. A liquid-liquid transition occurs at about 330 K.\n4. Pressure reduces ice's melting point.\n5. Cold liquid water has a high density that increases on warming (up to 3.984 \u00b0C).\n\nMost of these anomalous behaviours have been explained and there is ample scientific (and non-scientific) literature discussing them. As an example, this website [https://water.lsbu.ac.uk/water/water_anomalies.html](https://water.lsbu.ac.uk/water/water_anomalies.html) will provide a good overview, and plenty of references, on the topic.\n\nIn this numerical workshop you will use a computational technique called Molecular Dynamics (MD) to study the how the water density changes with temperature.\nMD is one of the most widely used type of atomistic simulations, and it is now routinely used in many research groups to complement experimental studies.\n\n## Molecular dynamics\nMolecular dynamics is conceptually very simple, an iterative solution of Newton's equations of motions at the atomic level, but subtly complicated to use for direct quantitative comparison with experiments.\nAlthough a detailed description of MD is beyond the scope of this laboratory, it is worth discussing some basic ideas for you to start appreciating the power and limitations of this technique. There are plenty of webpage and tutorials that describe the working principles of MD; Wikipedia has a fairly good an general overview of this topic [https://en.wikipedia.org/wiki/Molecular_dynamics]( https://en.wikipedia.org/wiki/Molecular_dynamics)\n\nIn MD the atoms are treated as point particles with a mass and a partial charge. 
Their interactions are described by simple empirical equations, such as the Coulomb and the van der Waals (dispersion) forces, supplemented with two-, three- or four-body interactions to better capture the covalent nature of the intramolecular bonds.\nFor example, in classical molecular dynamics the interaction *energy* between two non-bonded atoms separated by a distance, $r$, can be written as\n\n\\begin{equation}\nU_{ij} = \\frac{1}{4\\pi\\varepsilon_0}\\frac{q_i q_j}{r} + \\frac{A}{r^12} - \\frac{B}{r^6} \\tag{1}\n\\end{equation}\n\nWhere the first term is the Coulomb interaction and the last two the repulsive and attractive parts of the van der Waals interactions.\nOn the other hand the bonded two-, three- and four-body interactions between covalently bonded atoms are typically described by *harmonic* potentials\n\n\\begin{eqnarray}\nU_{ij}^b &=& K_b(b_{ij}-b_0)^2 \\tag{2} \\\\\nU_{ijk}^a &=& K_\\theta(\\theta_{ijk}-\\theta_0)^2 \\tag{3} \\\\\nU_{ijkl}^t &=& K_\\phi[1+\\cos(n\\phi_{jikl}-\\phi_0)]^2 \\tag{3} \\\\\n\\end{eqnarray}\n\nwhere $b_{ij}$, $\\theta_{ijk}$ and $\\phi_{jikl}$ are the bond lengths, angle and torsional angle between the atoms and the other quantities are fitting parameters, which are key to determine the accuracy of the simulations.\n\nOnce the interaction energy is known we can then compute the forces on the atoms as the sum of all pair-wise interactions\n\n\\begin{equation}\nF_i = \\sum_{j\\neq i} F_{ij} = -\\Bigg[\n \\sum \\frac{\\partial U_{ij}}{\\partial x_i} +\n \\sum \\frac{\\partial U_{ij}^a}{\\partial x_i} +\n \\sum \\frac{\\partial U_{ijk}^b}{\\partial x_i} +\n \\sum \\frac{\\partial U_{ijkl}^t}{\\partial x_i} \\Bigg] \\tag{5} \n\\end{equation}\n\nThen, by knowing the positions, velocities and forces for all the atoms at a certain time $t$, we can use the Newton's equations of motions to *predict* the positions and velocities of the particles after a certain (short) amount of time as passed\n\n\\begin{eqnarray}\na_i(t) &=& \\frac{F_i(t)}{m_i} \\tag{6} \\\\\nv_i(t+\\delta t) &=& v_i(t) + a_i(t)\\delta t \\tag{7} \\\\\nx_i(t+\\delta t) &=& x_i(t) + v_i(t)\\delta t + \\frac{1}{2}a_i(t)\\delta t^2 \\tag{8} \\\\\n\\end{eqnarray}\n\nwhere $a_i$, $v_i$ and $x_i$ are the acceleration, velocity and position of particle $i$, and $\\delta t$ is called the time step.\nThese three equations (or some variants of them) are usually called **equations of motions**.\nNow that we have the new atomic positions we can compute the new forces on the atoms, and use again the Newton's equations of motions to *propagate* the atoms' positions further. \nThis iterative procedure will generate a **trajectory** for the atoms, and by using energies and velocities collected along the way we will also get information about the temperature, pressure and other thermodynamic quantities of the system.\n\n\n### Importance of the time step\nThe time step is one of the most important quantities in MD, and it is key to understand the potentials and limitations of atomistic molecular dynamics simulations.\nIn fact, for the above equations of motions to be valid, the time step has to be short enough to describe the fastest **atomic** motion in the system, which in the case of water is the O-H stretching mode.\nThe O-H stretching has a vibrational frequency of approximately $1\\times10^{14}$Hz, *i.e* it takes about $1\\times10^{-14}$s to complete one oscillation. Therefore, if we want to describe this very fast atomic motion using discrete points in time we would need 10-20 snapshots. 
Hence the time step has to be of the order of 1~fs ($1\\times10^{-15}$s) or less.\n\nNow, let\u2019s imagine running a simulation with a 1 fs time step and that the computer take 10 ms to calculate energies, forces and do one cycle of the equations of motions.\nIn the table below you can see how long it would you take to simulate a chemical or physical process depending on the time scale it experimentally occurs\n\n| Experimental time scale | Phenomenon | Simulation time \n| :-----: | :--------: |:---------\n| 10 fs | O-H vibration | 0.1 s\n| 1 ps | H-bond persistence | 10 s\n| 1 nm | Ion permeation through a membrane | 3 hours\n| 1$\\mu$s | Conformational rearrangement | 115 days \n| 1 ms | Protein folding (fast) | 317 days\n| 1 s | Protein folding (typical) | 317,000 days\n\nObviously, the time requires to do on MD cycles depends on the number of operations the computer has to perform, hence it increases with the system size.\nAlthough computational power has increased exponentially since MD was first introduced in the 1940s, we we can now afford to study systems of millions of atoms or hundreds of nm i size, there are still strong limitations to what can be reliably simulated due to the finite (small) number of atoms is included in the system (compared to Avogadro's number) and the short time scale that the simulation can span.\n\n### Ensemble\nMD codes are more complicated that a simple iterative solution of Newton's equations of motions and they include algorithms to control the temperature, pressure and other thermodynamics quantities of the system.\nOf particular relevance for this experience is the need to use **thermostats** and **barostats** to fix the boundary conditions of the simulations.\nFor this laboratory is not important to know the working details of these algorithms, but is key that you are aware that the variables relating to the temperature and pressure of the simulations are input parameters that may need to be changed.\n\n## Scope of the laboratory\nThe scope of this virtual experiment is to introduce you to atomistic molecular dynamics simulations and to compute the variations of the water density as a function of temperature, and to compare it with experimental values. As briefly mentioned above, the accuracy of the simulation depends on the parameters that are used to compute the intermolecular interactions. This laboratory will be run in groups and each of you will run individually MD calculations for a chosen water model and share the results with the other group members to be included in the final report (with proper acknowledgment of their origin).\n\nThe water models available for this laboratory are\n* SPC/E\n* TIP3P\n* TIP4P\n* TIP4P/ew\n* TIP5P\n\nwhich is a very small selections of the available models for water.\nThe models are listed in increasing level of complexity and one would reasonably expect that the more complex model gives more accurate results.\n\nYou will choose one model, in consultation with the other members of your team, and run a series of simulations to compute the water density at various conditions, to locate the water density maximum, if it exists for that model.\nEach simulation may take 5/10 minutes, you do not have to look at the simulation as it runs but it is important that your computed does not go to sleep or disconnect during the run time of the simulation. 
\nAlthough the files should remain on the server you should save a copy of the files on your local computer before disconnecting.\n\nYou can start by performing a few calculations around ambient conditions, say between 273 and 310 K, and then move to much lower temperature if needed.\n\nNote that the water will not freeze during your simulation even at fairly high undercooling, so you can go to quite low temperature without any real problems.\n\n## Final report\nThe final report for this experience should include your calculations for the water density as a function of temperature for a selection of the models, literature experimental data and a comparison with published MD results. Optionally you can also include data obtained by your peers for a different water potential.\nThe comparison with previous simulations should be done at least for the same water model that you have used in your simulations.\n\nYour report should show how you extracted the average density from the simulation output, and an estimate of the errors.\n\n\n## Launch virtual experiment\nLet's now have a look at the python notebook to run MD simulations\n[Molecular dynamics Simulation](md.ipynb)\n", "meta": {"hexsha": "f878d86de17cd17e70043c3641fca0ea05fb9b72", "size": 12439, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "week_10_waterDensity/waterDensity.ipynb", "max_stars_repo_name": "blake-armstrong/TeachingNotebook", "max_stars_repo_head_hexsha": "30cdca5bffd552eaecc0368c3e92744c4d6d368c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "week_10_waterDensity/waterDensity.ipynb", "max_issues_repo_name": "blake-armstrong/TeachingNotebook", "max_issues_repo_head_hexsha": "30cdca5bffd552eaecc0368c3e92744c4d6d368c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "week_10_waterDensity/waterDensity.ipynb", "max_forks_repo_name": "blake-armstrong/TeachingNotebook", "max_forks_repo_head_hexsha": "30cdca5bffd552eaecc0368c3e92744c4d6d368c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 67.2378378378, "max_line_length": 626, "alphanum_fraction": 0.6864699735, "converted": true, "num_tokens": 2365, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4186969093556867, "lm_q2_score": 0.20689404637077294, "lm_q1q2_score": 0.08662589777953476}} {"text": "```python\n#we may need some code in the ../python directory and/or matplotlib styles\nimport sys\nimport os\nsys.path.append('../python/')\n\n#set up matplotlib\nos.environ['MPLCONFIGDIR'] = '../mplstyles'\nprint(os.environ['MPLCONFIGDIR'])\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\n#got smarter about the mpl config: see mplstyles/ directory\nplt.style.use('standard')\nprint(mpl.__version__) \nprint(mpl.get_configdir())\n\n\n#fonts\n# Set the font dictionaries (for plot title and axis titles)\ntitle_font = {'fontname':'Arial', 'size':'16', 'color':'black', 'weight':'normal',\n 'verticalalignment':'bottom'} # Bottom vertical alignment for more space\naxis_font = {'fontname':'Arial', 'size':'32'}\nlegend_font = {'fontname':'Arial', 'size':'22'}\n\n#fonts global settings\nmpl.rc('font',family=legend_font['fontname'])\n\n\n#set up numpy\nimport numpy as np\n```\n\n ../mplstyles\n 3.0.3\n /home/phys/villaa/analysis/misc/nrFano_paper2019/mplstyles\n\n\n# Summary\n\nIn this notebook we follow the logic for our analysis that defines an effective nuclear-recoil Fano factor for germanium. We use the other notebooks in this directory as supporting references and present the line of logic for our publication [REF]. \n\nIt is planned that all of the plots that go into the publication will be produced here from the data we have placed in the `data/` directory below this one. It is planned that all the data is referenced and where it came from is clear. \n\nThe basic idea of this analysis is that there are measurements in the literature that constrain the width (second moment) of the ionization distribution for various materials. This has also been predicted by Lindhard [REF]. This variance in the number of charges produced by a nuclear recoil of a given energy far exceeds what is measured from electron recoils. In the electron recoils this is parameterized by the Fano factor and so here we define the \"effective\" Fano factor for nuclear recoils. \n\nWhile the width of the ionization distribution has not been important in the past because of excellent discrimination between electron- and nuclear-recoil events above about 10 keV, it is becoming more important for dark matter searches interested in lower recoil energies [REF] (SuperCDMS low threshold) and discriminationless [REF] (CDMSlite & HVeV) searches. \n\n# 1. Lindhard has Predicted this Variance and Dougherty has Measured it for Silicon\n\nDougherty has measured this effect in silicon and shown that it is near the predicted values from Lindhard [[Dough92][Dough92]]. See the notebook `silicon_Fano.ipynb` for the details of how an effective Fano factor is extracted from this silicon measurement. \n\n[Dough92]: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.45.2104 \"Dougherty paper 1992\"\n\nThe following table is a summary of those data along with the effective Fano estimate and uncertainty for each recoil energy data point measured in that publication.\n\nExperimental uncertainties are quoted as values following the \"$\\pm$\" symbol. The Observed width and Expected width are both in FWHM execpt for the 25.3 keV recoil energy point, which is quoted in half width at half max (HWHM). The excess fluctuation are given in 1$\\sigma$. 
\n\n## Table 1 of the paper:\n\nSi recoil energy (keV)|Observed ionization (keV)|Lindhard shift (keV)|Ionization efficiency (%)|Observed width (keV)|Expected width (keV)|Excess fluct. (%)| effective Fano\n:-|:-|:-|:-|:-|:-|:-|:-\n109.1$\\pm$0.7|55.5$\\pm$2|0.55|51.4$\\pm$2|16$\\pm$3|3.5$\\pm$0.4|6.1$\\pm$1.2|208$\\pm$40\n75.7 $\\pm$0.4|33.3$\\pm$0.4|0.31+0.94|45.6$\\pm$0.5|9.6$\\pm$1.0|1.1$\\pm$0.3|5.3$\\pm$0.6|123$\\pm$5\n25.3$\\pm$0.3|8.90$\\pm$0.1|0.074|35.5$\\pm$0.6|1.30$\\pm$0.04|0.75$\\pm$0.1|3.6$\\pm$0.3|24.3$\\pm$3.6\n7.50$\\pm$0.03|2.01$\\pm$0.02|0.012|26.9$\\pm$0.4|0.55$\\pm$0.07|0.24$\\pm$0.01|2.8$\\pm$0.4|5.75$\\pm$3.07\n4.15$\\pm$0.15|0.93$\\pm$0.02|0.008|22.5$\\pm$0.5|0.32$\\pm$0.06|0.236$\\pm$0.005|2.2$\\pm$0.9|2.35$\\pm$9.21\n\n\n```python\nimport dataPython as dp\nimport numpy as np\n\nlind_data0 = dp.getXYdata('data/lindhard2_OmegaepsD_fmt.txt')\nlind_data1 = dp.getXYdata('data/lindhard2_OmegaepsE_fmt.txt')\n\nlindD_e = np.asarray(lind_data0['xx'])\nlindD = np.asarray(lind_data0['yy'])\nlindE_e = np.asarray(lind_data1['xx'])\nlindE = np.asarray(lind_data1['yy'])\n```\n\n\n```python\nEsi = np.vectorize(lambda x: np.sqrt(2)*2*x/(6.87758e-5*1000))\n```\n\n\n```python\n#create a yield model\nimport lindhard as lind\n\n#lindhard\nlpar = lind.getLindhardPars('Si',True) #use the \"calculated\" value of k\nprint(lpar)\n#ylind = lind.getLindhard(lpar)\nylind = lind.getLindhardSi_k(0.15)\nylindv = np.vectorize(ylind) #careful, this expects inputs in eV\n```\n\n {'Z': 14, 'A': 28, 'k': 0.14600172346755985, 'a': 3.0, 'b': 0.15, 'c': 0.7, 'd': 0.6}\n\n\n\n```python\n#convert the vectors\nepsg = 3.8e-3 #keV average energy per electron-hole pair created\n\n\nF_D = Esi(lindD_e)*(1/(epsg*ylindv(1000*Esi(lindD_e))))*lindD\nF_E = Esi(lindE_e)*(1/(epsg*ylindv(1000*Esi(lindE_e))))*lindE\n```\n\n\n```python\n#get Dougherty Data\nddataY = dp.getXYdata_wXYerr('data/Dougherty_Yield.txt')\nddataFluct = dp.getXYdata_wXYerr('data/Dougherty_Fluct.txt')\n\nddataY_G = dp.getXYdata_wXYerr('data/Gerbier_Yield.txt')\nddataFluct_G = dp.getXYdata_wXYerr('data/Gerbier_Fluct.txt')\n\n#convert to numpy arrays\nddata_e = np.asarray(ddataFluct['xx'])\nddata_fluct = np.asarray(ddataFluct['yy'])\nddata_fluct_err = np.asarray(ddataFluct['ey'])\n\nddata_Y = np.asarray(ddataY['yy'])\nddata_Y_err = np.asarray(ddataY['ey'])\nprint(ddata_Y)\nprint(ddata_Y_err)\n\nddataG_e = np.asarray(ddataFluct_G['xx'])\nddataG_fluct = np.asarray(ddataFluct_G['yy'])/1000\nddataG_fluct_err = np.asarray(ddataFluct_G['ey'])/1000\nprint(ddataG_e)\nprint(ddataG_fluct)\n\nddataG_Y = np.asarray(ddataY_G['yy'])\nddataG_Y_err = np.asarray(ddataY_G['ey'])\nprint(ddataG_Y)\nprint(ddataG_Y_err)\n\nepsg = 3.8e-3 #epsilon-gamma for silicon in keV per pair\nddata_fluct_F = (ddata_fluct/100)**2 * (ddata_e/(epsg*(ddata_Y/100)))\n#ddata_fluct_F_err = (ddata_fluct_err/100)**2 * (ddata_e/(epsg*(ddata_Y/100)))\nddata_fluct_F_err = np.sqrt(((ddata_fluct/100)*(2*ddata_e/(epsg*(ddata_Y/100))))**2*(ddata_fluct_err/100)**2 \\\n +((ddata_fluct/100)**2*(ddata_e/(epsg*(ddata_Y/100)**2)))**2*(ddata_Y_err/100)**2 )\nprint(ddata_fluct_F_err)\nddata_fluct_F_err_A = np.sqrt(((ddata_fluct/100)*(2*ddata_e/(epsg*(ddata_Y/100))))**2*(ddata_fluct_err/100)**2)\nddata_fluct_F_err_B = np.sqrt(((ddata_fluct/100)**2*(ddata_e/(epsg*(ddata_Y/100)**2)))**2*(ddata_Y_err/100)**2 )\nddata_fluct_F_err = np.sqrt(ddata_fluct_F_err_A**2 + ddata_fluct_F_err_B**2)\n \nddata_fluct_F_G = (ddataG_fluct/ddataG_e)**2 * (ddataG_e/(epsg*(ddataG_Y/100)))\n#ddata_fluct_F_err = 
(ddata_fluct_err/100)**2 * (ddata_e/(epsg*(ddata_Y/100)))\nddata_fluct_F_err_G = np.sqrt(((ddataG_fluct/ddataG_e)*(2*ddataG_e/(epsg*(ddataG_Y/100))))**2*(ddataG_fluct_err/ddataG_e)**2 \\\n +((ddataG_fluct/ddataG_e)**2*(ddataG_e/(epsg*(ddataG_Y/100)**2)))**2*(ddataG_Y_err/100)**2 )\n\nddata_fluct_F_err_G_A = np.sqrt(((ddataG_fluct/ddataG_e)*(2*ddataG_e/(epsg*(ddataG_Y/100))))**2*(ddataG_fluct_err/ddataG_e)**2)\nddata_fluct_F_err_G_B = np.sqrt(((ddataG_fluct/ddataG_e)**2*(ddataG_e/(epsg*(ddataG_Y/100)**2)))**2*(ddataG_Y_err/100)**2 )\nddata_fluct_F_err_G = np.sqrt(ddata_fluct_F_err_G_A**2 + ddata_fluct_F_err_G_B**2) \n \nprint(ddata_fluct_F)\nprint(ddata_fluct_F_err)\nprint(ddata_fluct_F_G)\nprint(ddata_fluct_F_err_G)\n```\n\n [51.4 45.6 35.5 26.9 22.5]\n [2. 0.5 0.6 0.4 0.5]\n [21.7 19.5 13.5 8.6 4.7 4.15 3.9 3.3 ]\n [1. 1.101 0.601 0.348 0.185 0.166 0.241 0.131]\n [40.7 38.7 33.6 31.1 26.6 27.4 22.9 25.9]\n [0.5 0.7 0.7 0.5 0.8 0.8 2. 1.6]\n [82.17366352 27.81718869 4.07177705 1.64573833 1.92281409]\n [207.84410199 122.71543167 24.30600445 5.75229896 2.34923977]\n [82.17366352 27.81718869 4.07177705 1.64573833 1.92281409]\n [29.79629465 42.27128645 20.95522371 11.91560371 7.2041105 6.37725701\n 17.11395553 5.28378686]\n [3.53496608 8.32817692 2.96120797 0.91062459 2.81212102 3.00232178\n 9.49203685 4.44875839]\n\n\n## Figure 1 of the Paper:\n\n\n```python\n#set up a plot\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nfrom mpl_toolkits.axes_grid1.inset_locator import InsetPosition\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\nxmax=10\n\nax1.errorbar(ddata_e,ddata_fluct_F,yerr=[ddata_fluct_F_err,ddata_fluct_F_err], marker='o', markersize=8, \\\n ecolor='k',color='k', linestyle='none', label='Dougherty eff. F', linewidth=2)\nax1.errorbar(ddataG_e,ddata_fluct_F_G,yerr=[ddata_fluct_F_err_G,ddata_fluct_F_err_G], marker='^', markersize=8, \\\n ecolor='k',color='k', linestyle='none', label='Gerbier eff. F', linewidth=2)\n\n\n#ax1.plot (X, diff, 'm-', label='Thomas-Fermi (newgrad)')\n#ax1.plot (Esi(epr), 100*np.sqrt(f_Omega2_eta2(epr))*ylindv(1000*Esi(epr)), 'g-', label='$\\Omega/\\epsilon$ (NAC III approx. D)')\nax1.plot (Esi(lindD_e), F_D, 'k-', label='eff. F (Lind. approx. D)')\nax1.plot (Esi(lindE_e), F_E, 'k--', label='eff. F (Lind. approx. 
E)')\n\n\n\n\nax1.set_yscale('linear')\nax1.set_xscale('linear')\nax1.set_xlim(Esi(0), 150)\nax1.set_ylim(0.1,300)\nax1.set_xlabel('recoil energy ($E_r$) [keV]',**axis_font)\nax1.set_ylabel('effective Fano factor',**axis_font)\n#ax1.grid(True)\n#ax1.xaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=4,prop={'size':22})\n\n###Make inset\nbbox_ll_x = 0.07\nbbox_ll_y = -0.0225\nbbox_w = 1\nbbox_h = 1\neps = 0.01\naxins = inset_axes(ax1, height=\"35%\", width=\"55%\", bbox_to_anchor=(bbox_ll_x,bbox_ll_y,bbox_w-bbox_ll_x,bbox_h), loc='upper left',bbox_transform=ax1.transAxes)\n#ax1.add_patch(plt.Rectangle((bbox_ll_x, bbox_ll_y+eps), bbox_w-eps-bbox_ll_x, bbox_h-eps, ls=\"--\", ec=\"c\", fc=\"None\",\n# transform=ax1.transAxes))\n\n#axins = plt.axes([0,0,1,1])\n#axins_pos = InsetPosition(ax3, [0.25, 0.65, 0.7, 0.3])\n#axins.set_axes_locator(axins_pos)\n\n# larger region than the original image\nx1, x2, y1, y2 = 0, 10, 0, 20\naxins.set_xlim(x1, x2)\naxins.set_ylim(y1, y2)\n\n\n\naxins.errorbar(ddata_e,ddata_fluct_F,yerr=[ddata_fluct_F_err,ddata_fluct_F_err], marker='o', markersize=8, \\\n ecolor='k',color='k', linestyle='none', label='', linewidth=2)\naxins.errorbar(ddataG_e,ddata_fluct_F_G,yerr=[ddata_fluct_F_err_G,ddata_fluct_F_err_G], marker='^', markersize=8, \\\n ecolor='k',color='k', linestyle='none', label='', linewidth=2)\naxins.plot (Esi(lindD_e), F_D, 'k-', label='')\naxins.plot (Esi(lindE_e), F_E, 'k--', label='')\n\naxins.yaxis.grid(True,which='minor',linestyle='--')\naxins.xaxis.grid(True,which='minor',linestyle='--')\naxins.grid(True)\n####\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\n#plt.tight_layout()\nplt.savefig('figures/paper_figures/SiFano_Figure1.eps')\nplt.savefig('figures/paper_figures/SiFano_Figure1.pdf')\nplt.show()\n```\n\n# 2. Edelweiss has Observed Anomalous NR Widening in Germanium\n\nThe 2004 EDELWEISS publication [[Edw04][Edw04]] has published a detailed and complete analysis of the measured resolutions for 7 cryogenic germanium detectors that they used for dark matter searches about ~10 keV analysis thresholds. \n\nIn this paper it was recognized (in a similar way to the Doughterty measurement) that the measured ionization yield width was larger than expected. This effect remained even after estimating the nuclear recoil band widening based on multiple-scatters. The point in this section is to estimate how much wider the measured nuclear recoil band was than the _single-scatter_ prediction. \n\n[Edw04]: https://doi.org/10.1016/j.nima.2004.04.218 \"EDELWEISS 2004 Publication\"\n\nThe single-scatter prediction for the ionization yield width can be analytically estimated based on the ionization and heat channel resolutions. Each of those resolutions is extracted for each detector in the publication by the following functional forms (see the notebook `edelweiss_res.ipynb`):\n\n\\begin{equation}\n\\begin{aligned}\n\\sigma_I(E_I) &= \\sqrt{(\\sigma_I^0)^2 + (a'_I E_I)^2} \\\\\n\\sigma_H(E_H) &= \\sqrt{(\\sigma_H^0)^2 + (a'_H E_H)^2},\n\\end{aligned}\n\\end{equation}\n\nwhere $E_H$ is a recoil energy estimator (unbiased for electron-recoils) based on the heat signal, and $E_I$ is a recoil energy estimator (again unbiased for electron-recoils) based on the ionization signal. 
\n\nIn the EDELWEISS paper the recoil energy, an estimator for the true recoil energy of an event, is defined as follows:\n\n\\begin{equation}\nE_r = \\left(1+\\frac{V}{\\epsilon_{\\gamma}}\\right)E_H - \\frac{V}{\\epsilon_{\\gamma}} E_I, \n\\end{equation}\n\nwhere $V$ is the voltage, and $\\epsilon_{\\gamma}$ is the average energy to create a single electron-hole pair. Finally, the ionization yield, Q, is defined as:\n\n\\begin{equation}\nQ = \\frac{E_I}{E_r}\n\\end{equation}\n\nGiven these definitons, if one _assumes_ a normal distribution for the resulting ionization yield distribution and propagates the uncertainty on Q via the equations above and a first-order Taylor expansion the result is the one published by EDELWEISS [[Edw04][Edw04]]:\n\n\\begin{equation}\n\\sigma_{Q}^0(E_r) = \\sqrt{\\frac{1}{E^2_r} \\left( \\left(1+\\frac{V}{\\epsilon_{\\gamma}}\\langle Q\\rangle\\right)^2\\sigma_I^2 + \\left( 1+\\frac{V}{\\epsilon_{\\gamma}}\\right)^2\\langle Q\\rangle^2\\sigma_H^2\\right)},\n\\end{equation}\n\nwhere $\\langle Q \\rangle$ is the average ionization yield as a function of recoil energy. \n\n[Edw04]: https://doi.org/10.1016/j.nima.2004.04.218 \"EDELWEISS 2004 Publication\"\n\nIn order to discover how much the effective Fano factor for nuclear recoils is contributing we want to first see how much wider the nuclear recoil band is than this estimate. In the EDELWEISS paper [[Edw04][Edw04]], this is done by simply adding a constant in quadrature:\n\n\\begin{equation}\n\\sigma_{Q}(E_r) = \\sqrt{(\\sigma_{Q}^{0})^2 + C^2}\n\\end{equation}\n\nThe constant C comes out to be _around_ 0.04, and this is larger than the expected effect of multiple-scattering (see Section 3). \n\nTo duplicate this fit for the EDELWEISS detector \"GGA3\" and add fitting uncertainties, we have first computed the full non-normal ionization yield distribution from the resolutions, it shows that the EDELWEISS analytical form very slightly underpredicts the ionization yield width. We call this function $\\tilde{\\sigma}_{Q}^0$. 
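\n\nAs a minimal sketch (not the full non-normal calculation described above), the first-order formula can be coded directly; the default $V$ and $\\epsilon_{\\gamma}$ values and the inputs in the example call below are placeholders for illustration only.\n\n```python\nimport numpy as np\n\ndef sigma_Q0_analytic(Er, Qbar, sig_I, sig_H, V=4.0, eps_gamma=3.0):\n    """EDELWEISS first-order (normal-approximation) ionization yield width."""\n    term_I = (1.0 + (V / eps_gamma) * Qbar)**2 * sig_I**2\n    term_H = (1.0 + V / eps_gamma)**2 * Qbar**2 * sig_H**2\n    return np.sqrt((term_I + term_H) / np.asarray(Er)**2)\n\ndef sigma_Q_widened(Er, Qbar, sig_I, sig_H, C, **kwargs):\n    """Single-scatter prediction with the constant C added in quadrature."""\n    return np.sqrt(sigma_Q0_analytic(Er, Qbar, sig_I, sig_H, **kwargs)**2 + C**2)\n\nprint(sigma_Q_widened(Er=50.0, Qbar=0.3, sig_I=1.0, sig_H=0.8, C=0.04))\n```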
\n\n[Edw04]: https://doi.org/10.1016/j.nima.2004.04.218 \"EDELWEISS 2004 Publication\"\n\n\n```python\n# import data from Edelweiss\nimport pandas as pds\nres_data = pds.read_csv(\"data/edelweiss_NRwidth_GGA3_data.txt\", skiprows=1, \\\n names=['E_recoil', 'sig_NR', 'E_recoil_err', 'sig_NR_err'], \\\n delim_whitespace=True)\n\nresER_data = pds.read_csv(\"data/edelweiss_ERwidth_GGA3_data.txt\", skiprows=1, \\\n names=['E_recoil', 'sig_ER', 'sig_ER_err'], \\\n delim_whitespace=True)\n\nresER_data = resER_data.sort_values(by='E_recoil')\n\nprint (res_data.head(10))\nE_recoil = res_data[\"E_recoil\"]\nsig_NR = res_data[\"sig_NR\"]\nsig_NR_err = res_data['sig_NR_err']\nE_recoil_ER = resER_data[\"E_recoil\"]\nsig_ER = resER_data[\"sig_ER\"]\nsig_ER_err = resER_data['sig_ER_err']\n```\n\n E_recoil sig_NR E_recoil_err sig_NR_err\n 0 16.1946 0.062345 0.946176 0.001157\n 1 16.4428 0.062345 0.945278 0.001157\n 2 44.2627 0.046528 0.992477 0.001543\n 3 24.5012 0.059397 0.992477 0.001185\n 4 97.7172 0.044847 1.033260 0.002783\n 5 58.4014 0.050082 0.991830 0.002288\n 6 34.2156 0.053417 1.033260 0.001102\n\n\n\n```python\nimport h5py\nfilename = 'data/sims.h5'\n#remove vars\nf = h5py.File(filename,'r')\n\n#save the results for the Edw fit\npath='{}/'.format('ER')\n\nxE = np.asarray(f[path+'xE'])\nqbootsigs = np.asarray(f[path+'qbootsigs'])\nqbootsigerrsu = np.asarray(f[path+'qbootsigerrsu'])\nqbootsigerrsl = np.asarray(f[path+'qbootsigerrsl'])\n\n\nf.close()\n```\n\n\n```python\n#get the resolutions for GGA3\nimport EdwRes as er\n\naH=0.0381\nV=4.0\nC=0.0\nsigHv,sigIv,sigQerv,sigH_NRv,sigI_NRv,sigQnrv = er.getEdw_det_res('GGA3',V,'data/edw_res_data.txt',aH,C)\n\nimport fano_calc as fc\n\n#recall defaults (filename='test.h5', \n#det='GGA3',band='ER',F=0.00001,V=4.0,alpha=(1/10000.0),aH=0.035,Erv=None,sigv=None,erase=False)\nE,sig = fc.RWCalc(filename='data/res_calc.h5')\n\nprint(np.shape(E))\n```\n\n (200,)\n\n\nIn the figure below, the functon $\\tilde{\\sigma}_{QER}^0$ is shown as the solid curve. This curve is the electron-recoil version of the correct single-scatter ionization yield width $\\tilde{\\sigma}_{Q}^0$. The dashed curve is the resolution predicted from the EDELWEISS publication assuming a normal distribution for the ionization at each measured recoil energy. \n\nBoth of these are using the adjusted resolution parameter $a_H^{\\prime}$ equal to:\n\n\\begin{equation}\na_H^{\\prime} = \\frac{0.0386}{2\\sqrt{2\\log(2)}}.\n\\end{equation}\n\nThis adjustment was done in the EDELWEISS publication to fit the measured width of the electron recoil band [[Edw04][Edw04]]. \n\n[Edw04]: https://doi.org/10.1016/j.nima.2004.04.218 \"EDELWEISS 2004 Publication\"\n\n\nAlso shown in the figure is a high-statistics set of simulated data with the same resolutions (same value of $a_H^{\\prime}$) and the experimental data of EDELWEISS [[Edw04][Edw04]].\n\n## Figure 2a of the Paper:\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\nmask = [True, True, False, False, True, True, True, True, True]\n\n\n\nX=np.arange(0.1,200,0.1)\n\n\nax1.plot(X,sigQerv(X),color='r',linestyle=\"--\",linewidth=2, \\\n label='single-scatter res. model (ER) (aH={})'.format(aH))\nax1.plot(E,sig,color='r',linestyle=\"-\",linewidth=2, \\\n label='single-scatter res. 
model (ER) (aH={})'.format(aH))\nax1.errorbar(xE,qbootsigs, yerr=(qbootsigerrsl,qbootsigerrsu), \\\n color='k', marker='o',markersize=4,linestyle='none',label='ER scatters', linewidth=2)\nax1.errorbar(E_recoil_ER[mask],sig_ER[mask], yerr=sig_ER_err[mask], \\\n color='k', marker='^',markersize=8,linestyle='none',label='Edw. ER scatters', linewidth=2)\n\n\n\n\nymin = 0.04\nymax = 0.066\n\n\n\nax1.set_yscale('linear')\n#ax1.set_yscale('log')\nax1.set_xlim(40, 200) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'recoil energy [keV]',**axis_font)\nax1.set_ylabel('ionization yield width',**axis_font)\nax1.grid(True)\nax1.yaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=1,prop={'size':22})\n#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\nplt.tight_layout()\nplt.savefig('figures/paper_figures/ERyieldWidth_Figure2a.eps')\nplt.savefig('figures/paper_figures/ERyieldWidth_Figure2a.pdf')\nplt.show()\n```\n\nWith the value of $a_H^{\\prime}$ specified by the fit above, it is possible to calculate the expected single-scatter nuclear recoil ionization yield width as a function of energy, $\\tilde{\\sigma}_{Q}^0(E_r)$. \n\nWith this function in hand, we can then repeat the fit done in the EDELWEISS paper using the corrected version of the equation above:\n\n\\begin{equation}\n\\sigma_{Q} = \\sqrt{(\\tilde{\\sigma}_{Q}^0)^2 + C^2}.\n\\end{equation}\n\nFurthermore, we allow the parameter C to be a linear function of energy, to improve the fit quality, $C = C_0 + mE_r$. This fit is displayed in the figure below. \n\n\n```python\nfilename = 'data/systematic_error_fits.h5'\n#remove vars\nf = h5py.File(filename,'r')\nfor i in f['mcmc/edwdata_sys_error']:\n print(i)\n\n#save the results for the Edw fit\npath='{}/{}/'.format('mcmc','edwdata_sys_error')\n\nCms = np.asarray(f[path+'Cms'])\nslope = np.asarray(f[path+'m'])\na_yield = np.asarray(f[path+'A'])\nb_yield = np.asarray(f[path+'B'])\naH = np.asarray(f[path+'aH'])\nscale = np.asarray(f[path+'scale'])\nsamples = np.asarray(f[path+'samples'])\nsampsize = np.asarray(f[path+'sampsize'])\nxl = np.asarray(f[path+'Er'])\nupvec = np.asarray(f[path+'Csig_u'])\ndnvec = np.asarray(f[path+'Csig_l'])\nSigtot = np.asarray(f[path+'Sigss'])\nSigss = np.sqrt(Sigtot**2 - (Cms+slope*xl)**2)\n\nprint(Cms)\nprint(samples[0:5,:])\nf.close()\nprint(np.shape(samples))\n```\n\n A\n B\n Cms\n Csig_l\n Csig_u\n Er\n Sigss\n aH\n m\n samples\n sampsize\n scale\n 0.0401182258\n [[1.65991113e-02 3.17938117e-02 1.72848169e-04 9.60623060e-01\n 2.53571472e-01 3.89854952e-02]\n [1.64140930e-02 3.62294874e-02 1.21912111e-04 9.89509101e-01\n 1.72416566e-01 1.04542858e-01]\n [1.64008065e-02 3.62266219e-02 1.25921253e-04 9.95054885e-01\n 1.72930975e-01 1.00686204e-01]\n [1.62507246e-02 3.65956482e-02 1.11214649e-04 1.00693739e+00\n 1.64393892e-01 1.03460830e-01]\n [1.61375604e-02 3.63839307e-02 1.10043601e-04 1.01873876e+00\n 1.64451219e-01 1.12642479e-01]]\n (470000, 6)\n\n\n## Figure 2b of the Paper:\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\nprint(np.shape(samples[np.random.randint(len(samples), size=100)]))\n#for Cms_em, m_em in samples[np.random.randint(len(samples), size=100)]:\nfor aH_em, Cms_em, m_em, scale_em, A_em, B_em in samples[np.random.randint(len(samples), size=100)]:\n ax1.plot(xl, np.sqrt(Sigss**2+(Cms_em+m_em*xl)**2), color=\"orange\", 
alpha=0.1)\n\nax1.plot(xl,upvec,color='r',linestyle=\"--\",linewidth=2, \\\n label='1$\\sigma$ fluct.')\nax1.plot(xl,dnvec,color='r',linestyle=\"--\",linewidth=2, \\\n label='')\n\nax1.plot(xl,np.sqrt(Sigss**2+(Cms+xl*slope)**2),color='g',linestyle=\"-\",linewidth=3, \\\n label='(C$_0$={:01.3}; m={:01.2E})'.format(Cms,slope))\n\nax1.errorbar(E_recoil[2::],sig_NR[2::], yerr=sig_NR_err[2::], \\\n color='k', marker='o', markersize=4,linestyle='none',label='NR Edw. Measurement', linewidth=2)\n\nymin = 0.04\nymax = 0.1\n\n\n\nax1.set_yscale('linear')\n#ax1.set_yscale('log')\nax1.set_xlim(10, 200) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'recoil energy [keV]',**axis_font)\nax1.set_ylabel('ionization yield width',**axis_font)\nax1.grid(True)\nax1.yaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=1,prop={'size':22})\n#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\nplt.tight_layout()\nplt.savefig('figures/paper_figures/EdwyieldWidthFit_Figure2b.eps')\nplt.savefig('figures/paper_figures/EdwyieldWidthFit_Figure2b.pdf')\nplt.show()\n```\n\n# 3. Multiple-Scattering Cannot Account for All of the Yield Broadening\n\nAn obvious candidate, _aside_ from an intrinsic effective Fano factor, that might account for the yield broadening observed over the single-scatter prediction is multiple-scattering. If a neutron enters the detector and scatters more than once, the known non-linearity of the average ionization yield for each collison **_guarantees_** that the total yield will fluctuate to lower values than expected given the **_total_** energy deposited. \n\nWe use a Monte Carlo simulation of neutron scattering from a $^{252}$Cf source to approximate this effect. The following empirical single-scatter yield model is used since it approximates the EDELWEISS data fairly well:\n\n\\begin{equation}\n\\langle Q \\rangle = 0.16E_r^{0.18}. \n\\end{equation}\n\nWe apply this yield model to _each individual scatter_ in the simulated data and then sum to obtain the expected measured ionization yield, also folding in the measured EDELWEISS sensor resolutions appropriately. 
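\n\nBefore running the actual simulation below, the basic mechanism can be illustrated with a toy calculation (hypothetical deposit energies, not the $^{252}$Cf Monte Carlo): applying the yield model to each deposit and combining gives a lower yield than a single scatter of the same total energy.\n\n```python\nimport numpy as np\n\ndef Q_avg(Er):\n    """Empirical single-scatter yield model quoted above."""\n    return 0.16 * np.asarray(Er)**0.18\n\n# Hypothetical example: 30 keV in one scatter vs. the same 30 keV split in three\ndeposits = np.array([5.0, 10.0, 15.0])  # keV\nE_tot = deposits.sum()\n\nQ_single = Q_avg(E_tot)\nQ_multiple = np.sum(Q_avg(deposits) * deposits) / E_tot  # total ionization / total energy\nprint(Q_single, Q_multiple)  # the multiple-scatter value is lower\n```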
\n\n\n```python\nimport observable_simulations as osim\n\nQ,Ernr,Q_ss,Ernr_ss = osim.simQEr()\n\nEmin = 20 \nEmax = 30\n\nimport histogram_yield as hy\n\nbindf, bindfE = hy.QEr_Ebin(Q, Ernr, bins=[Emin, Emax],silent=True)\n\nqbins = np.linspace(0,0.6,40)\nxcq = (qbins[:-1] + qbins[1:]) / 2\n\nfor i,Qv in enumerate(bindf):\n n,nx = np.histogram(Qv,bins=qbins)\n \n \nbindf_ss, bindfE_ss = hy.QEr_Ebin(Q_ss, Ernr_ss, bins=[Emin, Emax],silent=True)\n\nfor i,Qv in enumerate(bindf_ss):\n n_ss,nx_ss = np.histogram(Qv,bins=qbins)\n \n```\n\n## Figure 3a of the Paper:\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\n\nmshist = n/np.sum(n)/np.diff(xcq)[0]\nsshist = n_ss/np.sum(n_ss)/np.diff(xcq)[0]\n\nestring = r'${}\\mathrm{{keV}}< E_r \\leq {}\\mathrm{{keV}}$'.format(Emin,Emax)\n#print(estring)\nax1.step(xcq,mshist, where='mid',color='m', linestyle='-', \\\n label='multiple scatters {}'.format(estring), linewidth=2)\nax1.step(xcq,sshist, where='mid',color='b', linestyle='-', \\\n label='single scatters'.format(estring), linewidth=2)\n\nymin = 0.0\nymax = 10\n\nblue = '#118DFA'\nax1.fill_between(xcq,np.zeros(np.shape(xcq)),mshist,step='mid',facecolor='m',alpha=0.4, \\\n label='')\nax1.fill_between(xcq,np.zeros(np.shape(xcq)),sshist,step='mid',facecolor='b',alpha=0.4, \\\n label='')\n\n\nax1.set_yscale('linear')\n#ax1.set_yscale('log')\nax1.set_xlim(0.0, 0.6) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'ionization yield',**axis_font)\nax1.set_ylabel('PDF',**axis_font)\nax1.grid(True)\nax1.yaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=1,prop={'size':22})\n#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\nplt.tight_layout()\nplt.savefig('figures/paper_figures/MSyieldHist_Figure3a.eps')\nplt.savefig('figures/paper_figures/MSyieldHist_Figure3a.pdf')\nplt.show()\n```\n\n\n```python\nfilename = 'data/mcmc_fits.h5'\n#remove vars\nf = h5py.File(filename,'r')\n\n#save the results for the Edw fit\npath='{}/{}/'.format('mcmc','multiples')\n\nCms = np.asarray(f[path+'Cms'])\nslope = np.asarray(f[path+'m'])\nsamples = np.asarray(f[path+'samples'])\nsampsize = np.asarray(f[path+'sampsize'])\nxl = np.asarray(f[path+'Er'])\nupvec = np.asarray(f[path+'Csig_u'])\ndnvec = np.asarray(f[path+'Csig_l'])\nSigss = np.asarray(f[path+'Sigss'])\n\nf.close()\nprint(np.shape(samples))\n```\n\n (40000, 2)\n\n\n\n```python\nfilename = 'data/sims.h5'\n#remove vars\nf = h5py.File(filename,'r')\n\n#save the results for the Edw fit\npath='{}/'.format('NR')\n\nxE = np.asarray(f[path+'xE'])\nqbootsigs = np.asarray(f[path+'qbootsigs'])\nqbootsigerrsu = np.asarray(f[path+'qbootsigerrsu'])\nqbootsigerrsl = np.asarray(f[path+'qbootsigerrsl'])\n\n\nf.close()\n```\n\n## Figure 3b of the Paper:\n\n\n```python\n#set up a 1d plot\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\nfor Cms_em, m_em in samples[np.random.randint(len(samples), size=100)]:\n ax1.plot(xl, np.sqrt(Sigss**2+(Cms_em+m_em*xl)**2), color=\"orange\", alpha=0.1)\n\nax1.plot(xl,upvec,color='r',linestyle=\"--\",linewidth=2, \\\n label='1$\\sigma$ fluct.')\nax1.plot(xl,dnvec,color='r',linestyle=\"--\",linewidth=2, \\\n label='')\n\nax1.plot(xl,np.sqrt(Sigss**2+(Cms+xl*slope)**2),color='g',linestyle=\"-\",linewidth=3, \\\n label='(C$_0$={:01.3}; m={:01.2E})'.format(Cms,slope))\n\nax1.errorbar(xE,qbootsigs, yerr=(qbootsigerrsl,qbootsigerrsu), \\\n color='k', marker='o', 
markersize=4,linestyle='none',label='simulated NR scatters', linewidth=2)\n\nymin = 0.025\nymax = 0.045\n\n\n\nax1.set_yscale('linear')\n#ax1.set_yscale('log')\nax1.set_xlim(10, 200) \nax1.set_ylim(ymin,ymax)\nax1.set_xlabel(r'recoil energy [keV]',**axis_font)\nax1.set_ylabel('ionization yield width',**axis_font)\nax1.grid(True)\nax1.yaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=1,prop={'size':22})\n#ax1.legend(bbox_to_anchor=(1.04,1),borderaxespad=0,prop={'size':22})\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n\nplt.tight_layout()\nplt.savefig('figures/paper_figures/MSyieldWidthFit_Figure3b.eps')\nplt.savefig('figures/paper_figures/MSyieldWidthFit_Figure3b.pdf')\nplt.show()\n```\n\n# 4. This Implies a Certain \"Effective\" Fano Factor\n\nIt is clear from the previous sections that the additions that need to be added to the account for the simulated multiple scatters are not as large as those required to explain the EDELWEISS data. This means that there is an \"extra\" unaccounted variance that needs to be added to the ionization yield. \n\nWe take the position that this variance is the intrinsic variance in the number of electron-hole pairs produced in a primary nuclear recoil reaction. Since this is analagous the the variance that is parameterized by the Fano factor for electron recoils, we call this the effective Fano factor for nuclear recoils. \n\nWe note that physically, this variance comes from a different mechanism than for electron recoils. In the nuclear recoil case most of the variance comes from the variation of the energy put into the phonon system from the primary recoil. In electron recoils there is little to no energy put into the phonon system from the primary recoil. Therefore, it is not surprising that the effective Fano factor can be significantly larger than the electron-recoil counterpart. \n\nSince we have accurately modeled the ionization yield variance without severe approximation (we do assume the number of electron-hole pairs is distributed normally; a very mild assumption when large numbers of pairs are expected), we can also include an intrinsic Fano factor in the modeling. Effectively we make the following replacement:\n\n\\begin{equation}\n\\tilde{\\sigma}_{Q}^0(E_r) \\rightarrow \\tilde{\\sigma}_{Q}^0(E_r;F_n),\n\\end{equation}\n\nWhere $F_n$ is the effective nuclear recoil Fano factor. \n\nTo extract the effective Fano factor for the nuclear recoils we need to come up with a parameter C$_F$ which is a function of recoil energy and is a corrected version of the measured \"widening\" parameter C from the EDELWEISS data and the widening parameter C$^{\\prime}$ from the effect of multiple-scattering. \n\nThe corrected parameter C$_F$ is assumed to be due to the effective Fano factor for nuclear recoils and is given by:\n\n\\begin{equation}\nC_F = \\sqrt{C^2 - C^{\\prime 2}}.\n\\end{equation}\n\nThis parameter can be used to extract the effective Fano factor at a given recoil energy by applying our $\\tilde{E}_r$-Q plane model (from `QEr_2D_joint.ipynb`), with an arbitrary Fano factor, until the correct ionization yield (Q) width is obtained (see `Qwidth_confirm.ipynb`). 
Mathematically this corresponds to adjusting F$_n$ until the following equality is satisfied:\n\n\\begin{equation}\n\\tilde{\\sigma}_{Q}^0(E_r;F_n) = \\sqrt{\\left(\\tilde{\\sigma}_{Q}^0(E_r)\\right)^2 + C_F^2}.\n\\end{equation}\n\n## Uncertainties on F$_n$\n\nSince both C and C$^{\\prime}$ have uncertainty it is necessary to propagate that uncertainty to F$_n$. If we call the uncertainty (1$\\sigma$) on C $\\sigma$ and on C$^{\\prime}$ $\\sigma^{\\prime}$, then the uncertainty on C$_F$ is given by:\n\n\\begin{equation}\n\\sigma_{C_F} = \\frac{1}{\\sqrt{C^2 - C^{\\prime 2}}} \\sqrt{C^2 \\sigma^2 + C^{\\prime 2} \\sigma^{\\prime 2}}.\n\\end{equation}\n\nThese uncertainties are propagated to the extracted F$_n$ by solving the following equality for F$^+_n$ and F$^-_n$ which represent the corresponding upper and lower boundaries on F$_n$. \n\n\\begin{equation}\n\\begin{aligned}\n\\tilde{\\sigma}_{Q}^0(E_r;F^+_n) &= \\sqrt{\\left(\\tilde{\\sigma}_{Q}^0(E_r)\\right)^2 + \\left(C_F + \\sigma_{C_F}\\right)^2} \\\\\n\\tilde{\\sigma}_{Q}^0(E_r;F^-_n) &= \\sqrt{\\left(\\tilde{\\sigma}_{Q}^0(E_r)\\right)^2 + \\left(C_F - \\sigma_{C_F}\\right)^2} \n\\end{aligned}\n\\end{equation}\n\n\n```python\nimport fano_calc as fc\n\n(Er,F,Fup,Fdn) = fc.RWCalcFMCMC('data/mcmc_fano.h5')\n```\n\n GGA3/4.0/5.556E-02/0.0381/\n True\n\n\n# Figure 4 of the Paper:\n\n\n```python\n#set up a plot\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nfrom mpl_toolkits.axes_grid1.inset_locator import InsetPosition\nfig,axes = plt.subplots(1,1,figsize=(9.0,8.0),sharex=True)\nax1 = axes\n\nxmax=10\n\n#ax1.errorbar(ddata_e,ddata_fluct_F,yerr=[ddata_fluct_F_err,ddata_fluct_F_err], marker='o', markersize=8, \\\n# ecolor='k',color='k', linestyle='none', label='Dougherty eff. F', linewidth=2)\n\n\n#ax1.plot (X, diff, 'm-', label='Thomas-Fermi (newgrad)')\n#ax1.plot (Esi(epr), 100*np.sqrt(f_Omega2_eta2(epr))*ylindv(1000*Esi(epr)), 'g-', label='$\\Omega/\\epsilon$ (NAC III approx. D)')\nax1.plot (Er, F, 'k-', label='extracted Ge eff. 
Fano')\nax1.plot (Er, Fup, 'b', label='')\nax1.plot (Er, Fdn, 'b', label='')\n\n\nblue = '#118DFA'\nax1.fill_between(Er,Fdn,Fup,facecolor=blue,alpha=0.5,label='1$\\sigma$ statistical region')\n\n\nax1.set_yscale('linear')\nax1.set_xscale('linear')\nax1.set_xlim(10, 200)\nax1.set_ylim(6,300)\nax1.set_xlabel('recoil energy ($E_r$) [keV]',**axis_font)\nax1.set_ylabel('effective Fano factor (F$_n$)',**axis_font)\n#ax1.grid(True)\n#ax1.xaxis.grid(True,which='minor',linestyle='--')\nax1.legend(loc=4,prop={'size':22})\n\n\n###Make inset\nbbox_ll_x = 0.07\nbbox_ll_y = -0.0225\nbbox_w = 1\nbbox_h = 1\neps = 0.01\naxins = inset_axes(ax1, height=\"25%\", width=\"50%\", bbox_to_anchor=(bbox_ll_x,bbox_ll_y,bbox_w-bbox_ll_x,bbox_h), loc='upper left',bbox_transform=ax1.transAxes)\n#ax1.add_patch(plt.Rectangle((bbox_ll_x, bbox_ll_y+eps), bbox_w-eps-bbox_ll_x, bbox_h-eps, ls=\"--\", ec=\"c\", fc=\"None\",\n# transform=ax1.transAxes))\n\n#axins = plt.axes([0,0,1,1])\n#axins_pos = InsetPosition(ax3, [0.25, 0.65, 0.7, 0.3])\n#axins.set_axes_locator(axins_pos)\n\n# larger region than the original image\nx1, x2, y1, y2 = 7, 30, 0, 30\naxins.set_xlim(x1, x2)\naxins.set_ylim(y1, y2)\naxins.plot (Er, F, 'k-', label='')\naxins.plot (Er, Fup, 'b', label='')\naxins.plot (Er, Fdn, 'b', label='')\naxins.fill_between(Er,Fdn,Fup,facecolor=blue,alpha=0.5,label='')\naxins.yaxis.grid(True,which='minor',linestyle='--')\naxins.xaxis.grid(True,which='minor',linestyle='--')\naxins.grid(True)\n####\n\nfor axis in ['top','bottom','left','right']:\n ax1.spines[axis].set_linewidth(2)\n axins.spines[axis].set_linewidth(2)\n\n#plt.tight_layout()\n#plt.savefig('figures/figure.png')\nplt.savefig('figures/paper_figures/GeFano_Figure4.eps')\nplt.savefig('figures/paper_figures/GeFano_Figure4.pdf')\nplt.show()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "34d3bcb89156783a9014e6844769506a743bb5dd", "size": 669084, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "analysis_notebooks/nrFano_paper.ipynb", "max_stars_repo_name": "villano-lab/nrFano_paper2019", "max_stars_repo_head_hexsha": "f44565bfb3e45b2dfbe2a73cba9f620a7120abd7", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2020-04-06T17:27:33.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T20:38:54.000Z", "max_issues_repo_path": "analysis_notebooks/nrFano_paper.ipynb", "max_issues_repo_name": "villano-lab/nrFano_paper2019", "max_issues_repo_head_hexsha": "f44565bfb3e45b2dfbe2a73cba9f620a7120abd7", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "analysis_notebooks/nrFano_paper.ipynb", "max_forks_repo_name": "villano-lab/nrFano_paper2019", "max_forks_repo_head_hexsha": "f44565bfb3e45b2dfbe2a73cba9f620a7120abd7", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 554.7960199005, "max_line_length": 178380, "alphanum_fraction": 0.9399567169, "converted": true, "num_tokens": 11207, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.46879062662624377, "lm_q2_score": 0.18476751738161779, "lm_q1q2_score": 0.08661728025350399}} {"text": "```python\nfrom IPython.core.display import display_html\nfrom urllib.request import urlopen\n\ncssurl = 'http://j.mp/1DnuN9M'\ndisplay_html(urlopen(cssurl).read(), raw=True)\n```\n\n\n\n\n\n\n\n\n\n\n# Filtro de suavizado\n\n## El problema\n\nQueremos recuperar una imagen corrupta, es decir, una imagen que a traves de un proceso desconocido, perdi\u00f3 informaci\u00f3n.\n\nPero como te puedes imaginar la informaci\u00f3n simplemente no es recuperable de la nada, en esta ocasi\u00f3n intentaremos recuperar algo de la definici\u00f3n de la imagen tratando de minimizar los bordes visibles en imagen, es decir suavizarla.\n\nEmpecemos primero por mostrar nuestra imagen:\n\n\n```python\n# Se importan funciones para graficar y se inicializa con graficas en linea\n%matplotlib inline\nfrom matplotlib.pyplot import imshow, cm, figure\n```\n\n\n```python\n# Se importa funcion para cargar imagenes\nfrom scipy.ndimage import imread\n```\n\n\n```python\n# Se guardan las rutas a los archivos en variables para facil acceso\ncorrecta = \"imagenes/stones.jpg\"\ncorrupta = \"imagenes/stones_c.jpg\"\n```\n\n\n```python\n# Se lee la imagen del archivo a una variable de python y se grafica\nim_corrupta = imread(corrupta)\n\nf = figure(figsize=(8,6))\nax = imshow(im_corrupta, cmap=cm.gray, interpolation='none');\n\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\n\nax.axes.spines[\"right\"].set_color(\"none\")\nax.axes.spines[\"left\"].set_color(\"none\")\nax.axes.spines[\"top\"].set_color(\"none\")\nax.axes.spines[\"bottom\"].set_color(\"none\")\n```\n\nComo podemos ver la imagen a perdido definici\u00f3n al utilizar un metodo de compresi\u00f3n muy ingenuo, el cual simplemente repite la misma informaci\u00f3n una y otra vez, tomemos una muestra de los datos para ilustrar esto mejor:\n\n\n```python\ntamano_muestra = 12\nmuestra = im_corrupta[0:tamano_muestra, 0:tamano_muestra]\nmuestra\n```\n\n\n\n\n array([[44, 44, 44, 44, 33, 33, 33, 33, 17, 17, 17, 17],\n [44, 44, 44, 44, 33, 33, 33, 33, 17, 17, 17, 17],\n [44, 44, 44, 44, 33, 33, 33, 33, 17, 17, 17, 17],\n [44, 44, 44, 44, 33, 33, 33, 33, 17, 17, 17, 17],\n [43, 43, 43, 43, 34, 34, 34, 34, 20, 20, 20, 20],\n [43, 43, 43, 43, 34, 34, 34, 34, 20, 20, 20, 20],\n [43, 43, 43, 43, 34, 34, 34, 34, 20, 20, 20, 20],\n [43, 43, 43, 43, 34, 34, 34, 34, 20, 20, 20, 20],\n [45, 45, 45, 45, 34, 34, 34, 34, 26, 26, 26, 26],\n [45, 45, 45, 45, 34, 34, 34, 34, 26, 26, 26, 26],\n [45, 45, 45, 45, 34, 34, 34, 34, 26, 26, 26, 26],\n [45, 45, 45, 45, 34, 34, 34, 34, 26, 26, 26, 26]], dtype=uint8)\n\n\n\nLo cual graficamente se ve:\n\n\n```python\nimshow(muestra, cmap=cm.gray, interpolation='none');\n```\n\n## La soluci\u00f3n\n\nMuy bien, es momento de pensar!\n\nSi lo que queremos es **minimizar** las *diferencias* entre dos valores contiguos, es decir\n\n$$\nx_{(i+1)j} - x_{ij}\n$$\n\npodemos empezar restandolos y ver que pasa:\n\n\n```python\nfrom numpy import matrix, eye, array\n```\n\n\n```python\n# Creamos una matriz identidad y la trasladamos para obtener el valor de la celda\n# contigua derecha\nI = eye(tamano_muestra, dtype=int).tolist()\n# Agregamos un vector cero, por el momento\nceros = [0 for i in range(tamano_muestra)]\nid_trasladada = matrix(array(I[1:tamano_muestra] + [ceros]))\nid_trasladada\n```\n\n\n\n\n matrix([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 
1, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])\n\n\n\n\n```python\nmuestra_rest = matrix(muestra) * id_trasladada - matrix(muestra)\nmuestra_rest\n```\n\n\n\n\n matrix([[-44, 0, 0, 0, 11, 0, 0, 0, 16, 0, 0, 0],\n [-44, 0, 0, 0, 11, 0, 0, 0, 16, 0, 0, 0],\n [-44, 0, 0, 0, 11, 0, 0, 0, 16, 0, 0, 0],\n [-44, 0, 0, 0, 11, 0, 0, 0, 16, 0, 0, 0],\n [-43, 0, 0, 0, 9, 0, 0, 0, 14, 0, 0, 0],\n [-43, 0, 0, 0, 9, 0, 0, 0, 14, 0, 0, 0],\n [-43, 0, 0, 0, 9, 0, 0, 0, 14, 0, 0, 0],\n [-43, 0, 0, 0, 9, 0, 0, 0, 14, 0, 0, 0],\n [-45, 0, 0, 0, 11, 0, 0, 0, 8, 0, 0, 0],\n [-45, 0, 0, 0, 11, 0, 0, 0, 8, 0, 0, 0],\n [-45, 0, 0, 0, 11, 0, 0, 0, 8, 0, 0, 0],\n [-45, 0, 0, 0, 11, 0, 0, 0, 8, 0, 0, 0]])\n\n\n\nEsta matriz nos muestra la diferencia con el elemento contiguo, si lo analizamos graficamente:\n\n\n```python\nimshow(muestra_rest, cmap=cm.gray, interpolation='none');\n```\n\npodemos ver que solo hay diferencia entre los conjuntos de pixeles de la imagen que fueron eliminados.\n\nAhora, el punto es minimizar estas diferencias segun un factor de desempe\u00f1o, y como pudiste notar en el ejemplo, pueden haber valores negativos, por lo que una buena idea es hacer el factor de desempe\u00f1o un factor cuadratico:\n\n$$\n\\left|\\left| x_{(i+1)j} - x_{ij} \\right|\\right|^2\n$$\n\ny utilizando la forma matricial, que honestamente es mucho mas util en el caso de estas imagenes, nos queda:\n\n$$\n\\left|\\left| X (I_t - I) \\right|\\right|^2 = \\left|\\left| X D_1 \\right|\\right|^2\n$$\n\nen donde:\n\n$$\nI =\n\\begin{pmatrix}\n1 & 0 & 0 & \\dots & 0 & 0 & 0 \\\\\n0 & 1 & 0 & \\dots & 0 & 0 & 0 \\\\\n0 & 0 & 1 & \\dots & 0 & 0 & 0 \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & \\vdots \\\\\n0 & 0 & 0 & \\dots & 1 & 0 & 0 \\\\\n0 & 0 & 0 & \\dots & 0 & 1 & 0\n\\end{pmatrix}\n$$\n\n$$\nI_t =\n\\begin{pmatrix}\n0 & 1 & 0 & \\dots & 0 & 0 & 0 \\\\\n0 & 0 & 1 & \\dots & 0 & 0 & 0 \\\\\n0 & 0 & 0 & \\dots & 0 & 0 & 0 \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & \\vdots \\\\\n0 & 0 & 0 & \\dots & 0 & 1 & 0 \\\\\n0 & 0 & 0 & \\dots & 0 & 0 & 1\n\\end{pmatrix}\n$$\n\ny por lo tanto $D_1$ es de la forma:\n\n$$\nD_1 =\n\\begin{pmatrix}\n-1 & 1 & 0 & \\dots & 0 & 0 & 0 \\\\\n0 & -1 & 1 & \\dots & 0 & 0 & 0 \\\\\n0 & 0 & -1 & \\dots & 0 & 0 & 0 \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & \\vdots \\\\\n0 & 0 & 0 & \\dots & -1 & 1 & 0 \\\\\n0 & 0 & 0 & \\dots & 0 & -1 & 1\n\\end{pmatrix}\n$$\n\n
\n\nCabe mencionar que $I$ e $I_t$ no son matrices cuadradas, ya que tienen una columna mas, principalmente para ajustar el hecho de que la operaci\u00f3n de resta es binaria y necesitamos hacer una operaci\u00f3n por cada uno de las $n$ columnas, por lo que necesitaremos $n + 1$ operandos; sin embargo al obtener el factor cuadrado, nos quedar\u00e1 una matriz de las dimensiones adecuadas. \n\n
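\n\nComo ejemplo ilustrativo (con una matriz peque\u00f1a de valores hipot\u00e9ticos, no con la imagen completa), el siguiente bosquejo muestra que multiplicar por esta diferencia de matrices equivale a restar columnas contiguas; aqui usamos una variante que omite la columna de frontera que aparece en `muestra_rest`.\n\n```python\nimport numpy as np\n\n# Matriz peque\u00f1a de ejemplo (valores hipoteticos, no es la imagen real)\nX = np.array([[44., 44., 33., 33.],\n              [44., 44., 33., 33.],\n              [43., 43., 34., 34.],\n              [43., 43., 34., 34.]])\n\nm = X.shape[1]\n# Operador de diferencias: la columna j del producto es X[:, j+1] - X[:, j]\nD = np.eye(m, m - 1, k=-1) - np.eye(m, m - 1)\n\nprint(np.dot(X, D))\nprint(np.allclose(np.dot(X, D), np.diff(X, axis=1)))  # misma operacion con numpy\n```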
\n\nAsi pues, este factor cuadrado, lo denotaremos por la funci\u00f3n $f_1(X)$ de la siguiente manera:\n\n$$\nf_1(X) = \\left|\\left| X D_1 \\right|\\right|^2 = D_1^T X^T X D_1\n$$\n\ny de la misma manera obtendremos un operador para la diferencia entre los elementos contiguos verticalmente, el cual se ver\u00e1:\n\n$$\nf_2(X) = \\left|\\left| D_2 X \\right|\\right|^2 = X^T D_2^T D_2 X\n$$\n\nen donde $D_2$ es de la forma:\n\n$$\nD_2 =\n\\begin{pmatrix}\n-1 & 0 & 0 & \\dots & 0 & 0 \\\\\n1 & -1 & 0 & \\dots & 0 & 0 \\\\\n0 & 1 & -1 & \\dots & 0 & 0 \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots \\\\\n0 & 0 & 0 & \\dots & -1 & 0 \\\\\n0 & 0 & 0 & \\dots & 1 & -1 \\\\\n0 & 0 & 0 & \\dots & 0 & 1\n\\end{pmatrix}\n$$\n\nAsi pues, nuestro objetivo es minimizar la siguiente expresi\u00f3n:\n\n$$\n\\min_{X \\in \\mathbb{R}^{n \\times m}} f_1(X) + f_2(X)\n$$\n\nSin embargo tenemos que considerar que una optimizaci\u00f3n perfecta nos llevaria al caso en que todos los valores son exactamente iguales, por lo que agregaremos un termino para penalizar una diferencia demasiado grande con la imagen a suavizar, el cual simplemente es la diferencia entre la imagen obtenida y la imagen corrupta:\n\n$$\nf_3(X) = \\left|\\left| X - X_C \\right|\\right|^2\n$$\n\nPor lo que nuestra expresi\u00f3n a minimizar se vuelve:\n\n$$\n\\min_{X \\in \\mathbb{R}^{n \\times m}} V(X) = \\min_{X \\in \\mathbb{R}^{n \\times m}} \\delta \\left( f_1(X) + f_2(X) \\right) + f_3(X) \\quad \\delta > 0\n$$\n\nen donde $\\delta$ es la ponderaci\u00f3n que le damos al termino *suavizante*.\n\n
\n $\\DeclareMathOperator{\\trace}{tr}$\n
\n\n
\n\nCabe hacer la aclaraci\u00f3n de que hasta el momento hemos utilizado una norma matricial, normal, sin embargo ahora utilizaremos la norma de Frobenius, la cual se define como:\n\n$$\nf_1(X) = \\left|\\left| X D_1 \\right|\\right|_F^2 = \\trace{(D_1^T X^T X D_1)}\n$$\n\ny esta nos provee una manera facil de calcular la forma cuadratica que queremos. Mas a\u00fan, esta $f_1(X) \\in \\mathbb{R}$, por lo que podemos usar los conceptos de calculo variacional que hemos aprendido.\n\n
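\n\nPodemos verificar num\u00e9ricamente esta identidad con matrices aleatorias (solo como comprobaci\u00f3n ilustrativa):\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\nX = np.random.normal(size=(5, 7))\nD1 = np.eye(7, 8, k=1) - np.eye(7, 8)  # misma estructura bidiagonal que D_1\n\nlhs = np.linalg.norm(np.dot(X, D1), 'fro')**2\nrhs = np.trace(D1.T.dot(X.T).dot(X).dot(D1))\nprint(np.isclose(lhs, rhs))  # True\n```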
\n\nAhora empezamos a calcular el valor de estas funciones alrededor de $X$ con una variaci\u00f3n $H$.\n\n$$\n\\begin{align}\nf_1(X + H) &= \\trace{\\left( D_1^T (X + H)^T (X + H) D_1 \\right)} \\\\\n&= \\trace{\\left( D_1^T (X^T + H^T) (X + H) D_1 \\right)} \\\\\n&= \\trace{\\left( D_1^T (X^T X + X^T H + H^T X + H^T H) D_1 \\right)} \\\\\n&= \\trace{\\left( D_1^T X^T X D_1 + D_1^T X^T H D_1 + D_1^T H^T X D_1 + D_1^T H^T H D_1 \\right)} \\\\\n&= \\trace{\\left( D_1^T X^T X D_1 \\right)} + \\trace{\\left( D_1^T X^T H D_1 \\right)} + \\trace{\\left( D_1^T H^T X D_1 \\right)} + \\trace{\\left( D_1^T H^T H D_1 \\right)} \\\\\n\\end{align}\n$$\n\nAqui hacemos notar que el primer termino es $f_1(X) = \\trace{\\left( D_1^T X^T X D_1 \\right)}$, el segundo y tercer termino son el mismo, ya que la traza es invariante ante la transposici\u00f3n y el ultimo termino es de orden superior, $o\\left(\\left|\\left|H\\right|\\right|_F\\right)$.\n\n
\n\nRecordemos que la variable con respecto a la que estamos haciendo estos calculos es la perturbaci\u00f3n $H$, por lo que los terminos de orden superior estan relacionados a $H$ y no a $X$ la cual asumimos es nuestro optimo.\n\n
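\n\nEsta expansi\u00f3n se puede comprobar num\u00e9ricamente con matrices aleatorias: los dos terminos cruzados son iguales y el termino restante es cuadratico en $H$ (comprobaci\u00f3n ilustrativa):\n\n```python\nimport numpy as np\n\nnp.random.seed(1)\nX = np.random.normal(size=(5, 7))\nH = np.random.normal(size=(5, 7))\nD1 = np.eye(7, 8, k=1) - np.eye(7, 8)\n\ndef f1(M):\n    return np.trace(D1.T.dot(M.T).dot(M).dot(D1))\n\nexacto = f1(X + H) - f1(X)\nlineal = 2 * np.trace(D1.T.dot(X.T).dot(H).dot(D1))\ncuadratico = np.trace(D1.T.dot(H.T).dot(H).dot(D1))\nprint(np.isclose(exacto, lineal + cuadratico))  # True\n```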
\n\n\nSi desarrollamos la expasi\u00f3n de la serie de Taylor alrededor de $X$ con una perturbaci\u00f3n $H$, notaremos que los terminos que obtuvimos corresponden a los de esta expansi\u00f3n:\n\n$$\nf_1(X + H) = f_1(X) + f_1'(X) \\cdot H + o\\left(\\left|\\left| H \\right|\\right|_F\\right)\n$$\n\ny por lo tanto:\n\n$$\nf_1'(X) \\cdot H = 2 \\trace{\\left( D_1^T X^T H D_1 \\right)}\n$$\n\nSi expandimos las otras dos funciones alrededor de X con una perturbaci\u00f3n $H$, podremos ver que:\n\n$$\nf_2'(X) \\cdot H = 2 \\trace{\\left( X^T D_2^T D_2 H \\right)}\n$$\n\n$$\nf_3'(X) \\cdot H = 2 \\trace{\\left( \\left( X - X_C \\right)^T H \\right)}\n$$\n\nAhora, por superposici\u00f3n podemos asegurar que nuestro criterio de desempe\u00f1o $V(X)$ tiene una derivada de la forma:\n\n$$\n\\begin{align}\nV'(X) \\cdot H &= \\left( f_1'(X) \\cdot H + f_2'(X) \\cdot H \\right) \\delta + f_3'(X) \\cdot H \\\\\n&= \\left( 2 \\trace{\\left( D_1^T X^T H D_1 \\right)} + 2 \\trace{\\left( X^T D_2^T D_2 H \\right)} \\right) \\delta + 2 \\trace{\\left( \\left( X - X_C \\right)^T H \\right)} \\\\\n&= 2 \\trace{\\left[ \\left( \\left( D_1^T X^T H D_1 \\right) + \\left( X^T D_2^T D_2 H \\right) \\right) \\delta + \\left( X - X_C \\right)^T H \\right]}\n\\end{align}\n$$\n\ny al utilizar la condici\u00f3n de optimalidad de primer orden tenemos que:\n\n$$\nV'(X) \\cdot H = 2 \\trace{\\left[ \\left( \\left( D_1^T X^T H D_1 \\right) + \\left( X^T D_2^T D_2 H \\right) \\right) \\delta + \\left( X - X_C \\right)^T H \\right]} = 0\n$$\n\ny al hacer manipulaci\u00f3n algebraica, obtenemos que:\n\n$$\n\\begin{align}\n\\trace{\\left[ \\left( \\left( D_1^T X^T H D_1 \\right) + \\left( X^T D_2^T D_2 H \\right) \\right) \\delta + \\left( X - X_C \\right)^T H \\right]} &= 0 \\\\\n\\trace{\\left[ \\left( \\left( D_1^T H^T X D_1 \\right) + \\left( H^T D_2^T D_2 X \\right) \\right) \\delta + H^T \\left( X - X_C \\right) \\right]} &= 0 \\\\\n\\trace{\\left[ \\left( \\left( H^T X D_1 D_1^T \\right) + \\left( H^T D_2^T D_2 X \\right) \\right) \\delta + H^T \\left( X - X_C \\right) \\right]} &= 0 \\\\\n\\trace{\\left[ H^T \\left( X D_1 D_1^T + D_2^T D_2 X \\right) \\delta + \\left( X - X_C \\right) \\right]} &= 0\n\\end{align}\n$$\n\nEn este punto nos preguntamos, para que condiciones de perturbaci\u00f3n queremos que nuestra condici\u00f3n de optimalidad se cumpla, por lo que si exigimos que esto se cumpla para toda $H$, tenemos que:\n\n$$\n\\left( X D_1 D_1^T + D_2^T D_2 X \\right) \\delta + \\left( X - X_C \\right) = 0\n$$\n\nlo cual implica que:\n\n$$\nX \\delta D_1 D_1^T + ( \\delta D_2^T D_2 + I) X = X_C\n$$\n\nlo cual tiene la forma de la ecuaci\u00f3n de Lyapunov:\n\n$$\nA X + X B = Q\n$$\n\nen donde $A$ y $B$ son de la forma:\n\n$$\nA = \\delta D_2^T D_2 + I \\quad B = \\delta D_1 D_1^T\n$$\n\npor lo que ya encontramos una forma de programar este algoritmo de suavizado, utilizando la funci\u00f3n ```solve_sylvester``` proporcionada por el paquete Scipy.\n\nAhora regresemos a la programaci\u00f3n; lo que tenemos que construir son las matrices $D_1$ y $D_2$ para incorporarlas a una funci\u00f3n que calcule todo en linea.\n\nEmpecemos construyendo una de las filas de esta matriz. 
Recordemos que $D_1$ es de la forma:\n\n$$\nD_1 =\n\\begin{pmatrix}\n-1 & 1 & 0 & \\dots & 0 & 0 & 0 \\\\\n0 & -1 & 1 & \\dots & 0 & 0 & 0 \\\\\n0 & 0 & -1 & \\dots & 0 & 0 & 0 \\\\\n\\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & \\vdots \\\\\n0 & 0 & 0 & \\dots & -1 & 1 & 0 \\\\\n0 & 0 & 0 & \\dots & 0 & -1 & 1\n\\end{pmatrix}\n$$\n\npor lo que primero tenemos que construir un arreglo de la forma:\n\n$$\n\\begin{pmatrix}\n-1 & 1 & 0 & \\dots & 0 & 0 & 0\n\\end{pmatrix}\n$$\n\nLa siguiente funci\u00f3n describe una manera **dificil** de conseguir esto, sin embargo para efectos de demostraci\u00f3n servir\u00e1:\n\n\n```python\ndef fun(i, tot):\n '''Arreglo especial\n Esta funcion crea un arreglo de tama\u00f1o tot con un -1 en el elemento i y un\n 1 en el elemento i+1, siendo los demas lugares del arreglo ceros:\n\n indice -> 0, 1, ..., i-1, i, i+1, i+2, ..., tot\n arreglo -> [0, 0, ..., 0, -1, 1, 0, ..., 0].\n\n Ejemplo\n -------\n >>> fun(3, 5)\n array([ 0, 0, -1, 1, 0])\n '''\n\n # Se importan funciones necesarias\n from numpy import array\n\n # Se define el inicio del arreglo\n if i == 0:\n a = [-1]\n a.append(1)\n else:\n a = [0]\n\n # Se incluyen numeros restantes en el arreglo\n for t in range(tot - 1)[1:]:\n if i == t:\n a.append(-1)\n a.append(1)\n else:\n a.append(0)\n\n # Se convierte en arreglo de numpy el resultado\n return array(a)\n```\n\nCuando mandamos llamar esta funci\u00f3n para que nos de un arreglo de diez elementos, con el $-1$ en el segundo lugar, obtendremos:\n\n\n```python\nfun(1, 10)\n```\n\n\n\n\n array([ 0, -1, 1, 0, 0, 0, 0, 0, 0, 0])\n\n\n\n
\n\nPython lista los arreglos, y en general todas sus estructuras, empezando en ```0```, por lo que el indice ```1``` corresponde al segundo lugar.\n\n
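\n\nComo nota al margen (una alternativa ilustrativa, no la construcci\u00f3n que seguiremos abajo), la matriz que construiremos enseguida fila por fila tambien se puede obtener en una sola llamada vectorizada de numpy:\n\n```python\nimport numpy as np\n\n# Equivalente vectorizado de [fun(i, 11) for i in range(10)]\nD1_rapida = np.eye(10, 11, k=1) - np.eye(10, 11)\nprint(D1_rapida.astype(int))\n```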
\n\nY ahora, utilizando una funci\u00f3n especial de Python, crearemos un arreglo de arreglos, utilizando una sintaxis muy parecida a la de una definici\u00f3n matem\u00e1tica de la forma:\n\n$$\n\\left\\{ f(i) : i \\in [0, 10] \\right\\}\n$$\n\n\n```python\narreglo_de_arreglos = [fun(i, 11) for i in range(10)]\narreglo_de_arreglos\n```\n\n\n\n\n [array([-1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]),\n array([ 0, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0]),\n array([ 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0]),\n array([ 0, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0]),\n array([ 0, 0, 0, 0, -1, 1, 0, 0, 0, 0, 0]),\n array([ 0, 0, 0, 0, 0, -1, 1, 0, 0, 0, 0]),\n array([ 0, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0]),\n array([ 0, 0, 0, 0, 0, 0, 0, -1, 1, 0, 0]),\n array([ 0, 0, 0, 0, 0, 0, 0, 0, -1, 1, 0]),\n array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 1])]\n\n\n\nEsto se puede convertir facilmente en una matriz por medio de la instrucci\u00f3n ```matrix```.\n\n\n```python\nmatrix(arreglo_de_arreglos)\n```\n\n\n\n\n matrix([[-1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, -1, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, -1, 1, 0, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, -1, 1, 0, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, -1, 1, 0, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, -1, 1, 0, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, -1, 1, 0, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, -1, 1, 0, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0, -1, 1, 0],\n [ 0, 0, 0, 0, 0, 0, 0, 0, 0, -1, 1]])\n\n\n\nPor lo que estamos listos para juntar todos estos elementos en una funci\u00f3n que ejecute todo este flujo de trabajo:\n\n\n```python\ndef suavizado_imagen(imagen_corrupta, delta):\n '''Suavizado de imagen\n \n Esta funcion toma la imagen especificada en la primer variable por su ruta, y\n le aplica un suavizado en proporcion al flotante pasado a la segunda variable.\n \n Ejemplo\n -------\n >>> suavizado_imagen(\"ruta/de/la/imagen.png\", 0.1)\n '''\n \n # Se importan funciones necesarias\n from matplotlib.pyplot import imshow, cm, figure\n from scipy.linalg import solve_sylvester\n from scipy.ndimage import imread\n from numpy import matrix, eye, array\n \n # Se define funcion auxiliar para las filas de la matriz D\n def fun(i, tot):\n '''Arreglo especial\n Esta funcion crea un arreglo de tama\u00f1o tot con un -1 en el elemento i y un\n 1 en el elemento i+1, siendo los demas lugares del arreglo ceros:\n \n indice -> 0, 1, ..., i-1, i, i+1, i+2, ..., tot\n arreglo -> [0, 0, ..., 0, -1, 1, 0, ..., 0].\n \n Ejemplo\n -------\n >>> fun(3, 5)\n array([ 0, 0, -1, 1, 0])\n '''\n \n # Se importan funciones necesarias\n from numpy import array\n \n # Se define el inicio del arreglo\n if i == 0:\n a = [-1]\n a.append(1)\n else:\n a = [0]\n \n # Se incluyen numeros restantes en el arreglo\n for t in range(tot - 1)[1:]:\n if i == t:\n a.append(-1)\n a.append(1)\n else:\n a.append(0)\n \n # Se convierte en arreglo de numpy el resultado\n return array(a)\n \n # Se importa la imagen a tratar y se obtiene sus dimensiones\n im_corrupta = imread(imagen_corrupta)\n n = im_corrupta.shape[0]\n m = im_corrupta.shape[1]\n \n # Se obtienen las matrices D1 y D2\n D1 = matrix(array([fun(i, n + 1) for i in range(n)]))\n D2 = matrix(array([fun(i, m + 1) for i in range(m)]))\n \n # Se obtiene la imagen suavizada al resolver la ecuacion de Lyapunov (o Sylvester)\n imagen_suavizada = solve_sylvester(eye(n) + delta*D1*D1.T,\n delta*D2*D2.T,\n im_corrupta)\n \n # Se dibuja la imagen suavizada\n f = figure(figsize=(8,6))\n ax = imshow(imagen_suavizada, cmap=cm.gray, interpolation='none')\n \n # Se quitan bordes de la grafica\n ax.axes.get_xaxis().set_visible(False)\n 
ax.axes.get_yaxis().set_visible(False)\n \n # Se hacen transparentes las lineas de los bordes\n ax.axes.spines[\"right\"].set_color(\"none\")\n ax.axes.spines[\"left\"].set_color(\"none\")\n ax.axes.spines[\"top\"].set_color(\"none\")\n ax.axes.spines[\"bottom\"].set_color(\"none\")\n```\n\n\n```python\n# Se prueba la funcion con un suavizado de 10\nsuavizado_imagen(corrupta, 10)\n```\n\nY hemos obtenido el resultado deseado...\n\n## La cereza del pastel\n\nContentos con nuestros resultados podriamos irnos a descansar, pero aun queda un truco mas. Ya que hemos obtenido una funci\u00f3n que ejecuta todo nuestro c\u00f3digo, podemos hacer que IPython la ejecute en linea al momento de darle un parametro diferente.\n\nPara esto utilizaremos un Widget de IPython:\n\n\n```python\n# Se importan widgets de IPython para interactuar con la funcion\nfrom IPython.html.widgets import interact, fixed\n```\n\n :0: FutureWarning: IPython widgets are experimental and may change in the future.\n\n\nDada la funci\u00f3n que obtuvimos, ahora solo tenemos que mandar llamar a la funci\u00f3n:\n\n```python\ninteract(funcion_con_codigo,\n parametro_fijo=fixed(param),\n parametro_a_variar=(inicio, fin))\n```\n\n\n```python\n# Se llama a la funcion interactiva\ninteract(suavizado_imagen, imagen_corrupta=fixed(corrupta), delta=(0.0, 10.0))\n```\n\nCon lo que solo tenemos que mover el deslizador para cambiar ```delta``` y ver los resultados de estos cambios.\n\n\n```python\n# Se muestra la imagen correcta\nim_correcta = imread(correcta)\n\nf = figure(figsize=(8,6))\nax = imshow(im_correcta, cmap=cm.gray, interpolation='none');\n\nax.axes.get_xaxis().set_visible(False)\nax.axes.get_yaxis().set_visible(False)\n\nax.axes.spines[\"right\"].set_color(\"none\")\nax.axes.spines[\"left\"].set_color(\"none\")\nax.axes.spines[\"top\"].set_color(\"none\")\nax.axes.spines[\"bottom\"].set_color(\"none\")\n```\n\nEspero te hayas divertido con esta larga explicaci\u00f3n y al final sepas un truco mas.\n\nSi deseas compartir este Notebook de IPython utiliza la siguiente direcci\u00f3n:\n\nhttp://bit.ly/1CJNEBn\n\no bien el siguiente c\u00f3digo QR:\n\n\n\n\n```python\n# Codigo para generar codigo :)\nfrom qrcode import make\nimg = make(\"http://bit.ly/1CJNEBn\")\nimg.save(\"codigos/suave.jpg\")\n```\n", "meta": {"hexsha": "1a16af318906dfb9231e7a45c2f8c528f7771eb5", "size": 452272, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "IPythonNotebooks/Control Optimo/Filtro suavizado.ipynb", "max_stars_repo_name": "robblack007/DCA", "max_stars_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "IPythonNotebooks/Control Optimo/Filtro suavizado.ipynb", "max_issues_repo_name": "robblack007/DCA", "max_issues_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "IPythonNotebooks/Control Optimo/Filtro suavizado.ipynb", "max_forks_repo_name": "robblack007/DCA", "max_forks_repo_head_hexsha": "0ea5f8b613e2dabe1127b857c7bfe9be64c52d20", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-03-20T12:44:13.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-20T12:44:13.000Z", 
"avg_line_length": 386.2271562767, "max_line_length": 175762, "alphanum_fraction": 0.9125570453, "converted": true, "num_tokens": 8957, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3486451488696663, "lm_q2_score": 0.24798742624020276, "lm_q1q2_score": 0.08645961313932088}} {"text": "# Scientific documents with $\\LaTeX$\n\n## Introduction\n\nIn your research, you will produce papers, reports and—very importantly—your thesis. These documents can be written using a WYSIWYG (What You See Is What You Get) editor (e.g., Word). However, an alternative especially suited for scientific publications is LaTeX. In LaTeX, the document is written in a text file (`.tex`) with certain typesetting (tex) syntax. Text formatting is done using markups (like HTML). The file is then \"compiled\" (like source code of a programming language) into a file – typically in PDF.\n\n### Why $\\LaTeX$?\n\nA number of reasons: \n\n* The input is a small, portable text file\n* LaTeX\u00a0compilers are freely available for all OS'\n* Exactly the same result on any computer (not true for Word, for example)\n* LaTeX\u00a0produces beautiful, professional looking docs\n* Images are easy to embed and annotate \n* Mathematical formulas (esp complex ones) are easy to write\n* LaTeX\u00a0is very stable – current version basically same since 1994! (9 major versions of MS Word since 1994 – with compatibility issues)\n* LaTeX\u00a0is free!\n* You can focus on content, and not worry so much about formatting while writing \n* An increasing number of Biology journals provide $\\LaTeX$\u00a0templates, making formatting quicker. \n* Referencing (bibliography) is easy (and can also be version controlled) and works with tools like Mendeley and Zotero\n* Plenty of online support available – your question has probably already been answered\n* You can integrate LaTeX\u00a0into a workflow to auto-generate lengthy and complex documents (like your thesis).\n\n---\n\n\n\n
LaTeX documents scale up better then WYSIWYG editors.
\n\n---\n\n### Limitations of $\\LaTeX$\n\n* It has a steeper learning curve.\n* Can be difficult to manage revisions with multiple authors – especially if they don't use LaTeX! (Cue: Windows on a virtual machine!)\n* Tracking changes are not available out of the box (but can be enabled using a suitable package) \n* Typesetting tables can be a bit complex.\n* Images and floats are easy to embed, and won't jump around like Word, but if you don't use the right package, they can be difficult to place where you want!\n\n### Installing LaTeX\n\nType this in terminal: \n\n```bash\nsudo apt-get install texlive-full texlive-fonts-recommended texlive-pictures texlive-latex-extra imagemagick\n```\nIt's a large installation, and will take some time. \n\nWe will use a text editor in this lecture, but you can use one of a number of dedicated editors (e.g., texmaker,\nGummi, TeXShop, etc.) There are also WYSIWYG frontends (e.g., Lyx, TeXmacs). \n\n[Overleaf](https://www.overleaf.com/) is also very good (and works with git), especially for collaborating with non LaTeX-ers (your university may have a blanket license for the pro version).\n\n## A first LaTeX\u00a0example\n\n$\\star$ In your code editor type the following in a file called `FirstExample.tex` and save it in a suitable location in your coursework directory (e.g, `/Week1/Code/`):\n\n```tex\n\n\\documentclass[12pt]{article}\n\n\\title{A Simple Document}\n\n\\author{Your Name}\n\n\\date{}\n\n\\begin{document}\n \\maketitle\n \n \\begin{abstract}\n This paper must be cool!\n \\end{abstract}\n \n \\section{Introduction}\n Blah Blah!\n \n \\section{Materials \\& Methods}\n One of the most famous equations is:\n \\begin{equation}\n E = mc^2\n \\end{equation}\n This equation was first proposed by Einstein in 1905 \n \\cite{einstein1905does}.\n \n \\bibliographystyle{plain}\n \\bibliography{FirstBiblio}\n\\end{document}\n```\n\nNow, let's get a citation for this paper:\n\n$\\star$ In Google Scholar, go to \"settings\" (upper right corner) and choose BibTeX as bibliography manager. Then type \"energy of a body einstein 1905\"\n\nThe paper should be the one on the top.\n\nClick \"Import into BibTeX\" should show the following text, that you will save in the file `FirstBiblio.bib` (in the same directory as `FirstExample.tex`):\n\n```bash\n@article{einstein1905does,\n title={Does the inertia of a body depend upon its energy-content?},\n author={Einstein, A.},\n journal={Annalen der Physik},\n volume={18},\n pages={639--641},\n year={1905}\n}\n```\nNow we can create a `.pdf` of the article.\n\n$\\star$ In the terminal type (make sure you are the right directory!):\n\n``` bash\n pdflatex FirstExample.tex\n bibtex FirstExample\n pdflatex FirstExample.tex\n pdflatex FirstExample.tex\n```\nThis should produce the file `FirstExample.pdf`:\n\n\n\nIn the above bash script, we repeated the `pdflatex` command 3 times. Here's why:\n\n* The first `pdflatex` run generates two files:`FirstExample.log` and `FirstExample.aux` (and an incomplete `.pdf`). \n * At this step, all cite{...} arguments info that bibtex needs are written into the `.aux` file.\n* Then, running `bibtex` (followed by the filename without the `.tex` extension) results in bibtex reading the `.aux` file that was generated. 
It then produces two more files: `FirstExample.bbl` and `FirstExample.blg`\n * At this step, bibtex takes the citation info in the aux file and puts the relevant biblogrphic entries into the `.bbl` file (you can take a peek at all these files), formatted according to the instructions provided by the bibliography style that you have specified using `bibliographystyle{plain}`.\n* The second `pdflatex` run updates `FirstExample.log` and `FirstExample.aux` (and a still-incomplete `.pdf` - the citations are not correctly formatted yet)\n * At this step, the reference list in the `.bbl` generated in the above step is included in the document, and the correct labels for the in-text `cite{...}` commands are written in `.aux` file (but the non in the actual pdf).\n* The third and final `pdflatex` run then updates `FirstExample.log` and `FirstExample.aux` one last time, and now produces the complete `.pdf` file, with citations correctly formatted. \n * At this step, latex knows what the correct in-text citation labels are, and includes them in the pdf document.\n\nThroughout all this, the `.log` file plays no role except to record info about how the commands are running. \n\nPHEW! Why go through this repetitive sequence of commands? Well, \"it is what it is\" – $\\LaTeX$, with all its advantages does have its quirks. The reason why it is this way, is probably that back then (Donald Knuth's PhD Thesis writing days – late 1950's to early 1960's), computers had *tiny* memories (RAMs), and writing files to disk and then reading them back in for the next step of the algorithm/program was the best (and only) way to go. Why has this not been fixed? I am not sure - keep an eye out, and it might well be (and then, raise an issue on TheMulQuaBio's [Github](https://github.com/mhasoba/TheMulQuaBio/issues)!)\n\nAnyway, as such, you don't have to run these commands literally step by step, because you can create a bash script that does it for you, as we will now learn.\n\n### A bash script to compile LaTeX\n\nLet's write a useful little bash script to compile latex with bibtex.\n\n$\\star$ Type the following script and call it `CompileLaTeX.sh` (you know where to put it!):\n\n```bash\n#!/bin/bash\npdflatex $1.tex\nbibtex $1\npdflatex $1.tex\npdflatex $1.tex\nevince $1.pdf &\n\n## Cleanup\nrm *.aux\nrm *.log\nrm *.bbl\nrm *.blg\n```\nHow do you run this script? The same as your previous bash scripts, so:\n\n```bash\nbash CompileLaTeX.sh FirstExample\n```\n\n*Why have I not written the `.tex` extension of `FirstExample` in the command above? Can you make this bash script more convenient to use?*\n\n## A few $\\LaTeX$\u00a0basics\n\n### Spaces, new lines and special characters\n\n* Several spaces in your text editor are treated as one space in the typeset document\n* Several empty lines are treated as one empty line\n* One empty line defines a new paragraph\n* Some characters are \"special\": # $ % ^ & _ { } ~ \\\n\nTo type these special characters, you have to add a \"backslash\" in front, e.g., \\\\\\$ produces $\\$$.\n\n### Document structure:\n\n* Each LaTeX\u00a0command starts with \\\\ . For example, to get $\\LaTeX$, you need `\\LaTeX`\n* The first command is always `\\\\`documentclass`` defining the type of document (e.g., `article, book, report, letter`).\n* You can set several options. For example, to set size of text to 10 points and the letter paper size: \n`\\documentclass[10pt,letterpaper]{article}`.\n* After having declared the type of document, you can specify packages you want to use. 
The most useful are:\n \n `\\usepackage{color}`: use colors for text in your document.\n\n `\\usepackage{amsmath,amssymb}`: American Mathematical Society formats and commands for typesetting mathematics.\n\n `\\usepackage{fancyhdr}`: fancy headers and footers.\n\n `\\usepackage{graphicx}`: include figures in pdf, ps, eps, gif and jpeg.\n\n `\\usepackage{listings}`: typeset source code for various programming languages.\n\n `\\usepackage{rotating}`: rotate tables and figures.\n\n `\\usepackage{lineno}`: line numbers.\n\n* Once you select the packages, you can start your document with `\\begin{document}`, and end it with `\\end{document}`.\n\n### Typesetting math\n\nThere are two ways to display math\n\n1. Inline mathematics (i.e., within the text).\n\n2. Stand-alone, numbered equations and formulae.\n\nFor inline math, the \"dollar\" sign flanks the math to be typeset. For example, the code:\n\n```\n$\\int_0^1 p^x (1-p)^y dp$\n```\n\nbecomes $\\int_0^1 p^x (1-p)^y dp$\n\nFor numbered equations (almost always a great idea), LaTeX\u00a0provides the\n`equation` environment:\n\n```\n\\begin{equation}\n \\int_0^1 \\left(\\ln \\left( \\frac{1}{x} \\right) \n \\right)^y dx = y!\n\\end{equation}\n```\n\nbecomes \n\n$$\\int_0^1 \\left(\\ln \\left( \\frac{1}{x} \\right) \\right)^y dx = y!$$\n\n## LaTeX\u00a0templates\n\nThere a lots of useful LaTeX templates out there. I have added some templates in the `TheMulQuaBio` repo that you should have a look and play around with. Or just google \"latex template\" along with the name of a journal you want! \n\n## A few more tips\n\nThe following tips might prove handy:\n\n* LaTeX\u00a0has a full set of symbols and operators (plenty of lists online)\n* Long documents can be split into separate `.tex` documents and combined using `input`\n* Long documents can be split into separate `.tex` documents and Figures can be included using the `graphicx` package\n* You can use Mendeley or Zotero to export and maintain `.bib` files\n* You can redefine environments and commands in the preamble\n\n## Practicals\n\n### First $\\LaTeX$ example\n\nTest `CompileLaTeX.sh` with `FirstExample.tex` and bring it under verson control under`week1` in your repository. Make sure that `CompileLaTeX.sh` will work if somebody else ran it from their computer using `FirstExample.tex` as an input.\n\n### Practicals wrap-up\n\nMake sure you have your `Week 1` directory organized with `Data`, `Sandbox` and `Code` with the necessary files and this week's (functional!) scripts in there. Every script should run without errors on my computer. 
This includes the five solutions (single-line commands you came up with) in `UnixPrac1.txt`.\n\n*Commit and push every time you do some significant amount of coding work (after testing it), and then again before the given deadline (this will be announced in class).*\n\n## Readings & Resources\n\n### General \n\n* [http://en.wikibooks.org/wiki/LaTeX/Introduction](http://en.wikibooks.org/wiki/LaTeX/Introduction)\n* [The not so Short Introduction to LaTeX](https://ctan.org/tex-archive/info/lshort/english/)\n* [The Visual LaTeX\u00a0FAQ: sometimes it is difficult to describe what you want to do!](http://mirror.las.iastate.edu/tex-archive/info/visualFAQ/visualFAQ.pdf)\n* [The Overleaf knowledge base](https://www.overleaf.com/learn), including\n * [Learn LaTeX in 30 minutes](https://www.overleaf.com/learn/latex/Learn_LaTeX_in_30_minutes)\n * [Presentations in LaTeX](https://www.overleaf.com/learn/latex/Beamer_Presentations:_A_Tutorial_for_Beginners_(Part_1)\u2014Getting_Started)\n * [Bibliographies in LaTeX](https://www.overleaf.com/learn/latex/Bibliography_management_with_bibtex)\n\n### Templates\n* The [Overleaf templates](https://www.overleaf.com/latex/templates) \n * Includes many [Imperial College Dissertation templates](https://www.overleaf.com/latex/templates?addsearch=imperial%20college)).\n\n### $\\LaTeX$ Tables\n* [$\\LaTeX$ table generator](http://www.tablesgenerator.com/)\n", "meta": {"hexsha": "09c20255e81b3befe4ed515d57ac5a954fa78125", "size": 16962, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "content/notebooks/04-LaTeX.ipynb", "max_stars_repo_name": "nesbitm/VBiTE_2021", "max_stars_repo_head_hexsha": "3c8e54d4878ff3f9b9272da73c3c8700902ddb21", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "content/notebooks/04-LaTeX.ipynb", "max_issues_repo_name": "nesbitm/VBiTE_2021", "max_issues_repo_head_hexsha": "3c8e54d4878ff3f9b9272da73c3c8700902ddb21", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "content/notebooks/04-LaTeX.ipynb", "max_forks_repo_name": "nesbitm/VBiTE_2021", "max_forks_repo_head_hexsha": "3c8e54d4878ff3f9b9272da73c3c8700902ddb21", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 41.8814814815, "max_line_length": 653, "alphanum_fraction": 0.6209763, "converted": true, "num_tokens": 3136, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.18242553269617778, "lm_q2_score": 0.4726834766204328, "lm_q1q2_score": 0.08622953501916375}} {"text": "
\n
\n
\n

Natural Language Processing For Everyone

\n

Text Representation

\n

Bruno Gon\u00e7alves
\n www.data4sci.com
\n @bgoncalves, @data4sci

\n
\n\nIn this lesson we will see in some details how we can best represent text in our application. Let's start by importing the modules we will be using:\n\n\n```python\nimport string\nfrom collections import Counter\nfrom pprint import pprint\nimport gzip\n\nimport matplotlib\nimport matplotlib.pyplot as plt \nimport numpy as np\n\nimport watermark\n\n%matplotlib inline\n%load_ext watermark\n```\n\nList out the versions of all loaded libraries\n\n\n```python\n%watermark -n -v -m -g -iv\n```\n\n Python implementation: CPython\n Python version : 3.8.5\n IPython version : 7.19.0\n \n Compiler : Clang 10.0.0 \n OS : Darwin\n Release : 20.6.0\n Machine : x86_64\n Processor : i386\n CPU cores : 16\n Architecture: 64bit\n \n Git hash: ae641e141a1604bbe1639a2ded4ed2424660eab0\n \n numpy : 1.19.2\n matplotlib: 3.3.2\n json : 2.0.9\n watermark : 2.1.0\n \n\n\nSet the default style\n\n\n```python\nplt.style.use('./d4sci.mplstyle')\n```\n\nWe choose a well known nursery rhyme, that has the added distinction of having been the first audio ever recorded, to be the short snippet of text that we will use in our examples:\n\n\n```python\ntext = \"\"\"Mary had a little lamb, little lamb,\n little lamb. 'Mary' had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, MARY went. Everywhere\n that mary went,\n The lamb was sure to go\"\"\"\n```\n\n## Tokenization\n\nThe first step in any analysis is to tokenize the text. What this means is that we will extract all the individual words in the text. For the sake of simplicity, we will assume that our text is well formed and that our words are delimited either by white space or punctuation characters.\n\n\n```python\nprint(string.punctuation)\n```\n\n !\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~\n\n\n\n```python\ndef extract_words(text):\n temp = text.split() # Split the text on whitespace\n text_words = []\n\n for word in temp:\n # Remove any punctuation characters present in the beginning of the word\n while word[0] in string.punctuation:\n word = word[1:]\n\n # Remove any punctuation characters present in the end of the word\n while word[-1] in string.punctuation:\n word = word[:-1]\n\n # Append this word into our list of words.\n text_words.append(word.lower())\n \n return text_words\n```\n\nAfter this step we now have our text represented as an array of individual, lowercase, words:\n\n\n```python\ntext_words = extract_words(text)\nprint(text_words)\n```\n\n ['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb', 'mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went', 'everywhere', 'that', 'mary', 'went', 'the', 'lamb', 'was', 'sure', 'to', 'go']\n\n\nAs we saw during the video, this is a wasteful way to represent text. 
We can be much more efficient by representing each word by a number\n\n\n```python\nword_dict = {}\nword_list = []\nvocabulary_size = 0\ntext_tokens = []\n\nfor word in text_words:\n # If we are seeing this word for the first time, create an id for it and added it to our word dictionary\n if word not in word_dict:\n word_dict[word] = vocabulary_size\n word_list.append(word)\n vocabulary_size += 1\n \n # add the token corresponding to the current word to the tokenized text.\n text_tokens.append(word_dict[word])\n```\n\nWhen we were tokenizing our text, we also generated a dictionary **word_dict** that maps words to integers and a **word_list** that maps each integer to the corresponding word.\n\n\n```python\nprint(\"Word list:\", word_list, \"\\n\\n Word dictionary:\")\npprint(word_dict)\n```\n\n Word list: ['mary', 'had', 'a', 'little', 'lamb', 'whose', 'fleece', 'was', 'white', 'as', 'snow', 'and', 'everywhere', 'that', 'went', 'the', 'sure', 'to', 'go'] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThese two datastructures already proved their usefulness when we converted our text to a list of tokens.\n\n\n```python\nprint(text_tokens)\n```\n\n [0, 1, 2, 3, 4, 3, 4, 3, 4, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 0, 14, 0, 14, 0, 14, 12, 13, 0, 14, 15, 4, 7, 16, 17, 18]\n\n\nUnfortunately, while this representation is convenient for memory reasons it has some severe limitations. Perhaps the most important of which is the fact that computers naturally assume that numbers can be operated on mathematically (by addition, subtraction, etc) in a way that doesn't match our understanding of words.\n\n## One-hot encoding\n\nOne typical way of overcoming this difficulty is to represent each word by a one-hot encoded vector where every element is zero except the one corresponding to a specific word.\n\n\n```python\ndef one_hot(word, word_dict):\n \"\"\"\n Generate a one-hot encoded vector corresponding to *word*\n \"\"\"\n \n vector = np.zeros(len(word_dict))\n vector[word_dict[word]] = 1\n \n return vector\n```\n\nSo, for example, the word \"fleece\" would be represented by:\n\n\n```python\nfleece_hot = one_hot(\"fleece\", word_dict)\nprint(fleece_hot)\n```\n\n [0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n\n\nThis vector has every element set to zero, except element 6, since:\n\n\n```python\nprint(word_dict[\"fleece\"])\nfleece_hot[6] == 1\n```\n\n 6\n\n\n\n\n\n True\n\n\n\n\n```python\nprint(fleece_hot.sum())\n```\n\n 1.0\n\n\n## Bag of words\n\nWe can now use the one-hot encoded vector for each word to produce a vector representation of our original text, by simply adding up all the one-hot encoded vectors:\n\n\n```python\ntext_vector1 = np.zeros(vocabulary_size)\n\nfor word in text_words:\n hot_word = one_hot(word, word_dict)\n text_vector1 += hot_word\n \nprint(text_vector1)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 1.]\n\n\nIn practice, we can also easily skip the encoding step at the word level by using the *word_dict* defined above:\n\n\n```python\ntext_vector = np.zeros(vocabulary_size)\n\nfor word in text_words:\n text_vector[word_dict[word]] += 1\n \nprint(text_vector)\n```\n\n [6. 2. 2. 4. 5. 1. 1. 2. 1. 1. 1. 1. 2. 2. 4. 1. 1. 1. 
1.]\n\n\nNaturally, this approach is completely equivalent to the previous one and has the added advantage of being more efficient in terms of both speed and memory requirements.\n\nThis is known as the __bag of words__ representation of the text. It should be noted that these vectors simply contains the number of times each word appears in our document, so we can easily tell that the word *mary* appears exactly 6 times in our little nursery rhyme.\n\n\n```python\ntext_vector[word_dict[\"mary\"]]\n```\n\n\n\n\n 6.0\n\n\n\nA more pythonic (and efficient) way of producing the same result is to use the standard __Counter__ module:\n\n\n```python\nword_counts = Counter(text_words)\npprint(word_counts)\n```\n\n Counter({'mary': 6,\n 'lamb': 5,\n 'little': 4,\n 'went': 4,\n 'had': 2,\n 'a': 2,\n 'was': 2,\n 'everywhere': 2,\n 'that': 2,\n 'whose': 1,\n 'fleece': 1,\n 'white': 1,\n 'as': 1,\n 'snow': 1,\n 'and': 1,\n 'the': 1,\n 'sure': 1,\n 'to': 1,\n 'go': 1})\n\n\nFrom which we can easily generate the __text_vector__ and __word_dict__ data structures:\n\n\n```python\nitems = list(word_counts.items())\n\n# Extract word dictionary and vector representation\nword_dict2 = dict([[items[i][0], i] for i in range(len(items))])\ntext_vector2 = [items[i][1] for i in range(len(items))]\n```\n\n\n```python\nword_counts['mary']\n```\n\n\n\n\n 6\n\n\n\nAnd let's take a look at them:\n\n\n```python\ntext_vector\n```\n\n\n\n\n array([6., 2., 2., 4., 5., 1., 1., 2., 1., 1., 1., 1., 2., 2., 4., 1., 1.,\n 1., 1.])\n\n\n\n\n```python\nprint(\"Text vector:\", text_vector2, \"\\n\\nWord dictionary:\")\npprint(word_dict2)\n```\n\n Text vector: [6, 2, 2, 4, 5, 1, 1, 2, 1, 1, 1, 1, 2, 2, 4, 1, 1, 1, 1] \n \n Word dictionary:\n {'a': 2,\n 'and': 11,\n 'as': 9,\n 'everywhere': 12,\n 'fleece': 6,\n 'go': 18,\n 'had': 1,\n 'lamb': 4,\n 'little': 3,\n 'mary': 0,\n 'snow': 10,\n 'sure': 16,\n 'that': 13,\n 'the': 15,\n 'to': 17,\n 'was': 7,\n 'went': 14,\n 'white': 8,\n 'whose': 5}\n\n\nThe results using this approach are slightly different than the previous ones, because the words are mapped to different integer ids but the corresponding values are the same:\n\n\n```python\nfor word in word_dict.keys():\n if text_vector[word_dict[word]] != text_vector2[word_dict2[word]]:\n print(\"Error!\")\n```\n\nAs expected, there are no differences!\n\n## Term Frequency\n\nThe bag of words vector representation introduced above relies simply on the frequency of occurence of each word. Following a long tradition of giving fancy names to simple ideas, this is known as __Term Frequency__.\n\nIntuitively, we expect the the frequency with which a given word is mentioned should correspond to the relevance of that word for the piece of text we are considering. For example, **Mary** is a pretty important word in our little nursery rhyme and indeed it is the one that occurs the most often:\n\n\n```python\nsorted(items, key=lambda x:x[1], reverse=True)\n```\n\n\n\n\n [('mary', 6),\n ('lamb', 5),\n ('little', 4),\n ('went', 4),\n ('had', 2),\n ('a', 2),\n ('was', 2),\n ('everywhere', 2),\n ('that', 2),\n ('whose', 1),\n ('fleece', 1),\n ('white', 1),\n ('as', 1),\n ('snow', 1),\n ('and', 1),\n ('the', 1),\n ('sure', 1),\n ('to', 1),\n ('go', 1)]\n\n\n\nHowever, it's hard to draw conclusions from such a small piece of text. Let us consider a significantly larger piece of text, the first 100 MB of the english Wikipedia from: http://mattmahoney.net/dc/textdata. 
For the sake of convenience, text8.gz has been included in this repository in the **data/** directory. We start by loading it's contents into memory as an array of words:\n\n\n```python\ndata = []\n\nfor line in gzip.open(\"data/text8.gz\", 'rt'):\n data.extend(line.strip().split())\n```\n\nNow let's take a look at the first 50 words in this large corpus:\n\n\n```python\ndata[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'as',\n 'a',\n 'term',\n 'of',\n 'abuse',\n 'first',\n 'used',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'the',\n 'diggers',\n 'of',\n 'the',\n 'english',\n 'revolution',\n 'and',\n 'the',\n 'sans',\n 'culottes',\n 'of',\n 'the',\n 'french',\n 'revolution',\n 'whilst',\n 'the',\n 'term',\n 'is',\n 'still',\n 'used',\n 'in',\n 'a',\n 'pejorative',\n 'way',\n 'to',\n 'describe',\n 'any',\n 'act',\n 'that',\n 'used',\n 'violent',\n 'means',\n 'to',\n 'destroy',\n 'the']\n\n\n\nAnd the top 10 most common words\n\n\n```python\ncounts = Counter(data)\n\nsorted_counts = sorted(list(counts.items()), key=lambda x: x[1], reverse=True)\n\nfor word, count in sorted_counts[:10]:\n print(word, count)\n```\n\n the 1061396\n of 593677\n and 416629\n one 411764\n in 372201\n a 325873\n to 316376\n zero 264975\n nine 250430\n two 192644\n\n\nSurprisingly, we find that the most common words are not particularly meaningful. Indeed, this is a common occurence in Natural Language Processing. The most frequent words are typically auxiliaries required due to gramatical rules.\n\nOn the other hand, there is also a large number of words that occur very infrequently as can be easily seen by glancing at the word freqency distribution.\n\n\n```python\ndist = Counter(counts.values())\ndist = list(dist.items())\ndist.sort(key=lambda x: x[0])\ndist = np.array(dist)\n\nnorm = np.dot(dist.T[0], dist.T[1])\n\nplt.loglog(dist.T[0], dist.T[1]/norm)\nplt.xlabel(\"count\")\nplt.ylabel(\"P(count)\")\nplt.title(\"Word frequency distribution\")\n```\n\n## Stopwords\n\nOne common technique to simplify NLP tasks is to remove what are known as Stopwords, words that are very frequent but not meaningful. If we simply remove the most common 100 words, we significantly reduce the amount of data we have to consider while losing little information.\n\n\n```python\nstopwords = set([word for word, count in sorted_counts[:100]])\n\nclean_data = []\n\nfor word in data:\n if word not in stopwords:\n clean_data.append(word)\n\nprint(\"Original size:\", len(data))\nprint(\"Clean size:\", len(clean_data))\nprint(\"Reduction:\", 1-len(clean_data)/len(data))\n```\n\n Original size: 17005207\n Clean size: 9006229\n Reduction: 0.470384041782026\n\n\n\n```python\nclean_data[:50]\n```\n\n\n\n\n ['anarchism',\n 'originated',\n 'term',\n 'abuse',\n 'against',\n 'early',\n 'working',\n 'class',\n 'radicals',\n 'including',\n 'diggers',\n 'english',\n 'revolution',\n 'sans',\n 'culottes',\n 'french',\n 'revolution',\n 'whilst',\n 'term',\n 'still',\n 'pejorative',\n 'way',\n 'describe',\n 'any',\n 'act',\n 'violent',\n 'means',\n 'destroy',\n 'organization',\n 'society',\n 'taken',\n 'positive',\n 'label',\n 'self',\n 'defined',\n 'anarchists',\n 'word',\n 'anarchism',\n 'derived',\n 'greek',\n 'without',\n 'archons',\n 'ruler',\n 'chief',\n 'king',\n 'anarchism',\n 'political',\n 'philosophy',\n 'belief',\n 'rulers']\n\n\n\nWow, our dataset size was reduced almost in half!\n\nIn practice, we don't simply remove the most common words in our corpus but rather a manually curate list of stopwords. 
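\n\nFor instance, the NLTK library ships curated stopword lists that are widely used for exactly this purpose. A minimal sketch (assuming the `nltk` package is available) could look like this:\n\n\n```python\n# Hypothetical alternative to the \"100 most frequent words\" heuristic above:\n# filter the corpus with NLTK's curated English stopword list instead.\nimport nltk\nnltk.download('stopwords')\nfrom nltk.corpus import stopwords\n\ncurated_stopwords = set(stopwords.words('english'))\nclean_data_curated = [word for word in data if word not in curated_stopwords]\n\nprint(\"Original size:\", len(data))\nprint(\"Clean size with curated list:\", len(clean_data_curated))\n```\n\n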
Lists for dozens of languages and applications can easily be found online.\n\n## Term Frequency/Inverse Document Frequency\n\nOne way of determining of the relative importance of a word is to see how often it appears across multiple documents. Words that are relevant to a specific topic are more likely to appear in documents about that topic and much less in documents about other topics. On the other hand, less meaningful words (like **the**) will be common across documents about any subject.\n\nTo measure the document frequency of a word we will need to have multiple documents. For the sake of simplicity, we will treat each sentence of our nursery rhyme as an individual document:\n\n\n```python\nprint(text)\n```\n\n Mary had a little lamb, little lamb,\n little lamb. 'Mary' had a little lamb\n whose fleece was white as snow.\n And everywhere that Mary went\n Mary went, MARY went. Everywhere\n that mary went,\n The lamb was sure to go\n\n\n\n```python\ncorpus_text = text.split('.')\ncorpus_words = []\n\nfor document in corpus_text:\n doc_words = extract_words(document)\n corpus_words.append(doc_words)\n```\n\nNow our corpus is represented as a list of word lists, where each list is just the word representation of the corresponding sentence:\n\n\n```python\nprint(len(corpus_words))\n```\n\n 4\n\n\n\n```python\npprint(corpus_words)\n```\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\nLet us now calculate the number of documents in which each word appears:\n\n\n```python\ndocument_count = {}\n\nfor document in corpus_words:\n word_set = set(document)\n \n for word in word_set:\n document_count[word] = document_count.get(word, 0) + 1\n\npprint(document_count)\n```\n\n {'a': 2,\n 'and': 1,\n 'as': 1,\n 'everywhere': 2,\n 'fleece': 1,\n 'go': 1,\n 'had': 2,\n 'lamb': 3,\n 'little': 2,\n 'mary': 4,\n 'snow': 1,\n 'sure': 1,\n 'that': 2,\n 'the': 1,\n 'to': 1,\n 'was': 2,\n 'went': 2,\n 'white': 1,\n 'whose': 1}\n\n\nAs we can see, the word __Mary__ appears in all 4 of our documents, making it useless when it comes to distinguish between the different sentences. On the other hand, words like __white__ which appear in only one document are very discriminative. Using this approach we can define a new quantity, the ___Inverse Document Frequency__ that tells us how frequent a word is across the documents in a specific corpus:\n\n\n```python\ndef inv_doc_freq(corpus_words):\n number_docs = len(corpus_words)\n \n document_count = {}\n\n for document in corpus_words:\n word_set = set(document)\n\n for word in word_set:\n document_count[word] = document_count.get(word, 0) + 1\n \n IDF = {}\n \n for word in document_count:\n IDF[word] = np.log(1+number_docs/document_count[word])\n \n return IDF\n```\n\nWhere we followed the convention of using the logarithm of the inverse document frequency. This has the numerical advantage of avoiding to have to handle small fractional numbers. 
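\n\nAs a quick sanity check of this formula: *mary* appears in all 4 documents, so its weight should be log(1 + 4/4), roughly 0.69, while *white* appears in a single document and should get log(1 + 4/1), roughly 1.61:\n\n\n```python\n# Hand-check of the IDF formula for the two extreme cases in our corpus\nprint(np.log(1 + 4/4)) # 'mary' appears in every document\nprint(np.log(1 + 4/1)) # 'white' appears in one document only\n```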
\n\nWe can easily see that the IDF gives a smaller weight to the most common words and a higher weight to the less frequent:\n\n\n```python\ncorpus_words\n```\n\n\n\n\n [['mary', 'had', 'a', 'little', 'lamb', 'little', 'lamb', 'little', 'lamb'],\n ['mary',\n 'had',\n 'a',\n 'little',\n 'lamb',\n 'whose',\n 'fleece',\n 'was',\n 'white',\n 'as',\n 'snow'],\n ['and', 'everywhere', 'that', 'mary', 'went', 'mary', 'went', 'mary', 'went'],\n ['everywhere',\n 'that',\n 'mary',\n 'went',\n 'the',\n 'lamb',\n 'was',\n 'sure',\n 'to',\n 'go']]\n\n\n\n\n```python\nIDF = inv_doc_freq(corpus_words)\n\npprint(IDF)\n```\n\n {'a': 1.0986122886681098,\n 'and': 1.6094379124341003,\n 'as': 1.6094379124341003,\n 'everywhere': 1.0986122886681098,\n 'fleece': 1.6094379124341003,\n 'go': 1.6094379124341003,\n 'had': 1.0986122886681098,\n 'lamb': 0.8472978603872034,\n 'little': 1.0986122886681098,\n 'mary': 0.6931471805599453,\n 'snow': 1.6094379124341003,\n 'sure': 1.6094379124341003,\n 'that': 1.0986122886681098,\n 'the': 1.6094379124341003,\n 'to': 1.6094379124341003,\n 'was': 1.0986122886681098,\n 'went': 1.0986122886681098,\n 'white': 1.6094379124341003,\n 'whose': 1.6094379124341003}\n\n\nAs expected **Mary** has the smallest weight of all words 0, meaning that it is effectively removed from the dataset. You can consider this as a way of implicitly identify and remove stopwords. In case you do want to keep even the words that appear in every document, you can just add a 1. to the argument of the logarithm above:\n\n\\begin{equation}\n\\log\\left[1+\\frac{N_d}{N_d\\left(w\\right)}\\right]\n\\end{equation}\n\nWhen we multiply the term frequency of each word by it's inverse document frequency, we have a good way of quantifying how relevant a word is to understand the meaning of a specific document.\n\n\n```python\ndef tf_idf(corpus_words):\n IDF = inv_doc_freq(corpus_words)\n \n TFIDF = []\n \n for document in corpus_words:\n TFIDF.append(Counter(document))\n \n for document in TFIDF:\n for word in document:\n document[word] = document[word]*IDF[word]\n \n return TFIDF\n```\n\n\n```python\ntf_idf(corpus_words)\n```\n\n\n\n\n [Counter({'mary': 0.6931471805599453,\n 'had': 1.0986122886681098,\n 'a': 1.0986122886681098,\n 'little': 3.295836866004329,\n 'lamb': 2.5418935811616103}),\n Counter({'mary': 0.6931471805599453,\n 'had': 1.0986122886681098,\n 'a': 1.0986122886681098,\n 'little': 1.0986122886681098,\n 'lamb': 0.8472978603872034,\n 'whose': 1.6094379124341003,\n 'fleece': 1.6094379124341003,\n 'was': 1.0986122886681098,\n 'white': 1.6094379124341003,\n 'as': 1.6094379124341003,\n 'snow': 1.6094379124341003}),\n Counter({'and': 1.6094379124341003,\n 'everywhere': 1.0986122886681098,\n 'that': 1.0986122886681098,\n 'mary': 2.0794415416798357,\n 'went': 3.295836866004329}),\n Counter({'everywhere': 1.0986122886681098,\n 'that': 1.0986122886681098,\n 'mary': 0.6931471805599453,\n 'went': 1.0986122886681098,\n 'the': 1.6094379124341003,\n 'lamb': 0.8472978603872034,\n 'was': 1.0986122886681098,\n 'sure': 1.6094379124341003,\n 'to': 1.6094379124341003,\n 'go': 1.6094379124341003})]\n\n\n\nNow we finally have a vector representation of each of our documents that takes the informational contributions of each word into account. Each of these vectors provides us with a unique representation of each document, in the context (corpus) in which it occurs, making it posssible to define the similarity of two documents, etc.\n\n## Porter Stemmer\n\nThere is still, however, one issue with our approach to representing text. 
Since we treat each word as a unique token and completely independently from all others, for large documents we will end up with many variations of the same word such as verb conjugations, the corresponding adverbs and nouns, etc. \n\nOne way around this difficulty is to use stemming algorithm to reduce words to their root (or stem) version. The most famous Stemming algorithm is known as the **Porter Stemmer** and was introduced by Martin Porter in 1980 [Program 14, 130 (1980)](https://dl.acm.org/citation.cfm?id=275705)\n\nThe algorithm starts by defining consonants (C) and vowels (V):\n\n\n```python\nV = set('aeiouy')\nC = set('bcdfghjklmnpqrstvwxz')\n```\n\nThe stem of a word is what is left of that word after a speficic ending has been removed. A function to do this is easy to implement:\n\n\n```python\ndef get_stem(suffix, word):\n \"\"\"\n Extract the stem of a word\n \"\"\"\n \n if word.lower().endswith(suffix.lower()): # Case insensitive comparison\n return word[:-len(suffix)]\n\n return None\n```\n\nIt also defines words (or stems) to be sequences of vowels and consonants of the form:\n\n\\begin{equation}\n[C](VC)^m[V]\n\\end{equation}\n\nwhere $m$ is called the **measure** of the word and [] represent optional sections. \n\n\n```python\ndef measure(orig_word):\n \"\"\"\n Calculate the \"measure\" m of a word or stem, according to the Porter Stemmer algorthim\n \"\"\"\n \n word = orig_word.lower()\n\n optV = False\n optC = False\n VC = False\n\n m = 0\n pos = 0\n\n # We can think of this implementation as a simple finite state machine\n # looks for sequences of vowels or consonants depending of the state\n # in which it's in, while keeping track of how many VC sequences it\n # has encountered.\n # The presence of the optional V and C portions is recorded in the\n # optV and optC booleans.\n \n # We're at the initial state.\n # gobble up all the optional consonants at the beginning of the word\n while pos < len(word) and word[pos] in C:\n pos += 1\n optC = True\n\n while pos < len(word):\n # Now we know that the next state must be a vowel\n while pos < len(word) and word[pos] in V:\n pos += 1\n optV = True\n\n # Followed by a consonant\n while pos < len(word) and word[pos] in C:\n pos += 1\n optV = False\n \n # If a consonant was found, then we matched VC\n # so we should increment m by one. Otherwise, \n # optV remained true and we simply had a dangling\n # V sequence.\n if not optV:\n m += 1\n\n return m\n```\n\nLet's consider a simple example. The word __crepusculars__ should have measure 4:\n\n[cr] (ep) (usc) (ul) (ars)\n\nand indeed it does.\n\n\n```python\nword = \"crepusculars\"\nprint(measure(word))\n```\n\n 4\n\n\n(agr) = (VC)\n\n\n```python\nword = \"agr\"\nprint(measure(word))\n```\n\n 1\n\n\nThe Porter algorithm sequentially applies a series of transformation rules over a series of 5 steps (step 1 is divided in 3 substeps and step 5 in 2). The rules are only applied if a certain condition is true. 
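\n\nEach rule has the form (condition) S1 -> S2: if a word ends in the suffix S1 and the remaining stem satisfies the condition, then S1 is replaced by S2. A classic example from the original paper is the rule (m>1) EMENT -> (nothing), which maps *replacement* to *replac*. We can already verify the condition with the helpers defined above:\n\n\n```python\n# Check the Porter rule \"(m>1) EMENT -> (nothing)\" on the word 'replacement'\nstem = get_stem(\"ement\", \"replacement\")\nprint(stem, measure(stem)) # the stem 'replac' has measure 2 > 1, so the rule applies\n```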
\n\nIn addition to possibily specifying a requirement on the measure of a word, conditions can make use of different boolean functions as well: \n\n\n```python\ndef ends_with(char, stem):\n \"\"\"\n Checks the ending of the word\n \"\"\"\n return stem[-1] == char\n\ndef double_consonant(stem):\n \"\"\"\n Checks the ending of a word for a double consonant\n \"\"\"\n if len(stem) < 2:\n return False\n\n if stem[-1] in C and stem[-2] == stem[-1]:\n return True\n\n return False\n\ndef contains_vowel(stem):\n \"\"\"\n Checks if a word contains a vowel or not\n \"\"\"\n return len(set(stem) & V) > 0 \n```\n\nFinally, we define a function to apply a specific rule to a word or stem:\n\n\n```python\ndef apply_rule(condition, suffix, replacement, word):\n \"\"\"\n Apply Porter Stemmer rule.\n if \"condition\" is True replace \"suffix\" by \"replacement\" in \"word\"\n \"\"\"\n \n stem = get_stem(suffix, word)\n\n if stem is not None and condition is True:\n # Remove the suffix\n word = stem\n\n # Add the replacement suffix, if any\n if replacement is not None:\n word += replacement\n\n return word\n```\n\nNow we can see how rules can be applied. For example, this rule, from step 1b is successfully applied to __pastered__:\n\n\n```python\nword = \"plastered\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n 'plaster'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n 'plaster'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n True\n\n\n\nWhile try applying the same rule to **bled** will fail to pass the condition resulting in no change.\n\n\n```python\nword = \"bled\"\nsuffix = \"ed\"\nstem = get_stem(suffix, word)\napply_rule(contains_vowel(stem), suffix, None, word)\n```\n\n\n\n\n 'bled'\n\n\n\n\n```python\nstem\n```\n\n\n\n\n 'bl'\n\n\n\n\n```python\ncontains_vowel(stem)\n```\n\n\n\n\n False\n\n\n\nFor a more complex example, we have, in Step 4:\n\n\n```python\nword = \"adoption\"\nsuffix = \"ion\"\nstem = get_stem(suffix, word)\napply_rule(measure(stem) > 1 and (ends_with(\"s\", stem) or ends_with(\"t\", stem)), suffix, None, word)\n```\n\n\n\n\n 'adopt'\n\n\n\n\n```python\nends_with(\"t\", stem)\n```\n\n\n\n\n True\n\n\n\n\n```python\nends_with(\"s\", stem)\n```\n\n\n\n\n False\n\n\n\n\n```python\nmeasure(stem)\n```\n\n\n\n\n 2\n\n\n\nIn total, the Porter Stemmer algorithm (for the English language) applies several dozen rules (see https://tartarus.org/martin/PorterStemmer/def.txt for a complete list). Implementing all of them is both tedious and error prone, so we abstain from providing a full implementation of the algorithm here. High quality implementations can be found in all major NLP libraries such as [NLTK](http://www.nltk.org/howto/stem.html).\n\nThe dificulties of defining matching rules to arbitrary text cannot be fully resolved without the use of Regular Expressions (typically implemented as Finite State Machines like our __measure__ implementation above), a more advanced topic that is beyond the scope of this course.\n\n
\n \n
\n", "meta": {"hexsha": "b4542b6a6fb3e4297891dfb1fd45accc5a7a5419", "size": 267051, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "1. Text Representation.ipynb", "max_stars_repo_name": "millsgt/NLP", "max_stars_repo_head_hexsha": "200da19d1372a8520625681edd5e0011a727be43", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 97, "max_stars_repo_stars_event_min_datetime": "2019-05-06T13:27:53.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-25T17:36:22.000Z", "max_issues_repo_path": "1. Text Representation.ipynb", "max_issues_repo_name": "millsgt/NLP", "max_issues_repo_head_hexsha": "200da19d1372a8520625681edd5e0011a727be43", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "1. Text Representation.ipynb", "max_forks_repo_name": "millsgt/NLP", "max_forks_repo_head_hexsha": "200da19d1372a8520625681edd5e0011a727be43", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 78, "max_forks_repo_forks_event_min_datetime": "2019-05-06T12:14:22.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T10:57:28.000Z", "avg_line_length": 134.1290808639, "max_line_length": 216456, "alphanum_fraction": 0.8812024669, "converted": true, "num_tokens": 7761, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3960681520167196, "lm_q2_score": 0.21733751597763015, "lm_q1q2_score": 0.08608046831716423}} {"text": "# Homework 1\n\n**For exercises in the week 22-28.10.19**\n\n**Points: 7 + 2 bonus point**\n\nPlease solve the problems at home and bring to class a [declaration form](http://ii.uni.wroc.pl/~jmi/Dydaktyka/misc/kupony-klasyczne.pdf) to indicate which problems you are willing to present on the backboard.\n\n\n\n### Declartation\n\n| Exercise || 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\n|----------||---|---|---|---|---|---|---|---|\n| Points || 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 |\n\n## Problem 1 (McKay 4.1) [1p]\n\nYou are given a set of 12 balls in which:\n- 11 balls are equal\n- 1 ball is different (either heavier or lighter).\n\nYou have a two-pan balance. How many weightings you must use to detect toe odd ball?\n\n*Hint:* A weighting can be seen as a random event. You can design them to maximize carry the most information, i.e. to maximize the entropy of their outcome.\n\n## Answ 1:\n\n\nhttp://learning.eng.cam.ac.uk/pub/Public/Turner/Teaching/ml-lecture-1-slides.pdf\n\n## Problem 2 [1p]\n\nBayes' theorem allows to reason about conditional probabilities of causes and their effects:\n\n\\begin{equation}\np(A,B)=p(A|B)p(B)=p(B|A)p(A)\n\\end{equation}\n\n\\begin{equation}\np(A|B) = \\frac{p(B|A)p(A)}{p(B)}\n\\end{equation}\n\nBayes' theorem allows us to reason about probabilities of causes, when\nwe observe their results. Instead of directly answering the hard\nquestion $p(\\text{cause}|\\text{result})$ we can instead separately\nwork out the marginal probabilities of causes $p(\\text{cause})$ and\ncarefully study their effects $p(\\text{effect}|\\text{cause})$.\n\nSolve the following using Bayes' theorem.\n\n1. There are two boxes on the table: box \\#1 holds two\n black balls and eight red ones, box \\#2 holds 5 black ones and\n 5 red ones. We pick a box at random (with equal probabilities),\n and then a ball from that box.\n 1. What is the probability, that the\n ball came from box \\#1 if we happened to pick a red ball?\n \n1. The government has started a preventive program of\n mandatory tests for the Ebola virus. 
Mass testing method is\n imprecise, yielding 1% of false positives (healthy, but the test\n indicates the virus) and 1% of false negatives (\n having the virus but healthy according to test results).\n As Ebola is rather infrequent, lets assume that it occurs in\n one in a million people in Europe.\n 1. What is the probability,\n that a random European, who has been tested positive for Ebola\n virus, is indeed a carrier?\n 2. Suppose we have an additional information, that the person has just\n arrived from a country where one in a thousand people is a carrier.\n How much will be the increase in probability?\n 3. How accurate should be the test, for a 80% probability of true\n positive in a European?\n\n## Ans 2:\n\n1A.\n$$ \\frac{\\frac{8}{10} * \\frac{1}{2}}{\\frac{13}{20}} $$\n2A.\n$$ \\frac{\\frac{99}{100} * \\frac{1}{1 000 000}}{\\frac{1}{1 000 000} * 0.99 + (1-\\frac{1}{1 000 000}) * 0.01} $$\n\n\n```python\nacc = .99\nD = 10**-6\nppb = lambda acc, D: (acc * D)/(D * acc + (1 - D) * (1 - acc) )\nP1 = ppb(acc, D)\n\nD = 10**-3\n\nP2 = ppb(acc, D)\n\nprint(f\"P1: {P1}, P2: {P2}\")\n\nfor i in range(10):\n acc = 1 - 1/10**i\n print(acc, ppb(acc, 10**-6)) \nacc=0.9999997499999\nprint(acc, ppb(acc, 10**-6))\n```\n\n P1: 9.899029895070276e-05, P2: 0.09016393442622944\n 0.0 0.0\n 0.9 8.999928000575998e-06\n 0.99 9.899029895070276e-05\n 0.999 0.0009980039920159671\n 0.9999 0.009900019604000295\n 0.99999 0.09090834710646178\n 0.999999 0.49999999999281103\n 0.9999999 0.9090909835145884\n 0.99999999 0.9900990195566637\n 0.999999999 0.9990010000262294\n 0.9999997499999 0.8000000560288553\n\n\n## Problem 3 [1.5p]\n\nGiven observations $x_1,\\ldots,x_n$\n coming from a certain distribution,\n prove that MLE of a particular parameter of that distribution is equal to the sample mean $\\frac{1}{n}\\sum_{i=1}^n x_i$:\n1. Bernoulli distribution with success probability $p$ and MLE $\\hat{p}$,\n2. Gaussian distribution $\\mathcal{N}(\\mu,\\sigma)$ and MLE $\\hat{\\mu}$,\n3. Poisson distribution $\\mathit{Pois}(\\lambda)$ and MLE $\\hat{\\lambda}$.\n\n\n```python\n\n```\n\n## Problem 4 [1.5p]\n\n1D Gaussian manipulatoin for Kalman filters.\n\nA [1D Kalman filter](https://en.wikipedia.org/wiki/Kalman_filter) tracks the location of an object given imprecise measurements of its location. At its core it performs an update of the form:\n\n$$\n p(x|m) = \\frac{p(m|x)p(x)}{p(m)} = \\frac{p(m|x)p(x)}{Z},\n$$\n\nwhere:\n- $p(x|m)$ is the updated belief about the location,\n- $p(x) = \\mathcal{N}(\\mu=\\mu_x, \\sigma=\\sigma_x)$ is the belief about the location,\n- $p(m|x) = \\mathcal{N}(\\mu=x, \\sigma=\\sigma_m)$ is the noisy measurement, centered on the location of the object,\n- $Z = p(m) =\\int p(m|x)p(x) dx$ is a normalization constant not dependent on $x$.\n\nCompute $p(x|m)$.\n\n*Hint:* The product $\\mathcal{N}(x;\\mu_1, \\sigma_1)\\mathcal{N}(x;\\mu_2, \\sigma_2)$ ressembles an unnormalized probability distribution, which one? Can you normalize it by computing the mean and standard deviation and fitting it to a knoen PDF?\n\n## Problem 5 (Murphy, 2.17) [1p]\n\nExpected value of the minimum.\n\nLet $X, Y$ be sampled uniformily on the interval $[0,1]$. 
What is the expected value of $\\min(X,Y)$?\n\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\nx = np.random.rand(10000)\n\nx = np.min(np.random.rand(2,100000000), axis=0)\nx.shape\n# plt.scatter(range(len(x)), np.sort(x))\nnp.mean(x)\n```\n\n\n\n\n 0.33335897927530145\n\n\n\nhttp://premmi.github.io/expected-value-of-minimum-two-random-variables\n\n## Problem 6 (Kohavi) [1p]\n\nThe failure of leave-one-out evaluation. \n\nConsider a binary classification dataset in which the labels are assigned completely at random, with 50% probability given to either class. Assume you have a collected a dataset with 100 records in which exactly 50 of them belong to class 0 and 50 to class 1. \n\nWhat will be the leave-one-out accuracy of the majority voting classifier?\n\nNB: sometimes it is useful to equalize the number of classes in each fold of cross-validation, e.g. using the [StratifiedKFold](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html) implementation from SKlearn.\n\n## Ans 6:\n\n0, because everytime the majority decides, and majority is always different than the one which left.\n\n## Problem 7 [1pb]\nDo Problem 7a from [Assignment 1](https://github.com/janchorowski/ml_uwr/blob/fall2019/assignment1/Assignment1.ipynb).\n\n## Problem 8 [1bp]\n\nMany websites ([Reddit](reddit.com), [Wykop](wykop.pl), [StackOverflow](stackoverflow.com)) provide sorting of comments based on user votes. Discuss what are the implications when sorting by:\n- difference between up- and down-votes\n- mean score\n- lower or upper confidence bound of the score\n\nAt least for Reddit the sorting algorithm can be found online, what is it?\n\n## Ans 8:\n### 1:\n\n\n### 2:\n\n\n### 4:\n\n\n#ruby\nrequire 'statistics2'\n\ndef ci_lower_bound(pos, n, confidence)\n if n == 0\n return 0\n end\n z = Statistics2.pnormaldist(1-(1-confidence)/2)\n phat = 1.0*pos/n\n (phat + z*z/(2*n) - z * Math.sqrt((phat*(1-phat)+z*z/(4*n))/n))/(1+z*z/n)\nend", "meta": {"hexsha": "1b9d894c71405318b992eac58e588c76e64a9f42", "size": 359251, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "homework1/Homework1.ipynb", "max_stars_repo_name": "iCarrrot/ML", "max_stars_repo_head_hexsha": "05177012d36ca64a5b2730287b3ae5b086306197", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "homework1/Homework1.ipynb", "max_issues_repo_name": "iCarrrot/ML", "max_issues_repo_head_hexsha": "05177012d36ca64a5b2730287b3ae5b086306197", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "homework1/Homework1.ipynb", "max_forks_repo_name": "iCarrrot/ML", "max_forks_repo_head_hexsha": "05177012d36ca64a5b2730287b3ae5b086306197", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 874.0900243309, "max_line_length": 176996, "alphanum_fraction": 0.9526236531, "converted": true, "num_tokens": 2213, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.49609382947091946, "lm_q2_score": 0.1732882037945951, "lm_q1q2_score": 0.08596720862259781}} {"text": "
\n\n*Practical Data Science*\n\n# Feature Engineering\n\nNikolai Stein
\nChair of Information Systems and Management\n\nWinter Semester 21/22\n\n

Table of Contents

\n\n\n__Credits__\n\nParts of the material of this lecture are adopted from www.kaggle.com\n\n## Introduction\n\n**This lecture provides an overview on different feature engineering techniques.**\n\nStarting with a baseline dataset, we will\n\n- modify existing variables \n- add additional features to our dataset \n- train a predictive model \n\n**Feature engineering** is an essential part of building a powerful predictive model. \n\nEach problem is domain specific and better features (suited to the problem) are often the deciding factor of the performance of your system. \n\nFeature Engineering requires experience as well as creativity and this is the reason **Data Scientists often spend the majority of their time** in the data preparation phase before modeling.\n\n_\"Coming up with features is difficult, time-consuming, requires expert knowledge. Applied machine learning is basically feature engineering.\"_\n\nProf. Andrew Ng.\n\n_\"Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data.\"_\n\nDr. Jason Brownlee\n\n_\"At the end of the day, some machine learning projects succeed and some fail. What makes the difference? Easily the most important factor is the features used.\"_\n\nProf. Pedro Domingos\n\n## Loading the Data\nThis week, we will work with a sample of the [adult dataset](https://archive.ics.uci.edu/ml/datasets/adult) which has some census information on individuals. We'll use it to train a model to predict whether salary is greater than \\$50k or not. Again, our first step is to load and familiarize ourself with the data. To this end, we can use the pandas library and load the dataset with the following commands:\n\n\n```python\nimport pandas as pd\n```\n\n\n```python\nfile_path = 'https://github.com/NikoStein/pds_data/raw/main/data/adult.csv'\nadult_data = pd.read_csv(file_path)\nadult_data.head()\n```\n\n## Select Variables and Split Dataset\n\nBefore we start to engineer new features, we select the feature and target variables. \n\nThe (binary) variable ``salary`` describes if a person earns more or less that \\\\$50k. We replace the labels with numeric values (0: Salary < \\\\$50k, 1: Salary > \\\\$50k) and subsequently select it as our target variable y.\n\n\n```python\nadult_data = adult_data.assign(salary=(adult_data['salary']=='>=50k').astype(int))\ny = adult_data['salary']\n```\n\nThe remaining columns serve as our features X.\n\n\n```python\nX = adult_data.drop('salary', axis=1)\n```\n\nNext, we perform a train-test split to train and evaluate our machine learning models for the model validation.\n\n\n```python\nfrom sklearn.model_selection import train_test_split\n```\n\n\n```python\ntrain_X, val_X, train_y, val_y = train_test_split(X, y, random_state = 0)\n```\n\nNow we are ready to start preparing and enhancing our numerical and categorical features!\n\n## Feature Engineering on Numeric Data\n\nBy Numeric data we mean continuous data and not discrete data which is typically represented as categorical data. Integers and floats are the most common and widely used numeric data types for continuous numeric data. 
Even though numeric data can be directly fed into machine learning models, we still have to engineer and preprocess features which are relevant to the scenario, problem, domain and machine learning model.\n\nTo this end, we can distinguish between preprocessing and feature generation.\n\nTo work on our numeric features, we have to identify all numeric columns in our dataset:\n\n\n```python\nnumCols = [cname for cname in train_X.columns if train_X[cname].dtype != \"object\"]\nnumCols\n```\n\nTo avoid problems with missing values we use a ``SimpleImputer`` for the numeric columns before we continue:\n\n\n```python\nfrom sklearn.impute import SimpleImputer\n\nsimple_imputer = SimpleImputer()\n\ntrain_X_num = pd.DataFrame(simple_imputer.fit_transform(train_X[numCols]), columns=numCols, index=train_X.index)\nval_X_num = pd.DataFrame(simple_imputer.transform(val_X[numCols]), columns=numCols, index=val_X.index)\n```\n\n### Preprocessing\n\nOur dataset may contain attributes with a mixture of scales for various quantities. However, many machine learning methods require or at least are more effective if the data attributes have the same scale. \n\nFor example, ``capital gain`` and ``capital loss`` is measured in USD while age is measured in years in our dataset at hand.\n\nTo avoid having numeric values from different scales we can use two popular data scaling methods: normalization and standardization.\n\n#### Normalization\n\nNormalization refers to rescaling numeric attributes into the range 0 and 1. It is useful to scale the input attributes for a model that relies on the magnitude of values, such as distance measures used in k-nearest neighbors and in the preparation of coefficients in regression.\n\nUsing Scikit-learn's ``MinMaxScaler`` we can rescale an attribute according to the following formula:\n\n\n\\begin{equation}\n X = \\frac{(X - min(X))}{(max(X) - min(X))}\n\\end{equation}\n\n\n```python\nfrom sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\n\ntrain_X_num_normalized = pd.DataFrame(scaler.fit_transform(train_X_num), \n columns=train_X_num.columns, index=train_X_num.index)\nval_X_num_normalized = pd.DataFrame(scaler.transform(val_X_num), \n columns=train_X_num.columns, index=val_X_num.index)\n\ntrain_X_num_normalized\n```\n\n#### Standardization\n\nIn contrast to normalization, we could also use standardization for our numerical variables. In this context, standardization refers to shifting the distribution of each attribute to have a mean of zero and a standard deviation of one. It is useful to standardize attributes for a model that relies on the distribution of attributes such as Gaussian processes.\n\nUsing Scikit-learn's ```StandardScaler``` we can rescale an attribute according to the following formula:\n\n\n\\begin{equation}\n X = \\frac{(X - mean(X))}{\\sqrt{var(X)}}\n\\end{equation}\n\n\n```python\nfrom sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\n\ntrain_X_num_standardized = pd.DataFrame(scaler.fit_transform(train_X_num), \n columns=train_X_num.columns, index=train_X_num.index)\nval_X_num_standardized = pd.DataFrame(scaler.transform(val_X_num), \n columns=train_X_num.columns, index=val_X_num.index)\n\ntrain_X_num_standardized.head()\n```\n\n#### Summary\n\nData rescaling is an important part of data preparation before applying machine learning algorithms. However, it is hard to know whether normalization or standardization of the data will improve the performance of a predictive model in advance. 
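\n\nBefore comparing them in a model, we can at least verify that both transformations behave as intended, for example on the ``age`` column (a quick sketch using the dataframes created above):\n\n\n```python\n# Normalized values should lie in [0, 1]; standardized values should have\n# (approximately) zero mean and unit standard deviation.\nprint(train_X_num_normalized['age'].min(), train_X_num_normalized['age'].max())\nprint(train_X_num_standardized['age'].mean(), train_X_num_standardized['age'].std())\n```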
\n\nA good tip for a practical application is to create rescaled copies of your dataset and evaluate them against each other. This process can quickly show which rescaling method will improve your selected models in the problem at hand.\n\n### Binarization\n\nFor some problems raw frequencies or counts may not be relevant for building a model. In these cases it is only relevant if a numeric value exceeds a specific threshold (e.g. a person is at least 40 years old). Hence we do not require the number of times the action was performed but only a binary feature.\n\nWe can binarize a feature using Scikit-learn's ``Binarizer`` function (Note that we use the raw dataset for this example - clearly we could normalize or standardize the dataframe afterwards):\n\n\n```python\nfrom sklearn.preprocessing import Binarizer\n\ntrain_X_binary_age = train_X_num.copy()\nval_X_binary_age = val_X_num.copy()\n\nbinarizer = Binarizer(threshold=40)\n\ntrain_X_binary_age['40Plus'] = binarizer.transform([train_X_binary_age['age']])[0]\nval_X_binary_age['40Plus'] = binarizer.transform([val_X_binary_age['age']])[0]\n\ntrain_X_binary_age.head()\n```\n\n### Binning\n\nThe problem of working with raw, numeric features is that often the distribution of values in these features will be skewed. This signifies that some values will occur quite frequently while some will be quite rare. Hence there are strategies to deal with this, which include binning. \n\nBinning is used for transforming continuous numeric features into discrete ones. These discrete values can be interpreted as categories or bins into which the raw values are grouped into. Each group represents a specific degree of intensity and hence a specific range of continuous numeric values fall into it.\n\nLet's again use the age variable to perform two different types of binning.\n\n#### Fixed-Width Binning\n\nIn fixed-width binning, specific fixed widths for each bin are defined by the user. Each bin has a fixed range of values which should be assigned to that bin on the basis of some domain knowledge.\n\nWe can use Pandas ```cut``` function to bin the age into predefined groups and assign labels:\n\n\n```python\ntrain_X_bin_age = train_X_num.copy()\nval_X_bin_age = val_X_num.copy()\n\nbin_ranges = [0, 25, 60, 999]\nbin_labels = [0, 1, 2]\n\ntrain_X_bin_age['AgeBinned'] = pd.cut(train_X_bin_age['age'], \n bins=bin_ranges, labels=bin_labels)\nval_X_bin_age['AgeBinned'] = pd.cut(val_X_bin_age['age'], \n bins=bin_ranges, labels=bin_labels)\n\ntrain_X_bin_age.head()\n```\n\n#### Adaptive Binning\n\nThe major drawback in using fixed-width binning is unbalanced bin sizes. As we manually decide the bin ranges, we can end up with irregular bins which are not uniform based on the number of data points. 
Some bins (such as \"young (0)\" and \"old (2)\") might be sparsely populated while some (such as \"medium (1)\") are densely populated.\n\nTo overcome this issues we can use adaptive binning based on the distribution of the data.\n\nTo cut the space into equal partitions we can use the quantiles as cut-points:\n\n\n```python\nquantile_list = [0, 0.33, 0.66, 1]\nquantile_labels = [0, 1, 2]\n\ntrain_X_bin_age['AgeBinnedAdaptive'] = pd.qcut(train_X_bin_age['age'], \n q=quantile_list, labels=quantile_labels)\nval_X_bin_age['AgeBinnedAdaptive'] = pd.qcut(val_X_bin_age['age'], \n q=quantile_list, labels=quantile_labels)\n\ntrain_X_bin_age.head(5)\n```\n\n### Statistical Transformations\n\nMany variables, such as ``capital-gain`` or ``fnlwgt`` (sampling weight) span several orders of magnitude. While the vast majority of persons has very small capital-gains, a few people have very high gains. To work with such skewed variables we can use the log transformation. \n\nLog transforms are useful when applied to skewed distributions as they tend to expand the values which fall in the range of lower magnitudes and tend to compress or reduce the values which fall in the range of higher magnitudes. This tends to make the skewed distribution as normal-like as possible.\n\n\n```python\nimport numpy as np\n\ntrain_X_logGains = train_X_num.copy()\nval_X_logGains = val_X_num.copy()\n\ntrain_X_logGains['logfnlwgt'] = np.log1p(train_X_logGains['fnlwgt'])\nval_X_logGains['logfnlwgt'] = np.log1p(val_X_logGains['fnlwgt'])\n```\n\nWe can see this effect plotting both histograms:\n\n\n```python\n%matplotlib inline\ntrain_X_logGains[['fnlwgt', 'logfnlwgt']].hist();\n```\n\n### Evaluation\n\nWe can train support vector machines (``SVC``) using the different datasets and feature engineering techniques to evaluate their impact on the model performance. Note that we could (and should) combine these techniques to train powerful models and apply them in real-world problems.\n\n\n```python\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import accuracy_score\n\ndef score_dataset(X_train, X_valid, y_train, y_valid):\n model = SVC(gamma='auto', random_state=0)\n model.fit(X_train, y_train)\n preds = model.predict(X_valid)\n return accuracy_score(y_valid, preds)\n```\n\n\n```python\nprint(\"Raw Features: {}\".\n format(score_dataset(train_X_num, val_X_num, train_y, val_y)))\nprint(\"Normalized Features: {}\".\n format(score_dataset(train_X_num_normalized, val_X_num_normalized, train_y, val_y)))\nprint(\"Standardized Features: {}\".\n format(score_dataset(train_X_num_standardized, val_X_num_standardized, train_y, val_y)))\nprint(\"Binary Age: {}\".format(score_dataset(train_X_binary_age, val_X_binary_age, train_y, val_y)))\nprint(\"Binned Age: {}\".format(score_dataset(train_X_bin_age, val_X_bin_age, train_y, val_y)))\nprint(\"Log FNLWGT: {}\".format(score_dataset(train_X_logGains, val_X_logGains, train_y, val_y)))\n```\n\n## Feature Engineering on Categorical Data\n\nIn contrast to continuous numeric data we mean discrete values which belong to a specific finite set of categories or classes when we talk about categorical data. These discrete values can be text or numeric in nature and there are two major classes of categorical data, nominal and ordinal.\n\nWhile a lot of advancements have been made in state of the art machine learning frameworks to accept categorical data types like text labels. 
Typically any standard workflow in feature engineering involves some form of transformation of these categorical values into numeric labels and then applying some encoding scheme on these values.\n\n### Label and One-Hot-Encoding\n\nLast week, we already talked about label and one-hot-encoding to prepare our categorical features for machine learning models. To get started, we will impute missing values and encode all categorical features using the ``OrdinalEncoder``:\n\n\n```python\nfrom sklearn.preprocessing import OrdinalEncoder\n```\n\nAgain, we will use a helper function to evaluate the performance of our models. This time, we will rely on a random forest model.\n\n\n```python\ncatCols = [cname for cname in train_X.columns if train_X[cname].dtype == \"object\"]\n\ntrain_X_cat = train_X[catCols].copy()\nval_X_cat = val_X[catCols].copy()\n\nsimple_imputer = SimpleImputer(strategy='most_frequent')\n\ntrain_X_labelenc = pd.DataFrame(simple_imputer.fit_transform(train_X_cat), columns=train_X_cat.columns, index=train_X_cat.index)\nval_X_labelenc = pd.DataFrame(simple_imputer.transform(val_X_cat), columns=val_X_cat.columns, index=val_X_cat.index)\n\nordinal_encoder = OrdinalEncoder()\ntrain_X_labelenc = pd.DataFrame(ordinal_encoder.fit_transform(train_X_labelenc), columns=train_X_cat.columns, index=train_X_cat.index)\nval_X_labelenc = pd.DataFrame(ordinal_encoder.transform(val_X_labelenc), columns=val_X_cat.columns, index=val_X_cat.index)\n\ntrain_X_labelenc.head()\n```\n\n\n```python\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score\n\ndef score_dataset(X_train, X_valid, y_train, y_valid):\n model = RandomForestClassifier(n_estimators=100, random_state=0)\n model.fit(X_train, y_train)\n preds = model.predict(X_valid)\n return accuracy_score(y_valid, preds)\n```\n\nTo evaluate the model we combine the raw numerical data and the encoded categorical variables.\n\n\n```python\ntrain_X_label_num = train_X_num_standardized.join(train_X_labelenc.add_suffix(\"_labelenc\"))\nval_X_label_num = val_X_num_standardized.join(val_X_labelenc.add_suffix(\"_labelenc\"))\n\n\nprint(\"Label encoded categorical + raw numeric: {}\".\n format(score_dataset(train_X_label_num, val_X_label_num, train_y, val_y)))\n```\n\n### Count Encodings\n\nWhile label and one-hot encoding often yield good results, there are also a lot of other (more complex) techniques to encode categorical variables. The package [categorical-encoding](https://github.com/scikit-learn-contrib/categorical-encoding) offers implementations of many different techniques.\n\nOne prominent variant is called count encoding. Count encoding replaces each categorical value with the number of times it appears in the dataset. 
For example, if the value \"USA\" occures 50 times in the country feature, then each \"USA\" would be replaced with the number 50.\n\n\n```python\n!pip install category_encoders\n```\n\nor\n\n\n```python\n!conda install -c conda-forge category_encoders -y\n```\n\n\n```python\nfrom category_encoders import CountEncoder\n\ncount_encoder = CountEncoder(handle_unknown=0, handle_missing='value')\n\ntrain_X_countenc = count_encoder.fit_transform(train_X_cat)\nval_X_countenc = count_encoder.transform(val_X_cat)\n\ntrain_X_count_num = train_X_num.join(train_X_countenc.add_suffix(\"_countenc\"))\nval_X_count_num = val_X_num.join(val_X_countenc.add_suffix(\"_countenc\"))\n\nprint(\"Count encoded categorical + raw numeric: {}\".\n format(score_dataset(train_X_count_num, val_X_count_num, train_y, val_y)))\n```\n\n### Target Encodings\n\nTarget encoding is another advanced (but sometimes dangerous) approach to encode categorical features. It replaces a categorical value with the average value of the target for that value of the feature. \n\nFor example, given the country value \"GER\", you'd calculate the average outcome for all the rows with country == 'GER'. This value is often blended with the target probability over the entire dataset to reduce the variance of values with few occurences.\n\nThis technique uses the targets to create new features. So including the validation or test data in the target encodings would be a form of target leakage. Instead, you should learn the target encodings from the training dataset only and apply it to the other datasets (as we did with all other encoding methods).\n\n\n```python\nfrom category_encoders import TargetEncoder\n\ntarget_encoder = TargetEncoder()\n\ntrain_X_targetenc = target_encoder.fit_transform(train_X_cat, train_y)\nval_X_targetenc = target_encoder.transform(val_X_cat)\n\ntrain_X_target_num = train_X_num.join(train_X_targetenc.add_suffix(\"_targetenc\"))\nval_X_target_num = val_X_num.join(val_X_targetenc.add_suffix(\"_targetenc\"))\n\nprint(\"Target encoded categorical + raw numeric: {}\".\n format(score_dataset(train_X_target_num, val_X_target_num, train_y, val_y)))\n```\n\n### CatBoost Encoding\n\nFinally, we'll look at CatBoost encoding. This is similar to target encoding in that it's based on the target probablity for a given value. However with CatBoost, for each row, the target probability is calculated only from the rows before it.\n\n\n```python\nfrom category_encoders import CatBoostEncoder\n\ncatboost_encoder = CatBoostEncoder()\n\ntrain_X_catboostenc = catboost_encoder.fit_transform(train_X_cat, train_y)\nval_X_catboostenc = catboost_encoder.transform(val_X_cat)\n\ntrain_X_catboost_num = train_X_num.join(train_X_catboostenc.add_suffix(\"_targetenc\"))\nval_X_catboost_num = val_X_num.join(val_X_catboostenc.add_suffix(\"_targetenc\"))\n\nprint(\"CatBoost encoded categorical + raw numeric: {}\".\n format(score_dataset(train_X_catboost_num, val_X_catboost_num, train_y, val_y)))\n```\n\n### Warning\n\nTarget encoding is a powerful but dangerous way to improve on your machine learning methods. \n\nAdvantages: \n* Compact transformation of categorical variables\n* Powerful basis for feature engineering\n\nDisadvantages:\n* Careful validation is required to avoid overfitting\n* Significant performance improvements only on some datasets\n\n## Conclusion\n\nToday, we have seen a variety of ways to encode numerical and categorical features to improve the performance of our machine learning models. 
To try even more encoding methods you can try the implementations in the categorical-encoding package on [github](https://github.com/scikit-learn-contrib/categorical-encoding).\n\nWhile the approaches we have talked about today have the potential to create powerful models, they require a lot of manual tuning and testing. \n", "meta": {"hexsha": "63cdf736fba9fbbb562e0fcb3ba41cdf86ddff22", "size": 36428, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nbs/04_Feature_Engineering.ipynb", "max_stars_repo_name": "pds2122/course", "max_stars_repo_head_hexsha": "962801729cc3c72c2566d0aea77a6089aac71683", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "nbs/04_Feature_Engineering.ipynb", "max_issues_repo_name": "pds2122/course", "max_issues_repo_head_hexsha": "962801729cc3c72c2566d0aea77a6089aac71683", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "nbs/04_Feature_Engineering.ipynb", "max_forks_repo_name": "pds2122/course", "max_forks_repo_head_hexsha": "962801729cc3c72c2566d0aea77a6089aac71683", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-11-23T18:03:51.000Z", "max_forks_repo_forks_event_max_datetime": "2021-11-23T18:03:51.000Z", "avg_line_length": 34.5944919278, "max_line_length": 3806, "alphanum_fraction": 0.6246843088, "converted": true, "num_tokens": 5401, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.43014734858584286, "lm_q2_score": 0.19930798839207908, "lm_q1q2_score": 0.08573180275883076}} {"text": "\n\n---\n \n\u3053\u306e\u30d5\u30a1\u30a4\u30eb\u306f PyTorch \u306e\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u306b\u3042\u308b\u30d5\u30a1\u30a4\u30eb \u3092\u7ffb\u8a33\u3057\u3066\uff0c\u52a0\u7b46\u4fee\u6b63\u3057\u305f\u3082\u306e\n\u3067\u3059\u3002\n\n\u3059\u3050\u308c\u305f\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u306e\u5185\u5bb9\uff0c\u30b3\u30fc\u30c9\u3092\u516c\u958b\u3055\u308c\u305f PyTorch \u958b\u767a\u9663\u3068 Transfomer \u306e\u539f\u8457\u8ad6\u6587\u8457\u8005\u9663 (Vaswani \u3089) \u306b\u656c\u610f\u3092\u8868\u3057\u307e\u3059\u3002\n\n- Original: https://pytorch.org/tutorials/beginner/transformer_tutorial.html\n- Date: 2020-0807\n- Translated and modified: Shin Asakawa \n\n---\n\n\n```python\n# 2020\u5e748\u670811\u65e5\u73fe\u5728\uff0ctorchtext \u3092 upgrade \u3057\u306a\u3044\u3068\u3053\u306e\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u306f\u52d5\u4f5c\u3057\u306a\u3044\n!pip install --upgrade torchtext\n```\n\n Collecting torchtext\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/b9/f9/224b3893ab11d83d47fde357a7dcc75f00ba219f34f3d15e06fe4cb62e05/torchtext-0.7.0-cp36-cp36m-manylinux1_x86_64.whl (4.5MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.5MB 4.8MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: tqdm in /usr/local/lib/python3.6/dist-packages (from torchtext) (4.41.1)\n Collecting sentencepiece\n \u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl 
(1.1MB)\n \u001b[K |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.1MB 59.0MB/s \n \u001b[?25hRequirement already satisfied, skipping upgrade: torch in /usr/local/lib/python3.6/dist-packages (from torchtext) (1.6.0+cu101)\n Requirement already satisfied, skipping upgrade: numpy in /usr/local/lib/python3.6/dist-packages (from torchtext) (1.18.5)\n Requirement already satisfied, skipping upgrade: requests in /usr/local/lib/python3.6/dist-packages (from torchtext) (2.23.0)\n Requirement already satisfied, skipping upgrade: future in /usr/local/lib/python3.6/dist-packages (from torch->torchtext) (0.16.0)\n Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (2020.6.20)\n Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (1.24.3)\n Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (2.10)\n Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (3.0.4)\n Installing collected packages: sentencepiece, torchtext\n Found existing installation: torchtext 0.3.1\n Uninstalling torchtext-0.3.1:\n Successfully uninstalled torchtext-0.3.1\n Successfully installed sentencepiece-0.1.91 torchtext-0.7.0\n\n\n\n```python\n%load_ext autoreload\n%autoreload 2\n```\n\n\n```python\n# from https://github.com/dmlc/xgboost/issues/1715\nimport os\nos.environ['KMP_DUPLICATE_LIB_OK']='True'\n```\n\n\n```python\n%matplotlib inline\n```\n\n## ``nn.Transformer`` \u3068 ``TorchText`` \u3092\u7528\u3044\u305f Seq2Seq (\u7cfb\u5217-to-\u7cfb\u5217) \u30e2\u30c7\u30eb\n\n\n\u3053\u306e\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u3067\u306f\uff0c[nn.Transformer](https://pytorch.org/docs/master/nn.html?highlight=nn%20transformer#torch.nn.Transformer) \u30e2\u30b8\u30e5\u30fc\u30eb\u3092\u7528\u3044\u305f sequence-to-sequnce (\u8a33\u6ce8:\u65e5\u672c\u8a9e\u3067\u306f `seq2seq \u30e2\u30c7\u30eb` \u306a\u3069\u3068\u547c\u3070\u308c\u307e\u3059) \u30e2\u30c7\u30eb\u306e\u8a13\u7df4\u65b9\u6cd5\u3092\u793a\u3057\u307e\u3059\u3002\n\n\n\nPyTorch \u30ea\u30ea\u30fc\u30b9 1.2 \u306b\u306f\uff0c[Attention is All You Need](https://arxiv.org/pdf/1706.03762.pdf) (\u8a33\u6ce8:\u521d\u3081\u3066\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u3092\u63d0\u6848\u3057\u305f\u8ad6\u6587) \u306b\u57fa\u3065\u3044\u305f\u6a19\u6e96\u7684\u306a\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u30e2\u30b8\u30e5\u30fc\u30eb\u304c\u542b\u307e\u308c\u307e\u3059\u3002\n\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u306f\u4e26\u5217\u5316\u304c\u5bb9\u6613\u3067\uff0cseq2seq \u30e2\u30c7\u30eb\u3092\u51cc\u3050\u6027\u80fd\u304c\u793a\u3055\u308c\u3066\u3044\u307e\u3059\u3002\n``nn.Transfomer`` \u30e2\u30b8\u30e5\u30fc\u30eb\u306f\uff0c\u6ce8\u610f\u6a5f\u69cb\u306b\u57fa\u3065\u3044\u3066\uff0c\u5165\u51fa\u529b\u60c5\u5831\u9593\u5927\u57df\u7684\u4f9d\u5b58\u6027\u3092\u89e3\u6d88\u3059\u308b\u6a5f\u69cb\u3067\u3059\n(\u6700\u8fd1\u306e\u5225\u5b9f\u88c5\u306f [nn.MultiheadAttention](https://pytorch.org/docs/master/nn.html?highlight=multiheadattention#torch.nn.MultiheadAttention))\u3002\n``nn.Transformer`` 
\u306f\u5358\u4e00\u8981\u7d20\u3067\u69cb\u6210\u3055\u308c\u3066\u304a\u308a\uff0c\u672c\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u5185\u306e [nn.TransformerEncoder](https://pytorch.org/docs/master/nn.html?highlight=nn%20transformerencoder#torch.nn.TransformerEncoder) \u306e\u3054\u3068\u304f\uff0c\u4fee\u6b63\uff0c\u69cb\u6210\u304c\u5bb9\u6613\u3067\u3059\u3002\n\n\n
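参考までに、以下は `nn.TransformerEncoderLayer` を積み重ねて `nn.TransformerEncoder` を構成できることを示す最小限のスケッチです(`d_model`、`nhead`、層数、入力の形状は説明用に仮に選んだ値であり、本チュートリアル本体のモデルとは独立です)。

```python
# 最小限のスケッチ: TransformerEncoderLayer を num_layers 回積み重ねた
# TransformerEncoder に (系列長, バッチ, 埋め込み次元) のテンソルを通す例
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)

src = torch.rand(10, 32, 512)  # (S, N, E) = (系列長, バッチサイズ, 次元)
out = encoder(src)
print(out.shape)               # torch.Size([10, 32, 512]) 入力と同じ形状
```

このように部品を差し替えるだけで層数やヘッド数を変更できます。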
\n\n\n\n\n\n# \u30e2\u30c7\u30eb\u306e\u5b9a\u7fa9\n\n\n\n\n\u672c\u30c1\u30e5\u30fc\u30c8\u30ea\u30a2\u30eb\u3067\u306f \u8a00\u8a9e\u30e2\u30c7\u30eb\u8ab2\u984c\u3067 ``nn.TransformerEncoder`` \u30e2\u30c7\u30eb\u3092\u5b66\u7fd2\u3057\u307e\u3059\u3002\n\u8a00\u8a9e\u30e2\u30c7\u30eb\u8ab2\u984c\u3068\u306f \u4efb\u610f\u306e\u5358\u8a9e (\u307e\u305f\u306f\u5358\u8a9e\u7cfb\u5217\uff09 \u304c\u4e0e\u3048\u3089\u308c\u305f\u5834\u5408\u306b\uff0c\u5f8c\u7d9a\u3059\u308b\u5358\u8a9e\u306e\u5c24\u5ea6\uff08\u78ba\u7387\uff09\u3092\u5272\u308a\u5f53\u3066\u308b\u3053\u3068\u6307\u3057\u307e\u3059\u3002\n\u6587\u7ae0\u3092\u8868\u3059\u4e00\u9023\u306e\u30c8\u30fc\u30af\u30f3\u7cfb\u5217\u306f\uff0c\u57cb\u3081\u8fbc\u307f\u5c64\u306b\u5165\u529b\u3055\u308c \u305d\u306e\u5f8c\uff0c\u5358\u8a9e\u306e\u9806\u756a\u3092\u7b26\u53f7\u5316\u3057\u305f\u4f4d\u7f6e\u7b26\u53f7\u5316\u5c64\u306e\u60c5\u5831\u304c\u4ed8\u52a0\u3055\u308c\u307e\u3059(\u8a73\u7d30\u306f\u6b21\u30d1\u30e9\u30b0\u30e9\u30d5\u53c2\u7167)\u3002\n``nn.TransformerEncoder`` \u306f [nn.TransformerEncoderLayer](https://pytorch.org/docs/master/nn.html?highlight=transformerencoderlayer#torch.nn.TransformerEncoderLayer) \u3092\u69cb\u6210\u8981\u7d20\u3068\u3059\u308b\u8907\u6570\u5c64\u304b\u3089\u306a\u308b\u30cb\u30e5\u30fc\u30e9\u30eb\u30cd\u30c3\u30c8\u30ef\u30fc\u30af\u3067\u3059\u3002\n``nn.TransformerEncoder`` \u306e\u81ea\u5df1\u6ce8\u610f\u5c64\u306f \u5165\u529b\u7cfb\u5217\u306e\u521d\u982d\u306b\u8fd1\u3044\u4f4d\u7f6e\u306b\u3057\u304b\u6ce8\u610f\u3092\u6255\u3046\u3053\u3068\u304c\u3067\u304d\u306a\u3044\u305f\u3081\u3001\u5165\u529b\u7cfb\u5217\u306b\u5bfe\u3059\u308b \u30de\u30b9\u30af\u5316\u6ce8\u610f\u6a5f\u69cb\u304c\u5fc5\u8981\u3068\u306a\u308a\u307e\u3059\u3002\n\u8a00\u8a9e\u30e2\u30c7\u30eb\u8ab2\u984c\u3067\u306f \u5c06\u6765\u306e\u4f4d\u7f6e\u30c8\u30fc\u30af\u30f3\u304c\u30de\u30b9\u30af\u3055\u308c\u308b\u307e\u3059\u3002\n\u5b9f\u969b\u306e\u5358\u8a9e\u3092\u5f97\u308b\u305f\u3081 ``nn.TransformerEncoder`` \u30e2\u30c7\u30eb\u306e\u51fa\u529b\u306f\u6700\u7d42\u7dda\u5f62\u5c64\u306b\u9001\u3089\u308c \u6700\u7d42\u5c64\u3068\u3057\u3066 \u5bfe\u6570\u30bd\u30d5\u30c8\u30de\u30c3\u30af\u30b9\u95a2\u6570\u304c\u8a2d\u3051\u3089\u308c\u3066\u3044\u307e\u3059\u3002\n\n\n\n\n\n\n```python\nimport math\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass TransformerModel(nn.Module):\n\n def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5):\n super(TransformerModel, self).__init__()\n from torch.nn import TransformerEncoder, TransformerEncoderLayer\n self.model_type = 'Transformer'\n self.src_mask = None\n self.pos_encoder = PositionalEncoding(ninp, dropout)\n encoder_layers = TransformerEncoderLayer(ninp, nhead, nhid, dropout)\n self.transformer_encoder = TransformerEncoder(encoder_layers, nlayers)\n self.encoder = nn.Embedding(ntoken, ninp)\n self.ninp = ninp\n self.decoder = nn.Linear(ninp, ntoken)\n\n self.init_weights()\n\n def _generate_square_subsequent_mask(self, sz):\n mask = (torch.triu(torch.ones(sz, sz)) == 1).transpose(0, 1)\n mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))\n return mask\n\n def init_weights(self):\n initrange = 0.1\n self.encoder.weight.data.uniform_(-initrange, initrange)\n self.decoder.bias.data.zero_()\n self.decoder.weight.data.uniform_(-initrange, initrange)\n\n def forward(self, src):\n if self.src_mask is None or self.src_mask.size(0) != len(src):\n device = src.device\n mask = 
self._generate_square_subsequent_mask(len(src)).to(device)\n self.src_mask = mask\n\n src = self.encoder(src) * math.sqrt(self.ninp)\n src = self.pos_encoder(src)\n output = self.transformer_encoder(src, self.src_mask)\n output = self.decoder(output)\n return output\n```\n\n\n\n\u4f4d\u7f6e\u7b26\u53f7\u5316\u5668 ``PositionalEncoding`` \u30e2\u30b8\u30e5\u30fc\u30eb\u3092\u7528\u3044\u308b\u3053\u3068\u3067\uff0c\u7cfb\u5217\u4e2d\u306e\u30c8\u30fc\u30af\u30f3\u306e\u76f8\u5bfe\u4f4d\u7f6e\u3084\u7d76\u5bfe\u4f4d\u7f6e\u306b\u95a2\u3059\u308b\u60c5\u5831\u3092\u4ed8\u52a0\u3055\u308c\u307e\u3059\u3002\n\u4f4d\u7f6e\u7b26\u53f7\u5316\u5668\u306f\u57cb\u3081\u8fbc\u307f\u3068\u540c\u4e00\u6b21\u5143\u3092\u6301\u3061 \u4e21\u8005 \u3092\u5408\u7b97\u3057\u3066\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u3078\u306e\u5165\u529b\u3068\u3057\u307e\u3059\u3002\n\u3053\u3053\u3067\u306f \u7570\u306a\u308b\u5468\u6ce2\u6570\u306e ``sine``\uff08\u6b63\u5f26\u6ce2\uff09 \u3068 ``cosine`` \uff08\u4f59\u5f26\u6ce2\uff09 \u95a2\u6570\u3092\u5229\u7528\u3057\u307e\u3059\u3002\n\n### (\u8a33\u6ce8) Transformer: Attention is all you need\n\u539f\u8457\u8ad6\u6587\u4e2d\u306e \u4f4d\u7f6e\u7b26\u53f7\u5316\u5668\u306f\u4ee5\u4e0b\u306e\u3088\u3046\u306b\u5b9a\u7fa9\u3055\u308c\u3066\u3044\u308b:\n\u307e\u305a\uff0c\u30de\u30eb\u30c1\u30d8\u30c3\u30c9\u81ea\u5df1\u6ce8\u610f (MHSA) \u306f\uff0c\u30af\u30a8\u30ea\uff0c\u30ad\u30fc\uff0c\u30d0\u30ea\u30e5\u30fc\u30d9\u30af\u30c8\u30eb\u3092\u5b66\u7fd2\u3059\u3079\u304d\u30d9\u30af\u30c8\u30eb\u3068\u3057\u3066\u6b21\u5f0f\u3067\u5b9a\u7fa9\u3055\u308c\u308b:\n\n$$\n\\text{MultiHead}\\left(Q,K,V\\right)=\\text{Concat}\\left(\\mathop{head}_1,\\ldots,\\mathop{head}_h\\right)W^O\n$$\n\n\u3053\u3053\u3067\uff0c\u5404\u30d8\u30c3\u30c9\u306f, $\\text{head}_i =\\text{Attention}\\left(QW_i^Q,KW_i^K,VW_i^V\\right)$ \u3067\u3042\u308b\u3002\n\n\u305d\u308c\u305e\u308c\u306e\u6b21\u5143\u306f\u4ee5\u4e0b\u306e\u3068\u304a\u308a\u3067\u3042\u308b:\n\n\n- $W_i^Q\\in\\mathbb{R}^{d_{\\mathop{model}}\\times d_k}$,\n- $W_i^K \\in\\mathbb{R}^{d_{\\mathop{model}}\\times d_k}$,\n- $W_i^V\\in\\mathbb{R}^{d_{\\mathop{model}}\\times d_v}$, \n- $W^O\\in\\mathbb{R}^{hd_v\\times d_{\\mathop{model}}}$. 
$h=8$\n- $d_k=d_v=\\frac{d_{\\mathop{model}}}{h}=64$\n\n$$\\text{FFN}(x)=\\max\\left(0,xW_1+b_1\\right)W_2+b_2$$\n\n\n\n### (\u7d9a \u8a33\u6ce8) \u4f4d\u7f6e\u7b26\u53f7\u5668 Position encoders\n\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u306e\u5165\u529b\u306b\u306f\uff0c\u4e0a\u8ff0\u306e\u5358\u8a9e\u8868\u73fe\u306b\u52a0\u3048\u3066\uff0c\u4f4d\u7f6e\u7b26\u53f7\u5668\u304b\u3089\u306e\u4fe1\u53f7\u3082\u91cd\u306d\u5408\u308f\u3055\u308c\u308b\u3002\n\u4f4d\u7f6e $i$ \u306e\u4fe1\u53f7\u306f\u6b21\u5f0f\u3067\u5468\u6ce2\u6570\u9818\u57df\u3078\u3068\u5909\u63db\u3055\u308c\u308b:\n\n$$\n\\begin{align}\n\\text{PE}_{(\\text{pos},2i)} &= \\sin\\left(\\frac{\\text{pos}}{10000^{\\frac{2i}{d_{\\text{model}}}}}\\right)\\\\\n\\text{PE}_{(\\text{pos},2i+1)} &= \\cos\\left(\\frac{\\text{pos}}{10000^{\\frac{2i}{d_{\\text{model}}}}}\\right)\n\\end{align}\n$$\n\n\u4f4d\u7f6e\u7b26\u53f7\u5668\u306b\u3088\u308b\u4f4d\u7f6e\u8868\u73fe\u306f\uff0c$i$ \u756a\u76ee\u306e\u4f4d\u7f6e\u60c5\u5831\u3092\u30ef\u30f3\u30db\u30c3\u30c8\u8868\u73fe\u3059\u308b\u306e\u3067\u306f\u306a\u304f\uff0c\u5468\u6ce2\u6570\u9818\u57df\u306b\u5909\u63db\u3059\u308b\u3053\u3068\u3067\u5468\u671f\u60c5\u5831\u3092\u8868\u73fe\u3059\u308b\u8a66\u307f\u3068\u898b\u306a\u3057\u5f97\u308b\u3002\n\n\n\n```python\nclass PositionalEncoding(nn.Module):\n\n def __init__(self, d_model, dropout=0.1, max_len=5000):\n super(PositionalEncoding, self).__init__()\n self.dropout = nn.Dropout(p=dropout)\n\n pe = torch.zeros(max_len, d_model)\n position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)\n div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))\n pe[:, 0::2] = torch.sin(position * div_term)\n pe[:, 1::2] = torch.cos(position * div_term)\n pe = pe.unsqueeze(0).transpose(0, 1)\n self.register_buffer('pe', pe)\n\n def forward(self, x):\n x = x + self.pe[:x.size(0), :]\n return self.dropout(x)\n```\n\n\n```python\n#help(nn.Dropout)\n```\n\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nPE = PositionalEncoding(max_len=100, dropout=0., d_model=10)\n\n#PE(torch.rand(4))\n#torch.ones(4)\nX = PE(torch.Tensor((1,0,0,0,0,0,0,0,0,0))).detach().numpy()\n#plt.plot(range(len(X[0])), X[0])\nplt.plot(X[1][0])\nplt.plot(X[2][0])\nplt.plot(X[3][0])\n\n```\n\n\n\n# \u30c7\u30fc\u30bf\u306e\u30ed\u30fc\u30c9\u3068\u30d0\u30c3\u30c1\u5316\n\n\n\n\u8a13\u7df4\u306b\u306f ``torchtext`` \u306e Wikitext-2 \u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3092\u4f7f\u7528\u3057\u307e\u3059\u3002\nvocab \u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u306f\u8a13\u7df4\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u306b\u57fa\u3065\u3044\u3066\u69cb\u7bc9\u3055\u308c\uff0c\u30c8\u30fc\u30af\u30f3\u3092\u30c6\u30f3\u30bd\u30eb\u3078\u3068\u6570\u5024\u5316\u3059\u308b\u305f\u3081\u306b\u4f7f\u7528\u3055\u308c\u307e\u3059\u3002\n\u7cfb\u5217\u30c7\u30fc\u30bf\u304b\u3089 ``batchify()`` \u95a2\u6570\u3092\u4f7f\u3063\u3066\u30c7\u30fc\u30bf\u3092\u5217 column \u306b\u914d\u7f6e\u3057 ``batch_size`` \u306e\u5927\u304d\u3055\u306e\u30d0\u30c3\u30c1\u306b\u5206\u5272\u3057\u305f\u5f8c\u306b\u6b8b\u3063\u305f\u30c8\u30fc\u30af\u30f3\u3092\u5207\u308a\u53d6\u308a\u307e\u3059\u3002\n\u4f8b\u3048\u3070 \u30a2\u30eb\u30d5\u30a1\u30d9\u30c3\u30c8\u3092\u30b7\u30fc\u30b1\u30f3\u30b9 (\u5168\u957726) \u3068\u3057 \u30d0\u30c3\u30c1\u30b5\u30a4\u30ba\u3092 4 \u3068\u3059\u308b\u3068 \u30a2\u30eb\u30d5\u30a1\u30d9\u30c3\u30c8\u3092\u9577\u3055 6 \u306e 4 
\u3064\u306e\u30b7\u30fc\u30b1\u30f3\u30b9\u306b\u5206\u5272\u3059\u308b\u3053\u3068\u306b\u306a\u308a\u307e\u3059\u3002\n\n\\begin{align}\\begin{bmatrix}\n \\text{A} & \\text{B} & \\text{C} & \\ldots & \\text{X} & \\text{Y} & \\text{Z}\n \\end{bmatrix}\n \\Rightarrow\n \\begin{bmatrix}\n \\begin{bmatrix}\\text{A} \\\\ \\text{B} \\\\ \\text{C} \\\\ \\text{D} \\\\ \\text{E} \\\\ \\text{F}\\end{bmatrix} &\n \\begin{bmatrix}\\text{G} \\\\ \\text{H} \\\\ \\text{I} \\\\ \\text{J} \\\\ \\text{K} \\\\ \\text{L}\\end{bmatrix} &\n \\begin{bmatrix}\\text{M} \\\\ \\text{N} \\\\ \\text{O} \\\\ \\text{P} \\\\ \\text{Q} \\\\ \\text{R}\\end{bmatrix} &\n \\begin{bmatrix}\\text{S} \\\\ \\text{T} \\\\ \\text{U} \\\\ \\text{V} \\\\ \\text{W} \\\\ \\text{X}\\end{bmatrix}\n \\end{bmatrix}\\end{align}\n\n\n\n\u3053\u308c\u3089\u306e\u5217\u306f\u30e2\u30c7\u30eb\u306b\u3088\u3063\u3066\u72ec\u7acb\u3057\u305f\u3082\u306e\u3068\u3057\u3066\u6271\u308f\u308c ``G`` \u3068 ``F`` \u306e\u4f9d\u5b58\u6027\u3092\u5b66\u7fd2\u3059\u308b\u3053\u3068\u306f\u3067\u304d\u307e\u305b\u3093\u304c\u3001\u3088\u308a\u52b9\u7387\u7684\u306a\u30d0\u30c3\u30c1\u51e6\u7406\u304c\u53ef\u80fd\u306b\u306a\u308a\u307e\u3059\u3002\n\n\n\n\n\n```python\nimport torchtext\nfrom torchtext.data.utils import get_tokenizer\nTEXT = torchtext.data.Field(tokenize=get_tokenizer(\"basic_english\"),\n init_token='',\n eos_token='',\n lower=True)\ntrain_txt, val_txt, test_txt = torchtext.datasets.WikiText2.splits(TEXT)\nTEXT.build_vocab(train_txt)\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\ndef batchify(data, bsz):\n data = TEXT.numericalize([data.examples[0].text])\n # Divide the dataset into bsz parts.\n nbatch = data.size(0) // bsz\n # Trim off any extra elements that wouldn't cleanly fit (remainders).\n data = data.narrow(0, 0, nbatch * bsz)\n # Evenly divide the data across the bsz batches.\n data = data.view(bsz, -1).t().contiguous()\n return data.to(device)\n\nbatch_size = 20\neval_batch_size = 10\ntrain_data = batchify(train_txt, batch_size)\nval_data = batchify(val_txt, eval_batch_size)\ntest_data = batchify(test_txt, eval_batch_size)\n```\n\n /usr/local/lib/python3.6/dist-packages/torchtext/data/field.py:150: UserWarning: Field class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.\n warnings.warn('{} class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.'.format(self.__class__.__name__), UserWarning)\n\n\n downloading wikitext-2-v1.zip\n\n\n wikitext-2-v1.zip: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.48M/4.48M [00:00<00:00, 8.65MB/s]\n\n\n extracting\n\n\n /usr/local/lib/python3.6/dist-packages/torchtext/data/example.py:78: UserWarning: Example class will be retired in the 0.8.0 release and moved to torchtext.legacy. Please see 0.7.0 release notes for further information.\n warnings.warn('Example class will be retired in the 0.8.0 release and moved to torchtext.legacy. 
Please see 0.7.0 release notes for further information.', UserWarning)\n\n\n### \u5165\u529b\u7cfb\u5217\u3068\u30bf\u30fc\u30b2\u30c3\u30c8\u7cfb\u5217\u3092\u751f\u6210\u3059\u308b\u305f\u3081\u306e\u95a2\u6570\n\n\n\n\n\u95a2\u6570 ``get_batch()`` \u306f\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30e2\u30c7\u30eb\u306e\u5165\u529b\u7cfb\u5217\u3068\u76ee\u6a19\u7cfb\u5217\u3068\u3092\u751f\u6210\u3057\u307e\u3059\u3002\n\u30bd\u30fc\u30b9\u30c7\u30fc\u30bf\u3092\u9577\u3055 ``bptt`` \u306e\u30c1\u30e3\u30f3\u30af\u306b\u7d30\u5206\u5316\u3057\u307e\u3059\u3002\n\u8a00\u8a9e\u30e2\u30c7\u30eb\u8ab2\u984c\u3067\u306f\uff0c\u30e2\u30c7\u30eb\u306f ``Target`` \u3068\u3057\u3066\u4ee5\u4e0b\u306e\u5358\u8a9e\u3092\u5fc5\u8981\u3068\u3057\u307e\u3059\u3002\n\u4f8b\u3048\u3070\u3001 ``bptt`` \u306e\u5024\u304c 2 \u306e\u5834\u5408\u3001 ``i`` = 0 \u306e\u5834\u5408\uff0c\u4ee5\u4e0b\u306e 2 \u3064\u306e\u5909\u6570\u304c\u5f97\u3089\u308c\u307e\u3059\u3002\n\n\n\n\n\n\n\n\u30c1\u30e3\u30f3\u30af\u306f\u5bf8\u6cd5 0 \u306b\u6cbf\u3063\u3066\u304a\u308a\u3001\u30c8\u30e9\u30f3\u30b9\u30d5\u30a9\u30fc\u30de\u30fc\u30e2\u30c7\u30eb\u306e ``S`` \u5bf8\u6cd5\u3068\u4e00\u81f4\u3057\u3066\u3044\u308b\u3053\u3068\u306b\u6ce8\u610f\u3059\u308b\u5fc5\u8981\u304c\u3042\u308a\u307e\u3059\u3002\n\u30d0\u30c3\u30c1\u6b21\u5143 ``N`` \u306f\u6b21\u5143 1 \u306b\u6cbf\u3063\u3066\u3044\u307e\u3059\u3002\n\n\n\n\n\n```python\nbptt = 35\ndef get_batch(source, i):\n seq_len = min(bptt, len(source) - 1 - i)\n data = source[i:i+seq_len]\n target = source[i+1:i+1+seq_len].view(-1)\n return data, target\n```\n\n\n\n# \u30a4\u30f3\u30b9\u30bf\u30f3\u30b9\u306e\u521d\u671f\u5316\n\n\n\n\u30e2\u30c7\u30eb\u306f\u4ee5\u4e0b\u306e\u30cf\u30a4\u30d1\u30fc\u30d1\u30e9\u30e1\u30fc\u30bf\u3067\u8a2d\u5b9a\u3055\u308c\u3066\u3044\u307e\u3059\u3002\n\u8a9e\u5f59\u30b5\u30a4\u30ba\u306f\u30dc\u30ad\u30e3\u30d6\u30aa\u30d6\u30b8\u30a7\u30af\u30c8\u306e\u9577\u3055\u306b\u7b49\u3057\u3044\u3067\u3059\u3002\n\n\n```python\nntokens = len(TEXT.vocab.stoi) # the size of vocabulary\nemsize = 200 # embedding dimension\nnhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder\nnlayers = 2 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder\nnhead = 2 # the number of heads in the multiheadattention models\ndropout = 0.2 # the dropout value\nmodel = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(device)\n```\n\n\n\n# \u30e2\u30c7\u30eb\u306e\u5b9f\u884c\n\n\n\n\u640d\u5931\u3092\u8ffd\u8de1\u3059\u308b\u305f\u3081\u306b [CrossEntropyLoss](https://pytorch.org/docs/master/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss) \u3092\u9069\u7528\u3057 [SGD](https://pytorch.org/docs/master/optim.html?highlight=sgd#torch.optim.SGD) \u306f\u6700\u9069\u5316\u5668\u3068\u3057\u3066\u78ba\u7387\u7684\u52fe\u914d\u964d\u4e0b\u6cd5\u3092\u5b9f\u88c5\u3057\u3066\u3044\u307e\u3059\u3002\n\u521d\u671f\u5b66\u7fd2\u7387\u306f 5.0 \u306b\u8a2d\u5b9a\u3055\u308c\u3066\u3044\u307e\u3059\u3002\n[StepLR](https://pytorch.org/docs/master/optim.html?highlight=steplr#torch.optim.lr_scheduler.StepLR) \u306f\u30a8\u30dd\u30c3\u30af\u5358\u4f4d\u3067\u5b66\u7fd2\u7387\u3092\u8abf\u6574\u3059\u308b\u305f\u3081\u306b\u9069\u7528\u3055\u308c\u3066\u3044\u308b\u3002\n\u5b66\u7fd2\u4e2d\u306f [nn.utils.clip_grad_norm](https://pytorch.org/docs/master/nn.html?highlight=nn%20utils%20clip_grad_norm#torch.nn.utils.clip_grad_norm_) 
\u95a2\u6570\u3092\u7528\u3044\u3066\u3001\u7206\u767a\u3057\u306a\u3044\u3088\u3046\u306b\u5168\u3066\u306e\u52fe\u914d\u3092\u307e\u3068\u3081\u3066\u30b9\u30b1\u30fc\u30ea\u30f3\u30b0\u3057\u3066\u3044\u307e\u3059\u3002\n\n\n\n\n```python\ncriterion = nn.CrossEntropyLoss()\nlr = 5.0 # learning rate\noptimizer = torch.optim.SGD(model.parameters(), lr=lr)\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)\n\nimport time\ndef train():\n model.train() # Turn on the train mode\n total_loss = 0.\n start_time = time.time()\n ntokens = len(TEXT.vocab.stoi)\n for batch, i in enumerate(range(0, train_data.size(0) - 1, bptt)):\n data, targets = get_batch(train_data, i)\n optimizer.zero_grad()\n output = model(data)\n loss = criterion(output.view(-1, ntokens), targets)\n loss.backward()\n torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)\n optimizer.step()\n\n total_loss += loss.item()\n log_interval = 200\n if batch % log_interval == 0 and batch > 0:\n cur_loss = total_loss / log_interval\n elapsed = time.time() - start_time\n print('| epoch {:3d} | {:5d}/{:5d} batches | '\n 'lr {:02.2f} | ms/batch {:5.2f} | '\n 'loss {:5.2f} | ppl {:8.2f}'.format(\n epoch, batch, len(train_data) // bptt, scheduler.get_lr()[0],\n elapsed * 1000 / log_interval,\n cur_loss, math.exp(cur_loss)))\n total_loss = 0\n start_time = time.time()\n\ndef evaluate(eval_model, data_source):\n eval_model.eval() # Turn on the evaluation mode\n total_loss = 0.\n ntokens = len(TEXT.vocab.stoi)\n with torch.no_grad():\n for i in range(0, data_source.size(0) - 1, bptt):\n data, targets = get_batch(data_source, i)\n output = eval_model(data)\n output_flat = output.view(-1, ntokens)\n total_loss += len(data) * criterion(output_flat, targets).item()\n return total_loss / (len(data_source) - 1)\n```\n\n\n\u30a8\u30dd\u30c3\u30af\u3092\u30eb\u30fc\u30d7\u3057\u307e\u3059\u3002\n\u691c\u8a3c\u306e\u640d\u5931\u304c\u3053\u308c\u307e\u3067\u306e\u3068\u3053\u308d\u6700\u9ad8\u3067\u3042\u308c\u3070\u30e2\u30c7\u30eb\u3092\u4fdd\u5b58\u3057\u307e\u3059\u3002\n\u5404\u30a8\u30dd\u30c3\u30af\u306e\u5f8c\u306b\u5b66\u7fd2\u7387\u3092\u8abf\u6574\u3057\u307e\u3059\u3002\n\n\n\n\n```python\nbest_val_loss = float(\"inf\")\nepochs = 3 # The number of epochs\nbest_model = None\n\nfor epoch in range(1, epochs + 1):\n epoch_start_time = time.time()\n train()\n val_loss = evaluate(model, val_data)\n print('-' * 89)\n print('| end of epoch {:3d} | time: {:5.2f}s | valid loss {:5.2f} | '\n 'valid ppl {:8.2f}'.format(epoch, (time.time() - epoch_start_time),\n val_loss, math.exp(val_loss)))\n print('-' * 89)\n\n if val_loss < best_val_loss:\n best_val_loss = val_loss\n best_model = model\n\n scheduler.step()\n```\n\n /usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:351: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.\n \"please use `get_last_lr()`.\", UserWarning)\n\n\n | epoch 1 | 200/ 2981 batches | lr 5.00 | ms/batch 18.39 | loss 7.98 | ppl 2930.49\n | epoch 1 | 400/ 2981 batches | lr 5.00 | ms/batch 16.38 | loss 6.78 | ppl 882.26\n | epoch 1 | 600/ 2981 batches | lr 5.00 | ms/batch 16.43 | loss 6.36 | ppl 577.99\n | epoch 1 | 800/ 2981 batches | lr 5.00 | ms/batch 16.50 | loss 6.23 | ppl 506.77\n | epoch 1 | 1000/ 2981 batches | lr 5.00 | ms/batch 16.51 | loss 6.12 | ppl 453.77\n | epoch 1 | 1200/ 2981 batches | lr 5.00 | ms/batch 16.57 | loss 6.09 | ppl 440.96\n | epoch 1 | 1400/ 2981 batches | lr 5.00 | ms/batch 16.56 | loss 6.04 | ppl 418.54\n | epoch 1 | 
1600/ 2981 batches | lr 5.00 | ms/batch 16.69 | loss 6.04 | ppl 420.40\n | epoch 1 | 1800/ 2981 batches | lr 5.00 | ms/batch 16.68 | loss 5.95 | ppl 385.45\n | epoch 1 | 2000/ 2981 batches | lr 5.00 | ms/batch 16.71 | loss 5.95 | ppl 385.00\n | epoch 1 | 2200/ 2981 batches | lr 5.00 | ms/batch 16.77 | loss 5.84 | ppl 344.93\n | epoch 1 | 2400/ 2981 batches | lr 5.00 | ms/batch 16.83 | loss 5.89 | ppl 360.86\n | epoch 1 | 2600/ 2981 batches | lr 5.00 | ms/batch 16.85 | loss 5.90 | ppl 365.97\n | epoch 1 | 2800/ 2981 batches | lr 5.00 | ms/batch 16.87 | loss 5.80 | ppl 328.75\n -----------------------------------------------------------------------------------------\n | end of epoch 1 | time: 52.53s | valid loss 5.72 | valid ppl 303.92\n -----------------------------------------------------------------------------------------\n | epoch 2 | 200/ 2981 batches | lr 4.51 | ms/batch 17.11 | loss 5.79 | ppl 326.93\n | epoch 2 | 400/ 2981 batches | lr 4.51 | ms/batch 17.00 | loss 5.76 | ppl 318.27\n | epoch 2 | 600/ 2981 batches | lr 4.51 | ms/batch 17.09 | loss 5.58 | ppl 266.30\n | epoch 2 | 800/ 2981 batches | lr 4.51 | ms/batch 17.12 | loss 5.63 | ppl 277.53\n | epoch 2 | 1000/ 2981 batches | lr 4.51 | ms/batch 17.13 | loss 5.58 | ppl 264.12\n | epoch 2 | 1200/ 2981 batches | lr 4.51 | ms/batch 17.20 | loss 5.60 | ppl 271.37\n | epoch 2 | 1400/ 2981 batches | lr 4.51 | ms/batch 17.28 | loss 5.61 | ppl 274.10\n | epoch 2 | 1600/ 2981 batches | lr 4.51 | ms/batch 17.30 | loss 5.65 | ppl 283.50\n | epoch 2 | 1800/ 2981 batches | lr 4.51 | ms/batch 17.41 | loss 5.57 | ppl 261.41\n | epoch 2 | 2000/ 2981 batches | lr 4.51 | ms/batch 17.43 | loss 5.61 | ppl 272.49\n | epoch 2 | 2200/ 2981 batches | lr 4.51 | ms/batch 17.44 | loss 5.50 | ppl 244.07\n | epoch 2 | 2400/ 2981 batches | lr 4.51 | ms/batch 17.55 | loss 5.57 | ppl 261.60\n | epoch 2 | 2600/ 2981 batches | lr 4.51 | ms/batch 17.67 | loss 5.58 | ppl 265.11\n | epoch 2 | 2800/ 2981 batches | lr 4.51 | ms/batch 17.66 | loss 5.50 | ppl 245.18\n -----------------------------------------------------------------------------------------\n | end of epoch 2 | time: 54.18s | valid loss 5.59 | valid ppl 266.66\n -----------------------------------------------------------------------------------------\n | epoch 3 | 200/ 2981 batches | lr 4.29 | ms/batch 17.67 | loss 5.54 | ppl 254.90\n | epoch 3 | 400/ 2981 batches | lr 4.29 | ms/batch 17.40 | loss 5.55 | ppl 256.32\n | epoch 3 | 600/ 2981 batches | lr 4.29 | ms/batch 17.41 | loss 5.36 | ppl 211.86\n | epoch 3 | 800/ 2981 batches | lr 4.29 | ms/batch 17.35 | loss 5.41 | ppl 223.17\n | epoch 3 | 1000/ 2981 batches | lr 4.29 | ms/batch 17.34 | loss 5.37 | ppl 215.28\n | epoch 3 | 1200/ 2981 batches | lr 4.29 | ms/batch 17.27 | loss 5.41 | ppl 223.75\n | epoch 3 | 1400/ 2981 batches | lr 4.29 | ms/batch 17.23 | loss 5.43 | ppl 228.14\n | epoch 3 | 1600/ 2981 batches | lr 4.29 | ms/batch 17.23 | loss 5.47 | ppl 236.73\n | epoch 3 | 1800/ 2981 batches | lr 4.29 | ms/batch 17.21 | loss 5.40 | ppl 222.20\n | epoch 3 | 2000/ 2981 batches | lr 4.29 | ms/batch 17.22 | loss 5.43 | ppl 228.46\n | epoch 3 | 2200/ 2981 batches | lr 4.29 | ms/batch 17.21 | loss 5.32 | ppl 205.26\n | epoch 3 | 2400/ 2981 batches | lr 4.29 | ms/batch 17.23 | loss 5.39 | ppl 220.26\n | epoch 3 | 2600/ 2981 batches | lr 4.29 | ms/batch 17.25 | loss 5.41 | ppl 223.05\n | epoch 3 | 2800/ 2981 batches | lr 4.29 | ms/batch 17.27 | loss 5.34 | ppl 209.39\n 
-----------------------------------------------------------------------------------------\n | end of epoch 3 | time: 54.08s | valid loss 5.50 | valid ppl 244.09\n -----------------------------------------------------------------------------------------\n\n\n\n\n# \u30c6\u30b9\u30c8\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3092\u7528\u3044\u305f\u30e2\u30c7\u30eb\u306e\u8a55\u4fa1\n\u30e2\u30c7\u30eb\u3092\u30c6\u30b9\u30c8\u30c7\u30fc\u30bf\u30bb\u30c3\u30c8\u3067\u8a55\u4fa1\u3057\u307e\u3059\u3002\n\n\n\n\n\n\n```python\ntest_loss = evaluate(best_model, test_data)\nprint('=' * 89)\nprint('| End of training | test loss {:5.2f} | test ppl {:8.2f}'.format(\n test_loss, math.exp(test_loss)))\nprint('=' * 89)\n```\n\n =========================================================================================\n | End of training | test loss 5.40 | test ppl 221.43\n =========================================================================================\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a7ed2f6f05d5bed66ef64f246ecc65d337c80b3f", "size": 65652, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "notebooks/01PyTorchTEXT_transformer_tutorial.ipynb", "max_stars_repo_name": "JPA-BERT/jpa-bert.github.io", "max_stars_repo_head_hexsha": "d0acda35703d876582b90b80298cfe0fa8590512", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "notebooks/01PyTorchTEXT_transformer_tutorial.ipynb", "max_issues_repo_name": "JPA-BERT/jpa-bert.github.io", "max_issues_repo_head_hexsha": "d0acda35703d876582b90b80298cfe0fa8590512", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "notebooks/01PyTorchTEXT_transformer_tutorial.ipynb", "max_forks_repo_name": "JPA-BERT/jpa-bert.github.io", "max_forks_repo_head_hexsha": "d0acda35703d876582b90b80298cfe0fa8590512", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 69.6203605514, "max_line_length": 23866, "alphanum_fraction": 0.6781057698, "converted": true, "num_tokens": 9434, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.48828339529583464, "lm_q2_score": 0.1755380800931169, "lm_q1q2_score": 0.08571232975157927}} {"text": "

M\u00e9todos Num\u00e9ricos

\n

Cap\u00edtulo 1: Error y Representaci\u00f3n de n\u00fameros en el computador

\n

2021/02

\n

MEDELL\u00cdN - COLOMBIA

\n\n\n \n
\n Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Carlos Alberto Alvarez Henao
\n\n*** \n\n***Docente:*** Carlos Alberto \u00c1lvarez Henao, I.C. D.Sc.\n\n***e-mail:*** carlosalvarezh@gmail.com\n\n***skype:*** carlos.alberto.alvarez.henao\n\n***Linkedin:*** https://www.linkedin.com/in/carlosalvarez5/\n\n***github:*** https://github.com/carlosalvarezh/Metodos_Numericos\n\n***Herramienta:*** [Jupyter](http://jupyter.org/)\n\n***Kernel:*** Python 3.8\n\n\n***\n\n\n\n

## Tabla de Contenidos

\n\n\n***Comentario:*** este cap\u00edtulo est\u00e1 basado en parte de las notas del curso del profesor [Kyle T. Mandli](https://github.com/mandli/intro-numerical-methods) (en ingl\u00e9s)\n\n

\n \n

\n\n\n\n\n```python\n#Bibliotecas a ser utilizadas en el Notebook\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sympy\nimport scipy.special\n\n```\n\n## Fuentes de error\n\nLos c\u00e1lculos num\u00e9ricos, que involucran el uso de m\u00e1quinas (an\u00e1logas o digitales) presentan una serie de errores que provienen de diferentes fuentes:\n\n- del Modelo\n\n- de los datos\n\n- de truncamiento\n\n- de representaci\u00f3n de los n\u00fameros (punto flotante)\n\n- $ \\ldots$\n\n***Meta:*** Categorizar y entender cada tipo de error y explorar algunas aproximaciones simples para analizarlas.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Error en el modelo y los datos\n\nErrores en la formulaci\u00f3n fundamental\n\n- Error en los datos: imprecisiones en las mediciones o incertezas en los par\u00e1metros\n\nInfortunadamente no tenemos control de los errores en los datos y el modelo de forma directa pero podemos usar m\u00e9todos que pueden ser m\u00e1s robustos en la presencia de estos tipos de errores.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Error de truncamiento\n\nLos errores surgen de la expansi\u00f3n de funciones con una funci\u00f3n simple, por ejemplo, $sin(x) \\approx x$ para $|x|\\approx0$.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Error de representaci\u00f3n de punto fotante\n\nLos errores surgen de aproximar n\u00fameros reales con la representaci\u00f3n en precisi\u00f3n finita de n\u00fameros en el computador.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Definiciones b\u00e1sicas\n\nDado un valor verdadero de una funci\u00f3n $f$ y una soluci\u00f3n aproximada $F$, se define:\n\n#### Error absoluto\n\n$$e_a=|f-F|$$\n\n\n***Ejemplo:*** se realiza una medici\u00f3n y se obtiene un valor aproximado de $29.99$ *m*. Asumiendo que el valor exacto de dicha medici\u00f3n deber\u00eda ser de $30.00$ *m*, \u00bfcu\u00e1l es el error absoluto obtenido? Cu\u00e1l ser\u00eda el error absoluto si se disminuye un orden de magnitud las cantidades?\n\n\n```python\nf = 30.0\nF = 29.9\n```\n\n\n```python\nea = abs(f - F)\nprint(\"{0:6.4f}\".format(ea)) \n```\n\n\n```python\n# reduciendo un \u00f3rden de magnitud las cantidades\n\nf = 3.0\nF = 2.9\n```\n\n\n```python\nea = abs(f - F)\nprint(\"{0:6.4f}\".format(ea)) \n```\n\nSe observa que el valor del error absoluto es igual ($\\approx 0.1$), independiente de la magnitud de las cantidades. 
\n\n[Volver a la Tabla de Contenido](#TOC)\n\n#### Error relativo\n\n$$e_r (\\%)= \\frac{e_a}{|f|}=\\frac{|f-F|}{|f|} \\times 100 \\%$$\n\n\n***Ejemplo:*** Repetir el ejemplo anterior, pero calculando el error relativo porcentual.\n\n\n```python\nf = 30.0\nF = 29.9\n```\n\n\n```python\ner = abs(f - F) / f * 100\nprint(\"{0:6.4f}%\".format(er))\n```\n\n\n```python\n# reduciendo un \u00f3rden de magnitud las cantidades\n\nf = 3.0\nF = 2.9\n```\n\n\n```python\ner = abs(f - F) / f * 100\nprint(\"{0:6.4f}%\".format(er))\n```\n\nSe observa que los resultados son diferentes y es mayor cuando las cantidades medidas son menores.\n\nEntre las dos formas de representar el error, la relativa es m\u00e1s consistente con la magnitud de lo que se est\u00e1 midiendo.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n## Notaci\u00f3n $\\text{Big}-\\mathcal{O}$\n\nsea $$f(x)= \\mathcal{O}(g(x)) \\text{ cuando } x \\rightarrow a$$\n\nsi y solo si\n\n$$|f(x)|\\leq M|g(x)| \\text{ cuando } |x-a| < \\delta \\text{ donde } M, a > 0$$\n\n\nEn la pr\u00e1ctica, usamos la notaci\u00f3n $\\text{Big}-\\mathcal{O}$ para decir algo sobre c\u00f3mo se pueden comportar los t\u00e9rminos que podemos haber dejado fuera de una serie. Veamos el siguiente ejemplo de la aproximaci\u00f3n de la serie de Taylor:\n\n***Ejemplo:***\n\nsea $f(x) = \\sin(x)$ con $x_0 = 0$ entonces\n\n$$T_N(x) = \\sum^N_{n=0} (-1)^{n} \\frac{x^{2n+1}}{(2n+1)!}$$\n\nPodemos escribir $f(x)$ como\n\n$$f(x) = x - \\frac{x^3}{6} + \\frac{x^5}{120} + \\mathcal{O}(x^7)$$\n\nEsto se vuelve m\u00e1s \u00fatil cuando lo vemos como lo hicimos antes con $\\Delta x$:\n\n$$f(x) = \\Delta x - \\frac{\\Delta x^3}{6} + \\frac{\\Delta x^5}{120} + \\mathcal{O}(\\Delta x^7)$$\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Reglas para el error de propagaci\u00f3n basado en la notaci\u00f3n $\\text{Big}-\\mathcal{O}$\n\nEn general, existen dos teoremas que no necesitan prueba y se mantienen cuando el valor de $x$ es grande:\n\nSea\n\n$$\\begin{aligned}\n f(x) &= p(x) + \\mathcal{O}(x^n) \\\\\n g(x) &= q(x) + \\mathcal{O}(x^m) \\\\\n k &= \\max(n, m)\n\\end{aligned}$$\n\nEntonces\n\n$$\n f+g = p + q + \\mathcal{O}(x^k)\n$$\n\ny\n\n\\begin{align}\n f \\cdot g &= p \\cdot q + p \\mathcal{O}(x^m) + q \\mathcal{O}(x^n) + O(x^{n + m}) \\\\\n &= p \\cdot q + \\mathcal{O}(x^{n+m})\n\\end{align}\n\nDe otra forma, si estamos interesados en valores peque\u00f1os de $x$, $\\Delta x$, la expresi\u00f3n puede ser modificada como sigue:\n\n\\begin{align}\n f(\\Delta x) &= p(\\Delta x) + \\mathcal{O}(\\Delta x^n) \\\\\n g(\\Delta x) &= q(\\Delta x) + \\mathcal{O}(\\Delta x^m) \\\\\n r &= \\min(n, m)\n\\end{align}\n\nentonces\n\n$$\n f+g = p + q + O(\\Delta x^r)\n$$\n\ny\n\n\\begin{align}\n f \\cdot g &= p \\cdot q + p \\cdot \\mathcal{O}(\\Delta x^m) + q \\cdot \\mathcal{O}(\\Delta x^n) + \\mathcal{O}(\\Delta x^{n+m}) \\\\\n &= p \\cdot q + \\mathcal{O}(\\Delta x^r)\n\\end{align}\n\n***Nota:*** En este caso, supongamos que al menos el polinomio con $k=max(n,m)$ tiene la siguiente forma:\n\n$$\n p(\\Delta x) = 1 + p_1 \\Delta x + p_2 \\Delta x^2 + \\ldots\n$$\n\no\n\n$$\n q(\\Delta x) = 1 + q_1 \\Delta x + q_2 \\Delta x^2 + \\ldots\n$$\n\npara que $\\mathcal{O}(1)$ \n\n\nde modo que hay un t\u00e9rmino $\\mathcal{O}(1)$ que garantiza la existencia de $\\mathcal{O}(\\Delta x^r)$ en el producto final.\n\nPara tener una idea de por qu\u00e9 importa m\u00e1s la potencia en $\\Delta x$ al considerar la convergencia, la siguiente figura muestra c\u00f3mo las diferentes potencias en la tasa de 
convergencia pueden afectar la rapidez con la que converge nuestra soluci\u00f3n. Tenga en cuenta que aqu\u00ed estamos dibujando los mismos datos de dos maneras diferentes. Graficar el error como una funci\u00f3n de $\\Delta x$ es una forma com\u00fan de mostrar que un m\u00e9todo num\u00e9rico est\u00e1 haciendo lo que esperamos y muestra el comportamiento de convergencia correcto. Dado que los errores pueden reducirse r\u00e1pidamente, es muy com\u00fan trazar este tipo de gr\u00e1ficos en una escala log-log para visualizar f\u00e1cilmente los resultados. Tenga en cuenta que si un m\u00e9todo fuera realmente del orden $n$, ser\u00e1 una funci\u00f3n lineal en el espacio log-log con pendiente $n$.\n\n\n```python\ndx = np.linspace(1.0, 1e-4, 100)\n\nfig = plt.figure()\nfig.set_figwidth(fig.get_figwidth() * 2.0)\naxes = []\naxes.append(fig.add_subplot(1, 2, 1))\naxes.append(fig.add_subplot(1, 2, 2))\n\nfor n in range(1, 5):\n axes[0].plot(dx, dx**n, label=\"$\\Delta x^%s$\" % n)\n axes[1].loglog(dx, dx**n, label=\"$\\Delta x^%s$\" % n)\n\naxes[0].legend(loc=2)\naxes[1].set_xticks([10.0**(-n) for n in range(5)])\naxes[1].set_yticks([10.0**(-n) for n in range(16)])\naxes[1].legend(loc=4)\nfor n in range(2):\n axes[n].set_title(\"Crecimiento del Error vs. $\\Delta x^n$\")\n axes[n].set_xlabel(\"$\\Delta x$\")\n axes[n].set_ylabel(\"Error Estimado\")\n axes[n].set_title(\"Crecimiento de las diferencias\")\n axes[n].set_xlabel(\"$\\Delta x$\")\n axes[n].set_ylabel(\"Error Estimado\")\n\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n## Error de truncamiento\n\n***Teorema de Taylor:*** Sea $f(x) \\in C^{m+1}[a,b]$ y $x_0 \\in [a,b]$, para todo $x \\in (a,b)$ existe un n\u00famero $c = c(x)$ que se encuentra entre $x_0$ y $x$ tal que\n\n$$ f(x) = T_N(x) + R_N(x)$$\n\ndonde $T_N(x)$ es la aproximaci\u00f3n del polinomio de Taylor\n\n$$T_N(x) = \\sum^N_{n=0} \\frac{f^{(n)}(x_0)\\times(x-x_0)^n}{n!}$$\n\ny $R_N(x)$ es el residuo (la parte de la serie que obviamos)\n\n$$R_N(x) = \\frac{f^{(n+1)}(c) \\times (x - x_0)^{n+1}}{(n+1)!}$$\n\nOtra forma de pensar acerca de estos resultados consiste en reemplazar $x - x_0$ con $\\Delta x$. 
La idea principal es que el residuo $R_N(x)$ se vuelve mas peque\u00f1o cuando $\\Delta x \\rightarrow 0$.\n\n$$T_N(x) = \\sum^N_{n=0} \\frac{f^{(n)}(x_0)\\times \\Delta x^n}{n!}$$\n\ny $R_N(x)$ es el residuo (la parte de la serie que obviamos)\n\n$$ R_N(x) = \\frac{f^{(n+1)}(c) \\times \\Delta x^{n+1}}{(n+1)!} \\leq M \\Delta x^{n+1}$$\n\n***Ejemplo 1:***\n\n$f(x) = e^x$ con $x_0 = 0$\n\nUsando esto podemos encontrar expresiones para el error relativo y absoluto en funci\u00f3n de $x$ asumiendo $N=2$.\n\nDerivadas:\n$$\\begin{aligned}\n f'(x) &= e^x \\\\\n f''(x) &= e^x \\\\ \n f^{(n)}(x) &= e^x\n\\end{aligned}$$\n\nPolinomio de Taylor:\n$$\\begin{aligned}\n T_N(x) &= \\sum^N_{n=0} e^0 \\frac{x^n}{n!} \\Rightarrow \\\\\n T_2(x) &= 1 + x + \\frac{x^2}{2}\n\\end{aligned}$$\n\nRestos:\n$$\\begin{aligned}\n R_N(x) &= e^c \\frac{x^{n+1}}{(n+1)!} = e^c \\times \\frac{x^3}{6} \\quad \\Rightarrow \\\\\n R_2(x) &\\leq \\frac{e^1}{6} \\approx 0.5\n\\end{aligned}$$\n\nPrecisi\u00f3n:\n$$\n e^1 = 2.718\\ldots \\\\\n T_2(1) = 2.5 \\Rightarrow e \\approx 0.2 ~~ r \\approx 0.1\n$$\n\n\u00a1Tambi\u00e9n podemos usar el paquete `sympy` que tiene la capacidad de calcular el polinomio de *Taylor* integrado!\n\n\n```python\nx = sympy.symbols('x')\nf = sympy.symbols('f', cls=sympy.Function)\n\nf = sympy.exp(x)\nf.series(x0=0, n=11)\n```\n\n\n```python\na = 1.1**500\n\nprint(a)\n```\n\nGraficando\n\n\n```python\nx = np.linspace(-5, 5, 100)\nT_N = 1.0 + x + x**2 / 2.0 + x**3 / 6.0 + x**4 / 24.0 + x**5 / 120.0 + x**6 / 720.0 + x**7 / 5040.0 + x**8 / 40320.0 + x**9 / 362880\nR_N = np.exp(1) * x**10 / 3628800.0\n\nplt.plot(x, T_N, 'r', x, np.exp(x), 'k', x, R_N, 'b')\nplt.plot(0.0, 1.0, 'o', markersize=10)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"$f(x)$, $T_N(x)$, $R_N(x)$\")\nplt.legend([\"$T_N(x)$\", \"$f(x)$\", \"$R_N(x)$\"], loc=2)\nplt.show()\n```\n\n\n```python\nR_N\n```\n\n***Ejemplo 2:***\n\nAproximar\n\n$$ f(x) = \\frac{1}{x} \\quad x_0 = 1,$$\n\nusando $x_0 = 1$ para el tercer termino de la serie de Taylor.\n\n$$\\begin{aligned}\n f'(x) &= -\\frac{1}{x^2} \\\\\n f''(x) &= \\frac{2}{x^3} \\\\\n f^{(n)}(x) &= \\frac{(-1)^n n!}{x^{n+1}}\n\\end{aligned}$$\n\n$$\\begin{aligned}\n T_N(x) &= \\sum^N_{n=0} (-1)^n (x-1)^n \\Rightarrow \\\\\n T_2(x) &= 1 - (x - 1) + (x - 1)^2\n\\end{aligned}$$\n\n$$\\begin{aligned}\n R_N(x) &= \\frac{(-1)^{n+1}(x - 1)^{n+1}}{c^{n+2}} \\Rightarrow \\\\\n R_2(x) &= \\frac{-(x - 1)^{3}}{c^{4}}\n\\end{aligned}$$\n\n\n```python\nx = np.linspace(0.8, 2, 100)\nT_N = 1.0 - (x-1) + (x-1)**2\nR_N = -(x-1.0)**3 / (1.1**4)\n\nplt.plot(x, T_N, 'r', x, 1.0 / x, 'k', x, R_N, 'b')\nplt.plot(1.0, 1.0, 'o', markersize=10)\nplt.grid(True)\nplt.xlabel(\"x\")\nplt.ylabel(\"$f(x)$, $T_N(x)$, $R_N(x)$\")\n\nplt.legend([\"$T_N(x)$\", \"$f(x)$\", \"$R_N(x)$\"], loc=8)\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Laboratorio Num\u00e9rico 1\n\n
\n$\\color{red}{\\textbf{Ejercicio:}}$ Realice la expansi\u00f3n de la serie de Taylor para los dos ejemplos anteriores con 3, 4 y 5 t\u00e9rminos. Cu\u00e1l es el error que se tiene a medida que se adicionan m\u00e1s t\u00e9rminos? Realice una gr\u00e1fica comparativa del Residuo que se obtiene para cada t\u00e9rmino adicional. Haga un an\u00e1lisis de lo que sucede. Si extendemos hasta el \"infinito\" dicho residuo qu\u00e9 pueden concluir?\n
\n\n\n[Volver a la Tabla de Contenido](#TOC)\n\n## Error de punto flotante\n\nErrores surgen de aproximar n\u00fameros reales con n\u00fameros de precisi\u00f3n finita\n\n$$\\pi \\approx 3.14$$\n\no $\\frac{1}{3} \\approx 0.333333333$ en decimal, los resultados forman un n\u00famero finito de registros para representar cada n\u00famero.\n\n***Ej.:*** considere la representaci\u00f3n de $\\sqrt{2}=1.4142 \\ldots$. Como sabemos, \u00e9ste es un n\u00famero irracional, es decir, tiene una cantidad infinita de d\u00edgitos decimales. El computador almacena de forma incompleta la represenaci\u00f3n de ese valor empleando cierta cantidad de n\u00fameros decimales\n\n$$2 - (\\sqrt{2})^2$$\n\n\n```python\na = np.sqrt(2)\nprint(\"a: \", a)\n```\n\n a: 1.4142135623730951\n\n\n\n```python\nb = abs(2-a**2)\nprint(\"|2-a^2| = \", b)\n```\n\n |2-a^2| = 4.440892098500626e-16\n\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Aritm\u00e9tica de punto flotante\n\nLos n\u00fameros en sistemas de [punto flotante](https://en.wikipedia.org/wiki/Floating-point_arithmetic \"Floating point arithmetic\") se representan como una serie de bits que representan diferentes partes de un n\u00famero. En los sistemas de punto flotante normalizados, existen algunas convenciones est\u00e1ndar para el uso de estos bits. En general, los n\u00fameros se almacenan dividi\u00e9ndolos en la forma\n\n$$fl(x) = \\pm (0.d_1 d_2 d_3 \\ldots d_p)_\\beta \\times \\beta^E$$\n\ndonde los digitos $\\{d_i\\}_{i=1}^p$ son enteros tales que $0\\leq d_i \\leq \\beta-1$ y $d_1 \\neq 0$\n\nEl sistema se caracteriza por cuatro n\u00fameros enteros:\n\n- la *base* $\\beta>1$. Para el sistema binario $\\beta = 2$, para decimal $\\beta = 10$, etc.\n\n\n- La precisi\u00f3n $p \\geq 1$, que representa la cantidad de d\u00edgitos significativos, y\n\n\n- el *exponente* $E$, que es un entero en el rango $[E_{\\min}, E_{\\max}]$\n\n\n$\\pm$ es un bit \u00fanico y representa el signo del n\u00famero.\n\nLos puntos importantes en cualquier sistema de punto flotante son:\n\n1. Existe un conjunto discreto y finito de n\u00fameros representables.\n\n\n2. Estos n\u00fameros representables no est\u00e1n distribuidos uniformemente en la l\u00ednea real\n\n\n3. La aritm\u00e9tica en sistemas de punto flotante produce resultados diferentes de la aritm\u00e9tica de precisi\u00f3n infinita (es decir, matem\u00e1tica \"real\")\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Notaci\u00f3n de punto flotante\n\nEs com\u00fan encontrar la siguiente notaci\u00f3n para representar un conjunto de n\u00fameros de punto flotante:\n\n$$F(\\beta, p, E_{min}, E_{max})$$\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Propiedades de los sistemas de punto flotante\n\nTodos los sistemas de punto flotante se caracterizan por varios n\u00fameros importantes\n\n- N\u00famero normalizado reducido ([*underflow*](https://en.wikipedia.org/wiki/Arithmetic_underflow) si est\u00e1 por debajo, relacionado con n\u00fameros sub-normales alrededor de cero)\n\n\n- N\u00famero normalizado m\u00e1s grande ([*overflow*](https://en.wikipedia.org/wiki/Integer_overflow))\n\n\n- Cero\n\n\n- $\\epsilon$ o $\\epsilon_{mach}$\n\n\n- `Inf` y `nan`\n\n***Ejemplo: Sistema de juguete***\n\nConsidere el sistema decimal de 2 digitos de precisi\u00f3n (normalizado)\n\n$$F(10,2,-2,0)$$\n\n$$f = \\pm 0.d_1d_2 \\times 10^E$$\n\ncon $E \\in [-2, 0]$.\n\n**Numero y distribuci\u00f3n de n\u00fameros**\n\n\n1. Cu\u00e1ntos n\u00fameros pueden representarse con este sistema?\n\n\n2. 
Cu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n3. Cu\u00e1les son los l\u00edmites underflow y overflow?\n\nCu\u00e1ntos n\u00fameros pueden representarse con este sistema?\n\n$$f = \\pm 0.d_1d_2 \\times 10^E ~~~ \\text{con} ~~~ E \\in [-2, 0]$$\n\n$$2 \\times 9 \\times 10 \\times 3 + 1 = 541$$\n\nCu\u00e1l es la distribuci\u00f3n en la recta \"real\"?\n\n\n```python\nd_1_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]\nd_2_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nE_values = [0, -1, -2]\n\nfig = plt.figure(figsize=(10.0, 1.0))\naxes = fig.add_subplot(1, 1, 1)\n\nfor E in E_values:\n for d1 in d_1_values:\n for d2 in d_2_values:\n axes.plot( (d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)\n axes.plot(-(d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)\n \naxes.plot(0.0, 0.0, '+', markersize=20)\naxes.plot([-10.0, 10.0], [0.0, 0.0], 'k')\n\naxes.set_title(\"Distribuci\u00f3n de Valores\")\naxes.set_yticks([])\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"\")\naxes.set_xlim([-0.1, 0.1])\nplt.show()\n```\n\nCu\u00e1les son los l\u00edmites superior (overflow) e inferior (underflow)?\n\n- El menor n\u00famero que puede ser representado (underflow) es: $1.0 \\times 10^{-2} = 0.01$\n\n\n- El mayor n\u00famero que puede ser representado (overflow) es: $9.9 \\times 10^0 = 9.9$\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Sistema Binario\n\nConsidere el sistema en base 2 de 2 d\u00edgitos de precisi\u00f3n\n\n$$F(2,2,-1,1)$$\n\n$$F=\\pm 0.d_1d_2 \\times 2^E \\quad \\text{con} \\quad E \\in [-1, 1]$$\n\n\n#### Numero y distribuci\u00f3n de n\u00fameros\n\n1. Cu\u00e1ntos n\u00fameros pueden representarse con este sistema?\n\n\n2. Cu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n3. Cu\u00e1les son los l\u00edmites underflow y overflow?\n\nCu\u00e1ntos n\u00fameros pueden representarse en este sistema?\n\n\n$$f=\\pm 0.d_1d_2 \\times 2^E ~~~~ \\text{con} ~~~~ E \\in [-1, 1]$$\n\n$$ 2 \\times 1 \\times 2 \\times 3 + 1 = 13$$\n\nCu\u00e1l es la distribuci\u00f3n en la l\u00ednea real?\n\n\n```python\nd_1_values = [1]\nd_2_values = [0, 1]\nE_values = [1, 0, -1]\n\nfig = plt.figure(figsize=(10.0, 1.0))\naxes = fig.add_subplot(1, 1, 1)\n\nfor E in E_values:\n for d1 in d_1_values:\n for d2 in d_2_values:\n axes.plot( (d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)\n axes.plot(-(d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)\n \naxes.plot(0.0, 0.0, 'r+', markersize=20)\naxes.plot([-4.5, 4.5], [0.0, 0.0], 'k')\n\naxes.set_title(\"Distribuci\u00f3n de Valores\")\naxes.set_yticks([])\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"\")\naxes.set_xlim([-3.5, 3.5])\nplt.show()\n```\n\nCu\u00e1les son los l\u00edmites superior (*overflow*) e inferior (*underflow*)?\n\n- El menor n\u00famero que puede ser representado (*underflow*) es: $1.0 \\times 2^{-1} = 0.5$\n\n\n\n\n- El mayor n\u00famero que puede ser representado (*overflow*) es: $1.1 \\times 2^1 = 3$\n\nObserve que estos n\u00fameros son en sistema binario. \n\nUna r\u00e1pida regla de oro:\n\n$$2^3 2^2 2^1 2^0 . 2^{-1} 2^{-2} 2^{-3}$$\n\ncorresponde a\n\n8s, 4s, 2s, 1s . 
mitades, cuartos, octavos, $\\ldots$\n\n[Volver a la Tabla de Contenido](#TOC)\n\n***Ejercicio:*** Cu\u00e1l ser\u00eda la representaci\u00f3n en punto flotante del siguiente conjunto de n\u00fameros:\n\n$$F(2,3,-1,3)$$\n\n- Cu\u00e1ntos n\u00fameros se pueden representar?\n\n\n- Cu\u00e1l ser\u00eda el menor n\u00famero representable?\n\n\n- Cu\u00e1l ser\u00eda el mayor n\u00famero?\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Sistema real - [IEEE 754](https://en.wikipedia.org/wiki/IEEE_754) sistema binario de punto flotante\n\n#### Precisi\u00f3n simple\n\n\n\n- Almacenamiento total es de 32 bits\n\n\n- Exponente de 8 bits $\\Rightarrow E \\in [-126, 127]$\n\n\n- Fracci\u00f3n 23 bits ($p = 24$)\n\n\n```\ns EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF\n0 1 8 9 31\n```\n\nOverflow $= 2^{127} \\approx 3.4 \\times 10^{38}$\n\nUnderflow $= 2^{-126} \\approx 1.2 \\times 10^{-38}$\n\n$\\epsilon_{\\text{machine}} = 2^{-23} \\approx 1.2 \\times 10^{-7}$\n\n\n[Volver a la Tabla de Contenido](#TOC)\n\n#### Precisi\u00f3n doble\n\n- Almacenamiento total asignado es 64 bits\n\n- Exponenete de 11 bits $\\Rightarrow E \\in [-1022, 1024]$\n\n- Fracci\u00f3n de 52 bits ($p = 53$)\n\n```\ns EEEEEEEEEE FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FF\n0 1 11 12 63\n```\nOverflow $= 2^{1024} \\approx 1.8 \\times 10^{308}$\n\nUnderflow $= 2^{-1022} \\approx 2.2 \\times 10^{-308}$\n\n$\\epsilon_{\\text{machine}} = 2^{-52} \\approx 2.2 \\times 10^{-16}$\n\n[Volver a la Tabla de Contenido](#TOC)\n\n\n### Acceso de Python a n\u00fameros de la IEEE\n\nAccede a muchos par\u00e1metros importantes, como el epsilon de la m\u00e1quina\n\n```python\nimport numpy as np\nnp.finfo(float).eps\n```\n\n\n```python\nnp.finfo(float).eps\n\nprint(np.finfo(np.float16))\nprint(np.finfo(np.float32))\nprint(np.finfo(float))\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Calculo \"manual\" del $\\epsilon_{mach}$\n\nLa determinaci\u00f3n \"manual\" del $\\epsilon_{mach}$ es muy simple. Veamos el siguiente algoritmo\n\n\n```python\neps = 1.0\n\nwhile 1.0 + eps > 1.0:\n eps = eps / 2.0\n \nprint(eps)\n```\n\nSi lo comparamos con el valor obtenido en el numeral anterior para `float64`, `2.2204460492503131e-16`, se observa que es del orden de dos veces menor, por qu\u00e9?\n\n[Volver a la Tabla de Contenido](#TOC)\n\n## Por qu\u00e9 deber\u00eda importarnos esto?\n\n

\n \n

\n\n- Aritm\u00e9tica de punto flotante no es conmutativa o asociativa\n\n\n- Errores de punto flotante compuestos, No asuma que la precisi\u00f3n doble es suficiente\n\n\n- Mezclar precisi\u00f3n es muy peligroso\n\n***EL ORDEN DE LOS FACTORES NO ALTERA EL PRODUCTO???***\n\n$$2 \\times 3 = 3 \\times 2 = 6$$\n\n\n$$ 10^{300} \\times 10^{50} \\times 10^{-60} = 10^{300} \\times 10^{-60} \\times 10^{50} ??$$\n\n\n\n```python\na = 10**300\nb = 10**10\nc = 10**-60\n\n```\n\n\n```python\nd1 = a * b * c\nprint(\"d1: \", d1)\n```\n\n\n```python\nd2 = b * c * a\nprint(\"d2: \", d2)\n\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 1: Aritm\u00e9tica simple\n\nAritm\u00e9tica simple $\\delta < \\epsilon_{\\text{machine}}$\n\n $$(1+\\delta) - 1 = 1 - 1 = 0$$\n\n $$1 - 1 + \\delta = \\delta$$\n\n\n```python\ndelta = 1.0000000001 * eps\n\nvalue = (1 + delta) - 1\nprint(value)\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 2: Cancelaci\u00f3n catastr\u00f3fica\n\nMiremos qu\u00e9 sucede cuando sumamos dos n\u00fameros $x$ y $y$ cuando $x+y \\neq 0$. De hecho, podemos estimar estos l\u00edmites haciendo un an\u00e1lisis de error. Aqu\u00ed necesitamos presentar la idea de que cada operaci\u00f3n de punto flotante introduce un error tal que\n\n$$\n \\text{fl}(x ~\\text{op}~ y) = (x ~\\text{op}~ y) (1 + \\delta)\n$$\n\ndonde $\\text{fl}(\\cdot)$ es una funci\u00f3n que devuelve la representaci\u00f3n de punto flotante de la expresi\u00f3n encerrada, $\\text{op}$ es alguna operaci\u00f3n (ex. $+, -, \\times, /$), y $\\delta$ es el error de punto flotante debido a $\\text{op}$.\n\nDe vuelta a nuestro problema en cuesti\u00f3n. El error de coma flotante debido a la suma es\n\n$$\\text{fl}(x + y) = (x + y) (1 + \\delta).$$\n\n\nComparando esto con la soluci\u00f3n verdadera usando un error relativo tenemos\n\n$$\\begin{aligned}\n \\frac{(x + y) - \\text{fl}(x + y)}{x + y} &= \\frac{(x + y) - (x + y) (1 + \\delta)}{x + y} = \\delta.\n\\end{aligned}$$\n\nentonces si $\\delta = \\mathcal{O}(\\epsilon_{\\text{machine}})$ no estaremos muy preocupados.\n\nQue pasa si consideramos un error de punto flotante en la representaci\u00f3n de $x$ y $y$, $x \\neq y$, y decimos que $\\delta_x$ y $\\delta_y$ son la magnitud de los errores en su representaci\u00f3n. Asumiremos que esto constituye el error de punto flotante en lugar de estar asociado con la operaci\u00f3n en s\u00ed.\n\nDado todo esto, tendr\u00edamos\n\n$$\\begin{aligned}\n \\text{fl}(x + y) &= x (1 + \\delta_x) + y (1 + \\delta_y) \\\\\n &= x + y + x \\delta_x + y \\delta_y \\\\\n &= (x + y) \\left(1 + \\frac{x \\delta_x + y \\delta_y}{x + y}\\right)\n\\end{aligned}$$\n\nCalculando nuevamente el error relativo, tendremos\n\n$$\\begin{aligned}\n \\frac{x + y - (x + y) \\left(1 + \\frac{x \\delta_x + y \\delta_y}{x + y}\\right)}{x + y} &= 1 - \\left(1 + \\frac{x \\delta_x + y \\delta_y}{x + y}\\right) \\\\\n &= \\frac{x}{x + y} \\delta_x + \\frac{y}{x + y} \\delta_y \\\\\n &= \\frac{1}{x + y} (x \\delta_x + y \\delta_y)\n\\end{aligned}$$\n\nLo importante aqu\u00ed es que ahora el error depende de los valores de $x$ y $y$, y m\u00e1s importante a\u00fan, su suma. De particular preocupaci\u00f3n es el tama\u00f1o relativo de $x + y$. A medida que se acerca a cero en relaci\u00f3n con las magnitudes de $x$ y $y$, el error podr\u00eda ser arbitrariamente grande. 
Esto se conoce como ***cancelaci\u00f3n catastr\u00f3fica***.\n\n\n```python\ndx = np.array([10**(-n) for n in range(1, 16)])\nx = 1.0 + dx\ny = -np.ones(x.shape)\nerror = np.abs(x + y - dx) / (dx)\n\nfig = plt.figure()\nfig.set_figwidth(fig.get_figwidth() * 2)\n\naxes = fig.add_subplot(1, 2, 1)\naxes.loglog(dx, x + y, 'o-')\naxes.set_xlabel(\"$\\Delta x$\")\naxes.set_ylabel(\"$x + y$\")\naxes.set_title(\"$\\Delta x$ vs. $x+y$\")\n\naxes = fig.add_subplot(1, 2, 2)\naxes.loglog(dx, error, 'o-')\naxes.set_xlabel(\"$\\Delta x$\")\naxes.set_ylabel(\"$|x + y - \\Delta x| / \\Delta x$\")\naxes.set_title(\"Diferencia entre $x$ y $y$ vs. Error relativo\")\n\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 3: Evaluaci\u00f3n de una funci\u00f3n\n\nConsidere la funci\u00f3n\n\n$$\n f(x) = \\frac{1 - \\cos x}{x^2}\n$$\n\ncon $x\\in[-10^{-4}, 10^{-4}]$. \n\nTomando el l\u00edmite cuando $x \\rightarrow 0$ podemos ver qu\u00e9 comportamiento esperar\u00edamos ver al evaluar esta funci\u00f3n:\n\n$$\n \\lim_{x \\rightarrow 0} \\frac{1 - \\cos x}{x^2} = \\lim_{x \\rightarrow 0} \\frac{\\sin x}{2 x} = \\lim_{x \\rightarrow 0} \\frac{\\cos x}{2} = \\frac{1}{2}.\n$$\n\n\u00bfQu\u00e9 hace la representaci\u00f3n de punto flotante?\n\n\n```python\nf = (1-np.cos(0))/0**2\n```\n\n\n```python\nx = np.linspace(-1e-3, 1e-3, 100, dtype=np.float32)\nerror = (0.5 - (1.0 - np.cos(x)) / x**2) / 0.5\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, error, 'o')\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"Error Relativo\")\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 4: Evaluaci\u00f3n de un Polinomio\n\n $$f(x) = x^7 - 7x^6 + 21 x^5 - 35 x^4 + 35x^3-21x^2 + 7x - 1$$\n\n\n```python\nx = np.linspace(0.988, 1.012, 1000, dtype=np.float16)\ny = x**7 - 7.0 * x**6 + 21.0 * x**5 - 35.0 * x**4 + 35.0 * x**3 - 21.0 * x**2 + 7.0 * x - 1.0\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, y, 'r')\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"y\")\naxes.set_ylim((-0.1, 0.1))\naxes.set_xlim((x[0], x[-1]))\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 5: Evaluaci\u00f3n de una funci\u00f3n racional\n\nCalcule $f(x) = x + 1$ por la funci\u00f3n $$F(x) = \\frac{x^2 - 1}{x - 1}$$\n\n\u00bfCu\u00e1l comportamiento esperar\u00edas encontrar?\n\n\n```python\nx = np.linspace(0.5, 1.5, 101, dtype=np.float16)\nf_hat = (x**2 - 1.0) / (x - 1.0)\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, np.abs(f_hat - (x + 1.0)))\naxes.set_xlabel(\"$x$\")\naxes.set_ylabel(\"Error Absoluto\")\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n## Combinaci\u00f3n de error\n\nEn general, nos debemos ocupar de la combinaci\u00f3n de error de truncamiento con el error de punto flotante.\n\n- Error de Truncamiento: errores que surgen de la aproximaci\u00f3n de una funci\u00f3n, truncamiento de una serie.\n\n$$\\sin x \\approx x - \\frac{x^3}{3!} + \\frac{x^5}{5!} + O(x^7)$$\n\n\n- Error de punto flotante: errores derivados de la aproximaci\u00f3n de n\u00fameros reales con n\u00fameros de precisi\u00f3n finita\n\n$$\\pi \\approx 3.14$$\n\no $\\frac{1}{3} \\approx 0.333333333$ en decimal, los resultados forman un n\u00famero finito de registros para representar cada n\u00famero.\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 1:\n\nConsidere la aproximaci\u00f3n de diferencias finitas donde $f(x) = e^x$ y estamos evaluando en $x=1$\n\n$$f'(x) \\approx \\frac{f(x + \\Delta x) - f(x)}{\\Delta x}$$\n\nCompare el error entre 
disminuir $\\Delta x$ y la verdadera solucion $f'(1) = e$\n\n\n```python\ndelta_x = np.linspace(1e-20, 5.0, 100)\ndelta_x = np.array([2.0**(-n) for n in range(1, 60)])\nx = 1.0\nf_hat_1 = (np.exp(x + delta_x) - np.exp(x)) / (delta_x)\nf_hat_2 = (np.exp(x + delta_x) - np.exp(x - delta_x)) / (2.0 * delta_x)\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.loglog(delta_x, np.abs(f_hat_1 - np.exp(1)), 'o-', label=\"Unilateral\")\naxes.loglog(delta_x, np.abs(f_hat_2 - np.exp(1)), 's-', label=\"Centrado\")\naxes.legend(loc=3)\naxes.set_xlabel(\"$\\Delta x$\")\naxes.set_ylabel(\"Error Absoluto\")\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 2:\n\nEval\u00fae $e^x$ con la serie de *Taylor*\n\n$$e^x = \\sum^\\infty_{n=0} \\frac{x^n}{n!}$$\n\npodemos elegir $n< \\infty$ que puede aproximarse $e^x$ en un rango dado $x \\in [a,b]$ tal que el error relativo $E$ satisfaga $E<8 \\cdot \\varepsilon_{\\text{machine}}$?\n\n\u00bfCu\u00e1l podr\u00eda ser una mejor manera de simplemente evaluar el polinomio de Taylor directamente por varios $N$?\n\n\n```python\ndef my_exp(x, N=10):\n value = 0.0\n for n in range(N + 1):\n value += x**n / scipy.special.factorial(n)\n \n return value\n\nx = np.linspace(-2, 2, 100, dtype=np.float32)\nfor N in range(1, 50):\n error = np.abs((np.exp(x) - my_exp(x, N=N)) / np.exp(x))\n if np.all(error < 8.0 * np.finfo(float).eps):\n break\n\nprint(N)\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.plot(x, error)\naxes.set_xlabel(\"x\")\naxes.set_ylabel(\"Error Relativo\")\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 3: Error relativo\n\nDigamos que queremos calcular el error relativo de dos valores $x$ y $y$ usando $x$ como valor de normalizaci\u00f3n\n\n$$\n E = \\frac{x - y}{x}\n$$\ny\n$$\n E = 1 - \\frac{y}{x}\n$$\n\nson equivalentes. 
En precisi\u00f3n finita, \u00bfqu\u00e9 forma pidr\u00eda esperarse que sea m\u00e1s precisa y por qu\u00e9?\n\nEjemplo tomado de [blog](https://nickhigham.wordpress.com/2017/08/14/how-and-how-not-to-compute-a-relative-error/) posteado por Nick Higham*\n\nUsando este modelo, la definici\u00f3n original contiene dos operaciones de punto flotante de manera que\n\n$$\\begin{aligned}\n E_1 = \\text{fl}\\left(\\frac{x - y}{x}\\right) &= \\text{fl}(\\text{fl}(x - y) / x) \\\\\n &= \\left[ \\frac{(x - y) (1 + \\delta_+)}{x} \\right ] (1 + \\delta_/) \\\\\n &= \\frac{x - y}{x} (1 + \\delta_+) (1 + \\delta_/)\n\\end{aligned}$$\n\nPara la otra formulaci\u00f3n tenemos\n\n$$\\begin{aligned}\n E_2 = \\text{fl}\\left( 1 - \\frac{y}{x} \\right ) &= \\text{fl}\\left(1 - \\text{fl}\\left(\\frac{y}{x}\\right) \\right) \\\\\n &= \\left(1 - \\frac{y}{x} (1 + \\delta_/) \\right) (1 + \\delta_-)\n\\end{aligned}$$\n\nSi suponemos que todos las $\\text{op}$s tienen magnitudes de error similares, entonces podemos simplificar las cosas dejando que \n\n$$\n |\\delta_\\ast| \\le \\epsilon.\n$$\n\nPara comparar las dos formulaciones, nuevamente usamos el error relativo entre el error relativo verdadero $e_i$ y nuestras versiones calculadas $E_i$\n\nDefinici\u00f3n original\n\n$$\\begin{aligned}\n \\frac{e - E_1}{e} &= \\frac{\\frac{x - y}{x} - \\frac{x - y}{x} (1 + \\delta_+) (1 + \\delta_/)}{\\frac{x - y}{x}} \\\\\n &\\le 1 - (1 + \\epsilon) (1 + \\epsilon) = 2 \\epsilon + \\epsilon^2\n\\end{aligned}$$\n\nDefinici\u00f3n manipulada:\n\n$$\\begin{aligned}\n \\frac{e - E_2}{e} &= \\frac{e - \\left[1 - \\frac{y}{x}(1 + \\delta_/) \\right] (1 + \\delta_-)}{e} \\\\\n &= \\frac{e - \\left[e - \\frac{y}{x} \\delta_/) \\right] (1 + \\delta_-)}{e} \\\\\n &= \\frac{e - \\left[e + e\\delta_- - \\frac{y}{x} \\delta_/ - \\frac{y}{x} \\delta_/ \\delta_-)) \\right] }{e} \\\\\n &= - \\delta_- + \\frac{1}{e} \\frac{y}{x} \\left(\\delta_/ + \\delta_/ \\delta_- \\right) \\\\\n &= - \\delta_- + \\frac{1 -e}{e} \\left(\\delta_/ + \\delta_/ \\delta_- \\right) \\\\\n &\\le \\epsilon + \\left |\\frac{1 - e}{e}\\right | (\\epsilon + \\epsilon^2)\n\\end{aligned}$$\n\nVemos entonces que nuestro error de punto flotante depender\u00e1 de la magnitud relativa de $e$\n\n\n```python\n# Based on the code by Nick Higham\n# https://gist.github.com/higham/6f2ce1cdde0aae83697bca8577d22a6e\n# Compares relative error formulations using single precision and compared to double precision\n\nN = 501 # Note: Use 501 instead of 500 to avoid the zero value\nd = numpy.finfo(numpy.float32).eps * 1e4\na = 3.0\nx = a * numpy.ones(N, dtype=numpy.float32)\ny = [x[i] + numpy.multiply((i - numpy.divide(N, 2.0, dtype=numpy.float32)), d, dtype=numpy.float32) for i in range(N)]\n\n# Compute errors and \"true\" error\nrelative_error = numpy.empty((2, N), dtype=numpy.float32)\nrelative_error[0, :] = numpy.abs(x - y) / x\nrelative_error[1, :] = numpy.abs(1.0 - y / x)\nexact = numpy.abs( (numpy.float64(x) - numpy.float64(y)) / numpy.float64(x))\n\n# Compute differences between error calculations\nerror = numpy.empty((2, N))\nfor i in range(2):\n error[i, :] = numpy.abs((relative_error[i, :] - exact) / numpy.abs(exact))\n\nfig = plt.figure()\naxes = fig.add_subplot(1, 1, 1)\naxes.semilogy(y, error[0, :], '.', markersize=10, label=\"$|x-y|/|x|$\")\naxes.semilogy(y, error[1, :], '.', markersize=10, label=\"$|1-y/x|$\")\n\naxes.grid(True)\naxes.set_xlabel(\"y\")\naxes.set_ylabel(\"Error Relativo\")\naxes.set_xlim((numpy.min(y), numpy.max(y)))\naxes.set_ylim((5e-9, 
numpy.max(error[1, :])))\naxes.set_title(\"Comparasi\u00f3n Error Relativo\")\naxes.legend()\nplt.show()\n```\n\nAlgunos enlaces de utilidad con respecto al punto flotante IEEE:\n\n- [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)\n\n\n- [IEEE 754 Floating Point Calculator](http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html)\n\n\n- [Numerical Computing with IEEE Floating Point Arithmetic](http://epubs.siam.org/doi/book/10.1137/1.9780898718072)\n\n[Volver a la Tabla de Contenido](#TOC)\n\n## Operaciones de conteo\n\n- ***Error de truncamiento:*** *\u00bfPor qu\u00e9 no usar m\u00e1s t\u00e9rminos en la serie de Taylor?*\n\n\n- ***Error de punto flotante:*** *\u00bfPor qu\u00e9 no utilizar la mayor precisi\u00f3n posible?*\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 1: Multiplicaci\u00f3n matriz - vector\n\nSea $A, B \\in \\mathbb{R}^{N \\times N}$ y $x \\in \\mathbb{R}^N$.\n\n1. Cuenta el n\u00famero aproximado de operaciones que tomar\u00e1 para calcular $Ax$\n\n2. Hacer lo mismo para $AB$\n\n***Producto Matriz-vector:*** Definiendo $[A]_i$ como la $i$-\u00e9sima fila de $A$ y $A_{ij}$ como la $i$,$j$-\u00e9sima entrada entonces\n\n$$\n A x = \\sum^N_{i=1} [A]_i \\cdot x = \\sum^N_{i=1} \\sum^N_{j=1} A_{ij} x_j\n$$\n\nTomando un caso en particular, siendo $N=3$, entonces la operaci\u00f3n de conteo es\n\n$$\n A x = [A]_1 \\cdot v + [A]_2 \\cdot v + [A]_3 \\cdot v = \\begin{bmatrix}\n A_{11} \\times v_1 + A_{12} \\times v_2 + A_{13} \\times v_3 \\\\\n A_{21} \\times v_1 + A_{22} \\times v_2 + A_{23} \\times v_3 \\\\\n A_{31} \\times v_1 + A_{32} \\times v_2 + A_{33} \\times v_3\n \\end{bmatrix}\n$$\n\nEsto son 15 operaciones (6 sumas y 9 multiplicaciones)\n\nTomando otro caso, siendo $N=4$, entonces el conteo de operaciones es:\n\n$$\n A x = [A]_1 \\cdot v + [A]_2 \\cdot v + [A]_3 \\cdot v = \\begin{bmatrix}\n A_{11} \\times v_1 + A_{12} \\times v_2 + A_{13} \\times v_3 + A_{14} \\times v_4 \\\\\n A_{21} \\times v_1 + A_{22} \\times v_2 + A_{23} \\times v_3 + A_{24} \\times v_4 \\\\\n A_{31} \\times v_1 + A_{32} \\times v_2 + A_{33} \\times v_3 + A_{34} \\times v_4 \\\\\n A_{41} \\times v_1 + A_{42} \\times v_2 + A_{43} \\times v_3 + A_{44} \\times v_4 \\\\\n \\end{bmatrix}\n$$\n\nEsto lleva a 28 operaciones (12 sumas y 16 multiplicaciones).\n\nGeneralizando, hay $N^2$ mutiplicaciones y $N(N-1)$ sumas para un total de \n\n$$\n \\text{operaciones} = N (N - 1) + N^2 = \\mathcal{O}(N^2).\n$$\n\n***Producto Matriz-Matriz ($AB$):*** Definiendo $[B]_j$ como la $j$-\u00e9sima columna de $B$ entonces\n\n$$\n (A B)_{ij} = \\sum^N_{i=1} \\sum^N_{j=1} [A]_i \\cdot [B]_j\n$$\n\nEl producto interno de dos vectores es representado por \n\n$$\n a \\cdot b = \\sum^N_{i=1} a_i b_i\n$$\n\nconduce a $\\mathcal{O}(3N)$ operaciones. Como hay $N^2$ entradas en la matriz resultante, tendr\u00edamos $\\mathcal{O}(N^3)$ operaciones\n\nExisten m\u00e9todos para realizar la multiplicaci\u00f3n matriz - matriz m\u00e1s r\u00e1pido. 
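Antes de pasar a esos métodos más rápidos, un esbozo mínimo (el nombre `matmul_conteo` es hipotético, solo con fines ilustrativos) que cuenta explícitamente las operaciones del producto ingenuo y confirma el orden $\mathcal{O}(N^3)$:

```python
import numpy as np

def matmul_conteo(A, B):
    """Producto matriz-matriz ingenuo que además cuenta las operaciones."""
    N = A.shape[0]
    C = np.zeros((N, N))
    mults = sumas = 0
    for i in range(N):
        for j in range(N):
            s = 0.0
            for k in range(N):
                s += A[i, k] * B[k, j]
                mults += 1
                if k > 0:       # solo contamos las sumas entre productos (N - 1 por entrada)
                    sumas += 1
            C[i, j] = s
    return C, mults, sumas

N = 4
A = np.random.rand(N, N)
B = np.random.rand(N, N)
C, mults, sumas = matmul_conteo(A, B)
print(mults, sumas)              # N**3 multiplicaciones y N**2 * (N - 1) sumas
print(np.allclose(C, A @ B))     # True: mismo resultado que el producto de NumPy
```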
En la siguiente figura vemos una colecci\u00f3n de algoritmos a lo largo del tiempo que han podido limitar el n\u00famero de operaciones en ciertas circunstancias\n$$\n \\mathcal{O}(N^\\omega)\n$$\n\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Ejemplo 2: M\u00e9todo de Horner para evaluar polinomios\n\nDado\n\n$$P_N(x) = a_0 + a_1 x + a_2 x^2 + \\ldots + a_N x^N$$ \n\no\n\n\n$$P_N(x) = p_1 x^N + p_2 x^{N-1} + p_3 x^{N-2} + \\ldots + p_{N+1}$$\n\nqueremos encontrar la mejor v\u00eda para evaluar $P_N(x)$\n\nPrimero considere dos v\u00edas para escribir $P_3$\n\n$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$\n\ny usando multiplicaci\u00f3n anidada\n\n$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$\n\nConsidere cu\u00e1ntas operaciones se necesitan para cada...\n\n$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$\n\n$$P_3(x) = \\overbrace{p_1 \\cdot x \\cdot x \\cdot x}^3 + \\overbrace{p_2 \\cdot x \\cdot x}^2 + \\overbrace{p_3 \\cdot x}^1 + p_4$$\n\nSumando todas las operaciones, en general podemos pensar en esto como una pir\u00e1mide\n\n\n\npodemos estimar de esta manera que el algoritmo escrito de esta manera tomar\u00e1 aproximadamente $\\mathcal{O}(N^2/2)$ operaciones para completar.\n\nMirando nuetros otros medios de evaluaci\u00f3n\n\n$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$\n\nAqu\u00ed encontramos que el m\u00e9todo es $\\mathcal{O}(N)$ (el 2 generalmente se ignora en estos casos). Lo importante es que la primera evaluaci\u00f3n es $\\mathcal{O}(N^2)$ y la segunda $\\mathcal{O}(N)$!\n\n[Volver a la Tabla de Contenido](#TOC)\n\n### Algoritmo\n\nComplete la funci\u00f3n e implemente el m\u00e9todo de *Horner*\n\n```python\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]\n \n The value x should be a float.\n \"\"\"\n pass\n```\n\n\n```python\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]\n \n The value x should be a float.\n \"\"\"\n ### ADD CODE HERE\n pass\n```\n\n\n```python\n# Scalar version\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]\n \n The value x should be a float.\n \"\"\"\n \n y = p[0]\n for coefficient in p[1:]:\n y = y * x + coefficient\n \n return y\n\n# Vectorized version\ndef eval_poly(p, x):\n \"\"\"Evaluates polynomial given coefficients p at x\n \n Function to evaluate a polynomial in order N operations. The polynomial is defined as\n \n P(x) = p[0] x**n + p[1] x**(n-1) + ... 
+ p[n-1] x + p[n]\n \n The value x can by a NumPy ndarray.\n \"\"\"\n \n y = numpy.ones(x.shape) * p[0]\n for coefficient in p[1:]:\n y = y * x + coefficient\n \n return y\n\np = [1, -3, 10, 4, 5, 5]\nx = numpy.linspace(-10, 10, 100)\nplt.plot(x, eval_poly(p, x))\nplt.show()\n```\n\n[Volver a la Tabla de Contenido](#TOC)\n\n\n```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open('./nb_style.css', 'r').read()\n return HTML(styles)\ncss_styling()\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "ce03d631c4df06573f64bec7e27cb7cb115ea0f8", "size": 163327, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Cap01_Error.ipynb", "max_stars_repo_name": "carlosalvarezh/Analisis_Numerico", "max_stars_repo_head_hexsha": "4a6aed7cf18832e81e731352ed279bd381cfd7a6", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2020-09-24T17:53:50.000Z", "max_stars_repo_stars_event_max_datetime": "2020-09-24T17:53:50.000Z", "max_issues_repo_path": "Cap01_Error.ipynb", "max_issues_repo_name": "carlosalvarezh/Analisis_Numerico", "max_issues_repo_head_hexsha": "4a6aed7cf18832e81e731352ed279bd381cfd7a6", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Cap01_Error.ipynb", "max_forks_repo_name": "carlosalvarezh/Analisis_Numerico", "max_forks_repo_head_hexsha": "4a6aed7cf18832e81e731352ed279bd381cfd7a6", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2021-01-28T21:22:28.000Z", "max_forks_repo_forks_event_max_datetime": "2021-03-11T17:53:02.000Z", "avg_line_length": 66.8550961932, "max_line_length": 26748, "alphanum_fraction": 0.7665052318, "converted": true, "num_tokens": 15925, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.34158248603300034, "lm_q2_score": 0.2509127812837603, "lm_q1q2_score": 0.08570741160836133}} {"text": "```python\nfrom IPython.display import display, Image\n\n```\n\n# Introduction to Gradient Boosting Methods (GBMs)\n\nWe note that the following content mainly builds upon [Introduction to Boosted Trees](https://xgboost.readthedocs.io/en/stable/tutorials/model.html).\n\nThe technique of gradient boosting which has attracted significantly increasing attention in recent years due to its superior for solving\ntabular data problems. The term **Gradient Boosting** originates from the paper *Greedy Function Approximation: A Gradient Boosting Machine*, by Friedman. This tutorial aims to provide a clear explanation on typical gradient boosting methods, such as gradient boosting decision trees (GBDT), in a self-contained and principled way using the elements of supervised learning.\n\n## 1 Elements of Supervised Learning\n\nFirst, we introduce the notations used throughout this tutorial as follows:\n\nGiven the training data $X=\\{\\mathbf{x}_i\\}_{i=1}^{n}$, and the target $Y=\\{y_i\\}_{i=1}^{n}$, where $\\mathbf{x}_i$ denotes the feature vector with respect to the $i$-th data instance, which can be either continuous or categorical features. $\\mathbf{x}_{ij}$ denotes the $j$-th feature of $\\mathbf{x}_i$.\n\n### 1.1 Model and Parameters\nThe **model** in supervised learning usually refers to the mathematical structure by which the prediction $\\hat{y}_{i}$ is made given the input $\\mathbf{x}_i$. 
A common example is a **linear model**, where the prediction is given as $\\hat{y}_i = \\sum_j \\theta_j \\mathbf{x}_{ij}$, namely a linear combination of weighted input features. The prediction value can have different interpretations, depending on the task, i.e., regression or classification. For example, it can be logistic transformed to get the probability of positive class in logistic regression, and it can also be used as a ranking score when we want to rank the outputs.\n\nThe **parameters** are the undetermined part that we need to learn from data. In linear regression problems, the parameters are the coefficients $\\theta$. Usually we will use $\\theta$ to denote the parameters.\n\n### 1.2 Objective Function: Training Loss + Regularization\nWith judicious choices for $y_i$, we may express a variety of tasks, such as regression, classification, and ranking.\nThe task of **training** the model amounts to finding the best parameters $\\theta$ that best fit the training data $\\mathbf{x}_i$ and labels $y_i$. In order to train the model, we need to define the **objective function**\nto measure how well the model fit the training data.\n\nA salient characteristic of objective functions is that they consist two parts: **training loss** and **regularization term**:\n\n\\begin{equation}\n\\text{obj}(\\theta) = L(\\theta) + \\Omega(\\theta)\n\\end{equation}\n\nwhere $L$ is the training loss function, and $\\Omega$ is\nthe **regularization term**. The training loss measures how *predictive* our model is with respect to the training data. A common choice of $L$ is the *mean squared error*, which is given by\n\n$L(\\theta) = \\sum_i (y_i-\\hat{y}_i)^2$\n\nAnother commonly used loss function is logistic loss, to be used for logistic regression:\n\n$$L(\\theta) = \\sum_i[ y_i\\ln (1+e^{-\\hat{y}_i}) + (1-y_i)\\ln (1+e^{\\hat{y}_i})]$$\n\nThe **regularization term** is what people usually forget to add. The regularization term controls the complexity of the model, which helps us to avoid overfitting.\n\n### 1.3 Why introduce the general principle?\nThe elements introduced above form the basic elements of supervised learning, and they are natural building blocks of machine learning toolkits. For example, you should be able to describe the differences and commonalities between gradient boosted trees and random forests. Understanding the process in a formalized way also helps us to understand the objective that we are learning and the reason behind the heuristics such as pruning and smoothing.\n\n## 2 Gradient Boosting Decision Trees (GBDT)\n\n### 2.1 Tree Ensembles\nNow that we have introduced the elements of supervised learning, let us get started with real trees. The tree ensemble model consists of a set of classification and regression trees (CART). Here's a simple example of a CART that classifies whether someone will like a hypothetical computer game X.\n\nFig. A toy example for CART\n\n\nWe classify the members of a family into different leaves, and assign them the score on the corresponding leaf.\nA CART is a bit different from decision trees, in which the leaf only contains decision values. In CART, a real score\nis associated with each of the leaves, which gives us richer interpretations that go beyond classification.\nThis also allows for a principled, unified approach to optimization, as we will see in a later part of this tutorial.\n\nUsually, a single tree is not strong enough to be used in practice. 
What is actually used is the ensemble model,\nwhich sums the prediction of multiple trees together.\n\nFig. A toy example for tree ensemble, consisting of two CARTs\n\n\nHere is an example of a tree ensemble of two trees. The prediction scores of each individual tree are summed up to get the final score.\nIf you look at the example, an important fact is that the two trees try to **complement** each other.\nMathematically, we can write our model in the form\n\n$$\\hat{y}_i = \\sum_{k=1}^K f_k(x_i), f_k \\in \\mathcal{F}$$\n\nwhere $K$ is the number of trees, $f$ is a function in the functional space $\\mathcal{F}$, and $\\mathcal{F}$ is the set of all possible CARTs. The objective function to be optimized is given by\n\n$$\\text{obj}(\\theta) = \\sum_i^n l(y_i, \\hat{y}_i) + \\sum_{k=1}^K \\Omega(f_k)$$\n\nNow here comes a trick question: what is the **model** used in random forests? Tree ensembles! So random forests and boosted trees are really the same models; the difference arises from how we train them. This means that, if you write a predictive service for tree ensembles, you only need to write one and it should work for both random forests and gradient boosted trees. (See [Treelite](https://treelite.readthedocs.io/en/latest/index.html) for an actual example.) One example of why elements of supervised learning rock.\n\n### 2.2 Tree Boosting\n\nNow that we introduced the model, let us turn to training: How should we learn the trees?\nThe answer is, as is always for all supervised learning models: **define an objective function and optimize it**!\n\nLet the following be the objective function (remember it always needs to contain training loss and regularization):\n\n$$\\text{obj} = \\sum_{i=1}^n l(y_i, \\hat{y}_i^{(t)}) + \\sum_{i=1}^t\\Omega(f_i)$$\n\nIn particular, $t$ denotes the training step, each step also corresponds to a member function $f$, i.e., a tree.\n\n### 2.3 Additive Training\n\nThe first question we want to ask: what are the **parameters** of trees? You can find that what we need to learn are those functions $f_i$, **each containing the structure of the tree and the leaf scores**. Learning tree structure is much harder than traditional optimization problem where you can simply take the gradient. **It is intractable to learn all the trees at once**.\nInstead, we use an **additive strategy: fix what we have learned, and add one new tree at a time**. In other words, the functions $f_1$ ... $f_{t-1}$ would be viewed as learned functions when we learn $f_t$. We write the prediction value at **step** $t$ as $\\hat{y}_i^{(t)}$. Then we have\n\n\\begin{equation}\n\\begin{split}\n\\hat{y}_i^{(0)} &= 0\\\\\n \\hat{y}_i^{(1)} &= f_1(x_i) = \\hat{y}_i^{(0)} + f_1(x_i)\\\\\n \\hat{y}_i^{(2)} &= f_1(x_i) + f_2(x_i)= \\hat{y}_i^{(1)} + f_2(x_i)\\\\\n &\\dots\\\\\n \\hat{y}_i^{(t)} &= \\sum_{k=1}^t f_k(x_i)= \\hat{y}_i^{(t-1)} + f_t(x_i)\n\\end{split}\n\\end{equation}\n\nIt remains to ask: which tree do we want at each step? 
A natural thing is to add the one that optimizes our objective.\n\n\\begin{equation}\n\\begin{split}\n \\text{obj}^{(t)} & = \\sum_{i=1}^n l(y_i, \\hat{y}_i^{(t)}) + \\sum_{i=1}^t\\Omega(f_i) \\\\\n & = \\sum_{i=1}^n l(y_i, \\hat{y}_i^{(t-1)} + f_t(x_i)) + \\Omega(f_t) + \\mathrm{constant}\n\\end{split}\n\\end{equation}\n\nIf we consider using mean squared error (MSE) as our loss function, the objective becomes\n\n\\begin{equation}\n\\begin{split}\n \\text{obj}^{(t)} & = \\sum_{i=1}^n (y_i - (\\hat{y}_i^{(t-1)} + f_t(x_i)))^2 + \\sum_{i=1}^t\\Omega(f_i) \\\\\n & = \\sum_{i=1}^n [2(\\hat{y}_i^{(t-1)} - y_i)f_t(x_i) + f_t(x_i)^2] + \\Omega(f_t) + \\mathrm{constant}\n\\end{split}\n\\end{equation}\n\nwhere the terms without $f_t$ are aggregated as a constant since the functions $f_1$ ... $f_{t-1}$ are learned functions in previous steps.\n\n> In calculus, Taylor's theorem gives an approximation of a k-times\ndifferentiable function around a given point by a polynomial of degree\nk, called the kth-order Taylor polynomial. For a smooth function,\nthe Taylor polynomial is the truncation at the order k of the Taylor\nseries of the function.\n>\n> \\begin{equation}\nf(x)=\\sum_{n=0}^{\\infty}\\frac{f^{(n)}(x_{0})}{n!}(x-x_{0})^{n}\n\\end{equation}\n>\n> The first-order Taylor polynomial is the linear approximation of the\nfunction,\n>\n> $f(x)\\approx f(x_{0})+f^{'}(x_{0})(x-x_{0})$\n>\n>The second-order Taylor polynomial is often referred to as the quadratic\napproximation,\n>\n>$f(x)\\approx f(x_{0})+f^{'}(x_{0})(x-x_{0})+f^{''}(x_{0})\\frac{(x-x_{0})^{2}}{2}$\n\nThe form of MSE is friendly, with a first order term (usually called the residual) and a quadratic term.\nFor other losses of interest (for example, logistic loss), it is not so easy to get such a nice form.\nSo in the general case, we take the **Taylor expansion of the loss function up to the second order**:\n\n\\begin{equation}\n\\begin{split}\n \\text{obj}^{(t)} = \\sum_{i=1}^n [l(y_i, \\hat{y}_i^{(t-1)}) + g_i f_t(x_i) + \\frac{1}{2} h_i f_t^2(x_i)] + \\Omega(f_t) + \\mathrm{constant}\n\\end{split}\n\\end{equation}\n\nwhere the $g_i$ and $h_i$ are defined as\n\n\\begin{equation}\n\\begin{split}\n g_i &= \\partial_{\\hat{y}_i^{(t-1)}} l(y_i, \\hat{y}_i^{(t-1)})\\\\\n h_i &= \\partial_{\\hat{y}_i^{(t-1)}}^2 l(y_i, \\hat{y}_i^{(t-1)})\n\\end{split}\n\\end{equation}\n\n> We note that the $f$ in the description on Taylor's theorem is different from $f_{t}$ in the loss function. Put another way, $g_i$ corresponds to $f^{'}(x_{0})$, $h_i$ corresponds to $f^{''}(x_{0})$, $f_t(x_i)$ corresponds to $x-x_{0}$, $\\sum_{i=1}^n l(y_i, \\hat{y}_i^{(t-1)})$ corresponds to $f(x_{0})$.\n\nAfter we remove all the constants, the specific objective at step $t$ becomes\n\n\\begin{equation}\n\\begin{split}\n \\sum_{i=1}^n [g_i f_t(x_i) + \\frac{1}{2} h_i f_t^2(x_i)] + \\Omega(f_t)\n\\end{split}\n\\end{equation}\n\n**This becomes our optimization goal for the new tree**. One important advantage of this definition is that\nthe value of the objective function only depends on $g_i$ and $h_i$. This is how the popular packages, such as **XGBoost** and **LightGBM**, support custom loss functions.\n**We can optimize every loss function, including logistic regression and pairwise ranking, using exactly the same solver that takes $g_i$ and $h_i$ as input**!\n\n### 2.4 Model Complexity\nWe have introduced the training step, but wait, there is one important thing, the **regularization term**!\nWe need to define the complexity of the tree $\\Omega(f)$. 
In order to do so, let us first refine the definition of the tree $f(x)$ as\n\n\\begin{equation}\n\\begin{split}\n f_t(x) = w_{q(x)}, w \\in R^T, q:R^d\\rightarrow \\{1,2,\\cdots,T\\} .\n\\end{split}\n\\end{equation}\n\nHere $w$ is the vector of scores on leaves, $q$ is a function assigning each data point to the corresponding leaf, and $T$ is the number of leaves.\nIn XGBoost, the complexity is defined as\n\n\\begin{equation}\n\\begin{split}\n \\Omega(f) = \\gamma T + \\frac{1}{2}\\lambda \\sum_{j=1}^T w_j^2\n\\end{split}\n\\end{equation}\n\nOf course, there is more than one way to define the complexity, but this one works well in practice. The regularization is one part most tree packages treat\nless carefully, or simply ignore. This was because the traditional treatment of tree learning only emphasized improving impurity, while the complexity control was left to heuristics.\nBy defining it formally, we can get a better idea of what we are learning and obtain models that perform well in the wild.\n\n### 2.5 The Structure Score\nHere is the magical part of the derivation. After re-formulating the tree model, we can write the objective value with the $t$-th tree as:\n\n\\begin{equation}\n\\begin{split}\n \\text{obj}^{(t)} &\\approx \\sum_{i=1}^n [g_i w_{q(x_i)} + \\frac{1}{2} h_i w_{q(x_i)}^2] + \\gamma T + \\frac{1}{2}\\lambda \\sum_{j=1}^T w_j^2\\\\\n &= \\sum^T_{j=1} [(\\sum_{i\\in I_j} g_i) w_j + \\frac{1}{2} (\\sum_{i\\in I_j} h_i + \\lambda) w_j^2 ] + \\gamma T\n\\end{split}\n\\end{equation}\n\nwhere $I_j = \\{i|q(x_i)=j\\}$ is the set of indices of data points assigned to the $j$-th leaf.\nNotice that in the second line we have changed the index of the summation because all the data points on the same leaf get the same score.\nWe could further compress the expression by defining $G_j = \\sum_{i\\in I_j} g_i$ and $H_j = \\sum_{i\\in I_j} h_i$:\n\n\\begin{equation}\n\\begin{split}\n \\text{obj}^{(t)} = \\sum^T_{j=1} [G_jw_j + \\frac{1}{2} (H_j+\\lambda) w_j^2] +\\gamma T\n\\end{split}\n\\end{equation}\n\nIn this equation, $w_j$ are independent with respect to each other, the form $G_jw_j+\\frac{1}{2}(H_j+\\lambda)w_j^2$ is quadratic and the best $w_j$ for a given structure $q(x)$ and the best objective reduction we can get is:\n\n\\begin{equation}\n\\begin{split}\n w_j^\\ast &= -\\frac{G_j}{H_j+\\lambda}\\\\\n \\text{obj}^\\ast &= -\\frac{1}{2} \\sum_{j=1}^T \\frac{G_j^2}{H_j+\\lambda} + \\gamma T\n\\end{split}\n\\end{equation}\n\nThe last equation measures *how good* a tree structure $q(x)$ is.\n\nFig. 
An illustration of structure score (fitness)\n\n\n\nIf all this sounds a bit complicated, let's take a look at the picture, and see how the scores can be calculated.\nBasically, for a given tree structure, we push the statistics $g_i$ and $h_i$ to the leaves they belong to,\nsum the statistics together, and use the formula to calculate how good the tree is.\nThis score is like the impurity measure in a decision tree, except that it also takes the model complexity into account.\n\n### 2.6 Learn the tree structure\nNow that we have a way to measure how good a tree is, ideally we would enumerate all possible trees and pick the best one.\nIn practice this is intractable, so we will try to optimize one level of the tree at a time.\nSpecifically we try to split a leaf into two leaves, and the score it gains is\n\n\\begin{equation}\n\\begin{split}\n Gain = \\frac{1}{2} \\left[\\frac{G_L^2}{H_L+\\lambda}+\\frac{G_R^2}{H_R+\\lambda}-\\frac{(G_L+G_R)^2}{H_L+H_R+\\lambda}\\right] - \\gamma\n\\end{split}\n\\end{equation}\n\nThis formula can be decomposed as: 1) the score on the new left leaf, 2) the score on the new right leaf, 3) the score on the original leaf, 4) regularization on the additional leaf.\nWe can see an important fact here: if the gain is smaller than $\\gamma$, we would do better not to add that branch. This is exactly the **pruning** techniques in tree based models! By using the principles of supervised learning, we can naturally come up with the reason these techniques work :)\n\n### 2.7 Approximate Split Finding Using Feature Histograms\n\nIt is vital to find the optimal split of a tree node efficiently, as enumerating every possible split in a brute-force manner is impractical. Current works generally adopt a histogram-based algorithm for\nfast and accurate split finding, like the following picture.\n\n\n```python\npath_img_his = \"../img/histogram_split.png\"\nimg_ltr_perqdata = Image(path_img_his, width = 800, height = 100)\ndisplay(img_ltr_perqdata)\n```\n\nSpecifically, the algorithm considers only $k$ values (i.e., number of bins) for each feature as candidate splits rather than all possible splits (e.g., all feature values). The most common approach to propose the candidates is using the **quantile sketch** to approximate the feature distribution. After candidate splits are prepared, we enumerate\nall instances on a tree node and accumulate their gradient statistics into two histograms, first- and second-order gradients, respectively. The histogram consists of $k$ bins, each of which sums the first- or second-order gradients of instances whose $j$-th feature values fall into that bin. In this way, each feature is summarized by two histograms. We find the best split of $j$-th feature upon the histograms that achieve the maximum gain value and the global best split is the best split over all features.\n\nAnother advantage of the histogram-based algorithm is that we can accelerate the algorithm by a histogram subtraction technique. The instances on two children nodes are **non-overlapping and mutual exclusive**, since **an instance will be classified onto either left or right child node when the parent node gets split** (since the bins or histograms are naturally ordered). 
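As a rough illustration (a minimal NumPy sketch with made-up data, not the actual XGBoost/LightGBM implementation), the following accumulates a first-order gradient histogram for one feature on a parent node and recovers the right child's histogram by subtraction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 16                     # instances on the parent node, number of bins
bins = rng.integers(0, k, size=n)   # precomputed bin index of one feature, per instance
grad = rng.normal(size=n)           # first-order gradients g_i

# Parent-node histogram: sum of gradients falling into each bin
hist_parent = np.zeros(k)
np.add.at(hist_parent, bins, grad)

# Pretend a split sends part of the instances to the left child
goes_left = rng.random(n) < 0.4
hist_left = np.zeros(k)
np.add.at(hist_left, bins[goes_left], grad[goes_left])

# Histogram subtraction: the right child's histogram comes for free
hist_right = hist_parent - hist_left

# Check against building the right child's histogram from scratch
hist_right_direct = np.zeros(k)
np.add.at(hist_right_direct, bins[~goes_left], grad[~goes_left])
print(np.allclose(hist_right, hist_right_direct))   # True
```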
Considering the basic operation of histogram is adding gradients, therefore, for a specific feature, the element-wise sum of first or second-order histograms of children nodes equals to that of parent.\n\n- Example case: using local bins\n\n Motivated by this, we can significantly accelerate training by first constructing the histograms of the one child node with fewer instances, and then getting those of the sibling node via histogram subtraction (histograms of parent node are persist in memory). By doing so, we can skip at least one half of the instances. Since histogram construction usually dominates the computation cost, such subtraction technique can speed up the training process considerably.\n\n> Limitation of additive tree learning\n\n Since it is intractable to enumerate all possible tree structures, we add one split at a time. This approach works well most of the time, but there are some edge cases that fail due to this approach. For those edge cases, training results in a degenerate model because we consider only one feature dimension at a time. See [Can Gradient Boosting Learn Simple Arithmetic?]() for an example.\n", "meta": {"hexsha": "9b9dbba0799185b184e053cfea1c1160dec18ed7", "size": 294557, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tutorial/ptranking_gbm.ipynb", "max_stars_repo_name": "ii-research-ranking/ptranking", "max_stars_repo_head_hexsha": "2794e6e086bcd87ce177f40194339e9b825e9f4c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 64, "max_stars_repo_stars_event_min_datetime": "2018-09-19T17:04:04.000Z", "max_stars_repo_stars_event_max_datetime": "2020-04-30T07:54:04.000Z", "max_issues_repo_path": "tutorial/ptranking_gbm.ipynb", "max_issues_repo_name": "ii-research-ranking/ptranking", "max_issues_repo_head_hexsha": "2794e6e086bcd87ce177f40194339e9b825e9f4c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2018-09-27T06:59:02.000Z", "max_issues_repo_issues_event_max_datetime": "2020-01-05T12:35:12.000Z", "max_forks_repo_path": "tutorial/ptranking_gbm.ipynb", "max_forks_repo_name": "ii-research-ranking/ptranking", "max_forks_repo_head_hexsha": "2794e6e086bcd87ce177f40194339e9b825e9f4c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 11, "max_forks_repo_forks_event_min_datetime": "2018-09-28T07:17:51.000Z", "max_forks_repo_forks_event_max_datetime": "2020-03-12T06:28:35.000Z", "avg_line_length": 839.1937321937, "max_line_length": 272432, "alphanum_fraction": 0.9441907678, "converted": true, "num_tokens": 4877, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.4035668537353745, "lm_q2_score": 0.21206879937726764, "lm_q1q2_score": 0.08558393814012225}} {"text": "```python\n# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)\n\n# Toggle cell visibility\n\nfrom IPython.display import HTML\ntag = HTML('''\nToggle cell visibility here.''')\ndisplay(tag)\n\n# Hide the code completely\n\n# from IPython.display import HTML\n# tag = HTML('''''')\n# display(tag)\n```\n\n\n\nToggle cell visibility here.\n\n\n\n```python\n# Examples: \n# Factored form: 1/(x**2*(x**2 + 1))\n# Expanded form: 1/(x**4+x**2)\n\nimport sympy as sym\nfrom IPython.display import Latex, display, Markdown, Javascript, clear_output\nfrom ipywidgets import widgets, Layout # Interactivity module\n```\n\n## Razcep na parcialne ulomke\n\nOb uporabi Laplaceove transformacije za analizo sistema, dobimo Laplaceovo transformacijo izstopnega signala z mno\u017eenjem prenosne funkcije in Laplaceove transformacije vstopnega signala. Rezultat tega mno\u017eenja je pogosto te\u017eak za razumevanje. Z namenom izvedbe inverzne Laplaceove transformacije je najprej potrebno izvesti razcep na parcialne ulomke. Ta interaktivni primer prikazuje na\u010din izvedbe razcepa.\n\n---\n\n### Kako upravljati s tem interaktivnim primerom?\nPreklaplja\u0161 lahko med opcijama *Vnos funkcije* ali *Vnos koeficientov polinoma*.\n\n1. *Vnos funkcije*:\n* Primer: \u010ce \u017eeli\u0161 vnesti funkcijo $\\frac{1}{x^2(x^2 + 1)}$ (faktorizirana oblika) vnesi 1/(x\\*\\*2\\*(x\\*\\*2 + 1)); \u010de \u017eeli\u0161 vnesti isto funkcijo a v raz\u0161irjeni obliki ($\\frac{1}{x^4+x^2}$) type 1/(x\\*\\*4+x\\*\\*2).\n
\n\n2. *Vnos koeficientov polinoma*:\n* Z uporabo drsnikov izberi stopnji \u0161tevca in imenovalca izbrane racionalne funkcije.\n* Vnesi vrednost koeficientov \u0161teva in imenovalca v ustrezna besedilna polja; za potrditev klikni na gumb *Potrdi*.\n\n\n\n\n\n\n```python\n## System selector buttons\nstyle = {'description_width': 'initial'}\ntypeSelect = widgets.ToggleButtons(\n options=[('Vnos funkcije', 0), ('Vnos koeficientov polinoma', 1),],\n description='Izberi: ',style={'button_width':'230px'})\n\nbtnReset=widgets.Button(description=\"Ponastavi\")\n\n# function\ntextbox=widgets.Text(description=('Vnesi funkcijo:'),style=style)\nbtnConfirmFunc=widgets.Button(description=\"Potrdi\") # ex btnConfirm\n\n# poly\nbtnConfirmPoly=widgets.Button(description=\"Potrdi\") # ex btn\n\ndisplay(typeSelect)\n\ndef on_button_clickedReset(ev):\n display(Javascript(\"Jupyter.notebook.execute_cells_below()\"))\n\ndef on_button_clickedFunc(ev):\n eq = sym.sympify(textbox.value)\n\n if eq==sym.factor(eq):\n display(Markdown('Vne\u0161ena funkcija $%s$ je zapisana v faktorizirani obliki. ' %sym.latex(eq) + 'Njena raz\u0161irjena oblika je enaka $%s$.' %sym.latex(sym.expand(eq))))\n \n else:\n display(Markdown('Vne\u0161ena funkcija $%s$ je zapisana v raz\u0161irjeni obliki. ' %sym.latex(eq) + 'Njena faktorizirana oblika je enaka $%s$.' %sym.latex(sym.factor(eq))))\n \n display(Markdown('Rezultat razcepa na parcialne ulomke: $%s$' %sym.latex(sym.apart(eq)) + '.'))\n display(btnReset)\n \ndef transfer_function(num,denom):\n num = np.array(num, dtype=np.float64)\n denom = np.array(denom, dtype=np.float64)\n len_dif = len(denom) - len(num)\n if len_dif<0:\n temp = np.zeros(abs(len_dif))\n denom = np.concatenate((temp, denom))\n transferf = np.vstack((num, denom))\n elif len_dif>0:\n temp = np.zeros(len_dif)\n num = np.concatenate((temp, num))\n transferf = np.vstack((num, denom))\n return transferf\n\ndef f(orderNum, orderDenom):\n global text1, text2\n text1=[None]*(int(orderNum)+1)\n text2=[None]*(int(orderDenom)+1)\n display(Markdown('2. Vnesi koeficiente polinoma v \u0161tevcu.'))\n for i in range(orderNum+1):\n text1[i]=widgets.Text(description=(r'a%i'%(orderNum-i)))\n display(text1[i])\n display(Markdown('3. 
Vnesi koeficiente polinoma v imenovalcu.')) \n for j in range(orderDenom+1):\n text2[j]=widgets.Text(description=(r'b%i'%(orderDenom-j)))\n display(text2[j])\n global orderNum1, orderDenom1\n orderNum1=orderNum\n orderDenom1=orderDenom\n\ndef on_button_clickedPoly(btn):\n clear_output()\n global num,denom\n enacbaNum=\"\"\n enacbaDenom=\"\"\n num=[None]*(int(orderNum1)+1)\n denom=[None]*(int(orderDenom1)+1)\n for i in range(int(orderNum1)+1):\n if text1[i].value=='' or text1[i].value=='Vnesi koeficient':\n text1[i].value='Vnesi koeficient'\n else:\n try:\n num[i]=int(text1[i].value)\n except ValueError:\n if text1[i].value!='' or text1[i].value!='Vnesi koeficient':\n num[i]=sym.var(text1[i].value)\n \n for i in range (len(num)-1,-1,-1):\n if i==0:\n enacbaNum=enacbaNum+str(num[len(num)-i-1])\n elif i==1:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x+\"\n elif i==int(len(num)-1):\n enacbaNum=enacbaNum+str(num[0])+\"*x**\"+str(len(num)-1)\n else:\n enacbaNum=enacbaNum+\"+\"+str(num[len(num)-i-1])+\"*x**\"+str(i) \n \n for j in range(int(orderDenom1)+1):\n if text2[j].value=='' or text2[j].value=='Vnesi koeficient':\n text2[j].value='Vnesi koeficient'\n else:\n try:\n denom[j]=int(text2[j].value)\n except ValueError:\n if text2[j].value!='' or text2[j].value!='Vnesi koeficient':\n denom[j]=sym.var(text2[j].value)\n \n for i in range (len(denom)-1,-1,-1):\n if i==0:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])\n elif i==1:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])+\"*x\"\n elif i==int(len(denom)-1):\n enacbaDenom=enacbaDenom+str(denom[0])+\"*x**\"+str(len(denom)-1)\n else:\n enacbaDenom=enacbaDenom+\"+\"+str(denom[len(denom)-i-1])+\"*x**\"+str(i)\n \n funcSym=sym.sympify('('+enacbaNum+')/('+enacbaDenom+')')\n\n DenomSym=sym.sympify(enacbaDenom)\n NumSym=sym.sympify(enacbaNum)\n DenomSymFact=sym.factor(DenomSym);\n funcFactSym=NumSym/DenomSymFact;\n \n if DenomSym==sym.expand(enacbaDenom):\n if DenomSym==DenomSymFact:\n display(Markdown('Vne\u0161ena funkcija je enaka $%s$. \u0160tevca ni mo\u010d razcepiti.' %sym.latex(funcSym)))\n else:\n display(Markdown('Vne\u0161ena funkcija je enaka $%s$. \u0160tevca ni mo\u010d razcepiti. Isto funkcijo lahko zapi\u0161emo v faktorizirani obliki kot $%s$.' %(sym.latex(funcSym), sym.latex(funcFactSym))))\n\n if sym.apart(funcSym)==funcSym:\n display(Markdown('Razcepa na parcialne ulomke ni mo\u017eno izvesti.'))\n else:\n display(Markdown('Rezultat razcepa na parcialne ulomke je enak $%s$' %sym.latex(sym.apart(funcSym)) + '.'))\n \n btnReset.on_click(on_button_clickedReset)\n display(btnReset)\n \ndef partial_frac(index):\n\n if index==0:\n x = sym.Symbol('x') \n display(widgets.HBox((textbox, btnConfirmFunc)))\n btnConfirmFunc.on_click(on_button_clickedFunc)\n btnReset.on_click(on_button_clickedReset)\n \n elif index==1:\n display(Markdown('1. 
Dolo\u010di stopnji polinomov v \u0161tevcu (orderNum) in imenovalcu (orderDenom).'))\n widgets.interact(f, orderNum=widgets.IntSlider(min=0,max=10,step=1,value=0),\n orderDenom=widgets.IntSlider(min=0,max=10,step=1,value=0));\n btnConfirmPoly.on_click(on_button_clickedPoly)\n display(btnConfirmPoly) \n\ninput_data=widgets.interactive_output(partial_frac,{'index':typeSelect})\ndisplay(input_data)\n```\n\n\n ToggleButtons(description='Izberi: ', options=(('Vnos funkcije', 0), ('Vnos koeficientov polinoma', 1)), style\u2026\n\n\n\n Output()\n\n", "meta": {"hexsha": "713ffddf48bf30475bba7e36078343efc8a48325", "size": 12863, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ICCT_si/examples/02/.ipynb_checkpoints/TD-09-Razcep_na_parcialne_ulomke-checkpoint.ipynb", "max_stars_repo_name": "ICCTerasmus/ICCT", "max_stars_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_stars_repo_licenses": ["BSD-3-Clause"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2021-05-22T18:42:14.000Z", "max_stars_repo_stars_event_max_datetime": "2021-10-03T14:10:22.000Z", "max_issues_repo_path": "ICCT_si/examples/02/TD-09-Razcep_na_parcialne_ulomke.ipynb", "max_issues_repo_name": "ICCTerasmus/ICCT", "max_issues_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_issues_repo_licenses": ["BSD-3-Clause"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ICCT_si/examples/02/TD-09-Razcep_na_parcialne_ulomke.ipynb", "max_forks_repo_name": "ICCTerasmus/ICCT", "max_forks_repo_head_hexsha": "fcd56ab6b5fddc00f72521cc87accfdbec6068f6", "max_forks_repo_licenses": ["BSD-3-Clause"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2021-05-24T11:40:09.000Z", "max_forks_repo_forks_event_max_datetime": "2021-08-29T16:36:18.000Z", "avg_line_length": 39.5784615385, "max_line_length": 430, "alphanum_fraction": 0.5438855632, "converted": true, "num_tokens": 2662, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.3106943959796865, "lm_q2_score": 0.27512972976675254, "lm_q1q2_score": 0.08548126520593556}} {"text": "# Example of DOV search methods for CPT measurements (sonderingen)\n\n[](https://mybinder.org/v2/gh/DOV-Vlaanderen/pydov/master?filepath=docs%2Fnotebooks%2Fsearch_sonderingen.ipynb)\n\n## Use cases explained below\n* Get CPT measurements in a bounding box\n* Get CPT measurements with specific properties\n* Get CPT measurements in a bounding box based on specific properties\n* Select CPT measurements in a municipality and return depth\n* Get CPT measurements based on fields not available in the standard output dataframe\n* Get CPT measurements data, returning fields not available in the standard output dataframe\n* Get CPT measurements in a municipality and where groundwater related data are available\n\n\n```python\n%matplotlib inline\nimport inspect, sys\n```\n\n\n```python\nimport pydov\n```\n\n## Get information about the datatype 'Sondering'\n\n\n```python\nfrom pydov.search.sondering import SonderingSearch\nsondering = SonderingSearch()\n```\n\nA description is provided for the 'Sondering' datatype:\n\n\n```python\nsondering.get_description()\n```\n\n\n\n\n 'In DOV worden de resultaten van sonderingen ter beschikking gesteld. Bij het uitvoeren van de sondering wordt een sondeerpunt met conus bij middel van buizen statisch de grond ingedrukt. 
Continu of met bepaalde diepte-intervallen wordt de weerstand aan de conuspunt, de plaatselijke wrijvingsweerstand en/of de totale indringingsweerstand opgemeten. Eventueel kan aanvullend de waterspanning in de grond rond de conus tijdens de sondering worden opgemeten met een waterspanningsmeter. Het op diepte drukken van de sondeerbuizen gebeurt met een indrukapparaat. De nodige reactie voor het indrukken van de buizen wordt geleverd door een verankering en/of door het gewicht van de sondeerwagen. De totale indrukcapaciteit varieert van 25 kN tot 250 kN, afhankelijk van apparaat en opstellingswijze.'\n\n\n\nThe different fields that are available for objects of the 'Sondering' datatype can be requested with the get_fields() method:\n\n\n```python\nfields = sondering.get_fields()\n\n# print available fields\nfor f in fields.values():\n print(f['name'])\n```\n\n id\n sondeernummer\n pkey_sondering\n weerstandsdiagram\n meetreeks\n x\n y\n start_sondering_mtaw\n gemeente\n diepte_sondering_van\n diepte_sondering_tot\n datum_aanvang\n uitvoerder\n conus\n sondeermethode\n apparaat\n informele_stratigrafie\n formele_stratigrafie\n hydrogeologische_stratigrafie\n opdrachten\n datum_gw_meting\n diepte_gw_m\n lengte\n diepte\n qc\n Qt\n fs\n u\n i\n\n\nYou can get more information of a field by requesting it from the fields dictionary:\n* *name*: name of the field\n* *definition*: definition of this field\n* *cost*: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.\n* *notnull*: whether the field is mandatory or not\n* *type*: datatype of the values of this field\n\n\n```python\nfields['diepte_sondering_tot']\n```\n\n\n\n\n {'name': 'diepte_sondering_tot',\n 'definition': 'Maximumdiepte van de sondering ten opzichte van het aanvangspeil, in meter.',\n 'type': 'float',\n 'notnull': False,\n 'query': True,\n 'cost': 1}\n\n\n\nOptionally, if the values of the field have a specific domain the possible values are listed as *values*:\n\n\n```python\nfields['conus']['values']\n```\n\n\n\n\n {'E': None, 'M1': None, 'M2': None, 'M4': None, 'U': None, 'onbekend': None}\n\n\n\n## Example use cases\n\n### Get CPT measurements in a bounding box\n\nGet data for all the CPT measurements that are geographically located within the bounds of the specified box.\n\nThe coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.\n\n\n```python\nfrom pydov.util.location import Within, Box\n\ndf = sondering.search(location=Within(Box(152999, 206930, 153050, 207935)))\ndf.head()\n```\n\n [000/001] c\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mlengtediepteqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100 kNNaNNaN0.2NaN1.62.06NaNNaNNaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100 kNNaNNaN0.4NaN3.64.26NaNNaNNaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100 kNNaNNaN0.6NaN2.63.46NaNNaNNaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100 kNNaNNaN0.8NaN4.05.66NaNNaNNaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-72/555-SXVIII153008.0206985.015.80.036.01973-03-21Rijksinstituut voor Grondmechanicadiscontinu mechanisch100 kNNaNNaN1.0NaN3.06.53NaNNaNNaN
\n
\n\n\n\nThe dataframe contains one CPT measurement where multiple measurement points. The available data are flattened to represent unique attributes per row of the dataframe.\n\nUsing the *pkey_sondering* field one can request the details of this borehole in a webbrowser:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/1973-016812\n\n\n### Get CPT measurements with specific properties\n\nNext to querying CPT based on their geographic location within a bounding box, we can also search for CPT measurements matching a specific set of properties. For this we can build a query using a combination of the 'Sondering' fields and operators provided by the WFS protocol.\n\nA list of possible operators can be found below:\n\n\n```python\n[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]\n```\n\n\n\n\n ['PropertyIsBetween',\n 'PropertyIsEqualTo',\n 'PropertyIsGreaterThan',\n 'PropertyIsGreaterThanOrEqualTo',\n 'PropertyIsLessThan',\n 'PropertyIsLessThanOrEqualTo',\n 'PropertyIsLike',\n 'PropertyIsNotEqualTo',\n 'PropertyIsNull',\n 'SortProperty']\n\n\n\nIn this example we build a query using the *PropertyIsEqualTo* operator to find all CPT measuremetns that are within the community (gemeente) of 'Herstappe':\n\n\n```python\nfrom owslib.fes import PropertyIsEqualTo\n\nquery = PropertyIsEqualTo(propertyname='gemeente',\n literal='Elsene')\ndf = sondering.search(query=query)\n\ndf.head()\n```\n\n [000/029] ccccccccccccccccccccccccccccc\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mlengtediepteqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25 kNNaN1.971.0NaN3.3NaNNaNNaNNaN
1https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25 kNNaN1.971.1NaN2.9NaNNaNNaNNaN
2https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25 kNNaN1.971.2NaN2.7NaNNaNNaNNaN
3https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25 kNNaN1.971.3NaN2.4NaNNaNNaNNaN
4https://www.dov.vlaanderen.be/data/sondering/1...GEO-75/194-S1150310.0169796.056.30.04.51975-05-20Rijksinstituut voor Grondmechanicadiscontinu mechanisch25 kNNaN1.971.4NaN3.6NaNNaNNaNNaN
\n
\n\n\n\nOnce again we can use the *pkey_sondering* as a permanent link to the information of these CPT measurements:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/1980-024719\n https://www.dov.vlaanderen.be/data/sondering/1980-024720\n https://www.dov.vlaanderen.be/data/sondering/1971-022776\n https://www.dov.vlaanderen.be/data/sondering/1971-023322\n https://www.dov.vlaanderen.be/data/sondering/1976-030128\n https://www.dov.vlaanderen.be/data/sondering/1971-023323\n https://www.dov.vlaanderen.be/data/sondering/1976-013899\n https://www.dov.vlaanderen.be/data/sondering/1992-000338\n https://www.dov.vlaanderen.be/data/sondering/1976-013900\n https://www.dov.vlaanderen.be/data/sondering/1971-023091\n https://www.dov.vlaanderen.be/data/sondering/1975-014064\n https://www.dov.vlaanderen.be/data/sondering/1971-022777\n https://www.dov.vlaanderen.be/data/sondering/1992-000339\n https://www.dov.vlaanderen.be/data/sondering/1971-023321\n https://www.dov.vlaanderen.be/data/sondering/1976-030150\n https://www.dov.vlaanderen.be/data/sondering/1992-000336\n https://www.dov.vlaanderen.be/data/sondering/1974-016927\n https://www.dov.vlaanderen.be/data/sondering/1975-014063\n https://www.dov.vlaanderen.be/data/sondering/1971-023319\n https://www.dov.vlaanderen.be/data/sondering/1976-013898\n https://www.dov.vlaanderen.be/data/sondering/1992-000335\n https://www.dov.vlaanderen.be/data/sondering/1971-022775\n https://www.dov.vlaanderen.be/data/sondering/1976-014638\n https://www.dov.vlaanderen.be/data/sondering/1971-023320\n https://www.dov.vlaanderen.be/data/sondering/1974-016926\n https://www.dov.vlaanderen.be/data/sondering/1992-000337\n https://www.dov.vlaanderen.be/data/sondering/1976-014640\n https://www.dov.vlaanderen.be/data/sondering/1976-030148\n https://www.dov.vlaanderen.be/data/sondering/1976-030140\n\n\n### Get CPT measurements in a bounding box based on specific properties\n\nWe can combine a query on attributes with a query on geographic location to get the CPT measurements within a bounding box that have specific properties.\n\nThe following example requests the CPT measurements with a depth greater than or equal to 2000 meters within the given bounding box.\n\n(Note that the datatype of the *literal* parameter should be a string, regardless of the datatype of this field in the output dataframe.)\n\n\n```python\nfrom owslib.fes import PropertyIsGreaterThanOrEqualTo\n\nquery = PropertyIsGreaterThanOrEqualTo(\n propertyname='diepte_sondering_tot',\n literal='20')\n\ndf = sondering.search(\n location=Within(Box(200000, 211000, 205000, 214000)),\n query=query\n )\n\ndf.head()\n```\n\n [000/021] ccccccccccccccccccccc\n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
pkey_sonderingsondeernummerxystart_sondering_mtawdiepte_sondering_vandiepte_sondering_totdatum_aanvanguitvoerdersondeermethodeapparaatdatum_gw_metingdiepte_gw_mlengtediepteqcQtfsui
0https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200 kN - TRACK-TRUCK2010-08-30 12:50:001.451.301.301.22NaN1.0NaN0.8
1https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200 kN - TRACK-TRUCK2010-08-30 12:50:001.451.351.353.19NaN2.0NaN1.0
2https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200 kN - TRACK-TRUCK2010-08-30 12:50:001.451.401.407.21NaN63.0NaN1.2
3https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200 kN - TRACK-TRUCK2010-08-30 12:50:001.451.451.4512.75NaN138.0NaN1.2
4https://www.dov.vlaanderen.be/data/sondering/2...GEO-10/095-S1200030.2212577.526.581.2520.652010-08-30VO - Afdeling Geotechniekcontinu elektrisch200 kN - TRACK-TRUCK2010-08-30 12:50:001.451.501.5015.26NaN143.0NaN1.4
\n
\n\n\n\nWe can look at one of the CPT measurements in a webbrowser using its *pkey_sondering*:\n\n\n```python\nfor pkey_sondering in set(df.pkey_sondering):\n print(pkey_sondering)\n```\n\n https://www.dov.vlaanderen.be/data/sondering/2010-062407\n https://www.dov.vlaanderen.be/data/sondering/2007-049200\n https://www.dov.vlaanderen.be/data/sondering/2008-077556\n https://www.dov.vlaanderen.be/data/sondering/2015-054999\n https://www.dov.vlaanderen.be/data/sondering/2008-077592\n https://www.dov.vlaanderen.be/data/sondering/2009-000054\n https://www.dov.vlaanderen.be/data/sondering/2008-077565\n https://www.dov.vlaanderen.be/data/sondering/2008-077579\n https://www.dov.vlaanderen.be/data/sondering/2015-055496\n https://www.dov.vlaanderen.be/data/sondering/2009-000052\n https://www.dov.vlaanderen.be/data/sondering/2008-077566\n https://www.dov.vlaanderen.be/data/sondering/2008-077581\n https://www.dov.vlaanderen.be/data/sondering/2008-077564\n https://www.dov.vlaanderen.be/data/sondering/2008-077557\n https://www.dov.vlaanderen.be/data/sondering/2008-077580\n https://www.dov.vlaanderen.be/data/sondering/2015-054995\n https://www.dov.vlaanderen.be/data/sondering/2007-049201\n https://www.dov.vlaanderen.be/data/sondering/2008-077577\n https://www.dov.vlaanderen.be/data/sondering/2008-077545\n https://www.dov.vlaanderen.be/data/sondering/2008-077591\n https://www.dov.vlaanderen.be/data/sondering/2009-000053\n\n\n### Select CPT measurements in a municipality and return depth\n\nWe can limit the columns in the output dataframe by specifying the *return_fields* parameter in our search.\n\nIn this example we query all the CPT measurements in the city of Ghent and return their depth:\n\n\n```python\nquery = PropertyIsEqualTo(propertyname='gemeente',\n literal='Gent')\ndf = sondering.search(query=query,\n return_fields=('diepte_sondering_tot',))\ndf.head()\n```\n\n\n\n\n
| | diepte_sondering_tot |
| --- | --- |
| 0 | 2.7 |
| 1 | 1.4 |
| 2 | 7.6 |
| 3 | 11.5 |
| 4 | 18.6 |
\n\n\n\n\n```python\ndf.describe()\n```\n\n\n\n\n
| | diepte_sondering_tot |
| --- | --- |
| count | 3628.000000 |
| mean | 18.562031 |
| std | 8.481631 |
| min | 1.000000 |
| 25% | 11.400000 |
| 50% | 18.800000 |
| 75% | 24.600000 |
| max | 52.600000 |
\n\n\n\n\n```python\nax = df.boxplot()\nax.set_title('Distribution of CPT measurement depths in Ghent');\nax.set_ylabel(\"depth (m)\")\n```\n\n### Get CPT measurements based on fields not available in the standard output dataframe\n\nTo keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can still use these fields to select CPT measurements, as illustrated below.\n\nFor example, make a selection of the CPT measurements in the municipality of Antwerp that were carried out with cone type ('conus') 'U':\n\n\n```python\nfrom owslib.fes import And\n\nquery = And([PropertyIsEqualTo(propertyname='gemeente',\n literal='Antwerpen'),\n PropertyIsEqualTo(propertyname='conus', \n literal='U')]\n )\ndf = sondering.search(query=query,\n return_fields=('pkey_sondering', 'sondeernummer', 'x', 'y', 'diepte_sondering_tot', 'datum_aanvang'))\ndf.head()\n```\n\n\n\n\n
| | pkey_sondering | sondeernummer | x | y | diepte_sondering_tot | datum_aanvang |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 29.70 | 1993-03-02 |
| 1 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-02/111-S1 | 150347.3 | 214036.4 | 29.95 | 2002-12-17 |
| 2 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-04/123-SKD4-E | 146437.7 | 222317.5 | 4.45 | 2004-07-12 |
| 3 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-04/123-SKD6-E | 146523.9 | 222379.7 | 7.40 | 2004-07-14 |
| 4 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-04/123-SKD5-E | 146493.4 | 222298.8 | 1.65 | 2004-07-16 |
\n\n\n\n### Get CPT data, returning fields not available in the standard output dataframe\n\nAs noted in the previous example, not all available fields are included in the default output dataframe, in order to keep its size limited. However, you can request any available field by including it in the *return_fields* parameter of the search:\n\n\n```python\nquery = And([PropertyIsEqualTo(propertyname='gemeente', literal='Gent'),\n PropertyIsEqualTo(propertyname='conus', literal='U')])\n\ndf = sondering.search(query=query,\n return_fields=('pkey_sondering', 'sondeernummer', 'diepte_sondering_tot',\n 'conus', 'x', 'y'))\n\ndf.head()\n```\n\n\n\n\n
| | pkey_sondering | sondeernummer | diepte_sondering_tot | conus | x | y |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SV | 33.80 | U | 110241.6 | 204692.2 |
| 1 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SI | 15.65 | U | 110062.5 | 205051.4 |
| 2 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SII | 26.50 | U | 110107.0 | 204965.3 |
| 3 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIII | 16.50 | U | 110152.4 | 204876.1 |
| 4 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIV | 16.70 | U | 110197.8 | 204787.0 |
\n\n\n\n\n```python\ndf\n```\n\n\n\n\n
| | pkey_sondering | sondeernummer | diepte_sondering_tot | conus | x | y |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SV | 33.80 | U | 110241.6 | 204692.2 |
| 1 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SI | 15.65 | U | 110062.5 | 205051.4 |
| 2 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SII | 26.50 | U | 110107.0 | 204965.3 |
| 3 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIII | 16.50 | U | 110152.4 | 204876.1 |
| 4 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIV | 16.70 | U | 110197.8 | 204787.0 |
| 5 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SIX | 27.60 | U | 110479.5 | 205240.7 |
| 6 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SVI | 16.80 | U | 110288.5 | 204608.8 |
| 7 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SVII | 26.70 | U | 110334.3 | 204519.8 |
| 8 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SX | 27.50 | U | 110685.0 | 204845.5 |
| 9 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SXI | 25.60 | U | 109941.5 | 204346.9 |
| 10 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/020-SXII | 26.50 | U | 110412.2 | 204398.1 |
| 11 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/096-SIX(CPT9) | 17.60 | U | 105018.0 | 190472.0 |
| 12 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/096-SVII(CPT7) | 26.05 | U | 105046.0 | 190550.0 |
| 13 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-94/096-SVIII(CPT8) | 24.75 | U | 104997.0 | 190521.0 |
| 14 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-97/002-S2 | 29.90 | U | 105376.6 | 189104.3 |
| 15 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-97/002-S3 | 5.90 | U | 105391.3 | 189083.7 |
| 16 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-97/002-S1 | 30.60 | U | 105399.3 | 189065.2 |
| 17 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-01/162-S1 | 18.05 | U | 106104.1 | 188699.4 |
| 18 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-01/162-S2 | 17.30 | U | 106045.3 | 188708.4 |
| 19 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-01/162-S3 | 18.70 | U | 106100.5 | 188743.8 |
| 20 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-01/162-S5 | 17.30 | U | 106130.0 | 188712.0 |
| 21 | https://www.dov.vlaanderen.be/data/sondering/2... | GEO-01/162-S4 | 17.00 | U | 106077.5 | 188686.0 |
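\n\nSince this dataframe contains one row per CPT, together with its coordinates and final depth, a quick scatter plot already gives a feel for where the selected CPTs are located and which ones reach the greatest depth. This is just a sketch using pandas' built-in plotting (it assumes matplotlib is installed, as elsewhere in this notebook):\n\n\n```python\n# colour each CPT location (Lambert 72 coordinates) by its final depth\nax = df.plot(kind='scatter', x='x', y='y',\n c='diepte_sondering_tot', cmap='viridis', colorbar=True)\nax.set_xlabel('x (m)')\nax.set_ylabel('y (m)')\n```\n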
\n\n\n\n## Resistivity plot\n\nThe data behind the resistivity plots produced by the online DOV application (see for example [this report](https://www.dov.vlaanderen.be/zoeken-ocdov/proxy-sondering/sondering/1993-001275/rapport/identifygrafiek?outputformaat=PDF)) is also accessible with the pydov package. Let's query the data for this specific _sondering_:\n\n\n```python\nquery = PropertyIsEqualTo(propertyname='pkey_sondering',\n literal='https://www.dov.vlaanderen.be/data/sondering/1993-001275')\ndf_sond = sondering.search(query=query)\n\ndf_sond.head()\n```\n\n [000/001] c\n\n\n\n\n\n
| | pkey_sondering | sondeernummer | x | y | start_sondering_mtaw | diepte_sondering_van | diepte_sondering_tot | datum_aanvang | uitvoerder | sondeermethode | apparaat | datum_gw_meting | diepte_gw_m | lengte | diepte | qc | Qt | fs | u | i |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 6.25 | 0.0 | 29.7 | 1993-03-02 | MVG - Afdeling Geotechniek | continu elektrisch | 200 kN | NaN | NaN | 0.6 | NaN | 11.60 | NaN | 130.0 | 69.0 | NaN |
| 1 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 6.25 | 0.0 | 29.7 | 1993-03-02 | MVG - Afdeling Geotechniek | continu elektrisch | 200 kN | NaN | NaN | 0.7 | NaN | 6.30 | NaN | 100.0 | 29.0 | NaN |
| 2 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 6.25 | 0.0 | 29.7 | 1993-03-02 | MVG - Afdeling Geotechniek | continu elektrisch | 200 kN | NaN | NaN | 0.8 | NaN | 6.22 | NaN | 120.0 | -4.0 | NaN |
| 3 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 6.25 | 0.0 | 29.7 | 1993-03-02 | MVG - Afdeling Geotechniek | continu elektrisch | 200 kN | NaN | NaN | 0.9 | NaN | 4.92 | NaN | 120.0 | -48.0 | NaN |
| 4 | https://www.dov.vlaanderen.be/data/sondering/1... | GEO-93/023-SII-E | 152740.0 | 215493.0 | 6.25 | 0.0 | 29.7 | 1993-03-02 | MVG - Afdeling Geotechniek | continu elektrisch | 200 kN | NaN | NaN | 1.0 | NaN | 4.40 | NaN | 80.0 | -35.0 | NaN |
\n\n\n\nWe have the depth (`lengte`) available, together with the measured values at each depth for the following variables (DOV definitions, translated from Dutch):\n\n* `qc`: measured value of the cone resistance (conusweerstand), expressed in MPa.\n* `Qt`: measured value of the total resistance (totale weerstand), expressed in kN.\n* `fs`: measured value of the local sleeve friction (plaatselijke kleefweerstand), expressed in kPa.\n* `u`: measured value of the pore water pressure (poriënwaterspanning), expressed in kPa.\n* `i`: measured value of the inclination (inclinatie), expressed in degrees.\n\nTo recreate the resistivity plot, we also need the `resistivity number` (wrijvingsgetal, `rf`), see the [DOV documentation](https://www.dov.vlaanderen.be/page/sonderingen).\n\n\\begin{equation}\nR_f = \\frac{f_s}{q_c}\n\\end{equation}\n\n**Notice:** $f_s$ is provided in kPa and $q_c$ in MPa.\n\nAdding `rf` to the dataframe:\n\n\n```python\n# fs is in kPa and qc in MPa: dividing by 1000 puts them in the same unit,\n# and multiplying by 100 expresses rf as a percentage, hence the single division by 10\ndf_sond[\"rf\"] = df_sond[\"fs\"]/df_sond[\"qc\"]/10 \n```\n\nRecreate the resistivity plot:\n\n\n```python\nimport matplotlib.pyplot as plt\n```\n\n\n```python\ndef make_patch_spines_invisible(ax):\n ax.set_frame_on(True)\n ax.patch.set_visible(False)\n for sp in ax.spines.values():\n sp.set_visible(False)\n```\n\n\n```python\n# Determine whether to plot against lengte or diepte:\n# if diepte is available, the y-axis will be diepte,\n# else the y-axis will be lengte\nif df_sond['diepte'].isnull().values.any():\n # IsNan\n y_type = \"lengte\"\n y_axis = \"Length (m)\"\nelse:\n y_type = \"diepte\"\n y_axis = \"Depth (m)\"\n\n\nfig, ax0 = plt.subplots(figsize=(8, 12))\n\n# Prepare the individual axes\nax_qc = ax0.twiny()\nax_fs = ax0.twiny()\nax_u = ax0.twiny()\nax_rf = ax0.twiny()\n\nfor i, ax in enumerate([ax_qc, ax_fs, ax_u]):\n ax.spines[\"top\"].set_position((\"axes\", 1+0.05*(i+1)))\n make_patch_spines_invisible(ax)\n ax.spines[\"top\"].set_visible(True)\n\n# Plot the data on the axes\ndf_sond.plot(x=\"rf\", y=y_type, label=\"rf\", ax=ax_rf, color='purple', legend=False)\ndf_sond.plot(x=\"qc\", y=y_type, label=\"qc (MPa)\", ax=ax_qc, color='black', legend=False)\ndf_sond.plot(x=\"fs\", y=y_type, label=\"fs (kPa)\", ax=ax_fs, color='green', legend=False)\ndf_sond.plot(x=\"u\", y=y_type, label=\"u (kPa)\", ax=ax_u, color='red', \n legend=False, xlim=(-100, 300)) # ! 300 is hardcoded here for the example\n\n# styling and configuration\nax_rf.xaxis.label.set_color('purple')\nax_fs.xaxis.label.set_color('green')\nax_u.xaxis.label.set_color('red')\n\nax0.axes.set_visible(False)\nax_qc.axes.yaxis.set_visible(False)\nax_fs.axes.yaxis.set_visible(False)\nfor i, ax in enumerate([ax_rf, ax_qc, ax_fs, ax_u, ax0]):\n ax.spines[\"right\"].set_visible(False)\n ax.spines[\"bottom\"].set_visible(False)\n ax.xaxis.label.set_fontsize(15)\n ax.xaxis.set_label_coords(-0.05, 1+0.05*i)\n ax.spines['left'].set_position(('outward', 10))\n ax.spines['left'].set_bounds(0, 30)\nax_rf.set_xlim(0, 46)\n\nax_u.set_title(\"Resistivity plot CPT measurement GEO-93/023-SII-E\", fontsize=12)\n\nax0.invert_yaxis()\nax_rf.invert_xaxis()\nax_u.set_ylabel(y_axis, fontsize=12)\nfig.legend(loc='lower center', ncol=4)\nfig.tight_layout()\n```\n\n## Visualize locations\n\nUsing Folium, we can display the results of our search on a map.\n\n\n```python\n# import the necessary modules (not included in the requirements of pydov!)\nimport folium\nfrom folium.plugins import MarkerCluster\nfrom pyproj import Transformer\n```\n\n\n```python\n# convert the coordinates to lat/lon for folium\ndef convert_latlon(x1, y1):\n transformer = Transformer.from_crs(\"epsg:31370\", \"epsg:4326\", always_xy=True)\n x2,y2 = transformer.transform(x1, y1)\n return x2, y2\n\ndf['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y'])) \n# convert to list\nloclist = df[['lat', 'lon']].values.tolist()\n```\n\n\n```python\n# initialise the Folium map on the centre of the selected locations; adjust the zoom level as needed\nfmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=11)\nmarker_cluster = MarkerCluster().add_to(fmap)\nfor loc in range(0, len(loclist)):\n folium.Marker(loclist[loc], popup=df['sondeernummer'][loc]).add_to(marker_cluster)\nfmap\n\n```\n\n\n\n\n
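\n\nIf you would like to keep the interactive map outside the notebook, the Folium map object can also be written to a standalone HTML file that opens in any web browser. This is an optional extra step and the file name below is just an example:\n\n\n```python\n# save the interactive map as a standalone HTML page\nfmap.save('cpt_locations.html')\n```\n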
\n\n\n\n\n```python\n\n```\n", "meta": {"hexsha": "a7cd117385c0a324cb21da21bc8615a487a4dfe3", "size": 203064, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_stars_repo_name": "GuillaumeVandekerckhove/pydov", "max_stars_repo_head_hexsha": "b51f77bf93d1f9e96dd39edf564d95426da04126", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 32, "max_stars_repo_stars_event_min_datetime": "2017-03-17T16:36:40.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-18T13:10:50.000Z", "max_issues_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_issues_repo_name": "GuillaumeVandekerckhove/pydov", "max_issues_repo_head_hexsha": "b51f77bf93d1f9e96dd39edf564d95426da04126", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 240, "max_issues_repo_issues_event_min_datetime": "2017-01-03T12:32:15.000Z", "max_issues_repo_issues_event_max_datetime": "2022-03-30T11:52:02.000Z", "max_forks_repo_path": "docs/notebooks/search_sonderingen.ipynb", "max_forks_repo_name": "DOV-Vlaanderen/dov-pydownloader", "max_forks_repo_head_hexsha": "126b17f4ad870d9fae5cb2c4b868c564cf7cd1b3", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 17, "max_forks_repo_forks_event_min_datetime": "2017-01-09T21:00:36.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-01T15:04:21.000Z", "avg_line_length": 85.7171802448, "max_line_length": 83116, "alphanum_fraction": 0.7562098649, "converted": true, "num_tokens": 15060, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.39233681595684605, "lm_q2_score": 0.21733751597763015, "lm_q1q2_score": 0.08526950900663358}} {"text": "# Practical Session 1: Data exploration and regression algorithms\n\n*Notebook by Ekaterina Kochmar*\n\n## 0.1. Dataset\n\nThe California House Prices Dataset is originally obtained from the StatLib repository. This dataset contains the collected information on the variables (e.g., median income, number of households, precise geographical position) using all the block groups in California from the 1990 Census. A block group is the smallest geographical unit for which the US Census Bureau publishes sample data, and on average it includes $1425.5$ individuals living in a geographically compact area. The [original data](http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html) contains $20640$ observations on $9$ variables, with the *median house value* being the dependent variable (or *target attribute*). The [modified dataset](https://www.kaggle.com/camnugent/california-housing-prices) from Aurelien Geron, *Hands-On Machine Learning with Scikit-Learn and TensorFlow* contains an additional categorical variable.\n\nFor more information on the original data, please refer to Pace, R. Kelley and Ronald Barry, *Sparse Spatial Autoregressions*, Statistics and Probability Letters, 33 (1997) 291-297. For the information on the modified dataset, please refer to Aurelien Geron, *Hands-On Machine Learning with Scikit-Learn and TensorFlow*, O\u2032Reilly (2017), ISBN: 978-1491962299.\n\n## 0.2. Understanding your task\n\nYou are given a dataset that contains a range of attributes describing the houses in California. Your task is to predict the median price of a house based on its attributes. 
That is, you should train a machine learning (ML) algorithm on the available data, and the next time you get new information on some housing in California, you can use your trained algorithm to predict its price.\n\nThe questions to ask yourself before starting a new ML project:\n- Does the task suggest a supervised or an unsupervised approach?\n- Are you trying to predict a discrete or a continuous value?\n- Which ML algorithm is most suitable?\n\nTry to answer these questions before you start working on this task, using the following hints:\n- *Supervised* approaches rely on the availability of target label annotation in data; examples include regression and classification approaches. *Unsupervised* approaches don't use annotated data; clustering is a good example of such an approach.\n- *Discrete* variables are associated with classes and imply a classification approach. *Continuous* variables are associated with regression.\n\n## 0.3. Machine Learning check-list\n\nIn a typical ML project, you need to:\n\n- Get the dataset\n- Understand the data, the attributes and their correlations\n- Split the data into training and test sets\n- Apply normalisation, scaling and other transformations to the attributes if needed\n- Build a machine learning model\n- Evaluate the model and investigate the errors\n- Tune your model to improve performance\n\nThis practical will show you how to implement the above steps.\n\n## 0.4. Prerequisites\n\nSome of you might have used Jupyter notebooks with the following libraries before in the [CL 1A Scientific Computing course](https://www.cl.cam.ac.uk/teaching/1920/SciComp/materials.html).\n\nTo run the notebooks on your machine, check if `Python 3` is installed. In addition, you will need the following libraries:\n\n- `Pandas`: for easy data uploading and manipulation. Check installation instructions at https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html\n- `Matplotlib`: for visualisations. Check installation instructions at https://matplotlib.org/users/installing.html\n- `NumPy` and `SciPy`: for scientific programming. Check installation instructions at https://www.scipy.org/install.html\n- `Scikit-learn`: for machine learning algorithms. Check installation instructions at http://scikit-learn.org/stable/install.html\n\nAlternatively, a number of these libraries can be installed in one go through the [Anaconda](https://www.anaconda.com/products/individual) distribution. \n\n## 0.5. 
Learning objectives\n\nIn this practical you will learn how to:\n\n- upload and explore a dataset\n- visualise and explore the correlations between the variables\n- structure a machine learning project\n- select the training and test data in a random and in a stratified way\n- handle missing values\n- handle categorical values\n- implement a custom data transformer\n- build a machine learning pipeline\n- implement a regression algorithm\n- evaluate a regression algorithm performance\n\nIn addition, you will learn about such common machine learning concepts as:\n- data scaling and normalisation\n- overfitting and underfitting\n- cross-validation\n- hyperparameter setting with grid search\n\n\n## Step 1: Uploading and inspecting the data\n\nFirst let's upload the dataset using `Pandas` and defining a function pointing to the location of the `housing.csv` file:\n\n\n```python\nimport pandas as pd\nimport os\n\ndef load_data(housing_path):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pd.read_csv(csv_path)\n```\n\nNow, let's run `load_data` using the path where you stored your `housing.csv` file. This function will return a `Pandas` DataFrame object containing all the data. It is always a good idea to take a quick look into the uploaded dataset and make sure you understand the data you are working with. For example, you can check the top rows of the uploaded data and get the general information about the dataset using `Pandas` functionality as follows:\n\n\n```python\nhousing = load_data(\"housing/\")\nhousing.head()\n```\n\nRemember that each row in this table represents a block group (housing district), and each column an attribute. How many attributes does the dataset contain? \n\nAnother way to get the summary information about the number of instances and attributes in the dataset is using `info` function. It also shows each attribute's type and number of non-null values:\n\n\n```python\nhousing.info()\n```\n\nBefore proceeding further, think about the following: \n- How is the data represented? \n- What do the attribute types suggest? \n- Are there any missing values in the dataset? If so, should you do anything about them? \n\nYou must have worked with numerical values before, and the data types like `float64` should look familiar. However, *ocean\\_proximity* attribute has values of a different type. You can inspect the values of a particular attribute in the DataFrame using the following code:\n\n\n```python\nhousing[\"ocean_proximity\"].value_counts()\n```\n\nThe above suggests that the values are categorical: there are $5$ categories that define ocean proximity. ML algorithms prefer to work with numerical data, besides all the other attributes are represented using numbers. Keep that in mind, as this suggests that you will need to cast the categorical data as numerical.\n\nFor now, let's have a general overview of the attributes and distribution of their values (note *ocean_proximity* is excluded from this summary):\n\n\n```python\nhousing.describe()\n```\n\nTo make sure you understand the structure of the dataset, try answering the following questions: \n- How can you interpret the values in the table above?\n- What do the percentiles (e.g., $25\\%$ or $50\\%$) tell you about the distribution of values in this dataset (you can select one particular attribute to explain)? 
\n- How are the missing values handled?\n\nRemember that you can always refer to the [`Pandas`](https://pandas.pydata.org/pandas-docs/stable/reference/index.html) documentation.\n\nAnother good way to get an overview of the value distributions is to plot histograms. This time, you'll need to use `matplotlib`:\n\n\n```python\n%matplotlib inline \n# so that the plot will be displayed in the notebook\nimport matplotlib.pyplot as plt\n\nhousing.hist(bins=50, figsize=(20,15))\nplt.show()\n```\n\nTwo observations about these graphs are worth making:\n- the *median_income*, *housing_median_age* and the *median_house_value* have been capped by the team that collected the data: that is, the values for the *median_income* are scaled by dividing the income by \\$10000 and capped so that they range between $[0.4999, 15.0001]$ with the incomes lower than $0.4999$ and higher than $15.0001$ binned together; similarly, the *housing_median_age* values have been scaled and binned to range between $[1, 52]$ years and the *median_house_value* – to range between $[14999, 500001]$. Data manipulations like these are not unusual in data science but it's good to be aware of how the data is represented;\n- several other attributes are \"tail heavy\" – they have a long distribution tail with many decreasingly rare values to the right of the mean. In practice that means that you might consider using the logarithms of these values rather than the absolute values.\n\n## Step 2: Splitting the data into training and test sets\n\nIn this practical, you are working with a dataset that has been collected and thoroughly labelled in the past. Each instance has a predefined set of values and the correct price label assigned to it. After training the ML model on this dataset you hope to be able to predict the prices for new houses, not contained in this dataset, based on their characteristics such as geographical position, median income, number of rooms and so on. How can you check in advance whether your model is good at making such predictions?\n\nThe answer is: you set part of your dataset, called the *test set*, aside and use it to evaluate the performance of your model only. You train and tune your model using the rest of the dataset – the *training set* – and evaluate the performance of the model trained this way on the test set. Since the model doesn't see the test set during training, this performance should give you a reasonable estimate of how well it would perform on new data. Traditionally, you split the data into $80\\%$ training and $20\\%$ test sets, making sure that the test instances are selected randomly so that you don't end up with some biased selection leading to over-optimistic or over-pessimistic results on your test set.\n\nFor example, you can select your test set as the code below shows. To ensure random selection of the test items, use `np.random.permutation`. However, if you want to ensure that you have a stable test set, and the same test instances get selected from the dataset in a random fashion in different runs of the program, select a random seed, e.g. 
using `np.random.seed(42)`.\n\n\n```python\nimport numpy as np\nnp.random.seed(42)\n\ndef split_train_test(data, test_ratio): \n shuffled_indices = np.random.permutation(len(data))\n test_set_size = int(len(data) * test_ratio)\n test_indices = shuffled_indices[:test_set_size]\n train_indices = shuffled_indices[test_set_size:]\n return data.iloc[train_indices], data.iloc[test_indices]\n\ntrain_set, test_set = split_train_test(housing, 0.2)\nprint(len(train_set), \"training instances +\", len(test_set), \"test instances\")\n```\n\nNote that `scikit-learn` provides a similar functionality to the code above with its `train_test_split` function. Morevoer, you can pass it several datasets with the same number of rows each, and it will split them into training and test sets on the same indices (you might find it useful if you need to pass in a separate DataFrame with labels):\n\n\n```python\nfrom sklearn.model_selection import train_test_split\n\ntrain_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)\nprint(len(train_set), \"training instances +\", len(test_set), \"test instances\")\n```\n\nSo far, you have been selecting your test set using random sampling methods. If your data is representative of the task at hand, this should help ensure that the results of the model testing are informative. However, if your dataset is not very large and the data is skewed on some of the attributes or on the target label (as is often the case with the real-world data), random sampling might introduce a sampling bias. *Stratified sampling* is a technique that helps make sure that the distributions of the instance attributes or labels in the training and the test sets are similar, meaning that the proportion of instances drawn from each *stratum* in the dataset is similar in the training and test data.\n\nSampling bias may express itself both in the distribution of labels and in the distribution of the attribute values. For instance, take a look at the *median_income* attribute value distribution. Suppose for now (and you might find a confirmation to that later in the practical) that this attribute is predictive of the house price, however its values are unevenly distributed across the range of $[0.4999, 15.0001]$ with a very long tail. If random sampling doesn't select enough instances for each *stratum* (each range of incomes) the estimate of the under-represented strata's importance will be biased. \n\nFirst, to limit the number of income categories (strata), particularly at the long tail, let's apply further binning to the income values: e.g., you can divide the income by $1.5$, round up the values using `ceil` to have discrete categories (bins), and merge all the categories greater than $5$ into category $5$. The latter can be achieved using `Pandas`' `where` functionality, keeping the original values when they are smaller than $5$ and converting them to $5$ otherwise:\n\n\n```python\nhousing[\"income_cat\"] = np.ceil(housing[\"median_income\"] / 1.5)\nhousing[\"income_cat\"].where(housing[\"income_cat\"] < 5, 5.0, inplace = True)\n\nhousing[\"income_cat\"].hist()\nplt.show()\n```\n\nNow you have a much smaller number of categories of income, with the instances more evenly distributed, so you can hope to get enough data to represent the tail. Next, let's split the dataset into training and test sets making sure both contain similar proportion of instances from each income category. 
You can do that using `scikit-learn`'s `StratifiedShuffleSplit` specifying the condition on which the data should be stratified (in this case, income category):\n\n\n```python\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\nsplit = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\nfor train_index, test_index in split.split(housing, housing[\"income_cat\"]):\n strat_train_set = housing.loc[train_index]\n strat_test_set = housing.loc[test_index]\n```\n\nLet's compare the distribution of the income values in the randomly selected train and test sets and the stratified train and test sets against the full dataset. To better understand the effect of random sampling versus stratified sampling, let's also estimate the error that would be introduced in the data by such splits:\n\n\n```python\ntrain_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)\n\ndef income_cat_proportions(data):\n return data[\"income_cat\"].value_counts() / len(data)\n\ncompare_props = pd.DataFrame({\n \"Overall\": income_cat_proportions(housing),\n \"Stratified tr\": income_cat_proportions(strat_train_set),\n \"Random tr\": income_cat_proportions(train_set),\n \"Stratified ts\": income_cat_proportions(strat_test_set),\n \"Random ts\": income_cat_proportions(test_set),\n})\ncompare_props[\"Rand. tr %error\"] = 100 * compare_props[\"Random tr\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Rand. ts %error\"] = 100 * compare_props[\"Random ts\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Strat. tr %error\"] = 100 * compare_props[\"Stratified tr\"] / compare_props[\"Overall\"] - 100\ncompare_props[\"Strat. ts %error\"] = 100 * compare_props[\"Stratified ts\"] / compare_props[\"Overall\"] - 100\n\ncompare_props.sort_index()\n```\n\nAs you can see, the distributions in the stratified training and test sets are much closer to the original distribution of categories as well as being much closer to each other. \n\nNote, that to help you split the data, you had to introduce a new category \u2013 *income_cat* \u2013 which contains the same information as the original attribute *median_income* binned in a different way:\n\n\n```python\nstrat_train_set.info()\n```\n\nBefore proceeding further let's remove the *income_cat* attribute so the data is back to its original state. Here is how you can do that:\n\n\n```python\nfor set_ in (strat_train_set, strat_test_set):\n set_.drop(\"income_cat\", axis=1, inplace=True)\n\nstrat_train_set.info()\n```\n\n## Step 3: Exploring the attributes\n\nThe next step is to look more closely into the attributes and gain insights into the data. In particular, you should try to answer the following questions: \n- Which attributes look most informative? \n- How do they correlate with each other and the target label?\n- Is any further normalisation or scaling needed?\n\nThe most informative ways in which you can answer the questions above are by *visualising* the data and by *collecting additional statistics* on the attributes and their relations to each other.\n\nFirst, remember that from now on you're only looking into and gaining insights from the training data. You will use the test data at the evaluation step only, thus ensuring no data leakage between the training and test sets occurs and the results on the test set are a fair evaluation of your algorithm's performance. 
Let's make a copy of the training set that you can experiment with without a danger of overwriting or changing the original data: \n\n\n```python\nhousing = strat_train_set.copy()\n```\n\n### Visualisations\n\nThe first two attributes describe the geographical position of the houses. Let's apply further visualisations and look into the geographical area that is covered: for that, use a scatter plot plotting longitude against latitude coordinates. To make the scatter plot more informative, use `alpha` option to highlight high density points:\n\n\n```python\nhousing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.2)\n```\n\nYou can experiment with `alpha` values to get a better understanding, but it should be obvious from these plots that the areas in the south and along the coast of California are more densely populated (roughly corresponding to the Bay Area, Los Angeles, San Diego, and the Central Valley). \n\nNow, what does geographical position suggest about the housing prices? In the following code, the size of the circles represents the size of the population, and the color represents the price, ranging from blue for low prices to red for high prices (this color scheme is specified by the preselected `cmap` type):\n\n\n```python\nhousing.plot(kind='scatter', x='longitude', y='latitude', alpha=0.5,\n s=housing[\"population\"]/100, label=\"population\", figsize=(10,7), \n c=housing[\"median_house_value\"], cmap=plt.get_cmap(\"jet\"), colorbar=\"True\",\n )\nplt.legend()\n```\n\nThis plot suggests that the housing prices depend on the proximity to the ocean and on the population size. What does this suggest about the informativeness of the attributes for your ML task?\n\n### Correlations\n\nLet's also look into how the attributes correlate with each other:\n\n\n```python\ncorr_matrix = housing.corr()\ncorr_matrix\n```\n\nSince you are trying to predict the house value, the last column in this table is the most informative. Let's make the output clearer:\n\n\n```python\ncorr_matrix[\"median_house_value\"].sort_values(ascending=False)\n```\n\nThis makes it clear that the *median_income* is most strongly positively correlated with the price. There is small positive correlation of the price with *total_rooms* and *housing_median_age*, and small negative correlation with *latitude*, which suggests that the prices go up with the increase in income, number of rooms and house age, and go down when you go north. 
`Pandas`' `scatter_matrix` function allows you to visualise the correlation of attributes with each other (note that since the correlation of an attribute with itself will result in a straight line, `Pandas` uses a histogram instead \u2013 that's what you see along the diagonal):\n\n\n```python\nfrom pandas.plotting import scatter_matrix\n# If the above returns an error, use the following:\n#from pandas.tools.plotting import scatter_matrix\n\nattributes = [\"median_house_value\", \"median_income\", \"total_rooms\", \"housing_median_age\", \"latitude\"]\nscatter_matrix(housing[attributes], figsize=(12,8))\n```\n\nThese plots confirm that the income attribute is the most promising one for predicting house prices, so let's zoom in on this attribute:\n\n\n```python\nhousing.plot(kind=\"scatter\", x=\"median_income\", y=\"median_house_value\", alpha=0.3)\n```\n\nThere are a couple of observations to be made about this plot:\n- The correlation is indeed quite strong: the values follow the upward trend and are not too dispersed otherwise;\n- You can clearly see a line around $500000$ which covers a full range of income values and is due to the fact that the house prices above that value were capped in the original dataset. However, the plot suggests that there are also some other less obvious groups of values, most visible around $350000$ and $450000$, that also cover a range of different income values. Since your ML algorithm will learn to reproduce such data quirks, you might consider looking into these matters further and removing these districts from your dataset (after all, in any real-world application, one can expect a certain amount of noise in the data and clearing the data is one of the steps in any practical application). \n\nThe next thing to notice is that a number of attributes from the original dataset, including *total_rooms*, \t*total_bedrooms* and *population*, do not actually describe each house in particular but rather represent the cumulative counts for *all households* in the block group. At the same time, the task at hand requires you to predict the house price for *each individual household*. In addition, an attribute that measures the proportion of bedrooms against the total number of rooms might be informative. Therefore, the following transformed attributes might be more useful for the prediction:\n\n\n```python\nhousing[\"rooms_per_household\"] = housing[\"total_rooms\"] / housing[\"households\"]\nhousing[\"bedrooms_per_household\"] = housing[\"total_bedrooms\"] / housing[\"households\"]\nhousing[\"bedrooms_per_rooms\"] = housing[\"total_bedrooms\"] / housing[\"total_rooms\"]\nhousing[\"population_per_household\"] = housing[\"population\"] / housing[\"households\"]\n```\n\nA good way to check whether these transformations have any effect on the task is to check attributes correlations again:\n\n\n```python\ncorr_matrix = housing.corr()\ncorr_matrix[\"median_house_value\"].sort_values(ascending=False)\n```\n\nYou can see that the number of rooms per household is more strongly correlated with the house price \u2013 the more rooms the more expensive the house, while the proportion of bedrooms is more strongly correlated with the price than either the number of rooms or bedrooms in the household \u2013 since the correlation is negative, the lower the bedroom-to-room ratio, the more expensive the property.\n\n## Step 4: Data preparation and transformations for machine learning algorithms\n\nNow you are almost ready to implement a regression algorithm for the task at hand. 
However, there are a couple of other things to address, in particular:\n- handle missing values if there are any;\n- convert all attribute values (e.g. categorical, textual) into numerical format;\n- scale / normalise the feature values if necessary.\n\nFirst, let's separate the labels you're trying to predict (*median_house_value*) from the attributes in the dataset that you will use as *features*. The following code will keep a copy of the labels and the rest of the attributes separate (note that `drop()` will create a copy of the data and will not affect `strat_train_set` itself): \n\n\n```python\nhousing = strat_train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n```\n\nYou can add the transformed features that you found useful before with the additional function as shown below. Then you can run `add_features(housing)` to add the features:\n\n\n```python\ndef add_features(data):\n # add the transformed features that you found useful before\n data[\"rooms_per_household\"] = data[\"total_rooms\"] / data[\"households\"]\n data[\"bedrooms_per_household\"] = data[\"total_bedrooms\"] / data[\"households\"]\n data[\"bedrooms_per_rooms\"] = data[\"total_bedrooms\"] / data[\"total_rooms\"]\n data[\"population_per_household\"] = data[\"population\"] / data[\"households\"]\n \n# add_features(housing)\n```\n\nYou will learn shortly about how to implement your own *data transformers* and will be able to re-implement addition of these features as a data transfomer.\n\n### Handling missing values\n\nIn Step 1 above, when you took a quick look into the dataset, you might have noticed that all attributes but one have $20640$ values in the dataset; *total_bedrooms* has $20433$, so some values are missing. ML algorithms cannot deal with missing values, so you'll need to decide how to replace these values. There are three possible solutions:\n\n1. remove the corresponding housing blocks from the dataset (i.e., remove the rows in the dataset)\n2. remove the whole attribute (i.e., remove the column)\n3. set the missing values to some predefined value (e.g., zero value, the mean, the median, the most frequent value of the attribute, etc.)\n\nThe following `Pandas` functionality will help you implement each of these options:\n\n\n```python\n## option 1:\n# housing.dropna(subset=[\"total_bedrooms\"])\n## option 2:\n# housing.drop(\"total_bedrooms\", axis=1)\n# option 3:\nmedian = housing[\"total_bedrooms\"].median()\nhousing[\"total_bedrooms\"].fillna(median, inplace=True)\n```\n\nAlthough, all three options are possible, keep in mind that in the first two cases you are throwing away either some valuable attributes (e.g., as you've seen earlier, *bedrooms_per_rooms* correlates well with the label you're trying to predict) or a number of valuable training examples. Option 3, therefore, looks more promising. Note, that for that you estimate a mean or median based on the training set only (as, in general, your ML algorithm has access to the training data only during the training phase), and then store the mean / median values to replace the missing values in the test set (or any new dataset, to that effect). 
In addition, you might want to calculate and store the mean / median values for all attributes as in a real-life application you can never be sure if any of the attributes will have missing values in the future.\n\nHere is how you can calculate and store median values using `sklearn` (note that you'll need to exclude `ocean_proximity` attribute from this calculation since it has non-numerical values):\n\n\n```python\n# for earlier versions of sklearn use:\n#from sklearn.preprocessing import Imputer \n#imputer = Imputer(strategy=\"median\")\n\nfrom sklearn.impute import SimpleImputer\n\nimputer = SimpleImputer(strategy=\"median\")\nhousing_num = housing.drop(\"ocean_proximity\", axis=1)\nimputer.fit(housing_num)\n```\n\nYou can check the median values stored in the `imputer` as follows:\n\n\n```python\nimputer.statistics_\n```\n\nand also make sure that they exactly coincide with the median values for all numerical attributes:\n\n\n```python\nhousing_num.median().values\n```\n\nFinally, let's replace the missing values in the training data:\n\n\n```python\nX = imputer.transform(housing_num)\nhousing_tr = pd.DataFrame(X, columns=housing_num.columns)\nhousing_tr.info()\n```\n\n### Handling textual and categorical attributes\n\nAnother aspect of the dataset that should be handled is the textual / categorical values of the *ocean_proximity* attribute. ML algorithms prefer working with numerical data, so let's use `sklearn`'s functionality and cast the categorical values as numerical values as follows:\n\n\n```python\nfrom sklearn.preprocessing import LabelEncoder\n\nencoder = LabelEncoder()\nhousing_cat_encoded = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_encoded\n```\n\nThe code above mapped the categories to numerical values. You can check what the numerical values correspond to in the original data using:\n\n\n```python\nencoder.classes_\n```\n\nOne problem with the encoding above is that the ML algorithm will automatically assume that the numerical values that are close to each other encode similar concepts, which for this data is not quite true: for example, value $0$ corresponding to *$<$1H OCEAN* category is actually most similar to values $3$ and $4$ (*NEAR BAY* and *NEAR OCEAN*) and not to value $1$ (*INLAND*).\n\nAn alternative to this encoding is called *one-hot encoding* and it runs as follows: for each category, it creates a separate binary attribute which is set to $1$ (hot) when the category coincides with the attribute, and $0$ (cold) otherwise. So, for instance, *$<$1H OCEAN* will be encoded as a one-hot vector $[1, 0, 0, 0, 0]$ and *NEAR OCEAN* will be encoded as $[0, 0, 0, 0, 1]$. The following `sklearn`'s functionality allows to convert categorical values into one-hot vectors:\n\n\n```python\nfrom sklearn.preprocessing import OneHotEncoder\n\nencoder = OneHotEncoder()\n# fit_transform expects a 2D array, but housing_cat_encoded is a 1D array.\n# Reshape it using NumPy's reshape functionality where -1 simply means \"unspecified\" dimension \nhousing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1,1))\nhousing_cat_1hot\n```\n\nNote that the data format above says that the output is a sparse matrix. This means that the data structure only stores the location of the non-zero elements, rather than the full set of vectors which are mostly full of zeros. You can check the [documentation on sparse matrices](https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html) if you'd like to learn more. 
If you'd like to see how the encoding looks like you can also convert it back into a dense NumPy array using:\n\n\n```python\nhousing_cat_1hot.toarray()\n```\n\nThe steps above, including casting text categories to numerical categories and then converting them into 1-hot vectors, can be performed using `sklearn`'s `LabelBinarizer`:\n\n\n```python\nfrom sklearn.preprocessing import LabelBinarizer\n\nencoder = LabelBinarizer()\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\nThe above produces dense array as an output, so if you'd like to have a sparse matrix instead you can specify it in the `LabelBinarizer` constructor:\n\n\n```python\nencoder = LabelBinarizer(sparse_output=True)\nhousing_cat_1hot = encoder.fit_transform(housing[\"ocean_proximity\"])\nhousing_cat_1hot\n```\n\n### Data transformers\n\nA useful functionality of `sklearn` is [data transformers](http://scikit-learn.org/stable/data_transforms.html): you will see them used in preprocessing very often. For example, you have just used one to impute the missing values. In addition, you can implement your own custom data transformers. In general, a transformer class needs to implement three methods:\n- a constructor method;\n- a `fit` method that learns parameters (e.g. mean and standard deviation for a normalization transformer) or returns `self`; and\n- a `transform` method that applies the learned transformation to the new data.\n\nWhenever you see `fit_transform` method, it means that the method uses an optimised combination of `fit` and `transform`. Here is how you can implement a data transformer that will convert categorical values into 1-hot vectors:\n\n\n```python\nfrom sklearn.base import TransformerMixin # TransformerMixin allows you to use fit_transform method\n\nclass CustomLabelBinarizer(TransformerMixin):\n def __init__(self, *args, **kwargs):\n self.encoder = LabelBinarizer(*args, **kwargs)\n def fit(self, X, y=0):\n self.encoder.fit(X)\n return self\n def transform(self, X, y=0):\n return self.encoder.transform(X)\n```\n\nSimilarly, here is how you can wrap up adding new transformed features like bedroom-to-room ratio with a data transformer:\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin \n# BaseEstimator allows you to drop *args and **kwargs from you constructor\n# and, in addition, allows you to use methods set_params() and get_params()\n\nrooms_id, bedrooms_id, population_id, household_id = 3, 4, 5, 6\n\nclass CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__(self, add_bedrooms_per_rooms = True): # note no *args and **kwargs used this time\n self.add_bedrooms_per_rooms = add_bedrooms_per_rooms\n def fit(self, X, y=None):\n return self\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_id] / X[:, household_id]\n bedrooms_per_household = X[:, bedrooms_id] / X[:, household_id]\n population_per_household = X[:, population_id] / X[:, household_id]\n if self.add_bedrooms_per_rooms:\n bedrooms_per_rooms = X[:, bedrooms_id] / X[:, rooms_id]\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household, bedrooms_per_rooms]\n else:\n return np.c_[X, rooms_per_household, bedrooms_per_household, \n population_per_household]\n \nattr_adder = CombinedAttributesAdder()\nhousing_extra_attribs = attr_adder.transform(housing.values)\nhousing_extra_attribs\n```\n\nIf you'd like to explore the new attributes, you can convert the `housing_extra_attribs` into a `Pandas` DataFrame and apply 
the same functionality as before:\n\n\n```python\nhousing_extra_attribs = pd.DataFrame(housing_extra_attribs, columns=list(housing.columns)+\n [\"rooms_per_household\", \"bedrooms_per_household\", \n \"population_per_household\", \"bedrooms_per_rooms\"])\nhousing_extra_attribs.head()\n```\n\n\n```python\nhousing_extra_attribs.info()\n```\n\n### Feature scaling\n\nFinally, ML algorithms do not typically perform well when the feature values cover significantly different ranges. For example, in the dataset at hand, the income ranges from $0.4999$ to $15.0001$, while population ranges from $3$ to $35682$. Taken at the same scale, these values are not directly comparable. The data transformation that should be applied to these values is called *feature scaling*.\n\nOne of the most common ways to scale the data is to apply *min-max scaling* (also often referred to as *normalisation*). Min-max scaling puts all values on the scale of $[0, 1]$ making the ranges directly comparable. For that, you need to subtract the min from the actual value and divide by the difference between the maximum and minimum values, i.e.:\n\n\\begin{equation}\nf_{scaled} = \\frac{f - F_{min}}{F_{max} - F_{min}}\n\\end{equation}\n\nwhere $f \\in F$ is the actual feature value of a feature type $F$, and $F_{min}$ and $F_{max}$ are the minimum and maximum values for the feature of type $F$.\n\nAnother common approach is *standardisation*, which subtracts the mean value (so the standardised values have a zero mean) and divides by the standard deviation (so the standardised values have unit variance). Standardisation does not impose a specific range on the values and is more robust to the outliers: i.e., a noisy input or an incorrect income value of $100$ (when the rest of the values lie within the range of $[0.4999, 15.0001]$) will introduce a significant skew in the data after min-max scaling. At the same time, standardisation does not bind values to the same range of $[0, 1]$, which might be problematic for some algorithms.\n\n`Scikit-learn` has implementations for `MinMaxScaler` and `StandardScaler`, as well as [other scaling approaches](http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler), i.e.:\n\n\n```python\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\n\nscaler = StandardScaler()\nhousing_tr_scaled = scaler.fit_transform(housing_tr)\n```\n\n### Putting all the data transformations together\n\nAnother useful functionality of `sklearn` is pipelines. These allow you to stack several separate transformations together. For example, you can apply the numerical transformations such as missing values handling and data scaling as follows:\n\n\n```python\nfrom sklearn.pipeline import Pipeline\n\nnum_pipeline = Pipeline([\n #('imputer', Imputer(strategy=\"median\")),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('std_scaler', StandardScaler()),\n])\n\nhousing_num_tr = num_pipeline.fit_transform(housing_num)\nhousing_num_tr.shape\n```\n\nPipelines are useful because they help combine several steps, so that the output of one data transformer (e.g., `Imputer`) is passed on as an input to the next one (e.g., `StandardScaler`) and so you don't need to worry about the intermediate steps. Besides, it makes the code look more concise and readable. 
However:\n- the code above doesn't handle categorical values;\n- we started with `Pandas` DataFrames because they are useful for data uploading and inspection, but the `Pipeline` expects `NumPy` arrays as input, and at the moment, `sklearn`'s `Pipeline` cannot handle `Pandas` DataFrames.\n\nIn fact, there is a way around the two issues above. Let's implement another custom data transformer that will allow you to select specific attributes from a `Pandas` DataFrame:\n\n\n```python\nfrom sklearn.base import BaseEstimator, TransformerMixin\n\n# Create a class to select numerical or categorical columns \n# since Scikit-Learn doesn't handle DataFrames yet\nclass DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n def fit(self, X, y=None):\n return self\n def transform(self, X):\n return X[self.attribute_names].values\n```\n\nThe transformer above allows you to select a predefined set of attributes from a DataFrame, dropping the rest and converting the selected ones into a `NumPy` array. This is quite useful because now you can select the numerical attributes and apply one set of transformations to them, and then select categorical attributes and apply another set of transformation to them, i.e.:\n\n\n```python\nnum_attribs = list(housing_num)\ncat_attribs = [\"ocean_proximity\"]\n\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n #('imputer', Imputer(strategy=\"median\")),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler()),\n ])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n ('label_binarizer', CustomLabelBinarizer()),\n ])\n```\n\nFinally, to merge the output of the two separate data transformers back together, you can use `sklearn`'s `FeatureUnion` functionality: it runs the two pipelines' `fit` methods and the two `transform` methods in parallel, and then concatenates the output. I.e.:\n\n\n```python\nfrom sklearn.pipeline import FeatureUnion\n\nfull_pipeline = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline),\n (\"cat_pipeline\", cat_pipeline),\n ])\n\n\nhousing = strat_train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = strat_train_set[\"median_house_value\"].copy()\n\nhousing_prepared = full_pipeline.fit_transform(housing)\nprint(housing_prepared.shape)\nhousing_prepared\n```\n\n## Step 5: Implementation, evaluation and fine-tuning of a regression model\n\nNow that you've explored and prepared the data, you can implement a regression model to predict the house prices on the test set. \n\n### Training and evaluating the model\n\nLet's train a [Linear Regression](http://scikit-learn.org/stable/modules/linear_model.html) model first. During training, a Linear Regression model tries to find the optimal set of weights $w=(w_{1}, w_{2}, ..., w_{n})$ for the features (attributes) $X=(x_{1}, x_{2}, ..., x_{n})$ by minimising the residual sum of squares between the responses predicted by such linear approximation $Xw$ and the observed responses $y$ in the dataset, i.e. 
trying to solve:\n\n\\begin{equation}\nmin_{w} ||Xw - y||_{2}^{2}\n\\end{equation}\n\n\n```python\nfrom sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)\n```\n\nFirst, let's try the model on some instances from the training set itself:\n\n\n```python\nsome_data = housing.iloc[:5]\nsome_labels = housing_labels.iloc[:5]\n# note the use of transform, as you'd like to apply already learned (fitted) transformations to the data\nsome_data_prepared = full_pipeline.transform(some_data)\n\nprint(\"Predictions:\", list(lin_reg.predict(some_data_prepared)))\nprint(\"Actual labels:\", list(some_labels))\n```\n\nThe above shows that the model is able to predict some price values, however they don't seem to be very accurate. How can you measure the performance of your model in a more comprehensive way?\n\nTypically, the output of the regression model is measured in terms of the error in prediction. There are two error measures that are commonly used. *Root Mean Square Error (RMSE)* measures the average deviation of the model's prediction from the actual label, but note that it gives a higher weight for large errors:\n\n\\begin{equation}\nRMSE(X, h) = \\sqrt{\\frac{1}{m} \\sum_{i=1}^{m} (h(x^{(i)}) - y^{(i)})^{2}}\n\\end{equation}\n\nwhere $m$ is the number of instances, $h$ is the model (hypothesis), $X$ is the matrix containing all feature values, $x^{(i)}$ is the feature vector describing instance $i$, and $y^{(i)}$ is the actual label for instance $i$.\n\nBecause *RMSE* is highly influenced by the outliers (i.e., large errors), in some situations *Mean Absolute Error (MAE)* is preferred. You may note that its estimation is somewhat similar to the estimation of *RMSE*:\n\n\\begin{equation}\nMAE(X, h) = \\frac{1}{m} \\sum_{i=1}^{m} |h(x^{(i)}) - y^{(i)}|\n\\end{equation}\n\nLet's measure the performance of the linear regression model using these error estimations:\n\n\n```python\nfrom sklearn.metrics import mean_squared_error\n\nhousing_predictions = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse\n```\n\nGiven that the majority of the districts' housing values lie somewhere between $[\\$100000, \\$300000]$ an estimation error of over \\\\$68000 is very high. This shows that the regression model *underfits* the training data: it doesn't capture the patterns in the training data well enough because it lacks the descriptive power either due to the features not providing enough information to make a good prediction or due to the model itself being not complex enough. The ways to fix this include:\n- using more features and/or more informative features, for example applying log to some of the existing features to address the long tail distributions;\n- using more complex models;\n- reducing the constraints on the model.\n\nThe model that you used above is not constrained (or, *regularised* \u2013 more on this in later lectures), so you should try using more powerful models or work on the feature set.\n\nFor example, *polynomial regression* models the relationship between the $X$ and $y$ as an $n$-th degree polynomial. Polynomial regression extends simple linear regression by constructing polynomial features from the existing ones. For simplicity, assume that your data has only $2$ features rather than $8$, i.e. $X=[x_{1}, x_{2}]$. 
The linear regression model above tries to learn the coefficients (weights) $w=[w_{0}, w_{1}, w_{3}]$ for the linear prediction (a plane) $\\hat{y} = w_{0} + w_{1}x_{1} + w_{2}x_{2}$ that minimises the residual sum of squares between the prediction and actual label as you've seen above. \n\nIf you want to fit a paraboloid to the data instead of a plane, you can combine the features in second-order polynomials, so that the model looks like this: \n\n\\begin{equation}\n\\hat{y} = w_{0} + w_{1}x_{1} + w_{2}x_{2} + w_{3}x_{1}x_{2} + w_{4}x_{1}^2 + w_{5}x_{2}^2\n\\end{equation}\n\nThis time, the model tries to learn an optimal set of weights $w=[w_{0}, ..., w_{5}]$ (note that $w_{0}$ is called an intercept).\n\nNote that polynomial regression still employs a linear model. For instance, you can define a new variable $z = [x_1, x_2, x_1x_2, x_1^2, x_2^2]$ and rewrite the polynomial above as:\n\n\\begin{equation}\n\\hat{y} = w_{0} + w_{1}z_{0} + w_{2}z_{1} + w_{3}z_{2} + w_{4}z_{3} + w_{5}z_{4}\n\\end{equation}\n\nFor that reason, the polynomial regression in `sklearn` is addressed at the `preprocessing` steps \u2013 that is, first the second-order polynomials are estimated on the features, and then the same `LinearRegression` model as above is applied. For instance, use a second- and third-order polynomials and compare the results (feel free to use higher order polynomials, though keep in mind that as the complexity of the model increases, so does the processing time, the number of weights to be learned, and the chance that the model *overfits* to the training data). For more information, refer to `sklearn` [documentation](http://scikit-learn.org/stable/auto_examples/linear_model/plot_polynomial_interpolation.html):\n\n\n```python\nfrom sklearn.preprocessing import PolynomialFeatures\n\nmodel = Pipeline([('poly', PolynomialFeatures(degree=3)),\n ('linear', LinearRegression())])\n\nmodel = model.fit(housing_prepared, housing_labels)\nhousing_predictions = model.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse\n```\n\nHow does the performance of the polynomial regression model compare to the first-order linear regression? You see that the performance improves as the complexity of the feature space increases. However, note that the more complex the model becomes, the more accurately it learns to replicate the training data, and the less likely it will generalise to the new pattern, i.e. in the test data. This phenomenon of learning to replicate the patterns from the training data too closely is called *overfitting*, and it is an opposite of *underfitting* when the model does not learn enough about the pattern from the training data due to its simplicity.\n\nJust to give you a flavor of the problem, here is an example of a complex model from the `sklearn` suite called `DecisionTreeRegressor` (Decision Trees are outside of the scope of this course, so don't worry if this looks unfamiliar to you. `sklearn` has implementation for a wide range of ML algorithms, so do check the [documentation](http://scikit-learn.org/stable/auto_examples/tree/plot_tree_regression.html) if you want to learn more). Note that the `DecisionTreeRegressor` learns to predict the values in the training data perfectly well (resulting in the error of $0$!) 
which usually means that it won't work well on the new data \u2013 e.g., check this later on the test data:\n\n\n```python\nfrom sklearn.tree import DecisionTreeRegressor\n\ntree_reg = DecisionTreeRegressor()\ntree_reg = tree_reg.fit(housing_prepared, housing_labels)\nhousing_predictions = tree_reg.predict(housing_prepared)\ntree_mse = mean_squared_error(housing_labels, housing_predictions)\ntree_mse = np.sqrt(tree_mse)\ntree_mse\n```\n\n### Learning to better evaluate you model using cross-validation\n\nObviously, one of the problems with overfitting above is caused by the fact that you're training and testing on the same (training) set (remember, that you should do all model tuning and optimisation on the training data, and only then apply the best model to the test data). So how can you measure the level of overfitting *before* you apply this model to the test data?\n\nThere are two possible solutions. You can either reapply `train_test_split` function from Step 2 to set aside part of the training set as a *development* (or *validation*) set, and then train the model on the smaller training set and tune it on the development set, before applying your best model to the test set. Or you can use *cross-validation*.\n\nWith *K-fold cross-validation* strategy, the training data gets randomly split into $k$ distinct subsets (*splits*). Then the model gets trained $10$ times, in each run being tested on a different fold and trained on the other $9$ folds. That way, the algorithm is evaluated on each data point in the training set, but during training is not exposed to the data points that it gets tested on later. The result is an array of $10$ evaluation scores, which can be averaged for better understanding and model comparison, i.e.:\n\n\n```python\nfrom sklearn.model_selection import cross_val_score\n \ndef analyse_cv(model): \n scores = cross_val_score(model, housing_prepared, housing_labels,\n scoring = \"neg_mean_squared_error\", cv=10)\n\n # cross-validation expects utility function (greater is better)\n # rather than cost function (lower is better), so the scores returned\n # are negative as they are the opposite of MSE\n sqrt_scores = np.sqrt(-scores) \n print(\"Scores:\", sqrt_scores)\n print(\"Mean:\", sqrt_scores.mean())\n print(\"Standard deviation:\", sqrt_scores.std())\n \nanalyse_cv(tree_reg)\n```\n\nThis shows that the `DecisionTreeRegression` model does not actually perform well when tested on a set different from the one it was trained on. What about the other models? E.g.:\n\n\n```python\nanalyse_cv(lin_reg)\n```\n\nLet's try one more model \u2013 [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) that implements many Decision Trees (similar to above) on random subsets of the features. This type of models are called *ensemble learning* models and they are very powerful because they benefit from combining the decisions of multiple algorithms:\n\n\n```python\nfrom sklearn.ensemble import RandomForestRegressor\n\nforest_reg = RandomForestRegressor()\nanalyse_cv(forest_reg)\n```\n\n### Fine-tuning the model\n\nSome learning algorithms have *hyperparameters* \u2013 the parameters of the algorithms that should be set up prior to training and don't get changed during training. Such hyperparameters are usually specified for the `sklearn` algorithms in brackets, so you can always check the list of parameters specified in the documentation. 
For example, whether the [`LinearRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html) model should calculate the intercept or not should be set prior to training and does not depend on the training itself, and so does the number of helper algorithms (decision trees) that should be combined in a [`RandomForestRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) for the final prediction. `RandomForestRegressor` has $16$ parameters, so if you want to find the *best* setting of the hyperparametes for `RandomForestRegressor`, it will take you a long time to try out all possible combinations.\n\nThe code below shows you how the best hyperparameter setting can be automatically found for an `sklearn` ML algorithm using a `GridSearch` functionality. Let's use the example of `RandomForestRegressor` and focus on specific hyperparameters: the number of helper algorithms (decision trees in the forest, or `n_estimators`) and the number of features the regressor considers in order to find the most informative subsets of instances to each of the helper algorithms (`max_features`):\n\n\n```python\nfrom sklearn.model_selection import GridSearchCV\n\n# specify the range of hyperparameter values for the grid search to try out \nparam_grid = {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]}\n\nforest_reg = RandomForestRegressor()\ngrid_search = GridSearchCV(forest_reg, param_grid, cv=5,\n scoring=\"neg_mean_squared_error\")\ngrid_search.fit(housing_prepared, housing_labels)\n\ngrid_search.best_params_\n```\n\nYou can also monitor the intermediate results as shown below. Note also that if the best results are achieved with the maximum value for each of the parameters specified for exploration, you might want to keep experimenting with even higher values to see if the results improve any further:\n\n\n```python\ncv_results = grid_search.cv_results_\nfor mean_score, params in zip(cv_results[\"mean_test_score\"], cv_results[\"params\"]):\n print(np.sqrt(-mean_score), params)\n```\n\nOne more insight you can gain from the best estimator is the importance of each feature (expressed in the weight the best estimator learned to assign to each of the features). 
Here is how you can do that:\n\n\n```python\nfeature_importances = grid_search.best_estimator_.feature_importances_\nfeature_importances\n```\n\nIf you also want to display the feature names, you can do that as follows:\n\n\n```python\nextra_attribs = ['rooms_per_household', 'bedrooms_per_household', 'population_per_household', 'bedrooms_per_rooms']\ncat_one_hot_attribs = ['<1H OCEAN', 'INLAND', 'ISLAND', 'NEAR BAY', 'NEAR OCEAN']\nattributes = num_attribs + extra_attribs + cat_one_hot_attribs\nsorted(zip(feature_importances, attributes), reverse=True)\n```\n\nHow do these compare with the insights you gained earlier (e.g., during data exploration in Step 1, or during attribute exporation in Step 3)?\n\n\n### At last, evaluating your best model on the test set!\n\nFinally, let's take the best model you built and tuned on the training set and apply in to the test set:\n\n\n```python\nfinal_model = grid_search.best_estimator_\n\nX_test = strat_test_set.drop(\"median_house_value\", axis=1)\ny_test = strat_test_set[\"median_house_value\"].copy()\n\nX_test_prepared = full_pipeline.transform(X_test)\nfinal_predictions = final_model.predict(X_test_prepared)\n\nfinal_mse = mean_squared_error(y_test, final_predictions)\nfinal_rmse = np.sqrt(final_mse)\n\nfinal_rmse\n```\n\n# Assignments\n\n**For the tick session**:\n\n## 1. \nFamiliarise yourself with the code in this practical. During the tick session, be prepared to discuss the different steps and answer questions (as well as ask questions yourself).\n\n## 2.\nExperiment with the different steps in the ML pipeline:\n- try dropping less informative features from the feature set and test whether it improves performance\n- use other options in preprocessing: e.g., different imputer strategies, min-max rather than standardisation for scaling, feature scaling vs. no feature scaling, and compare the results\n- evaluate the performance of the simple linear regression model on the test set. What is the `final_rmse` for this model?\n- estimate different feature importance weights with the simple linear regression model (if unsure how to extract the feature weights, check [documentation](http://scikit-learn.org/stable/modules/linear_model.html)). How do these compare to the (1) feature importance weights with the best estimator, and (2) feature correlation scores with the target value from Step 3?\n- [`RandomizedSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), as opposed to the `GridSearchCV` used in the practical, does not try out each parameter values combination. Instead it only tries a fixed number of parameter settings sampled from the specified distributions. As a result, it allows you to try out a wider range of parameter values in a less expensive way than `GridSearchCV`. Apply `RandomizedSearchCV` and compare the best estimator results.\n\nFinally, if you want to have more practice with regression tasks, you can **work on the following optional task**:\n\n## 3. (Optional)\n\nUse the bike sharing dataset (`./bike_sharing/bike_hour.csv`, check `./bike_sharing/Readme.txt` for the description), apply the ML steps and gain insights from the data. What data transformations should be applied? Which attributes are most predictive? What additional attributes can be introduced? 
Which regression model performs best?\n\n\n```python\n\n```\n", "meta": {"hexsha": "9800c608d366c5fffca2dad55d25df7342ecb17c", "size": 72586, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "DSPNP_practical1/DSPNP_notebook1.ipynb", "max_stars_repo_name": "yulonglin/cl-datasci-pnp-2021", "max_stars_repo_head_hexsha": "bb51c482009c777402ba0e56d13fbbceaf6f4e85", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": 10, "max_stars_repo_stars_event_min_datetime": "2020-11-06T13:03:19.000Z", "max_stars_repo_stars_event_max_datetime": "2022-02-05T17:27:23.000Z", "max_issues_repo_path": "DSPNP_practical1/DSPNP_notebook1.ipynb", "max_issues_repo_name": "yulonglin/cl-datasci-pnp-2021", "max_issues_repo_head_hexsha": "bb51c482009c777402ba0e56d13fbbceaf6f4e85", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "DSPNP_practical1/DSPNP_notebook1.ipynb", "max_forks_repo_name": "yulonglin/cl-datasci-pnp-2021", "max_forks_repo_head_hexsha": "bb51c482009c777402ba0e56d13fbbceaf6f4e85", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": 12, "max_forks_repo_forks_event_min_datetime": "2020-11-08T09:34:50.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-02T21:55:08.000Z", "avg_line_length": 47.1337662338, "max_line_length": 1034, "alphanum_fraction": 0.6680627118, "converted": true, "num_tokens": 12380, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4843800842769843, "lm_q2_score": 0.1755380693103044, "lm_q1q2_score": 0.08502714480634437}} {"text": "\n\n# Condi\u00e7\u00f5es Gerais\n\nEsta avalia\u00e7\u00e3o tem como objetivo avaliar os conhecimentos adquiridos durante a disciplina de Mec\u00e2nica dos S\u00f3lidos.\n\nEssa forma de avalia\u00e7\u00e3o tem por objetivo promover a discuss\u00e3o dos exerc\u00edcios entre os membros do grupo (e eventualmente entre grupos) e ampliar a diversidade de exerc\u00edcios a serem realizados.\n\n---\n\nAs condic\u00f5es abaixo devem ser observadas: \n\n1. Ser\u00e3o formadas equipes e cada uma delas com no m\u00ednimo 3 e no m\u00e1ximo 4 integrantes. \n\n2. A avalia\u00e7\u00e3o ser\u00e1 realizada por meio da entrega de uma c\u00f3pia deste notebook com as solu\u00e7\u00f5es desenvolvidas at\u00e9 a data estipulada de entrega.\n\n\n3. Da entrega da avalia\u00e7\u00e3o.\n * Os documentos necess\u00e1rios para a entrega do trabalho s\u00e3o (1) os c\u00f3digos desenvolvidos pela equipe. \n * A equipe deve usar este modelo de notebook para desenvolver os c\u00f3digos. \n * Os c\u00f3digos podem ser desenvolvidos combinado a linguagem LaTeX e computa\u00e7\u00e3o simb\u00f3lica via python quando necess\u00e1rio.\n\n4. Da distribui\u00e7\u00e3o das quest\u00f5es.\n * Ser\u00e3o atribu\u00eddas para cada grupo at\u00e9 9 quest\u00f5es referentes ao cap\u00edtulo 2 do \n livro texto. \n * A quantidade de quest\u00f5es ser\u00e1 a mesma para cada grupo. \n * A distribui\u00e7\u00e3o das quest\u00f5es ser\u00e1 aleat\u00f3ria. \n * A pontuac\u00e3o referente a cada quest\u00e3o ser\u00e1 igualit\u00e1ria e o valor total da avalia\u00e7\u00e3o ser\u00e1 100 pontos.\n\n5. As equipes devem ser formadas at\u00e9 \u00e0s **18 horas o dia 23/11/2021** por meio do preenchimento da planilha [[MAC005] Forma\u00e7\u00e3o das Equipes](https://docs.google.com/spreadsheets/d/1j59WVAl1cMzXgupwG86WFNGAQhbtVtc0b5aIQSbqGQE/edit?usp=sharing).\n\n6. 
A forma\u00e7\u00e3o das equipes pode ser acompanhada arquivo [[MAC005] Forma\u00e7\u00e3o das Equipes](https://docs.google.com/spreadsheets/d/1j59WVAl1cMzXgupwG86WFNGAQhbtVtc0b5aIQSbqGQE/edit?usp=sharing). Cada equipe ser\u00e1 indentificada por uma letra em ordem alfab\u00e9tica seguida do n\u00famero 1 (A1, B1, C1, e assim por diante). O arquivo est\u00e1 aberto para edi\u00e7\u00e3o e pode ser alterado pelos alunos at\u00e9 a data estipulada.\n\n7. Equipes formadas ap\u00f3s a data estabelecida para a forma\u00e7\u00e3o das equipes ter\u00e3o a nota da avalia\u00e7\u00e3o multiplicada por um coeficiente de **0.80**.\n\n8. A equipe deve indicar no arquivo [[MAC005] Forma\u00e7\u00e3o das Equipes](https://docs.google.com/spreadsheets/d/1j59WVAl1cMzXgupwG86WFNGAQhbtVtc0b5aIQSbqGQE/edit?usp=sharing) um respons\u00e1vel pela entrega do projeto. \n * Somente o respons\u00e1vel pela entrega deve fazer o upload do arquivo na plataforma\n\n9. A entrega dos projetos deve ocorrer at\u00e9 \u00e0s **23:59 do dia 30/11/2021** na plataforma da disciplina pelo respons\u00e1vel pela entrega. \n * Caso a entrega seja feita por outro integrante diferente daquele indicado pela pela equipe a avalia\u00e7\u00e3o ser\u00e1 desconsiderada e n\u00e3o ser\u00e1 corrigida at\u00e9 que a a condi\u00e7\u00e3o de entrega seja satisfeita.\n\n10. Quaisquer d\u00favidas ou esclarecimentos devem ser encaminhadas pela sala de aula virtual.\n\n\n\n#Exercicios\n\n2.3, 2.5, 2.7, 2.10, 2.19, 2.21, 2.25, 2.27, 2.46\n\n[Link do Livro](http://fn.iust.ac.ir/files/fnst/ssadeghzadeh_52bb7/files/Introduction_to_continuum_mechanics_-Lai-2010-4edition%281%29.pdf)\n\n## Solu\u00e7\u00e3o do problema 2.3 (inserir n\u00famero e enunciado)\n\n\n\n###a) 1 - Montamos e resolvemos o sistema de equa\u00e7\u00f5es para a primeira equa\u00e7\u00e3o: $b_i = B_{ij} a_j$
\n\n$b_1 = B_{1j} a_j$
\n$b_2 = B_{2j} a_j$
\n$b_3 = B_{3j} a_j$

\n\n$b_1 = B_{11} a_1 + B_{12} a_2 + B_{13} a_3$
\n$b_2 = B_{21} a_1 + B_{22} a_2 + B_{23} a_3$
\n$b_3 = B_{31} a_1 + B_{32} a_2 + B_{33} a_3$

\n\n$b_1 = 2*1 + 3*0 + 0*2 = 2$
\n$b_2 = 0*1 + 5*0 + 1*2 = 2$
\n$b_3 = 0*1 + 2*0 + 1*2 = 2$

\n\n$\nb = \n\\begin{bmatrix}\n b_1 \\\\\n b_2 \\\\\n b_3 \n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 2 \\\\\n 2 \\\\\n 2 \n\\end{bmatrix}\n$\n


\n2 - Multiplicamos as matrizes para a segunda equa\u00e7\u00e3o: $[b] = [B][a]$\n\n\n```python\nimport numpy as np\nimport sympy as sp\nsp.init_printing()\n\nB = np.matrix([[2,3,0],[0,5,1],[0,2,1]])\na = np.matrix([[1],[0],[2]])\nb = sp.Matrix(B*a)\n\nprint(\"Dessa forma, as duas equa\u00e7\u00f5es (do passo 1 e 2) s\u00e3o equivalentes\")\nb\n```\n\n Dessa forma, as duas equa\u00e7\u00f5es (do passo 1 e 2) s\u00e3o equivalentes\n\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}2\\\\2\\\\2\\end{matrix}\\right]$\n\n\n\n###b) 1 - Montamos e resolvemos o sistema de equa\u00e7\u00f5es para a primeira equa\u00e7\u00e3o: $s = B_{ij} a_i a_j$
\n\n$s = B_{11} a_1 a_1 + B_{12} a_1 a_2 + B_{13} a_1 a_3 + B_{21} a_2 a_1 + B_{22} a_2 a_2 + B_{23} a_2 a_3 + B_{31} a_3 a_1 + B_{32} a_3 a_2 + B_{33} a_3 a_3 $\n\n$s = 2*1*1 + 3*1*0 + 0*1*2 + 0*0*1 + 5*0*0 + 1*0*2 + 0*2*1 + 2*2*0 + 1*2*2 $\n\n$s = 2 + 4 = 6$\n\n
\n2 - We multiply the matrices for the second equation: $s=[a]^t[B][a]$\n\n\n```python\nimport numpy as np\nimport sympy as sp\nsp.init_printing()\n\nB = np.matrix([[2,3,0],[0,5,1],[0,2,1]])\na = np.matrix([[1],[0],[2]])\nat = np.transpose(a)\n\n# compute the second equation\nb = sp.Matrix(at*B*a)\n\nprint(\"Hence, the two equations (from steps 1 and 2) are equivalent\")\nb\n```\n\n    Hence, the two equations (from steps 1 and 2) are equivalent\n\n\n\n\n\n$\\displaystyle \\left[\\begin{matrix}6\\end{matrix}\\right]$\n\n\n\n## Solution of problem 2.5 (insert number and statement)\n\n\n\n(a)\n\n$s = A_1^{2} + A_2^{2} + A_3^{2}$ \\\\\n$s = (A_1.A_1) + (A_2.A_2) + (A_3.A_3)$ \\\\\n$s = A_iA_i$\n\n(b)\n\n$\n\\frac{\\partial^2 \u03a6}{\\partial x_1^2} + \\frac{\\partial^2 \u03a6}{\\partial x_2^2} + \n\\frac{\\partial^2 \u03a6}{\\partial x_3^2}$\n\n$\n\\frac{\\partial^2 \u03a6}{\\partial x_1 \\partial x_1} + \\frac{\\partial^2 \u03a6}{\\partial x_2 \\partial x_2} + \n\\frac{\\partial^2 \u03a6}{\\partial x_3 \\partial x_3} = \\frac{\\partial^2 \u03a6}{\\partial x_i \\partial x_i}$\n\n## Solution of problem 2.7 (insert number and statement)\n\n\n\n### To write $a_i$ in long form we expand the dummy index $j$ over $j = 1, 2, 3$ in the equation $a_i = \u2202v_i/\u2202t + v_j\u2202v_i/\u2202x_j$; note that only the convective term is summed, so the local term $\u2202v_i/\u2202t$ appears once in each component:\n$a_1 = \u2202v_1/\u2202t + v_1\u2202v_1/\u2202x_1 + v_2\u2202v_1/\u2202x_2 + v_3\u2202v_1/\u2202x_3$\n$a_2 = \u2202v_2/\u2202t + v_1\u2202v_2/\u2202x_1 + v_2\u2202v_2/\u2202x_2 + v_3\u2202v_2/\u2202x_3$\n$a_3 = \u2202v_3/\u2202t + v_1\u2202v_3/\u2202x_1 + v_2\u2202v_3/\u2202x_2 + v_3\u2202v_3/\u2202x_3$\n\n## Solution of problem 2.10 (insert number and statement)\n\n\n\n### 1 - First we find the matrix $[d_i]$, knowing that $d_k = \u03b5_{ijk} a_i b_j$:\n\n$d_1 = \u03b5_{ij1} a_i b_j$
\n$d_2 = \u03b5_{ij2} a_i b_j$
\n$d_3 = \u03b5_{ij3} a_i b_j$

\n$d_1 = \u03b5_{231} a_2 b_3 + \u03b5_{321} a_3 b_2$
\n$d_2 = \u03b5_{312} a_3 b_1 + \u03b5_{132} a_1 b_3$
\n$d_3 = \u03b5_{123} a_1 b_2 + \u03b5_{213} a_2 b_1$

\n### We know that the permutation symbol $\u03b5_{ijk}$ takes the following values:
$\u03b5_{123}$ = $\u03b5_{231}$ = $\u03b5_{312}$ = +1
$\u03b5_{213}$ = $\u03b5_{321}$ = $\u03b5_{132}$ = -1
$\u03b5_{111}$ = $\u03b5_{222}$ = $\u03b5_{333}$ = 0
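\nAs a quick sanity check of these values (an extra step, not part of the original solution), one can evaluate `sympy`'s built-in `LeviCivita` function for a few index triples; a minimal sketch:\n\n\n```python\nfrom sympy import LeviCivita\n\n# even permutations of (1,2,3) -> +1\nprint(LeviCivita(1, 2, 3), LeviCivita(2, 3, 1), LeviCivita(3, 1, 2))\n# odd permutations -> -1\nprint(LeviCivita(2, 1, 3), LeviCivita(3, 2, 1), LeviCivita(1, 3, 2))\n# any repeated index -> 0\nprint(LeviCivita(1, 1, 1), LeviCivita(2, 2, 2), LeviCivita(3, 3, 3))\n```\n\nThe expected output is `1 1 1`, `-1 -1 -1` and `0 0 0`, matching the values listed above.\n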
Then the values of $d_1, d_2, d_3$ become:\n$d_1 = a_2 b_3 - a_3 b_2 = (2)(3) - (0)(2) = 6$
\n$d_2 = a_3 b_1 - a_1 b_3 = (0)(0) - (1)(3) = -3$
\n$d_3 = a_1 b_2 - a_2 b_1 = (1)(2) - (2)(0) = 2$

\n\n### Portanto:\n$\n[d_i] = \n\\begin{bmatrix}\n d_1 \\\\\n d_2 \\\\\n d_3 \n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 6 \\\\\n -3 \\\\\n 2 \n\\end{bmatrix}\n$\n


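\nAs an extra numerical check (not part of the original solution), the sum $d_k = \u03b5_{ijk} a_i b_j$ can be evaluated explicitly with `numpy`, taking the components $a = (1, 2, 0)$ and $b = (0, 2, 3)$ that were used in the calculation above:\n\n\n```python\nimport numpy as np\n\ndef eps(i, j, k):\n    # permutation symbol for indices in {1, 2, 3}\n    return (i - j) * (j - k) * (k - i) / 2\n\na = np.array([1, 2, 0])  # components a_1, a_2, a_3 used above\nb = np.array([0, 2, 3])  # components b_1, b_2, b_3 used above\n\nd = np.zeros(3)\nfor k in range(1, 4):\n    for i in range(1, 4):\n        for j in range(1, 4):\n            d[k - 1] += eps(i, j, k) * a[i - 1] * b[j - 1]\nprint(d)  # expected: [ 6. -3.  2.]\n```\n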
\n### 2 - Now we find the matrix $[d_i]$, knowing that $d_k = (a$ x $b). e_k$:
We know that $a$ x $b$ = $(a_ie_i)$ x $(b_je_j) = a_ib_j\u03b5_{ijk}e_k$, so:\n$d_1 = a_ib_j\u03b5_{ij1}e_1.e_1$
\n$d_2 = a_ib_j\u03b5_{ij2}e_2.e_2$
\n$d_3 = a_ib_j\u03b5_{ij3}e_3.e_3$

\n$d_1 = a_2b_3\u03b5_{231}e_1.e_1 + a_3b_2\u03b5_{321}e_1.e_1$
\n$d_2 = a_1b_3\u03b5_{132}e_2.e_2 + a_3b_1\u03b5_{312}e_2.e_2$
\n$d_3 = a_1b_2\u03b5_{123}e_3.e_3 + a_2b_1\u03b5_{213}e_3.e_3$

\n$d_1 = a_2b_3 - a_3b_2 = (2)(3) - (0)(2) = 6$
\n$d_2 = a_3b_1 - a_1b_3 = (0)(0) - (1)(3) = -3$
\n$d_3 = a_1 b_2 - a_2 b_1 = (1)(2) - (2)(0) = 2$

\n### Resultando em:\n$\n[d_i] = \n\\begin{bmatrix}\n d_1 \\\\\n d_2 \\\\\n d_3 \n\\end{bmatrix}\n=\n\\begin{bmatrix}\n 6 \\\\\n -3 \\\\\n 2 \n\\end{bmatrix}\n$\n


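\nThe same result can also be cross-checked directly with `numpy`'s built-in cross product (again an extra verification, not part of the original solution), with $a = (1, 2, 0)$ and $b = (0, 2, 3)$ as above:\n\n\n```python\nimport numpy as np\n\na = np.array([1, 2, 0])\nb = np.array([0, 2, 3])\nprint(np.cross(a, b))  # expected: [ 6 -3  2]\n```\n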
\n### Comparando as matrizes encontradas em 1 e 2 vemos que os resultados s\u00e3o iguais.\n\n\n## Solu\u00e7\u00e3o do problema 2.19 (inserir n\u00famero e enunciado)\n\n\n\n### Uma transforma\u00e7\u00e3o T opera em qualquer vetor $a$ para dar $Ta = \\frac{a}{|a|}$, onde $|a|$ \u00e9 a magnitude de $a$. Mostre que T n\u00e3o \u00e9 uma transforma\u00e7\u00e3o linear.\n\nComo $Ta = \\frac{a}{|a|}$ para todos os valores de $a$, ent\u00e3o podemos fazer:\n\n$T(a + b) = \\frac{(a + b)}{|a+b|}$\n\nCom isso, escrevemos:\n\n$Ta + Tb = \\frac{a}{|a|} + \\frac{b}{|b|}$\n\nNesse momento, j\u00e1 podemos notar que $T(a + b) \\neq Ta + Tb$\n\nOu seja, T n\u00e3o \u00e9 uma transforma\u00e7\u00e3o linear.\n\n## Solu\u00e7\u00e3o do problema 2.21 (inserir n\u00famero e enunciado)\n\n\n\n### Usar propriedade linear de $T$ para achar:\n### A) $Ta$\n\nUsando as informa\u00e7\u00f5es passadas no enunciado, podemos fazer\n\n$Ta = T(2e1 + 3e2)$\n\n$T(2e1 + 3e2) = 2Te1 + 3Te2$\n\nComo $Te1 = e1 + e2$ e $Te2 = e1 \u2212 e2$, ent\u00e3o fazemos:\n\n$2Te1 + 3Te2 = 2(e1 + e2) + 3(e1 \u2212 e2)$\n\n$2(e1 + e2) + 3(e1 \u2212 e2) = 2e1 + 2e2 + 3e1 - 3e2$\n\nO que nos d\u00e1 $5e1 -e2$. Ou seja, $Ta = 5e1 -e2$.\n\n### B) $Tb$\n\n$Tb = T(3e1 + 2e2)$\n\n$T(3e1 + 2e2) = 3Te1 + 2Te2$\n\nComo $Te1 = e1 + e2$ e $Te2 = e1 \u2212 e2$, ent\u00e3o fazemos:\n\n$3Te1 + 2Te2 = 3(e1 + e2) + 2(e1 \u2212 e2)$\n\n$3(e1 + e2) + 2(e1 \u2212 e2) = 3e1 + 3e2 + 2e1 - 2e2$\n\nO que nos d\u00e1 $5e1 + e2$. Ou seja, $Tb = 5e1 + e2$.\n\n### B) $T(a+b)$\n\n$T(a+b) = T(2e1 + 3e2 + 3e1 + 2e2)$\n\n$T(a+b) = T(5e1 + 5e2)$\n\nComo $T(5e1 + 5e2) = 5(Te1 + Te2)$, ent\u00e3o $5(Te1 + Te2) = 5(e1 + e2 + e1 \u2212 e2)$\n\nDessa forma, temos $5(e1 + e2 + e1 \u2212 e2) = 5e1 + 5e1 + 5e2 - 5e2$\n\nPortanto, $T(a+b) = 10e1$\n\n## Solu\u00e7\u00e3o do problema 2.25 (inserir n\u00famero e enunciado)\n\n\n\n(a) \n$ e_i' = Re_i = R_{mi}e_m\\\\ \n e_1' = R_{11}e_1 + R_{21}e_2 + R_{31}e_3 \\\\\n e_2' = R_{12}e_1 + R_{22}e_2 + R_{32}e_3 \\\\\n e_3' = R_{13}e_1 + R_{23}e_2 + R_{33}e_3 \\\\\n \\text{Dessa forma:} \\\\\n R_{im}R_{jm} = R_{mi}R_{mj} = \\delta_{ij} \\\\\n R_{11} = e_1.Re_1 = e_1 . e_1' = cos(e_1,e_1') \\\\\n R_{12} = e_1.Re_2 = e_1 . e_2' = cos(e_1,e_2') \\\\\n R_{13} = e_1.Re_3 = e_1 . e_3' = cos(e_1,e_3')\n \\text{, logo:} \\\\\n R_{ij} = cos(e_i,e_j') \\\\\n\\text{De acordo com a demonstra\u00e7\u00e3o acima:} \\\\\nR = \n\\begin{bmatrix}\nR_{11} & R_{12} & R_{13}\\\\\n R_{21} & R_{22} & R_{23} \n \\\\ R_{31} & R_{32} & R_{33} \n\\end{bmatrix} \\text{, ser\u00e1:} \\\\\nR = \n\\begin{bmatrix}\n1 & 0 & 0\\\\\n0 & cos\\theta & sen\\theta \n \\\\ 0 & -sen\\theta & cos\\theta \n\\end{bmatrix}\n$\n\n(b)\n\n$\n\\text{Analogamente \u00e0 letra \"a\":} \\\\\nR = \n\\begin{bmatrix}\ncos\\theta & 0 & -sen\\theta\\\\\n0 & 1 & 0 \n \\\\ sen\\theta & 0 & cos\\theta \n\\end{bmatrix}\n$\n\n\n\n\n\n## Solu\u00e7\u00e3o do problema 2.27 (inserir n\u00famero e enunciado)\n\n\n\n### Pelo produto di\u00e1dico temos que :\n$Tr = r - 2(r.n)n = r - 2(nn)r$\n### Multiplicando pela matriz Identidade $I$ :\n$Tr = (Ir - 2(nn)r) = (I - 2nn)r$
\n$T = I - 2nn$

\n### Now let us find the matrix $T$:
As stated in the problem statement:\n$n = (e_1 + e_2 + e_3)/\u221a3$\n### Then:\n$\n[2nn] = 2/3\n\\begin{bmatrix}\n 1 \\\\\n 1 \\\\\n 1 \n\\end{bmatrix}\n\\begin{bmatrix}\n 1 & 1 & 1\\\\\n\\end{bmatrix}\n= 2/3\n\\begin{bmatrix}\n 1 & 1 & 1 \\\\\n 1 & 1 & 1 \\\\\n 1 & 1 & 1 \n\\end{bmatrix}\n$\n


\n### Substituting into the equation $T = I - 2nn$:\n$\n[T] = [I] - 2/3\n\\begin{bmatrix}\n 1 & 1 & 1 \\\\\n 1 & 1 & 1 \\\\\n 1 & 1 & 1 \n\\end{bmatrix}\n$

\n$\n[T] = \n\\begin{bmatrix}\n 1 & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & 1 \n\\end{bmatrix} - \n\\begin{bmatrix}\n 2/3 & 2/3 & 2/3 \\\\\n 2/3 & 2/3 & 2/3 \\\\\n 2/3 & 2/3 & 2/3 \n\\end{bmatrix}\n$

\n$\n[T] = \n\\begin{bmatrix}\n 1/3 & -2/3 & -2/3 \\\\\n -2/3 & 1/3 & -2/3 \\\\\n -2/3 & -2/3 & 1/3 \n\\end{bmatrix} = 1/3\n\\begin{bmatrix}\n 1 & -2 & -2 \\\\\n -2 & 1 & -2 \\\\\n -2 & -2 & 1\n\\end{bmatrix}\n$

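\nAs a numerical check (not part of the original solution), the tensor $T = I - 2nn$ can be assembled with `numpy` for $n = (e_1 + e_2 + e_3)/\u221a3$ and compared with the matrix obtained above; being a reflection, it should also reverse the normal ($Tn = -n$) and undo itself when applied twice ($TT = I$):\n\n\n```python\nimport numpy as np\n\nn = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)  # unit normal given in the problem\nT = np.eye(3) - 2 * np.outer(n, n)          # T = I - 2nn (dyadic product as an outer product)\n\nprint(T)      # expected: 1/3 * [[1, -2, -2], [-2, 1, -2], [-2, -2, 1]]\nprint(T @ n)  # expected: -n\nprint(T @ T)  # expected: the 3x3 identity matrix\n```\n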
\n\n\n## Solu\u00e7\u00e3o do problema 2.46 (inserir n\u00famero e enunciado)\n\n\n\n\n###a) Dado qualquer vetor $a$ e qualquer tensor $T$, mostrar que $a T^A a = 0$, onde $T^A$ e $T^S$ s\u00e3o sim\u00e9tricos e antisim\u00e9tricos por parte de T.\n\nR: Dado que $T^A$ \u00e9 antissim\u00e9trico, ent\u00e3o $(T^A)^T = -T^A$. Dessa forma, podemos calcular:\n\n$aT^Aa = a(T^A)^Ta$ \n\n$aT^Aa = -aT^Aa$\n\n$2aT^Aa = 0$\n\n$aT^Aa = 0$\n\n###b) Dado qualquer vetor $a$ e qualquer tensor $T$, mostrar que $a T a = a T^S a$, onde $T^A$ e $T^S$ s\u00e3o sim\u00e9tricos e antisim\u00e9tricos por parte de T.\n\nR: Dado que qualquer Tensor $T$ pode ser decomposto na soma de um tensor sim\u00e9trico $T^S$ e um antissim\u00e9trico $T^A$. Temos: $T = T^S + T^A$\n\nSendo assim, podemos calcular:\n\n$a T a = a(T^S + T^A)a$\n\n$a T a = (a T^S + a T^A) a$\n\n$a T a = a T^S a + a T^A a$\n\nSubstituindo o valor encontrado na alternativa a ($aT^Aa = 0$), temos: \n\n$a T a = a T^S a + 0$\n\n$a T a = a T^S a$\n", "meta": {"hexsha": "523656fd08b1ca83dfaa848845029b56022f6ada", "size": 444465, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "[MAC005] - Trabalho 01.ipynb", "max_stars_repo_name": "MathewsJosh/mecanica-solidos", "max_stars_repo_head_hexsha": "68b167c4cf760fcb6601dd053a45454fdf73347a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "[MAC005] - Trabalho 01.ipynb", "max_issues_repo_name": "MathewsJosh/mecanica-solidos", "max_issues_repo_head_hexsha": "68b167c4cf760fcb6601dd053a45454fdf73347a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "[MAC005] - Trabalho 01.ipynb", "max_forks_repo_name": "MathewsJosh/mecanica-solidos", "max_forks_repo_head_hexsha": "68b167c4cf760fcb6601dd053a45454fdf73347a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 583.2874015748, "max_line_length": 88262, "alphanum_fraction": 0.9374213943, "converted": true, "num_tokens": 5461, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.42632159254749036, "lm_q2_score": 0.1993080074160125, "lm_q1q2_score": 0.08496930712906148}} {"text": "```python\n%matplotlib notebook\n%matplotlib inline\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n# Nuclear Models\n\n## Learning objectives\n\n- Summarize the history of atomic theory development.\n- Recognize the radiation signatures that drove early atomic theory.\n- List atomic models: Plum Pudding, Rutherford, Bohr, Bohr with Elliptical Orbits, Quantum Mechanical\n- Differentiate various atomic models by name and physics.\n- List nuclear models: Proton-electron, Proton-neutron, Liquid-Drop, Shell\n- Differentiate nuclear models by name and physics.\n- Identify the physics captured by various nuclear models.\n- Explain the reason for the structure of most likely decays in the chart of the nuclides\n\n## Discovery of Radioactivity\n\nElectrically charged plates impose a magnetic field (out of the page). \n\n\n\n21.3 Radioactive Decay by Rice University is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted. 
(https://opentextbc.ca/chemistry/chapter/21-3-radioactive-decay/)\n\n\n- Alpha $(\\alpha)$ particles are attracted to the negative plate and deflected by a relatively small amount.\n- Beta $(\\beta)$ particles are attracted to the positive plate and deflected by a larger amount.\n- Gamma $(\\gamma)$ particles seem to be unaffected.\n\n### Exercise: Think-pair-share\n\nBased on these experimental results:\n\n1. What can one determine about the charges of the alpha, beta, and gamma particles?\n2. What can one determine about the weights of the particles?\n\n\n### Plum Pudding Model\n\nJ.J. Thomson correctly identified the beta rays as electrons. \nSo, electrons must be somewhere in the atom. However, atoms seemed be generally electrically neutral. \nBeing british, J.J. Thomson equated this to plum pudding:\n\n\n\n\nHe had no reason to believe that the charges were not uniformly distributed.\n\n\n\n\nExplains:\n\n- electrically neutral atoms\n- size of the atom $(10^{-10} m)$\n- an ion is an atom from which an electron has been lost\n- charge of a singly ionized atom is exactly one \n- number of electrons equals approximately half of the atomic weight of the atom\n\n\n### Rutherford Model\n\nGold foil experiment by Geiger and Marsden. Alpha particles bombarded a very thin gold sheet/foil. Reflected alphas were very unlikely to be observed if the Thomson model was correct.\n\n\n\n\nIn **1911** Rutherford surmised that a dense, positively charged thing must be at the center of the atom, perhaps with a diameter of $10^{-14}m$ or less.\n\nDeficiency of Rutherford\u2019s model:\nThe accelerating charge (electron) would radiate away their kinetic energy and the atom will collapse. \n\n\n### Bohr Model\n\nIn the 1880s, Balmer, Rydberg, and others had observed **discrete lines** in the wavelength spectrum of atomic hydrogen when atoms are excited.\n\n\n\nThe equation explaining these emission lines was well known by the early 1900s.\n\n\\begin{align}\n\\frac{1}{\\lambda} &= R_H \\left[\\frac{1}{n_o^2} - \\frac{1}{n^2} \\right]\\\\\n\\mbox{where }&\\\\\n\\lambda &= \\mbox{the wavelength of electromagnetic radiation emitted in vacuum}\\\\\nR_H &= \\mbox{the Rydberg constant } \\\\\n&\\simeq 1.097373156850865 \\times 10^7 m^{-1} \\\\\n\\implies \\frac{1}{R} &=912 \\mbox{ angstrom}\\\\\n\\end{align}\n\nIn the Bohr model, then, electrons were allowed only in certain orbits. 
Specifically only those orbits whose angular momentum satisfied the following relation:\n\n\\begin{align}\n\\frac{m_ev^2}{r} &= \\frac{Ze^2}{4\\pi\\epsilon_or^2}\\\\\nL &\\equiv m_evr = n\\frac{h}{2\\pi}\\\\\n\\mbox{where }&\\\\\nn&=1,2,3...\\\\\nL&=\\mbox{angular momentum}\\\\\nv&=\\mbox{velocity of the electron}\\\\\nr&=\\mbox{radius of the electron orbit}\\\\\n\\epsilon_o&=\\mbox{permittivity of free space}\\\\\n&= 8.85418782 \\times 10^{-12} m^{-3} kg^{-1} s^4 A^2\n\\end{align}\n\nSolving those relations for $r$ and $v$ :\n\n\\begin{align}\nv_n &= \\frac{Ze^2}{2\\epsilon_onh}\\\\\nr_n &= \\frac{n^2h^2\\epsilon_o}{\\pi m_e Z e^2}\n\\end{align}\n\n\nThis explained the discreteness, as electrons might radiate energy only when moving from one allowed orbit to another.\n\n\n\n\nElectron\u2019s total energy under a certain orbit:\n\n\\begin{align}\nE_n &= \\frac{-m_e(Ze^2)^2}{8\\epsilon_o^2 n^2 h^2}\\\\\n\\end{align}\n\n\nEnergy difference between two allowed orbits:\n\n\\begin{align}\n\\Delta E_{n\\rightarrow n_o} &= h\\nu_{n\\rightarrow n_o} \\\\\n &= \\frac{-m_e(Ze^2)^2}{8\\epsilon_o^2 h^2}\\left[\\frac{1}{n_o^2} - \\frac{1}{n^2}\\right]\\\\\n\\end{align}\n\n\nThis model explained a lot.\n\n\n\n### Bohr Model : Elliptic Orbits\n\nFine structure: Spectral lines are consisted of a number of lines very close together.\n\n- Except the quantum number $n$, there are some energy levels lying close to one another. \n- Sommerfeld postulated elliptic orbits as well as circular orbits and introduced another quantum number to describe the **angular momentum** of orbits. \n\nHowever, Sommerfeld\u2019s theory predicted more lines than were observed in experiments.\n\nThus, a new quantum number $n$ needed an _ad hoc_ selection rule to limit the number of predicted lines.\n\n\n\n\n\nSplitting of the spectral lines was observed: A third quantum number $m$ needed to be introduced to revise Bohr\u2019s model. also predicted more lines\n\nThe theory failed to applied to more complicated atoms (multiplet structure is observed).\n\nFurther change of Bohr\u2019s theory can no longer resolve the difficulties. \n\n### Quantum Mechanical Model\n\nThe electrons are no longer modeled as particles moving in orbits. Instead, they are modeled as a standing wave around the nucleus. The magnitude of that wave reflects the probability of finding the electron in that locations.\n\n- Schrodinger\u2019s new approach: **the wave function**\n- The electrons are no longer point particles; they are visualized as a standing wave around the nucleus.\n\nThe three quantum numbers ($n$, $l$ , $m$) arose from the theory naturally - no ad hoc selection rules were needed. \n\nA fourth quantum number $m_s$ was introduced to explain:\n- Multiple fine-line structure\n- Splitting of lines in a strong magnetic field ( the **anomalous Zeeman effect**)\n\n\n\nThe $m_s$ quantum number accounted for the inherent angular momentum of the electron equal to $\\pm\\frac{h}{2\\pi}. Later, in 1928, Dirac would show that this fourth quantum number also arises from the wave equation.\n\n## Nuclear Models\n\n\n### Fundamental properties of the nucleus \n\nSome simple facts were determined about the mass of the nucleus.\n\n1. The **masses of atoms** were very nearly whole numbers if one defines the atomic mass unit as:\n\\begin{align}\n1 u = \\frac{1}{12}M(^{12}C)\n\\end{align}\n2. 
The **mass contribution from electrons** is quite **small**\n - In $^{12}C$, the six electrons together only weigh $0.00329u$\n \n \nAlso, electron scattering experiments yielded details about the density of the nucleus and its components.\n**The density of protons** inside the spherical nuclei was well understood to be a function of the radius, dropping off appreciably at the boundary of the nucleus.\n\n\\begin{align}\n\\rho_p(r) &= \\frac{\\rho_p^o}{1+e^{\\frac{(r-R)}{a}}} \\left[\\frac{\\mbox{protons}}{fm^3}\\right]\\\\\n\\mbox{where }&\\\\\nr &= \\mbox{distance to center of nucleus}\\\\\nR &= \\mbox{total 'radius' of the nucleus}\\\\\na &= \\mbox{surface thickness}\\\\\n\\rho_p^o &= \\int\\int\\int \\rho_p(r)dV\\\\\n&= 4\\pi\\int_0^\\infty r^2\\rho_p(r)dr\\\\\n&= Z\n\\end{align}\n\nAlso, the R value, the radius of the nucleus was seen empirically to be proportional to $A^{1/3}$.\n\n\n```python\ndef rho_p(r, rho_po, radius, a):\n denom = 1 + math.exp((r - radius)/a)\n return rho_po/denom\n```\n\n\n```python\n# For 16O (from table 3.2)\no_rho_po = 0.156 # fm^{-3}\no_radius = 2.61 # fm\no_a = 0.513\n\n# For 109Ag (from table 3.2)\nag_rho_po = 0.157 # fm^{-3}\nag_radius = 5.33 # fm\nag_a = 0.523\n\n# For 208Pb (from table 3.2)\npb_rho_po = 0.159 # fm^{-3}\npb_radius = 6.65 # fm\npb_a = 0.526\n\nr = np.arange(0, 10, 0.1)\nto_plot_o = np.arange(0., 100.)\nto_plot_ag = np.arange(0., 100.)\nto_plot_pb = np.arange(0., 100.)\n\nfor i in range(0, 100):\n to_plot_o[i] = rho_p(r[i], o_rho_po, o_radius, o_a)\n to_plot_ag[i] = rho_p(r[i], ag_rho_po, ag_radius, ag_a)\n to_plot_pb[i] = rho_p(r[i], pb_rho_po, pb_radius, pb_a)\n\nplt.plot(r, to_plot_o, label=\"$^{16}O$\")\nplt.plot(r, to_plot_ag, label=\"$^{109}Ag$\")\nplt.plot(r, to_plot_pb, label=\"$^{208}Pb$\")\nplt.ylabel(\"Proton density ($fm^{-3}$)\")\nplt.xlabel(\"distance from center $(fm)$\")\nplt.legend()\n```\n\n### Proton Electron Model\n\nThis model sought primarily to explain the wholeness of mass numbers. To do so, it simply assumed that all heavier nuclei were composed of multiples of the hydrogen nucleus (assumed to be a single proton), which has the smallest mass(1 amu).\n\nFor this to match what was known about atomic charge, electrons would need to be in the nucleus to cancel some, but not all, of the positive charge from the protons.\n\n**In this model** an atom $^A_ZX$ would have a nucleus containing A protons and (A \u2212 Z) electrons with Z electrons surrounding the nucleus. This postulated extra electrons -- to explain it, the mass of the electrons was assumed to make a negligible contribution.\n\nTwo difficulties with this P-E Model:\n\n1. Predicted angular momentum (spin) of the nuclei did not always agree with experiment. \n **Protons and electrons both have half integer spin**, so when an even number are combined, whole integer spin should result. For example, **the model predicted integer spin for Beryllium, while experiments predicted half-integer spin.** Similarly, **the model predicted half-integer spin in nitrogen but experimental results show nitrogen has integer spin.**\n\n2. Uncertainty Principle \n\n\\begin{align}\n\\Delta p \\Delta x &\\ge \\frac{h}{4\\pi}\n\\end{align}\n \n\nIf an electron is in the nucleus, then $\\Delta x\\simeq10^{-14}m$. 
Accordingly:\n\\begin{align}\nmin(\\Delta_p) = 1.1\\times 10^{-20}J m^{-1}s\n\\end{align}\nSince the electron\u2019s total energy is\n\\begin{align}\nE &= T +m_oc^2 \\\\\n&=\t\\sqrt{p^2c^2 +m^2_oc^4}\\\\\n\\implies &\\\\\n&\\forall p = \\Delta p : E \\simeq T = 20 MeV\\\\\n\\end{align}\n\nSince the electron's rest-mass energy is 0.51 MeV and beta particles emitted by atoms seldom have energies above a few MeV, something was wrong with this calculation.\n\nYou can do the same calculation for the proton. Because it has a much higher mass, there is no discrepancy. The energy of a free proton confined to a nucleus is its rest-mass energy (931 MeV), with $T<1MeV$.\n\n### 1932: Chadwick discovers the neutron\n\nChadwick discovers the existence of a chargeless particle with a mass just slightly greater than that of the proton.\n\n\\begin{align}\nm_n &= 1.008665 u\\\\\nm_p &= 1.007276 u\n\\end{align}\n\n\n\n### Proton-Neutron Model\n\n- **1932** Chadwick discovered the neutron. \n- **1932** Heisenberg first suggested every nucleus is composed of only protons and neutrons.\n\n**In this model**, a nucleus with a mass number A contains Z protons and N = A \u2212 Z neutrons. \n\n- This P-N Model avoids the failures of the P-E model. \n- Also is consistent with experimental results regarding radioactivity \n- Since the neutron has half-integral spin, the A neutrons and protons give appropriate spin for the total atom in all cases.\n\nChallenges for this model:\n\n- Because of Coulombic repulsive forces in the nucleus, a 'nuclear force' must hold the nucleus together\n- To hold the protons and neutrons together: \n nuclear force : p-n , n-n , p-p\n 1. inside the nucleus: nuclear force (attractive)\n 2. outside the nucleus: Coulombic force (repulsive)\n\nThe energy needed to separate nucleus into $p_s$ & $n_s$ : binding energy\n\n\nIn the image below, the nuclear potential well is shown.\n\n\n**The above image was reproduced from Shultis, J. K. and Faw, R. E. \u201cFundamentals of Nuclear Science and Engineering\u201d (2016).**\n\n## Nuclear Stability\n\n\n\n**Figure** Graph of isotopes by type of nuclear decay. Orange and blue nuclides are unstable, with the black squares between these regions representing stable nuclides. The unbroken line passing below many of the nuclides represents the theoretical position on the graph of nuclides for which proton number is the same as neutron number. The graph shows that elements with more than 20 protons must have more neutrons than protons, in order to be stable.\n\n\n\n**The above image was reproduced from Shultis, J. K. and Faw, R. E. \u201cFundamentals of Nuclear Science and Engineering\u201d (2016).**\n\n- many more stable isotopes with even N and/or Z\n- in a heavy nucleus, the neutrons and protons tend to group themselves into subunits of 2 neutrons and 2 protons.\n- when either Z or N equals 8, 20, 50, 82, or 126, there are relatively greater numbers of stable nuclides. \n\n### Liquid Drop Model\n\n\n1. volume term: volume binding energy is proportional to the number of nucleons (i.e. # of nucleons.)\n\n2. Surface term: Nucleons near the surface of the nucleus are not completely surrounded by other nucleons as interior nucleons.\n\n3. Coulomb term: Coulombic repulsive force decrease the stability of nucleons (reduces the BE ).\n\n4. asymmetry term: a departure from symmetry (N=Z) tends to reduce nuclear stability. (Fig.3.10 )\n\n5. pairing term : pairing neutrons and protons are more stable. 
(even N or Z; odd N or Z ; even N (Z), odd Z (N)) (Fig.3.11 & Fig.3.12)\n\n\n\n\nHigher binding energy $\\implies$ a more stable nuclide.\n\n### Shell Model\n\nThe liquid drop model cannot explain the abnormal high stable nuclides(magic numbers:2,8,20,28,50,82,126).\nThe shell model assume: (Shrodinger\u2019s wave eq.)\nEach nucleon moves independently.\nEach nucleon moves in a potential well.\n\nWhen the model\u2019s quantum-mechanical wave eq. is solved, the nucleons are found to distribute themselves into a number of energy levels.\nFilled shells are indicated by large gaps between each adjacent energy level.\n\n\n\n\n\n```python\n\n```\n\n\n```python\n\n```\n", "meta": {"hexsha": "8a9e4e156a4462a60a6c0c7a661ace783a69e1ea", "size": 51470, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "nuclear_models/00-nuclear-models.ipynb", "max_stars_repo_name": "katyhuff/npr247", "max_stars_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-12-17T06:07:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:14:51.000Z", "max_issues_repo_path": "nuclear_models/00-nuclear-models.ipynb", "max_issues_repo_name": "katyhuff/npr247", "max_issues_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-29T17:27:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T17:46:50.000Z", "max_forks_repo_path": "nuclear_models/00-nuclear-models.ipynb", "max_forks_repo_name": "katyhuff/npr247", "max_forks_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-08-25T20:00:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T03:05:26.000Z", "avg_line_length": 86.5042016807, "max_line_length": 26752, "alphanum_fraction": 0.7933553526, "converted": true, "num_tokens": 4235, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.341582499438317, "lm_q2_score": 0.24798742068237775, "lm_q1q2_score": 0.08470816298594798}} {"text": "\n\n## Data-driven Design and Analyses of Structures and Materials (3dasm)\n\n## Lecture 8\n\n### Miguel A. Bessa | M.A.Bessa@tudelft.nl | Associate Professor\n\n**What:** A lecture of the \"3dasm\" course\n\n**Where:** This notebook comes from this [repository](https://github.com/bessagroup/3dasm_course)\n\n**Reference for entire course:** Murphy, Kevin P. *Probabilistic machine learning: an introduction*. MIT press, 2022. Available online [here](https://probml.github.io/pml-book/book1.html)\n\n**How:** We try to follow Murphy's book closely, but the sequence of Chapters and Sections is different. The intention is to use notebooks as an introduction to the topic and Murphy's book as a resource.\n* If working offline: Go through this notebook and read the book.\n* If attending class in person: listen to me (!) but also go through the notebook in your laptop at the same time. Read the book.\n* If attending lectures remotely: listen to me (!) via Zoom and (ideally) use two screens where you have the notebook open in 1 screen and you see the lectures on the other. Read the book.\n\n**Optional reference (the \"bible\" by the \"bishop\"... pun intended \ud83d\ude06) :** Bishop, Christopher M. *Pattern recognition and machine learning*. 
Springer Verlag, 2006.\n\n**References/resources to create this notebook:**\n* [Car figure](https://korkortonline.se/en/theory/reaction-braking-stopping/)\n\nApologies in advance if I missed some reference used in this notebook. Please contact me if that is the case, and I will gladly include it here.\n\n## **OPTION 1**. Run this notebook **locally in your computer**:\n1. Confirm that you have the 3dasm conda environment (see Lecture 1).\n\n2. Go to the 3dasm_course folder in your computer and pull the last updates of the [repository](https://github.com/bessagroup/3dasm_course):\n```\ngit pull\n```\n3. Open command window and load jupyter notebook (it will open in your internet browser):\n```\nconda activate 3dasm\njupyter notebook\n```\n4. Open notebook of this Lecture.\n\n## **OPTION 2**. Use **Google's Colab** (no installation required, but times out if idle):\n\n1. go to https://colab.research.google.com\n2. login\n3. File > Open notebook\n4. click on Github (no need to login or authorize anything)\n5. paste the git link: https://github.com/bessagroup/3dasm_course\n6. click search and then click on the notebook for this Lecture.\n\n\n```python\n# Basic plotting tools needed in Python.\n\nimport matplotlib.pyplot as plt # import plotting tools to create figures\nimport numpy as np # import numpy to handle a lot of things!\nfrom IPython.display import display, Math # to print with Latex math\n\n%config InlineBackend.figure_format = \"retina\" # render higher resolution images in the notebook\nplt.style.use(\"seaborn\") # style for plotting that comes from seaborn\nplt.rcParams[\"figure.figsize\"] = (8,4) # rescale figure size appropriately for slides\n```\n\n## Outline for today\n\n* Parameter estimation from training with data (model fitting)\n - Posterior approximation by Dirac delta \"distribution\"\n - Point estimates for the Dirac delta \"distribution\"\n * MAP: Maximum A Posterior estimate\n * MLE: Maximum Likelihood Estimation \n - Negative log likelihood (NLL) \n* Why some people do not adopt a Bayesian (probabilistic) perspective of ML\n\n**Reading material**: This notebook + Chapter 4\n\n## Summary of past lectures\n\nBayesian inference:\n* predicts a **quantity of interest** (e.g. $y$) while treating **unknown** information as rv's (e.g. $z$)\n\n* it is based on establishing a model (observation distribution + prior) and evaluating it on data (joint likelihood normalized by marginal likelihood) to update our belief about the unknown (posterior)\n\n* from the posterior, we can then predict a distribution for the quantity of interest (the PPD) that results from marginalizing (integrating out) the unknown\n\n### The good and the bad\n\nIn short: Bayesian inference results from interpreting the unknown as rv's of a model, and then evaluating the impact of all possible values of the rv's (within the constraints imposed by the model!) by marginalizing them (integrating them out).\n\n* **The good**: This is powerful because even if our assumptions are wrong, we can at least take different values for the rv's and their respective impact on the predictions. This alleviates problems such as overfitting and overconfidence, that we will encounter in the remaining of the course.\n\n* **The bad**: Bayesian inference can be difficult. 
We solved one of the simplest problems in the last lectures, and we saw that those integrals are a bit ugly...\n - In most cases, the integrals (to compute the marginal likelihood, and the PPD) cannot even be solved analytically.\n - Numerical strategies exist to approximate the integration, but they tend to be **slow when accurate** or **fast but innacurate** (a dangerous generalization: forgive me Bayesians!)\n\n**Very Important Question (VIQ)**: What if we don't calculate these integrals at all?\n\n## Machine Learning without going fully Bayesian\n\nAvoiding integration is possible by noting that:\n\n1. Computing the PPD is trivial if the **posterior distribution becomes the Dirac delta**\n\n\n2. The marginal likelihood is just a **constant**\n\nLet's explore these two remarks.\n\n### 1. PPD when the posterior is a Dirac delta\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\int \\underbrace{p(y|z)}_{\\text{observation}\\\\ \\text{distribution}} \\overbrace{p(z|y=\\mathcal{D}_y)}^{\\text{posterior}} dz\n$$\n\nWhat happens if the posterior is the Dirac delta \"distribution\"?\n\n$$\np(z|y=\\mathcal{D}_y) = \\delta(z-\\hat{z})\n$$\n\nwhere $\\hat{z}$ is our best estimate for the value that $z$ should have.\n\n$$\\require{color}\n\\begin{align}\n{\\color{orange}p(y|\\mathcal{D}_y)} &= \\int \\underbrace{p(y|z)}_{\\text{observation}\\\\ \\text{distribution}} \\overbrace{p(z|y=\\mathcal{D}_y)}^{\\text{posterior}} dz\\\\\n&= \\int p(y|z) \\delta(z-\\hat{z}) dz \\\\\n&= p(y|z=\\hat{z})\n\\end{align}\n$$\n\n**Conclusion**: The PPD becomes the **observation distribution** where the unknown $z$ becomes our **best estimate** $\\hat{z}$ (in other words: $z = \\hat{z} =$ const)\n\n* But what is our \"**best estimate**\" $\\hat{z}$?\n - There are different estimates and different strategies to get there!\n\n### 2. Finding the \"best estimate\" $\\hat{z}$ without computing the marginal likelihood\n\nRemember: the Bayes' rule determines the posterior,\n\n$\\require{color}$\n$$\n{\\color{green}p(z|y=\\mathcal{D}_y)} = \\frac{ {\\color{blue}p(y=\\mathcal{D}_y|z)}{\\color{red}p(z)} } {p(y=\\mathcal{D}_y)}\n$$\n\nand the marginal likelihood $p(y=\\mathcal{D}_y)$ is just a constant.\n\nIf we want to reduce the posterior to the Dirac delta \"distribution\",\n\n$$\np(z|y=\\mathcal{D}_y) = \\delta(z-\\hat{z})\n$$\n\nwhat is the only parameter that we need to find?\n\n* We just need to find $\\hat{z}$ to completely characterize $\\delta(z-\\hat{z})$\n\nNote that this is not the case if the posterior is a different distribution!\n\nFor example, we saw in the previous lectures that the posterior for the car stopping distance problem was a **Gaussian**.\n\n* How many parameters do you need to characterize the Gaussian distribution?\n\nIndeed... Two!\n\nAnd if the posterior distribution is more complicated, you may need a lot more parameters! 
In some cases, the posterior does not even have an analytical description!\n\nAnyway, the question still remains: what should be the value $\\hat{z}$?\n\nLet's go back to the two problems we have seen in Lecture 6 and Lecture 7.\n\nRecall our reflection on the differences between the posterior for the two priors we used.\n\n* When using the noninformative Uniform prior $p(z) = \\frac{1}{C_z}$ (Lecture 6):\n\n$$\\require{color}\\begin{align}\n{\\color{green}p(z|y=\\mathcal{D}_y)}\n&= \\mathcal{N}(z|\\mu, \\sigma^2)\n\\end{align}\n$$\n\n* When using a Gaussian prior $p(z) = \\mathcal{N}\\left(z| \\overset{\\scriptscriptstyle <}{\\mu}_z, \\overset{\\scriptscriptstyle <}{\\sigma}_z^2\\right)$ (Lecture 7):\n\n$$\\require{color}\\begin{align}\n{\\color{green}p(z|y=\\mathcal{D}_y)} &= \\mathcal{N}\\left(z| \\overset{\\scriptscriptstyle >}{\\mu}_z, \\overset{\\scriptscriptstyle >}{\\sigma}_z^2\\right) = \\mathcal{N}\\left(z\\left|\\frac{1}{\\frac{1}{\\sigma^2} + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}} \\left( \\frac{\\mu}{\\sigma^2} + \\frac{\\overset{\\scriptscriptstyle <}{\\mu}_z}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\right), \\frac{1}{\\frac{1}{\\sigma^2} + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}}\\right.\\right)\n\\end{align}\n$$\n\nThe posterior is still a Gaussian but its mean and variance have been updated by the influence of the prior!\n\nLet's play a simple game:\n* Choose where to place the Dirac delta \"distribution\" for those two posteriors we found before.\n\n\n```python\n# This cell is hidden during the presentation\nfrom scipy.stats import norm # import the normal dist, as we learned before!\ndef samples_y_with_2rvs(N_samples,x): # observations/measurements/samples for car stop. dist. prob. with 2 rv's\n mu_z1 = 1.5; sigma_z1 = 0.5;\n mu_z2 = 0.1; sigma_z2 = 0.01;\n samples_z1 = norm.rvs(mu_z1, sigma_z1, size=N_samples) # randomly draw samples from the normal dist.\n samples_z2 = norm.rvs(mu_z2, sigma_z2, size=N_samples) # randomly draw samples from the normal dist.\n samples_y = samples_z1*x + samples_z2*x**2 # compute the stopping distance for samples of z_1 and z_2\n return samples_y # return samples of y\n```\n\n\n```python\n# This cell is hidden during the presentation\n\n# -------------------------------------------------------------------------------\n# PARAMETERS YOU CAN CHANGE! 
PLAY A BIT WITH THIS ;)\nx = 75 # keeping the car velocity constant at 75 m/s as we have done before\nmu_z2 = 0.1; sigma_z2 = 0.01 # parameters of z_2 distribution\nN_samples = 3 # Let's say our data is composed of 3 samples (empirical observations)\nmu_prior_z = 3; sigma_prior_z = 2 # parameters of the Gaussian prior distribution (used only in case 2)\n# -------------------------------------------------------------------------------\n\n\nempirical_y = samples_y_with_2rvs(N_samples, x) # Our data (empirical measurements of N_samples at x=75)\n\n# Compute all the constants needed to plot the posterior for Lecture 6 and for Lecture 7\nw = x\nb = mu_z2*x**2\nsigma_yGIVENz = np.sqrt((x**2*sigma_z2)**2) # sigma_y|z (comes from the stochastic influence of the z_2 rv)\n# Empirical mean and std directly calculated from observations:\nempirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); \n#\n# Parameters of the likelihood function (not a distribution because it is not normalized):\nsigma = np.sqrt(sigma_yGIVENz**2/(w**2*N_samples)) # std arising from the likelihood\nmu = empirical_mu_y/w - b/w # mean arising from the likelihood (product of Gaussian densities for the data)\n# -------------------------------------------------------------------------------\n# Case 1: using a noninformative Uniform prior (Lecture 6):\n# Posterior parameters:\n# These parameters are obvious in this case but I just want to highlight that the mean and std of this posterior\n# are the same as the parameters of the likelihood because posterior = likelihood / const )\nsigma_posterior_UniformPrior = sigma # std of posterior (same as likelihood)\nmu_posterior_UniformPrior = mu # mean of posterior (same as likelihood)\n#\n# PPD parameters:\nPPD_mu_y_UniformPrior = mu*w + b # same result if using: np.mean(empirical_y)\nPPD_sigma_y_UniformPrior = np.sqrt(w**2*sigma**2+sigma_yGIVENz**2) # same as: np.sqrt((x**2*sigma_z2)**2*(1/N_samples + 1))\n\n# z values for plot of case 1:\nzrange_case1 = np.linspace(-3*sigma_posterior_UniformPrior+mu_posterior_UniformPrior,\n 3*sigma_posterior_UniformPrior+mu_posterior_UniformPrior, 200)\n# Posterior values for plot of case 1:\nposterior_pdf_values_case1 = norm.pdf(zrange_case1, mu, sigma)\n# Probability density of posterior at the mean for case 1:\npdf_at_mean_case1 = norm.pdf(mu_posterior_UniformPrior,mu_posterior_UniformPrior,sigma_posterior_UniformPrior)\n# MAP estimate (maximum a posterior estimate) is the same as MLE (maximum likelihood estimation) for case 1:\npdf_at_mode_case1 = pdf_at_mean_case1 # in this case it's the same as mean (no calculation needed)\n# -------------------------------------------------------------------------------\n#\n# -------------------------------------------------------------------------------\n# CASE 2: using a Gaussian prior (Lecture 7):\n# Posterior parameters:\nsigma_posterior_GaussianPrior = np.sqrt( (sigma_prior_z**2*sigma**2)/(sigma_prior_z**2+sigma**2) )# std of posterior\nmu_posterior_GaussianPrior = sigma_posterior_GaussianPrior**2*(mu/(sigma**2)+mu_prior_z/(sigma_prior_z**2)) # mean of posterior\n# PPD parameters:\nPPD_mu_y_GaussianPrior = mu_posterior_GaussianPrior*w + b\nPPD_sigma_y_GaussianPrior = np.sqrt(w**2*sigma_posterior_GaussianPrior**2+sigma_yGIVENz**2)\n#\n# z values for plot:\nzrange_case2 = np.linspace(-3*sigma_posterior_GaussianPrior+mu_posterior_GaussianPrior,\n 3*sigma_posterior_GaussianPrior+mu_posterior_GaussianPrior, 200)\n# Posterior values for plot of case 2:\nposterior_pdf_values_case2 = 
norm.pdf(zrange_case2, mu_posterior_GaussianPrior,\n sigma_posterior_GaussianPrior) # values of posterior for plotting\n# Probability density of posterior at the mean for case 2:\npdf_at_mean_case2 = norm.pdf(mu_posterior_GaussianPrior,mu_posterior_GaussianPrior,sigma_posterior_GaussianPrior)\n# MAP estimate (maximum a posterior estimate) for case 2:\npdf_at_mode_case2 = pdf_at_mean_case2 # in this case it's the same as mean (no calculation needed)\n# -------------------------------------------------------------------------------\n \n\n# Plot the posteriors that we calculate above and the Dirac delta at different z_hat\ndef Posteriors_and_Dirac_delta(z_hat_case1=mu_posterior_UniformPrior-2*sigma_posterior_UniformPrior,\n z_hat_case2=mu_posterior_GaussianPrior-2*sigma_posterior_GaussianPrior):\n fig_Dirac, (ax_case1, ax_case2) = plt.subplots(1,2)\n #\n ax_case1.plot(zrange_case1, posterior_pdf_values_case1,\n label=r\"Posterior: $p(z|\\mathcal{D}_y) = \\mathcal{N}\\left(z| \\mu, \\sigma^2\\right)$\")\n ax_case1.set_ylim(0, 1.3*pdf_at_mode_case1)\n ax_case1.plot(mu_posterior_UniformPrior, pdf_at_mean_case1,\n 'g^', markersize=25, linewidth=2,\n label=r'mode: $\\underset{z}{\\mathrm{argmax}}\\; p(z|\\mathcal{D}_y)=\\mu$')\n ax_case1.plot(mu_posterior_UniformPrior, pdf_at_mode_case1,\n 'k*', markersize=20, linewidth=2,\n label=r'mean: $\\mathbb{E}[z|\\mathcal{D}_y]=\\mu$')\n ax_case1.annotate(\"\",\n xy=(z_hat_case1, 0), xycoords='data',\n xytext=(z_hat_case1, 1.3*pdf_at_mode_case1), textcoords='data',\n arrowprops=dict(arrowstyle=\"<-\",\n connectionstyle=\"arc3\", color='r', lw=2),\n )\n ax_case1.text(z_hat_case1, pdf_at_mode_case1*1.05, 'Dirac $\\delta$', rotation = -90, fontsize = 15)\n ax_case1.text(z_hat_case1, 0, ('$\\hat{z}=%1.2f$' % z_hat_case1), fontsize = 15)\n ax_case1.set_xlabel(\"z\", fontsize=20)\n ax_case1.set_ylabel(\"probability density\", fontsize=20)\n ax_case1.legend(loc='center right', fontsize=12)\n ax_case1.set_title(\"Posterior using noninformative Uniform prior (Lecture 6)\", fontsize=20)\n #\n ax_case2.plot(zrange_case2, posterior_pdf_values_case2,\n label=r\"Posterior: $p(z|\\mathcal{D}_y) = \\mathcal{N}\\left(z| \\overset{>}{\\mu}_z, \\overset{>}{\\sigma}_z^2\\right)$\")\n \n ax_case2.set_ylim(0, 1.3*pdf_at_mode_case2)\n ax_case2.plot(mu_posterior_GaussianPrior, pdf_at_mean_case2,\n 'g^', markersize=25, linewidth=2,\n label=r'mode: $\\underset{z}{\\mathrm{argmax}}\\; p(z|\\mathcal{D}_y)=\\overset{>}{\\mu}_z$')\n ax_case2.plot(mu_posterior_GaussianPrior, pdf_at_mean_case2,\n 'k*', markersize=20, linewidth=2,\n label=r'mean: $\\mathbb{E}[z|\\mathcal{D}_y]=\\overset{>}{\\mu}_z$')\n ax_case2.annotate(\"\",\n xy=(z_hat_case2, 0), xycoords='data',\n xytext=(z_hat_case2, 1.3*pdf_at_mode_case2), textcoords='data',\n arrowprops=dict(arrowstyle=\"<-\",\n connectionstyle=\"arc3\", color='r', lw=2),\n )\n ax_case2.text(z_hat_case2, pdf_at_mode_case2*1.05, 'Dirac $\\delta$', rotation = -90, fontsize = 15)\n ax_case2.text(z_hat_case2, 0, ('$\\hat{z}=%1.2f$' % z_hat_case2), fontsize = 15)\n ax_case2.set_xlabel(\"z\", fontsize=20)\n ax_case2.set_ylabel(\"probability density\", fontsize=20)\n ax_case2.legend(loc='center right', fontsize=12)\n ax_case2.set_title(\"Posterior using Gaussian prior (Lecture 7)\", fontsize=20)\n fig_Dirac.set_size_inches(15, 6) # scale figure to be wider (since there are 2 subplots)\n```\n\n\n```python\n# Static plot (I skip this cell in presentations, but use it when printing slides to 
PDF)\nPosteriors_and_Dirac_delta(z_hat_case1=mu_posterior_UniformPrior-2*sigma_posterior_UniformPrior,\n z_hat_case2=mu_posterior_GaussianPrior-2*sigma_posterior_GaussianPrior)\n```\n\n\n```python\n# Showing posteriors and Dirac delta with interactive plot. Code is hidden in presentation.\nfrom ipywidgets import interactive # so that we can interact with the plot\ninteractive_plot = interactive(Posteriors_and_Dirac_delta,\n z_hat_case1=(min(zrange_case1), max(zrange_case1), 6/10*sigma_posterior_UniformPrior),\n z_hat_case2=(min(zrange_case2), max(zrange_case2), 6/10*sigma_posterior_GaussianPrior) )\ninteractive_plot\n```\n\n\n interactive(children=(FloatSlider(value=0.2645198973023184, description='z_hat_case1', max=2.429583406763415, \u2026\n\n\nProbably you didn't hesitate to place the Dirac delta \"distribution\" at the mean or mode (they are the same for a Gaussian distribution)!\n\nWhat if the Posterior distribution is something else? For example, a Gamma distribution\n\n\n```python\n# This cell is hidden during the presentation\n\n# You may recall that we plotted the Gamma distribution in Lecture 1\n# -------------------------------------------------------------------------------\n# PARAMETERS YOU CAN CHANGE! PLAY A BIT WITH THIS ;)\nx = 75 # keeping the car velocity constant at 75 m/s as we have done before\nmu_z2 = 0.1; sigma_z2 = 0.01 # parameters of z_2 distribution\nN_samples = 3 # Let's say our data is composed of 3 samples (empirical observations)\nmu_prior_z = 3; sigma_prior_z = 2 # parameters of the Gaussian prior distribution (used only in case 2)\n# -------------------------------------------------------------------------------\n\nfrom scipy.stats import gamma # import from scipy.stats the Gamma distribution\nfrom scipy.optimize import minimize # import minimizer to calculate mode\n\na = 2.0 # this is the only input parameter needed for this distribution\n\n# Define the support of the distribution (its domain) by using the\n# inverse of the cdf (called ppf) to get the lowest z of the plot that\n# corresponds to Pr = 0.01 and the highest z of the plot that corresponds\n# to Pr = 0.99:\nzrange_min = gamma.ppf(0.01, a)\nzrange_max = gamma.ppf(0.99, a)\nzrange = np.linspace(zrange_min, zrange_max, 200) \n\nmu_posterior, var_posterior = gamma.stats(2.0, moments='mv') # This computes the mean and variance of the pdf\n\nposterior_pdf_values = gamma.pdf(zrange, a)\n\npdf_at_mean = gamma.pdf(mu_posterior, a)\n\n# Finding the maximum of a function can be done by minimizing\n# the negative gamma pdf. So, we create a function that outputs\n# the negative of the gamma pdf given the parameter a=2.0:\ndef neg_gamma_given_a(z): return -gamma.pdf(z,a)\n\n# Use the default optimizer of scipy (L-BFGS) to find the\n# maximum (by minimizing the negative gamma pdf). 
Note\n# that we need to give an initial guess for the value of z,\n# so we can use, for example, z=mu_z:\nmode_posterior = minimize(neg_gamma_given_a,mu_posterior).x # in general this is a vector, but for Gamma it's just a scalar\n\npdf_at_mode = gamma.pdf(mode_posterior, a) # in general this is a vector, but for Gamma is just a scalar\n\n# Plot the posteriors that we calculate above and the Dirac delta at different z_hat\ndef Gamma_Posterior_and_Dirac_delta(z_hat=0.5):\n fig_Gamma, ax = plt.subplots()\n ax.plot(zrange, posterior_pdf_values, label=r\"Posterior: $p(z|\\mathcal{D}_y) = \\Gamma(z|a)$\")\n\n ax.plot(mu_posterior, pdf_at_mean, 'r*', markersize=15, linewidth=2,\n label=r'Posterior mean: $\\mathbb{E}[z|\\mathcal{D}_y]$')\n\n ax.plot(mode_posterior, pdf_at_mode[0],'g^', markersize=15,\n linewidth=2,label=r'Posterior mode: $\\underset{z}{\\mathrm{argmax}}\\; p(z|\\mathcal{D}_y)$')\n ax.annotate(\"\",\n xy=(z_hat, 0), xycoords='data',\n xytext=(z_hat, 1.3*pdf_at_mode[0]), textcoords='data',\n arrowprops=dict(arrowstyle=\"<-\",\n connectionstyle=\"arc3\", color='r', lw=2),\n )\n ax.text(z_hat, pdf_at_mode[0]*1.05, 'Dirac $\\delta$', rotation = -90, fontsize = 15)\n ax.text(z_hat, 0, ('$\\hat{z}=%1.2f$' % z_hat) , fontsize = 15)\n ax.set_ylim(0, 1.3*pdf_at_mode[0])\n ax.set_xlabel(\"z\", fontsize=20)\n ax.set_ylabel(\"probability density\", fontsize=20)\n ax.legend(loc='upper right', fontsize=15)\n ax.set_title(\"Posterior being a Gamma pdf for $a=2.0$\", fontsize=20)\n```\n\n\n```python\n# Static plot (I skip this cell in presentations, but use it when printing slides to PDF)\nGamma_Posterior_and_Dirac_delta(z_hat=0.5)\n```\n\n\n```python\n# Showing posteriors and Dirac delta with interactive plot. Code is hidden in presentation.\ninteractive_plot = interactive(Gamma_Posterior_and_Dirac_delta,z_hat=(0.5, 6.5, 0.5 ) )\ninteractive_plot\n```\n\n\n interactive(children=(FloatSlider(value=0.5, description='z_hat', max=6.5, min=0.5, step=0.5), Output()), _dom\u2026\n\n\nMaybe now you are hesitating where to place it?\n\nBoth are used in practice! And there are other estimates...\n\nThese are called **point estimates**.\n\n* They reduce each unknown rv $z$ to a point $\\hat{z}$ (transforming the posterior distribution into the Dirac delta \"distribution\").\n\nOf course, as everything in life, some choices are better than others...\n\nCommon point estimates for determining $\\hat{z}$:\n* Maximum Likelihood Estimation (MLE):\n - You choose the mode (the maximum) of the posterior but you used a Uniform prior\n* Maximum A Posterior (MAP) estimate:\n - You choose the mode (the maximum) of the posterior (and your prior is **not** Uniform)\n* Posterior mean estimate (no accronym!):\n - You choose the mean of the posterior.\n* ... and so on\n\nCalculating the **Posterior mean estimate** is not new to us (see Lecture 1):\n\n$$\n\\mathbb{E}[z|\\mathcal{D}]= \\int_{\\mathcal{Z}}z p(z|\\mathcal{D}) dz\n$$\n\nBut I told you that today we are all about avoiding integrals!\n\nSo, let's focus on two very common point estimates: **MAP** and **MLE**.\n\nBoth are obtained by finding the **mode** of the posterior (i.e. 
maximum location in the posterior):\n\n$$\n\\require{color}\\hat{\\mathbf{z}} = \\underset{z}{\\mathrm{argmax}}\\; {\\color{green}p(z|\\mathcal{D})}\n$$\n\nIn other words, we need to solve an optimization problem.\n\nBut finding the mode of the posterior involves a few simple \"tricks\"...\n\n$$\\require{color}\n{\\color{green}p(z|y=\\mathcal{D}_y)} = \\frac{ {\\color{blue}p(y=\\mathcal{D}_y|z)}{\\color{red}p(z)} } {p(y=\\mathcal{D}_y)}\n$$\n\n#### Calculating the mode of posterior: Trick 1 (taking the $\\log$)\n\nWe can separate the three terms of the posterior if we work with its $\\log$:\n\n$$\n\\log{{\\color{green}p(z|y=\\mathcal{D}_y)}} = \\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} + \\log{{\\color{red}p(z)}} - \\log{p(y=\\mathcal{D}_y)}\n$$\n\n* Note: $\\log$ is a monotone function, so the $\\mathrm{argmax}$ of a function is the same as the $\\mathrm{argmax}$ of the $\\log$ of the function! Mathematically:\n\n$$\n\\require{color}\\hat{\\mathbf{z}} = \\underset{z}{\\mathrm{argmax}}\\; {\\color{green}p(z|\\mathcal{D})} = \\underset{z}{\\mathrm{argmax}}\\; \\log{{\\color{green}p(z|\\mathcal{D})}}\n$$\n\n#### Calculating the mode: Trick 2 (maximizing by minimizing the negative $\\log$)\n\n**Maximizing a function** is the same as **minimizing the negative of a function** (flipping the sign in the end).\n\nMathematically:\n\n$$\n\\require{color}\\hat{\\mathbf{z}} = \\underset{z}{\\mathrm{argmax}}\\; \\log{{\\color{green}p(z|\\mathcal{D})}} = \\underset{z}{\\mathrm{argmin}} \\left[-\\log{{\\color{green}p(z|\\mathcal{D})}}\\right]\n$$\n\n* In numerical optimization, this is very common practice!\n - Most optimization algorithms are designed to *minimize* functions.\n - In general, when we are optimizing (whether maximizing or minimizing) functions we call them \"**objective**\" functions. 
Yet, in particular:\n * when we are *minimizing* functions we call them \"**loss**\" or \"cost\" functions.\n * when we are *maximizing* functions we call them \"**reward**\" or \"score\" functions.\n\n#### Calculating the mode: focusing on each $\\log$ term\n\n$$\\require{color}\n\\begin{align}\n\\hat{\\mathbf{z}} &= \\underset{z}{\\mathrm{argmax}}\\left[\\log{{\\color{green}p(z|y=\\mathcal{D}_y)}}\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{green}p(z|y=\\mathcal{D}_y)}}\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} - \\log{{\\color{red}p(z)}} + \\log{p(y=\\mathcal{D}_y)}\\right]\\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} - \\log{{\\color{red}p(z)}}+\\text{constant}\\right]\\\\\n\\end{align}\n$$\n\n* The last line can be further simplified because a constant does not change the location of the minimum.\n\nSo, we get: $\\require{color}\n\\hat{\\mathbf{z}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} - \\log{{\\color{red}p(z)}}\\right]$\n\nAt this point, recall that the likelihood is usually calculated assuming the training examples (observations) are sampled independently from the observation distribution $p(y|z)$:\n\n$$\np(y=\\mathcal{D}_y | z) = \\prod_{i=1}^{N} p(y=y_i|z)\n$$\n\nwhich is known as the **i.i.d.** assumption (independent and identically distributed).\n\nThis means that the $\\log$ likelihood usually has a very convenient form:\n\n$$\n\\mathrm{LL}(z) = \\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} = \\sum_{i=1}^{N} \\log{p(y=y_i|z)}\n$$\n\nwhich decomposed into a sum of terms, one per example (observation).\n\n**In summary**, the mode of the posterior is calculated as:\n\n$\\require{color}\n\\hat{\\mathbf{z}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} - \\log{{\\color{red}p(z)}}\\right]$\n\nwhere the first term is called **negative log likelihood**:\n\n$\\mathrm{NLL}(z) = -\\mathrm{LL}(z) = -\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}}=-\\sum_{i=1}^{N} \\log{p(y=y_i|z)}$\n\n#### Maximum A Posterior (MAP) estimate\n\nIf we choose any prior distribution **except** the Uniform distribution, then the estimate is called MAP:\n\n$\\require{color}\n\\hat{\\mathbf{z}}_{\\text{map}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} - \\log{{\\color{red}p(z)}}\\right]$\n\nwhere $p(z)$ is **not** the Uniform distribution.\n\n#### Maximum Likelihood Estimation (MLE)\n\nIn the special case of choosing the prior to be a **Uniform distribution**, $p(z) \\propto 1$, then the mode of the posterior becomes the same as the mode of the (log) likelihood:\n\n$$\\require{color}\n\\hat{\\mathbf{z}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}} - \\log{{\\color{red}p(z)}}\\right] = \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}}\\right]\n$$\n\nand we say that we are using the Maximum Likelihood Estimation (MLE) for the unknown $z$:\n\n$$\n\\hat{\\mathbf{z}}_{\\text{mle}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\log{{\\color{blue}p(y=\\mathcal{D}_y|z)}}\\right] = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|z)}\\right]\n$$\n\nwhere, again, the argument of this expression is called the **negative log likelihood** $\\mathrm{NLL}(z)$.\n\n## Summary of Machine Learning without going fully Bayesian\n\n1. 
Approximate posterior by a **Dirac delta** \"distribution\" $\\delta(z-\\hat{z})$ where $\\hat{z}$ is a chosen **Point estimate**:\n * MLE: $\\hat{\\mathbf{z}}_{\\text{mle}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|z)}\\right]$\n * MAP: $\\hat{\\mathbf{z}}_{\\text{map}} = \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|z)}- \\log{p(z)}\\right] $\n * etc.\n\n\n2. Compute the PPD using the Point estimate $\\hat{z}$ and without calculating any integrals:\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = \\int p(y|z) \\delta(z-\\hat{z}) dz = p(y|z=\\hat{z})\n$$\n\n\n# HOMEWORK\n\n1. Using the MLE point estimate, predict the PPD for the car stopping distance problem (Lecture 6).\n\n\n2. Using the MAP estimate, predict the PPD for the car stopping distance problem considering the Gaussian prior of Lecture 7.\n\n\n3. Create a plot of the two PPD's and compare them with the PPD's obtained in Lecture 6 and Lecture 7.\n * Note: create these plots of the PPD's such that the abscissa (horizontal) axis is the $y$ rv and the ordinate (vertical axis) is the probability density.\n\n\"Teaser\": PPD obtained with the MLE **versus** PPD obtained in Lecture 6 (Uniform prior)\n\n\n```python\n# This cell is hidden during presentation. It's just to define a function to plot the governing model of\n# the car stopping distance problem. Defining a function that creates a plot allows to repeatedly run\n# this function on cells used in this notebook.\ndef car_fig_2rvs(ax):\n x = np.linspace(3, 83, 1000)\n mu_z1 = 1.5; sigma_z1 = 0.5; # parameters of the \"true\" p(z_1)\n mu_z2 = 0.1; sigma_z2 = 0.01; # parameters of the \"true\" p(z_2)\n mu_y = mu_z1*x + mu_z2*x**2 # From Homework of Lecture 4\n sigma_y = np.sqrt( (x*sigma_z1)**2 + (x**2*sigma_z2)**2 ) # From Homework of Lecture 4\n ax.set_xlabel(\"x (m/s)\", fontsize=20) # create x-axis label with font size 20\n ax.set_ylabel(\"y (m)\", fontsize=20) # create y-axis label with font size 20\n ax.set_title(\"Car stopping distance problem with two rv's\", fontsize=20); # create title with font size 20\n ax.plot(x, mu_y, 'k:', label=\"Governing model $\\mu_y$\")\n ax.fill_between(x, mu_y - 1.9600 * sigma_y,\n mu_y + 1.9600 * sigma_y,\n color='k', alpha=0.2,\n label='95% confidence interval ($\\mu_y \\pm 1.96\\sigma_y$)') # plot 95% credence interval\n ax.legend(fontsize=15)\n```\n\n\n```python\n# This cell is hidden during presentation\ndef MLE_versus_Bayesian_PPD_for_UniformPrior(N_samples):\n fig_car_PPD_UniformPrior, ax_car_PPD_UniformPrior = plt.subplots(1,2)\n x = 75\n mu_z2 = 0.1; sigma_z2 = 0.01\n # Observation of N_samples from the true data:\n empirical_y = samples_y_with_2rvs(N_samples, x) # Empirical measurements of N_samples at x=75\n # Empirical mean and std directly calculated from observations:\n empirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); \n #\n # --------------------------------------------------------------------------------------------\n # PPD calculated in Lecture 6 (Uniform prior)\n # Now define all the constants needed in the calculation of the PPD's obtained with each prior.\n w = x\n b = mu_z2*x**2\n sigma_yGIVENz = np.sqrt((x**2*sigma_z2)**2) # sigma_y|z (comes from the stochastic influence of the z_2 rv)\n sigma = np.sqrt(sigma_yGIVENz**2/(w**2*N_samples)) # std arising from the likelihood\n mu = empirical_mu_y/w - b/w # mean arising from the likelihood (product of Gaussian densities for the data)\n #\n # Now, calculate PPD when using a UNIFORM prior 
(Lecture 6):\n PPD_mu_y_UniformPrior = mu*w + b # same result if using: np.mean(empirical_y)\n PPD_sigma_y_UniformPrior = np.sqrt(w**2*sigma**2+sigma_yGIVENz**2) # same as: np.sqrt((x**2*sigma_z2)**2*(1/N_samples + 1))\n # --------------------------------------------------------------------------------------------\n \n \n # --------------------------------------------------------------------------------------------\n # MLE:\n z_mle = mu # in this case it also coincides with the mean of the likelihood.\n PPD_mu_y_mle = w*z_mle + b # same as empirical mean (also same as mean of Bayesian PPD for Uniform prior)\n PPD_sigma_y_mle = sigma_yGIVENz # NOT the same as Bayesian PPD for Uniform prior (only in the limit)\n # --------------------------------------------------------------------------------------------\n \n \n car_fig_2rvs(ax_car_PPD_UniformPrior[0]) # a function I created to include the background plot of the governing model\n for i in range(2): # create two plots (one is zooming in on the error bar)\n ax_car_PPD_UniformPrior[i].errorbar(x , empirical_mu_y,yerr=1.96*empirical_sigma_y, fmt='m*',\n markersize=30, elinewidth=9);\n ax_car_PPD_UniformPrior[i].errorbar(x , PPD_mu_y_UniformPrior,yerr=1.96*PPD_sigma_y_UniformPrior,\n color='#F39C12', fmt='*', markersize=15, elinewidth=6);\n ax_car_PPD_UniformPrior[i].errorbar(x , PPD_mu_y_mle,yerr=1.96*PPD_sigma_y_mle,\n fmt='w*', markersize=10, elinewidth=3);\n ax_car_PPD_UniformPrior[i].scatter(x*np.ones_like(empirical_y),empirical_y, s=150,facecolors='none',\n edgecolors='k', linewidths=2.0)\n print(\"Ground truth : mean[y] = 675 & std[y] = 67.6\")\n print(\"Empirical values (purple) : mean[y] = %.2f & std[y] = %.2f\" % (empirical_mu_y,empirical_sigma_y) )\n print(\"PPD with Uniform Prior (orange): mean[y] = %.2f & std[y] = %.2f\" % (PPD_mu_y_UniformPrior, PPD_sigma_y_UniformPrior))\n print(\"PPD from MLE (white) : mean[y] = %.2f & std[y] = %.2f\" % (PPD_mu_y_mle,PPD_sigma_y_mle))\n fig_car_PPD_UniformPrior.set_size_inches(15, 6) # scale figure to be wider (since there are 2 subplots)\n```\n\n\n```python\nMLE_versus_Bayesian_PPD_for_UniformPrior(N_samples=2)\n```\n\n\"Teaser\": PPD obtained with the MLE **versus** PPD obtained in Lecture 7 (Gaussian prior)\n\n\n```python\n# This cell is hidden during presentation\ndef MAP_versus_Bayesian_PPD_for_GaussianPrior(N_samples):\n fig_car_PPD_GaussianPrior, ax_car_PPD_GaussianPrior = plt.subplots(1,2)\n x = 75\n mu_z2 = 0.1; sigma_z2 = 0.01\n # Observation of N_samples from the true data:\n empirical_y = samples_y_with_2rvs(N_samples, x) # Empirical measurements of N_samples at x=75\n # Empirical mean and std directly calculated from observations:\n empirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); \n #\n # --------------------------------------------------------------------------------------------\n # PPD calculated in Lecture 7 (Gaussian prior)\n w = x\n b = mu_z2*x**2\n sigma_yGIVENz = np.sqrt((x**2*sigma_z2)**2) # sigma_y|z (comes from the stochastic influence of the z_2 rv)\n sigma = np.sqrt(sigma_yGIVENz**2/(w**2*N_samples)) # std arising from the likelihood\n mu = empirical_mu_y/w - b/w # mean arising from the likelihood (product of Gaussian densities for the data)\n #\n mu_prior_z = 3; sigma_prior_z = 2 # parameters of the Gaussian prior distribution \n sigma_posterior_z = np.sqrt( (sigma_prior_z**2*sigma**2)/(sigma_prior_z**2+sigma**2) )# std of posterior\n mu_posterior_z = sigma_posterior_z**2*( mu/(sigma**2) + mu_prior_z/(sigma_prior_z**2) ) # mean of 
posterior\n PPD_mu_y_GaussianPrior = mu_posterior_z*w + b\n PPD_sigma_y_GaussianPrior = np.sqrt(w**2*sigma_posterior_z**2+sigma_yGIVENz**2)\n # --------------------------------------------------------------------------------------------\n \n \n # --------------------------------------------------------------------------------------------\n # MAP:\n z_map = mu_posterior_z # in this case it also coincides with the mean of the posterior\n PPD_mu_y_map = w*z_map + b # same as empirical mean (also same as mean of Bayesian PPD for Uniform prior)\n PPD_sigma_y_map = sigma_yGIVENz # NOT the same as Bayesian PPD for Uniform prior (only in the limit)\n # --------------------------------------------------------------------------------------------\n\n #\n car_fig_2rvs(ax_car_PPD_GaussianPrior[0]) # a function I created to include the background plot of the governing model\n for i in range(2): # create two plots (one is zooming in on the error bar)\n ax_car_PPD_GaussianPrior[i].errorbar(x , empirical_mu_y,yerr=1.96*empirical_sigma_y, fmt='m*',\n markersize=30, elinewidth=9);\n ax_car_PPD_GaussianPrior[i].errorbar(x , PPD_mu_y_GaussianPrior,yerr=1.96*PPD_sigma_y_GaussianPrior,\n fmt='b*', markersize=15, elinewidth=6);\n ax_car_PPD_GaussianPrior[i].errorbar(x , PPD_mu_y_map,yerr=1.96*PPD_sigma_y_map,\n fmt='c*', markersize=10, elinewidth=3);\n ax_car_PPD_GaussianPrior[i].scatter(x*np.ones_like(empirical_y),empirical_y, s=150,facecolors='none',\n edgecolors='k', linewidths=2.0)\n print(\"Ground truth : mean[y] = 675 & std[y] = 67.6\")\n print(\"Empirical values (purple) : mean[y] = %.2f & std[y] = %.2f\" % (empirical_mu_y,empirical_sigma_y) )\n print(\"PPD with Gaussian Prior (blue): mean[y] = %.2f & std[y] = %.2f\" % (PPD_mu_y_GaussianPrior,PPD_sigma_y_GaussianPrior))\n print(\"PPD from MAP (cyan) : mean[y] = %.2f & std[y] = %.2f\" % (PPD_mu_y_map, PPD_sigma_y_map))\n fig_car_PPD_GaussianPrior.set_size_inches(15, 6) # scale figure to be wider (since there are 2 subplots)\n```\n\n\n```python\nMAP_versus_Bayesian_PPD_for_GaussianPrior(N_samples=3)\n```\n\n## Final reflection: what strategy should we choose?\n\nApproximating the PPD using a Point estimate is usually much simpler and faster than marginalizing unknown rv's such as $z$ (integrals!).\n\nThis is true analytically as well as numerically.\n\nThis explains why many ML practitioners choose Point estimates like MLE or MAP.\n\nBut in general the predictions of the PPD have different robustness:\n\n* PPD calculated from Posterior distribution > PPD from Point estimates\n - We can also say that within the Point estimates: Posterior mean estimate > MAP > MLE\n\nWe will see evidence in favor of this in the remaining of the course.\n\n## Final reflection: Bayesian versus non-Bayesian perspective on ML\n\nWe can do one last simplification (but it can **mislead** us into believing that ML is not probabilistic!)\n\nWhen the PPD is approximated by the observation distribution for a Point estimate $\\hat{z}$,\n\n$$\\require{color}\n{\\color{orange}p(y|\\mathcal{D}_y)} = p(y|z=\\hat{z})\n$$\n\nwe can decide to focus on only making a prediction for the **mean** of the PPD and even forget that it is a distribution (we forget uncertainties!).\n\nThis is very common in ML literature! But, I think it's advantageous not to think about it that way...\n\n\n## Solution to the Homework of this Lecture\n\n1. 
The PPD using the MLE for problem in Lecture 6 is:\n\n$$\np(y\\mid y=\\mathcal{D}_y) = p(y \\mid z=\\hat{z}_{\\text{mle}}) = \\mathcal{N}\\left(y| \\mu_{y|z}=w \\hat{z}_{\\text{mle}}+b,\\, \\sigma_{y|z}^2 \\right)\n$$\n\nwhere the MLE of $z$ is obtained as follows:\n\n$$\n\\begin{align}\n\\hat{\\mathbf{z}}_{\\text{mle}} &= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|z)}\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{\\left( \\frac{1}{|w|}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}} \\exp\\left\\{ -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2\\right\\}\\right)}\\right]\\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\left(\\log{\\left( \\frac{1}{|w|}\\frac{1}{\\sqrt{2\\pi \\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}}\\right)} -\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2\\right)\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\sum_{i=1}^{N} \\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2\\right] \\\\\n\\end{align}\n$$\n\nTo find the minimum location we need to take the derivative wrt $z$ and equal it to zero:\n\n$$\n\\frac{1}{\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\sum_{i=1}^{N} \\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right] = 0\n$$\n\n$$\nN z - \\sum_{i=1}^{N} \\frac{y_i-b}{w}=0\n$$\n\n$$\nz = \\frac{1}{N} \\sum_{i=1}^{N} \\frac{y_i-b}{w}\n$$\n\nSo, we conclude that:\n\n$$\n\\hat{z}_{\\text{mle}} = \\frac{1}{N} \\sum_{i=1}^{N} \\frac{y_i-b}{w} = \\mu\n$$\n\n**This result should be very familiar to you**!\n\nHere's why: Remember that in Lecture 5 (and 6) we already calculated the likelihood to be ${\\color{blue}p(y=\\mathcal{D}_y | z)} = \\frac{1}{|w|^N} \\cdot C \\cdot \\frac{1}{\\sqrt{2\\pi \\sigma^2}} \\exp\\left[ -\\frac{1}{2\\sigma^2}(z-\\mu)^2\\right]$, so it is obvious that this is maximized at the mean value of $z=\\mu$ because the mode of a Gaussian is the same as the mean. This is why in the code above, I already knew the result without doing any calculation \ud83d\ude09\n\n2. 
The PPD using the MAP for the Gaussian prior of Lecture 7 is:\n\n$$\np(y\\mid y=\\mathcal{D}_y) = p(y \\mid z=\\hat{z}_{\\text{map}}) = \\mathcal{N}\\left(y| \\mu_{y|z}=w \\hat{z}_{\\text{map}}+b,\\, \\sigma_{y|z}^2 \\right)\n$$\n\nwhere the MAP of $z$ is obtained as follows:\n\n$$\n\\begin{align}\n\\hat{\\mathbf{z}}_{\\text{map}} &= \\underset{z}{\\mathrm{argmin}}\\left[-\\sum_{i=1}^{N}\\log{ p(y=y_i|z)}-\\log{p(z)}\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\sum_{i=1}^{N} \\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2 - \\log{\\frac{1}{\\sqrt{2\\pi \\overset{\\scriptscriptstyle <}{\\sigma}_z^2}}\\exp\\left\\{-\\frac{1}{2\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\left(z-\\overset{\\scriptscriptstyle <}{\\mu}_z\\right)^2\\right\\}}\\right] \\\\\n&= \\underset{z}{\\mathrm{argmin}}\\left[\\frac{1}{2\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\sum_{i=1}^{N} \\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right]^2+\\frac{1}{2\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\left(z-\\overset{\\scriptscriptstyle <}{\\mu}_z\\right)^2\\right] \\\\\n\\end{align}\n$$\n\nSimilarly, to find the minimum location we need to take the derivative wrt $z$ and equal it to zero:\n\n$$\n\\frac{1}{\\left(\\frac{\\sigma_{y|z}}{w}\\right)^2}\\sum_{i=1}^{N} \\left[z-\\left(\\frac{y_i-b}{w}\\right)\\right] + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\left(z-\\overset{\\scriptscriptstyle <}{\\mu}_z\\right)= 0\n$$\n\n$$\n\\frac{N w^2}{\\sigma_{y|z}^2} z - \\frac{w^2}{\\sigma_{y|z}^2} \\sum_{i=1}^{N} \\frac{y_i-b}{w} + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2} z - \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\overset{\\scriptscriptstyle <}{\\mu}_z =0\n$$\n\nNoting that $\\sum_{i=1}^{N} \\frac{y_i-b}{w} = N\\mu$ and that $\\frac{N w^2}{\\sigma_{y|z}^2} = \\frac{1}{\\sigma^2}$,\n\n$$\n\\frac{1}{\\sigma^2} z - \\frac{1}{\\sigma^2}\\mu + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2} z - \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\overset{\\scriptscriptstyle <}{\\mu}_z =0\n$$\n\n$$\nz = \\frac{1}{\\frac{1}{\\sigma^2}+\\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}} \\left( \\frac{\\mu}{\\sigma^2}+\\frac{\\overset{\\scriptscriptstyle <}{\\mu}_z}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\right)\n$$\n\nSo, we conclude that:\n\n$$\n\\hat{z}_{\\text{map}} = \\frac{1}{\\frac{1}{\\sigma^2}+\\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}} \\left( \\frac{\\mu}{\\sigma^2}+\\frac{\\overset{\\scriptscriptstyle <}{\\mu}_z}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\right)\n$$\n\nOnce again, **this result should be very familiar to you**!\n\nHere's why: Remember that in Lecture 7 we already calculated the posterior to be ${\\color{green}p(z|y=\\mathcal{D}_y)} = \\mathcal{N}\\left(z| \\overset{\\scriptscriptstyle >}{\\mu}_z, \\overset{\\scriptscriptstyle >}{\\sigma}_z^2\\right) = \\mathcal{N}\\left(z\\left|\\frac{1}{\\frac{1}{\\sigma^2} + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}} \\left( \\frac{\\mu}{\\sigma^2} + \\frac{\\overset{\\scriptscriptstyle <}{\\mu}_z}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}\\right), \\frac{1}{\\frac{1}{\\sigma^2} + \\frac{1}{\\overset{\\scriptscriptstyle <}{\\sigma}_z^2}}\\right.\\right)$, so it is obvious that this is maximized at the mean value of $z=\\overset{\\scriptscriptstyle >}{\\mu}_z$ because the mode of a Gaussian is the same as the mean. 
Again, this is why in the code above, I already knew the result without doing any calculation \ud83d\ude09\n\n3. Finally, we can plot the PPD's for question 1 and 2 and compare them with what we obtained in Lecture 7.\n\nWe already have computed the parameters in the last 2 plots of this lecture. This question is just asking to show these distributions in a different way.\n\n\n```python\nfig_HW, ax_HW = plt.subplots()\n\nN_samples=3\nx = 75\nmu_z2 = 0.1; sigma_z2 = 0.01\n#\n# Observation of N_samples from the true data:\nempirical_y = samples_y_with_2rvs(N_samples, x) # Empirical measurements of N_samples at x=75\n# Empirical mean and std directly calculated from observations:\nempirical_mu_y = np.mean(empirical_y); empirical_sigma_y = np.std(empirical_y); \n#\n# --------------------------------------------------------------------------------------------\n# PPD calculated in Lecture 6 (Uniform prior)\n# Now define all the constants needed in the calculation of the PPD's obtained with each prior.\nw = x\nb = mu_z2*x**2\nsigma_yGIVENz = np.sqrt((x**2*sigma_z2)**2) # sigma_y|z (comes from the stochastic influence of the z_2 rv)\nsigma = np.sqrt(sigma_yGIVENz**2/(w**2*N_samples)) # std arising from the likelihood\nmu = empirical_mu_y/w - b/w # mean arising from the likelihood (product of Gaussian densities for the data)\n#\n# Now, calculate PPD when using a UNIFORM prior (Lecture 6):\nPPD_mu_y_UniformPrior = mu*w + b # same result if using: np.mean(empirical_y)\nPPD_sigma_y_UniformPrior = np.sqrt(w**2*sigma**2+sigma_yGIVENz**2) # same as: np.sqrt((x**2*sigma_z2)**2*(1/N_samples + 1))\n# --------------------------------------------------------------------------------------------\n\n\n# --------------------------------------------------------------------------------------------\n# MLE:\nz_mle = mu # in this case it also coincides with the mean of the likelihood.\nPPD_mu_y_mle = w*z_mle + b # same as empirical mean (also same as mean of Bayesian PPD for Uniform prior)\nPPD_sigma_y_mle = sigma_yGIVENz # NOT the same as Bayesian PPD for Uniform prior (only in the limit)\n# --------------------------------------------------------------------------------------------\n\n# --------------------------------------------------------------------------------------------\n# PPD calculated in Lecture 7 (Gaussian prior)\nw = x\nb = mu_z2*x**2\nsigma_yGIVENz = np.sqrt((x**2*sigma_z2)**2) # sigma_y|z (comes from the stochastic influence of the z_2 rv)\nsigma = np.sqrt(sigma_yGIVENz**2/(w**2*N_samples)) # std arising from the likelihood\nmu = empirical_mu_y/w - b/w # mean arising from the likelihood (product of Gaussian densities for the data)\n#\nmu_prior_z = 3; sigma_prior_z = 2 # parameters of the Gaussian prior distribution \nsigma_posterior_z = np.sqrt( (sigma_prior_z**2*sigma**2)/(sigma_prior_z**2+sigma**2) )# std of posterior\nmu_posterior_z = sigma_posterior_z**2*( mu/(sigma**2) + mu_prior_z/(sigma_prior_z**2) ) # mean of posterior\nPPD_mu_y_GaussianPrior = mu_posterior_z*w + b\nPPD_sigma_y_GaussianPrior = np.sqrt(w**2*sigma_posterior_z**2+sigma_yGIVENz**2)\n# --------------------------------------------------------------------------------------------\n\n\n# --------------------------------------------------------------------------------------------\n# MAP:\nz_map = mu_posterior_z # in this case it also coincides with the mean of the posterior\nPPD_mu_y_map = w*z_map + b # same as empirical mean (also same as mean of Bayesian PPD for Uniform prior)\nPPD_sigma_y_map = sigma_yGIVENz # NOT the same as 
Bayesian PPD for Uniform prior (only in the limit)\n# --------------------------------------------------------------------------------------------\n\n\n# --------------------------------------------------------------------------------------------\n# I will also include the real distribution p(y)\n# Note: we found this in Homework of Lecture 4 (solution shown in Lecture 5)\nmu_z1 = 1.5\nsigma_z1 = 0.5\nreal_mu_y = x*mu_z1 + mu_z2*x**2\nreal_sigma_y = np.sqrt((x*sigma_z1)**2 + (x**2*sigma_z2)**2)\n# --------------------------------------------------------------------------------------------\n\n# We can establish the limits of the plot based on the real distribution:\nymin = real_mu_y - 3*real_sigma_y\nymax = real_mu_y + 3*real_sigma_y\nyrange = np.linspace(ymin, ymax, 200) # to plot\n\nax_HW.plot(yrange, norm.pdf(yrange, PPD_mu_y_UniformPrior, PPD_sigma_y_UniformPrior),\n           '--', linewidth = 3, color='#F39C12', label='Bayesian PPD for Uniform prior')\nax_HW.plot(yrange, norm.pdf(yrange, PPD_mu_y_mle, PPD_sigma_y_mle),\n           '-', linewidth = 3, color='#F39C12', label='PPD using MLE')\nax_HW.plot(yrange, norm.pdf(yrange, PPD_mu_y_GaussianPrior, PPD_sigma_y_GaussianPrior),\n           'b--', linewidth = 3, label='Bayesian PPD for Gaussian prior')\nax_HW.plot(yrange, norm.pdf(yrange, PPD_mu_y_map, PPD_sigma_y_map),\n           'b-', linewidth = 3, label='PPD using MAP')\nax_HW.plot(yrange, norm.pdf(yrange, real_mu_y, real_sigma_y),\n           'k-', linewidth = 3, label='Real distribution $p(y)$')\nax_HW.set_xlabel(\"y\", fontsize=20)\nax_HW.set_ylabel(\"probability density\", fontsize=20)\nax_HW.legend(fontsize=15, loc='upper left');\nfig_HW.set_size_inches(15, 6)\n```\n\n### See you next class\n\nHave fun **but do your HOMEWORK**!\n
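As a short appendix to this lecture: the homework solutions above derive $\hat{z}_{\text{mle}}$ and $\hat{z}_{\text{map}}$ in closed form, but Tricks 1 and 2 also let us obtain them numerically by minimizing the negative log likelihood (plus the negative log prior for MAP). The sketch below does exactly that for the linear-Gaussian car stopping distance model; the three observation values and the optimizer starting guess are illustrative assumptions, not numbers from the lectures.

```python
# A quick numerical cross-check of the closed-form MLE and MAP estimates derived above.
# The three observations and the starting guess are illustrative assumptions; substitute the
# empirical_y, w, b, sigma_yGIVENz, and prior values computed earlier for an exact comparison.
import numpy as np
from scipy.optimize import minimize

w = 75.0                          # same linear observation model y = w*z + b used above
b = 0.1*75.0**2
sigma_yGIVENz = 75.0**2*0.01      # sigma_y|z coming from the z_2 rv
data_y = np.array([650.0, 700.0, 660.0])   # hypothetical measurements at x = 75 m/s
mu_prior_z, sigma_prior_z = 3.0, 2.0       # Gaussian prior parameters (Lecture 7)

def neg_log_likelihood(z):
    # -sum_i log p(y_i|z) for the Gaussian observation model, dropping additive constants
    return np.sum((data_y - (w*z + b))**2)/(2.0*sigma_yGIVENz**2)

def neg_log_prior(z):
    # -log p(z) for the Gaussian prior, dropping additive constants
    return (z - mu_prior_z)**2/(2.0*sigma_prior_z**2)

z_mle = minimize(lambda zv: neg_log_likelihood(zv[0]), x0=[1.0]).x[0]
z_map = minimize(lambda zv: neg_log_likelihood(zv[0]) + neg_log_prior(zv[0]), x0=[1.0]).x[0]

# Closed forms from the homework solution:
mu = np.mean((data_y - b)/w)
sigma = np.sqrt(sigma_yGIVENz**2/(w**2*len(data_y)))
z_map_closed = (mu/sigma**2 + mu_prior_z/sigma_prior_z**2)/(1.0/sigma**2 + 1.0/sigma_prior_z**2)

print("MLE : numerical =", z_mle, " closed form =", mu)
print("MAP : numerical =", z_map, " closed form =", z_map_closed)
```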
NO", "lm_q1_score": 0.22000709974589314, "lm_q2_score": 0.3849121444839335, "lm_q1q2_score": 0.0846834045648824}} {"text": "```python\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nplt.rcParams[\"figure.figsize\"] = (12, 9)\nplt.rcParams[\"font.size\"] = 18\n```\n\n## Binary Nuclear Reactions\n\n### Learning Objectives:\n\n- Connect concepts in particle collisions and decay to binary reactions\n- Categorize nuclear reactions using standard nomenclature\n- Apply conservation of nucleons to binary nuclear reactions\n- Formulate Q value equations for binary nuclear reactions\n- Apply conservation of energy and linear momentum to scattering\n- Apply coulombic threshold\n- Apply kinematic threshold\n- Determine when coulombic and kinematic thresholds apply or do not\n\n## Recall from Weeks 3 & 4\n\nTo acheive these objectives, we need to recall 3 major themes from weeks three and four. \n\n### 1: Compare Exothermic and Endothermic reactions\n\n- In **_exothermic_** or **_exoergic_** reactions, energy is **emitted** ($Q>0$)\n- In **_endothermic_** or **_endoergic_** reactions, energy is **absorbed** ($Q<0$)\n\n\n\n\n
(credit: BBC)
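A trivial sketch to keep that sign convention straight (the helper name and example Q values below are ours, not from the course materials; the $-1.85$ MeV value anticipates the $^{9}$Be(p,n)$^{9}$B example worked later in this notebook):

```python
def reaction_type(q_mev):
    """Classify a binary reaction by the sign of its Q value (in MeV)."""
    return "exothermic (exoergic)" if q_mev > 0 else "endothermic (endoergic)"

print(reaction_type(17.6))    # e.g., a strongly exoergic fusion reaction
print(reaction_type(-1.85))   # e.g., the 9Be(p,n)9B example worked later in this notebook
```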
\n\n### 2: Relate energy and mass $E=mc^2$\n\nWhen the masses of reactions change, this is tied to a change in energy from whence we learn the Q value.\nThis change in mass is equivalent to a change in energy because **$E=mc^2$**\n\n\\begin{align}\nA + B + \\cdots &\\rightarrow C + D + \\cdots\\\\\n\\mbox{(reactants)} &\\rightarrow \\mbox{(products)}\\\\\n\\implies \\Delta M &= (\\mbox{reactants}) - (\\mbox{products})\\\\\n &= (M_A + M_B + \\cdots) - (M_C + M_D + \\cdots)\\\\\n\\implies \\Delta E &= \\left[(M_A + M_B + \\cdots) - (M_C + M_D + \\cdots)\\right]c^2\\\\\n\\end{align}\n\n\n### 3: Apply conservation of energy and momentum to scattering collisions\n\nConservation of total energy and linear momentum can inform Compton scattering reactions. X-rays scattered from electrons had a change in wavelength $\\Delta\\lambda = \\lambda' - \\lambda$ proportional to $(1-\\cos{\\theta_s})$\n\n\n\nWe used the law of cosines:\n\n\\begin{align}\np_e^2 &= p_\\lambda^2 + p_{\\lambda'}^2 - 2p_\\lambda p_{\\lambda'}\\cos{\\theta_s}\n\\end{align}\n\n\nAnd we also used conservation of energy:\n\\begin{align}\np_\\lambda c+m_ec^2 &= p_{\\lambda'}c + mc^2\\\\\n\\mbox{where }&\\\\\nm_e&=\\mbox{rest mass of the electron}\\\\\nm &= \\mbox{relativistic electron mass after scattering}\n\\end{align}\n\nCombining these with our understanding of photon energy ($E=h\\nu=pc$) gives:\n\n\\begin{align}\n\\lambda' - \\lambda &= \\frac{h}{m_ec}(1-\\cos{\\theta_s})\\\\\n\\implies \\frac{1}{E'} - \\frac{1}{E} &= \\frac{1}{m_ec^2}(1-\\cos{\\theta_s})\\\\\n\\implies E' &= \\left[\\frac{1}{E} + \\frac{1}{m_ec^2}(1-\\cos{\\theta_s})\\right]^{-1}\\\\\n\\end{align}\n\n## More Types of Reactions\n\nPreviously we were interested in fundamental particles striking one another (e.g. the electron and proton in Compton scattering) or nuclei emitting such particles (e.g. $\\beta^\\pm$ decay).\n\n**Today:** We are interested in myriad additional reactants and/or products. In particular, we're interested in:\n\n- neutron absorption and production reactions \n- _binary, two-product nuclear reactions_ in which two products emerge with new energies after the collision.\n\n\n```python\n# The below IFrame displays Page 162 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/162\", width=1000, height=500)\n\n```\n\n\n\n\n\n\n\n\n\n\n## Reaction Nomenclature\n\n**Transfer Reactions:** Nucleons (1 or 2) are transferred between the projectile and product.\n\n**Scattering reactions:** The projectile and product emerge from a collision with the same identities as when they started, exchanging only kinetic energy. \n\n**Knockout reactions:** The projectile directly interacts with the target nucleus and is re-emitted **along with** nucleons from the target nucleus.\n\n**capture reactions:** The projectile is absorbed, typically exciting the nucleus. The excited nucleus may emit that energy decaying via photon emission.\n\n**nuclear photoeffect:** A photon projectile liberates a nucleon from the target nucleus.\n\n### Think Pair Share : categorize these reactions\n\nOne example of each of the above appears below. 
Use the definitions to categorize them.\n\n- $(n, n)$\n- $(n, \\gamma)$\n- $(n, 2n)$\n- $(\\gamma, n)$\n- $(\\alpha, n)$\n\n\n## Binary, two-product nuclear reactions\n\n**Two initial nuclei collide to form two product nuclei.**\n\n\\begin{align}\n^{A_1}_{Z_1}X_1 + ^{A_2}_{Z_2}X_2 \\longrightarrow ^{A_3}_{Z_3}X_3 + ^{A_4}_{Z_4}X_4\n\\end{align}\n\n#### Applying Conservation of Neutrons and Protons\n\nThe total number of nucleons is always conserved.\nIf the `______________` force is not involved, we can also apply this conservation separately.\n\n\nIn most binary, two-product nuclear reactions, this is the case, so the number of protons and neutrons are conserved. Thus:\n\n\\begin{align}\nZ_1 + Z_2 = Z_3 + Z_4\\\\\nA_1 + A_2 = A_3 + A_4\n\\end{align}\n\nApply this to the following:\n\n\\begin{align}\n^{3}_{1}H + ^{16}_{8}O \\longrightarrow \\left(X\\right)^* \\longrightarrow ^{16}_{7}N + ^{A_4}_{Z_4}X_4\n\\end{align}\n\n### Think Pair Share:\n\nWhat are :\n\n- $A_4$ \n- $Z_4$\n- $X_4$?\n\n- Bonus: What is $\\left(X\\right)^*$?\n\n\n### An Aside on Nuclear Energy in the Media\n\n\n
Prof. Huff's first job was at the LANSCE ICE HOUSE, 2003 & 2004
\n\n\n
This image linked above is a screenshot from Spiderman 2, in 2004.
\nIt is owned and copyright 2004 by Marvel comics.
\nIt shows Dr. Octopus and the fuel for his fusion reactor.
\n\n#### Applying conservation of mass and energy.\n\nThe Q-value calculation is the same as it has been before. \nThe Q value represents the `________` in kinetic energy and, equivalently, a `________` in the rest masses.\n\n\\begin{align}\nQ &= E_y + E_Y \u2212 E_x \u2212 E_X \\\\\n &= (m_x + m_X \u2212 m_y \u2212 m_Y )c^2\\\\\n &= \\left(m\\left(^{A_1}_{Z_1}X_1\\right) + m\\left(^{A_2}_{Z_2}X_2\\right) - m\\left(^{A_3}_{Z_3}X_3\\right) - m\\left(^{A_4}_{Z_4}X_4\\right)\\right)c^2\\\\\n\\end{align}\n\nIf proton numbers are conserved (true for everything but electron capture or reactions involving the weak force.), we can use the approximation that $m(X) = M(X)$.\n\n\\begin{align}\nQ &= E_y + E_Y \u2212 E_x \u2212 E_X \\\\\n &= (m_x + m_X \u2212 m_y \u2212 m_Y )c^2\\\\\n &= (M_x + M_X \u2212 M_y \u2212 M_Y )c^2\\\\\n &= \\left(M\\left(^{A_1}_{Z_1}X_1\\right) + M\\left(^{A_2}_{Z_2}X_2\\right) - M\\left(^{A_3}_{Z_3}X_3\\right) - M\\left(^{A_4}_{Z_4}X_4\\right)\\right)c^2\\\\\n\\end{align}\n\n\n```python\ndef q(m_reactants, m_products):\n \"\"\"Returns Q\n \n Parameters\n ----------\n m_reactants: list (of doubles)\n the masses of the reactant atoms [amu]\n m_products : list (of doubles)\n the masses of the product atoms [amu]\n \"\"\"\n amu_to_mev = 931.5 # MeV/amu conversion\n m_difference = sum(m_reactants) - sum(m_products)\n return m_difference*amu_to_mev\n\n\n# Look up the masses:\nh_3_mass = 3.0160492675\no_16_mass = 15.9949146221\nhe_3_mass = 3.0160293097\nn_16_mass = 16.0061014\n\nm_react = [h_3_mass, o_16_mass]\nm_prods = [he_3_mass, n_16_mass]\n\nprint(\"Q: \", q(m_react, m_prods))\n```\n\n Q: -10.401892923148063\n\n\n#### Applying conservation of linear momentum\n\nLet's get back to collision kinematics. \n\nFirst, we'll assume the target nucleus ($X_2$) is initially at rest.\n\n##### Question: How do we handle the case when the incident and target nuclei both have initial velocity?\n\n##### Harder Question: How do we handle the case when the incident and target nuclei are accelerating with respect to each other?\n\n\n\nIf $E_i$ is the kinetic energy of the $i^{th}$ nucleus.\n\n\\begin{align}\nEx = Ey + EY \u2212 Q.\n\\end{align}\n\n\n \n\n\n```python\n# The below IFrame displays Page 162 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/162\", width=1000, height=500)\n```\n\n\n\n\n\n\n\n\n\n\n# Kinematic Threshold\n\nRelying on a combination of kinetic energies $E_i$ and corresponding linear momenta:\n\n\\begin{align}\np_i = \\sqrt{2m_iE_i}\n\\end{align}\n\nWe can determine that some reactions aren't possible without a certain minimum quantity of kinetic energy. \n\nThe solution to $E_3$ can become nonphysical if :\n\n- $\\cos{\\theta_3} < 0$\n- $Q < 0$\n- $m_4 - m_1 < 0$\n\n## For Exoergic Reactions ($Q>0$)\n\nFor $Q>0$ and $m_{4} > m_{1}$, $E_{3} = (a + \\sqrt{a^2+b^2})^2$ is the only real, positive, meaningful solution. \n\nThe kinetic energy of $E_3$ is, at minimum, the energy arrived at when $p_1 = 0$. Thus:\n\n\\begin{align}\nE_3 \\longrightarrow& \\frac{m_4}{m_3 + m_4}Q\\\\\n&\\mbox{ when } Q>0, p_1=0\n\\end{align}\n\nSo, no exoergic reactions are restricted by kinetics, as $Q = E_3 + E_4$, for the minimum linear momentum case, which is real and positive. 
\n\n## For Endoergic Reactions ($Q<0$)\nSome $Q<0$ reactions aren't possible without a certain minimum quantity of kinetic energy. \n\n\nFor $Q<0$ and $m_{4} > m_{1}$, some values of $E_{1}$ are too small to carry forward a real, positive solution. That is, the incident projectile must supply a minimum amount of kinetic energy before the reaction can occur. Without this energy, the solution for $E_3$ results in physically meaningless values. This minimum energy can be found from eqn 6.11 in your book and is :\n\n\\begin{align}\nE_1^{th,k} = -\\frac{m_3 + m_4}{m_3 + m_4 - m_1}Q.\n\\end{align}\n\nOne can often simplify this (assuming $m_i >> Q/c^2$ and $m_3 + m_4 - m_1 \\simeq m_2$) :\n\n\n\\begin{align}\nE_1^{th,k} \\simeq - \\left( 1 + \\frac{m_1}{m_2} \\right)Q.\n\\end{align}\n\n\n```python\ndef kinematic_threshold(m_1, m_3, m_4, Q):\n \"\"\"Returns the kinematic threshold energy [MeV]\n \n Parameters\n ----------\n m_1: double\n mass of incident projectile\n m_3: double\n mass of first product \n m_3: double\n mass of second product \n Q : double\n Q-value for the reaction [MeV]\n \"\"\"\n num = -(m_3 + m_4)*Q\n denom = m_3 + m_4 - m_1\n return num/denom\n\ndef kinematic_threshold_simple(m_1, m_2, Q):\n \"\"\"Returns the coulombic threshold energy [MeV]\n \n Parameters\n ----------\n m_1: double\n mass of incident projectile\n m_2: double\n mass of target \n Q : double\n Q-value for the reaction [MeV]\n \"\"\"\n to_return = -(1 + m_1/m_2)*Q\n return to_return\n```\n\n# Coulombic Threshold\n\nCoulomb forces repel a projectile if it is:\n\n- a positively charged nucleus\n- a proton\n\nThe force between the projectile (particle 1) and the target nucleus (particle 2) is :\n\n\\begin{align}\n&F_C = \\frac{Z_1Z_2e^2}{4\\pi\\epsilon_0r^2}\\\\\n\\mbox{where}&&\\\\\n&\\epsilon_0 = \\mbox{the permittivity of free space.}\n\\end{align}\n\n### Think pair share:\nWhat are the other terms in the above equation:\n\n- $Z_1$ ?\n- $Z_2$ ?\n- $e$ ?\n- $r$ ?\n\n\nBy evaluating the work function for approach to the nucleus with a coulomb barrier, we can establish that the coulombic threshold energy (in MeV) is :\n\n\\begin{align}\nE_1^{th,C} \\simeq 1.20 \\frac{Z_1Z_2}{A_1^{1/3}+A_2^{1/3}}\n\\end{align}\n\n\n```python\ndef colombic_threshold(z_1, z_2, a_1, a_2):\n \"\"\"Returns the coulombic threshold energy [MeV]\n \n Parameters\n ----------\n z_1: int\n proton number of incident projectile\n z_2: int\n proton number of target \n a_1 : int or double\n mass number of the incident projectile [amu]\n a_2 : int or double\n mass number of the target [amu]\n \"\"\"\n num = 1.20*z_1*z_2\n denom = pow(a_1, 1/3) + pow(a_2, 1/3)\n return num/denom\n```\n\n### Think Pair Share \n\nWhich thresholds apply to the below situations:\n\n- A chargeless incident particle, reaction $Q>0$\n- A chargeless incident particle, reaction $Q<0$\n- A positively charged incident particle, reaction $Q>0$\n- A positively charged incident particle, reaction $Q<0$\n\n## Overall threshold\n\nFor the case where both thresholds apply, the minimum energy for the reaction to occur is the highest of the two thresholds. \n\n\\begin{align}\n\\min{\\left(E_1^{th}\\right)}\t= \\max{\\left(E^{th,C}_1,E_1^{th,k}\\right)}.\n\\end{align}\n\n## Example\n\nTake the (p, n) reaction from $^{9}Be\\longrightarrow^{9}B$. 
We will need to calculate:\n\n- The Q value\n- The kinematic threshold (if it applies)\n- The coulombic threshold (if it applies)\n- Determine which one is higher\n\n\n```python\n# Q value\n# Look up the masses:\nbe_9_mass = 9.0121821\nb_9_mass = 9.0133288\nn_mass = 1.0086649158849\np_mass = 1.007825032 # hydrogen nucleus!\n\nm_react = [be_9_mass, p_mass]\nm_prods = [b_9_mass, n_mass]\n\nq_example = q(m_react, m_prods)\nprint(\"Q: \", q_example)\n```\n\n Q: -1.8505028887843764\n\n\n\n```python\n# Kinematic Threshold\n# Which particles were which again?\nm_1 = p_mass\nm_2 = be_9_mass\nm_3 = n_mass\nm_4 = b_9_mass\n\n# Calculate using both regular and simpler methods\nE_k_th = kinematic_threshold(m_1, m_3, m_4, q_example)\nE_k_th_simple = kinematic_threshold_simple(m_1, m_2, q_example)\nprint(\"E_k_th: \", E_k_th)\nprint(\"E_k_th (simplified): \", E_k_th_simple)\n```\n\n E_k_th: 2.0573975230549033\n E_k_th (simplified): 2.0574431294953586\n\n\n\n```python\n# Coulombic Threshold\n# Need some charge info and mass numbers\nz_1 = 1 # proton\nz_2 = 4 # Be\na_1 = 1 # proton\na_2 = 9 # Be\n\nE_c_th = colombic_threshold(z_1, z_2, a_1, a_2)\n\nprint(\"E_c_th: \", E_c_th)\n```\n\n E_c_th: 1.558399146177754\n\n\n\n```python\n## Which one is higher?\n\nprint(\"Total threshold: \", max(E_c_th, E_k_th))\n```\n\n Total threshold: 2.0573975230549033\n\n\n# Applications: Neutron Detection\nNeutron's don't tend to directly ionize matter as they pass through. However, they can instigate nuclear reactions which produce charged products. These products, in turn, can be detected due to the ionization they create. The scheme for a Boron Trifluoride detector is below (hosted at https://www.orau.org/ptp/collection/proportional%20counters/bf3info.htm).\n\n\n\nThe wall effect results in the following spectrum (approximately):\n\n\nIn (n,p) reactions, for example, variation in emission angle of particle 3 can be used to determine the energy of the original incident neutron.\n\n# Applications: Neutron Production\nSpecific neutron energies can be targetted by collecting them at a certain angle away from the production collision.\n\n\n\n
The accelerator and spallation target at LANSCE and other spallation experiments rely on this fact.
Prof. Huff's first job was at the LANSCE ICE HOUSE, 2003 & 2004
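To make the angle-energy connection concrete, here is a sketch that evaluates the binary-reaction kinematics relation recalled in the next section (Shultis & Faw Eq. 6.11) for the $^{9}$Be(p,n)$^{9}$B example worked earlier. The 3 MeV bombarding energy, the chosen angles, and the helper name are illustrative assumptions.

```python
import math

def product_energy(m_1, m_3, m_4, q_value, e_1, theta_3_deg, root=+1):
    """Kinetic energy E_3 (MeV) of the light product emitted at lab angle theta_3,
    from the binary-reaction kinematics relation (Shultis & Faw Eq. 6.11).
    Masses in amu, energies in MeV; root selects the +/- branch of the solution."""
    cos_t = math.cos(math.radians(theta_3_deg))
    first = math.sqrt(m_1*m_3*e_1)*cos_t/(m_3 + m_4)
    inside = m_1*m_3*e_1*cos_t**2/(m_3 + m_4)**2 \
             + ((m_4 - m_1)*e_1 + m_4*q_value)/(m_3 + m_4)
    if inside < 0:
        return None                      # no physical solution at this angle and energy
    sqrt_e3 = first + root*math.sqrt(inside)
    return sqrt_e3**2 if sqrt_e3 > 0 else None

# 9Be(p,n)9B with a 3 MeV proton (above the ~2.06 MeV threshold found above)
p_mass, n_mass, b_9_mass = 1.007825032, 1.0086649158849, 9.0133288
q_pn = -1.8505   # MeV, from the Q-value cell above
for angle in [0, 30, 60, 90, 120]:
    print(angle, product_energy(p_mass, n_mass, b_9_mass, q_pn, e_1=3.0, theta_3_deg=angle))
```

Scanning the printed values shows the emitted neutron energy falling off with angle, which is exactly why a collection angle can be used to select a neutron energy.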
\n\n\n## Two energies\n\nIn (p,n) reactions, for example, certain proton energies may result in more than one neutron energy observed at a single angle. How? \n\nRecall the equation (Shultis and Faw 6.11):\n\n\\begin{align}\n\\sqrt{E_y}=&\\sqrt{\\frac{m_xm_yE_x}{(m_y + m_Y)^2}}\\cos\\theta_y \\\\\n&\\pm \\sqrt{\\frac{m_xm_yE_x}{(m_y + m_Y)^2}\\cos^2\\theta_y + \\left[\\frac{m_Y-m_x}{(m_y + m_Y)}E_x + \\frac{m_YQ}{(m_y + m_Y)}\\right]}\n\\end{align}\n\nProf. Huff prefers this notation: \n\\begin{align}\n\\sqrt{E_3}=&\\sqrt{\\frac{m_1m_3E_1}{(m_3 + m_4)^2}}\\cos\\theta_3 \\\\\n&\\pm \\sqrt{\\frac{m_1m_3E_1}{(m_3 + m_4)^2}\\cos^2\\theta_3 + \\left[\\frac{m_4-m_1}{(m_3 + m_4)}E_1 + \\frac{m_4Q}{(m_3 + m_4)}\\right]}\n\\end{align}\n\n## Heavy Particle scattering from an electron\n\nMuch like the Compton reaction we saw between photons and electrons, we can see a similar reaction with heavy particles. Occaisionally, a heavy particle (e.g. a small nucleus, like an $\\alpha$ particle) strikes the orbital electrons in atoms of a medium.\n\nThus: particles 2 and 3 are the electron. So:\n\n\\begin{align} \nm_2 &= m_3 = m_e = \\mbox{(the electron mass)}\\\\\nE_3 &= E_e = \\mbox{(the recoil electron energy)}\\\\\nm_1 &= m_4 = \\mbox{(the mass of the heavy particle)}\\\\\nE_1 &= E_4 = \\mbox{(the kinetic energy of the incident heavy particle)}\n\\end{align}\n\nFor this scattering process, there is no change in the rest masses of the reactants, so Q = 0. \n\nWe can use the Shutlis and Faw 6.11 equation above to arrive at:\n\n\\begin{align}\n\\sqrt{E_e}=& \\frac{2}{m_4 + m_e}\\sqrt{m_4m_eE_4}\\cos{\\theta_e}\n\\end{align}\n\nWe can approximate that $m_4 >> m_e$ such that the electron recoil energy becomes:\n\n\\begin{align}\n\\implies E_e =& 4\\frac{m_e}{m_4}E_4\\cos^2{\\theta_e}\n\\end{align}\n\n## Think Pair Share\nWhat angle, $\\theta_e$, corresponds to the maximimum loss of kinetic energy by the incident heavy particle?\n\n\n\nAt $\\theta_e=0$, we find that:\n\n\\begin{align}\n(E_e)_{max} = 4\\frac{m_e}{m_4}E_4\n\\end{align}\n\n\n```python\nimport math \ndef recoil_energy(m_4, e_4, theta_e):\n m_e = 0.0005486 # amu\n num = 4*m_e*e_4*pow(math.cos(theta_e), 2)\n return num/m_4\n\n```\n\n\n```python\nth = [math.radians(-90),\n math.radians(-75),\n math.radians(-60),\n math.radians(-45),\n math.radians(-30),\n math.radians(-15),\n math.radians(0), \n math.radians(15),\n math.radians(30),\n math.radians(45),\n math.radians(60),\n math.radians(75),\n math.radians(90)]\n\nm_4 = 4.003 # alpha particle\n\nto_plot_4 = np.arange(0.,len(th))\nto_plot_10 = np.arange(0.,len(th))\n\nfor k, v in enumerate(th):\n to_plot_4[k] = (recoil_energy(m_4, 4, v))\n to_plot_10[k] = (recoil_energy(m_4, 10, v))\n\n\nplt.plot(th, to_plot_4, label=\"$4MeV$\")\nplt.plot(th, to_plot_10, label=\"$10MeV$\")\n\nplt.ylabel(\"Electron Recoil Energy ($MeV$)\")\nplt.xlabel(\"Angle (radians)\")\nplt.legend(loc=2)\n```\n\n\n```python\n\nth = 0\nm_4 = 4.003 # alpha particle\ne_4 = 4 # MeV\n\nprint(\"Max (4MeV alpha): \", recoil_energy(m_4, e_4, th))\n```\n\n Max (4MeV alpha): 0.002192755433424931\n\n\n## Neutron Scattering\n\n### Neutron interactions with matter.\n\n\\begin{align}\n^1_0n + {^a_z}X \\longrightarrow \n\\begin{cases}\n^1_0n + {^a_z}X & \\mbox{Elastic Scattering}\\\\\n^1_0n + \\left({^a_z}X\\right)^* & \\mbox{Inlastic Scattering}\n\\end{cases}\n\\end{align}\n\n\n\nUsing the ubiquitous equation 6.11 for a neutron scatter:\n\n\\begin{align}\n\\sqrt{E_3}=&\\sqrt{\\frac{m_1m_3E_1}{(m_3 + m_4)^2}}\\cos\\theta_3 \\\\\n&\\pm 
\\sqrt{\\frac{m_1m_3E_1}{(m_3 + m_4)^2}\\cos^2\\theta_3 + \\left[\\frac{m_4-m_1}{(m_3 + m_4)}E_1 + \\frac{m_4Q}{(m_3 + m_4)}\\right]}\\\\\n\\end{align}\n\nWe can define our particles as a neutron hitting a nucleus and changing in its energy.\n\n\\begin{align}\nm_1 = m_3 = m_n\\\\\nE_1 = E_n\\\\\nE_3 = E_n'\\\\\n\\end{align}\n\nSuch that:\n\n\\begin{align}\n\\sqrt{E_n'} =&\\sqrt{\\frac{m_nm_nE_n}{(m_n + m_4)^2}}\\cos\\theta_s \\\\\n&\\pm \\sqrt{\\frac{m_nm_nE_n}{(m_n + m_4)^2}\\cos^2\\theta_s + \\left[\\frac{m_4-m_n}{(m_n + m_4)}E_n + \\frac{m_4Q}{(m_n + m_4)}\\right]}\n\\end{align}\n\nWe can also agree that $m_2=m_4$, which is some nucleus with a mass that is approximately the same at the beginning and end of the scatter (approximate if the scattering is inelastic) . This gives, with some rearrangement:\n\n\\begin{align}\n\\sqrt{E_n'} &= \\frac{1}{m_4 + m_n}\\times\\\\ &\\left[\\sqrt{m_n^2E_n}\\cos{\\theta_s} \\pm \\sqrt{E(m_4^2 + m_n^2\\cos^2{\\theta_s} \u2212 m_n^2) + m_4 ( m_4 + m_n ) Q }\\right]\n\\end{align}\n\nAnd, for elastic scattering ($Q=0$):\n\n\\begin{align}\nE' = \\frac{1}{(A+1)^2}\\left[\\sqrt{E}\\cos{\\theta_s} + \\sqrt{E(A^2 - 1 + \\cos{\\theta_s}^2)}\\right]^2\n\\end{align}\n\n\n\n```python\ndef scattered_neutron_energy(A, E, th):\n \"\"\"Returns the energy of a scattered neutron [MeV]\n Parameters\n ----------\n A: int or double\n mass number of medium\n E: double\n kinetic energy of the incident neutron [MeV]\n th : double\n scattering angle, in degrees\n \"\"\"\n cos_th = math.cos(math.radians(th))\n term1 = 1/((A+1)**2)\n term2 = math.sqrt(E)*cos_th\n term3 = math.sqrt(E*(A**2 - 1 + cos_th**2))\n return term1*((term2 + term3)**2)\n```\n\n\n```python\nth = [math.radians(-90),\n math.radians(-75),\n math.radians(-60),\n math.radians(-45),\n math.radians(-30),\n math.radians(-15),\n math.radians(0), \n math.radians(15),\n math.radians(30),\n math.radians(45),\n math.radians(60),\n math.radians(75),\n math.radians(90)]\n\ne_initial = 2.0 # 2 MeV is special\na_light = 4.003 # alpha particle\na_heavy = 235.0 # uranium atom\n\nto_plot_light = np.arange(0.,len(th))\nto_plot_heavy = np.arange(0.,len(th))\n\nfor k, v in enumerate(th):\n to_plot_light[k] = (scattered_neutron_energy(a_light, e_initial, v))\n to_plot_heavy[k] = (scattered_neutron_energy(a_heavy, e_initial, v))\n\nplt.plot(th, to_plot_light, label=\"light\")\nplt.plot(th, to_plot_heavy, label=\"heavy\")\n\nplt.ylabel(\"Scattered Neutron Energy ($MeV$)\")\nplt.xlabel(\"Angle (radians)\")\nplt.legend(loc=2)\n```\n\n## Average Energy Loss\n\nFor elastic scattering (Q = 0), we see the minimum and maxium energies occur at the maximum and minimum angles. 
\n\n\\begin{align}\nE'_{max} &= E'(\\theta_{s,min})\\\\\n &= E'(\\theta_{s}=0)\\\\\n &= E\\\\\nE'_{min} &= E'(\\theta_{s,max})\\\\\n &= E'(\\theta_{s}=\\pi)\\\\\n &= \\frac{(A-1)^2}{(A+1)^2} E\\\\\n &\\equiv \\alpha E\\\\\n\\end{align}\n\nFor isotropic scattering, we can find the average loss:\n\n\n\\begin{align}\n(\\Delta E)_{av} &\\equiv E - E'_{av}\\\\\n&= E\u2212 1(E+\\alpha E)\\\\\n& = 1(1- \\alpha)E\n\\end{align}\n\n\n```python\ndef alpha(a):\n \"\"\"Returns the average energy loss of a \n scattered neutron [MeV]\n Parameters\n ----------\n A: int or double\n mass number of medium\n \"\"\"\n num = (a-1)**2\n denom = (a+1)**2\n return num/denom\n \ndef average_energy_loss(A, E):\n \"\"\"Returns the average energy loss of a scattered neutron [MeV]\n Parameters\n ----------\n A: int or double\n mass number of medium\n E: double\n kinetic energy of the incident neutron [MeV]\n \"\"\"\n return 1*(1-alpha(A))*E\n```\n\n\n```python\ne_initial = np.arange(0, 2, 0.001)\n\nto_plot_light = np.arange(0.,len(e_initial))\nto_plot_heavy = np.arange(0.,len(e_initial))\n\nfor k, v in enumerate(e_initial):\n to_plot_light[k] = (average_energy_loss(a_light, v))\n to_plot_heavy[k] = (average_energy_loss(a_heavy, v))\n\nplt.plot(e_initial, to_plot_light, label=\"light atom\")\nplt.plot(e_initial, to_plot_heavy, label=\"heavy atom\")\n\nplt.ylabel(\"Average Neutron Energy Loss ($MeV$)\")\nplt.xlabel(\"Initial neutron energy ($MeV$)\")\nplt.legend(loc=2)\n```\n\n## Logarithmic Energy Loss\n\nIt turns out, on a logarithmic energy scale, a neutron loses the same amount of logarithmic energy per elastic scatter, regardless of its initial energy. So, this is a helpful term, particularly since neutron energies can range by many orders of magnitude. So, we often use 'logarithmic energy loss' when discussing this downscattering. This is also called \"lethargy\".\n\n\\begin{align}\n\\left(\\ln{(E)} - \\ln{(E')}\\right)_{av} & = \\overline{\\ln{\\left(\\frac{E}{E'}\\right)}} \\\\\n&= 1 + \\frac{\\alpha}{1-\\alpha}\\\\\n&= \\xi\\\\\n&= \\mbox{average logarithmic energy loss per elastic scatter}\\\\\n&= \\mbox{lethargy}\n\\end{align}\n\n\n```python\ndef lethargy(a): \n \"\"\"Returns the average logarithmic energy \n loss per elastic scatter\n Parameters\n ----------\n A: int or double\n mass number of medium\n \"\"\"\n return 1.0 + alpha(a)/(1-alpha(a)) \n```\n\n\n```python\na = np.arange(1, 240)\nplt.plot([lethargy(i) for i in a])\nplt.ylabel(\"$\\\\xi$\")\nplt.xlabel(\"A($amu$)\")\n\n```\n\n\n```python\n# The below IFrame displays Page 172 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/172\", width=1000, height=500)\n```\n\n\n\n\n\n\n\n\n\n\n## Thermal Neutrons\n\n1. a fast neutron slows down\n2. may eventually come into thermal equilibrium with the medium \n3. thermal motion of atoms in medium are in Maxwellian distribution \n4. neutron may gain kinetic energy upon scattering from a rapidly moving nucleus \n5. 
neutron may lose energy upon scattering from a slowly moving nucleus.\n\n\n\n\nAt room temperature, 293 K:\n- the most probable kinetic energy of thermal neutrons is 0.025 eV\n- 0.025 eV corresponds to a neutron speed of about 2200 m/s.\n\n## Epithermal\n\nNeutrons that are faster than thermal neutrons, but aren't quite \"fast\" are called _epithermal_. ($0.2eV < E_{epi} < 1 MeV$)\n\n## Fast\n\n$> 1MeV$\n\n\n## Neutron Capture \n\n- Free neutrons will eventually be absorbed by a nucleus (or escape the domain of interest)\n- Neutron capture leaves the nucleus excited \n- Actually, very excited (Recall: what is a typical binding energy per nucleon?)\n- When it's released as a $\\gamma$ that energy can be very hazardous\n\nNeutron slowing down can help us to reduce very high energy $\\gamma$ emissions.\n\n## Fission Reactions\n\nSome nuclei spontaneously fission (e.g. $^{252}Cf$). However, this isn't common.\n\n\n\n\\begin{align}\n^1_0n + ^{235}_{92}U \\longrightarrow \\left( ^{236}_{92}U \\right)^*\n\\begin{cases}\n^{235}_{92}U + ^1_0n & \\mbox{Elastic Scattering}\\\\\n^{235}_{92}U + ^1_0n' + \\gamma & \\mbox{Inelastic Scattering}\\\\\n^{236}_{92}U + \\gamma & \\mbox{Radiative Capture}\\\\\n^{A_H}_{Z_H}X_H + ^{A_L}_{Z_L}X_L + ^1_0n + \\cdots & \\mbox{Fission}\n\\end{cases}\n\\end{align}\n\n### Recall: Cross sections\n\nThe likelihood of each of these scattering events is captured by cross sections. \n\n- $\\sigma_x = $ microscopic cross section $[cm^2]$\n- $\\Sigma_x = $ macroscopic cross section $[1/length]$\n- $\\Sigma_x = N\\sigma_x $\n- $N = $ number density of target atoms $[\\#/volume]$\n\n\n### Cross sections are in units of area. Explain this to your neighbor.\n\n### What energy neutron do we prefer for fission in $^{235}U$?\n\n\nNuclei that undergo neutron induced fission can be categorized into three types:\n\n- fissile: can fission with a slow neutron ($^{235}U$, $^{233}U$, $^{239}Pu$)\n- fissionable: require high energy (>1MeV) neutron ($^{238}U$, $^{240}Pu$)\n- fertile: can be converted into fissile or fissionable nuclide (breeding reactions)\n\nKey breeding reactions are :\n\n\\begin{align}\n{^{232}_{90}}Th + ^1_0n \\longrightarrow {^{233}_{90}}Th \\overset{\\beta^-}{\\longrightarrow} {^{233}_{91}}Pa \\overset{\\beta^-}{\\longrightarrow} {^{233}_{92}}U\\\\\n{^{238}_{92}}U + ^1_0n \\longrightarrow {^{239}_{92}}U \\overset{\\beta^-}{\\longrightarrow} {^{239}_{93}}Np \\overset{\\beta^-}{\\longrightarrow} {^{239}_{94}}Pu\\\\\n\\end{align}\n\n## The fission process\n\n\\begin{align}\n^1_0n + ^{235}_{92}U \\longrightarrow \\left( ^{236}_{92}U \\right)^* \\longrightarrow X_H + X_L + \\nu_p\\left(^1_0n\\right) + \\gamma_p\n\\end{align}\n\nConserving neutrons and protons:\n\n\\begin{align}\nA_L + A_H + \\nu_p &= 236\\\\\nN_L + N_H + \\nu_p &= 144\\\\\nZ_L + Z_H &= 92\\\\\n\\end{align}\n\n\n\n## Fission Product Decay\n\nThe fission fragments end up very neutron rich.\n\n### Think Pair Share\nRecall the chart of the nuclides. How will these fission products likely decay?\n\n\n\n\n\n\n### Fission Spectrum\n\n$\\chi(E)$ is an empirical probability density function describing the energies of prompt fission neutrons. 
\n\n\\begin{align}\n\\chi (E) &= 0.453e^{-1.036E}\\sinh\\left(\\sqrt{2.29E}\\right)\\\\\n\\end{align}\n\n\n```python\nimport numpy as np\nimport math\ndef chi(energy):\n return 0.453*np.exp(-1.036*energy)*np.sinh(np.sqrt(2.29*energy))\n\nenergies = np.arange(0.0,10.0, 0.1)\n\nplt.plot(energies, chi(energies))\nplt.title(r'Prompt Neutron Energy Distribution $\\chi(E)$')\nplt.xlabel(\"Prompt Neutron Energy [MeV]\")\nplt.ylabel(\"probability\")\n```\n\n\n```python\n#### Questions about this plot:\n\n- What is the most likely prompt neutron energy?\n- Can you write an equation for the average neutron energy?\n\n```\n\n Object `energy` not found.\n Object `energy` not found.\n\n\n\n```python\n- Can you write an equation for the average neutron energy\n```\n\n\n```python\nprint(max([chi(e) for e in energies]), chi(0.7))\n```\n\n 0.358102702287 0.358102702287\n\n\n#### Expectation Value\n\nRecall that the average energy will be the expectation value of the probability density function.\n\n\n\\begin{align}\n &= \\int E\\chi(E)dE\\\\\n&= E \\chi(E)\n\\end{align}\n\n\n```python\nplt.plot(energies, [chi(e)*e for e in energies])\n```\n\n## Prompt and Delayed neutrons\n\n- Most of the neutrons in fission are emitted within $10^{-14}s$. \n - **prompt** neutrons\n - $\\nu_p$\n- Some, ($<1\\%$) are produced by delayed decay of fission products. \n - **delayed** neutrons\n - $\\nu_d$\n \nWe define the delayed neutron fraction as :\n\n\\begin{align}\n\\beta \\equiv \\frac{\\nu_d}{\\nu_d + \\nu_p}\n\\end{align}\n\n## Energy from fission\n\n\n```python\n# The below IFrame displays Page 183 of your textbook:\n# Shultis, J. K. (2016). Fundamentals of Nuclear Science and Engineering Third Edition, \n# 3rd Edition. [Vitalsource]. Retrieved from https://bookshelf.vitalsource.com/#/books/9781498769303/\n\nfrom IPython.display import IFrame\nIFrame(\"https://bookshelf.vitalsource.com/books/9781498769303/pageid/183\", width=1000, height=500)\n\n```\n\n\n\n\n\n\n\n\n\n\n### Reaction Rates\n\n- The microscopic cross section is just the likelihood of the event per unit area. \n- The macroscopic cross section is just the likelihood of the event per unit area of a certain density of target isotopes.\n- The reaction rate is the macroscopic cross section times the flux of incident neutrons.\n\n\\begin{align}\nR_{i,j}(\\vec{r}) &= N_j(\\vec{r})\\int dE \\phi(\\vec{r},E)\\sigma_{i,j}(E)\\\\\nR_{i,j}(\\vec{r}) &= \\mbox{reactions of type i involving isotope j } [reactions/cm^3s]\\\\\nN_j(\\vec{r}) &= \\mbox{number of nuclei participating in the reactions } [\\#/cm^3]\\\\\nE &= \\mbox{energy} [MeV]\\\\\n\\phi(\\vec{r},E)&= \\mbox{flux of neutrons with energy E at position i } [\\#/cm^2s]\\\\\n\\sigma_{i,j}(E)&= \\mbox{cross section } [cm^2]\\\\\n\\end{align}\n\n\nThis can be written more simply as $R_x = \\Sigma_x I N$, where I is intensity of the neutron flux.\n\n\n### Source term\n\nThe source of neutrons in a reactor are the neutrons from fission. 
\n\n\\begin{align}\ns &=\\nu \\Sigma_f \\phi\n\\end{align}\n\nwhere\n\n\\begin{align}\ns &= \\mbox{neutrons available for next generation of fissions}\\\\\n\\nu &= \\mbox{the number born per fission}\\\\\n\\Sigma_f &= \\mbox{the number of fissions in the material}\\\\\n\\phi &= \\mbox{initial neutron flux}\n\\end{align}\n\nThis can also be written as:\n\n\\begin{align}\ns &= \\nu\\Sigma_f\\phi\\\\\n &= \\nu\\frac{\\Sigma_f}{\\Sigma_{a,fuel}}\\frac{\\Sigma_{a,fuel}}{\\Sigma_a}{\\Sigma_a} \\phi\\\\\n &= \\eta f {\\Sigma_a} \\phi\\\\\n\\eta &= \\frac{\\nu\\Sigma_f}{\\Sigma_{a,fuel}} \\\\\n &= \\mbox{number of neutrons produced per neutron absorbed by the fuel, \"neutron reproduction factor\"}\\\\\nf &= \\frac{\\Sigma_{a,fuel}}{\\Sigma_a} \\\\\n &= \\mbox{number of neutrons absorbed in the fuel per neutron absorbed anywhere, \"fuel utilization factor\"}\\\\\n\\end{align}\n\nThis absorption and flux term at the end seeks to capture the fact that some of the neutrons escape. However, if we assume an infinite reactor, we know that all the neutrons are eventually absorbed in either the fuel or the coolant, so we can normalize by $\\Sigma_a\\phi$ and therefore:\n\n\n\\begin{align}\nk_\\infty &= \\frac{\\eta f \\Sigma_a\\phi}{\\Sigma_a \\phi}\\\\\n&= \\eta f\n\\end{align}\n", "meta": {"hexsha": "be74233cd6fea5859ebfbd80fa18d92767ab7f56", "size": 293624, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "binary_reactions/binary-reactions.ipynb", "max_stars_repo_name": "katyhuff/npr247", "max_stars_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": 3, "max_stars_repo_stars_event_min_datetime": "2018-12-17T06:07:21.000Z", "max_stars_repo_stars_event_max_datetime": "2022-01-21T17:14:51.000Z", "max_issues_repo_path": "binary_reactions/binary-reactions.ipynb", "max_issues_repo_name": "katyhuff/npr247", "max_issues_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": 2, "max_issues_repo_issues_event_min_datetime": "2018-08-29T17:27:24.000Z", "max_issues_repo_issues_event_max_datetime": "2018-08-29T17:46:50.000Z", "max_forks_repo_path": "binary_reactions/binary-reactions.ipynb", "max_forks_repo_name": "katyhuff/npr247", "max_forks_repo_head_hexsha": "0bc7abf483247ba1a705516393f49703d8263458", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": 4, "max_forks_repo_forks_event_min_datetime": "2018-08-25T20:00:51.000Z", "max_forks_repo_forks_event_max_datetime": "2022-01-14T03:05:26.000Z", "avg_line_length": 160.3626433643, "max_line_length": 58120, "alphanum_fraction": 0.8742166853, "converted": true, "num_tokens": 9848, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.28776781576105304, "lm_q2_score": 0.2942149783515162, "lm_q1q2_score": 0.08466560168440133}} {"text": "```python\n### Running in Google Colab? 
You'll want to uncomment and run these cell once each time you start this notebook.\n\n\"\"\"\n!pip uninstall cftime --yes\n!pip install cftime==1.2.1\n!pip install nc-time-axis\n!pip install netcdf4\n!wget https://github.com/psheehan/CIERA-HS-Program/blob/master/Projects/EarthsClimateModel/tas_Amon_CESM1-WACCM_rcp85_r2i1p1_200601-209912.nc?raw=true\n!mv tas_Amon_CESM1-WACCM_rcp85_r2i1p1_200601-209912.nc?raw=true tas_Amon_CESM1-WACCM_rcp85_r2i1p1_200601-209912.nc\n!apt-get install libproj-dev proj-data proj-bin\n!apt-get install libgeos-dev\n!pip uninstall shapely cartopy --yes\n!pip install shapely cartopy --no-binary shapely --no-binary cartopy\n\"\"\"\n```\n\n# _Welcome to Earth_\n\n\n\n## In this module, we'll take the space exploration a little bit closer to home: Earth. \n\n\n\n\n\nimage from NASA (https://explorer1.jpl.nasa.gov/galleries/earth-from-space/#gallery-16)\n\n# What makes the Earth unique?\n\n\nThe easiest answer? Us. \n\nHumankind has perfectly evolved to thrive in this climate. Arguably, the main components that support this life is the atmospheric composition and the temperatures we experience because of it.\n\n\n# Atmospheric Composition\n\n\nThe Earth has this unique atmosphere that allows for life to thrive. The air we breathe is made up of many elements, mainly: nitrogen (78.09%) and oxygen (20.95%). The remaining >1% of the atmosphere is made up of other trace gases like carbon dioxide ($CO_2$), methane ($CH_4$), and water vapor ($H_2O_v$). \n\nYou've probably heard of these molecules before -- water, $CO_2$, and $CH_4$ are famous greenhouse gases. Though you've probably heard of greenhouse gases, the definition is:\n\n**Greenhouse gas:** A gas that absorbs infared energy thus trapping heat.\n\nThese greenhouse gases are actually necessary for life -- without these gases trapping heat in our atmosphere, we'd experience immensely cooler temperatures. This can be explained with some basic physics principles.\n\n1. Above 0 Kelvin (or absolute zero), molecules in an object are moving. Therefore, all objects above 0 Kelvin emit radiation. This includes the Sun and the Earth!\n\n\n2. The radiation ($E$) an object gives off is proportional to its absolute temperature ($T$), as described in the Stefan-Boltzman Law:\n $$E = \\sigma T^4 $$ where $\\sigma$ is a constant such that: $\\sigma = 5.67*10^{-8} W/m^2$.\n If an object completely absorbs and emits radiation, it's what we call a **black body.**\n\n\n3. The relationship between the object's temperature ($T$) and wavelength ($\\lambda$) is called Wien's Law:\n $$ \\lambda = c/T $$\n where $c$ is a constant such that: $c = 2897 \\mu m * K$. The wavelength an object emits is what we percieve as the color of the object. The sun emits at a high temperature, so we see the the incoming light in the visible spectrum -- the sun glows yellow, so let's say it's emitting at 570 $nm$. The Earth emits radiation at a lower temperature in the infrared (50 $\\mu m$), so we can't actually see the outgoing radiation. \n \n\nAfter rearranging the Stefan-Boltzmann law, we can rearrange the equation to relate stars and black body planets temperatures to: \n\n$$ T_{planet} = T_{star} * \\sqrt { R_{sun}/{(2*a_{star-planet}) }}$$\n\n\nWith the sun radius of 696,340 km ($696*10^6 m$) and a = 149,598,000 km ($149*10^9 m)$\n \n\n**Question 1: what is the temperature of the Sun's surface? 
What is the temperature of the Earth's surface?** \n\nHint: rearrange Stefan-Boltzman and Wien's law with the information you know about the sun and Earth's peak spectra\n\nHint 2: If you're still struggling -- Google stefan-boltzmann surface temperature of the sun! (My advice for all problems you can't figure out) \n\n\n```python\n\n```\n\n**Question 2: Look up the average temperature of the Earth's surface. Do your answers match? Why or why not?**\n\n\n```python\n\n```\n\nSo....\n\n\nThese were leading questions obvioulsy to make you think about the power of that >1% atmosphere. Greenhouse gases are essential to making these temperatures habitable on Earth! What you just solved for was the 1-dimensional average temperature of the Earth if we did not have an atmosphere. But we do ... so it gets more complicated. \n\n\n\n\nimage from NASA (https://explorer1.jpl.nasa.gov/galleries/earth-from-space/#gallery-6)\n\n# One-dimensional to three\n\nThe previous exercise gave you a 1-D view of the Earth. But we know that the Earth is a sphere which causes all sorts of fun problems, like the uneven distribution of sunlight as compared to the equators and the north pole. \n\n\n\nFrom: http://www.geo.mtu.edu/KeweenawGeoheritage/Lake/Temperature_files/latitude_and_sunlight_large.jpg\n\nSolar warming is generally greater at the equator where the sun shines directly and much less at the poles where the sun is low in the sky. Surfaces that are perpendicular to the sun\u2019s ray path heat faster than those at an angle. This differential heating is passed on to the air above by conduction which causes air expansion and changes in pressure. Wind is the result of pressure changes in the atmosphere. Any shoreline is a wind machine, because of solar heating effects.\n\n**Question 3: Using your background knowledge, how would you describe the weather at the equator? (More than just temperatures ...)** \n\n\n\nThis differing temperature thus forces air parcels to move, and then the Earth's rotation creates a deflection on these parcels, giving us atmospheric circulation patterns. \n\nThe basic principle comes from the ideal gas law:\n$$ PV = nRT $$\nWhere P = Pressure (atm), V = volume (liters), n = number moles of gas, T = temperature of gas (Kelvin), R = constant, 0.0821 L * atm * K^-1 * mol^-1\n\n\n\nFrom: https://history.aip.org/climate/xGenCirc.htm\n\n\n**Here is a video explainer of this figure if you do not understand the figure, the last 2 minutes are particularly useful for the next two questions:**\n\n\nhttps://www.youtube.com/watch?v=ebjKyoQ6YoE\n\n**Question 4: Looking at the figure above, how does the atmospheric circulation converge at the equator (0 degrees)? Think about where the air is going ... what it's bringing ...** \n\n\n\n\n**Question 5: How about at 30 degrees? Looking at a map, what significant features are found on the 30th latitude? (Hint look to Africa....) Can you explain why this may be happening, given what you know about the way the ideal gas law works and the atmospheric circulation patterns?** \n\n\n\nUnfortunately this perfect circulation model does not work across the entire planet -- think about the temperature of Ireland and Canada -- they're found on the same latitude, but one is so much warmer than the other!\n\nLooking at the Earth as a simple cell does not perfectly explain our climate situation -- it would if we were an Aquaplanet without land (orthographic) scale features and a uniform depth in the ocean -- but obviously, our planet is way more complicated than that. 
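\n\nAs a rough back-of-the-envelope sketch of just how little that simple radiation balance captures (this only repeats the Question 1 calculation with rounded, assumed numbers, so skip it if you have not tried that question yet):\n\n\n```python\nimport numpy as np\n\n# Rounded, assumed values\nT_sun = 5778.0      # K, solar surface temperature\nR_sun = 6.96e8      # m, solar radius\na_orbit = 1.496e11  # m, mean Earth-Sun distance\n\n# Blackbody estimate from the relation given earlier: T_planet = T_star*sqrt(R_sun/(2a))\nT_earth_bare = T_sun*np.sqrt(R_sun/(2.0*a_orbit))\nprint(T_earth_bare)  # roughly 279 K, versus the ~288 K actually observed at the surface\n```\n\nA single global number like that says nothing about seasons, latitude, oceans, or clouds.\n\n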
And that's why we can't just use the Stefan-Boltzmann and ideal gas law to explain our climate! They are very useful and provide a part of the story about our climate, however, our mathematical models need to integrate the nuance of oceanic circulation, orthographic features, and clouds! As a result, we go look at far more complicated models -- climate models -- in order to better estimate what's happening in our environment.\n\n# Intro to Climate Models\n\nTo reiterate, the energy on Earth comes from the sun, so incoming light -- or radiation, as I'll call it for the rest of this -- is key to controlling temperature. The ultimate amount of warming potential found in the earth is controlled by three knobs:\n\n> 1. Incoming radiation: Amount of sunlight reaching Earth -- influenced by sun spots, the distance of the sun to the Earth\n \n> 2. Albedo: Amount of reflectivity on Earth's surface -- what kind of landcover is on the planet\n\n> 3. Gases that absorb longwave radiation: levels of greenhouse gas concentrations in the atmosphere trap heat\n\nIf you change any of these three knobs, you change the climate because you change the radiation balance that we're in!\n\n\nClimate models use this radiation balance to solve the energy balance, but there's also a mass balance that it must solve. This is stolved through the Navier-Stokes equation ... and it's not pretty.\n\n\\begin{equation}\n\\frac{\\partial (\\rho u_{i})}{\\partial t} + \\frac{\\partial[\\rho u_{i}u_{j}]}{\\partial x_{j}} = -\\frac{\\partial p}{\\partial x_{i}} + \\frac{\\partial \\tau_{ij}}{\\partial x_{j}} + \\rho f_{i} \\end{equation}\n\nBasically, we're balancing the flow of fluid motion and balancing the speed, pressure, temperature and density of the gases in the atmosphere and the water in the ocean.\n\nBUT DON'T WORRY -- I'm not having you solve for this equation. I have model data which has already done this for you! A climate model simultaneously solves for the energy and mass balance over a sphere, giving us an incredible tool to study the atmosphere and climate.\n\n\n\n## Making Plots with Climate Model Output\n\nFor this assignment, you are tasked with creating a Python plot of Climate Model Intercomparison Project CMIP data. CMIP is a standard experimental framework for studying the output of coupled atmosphere-ocean general circulation models. This facilitates assessment of the strengths and weaknesses of climate models which can enhance and focus the development of future models. For example, if the models indicate a wide range of values either regionally or globally, then scientists may be able to determine the cause(s) of this uncertainty.\n\n## Task: Your assignment is to plot the average global temperature in the year 2066 using whatver map projection you'd like to use.\n\nThe CMIP file is called: tas_Amon_CESM1-WACCM_rcp85_r2i1p1_200601-209912.nc\n\nRecommended libraries: xarray, netCDF4, cartopy, matplotlib.pyplot, numpy, pandas \n\n\n```python\n# Import libraries\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\nimport xarray as xr\nimport nc_time_axis\nimport math\nimport numpy as np\n\nfrom netCDF4 import Dataset\n```\n\n1. Read in the file using xarray\n\n\n```python\nfile_in = # fill in: filename\n\nDS_netcdf = Dataset(file_in)\nDS=xr.open_dataset(file_in)\n```\n\nI guess we should talk about Xarray before we continue. 
Climate data file formats are typically netCDF files, a binary file format that makes these huge datasets more portable across different machines but not possible to open with something like excel. Really, the data are stored in a folder... Xarray is a great tool to open these files up. But it takes a second to get used to this kinda format. \n\n\n```python\nprint(DS)\n```\n\nOk so attributes are not necessary for you all to know, but it's basically the metadata attached to the file! Most important part of the file is the data variables. **tas** is the variable we're more interested in -- it's temperature!\n\n2. Select the 2066 time slice from the array\n\n\n```python\nstartday= #complete\nendday= #complete\nfeb66=DS.sel(time=slice(startday,endday))\n```\n\n3. Average the data over time! (http://xarray.pydata.org/en/stable/generated/xarray.Dataset.mean.html) \n\n\n```python\nfeb66_avg=feb66.mean(dim= ) #complete\nprint(feb66_avg)\n```\n\n4. Now let's map it to make more sense of it...\n\n\n```python\n#Projection for plot\nprojection= ccrs.PlateCarree(central_longitude=255);\n# Data projection\ndata_crs = ccrs.PlateCarree()\n\n# Open figure object\nplt.figure(figsize=(10, 6))\nax = plt.axes(projection=projection)\nax.set_global()\nax.coastlines()\n\n# Make a contour plot\nlon= # pull the data from the feb66_avg variable!\nlat= # complete\ndata= # complete\ncf=ax.contourf(lon, lat, data, levels=60, transform=data_crs, cbar_kwargs={'label': DS.tas.units})\nplt.colorbar(cf)\nax.set_label(DS.tas.units)\n\n# Add the gridlines, title, colorbar\nax.gridlines(crs=projection,linestyle=\"dotted\", draw_labels=True)\nplt.title(\"Temperature\")\n\n#Save our figure, show our figure! (Must save before showing)\nplt.savefig('assignment1_feb66.png')\nplt.show()\n```\n\n5. So there you have it! Your first climate plot. Look at your plot and ponder, what is happening in the world? Where is it warm? Where is it cool? Is this expected? \n\n\n\n# Taking off the training wheels! Make your own plots now.\n\nNow that you know how to make a plot, now I want to see:\n\n(a) A map of average temperature from 2006-2016\n\n\n```python\n# Code here... don't worry, it's basically written above -- you can do this!\n\n\n\n\n## End code\n```\n\n(b) A map of average temperature from 2089-2099\n\n\n```python\n\n```\n\n(c) The difference between the temperatures in (a) and (b)\n\n\n```python\n\n```\n\n# Time series plot\n\nNow we can see how the temperature changes from 2006-2099. In this next plot we'll look at making the global average monthly temperatures from 2006-2099 (an x-y plot). \n\n\n```python\n# This will help you later....\ndef weighted_mean(data_da, dim, weights):\n r\"\"\"Computes the weighted mean.\n\n We can only do the actual weighted mean over the dimensions that\n ``data_da`` and ``weights`` share, so for dimensions in ``dim`` that aren't\n included in ``weights`` we must take the unweighted mean.\n\n This functions skips NaNs, i.e. Data points that are NaN have corresponding\n NaN weights.\n\n Args:\n data_da (xarray.DataArray):\n Data to compute a weighted mean for.\n dim (str | list[str]):\n dimension(s) of the dataarray to reduce over\n weights (xarray.DataArray):\n a 1-D dataarray the same length as the weighted dim, with dimension\n name equal to that of the weighted dim. Must be nonnegative.\n Returns:\n (xarray.DataArray):\n The mean over the given dimension. 
So it will contain all\n dimensions of the input that are not in ``dim``.\n Raises:\n (IndexError):\n If ``weights.dims`` is not a subset of ``dim``.\n (ValueError):\n If ``weights`` has values that are negative or infinite.\n \"\"\"\n if isinstance(dim, str):\n dim = [dim]\n else:\n dim = list(dim)\n\n if not set(weights.dims) <= set(dim):\n dim_err_msg = (\n \"`weights.dims` must be a subset of `dim`. {} are dimensions in \"\n \"`weights`, but not in `dim`.\"\n ).format(set(weights.dims) - set(dim))\n raise IndexError(dim_err_msg)\n else:\n pass # `weights.dims` is a subset of `dim`\n\n if (weights < 0).any() or xr.ufuncs.isinf(weights).any():\n negative_weight_err_msg = \"Weight must be nonnegative and finite\"\n raise ValueError(negative_weight_err_msg)\n else:\n pass # `weights` are nonnegative\n\n weight_dims = [\n weight_dim for weight_dim in dim if weight_dim in weights.dims\n ]\n\n if np.isnan(data_da).any():\n expanded_weights, _ = xr.broadcast(weights, data_da)\n weights_with_nans = expanded_weights.where(~np.isnan(data_da))\n else:\n weights_with_nans = weights\n\n mean_da = ((data_da * weights_with_nans).sum(weight_dims, skipna=True)\n / weights_with_nans.sum(weight_dims))\n other_dims = list(set(dim) - set(weight_dims))\n return mean_da.mean(other_dims, skipna=True)\n```\n\n1. Pull time period of interest\n\n\n```python\ndata_2006_2099=DS.sel(time=slice('2006-01-01','2099-12-31')) # 94 years\n```\n\n2. Assign contents into variables\n\n\n```python\nlat = # latitude \nlon = # longitude\ntas = # near-surface air temperature\n```\n\n3. Calculate the weights according to latitude. There are multiple ways to do this (see here). In this example, we use the cosine of latitudes. Now we use the function defined above to calculate the weighted mean over the latitude, passing the correct parameters\n\n\n```python\nrad = 4.*math.atan(1.)/180.\nweights = np.cos(lat*rad)\nwmean_lat = weighted_mean(, dim='lat', weights=weights) #complete\n```\n\n4. Since we want a single average value for the year, we also average over the longitudes. This time it does not have to be weighted.\n\n\n```python\nwmean = wmean_lat.mean(dim='lon')\n```\n\n5. Plot time and data from wmean\n\n\n```python\nplt.figure(figsize=(10,3))\nplt.plot(, , linestyle='-', color='black', linewidth=0.5) #complete\nplt.xlabel('Time')\nplt.ylabel('Temperature (K)')\nplt.title('Global Monthly Mean Temperatures', loc='left')\n```\n\n6. Yay! Another plot. Again -- what is happening? Any observations? What's weird about this plot?\n\n\n\n# Time series plot of temperature anomolies\n\nThe annual temperature has all of the seasonal cyclicity ... while it's clear the temperature is rising, we can remove the seasonal signal by subtracting a baseline. We can call 2006-2030 temperatures a baseline.\n\n1. Get the annual temperatures and the baseline\n\n\n```python\nannual_wmean = wmean.groupby('time.year').mean(dim='time')\n\nbaseline = wmean.sel(time=slice('2006-01-01','2035-12-31')).mean(dim='time')\n```\n\n2. We then subtract the baseline from the annual weighted mean, to get the anomaly\n\n\n```python\nanomaly = # complete\n```\n\n3. Now plot the anomoly like you did above!\n\n\n```python\n# complate\n```\n\nNow we have global average temperature anomolies. Is it what you expected? Is this climate change? \n\n\n# BONUS\nIf you've made it all the way here and still want more to do ...\nPlot the global temperature anomoly as a map! This is a combination of the past 3 plots put all together... 
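\n\nIf you get stuck, here is one possible outline, sketched with the (assumed) variable names used earlier in this notebook -- treat it as a hint rather than the official solution:\n\n\n```python\n# Sketch: per-grid-point anomaly of a late-century mean relative to the 2006-2035 baseline\nbaseline_map = DS.tas.sel(time=slice('2006-01-01', '2035-12-31')).mean(dim='time')\nfuture_map = DS.tas.sel(time=slice('2089-01-01', '2099-12-31')).mean(dim='time')\nanomaly_map = future_map - baseline_map\n\nplt.figure(figsize=(10, 6))\nax = plt.axes(projection=ccrs.PlateCarree(central_longitude=255))\nax.coastlines()\ncf = ax.contourf(DS.lon, DS.lat, anomaly_map, levels=60, transform=ccrs.PlateCarree())\nplt.colorbar(cf, label='Temperature anomaly (K)')\nplt.title('2089-2099 mean minus 2006-2035 baseline')\nplt.show()\n```\n\n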
Good luck.\n\n\n```python\n#bonus\n```\n", "meta": {"hexsha": "9396eea99f333bfb378299753f655857d85c22e9", "size": 27921, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Projects/EarthsClimateModel/EarthsClimateModel.ipynb", "max_stars_repo_name": "psheehan/CIERA-HS-Program", "max_stars_repo_head_hexsha": "76f7f0ff994e74e646fa34bbb41c314bf7526e9b", "max_stars_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_stars_count": 2, "max_stars_repo_stars_event_min_datetime": "2019-06-25T02:36:49.000Z", "max_stars_repo_stars_event_max_datetime": "2020-06-09T21:44:41.000Z", "max_issues_repo_path": "Projects/EarthsClimateModel/EarthsClimateModel.ipynb", "max_issues_repo_name": "psheehan/CIERA-HS-Program", "max_issues_repo_head_hexsha": "76f7f0ff994e74e646fa34bbb41c314bf7526e9b", "max_issues_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Projects/EarthsClimateModel/EarthsClimateModel.ipynb", "max_forks_repo_name": "psheehan/CIERA-HS-Program", "max_forks_repo_head_hexsha": "76f7f0ff994e74e646fa34bbb41c314bf7526e9b", "max_forks_repo_licenses": ["Naumen", "Condor-1.1", "MS-PL"], "max_forks_count": 7, "max_forks_repo_forks_event_min_datetime": "2019-06-25T15:33:10.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-12T18:04:36.000Z", "avg_line_length": 31.5135440181, "max_line_length": 699, "alphanum_fraction": 0.5980086673, "converted": true, "num_tokens": 4213, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.29746995506106744, "lm_q2_score": 0.28457600421652673, "lm_q1q2_score": 0.08465281118574834}} {"text": "```python\nfrom IPython.display import HTML\n\ndef yt(url, width=500, height=None):\n \"\"\"\n Function to embed a youtube movie in a notebook\n \"\"\"\n \n if height is None:\n height = (9/16*width)\n \n url = url.replace(\"youtu.be\", 'youtube.com/embed')\n \n embed_code = \"\"\"\n \n \"\"\".format(width, height, url)\n \n return HTML(embed_code)\n```\n\n# Topic 02 - Physics\n\nWe'll go in some more details to understand what drives the entire thing\n\n# Mechanics\n\nClassical mechanics is typically the stepping stone before exploring more in physics. We will work our way up to electricity and electronics, but classical mechanics will be our starting point.\n\nWhat's also nice to note is how much of the ideas and concepts appear here for the very first time. The unity of all these things becomes more magical the more you notice it.\n\nThe particular approach here is pretty fundamental and I will discuss these topics more or less in a happy-go-lucky fashion. The Feynman lectures are freely available online, and are a nice introduction to get your concepts straight.\n\n**David Tong**\n\nAnother excellent reference, from a more theoretical point of view, are David Tong's lecture notes. Moreover, what we care about here is summarized in 15 pages, so that's surprisingly doable.\n\nCf. http://www.damtp.cam.ac.uk/user/tong/dynamics/one.pdf\n\nThis resource is particularly nice if you care about physics, since it connects what we will discuss below, to more modern frameworks in which people today tend to think about physics. I think David Tong did a truly amazing job there.\n\n\n## Motion\n\ncf. 
http://www.feynmanlectures.caltech.edu/I_08.html\n\n### Exercise\n\nGiven an equation of motion,\n\n$$s(t) = At^3 + Bt$$,\n\nwrite a function `approximate_velocity(t, delta_t)` that return the approximate velocity at time $t$, calculated by the formula,\n\n$$\nv = \\frac{\\Delta s}{\\Delta t} = \\frac{s(t + \\Delta t) - s(t)}{\\Delta t}\n$$\n\nAnd compare it to the actual derivative.\n\n\n```python\n# Write answer here\n```\n\n### Exercise (visual)\n\nPlot the actual velocity, and the approximate velocities, calculated with the functions you defined above.\n\n\n```python\n# Write answer here\n```\n\n### Exercise\n\nIf our robot car is driving at $v=250 \\frac{km}{hour}$, and jumps of a horizontal cliff of $h=75m$ high,\n\n1. How far will it jump?\n2. How long will it take until it hits the ground?\n3. Out of sheer surprise, a spectator (not pictured) standing at the side of the cliff drops his coffee mug of the side of the cliff at the exact same time as the car makes the jump. When will that mug hit the ground?\n\n\n```python\n%%html\n\n```\n\n\n\n\n\n\n\n```python\n# Code the solution here\n```\n\n## Conservations\n\n### Energy\n\ncf. \n- http://www.feynmanlectures.caltech.edu/I_01.html (atomic motion)\n- http://www.feynmanlectures.caltech.edu/I_04.html (concept)\n\n### Momentum\n\ncf. http://www.feynmanlectures.caltech.edu/I_10.html\n\n#### Exercise\n\nDisaster! Elon Musk is incredibly jealous of our progress and in a blind rage he decides to collide head-on with our self driving car. The parameters of the problem are;\n\n$$\n\\begin{align}\nm_{s3} = 500 g \\quad &v_{s3} = 4 m/s \\\\\nm_{Tesla} = 2250 kg \\quad &v_{Tesla}=120 km/h \\\\\nv_{Tesla-t2} = 110 km/h\n\\end{align}\n$$\n\nHow fast (and in which direction) will our car go after this horrible collusion?\n\n\n```python\n# Code solution here\n```\n\n## Forces\n\ncf.\n- http://www.feynmanlectures.caltech.edu/I_09.html (Newton)\n- http://www.feynmanlectures.caltech.edu/I_11.html (vectors again)\n- http://www.feynmanlectures.caltech.edu/I_12.html (forces)\n\n### Exercise\n\nExplain the difference between kinematics and dynamics.\n\n### Exercise\n\n- Write a function `next_time_step(x, v, a)` that takes in the position $x$, velocity $v$ and acceleration $a$, of a spring at time $t_n$ and return the same parameters (i.e. $x,v,a$) at one time step after ($t_{n+1}$)\n\n- Test your formula with inputs $x=0, v=2, a=3$\n\n\n```python\n# code solution here\n```\n\n## Work and Energy\n\ncf. \n- http://www.feynmanlectures.caltech.edu/I_13.html (part 01)\n- http://www.feynmanlectures.caltech.edu/I_14.html (part 02)\n\n## Rotation\n\ncf.\n- http://www.feynmanlectures.caltech.edu/I_18.html (2D rotation)\n- http://www.feynmanlectures.caltech.edu/I_19.html (CoM)\n- http://www.feynmanlectures.caltech.edu/I_20.html (rotation in space)\n\n## The Harmonic Oscillator\n\n_\"Understanding physics is understanding the harmonic oscillator, over and over again in different levels of abstraction\"_\n\nI cannot agree with that quote any more. It is true in the same way that you only realize after the fact. You'll have to experience it for yourselves, maybe on day, today, it suffices that you should know that this stupid oscillator is the key to many secrets. It is not about the pendulum.\n\ncf. http://www.feynmanlectures.caltech.edu/I_21.html\n\n# Electricity and Magnetism\n\nSince this is what we care about, we will spend some more time on this with regards to what we previously studied. 
The resources I provide here are essentially ordered from simple to complex. Needless to say, we will spiral through them in this order.\n\n**The most basic overview**\n\nCf. http://physicsforidiots.com/physics/electromagnetism/ for a basic introduction.\n\n**Khan Academy**\n\nKhan academy offers their content free of any charge on youtube, which is a pretty nifty thing of them to do. So, I will embed their vides inside this lecture notebook, and our discussion will be primarily focussed around these.\n\nAs a summarized reference, cf.\n\n - https://www.khanacademy.org/science/physics/electric-charge-electric-force-and-voltage\n - https://www.khanacademy.org/science/physics/circuits-topic\n - https://www.khanacademy.org/science/physics/magnetic-forces-and-magnetic-fields\n\n**Walter Lewins exploration**\n\nDisclaimer: Walter Lewin is a controversial figure. The internet will tell you all about it, feel free to look it up. Before the scandal he was a professor at MIT (he got fired after) where he gave a few introductory courses in physics (mechanics and electricty) that attained kind of a legend status. They're not easy, but definitely worth a watch. I'll mention relevant lectures as well, to go along with the topics.\n\nFor the full playlist, cf.\n\n- https://www.youtube.com/playlist?list=PLyQSN7X0ro2314mKyUiOILaOC2hk6Pc3j\n\nIt's really huge so that needs a summary. I will typically mention which of his lectures goes into the subjects we care about.\n\n**David Tong's lecture notes**\n\nFor a theoretical take on affairs, I have a strong preference towards David Tong's approach. You'll be warned, this is not easy, but in the end, it is not meant to be. This is meant to be true. That sounds presumptuous, and you'd be quite right, but at some point, you're at a point where you understand all the things that we have discussed so far, and yet this theoretical approach seems so novel. Why is that? Because the typical pedagogical explanations do not give you the full story. Once you are ready, by which I mean, once you have a good intuitive understanding, you are ready to walk to the cliff. The formulas and derivations explained in these theoretical texts are essentially all we -humans, that is- have figured out about electromagnetism. Once you understand this approach, in some way, you have mastered the subject fully. It is quite an investment to get to the cliff, with little or no practical payoffs, but the reward is in the journey, and the overview you acquire in the end is the cherry on top. I guess this is where the roads of physics and engineering really deviate.\n\nCf. http://www.damtp.cam.ac.uk/user/tong/em.html\n\n## Electricity\n\nFor the general, theoretical story: cf. http://www.damtp.cam.ac.uk/user/tong/em/el1.pdf\n\n### Electric Charges and Forces; Coulomb\n\nCf. https://www.khanacademy.org/science/physics/electric-charge-electric-force-and-voltage#charge-electric-force\n\nand\n\nLewin 1-2\n\n#### Exercise\n\nWrite a function `electric_force(q_one, q_two)` that takes in two electrical charges and return the electrical force between them.\n\n### Electric Fields\n\nCf. 
https://www.khanacademy.org/science/physics/electric-charge-electric-force-and-voltage#electric-field\n\nand\n\nLewin 2-3-4\n\n#### Exercise\n\nWrite a function `electric_force_due_to_field(E, q)` that gives the electric force that an electron with charge `q` will experience due to an electric field with charge $E$.\n\n\n```python\n# Code your solution here\n```\n\n#### Exercise (challenge!)\n\nGiven two charges at locations $l_1$ and $l_2$ both with an electric charge of $q=1 C$, calculate the electric field at location $l_3$. The parameters of this problem are;\n\n$$\n\\begin{align}\nl_1 = [0,0] \\\\\nl_2 = [0,10] \\\\\nl_3 = [3, 7]\n\\end{align}\n$$\n\nObviously, you'll need vectors as inputs and outputs here! So, time to apply your recently learned linear algebra!\n\n\n```python\n# Code solution here\n```\n\n### Electric Energy\n\nCf. https://www.khanacademy.org/science/physics/electric-charge-electric-force-and-voltage#electric-potential-voltage\n\nLewin 5-6-7\n\n\n```python\n\n```\n\n## Circuits\n\nCf. Lewin 8-9-10\n\n### Resistor Circuits\n\nCf. https://www.khanacademy.org/science/physics/circuits-topic#circuits-resistance\n\n### Capacitor Circuits\n\nCf. https://www.khanacademy.org/science/physics/circuits-topic#circuits-with-capacitors\n\n## Magnetism\n\nCf. Lewin and for the theoretical story, cf.\n\n- http://www.damtp.cam.ac.uk/user/tong/em/el2.pdf (David Tong's magnetostatics)\n- http://www.damtp.cam.ac.uk/user/tong/em/el3.pdf (David Tong's electrodynamics)\n\n### Magnetic forces and fields\n\nCf. https://www.khanacademy.org/science/physics/magnetic-forces-and-magnetic-fields#magnets-magnetic\n\n### Magnetic field from electricity\n\nCf. https://www.khanacademy.org/science/physics/magnetic-forces-and-magnetic-fields#magnetic-field-current-carrying-wire\n\n### Electric Motors\n\nOf course, this is a physical component that we happen to care deeply about. Motors will be obviously a super critical component of our system.\n\nCf. https://www.khanacademy.org/science/physics/magnetic-forces-and-magnetic-fields#electric-motors\n\n### Faraday\n\nThis will be sufficiently far on our journey in electricity and magnetism. In the end, both of these interactions unify, and electromagnetism is all that's left. If you care, we can go into that at some point, but to get our car running, I'm going to cut the 'official' content right here.\n\nCf. https://www.khanacademy.org/science/physics/magnetic-forces-and-magnetic-fields#magnetic-flux-faradays-law\n\n# Physical Objects of Interest\n\nSome more information on the physical things which will be important to us.\n\n## Actuators\n\n### Brushed Motor\n\nType one of DC motor.\n\nAs a reference, we can always look to [the wiki article](https://en.wikipedia.org/wiki/Brushed_DC_electric_motor).\n\nA second reference that I like is this (slightly old) MIT page http://lancet.mit.edu/motors/index.html\n\nOther good explanations are -of course- to be found on youtube;\n\n\n```python\nyt(\"https://youtu.be/LAtPHANEfQo\")\n```\n\n\n\n\n\n\n\n\n\n\nFor some youtube weirdness, this guy is surprisingly accurate. 
Very weird style though.\n\n\n```python\nyt(\"https://youtu.be/yO9xIVv8ryc\")\n```\n\n\n\n\n\n\n\n\n\n\n\n```python\n\n```\n\n### Brushless Motor\n\nType two of DC motor\n\n\n```python\nyt(\"https://youtu.be/bCEiOnuODac\")\n```\n\n\n\n\n\n\n\n\n\n\n### Servo Motor\n\nMotor that allows for controlled movements\n\n\n```python\nyt(\"https://youtu.be/ditS0a28Sko\")\n```\n\n\n\n\n\n\n\n\n\n\nAnother resource (on a channel we will use more when talking about electronics)\n\n\n```python\nyt(\"https://youtu.be/J8atdmEqZsc\")\n```\n\n\n\n\n\n\n\n\n\n", "meta": {"hexsha": "31858654fa4c91b653d1e1b875a8ce8749578c77", "size": 22011, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "note/02 - Physics.ipynb", "max_stars_repo_name": "eliavw/s3-2019", "max_stars_repo_head_hexsha": "d0368ab9a6a5cecff96083b79838767728063f46", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "note/02 - Physics.ipynb", "max_issues_repo_name": "eliavw/s3-2019", "max_issues_repo_head_hexsha": "d0368ab9a6a5cecff96083b79838767728063f46", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "note/02 - Physics.ipynb", "max_forks_repo_name": "eliavw/s3-2019", "max_forks_repo_head_hexsha": "d0368ab9a6a5cecff96083b79838767728063f46", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 27.897338403, "max_line_length": 1105, "alphanum_fraction": 0.5599472991, "converted": true, "num_tokens": 2930, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4455295497638851, "lm_q2_score": 0.18952109361853836, "lm_q1q2_score": 0.0844372475106265}} {"text": "\n*This notebook contains course material from [CBE30338](https://jckantor.github.io/CBE30338)\nby Jeffrey Kantor (jeff at nd.edu); the content is available [on Github](https://github.com/jckantor/CBE30338.git).\nThe text is released under the [CC-BY-NC-ND-4.0 license](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode),\nand code is released under the [MIT license](https://opensource.org/licenses/MIT).*\n\n\n< [Getting Started](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/01.00-Getting-Started.ipynb) | [Contents](toc.ipynb) | [Python Basics](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/01.02-Python-Basics.ipynb) >

\n\n# Getting Started with Python and Jupyter Notebooks\n\n## Summary\n\nThe purpose of this [Jupyter Notebook](http://jupyter.org/) is to get you started using Python and Jupyter Notebooks for routine chemical engineering calculations. This introduction assumes this is your first exposure to Python or Jupyter notebooks.\n\n## Step 0: Gain Executable Access to Jupyter Notebooks\n\nJupyter notebooks are documents that can be viewed and executed inside any modern web browser. Since you're reading this notebook, you already know how to view a Jupyter notebook. The next step is to learn how to execute computations that may be embedded in a Jupyter notebook.\n\nTo execute Python code in a notebook you will need access to a Python kernal. A kernal is simply a program that runs in the background, maintains workspace memory for variables and functions, and executes Python code. The kernal can be located on the same laptop as your web browser or located in an on-line cloud service. \n\n**Important Note Regarding Versions** There are two versions of Python in widespread use. Version 2.7 released in 2010, which was the last release of the 2.x series. Version 3.5 is the most recent release of the 3.x series which represents the future direction of language. It has taken years for the major scientific libraries to complete the transition from 2.x to 3.x, but it is now safe to recommend Python 3.x for widespread use. So for this course be sure to use latest verstion, currently 3.6, of the Python language.\n\n### Using Jupyter/Python in the Cloud\n\nThe easiest way to use Jupyter notebooks is to sign up for a free or paid account on a cloud-based service such as [Wakari.io](https://www.wakari.io/) or [SageMathCloud](https://cloud.sagemath.com/). You will need continuous internet connectivity to access your work, but the advantages are there is no software to install or maintain. All you need is a modern web browser on your laptop, Chromebook, tablet or other device. Note that the free services are generally heavily oversubscribed, so you should consider a paid account to assure access during prime hours.\n\nThere are also demonstration sites in the cloud, such as [tmpnb.org](https://tmpnb.org/). These start an interactive session where you can upload an existing notebook or create a new one from scratch. Though convenient, these sites are intended mainly for demonstration and generally quite overloaded. More significantly, there is no way to retain your work between sessions, and some python functionality is removed for security reasons.\n\n### Installing Jupyter/Python on your Laptop\n\nFor regular off-line use you should consider installing a Jupyter Notebook/Python environment directly on your laptop. This will provide you with reliable off-line access to a computational environment. This will also allow you to install additional code libraries to meet particular needs. \n\nChoosing this option will require an initial software installation and routine updates. For this course the recommended package is [Anaconda](https://store.continuum.io/cshop/anaconda/) available from [Continuum Analytics](http://continuum.io/). Downloading and installing the software is well documented and easy to follow. Allow about 10-30 minutes for the installation depending on your connection speed. \n\nAfter installing be sure to check for updates before proceeding further. 
With the Anaconda package this is done by executing the following two commands in a terminal window:\n\n > conda update conda\n > conda update anaconda\n\nAnaconda includes an 'Anaconda Navigator' application that simplifies startup of the notebook environment and manage the update process.\n\n## Step 1: Start a Jupyter Notebook Session\n\nIf you are using a cloud-based service a Jupyter session will be started when you log on. \n\nIf you have installed a Jupyter/Python distribution on your laptop then you can open a Jupyter session in one of two different ways:\n\n* Use the Anaconda Navigator App, or \n* open a terminal window on your laptop and execute the following statement at the command line:\n\n > jupyter notebook\n\nEither way, once you have opened a session you should see a browser window like this:\n\n\n\nAt this point the browser displays a list of directories and files. You can navigate amoung the directories in the usual way by clicking on directory names or on the 'breadcrumbs' located just about the listing. \n\nJupyter notebooks are simply files in a directory with a `.ipynb` suffix. They can be stored in any directory including Dropbox or Google Drive. Upload and create new Jupyter notebooks in the displayed directory using the appropriate buttons. Use the checkboxes to select items for other actions, such as to duplicate, to rename, or to delete notebooks and directories.\n\n* select one of your existing notebooks to work on,\n* start a new notebook by clicking on the `New Notebook` button, or \n* import a notebook from another directory by dragging it onto the list of notebooks.\n\nAn IPython notebook consists of cells that hold headings, text, or python code. The user interface is relatively self-explanatory. Take a few minutes now to open, rename, and save a new notebook. \n\nHere's a quick video overview of Jupyter notebooks.\n\n\n```python\nfrom IPython.display import YouTubeVideo\nYouTubeVideo(\"HW29067qVWk\",560,315,rel=0)\n```\n\n\n\n\n\n\n\n\n\n\n## Step 2: Simple Calculations with Python\n\nPython is an elegant and modern language for programming and problem solving that has found increasing use by engineers and scientists. In the next few cells we'll demonstrate some basic Python functionality.\n\n### Basic Arithmetic Operations\n\nBasic arithmetic operations are built into the Python langauge. Here are some examples. In particular, note that exponentiation is done with the \\*\\* operator.\n\n\n```python\na = 12\nb = 2\n\nprint(a + b)\nprint(a**b)\nprint(a/b)\n```\n\n 14\n 144\n 6.0\n\n\n### Python Libraries\n\nThe Python language has only very basic operations. Most math functions are in various math libraries. The `numpy` library is convenient library. This next cell shows how to import `numpy` with the prefix `np`, then use it to call a common mathematical functions.\n\n\n```python\nimport numpy as np\n\n# mathematical constants\nprint(np.pi)\nprint(np.e)\n\n# trignometric functions\nangle = np.pi/4\nprint(np.sin(angle))\nprint(np.cos(angle))\nprint(np.tan(angle))\n```\n\n 3.141592653589793\n 2.718281828459045\n 0.707106781187\n 0.707106781187\n 1.0\n\n\n### Working with Lists\n\nLists are a versatile way of organizing your data in Python. Here are some examples, more can be found on [this Khan Academy video](http://youtu.be/zEyEC34MY1A).\n\n\n```python\nxList = [1, 2, 3, 4]\nxList\n```\n\n\n\n\n [1, 2, 3, 4]\n\n\n\nConcatentation is the operation of joining one list to another. 
\n\n\n```python\n# Concatenation\nx = [1, 2, 3, 4];\ny = [5, 6, 7, 8];\n\nx + y\n```\n\n\n\n\n [1, 2, 3, 4, 5, 6, 7, 8]\n\n\n\nSum a list of numbers\n\n\n```python\nnp.sum(x)\n```\n\n\n\n\n 10\n\n\n\nAn element-by-element operation between two lists may be performed with \n\n\n```python\nprint(np.add(x,y))\nprint(np.dot(x,y))\n```\n\n [ 6 8 10 12]\n 70\n\n\nA for loop is a means for iterating over the elements of a list. The colon marks the start of code that will be executed for each element of a list. Indenting has meaning in Python. In this case, everything in the indented block will be executed on each iteration of the for loop. This example also demonstrates string formatting.\n\n\n```python\nfor x in xList:\n print(\"sin({0}) = {1:8.5f}\".format(x,np.sin(x)))\n```\n\n sin(1) = 0.84147\n sin(2) = 0.90930\n sin(3) = 0.14112\n sin(4) = -0.75680\n\n\n### Working with Dictionaries\n\nDictionaries are useful for storing and retrieving data as key-value pairs. For example, here is a short dictionary of molar masses. The keys are molecular formulas, and the values are the corresponding molar masses.\n\n\n```python\nmw = {'CH4': 16.04, 'H2O': 18.02, 'O2':32.00, 'CO2': 44.01}\nmw\n```\n\n\n\n\n {'CH4': 16.04, 'CO2': 44.01, 'H2O': 18.02, 'O2': 32.0}\n\n\n\nWe can a value to an existing dictionary.\n\n\n```python\nmw['C8H18'] = 114.23\nmw\n```\n\n\n\n\n {'C8H18': 114.23, 'CH4': 16.04, 'CO2': 44.01, 'H2O': 18.02, 'O2': 32.0}\n\n\n\nWe can retrieve a value from a dictionary.\n\n\n```python\nmw['CH4']\n```\n\n\n\n\n 16.04\n\n\n\nA for loop is a useful means of interating over all key-value pairs of a dictionary.\n\n\n```python\nfor species in mw.keys():\n print(\"The molar mass of {:7.2f}\".format(species, mw[species]))\n```\n\n C8H18 114.23\n CH4 16.04\n CO2 44.01\n H2O 18.02\n O2 32.00\n\n\n\n```python\nfor species in sorted(mw, key = mw.get):\n print(\" {:<8s} {:>7.2f}\".format(species, mw[species]))\n```\n\n CH4 16.04\n H2O 18.02\n O2 32.00\n CO2 44.01\n C8H18 114.23\n\n\n### Plotting with Matplotlib\n\nImporting the `matplotlib.pyplot` library gives IPython notebooks plotting functionality very similar to Matlab's. Here are some examples using functions from the \n\n\n```python\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nx = np.linspace(0,10)\ny = np.sin(x)\nz = np.cos(x)\n\nplt.plot(x,y,'b',x,z,'r')\nplt.xlabel('Radians');\nplt.ylabel('Value');\nplt.title('Plotting Demonstration')\nplt.legend(['Sin','Cos'])\nplt.grid()\n```\n\n\n```python\nplt.plot(y,z)\nplt.axis('equal')\n```\n\n\n```python\nplt.subplot(2,1,1)\nplt.plot(x,y)\nplt.title('Sin(x)')\n\nplt.subplot(2,1,2)\nplt.plot(x,z)\nplt.title('Cos(x)')\n```\n\n### Solve Equations using Sympy Library\n\nOne of the best features of Python is the ability to extend it's functionality by importing special purpose libraries of functions. 
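\n\nAs a tiny warm-up (a minimal, hypothetical example -- the gas-law problem below is the real demonstration):\n\n\n```python\nimport sympy as sym\n\nx = sym.symbols('x')\nprint(sym.solve(sym.Eq(x**2 + 3*x, 10), x))   # [-5, 2]\n```\n\n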
Here we demonstrate the use of a symbolic algebra package [`Sympy`](http://sympy.org/en/index.html) for routine problem solving.\n\n\n```python\nimport sympy as sym\n\nsym.var('P V n R T');\n\n# Gas constant\nR = 8.314 # J/K/gmol\nR = R * 1000 # J/K/kgmol\n\n# Moles of air\nmAir = 1 # kg\nmwAir = 28.97 # kg/kg-mol\nn = mAir/mwAir # kg-mol\n\n# Temperature\nT = 298\n\n# Equation\neqn = sym.Eq(P*V,n*R*T)\n\n# Solve for P \nf = sym.solve(eqn,P)\nprint(f[0])\n\n# Use the sympy plot function to plot\nsym.plot(f[0],(V,1,10),xlabel='Volume m**3',ylabel='Pressure Pa')\n```\n\n## Step 3: Where to Learn More\n\nPython offers a full range of programming language features, and there is a seemingly endless range of packages for scientific and engineering computations. Here are some suggestions on places you can go for more information on programming for engineering applications in Python.\n\n### Introduction to Python for Science\n\nThis excellent introduction to python is aimed at undergraduates in science with no programming experience. It is free and available at the following link.\n\n* [Introduction to Python for Science](https://github.com/djpine/pyman)\n\n### Tutorial Introduction to Python for Science and Engineering\n\nThe following text is licensed by the Hesburgh Library for use by Notre Dame students and faculty only. Please refer to the library's [acceptable use policy](http://library.nd.edu/eresources/access/acceptable_use.shtml). Others can find it at [Springer](http://www.springer.com/us/book/9783642549588) or [Amazon](http://www.amazon.com/Scientific-Programming-Computational-Science-Engineering/dp/3642549586/ref=dp_ob_title_bk). Resources for this book are available on [github](http://hplgit.github.io/scipro-primer/).\n\n* [A Primer on Scientific Programming with Python (Fourth Edition)](http://link.springer.com.proxy.library.nd.edu/book/10.1007/978-3-642-54959-5) by Hans Petter Langtangen. Resources for this book are available on [github](http://hplgit.github.io/scipro-primer/).\n\npycse is a package of python functions, examples, and document prepared by John Kitchin at Carnegie Mellon University. It is a recommended for its coverage of topics relevant to chemical engineers, including a chapter on typical chemical engineering computations. \n\n* [pycse - Python Computations in Science and Engineering](https://github.com/jkitchin/pycse/blob/master/pycse.pdf) by John Kitchin at Carnegie Mellon. 
This is a link into the the [github repository for pycse](https://github.com/jkitchin/pycse), click on the `Raw` button to download the `.pdf` file.\n\n### Interative learning and on-line tutorials\n\n* [Code Academy on Python](http://www.codecademy.com/tracks/python)\n* [Khan Academy Videos on Python Programming](https://www.khanacademy.org/science/computer-science-subject/computer-science)\n* [Python Tutorial](http://docs.python.org/2/tutorial/)\n* [Think Python: How to Think Like a Computer Scientist](http://www.greenteapress.com/thinkpython/html/index.html)\n* [Engineering with Python](http://www.engineeringwithpython.com/)\n\n### Official documentation, examples, and galleries\n\n* [Notebook Examples](https://github.com/ipython/ipython/tree/master/examples/notebooks)\n* [Notebook Gallery](https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks)\n* [Official Notebook Documentation](http://ipython.org/ipython-doc/stable/interactive/notebook.html)\n* [Matplotlib](http://matplotlib.org/index.html) \n\n\n```python\n\n```\n\n\n< [Getting Started](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/01.00-Getting-Started.ipynb) | [Contents](toc.ipynb) | [Python Basics](http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/01.02-Python-Basics.ipynb) >

\n", "meta": {"hexsha": "4fc38c854550b6e4eb4761121718694800573ac9", "size": 139296, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Mathematics/Mathematical Modeling/01.01-Getting-Started-with-Python-and-Jupyter-Notebooks.ipynb", "max_stars_repo_name": "okara83/Becoming-a-Data-Scientist", "max_stars_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Mathematics/Mathematical Modeling/01.01-Getting-Started-with-Python-and-Jupyter-Notebooks.ipynb", "max_issues_repo_name": "okara83/Becoming-a-Data-Scientist", "max_issues_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Mathematics/Mathematical Modeling/01.01-Getting-Started-with-Python-and-Jupyter-Notebooks.ipynb", "max_forks_repo_name": "okara83/Becoming-a-Data-Scientist", "max_forks_repo_head_hexsha": "f09a15f7f239b96b77a2f080c403b2f3e95c9650", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 2, "max_forks_repo_forks_event_min_datetime": "2022-02-09T15:41:33.000Z", "max_forks_repo_forks_event_max_datetime": "2022-02-11T07:47:40.000Z", "avg_line_length": 166.2243436754, "max_line_length": 30874, "alphanum_fraction": 0.8902911785, "converted": true, "num_tokens": 3884, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.32082131381216084, "lm_q2_score": 0.26284183159693775, "lm_q1q2_score": 0.0843252617377243}} {"text": "```python\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./styles/custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()\n```\n\n\n\n\n\n\n\n\n\n### BEFORE YOU DO ANYTHING...\nIn the terminal:\n1. Navigate to __inside__ your ILAS_Python repository.\n2. __COMMIT__ any un-commited work on your personal computer.\n3. __PULL__ any changes *you* have made using another computer.\n4. __PULL__ textbook updates (including homework answers).\n\n1. __Open Jupyter notebook:__ Start >> Programs (\u3059\u3079\u3066\u306e\u30d7\u30ed\u30b0\u30e9\u30e0) >> Programming >> Anaconda3 >> JupyterNotebook\n1. __Navigate to the ILAS_Python folder__. \n1. __Open today's seminar__ by clicking on 5_Functions.\n\n# Functions\n\n# Lesson Goal\n\nTo encapsulate the code you have been writing as Python functions to be called within your programs. \n\n\n\n# Objectives\n\n- Learn to write a user-defined Python function. \n- Pass arguments to a function to give it inputs.\n- Use global and local variables within functions. \n- Generate sequences of values recursively using functions [and generators].\n- [Generating functions using an alternative method (lamda functions)].\n\nWhy we are studying this:\n\n - To produce code that is shorter and more concise. \n - To produce code that we can re-use making the code we write less repetitive. \n - To quickly apply our code to multiple (sometimes very large numbes of) variables. \n - To have less code to \"debug\", reducing our risk of errors. \n\n\n Lesson structure:\n - What is a function? 
(Anatomy of a function)\n - Function arguments\n - Scope\n - Recursive functions\n - [Extension topics (Generators and Callbacks)]\n - Review exercises \n - Summary\n\nLet\u2019s start by finding out what a function is\u2026\n\n## What is a function?\n\nFunctions are one of the most important concepts in computing. \n\nIn mathematics, a function is a relation between __inputs__ and a set of permissible __outputs__.\n\nExample: The function relating $x$ to $x^2$ is:\n$$ \nf(x) = x \\cdot x\n$$\n\nIn programming, a function behaves in a similar way. \n\n__Function__: A named section of a code that performs a specific task. \n\n\n\nFunctions can (although do no always) take data as __inputs__ and return __outputs__.\n\n \nA simple function example:\n - Inputs: the coordinates of the vertices of a triangle.\n - Output: the area of the triangle. \n\nYou are already familiar with some *built in* Python functions...\n\n\n\n `print()` takes the __input__ in the parentheses and __outputs__ a visible representation.\n \n\n\n```python\nprint(\"Today we will learn about functions\")\n```\n\n Today we will learn about functions\n\n\n `len()` takes a data structure as __input__ in the parentheses and __outputs__ the number of items in the data structure (in one direction).\n \n\n\n```python\nprint(len(\"Today we will learn about functions\"))\n```\n\n 35\n\n\n`sorted()` takes a data structure as __input__ in the parentheses and __outputs__ the data structure sorted by a rule determined by the data type.\n \n\n\n```python\nprint(sorted(\"Today we will learn about functions\"))\n```\n\n [' ', ' ', ' ', ' ', ' ', 'T', 'a', 'a', 'a', 'b', 'c', 'd', 'e', 'e', 'f', 'i', 'i', 'l', 'l', 'l', 'n', 'n', 'n', 'o', 'o', 'o', 'r', 's', 't', 't', 'u', 'u', 'w', 'w', 'y']\n\n\nMost Python programs contain a number of *custom functions*. \n\nThese are functions, created by the programmer (you!) to perform a specific task.\n\n## The Anatomy of a Function\n\nHere is a python function in pseudocode:\n \n def function_name():\n code to execute\n more code to execute\n \n\n\n\n### Function Checklist\n\nA custom function is __declared__ using:\n1. The definition keyword, __`def`__.\n1. A __function name__ of your choice.\n1. __() parentheses__ which optionally contain __arguments__\n1. __: a colon__ character\n1. The __body code__ to be executed when the function is *called*.\n1. An optional __return__ statement \n\n\n\nBelow is an example of a Python function.\n\n\n\n```python\ndef sum_and_increment(a, b): \n c = a + b + 1\n return c\n```\n\n__Function name:__ `sum_and_increment`\n\n__Arguments:__ \n
`a` and `b`\n
Function inputs are placed within () parentheses.\n\n ```python\n def sum_and_increment(a, b): \n \n ```\n \n\n\n\n__Body:__ \n
The code to be executed when the function is called. \n
Indented by four spaces. \n
Indentation happens automatically. \n
Code indented to the same level (or less) as `def` falls __outside__ of the function body.\n\n ```python\n def sum_and_increment(a, b): \n c = a + b + 1\n\n ```\n\n__`return`__ statement: \n
Defines what result the function should return. \n
Often placed at the end of a function.\n
A function doesn't always include a return statement.\n\n\n ```python\n def sum_and_increment(a, b): \n c = a + b + 1\n return c\n \n ```\n\n\n### The Documentation String\nIt is best practise to include a *documentation string* (\"docstring\").\n - Describes __in words__ what the function does.\n - Begins and end with `\"\"\"`.\n - *Optional* - however it makes your code much more understandadble. \n\n\n```python\ndef sum_and_increment(a, b):\n \"\"\"\"\n Return the sum of a and b, plus 1\n \"\"\"\n c = a + b + 1\n return c\n\n```\n\nTo execute (*call*) the function, type:\n - a variable name to store the output (`n` in the example below)\n - the function name\n - any arguments in parentheses\n\n\n```python\ndef sum_and_increment(a, b):\n \"\"\"\"\n Return the sum of a and b, plus 1\n \"\"\"\n c = a + b + 1\n return c\n\nm = sum_and_increment(3, 4)\nprint(m) # Expect 8\n```\n\n 8\n\n\n\n```python\nm = 10\nn = sum_and_increment(m, m)\nprint(n) # Expect 21\n```\n\n 21\n\n\n\n```python\nl = 5\nm = 6\nn = sum_and_increment(m, l)\nprint(n) \n```\n\n 12\n\n\n__Example:__ a function that:\n- does not take any arguments\n- does not return any variables.\n\n\n```python\ndef print_message():\n print(\"The function 'print_message' has been called.\")\n\nprint_message()\n```\n\n The function 'print_message' has been called.\n\n\nFunctions are good for repetitive tasks. \n\nComputer code can be re-used multiple times with different input data. \n\nRe-using code reduces the risk of making mistakes or errors. \n\n\n\nBelow is a simple example of a function using `if` and `else` control statements.`\n\n\n\n\n```python\ndef process_value(x):\n \"Return a value that depends on the input value x \"\n if x > 10:\n return 0\n elif x > 5:\n return x*x\n elif x > 0:\n return x**3\n else:\n return x\n```\n\nBy placing these in a function we can avoid duplicating the `if-elif-else` statement every time we want to use it. \n\n\n\n```python\nprint(process_value(3))\n```\n\n 27\n\n\nBelow is a simple example of a function being 'called' numerous times from inside a `for` loop.\n\n\n```python\n# calling the function within a for loop...\nfor x in range(3):\n print(process_value(x))\n\nprint()\n \n# is more concise than... \nfor x in range(3):\n if x > 10:\n print(0)\n elif x > 5:\n print(x*x)\n elif x > 0:\n print(x**3)\n else:\n print(x)\n \n# but gives the same result:\n```\n\n 0\n 1\n 8\n \n 0\n 1\n 8\n\n\nThe more times we want to use the function within a program, the more useful this becomes.\n\nFunctions can make programs more readable.
\n\n__Example:__
\nA function called `sin`, that computes and returns $\\sin(x)$,
\nis far more readable and less prone to error than writing
an equation for $\\sin(x)$ every time we want to use it. \n\n## Function Arguments\n\nIt is important to input arguments in the correct order when calling a function. \n\n\n\n\n```python\ndef sum_and_increment(a, b):\n \"\"\"\"\n Return the sum of a and b, plus 1\n \"\"\"\n c = a + b + 1\n return c\n```\n\nThe function `sum_and_increment` adds:\n - the first argument, `a`\n - ...to the second argument `b`\n - ...to 1.\n \nIf the order of a and b is switched, the result is the same.\n\n\n\n```python\nprint(sum_and_increment(3,4))\nprint(sum_and_increment(4,3))\n```\n\n 8\n 8\n\n\nHowever, if we subtract one argument from the other, the result depends on the input order: \n\n\n```python\ndef subtract_and_increment(a, b):\n \"\"\"\"\n Return a minus b, plus 1\n \"\"\"\n c = a - b + 1\n return c\n\nprint(subtract_and_increment(3,4))\nprint(subtract_and_increment(4,3))\n```\n\n 0\n 2\n\n\n### Named Arguments\n\nIt can be easy to make a mistake in the input order. \n\nThis can lead to a bug. \n\nWe can reduce this risk by giving inputs as *named* arguments. \n\nNamed arguments also enhances program readability. \n\nWhen we use named arguments, the order of input does not matter. \n\n\n```python\ndef subtract_and_increment(a, b):\n \"Return a minus b, plus 1\"\n c = a - b + 1\n return c\n\nalpha = 3\nbeta = 4\n\nprint(subtract_and_increment(a=alpha, b=beta))\nprint(subtract_and_increment(b=beta, a=alpha)) \n```\n\n 0\n 0\n\n\n### What can be passed as a function argument?\n\n*Object* types that can be passed as arguments to functions include:\n- single variables (`int`, `float`...)\n- data structures (`list`, `tuple`, `dict`...)\n- other functions \n\n\n\n\n### Data Structures as Function Arguments. \n__Indexing__ can be useful when data structures are used as function arguments.\n\n__Example: Area of a Triangle__ \nThe coordinates of the vertices of a triangle are $(x_0, y_0)$, $(x_1, y_1)$ and $(x_2, y_2)$.\n\n \n\nThe area $A$ of the triangle is given by:\n\n$$\nA = \\left| \\frac{x_0(y_1 - y_2) + x_1(y_2 - y_0) + x_2(y_0 - y_1)}{2} \\right|\n$$\n\n\nThe function `triangle_area` takes three arguments:\n - a __tuple__ containig the coordinates of vertex 0\n - a __tuple__ containig the coordinates of vertex 1\n - a __tuple__ containig the coordinates of vertex 2\n \n\nThe individual elements of the tuples are referenced within the function by *indexing*. \n\n\n```python\nvtex0 = (1, 1) #(x, y) coordinates of vertex 0\nvtex1 = (6, 2) #(x, y) coordinates of vertex 1\nvtex2 = (3, 4) #(x, y) coordinates of vertex 2\n\ndef triangle_area(v0, v1, v2):\n \n A = abs( (v0[0] * (v1[1] - v2[1]) +\n v1[0] * (v2[1] - v0[1]) +\n v2[0] * (v0[1] - v1[1])) / 2 )\n \n return A\n\nprint(triangle_area(vtex0, vtex1, vtex2))\n```\n\n 6.5\n\n\n__Data Type:__\n
By organising the 6 variables into 3 pairs (tuples), rather than expressing them as individual values, we are less likely to make a mistake.\n
e.g. entering variables in the wrong order such as putting x and y the wrong way round.\n\n__Readability:__ \n
The equation for A is organised onto 3 lines to make it easier to read. \n
We can make the function easier to understand by assigning the index values to descriptive names, `x` and `y`.\n\n__Readability:__ \n
We can also use local variables are used to limit the scope, allowing `x` and `y` to be used as names for variable outside of the function. \n\n\n```python\nvtex0 = (1, 1) #(x, y) coordinates of vertex 0\nvtex1 = (6, 2) #(x, y) coordinates of vertex 1\nvtex2 = (3, 4) #(x, y) coordinates of vertex 2\n\ndef triangle_area(v0, v1, v2):\n x, y = 0, 1\n \n A = abs( (v0[x] * (v1[y] - v2[y]) +\n v1[x] * (v2[y] - v0[y]) +\n v2[x] * (v0[y] - v1[y])) / 2 )\n \n return A\n\nprint(triangle_area(vtex0, vtex1, vtex2))\n```\n\n 6.5\n\n\n\n### Functions as Function Arguments. \n__Example:__ The function `is_positive` checks if the value of a function $f$, evaluated at $x$, is positive:\n\n\n```python\ndef is_positive(f, x):\n \"Checks if the function value f(x) is positive\"\n return f(x) > 0\n \ndef f0(x):\n \"Computes x^2 - 1\"\n return x*x - 1\n\ndef f1(c):\n \"Computes -c^2 + 2c + 1\"\n return -c*c + 2*c + 1\n \n# Value of x to test\nx = 2\n\n# Test function f0\nprint(is_positive(f0, x))\n\n# Test function f1\nprint(is_positive(f1, x))\n```\n\n True\n False\n\n\n__Note:__ The order that we *define* the functions does not effect the output. \n\n\n### Default / Keyword Arguments\n\n'Default' or 'keyword' arguments have a default initial value.\n\nThe default value can be overridden when the function is called. \n\nIn some cases it just saves the programmer effort - they can write less code. \n\n\n\n\n\nIn other cases default arguments a function to be applied to a wider range of problems. \n\n\n__Example: A function that takes either two OR three input arguments.__\n\nThis simple function to express x, y (and z) inputs as a list.
(e.g. coordinates to define a position vector). \n\nWe can use the same function for 2 inputs (x and y coordinates) and 3 inputs (x, y and z coordinates). \n\nThe default value for the z component is zero.\n\nThe *default* or *keyword* argument z = 0.0 is overridden if a z coordinate is included when the function is called. \n\n\n```python\ndef vector_3D(x, y, z=0.0):\n \"\"\"\n Expresses 2D or 3D vector in 3D coordinates, as a list.\n \"\"\"\n return[x, y, z]\n```\n\n__Important Note:__ Non-default (*positional*) arguments must always appear __before__ default (*keyword*) arguments in the function definition). \n\n\n```python\nprint(vector_3D(2.0, 1.5, 6.0)) \nprint(vector_3D(2.0, 1.5))\n\n```\n\n [2.0, 1.5, 6.0]\n [2.0, 1.5, 0.0]\n\n\n\n__Example: A function that takes either one OR two OR three input arguments.__\n\nThe default values for the y and z components are both zero.\n\n\n```python\ndef vector_3D(x, y=0.0, z=0.0):\n \"\"\"\n Expresses 1D, 2D or 3D vector in 3D coordinates, as a list.\n \"\"\"\n return [x, y, z]\n```\n\n\n```python\nprint(vector_3D(2.0, 1.5, 6.0))\nprint(vector_3D(2.0, 1.5))\nprint(vector_3D(2.0))\n```\n\n [2.0, 1.5, 6.0]\n [2.0, 1.5, 0.0]\n [2.0, 0.0, 0.0]\n\n\n__Example: A particle moving with constant acceleration.__\n
\nFind the position $r$ of a particle with:\n - initial position $r_{0}$ \n - initial velocity $v_{0}$\n - constant acceleration $a$. \n\nFrom the equations of motion, the position $r$ at time $t$ is given by: \n\n$$\nr(t) = r(0) + v(0) t + \\frac{1}{2} a t^{2}\n$$\n\n\n\n__A particle moving with constant acceleration.__\n
Example: An object falling from rest, due to gravity. \n
(*particle*: neglect air resistance)\n\n \n\n\n\n - $a = g = -9.81$ m s$^{-2}$ is sufficiently accurate *in most cases*. \n - $v(0) = 0$ in __every__ case: \"...falling from rest...\"\n - $r(0) =$ the height from which the object falls. \n - $t = $ the time at which we want to find the objects position.\n \nWe can use keyword arguments for the velocity `v0` and the acceleration `a`:\n\n\n```python\ndef position(t, r0, v0=0.0, a=-9.81):\n \"\"\"\n Computes position of an accelerating particle.\n \"\"\"\n return r0 + (v0 * t) + (0.5 * a * t**2)\n```\n\n__Note__ that we __do not__ need to include the default variables in the brackets when calling the function. \n\n\n\n\n```python\ndef position(t, r0, v0=0.0, a= -9.81):\n \"\"\"\n Computes position of an accelerating particle.\n \"\"\"\n return r0 + (v0 * t) + (0.5 * a * t**2)\n\n# Position at t = 0.2s, when dropped from r0 = 1m\np = position(0.2, 1.0)\n\nprint(\"height =\", p, \"m\")\n```\n\n height = 0.8038 m\n\n\nAt the equator, the acceleration due to gravity is lower, $a= g = -9.78$ m s$^{-2}$\n\nFor some calculations, this makes a significnat difference. \n\nIn this case, we simply override the default value for acceleration: \n\n\n```python\n# Position at t = 0.2s, when dropped from r0 = 1m\np = position(0.2, 1.0)\n\nprint(\"height =\", p, \"m\")\n\n# Position at t = 0.2s, when dropped from r0 = 1m at the equator\np = position(0.2, 1, 0.0, -9.78)\n\nprint(\"height =\", p, \"m\")\n```\n\n height = 0.8038 m\n height = 0.8044 m\n\n\n__Note__ that we have *also* entered the initial velocity, `v`.\n\nAs the value to overide is the 4th argument, the 3rd argument must also be input. \n\nThe function interprets:\n\n p = position(0.2, 1, -9.78)\n \nas\n\n p = position(0.2, 1, -9.78 -9.81)\n \n\n\n\nManually inputting an argument, `v0` when we want to use its default is a potential source of error. \n\nWe may accidentally input the default value of `v0` incorrectly, causing a bug. \n\nA more robust solution is to specify the acceleration by using a named argument. \n\n\n```python\n# Position at t = 0.2s, when dropped from r0 = 1m at the equator\np = position(0.2, 1, 0.0, -9.78)\n\nprint(\"height =\", p, \"m\")\n```\n\n height = 0.8044 m\n\n\nThe program overwrites the correct default value.\n\nWe do not have to specify `v`. \n\n#### Forcing Default Arguments\n\nAs an additional safety measure, you can force arguments to be enetered as named arguments by preceding them with a * star in the function definition.\n\nAll arguments after the star must be entered as named arguments.\n\nBelow is an example:\n\n\n```python\n# redefine position function, forcing keyword arguments\ndef position(t, r0, *, v0=0.0, a= -9.81):\n \"\"\"\n Computes position of an accelerating particle.\n \"\"\"\n return r0 + (v0 * t) + (0.5 * a * t**2)\n\n# Now entering default arguments without a keyword retruns an error\n# p = position(0.2, 1.0, 3)\n\np = position(0.2, 1.0, v0=3)\n```\n\n__Try it yourself__\n\n__Hydrostatic Pressure \u9759\u6c34\u5727__\n\nThe hydrostatic pressure (Pa = Nm$^{-2}$ = kg m$^{-1}$s$^{-2}$) is the pressure on a submerged object due to the overlying fluid):\n\n$$\nP = \\rho g h\n$$\n\n$g$ = acceleration due to gravity, m s$^{-2}$\n
$\\rho $ = fluid density, kg m$^{-3}$\n
$h$ = height of the fluid above the object, m. \n\n\n\n\n\nIn the cell below, write a function that:\n - takes $g$, $\\rho$ and $h$ as __inputs__\n - returns (__outputs__) the hydrostatic pressure $P$\n \n\nAssume:\n
The function will mostly be used for calculating the hydrostatic pressure on objects submerged in __water__.\n
The acceleration due to gravity, $g = 9.81$ m s$^{-2}$ is sufficiently accurate *in most cases*.\n
(Note: acceleration due to gravity is positive in this example.)\n
The density of water, $\\rho_w$ = 1000 kg m$^{-3}$ is sufficiently accurate *in most cases*.\n\nTherefore use keyword arguments for `g` and `rho` in your function.\n\nRemember, keyword/default arguments should appear *after* non-default arguments.\n\nInclude a doc-string to say what your function does.\n\n\n```python\n# Function to compute hydrostatic pressure.\ndef hydro_pressure(h, g = 9.81, rho = 1000):\n \"\"\"\n This function computes the hydrostatic pressure for an object under height h of water\n \"\"\"\n return rho*g*h\n```\n\n__Call__ your function to find the hydrostatic pressure on an object, submerged in water, at a depth of 10m.\n\n\n\n\n```python\n# The hydrostatic pressure (Pa) on an object at a depth of 10m in WATER\nP = hydro_pressure(10)\nprint(P)\n```\n\nThen use a suitable value for `g` to find the hydrostatic pressure on an object submerged in water:\n- at a depth of 10m\n- at the equator\n\n\n\n\n```python\n# The hydrostatic pressure (Pa) on an object:\n# at a depth of 10m, at the equator \n```\n\nDue to it's salt content, seawater has a higher density, $\\rho_{sw}$ = 1022 kg m$^{-3}$.
\nFinally, find the hydrostatic pressure on an object:\n- submerged in __sea water__\n- at a depth of 10m\n- at the equator\n\n\n```python\n# The hydrostatic pressure (Pa) on an object:\n# at a depth of 10m, in SEA WATER, at the EQUATOR.\n```\n\n__Note__ \n
In the last seminar we looked at how to store multiple variables (e.g. vectors) as lists. \n
The functions above could be implemented more efficiently using lists or tuples, as in the sketch below. \n
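Here is one possible minimal sketch (the function name `hydro_pressure_fluid` and the tuples `water` and `seawater` are illustrative only, they are not part of the exercise):\n\n\n```python\n# A sketch: bundle the fluid properties (rho, g) into a single tuple argument\nwater = (1000, 9.81)       # fresh water density (kg m^-3), standard gravity (m s^-2)\nseawater = (1022, 9.78)    # sea water density, gravity at the equator\n\ndef hydro_pressure_fluid(h, fluid=water):\n    # hydrostatic pressure (Pa) at depth h (m) for a fluid given as a (rho, g) tuple\n    rho, g = fluid\n    return rho * g * h\n\nprint(hydro_pressure_fluid(10))            # fresh water, default fluid\nprint(hydro_pressure_fluid(10, seawater))  # sea water at the equator\n```\n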
We will look at this in more detail later today. \n\n## Introduction to Scope\n\n__Global variables:__ Variables that are *declared* __outside__ of a function *can* be used __inside__ of the function.
\nThey have *global scope*. \n\n__Local variables:__ Variables that are *declared* __inside__ of a function *can not* be used __outside__ of the function. \n
\nThey have *local scope*. \n\n\n```python\n# global variable\nglobal_var = \"Global variable\"\n\ndef my_func():\n \"\"\"\n Prints a global variable and a local variable \n \"\"\"\n # the function can access the global variable\n print(global_var) \n \n local_var = \"Local variable\"\n print(local_var)\n\n# call the function\nmy_func()\n\n# Global variables are accessible anywhere\nprint(global_var)\n\n# Local variables only accessible within the function in which they are defined\n# print(local_var)\n```\n\n Global variable\n Local variable\n Global variable\n\n\nDue to scope, variables with the *same name* can appear globally and locally without conflict. \n\nThis prevents variables declared inside a function from unexpectedly affecting other parts of a program. \n\n\n\nWhere a local and global variable have the same name, the program will use the __local__ version.\n\nLet's modify our function `my_func` so now both the local and global varibale have the same name...\n\nThis time the first `print(var)` raises an error.\n\nThe local variable overrides the global variable, \n
however the local variable has not yet been assigned a value.\n\n\n```python\n# global variable\nvar = \"Global variable\"\n\ndef my_func():\n # notice what happens this time if we try to access the global variable within the function\n print(var) \n \n # local variable of the same name\n var = \"Local variable\"\n print(var)\n \n# Call the function.\n# print(my_func())\n```\n\n\n\nThe global variable `var` is unaffected by the local variable `var`.\n\n\n```python\n# global variable\nvar = \"Global variable\"\n\ndef my_func():\n \n # local variable of the same name\n var = \"Local variable\"\n return var\n\n# Call the function.\nprint(my_func())\n\n# The global variable is unaffected by the local variable\nprint(var)\n\n# We can overwrite the global varibale with the returned value\nvar = my_func()\nprint(var)\n```\n\n Local variable\n Global variable\n Local variable\n\n\nIf we *really* want to use a global variable and a local variable \n
with the same name \n
within the same function, \n
we can use the global variable as a __function argument__. \n\nBy inputting it as an argument, we rename the global variable for use within the function...\n\n\n```python\n# Global \nvar = \"Global variable\"\n\ndef my_func(input_var):\n    # The argument is given the name input_var for use within the function \n    print(input_var) \n    \n    # Local\n    var = \"Local variable\"\n    print(var)\n    \n    return (input_var + \" \" + var)\n\n# Run the function, giving the global variable as an argument\nprint(my_func(var))\n```\n\n    Global variable\n    Local variable\n    Global variable Local variable\n\n\nThe global variable is unaffected by the function.\n\n\n```python\nprint(var)\n```\n\n    Global variable\n\n\n...unless we overwrite the value of the global variable.\n\n\n```python\nprint(var)\n\nvar = my_func(var)\nprint(var)\n```\n\n    Global variable\n    Global variable\n    Local variable\n    Global variable Local variable\n\n\n__Try it yourself__\nIn the cell below:\n1. Create a global variable called `my_var`, with a numeric value\n1. Create a function, called `my_func`, that:\n    - takes a single argument, `input_var` \n    - creates a local variable called `my_var` (same name as global variable).\n    - returns the sum of the function argument and the local variable: `input_var + my_var`.

\n1. Print the output when the function `my_func` is called, giving the global varable `my_var` as the input agument.\n1. print the global variable `my_var`.\n1. Add a docstring to say what your function does\n\n\n```python\n# Global and local scope\n```\n\n\n\nA global variable can be modified from inside a function by:\n1. Use Python `global` keyword. Give the variable a name.\n```python\nglobal var\n```\n1. Assign the variable a value.\n```python\nvar = 10\n```\n\n\n```python\n# global variable\nvar = \"Global variable\"\n\ndef my_func():\n \n # Locally assigned global variable\n global var\n var = \"Locally assigned global variable\"\n \n \nprint(\"Before calling the function var =\", var)\n\n# Call the function.\nmy_func()\n\nprint(\"After calling the function var =\", var)\n```\n\n Before calling the function var = Global variable\n After calling the function var = Locally assigned global variable\n\n\n__Try it yourself__\n\nIn the cell below:\n1. Copy and paste your code from the previous exercise.\n1. Edit your code so that:\n - The function `my_func` takes no input arguments. \n - The global variable `my_var` is overwritten within the function using the prefix `global`. \n1. Print the global variable before and after calling the function to check your code. \n1. Modify the docstring as necessary.\n\n\n```python\n# Copy and paste code here:\n```\n\nAs we have seen, a *local variable* can be accessed from outside the function by *returning* it. \n\n### Return arguments\n\nA __single__ Python function can return:\n- no values\n- a single value \n- multiple return values\n\nFor example, we could have a function that:\n - takes three values (`x0, x1, x2`)\n - returns the maximum, the minimum and the mean\n\n\n```python\ndef compute_max_min_mean(x0, x1, x2):\n \"Return maximum, minimum and mean values\"\n \n x_min = x0\n if x1 < x_min:\n x_min = x1\n if x2 < x_min:\n x_min = x2\n\n x_max = x0\n if x1 > x_max:\n x_max = x1\n if x2 > x_max:\n x_max = x2\n\n x_mean = (x0 + x1 + x2)/3 \n \n return x_min, x_max, x_mean\n\n\nxmin, xmax, xmean = compute_max_min_mean(0.5, 0.1, -20)\nprint(xmin, xmax, xmean)\n```\n\n -20 0.5 -6.466666666666666\n\n\nThe __`return`__ keyword works a bit like the __`break`__ statement does in a loop.\n\nIt returns the value and then exits the function before running the rest of the code.\n\nThis can provide an efficient way to structure the code.\n
\n\nHowever, if we want the program to do something else before exiting the function, that code must come before the return statement.\n\nIn the following example, we want the function to:\n- return the input value of global variable `x` as a string, with some information.\n- increase the value of `x` by 1\n\nIf we call the function repeatedly, we should see the printed value of global variable `x` increasing. \n\nIf the line that increments `x` comes after `return`, the value of `x` does not increase. \n
The program exits the function before `\"Increment global x by +1 \".\n\n\n\n```python\nx = 1\n\ndef process_value(X):\n \"Returns a value that depends on the input value x \"\n \n if X > 10:\n return str(X) + \" > 10\"\n elif X > 5:\n return str(X) + \" > 5\"\n elif X > 0:\n return str(X) + \" > 0\"\n else:\n return str(X)\n \n # Increment global x by +1\n global x\n x = X + 1 \n \nprint(process_value(x))\nprint(process_value(x))\nprint(process_value(x))\n```\n\n 1 > 0\n 1 > 0\n 1 > 0\n\n\nThe return statement must come last.\n\n\n```python\nx = 1\n\ndef process_value(X):\n \"Returns a value that depends on the input value x \"\n \n #Increment global x by +1 \n global x\n x = X + 1 \n \n if x > 10:\n return str(X) + \" > 10\"\n elif x > 5:\n return str(X) + \" > 5\"\n elif x > 0:\n return str(X) + \" > 0\"\n else:\n return str(X) \n \nprint(process_value(x))\nprint(process_value(x))\nprint(process_value(x))\n```\n\n 1 > 0\n 2 > 0\n 3 > 0\n\n\nIt may be more appropriate to store the return item as a varable if multiple items are to be returned...\n
\n\n\n```python\nx = -3\n\ndef process_value(X): \n \"Returns two values that depend on the input value x \"\n if X > 10:\n i = (str(X) + \" > 10\")\n elif X > 0:\n i = (str(X) + \" > 0\")\n else:\n i = None\n \n if X < 0:\n j = (str(X) + \" < 0\")\n elif X < 10:\n j = (str(X) + \" < 10\")\n else:\n j = None\n \n global x\n x = X + 1 \n \n return i, j\n \n# if i and j: \n# return i, j \n# elif i:\n# return (i,)\n# else:\n# return (j,)\n\nfor k in range(14):\n print(process_value(x))\n```\n\n (None, '-3 < 0')\n (None, '-2 < 0')\n (None, '-1 < 0')\n (None, '0 < 10')\n ('1 > 0', '1 < 10')\n ('2 > 0', '2 < 10')\n ('3 > 0', '3 < 10')\n ('4 > 0', '4 < 10')\n ('5 > 0', '5 < 10')\n ('6 > 0', '6 < 10')\n ('7 > 0', '7 < 10')\n ('8 > 0', '8 < 10')\n ('9 > 0', '9 < 10')\n ('10 > 0', None)\n\n\n\n## Recursive Functions\n\nA recursive function is a function that makes calls to itself.\n\nLet's consider a well-known example, the Fibonacci series of numbers.\n\n### The Fibonacci Sequence\n\nAn integer sequence characterised by the fact that every number (after the first two) is the sum of the two preceding numbers. \n\ni.e. the $n$th term $f_{n}$ is computed from the preceding terms $f_{n-1}$ and $f_{n-2}$. \n\n$$\nf_n = f_{n-1} + f_{n-2}\n$$\n\nfor $n > 1$, and with $f_0 = 0$ and $f_1 = 1$. \n\nDue to this dependency on previous terms, we say the series is defined __recursively__.\n\n\n\n\n\n\n\n\n\nThe number sequence appears in many natural geometric arrangements: \n\n \n\n\nBelow is a function that computes the $n$th number in the Fibonacci sequence using a `for` loop inside the function.\n\n\n```python\ndef fib(n):\n \"Compute the nth Fibonacci number\"\n # Starting values for f0 and f1\n f0, f1 = 0, 1\n\n # Handle cases n==0 and n==1\n if n == 0:\n return 0\n elif n == 1:\n return 1\n \n # Start loop (from n = 2) \n for i in range(2, n + 1):\n \n # Compute next term in sequence\n f = f1 + f0\n\n # Update f0 and f1 \n f0 = f1\n f1 = f\n \n # Return Fibonacci number\n return f\n\nprint(fib(10))\n```\n\n 55\n\n\nThe __recursive function__ below return the same result.\n\nIt is simpler and has a more \"mathematical\" structure.\n\n\n```python\ndef f(n): \n \"Compute the nth Fibonacci number using recursion\"\n if n == 0:\n return 0 # This doesn't call f, so it breaks out of the recursion loop\n elif n == 1:\n return 1 # This doesn't call f, so it breaks out of the recursion loop\n else:\n return f(n - 1) + f(n - 2) # This calls f for n-1 and n-2 (recursion), and returns the sum \n\nprint(f(10))\n```\n\n 55\n\n\nCare needs to be taken when using recursion that a program does not enter an infinite recursion loop. \n\nThere must be a mechanism to 'break out' of the recursion cycle. \n\n\n## Extension: Generators \n\nWhen a Python function is called:\n1. It excutes the code within the function\n1. It returns any values \nThe state of the variables within the function are not retained.\n\ni.e. the next time the function is called it will process the code within the function exactly as before.\n\n\n\nA generator is a special type of function.\n - They contain the keyword `yield`.\n - When called, any variables within the function retain their value at the end of the function call. \n - Values following the keyword `yield` are \"returned\" by the generator function.\n\n\n\nIntuitively, generators can be used to increment a value. \n\nLet's consider our examlpe from earlier, which incremented a value every time called. 
\n\n\n```python\nx = 1\n\ndef process_value(X):\n \n \"Return a value that depends on the input value x \"\n if X > 10:\n i = (str(X) + \" > 10\")\n elif X > 5:\n i = (str(X) + \" > 5\")\n elif X > 0:\n i = (str(X) + \" > 0\")\n else:\n i = str(X)\n \n \"Increment global x by +1 \"\n global x\n x = X + 1 \n \n return i \n \nprint(process_value(x))\nprint(process_value(x))\nprint(process_value(x))\n```\n\n 1 > 0\n 2 > 0\n 3 > 0\n\n\nA more concise way to express this is as a generator.\n1. Use the function definition line as normal\n1. Initialise the variable(s) you are going to increment.\n1. Start a while loop. `while True` creates an infinite while loop.
The program won't get stuck as it will only execute 1 loop every time the function is called.\n1. The value to yield each loop.\n1. The operation to perform each loop\n\n\n\n```python\n# `def` is used as normal\ndef incr():\n \n # create an initial value, i\n i = 1\n \n # while loop\n while True:\n \n # the value to return at each call\n yield i \n \n # the operation to perform on i at each call\n i += 1 \n\n```\n\nWe create a *generator object* by assigning the generator to a name:\n\n\n```python\ninc = incr()\n```\n\nThe next value can be called using the `next` keyword:\n\n\n```python\nprint(next(inc))\nprint(next(inc))\nprint(next(inc))\n\n\n```\n\n 1\n 2\n 3\n\n\nIt is not very efficient to print next mulitple times.\n\nWe can call the generator multiple times using a for loop.\n\n\n\nAs the generator contains an infinite while loop, we must specify where the code should stop running to avoid getting trapped in an infinite loop. \n\nThere is more than one way to do this.\n\nHere are two examples...\n\n\n```python\n# to print the result of the next 10 loops\nfor j in range(10):\n print(next(inc))\n```\n\n 4\n 5\n 6\n 7\n 8\n 9\n 10\n 11\n 12\n 13\n\n\n\n```python\n# to keep looping until the incremented value exceeds a specified threshold \nfor i in inc: \n if i > 20:\n break\n else:\n print(i)\n```\n\n 14\n 15\n 16\n 17\n 18\n 19\n 20\n\n\n### The Fibonacci Sequence (Continued)\n\nThe followig example shows how a generator can be used to produce the Fibonacci number sequence. \n\n\n```python\ndef fibonacci():\n # first two values in the sequence\n a = 0\n b = 1\n \n # infinite while loop\n while True:\n \n # value to return\n yield a \n \n a, b = b, a + b\n \n# Create a generator object called fib \nfib = fibonacci()\n\n# Call single loops of the function\nprint(next(fib))\nprint(next(fib))\nprint(next(fib))\n\n# Repeatedly call the function until the sequence exceeds 100.\nfor i in fib: \n if i > 100:\n break\n else:\n print(i)\n```\n\n 0\n 1\n 1\n 2\n 3\n 5\n 8\n 13\n 21\n 34\n 55\n 89\n\n\n## Extension: Callbacks\n\nWhen we create a function using the `def` keyword we assign it to a function name. \n
e.g. in the function above we assign the name fibonacci:\n```python\ndef fibonacci():\n```\n\n\n\nWe can also create un-named functions using the `lambda` keyword.\n\nAn un-named function: \n - may contain a single expression, only\n - must always return a value\n \n\n\nThe next example shows the definition of a function and a lambda function.\n
Both perform exactly the same task; computing the value of `x`$^2$.\n\nBoth can be called using:\n```python\nsquare(5)\n```\nwith the number in brackets being the value that you want to square. \n\n\n```python\n# function definition expressed on two lines\n#def square(x):\n# return x ** 2\n\n# function definition expressed on one line\ndef square(x) : return x ** 2\n\nprint(square(5))\n\n# un-named function\nsquare = lambda x : x ** 2\n \nprint(square(5))\n```\n\n 25\n 25\n\n\nSo what is the point of the un-named function? \n\n\n- Short functions can be written more concisely.\n- Functions can be embedded within main body of the code, for example within a list.\n- This is not possible with a regular function...\n\n\n\n```python\n# 1. Define functions\ndef function1(x): return x ** 2\ndef function2(x): return x ** 3\ndef function3(x): return x ** 4\n\n# 2. Compile list\ncallbacks = [function1, function2, function3]\n\n# 3. Call each function\nfor function in callbacks:\n print(function(5))\n```\n\n 25\n 125\n 625\n\n\n\n```python\n# 1. Define lamda functions within list\ncallbacks = [lambda x : x ** 2, lambda x : x ** 3, lambda x : x ** 4]\n\n# 3. Call each function\nfor function in callbacks:\n print(function(5))\n```\n\n 25\n 125\n 625\n\n\n## Review Exercises\nThe following review problems are designed to:\n - test your understanding of the different techniques for building functions that we have learnt today.\n - test your ability to use user-defined Pyhton functions to solve the type of engineering problems you will encounter in your studies. \n\n### Review Exercise: Simple function\n\nIn the cell below, write a function called `is_even` that determines if a number is even by running:\n\n```python\nis_even(n)\n```\n\n__Input:__ `n` (an integer). \n\n__Output:__ The function should return:\n - `True` if the argument is even\n - `False` if the argument is not even\n \nInclude a __documentation string (docstring)__ to say what your function does.\n\nJump to Documentation Strings\n \nPrint the output of your function for several input values.\n\n\n```python\n# A simple function\n```\n\n\n```python\n# Example Solution \n\ndef is_even(n):\n \"\"\"\n Returns boolean true if input is even and boolean false if input is odd\n \"\"\"\n #return (n % 2 == 0)\n return (not n % 2)\n\nprint(is_even(1))\nprint(is_even(2))\nprint(is_even(3))\nprint(is_even(4))\n```\n\n False\n True\n False\n True\n\n\n### Review Exercise: Expressing Calculations as Functions\n\nIn the cell below, copy and paste your answer from previous seminar __Control Flow__:\n
__Review Exercise: `for` loops and `if`, `else` and `continue` statements.__\n\n__(A)__ Us the pasted code to write a function called `square_root` that prints the square root of an input argument by running:\n\n```python\nsquare_root(n)\n```\n\n__Input:__ Argument is `n` (a numeric variable). \n\n__Output:__ The function should return the square root of `n`\n\nInclude a doc-string to say what your function does. \n\nJump to Documentation Strings' \n\n__(B)__ Using your answer to __Control Flow, Review Exercise: `for` loops and `if`, `else` and `continue` statements__,\n to print the sqaure root of the first 25 odd positive integers.\n\n\n```python\n# A function to find the square root of an input\n```\n\n\n```python\n# Example Solution\ndef square_root(n):\n \"\"\"\n Returns the square root of an input value. \n \"\"\"\n return (n ** (1/2))\n \n \nfor x in range(1, 50, 2):\n print(square_root(x))\n```\n\n 1.0\n 1.7320508075688772\n 2.23606797749979\n 2.6457513110645907\n 3.0\n 3.3166247903554\n 3.605551275463989\n 3.872983346207417\n 4.123105625617661\n 4.358898943540674\n 4.58257569495584\n 4.795831523312719\n 5.0\n 5.196152422706632\n 5.385164807134504\n 5.5677643628300215\n 5.744562646538029\n 5.916079783099616\n 6.082762530298219\n 6.244997998398398\n 6.4031242374328485\n 6.557438524302\n 6.708203932499369\n 6.855654600401044\n 7.0\n\n\n### Review Exercise: Using Data Structures as Function Arguments - Counter\nIn the cell below write a function:\n\n__Input:__ Argument is a list. e.g. `[\"fizz\", \"buzz\", \"buzz\", \"fizz\", \"fizz\", \"fizz\"]`\n\n__Output:__ The function should return the numer of times \"fizz\" appears in a list.\n\nDemonstrate that your function works by inputting a list.\n\n*Hint 1:* Create a local variable, `count`, within your function.\n
Increment the count by one for each instance of `fizz`.\n\n*Hint 2:* Use a `for` loop to iterate over the list to count the number of times `fizz` appears.\n\n\n```python\n# Counter\n```\n\n\n```python\n#Example Solution\n\ndef fizz_counter(words):\n count=0\n for w in words:\n if w == \"fizz\":\n count=count +1\n return count\n \nfizz_buzz = [\"fizz\", \"buzz\", \"buzz\", \"fizz\", \"fizz\", \"fizz\"] \n\nfizz_counter(fizz_buzz)\n```\n\n\n\n\n 4\n\n\n\n### Review Exercise: Using Data Structures as Function Arguments - Magnitude\n\nThe magnitude of an $n$ dimensional vector can be written\n\n$$\n|\\mathbf{x}|= \\sqrt{x_1^2 + x_2^2 + ... x_n^2} = \\sqrt{\\sum_{i = 1}^{n} (x_{n})^2 }\n$$\n\nTherefore...\n\nThe magnitude of a 2D vector (e.g. $x = [x_1, x_2]$):\n\n$$\n|\\mathbf{x}|= \\sqrt{x_1^2 + x_2^2}\n$$\n
\n\nThe magnitude of a 3D vector (e.g. $x = [x_1, x_2, x_3]$):\n\n$$\n|\\mathbf{x}|= \\sqrt{x_1^2 + x_2^2 + x_3^2}\n$$\n
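As a quick worked case (useful for checking your function later): for $x = [1, 2, 3]$,\n\n$$\n|\\mathbf{x}| = \\sqrt{1^2 + 2^2 + 3^2} = \\sqrt{14} \\approx 3.742\n$$\n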
\n\n__(A)__ In the cell below, write a function called `magnitude` that computes the magnitude of an n-dimensional vector.\n
Include a doc-string to say what your function does.\n
Check your function, for example use hand calculations to verify the answer is correct. \n\n__Argument:__ `list` with n elements (e.g. [x, y]) if vector is 2D, [x, y, z]), if vector is 3D.\n\n__Return:__ Magnitude of the vector.\n\nHints: \n - Jump to Data Structure as Arguments. \n - Use a loop to iterate over each item in the list. \n\n__(B)__ Print the output of your function to show that it works for both 2D and 3D input vectors. \n\n\n\n```python\n# A function that computes the magnitude of an n-dimensional vector \n```\n\n\n```python\n#Example Solution\ndef magnitude(vector):\n \"Computes the magnitude of an n-dimensional vector\"\n x = 0.0\n for v in vector:\n x += v**2\n return x**(1/2)\n\nprint(magnitude([1,2,3]))\nprint(magnitude([1,2]))\n```\n\n 3.7416573867739413\n 2.23606797749979\n\n\n\n```python\n# Improved Solution\n\ndef magnitude(vector):\n \"Computes the magnitude of an n-dimensional vector\"\n x = [v**2 for v in vector]\n x = sum(x) \n return x**(1/2)\n\n\n# ...which can be expressed more concisely as a single line\ndef magnitude(vector):\n \"Computes the magnitude of an n-dimensional vector\"\n return (sum([v**2 for v in vector]))**(1/2)\n\n\nprint(magnitude([1,2,3]))\nprint(magnitude([1,2]))\n```\n\n 3.7416573867739413\n 2.23606797749979\n\n\n### Review Exercise: Using Functions as Function Arguments, Default Arguments. \nCopy and paste your function `is_even` from __Review Exercise: Simple function__ in the cell below.\n\n__(A)__ Edit `is_even` to:\n- take two arguments:\n - a numeric variable, `n`\n - the function `square_root` from __Review Exercise: Using Data Structures as Function Arguments__. Jump to Using Functions as Function Arguments. \n- return:\n - `True` if the square root of n is even\n - `False` if the square root of n is not even\n\n__(B)__ Make `square_root` the __default__ value of the function argument.\n
Jump to Default Arguments. \n
Force the function argument to be input using a named argument. \n
Jump to Forcing Default Arguments. \n\n__(C)__ Print the output ofthe function `is_even` for the first 25 natural numbers.\n\n\n```python\n# A function to determine if the square root of a number is odd or even\n```\n\n\n```python\ndef square_root(n):\n \"\"\"\n Returns the square root of an input value. \n \"\"\"\n return (n ** (1/2))\n\ndef is_even(n, *, f=square_root):\n \"\"\"\n Returns boolean true if input is even and boolean false if input is odd\n \"\"\" \n return (not f(n) % 2)\n \nfor x in range(1, 26):\n print(is_even(x))\n```\n\n False\n False\n False\n True\n False\n False\n False\n False\n False\n False\n False\n False\n False\n False\n False\n True\n False\n False\n False\n False\n False\n False\n False\n False\n False\n\n\n### Review Excercise: Using Functions as Function Arguments - Bisection\n\nRefer to your answer to __Seminar 3, Review Exercise: `while` loops (bisection)__\n\n\n\n__(A)__ Express your answer to __Seminar 3, Review Exercise: `while` loops (bisection)__, as a function called `bisection`.
\nThe function should approximate the root of a function, `F`, by running:\n\n`bisection(f, a, b, tol=1e-6, nmax=30)`\n\n\n - `f` : The function F(x) you wish to find the root of (`F` should first be defined using `def`).\n - `a` : The minimum of the interval within which the root lies. \n - `b` : The maximum of the interval within which the root lies. \n - `tol` : User defined tolerance.
The program determines a root has been found when |F(x$_{mid}$)| < `tol`.\n - `nmax` : The maximum number of iterations before the program breaks out of the loop.\n
\n \nJump to Functions as Function Arguments. \n\n \n\n
\n\n__(B)__ Define the function F(x) = 4x$^3$ - 3x$^2$ - 25x - 6 using `def`.\n\n\n$$\nF(x) = 4x^3 - 3x^2 - 25x - 6\n$$\n\n\n\n__(C)__ Use your `bisection` function to find the root of F(x) that lies between a = -0.6 and b = 0.\n
Use default arguments `tol=1e-6` and `nmax=25`:\n\n
`f` = `F`\n
`a` = -0.6\n
`b` = 0\n
`tol` = 1 $\\times$10$^{-6}$\n
`nmax` = 25\n\n`bisection(f, -0.6, 0 , tol=1e-6, nmax=30)`\n\nCompare your answer to your answer to __Seminar 3, Review Exercise: `while` loops (bisection)__.\n\nInclude a doc-string to say what your function does.\n\n\n\n```python\n# Bisection Function\n```\n\n\n```python\n#Example Solution\n\ndef F(x):\n return (4 * x**3) - (3 * x**2) - (25 * x) - 6\n\ndef bisection(f, a, b, tol=1e-6, nmax=30):\n \"\"\"\n Estimates the root of a function, F(x), using two values; x = a and x = b, where F(a)F(b) < 0\n \"\"\"\n if (f(a) * f(b) < 0): \n \n xmid = (a + b) / 2\n\n for i in range(nmax):\n\n print(round(f(xmid), 5))\n\n if (abs(f(xmid)) < 10E-6):\n return xmid\n\n # If F(x) changes sign between F(x_mid) and F(a), \n # the root must lie between F(x_mid) and F(a)\n if f(xmid) * f(a) < 0:\n b = xmid\n xmid = (a + b)/2\n\n\n # If F(x) changes sign between F(x_mid) and F(b), \n # the root must lie between F(x_mid) and F(b) \n else:\n a = xmid\n xmid = (a + b)/2 \n \nroot = bisection(F, -0.6, 0)\n\nprint(\"root = \", round(root, 4))\n```\n\n 1.122\n -2.331\n -0.57244\n 0.28343\n -0.14242\n 0.07104\n -0.03556\n 0.01777\n -0.00889\n 0.00444\n -0.00222\n 0.00111\n -0.00056\n 0.00028\n -0.00014\n 7e-05\n -3e-05\n 2e-05\n -1e-05\n root = -0.25\n\n\n### Review Exercise: Scope\n\nIn the example below, complete the comments with definition (\"local variable\"/\"global variable\") describing the scope of variables a-c.\n\n\n```python\n# In the code below: \n# a is a local variable / global variable\n# b is a ...\n# c is a ...\n# d is a ...\n\ndef my_function(a):\n b = a - 2\n return b\n\nc = 3\n\nif c > 2:\n d = my_function(5)\n print(d)\n```\n\n 3\n\n\n\n```python\n# Example Solution\n\n# a is a local variable \n# b is a local variable \n# c is a global variable \n# d is a local variable \n```\n\n### Review Exercise: Recursive Functions\n\nThe factorial of a positive integer $n$ is:\n\n\\begin{align}\nn! = \\prod_{i=1}^{n} i =1 \\cdot 2 \\cdot 3 \\cdot ... (n - 2) \\cdot (n - 1) \\cdot n\n\\end{align}\n\nWe can write this *recursively*.\n
This means we use the value of $(n-1)!$ to compute the value of $n!$:\n\n$$\nn! = n \\cdot (n-1)!\n$$\n\nNote: $0! = 1$\n \ne.g. \n
$1! = 1 \\cdot 0! = 1 $\n
$2! = 2 \\cdot 1! = 2 \\cdot 1 = 2$\n
$3! = 3 \\cdot 2! = 3 \\cdot 2 \\cdot 1 = 6$\n
$4! = 4 \\cdot 3! = 4 \\cdot 3 \\cdot 2 \\cdot 1 = 24$\n\nA recursive function is a function that calls itself. \n
Jump to Recursive Functions\n\n__(A)__ In the cell below, write a __recursive function__ called `factorial` to compute $n!$ of an input argument `n`:\n\n__Input:__ Numerical variable `n`\n\n__Output:__ `factorial(n) = factorial(n-1)*n` \n\n\nInclude a doc-string to say what your function does. \n\nTest your function for correctness using hand calculations. \n\n
\n__(B)__ The formula above only applies to __positive integers__ (with a specific exception for 0). \n
Include a check to make sure the input value is a positive integer or zero. \n\nShow the output of your function for several input values.\n\n\n\n\n```python\n# A function to compute n! for input n\n```\n\n\n```python\n#Example solution\n\ndef factorial(n):\n    \n    # check that n is a non-negative integer \n    if (int(n) == n >= 0):\n        \n        # base case: 0! = 1\n        if n < 1:\n            return 1\n        else:\n            return n * factorial(n - 1)\n    \n    else:\n        print(\"input not positive integer\")\n\nfactorial(-4)\nfactorial(4)\n```\n\n    input not positive integer\n\n\n\n\n\n    24\n\n\n\n# Updating your git repository\n\nYou have made several changes to your interactive textbook.\n\n > Save your work.\n >
`git add -A`\n >
`git commit -m \"A short message describing changes\"`\n >
`git push`\n\n# Summary\n - Functions are defined using the .... keyword.\n - Functions contain indented statements to execute when the function is called.\n - Global variables can be used ....(where?)\n - Local variables can be used ....(where?)\n - Function arguments (inputs) are declared between () parentheses, separated by commas.\n - Function arguments must be specified each time a function is called. \n - Default arguments do not need to be specified when a function is called unless .... \n - The keyword used to define the function outputs is ....\n\n\n\n# Summary: Extension Topics\n- An un-named function can be created using the `lambda` keyword.\n- A generator function is created when the keyword `yield` is included in the function block.\n- Variables in a generator function retain their state from when the function was last called.\n- The Python built-in `next()` function can be used to continue execution of a generator function by one iteration.\n \n\n# Homework \n\n\n1. __COMPLETE__ any unfinished Review Exercises.
In particular, please complete: __Review Exercise: Using Functions as Function Arguments__.
You will need to refer to your answer in next week's Seminar. \n1. __PUSH__ the changes you make at home to your online repository. \n\n\n```python\n\n```\n", "meta": {"hexsha": "ce32c0449a8592e4c3d73271d48534503f8a7b41", "size": 93803, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "5_Functions.ipynb", "max_stars_repo_name": "Ouaggag/Intro_to_python", "max_stars_repo_head_hexsha": "61ec245a6b64fb42d87b70cffb473ba399160e2a", "max_stars_repo_licenses": ["MIT"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "5_Functions.ipynb", "max_issues_repo_name": "Ouaggag/Intro_to_python", "max_issues_repo_head_hexsha": "61ec245a6b64fb42d87b70cffb473ba399160e2a", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "5_Functions.ipynb", "max_forks_repo_name": "Ouaggag/Intro_to_python", "max_forks_repo_head_hexsha": "61ec245a6b64fb42d87b70cffb473ba399160e2a", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 23.7897539944, "max_line_length": 220, "alphanum_fraction": 0.4989286057, "converted": true, "num_tokens": 13872, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.25982562649804053, "lm_q2_score": 0.32423540551084407, "lm_q1q2_score": 0.08424466736970128}} {"text": "## GeostatsPy: Univariate Spatial Trend Modeling for Subsurface Data Analytics in Python \n\n\n### Michael Pyrcz, Associate Professor, University of Texas at Austin \n\n#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)\n\n\n### PGE 383 Exercise: Univariate Spatial Trends Modeling for Subsurface Data Analytics in Python \n\nHere's a simple workflow with basic univariate spatial trend modeling for subsurface modeling workflows. This should help you get started with building subsurface models that include deterministic and stochastic components. \n\n#### Trend Modeling\n\nTrend modeling is the modeling of local features, based on data and interpretation, that are deemed certain (known). The trend is substracted from the data, leaving a residual that is modeled stochastically with uncertainty (treated as unknown).\n\n* geostatistical spatial estimation methods will make an assumption concerning stationarity\n * in the presence of significant nonstationarity we can not rely on spatial estimates based on data + spatial continuity model\n* if we observe a trend, we should model the trend.\n * then model the residuals stochastically\n\nSteps: \n\n1. model trend consistent with data and intepretation at all locations within the area of itnerest, integrate all available information and expertise.\n\n\\begin{equation}\nm(\\bf{u}_\\beta), \\, \\forall \\, \\beta \\in \\, AOI\n\\end{equation}\n\n2. 
substract trend from data at the $n$ data locations to formulate a residual at the data locations.\n\n\\begin{equation}\ny(\\bf{u}_{\\alpha}) = z(\\bf{u}_{\\alpha}) - m(\\bf{u}_{\\alpha}), \\, \\forall \\, \\alpha = 1, \\ldots, n\n\\end{equation}\n\n3. characterize the statistical behavoir of the residual $y(\\bf{u}_{\\alpha})$ integrating any information sources and interpretations. For example the global cumulative distribution function and a measure of spatial continuity shown here.\n\n\\begin{equation}\nF_y(y) \\quad \\gamma_y(\\bf{h})\n\\end{equation}\n\n4. model the residual at all locations with $L$ multiple realizations.\n\n\\begin{equation}\nY^\\ell(\\bf{u}_\\beta), \\, \\forall \\, \\beta \\, \\in \\, AOI; \\, \\ell = 1, \\ldots, L\n\\end{equation}\n\n5. add the trend back in to the stochastic residual realizations to calculate the multiple realizations, $L$, of the property of interest based on the composite model of known deterministic trend, $m(\\bf{u}_\\alpha)$ and unknown stochastic residual, $y(\\bf{u}_\\alpha)$ \n\n\\begin{equation}\nZ^\\ell(\\bf{u}_\\beta) = Y^\\ell(\\bf{u}_\\beta) + m(\\bf{u}_\\beta), \\, \\forall \\, \\beta \\in \\, AOI; \\, \\ell = 1, \\ldots, L\n\\end{equation}\n\n6. check the model, including quantification of the proportion of variance treated as known (trend) and unknown (residual).\n\n\\begin{equation}\n\\sigma^2_{Z} = \\sigma^2_{Y} + \\sigma^2_{m} + 2 \\cdot C_{Y,m}\n\\end{equation}\n\ngiven $C_{Y,m} \\to 0$:\n\n\\begin{equation}\n\\sigma^2_{Z} = \\sigma^2_{Y} + \\sigma^2_{m}\n\\end{equation}\n\nI can now describe the proportion of variance allocated to known and unknown components as follows:\n\n\\begin{equation}\nProp_{Known} = \\frac{\\sigma^2_{m}}{\\sigma^2_{Y} + \\sigma^2_{m}} \\quad Prop_{Unknown} = \\frac{\\sigma^2_{Y}}{\\sigma^2_{Y} + \\sigma^2_{m}}\n\\end{equation}\n\nI provide some practical, data-driven methods for trend model, but I should indicate that:\n\n1. trend modeling is very important in reservoir modeling as it has large impact on local model accuracy and on the undertainty model\n2. trend modeling is used in almost every subsurface model, unless the data is dense enough to impose local trends\n3. trend modeling includes a high degree of expert judgement combined with the integration of various information sources\n\nWe limit ourselves to simple data-driven methods, but acknowledge much more is needed. In fact, trend modeling requires a high degree of knowledge concerning local geoscience and engineering data and knowledge. \n\n#### Objective \n\nIn the PGE 383: Stochastic Subsurface Modeling class I want to provide hands-on experience with building subsurface modeling workflows. Python provides an excellent vehicle to accomplish this. I have coded a package called GeostatsPy with GSLIB: Geostatistical Library (Deutsch and Journel, 1998) functionality that provides basic building blocks for building subsurface modeling workflows. \n\nThe objective is to remove the hurdles of subsurface modeling workflow construction by providing building blocks and sufficient examples. This is not a coding class per se, but we need the ability to 'script' workflows working with numerical methods. \n\n#### Getting Started\n\nHere's the steps to get setup in Python with the GeostatsPy package:\n\n1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). \n2. From Anaconda Navigator (within Anaconda3 group), go to the environment tab, click on base (root) green arrow and open a terminal. \n3. 
In the terminal type: pip install geostatspy. \n4. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. \n\nYou will need to copy the data file to your working directory. They are available here:\n\n* Tabular data - sample_data_biased.csv at https://git.io/fh0CW\n\nThere are exampled below with these functions. You can go here to see a list of the available functions, https://git.io/fh4eX, other example workflows and source code. \n\n\n```python\nimport geostatspy.GSLIB as GSLIB # GSLIB utilies, visualization and wrapper\nimport geostatspy.geostats as geostats # GSLIB methods convert to Python \n```\n\nWe will also need some standard packages. These should have been installed with Anaconda 3.\n\n\n```python\nimport numpy as np # ndarrys for gridded data\nimport pandas as pd # DataFrames for tabular data\nimport os # set working directory, run executables\nimport matplotlib.pyplot as plt # for plotting\nfrom scipy import stats # summary statistics\nimport math # trig etc.\nimport scipy.signal as signal # kernel for moving window calculation\n```\n\n#### Set the working directory\n\nI always like to do this so I don't lose files and to simplify subsequent read and writes (avoid including the full address each time). \n\n\n```python\nos.chdir(\"c:/PGE383\") # set the working directory\n```\n\n#### Loading Tabular Data\n\nHere's the command to load our comma delimited data file in to a Pandas' DataFrame object. \n\n\n```python\ndf = pd.read_csv('sample_data_biased.csv') # load our data table (wrong name!)\n```\n\nIt worked, we loaded our file into our DataFrame called 'df'. But how do you really know that it worked? Visualizing the DataFrame would be useful and we already leard about these methods in this demo (https://git.io/fNgRW). \n\nWe can preview the DataFrame by printing a slice or by utilizing the 'head' DataFrame member function (with a nice and clean format, see below). With the slice we could look at any subset of the data table and with the head command, add parameter 'n=13' to see the first 13 rows of the dataset. \n\n\n```python\nprint(df.iloc[0:5,:]) # display first 4 samples in the table as a preview\ndf.head(n=13) # we could also use this command for a table preview\n```\n\n X Y Facies Porosity Perm\n 0 100 900 1 0.115359 5.736104\n 1 100 800 1 0.136425 17.211462\n 2 100 600 1 0.135810 43.724752\n 3 100 500 0 0.094414 1.609942\n 4 100 100 0 0.113049 10.886001\n\n\n\n\n\n

\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
XYFaciesPorosityPerm
010090010.1153595.736104
110080010.13642517.211462
210060010.13581043.724752
310050000.0944141.609942
410010000.11304910.886001
520080010.154648106.491795
620070010.153113140.976324
720050010.12616712.548074
820040000.0947501.208561
920010010.15096144.687430
1030080010.1992271079.709291
1130070010.154220179.491695
1230050010.13750238.164911
\n
\n\n\n\n#### Summary Statistics for Tabular Data\n\nThe table includes X and Y coordinates (meters), Facies 1 and 2 (1 is sandstone and 0 interbedded sand and mudstone), Porosity (fraction), and permeability as Perm (mDarcy). \n\nThere are a lot of efficient methods to calculate summary statistics from tabular data in DataFrames. The describe command provides count, mean, minimum, maximum, and quartiles all in a nice data table. We use transpose just to flip the table so that features are on the rows and the statistics are on the columns.\n\n\n```python\ndf.describe().transpose()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
countmeanstdmin25%50%75%max
X289.0475.813149254.2775300.000000300.000000430.000000670.000000990.000000
Y289.0529.692042300.8953749.000000269.000000549.000000819.000000999.000000
Facies289.00.8131490.3904680.0000001.0000001.0000001.0000001.000000
Porosity289.00.1347440.0377450.0585480.1063180.1261670.1542200.228790
Perm289.0207.832368559.3593500.0758193.63408614.90897071.4544245308.842566
\n
\n\n\n\n#### Visualizing Tabular Data with Location Maps \n\nIt is natural to set the x and y coordinate and feature ranges manually. e.g. do you want your color bar to go from 0.05887 to 0.24230 exactly? Also, let's pick a color map for display. I heard that plasma is known to be friendly to the color blind as the color and intensity vary together (hope I got that right, it was an interesting Twitter conversation started by Matt Hall from Agile if I recall correctly). We will assume a study area of 0 to 1,000m in x and y and omit any data outside this area.\n\n\n```python\nxmin = 0.0; xmax = 1000.0 # range of x values\nymin = 0.0; ymax = 1000.0 # range of y values\npormin = 0.05; pormax = 0.25; # range of porosity values\nnx = 100; ny = 100; csize = 10.0\ncmap = plt.cm.plasma # color map\n```\n\nLet's try out locmap. This is a reimplementation of GSLIB's locmap program that uses matplotlib. I hope you find it simpler than matplotlib, if you want to get more advanced and build custom plots lock at the source. If you improve it, send me the new code. Any help is appreciated. To see the parameters, just type the command name:\n\n\n```python\nGSLIB.locmap\n```\n\n\n\n\n \n\n\n\nNow we can populate the plotting parameters and visualize the porosity data.\n\n\n```python\nplt.subplot(111)\nGSLIB.locmap_st(df,'X','Y','Porosity',xmin,xmax,ymin,ymax,pormin,pormax,'Well Data - Porosity','X(m)','Y(m)','Porosity (fraction)',cmap)\nplt.subplots_adjust(left=0.0, bottom=0.0, right=1.0, top=1.2, wspace=0.2, hspace=0.2)\nplt.show()\n```\n\nLet's get some declustering weights. For more information see the demonstration on declustering.\n\n\n```python\nwts, cell_sizes, dmeans = geostats.declus(df,'X','Y','Porosity',iminmax = 1, noff= 10, ncell=100,cmin=10,cmax=2000)\ndf['Wts'] = wts # add weights to the sample data DataFrame\ndf.head() # preview to check the sample data DataFrame\n```\n\n There are 289 data with:\n mean of 0.13474387540138408 \n min and max 0.058547873 and 0.228790002\n standard dev 0.03767982164385207 \n\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
XYFaciesPorosityPermWts
010090010.1153595.7361043.064286
110080010.13642517.2114621.076608
210060010.13581043.7247520.997239
310050000.0944141.6099421.165119
410010000.11304910.8860011.224164
\n
\n\n\n\n#### Trend by Convolution / Local Window Average\n\nLet's first attempt a convolution-based trend model, this is a moving window average of the local data.\n\nWe have a convenience function that takes data with X and Y locations in a DataFrame and makes a sparse 2D array. All cells without a data value are assigned to NumPy's NaN (null values, missing value). Let's see the inputs for this command.\n\n\n```python\nGSLIB.DataFrame2ndarray\n```\n\n\n\n\n \n\n\n\nLet's make an sparse array with the appropriate parameters. The reason we are doing this is that convolution programs in general work with ndarrays and not from DataFrames.\n\n\n```python\npor_grid = GSLIB.DataFrame2ndarray(df,'X','Y','Porosity',xmin, xmax, ymin, ymax, csize, nx, ny)\n```\n\nWe have a ndarray (por_grid) with the data assigned to grid cells. Now we need a kernel. The kernel represents the weights within the moving window. If we use constant 1.0 in the moving window will get discontinuities in our trend model. A Gaussian kernel (weights highest in the middle of the window and decreasing to 0,0 at the edge) is useful to get a smooth trend. We can use the SciPy package's signal functions. Of course we will have to import that package and then we can make our kernel. Here's an example below. There shouldnt be any surprises. \n\n\n```python\ngkern1d = signal.gaussian(53,5).reshape(53, 1)\ngkern2d = np.outer(gkern1d, gkern1d)\nprint('We have made a kernel of size, number of grid cells (ny, nx) ' + str(gkern2d.shape))\n\nplt.subplot(111)\nGSLIB.pixelplt_st(gkern2d,xmin=-265,xmax=265,ymin=-265,ymax=265,step=10,vmin=0,vmax=1,title='Kernel',xlabel='X(m)',ylabel='Y(m)',vlabel='weight',cmap=cmap)\nplt.subplots_adjust(left=0.0, bottom=0.0, right=0.6, top=0.8, wspace=0.2, hspace=0.2)\nplt.show()\n```\n\nNow we need to convolve our sparse data assigned to a ndarray with our Gaussian kernel. There are many functions available for convolution. But we have a problem as we want to apply our Gaussian kernel to a sparse ndarray full of missing values. It turns out this is a common issue for our friends in Astronomy and so their Astropy package has a convolution method that will work well. I figured out the following (so you don't have to!).\n\n\n```python\nimport astropy.convolution.convolve as convolve\nporosity_trend = convolve(por_grid,gkern2d,boundary='extend',nan_treatment='interpolate',normalize_kernel=True)\n```\n\nNo errors? It worked? Let's look at the results. We can plot and compare the original porosity data and the resulting trend to check for consistency.\n\n\n```python\nplt.subplot(131)\nGSLIB.locmap_st(df,'X','Y','Porosity',xmin,xmax,ymin,ymax,pormin,pormax,'Well Data - Porosity','X(m)','Y(m)','Porosity (fraction)',cmap)\n\nplt.subplot(132)\nGSLIB.pixelplt_st(porosity_trend,xmin,xmax,ymin,ymax,csize,pormin,pormax,'Porosity Trend','X(m)','Y(m)','Porosity (fraction)',cmap)\n\nplt.subplot(133)\nGSLIB.locpix_st(porosity_trend,xmin,xmax,ymin,ymax,csize,pormin,pormax,df,'X','Y','Porosity','Porosity Data and Trend','X(m)','Y(m)','Porosity (fraciton)',cmap)\n\nplt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.2, wspace=0.2, hspace=0.2)\nplt.show()\n\n```\n\n#### Other Methods for Trend Calculation\n\nThere are a variety of other methods for trend calculation. I will just mention them here.\n\n1. hand-drawn, expert interpretation - many 3D modeling packages allow for experts to draw trends and allow for fast interpolation to build an exhaustive trend model.\n2. 
kriging - kriging provides best linear unbiased estimates between data given a spatial continuity model (more on this when we cover spatial estimation). One note of caution is that kriging is exact; therefore it will over fit unless it is use with averaged data values (e.g. over the vertical) or with a block kriging option (kriging at a volume support larger than the data).\n3. regression - fit a function as a function of X, Y coordinates. This could be extended to more complicated prediction models from machine learning.\n\n#### Trend Diagnotistics\n\nLet's go back to the convolution trend and check it (to demonstrate the method of trend checking). Note, I haven't tried to perfect the result. I'm just demonstrating the method. \n\nIn addition to the previous visualization, let's look at the distributions and summary statistics of the original declustered porosity data and the trend.\n\n\n```python\nplt.subplot(121)\nGSLIB.hist_st(df['Porosity'],pormin,pormax,False,False,20,df['Wts'],'Porosity (fraction)','Declustered Porosity')\n\nplt.subplot(122)\nGSLIB.hist_st(porosity_trend.flatten(),pormin,pormax,False,False,20,None,'Porosity Trend (fraction)','Porosity Trend')\n\nplt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.5, wspace=0.2, hspace=0.2)\nplt.show()\n```\n\nWe can also look at the summary statistics. Here's a function that calculates the weighted standard deviation (and the average). We can use this with the data and declustering weights and figure out the allocation of variance between the trend and the residual. \n\n\n```python\n# Weighted average and standard deviation\ndef weighted_avg_and_std(values, weights): # from Eric O Lebigot, stack overflow\n average = np.average(values, weights=weights)\n variance = np.average((values-average)**2, weights=weights)\n return (average, math.sqrt(variance))\n\nwavg_por,wstd_por = weighted_avg_and_std(df['Porosity'],df['Wts']) \n\nwavg_por_trend = np.average(porosity_trend)\nwstd_por_trend = np.std(porosity_trend)\n\nprint('Declustered Porosity Data: Average ' + str(round(wavg_por,4)) + ', Var ' + str(round(wstd_por**2,5)))\nprint('Porosity Trend: Average ' + str(round(wavg_por_trend,4)) + ', Var ' + str(round(wstd_por_trend**2,5)))\nprint('Proportion Trend / Known: ' + str(round(wstd_por_trend**2/(wstd_por**2),3)))\nprint('Proportion Residual / Unknown: ' + str(round((wstd_por**2 - wstd_por_trend**2)/(wstd_por**2),3)))\n```\n\n Declustered Porosity Data: Average 0.1212, Var 0.00102\n Porosity Trend: Average 0.1233, Var 0.00064\n Proportion Trend / Known: 0.631\n Proportion Residual / Unknown: 0.369\n\n\nInteresting, we have 63% of the variance being treated as known, modeled by trend, and 37% of the variance being treated as unknown, modeled by residual. \n\n#### Adding Trend to DataFrame\n\nLet's add the porosity trend to our DataFrame. We have a sample program in GeostatsPy that takes a 2D ndarray and extracts the values at the data locations and adds them as a new column. Then we can do a little math to calculate and add the porosity residual also and visualize this all together as a final check.\n\n\n```python\ndf = GSLIB.sample(porosity_trend,xmin,xmax,ymin,ymax,nx,ny,csize,\"Por_Trend\",df,'X','Y')\ndf['Por_Res'] = df['Porosity'] - df['Por_Trend'] # calculate the residual and add to DataFrame\n```\n\nLet's check out the DataFrame and confirm that we have everything now. We will need trend and residual in our DataFrame to support all subsequent modeling steps.\n\n\n```python\ndf.head()\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
XYFaciesPorosityPermWtsPor_TrendPor_Res
010090010.1153595.7361043.0642860.117365-0.002006
110080010.13642517.2114621.0766080.1239380.012487
210060010.13581043.7247520.9972390.1284350.007375
310050000.0944141.6099421.1651190.112399-0.017985
410010000.11304910.8860011.2241640.1027910.010258
\n
\n\n\n\nThat looks good. A quick check, confirm that the Porosity column is equal to the Por_Trend + the Por_Res columns. As a final check let's visualize the original porosity data, porosity trends at the data locations and the porosity residuals. \n\n\n```python\nplt.subplot(131)\nGSLIB.locmap_st(df,'X','Y','Porosity',xmin,xmax,ymin,ymax,pormin,pormax,'Well Data - Porosity','X(m)','Y(m)','Porosity (fraction)',cmap)\n\nplt.subplot(132)\nGSLIB.locmap_st(df,'X','Y','Por_Trend',xmin,xmax,ymin,ymax,pormin,pormax,'Well Data - Porosity Trend','X(m)','Y(m)','Porosity (fraction)',cmap)\n\nplt.subplot(133)\nGSLIB.locmap_st(df,'X','Y','Por_Res',xmin,xmax,ymin,ymax,-0.01,0.01,'Well Data - Porosity Residual','X(m)','Y(m)','Porosity (fraction)',cmap)\n\nplt.subplots_adjust(left=0.0, bottom=0.0, right=3.0, top=1.2, wspace=0.2, hspace=0.2)\nplt.show()\n```\n\nDoes it look correct? There is a strong degree of consistency between the porosity data and trend and the porosity residual no longer has a trend, it has been detrended.\n\n#### Comments\n\nThis was a basic demonstration of trend modeling. Much more could be done, I have other demonstrations on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy. \n \nI hope this was helpful,\n\n*Michael*\n\nMichael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin\n\n#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)\n\n", "meta": {"hexsha": "093241dcbaded407b3bc5636605bd5f96b77b39a", "size": 644594, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "GeostatsPy_trends.ipynb", "max_stars_repo_name": "caf3676/PythonNumericalDemos", "max_stars_repo_head_hexsha": "206a3d876f79e137af88b85ba98aff171e8d8e06", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 403, "max_stars_repo_stars_event_min_datetime": "2017-10-15T02:07:38.000Z", "max_stars_repo_stars_event_max_datetime": "2022-03-30T15:27:14.000Z", "max_issues_repo_path": "GeostatsPy_trends.ipynb", "max_issues_repo_name": "caf3676/PythonNumericalDemos", "max_issues_repo_head_hexsha": "206a3d876f79e137af88b85ba98aff171e8d8e06", "max_issues_repo_licenses": ["MIT"], "max_issues_count": 4, "max_issues_repo_issues_event_min_datetime": "2019-08-21T10:35:09.000Z", "max_issues_repo_issues_event_max_datetime": "2021-02-04T04:57:13.000Z", "max_forks_repo_path": "GeostatsPy_trends.ipynb", "max_forks_repo_name": "caf3676/PythonNumericalDemos", "max_forks_repo_head_hexsha": "206a3d876f79e137af88b85ba98aff171e8d8e06", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 276, "max_forks_repo_forks_event_min_datetime": "2018-06-27T11:20:30.000Z", "max_forks_repo_forks_event_max_datetime": "2022-03-25T16:04:24.000Z", "avg_line_length": 553.2995708155, "max_line_length": 235348, "alphanum_fraction": 0.936648495, "converted": true, 
"num_tokens": 8149, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.47268347662043286, "lm_q2_score": 0.1778108672995868, "lm_q1q2_score": 0.08404825893606313}} {"text": "```python\n%matplotlib inline\nfrom google.colab import files\n```\n\n\n```python\nimport matplotlib.pyplot as plt\nimport math\nimport numpy as np\nnp.random.seed(5)\n```\n\n\n```python\nimport scipy.stats as stats\nimport random\nimport math\nimport pandas as pd\n\n```\n\n\n```python\nfrom sklearn import decomposition\nfrom sklearn import datasets\n\n# datasets\n!pip install ggplot\nfrom ggplot import mtcars\niris = datasets.load_iris()\n```\n\n Collecting ggplot\n Downloading ggplot-0.11.5-py2.py3-none-any.whl (2.2MB)\n \u001b[K 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.2MB 530kB/s \n \u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Collecting brewer2mpl (from ggplot)\n Downloading brewer2mpl-1.4.1-py2.py3-none-any.whl\n Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: patsy>=0.4 in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: cycler in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: statsmodels in /usr/local/lib/python3.6/dist-packages (from ggplot)\n Requirement already satisfied: pytz>=2011k in /usr/local/lib/python3.6/dist-packages (from pandas->ggplot)\n Requirement already satisfied: python-dateutil>=2 in /usr/local/lib/python3.6/dist-packages (from pandas->ggplot)\n Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->ggplot)\n Installing collected packages: brewer2mpl, ggplot\n Successfully installed brewer2mpl-1.4.1 ggplot-0.11.5\n\n\n /usr/local/lib/python3.6/dist-packages/ggplot/utils.py:81: FutureWarning: pandas.tslib is deprecated and will be removed in a future version.\n You can access Timestamp as pandas.Timestamp\n pd.tslib.Timestamp,\n /usr/local/lib/python3.6/dist-packages/ggplot/stats/smoothers.py:4: FutureWarning: The pandas.lib module is deprecated and will be removed in a future version. These are private functions and can be accessed from pandas._libs.lib instead\n from pandas.lib import Timestamp\n /usr/local/lib/python3.6/dist-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. 
Please use the pandas.tseries module instead.\n from pandas.core import datetools\n\n\n# Outline\n\n**We intend for each topic to have a TL;DR, then a more detailed definition and a coding example.**\n\n- [Reference Material](#reference)\n- [Math](#math)\n- [Stats](#statistics)\n- [ML](#ml)\n- [Applied ML](#appliedml)\n- [Data](#data)\n\n\n\n\n\n\n\n# Reference Material\n\n## General\n\n- http://web.stanford.edu/~hastie/ElemStatLearn/\n- https://github.com/josephmisiti/awesome-machine-learning/blob/master/books.md\n\n## Linear Algebra and Probability\n- http://www.deeplearningbook.org/contents/linear_algebra.html\n- http://www.deeplearningbook.org/contents/prob.html\n- https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf\n- https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading18.pdf\n\n## ML\n\n- http://cs231n.github.io/linear-classify/\n\n## Blogs and Posts\n\n- http://cs231n.github.io/optimization-1/\n- https://hamelg.blogspot.com/2015/12/python-for-data-analysis-index.html?view=sidebar\n- http://colah.github.io/posts/2015-09-Visual-Information/\n- http://worrydream.com/refs/Shannon%20-%20A%20Mathematical%20Theory%20of%20Communication.pdf\n- http://varianceexplained.org/r/bayesian-ab-testing/\n\n\n\n\n# Math\n\n## Linear Algebra Review\n\n[chapter 2(dl book)](http://www.deeplearningbook.org/contents/linear_algebra.html) as reference\n\n\n### Eigenvectors, Eigenvalues and Eigendecomposition\n\nTLDR: A square matrix $A$ can be described as a geometric transformation, and an eigenvector $v$ is transformed by that matrix only by a scaling parameter $\\lambda$. So $Av = \\lambda v$. You can use this property to decompose $A$ into several matrix multiplications.\n\n#### Definition of eigenvector, eigenvalue\n\n\n\n- An Eigenvector $v$ of a square matrix $A$, when multiplied by $A$ only alters the scale of $v$. $\\lambda$ is the eigenvalue corresponding to this eigenvector. \n- $$ \\textbf{Av} = \\boldsymbol\\lambda \\textbf{v}$$\n- If $s \\epsilon R $ and $ s \\neq 0 $, then $sv$ is also an eigenvector and has the same eigenvalue. \n\n#### Eigendecomposition\n\n- The eigendecomposition of $\\textbf{A}$ is:\n\n$$ \\textbf{A} = \\textbf{V}diag(\\boldsymbol\\lambda)\\textbf{V}^{-1}$$\n\nWhere\n\n- $\\textbf{A}$ has to be a square matrix. Every real symmetric matrix $\\textbf{A}$ can be decomposed into an expression using only real-valued eigenvectors and eigenvalues.\n- Matrix $\\textbf{A}$ has $n$ linearly independent eigenvectors { $\\textbf{v}^{(1)}$, ... $\\textbf{v}^{(n)}$}, with corresponding eigenvalues {$\\lambda^{(1)}$, ... $\\lambda^{(n)}$}. \n\n- matrix $\\textbf{V}$ has all eigenvectors, one eigenvector per column, $\\textbf{V} = [\\textbf{v}^{(1)}, ..., \\textbf{v}^{(n)}]$.\n- vector $\\boldsymbol\\lambda = [\\lambda^{(1)}, ..., \\lambda^{(n)}]$, with all eigenvalues.\n\n- Can think of A as scaling space by $\\lambda_i$ in direction $v_i$ (think of a unit circle being scaled)\n\n#### Definitions coming from eigendecomposition\n- Eigendecomposition may not be unique. 
(only unique if all eigenvalues are unique).\n- Matrix is **singular** if any of the eigenvalues are 0.\n- Matrix is **positive definite** if all eigenvalues are positive\n- Matrix is **positive semidefinite** if all eigenvalues are positive or 0.\n - This guarantees that $ \\forall \\textbf{x}, \\textbf{x}^T\\textbf{Ax}\\geq0 $\n- Matrix is **negative definite** if all eigenvalues are negative\n- Matrix is **negative semidefinite** if all eigenvalues are negative or 0.\n\n\n\nLet's look at a coding example with eigendecomposition:\n\n\n```python\nX = iris.data\nA = X.T.dot(X) # a square matrix\nV = np.linalg.eig(A)\n\n\n# check that A v = lambda v\nprint('lambda * v', A.dot(V[1][:, 0]) / V[1][:, 0])\nprint('lambda', V[0][0])\n```\n\n lambda * v [9206.53059607 9206.53059607 9206.53059607 9206.53059607]\n lambda 9206.530596067096\n\n\n### SVD and PCA\n\nTLDR: Other ways to decompose matrices.\n\n#### Singular Value Decomposition\n\nTLDR: if you can't do eigendecomposition on a matrix(i.e. matrix is not square) use SVD to decompose that matrix instead.\n\n- Every real matrix has an SVD (e.g. if matrix is not square, eigendecomposition is undefined, so use SVD instead)\n$$\\textbf{A} = \\textbf{UDV}^T $$\n- $$\\textbf{A}(mxn) = \\textbf{U}(mxm)\\textbf{D}(mxn)\\textbf{V}^T(nxn) $$\n- **U** and **V** are orthogonal (inner product is zero).\n- **D** is diagonal and it's diagonal values are singular values of **A**, columns of **U** are left singular vectors (eigenvectors of $\\textbf{AA}^T$), columns of **V** are right-singular vectors (eigenvectors of $\\textbf{A}^T\\textbf{A}$).\n - singular values are the eigenvalues of matrix $\\sqrt{A^T A}$ or $\\sqrt{A A^T}$\n\n#### Moore-Penrose Pseudoinverse:\nTo solve $\\textbf{Ax = y}$, we want $\\textbf{A}^{-1}$ which isn't possible if **A** is not square. We can use the Moore-Penrose pseudoinverse:\n$$\\textbf{A}^+ = \\textbf{VD}^+\\textbf{U}^T $$\nwhere + indicaes pseudoinverse, **U, D, V** are from SVD of **A**\n- $D^{+}$ is reciprocal of its nonzero elements, then transpose.\n\n#### PCA\n\nTLDR: PCA helps us lower the dimensionality of our data while keeping the most relevant information.\n\nUse PCA to get a lower dimensional version of points that requires less memory, if it is okay to lose some precision.\n\n$\\textbf{X}$ is our data with shape $R^{m,n}$. We want a function to approximately encode $\\textbf{X}$ to $\\textbf{C} \\in R^{m, l}$, where $l < n$. We can use $\\textbf{D}$ to encode $\\textbf{X}$ where $\\textbf{D}^T\\textbf{X} = \\textbf{C}$ and $\\textbf{D}$ is composed of the largest $l$ eigenvectors of $\\textbf{X}^T\\textbf{X}$. $D \\in R^{nxl}$. \n- D is unitary ($DD^T=I$) because eigenvectors are orthogonal and they are normalized.\n\n---\n\n*Derivation:*\n\n- For each $ x^i \\epsilon R^n$, find a corresponding code vector, $c^i \\epsilon R^l$ (l < n for less memory)\n- We want an encoding function $f(x) = c$, and a decoding function, $x \\approx g(f(x))$\n- Find D for $g(c) = Dc$ where $D \\epsilon R^{nxl}$\n- PCA constrains columns of D to be orthogonal to each other, and all columns of D to have unit norm (for unique solution)\n- Find optimal $c^{*}$ for each x. Minimize distance between input point x and reconstruction g($c^{*}$) (measure this using L^2 norm):\n - $c* = \\underset{c}{\\operatorname{argmin}} \\|x - g(c) \\|_2^2$\n - ... 
expand, substitute g(c) and do optimization ...\n - $ c = D^T x $\n - $ f(x) = D^T x $\n - $ r(x) = g(f(x)) = D D^T x $\n - to find matrix D, solve this:\n - $D* = \\underset{D}{\\operatorname{argmin}} \\sqrt{\\underset{i,j}\\sum(X_j^i - r(x^i)_j)}$, subject to $D^TD = I_l$\n - $X \\epsilon R^{mxn}$ is all vectors stacked, $ X_{i,:} = X^{i^T}$\n - When l = 1, D is just a vector, d, and you get $ \\underset{d}{\\operatorname{argmax}} Tr(d^TX^TXD)$ subject to $d^Td=1$, and optimization problem can be solved using eigendecomposition.\n - Optimal d is given by eigenvector of $X^TX$, corresponding to largest eigenvalue.\n\n#### PCA Coding Example\n\n\n```python\n# Get PCA with 2 components from Iris Data using sklearn\nX = iris.data\n\npca = decomposition.PCA(n_components=2, svd_solver='full')\npca.fit(X)\nC = pca.transform(X)\nD = pca.components_.T\n\nprint('X shape is ', X.shape)\nprint('C shape is ', C.shape)\nprint('D shape is ', D.shape)\n```\n\n X shape is (150, 4)\n C shape is (150, 2)\n D shape is (4, 2)\n\n\n\n```python\n# Do it manually by getting eigendecomposition of X^T X\nXm = X - X.mean(axis=0)\nd = np.linalg.eig(Xm.T.dot(Xm))\n\n# get top 2 eigenvalues\nD1 = d[1][:, :2]\n\nC1 = Xm.dot(D1)\n\nprint('Actual X[0]', X[0])\nprint('Sklearn reconstruction', C.dot(D.T)[0] + X.mean(axis=0))\nprint('Numpy reconstruction', C1.dot(D1.T)[0] + X.mean(axis=0))\n```\n\n Acutal X[0] [ 5.1 3.5 1.4 0.2]\n Sklearn reconstruction [ 5.08718247 3.51315614 1.4020428 0.21105556]\n Numpy reconstruction [ 5.08718247 3.51315614 1.4020428 0.21105556]\n\n\n\n```python\n## Numerical Methods\n[chapter4(dl book)]\n(http://www.deeplearningbook.org/contents/numerical.html)\n\nhttp://cs231n.github.io/optimization-1/\n\n#### Directional Derivative \n\n### Convex Optimization\n\n#### Newton's method\n\n#### Simplex Algorithm - Linear Programming\n\n#### Quadratic Programming\n\n\n### Non-convex Optimization\n\n#### Finite Differences\n#### Gradient Descent\n#### Conjugate Gradient\n#### BFGS\n#### Hessian\n#### Genetic Algorithms\n#### Differential Evolution\n```\n\n## Probability Review\n[chapter3(dl book)](http://www.deeplearningbook.org/contents/prob.html) as reference\n\n#### Variance/Covariance Equations:\n\nTLDR: Variance is the expected squared distanec from each point to the mean.\n\n$Var(f(x)) = E[ (f(x) \u2212 E[f(x)])^2 ] $\n\n$stdev = \\sqrt(Var) $\n\n$Cov(f(x), g(y)) = \\mathrm{E}[ (f(x) - \\mathrm{E}[f(x)]) (g(y) - \\mathrm{E}[g(y)]) ]$\n - 2 independent variables have 0 covariance.\n - 2 variables with non-zero covariance are dependent.\n - 2 dependent variables can have 0 covariance.\n\n\n```python\n# Compute covariance with numpy\n\nX = iris.data\n\nprint(np.mean((X - np.mean(X, axis=0)) ** 2, axis=0))\n\nprint(np.std(X, axis=0) ** 2)\n```\n\n [ 0.68112222 0.18675067 3.09242489 0.57853156]\n [ 0.68112222 0.18675067 3.09242489 0.57853156]\n\n\n### Baye's Formula\n\nTLDR: A formula for conditional probability distributions\n\nThe probability of getting a model with parameters $\\theta$ given an observation $x$, based on prior beliefs $p(\\theta)$ of the model parameters is: \n\n$$ P(\\theta|x) = \\frac{P(x|\\theta)P(\\theta)}{P(x)}$$\n\n- $P(\\theta)$ is the prior, initial belief in $\\theta$\n- P($\\theta$| x) is the posterior, probability of getting a model $\\theta$ given an observation $x$\n- P(x|$\\theta$) is the likelihood of seeing an observation $x$ given your model $\\theta$\n\n### Common Probability Distributions\n\n#### Uniform Distribution\n\nTLDR: Flat distribution over a specified 
interval.\n\n$$\n\\begin{equation}\n P(x)=\\begin{cases}\n 0, & \\text{if $x b$}.\n \\end{cases}\n\\end{equation}\n$$\n\n- It is a maximum entropy distribution given a specified interval over real numbers.\n\n#### Bernoulli distribution\n\nTLDR: think of a coin\n\n - Single binary R.V.\n - $\\phi \\in [0,1]$\n - $P(x=1) = \\phi $\n - $P(x=0) = 1 - \\phi$\n - $P(x = x) = \\phi^x(1-\\phi)^{1-x}$\n - $\\mathrm{E}_X[x] = \\phi$\n - $\\mathrm{Var}_X(x) = \\phi(1-\\phi) $\n\n#### Multinomial distribution\n\nTLDR: think of dice\n\n - Categorical. Single discrete variable with k different states (k is finite)\n - $\\textbf{p} \\in [0,1]^{k-1}$ where $p_i$ is the $i$th state's probability.\n - $k$th state probability given by $1- \\textbf{1}^T\\textbf{p}$, $\\textbf{1}^T\\textbf{p} \\leq 1$\n \n\n#### Gaussian Distribution and Central Limit Theorem\n\nTLDR: Has a mean and variance, for continuous random variables, and it approximates a ton of other distributions because of the Central Limit Theorem.\n\n$$P(x) = \\frac{1}{{\\sigma \\sqrt {2\\pi } }} e^{ -(x - \\mu)^2 / (2 \\sigma^2) }$$\n\n- Central Limit Theorem - the sum of many independent random variables is approximately normally distributed. \n- Out of all possible probability distributions over real numbers with a specified variance, the normal distribution encodes the maximum amount of uncertainty. In other words, it's a maximum entropy distribution.\n\n#### Exponential and laplace distribution\n\nTLDR: often want a sharp point at x = 0\n\n- Exponential Distribution:\n$$p(x;\\lambda) = \\lambda \\textbf{1}_{x\\geq0}\\exp(-\\lambda x)$$\n- Laplace Distribution\n$$ Laplace(x; \\mu, \\gamma) = \\frac{1}{2\\gamma} \\exp(-\\frac{| x - \\mu |}{\\gamma})$$\n\n\n\n#### Dirac and Empirical distribution\n\nYou can make an empirical distribution by putting all the mass in a probability distribution around the actual points of the data.\n\n- Dirac delta function:\n$$ p(x) = \\delta(x - \\mu) $$\n - zero everywhere except 0\n - infinitely narrow peak where $x = \\mu$\n - Empirical distribution (common use of Dirac delta distribution)\n $$ p(\\textbf{x}) = \\frac{1}{m} \\sum_{i=1}^m \\delta(\\textbf{x} - \\textbf{x}^{(i)})$$\n - Used to define empirical distribution over continuous variables.\n - For discrete variables, empirical distribution is a multinoulli distribution, where probability of each input value is the empirical frequency in the training set.\n - Empirical distribution is the probability density that maximizes the likelihood of the training data.\n\n#### Mixtures of Distributions:\n\nTLDR: break up a distribution instead several smaller ones\n\n- Made up of several component distributions\n- For example, we could first sample a component identity $P(c)$ from multinoulli distribution, which tells us which distribution to sample from $P(x|c)$.\n$$ P(x) = \\sum_iP(c=i)P(x|c = i) $$\n - P(c) is the multinoulli distribution over component identities\n\n**Latent variable**\n- Random variable that we can't observe directly\n- above, $c$ is an example. \n- Latent variables are related to x through joint distribution, i.e. 
$ P(x,c) = P(x|c)P(c)$\n\n**Gaussian mixture model (GMM)**\n\nTLDR: A model consisting of several gaussian distributions.\n\n- $p(\\textbf{x}|c=i)$ are Gaussians.\n- Each component has separately parametrized $\\mathbf{\\mu}^{(i)}$ and $\\mathbf{\\Sigma}^{(i)}$\n- Parameters also specify prior probability: $\\alpha_i = P(c=i)$ given to each component $i$ (prior because it is the model's belief about c before it has observed x.)\n- $P(c|\\textbf{x})$ is a posterior probability\n- This is a universal approximator of densities (any smooth density can be approximated with a gmm with enough components.)\n\n\n### Common functions and useful properties:\n\n\n#### Logistic Sigmoid\n\nSigmoid converts a real number $\\in (-\\inf, \\inf)$ to a probability $\\in [0, 1]$.\n\n$$\\sigma(x) = \\frac{1}{1 + \\exp{(-x)}} $$\n\n\nThe inverse of the sigmoidal function is the logit function $log(\\frac{p}{1 - p})$.\n\n\n```python\ndef sigmoid(x):\n y = []\n for item in x:\n y.append(1/(1 + math.exp(-item)))\n return y\n\nx = np.arange(-10, 10, 0.2)\ny = sigmoid(x)\nplt.plot(x, y)\nplt.show()\n```\n\n#### Softplus Function:\n\n$$\\varsigma(x) = log(1 + \\exp(x))$$, sometimes used as an activation function in neural nets.\n\n\n```python\n def soft_plus(x):\n y = []\n for item in x:\n y.append(math.log(1 + math.exp(item)))\n return y\n \nx = np.arange(-10, 10, 0.2)\ny = soft_plus(x)\nplt.plot(x, y)\nplt.show()\n```\n\n#### Properties of Sigmoid ($\\sigma$) and Softplus ($\\varsigma$):\n$$\\sigma(x) = \\frac{\\exp{(x)}}{\\exp{(x)} + \\exp{(0)}} $$\n\n$$\\frac{d}{dx} \\sigma(x) = \\sigma(x)(1-\\sigma(x)) $$\n\n$$ 1 - \\sigma(x) = \\sigma(-x)$$\n\n$$ log\\sigma(x) = -\\varsigma(-x) $$\n\n$$\\frac{d}{dx}\\varsigma(x) = \\sigma(x)$$\n\n$$\\forall \\in (0,1), \\sigma^{-1}(x) = log(\\frac{x}{1-x})$$\n\n$$\\forall > 0, \\varsigma^{-1}(x) = log(\\exp(x) - 1)$$\n\n$$ \\varsigma(x) = \\int_{-\\infty}^{x} \\sigma(y)dy $$\n\n$$ \\varsigma(x) - \\varsigma(-x) = x$$\n\n- This last one is similar to identity $x^+ - x^- = x $\n\n## Information Theory\n\nhttp://colah.github.io/posts/2015-09-Visual-Information/\n\nhttp://worrydream.com/refs/Shannon%20-%20A%20Mathematical%20Theory%20of%20Communication.pdf\n\n\n\n\n#### Self-information\n\n$$I(x) = -logP(x)$$\n\n#### Entropy\n\nTLDR: Used to quantify the uncertainty in a probability distribution or the minimum number of bits to encode events from a probability distribution.\n\n$$H(X) = \\mathrm{E}_{X \\sim P} I(x) = - \\sum_{i= 1}^n P(x_i)log(P(x_i)) $$\n\n\n#### KL Divergence\n\nTLDR: Measure the difference between 2 probability distribution with the same random variable.\n\nRandom variable $x$, and 2 probability distributions P(x), Q(x)\n\n$$ D_{KL}(P \\parallel Q) = \\mathrm{E}_{X \\sim P}[ \\ log \\frac{P(x)}{Q(x)} ] = \\sum_i P(x_i) log(\\frac{P(x_i)}{Q(x_i)}) $$\n- measures difference between 2 distributions: the extra information sent if we sent a message with symbols drawn from distribution P using code that minimized the length of messages drawn from distribution Q\n- Difference between cross-entropy and entropy.\n- Not a distance measure, not symmetric, i.e. 
$ D_{KL}(P \\parallel Q) \\neq D_{KL}(Q \\parallel P) $\n\n\n#### Cross Entropy\n\nTLDR: Measures the number of bits needed if we encode events from P using the wrong distribution Q.\n\n$$H(P, Q) = -\\sum_{i=1}^n P(x_i) log(Q(x_i))$$\n$$H(P,Q) = H(P) + D_{KL}(P \\parallel Q)$$\n\n- If $P$ and $Q$ are the same distribution, we just get entropy.\n- Cross-entropy loss for random guesses in a binary classifier is $-log(0.5)\\approx0.693$. In this case $P$ are the ground truth labels and $Q$ is our model's predictions.\n- minimizing cross-entropy is the same as minimizing KL divergence.\n\n#### Graphical Models\nTLDR: factorization of a probability distribution using a graph\n\n- We can split up a probability distribution into many factors, like:\n$$ p(a,b,c) = p(a)p(b|a)p(c|b)$$\n- The factorization of a probability distribution with a graph is called a graphical model\n- directed and undirected models (use directed and undirected graphs respectively)\n - Directed Models:\n - factorizations into conditional probability distributions:\n $$ p(\\textbf{x}) = \\prod_{i}p(x_i|Pa_\\zeta(x_i)) $$\n - $Pa_\\zeta(x_i)$ are the parents of $x_i$\n \n - Undirected Models:\n - factorizations into a set of functions(usually not probability distributions)\n - set of nodes that are all connected to eachother in $\\zeta$ is called a clique, $C^{(i)}$.\n - Each clique, $C^{(i)}$ is associated with a factor $\\phi^{(i)}(C^{(i)})$. These are functions, non-negative, but don't have to sum to 1 like a probability distribution.\n - Beacuse they don't sum to 1, we use a normalization constant $Z$.\n $$p(\\textbf{x}) = \\frac{1}{Z} \\prod_{i} \\phi^{(i)}(C^{(i)})$$\n \n\n \n\n\n# Statistics\n\n## Sampling Statistics\n\n\n\n### Population Statistics\n\n\n#### Mean & Median\n- Mean: Sum of values divided by number of values.\n- Median: Middle value of sorted values, the 50th percentile. This is a more robust statistic than mean, because it tends to resist effects of skew or outliers.\n\n\n```python\n# change the index from numbers to the name of the car\nmtcars.index = mtcars['name']\n```\n\n\n```python\n# Mean:\nmtcars.mean(axis=0) # mean of each column\nmtcars.mean(axis=1) # mean of each row\nmtcars.median(axis=0) # median of column\n```\n\n\n\n\n mpg 19.200\n cyl 6.000\n disp 196.300\n hp 123.000\n drat 3.695\n wt 3.325\n qsec 17.710\n vs 0.000\n am 0.000\n gear 4.000\n carb 2.000\n dtype: float64\n\n\n\n\n#### Quantiles\nq-Quantiles partition your data into q subsets of (nearly) equal sizes. 
The median is the 2nd quantile of your data.\n you can get the 25% (1st quantile), 75% (3rd quantile)\n\n\n```python\n# Defined as the 'five num' summary:\n\nfive_num = [mtcars[\"mpg\"].quantile(0), \n mtcars[\"mpg\"].quantile(0.25),\n mtcars[\"mpg\"].quantile(0.50),\n mtcars[\"mpg\"].quantile(0.75),\n mtcars[\"mpg\"].quantile(1)]\n\nprint(five_num)\n\n# IQR(Interquartile range) is a measure of spread (upper quartile-lower quartile\n# 4-quantiles are called quartiles):\nmtcars[\"mpg\"].quantile(0.75) - mtcars[\"mpg\"].quantile(0.25)\n```\n\n [10.4, 15.425, 19.2, 22.8, 33.9]\n\n\n\n\n\n 7.375\n\n\n\n\n```python\n# A boxplot plots these quantities, i.e.\nmtcars.boxplot(column='mpg', return_type='axes', figsize=(8,8))\n\nplt.text(x=0.74, y=22.25, s=\"3rd Quartile\")\nplt.text(x=0.8, y=18.75, s=\"Median\")\nplt.text(x=0.75, y=15.5, s=\"1st Quartile\")\nplt.text(x=0.9, y=10, s=\"Min\")\nplt.text(x=0.9, y=32, s=\"Max\")\nplt.text(x=0.7, y=19.5, s=\"IQR\", rotation=90, size=25)\nplt.show()\n```\n\n#### Skew & Kurtosis\n*Skewness* is the measure of skew or asymmetry of a distribution, and *kurtosis* measures the 'peakedness'.\n\n\n- Mean, Variance, and Standard Deviation are all susceptible to influece of skew and outliers.\n\n\n```python\nnorm_data = np.random.normal(size=100000)\nskewed_data = np.concatenate((np.random.normal(size=35000)+2, \n np.random.exponential(size=65000)), \n axis=0)\nuniform_data = np.random.uniform(0,2, size=100000)\npeaked_data = np.concatenate((np.random.exponential(size=50000),\n np.random.exponential(size=50000)*(-1)),\n axis=0)\n\ndata_df = pd.DataFrame({\"norm\":norm_data,\n \"skewed\":skewed_data,\n \"uniform\":uniform_data,\n \"peaked\":peaked_data})\n\n\ndata_df.plot(kind=\"density\",\n figsize=(10,10),\n xlim=(-5,5))\nplt.show()\n```\n\n\n```python\nprint('skew of graphs')\nprint(data_df.skew())\nprint('\\n')\nprint('kurtosis of graphs')\nprint(data_df.kurt())\n```\n\n skew of graphs\n norm 0.012523\n peaked -0.030491\n skewed 1.004339\n uniform 0.005870\n dtype: float64\n \n \n kurtosis of graphs\n norm 0.004622\n peaked 2.861855\n skewed 1.276163\n uniform -1.199963\n dtype: float64\n\n\n#### Correlation\nA measure of dependence between 2 quantities.\n\nPearson correlation coefficient:\n$$\\rho_{X,Y} = corr(X, Y) = \\frac{cov(X,Y)}{\\sigma_X\\sigma_Y} = \n\\frac{E [ (X-\\mu_X) (Y-\\mu_Y) ] } {\\sigma_X \\sigma_Y} $$\n\nequals +1 in perfectly increasing linear relationship, equals -1 in perfectly decreasing linear relationship. It cannot exceed 1.\n\n\n\n\n```python\n# Computes all pairwise correlation scores\nmtcars.corr(method='pearson')\n```\n\n\n\n\n
\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
mpgcyldisphpdratwtqsecvsamgearcarb
mpg1.000000-0.852162-0.847551-0.7761680.681172-0.8676590.4186840.6640390.5998320.480285-0.550925
cyl-0.8521621.0000000.9020330.832447-0.6999380.782496-0.591242-0.810812-0.522607-0.4926870.526988
disp-0.8475510.9020331.0000000.790949-0.7102140.887980-0.433698-0.710416-0.591227-0.5555690.394977
hp-0.7761680.8324470.7909491.000000-0.4487590.658748-0.708223-0.723097-0.243204-0.1257040.749812
drat0.681172-0.699938-0.710214-0.4487591.000000-0.7124410.0912050.4402780.7127110.699610-0.090790
wt-0.8676590.7824960.8879800.658748-0.7124411.000000-0.174716-0.554916-0.692495-0.5832870.427606
qsec0.418684-0.591242-0.433698-0.7082230.091205-0.1747161.0000000.744535-0.229861-0.212682-0.656249
vs0.664039-0.810812-0.710416-0.7230970.440278-0.5549160.7445351.0000000.1683450.206023-0.569607
am0.599832-0.522607-0.591227-0.2432040.712711-0.692495-0.2298610.1683451.0000000.7940590.057534
gear0.480285-0.492687-0.555569-0.1257040.699610-0.583287-0.2126820.2060230.7940591.0000000.274073
carb-0.5509250.5269880.3949770.749812-0.0907900.427606-0.656249-0.5696070.0575340.2740731.000000
\n
\n\n\n\n\n```python\nmpg = mtcars['mpg']\ncyl = mtcars['cyl']\n\ncov_mc = np.mean((mpg - np.mean(mpg))* (cyl - np.mean(cyl)))\ncov_mc/(np.std(mtcars['mpg']) * np.std(mtcars['cyl']))\n```\n\n\n\n\n -0.85216195942661332\n\n\n\n### Sampling\n\n#### Point Estimates and Sampling\n\nTLDR:\n\n- Population statistics are for an entire dataset. You don't really need anything fancy since you have the whole dataset already!\n- Sample statistics are for a sample of the population. You need fancy statistics since you need to adjust for bias from sampling in order to reflect statistics of the entire population.\n\nThe sample mean is an unbiased estimator of the population mean. Let's get samples from a population with a population mean $\\mu$. Our samples are $X_1, X_2, ... , X_n$ and the sample mean is $E[X] = \\bar{X}$. The expectation of our sample mean, if we were to sample many times from the population, is $E[\\bar{X}] = \\mu$. See the derivation [here](https://onlinecourses.science.psu.edu/stat414/node/167).\n\nLet's make some fake data below, and see that re-sampling many samples from a population and taking the sample mean gives us an unbiased estimator of the population mean.\n\n\n\n\n```python\nnp.random.seed(3)\n\nmu = 10\nsigma = 3\nsample_size = 8\npopulation_size = 50000\npopulation = np.random.normal(mu, sigma, size=population_size)\nsample_means = []\n\nfor _ in range(10000):\n sample = population[np.random.randint(0, len(population), sample_size)]\n sample_means.append(np.mean(sample))\n \n\nprint('Mean of sample means is', np.mean(sample_means), 'and the true mean is', mu)\nprint('Standard deviation of sample means is ', np.std(sample_means))\nprint('Theoretical standard deviation of sample means is', np.sqrt(sigma**2 / sample_size))\n\n```\n\n Mean of sample means is 9.9933643506 and the true mean is 10\n Standard deviation of sample means is 1.05264195357\n Theoretical standard deviation of sample means is 1.06066017178\n\n\nIn the example above, we calculate the variance around our sample mean $\\bar{X}$, which seems quite high at 1.053. This is because the variance around our sample mean depends on the sample size (i.e. if we sampled the whole population, the variance around our sample mean would be 0, since the sample mean would equal the population mean). We can calculate the variance around our sample mean as such:\n\n$$\n\\begin{aligned}\nVar(\\bar{X}) &= Var(\\frac{X_1 + X_2 + ... + X_n}{n}) \\\\\n&= Var(\\frac{X_1}{n} + \\frac{X_2}{n} +... + \\frac{X_n}{n})\\\\\n&= \\frac{Var(X_1)}{n^2} + \\frac{Var(X_2)}{n^2} +... + \\frac{Var(X_n)}{n^2}\\\\\n&= \\frac{1}{n^2} [\\sigma^2] n\\\\\n&= \\frac{\\sigma^2}{n}\n\\end{aligned}\n$$\n\nWe validated this formula in the example above. In summary, the variance of our sample mean is $Var(\\bar{X}) = \\frac{\\sigma^2}{n}$.\n\n\n\nWhat if we now take standard deviations of our samples to estimate the population standard deviation? 
Is it unbiased?\n\n\n\n\n```python\nnp.random.seed(3)\n\nmu = 10\nsigma = 3\nsample_size = 15\npopulation_size = 50000\npopulation = np.random.normal(mu, sigma, size=population_size)\nsample_stds = []\nunbiased_sample_stds = []\n\nfor _ in range(10000):\n sample = population[np.random.randint(0, len(population), sample_size)]\n sample_stds.append(np.std(sample))\n unbiased_sample_stds.append(np.std(sample, ddof=1)**2)\n \n\nprint('Mean of sample stds is', np.mean(sample_stds), 'and the true std is', sigma)\nprint('Mean of unbiased sample stds is', np.sqrt(np.mean(unbiased_sample_stds)))\n\n```\n\n Mean of sample stds is 2.8458362328361426 and the true std is 3\n Mean of unbiased sample stds is 2.9987127401285933\n\n\nWe see in the example above that sample standard deviations are very far from the true standard deviation! \n\nThis is because standard deviation is a biased estimator of the population standard deviation. An unbiased estimator of the population standard deviation is the **sample standard deviation**:\n\n$$s_N = \\sqrt{\\frac{1}{N - 1} \\sum_{i=1}^{N} (x_i - \\bar{x})^2}$$\n\nSee [Bessel's correction](https://en.wikipedia.org/wiki/Bessel%27s_correction) for a derivation.\n\n\n\n\n##### Sufficient Statistics\n\nA statistic is sufficient with respect to an unknown population parameter and a data sample if no other statistic on the data sample can give us additional information about the population parameter.\n\nThe probability of a population parameter given the sufficient statistics is independent of the data sample. In other words, once we calculate the sufficient statistics, we can throw out the data, since no additional statistic from the data will help us estimate the population parameter in a better way.\n\nFor a gaussian generated data, the sufficient statistics to estimate the distribution are the mean and variance. If we are trying to estimate a population mean, the sufficient statistic would be the sample mean.\n\n\n#### Confidence Intervals Around Sample Mean\n\nTLDR: You should add a margin of error to your point estimate to create a \"confidence interval.\"\n\n##### Known Population Standard Deviation\n\nWe saw above that the standard deviation of a sample mean scales as $\\frac{\\sigma}{\\sqrt{n}}$. If we assume that the sample means are normally distributed (i.e. $\\frac{\\bar{x} - \\mu}{\\frac{\\sigma}{\\sqrt(n)}} \\sim N(0,1)$, which is a reasonable assumption for large $n$ due to the Central Limit Theorem), we can calculate our confidence interval as $z * \\frac{\\sigma}{\\sqrt{n}}$\n\nwhere $z$ ($z$-critical value) is the number of standard deviations you would have to go from the mean to capture the proportion of data equal to your confidence interval. 
$\\sigma$ is of the population, and $n$ is your sample size.\n\nLet's get a 95% confidence interval for the mean sample age point estimate:\n\n\n\n```python\n#create population\npopulation1 = stats.poisson.rvs(mu=35, size=50000)\npopulation2 = stats.poisson.rvs(mu=35, size=100000)\npopulation = np.concatenate((population1, population2))\nprint('actual population mean', population.mean())\n\n\nnp.random.seed(5)\nsample_size = 500\nsample_ages = np.random.choice(a=population, size=sample_size)\nsample_mean = sample_ages.mean()\nprint('sample mean', sample_mean)\n\n# 2 tailed distribution, so to get a 95% confidence interval, we need to use 97.5%\n\nz = stats.norm.ppf(q=.975)\nmoe = z * (population.std() / math.sqrt(sample_size))\nprint('margin of error', moe)\nprint('95th% confidence interval:', (sample_mean - moe, sample_mean + moe))\n\n```\n\n actual population mean 35.006366666666665\n sample mean 34.834\n margin of error 0.5186390573204052\n 95th% confidence interval: (34.3153609426796, 35.35263905732041)\n\n\nIn this case, the actual mean lies within the confidence interval. If we sampled the population $m$ times with the same confidence level of 95%, we should expect the actual mean to fall outside of the confidence interval of the sample mean 5% of the time.\n\n----\n\n##### Uknown Population Standard Deviation\n\nhttps://onlinecourses.science.psu.edu/stat414/node/199\n\n*If you **don't** know $\\sigma$ of the population*, you can replace the population standard deviation $\\sigma$ with the unbiased sample standard deviation $s$ to estimate your confidence interval around the sample mean. Previously we assumed that $\\frac{X - \\mu}{\\frac{\\sigma}{\\sqrt(n)}} \\sim N(0,1)$, to get our confidence intervals. Now we have $\\frac{X - \\mu}{\\frac{s}{\\sqrt(n)}}$ instead, which turns out to follow the [T-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution).\n\nSo we can use the $t$-critical value to create our confidence interval around the sample mean. The $t$ value is drawn from the $t$-distribution. 
The $t$-distribution is symmetric and bell-shaped like the normal distribution but has heavier tails (wider, so more likely to produce values that fall far from it's mean).\n\n\nIn the example below, we show how to use the t-distribution to get a confidence interval around a sample mean with unknown $\\sigma$ of the population.\n\n\n```python\nsample_size = 50\nsample_ages = np.random.choice(a=population, size=sample_size)\nsample_mean = sample_ages.mean()\n```\n\n\n```python\n# actually, we are getting t for q = .975 and q = .025\n# we are actually doing mu + moe(q=0.025) and mu + moe (q=0.975)\nt = stats.t.ppf(q=.975, df=sample_size - 1)\nsample_stdev = np.std(sample_ages, ddof=1)\nmoe_t = t * (sample_stdev / math.sqrt(sample_size))\n\nprint('margin of error with t value', moe_t)\nprint('confidence interval:', (sample_mean - moe_t, sample_mean + moe_t))\nprint('population mean', population.mean())\n```\n\n margin of error with t value 1.840016072380871\n confidence interval: (33.11998392761913, 36.80001607238087)\n population mean 35.006366666666665\n\n\n*As sample size increases, the t-distribution approaches the normal distribution.\n\n----\n\n##### Examples\n\nHow do you know how many samples to take to estimate a sample mean?\n\nLet's look at an example: let's say I know the standard deviation of some true population statistic to be 1, how many samples do I need so that I'm 95% sure my sample mean is within 0.1 of the actual mean?\n\n\n\n```python\n# you want your confidence interval to be within .1 unit of the actual mean\nmoe = 0.1\n# for 95% confidence interval:\nz = stats.norm.ppf(q=.975)\n\n# moe = z * (sigma/sqrt(n))\n# so, n = (z * sigma / moe)^2\nsigma = 1\n\nprint('The number of samples would be ', ((z * sigma) / moe) ** 2)\n```\n\n The number of samples would be 384.14588206941244\n\n\n\n```python\n# Let's show that this sample size gives us the correct confidence interval\nmu = 2\nsigma = 1\nsample_size = 384\npopulation = np.random.normal(mu, sigma, 5000)\n\nsample_mean = []\nfor _ in range(10000):\n sample = population[np.random.randint(0, len(population), sample_size)]\n sample_mean.append(np.mean(sample))\n\nprint('Sampled 95% confidence interval', np.percentile(sample_mean, 2.5), np.percentile(sample_mean, 97.5))\nprint('Sample mean is: ', np.mean(sample_mean))\n```\n\n Sampled 95% confidence interval 1.89703332080055 2.0990285951801337\n Sample mean is: 1.9972105309953014\n\n\n#### Bootstrapping Point Estimates with Confidence Intervals\n\nhttps://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf\n\nTLDR: If our data is drawn from some unknown distribution $F$ with unknown mean $\\mu$, and we use a sample mean $\\bar{x}$ as a point estimate of $\\mu$, we can use the ***empirical bootstrap*** to find a confidence interval around $\\bar{x}$.\n\n**Resampling**:\nLabel your data, $x_1,x_2,...x_n$ by drawing a number j from the uniform distribution on {$1, 2, ..., n$}, and take $x_j$ as your resampled value. We sample with replacement.\n\n**Bootstrap Setup:**\n1. $x_1,x_2...x_n$ is a data sample from distribution $F$\n2. $u$ is a statistic computed from this sample\n3. $F^*$ is the resampling distribution\n4. $x_1^*,x_2^*...x_n^*$ is the resampled data. (Think of this as a sample size $n$ being drawn from $F^*$)\n5. $u*$ is the statistic computed from the resample.\n\n**The bootstrap principle says:**\n1. $F^* \\approx F $\n2. 
The variation of $u$ is well approximated by the variation of $u^*$\n\nWe use step 2 of the bootstrap principle to estimate the size of the confidence intervals. See the link above for a justification.\n\n**Example**: Sampled data is $[30, 37, 36, 43, 42, 43, 43, 46, 41, 42]$. Estimate the mean $\\mu$ of the underlying distribution and give an 80% bootstrap confidence interval.\n\nSample mean $\\bar{x}=40.3$. We want to know the distribution of $\\delta = \\bar{x} - \\mu$ which we can approximate using $\\delta^* = \\bar{x}^* - \\bar{x}$, where $\\bar{x}^*$ is the mean of an emperical bootstrap sample.\n\n\n\n```python\n# 1. Perform resampling:\nsample_data = np.array([30,37,36,43,42,43,43,46,41,42])\nx_bar = np.mean(sample_data)\n\n# Let's perform 100 bootstrap samples\nbootstrap_samples_n = 100\nbootstrapped_x_bars = []\nfor n in range(bootstrap_samples_n):\n resample_idx = np.random.randint(0, len(sample_data), len(sample_data))\n bootstrapped_x_bars.append(np.mean(sample_data[resample_idx]))\n\nbootstrapped_x_bars = np.sort(bootstrapped_x_bars)\ndelta_star = bootstrapped_x_bars - x_bar\n```\n\n\n```python\n# 2. Now we can get the 80th percentile of delta_star distribution\nlower_bound_int = delta_star[int(bootstrap_samples_n * .1) - 1]\nupper_bound_int = delta_star[int(bootstrap_samples_n * .9) - 1]\nprint('confidence interval is: ', [x_bar + lower_bound_int, \n x_bar + upper_bound_int])\n```\n\n confidence interval is: [1.5612398911159096, 1.9277749920844274]\n\n\n$*$ By the law of large numbers, we could increase the number of bootstrap samples to get a more and more accurate estimate.\n\n- The bootstrap is based on the law of large numbers, which says that with enough data, the empirical distribution will be a good approximation of the true distribution.\n\n- Resampling doesn't improve our point estimate.\n\n- The distribution of $\\delta = \\bar{x} - \\mu$ describes the variation of $\\bar{x}$ about its center, and distribution of $\\delta^* = \\bar{x}^* - \\bar{x}$ describes the variation of $\\bar{x}^*$ about its center. So, even if the centers $x$ and $\\mu$ are different, the variations of the two centers can be approximately equal.\n\n\n\n\n#### Stratified Sampling\n\nThis is a sampling method where the population is divided into separate groups (strata) so that each strata is a good representation of that strata in the whole population. \n- In proportion allocation, each sampled strata has equal proportions to that of the strata in the total population.\n\n\n#### Rejection Sampling\n\n#### Importance Sampling\n\n#### Reservoir Sampling\n\n- https://en.wikipedia.org/wiki/Reservoir_sampling\n- https://gregable.com/2007/10/reservoir-sampling.html\n\nTLDR: A way to sample streaming data uniformly without wasting memory.\n\nFor example, if I have a twitter stream, how do I sample 100 tweets uniformly, if I only have enough storage for 100 tweets?\n\nFirst, we fill our reservoir (that holds only 100 tweets) with the first 100 tweets. Next, we'll want to process the 101th, 102nd,... nth tweet such that after processing, the 100 elements in the reservoir are randomly sampled amongst all the tweets we've seen so far.\n\n*Solution:*\n- When the 101st item arrives, we need the probability of keeping any element we've seen so far to be $\\frac{100}{101}$. 
That means that we need to get rid of the 101st element with probability $\\frac{1}{101} = 1 - \\frac{1}{101}$.\n- We also need the probabilty of getting rid of any item in the reservoir to be $\\frac{1}{101}$, which can be done with:\n\nP(101th element getting selected) * P(element in reservoir getting chosen as replacement) $$ = \\frac{100}{101}*\\frac{1}{100}= \\frac{1}{101}$$\n\n\nThat means that:\n\n- For the ith round, the probability we keep the ith item coming in is $\\frac{100}{i}$. The probability any element will be removed from the reservoir in that round is $\\frac{1}{i}$, and the probability we will keep any item is $\\frac{100}{i}$.\n\n\n```python\nfrom collections import Counter\nimport numpy as np\n```\n\n\n```python\ndef reservoir_alg(res_capacity=10, stream_size=100):\n # For this example, let's say the reservoir is already filled to capacity\n reservoir = []\n for i in range(1, stream_size):\n if i <= res_capacity:\n reservoir.append(i)\n else:\n # Do reservoir sampling:\n j = np.random.randint(1, i + 1)\n if j <= res_capacity:\n # remove j element and replace with i\n reservoir[j - 1] = i\n return reservoir\n```\n\n\n```python\n# Now let's run this 10,000 times and see if we get uniform distribution\n# over the 100 items we see in total\ncounts = Counter()\nnum_sim = 10000\nfor i in range(num_sim):\n reservoir = reservoir_alg()\n counts.update(reservoir)\n\n# We expect the counter to show that each number in stream_size\n# has an equal probability of occuring:\nprobs = {x:counts[x] / num_sim for x in counts}\n```\n\n\n```python\n# Indeed we see that each has a ~1/stream_size probability of occuring!\nprobs\n```\n\n\n\n\n {1: 0.0985,\n 2: 0.1017,\n 3: 0.1012,\n 4: 0.0975,\n 5: 0.0997,\n 6: 0.0992,\n 7: 0.0975,\n 8: 0.1009,\n 9: 0.1061,\n 10: 0.1024,\n 11: 0.102,\n 12: 0.0988,\n 13: 0.1007,\n 14: 0.1024,\n 15: 0.103,\n 16: 0.1027,\n 17: 0.1003,\n 18: 0.1006,\n 19: 0.103,\n 20: 0.102,\n 21: 0.0981,\n 22: 0.1021,\n 23: 0.1002,\n 24: 0.0986,\n 25: 0.1028,\n 26: 0.099,\n 27: 0.1024,\n 28: 0.0993,\n 29: 0.1033,\n 30: 0.1034,\n 31: 0.1022,\n 32: 0.1046,\n 33: 0.0978,\n 34: 0.0984,\n 35: 0.0989,\n 36: 0.1007,\n 37: 0.1058,\n 38: 0.1017,\n 39: 0.1053,\n 40: 0.1038,\n 41: 0.1006,\n 42: 0.1054,\n 43: 0.0988,\n 44: 0.1046,\n 45: 0.0943,\n 46: 0.1009,\n 47: 0.097,\n 48: 0.1018,\n 49: 0.1036,\n 50: 0.0921,\n 51: 0.1003,\n 52: 0.0996,\n 53: 0.1056,\n 54: 0.0998,\n 55: 0.1004,\n 56: 0.0988,\n 57: 0.0993,\n 58: 0.0999,\n 59: 0.1032,\n 60: 0.1016,\n 61: 0.0987,\n 62: 0.1019,\n 63: 0.1037,\n 64: 0.1012,\n 65: 0.1003,\n 66: 0.1022,\n 67: 0.0975,\n 68: 0.1002,\n 69: 0.0975,\n 70: 0.1003,\n 71: 0.1025,\n 72: 0.1076,\n 73: 0.105,\n 74: 0.1019,\n 75: 0.0987,\n 76: 0.0976,\n 77: 0.1014,\n 78: 0.1013,\n 79: 0.1013,\n 80: 0.1002,\n 81: 0.0984,\n 82: 0.1036,\n 83: 0.108,\n 84: 0.0994,\n 85: 0.0992,\n 86: 0.1011,\n 87: 0.1027,\n 88: 0.0966,\n 89: 0.0962,\n 90: 0.0976,\n 91: 0.0982,\n 92: 0.102,\n 93: 0.1024,\n 94: 0.1053,\n 95: 0.1021,\n 96: 0.1035,\n 97: 0.1026,\n 98: 0.1063,\n 99: 0.0976}\n\n\n\n\n## Significance Testing:\n\nhttps://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading18.pdf\n\nTLDR: Signficance Testing helps us test if data we collect matches some hypothesis about the data.\n\nIn other words, Significance Testing helps us test if our data is in the region we expect under some default assumption called the Null Hypothesis (or Null Distribution). 
If the data is significantly outside the Null Distribution, we reject the Null Hypothesis.\n\nTerms:\n\n- $H_0$ Null Hypothesis: default assumption. This is usually a Hypothesis that nothing interesting is happening in the data, i.e. 'the click-through-rate (CTR) is not increasing with this new button.'\n- $H_A$ Alternative Hypothesis: This is the opposite of $H_0$, i.e. 'the CTR is increasing with this new button.' The Alternative Hypothesis is accepted if we reject the null hypothesis. \n- $X$: the test statistic, which we compute from the data\n- Null Distribution: Probability distribution of our data assuming $H_0$ is true\n- Significance level $\\alpha = P$(reject $H_0$ | $H_0$ is true)\n- Power = $P$(reject $H_0$ | $H_A$ is true)\n\n\n\n\n\n\n### How to run a significance test:\n\n1. Design the experiment (look at [AB Test Design](#abtest-design)), choose a test statistic $X$ to collect. You must also choose a null distribution, $f(x|H_0)$ and the alternative distribution $f(x|H_A)$.\n2. Decide if test is one or two-sided based on $H_A$ and the null distribution.\n - **Two-sided Hypothesis**\n - Two sided if you care if the test statistic is greater or less than the Null distribution. \n - Example: Decide whether a button on a website has the same click-through-rate (CTR) compared to another button. We are testing $\\mu = \\mu_0$ where $\\mu$ is the CTR of the new button and $\\mu_0$ is of the old button.\n - **One-sided Hypothesis**\n - Example: You want to know if a new button has a lower or higher CTR than a new one, but not both. You either pick: \n - $\\mu > \\mu_0$ one-sided-greater\n - $\\mu < \\mu_0$ one-sided-less\n3. Pick a significance level $\\alpha$ for rejecting the null hypothesis.\n - The significance level is the probability of seeing a test statistic $X$ given that the Null Hypothesis is true, and then rejecting the Null Hypothesis. For example, if we choose $\\alpha=0.05$, any test statistic with a probability of occuring less than 0.05 under the Null distribution makes us reject the Null Hypothesis.\n4. Run the experiment to collect data, $x_1, x_2, ..., x_n$\n5. Compute the test-statistic\n6. Compute the $p$-value corresponding to $X$ using the null distribution.\n - **$p$-value**:\n - The p-value is the probability of observing your test statistic (in step 5) under the Null Distribution.\n7. If $p<\\alpha$, reject the Null Hypothesis and accept the alternative hypothesis. Otherwise, fail to reject the Null Hypothesis.\n\n#### Type-1 Error\n\nThe probability of rejecting $H_0$ when $H_0$ is in fact true. This is also called a **false positive** because we incorrectly reject $H_0$, as opposed to a true positive when we reject $H_0$ when $H_A$ is true.\n\n\nLet's say we run the same test 100 times, and the test has a significance level of .05. If we assume the Null Hypothesis is true, we would expect to reject the Null Hypothesis 5 times. Notice that the probability of a Type-1 Error is 0.05 (5 / 100), which is equivalent to the significance level. Thus, **the significance level is equal to the probability of a type-1 error.**\n\n\nSee the example below related to Type-1 error. 
We sample from a distribution and test the Null Hypothesis 40,000 times with a 0.05 significance level, and we show that we get 5% chance of getting a Type-1 Error.\n\n\n```python\n# Type-1 Error Example\nnull_mu = 10\nnull_std = 6\nexperiment_sample_size = 20\nsignificance_level = 0.05\nN = 40000\n# Null Hypothesis is that our sampled mu = null_mu\n\nfalse_positive = 0\nfor _ in range(N):\n # run 1000 experiments\n exp = np.random.normal(loc=null_mu, scale=null_std, size=20)\n exp_mu = np.mean(exp)\n \n # do a 1-Sample Z-test, we divide by std of the sample mean\n z = (exp_mu - null_mu) / (null_std / np.sqrt(experiment_sample_size))\n test_statistic = stats.norm.cdf(z)\n if test_statistic < (significance_level / 2)\\\n or test_statistic > (1 - significance_level / 2):\n false_positive += 1\n\nprint('The Type-1 Error is: ', false_positive / N)\n```\n\n The Type-1 Error is: 0.049325\n\n\n#### Type-2 Error\n\nThe probability of accepting $H_0$ when $H_0$ is false. (i.e. **false negative**, since we should have rejected $H_0$ but didn't).\n\n\n#### Power\n\n\nhttps://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading17b.pdf\n\nThe **power** is $P$(reject $H_0$ | $H_A$), so in other words it is $1$ - $P$(type 2 error).\n\n\nThe shaded red region of $f(x|H_0)$ represents the significance level, and shaded violet region under $f(x|H_A)$ represents the power. Both tests have the same significance level, but if $f(x|H_A)$ has more overlap with $f(x|H_0)$, the power is lower.\n\nIf $x$ is drawn from $H_A$, under a high power test, it is very likely that it will be in the rejection region.\n\nIf $x$ is drawn from $H_A$, under a low power test, it is very likely that it will be in the non-rejection region.\n\nSee [AB Test design](#abtest-design) for how to control the power of a test.\n\n\n**Example 1**\n\nLet's say your CEO says that people open the company's app 2 times a day. You want to see if the app open-rate has increased over the past week. Assume the app open-rate follows a normal distribution with unknown mean $\\mu$ and known population variance of 4.\n\n- $H_0$: $\\mu = 2$\n- $H_A$: $\\mu > 2$\n- Data: 1, 2, 3, 6, 0\n\nAt a significance level $\\alpha=0.05$, should we reject the null hypothesis and tell the CEO that the app open-rate has increased?\n\n\n```python\nfrom scipy.stats import norm\nimport scipy.stats as stats\n```\n\n\n```python\n# Use a z test bc we have normal data and a known variance:\ndata = np.array([1, 2, 3, 6, 0])\nN = len(data)\nmean_open_rate = np.mean(data)\nnull_hypothesis_open_rate = 2\nknown_variance = 4\nsigma = np.sqrt(known_variance)\nsqrt_N = np.sqrt(N)\n\n# calculate our Z statistic since our sample mean comes from a normal distribution\nz = (mean_open_rate - null_hypothesis_open_rate) / (sigma / sqrt_N)\nz\n```\n\n\n\n\n 0.44721359549995787\n\n\n\n\n```python\n# Find P(Z > z)\nprint('Probability that we get this mean open-rate if the Null Hypothesis is true')\nprint(1 - stats.norm.cdf(z))\n```\n\n Probability that we get this mean open-rate if the Null Hypothesis is true\n 0.327360423009\n\n\nThe p-value $ p > \\alpha$, so we do not reject the null hypothesis. In other words, The mean app open-rate is not greater than 2 with a 5% significance level.\n\n#### P-Values Recap\n\nTLDR: P-values help us decide if something is \"significant\" or not. 
The lower the p-value, the more significant our result is.\n\n\nThe p-value is the probability of observing our test statistic, $X$ under the Null Distribution. The lower the p-value, the less likely you would observe $X$, assuming that the Null Hypothesis is true.\n\n### Significance Testing In-Practice\n\nSignificance testing depends on the distribution of your data. If we have continuous data, we usually assume that the Null Distribution is a fat-tailed normal distribution and use the **T-test** to test the Null Hypothesis. If we have data of positive counts, we usually use the **Chi-square** test instead.\n\n### T-test\n\nWhen we pick our distribution in significance testing, we don't always know $\\sigma$ (population standard deviation), so we cannot use the Z-test like in Example 1 above. In these instances, we use a t-test.\n\nUnder the t-test we use a [t-distribution](https://en.wikipedia.org/wiki/Student's_t-distribution#Derivation), which is shaped like the normal distribution and has a parameter $df$ called the degrees of freedom. When $df$ is small, the tails are fatter than the normal distribution, and as $df$ increases, it looks more like the standard normal distribution.\n\nThere are two types of T-tests used in practice, the one-sample and two-sample T-test, described below with examples.\n\n#### One Sample T-test\nUse this to test if you want to see if a sample mean equals a hypothesized mean ($\\mu_0$) of some population where the population $\\sigma$ is unknown. It's called One Sample because you only have one sample of data from the population.\n\n- *Data*: $x_1, x_2, ..., x_n$\n- *Assume*: The data are independent normal samples coming from the population. $x_i$ ~ $N(\\mu_0, \\sigma ^ 2)$ where $\\mu_0$ and $\\sigma$ are unknown.\n- *Null hypothesis*: $\\bar{X} = \\mu_0$\n- *Test statistic*: $t$, called the studentized mean: \n$$ t = \\frac{\\bar{X}-\\mu_0}{\\frac{s}{\\sqrt{n}}} $$\nand s is the sample standard deviation ([here](https://colab.research.google.com/notebook#fileId=1tlrfQPy7NcuIppzuz4xGdcFnrHyA4wtp&scrollTo=T2v_K9wLK8G3) for formula)\n - If the sample mean is normal with mean $\\mu_0$, the studentized mean (t) follows a t-distribution [here](http://en.wikipedia.org/wiki/Student\u2019s_t-distribution#Derivation)\n\n- *Null Distribution*: $f(t|H_0)$ is the probability distribution which follows th T-distribution $T$~$t(n-1)$ \n\n#### Example 2: \nLet's do an example similar to [Example 1](https://colab.research.google.com/notebook#fileId=1tlrfQPy7NcuIppzuz4xGdcFnrHyA4wtp&scrollTo=N_4FN67QjdQV), but now the variance is unknown.\n\nLet's say you need the battery life of a cell-phone produced at your company to be 2 hours. You sample 5 data points from the assembly line. 
Is it possible that the battery life on the assembly line is not 2 hours?

- $H_0$: $\mu = 2$
- $H_A$: $\mu \neq 2$
- Data: 1, 2, 0, 1, 0

At a significance level $\alpha=0.05$, should we reject the null hypothesis and conclude that the battery life is not 2 hours?




```python
# Use a t test because we don't know mean or population variance:
data = np.array([1, 2, 0, 1, 0])
N = len(data)
mean_life = np.mean(data)
null_hypothesis_life = 2


sigma = np.std(data, ddof=1)
sqrt_N = np.sqrt(N)

# calculate our t statistic
t = (mean_life - null_hypothesis_life) / (sigma / sqrt_N)
t
```




    -3.2071349029490928




```python
# p-value = P(|T| > |t|) = 2 * (1 - t_dist(|t|, df))
df = N - 1
p_value = 2 * (1 - stats.t.cdf(abs(t), df))

print('The sample mean battery life is', mean_life)
print('The p-value is', p_value)

print('our p-value < .05 so we reject the null hypothesis. Good luck at your company')
```

    The sample mean battery life is 0.8
    The p-value is 0.03267792333680308
    our p-value < .05 so we reject the null hypothesis. Good luck at your company


#### Two Sample T-test
We use this test when we want to compare the means of samples from 2 different populations, and we don't know the mean or variance of the 2 populations. It is called Two Sample because we sample from 2 populations.


- *Data* + *Assumptions*: 2 sets of data drawn from normal distributions:

 $ x_1, x_2, ... , x_n$ ~ $N(\mu_1, \sigma^2) $ \\
 $ y_1, y_2, ... , y_m$ ~ $N(\mu_2, \sigma^2) $
 
 $\mu_1, \mu_2,$ and $\sigma$ are all unknown. Both distributions have the same variance*. The number of samples in each group can differ.

- *Null hypothesis*: $\mu_1 = \mu_2$
- *Test statistic:*
$$t = \frac{\bar{x} - \bar{y}}{s_p} $$
where $s_p^2$ is the pooled variance:
$$ s_p^2 = \frac{(n-1)s_x^2 + (m-1)s_y^2}{n + m - 2} (\frac{1}{n} + \frac{1}{m}) $$
$s_x^2$ and $s_y^2$ are the sample variances of the $x_i$ and $y_j$. 
- *Null distribution*:
$f(t|H_0)$ is the pdf of $T$ ~ $t(n+m-2)$

*There is also a version of the two-sample t-test where the 2 groups have different variances. The test statistic is a little more complicated, and the test is called Welch's t-test. https://en.wikipedia.org/wiki/Welch%27s_t-test

**Example 3:**
Suppose you write a new prompt for your donation website that you think will increase the amount of donations you get. You then show two groups of users the different versions and collect the amount of money donated. 
Which button is better?\n\n\n\n```python\n\nN1 = 800\nN2 = 779\n# let's sample from the two populations\nbutton_one_obs = np.random.normal(3, 2.0, N1)\nbutton_two_obs = np.random.normal(3.3, 2.0, N2)\n\nmu1 = np.mean(button_one_obs)\nmu2 = np.mean(button_two_obs)\nstd1 = np.std(button_one_obs, ddof=1)\nstd2 = np.std(button_two_obs, ddof=1)\n\n# let's calculate the two-sample t-test manually\npooled_variance = ((N1 - 1) * std1**2 + (N2 - 1) * std2**2) / (N1 + N2 - 2)\npooled_variance *= ((1/N1) + (1/N2))\nt = (mu1 - mu2) / np.sqrt(pooled_variance)\n\nprint('t', t)\n\ndf = N1 + N2 - 2\np_value = 2 * (1 - stats.t.cdf(abs(t), df))\n\n\nprint('p', p_value)\n\n```\n\n t -3.6115402379982764\n p 0.0003139185602183403\n\n\n\n```python\n# we can also get the same result using stats.ttest_ind_from_stats\nt, p = stats.ttest_ind_from_stats(mu1, std1, N1, mu2, std2, N2)\n\nprint('t', t)\nprint('p', p)\n```\n\n t -3.6115402379982764\n p 0.0003139185602183564\n\n\nSince our $p$-value is smaller than $\\alpha$ = .05, we can reject the null hypothesis and conclude that there is a difference in the means of the 2 samples. Since $\\mu$ from button two is bigger, we can conclude that button two is better.\n\n#### ANOVA\nTest for comparing 2 or more group means. It generalizes the two-sample t-test to more than 2 groups/samples. ANOVA is more conservative than using multiple two sample t-tests for each combination of 2-samples because of Multiple Comparisons Problem (explained in AB Testing Gotchas). When you compare your sample means, you use the F-statistic.\n\n\n\n---\n\n#### Chi-Square Tests\n\n##### Chi-Square Test for goodness of fit\n\nThis test is used to determine whether a set of categorical data came from a hypothesized discrete probability distribution (i.e. think data of counts, like rolling a dice, and testing whether the dice is fair).\n\n\nThe test statistic is the chi-square statistic, and null distribution follows a chi-square distribution, $\\chi^2(df)$ where $df$ is the degrees of freedom. This test is used to see if discrete data fits a specific probability mass function.\n\n- *Data:* Observed count $O_i$ for each possible outcome $w_i$, with $k$ outcomes.\n- $H_0$ = Data was drawn from a specific discrete distribution.\n- $H_A$ = Data was not drawn from $H_0$.\n- *Test Statistic*: From $H_0$ we can get a set of expected counts $E_i$ ($i = 1, 2, ..., k$) for each outcome $k$. We compare the observed counts $O_i$ to the expected counts $E_i$. There are 2 statistics we can use: Likelihood ratio statistic, $G$*, and Pearson's chi-square statistic $X^2$.\n\n$$G = 2 * \\sum_{i=1}^k O_i ln(\\frac{O_i}{E_i})$$\n$$ X^2 = \\sum_{i=1}^k \\frac{(O_i - E_i)^2}{E_i}$$\n\n- *Degrees of freedom*: $n-1$ for $n$ data points\n\n- *Null distribution*: Assuming $H_0$, both $G$ and $X^2$ approximately follow a chi-square distribution with $n-1$ degrees of freedom. See [here](https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading19.pdf) for a derivation.\n\n*If $G$ is used, this is also called a $G$-test or a likelihood ratio test.\n\n\n---\n##### Chi-Square Test for independence:\n\nUse this to determine whether two observed categorical variables are independent or not (i.e. is the click-through rate of a button independent of gender?). 
(See example 4 below)\n\nAll same as above except:\n- $H_0$ = Categorical variables of population are independent.\n- $H_A$ = Categorical variables of population are dependent.\n- *df*: $(m-1)(n-1) $ where $m$ is the number of possibilities for the first categorical variable, and $n$ is the number of possibilities for the second categorical variable.\n\n**Example 4**\n\nSuppose you design a new Sign-up button for your website and create a new homepage for your website by changing this button. You show this version, Landing Page B, and the original version, Landing Page A to different users and monitor your sign-ups for each session. The data collected is summarized in the table below. Can you determine whether the button helps with the number of sign ups with a significance of 10%?\n\n\n| | Landing Page A | Landing Page B | Total |\n|---------------------|----------------|----------------|-------|\n| **Observed Sign-ups** | 135 | 170 | 305 |\n| **Observed No signups** | 365 | 328 | 693 |\n| **Total** | 500 | 498 | 998 |\n\nOur null hypothesis is that the type of landing page is independent of whether someone signs up on your website or not.\n\nFrom this table, we can take the sample of total sign-ups and estimate what proportion of the whole population would sign up, i.e. $305/998$ is the estimated proportion of total sign-ups we should expect. We can now make a table of expected values under the null hypothesis, and compare the expected counts to the observed counts.\n\n| | Landing Page A | Landing Page B |\n|---------------------|----------------|----------------|\n| **Expected Sign-ups** | 152.81 | 152.19 |\n| **Expected No signups** | 347.19 | 345.81 |\n\n\n\n\n\n\n\n```python\n# let's calculate the test-statistic manually\nobserved = [135, 170, 365, 328]\nexpected = [152.81, 152.19, 347.19, 345.81]\nchi_sq_stat = sum([(e - o) ** 2 / e for o, e in zip(observed, expected)])\nprint(chi_sq_stat)\ndf = (2-1)*(2-1)\n```\n\n 5.990831022612689\n\n\n\n```python\n# stats.chisquare is equivalent as the above calculation\nstats.chisquare(observed, expected, ddof=df)\n```\n\n\n\n\n Power_divergenceResult(statistic=5.990831022612689, pvalue=0.050015840621105236)\n\n\n\nBased on these results, we can reject our null hypothesis that the landing pages are independent with a confidence of 10%. Thus, we can conclude that landing page B is better and the button does help with sign-ups!\n\n\n---\n#### Difference between t-test and chi-square\n- A T-test is used for continuous variables, and a chi-square test is used for categorical variables.\n\n#### Kolmogorov-Smirnov (KS) Test\n\nTest whether a sample of data came from a specific distribution, can be categorical or continuous.\n\n#### Anderson-Darling Test\n\nSame as KS test, but more sensitive.\n\n\n\n## [Designing an AB Test](#abtest-design)\n\nTLDR: Before running a test, you need to design it to make sure you will be able to measure a certain effect size within some probability bound. This is usually controlled by the sample size of your test.\n\n### Power Calculation\n\nWhen designing an AB Test, you need to know how many samples to collect to measure a certain effect size. The effect size is the difference in means or proportions of some event you are testing between a control group and test group. You can use the power calculation to determine what is the needed sample size for your experiment to achieve a certain desired power (i.e. 
recall that power is the probability you reject $H_0$ if $H_A$ is true).\n\nWe have a different power calculation formula for continuous variables (difference in means) and for categorical variables (difference in proportions):\n\n**Sample size for difference in means:**\n$$n \\propto \\frac{\\sigma^2(Z_\\beta + Z_{\\alpha/2})^2}{difference^2} $$\n\nwhere:\n- $n$: sample size in each group\n- $\\sigma$: standard deviation of the outcome variable\n- $Z_\\beta$: The desired power\n- $Z_{\\alpha/2}$: Desired level of statistical significance( p-value)\n- $difference$: Effect size, i.e. the difference in means between $H_0$ and $H_A$\n\nAs an example, you need to pick the sample size so that you have enough power (at least 0.9) to detect a difference of values greater than 1.\n\n**Sample size for difference in proportions:**\n$$n \\propto \\frac{(\\bar{p})(1 - \\bar{p})(Z_\\beta + Z_{\\alpha/2})^2 }{(p_1 - p_2)^2} $$\n\nwhere:\n- $n$: sample size in each group (assumes equal sized groups)\n- $(\\bar{p})(1 - \\bar{p})$: measure of variability (similar to standard deviation)\n- $Z_\\beta$: The desired power\n- $Z_{\\alpha/2}$: Desired level of statistical significance( p-value)\n- $p_1 - p_2$: Effect size (the difference in proportions)\n\n***could not find a good reference on this formula***\n\nAs sample size increases, power increases. We need more data to keep the same power if:\n- Our desired effect size increases\n- Variance increases\n- Our significan level decreases\n\n\n```python\n# https://stackoverflow.com/questions/15204070/is-there-a-python-scipy-function-to-determine-parameters-needed-to-obtain-a-ta\nfrom scipy.stats import norm, zscore\n\ndef sample_power_probtest(p1, p2, power=0.8, sig=0.05):\n z = norm.isf([sig/2]) #two-sided t test\n zp = -1 * norm.isf([power]) \n d = (p1-p2)\n s = 2*((p1+p2) /2)*(1-((p1+p2) /2))\n n = s * ((zp + z)**2) / (d**2)\n return n[0]\n\ndef sample_power_difftest(d, s, power=0.8, sig=0.05):\n z = norm.isf([sig/2])\n zp = -1 * norm.isf([power])\n n = s * ((zp + z)**2) / (d**2)\n return int(round(n[0]))\n```\n\n\n```python\nsample_power_probtest(0.5, 0.75, power=0.9)\n```\n\n\n\n\n 78.80567296080467\n\n\n\n\n```python\nimport statsmodels.stats.api as sms\nes = sms.proportion_effectsize(0.5, 0.75)\n```\n\n /usr/local/lib/python3.6/dist-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n from pandas.core import datetools\n\n\n\n```python\nsms.NormalIndPower().solve_power(es, power=0.9, alpha=0.05, ratio=1)\n```\n\n\n\n\n 76.65294037206691\n\n\n\n## AB Testing Gotchas\n\nProblems with A/B Testing:\nhttp://varianceexplained.org/r/bayesian-ab-testing/\n\n\n### P-hacking\n\nP-hacking is when you tweak your data in an attempt to hack your p-value into a number that you want. If you are desperate to get a certain p-value, you can always manipulate your data or your test in a way that gives you the desired result.\n\nTo avoid p-hacking, avoid manipulating your data or test. Additionally, you can calculate confidence intervals to see how precise your result is, instead of solely relying on p-value to reject or accept the Null Hypothesis.\n\nhttps://en.wikipedia.org/wiki/Data_dredging\n\n\n### Peeking\n\nWhen you run a test, you may be tempted to stop your A/B test once the $p$-value reaches a certain threshold, like .05, but in doing so, you risk jumping to false conclusions too soon. 
In addition, if you end your test early, the power of your test decreases (see Designing an AB Test above), making it less likely that $H_A$ is true when you reject $H_0$.\n\n- One solution is to decide the number of samples before your test begins (you can use the power calculation above)\n- You can also correct for data peeking. Choose intervals at which you will peek, then apply a correction based on the number of peeks and the interval of peeking.\n\nhttps://www.talyarkoni.org/blog/2010/05/06/the-capricious-nature-of-p-05-or-why-data-peeking-is-evil/\n\n### Multiple Comparisons Problem\n\nThe probability of getting at least one significant result due to Type-1 error (false positive) goes up as you conduct more and more comparisons for the same test.\n\nWhen you have multiple hypotheses that you want to test (e.g. trial rate, subscription rate, time to subscription), you might think to test each hypothesis separately under the same significance level. Let's say you are testing 20 hypotheses for the same test with a significance level of .05. What is the probability of observing at least one significant result? \\\\\n\n$P$(at least one significant result)$= 1 - P$(no significant result) \\\\\n$ = 1 - (1 - .05)^{20}$ \\\\\n$\\approx .64$ \\\\\n\nThus, there is a 64% chance that we would observe at least one significant result when there is actually no significant result if we conduct 20 comparisons.\n\n#### Bonferroni Correction\n\nTo correct for the above error, we can set the significance level to $\\alpha/n$, where $n$ is the number of comparisons you do. This can make the test too conservative; there are other p-value corrections not mentioned here (see [here for other methods](https://en.wikipedia.org/wiki/Family-wise_error_rate#Controlling_procedures)).\n\nSo, in the above case, $\\alpha$ becomes .0025, and the probability to discover at least one significant result is $\\approx .0488$\n\n\n\n\n\n\n## Bayesian AB Testing\n\nGood blog posts on Bayesian AB testing:\n\n- http://varianceexplained.org/statistics/beta_distribution_and_baseball/\n- http://varianceexplained.org/r/empirical_bayes_baseball/\n- http://varianceexplained.org/r/credible_intervals_baseball/\n- http://varianceexplained.org/r/bayesian_fdr_baseball/\n- http://varianceexplained.org/r/bayesian_ab_baseball/\n\n\n\n\n#### Beta Distribution:\n\nTLDR: We use the beta distribution when we have binary data. It is the distribution around the probability of success given # of successes and # of failures.\n\nThe beta distribution is conjugate to the binomial distribution. Distributions A and B are conjugate if a likelihood distribution (A) times a prior distribution (B), gives back a posterior with distribution (B).\n\nThe binomial distribution gives us a distribution around the number of successes you will have in $n$ trials when the probability of success for any trial is $p$.\n\nhttp://varianceexplained.org/statistics/beta_distribution_and_baseball/\n\nIf $P(X) \\tilde{} \\text{Beta}(\\alpha, \\beta)$\n- $E[X] = \\frac{\\alpha}{\\alpha + \\beta}$ ( [derivations](https://en.wikipedia.org/wiki/Beta_distribution#Mean))\n- $var[X] = \\frac{\\alpha\\beta}{(\\alpha + \\beta)^2(\\alpha + \\beta + 1)} $\n\nWith algebra, we can get\n- $\\alpha = (\\frac{1-\\mu}{\\sigma^2} - \\frac{1}{\\mu})\\mu^2$\n- $\\beta = \\alpha(\\frac{1}{\\mu} - 1)$\n\n\n**Example**\n\nLet's say we expect around 27% of our users to sign up to our newsletter from a button on the home page from last quarter results, and with a variance of 0.00065482. 
We can model this using a beta distribution below:\n\n\n```python\ndef _return_params_beta(mean, var):\n alpha = (((1 - mean) / var) - (1 / mean)) * mean**2\n beta = alpha * (1 / mean - 1)\n return alpha, beta\n```\n\n\n```python\n_return_params_beta(.27, 0.00065482)\n```\n\n\n\n\n (80.99966189181761, 218.99908585565498)\n\n\n\n\n```python\na, b = 81, 219\n# we can get sigma using scipy\nsigma = stats.beta.stats(81, 219, moments='mvsk')[1]\n\n```\n\n\n```python\n# Choose alpha and beta based on mean and sigma\na, b = 81, 219\nbeta_dist = stats.beta(a, b)\nx = np.linspace(0, 1, 1002)[1:-1]\nplt.plot(x, beta_dist.pdf(x))\nplt.show()\n# we see the mean is .27, and the beta distribution gives us a distribution around the sign-up rate\n```\n\n#### Updating Beta Distribution\n\nLet's say we start a new quarter and we want to update our posterior to reflect the number of sign-ups we actually observed. The beta distribution is appropriate here because we can update it very easily. The new beta distribution is:\n\n$\\beta(\\alpha_0 +$ signups$, \\beta_0 +$ failed_signups)\n\nHalfway into the second quarter, we gather some stats and see that out of 300 new visitors, we only had 100 signups. We can easily update our prior to get our new posterior distribution, ~$\\beta(81 + 100, 219 + 200)$, which looks like:\n\n\n\n\n```python\na2, b2 = 81 + 100, 219 + 200\nbeta_dist2 = stats.beta(a2, b2)\nx2 = np.linspace(0, 1, 1002)[1:-1]\nplt.plot(x, beta_dist.pdf(x))\nplt.plot(x2, beta_dist2.pdf(x2))\nplt.show()\n```\n\nThe green curve (posterior beta distribution) has a mean of .303. The beta distribution is good because we can incorporate what we expect the probability of sign-ups to be (prior beta distribution) into the current data we observe.\n\n#### Empirical Bayes Estimates\n\nUse Emperical Bayes if you want to do a fair comparison between estimates derived from your data when some estimates have very few samples.\n\nIn Emperical Bayes, we estimate our prior based on observed data, whereas in Bayesian Methods (like Bayesian AB testing), we keep our prior fixed before we observe the data.\n\n\nTake batting average for example. Some players bat 10 times, some bat 100s of times. How do we compare all players on an equal footing? [This is an example of Emperical Bayes Estimation](http://varianceexplained.org/r/empirical_bayes_baseball/). You compute the prior using all the data, and then multiply the prior by the likelihood of each player's batting average to get an Empirical Bayes Estimate. E.g. This graph plots the posterior batting average (Empirical Bayes) compared the actual batting average: the values less than and greater than the prior average get closer to the prior average but not lesser or greater.\n\n*shrinkage*: all values get pushed towards the mean of the prior when we update our data with the prior. \\\\\n\n\nWith the new 'shrunk' data, we don't have to worry about having less counts for one player and more counts for another, since we update each players posterior performance with the overall prior average performance. However, we still want an interval rather than a point estimate around each player's batting average to quantify the uncertainty.\n\nWe can quantify uncertainty around our point estimates using *Credible intervals*. We can calculate the *credible* interval of our beta distribution, which says that some percentage (i.e. 95%) of our posterior distribution lies within a region around our point estimate by using the quantile of the beta distribution.\n\n*Credible intervals vs. 
Confidence intervals*: Frequentist confidence intervals are derived from a fixed distribution, whereas credible intervals are derived from a posterior distribution that was updated by our priors. (more on this in a couple of cells).

### Bayesian AB Testing

http://varianceexplained.org/r/bayesian_ab_baseball/


TLDR: You can analyze AB test results using Bayesian methods, like hypothesis testing, but you need to pick a prior.

**Example 1**

You've designed 2 different sign-up buttons, and want to test them. So, you create 2 different versions of your website where only the sign-up button is changed, version A and version B. We expect that a sign-up button's performance on our site will have success with a mean around .15 and variance .00015, using data from an existing button. We can model how we expect our buttons to perform (prior) with a beta distribution:






```python
%matplotlib inline
import numpy as np
from scipy import stats
from matplotlib import pyplot as plt
```


```python
alpha_0, beta_0 = _return_params_beta(mean=.15, var=.00015)
alpha_0, beta_0
```




    (127.35, 721.65)




```python

beta_dist = stats.beta(alpha_0, beta_0)
x = np.linspace(0, 1, 1002)[1:-1]
plt.plot(x, beta_dist.pdf(x))
plt.show()
```

It's been 1 month since we started our experiment and we see that for version A of our website, 100 signed up out of 985 visitors, and for version B of our website, 78 signed up out of 600 visitors. Let's update our prior belief using this new data for version A and B. 


```python
alpha_A, beta_A = alpha_0 + 100, beta_0 + 885
alpha_B, beta_B = alpha_0 + 78, beta_0 + 522

beta_distA = stats.beta(alpha_A, beta_A)
beta_distB = stats.beta(alpha_B, beta_B)
x2 = np.linspace(0, 0.5, 1002)[1:-1]

plt.plot(x, beta_dist.pdf(x),
         label=r'$\alpha_0=%.1f,\ \beta_0=%.1f$' % (alpha_0, beta_0))
plt.plot(x, beta_distA.pdf(x),
         label=r'$\alpha_A=%.1f,\ \beta_A=%.1f$' % (alpha_A, beta_A))
plt.plot(x2, beta_distB.pdf(x2),
         label=r'$\alpha_B=%.1f,\ \beta_B=%.1f$' % (alpha_B, beta_B))
plt.legend(loc=0)
plt.show()
```

Based on this, we see that website B clearly looks to be the winner here. It does better than both website A and the prior.


#### Credible Intervals

TLDR: A way to summarize and express uncertainty in our posterior distribution, e.g. 95% of the posterior distribution lies within a particular region.

**Example 1 cont'd**

In the example above, how do we quantify our belief that button B is better? We could find website B's 95% credible interval by using the quantile of the beta distribution:


```python
print('average stats', 78/(522+78))
print('low', stats.beta.ppf(.025, alpha_B, beta_B))
print('high', stats.beta.ppf(.975, alpha_B, beta_B))
```

    average stats 0.13
    low 0.12424278055818447
    high 0.16013069007485325


#### Difference between Credible Intervals and Confidence Intervals

Credible intervals are similar to frequentist confidence intervals, but take the prior into account.

In Frequentist Statistics, there is one true population statistic that we try to estimate with samples from that population. The confidence interval around our sample statistic is variable, and depends on our sample data. The population statistics are fixed.

In Bayesian Statistics, we assume a distribution over our population mean (prior), and we update our belief of that distribution with our data (likelihood). 
So the credible interval is fixed because the posterior distribution is fixed given the data we observed. But the population statistic is variable. Therefore the confidence interval is not always equal to the credible interval.\n\nIn other words, the Frequentist makes assumptions on the distribution of the sample statistic (i.e. the sample mean minus the population mean is t-distributed if we were to sample from a normally distributed population). The Bayesian makes assumptions on the distribution of the population or the prior (i.e. we've seen prior data indicating that the population mean follows this distribution).\n\n\nIf we have very little data, the frequentist confidence interval would be very large. The Bayesian view takes into account a prior, so the credible interval under the Bayesian perspective could be a lot smaller.\n\n\n---\n\n**Example 2**\n\nWhat if we didn't have such a clear winner and our results were actually this:\nfor version A of our website, 185 signed up out of 970 visitors, and for version B of our website, 220 signed up out of the 1070 visitors.\n\n\n\n```python\nalpha_A, beta_A = alpha_0 + 185, beta_0 + 785\nalpha_B, beta_B = alpha_0 + 220, beta_0 + 850\nprint(alpha_0, beta_0)\nprint(alpha_A, beta_A)\nprint(alpha_B, beta_B)\n\n\nbeta_distA = stats.beta(alpha_A, beta_A)\nbeta_distB = stats.beta(alpha_B, beta_B)\nx2 = np.linspace(0, 0.6, 1002)[1:-1]\n\n\nplt.plot(x, beta_dist.pdf(x2),\n label=r'$\\alpha_0=%.1f,\\ \\beta_0=%.1f$' % (alpha_0, beta_0))\nplt.plot(x, beta_distA.pdf(x2),\n label=r'$\\alpha_A=%.1f,\\ \\beta_A=%.1f$' % (alpha_A, beta_A))\nplt.plot(x, beta_distB.pdf(x2),\n label=r'$\\alpha_B=%.1f,\\ \\beta_B=%.1f$' % (alpha_B, beta_B))\nplt.legend(loc=0)\nplt.show()\n```\n\nIs version A or B better in this case? It seems that version B is slightly better, but by how much? We can try many different approaches to arrive to a conclusion: simulation of posterior draws, numerical integration, and closed form solutions.\n\n#### Computing Credible Intervals\n\n##### Simulation of posterior draws\n\nLet's use the posterior distributions to simulate 1 million draws, then compare the results in the example below:\n\n\n\n\n```python\n\nsim_size = 1000000\nA_sim = stats.beta.rvs(alpha_A, beta_A, size=sim_size)\nB_sim = stats.beta.rvs(alpha_B, beta_B, size=sim_size)\n\nnp.sum(B_sim > A_sim) / sim_size\n```\n\n\n\n\n 0.77175\n\n\n\nSo, 77% of the time, B does better, and we can conclude that there's about a 77% probability that version B is better given our posterior distribution.\n\n##### Numerical Integration\n\nEach posterior is an independent distribution, and we can combine them into a joint distribution. 
Then we can use numerical integration to find the area of the joint distribution where version B is greater than A.\n\nNumerical integration is not a good solution when problems have multiple dimensions.\n \n\n\n##### Closed-form solution:\n\nWe can also sometimes calculate the closed form solution, which is derived [here](https://www.evanmiller.org/bayesian-ab-testing.html#binary_ab_derivation) for the beta distribution:\n $$ p_A \\sim \\mbox{Beta}(\\alpha_A, \\beta_A) $$\n $$ p_B \\sim \\mbox{Beta}(\\alpha_B, \\beta_B) $$\n $${\\rm Pr}(p_B > p_A) = \\sum_{i=0}^{\\alpha_B-1}\\frac{B(\\alpha_A+i,\\beta_A+\\beta_B)}{(\\beta_B+i) \nB(1+i, \\beta_B)\nB(\\alpha_A, \\beta_A)\n}$$\nwhere $B$ is the beta function.\n\n\n\n\n\n```python\n# use log beta because beta function can be less numerically stable\n\nimport math\n\ndef log_beta_func(a, b):\n beta = math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a+b))\n return beta\n\ndef exp_error(alpha_A, beta_A, alpha_B, beta_B):\n total_sum = 0\n for i in range(int(alpha_B - 1)):\n lnum = log_beta_func(alpha_A + i, beta_A + beta_B) \n lden = math.log(beta_B + i) + log_beta_func(1 + i, beta_B) + \\\n log_beta_func(alpha_A, beta_A)\n total_sum += math.exp(lnum - lden)\n return total_sum\n\n1 - exp_error(alpha_A, beta_A, alpha_B, beta_B)\n```\n\n##### Normal Approximation\n\nAnother way to calculate this probability is by assuming the two functions are normal and calculating the probability that one distribution is greater than the other. In this case, the probability that B is greater than A is just:\n$P(B A', 1 - posterior)\nprint('\\n')\n\nprint('estimate', mu_diff)\nprint('credible interval for above estimate:')\nprint('lower bound', low_estimate)\nprint('higher bound', high_estimate)\n\n```\n\n posterior probability that B > A 1.0\n \n \n estimate 0.009290504004828865\n credible interval for above estimate:\n lower bound 0.008986008745105292\n higher bound 0.009594999264552437\n\n\n## Multi-Armed Bandits for AB testing\n\n\n\n### Epsilon Greedy\n\n### UCB\n\n### Thompson Sampling\n\n\n\n\n### Bayesian Inference\nhttps://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/\n\n\n# ML\n## Supervised ML\n\n## Linear Models\n\n\n\nhttp://cs231n.github.io/linear-classify/\n\nLinear models fit linear equations to your data.\n\nA scoring function maps your data to some outcome (class label scores or continuous variable), and a loss function measures the difference between your predicted score and ground truth so that we can keep improving the weights in the scoring function to minimize the loss function.\n\n### Definitions\n\n**Scoring Function:**\n\nThe scoring function maps input data to some outcome $y$.\n\n$f(x_i, W, b) = Wx_i + b $, where\n\nData: $x_i \\in R^D$ where $i = 1 ... N$\n\nLabels: $y_i \\in 1 ... K$, or Continuous Variable: $y_i \\in R$\n\n$W$: weights, $k$x$D$\n\n$x_i$: data, $D$x$1$\n\n$b$: bias, $k$x$1$\n\n\n**Bias Trick**\n\nCombine bias as an extra column in W and add an extra 1 as a row in $x_i$ so \n\n$$f(x_i, W) = Wx_i$$\n\n**Loss Function**\n\nAlso called cost function or objective function. 
Loss functions measure the difference between our prediction and the ground truth.


### Linear Regression

Linear regression maps your data to some continuous output using a scoring function $$f(x_i, W) = Wx_i$$ and usually uses least squares to fit the model.

We need to find the coefficients $W$ that minimize the residual sum of squares (least squares method):

$$RSS(W) = \sum_{i=1}^{n}{(y_i - Wx_i)^2}$$

When we differentiate with respect to $W$, we get the normal equations:

$$\textbf{X}^T(\textbf{y} - \textbf{X} W) = 0$$

where $\textbf{X}$ is $n$x$D$ (each row is an input vector), and $\textbf{y}$ is $n$x$1$. Solving for $W$,

$$W = (\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T \textbf{y} $$


#### Why do we use least squares?
The expected squared error decomposes into recognizable quantities:
$$E[(f(\textbf{X}) - y)^2] = E_X[E[(f(\textbf{X}) - y)^2 | \textbf{X}]]$$
$$= E_X[f(\textbf{X})^2 -2 f(\textbf{X})E[y|\textbf{X}] + E[y^2|\textbf{X}]]$$
$$= E_X[(f(\textbf{X}) - E[y|\textbf{X}])^2 + E[y^2|\textbf{X}] - E[y|\textbf{X}]^2]$$
$$= E_X[(f(\textbf{X}) - E[y|\textbf{X}])^2] + E[var[y|\textbf{X}]]$$
$$= E_X[f(\textbf{X}) - E[y|\textbf{X}]]^2 + (E[f(\textbf{X})^2] - E[f(\textbf{X})]^2) + E_X[var[y|\textbf{X}]]$$
$$= E_X[f(\textbf{X}) - E[y|\textbf{X}]]^2 + var[f(\textbf{X})] + E_X[var[y|\textbf{X}]]$$

= squared bias of the model + variance of the model + inherent noise in the data.

So, minimizing least squares treats bias and variance equally in the loss function.

### Logistic Regression

Logistic regression is used for classification. The scoring function adds a softmax to the output of the linear regression:

$$f(x_i, W, b) = softmax(Wx_i + b) $$

where, in the binary case, the softmax reduces to the sigmoid function $\frac{e^x}{1 + e^x} = p(x)$. It squashes the linear regression output into a probability $\in [0, 1]$. The raw output of the classifier $Wx_i + b$ is referred to as the logits or log-odds, since $log(\frac{p}{1 - p}) = Wx_i + b$.

The loss often used in logistic regression is the cross-entropy loss:

$$H(p,q) = - \sum_x p(x) \log q(x)$$

where $p(x)$ is the ground truth class distribution and $q(x)$ is the output from our classifier.


### SVM - Max-Margin Classifier

The Support Vector Machine is another linear model, and uses the SVM (hinge) loss. SVM wants the score of the correct class for each input to be higher than the scores of the incorrect classes by a fixed margin $\Delta$.

Let's call the score for each prediction $s$. That is, the score for the $j$-th class is $s_j = f(x_i, W)_j$. The multi-class SVM loss is:

$$ L_i = \sum_{j \neq y_i}max(0, s_j - s_{y_i} + \Delta) $$



### Bias variance trade off



The bias-variance tradeoff is a problem encountered in supervised learning (look at the squared-error loss decomposition in Linear Regression above). Ideally we would reduce both the variance and bias terms in our model loss. Unfortunately, this is very hard to do, and we often have to compromise between the bias and variance losses of our model.

Models with low bias tend to be more complex and overfit the training data by capturing the noise in the training data (they have higher variance). Models with low variance tend to be simpler and generalize better, but underfit the training data (they have higher bias).


## Multi-Class / Multi-Label




### Multiclass classification

The output of a multi-class classification is a single class instance per data-point. 
There can be more than 2 classes, but each data-point is only assigned one class label. There are some methods that are inherently multi-class, such a multi-class logistic regression and SVM. You can also use several smaller binary classifiers as follows:\n\n#### One vs. All classification (OVA) or One vs. Rest (OVR):\n\nPick a technique for building binary classifiers (i.e. binary logistic regression), and build $N$ binary classifiers. For the $i$-th classifier, let positive examples be the points in class $i$, negative examples are points not in class $i$. \n\nIf $f_i$ is the $i$-th classifier, classify with:\n$$f(x) = \\underset{i}{\\mathrm{argmax}} f_i(x)$$\n\n#### All vs. All classification (AVA) or One vs. One (OVO):\n\nBuild N(N-1) binary classifiers, each classifier distinguishes between a different pair of classes, i and j.\n\nLet $f_{i,j}$ be the classifier where class $i$ are the positive examples, and class $j$ are the negative. $f_{j,i} = -f_{i,j}$. \n\n$$ f(x) = \\underset{i}{\\mathrm{argmax}}(\\sum_j f_{ij}(x))$$\n\n#### OVO vs. AVA:\nAVA requires O(N^2) classifiers, OVA requres O(N). But, each classifier in AVA has less data to classify.\n\n## Over-fitting and Regularization\n\n## Hyper-Parameter Tuning\n\n## Metrics\n\n### Accuracy\n\n### Cross-Entropy Loss\n\n### F1 Score - Precision/Recall\n\n\n## Non-Linear Models\n\n### Neural Nets\n--DONE (i have to type up my paper notes from this)--\n\n### Nearest Neighbors\n\nRarely used in practice. Take the difference of two data-points. The closer the distance metric, the more similar they are. \n\nFor example have two images: image 1, $I_1$, and image 2, $I_2$. We can take the $L_1$ distance between the 2 images by taking the sum of difference between pixels, $p$ of each image. i.e. \n\n$$d(I_1, I_2) = \\sum_p{|I_1^p - I_2^p|}$$\n\nTo train a nearest neighbors classifier, remember all X and y. Then when you predict for some image, return the corresponding label to the nearest image using the distance metric above.\n\n\nAdvantage: \n - No time to train, just storing past results.\n\nDisadvantage:\n - Needs space to store all the training data\n - High computational cost during test time.\n\n#### k-Nearest Neighbor\n\nInstead of finding the nearest image, find the top-k nearest images. k is a hyperparameter that you tune for. The probability of the label is the empirical distribution of the labels of the k neighbors.\n\n\n\n\n\n\n# Unsupervised ML\n\n## Unsupervised / Clustering / Embeddings\n\n### K-Means\n### EM\n### Gaussian Mixture Model\n### Dimensionality Reduction & Plotting\n### Auto-encoding\n\n## HMM and Kalman Filters\n\n## Sequence Predictions / RNNs\n\n## Meta-learning\n\n\n\n# Reinforcement Learning\n\n\n\n\n# Applied ML\n\n## Feature Engineering\n\n### How to deal with categorical variables\n\n\nDummy Coding:\n\n- One hot encode (see curse of dimensionality below)\n\n\n\nCombine Levels:\n\n- If a feature variable has too many classes, you can combine them into groups, e.g. if you have too many zip codes, combine multiple zipcodes into different districts.\n- Combine levels based on the frequency of the variable, e.g. if some zipcodes are less frequent, combine them to one.\n- You can combine categories by their commonalities (i.e. location or distance)\n\n\n\nConvert to Numbers:\n\n- Label encoders (where number for the class is between 0 and n (number of classes) - 1)\n- Numeric bins, e.g. Age (0-17, 17-34, etc.)\n - Label encode them, e.g. 
each bin will be a different numeric bin\n - Create a new feature using mean or mode of each bin\n - 2 new features, one lower bound, another upper bound\n \nIf you convert categoricals to continuous variables, the meaning associated with increasing the continuous variable should translate to the same meaning with the categorical variable.\n\n\n\n\n### Curse of Dimensionality\n\n\nWhen you add categorical or continuous variables to your dataset, you will need exponentially more rows to achieve the same statistical significance.\n\nAs an example, let's say you have 2 categorical binary variables. There are 2^2 or 4 combinations. So if you had 100 evenly distributed data points, you would have an average of 25 data points per class combination. Now let's say you had 3 categorical variables. In order to have the same 25 data points per class combination, you would need 2^3 * 25 = 200 datapoints in total. We added one categorical variables to our dataset, and now we need double the data to achieve the same significance.\n\nIn other words, as you increase the number of variables, the data you need to achieve significance increases exponentially.\n\n\n\n\n\n\n## Text Representations\n\nThere are many ways to represent text as numbers to be used in statistical models.\n\n\n### Bag of Words\n\nBuild a fixed length vocabulary $V$ from your corpus. Assign a vector of length $V$ to new text by assigning each entry of the vector with the count of the word in the text.\n\n### TF-IDF (term frequency - inverse document frequency)\n\nBuild a fixed length vocabulary $V$ from your corpus. We assign a score to each word that represents how 'important' this word is in your corpus.\n\nGiven some new text, we weight each word by its frequency in that text and with the inverse document frequency in a previously seen corpus.\n\ntf = $\\frac{t_d}{\\sum_{d' \\in{N}}{t_{d'}}}$ \\\\\n\n- $t_d$ is the number of times term $t$ occurs in the document $d$. \n- The denominator is the total number of terms in the document.\n- N is the number of documents. \\\\\n\nidf = $ln{ \\frac{N}{\\text{Number of documents with term t in it}}} $ \\\\\n\nEach term gets a $tfidf = tf * idf$ score, and the vector representation of the text contains the tf-idf values for each word in the vocabulary.\n\n### Word vectors\n\nThe problem with the above vector representations of text is that if you take two words, and compute the cosine similarity, we get 0. For example, if our vocabulary contained two words, \"cat\" and \"dog\", a document with the word cat could be represented as [0, 1], and a document with the word dog could be represented as [1, 0]. If you take the dot-product, we obtain 0, although we know cats and dogs are both pets, so the similarity of the two documents should probably be bigger than 0. 
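As a quick sanity check, here is a minimal sketch of that claim using the hypothetical two-word 'dog'/'cat' vocabulary above (the array layout is just for illustration):

```python
import numpy as np

# Hypothetical 2-word vocabulary: index 0 = 'dog', index 1 = 'cat'
dog_doc = np.array([1, 0])  # bag-of-words counts for a document containing only 'dog'
cat_doc = np.array([0, 1])  # bag-of-words counts for a document containing only 'cat'

dot = np.dot(dog_doc, cat_doc)
cosine = dot / (np.linalg.norm(dog_doc) * np.linalg.norm(cat_doc))
print('dot product:', dot)            # 0
print('cosine similarity:', cosine)   # 0.0, even though cats and dogs are both pets
```
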
We will now discuss methods that fix this issue.\n\n\n\n### Word2vec \n1 hidden layer neural network that takes in a word and its context words within a corpus, and learns a vector representation of that word.\n\nPaper with very good explanations and derivations: https://arxiv.org/pdf/1411.2738.pdf\n\n2 types:\n\n- CBOW: given a window of context words surrounding a word, predict the word itself\n- Skipgram: given a word, predict the surrounding context words\n\n*Model definition:*\n\n- Vocabulary size, $V$\n- Hidden layer $\\textbf{h}$, size, $N$\n- Weights between input layer and hidden layer, $\\textbf{W}_{VxN}$\n - Each row of $W$ is the $N$ dimensional vector representation of the $k$th word in the input layer.\n \n- Weights between hidden layer and output layer $\\textbf{W}'_{NxV}$\n- Context window size, $C$\n\n\n\n\n\n**CBOW**:\n\nArchitecture looks like this:\n\n*Input layer:* Each input word from your context is a one hot encoded vector.\n\n*Hidden layer:* Input to the hidden layer is the average of your context word vectors.\n\n$$\\textbf{h} = \\frac{1}{C}\\textbf{W}^T(\\textbf{x}_1 + \\textbf{x}_2 + ... + \\textbf{x}_C) $$\n\nThis is just\n$$\\textbf{h} = \\frac{1}{C}(\\textbf{v}_{w_1} + \\textbf{v}_{w_2} + ... + \\textbf{v}_{w_C})^T $$\n\nwhere $ w_1,... w_c$ are the words in the context and $\\textbf{v}_{w_c}$ is the vector representation of input word $w_c$.\n\n$\\textbf{W}^Tx_C$ copies the $k$th row of $\\textbf{W}$ to $\\textbf{v}_{w_c}$.\n\n*Output layer*:\n\nWe need to compute a score, $u_j$ for each word in the vocabulary.\n\n$$u_j = \\textbf{v}'^T_{w_j}\\textbf{h}$$\n\nwhere $\\textbf{v}'_{w_j}$ is the $j$th column of matrix $\\textbf{W}'$.\n\nWe use softmax to obtain the posterior distribution of words (which is a multinomial distribution).\n\n$$p(w_j|w_1, ... w_C) = y_j = \\frac{exp(u_j)}{\\sum_{j'=1}^Vexp(u_{j'})}$$\n\nThe training objective is to maximize the above equation for the actual output word $w_O$, where $j*$ is the index of $w_O$.\n\n$$\\max p(w_j|w_1, ... w_C) = \\max y_{j*}$$\n\nThe loss equation we want to minimize is:\n\n$$E = -logp(w_O|w_1, ..., w_C) $$\n$$ = -u_{j*} + log\\sum_{j'=1}^{V}{exp( \\textbf{v}'^T_{w_j} \\cdot h)}$$\n$$ = - \\textbf{v}'^T_{w_O} \\cdot \\textbf{h} + log \\sum_{j'=1}^{V} {\\textbf{v}'^T_{w_j} \\cdot \\textbf{h}}$$\n\nUpdate equation for hidden->output weight matrix:\n\n$$\\frac{\\partial{E}}{\\partial{w'_{ij}}} = \\frac{\\partial{E}}{\\partial{u_{j}}} \\frac{\\partial{u_j}}{\\partial{w'_{ij}}} = e_j \\cdot h_i$$\nwhere\n$$\\frac{\\partial{E}}{\\partial{u_{j}}} = e_j = y_j - t_j$$\n$t_j = \\mathbb{1}(j = j^*)$.\n\nUsing SGD, the update equation looks like:\n\n$$w'^{(new)}_{ij} = w'^{(old)}_{ij} - \\eta \\cdot e_j \\cdot h_i $$\n\nor\n\n$$v'^{(new)}_{w_j} = v'^{(old)}_{w_j} - \\eta \\cdot e_j \\cdot \\textbf{h} $$\nfor $j = 1, 2, ... 
V$\nwhere $\\eta$ is the learning rate.\n\nUpdate equation for input->hidden weight matrix:\n(derivation similar, look at [the paper mentioned earlier](https://arxiv.org/pdf/1411.2738.pdf))\n\nWe need to apply the following equation to every input context word vector:\n\n$$v^{(new)}_{w_{I, c}} = v^{(old)}_{w_{I, c}} - \\frac{1}{C} \\cdot \\eta \\cdot EH^T $$\n\nwhere $v_{w_{I, c}}$ is the input vector of the $c$ word in the input context, $\\eta$ is the learning rate, $EH = \\frac{\\partial E}{\\partial{h_i}}$\n\nSince $EH$ is the sum of the output vectors of all words in the vocabulary weighted by their prediction error $e_j$, we can intuitively understand this update equation as adding a part of every vector to the input of the context word.\n\n\n\n\nSilly example that isn't real but demonstrates the math, let's check that 2 words get more similar:\n\n\n```python\nimport numpy as np\n\ndocument = \"Some polar bears in the Arctic are shedding pounds during the time they should be beefing up, a new study shows. It\u2019s the climate change diet and scientists say it\u2019s not good. They blame global warming for the dwindling ice cover on the Arctic Ocean that bears need for hunting seals each spring.\"\nsentence1 = document.split('.')[0]\n\ndoc_dict = {}\nidx_num = 0\nfor sentence in document.split('.'):\n sentence_tknzd = sentence.split(' ')\n for word in sentence_tknzd:\n word.strip(',')\n if word not in doc_dict:\n doc_dict[word] = idx_num\n idx_num += 1\n\nsentence1_tknzd = sentence1.split(' ')\n```\n\n\n```python\n# Initialize weights matrix W, and W'\n# let's give only 20 features for now:\nvocab_size = len(doc_dict) + 1\nnum_features = 20\n\nW = np.random.standard_normal(size=(vocab_size, num_features))\n\nW_prime = np.random.standard_normal(size=(num_features, vocab_size))\n\n# Set some hyperparameters of model:\nlearning_rate = 0.1\nwin_size = 3\n```\n\n\n```python\n# Check distance between polar and bears\npolar_vec = W[doc_dict['polar']]\nbears_vec = W[doc_dict['bears']]\ncosine_similarity(polar_vec.reshape(1, num_features), bears_vec.reshape(1, num_features))\n```\n\n\n\n\n array([[0.3498866]])\n\n\n\n\n```python\n# Go through 10 epochs:\nfor x in range(10):\n for target_idx in range(0, len(sentence1_tknzd)):\n # Get all indices\n start_idx = max(target_idx - win_size, 0)\n end_idx = min(len(sentence1_tknzd) - 1, target_idx + win_size)\n target_word = sentence1_tknzd[target_idx]\n context = [sentence1_tknzd[idx] for idx in range(start_idx, target_idx)]\n context += [sentence1_tknzd[idx] for idx in range(target_idx + 1, end_idx + 1)]\n\n # Input vectors:\n inp = np.array([doc_dict[c] for c in context])\n input_layer = np.zeros((len(inp), vocab_size))\n input_layer[np.arange(len(inp)), inp] = 1\n\n # you can just use np.mean function?\n # Average input word vectors (context) to get hidden layer.\n h = (1 / len(context)) * np.sum([np.dot(W.T, x) for x in input_layer], axis=0)\n\n scores = np.array([np.dot(W_prime[:, i].T, h) for i in range(vocab_size)])\n\n # Apply softmax\n output_layer = np.exp(scores) / np.sum(np.exp(scores), axis=0)\n\n # compute error e\n t_j = np.zeros(vocab_size)\n t_j[target_idx] = 1\n e = output_layer - t_j\n\n # Update W'\n W_prime -= np.array([learning_rate * e[j] * h for j in range(vocab_size)]).T\n\n # Update W\n # Only updating input context vectors\n # EH [1, 20]\n EH = np.array([np.sum(W_prime[i, :] * e) for i in range(num_features)])\n EH_weighted = EH * learning_rate * (1 / vocab_size)\n W[a] -= EH_weighted\n\n\n```\n\n\n```python\npolar_vec2 = 
W[doc_dict['polar']]\nbears_vec2 = W[doc_dict['bears']]\ncosine_similarity(polar_vec2.reshape(1, num_features), bears_vec2.reshape(1, num_features))\n```\n\n\n\n\n array([[0.33496195]])\n\n\n\nIt makes sense that the vectors for 'polar' and 'bears' are getting closer!\n\n**Skip-Gram Model**\n\nNow, our target word is at the input layer, and context at the output layer of our network.\n\nInput layer: \n- One word, $w_I$.\n\nHidden layer:\n- Input to the hidden layer is just the vector representation of the input word, $\\textbf{v}_{w_I}$:\n\n$$\\textbf{h} = \\textbf{W}^T_{(k, \\cdot)} = \\textbf{v}^T_{w_I}$$\n\nOutput layer:\n- Instead of 1 multinomial distribution, we have C multinomial distributions. Each output uses the same weight matrix $W'$.\n\n\n\n\n**Computational Efficiency of Word2vec**\n\n\nThese models have 2 vector representations (input vector $\\textbf{v}_w$ and output vector $\\textbf{v'}_w$ ) for each word. \n\nTo update $\\textbf{v'}_w$, we need to iterate through every word $w_j$ in the vocabulary, check the output probability $y_j$ and compare it with the expected output (1 or 0). This is very expensive!\n\n\n2 solutions: (1) Hierarchical Softmax, (2) Sampling\n\nIdea behind (1) Hierarchical Softmax:\nThis is an efficient way of computing softmax where we get rid of the output vector representation for words. Instead, the vocabulary is represented as a binary tree, where the words are leaves, and the probability of each word is derived from the unique path between the root and the leaf node word.\n(can read paper for thorough explanation)\n\nIdea behind (2) Negative Sampling:\nOur problem is that we have too many output vectors, so let's keep the output word and just sample a few other words as negative samples. We determine the distribution empirically, and the word2vec paper defines one distribution. They also use a simplified training objective.\n\n\n\n\n\n#### Fasttext\n\nhttps://fasttext.cc\n\nFasttext for word embeddings is just like word2vec, except it looks at character ngrams for training as well.\n\nSo, for the word 'king' if you specified the smallest ngram to be 3 and largest to be 4, it would look at 'kin', 'ing' as well. To use 'king' as an input, we represent it using the sum of the vectors for {'kin', 'ing', 'king'} .\n\nThis is good because:\n- It could generate better word embeddings for rare words, i.e. a rare word may not have many context words, but some of its character n-grams might.\n- Could handle out of vocabulary words well, a word could have a vector from its character ngrams \n\n\n#### Fasttext classification\n\nFirst we average our word representations into a text representation (into the hidden layer) which is fed into a linear classifier, similar to CBOW architecture of word2vec. We still use a context-window, (this size is one of the hyperparameters). Use softmax to compute probability distribution over classes. Then minimize negative log likelihood over the classes. Train with SGD and decaying learning rate.\n\n\n\n\n\n\n### Named Entity Recognition\n### Part-of-Speech Tagging\n### Machine Translation\n\nIBM models (statistical machine translation), deep-learning beat the shit out of it, sequence2sequence with Attention\n\n\n\n## Image Representations\n\n\nA computer sees an image as a matrix of numbers, m pixels by n pixels. If it is an RGB image, the matrix is 3 dimensional, each dimension representing a different color channel. 
Each number is an integer from 0 (black) to 255 (white).\n\nIt's common to normalize pixels: get a mean image by taking the mean of all pixels in your training data, then subtract the mean image from each image. This makes your pixels lie approximately between values [-127, 127]. Could also scale input features to lie between [-1, 1].\n\n### Issues with using raw pixels\n\nIf we use raw pixels as representations though, we have a similar problem that we did with bag-of-words representations of text. If we take the cosine similarity between two identical images, but with one image slightly translated to the right, the cosine similarity will be very different than two identical images without translation. Our representation of images should be translation invariant, and ideally rotation invariant, and should also capture a \"similarity\" measure that makes sense to humans visually. \n\n\n\n### Convolutional Neural Networks\n\nImage representations are often learned with deep convolution neural nets. Typical datasets for learning image representations are [ImageNet](http://www.image-net.org), [Open Images Dataset](https://github.com/openimages/dataset), CIFAR, MNIST, [COCO](http://cocodataset.org/#home). Some network architectures are NASnet, ResNet, Inception models. Pre-trained architectures can be found [here](https://github.com/tensorflow/models/tree/master/research/slim/nets/nasnet) or [here](https://github.com/tensorflow/models/tree/master/research/slim). New tasks can be learned with transfer learning, fine-tuning the last layers of these architectures.\n\n\n### Distance Measures\n\n\n\n\n\n## Approximate Nearest Neighbors\n\nIf you have a large search index, approximate methods for nearest neighbor search will be more efficient. Some packages are [Annoy](https://github.com/spotify/annoy), [faiss](https://github.com/facebookresearch/faiss), etc.\n\nTODO: describe one of these methods.\n\n\n## Recommendation Algorithms\n\n\n\n### Collaborative Filtering\n\n\n### Content Based Filtering\n\n\n\n\n\n\n\n\n\n### Cold-start\n\n\n### Examples\n\n\n## Putting stuff into production\n\nTo put a model into production, you should:\n\n- Train a model and Cross-Validate your performance to ensure generalizability.\n- Serialize/Deserialize the model\n- Package your environment (Docker)\n- Make a standard API for other to interface with your model\n- Store Application logs and Service logs\n- Create alerting based on latency, error counts, etc.\n- Monitor how the model performs, AB-test, etc.\n\n\n#### No Free Lunch Theorem\n\n\"if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems\" - https://en.wikipedia.org/wiki/No_free_lunch_theorem\n\nAny elevated performance of an algorithm on one class of problems is offset by performance on another class of problems, if we look at set of all optimization problems we could encounter. For example, cross-validation won't give us a more generalizable solution compared to random search on all class of problems. Luckily, the set of problems we encounter seems to be a subset of all problems for which we can make prior assumptions that do tend to generalize.\n\n\n\n# Systems\n\n## Data\n\n## Parallel Computing in Python\n\n### MPI (Message Passing Interface)\n\nhttp://materials.jeremybejarano.com/MPIwithPython\n\nYou write the code for all processes in one program, and then you run the same program on all CPUs. 
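For example, a minimal sketch of this single-program style (only a sketch, assuming the mpi4py package is installed; the script name used below is made up) could look like:\n\n\n```python\nfrom mpi4py import MPI\n\ncomm = MPI.COMM_WORLD    # communicator holding all processes\nrank = comm.Get_rank()   # id of this process\nsize = comm.Get_size()   # total number of processes\n\n# every process computes a partial sum of the integers 0..99\npartial = sum(range(rank, 100, size))\n\n# collect the partial sums on process 0\ntotal = comm.reduce(partial, op=MPI.SUM, root=0)\nif rank == 0:\n    print('total =', total)\n```\n\nLaunching it with, say, `mpiexec -n 4 python sum_mpi.py` starts four copies of the same script, and each copy only knows its own rank. 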
They communicate with each other via the same program using MPI.\n\n**Load Balancing**: this is when you are running multiple processes, and one process has more work than the others. The program is as slow as the slowest process. In order to make the program more efficient, we need to balance the workload across all processes. This is called Load Balancing.\n\nRelational Databases (SQL)\n\nDistributed Data Stores (Hadoop)\n\nDistributed Computation (Spark)\n\n\n", "meta": {"hexsha": "823a2642493b8148fe0a86b62ebb7b8aee5e5cef", "size": 372964, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "ML_Notes.ipynb", "max_stars_repo_name": "innainu/ML-and-DS-Handbook", "max_stars_repo_head_hexsha": "c9f29c182481ec67451989d320e58794756c6e9c", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 5, "max_stars_repo_stars_event_min_datetime": "2018-02-22T23:28:23.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-22T15:03:48.000Z", "max_issues_repo_path": "ML_Notes.ipynb", "max_issues_repo_name": "innainu/ML-and-DS-Handbook", "max_issues_repo_head_hexsha": "c9f29c182481ec67451989d320e58794756c6e9c", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "ML_Notes.ipynb", "max_forks_repo_name": "innainu/ML-and-DS-Handbook", "max_forks_repo_head_hexsha": "c9f29c182481ec67451989d320e58794756c6e9c", "max_forks_repo_licenses": ["MIT"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 61.4337012024, "max_line_length": 67882, "alphanum_fraction": 0.745701998, "converted": true, "num_tokens": 30659, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4532618627863438, "lm_q2_score": 0.1847675061589216, "lm_q1q2_score": 0.08374806402398005}} {"text": "# Logging with Tensorboard\n\n**DIVE into Deep Learning**\n___\n\n\n```python\nfrom util import *\n```\n\n## Logging the results\n\nTo call additional functions during training, we can add the functions to the `callbacks` parameter of the model `fit` method. For instance:\n\n\n```python\nimport tqdm.keras\n\nif input('Train? [Y/n]').lower() != 'n':\n model.fit(ds_b[\"train\"],\n epochs=6,\n validation_data=ds_b[\"test\"],\n verbose=0,\n callbacks=[tqdm.keras.TqdmCallback(verbose=2)])\n```\n\nThe above code uses [`tqdm.keras.TqdmCallback()`](https://tqdm.github.io/docs/keras/) to return a callback function that displays a graphical progress bar:\n- Setting `verbose=0` for the method `fit` disables the default text-based progress bar.\n- Setting `verbose=2` for the class `TqdmCallback` show and keep the progress bars for training each batch. Try changing `verbose` to other values to see different effects.\n\nAn important use of callback functions is to save the models and results during training for further analysis. 
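A callback is just an object whose hook methods Keras calls at fixed points of the training loop. As a minimal sketch (assuming `tf` is TensorFlow as imported earlier; the class name `LossHistory` is our own), a callback that records the loss after every epoch could look like:\n\n\n```python\nclass LossHistory(tf.keras.callbacks.Callback):\n    '''Store the loss reported at the end of each epoch.'''\n\n    def on_train_begin(self, logs=None):\n        self.losses = []\n\n    def on_epoch_end(self, epoch, logs=None):\n        # logs is a dict such as {'loss': ..., 'val_loss': ...}\n        self.losses.append(logs.get('loss'))\n\n\n# history = LossHistory()\n# model.fit(ds_b['train'], epochs=6, callbacks=[history], verbose=0)\n# afterwards history.losses holds one value per epoch\n```\n\nFor saving checkpoints and TensorBoard logs we do not need to write such classes ourselves, since `tf.keras.callbacks.ModelCheckpoint` and `tf.keras.callbacks.TensorBoard` already exist. 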
We define the following function `train_model` for this purpose:\n- Take a look at the docstring to learn its basic usage, and then\n- learn the implementations in the source code.\n\n\n```python\nimport os, datetime, pytz\n\n\ndef train_model(model,\n fit_params={},\n log_root='.',\n save_log_params=None,\n save_model_params=None,\n debug_params=None):\n '''Train and test the model, and return the log directory path name.\n \n Parameters:\n ----------\n log_root (str): the root directory for creating log directory\n \n fit_params (dict): dictionary of parameters to pass to model.fit.\n save_log_params (dict): dictionary of parameters to pass to \n tf.keras.callbacks.TensorBoard to save the results for TensorBoard.\n The default value None means no logging of the results.\n save_model_params (dict): dictionary of parameters to pass to\n tf.keras.callbacks.ModelCheckpoint to save the model to checkpoint \n files. \n The default value None means no saving of the models.\n debug_params (dict): dictionary of parameters to pass to \n tf.debugging.experimental.enable_dump_debug_info for debugger \n v2 in tensorboard.\n The default value None means no logging of the debug information.\n \n Returns:\n -------\n str: log directory path that points to a subfolder of log_root named \n using the current time.\n '''\n # use a subfolder named by the current time to distinguish repeated runs\n log_dir = os.path.join(\n log_root,\n datetime.datetime.now(\n tz=pytz.timezone('Asia/Hong_Kong')).strftime(\"%Y%m%d-%H%M%S\"))\n \n callbacks = fit_params.pop('callbacks', []).copy()\n \n if save_log_params is not None:\n # add callback to save the training log for further analysis by tensorboard\n callbacks.append(\n tf.keras.callbacks.TensorBoard(log_dir,\n **save_log_params))\n\n if save_model_params is not None:\n # save the model as checkpoint files after each training epoch\n callbacks.append(\n tf.keras.callbacks.ModelCheckpoint(os.path.join(log_dir, '{epoch}.ckpt'),\n **save_model_params))\n\n if debug_params is not None:\n # save information for debugger v2 in tensorboard\n tf.debugging.experimental.enable_dump_debug_info(\n log_dir, **debug_params)\n\n # training + testing (validation)\n model.fit(ds_b['train'],\n validation_data=ds_b['test'],\n callbacks=callbacks,\n **fit_params)\n\n return log_dir\n```\n\nFor example:\n\n\n```python\nfit_params = {'epochs': 6, 'callbacks': [tqdm.keras.TqdmCallback()], 'verbose': 0}\nlog_root = os.path.join(user_home, \"log\") # log folder\nsave_log_params = {'update_freq': 100, 'histogram_freq': 1}\nsave_model_params = {'save_weights_only': True, 'verbose': 1}\ndebug_params = {'tensor_debug_mode': \"FULL_HEALTH\", 'circular_buffer_size': -1}\n\nif input('Train? [Y/n]').lower() != 'n':\n model = compile_model(create_simple_model())\n log_dir = train_model(model,\n fit_params = fit_params,\n log_root=log_root,\n save_log_params=save_log_params,\n save_model_params=save_model_params,\n debug_params=debug_params)\n```\n\nBy providing the `save_model_params` to the callback [`tf.keras.callbacks.ModelCheckpoint`](https://www.tensorflow.org/tutorials/keras/save_and_load#save_checkpoints_during_training), the model is saved at the end of each epoch to `log_dir`.\n\n\n```python\n!ls {log_dir}\n```\n\nSaving the model is useful because it often takes a long time to train a neural network. To reload the model from the latest checkpoint and continue to train it:\n\n\n```python\nif input('Continue to train? 
[Y/n]').lower() != 'n':\n # load the weights of the previously trained model\n restored_model = compile_model(create_simple_model())\n restored_model.load_weights(tf.train.latest_checkpoint(log_dir)) \n # continue to train\n with tf.device('CPU'): # train with CPU instead\n train_model(restored_model, \n log_root=log_root, \n save_log_params=save_log_params)\n```\n\nBy providing [`tf.keras.callbacks.TensorBoard`](https://www.tensorflow.org/tensorboard/get_started#using_tensorboard_with_keras_modelfit) as a callback function to the `fit` method earlier, the training logs can be analyzed using TensorBoard.\n\n\n```python\nif input('Execute? [Y/n]').lower() != 'n':\n %load_ext tensorboard\n %tensorboard --logdir {log_dir}\n```\n\nThe `SCALARS` tab shows the curves of training and validation losses/accuracies after different batches/epoches. The curves often have jitters as the gradient descent is stochastic (random). To see the typical performance, a smoothing factor $\\theta\\in [0,1]$ can be applied on the left panel. The smoothed curve $\\bar{l}(t)$ of the original curve $l(t)$ is defined as\n\n$$\n\\begin{align}\n\\bar{l}(t) = \\theta \\bar{l}(t-1) + (1-\\theta) l(t)\n\\end{align}\n$$\n\nwhich is called the moving average. Try changing the smoothing factor on the left panel to see the effect.\n\n**Exercise** If the smoothing factor $\\theta$ is too large, would it cause bias when using empirical loss or performance to estimate the actual loss or performance? If so, is estimate overly optimistic or pessimistic?\n\nYOUR ANSWER HERE\n\nWe can also visualize the input images in TensorBoard:\n- Run the following cell to write the images to the log directory.\n- Click the `refresh` button on the top of the previous TensorBoard panel.\n- Click the `IMAGE` tab to show the images.\n\n\n```python\nif input('Execute? [Y/n]').lower() != 'n':\n file_writer = tf.summary.create_file_writer(log_dir)\n\n with file_writer.as_default():\n # Don't forget to reshape.\n images = np.reshape([image for (image, label) in ds[\"train\"].take(25)],\n (-1, 28, 28, 1))\n tf.summary.image(\"25 training data examples\",\n images,\n max_outputs=25,\n step=0)\n```\n\nIn addition to presenting the results, TensorBoard is useful for debugging deep learning. In particular, learn\n- to check the model graph under the [`GRAPHS`](https://www.tensorflow.org/tensorboard/graphs) tab, \n- to debug using the [`DEBUGGER v2` tab](https://www.tensorflow.org/tensorboard/debugger_v2), and\n- to [publish your results](https://www.tensorflow.org/tensorboard/get_started#tensorboarddev_host_and_share_your_ml_experiment_results).\n\nTensorBoard can also show simultaneously the logs of different runs stored in different subfolders of the log directory:\n\n\n```python\nif input('Execute? [Y/n]').lower() != 'n':\n %load_ext tensorboard\n %tensorboard --logdir {log_root}\n```\n\nYou can select different runs on the left panel to compare their performance.\n\nNote that loading the log to TensorBoard may consume a lot of memory. You can list the TensorBoard notebook instances and kill those you do not need anymore by running `!kill {pid}`.\n\n\n```python\nimport tensorboard as tb\ntb.notebook.list() # list all the running TensorBoard notebooks.\n```\n\n\n```python\nwhile (pid := input('pid to kill? (press enter to exit)')):\n !kill {pid}\n```\n\n## Enhancements\n\n**Exercise** Train the following network with [dropout](https://en.wikipedia.org/wiki/Dilution_(neural_networks)#Dropout). Try to tune the network for the best accuracy. 
Put your training code inside the body of the conditional `if input...`.\n\n\n```python\ndef create_dropout_model():\n model = tf.keras.models.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28, 1)),\n tf.keras.layers.Dense(128, activation=tf.keras.activations.relu),\n tf.keras.layers.Dropout(0.2), # dropout\n tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)\n ], name=\"Dropout\")\n return model\n\n\nmodel = compile_model(create_dropout_model())\nprint(model.summary())\n\nif input('Train? [Y/n]').lower() != 'n':\n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\n**Exercise** Explore the [convolutional neural network (CNN)](https://en.wikipedia.org/wiki/Convolutional_neural_network). Try to tune the network for the best accuracy.\n\n\n```python\ndef create_cnn_model():\n model = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(32,\n 3,\n activation='relu',\n input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n ], name=\"CNN\")\n return model\n\n\nmodel = compile_model(create_cnn_model())\nprint(model.summary())\n\nif input('Train? [Y/n]').lower() != 'n':\n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\n**Exercise** Launch TensorBoard to show the best performances of each of the two neural network architectures. Note that to clean up the log of the inferior results, you may need to kill the TensorBoard instance. It is easier to use the vscode interface or the terminal in the lab interface to remove folders.\n\n\n```python\nif input('Execute? [Y/n]').lower() != 'n':\n # YOUR CODE HERE\n raise NotImplementedError()\n```\n\n## Remove Logs\n\nIf you run out of storage, you should remove some of the log files:\n\n\n```python\nif input('Remove all logs? [Y/n]').lower() != 'n':\n !rm -rf {log_root}\n```\n", "meta": {"hexsha": "a1c4b2ef4dcd3eadb06ca09bca734e04ba7bbf73", "size": 22586, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "part2/logging.ipynb", "max_stars_repo_name": "ccha23/divedeep", "max_stars_repo_head_hexsha": "dd9c5e0a589613fa37c467b7863e58c5d22d8d3f", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2021-06-29T00:46:39.000Z", "max_stars_repo_stars_event_max_datetime": "2021-06-29T00:46:39.000Z", "max_issues_repo_path": "part2/logging.ipynb", "max_issues_repo_name": "ccha23/divedeep", "max_issues_repo_head_hexsha": "dd9c5e0a589613fa37c467b7863e58c5d22d8d3f", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "part2/logging.ipynb", "max_forks_repo_name": "ccha23/divedeep", "max_forks_repo_head_hexsha": "dd9c5e0a589613fa37c467b7863e58c5d22d8d3f", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2021-07-03T02:44:06.000Z", "max_forks_repo_forks_event_max_datetime": "2021-07-03T02:44:06.000Z", "avg_line_length": 28.0571428571, "max_line_length": 380, "alphanum_fraction": 0.5478615071, "converted": true, "num_tokens": 2384, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. 
NO", "lm_q1_score": 0.476579651063676, "lm_q2_score": 0.17553806499717958, "lm_q1q2_score": 0.08365786976474872}} {"text": "\n# Infinite matter, from the electron gas to nuclear matter, background material\n\n \n**[Morten Hjorth-Jensen](http://mhjgit.github.io/info/doc/web/), National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA & Department of Physics, University of Oslo, Oslo, Norway**\n\nDate: **Jul 10, 2018**\n\n## Introduction to studies of infinite matter\n\n\nStudies of infinite nuclear matter play an important role in nuclear physics. The aim of this part of the lectures is to provide the necessary ingredients for perfoming studies of neutron star matter (or matter in $\\beta$-equilibrium) and symmetric nuclear matter. We start however with the electron gas in two and three dimensions for both historical and pedagogical reasons. Since there are several benchmark calculations for the electron gas, this small detour will allow us to establish the necessary formalism. Thereafter we will study infinite nuclear matter \n* at the Hartree-Fock with realistic nuclear forces and\n\n* using many-body methods like coupled-cluster theory or in-medium SRG\n\n## The infinite electron gas\n\nThe electron gas is perhaps the only realistic model of a \nsystem of many interacting particles that allows for an analytical solution\nof the Hartree-Fock equations. Furthermore, to first order in the interaction, one can also\nobtain an analytical expression for the total energy and several other properties of a many-particle systems. \nThe model gives a very good approximation to the properties of valence electrons in metals.\nThe assumptions are\n\n * System of electrons that is not influenced by external forces except by an attraction provided by a uniform background of ions. These ions give rise to a uniform background charge. The ions are stationary.\n\n * The system as a whole is neutral.\n\n * We assume we have $N_e$ electrons in a cubic box of length $L$ and volume $\\Omega=L^3$. This volume contains also a uniform distribution of positive charge with density $N_ee/\\Omega$. \n\nThe homogeneus electron gas is a system of electrons that is not\ninfluenced by external forces except by an attraction provided by a\nuniform background of ions. These ions give rise to a uniform\nbackground charge. The ions are stationary and the system as a whole\nis neutral.\nIrrespective of this simplicity, this system, in both two and\nthree-dimensions, has eluded a proper description of correlations in\nterms of various first principle methods, except perhaps for quantum\nMonte Carlo methods. In particular, the diffusion Monte Carlo\ncalculations of [Ceperley](http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.45.566) \nand [Ceperley and Tanatar](http://journals.aps.org/prb/abstract/10.1103/PhysRevB.39.5005) \nare presently still considered as the\nbest possible benchmarks for the two- and three-dimensional electron\ngas. \n\n\n\nThe electron gas, in \ntwo or three dimensions is thus interesting as a test-bed for \nelectron-electron correlations. The three-dimensional \nelectron gas is particularly important as a cornerstone \nof the local-density approximation in density-functional \ntheory. In the physical world, systems \nsimilar to the three-dimensional electron gas can be \nfound in, for example, alkali metals and doped \nsemiconductors. 
Two-dimensional electron fluids are \nobserved on metal and liquid-helium surfaces, as well as \nat metal-oxide-semiconductor interfaces. However, the Coulomb \ninteraction has an infinite range, and therefore \nlong-range correlations play an essential role in the\nelectron gas. \n\n\n\n\nAt low densities, the electrons become \nlocalized and form a lattice. This so-called Wigner \ncrystallization is a direct consequence \nof the long-ranged repulsive interaction. At higher\ndensities, the electron gas is better described as a\nliquid.\nWhen using, for example, Monte Carlo methods the electron gas must be approximated \nby a finite system. The long-range Coulomb interaction \nin the electron gas causes additional finite-size effects that are not\npresent in other infinite systems like nuclear matter or neutron star matter.\nThis poses additional challenges to many-body methods when applied \nto the electron gas.\n\n\n\n\n\n## The infinite electron gas as a homogenous system\n\nThis is a homogeneous system and the one-particle wave functions are given by plane wave functions normalized to a volume $\\Omega$ \nfor a box with length $L$ (the limit $L\\rightarrow \\infty$ is to be taken after we have computed various expectation values)\n\n$$\n\\psi_{\\mathbf{k}\\sigma}(\\mathbf{r})= \\frac{1}{\\sqrt{\\Omega}}\\exp{(i\\mathbf{kr})}\\xi_{\\sigma}\n$$\n\nwhere $\\mathbf{k}$ is the wave number and $\\xi_{\\sigma}$ is a spin function for either spin up or down\n\n$$\n\\xi_{\\sigma=+1/2}=\\left(\\begin{array}{c} 1 \\\\ 0 \\end{array}\\right) \\hspace{0.5cm}\n\\xi_{\\sigma=-1/2}=\\left(\\begin{array}{c} 0 \\\\ 1 \\end{array}\\right).\n$$\n\nWe assume that we have periodic boundary conditions which limit the allowed wave numbers to\n\n$$\nk_i=\\frac{2\\pi n_i}{L}\\hspace{0.5cm} i=x,y,z \\hspace{0.5cm} n_i=0,\\pm 1,\\pm 2, \\dots\n$$\n\nWe assume first that the electrons interact via a central, symmetric and translationally invariant\ninteraction $V(r_{12})$ with\n$r_{12}=|\\mathbf{r}_1-\\mathbf{r}_2|$. The interaction is spin independent.\n\nThe total Hamiltonian consists then of kinetic and potential energy\n\n$$\n\\hat{H} = \\hat{T}+\\hat{V}.\n$$\n\nThe operator for the kinetic energy can be written as\n\n$$\n\\hat{T}=\\sum_{\\mathbf{k}\\sigma}\\frac{\\hbar^2k^2}{2m}a_{\\mathbf{k}\\sigma}^{\\dagger}a_{\\mathbf{k}\\sigma}.\n$$\n\n## Defining the Hamiltonian operator\n\nThe Hamiltonian operator is given by\n\n$$\n\\hat{H}=\\hat{H}_{el}+\\hat{H}_{b}+\\hat{H}_{el-b},\n$$\n\nwith the electronic part\n\n$$\n\\hat{H}_{el}=\\sum_{i=1}^N\\frac{p_i^2}{2m}+\\frac{e^2}{2}\\sum_{i\\ne j}\\frac{e^{-\\mu |\\mathbf{r}_i-\\mathbf{r}_j|}}{|\\mathbf{r}_i-\\mathbf{r}_j|},\n$$\n\nwhere we have introduced an explicit convergence factor\n(the limit $\\mu\\rightarrow 0$ is performed after having calculated the various integrals).\nCorrespondingly, we have\n\n$$\n\\hat{H}_{b}=\\frac{e^2}{2}\\int\\int d\\mathbf{r}d\\mathbf{r}'\\frac{n(\\mathbf{r})n(\\mathbf{r}')e^{-\\mu |\\mathbf{r}-\\mathbf{r}'|}}{|\\mathbf{r}-\\mathbf{r}'|},\n$$\n\nwhich is the energy contribution from the positive background charge with density\n$n(\\mathbf{r})=N/\\Omega$. 
Finally,\n\n$$\n\\hat{H}_{el-b}=-\\frac{e^2}{2}\\sum_{i=1}^N\\int d\\mathbf{r}\\frac{n(\\mathbf{r})e^{-\\mu |\\mathbf{r}-\\mathbf{x}_i|}}{|\\mathbf{r}-\\mathbf{x}_i|},\n$$\n\nis the interaction between the electrons and the positive background.\n\n\n\n## Single-particle Hartree-Fock energy\n\nIn the first exercise below we show that the Hartree-Fock energy can be written as\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m_e}-\\frac{e^{2}}\n{\\Omega^{2}}\\sum_{k'\\leq\nk_{F}}\\int d\\mathbf{r}e^{i(\\mathbf{k}'-\\mathbf{k})\\mathbf{r}}\\int\nd\\mathbf{r'}\\frac{e^{i(\\mathbf{k}-\\mathbf{k}')\\mathbf{r}'}}\n{\\vert\\mathbf{r}-\\mathbf{r}'\\vert}\n$$\n\nresulting in\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m_e}-\\frac{e^{2}\nk_{F}}{2\\pi}\n\\left[\n2+\\frac{k_{F}^{2}-k^{2}}{kk_{F}}ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right]\n$$\n\nThe previous result can be rewritten in terms of the density\n\n$$\nn= \\frac{k_F^3}{3\\pi^2}=\\frac{3}{4\\pi r_s^3},\n$$\n\nwhere $n=N_e/\\Omega$, $N_e$ being the number of electrons, and $r_s$ is the radius of a sphere which represents the volum per conducting electron. \nIt can be convenient to use the Bohr radius $a_0=\\hbar^2/e^2m_e$.\nFor most metals we have a relation $r_s/a_0\\sim 2-6$. The quantity $r_s$ is dimensionless.\n\n\nIn the second exercise below we find that\nthe total energy\n$E_0/N_e=\\langle\\Phi_{0}|\\hat{H}|\\Phi_{0}\\rangle/N_e$ for\nfor this system to first order in the interaction is given as\n\n$$\nE_0/N_e=\\frac{e^2}{2a_0}\\left[\\frac{2.21}{r_s^2}-\\frac{0.916}{r_s}\\right].\n$$\n\n\n\n## Exercise 1: Hartree-Fock single-particle solution for the electron gas\n\nThe electron gas model allows closed form solutions for quantities like the \nsingle-particle Hartree-Fock energy. The latter quantity is given by the following expression\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}}\n{V^{2}}\\sum_{k'\\leq\nk_{F}}\\int d\\mathbf{r}e^{i(\\mathbf{k'}-\\mathbf{k})\\mathbf{r}}\\int\nd\\mathbf{r}'\\frac{e^{i(\\mathbf{k}-\\mathbf{k'})\\mathbf{r}'}}\n{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\n$$\n\n**a)**\nShow first that\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}\nk_{F}}{2\\pi}\n\\left[\n2+\\frac{k_{F}^{2}-k^{2}}{kk_{F}}ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right]\n$$\n\n\n\n**Hint.**\nHint: Introduce the convergence factor \n$e^{-\\mu\\vert\\mathbf{r}-\\mathbf{r}'\\vert}$\nin the potential and use $\\sum_{\\mathbf{k}}\\rightarrow\n\\frac{V}{(2\\pi)^{3}}\\int d\\mathbf{k}$\n\n\n\n\n\n**Solution.**\nWe want to show that, given the Hartree-Fock equation for the electron gas\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}}\n{V^{2}}\\sum_{p\\leq\nk_{F}}\\int d\\mathbf{r}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{r})}\\int\nd\\mathbf{r}'\\frac{\\exp{(i(\\mathbf{k}-\\mathbf{p})\\mathbf{r}'})}\n{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\n$$\n\nthe single-particle energy can be written as\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{e^{2}\nk_{F}}{2\\pi}\n\\left[\n2+\\frac{k_{F}^{2}-k^{2}}{kk_{F}}ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right].\n$$\n\nWe introduce the convergence factor \n$e^{-\\mu\\vert\\mathbf{r}-\\mathbf{r}'\\vert}$\nin the potential and use $\\sum_{\\mathbf{k}}\\rightarrow\n\\frac{V}{(2\\pi)^{3}}\\int d\\mathbf{k}$. We can then rewrite the integral as\n\n\n
\n\n$$\n\\begin{equation}\n\\frac{e^{2}}\n{V^{2}}\\sum_{p\\leq\nk_{F}}\\int d\\mathbf{r}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{r})}\\int\nd\\mathbf{r}'\\frac{\\exp{(i(\\mathbf{k}-\\mathbf{p})\\mathbf{r}')}}\n{\\vert\\mathbf{r}-\\mathbf{r}'\\vert}= \n\\label{_auto1} \\tag{1}\n\\end{equation}\n$$\n\n
\n\n$$\n\\begin{equation} \n\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{r}\\int\n\\frac{d\\mathbf{r}'}{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\\exp{(-i\\mathbf{k}(\\mathbf{r}-\\mathbf{r}'))}\\int d\\mathbf{p}\\exp{(i\\mathbf{p}(\\mathbf{r}-\\mathbf{r}'))},\n\\label{_auto2} \\tag{2}\n\\end{equation}\n$$\n\nand introducing the abovementioned convergence factor we have\n\n\n
\n\n$$\n\\begin{equation}\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{r}\\int d\\mathbf{r}'\\frac{\\exp{(-\\mu\\vert\\mathbf{r}-\\mathbf{r}'\\vert})}{\\vert\\mathbf{r}-\\mathbf{r'}\\vert}\\int d\\mathbf{p}\\exp{(i(\\mathbf{p}-\\mathbf{k})(\\mathbf{r}-\\mathbf{r}'))}.\n\\label{_auto3} \\tag{3}\n\\end{equation}\n$$\n\nWith a change variables to $\\mathbf{x} = \\mathbf{r}-\\mathbf{r}'$ and $\\mathbf{y}=\\mathbf{r}'$ we rewrite the last integral as\n\n$$\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{p}\\int d\\mathbf{y}\\int d\\mathbf{x}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{x})}\\frac{\\exp{(-\\mu\\vert\\mathbf{x}\\vert})}{\\vert\\mathbf{x}\\vert}.\n$$\n\nThe integration over $\\mathbf{x}$ can be performed using spherical coordinates, resulting in (with $x=\\vert \\mathbf{x}\\vert$)\n\n$$\n\\int d\\mathbf{x}\\exp{(i(\\mathbf{p}-\\mathbf{k})\\mathbf{x})}\\frac{\\exp{(-\\mu\\vert\\mathbf{x}\\vert})}{\\vert\\mathbf{x}\\vert}=\\int x^2 dx d\\phi d\\cos{(\\theta)}\\exp{(i(\\mathbf{p}-\\mathbf{k})x\\cos{(\\theta))}}\\frac{\\exp{(-\\mu x)}}{x}.\n$$\n\nWe obtain\n\n\n
\n\n$$\n\\begin{equation}\n4\\pi \\int dx \\frac{ \\sin{(\\vert \\mathbf{p}-\\mathbf{k}\\vert x)} }{\\vert \\mathbf{p}-\\mathbf{k}\\vert}\\exp{(-\\mu x)}= \\frac{4\\pi}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}.\n\\label{_auto4} \\tag{4}\n\\end{equation}\n$$\n\nThis result gives us\n\n\n
\n\n$$\n\\begin{equation}\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{V (2\\pi)^3} \\int d\\mathbf{p}\\int d\\mathbf{y}\\frac{4\\pi}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}=\\lim_{\\mu \\to 0}\\frac{e^{2}}{ 2\\pi^2} \\int d\\mathbf{p}\\frac{1}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2},\n\\label{_auto5} \\tag{5}\n\\end{equation}\n$$\n\nwhere we have used that the integrand on the left-hand side does not depend on $\\mathbf{y}$ and that $\\int d\\mathbf{y}=V$.\n\nIntroducing spherical coordinates we can rewrite the integral as\n\n\n
\n\n$$\n\\begin{equation}\n\\lim_{\\mu \\to 0}\\frac{e^{2}}{ 2\\pi^2} \\int d\\mathbf{p}\\frac{1}{\\mu^2+\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}=\\frac{e^{2}}{ 2\\pi^2} \\int d\\mathbf{p}\\frac{1}{\\vert \\mathbf{p}-\\mathbf{k}\\vert^2}= \n\\label{_auto6} \\tag{6}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\frac{e^{2}}{\\pi} \\int_0^{k_F} p^2dp\\int_0^{\\pi} d\\theta\\cos{(\\theta)}\\frac{1}{p^2+k^2-2pk\\cos{(\\theta)}},\n\\label{_auto7} \\tag{7}\n\\end{equation}\n$$\n\nand with the change of variables $\\cos{(\\theta)}=u$ we have\n\n$$\n\\frac{e^{2}}{\\pi} \\int_0^{k_F} p^2dp\\int_{0}^{\\pi} d\\theta\\cos{(\\theta)}\\frac{1}{p^2+k^2-2pk\\cos{(\\theta)}}=\\frac{e^{2}}{\\pi} \\int_0^{k_F} p^2dp\\int_{-1}^{1} du\\frac{1}{p^2+k^2-2pku},\n$$\n\nwhich gives\n\n$$\n\\frac{e^{2}}{k\\pi} \\int_0^{k_F} pdp\\left\\{ln(\\vert p+k\\vert)-ln(\\vert p-k\\vert)\\right\\}.\n$$\n\nIntroducing new variables $x=p+k$ and $y=p-k$, we obtain after some straightforward reordering of the integral\n\n$$\n\\frac{e^{2}}{k\\pi}\\left[\nkk_F+\\frac{k_{F}^{2}-k^{2}}{kk_{F}}ln\\left\\vert\\frac{k+k_{F}}\n{k-k_{F}}\\right\\vert\n\\right],\n$$\n\nwhich gives the abovementioned expression for the single-particle energy.\n\n\n\n**b)**\nRewrite the above result as a function of the density\n\n$$\nn= \\frac{k_F^3}{3\\pi^2}=\\frac{3}{4\\pi r_s^3},\n$$\n\nwhere $n=N/V$, $N$ being the number of particles, and $r_s$ is the radius of a sphere which represents the volum per conducting electron.\n\n\n\n**Solution.**\nIntroducing the dimensionless quantity $x=k/k_F$ and the function\n\n$$\nF(x) = \\frac{1}{2}+\\frac{1-x^2}{4x}\\ln{\\left\\vert \\frac{1+x}{1-x}\\right\\vert},\n$$\n\nwe can rewrite the single-particle Hartree-Fock energy as\n\n$$\n\\varepsilon_{k}^{HF}=\\frac{\\hbar^{2}k^{2}}{2m}-\\frac{2e^{2}\nk_{F}}{\\pi}F(k/k_F),\n$$\n\nand dividing by the non-interacting contribution at the Fermi level,\n\n$$\n\\varepsilon_{0}^{F}=\\frac{\\hbar^{2}k_F^{2}}{2m},\n$$\n\nwe have\n\n$$\n\\frac{\\varepsilon_{k}^{HF} }{\\varepsilon_{0}^{F}}=x^2-\\frac{e^2m}{\\hbar^2 k_F\\pi}F(x)=x^2-\\frac{4}{\\pi k_Fa_0}F(x),\n$$\n\nwhere $a_0=0.0529$ nm is the Bohr radius, setting thereby a natural length scale. \n\n\nBy introducing the radius $r_s$ of a sphere whose volume is the volume occupied by each electron, we can rewrite the previous equation in terms of $r_s$ using that the electron density $n=N/V$\n\n$$\nn=\\frac{k_F^3}{3\\pi^2} = \\frac{3}{4\\pi r_s^3},\n$$\n\nwe have (with $k_F=1.92/r_s$,\n\n$$\n\\frac{\\varepsilon_{k}^{HF} }{\\varepsilon_{0}^{F}}=x^2-\\frac{e^2m}{\\hbar^2 k_F\\pi}F(x)=x^2-\\frac{r_s}{a_0}0.663F(x),\n$$\n\nwith $r_s \\sim 2-6$ for most metals.\n\n\n\nIt can be convenient to use the Bohr radius $a_0=\\hbar^2/e^2m$.\nFor most metals we have a relation $r_s/a_0\\sim 2-6$.\n\n**c)**\nMake a plot of the free electron energy and the Hartree-Fock energy and discuss the behavior around the Fermi surface. Extract also the Hartree-Fock band width $\\Delta\\varepsilon^{HF}$ defined as\n\n$$\n\\Delta\\varepsilon^{HF}=\\varepsilon_{k_{F}}^{HF}-\n\\varepsilon_{0}^{HF}.\n$$\n\nCompare this results with the corresponding one for a free electron and comment your results. How large is the contribution due to the exchange term in the Hartree-Fock equation?\n\n\n\n**Solution.**\nWe can now define the so-called band gap, that is the scatter between the maximal and the minimal value of the electrons in the conductance band of a metal (up to the Fermi level). 
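Before quoting the numbers, a quick numerical check of the dimensionless ratio derived in b) is useful (only a sketch, assuming numpy; the endpoints are approached from slightly inside the interval to avoid the logarithmic singularity at $x=1$):\n\n\n```\nimport numpy as np\n\ndef F(x):\n    return 0.5 + np.log(abs((1.0 + x)/(1.0 - x)))*(1.0 - x*x)*0.25/x\n\ndef ratio(x, rs=4.0):\n    # epsilon_k^HF/epsilon_0^F = x^2 - (r_s/a_0)*0.663*F(x)\n    return x*x - rs*0.663*F(x)\n\nprint(ratio(1.0 - 1e-9))                  # k = k_F, close to -0.326\nprint(ratio(1e-9))                        # k = 0, close to -2.652\nprint(ratio(1.0 - 1e-9) - ratio(1e-9))    # band gap, close to 2.326\n```\n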
\nFor $x=1$ and $r_s/a_0=4$ we have\n\n$$\n\\frac{\\varepsilon_{k=k_F}^{HF} }{\\varepsilon_{0}^{F}} = -0.326,\n$$\n\nand for $x=0$ we have\n\n$$\n\\frac{\\varepsilon_{k=0}^{HF} }{\\varepsilon_{0}^{F}} = -2.652,\n$$\n\nwhich results in a gap at the Fermi level of\n\n$$\n\\Delta \\varepsilon^{HF} = \\frac{\\varepsilon_{k=k_F}^{HF} }{\\varepsilon_{0}^{F}}-\\frac{\\varepsilon_{k=0}^{HF} }{\\varepsilon_{0}^{F}} = 2.326.\n$$\n\nThis quantity measures the deviation from the $k=0$ single-particle energy and the energy at the Fermi level.\nThe general result is\n\n$$\n\\Delta \\varepsilon^{HF} = 1+\\frac{r_s}{a_0}0.663.\n$$\n\nThe following python code produces a plot of the electron energy for a free electron (only kinetic energy) and \nfor the Hartree-Fock solution. We have chosen here a ratio $r_s/a_0=4$ and the equations are plotted as funtions\nof $k/f_F$.\n\n\n```\n%matplotlib inline\n\nimport numpy as np\nfrom math import log\nfrom matplotlib import pyplot as plt\nfrom matplotlib import rc, rcParams\nimport matplotlib.units as units\nimport matplotlib.ticker as ticker\nrc('text',usetex=True)\nrc('font',**{'family':'serif','serif':['Hartree-Fock energy']})\nfont = {'family' : 'serif',\n 'color' : 'darkred',\n 'weight' : 'normal',\n 'size' : 16,\n }\n\nN = 100\nx = np.linspace(0.0, 2.0,N)\nF = 0.5+np.log(abs((1.0+x)/(1.0-x)))*(1.0-x*x)*0.25/x\ny = x*x -4.0*0.663*F\n\nplt.plot(x, y, 'b-')\nplt.plot(x, x*x, 'r-')\nplt.title(r'{\\bf Hartree-Fock single-particle energy for electron gas}', fontsize=20) \nplt.text(3, -40, r'Parameters: $r_s/a_0=4$', fontdict=font)\nplt.xlabel(r'$k/k_F$',fontsize=20)\nplt.ylabel(r'$\\varepsilon_k^{HF}/\\varepsilon_0^F$',fontsize=20)\n# Tweak spacing to prevent clipping of ylabel\nplt.subplots_adjust(left=0.15)\nplt.savefig('hartreefockspelgas.pdf', format='pdf')\nplt.show()\n```\n\nFrom the plot we notice that the exchange term increases considerably the band gap\ncompared with the non-interacting gas of electrons.\n\n\nWe will now define a quantity called the effective mass.\nFor $\\vert\\mathbf{k}\\vert$ near $k_{F}$, we can Taylor expand the Hartree-Fock energy as\n\n$$\n\\varepsilon_{k}^{HF}=\\varepsilon_{k_{F}}^{HF}+\n\\left(\\frac{\\partial\\varepsilon_{k}^{HF}}{\\partial k}\\right)_{k_{F}}(k-k_{F})+\\dots\n$$\n\nIf we compare the latter with the corresponding expressiyon for the non-interacting system\n\n$$\n\\varepsilon_{k}^{(0)}=\\frac{\\hbar^{2}k^{2}_{F}}{2m}+\n\\frac{\\hbar^{2}k_{F}}{m}\\left(k-k_{F}\\right)+\\dots ,\n$$\n\nwe can define the so-called effective Hartree-Fock mass as\n\n$$\nm_{HF}^{*}\\equiv\\hbar^{2}k_{F}\\left(\n\\frac{\\partial\\varepsilon_{k}^{HF}}\n{\\partial k}\\right)_{k_{F}}^{-1}\n$$\n\n**d)**\nCompute $m_{HF}^{*}$ and comment your results.\n\n**e)**\nShow that the level density (the number of single-electron states per unit energy) can be written as\n\n$$\nn(\\varepsilon)=\\frac{Vk^{2}}{2\\pi^{2}}\\left(\n\\frac{\\partial\\varepsilon}{\\partial k}\\right)^{-1}\n$$\n\nCalculate $n(\\varepsilon_{F}^{HF})$ and comment the results.\n\n\n\n\n\n\n\n\n\n\n\n## Exercise 2: Hartree-Fock ground state energy for the electron gas in three dimensions\n\nWe consider a system of electrons in infinite matter, the so-called electron gas. 
This is a homogeneous system and the one-particle states are given by plane wave function normalized to a volume $\\Omega$ \nfor a box with length $L$ (the limit $L\\rightarrow \\infty$ is to be taken after we have computed various expectation values)\n\n$$\n\\psi_{\\mathbf{k}\\sigma}(\\mathbf{r})= \\frac{1}{\\sqrt{\\Omega}}\\exp{(i\\mathbf{kr})}\\xi_{\\sigma}\n$$\n\nwhere $\\mathbf{k}$ is the wave number and $\\xi_{\\sigma}$ is a spin function for either spin up or down\n\n$$\n\\xi_{\\sigma=+1/2}=\\left(\\begin{array}{c} 1 \\\\ 0 \\end{array}\\right) \\hspace{0.5cm}\n\\xi_{\\sigma=-1/2}=\\left(\\begin{array}{c} 0 \\\\ 1 \\end{array}\\right).\n$$\n\nWe assume that we have periodic boundary conditions which limit the allowed wave numbers to\n\n$$\nk_i=\\frac{2\\pi n_i}{L}\\hspace{0.5cm} i=x,y,z \\hspace{0.5cm} n_i=0,\\pm 1,\\pm 2, \\dots\n$$\n\nWe assume first that the particles interact via a central, symmetric and translationally invariant\ninteraction $V(r_{12})$ with\n$r_{12}=|\\mathbf{r}_1-\\mathbf{r}_2|$. The interaction is spin independent.\n\nThe total Hamiltonian consists then of kinetic and potential energy\n\n$$\n\\hat{H} = \\hat{T}+\\hat{V}.\n$$\n\nThe operator for the kinetic energy is given by\n\n$$\n\\hat{T}=\\sum_{\\mathbf{k}\\sigma}\\frac{\\hbar^2k^2}{2m}a_{\\mathbf{k}\\sigma}^{\\dagger}a_{\\mathbf{k}\\sigma}.\n$$\n\n**a)**\nFind the expression for the interaction\n$\\hat{V}$ expressed with creation and annihilation operators. The expression for the interaction\nhas to be written in $k$ space, even though $V$ depends only on the relative distance. It means that you need to set up the Fourier transform $\\langle \\mathbf{k}_i\\mathbf{k}_j| V | \\mathbf{k}_m\\mathbf{k}_n\\rangle$.\n\n\n\n**Solution.**\nA general two-body interaction element is given by (not using anti-symmetrized matrix elements)\n\n$$\n\\hat{V} = \\frac{1}{2} \\sum_{pqrs} \\langle pq \\hat{v} \\vert rs\\rangle a_p^\\dagger a_q^\\dagger a_s a_r ,\n$$\n\nwhere $\\hat{v}$ is assumed to depend only on the relative distance between two interacting particles, that is\n$\\hat{v} = v(\\vec r_1, \\vec r_2) = v(|\\vec r_1 - \\vec r_2|) = v(r)$, with $r = |\\vec r_1 - \\vec r_2|$). \nIn our case we have, writing out explicitely the spin degrees of freedom as well\n\n\n
\n\n$$\n\\begin{equation}\n\\hat{V} = \\frac{1}{2} \\sum_{\\substack{\\sigma_p \\sigma_q \\\\ \\sigma_r \\sigma_s}}\n\\sum_{\\substack{\\mathbf{k}_p \\mathbf{k}_q \\\\ \\mathbf{k}_r \\mathbf{k}_s}}\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_2\\vert v \\vert \\mathbf{k}_r \\sigma_3, \\mathbf{k}_s \\sigma_s\\rangle\na_{\\mathbf{k}_p \\sigma_p}^\\dagger a_{\\mathbf{k}_q \\sigma_q}^\\dagger a_{\\mathbf{k}_s \\sigma_s} a_{\\mathbf{k}_r \\sigma_r} .\n\\label{_auto8} \\tag{8}\n\\end{equation}\n$$\n\nInserting plane waves as eigenstates we can rewrite the matrix element as\n\n$$\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert \\hat{v} \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle =\n\\frac{1}{\\Omega^2} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s}\n\\int\\int \\exp{-i(\\mathbf{k}_p \\cdot \\mathbf{r}_p)} \\exp{-i( \\mathbf{k}_q \\cdot \\mathbf{r}_q)} \\hat{v}(r) \\exp{i(\\mathbf{k}_r \\cdot \\mathbf{r}_p)} \\exp{i( \\mathbf{k}_s \\cdot \\mathbf{r}_q)} d\\mathbf{r}_p d\\mathbf{r}_q ,\n$$\n\nwhere we have used the orthogonality properties of the spin functions. We change now the variables of integration\nby defining $\\mathbf{r} = \\mathbf{r}_p - \\mathbf{r}_q$, which gives $\\mathbf{r}_p = \\mathbf{r} + \\mathbf{r}_q$ and $d^3 \\mathbf{r} = d^3 \\mathbf{r}_p$. \nThe limits are not changed since they are from $-\\infty$ to $\\infty$ for all integrals. This results in\n\n$$\n\\begin{align*}\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert \\hat{v} \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle\n&= \\frac{1}{\\Omega^2} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s} \\int\\exp{i (\\mathbf{k}_s - \\mathbf{k}_q) \\cdot \\mathbf{r}_q} \\int v(r) \\exp{i(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot ( \\mathbf{r} + \\mathbf{r}_q)} d\\mathbf{r} d\\mathbf{r}_q \\\\\n&= \\frac{1}{\\Omega^2} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s} \\int v(r) \\exp{i\\left[(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}\\right]}\n\\int \\exp{i\\left[(\\mathbf{k}_s - \\mathbf{k}_q + \\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}_q\\right]} d\\mathbf{r}_q d\\mathbf{r} .\n\\end{align*}\n$$\n\nWe recognize the integral over $\\mathbf{r}_q$ as a $\\delta$-function, resulting in\n\n$$\n\\langle \\mathbf{k}_p \\sigma_p, \\mathbf{k}_q \\sigma_q\\vert \\hat{v} \\vert \\mathbf{k}_r \\sigma_r, \\mathbf{k}_s \\sigma_s\\rangle =\n\\frac{1}{\\Omega} \\delta_{\\sigma_p \\sigma_r} \\delta_{\\sigma_q \\sigma_s} \\delta_{(\\mathbf{k}_p + \\mathbf{k}_q),(\\mathbf{k}_r + \\mathbf{k}_s)} \\int v(r) \\exp{i\\left[(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}\\right]} d^3r .\n$$\n\nFor this equation to be different from zero, we must have conservation of momenta, we need to satisfy\n$\\mathbf{k}_p + \\mathbf{k}_q = \\mathbf{k}_r + \\mathbf{k}_s$. We can use the conservation of momenta to remove one of the summation variables resulting in\n\n$$\n\\hat{V} =\n\\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k}_p \\mathbf{k}_q \\mathbf{k}_r} \\left[ \\int v(r) \\exp{i\\left[(\\mathbf{k}_r - \\mathbf{k}_p) \\cdot \\mathbf{r}\\right]} d^3r \\right]\na_{\\mathbf{k}_p \\sigma}^\\dagger a_{\\mathbf{k}_q \\sigma'}^\\dagger a_{\\mathbf{k}_p + \\mathbf{k}_q - \\mathbf{k}_r, \\sigma'} a_{\\mathbf{k}_r \\sigma},\n$$\n\nwhich can be rewritten as\n\n\n
\n\n$$\n\\begin{equation}\n\\hat{V} =\n\\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\mathbf{q}} \\left[ \\int v(r) \\exp{-i( \\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right]\na_{\\mathbf{k} + \\mathbf{q}, \\sigma}^\\dagger a_{\\mathbf{p} - \\mathbf{q}, \\sigma'}^\\dagger a_{\\mathbf{p} \\sigma'} a_{\\mathbf{k} \\sigma},\n\\label{eq:V} \\tag{9}\n\\end{equation}\n$$\n\nThis equation will be useful for our nuclear matter calculations as well. In the last equation we defined\nthe quantities\n$\\mathbf{p} = \\mathbf{k}_p + \\mathbf{k}_q - \\mathbf{k}_r$, $\\mathbf{k} = \\mathbf{k}_r$ og $\\mathbf{q} = \\mathbf{k}_p - \\mathbf{k}_r$.\n\n\n\n**b)**\nCalculate thereafter the reference energy for the infinite electron gas in three dimensions using the above expressions for the kinetic energy and the potential energy.\n\n\n\n**Solution.**\nLet us now compute the expectation value of the reference energy using the expressions for the kinetic energy operator and the interaction.\nWe need to compute $\\langle \\Phi_0\\vert \\hat{H} \\vert \\Phi_0\\rangle = \\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle + \\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle$, where $\\vert \\Phi_0\\rangle$ is our reference Slater determinant, constructed from filling all single-particle states up to the Fermi level.\nLet us start with the kinetic energy first\n\n$$\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle \n= \\langle \\Phi_0\\vert \\left( \\sum_{\\mathbf{p} \\sigma} \\frac{\\hbar^2 p^2}{2m} a_{\\mathbf{p} \\sigma}^\\dagger a_{\\mathbf{p} \\sigma} \\right) \\vert \\Phi_0\\rangle \\\\\n= \\sum_{\\mathbf{p} \\sigma} \\frac{\\hbar^2 p^2}{2m} \\langle \\Phi_0\\vert a_{\\mathbf{p} \\sigma}^\\dagger a_{\\mathbf{p} \\sigma} \\vert \\Phi_0\\rangle .\n$$\n\nFrom the possible contractions using Wick's theorem, it is straightforward to convince oneself that the expression for the kinetic energy becomes\n\n$$\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle = \\sum_{\\mathbf{i} \\leq F} \\frac{\\hbar^2 k_i^2}{m} = \\frac{\\Omega}{(2\\pi)^3} \\frac{\\hbar^2}{m} \\int_0^{k_F} k^2 d\\mathbf{k}.\n$$\n\nThe sum of the spin degrees of freedom results in a factor of two only if we deal with identical spin $1/2$ fermions. \nChanging to spherical coordinates, the integral over the momenta $k$ results in the final expression\n\n$$\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle = \\frac{\\Omega}{(2\\pi)^3} \\left( 4\\pi \\int_0^{k_F} k^4 d\\mathbf{k} \\right) = \\frac{4\\pi\\Omega}{(2\\pi)^3} \\frac{1}{5} k_F^5 = \\frac{4\\pi\\Omega}{5(2\\pi)^3} k_F^5 = \\frac{\\hbar^2 \\Omega}{10\\pi^2 m} k_F^5 .\n$$\n\nThe density of states in momentum space is given by $2\\Omega/(2\\pi)^3$, where we have included the degeneracy due to the spin degrees of freedom.\nThe volume is given by $4\\pi k_F^3/3$, and the number of particles becomes\n\n$$\nN = \\frac{2\\Omega}{(2\\pi)^3} \\frac{4}{3} \\pi k_F^3 = \\frac{\\Omega}{3\\pi^2} k_F^3 \\quad \\Rightarrow \\quad\nk_F = \\left( \\frac{3\\pi^2 N}{\\Omega} \\right)^{1/3}.\n$$\n\nThis gives us\n\n\n
\n\n$$\n\\begin{equation}\n\\langle \\Phi_0\\vert \\hat{T} \\vert \\Phi_0\\rangle =\n\\frac{\\hbar^2 \\Omega}{10\\pi^2 m} \\left( \\frac{3\\pi^2 N}{\\Omega} \\right)^{5/3} =\n\\frac{\\hbar^2 (3\\pi^2)^{5/3} N}{10\\pi^2 m} \\rho^{2/3} ,\n\\label{eq:T_forventning} \\tag{10}\n\\end{equation}\n$$\n\nWe are now ready to calculate the expectation value of the potential energy\n\n$$\n\\begin{align*}\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle \n&= \\langle \\Phi_0\\vert \\left( \\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\mathbf{q} } \\left[ \\int v(r) \\exp{-i (\\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right] a_{\\mathbf{k} + \\mathbf{q}, \\sigma}^\\dagger a_{\\mathbf{p} - \\mathbf{q}, \\sigma'}^\\dagger a_{\\mathbf{p} \\sigma'} a_{\\mathbf{k} \\sigma} \\right) \\vert \\Phi_0\\rangle \\\\\n&= \\frac{1}{2\\Omega} \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{k} \\mathbf{p} \\mathbf{q}} \\left[ \\int v(r) \\exp{-i (\\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right]\\langle \\Phi_0\\vert a_{\\mathbf{k} + \\mathbf{q}, \\sigma}^\\dagger a_{\\mathbf{p} - \\mathbf{q}, \\sigma'}^\\dagger a_{\\mathbf{p} \\sigma'} a_{\\mathbf{k} \\sigma} \\vert \\Phi_0\\rangle .\n\\end{align*}\n$$\n\nThe only contractions which result in non-zero results are those that involve states below the Fermi level, that is \n$k \\leq k_F$, $p \\leq k_F$, $|\\mathbf{p} - \\mathbf{q}| < \\mathbf{k}_F$ and $|\\mathbf{k} + \\mathbf{q}| \\leq k_F$. Due to momentum conservation we must also have $\\mathbf{k} + \\mathbf{q} = \\mathbf{p}$, $\\mathbf{p} - \\mathbf{q} = \\mathbf{k}$ and $\\sigma = \\sigma'$ or $\\mathbf{k} + \\mathbf{q} = \\mathbf{k}$ and $\\mathbf{p} - \\mathbf{q} = \\mathbf{p}$. \nSummarizing, we must have\n\n$$\n\\mathbf{k} + \\mathbf{q} = \\mathbf{p} \\quad \\text{and} \\quad \\sigma = \\sigma', \\qquad\n\\text{or} \\qquad\n\\mathbf{q} = \\mathbf{0} .\n$$\n\nWe obtain then\n\n$$\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle =\n\\frac{1}{2\\Omega} \\left( \\sum_{\\sigma \\sigma'} \\sum_{\\mathbf{q} \\mathbf{p} \\leq F} \\left[ \\int v(r) d\\mathbf{r} \\right] - \\sum_{\\sigma}\n\\sum_{\\mathbf{q} \\mathbf{p} \\leq F} \\left[ \\int v(r) \\exp{-i (\\mathbf{q} \\cdot \\mathbf{r})} d\\mathbf{r} \\right] \\right).\n$$\n\nThe first term is the so-called direct term while the second term is the exchange term. \nWe can rewrite this equation as (and this applies to any potential which depends only on the relative distance between particles)\n\n\n
\n\n$$\n\\begin{equation}\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle =\n\\frac{1}{2\\Omega} \\left( N^2 \\left[ \\int v(r) d\\mathbf{r} \\right] - N \\sum_{\\mathbf{q}} \\left[ \\int v(r) \\exp{-i (\\mathbf{q}\\cdot \\mathbf{r})} d\\mathbf{r} \\right] \\right),\n\\label{eq:V_b} \\tag{11}\n\\end{equation}\n$$\n\nwhere we have used the fact that a sum like $\\sum_{\\sigma}\\sum_{\\mathbf{k}}$ equals the number of particles. Using the fact that the density is given by\n$\\rho = N/\\Omega$, with $\\Omega$ being our volume, we can rewrite the last equation as\n\n$$\n\\langle \\Phi_0\\vert \\hat{V} \\vert \\Phi_0\\rangle =\n\\frac{1}{2} \\left( \\rho N \\left[ \\int v(r) d\\mathbf{r} \\right] - \\rho\\sum_{\\mathbf{q}} \\left[ \\int v(r) \\exp{-i (\\mathbf{q}\\cdot \\mathbf{r})} d\\mathbf{r} \\right] \\right).\n$$\n\nFor the electron gas\nthe interaction part of the Hamiltonian operator is given by\n\n$$\n\\hat{H}_I=\\hat{H}_{el}+\\hat{H}_{b}+\\hat{H}_{el-b},\n$$\n\nwith the electronic part\n\n$$\n\\hat{H}_{el}=\\sum_{i=1}^N\\frac{p_i^2}{2m}+\\frac{e^2}{2}\\sum_{i\\ne j}\\frac{e^{-\\mu |\\mathbf{r}_i-\\mathbf{r}_j|}}{|\\mathbf{r}_i-\\mathbf{r}_j|},\n$$\n\nwhere we have introduced an explicit convergence factor\n(the limit $\\mu\\rightarrow 0$ is performed after having calculated the various integrals).\nCorrespondingly, we have\n\n$$\n\\hat{H}_{b}=\\frac{e^2}{2}\\int\\int d\\mathbf{r}d\\mathbf{r}'\\frac{n(\\mathbf{r})n(\\mathbf{r}')e^{-\\mu |\\mathbf{r}-\\mathbf{r}'|}}{|\\mathbf{r}-\\mathbf{r}'|},\n$$\n\nwhich is the energy contribution from the positive background charge with density\n$n(\\mathbf{r})=N/\\Omega$. Finally,\n\n$$\n\\hat{H}_{el-b}=-\\frac{e^2}{2}\\sum_{i=1}^N\\int d\\mathbf{r}\\frac{n(\\mathbf{r})e^{-\\mu |\\mathbf{r}-\\mathbf{x}_i|}}{|\\mathbf{r}-\\mathbf{x}_i|},\n$$\n\nis the interaction between the electrons and the positive background.\nWe can show that\n\n$$\n\\hat{H}_{b}=\\frac{e^2}{2}\\frac{N^2}{\\Omega}\\frac{4\\pi}{\\mu^2},\n$$\n\nand\n\n$$\n\\hat{H}_{el-b}=-e^2\\frac{N^2}{\\Omega}\\frac{4\\pi}{\\mu^2}.\n$$\n\nFor the electron gas and a Coulomb interaction, these two terms are cancelled (in the thermodynamic limit) by the contribution from the direct term arising\nfrom the repulsive electron-electron interaction. What remains then when computing the reference energy is only the kinetic energy contribution and the contribution from the exchange term. For other interactions, like nuclear forces with a short range part and no infinite range, we need to compute both the direct term and the exchange term.\n\n\n\n**c)**\nShow thereafter that the final Hamiltonian can be written as\n\n$$\nH=H_{0}+H_{I},\n$$\n\nwith\n\n$$\nH_{0}={\\displaystyle\\sum_{\\mathbf{k}\\sigma}}\n\\frac{\\hbar^{2}k^{2}}{2m}a_{\\mathbf{k}\\sigma}^{\\dagger}\na_{\\mathbf{k}\\sigma},\n$$\n\nand\n\n$$\nH_{I}=\\frac{e^{2}}{2\\Omega}{\\displaystyle\\sum_{\\sigma_{1}\\sigma_{2}}}{\\displaystyle\\sum_{\\mathbf{q}\\neq 0,\\mathbf{k},\\mathbf{p}}}\\frac{4\\pi}{q^{2}}\na_{\\mathbf{k}+\\mathbf{q},\\sigma_{1}}^{\\dagger}\na_{\\mathbf{p}-\\mathbf{q},\\sigma_{2}}^{\\dagger}\na_{\\mathbf{p}\\sigma_{2}}a_{\\mathbf{k}\\sigma_{1}}.\n$$\n\n**d)**\nCalculate $E_0/N=\\langle \\Phi_{0}\\vert H\\vert \\Phi_{0}\\rangle/N$ for for this system to first order in the interaction. 
Show that, by using\n\n$$\n\\rho= \\frac{k_F^3}{3\\pi^2}=\\frac{3}{4\\pi r_0^3},\n$$\n\nwith $\\rho=N/\\Omega$, $r_0$\nbeing the radius of a sphere representing the volume an electron occupies \nand the Bohr radius $a_0=\\hbar^2/e^2m$, \nthat the energy per electron can be written as\n\n$$\nE_0/N=\\frac{e^2}{2a_0}\\left[\\frac{2.21}{r_s^2}-\\frac{0.916}{r_s}\\right].\n$$\n\nHere we have defined\n$r_s=r_0/a_0$ to be a dimensionless quantity.\n\n**e)**\nPlot your results. Why is this system stable?\nCalculate thermodynamical quantities like the pressure, given by\n\n$$\nP=-\\left(\\frac{\\partial E}{\\partial \\Omega}\\right)_N,\n$$\n\nand the bulk modulus\n\n$$\nB=-\\Omega\\left(\\frac{\\partial P}{\\partial \\Omega}\\right)_N,\n$$\n\nand comment your results.\n\n\n\n\n\n\n\n\n## Preparing the ground for numerical calculations; kinetic energy and Ewald term\n\nThe kinetic energy operator is\n\n\n
\n\n$$\n\\begin{equation}\n \\hat{H}_{\\text{kin}} = -\\frac{\\hbar^{2}}{2m}\\sum_{i=1}^{N}\\nabla_{i}^{2},\n\\label{_auto9} \\tag{12}\n\\end{equation}\n$$\n\nwhere the sum is taken over all particles in the finite\nbox. The Ewald electron-electron interaction operator \ncan be written as\n\n\n
\n\n$$\n\\begin{equation}\n \\hat{H}_{ee} = \\sum_{i < j}^{N} v_{E}\\left( \\mathbf{r}_{i}-\\mathbf{r}_{j}\\right)\n + \\frac{1}{2}Nv_{0},\n\\label{_auto10} \\tag{13}\n\\end{equation}\n$$\n\nwhere $v_{E}(\\mathbf{r})$ is the effective two-body \ninteraction and $v_{0}$ is the self-interaction, defined \nas $v_{0} = \\lim_{\\mathbf{r} \\rightarrow 0} \\left\\{ v_{E}(\\mathbf{r}) - 1/r\\right\\} $. \n\nThe negative \nelectron charges are neutralized by a positive, homogeneous \nbackground charge. Fraser *et al.* explain how the\nelectron-background and background-background terms, \n$\\hat{H}_{eb}$ and $\\hat{H}_{bb}$, vanish\nwhen using Ewald's interaction for the three-dimensional\nelectron gas. Using the same arguments, one can show that\nthese terms are also zero in the corresponding \ntwo-dimensional system. \n\n\n\n\n## Ewald correction term\n\nIn the three-dimensional electron gas, the Ewald \ninteraction is\n\n$$\nv_{E}(\\mathbf{r}) = \\sum_{\\mathbf{k} \\neq \\mathbf{0}}\n \\frac{4\\pi }{L^{3}k^{2}}e^{i\\mathbf{k}\\cdot \\mathbf{r}}\n e^{-\\eta^{2}k^{2}/4} \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n + \\sum_{\\mathbf{R}}\\frac{1}{\\left| \\mathbf{r}\n -\\mathbf{R}\\right| } \\mathrm{erfc} \\left( \\frac{\\left| \n \\mathbf{r}-\\mathbf{R}\\right|}{\\eta }\\right)\n - \\frac{\\pi \\eta^{2}}{L^{3}},\n\\label{_auto11} \\tag{14}\n\\end{equation}\n$$\n\nwhere $L$ is the box side length, $\\mathrm{erfc}(x)$ is the \ncomplementary error function, and $\\eta $ is a free\nparameter that can take any value in the interval \n$(0, \\infty )$.\n\n\n\n## Interaction in momentum space\n\nThe translational vector\n\n\n
\n\n$$\n\\begin{equation}\n \\mathbf{R} = L\\left(n_{x}\\mathbf{u}_{x} + n_{y}\n \\mathbf{u}_{y} + n_{z}\\mathbf{u}_{z}\\right) ,\n\\label{_auto12} \\tag{15}\n\\end{equation}\n$$\n\nwhere $\\mathbf{u}_{i}$ is the unit vector for dimension $i$,\nis defined for all integers $n_{x}$, $n_{y}$, and \n$n_{z}$. These vectors are used to obtain all image\ncells in the entire real space. \nThe parameter $\\eta $ decides how \nthe Coulomb interaction is divided into a short-ranged\nand long-ranged part, and does not alter the total\nfunction. However, the number of operations needed\nto calculate the Ewald interaction with a desired \naccuracy depends on $\\eta $, and $\\eta $ is therefore \noften chosen to optimize the convergence as a function\nof the simulation-cell size. In\nour calculations, we choose $\\eta $ to be an infinitesimally\nsmall positive number, similarly as was done by [Shepherd *et al.*](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.86.035111) and [Roggero *et al.*](https://journals.aps.org/prb/abstract/10.1103/PhysRevB.88.115138).\n\nThis gives an interaction that is evaluated only in\nFourier space. \n\nWhen studying the two-dimensional electron gas, we\nuse an Ewald interaction that is quasi two-dimensional.\nThe interaction is derived in three dimensions, with \nFourier discretization in only two dimensions. The Ewald effective\ninteraction has the form\n\n$$\nv_{E}(\\mathbf{r}) = \\sum_{\\mathbf{k} \\neq \\mathbf{0}} \n \\frac{\\pi }{L^{2}k}\\left\\{ e^{-kz} \\mathrm{erfc} \\left(\n \\frac{\\eta k}{2} - \\frac{z}{\\eta }\\right)+ \\right. \\nonumber\n$$\n\n$$\n\\left. e^{kz}\\mathrm{erfc} \\left( \\frac{\\eta k}{2} + \\frac{z}{\\eta }\n \\right) \\right\\} e^{i\\mathbf{k}\\cdot \\mathbf{r}_{xy}} \n \\nonumber\n$$\n\n$$\n+ \\sum_{\\mathbf{R}}\\frac{1}{\\left| \\mathbf{r}-\\mathbf{R}\n \\right| } \\mathrm{erfc} \\left( \\frac{\\left| \\mathbf{r}-\\mathbf{R}\n \\right|}{\\eta }\\right) \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n - \\frac{2\\pi}{L^{2}}\\left\\{ z\\mathrm{erf} \\left( \\frac{z}{\\eta }\n \\right) + \\frac{\\eta }{\\sqrt{\\pi }}e^{-z^{2}/\\eta^{2}}\\right\\},\n\\label{_auto13} \\tag{16}\n\\end{equation}\n$$\n\nwhere the Fourier vectors $\\mathbf{k}$ and the position vector\n$\\mathbf{r}_{xy}$ are defined in the $(x,y)$ plane. When\napplying the interaction $v_{E}(\\mathbf{r})$ to two-dimensional\nsystems, we set $z$ to zero. \n\n\nSimilarly as in the \nthree-dimensional case, also here we \nchoose $\\eta $ to approach zero from above. The resulting \nFourier-transformed interaction is\n\n\n
\n\n$$\n\\begin{equation}\n v_{E}^{\\eta = 0, z = 0}(\\mathbf{r}) = \\sum_{\\mathbf{k} \\neq \\mathbf{0}} \n \\frac{2\\pi }{L^{2}k}e^{i\\mathbf{k}\\cdot \\mathbf{r}_{xy}}. \n\\label{_auto14} \\tag{17}\n\\end{equation}\n$$\n\nThe self-interaction $v_{0}$ is a constant that can be \nincluded in the reference energy.\n\n\n\n\n## Antisymmetrized matrix elements in three dimensions\n\nIn the three-dimensional electron gas, the antisymmetrized\nmatrix elements are\n\n\n
\n\n$$\n\\label{eq:vmat_3dheg} \\tag{18}\n \\langle \\mathbf{k}_{p}m_{s_{p}}\\mathbf{k}_{q}m_{s_{q}}\n |\\tilde{v}|\\mathbf{k}_{r}m_{s_{r}}\\mathbf{k}_{s}m_{s_{s}}\\rangle_{AS} \n \\nonumber\n$$\n\n$$\n= \\frac{4\\pi }{L^{3}}\\delta_{\\mathbf{k}_{p}+\\mathbf{k}_{q},\n \\mathbf{k}_{r}+\\mathbf{k}_{s}}\\left\\{ \n \\delta_{m_{s_{p}}m_{s_{r}}}\\delta_{m_{s_{q}}m_{s_{s}}}\n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{r}}\\right) \n \\frac{1}{|\\mathbf{k}_{r}-\\mathbf{k}_{p}|^{2}}\n \\right. \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n \\left. - \\delta_{m_{s_{p}}m_{s_{s}}}\\delta_{m_{s_{q}}m_{s_{r}}}\n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{s}} \\right)\n \\frac{1}{|\\mathbf{k}_{s}-\\mathbf{k}_{p}|^{2}} \n \\right\\} ,\n\\label{_auto15} \\tag{19}\n\\end{equation}\n$$\n\nwhere the Kronecker delta functions \n$\\delta_{\\mathbf{k}_{p}\\mathbf{k}_{r}}$ and\n$\\delta_{\\mathbf{k}_{p}\\mathbf{k}_{s}}$ ensure that the \ncontribution with zero momentum transfer vanishes.\n\n\nSimilarly, the matrix elements for the two-dimensional\nelectron gas are\n\n\n
\n\n$$\n\\label{eq:vmat_2dheg} \\tag{20}\n \\langle \\mathbf{k}_{p}m_{s_{p}}\\mathbf{k}_{q}m_{s_{q}}\n |v|\\mathbf{k}_{r}m_{s_{r}}\\mathbf{k}_{s}m_{s_{s}}\\rangle_{AS} \n \\nonumber\n$$\n\n$$\n= \\frac{2\\pi }{L^{2}}\n \\delta_{\\mathbf{k}_{p}+\\mathbf{k}_{q},\\mathbf{k}_{r}+\\mathbf{k}_{s}}\n \\left\\{ \\delta_{m_{s_{p}}m_{s_{r}}}\\delta_{m_{s_{q}}m_{s_{s}}} \n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{r}}\\right)\n \\frac{1}{\n |\\mathbf{k}_{r}-\\mathbf{k}_{p}|} \\right.\n \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n - \\left. \\delta_{m_{s_{p}}m_{s_{s}}}\\delta_{m_{s_{q}}m_{s_{r}}}\n \\left( 1 - \\delta_{\\mathbf{k}_{p}\\mathbf{k}_{s}}\\right)\n \\frac{1}{ \n |\\mathbf{k}_{s}-\\mathbf{k}_{p}|}\n \\right\\} ,\n\\label{_auto16} \\tag{21}\n\\end{equation}\n$$\n\nwhere the single-particle momentum vectors $\\mathbf{k}_{p,q,r,s}$\nare now defined in two dimensions.\n\nIn actual calculations, the \nsingle-particle energies, defined by the operator $\\hat{f}$, are given by\n\n\n
\n\n$$\n\\begin{equation}\n \\langle \\mathbf{k}_{p}|f|\\mathbf{k}_{q} \\rangle\n = \\frac{\\hbar^{2}k_{p}^{2}}{2m}\\delta_{\\mathbf{k}_{p},\n \\mathbf{k}_{q}} + \\sum_{\\mathbf{k}_{i}}\\langle \n \\mathbf{k}_{p}\\mathbf{k}_{i}|v|\\mathbf{k}_{q}\n \\mathbf{k}_{i}\\rangle_{AS}.\n\\label{eq:fock_heg} \\tag{22}\n\\end{equation}\n$$\n\n## Periodic boundary conditions and single-particle states\n\nWhen using periodic boundary conditions, the \ndiscrete-momentum single-particle basis functions\n\n$$\n\\phi_{\\mathbf{k}}(\\mathbf{r}) =\ne^{i\\mathbf{k}\\cdot \\mathbf{r}}/L^{d/2}\n$$\n\nare associated with \nthe single-particle energy\n\n\n
\n\n$$\n\\begin{equation}\n \\varepsilon_{n_{x}, n_{y}} = \\frac{\\hbar^{2}}{2m} \\left( \\frac{2\\pi }{L}\\right)^{2}\\left( n_{x}^{2} + n_{y}^{2}\\right)\n\\label{_auto17} \\tag{23}\n\\end{equation}\n$$\n\nfor two-dimensional sytems and\n\n\n
\n\n$$\n\\begin{equation}\n \\varepsilon_{n_{x}, n_{y}, n_{z}} = \\frac{\\hbar^{2}}{2m}\n \\left( \\frac{2\\pi }{L}\\right)^{2}\n \\left( n_{x}^{2} + n_{y}^{2} + n_{z}^{2}\\right)\n\\label{_auto18} \\tag{24}\n\\end{equation}\n$$\n\nfor three-dimensional systems.\n\n\nWe choose the single-particle basis such that both the occupied and \nunoccupied single-particle spaces have a closed-shell \nstructure. This means that all single-particle states \ncorresponding to energies below a chosen cutoff are\nincluded in the basis. We study only the unpolarized spin\nphase, in which all orbitals are occupied with one spin-up \nand one spin-down electron. \n\n\nThe table illustrates how single-particle energies\n fill energy shells in a two-dimensional electron box.\n Here $n_{x}$ and $n_{y}$ are the momentum quantum numbers,\n $n_{x}^{2} + n_{y}^{2}$ determines the single-particle \n energy level, $N_{\\uparrow \\downarrow }$ represents the \n cumulated number of spin-orbitals in an unpolarized spin\n phase, and $N_{\\uparrow \\uparrow }$ stands for the\n cumulated number of spin-orbitals in a spin-polarized\n system.\n\n\n\n\n## Magic numbers for the two-dimensional electron gas\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
| $n_{x}^{2}+n_{y}^{2}$ | $n_{x}$ | $n_{y}$ | $N_{\uparrow \downarrow }$ | $N_{\uparrow \uparrow }$ |
|---|---|---|---|---|
| 0 | 0 | 0 | 2 | 1 |
| 1 | -1 | 0 | | |
| 1 | 1 | 0 | | |
| 1 | 0 | -1 | | |
| 1 | 0 | 1 | 10 | 5 |
| 2 | -1 | -1 | | |
| 2 | -1 | 1 | | |
| 2 | 1 | -1 | | |
| 2 | 1 | 1 | 18 | 9 |
| 4 | -2 | 0 | | |
| 4 | 2 | 0 | | |
| 4 | 0 | -2 | | |
| 4 | 0 | 2 | 26 | 13 |
| 5 | -2 | -1 | | |
| 5 | 2 | -1 | | |
| 5 | -2 | 1 | | |
| 5 | 2 | 1 | | |
| 5 | -1 | -2 | | |
| 5 | -1 | 2 | | |
| 5 | 1 | -2 | | |
| 5 | 1 | 2 | 42 | 21 |
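
The shell structure in the table above is easy to reproduce programmatically. The following short script is a minimal sketch for illustration; `nmax` is an assumed cutoff, chosen large enough to contain the shells listed above.

```python
# Count (nx, ny) orbitals shell by shell and accumulate the number of spin-orbitals,
# reproducing the cumulated numbers 2, 10, 18, 26, 42 (unpolarized) and 1, 5, 9, 13, 21 (polarized).
from collections import defaultdict

nmax = 3                                  # assumed cutoff on the momentum quantum numbers
shells = defaultdict(list)                # nx^2 + ny^2  ->  list of (nx, ny)
for nx in range(-nmax, nmax + 1):
    for ny in range(-nmax, nmax + 1):
        shells[nx * nx + ny * ny].append((nx, ny))

N_updown, N_upup = 0, 0                   # cumulated spin-orbitals (unpolarized / spin-polarized)
for e in sorted(shells)[:5]:              # the five lowest shells shown in the table
    N_updown += 2 * len(shells[e])        # two spin projections per orbital
    N_upup += len(shells[e])              # one spin projection per orbital
    print(e, len(shells[e]), N_updown, N_upup)
```
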

## Hartree-Fock energies

Finally, a useful benchmark for our calculations is the expression for
the reference energy $E_0$ per particle.
Defining the $T=0$ density $\rho$, we can in turn determine in three
dimensions the radius $r_0$ of a sphere representing the volume an
electron occupies (the so-called Wigner-Seitz radius) as

$$
r_0= \left(\frac{3}{4\pi \rho}\right)^{1/3}.
$$

In two dimensions the corresponding quantity is

$$
r_0= \left(\frac{1}{\pi \rho}\right)^{1/2}.
$$

One can then express the reference energy per electron in terms of the
dimensionless quantity $r_s=r_0/a_0$, where we have introduced the
Bohr radius $a_0=\hbar^2/e^2m$. The energy per electron computed with
the reference Slater determinant can then be written as
(using hereafter only atomic units, meaning that $\hbar = m = e = 1$)

$$
E_0/N=\frac{1}{2}\left[\frac{2.21}{r_s^2}-\frac{0.916}{r_s}\right],
$$

for the three-dimensional electron gas. For the two-dimensional gas
the corresponding expression is (show this)

$$
E_0/N=\frac{1}{r_s^2}-\frac{8\sqrt{2}}{3\pi r_s}.
$$

For an infinite homogeneous system, there are some particular
simplifications due to the conservation of the total momentum of the
particles. By symmetry considerations, the total momentum of the
system has to be zero. Both the kinetic energy operator and the
total Hamiltonian $\hat{H}$ are assumed to be diagonal in the total
momentum $\mathbf{K}$. Hence, both the reference state $\Phi_{0}$ and
the correlated ground state $\Psi$ must be eigenfunctions of the
operator $\mathbf{\hat{K}}$ with the corresponding eigenvalue
$\mathbf{K} = \mathbf{0}$. This leads to important
simplifications in our different many-body methods. In coupled cluster
theory, for example, all
terms that involve single particle-hole excitations vanish.


## Exercise 3: Magic numbers for the three-dimensional electron gas and perturbation theory to second order

**a)**
Set up the possible magic numbers for the electron gas in three dimensions using periodic boundary conditions.

**Hint.**
Follow the example for the two-dimensional electron gas and add the third dimension via the quantum number $n_z$.

**Solution.**
Using the same approach as for the two-dimensional electron gas, with the single-particle kinetic energy defined as

$$
\frac{\hbar^2}{2m}\left(k_{n_x}^2+k_{n_y}^2+k_{n_z}^2\right),
$$

and

$$
k_{n_i}=\frac{2\pi n_i}{L}, \hspace{0.1cm} n_i = 0, \pm 1, \pm 2, \dots,
$$

we can set up a similar table and obtain (assuming identical particles and including spin-up and spin-down solutions) the following for energies with $n_{x}^{2}+n_{y}^{2}+n_{z}^{2}\le 3$:

| $n_{x}^{2}+n_{y}^{2}+n_{z}^{2}$ | $n_{x}$ | $n_{y}$ | $n_{z}$ | $N_{\uparrow \downarrow }$ |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 2 |
| 1 | -1 | 0 | 0 | |
| 1 | 1 | 0 | 0 | |
| 1 | 0 | -1 | 0 | |
| 1 | 0 | 1 | 0 | |
| 1 | 0 | 0 | -1 | |
| 1 | 0 | 0 | 1 | 14 |
| 2 | -1 | -1 | 0 | |
| 2 | -1 | 1 | 0 | |
| 2 | 1 | -1 | 0 | |
| 2 | 1 | 1 | 0 | |
| 2 | -1 | 0 | -1 | |
| 2 | -1 | 0 | 1 | |
| 2 | 1 | 0 | -1 | |
| 2 | 1 | 0 | 1 | |
| 2 | 0 | -1 | -1 | |
| 2 | 0 | -1 | 1 | |
| 2 | 0 | 1 | -1 | |
| 2 | 0 | 1 | 1 | 38 |
| 3 | -1 | -1 | -1 | |
| 3 | -1 | -1 | 1 | |
| 3 | -1 | 1 | -1 | |
| 3 | -1 | 1 | 1 | |
| 3 | 1 | -1 | -1 | |
| 3 | 1 | -1 | 1 | |
| 3 | 1 | 1 | -1 | |
| 3 | 1 | 1 | 1 | 54 |
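
The same counting in three dimensions can be checked with a minimal sketch (for illustration only; `nmax` is an assumed cutoff). It reproduces the magic numbers discussed next.

```python
# Accumulate two spin-orbitals per (nx, ny, nz) orbital, shell by shell.
from collections import defaultdict

nmax = 4                                    # assumed cutoff on nx, ny, nz
shells = defaultdict(int)                   # nx^2 + ny^2 + nz^2 -> number of orbitals
for nx in range(-nmax, nmax + 1):
    for ny in range(-nmax, nmax + 1):
        for nz in range(-nmax, nmax + 1):
            shells[nx * nx + ny * ny + nz * nz] += 1

magic, total = [], 0
for e in sorted(shells)[:6]:                # the six lowest closed shells
    total += 2 * shells[e]                  # factor 2 for spin
    magic.append(total)
print(magic)                   # [2, 14, 38, 54, 66, 114] for the electron gas / neutron matter
print([2 * m for m in magic])  # [4, 28, 76, 108, 132, 228] with the extra isospin degeneracy
```
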

Continuing in this way we get for $n_{x}^{2}+n_{y}^{2}+n_{z}^{2}=4$ a total of 12 additional states, resulting in $66$ as a new magic number. For the lowest six energy values the degeneracy in energy gives us $2$, $14$, $38$, $54$, $66$ and $114$ as magic numbers. These numbers will then define our Fermi level when we compute the energy in a Cartesian basis. When performing calculations based on many-body perturbation theory, coupled cluster theory or other many-body methods, we need then to add states above the Fermi level in order to sum over single-particle states which are not occupied.

If we wish to study infinite nuclear matter with both protons and neutrons, the above magic numbers become $4, 28, 76, 108, 132, 228, \dots$.


**b)**
Every filled-shell configuration also defines the number of particles to be used in a given calculation. Use the number of particles to define the density of the system

$$
\rho = g \frac{k_F^3}{6\pi^2},
$$

where you need to define $k_F$ and the degeneracy $g$, which is two for one type of spin-$1/2$ particles and four for symmetric nuclear matter.

**c)**
Use the density to find the length $L$ of the box used with periodic boundary conditions, that is, use the relation

$$
V= L^3= \frac{A}{\rho}.
$$

You can use $L$ to set up the spacing between the various $k$-values, that is

$$
\Delta k = \frac{2\pi}{L}.
$$

Here, $A$ can be the number of nucleons. If we deal with the electron gas only, this needs to be replaced by the number of electrons $N$.


## Exercise 4: Quantum numbers for the electron gas in 3d

**a)**
Set up the quantum numbers for the electron gas in 3d using a given value 
of $n_{\mathrm{max}}$.

**Solution.**
The following Python code sets up the quantum numbers for neutron matter or the electron gas, employing a cutoff in the value of $n$.

```
from numpy import *

nmax = 1
nshell = 3*nmax*nmax
count = 1
print("------------------------------------")
print("Neutron matter or the electron gas:")
print("a, nx, ny, nz, sz, nx^2 + ny^2 + nz^2")
# loop over all (nx, ny, nz, sz) combinations and print the states shell by shell
for n in range(nshell):
    for nx in range(-nmax, nmax+1):
        for ny in range(-nmax, nmax+1):
            for nz in range(-nmax, nmax+1):
                for sz in range(-1, 1+1):
                    e = nx*nx + ny*ny + nz*nz
                    if e == n:
                        if sz != 0:
                            print(count, " ", nx, " ", ny, " ", nz, " ", sz, " ", e)
                            count += 1
```

**b)**
Compute now the contribution to the correlation energy for the electron gas at the level of second-order perturbation theory using a given number of electrons $N$ and a given (defined by you) number of single-particle states above the Fermi level.
The following Python code shows an implementation for the electron gas in three dimensions for second-order perturbation theory using the Coulomb interaction. 
Here we have hard-coded a case which computes the energy for $N=14$ and a total of $5$ major shells.\n\n\n\n**Solution.**\n\n\n```\nfrom numpy import *\n\nclass electronbasis():\n def __init__(self, N, rs, Nparticles):\n ############################################################\n ##\n ## Initialize basis: \n ## N = number of shells\n ## rs = parameter for volume \n ## Nparticles = Number of holes (conflicting naming, sorry)\n ##\n ###########################################################\n \n self.rs = rs\n self.states = []\n self.nstates = 0\n self.nparticles = Nparticles\n self.nshells = N - 1\n self.Nm = N + 1\n \n self.k_step = 2*(self.Nm + 1)\n Nm = N\n n = 0 #current shell\n ene_integer = 0\n while n <= self.nshells:\n is_shell = False\n for x in range(-Nm, Nm+1):\n for y in range(-Nm, Nm+1):\n for z in range(-Nm,Nm+1):\n e = x*x + y*y + z*z\n if e == ene_integer:\n is_shell = True\n self.nstates += 2\n self.states.append([e, x,y,z,1])\n self.states.append([e, x,y,z, -1])\n \n if is_shell:\n n += 1\n ene_integer += 1\n self.L3 = (4*pi*self.nparticles*self.rs**3)/3.0\n self.L2 = self.L3**(2/3.0)\n self.L = pow(self.L3, 1/3.0)\n \n for i in range(self.nstates):\n self.states[i][0] *= 2*(pi**2)/self.L**2 #Multiplying in the missing factors in the single particle energy\n self.states = array(self.states) #converting to array to utilize vectorized calculations \n \n def hfenergy(self, nParticles):\n #Calculate the HF-energy (reference energy) for nParticles particles\n e0 = 0.0\n if nParticles<=self.nstates:\n for i in range(nParticles):\n e0 += self.h(i,i)\n for j in range(nParticles):\n if j != i:\n e0 += .5*self.v(i,j,i,j)\n else:\n #Safety for cases where nParticles exceeds size of basis\n print(\"Not enough basis states.\")\n \n return e0\n \n def h(self, p,q):\n #Return single particle energy\n return self.states[p,0]*(p==q)\n\n \n def v(self,p,q,r,s):\n #Two body interaction for electron gas\n val = 0\n terms = 0.0\n term1 = 0.0\n term2 = 0.0\n kdpl = self.kdplus(p,q,r,s)\n if kdpl != 0:\n val = 1.0/self.L3\n if self.kdspin(p,r)*self.kdspin(q,s)==1:\n if self.kdwave(p,r) != 1.0:\n term1 = self.L2/(pi*self.absdiff2(r,p))\n if self.kdspin(p,s)*self.kdspin(q,r)==1:\n if self.kdwave(p,s) != 1.0:\n term2 = self.L2/(pi*self.absdiff2(s,p))\n return val*(term1-term2)\n\n \n #The following is a series of kroenecker deltas used in the two-body interactions. \n #Just ignore these lines unless you suspect an error here\n def kdi(self,a,b):\n #Kroenecker delta integer\n return 1.0*(a==b)\n def kda(self,a,b):\n #Kroenecker delta array\n d = 1.0\n for i in range(len(a)):\n d*=(a[i]==b[i])\n return d\n def kdfullplus(self,p,q,r,s):\n #Kroenecker delta wavenumber p+q,r+s\n return self.kda(self.states[p][1:5]+self.states[q][1:5],self.states[r][1:5]+self.states[s][1:5])\n def kdplus(self,p,q,r,s):\n #Kroenecker delta wavenumber p+q,r+s\n return self.kda(self.states[p][1:4]+self.states[q][1:4],self.states[r][1:4]+self.states[s][1:4])\n def kdspin(self,p,q):\n #Kroenecker delta spin\n return self.kdi(self.states[p][4], self.states[q][4])\n def kdwave(self,p,q):\n #Kroenecker delta wavenumber\n return self.kda(self.states[p][1:4],self.states[q][1:4])\n def absdiff2(self,p,q):\n val = 0.0\n for i in range(1,4):\n val += (self.states[p][i]-self.states[q][i])*(self.states[p][i]-self.states[q][i])\n return val\n\n \ndef MBPT2(bs):\n #2. order MBPT Energy \n Nh = bs.nparticles\n Np = bs.nstates-bs.nparticles #Note the conflicting notation here. 
bs.nparticles is number of hole states \n vhhpp = zeros((Nh**2, Np**2))\n vpphh = zeros((Np**2, Nh**2))\n #manual MBPT(2) energy (Should be -0.525588309385 for 66 states, shells = 5, in this code)\n psum2 = 0\n for i in range(Nh):\n for j in range(Nh):\n for a in range(Np):\n for b in range(Np):\n #val1 = bs.v(i,j,a+Nh,b+Nh)\n #val2 = bs.v(a+Nh,b+Nh,i,j)\n vhhpp[i + j*Nh, a+b*Np] = bs.v(i,j,a+Nh,b+Nh)\n vpphh[a+b*Np,i + j*Nh] = bs.v(a+Nh,b+Nh,i,j)/(bs.states[i,0] + bs.states[j,0] - bs.states[a + Nh, 0] - bs.states[b+Nh,0])\n psum = .25*sum(dot(vhhpp,vpphh).diagonal())\n return psum\n \ndef MBPT2_fast(bs):\n #2. order MBPT Energy \n Nh = bs.nparticles\n Np = bs.nstates-bs.nparticles #Note the conflicting notation here. bs.nparticles is number of hole states \n vhhpp = zeros((Nh**2, Np**2))\n vpphh = zeros((Np**2, Nh**2))\n #manual MBPT(2) energy (Should be -0.525588309385 for 66 states, shells = 5, in this code)\n psum2 = 0\n for i in range(Nh):\n for j in range(i):\n for a in range(Np):\n for b in range(a):\n val = bs.v(i,j,a+Nh,b+Nh)\n eps = val/(bs.states[i,0] + bs.states[j,0] - bs.states[a + Nh, 0] - bs.states[b+Nh,0])\n vhhpp[i + j*Nh, a+b*Np] = val \n vhhpp[j + i*Nh, a+b*Np] = -val \n vhhpp[i + j*Nh, b+a*Np] = -val\n vhhpp[j + i*Nh, b+a*Np] = val \n \n \n vpphh[a+b*Np,i + j*Nh] = eps\n vpphh[a+b*Np,j + i*Nh] = -eps\n vpphh[b+a*Np,i + j*Nh] = -eps\n vpphh[b+a*Np,j + i*Nh] = eps\n \n \n psum = .25*sum(dot(vhhpp,vpphh).diagonal())\n return psum\n\n\n#user input here\nnumber_of_shells = 5\nnumber_of_holes = 14 #(particles)\n\n\n#initialize basis \nbs = electronbasis(number_of_shells,1.0,number_of_holes) #shells, r_s = 1.0, holes\n\n#Print some info to screen\nprint (\"Number of shells:\", number_of_shells)\nprint (\"Number of states:\", bs.nstates)\nprint (\"Number of holes :\", bs.nparticles)\nprint (\"Reference Energy:\", bs.hfenergy(number_of_holes), \"hartrees \")\nprint (\" :\", 2*bs.hfenergy(number_of_holes), \"rydbergs \")\n\nprint (\"Ref.E. per hole :\", bs.hfenergy(number_of_holes)/number_of_holes, \"hartrees \")\nprint (\" :\", 2*bs.hfenergy(number_of_holes)/number_of_holes, \"rydbergs \")\n\n\n\n#calculate MBPT2 energy\nprint (\"MBPT2 energy :\", MBPT2_fast(bs), \" hartrees\")\n```\n\nAs we will see later, for the infinite electron gas, second-order perturbation theory diverges in the thermodynamical limit, a feature which can easily be noted if one lets the number of single-particle states above the Fermi level to increase. The resulting expression in a Cartesian basis will not converge.\n\n\n\n\n\n\n\n\n\n## Infinite nuclear matter and neutron star matter\n\nStudies of dense baryonic matter are of central importance to our basic understanding \nof the stability of nuclear matter, spanning from matter at high densities and temperatures\nto matter as found within dense astronomical objects like neutron stars. \n\nNeutron star matter\nat densities of 0.1 fm$^{-3}$ and greater, is often assumed to \nbe made of mainly neutrons, protons, electrons and \nmuons in beta equilibrium. However, other baryons like various hyperons may exist, as well as possible mesonic condensates and transitions to quark degrees of freedom at higher densities. 
\nHere we focus on specific definitions of various phases and focus \non distinct phases of matter such as pure baryonic\nmatter and/or quark matter.\nThe composition of matter is then \ndetermined by the requirements of chemical and electrical equilibrium.\nFurthermore, we will also consider matter at temperatures much lower\nthan the typical Fermi energies.\nThe equilibrium conditions are governed by the weak processes \n(normally referred to as the processes\nfor $\\beta$-equilibrium)\n\n\n
\n\n$$\n\\begin{equation} \n b_1 \\rightarrow b_2 + l +\\bar{\\nu}_l \\hspace{1cm} b_2 +l \\rightarrow b_1 \n+\\nu_l,\n\\label{eq:betadecay} \\tag{25}\n\\end{equation}\n$$\n\nwhere $b_1$ and $b_2$ refer to e.g.\\ the baryons being a neutron and a proton, \nrespectively, \n$l$ is either an electron or a muon and $\\bar{\\nu}_l $\nand $\\nu_l$ their respective anti-neutrinos and neutrinos. Muons typically \nappear at\na density close to nuclear matter saturation density, the latter being\n\n$$\nn_0 \\approx 0.16 \\pm 0.02 \\hspace{1cm} \\mathrm{fm}^{-3},\n$$\n\nwith a corresponding binding energy ${\\cal E}_0$ \nfor symmetric nuclear matter (SNM) at saturation density of\n\n$$\n{\\cal E}_0 = B/A=-15.6\\pm 0.2 \\hspace{1cm} \\mathrm{MeV}.\n$$\n\nIn this work the energy per baryon ${\\cal E}$ will always be in units of MeV, \nwhile\nthe energy density $\\varepsilon$ will \nbe in units of MeVfm$^{-3}$ and the number density\\footnote{We will often \nloosely just use density in our discussions.}\n$n$ in units of fm$^{-3}$. The pressure $P$ is \ndefined through the relation\n\n\n
\n\n$$\n\\begin{equation}\n P=n^2\\frac{\\partial {\\cal E}}{\\partial n}=\n n\\frac{\\partial \\varepsilon}{\\partial n}-\\varepsilon,\n\\label{_auto19} \\tag{26}\n\\end{equation}\n$$\n\nwith \ndimension MeVfm$^{-3}$. \nSimilarly, the chemical potential for particle species $i$\nis given by\n\n\n
\n\n$$\n\\begin{equation}\n \\mu_i = \\left(\\frac{\\partial \\varepsilon}{\\partial n_i}\\right),\n\\label{eq:chemicalpotdef} \\tag{27}\n\\end{equation}\n$$\n\nwith dimension MeV.\nIn calculations of properties of neutron star matter in $\\beta$-equilibrium,\nwe will need to calculate the energy per baryon ${\\cal E}$ for e.g. several \nproton fractions $x_p$, which corresponds to\nthe ratio of protons as\ncompared to the total nucleon number ($Z/A$), \n defined as\n\n\n
\n\n$$\n\\begin{equation}\n x_p = \\frac{n_p}{n},\n\\label{_auto20} \\tag{28}\n\\end{equation}\n$$\n\nwhere $n=n_p+n_n$, the total baryonic density if neutrons and\nprotons are the only baryons present. In that case,\nthe total Fermi momentum $k_F$ and the Fermi momenta $k_{Fp}$,\n$k_{Fn}$ for protons and neutrons are related to the total nucleon density\n$n$ by\n\n$$\nn = \\frac{2}{3\\pi^2} k_F^3 \\nonumber\n$$\n\n$$\n= x_p n + (1-x_p) n \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n = \\frac{1}{3\\pi^2} k_{Fp}^3 + \\frac{1}{3\\pi^2} k_{Fn}^3.\n\\label{eq:densi} \\tag{29}\n\\end{equation}\n$$\n\nThe energy per baryon will thus be\nlabelled as ${\\cal E}(n,x_p)$.\n${\\cal E}(n,0)$ will then refer to the energy per baryon for pure neutron\nmatter (PNM) while ${\\cal E}(n,\\frac{1}{2})$ is the corresponding value for \nSNM. Furthermore, in this work, subscripts $n,p,e,\\mu$\nwill always refer to neutrons, protons, electrons and muons, respectively.\n\n\nSince the mean free path of a neutrino in a neutron star is bigger\nthan the typical radius of such a star ($\\sim 10$ km), \nwe will throughout assume that neutrinos escape freely from the neutron star,\nsee for example the work of Prakash et al.\nfor a discussion\non trapped neutrinos. Eq. ([eq:betadecay](#eq:betadecay)) yields then the following\nconditions for matter in $\\beta$ equilibrium with for example nucleonic degrees \nfreedom only\n\n\n
\n\n$$\n\\begin{equation}\n \\mu_n=\\mu_p+\\mu_e,\n\\label{eq:npebetaequilibrium} \\tag{30}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\n n_p = n_e,\n\\label{eq:chargeconserv} \\tag{31}\n\\end{equation}\n$$\n\nwhere $\\mu_i$ and $n_i$ refer to the chemical potential and number density\nin fm$^{-3}$ of particle species $i$. \nIf muons are present as well, we need to modify the equation for \ncharge conservation, Eq. ([eq:chargeconserv](#eq:chargeconserv)), to read\n\n$$\nn_p = n_e+n_{\\mu},\n$$\n\nand require that $\\mu_e = \\mu_{\\mu}$.\nWith more particles present, the equations read\n\n\n
\n\n$$\n\\begin{equation}\n \\sum_i\\left(n_{b_i}^+ +n_{l_i}^+\\right) = \n \\sum_i\\left(n_{b_i}^- +n_{l_i}^-\\right),\n\\label{eq:generalcharge} \\tag{32}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation} \n \\mu_n=b_i\\mu_i+q_i\\mu_l,\n\\label{eq:generalbeta} \\tag{33}\n\\end{equation}\n$$\n\nwhere $b_i$ is the baryon number, $q_i$ the lepton charge and the superscripts \n$(\\pm)$ on \nnumber densities $n$ represent particles with positive or negative charge.\nTo give an example, it is possible to have baryonic matter with hyperons like\n$\\Lambda$ \nand $\\Sigma^{-,0,+}$ and isobars $\\Delta^{-,0,+,++}$ as well in addition\nto the nucleonic degrees of freedom.\nIn this case the chemical equilibrium condition of Eq. ([eq:generalbeta](#eq:generalbeta)) \nbecomes,\nexcluding muons,\n\n$$\n\\mu_{\\Sigma^-} = \\mu_{\\Delta^-} = \\mu_n + \\mu_e , \\nonumber\n$$\n\n$$\n\\mu_{\\Lambda} = \\mu_{\\Sigma^0} = \\mu_{\\Delta^0} = \\mu_n , \\nonumber\n$$\n\n$$\n\\mu_{\\Sigma^+} = \\mu_{\\Delta^+} = \\mu_p = \\mu_n - \\mu_e ,\\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n \\mu_{\\Delta^{++}} = \\mu_n - 2 \\mu_e .\n\\label{eq:beta_baryonicmatter} \\tag{34}\n\\end{equation}\n$$\n\nA transition from hadronic to quark matter is expected at high densities. \nThe high-density quark matter phase\nin the interior of neutron stars is also described by\nrequiring the system to be locally neutral\n\n\n
\n\n$$\n\\begin{equation} \n\\label{eq:quarkneut} \\tag{35}\n (2/3)n_u -(1/3)n_d - (1/3)n_s - n_e = 0,\n\\end{equation}\n$$\n\nwhere $n_{u,d,s,e}$ \nare the densities of the $u$, $d$ and $s$ quarks and of the\nelectrons (eventually muons as well), respectively. \nMorover, the system must be in $\\beta$-equilibrium, i.e.\\ \nthe chemical potentials have to satisfy the following equations:\n\n\n
\n\n$$\n\\begin{equation}\n\\label{eq:ud} \\tag{36}\n \\mu_d=\\mu_u+\\mu_e,\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\n\\label{eq:us} \\tag{37}\n \\mu_s=\\mu_u+\\mu_e .\n\\end{equation}\n$$\n\nEquations ([eq:quarkneut](#eq:quarkneut))-([eq:us](#eq:us)) have to be solved \nself-consistently together with the field equations for quarks \nat a fixed density $n=n_u+n_d+n_s$.\n\nAn important ingredient in the discussion of the EoS and the criteria for\nmatter in $\\beta$-equilibrium is the so-called symmetry energy ${\\cal S} (n)$, \ndefined as\nthe difference in energy for symmetric nuclear matter\nand pure neutron matter\n\n\n
\n\n$$\n\\begin{equation}\n {\\cal S} (n) = {\\cal E} (n,x_p=0) - {\\cal E} (n,x_p=1/2 ).\n\\label{eq:symenergy} \\tag{38}\n\\end{equation}\n$$\n\nIf we expand the energy per baryon in the case of nucleonic degrees of freedom \nonly\nin the proton concentration $x_p$ about the value of the energy \nfor SNM ($x_p=\\frac{1}{2}$), we obtain,\n\n\n
\n\n$$\n\\begin{equation}\n {\\cal E} (n,x_p)={\\cal E} (n,x_p=\\frac{1}{2})+\n \\frac{1}{2}\\frac{d^2 {\\cal E}}{dx_p^2} (n)\\left(x_p-1/2\\right)^2+\\dots ,\n\\label{eq:energyexpansion} \\tag{39}\n\\end{equation}\n$$\n\nwhere the term $d^2 {\\cal E}/dx_p^2$ \nis to be associated with the symmetry energy ${\\cal S} (n)$ in the empirical\nmass formula. If\nwe assume that higher order derivatives in the above expansion are small\n(we will see examples of this in the next subsection), then through the \nconditions\nfor $\\beta$-equilbrium of Eqs. ([eq:npebetaequilibrium](#eq:npebetaequilibrium)) and \n([eq:chargeconserv](#eq:chargeconserv))\nand Eq. ([eq:chemicalpotdef](#eq:chemicalpotdef)) we can define the proton\nfraction by the symmetry energy as\n\n\n
\n\n$$\n\\begin{equation} \n \\hbar c\\left(3\\pi^2nx_p\\right)^{1/3} = 4{\\cal S} (n)\\left(1-2x_p\\right),\n\\label{eq:crudeprotonfraction} \\tag{40}\n\\end{equation}\n$$\n\nwhere the electron chemical potential is given\nby $\\mu_e = \\hbar c k_F$, i.e.\\ ultrarelativistic electrons are assumed.\nThus, the symmetry energy is of paramount importance for studies \nof neutron star matter in $\\beta$-equilibrium.\nOne can extract information about the value of the symmetry energy at saturation \ndensity\n$n_0$ from systematic studies of the masses of atomic nuclei. However, these \nresults\nare limited to densities around $n_0$ and for proton fractions close to \n$\\frac{1}{2}$.\nTypical values for ${\\cal S} (n)$ at $n_0$ are in the range $27-38$ MeV.\nFor densities greater than $n_0$ it is more difficult to get a reliable \ninformation on the symmetry energy, and thereby the related proton fraction.\nWe will shed more light on this topic in the next subsection.\n\n\nFinally, another property of interest in the discussion of the various \nequations of state \nis the incompressibility modulus $K$ at non-zero pressure\n\n\n
\n\n$$\n\\begin{equation}\n K=9\\frac{\\partial P}{\\partial n}.\n\\label{eq:incompressibility} \\tag{41}\n\\end{equation}\n$$\n\nThe sound speed $v_s$ depends as well on the density\nof the nuclear medium through the relation\n\n\n
\n\n$$\n\\begin{equation}\n \\left(\\frac{v_s}{c}\\right)^2=\\frac{dP}{d\\varepsilon}=\n \\frac{dP}{dn}\\frac{dn}{d\\varepsilon}=\n \\left(\\frac{K}{9(m_nc^2+{\\cal E}+P/n)}\\right).\n\\label{eq:speedofsound} \\tag{42}\n\\end{equation}\n$$\n\nIt is important to keep track of the dependence on density of $v_s$\nsince a superluminal behavior can occur at higher densities for most\nnon-relativistic EoS.\nSuperluminal behavior would\nnot occur with a fully relativistic theory, and it is necessary to\ngauge the magnitude of the effect it introduces at the higher densities.\nThis will be discussed at the end of this section.\nThe adiabatic constant $\\Gamma$ can also be extracted from the EoS\nby\n\n\n
\n\n$$\n\\begin{equation}\n \\Gamma = \\frac{n}{P}\\frac{\\partial P}{\\partial n}.\n\\label{eq:adiabaticconstant} \\tag{43}\n\\end{equation}\n$$\n\n## Brueckner-Hartree-Fock theory\n\n\nThe Brueckner $G$-matrix has historically been an important ingredient\nin many-body calculations of nuclear systems. In this section, we will\nbriefly survey the philosophy behind the $G$-matrix.\n\nHistorically, the $G$-matrix was developed in microscopic nuclear\nmatter calculations using realistic nucleon-nucleon (NN) interactions.\nIt is an ingenuous as well as an interesting method to overcome the\ndifficulties caused by the strong, short-range repulsive core contained\nin all modern models for the NN interaction. The $G$-matrix method was\noriginally developed by Brueckner, and further\ndeveloped by Goldstone and Bethe, Brandow and Petschek. \nIn the literature it is generally referred to as the\nBrueckner theory or the Brueckner-Bethe-Goldstone theory.\n\nSuppose we want to calculate the nuclear matter ground-state\nenergy $E_0$ using the non-relativistic Schr\\\"{o}dinger equation\n\n\n
\n\n$$\n\\begin{equation}\n H\\Psi_0(A)=E_0(A)\\Psi_0(A),\n\\label{_auto21} \\tag{44}\n\\end{equation}\n$$\n\nwith $H=T+V$ where $A$ denotes the number of particles, $T$\nis the kinetic energy and $V$ is\nthe nucleon-nucleon\n(NN) potential. Models for the NN interaction are discussed in the chapter on nuclear forces.\nThe corresponding unperturbed\nproblem is\n\n\n
\n\n$$\n\\begin{equation}\n H_0\\psi_0(A)=W_0(A)\\psi_0(A).\n\\label{_auto22} \\tag{45}\n\\end{equation}\n$$\n\nHere $H_0$ is just kinetic energy $T$ and $\\psi_0$ is a Slater\ndeterminant representing the Fermi sea, where all orbits through the\nFermi momentum $k_F$ are filled. We write\n\n\n
\n\n$$\n\\begin{equation}\n E_0=W_0+\\Delta E_0,\n\\label{_auto23} \\tag{46}\n\\end{equation}\n$$\n\nwhere $\\Delta E_0$ is the ground-state energy shift or correlation energy as it was defined in many-body perturbation theory.\nIf we know how to calculate $\\Delta E_0$, then we know $E_0$, since\n$W_0$ is easily obtained. In the limit $A\\rightarrow \\infty$,\nthe quantities $E_0$ and $\\Delta E_0$ themselves are not well\ndefined, but the ratios $E_0/A$ and $\\Delta E_0/A$ are. The\nnuclear-matter binding energy per nucleon is commonly denoted\nby $BE/A$, which is just $-E_0/A$. In passing, we note that\nthe empirical value for symmetric nuclear matter (proton number\n$Z$=neutron number $N$) is $\\approx 16$ MeV.\nThere exists a formal theory for the calculation of $\\Delta E_0$.\nAccording to the well-known Goldstone linked-diagram theory, the energy shift $\\Delta E_0$ is given exactly by the\ndiagrammatic expansion shown in Fig. [fig:goldstone](#fig:goldstone). This theory,\nis a linked-cluster perturbation expansion for the ground state\nenergy of a many-body system, and applies equally well to both\nnuclear matter and closed-shell nuclei such as the doubly magic\nnucleus $^{40}$Ca. \nWe will not discuss the Goldstone expansion, but rather discuss\nbriefly how it is used in calculations.\n\n\n
*Figure (fig:goldstone): Diagrams which enter the definition of the ground-state shift energy $\Delta E_0$. Diagram (i) is first order in the interaction $\hat{v}$, while diagrams (ii) and (iii) are examples of contributions to second and third order, respectively.*
\n\n\n\n\n\nUsing the standard diagram rules (see the discussion on coupled-cluster theory and many-body perturbation theory), the various\ndiagrams contained in the above figure can be readily calculated (in an uncoupled scheme)\n\n\n
\n\n$$\n\\begin{equation}\n (i)=\\frac{(-)^{n_h+n_l}}{2^{n_{ep}}}\\sum_{ij\\leq k_F}\n \\langle ij\\vert\\hat{v}\\vert ij\\rangle_{AS},\n\\label{_auto24} \\tag{47}\n\\end{equation}\n$$\n\nwith $n_h=n_l=2$ and $n_{ep}=1$. As discussed in connection with the diagram rules in the many-body perturbation theory chapter, $n_h$\ndenotes the number of hole lines, $n_l$ the number of closed\nfermion loops and $n_{ep}$ is the number of so-called\nequivalent pairs.\nThe factor $1/2^{n_{ep}}$ is needed since we want to count a pair \nof particles only once. We will carry this factor $1/2$ with us\nin the equations below. \nThe subscript $AS$ denotes the antisymmetrized and normalized matrix element\n\n\n
\n\n$$\n\\begin{equation}\n \\langle ij\\vert\\hat{v}\\vert ij\\rangle_{AS}=\\langle ij \\vert\\hat{v}\\vert ij\\rangle-\n \\langle ji \\vert\\hat{v}\\vert ij\\rangle.\n\\label{_auto25} \\tag{48}\n\\end{equation}\n$$\n\nSimilarly, diagrams (ii) and (iii) read\n\n\n
\n\n$$\n\\begin{equation}\n (ii)=\\frac{(-)^{2+2}}{2^2}\\sum_{ij\\leq k_F}\\sum_{ab>k_F}\n \\frac{\\langle ij\\vert\\hat{v}\\vert ab\\rangle_{AS}\n \\langle ab\\vert\\hat{v}\\vert ij\\rangle_{AS}}\n {\\varepsilon_i+\\varepsilon_j-\\varepsilon_a-\\varepsilon_b},\n\\label{_auto26} \\tag{49}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\n (iii)=\\frac{(-)^{2+2}}{2^3}\\sum_{k_i,k_j\\leq k_F}\\sum_{abcdk_F}\n \\frac{\\langle ij\\vert\\hat{v}\\vert ab\\rangle_{AS}\n \\langle ab\\vert\\hat{v}\\vert cd\\rangle_{AS}\n \\langle cd\\vert\\hat{v}\\vert ij\\rangle_{AS}}\n {(\\varepsilon_i+\\varepsilon_j-\\varepsilon_a-\\varepsilon_b)\n (\\varepsilon_i+\\varepsilon_j-\\varepsilon_c-\\varepsilon_d)}.\n\\label{_auto27} \\tag{50}\n\\end{equation}\n$$\n\nIn the above, $\\varepsilon$ denotes the sp energies defined by\n$H_0$.\nThe steps leading to the above expressions for the various\ndiagrams are rather straightforward. Though, if we wish to compute the\nmatrix elements for the interaction $v$, a serious problem\narises. Typically, the matrix elements will contain a term\n(see the next section for the formal details) $V(|{\\mathbf r}|)$, which\nrepresents the interaction potential $V$ between two nucleons, where\n${\\mathbf r}$ is the internucleon distance.\nAll modern models\nfor $V$ have a strong short-range repulsive core. Hence,\nmatrix elements involving $V(|{\\mathbf r}|)$, will result in large\n(or infinitely large for a potential with a hard core)\nand repulsive contributions to the ground-state energy. Thus, the\ndiagrammatic expansion for the ground-state energy in terms of the\npotential $V(|{\\mathbf r}|)$ becomes meaningless.\n\nOne possible solution to this problem is provided by the well-known\nBrueckner theory or the Brueckner $G$-matrix, or just the\n$G$-matrix. In fact, the $G$-matrix is an almost indispensable\ntool in almost every microscopic nuclear structure\ncalculation. Its main idea may be paraphrased as follows.\nSuppose we want to calculate the function $f(x)=x/(1+x)$. If\n$x$ is small, we may expand the function $f(x)$ as a power series\n$x+x^2+x^3+\\dots$ and it may be adequate to just calculate the first\nfew terms. In other words, $f(x)$ may be calculated using a low-order\nperturbation method. But if $x$ is large\n(or infinitely large), the above\npower series is obviously meaningless.\nHowever, the exact function\n$x/(1+x)$ is still well defined in the limit\nof $x$ becoming very large.\n\nThese arguments suggest that one should sum up the diagrams\n(i), (ii), (iii) in fig. [fig:goldstone](#fig:goldstone) and the similar ones\nto all orders, instead of computing them one by one. Denoting this\nall-order sum as $1/2\\tilde{G}_{ijij}$, where we have\nintroduced the shorthand notation\n$\\tilde{G}_{ijij}=\\langle k_ik_j\\vert \\tilde{G}\\vert k_ik_j\\rangle_{AS}$\n(and similarly for $\\tilde{v}$),\nwe have that\n\n$$\n\\frac{1}{2}\\tilde{G}_{ijij}=\\frac{1}{2}\\hat{v}_{ijij}\n +\\sum_{ab>k_F}\\frac{1}{2}\\hat{v}_{ijab}\\frac{1}{\\varepsilon_i+\\varepsilon_j-\\varepsilon_a-\\varepsilon_b}\n \\nonumber\n$$\n\n\n
\n\n$$\n\\begin{equation} \n \\times\\left[\\frac{1}{2}\\hat{v}_{abij}+\\sum_{cd>k_F}\n \\frac{1}{2}\\hat{v}_{abcd}\\frac{1}\n {\\varepsilon_i+\\varepsilon_j-\\varepsilon_c-\\varepsilon_d}\n \\frac{1}{2}V_{cdij}+\\dots \\right].\n\\label{_auto28} \\tag{51}\n\\end{equation}\n$$\n\nThe factor $1/2$ is the same as that discussed above, namely we want \nto count a pair of particles only once.\nThe quantity inside the brackets is just\n$1/2\\tilde{G}_{mnij}$ and the above equation can be\nrewritten as an integral equation\n\n\n
\n\n$$\n\\begin{equation}\n \\tilde{G}_{ijij}=\\tilde{V}_{ijij}\n +\\sum_{ab>F}\\frac{1}{2}\\hat{v}_{ijab}\\frac{1}{\\varepsilon_i+\\varepsilon_j-\\varepsilon_a-\\varepsilon_b}\n \\tilde{G}_{abij}.\n\\label{_auto29} \\tag{52}\n\\end{equation}\n$$\n\nNote that $\\tilde{G}$ is the antisymmetrized $G$-matrix since\nthe potential $\\tilde{v}$ is also antisymmetrized. This means that\n$\\tilde{G}$ obeys\n\n\n
\n\n$$\n\\begin{equation}\n \\tilde{G}_{ijij}=-\\tilde{G}_{jiij}=-\\tilde{G}_{ijji}.\n\\label{_auto30} \\tag{53}\n\\end{equation}\n$$\n\nThe $\\tilde{G}$-matrix is defined as\n\n\n
\n\n$$\n\\begin{equation}\n \\tilde{G}_{ijij}=G_{ijij}-G_{jiij},\n\\label{_auto31} \\tag{54}\n\\end{equation}\n$$\n\nand the equation for $G$ is\n\n\n
\n\n$$\n\\begin{equation}\n G_{ijij}=V_{ijij}\n +\\sum_{ab>k_F}V_{ijab}\\frac{1}\n {\\varepsilon_i+\\varepsilon_j-\\varepsilon_a-\\varepsilon_b}\n G_{abij},\n\\label{eq:ggeneral} \\tag{55}\n\\end{equation}\n$$\n\nwhich is the familiar $G$-matrix equation. The above\nmatrix is specifically designed to treat a class of diagrams\ncontained in $\\Delta E_0$, of which typical contributions\nwere shown in fig. [fig:goldstone](#fig:goldstone). In fact the sum of the diagrams\nin fig. [fig:goldstone](#fig:goldstone) is equal to $1/2(G_{ijij}-G_{jiij})$.\n\nLet us now define a more general $G$-matrix as\n\n\n
\n\n$$\n\\begin{equation}\n G_{ijij}=V_{ijij}\n +\\sum_{mn>0}V_{ijmn}\\frac{Q(mn)}\n {\\omega -\\varepsilon_m-\\varepsilon_n}\n G_{mnij},\n\\label{eq:gwithq} \\tag{56}\n\\end{equation}\n$$\n\nwhich is an extension of Eq. ([eq:ggeneral](#eq:ggeneral)). Note that \nEq. ([eq:ggeneral](#eq:ggeneral)) has\n$\\varepsilon_i+\\varepsilon_j$ in the energy denominator, whereas\nin the latter equation we have a general energy variable $\\omega$\nin the denominator. Furthermore, in Eq. ([eq:ggeneral](#eq:ggeneral))\nwe have a restricted\nsum over $mn$, while in Eq. ([eq:gwithq](#eq:gwithq))\nwe sum over all $ab$ and we have\nintroduced a weighting factor $Q(ab)$. In Eq. ([eq:gwithq](#eq:gwithq)) $Q(ab)$\ncorresponds to the choice\n\n\n
\n\n$$\n\\begin{equation}\n Q(a , b ) =\n \\left\\{\\begin{array}{cc}1,&min(a ,b ) > k_F\\\\\n 0,&\\mathrm{else}.\\end{array}\\right. ,\n\\label{_auto32} \\tag{57}\n\\end{equation}\n$$\n\nwhere $Q(ab)$ is usually referred to as the $G$-matrix Pauli\nexclusion operator. The role of $Q$ is to enforce a selection\nof the intermediate states allowed in the $G$-matrix equation. The above\n$Q$ requires that the intermediate particles $a$ and $b$\nmust be both above the Fermi surface defined by $F$. We may enforce\na different requirement by using a summation over intermediate states\ndifferent from that in Eq. ([eq:gwithq](#eq:gwithq)).\nAn example is the Pauli operator\nfor the model-space Brueckner-Hartree-Fock method discussed below.\n\n\nBefore ending this section, let us rewrite the $G$-matrix equation\nin a more compact form.\nThe sp energies $\\varepsilon$ and wave functions are defined\nby the unperturbed hamiltonian $H_0$ as\n\n\n
\n\n$$\n\\begin{equation}\n H_0\\vert \\psi_a\\psi_b=(\\varepsilon_a+\\varepsilon_b)\n \\vert \\psi_a\\psi_b.\n\\label{_auto33} \\tag{58}\n\\end{equation}\n$$\n\nThe $G$-matrix equation can then be rewritten in the following\ncompact form\n\n\n
\n\n$$\n\\begin{equation}\n G(\\omega )=V+V\\frac{\\hat{Q}}{\\omega -H_0}G(\\omega ),\n\\label{_auto34} \\tag{59}\n\\end{equation}\n$$\n\nwith\n$\\hat{Q}=\\sum_{ab}\\vert \\psi_a\\psi_b\\langle\\langle \\psi_a\\psi_b\\vert$.\nIn terms of diagrams, $G$ corresponds to an all-order sum of the\n\"ladder-type\" interactions between two particles with the\nintermediate states restricted by $Q$.\n\nThe $G$-matrix equation has a very simple form. But its\ncalculation is rather complicated, particularly for finite\nnuclear systems such as the nucleus $^{18}$O. There are a\nnumber of complexities. To mention a few, the Pauli operator\n$Q$ may not commute with the unperturbed hamiltonian\n$H_0$ and we have to make the replacement\n\n$$\n\\frac{Q}{\\omega -H_0}\\rightarrow Q\\frac{1}{\\omega -QH_0Q}Q.\n$$\n\nThe determination of the starting energy $\\omega$ is also another\nproblem. \n\n\nIn a medium such as nuclear \nmatter we must account\nfor the fact that certain states are not available as intermediate\nstates in the calculation of the $G$-matrix.\nFollowing the discussion above\nthis is achieved by introducing the medium\ndependent Pauli operator $Q$. Further, the\nenergy $\\omega$ of the incoming particles, given by a pure kinetic\nterm in a scattering problem between two unbound particles (for example two colliding protons), must be modified so as to allow\nfor medium corrections.\nHow to evaluate the Pauli operator for\nnuclear matter is, however, not straightforward.\nBefore discussing how to evaluate the Pauli operator for nuclear matter,\nwe note that the $G$-matrix\nis conventionally given in terms of partial waves and\nthe coordinates of the relative and center-of-mass motion.\nIf we assume that the $G$-matrix is diagonal in $\\alpha$ ($\\alpha$ is a shorthand\nnotation for $J$, $S$, $L$ and $T$), we write the equation for the $G$-matrix as a \ncoupled-channels equation in the relative and center-of-mass system\n\n\n
\n\n$$\n\\begin{equation}\n G_{ll'}^{\\alpha}(kk'K\\omega )=V_{ll'}^{\\alpha}(kk')\n +\\sum_{l''}\\int \\frac{d^3 q}{(2\\pi )^3}V_{ll''}^{\\alpha}(kq)\n \\frac{Q(q,K)}{\\omega -H_0}\n G_{l''l'}^{\\alpha}(qk'K\\omega).\n\\label{eq:gnonrel} \\tag{60}\n\\end{equation}\n$$\n\nThis equation is similar in structure to the scattering\nequations discussed in connection with nuclear forces (see the chapter on models for nuclear forces), except that we now have\nintroduced the Pauli operator $Q$ and a medium dependent two-particle\nenergy $\\omega$. The notations in this equation follow those of the chapter on nuclear forces\nwhere we discuss the solution of the scattering\nmatrix $T$.\nThe numerical details on how to solve the above $G$-matrix\nequation through matrix inversion techniques are discussed below\nNote however that the $G$-matrix may not be diagonal in $\\alpha$.\nThis is due to the fact that the\nPauli operator $Q$ is not diagonal\nin the above representation in the relative and center-of-mass\nsystem. The Pauli operator depends on the\nangle between the relative momentum and the center of mass momentum.\nThis angle dependence causes $Q$ to couple states with different\nrelative angular\nmomentua ${\\cal J}$, rendering a partial wave decomposition of the $G$-matrix equation \nrather difficult.\nThe angle dependence of the Pauli operator\ncan be eliminated by introducing the angle-average\nPauli operator, where one replaces the exact Pauli operator $Q$\nby its average $\\bar{Q}$ over all angles for fixed relative and center-of-mass\nmomenta.\nThe choice of Pauli operator is decisive to the determination of the\nsp\nspectrum. Basically, to first order in the reaction matrix $G$,\nthere are three commonly used sp spectra, all\ndefined by the solution of the following equations\n\n\n
\n\n$$\n\\begin{equation}\n \\varepsilon_{m} = \\varepsilon (k_{m})= t_{m} + u_{m}=\\frac{k_{m}^2}{2M_N}+u_{m},\n\\label{eq:spnrel} \\tag{61}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\n u_{m} = {\\displaystyle \\sum_{h \\leq k_F}}\\left\\langle m h \\right| G(\\omega = \\varepsilon_{m} + \\varepsilon_h )\n \\left| m h \\right\\rangle_{AS} \\hspace{3mm}k_m \\leq k_M, \n\\label{_auto35} \\tag{62}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n\\label{_auto36} \\tag{63}\n\\end{equation}\n$$\n\n\n
\n\n$$\n\\begin{equation} \n u_m=0, k_m > k_M.\n\\label{eq:selfcon} \\tag{64}\n\\end{equation}\n$$\n\nFor notational economy, we set $|{\\bf k}_m|=k_m$.\nHere we employ antisymmetrized matrix elements (AS), and $k_M$ is a cutoff\non the momentum. Further, $t_m$ is the sp kinetic\nenergy and similarly $u_m$\nis the\nsp potential.\nThe choice of cutoff $k_M$ is actually what determines the three\ncommonly used sp spectra.\nIn the conventional BHF approach one employs $k_M = k_F$,\nwhich leads\nto a Pauli operator $Q_{\\mathrm{BHF}}$ (in the laboratory system) given by\n\n\n
\n\n$$\n\\begin{equation}\n Q_{\\mathrm{BHF}}(k_m , k_n ) =\n \\left\\{\\begin{array}{cc}1,&min(k_m ,k_n ) > k_F\\\\\n 0,&\\mathrm{else}.\\end{array}\\right.\n\\label{eq:bhf} \\tag{65},\n\\end{equation}\n$$\n\nor, since we will define an\nangle-average Pauli operator in the relative and center-of-mass\nsystem, we have\n\n\n
\n\n$$\n\\begin{equation}\n \\bar{Q}_{\\mathrm{BHF}}(k,K)=\\left\\{\\begin{array}{cc}\n 0,&k\\leq \\sqrt{k_{F}^{2}-K^2/4}\\\\\n 1,&k\\geq k_F + K/2\\\\\n\t\\frac{K^2/4+k^2 -k_{F}^2}{kK}&\\mathrm{else},\\end{array}\\right.\n\\label{eq:qbhf} \\tag{66}\n\\end{equation}\n$$\n\nwith $k_F$ the momentum at the Fermi surface.\n\nThe BHF choice sets $u_k = 0$ for $k > k_F$, which leads\nto an unphysical, large gap at the Fermi surface, typically\nof the order of $50-60$ MeV. \nTo overcome the gap\nproblem, Mahaux and collaborators \nintroduced a continuous sp spectrum\nfor all values of $k$. The divergencies\nwhich then may occur in Eq. ([eq:gnonrel](#eq:gnonrel)) are taken care of by\nintroducing\na principal value integration in Eq. ([eq:gnonrel](#eq:gnonrel)),\nto retain only the\nreal part contribution to the $G$-matrix.\n\n\nTo define the energy denominators we will also make use of the\nangle-average approximation.\nThe angle dependence is handled by the\nso-called effective mass approximation. The single-particle energies\nin nuclear matter are assumed to have the simple quadratic form\n\n\n
\n\n$$\n\\begin{equation}\n \\begin{array}{ccc}\n \\varepsilon (k_m)=&\n {\\displaystyle\\frac{\\hbar^{2}k_m^2}\n {2M_{N}^{*}}}+\\Delta ,&\\hspace{3mm}k_m\\leq k_F\\\\\n &&\\\\\n =&{\\displaystyle\\frac{\\hbar^{2}\n k_m^2}{2M_{N}}},&\\hspace{3mm}k_m> k_F ,\\\\\n \\end{array}\n\\label{eq:spen} \\tag{67}\n\\end{equation}\n$$\n\nwhere $M_{N}^{*}$ is the effective mass of the nucleon and $M_{N}$ is the\nbare nucleon mass. For particle states above the Fermi sea we choose\na pure kinetic energy term, whereas for hole states,\nthe terms $M_{N}^{*}$ and $\\Delta$, the latter being \nan effective single-particle\npotential related to the $G$-matrix, are obtained through the\nself-consistent Brueckner-Hartree-Fock procedure.\nThe sp potential is obtained through the same angle-average approximation\n\n\n
\n\n$$\n\\begin{equation}\n\\label{eq:Uav} \\tag{68}\n U(k_m) =\\sum_{l\\alpha} (2T+1)(2J+1)\n \\left \\{ \\frac{8}{\\pi}\\int_{0}^{(k_F-k_m)/2}\n k^2dk G_{ll}^{\\alpha}(k,\\bar{K}_1) \\right. \n\\end{equation}\n$$\n\n$$\n\\left.\n + \\frac{1}{\\pi k_m}\\int_{(k_F-k_m)/2}^{(k_F+k_m)/2}\n kdk (k_F ^2-(k_m-2k)^2)\n G_{ll}^{\\alpha}(k,\\bar{K}_2) \\right \\} \\nonumber,\n$$\n\nwhere we have defined\n\n\n
\n\n$$\n\\begin{equation}\n \\bar{K}_1^2=4(k_m^2+k^2),\n\\label{_auto37} \\tag{69}\n\\end{equation}\n$$\n\nand\n\n\n
\n\n$$\n\\begin{equation}\n \\bar{K}_2^2=4(k_m^2+k^2)-(2k+k_m-k_F)(2k+k_1+k_F).\n\\label{_auto38} \\tag{70}\n\\end{equation}\n$$\n\nThis\nself-consistency scheme consists in choosing adequate initial values of the\neffective mass and $\\Delta$. The obtained $G$-matrix is in turn used to\nobtain new values for $M_{N}^{*}$ and $\\Delta$. This procedure\ncontinues until these parameters vary little.\n\n\n\n\n\n\n## Exercise 5: Quantum numbers for infinite matter, neutron matter and/or the electron gas in 3d\n\n\n**a)**\nSet up the quantum numbers for infinite nuclear matter and neutron matter or the electron gas in 3d using a given value \nof $n_{\\mathrm{max}}$.\n\n\n\n**Solution.**\nThe following python code sets up the quantum numbers for both infinite nuclear matter and neutron matter meploying a cutoff in the value of $n$.\n\n\n```\nfrom numpy import *\n\nnmax =2\nnshell = 3*nmax*nmax\ncount = 1\ntzmin = 1\n\nprint (\"Symmetric nuclear matter:\")\nprint (\"a, nx, ny, nz, sz, tz, nx^2 + ny^2 + nz^2\")\nfor n in range(nshell): \n for nx in range(-nmax,nmax+1):\n for ny in range(-nmax,nmax+1):\n for nz in range(-nmax, nmax+1): \n for sz in range(-1,1+1):\n tz = 1\n for tz in range(-tzmin,tzmin+1):\n e = nx*nx + ny*ny + nz*nz\n if e == n:\n if sz != 0: \n if tz != 0: \n print count, \" \",nx,\" \",ny, \" \",nz,\" \",sz,\" \",tz,\" \",e\n count += 1\n \n \nnmax =1\nnshell = 3*nmax*nmax\ncount = 1\ntzmin = 1\nprint (\"------------------------------------\")\nprint (\"Neutron matter or the electron gas:\") \nprint (\"a, nx, ny, nz, sz, nx^2 + ny^2 + nz^2\")\nfor n in range(nshell): \n for nx in range(-nmax,nmax+1):\n for ny in range(-nmax,nmax+1):\n for nz in range(-nmax, nmax+1): \n for sz in range(-1,1+1):\n e = nx*nx + ny*ny + nz*nz\n if e == n:\n if sz != 0: \n print count, \" \",nx,\" \",ny, \" \",sz,\" \",tz,\" \",e\n count += 1\n```\n\n\n\n\n", "meta": {"hexsha": "1dc46e01c9e2e136b6323cb7d7f5a02ba9788716", "size": 152772, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "doc/pub/inf/ipynb/inf.ipynb", "max_stars_repo_name": "NuclearTalent/ManyBody2018", "max_stars_repo_head_hexsha": "2339ed834777fa10f6156344f17494b9a7c0bf91", "max_stars_repo_licenses": ["CC0-1.0"], "max_stars_count": 6, "max_stars_repo_stars_event_min_datetime": "2018-07-17T01:09:17.000Z", "max_stars_repo_stars_event_max_datetime": "2021-04-08T02:34:02.000Z", "max_issues_repo_path": "doc/pub/inf/ipynb/inf.ipynb", "max_issues_repo_name": "NuclearTalent/ManyBody2018", "max_issues_repo_head_hexsha": "2339ed834777fa10f6156344f17494b9a7c0bf91", "max_issues_repo_licenses": ["CC0-1.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "doc/pub/inf/ipynb/inf.ipynb", "max_forks_repo_name": "NuclearTalent/ManyBody2018", "max_forks_repo_head_hexsha": "2339ed834777fa10f6156344f17494b9a7c0bf91", "max_forks_repo_licenses": ["CC0-1.0"], "max_forks_count": 8, "max_forks_repo_forks_event_min_datetime": "2018-07-16T06:31:54.000Z", "max_forks_repo_forks_event_max_datetime": "2021-05-01T07:53:38.000Z", "avg_line_length": 34.8237975838, "max_line_length": 609, "alphanum_fraction": 0.5103160265, "converted": true, "num_tokens": 32725, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO\n\n", "lm_q1_score": 0.41869690935568665, "lm_q2_score": 0.19930799314806233, "lm_q1q2_score": 0.08344964074097808}} {"text": "```python\n%%HTML\n\n```\n\n\n\n\n\n\n# Metody Numeryczne\n\n## Elementy analizy numerycznej\n\n### dr hab. 
inż. Jerzy Baranowski, Prof. AGH.


## General information
- Department of Automatic Control and Robotics, C3, room 214
- Office hours
  - Thursdays 11:00-12:00
    (unless there is a Faculty Board meeting or a seminar)
- jb@agh.edu.pl
- lectures available here: https://github.com/KAIR-ISZ/public_lectures

# Representation of numbers


## Binary code

- A number written using two symbols, **1** and **0**
- The basis of the modern way of representing information


## Ancient history

- Pingala, Chandaḥśāstra and prosody
  - Around the 4th century BC
  - Used a notation of zeros and ones to describe metre
- China, hexagrams, Shao Yong, I Ching
- Leibniz

## Boolean algebra

$$
\begin{align}
x \land y & = xy & \mathsf{Conjunction}\\
x \lor y & = x+y-xy & \mathsf{Disjunction}\\
\neg x & =1-x & \mathsf{Negation}\\
x \rightarrow y & = (\neg x\lor y) & \mathsf{Implication}\\
x \oplus y & = (x \lor y)\land\neg(x\land y) & \mathsf{EXOR}\\
x = y & = \neg(x\oplus y) & \mathsf{Equivalence}\\
\end{align}
$$

## Somewhat less ancient history
- 1937 Shannon – relay implementation of binary operations and Boolean algebra
- 1937 Stibitz – first relay computer (addition)

## Binary code
| **0** | **0** | **1** | **0** | **1** | **0** | **1** | **1** |
|---------|---------|---------|---------|---------|---------|---------|---------|
| $2^{7}$ | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |

Which gives $ =2^5+2^3+2^1+2^0=32+8+2+1=43$

## Natural numbers
- In general the range is from 0 to $2^n-1$
- 8 bits – range from 0 to 255
- 16 bits – range from 0 to 65,535 (short, int)
- 32 bits – range from 0 to 4,294,967,295 (long)

In Python and Matlab we do not worry much about types, unless we enforce them.

## Operations on binary numbers

- Addition
  - 0+0=0
  - 0+1=1
  - 1+0=1
  - 1+1=0, carry 1
- Just like written (column) addition

`` 1 1 1 1 1 `` (carried digits)  
`` 0 1 1 0 1 `` ($13_{10}$)  
``+ 1 0 1 1 1 `` ($23_{10}$)  
``------------ ``  
``=1 0 0 1 0 0 `` ($36_{10}$)

## Operations on binary numbers

- Subtraction
  - 0-0=0
  - 0-1=1, borrow 1
  - 1-0=1
  - 1-1=0
- Analogously

`` * * * * `` (borrows)  
`` 1 1 0 1 1 1 0`` ($110_{10}$)  
``- 1 0 1 1 1`` ($23_{10}$)  
``--------------- ``  
``= 1 0 1 0 1 1 1`` ($87_{10}$)
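
As a quick cross-check of the two worked examples above, Python's built-in binary literals reproduce them directly (a small added illustration):

```python
# Addition example: 01101 + 10111
a, b = 0b01101, 0b10111          # 13 and 23
print(bin(a + b), a + b)         # 0b100100 36

# Subtraction example: 1101110 - 10111
c, d = 0b1101110, 0b10111        # 110 and 23
print(bin(c - d), c - d)         # 0b1010111 87
```
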
## What about negative numbers?

We extend the representation with a so-called sign bit

| **1** | **0** | **1** | **0** | **1** | **0** | **1** | **1** |
|---------|---------|---------|---------|---------|---------|---------|---------|
| S | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |

Which gives $ =(-1)^1(2^3+2^1+2^0)=-(8+2+1)=-11$

The ranges change:
- 8 bits (-128 to 127)
- 16 bits (−32,768 to 32,767)
- etc.

## Problems

- Impractical notation
- Results of operations have to be re-encoded
- Potentially more error-prone

## Two's complement code (U2)
| **1** | **1** | **1** | **1** | **1** | **0** | **1** | **1** |
|---------|---------|---------|---------|---------|---------|---------|---------|
| $-2^{7}$| $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ |

Which gives $ =-2^7+2^6+2^5+2^4+2^3+2^1+2^0$

$=-128+64+32+16+8+2+1=-5$

## Very easy conversion
- Positive numbers stay exactly as they were
- To obtain the negative of a number it suffices to negate all of its bits and add 1 to the result (*in both directions*)

| **0** | **0** | **0** | **0** | **0** | **1** | **0** | **1** | $5_{10}$ | original |
|----------|---------|---------|---------|---------|---------|---------|---------|-----------------|-----------|
| 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | | negation |
| **1** | **1** | **1** | **1** | **1** | **0** | **1** | **1** | $-5_{10}$ | add 1 |
| 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | | negation |
| **0** | **0** | **0** | **0** | **0** | **1** | **0** | **1** | $5_{10}$ | add 1 |
| -$2^{7}$ | $2^{6}$ | $2^{5}$ | $2^{4}$ | $2^{3}$ | $2^{2}$ | $2^{1}$ | $2^{0}$ | | |

## What do we gain?
- Subtraction becomes addition (almost)
$$ A - B = A + \neg B + 1$$
- Example: 13 − 7 (on 8 bits)

`` 1 1 1 1 1 `` (carried digits)  
`` 0 0 0 0 1 1 0 1`` ($13_{10}$)  
`` 1 1 1 1 1 0 0 0`` (negated $7_{10}$)  
``+ 1`` (the added one)  
``-----------------``  
``= 0 0 0 0 0 1 1 0 `` ($6_{10}$)

## Operations on binary numbers
Multiplication also resembles written (long) multiplication

``  1 0 1 1`` $11_{10}$  
`` * 1 0 1 0`` $10_{10}$  
`` -----------``  
`` 0 0 0 0``  
`` + 1 0 1 1 ``  
`` + 0 0 0 0``  
`` + 1 0 1 1``  
`` ---------------``  
`` = 1 1 0 1 1 1 0`` $110_{10}$


# Metody Numeryczne

## Representation of rational numbers

### dr hab. inż. Jerzy Baranowski, Prof. AGH.

## What about fractions?
There are two ways of writing non-integer numbers
- Fixed-point
- Floating-point

## Fixed-point notation
| **1** | **0** | **1** | **1** | **1** | **0** | **0** | **0** |
|---------|---------|---------|---------|---------|---------|---------|---------|
| $2^{1}$ | $2^{0}$ | $2^{-1}$ | $2^{-2}$ | $2^{-3}$ | $2^{-4}$ | $2^{-5}$ | $2^{-6}$ |

$$
2^1+2^{-1}+2^{-2}+2^{-3}=2+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}=2.875
$$

## Advantages of fixed-point notation
- The encoding is no different from that of integers
- The precision is fixed and can be shaped fairly accurately
- Relative simplicity
- Low hardware requirements
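
A minimal sketch (for illustration; the helper names and the choice of 6 fractional bits, matching the table above, are assumptions) of how such a fixed-point encoding can be emulated in Python:

```python
FRAC_BITS = 6                        # fractional bits, as in the table above

def to_fixed(x):
    # store the value as an integer scaled by 2**FRAC_BITS (rounding if necessary)
    return round(x * 2 ** FRAC_BITS)

def from_fixed(n):
    # recover the represented real value
    return n / 2 ** FRAC_BITS

print(bin(to_fixed(2.875)), from_fixed(to_fixed(2.875)))   # 0b10111000 2.875 (exact)
print(from_fixed(to_fixed(0.1)))                           # 0.09375: 0.1 is not representable exactly
```

The exact round-trip for 2.875 and the rounded value for 0.1 preview the accuracy limitation discussed on the next slide.
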
nie da si\u0119\u00a0dok\u0142adnie przedstawi\u0107\u00a0liczby 0.1\n- Na 3 bitach cz\u0119\u015bci u\u0142amkowej r\u00f3\u017cnica wynosi 0.025\n- Na 7 bitach cz\u0119\u015bci u\u0142amkowej r\u00f3\u017cnica wynosi ok. 0.001 \n\n\n## Jak wykonujemy dzia\u0142ania?\n- Dzia\u0142ania wykonujemy traktuj\u0105c zapis liczby sta\u0142oprzecinkowej jako normaln\u0105 binarn\u0105\n- Kod U2 dalej dzia\u0142a\n- Nale\u017cy pami\u0119ta\u0107, \u017ce wtedy liczba jest pomno\u017cona przez 2n gdzie n to ilo\u015b\u0107 bit\u00f3w cz\u0119\u015bci u\u0142amkowej \n- W liczbach poddanych dzia\u0142aniu liczba bit\u00f3w cz\u0119\u015bci ca\u0142kowitej i u\u0142amkowej musi by\u0107\u00a0r\u00f3wna\n\n## Dzia\u0142ania sta\u0142oprzecinkowe\n- Dodawanie wykonujemy identycznie\n- W przypadku mno\u017cenia wynik musimy podzieli\u0107\u00a0przez 2n \n- Mno\u017cenie liczb sta\u0142oprzecinkowych przez pot\u0119g\u0119 2 polega tylko na przesuwaniu bit\u00f3w (bardzo proste w realizacji)\n\n| 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | |\n|---------------|---------------|-----|-----|-----|-----|-----|-----|---|\n| 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | Podzielenie przez $2^{2}$ |\n| $2^{1}$ | $2^{0}$ | $2^{-1}$ | $2^{-2}$ | $2^{-3}$ | $2^{-4}$ | $2^{-5}$ | $2^{-6}$ ||\n\n## Format zmiennoprzecinkowy\n- Bardziej zaawansowany spos\u00f3b przedstawiania liczb\n- Ustandaryzowany norm\u0105 IEEE\n- Daj\u0105cy pod pewnymi wzgl\u0119dami wi\u0119ksz\u0105 dok\u0142adno\u015b\u0107\n\n## Format zmiennoprzecinkowy\n\nReprezentacja liczby\n\n$$\nx=S\\cdot M\\cdot B^E\n$$\n\n- S \u2013 znak (*sign*)\n- M \u2013 mantysa (*mantissa*, tak\u017ce *fraction*)\n- B \u2013 podstawa (*base*, zazwyczaj 2, rzadziej 10)\n- E - wyk\u0142adnik (*exponent*)\n\n## Mantysa\n- Liczba odpowiadaj\u0105ca za u\u0142amkow\u0105 cz\u0119\u015b\u0107 zapisu\n- Format sta\u0142oprzecinkowy, zazwyczaj liczba z przedzia\u0142u [1,2)\n\n## Podstawa i wyk\u0142adnik\n\n- Pozwalaj\u0105 na okre\u015blenie szerokiego zakresu\n- Ze wzgl\u0119du na kodowanie, zazwyczaj podstawa to 2\n- Wyk\u0142adnik mo\u017ce by\u0107 ujemny lub dodatni.\n- Wyk\u0142adnik koduje si\u0119\u00a0w U2, lub te\u017c wprowadza si\u0119 przesuni\u0119cie\n\n## Dzia\u0142ania na liczbach zmiennoprzecinkowych\nDodawanie i odejmowanie\n\n$$\nx_1\\pm x_2=\\left(M_1\\pm M_2\\cdot B^{E_2-E_1}\\right)\\cdot B^{E_1}\n$$\n\nMno\u017cenie i dzielenie\n\n$$\nx_1\\cdot x_2=(S_1\\cdot S_2)\\cdot (M_1\\cdot M_2)\\cdot B^{E_1+E_2}\n$$\n\n$$\nx_1 / x_2=(S_1\\cdot S_2)\\cdot (M_1/ M_2)\\cdot B^{E_1-E_2}\n$$\n\n\n\n## Dzielenie\n- Maj\u0105c mo\u017cliwo\u015b\u0107 zapisu liczby ulamkowej mo\u017cna sformu\u0142owa\u0107\u00a0operacj\u0119\u00a0dzielenia.\n- Istnieje wiele algorytm\u00f3w np.\n - *restoring division*\n - *non-restoring division*\n - SRT\n - algorytm Newtona-Raphsona\n - algorytm Goldschmidta\n- S\u0105\u00a0one ju\u017c zaimplementowane, jedno dzielenie zazwyczaj wymaga przeprowadzenia 3-4 mno\u017ce\u0144\n\n\n## Wa\u017cne formaty \u2013 IEEE Single precision\n\n- 8 bit\u00f3w wyk\u0142adnika, wyk\u0142adnik przesuni\u0119ty o\u00a0127 (zamiana z -126 do 127 na 1 do 244)\n- 24 bity mantysy, ale zawsze koduje si\u0119 tylko 23 po kropce, przed kropk\u0105 jest 1 \n- Specjalne zapisy niesko\u0144czono\u015bci i b\u0142\u0119d\u00f3w\n- w NumPy - ``float32``\n\n## Wa\u017cne formaty \u2013 IEEE Double precision\n\n- 11 bit\u00f3w wyk\u0142adnika, wyk\u0142adnik przesuni\u0119ty o\u00a01023 (zamiana z -1022 do 1023 na 1 do 2046)\n- 53 bity mantysy, ale zawsz koduje si\u0119 tylko 52 
po kropce, przed kropk\u0105 jest 1 \n- Specjalne zapisy niesko\u0144czono\u015bci i b\u0142\u0119d\u00f3w\n- w NumPy - ``float64``, ale w zasadzie ka\u017cda liczba w Pythonie i Matlabie to double, chyba \u017ce wymusimy inaczej\n\n## Wy\u015bwietlanie liczb\n- Normalnie \n- Notacja in\u017cynierska\n - $3700=3.7\\cdot10^3$, $0.12=120\\cdot10^{-3}$\n- Notacja naukowa\n - ``3700=3.7E3``, ``0.12=1.2E-1``\n\n# Metody Numeryczne\n\n## B\u0142edy numeryczne\n\n### dr hab. in\u017c. Jerzy Baranowski, Prof. AGH.\n\n## Podstawowe definicje\nWarto\u015b\u0107 dok\u0142adna\n$$y=\\tilde{y}+\\varepsilon$$\n- $\\tilde{y}$ - warto\u015b\u0107 przybli\u017cona\n- $\\varepsilon$ - b\u0142\u0105d\n\n## B\u0142\u0105d bezwzgl\u0119dny\nWarto\u015b\u0107 bezwzgl\u0119dna r\u00f3\u017cnicy mi\u0119dzy rozwi\u0105zaniem dok\u0142adnym i przybli\u017conym\n$$ \\varepsilon=|y-\\tilde{y}|$$\n\n## B\u0142\u0105d wzgl\u0119dny\nStosunek b\u0142\u0119du bezwzgl\u0119dnego do warto\u015bci bezwzgl\u0119dnej rozwi\u0105zania\n$$\\eta=\\frac{|y-\\tilde{y}|}{|y|}=\\left|\\frac{y-\\tilde{y}}{y}\\right|=\\left|1-\\frac{\\tilde{y}}{y}\\right|$$\nCzasami b\u0142\u0105d wzgl\u0119dny wyra\u017camy w procentach\n\n## Przyk\u0142ady\nPierwiastek kwadratowy ze 122\n\n$$\n\\begin{align}\ny{}&=\\sqrt{122}\\approx 11.04536\\\\\n\\tilde{y}{}&=11\\\\\n\\varepsilon{}&=|y-\\tilde{y}|=0.04536\\\\\n\\eta{}&=\\frac{|y-\\tilde{y}|}{|y|}=0.00411\n\\end{align}\n$$\n\n## Przyk\u0142ady\nLiczba obywateli Polski (stan na ostatni spis powszechny z 2011)\n\n$$\n\\begin{align}\ny{}&=38\\ 538\\ 447\\\\\n\\tilde{y}{}&=38\\ 500\\ 000\\\\\n\\varepsilon{}&=|y-\\tilde{y}|=38\\ 447\\\\\n\\eta{}&=\\frac{|y-\\tilde{y}|}{|y|}=9.97627\\cdot10^{-4}\\approx 0.001\n\\end{align}\n$$\n\n## Przyk\u0142ady\nObliczanie sta\u0142ej grawitacji\n$$\n\\begin{align}\ny{}&=6.673841\\cdot10^{-11}\\\\\n\\tilde{y}{}&=6.7\\cdot10^{-11}\\\\\n\\varepsilon{}&=|y-\\tilde{y}|=2.6159\\cdot10^{-13}\\\\\n\\eta{}&=\\frac{|y-\\tilde{y}|}{|y|}=0.00391\n\\end{align}\n$$\n\n## \u0179r\u00f3d\u0142a b\u0142\u0119d\u00f3w\nB\u0142\u0119dy powstaj\u0105ce przy formu\u0142owaniu zagadnienia\n- B\u0142\u0119dy pomiaru\n- B\u0142\u0119dy wynikaj\u0105ce z przyj\u0119cia okre\u015blonych przybli\u017ce\u0144 opisu zjawisk fizycznych\n\nB\u0142\u0119dy powstaj\u0105ce przy obliczeniach\n- B\u0142\u0119dy grube (pomy\u0142ki)\n- B\u0142\u0119dy metody (obci\u0119cia)\n- B\u0142\u0119dy zaokr\u0105gle\u0144\n\n## B\u0142\u0119dy grube\n- B\u0142\u0105d przy wpisywaniu wzoru do komputera\nnp. 
``x=A/b`` zamiast ``x=A\\b``\n- Z\u0142a implementacja algorytmu\n- Niew\u0142a\u015bciwa kolejno\u015b\u0107 wykonywania dzia\u0142a\u0144\n\n## B\u0142\u0119dy metody (obci\u0119cia)\n- B\u0142\u0119dy obci\u0119cia s\u0105 nieod\u0142\u0105cznym elementem oblicze\u0144 numerycznych.\n- B\u0142\u0105d obci\u0119cia jest to b\u0142\u0105d wynikaj\u0105cy z tego, \u017ce do uzyskania dok\u0142adnego rozwi\u0105zania potrzebujemy wykona\u0107\u00a0niesko\u0144czenie wiele oblicze\u0144\n\n## Przyk\u0142ady b\u0142\u0119d\u00f3w metody\nMo\u017cna wykaza\u0107, \u017ce\n$$\n\\begin{align}\n\\sin x={}&x-\\frac{x^3}{3!}+\\frac{x^5}{5!}-\\frac{x^7}{7!}+\\ldots=\\\\\n={}&\\sum\\limits_{n=0}^\\infty(-1)^n\\frac{x^{2n+1}}{(2n+1)!}\n\\end{align}\n$$\nB\u0142\u0119dem odci\u0119cia b\u0119dzie \n$$\n\\sin x\\approx x-\\frac{x^3}{3!}+\\frac{x^5}{5!}\n$$\n\n## Przyk\u0142ady b\u0142\u0119d\u00f3w metody\n\nMetoda bisekcji\n\n\n```python\ndef bisection(f,a,b,N): \n a_n = a\n b_n = b\n for n in range(1,N+1):\n m_n = (a_n + b_n)/2\n f_m_n = f(m_n)\n if f(a_n)*f_m_n < 0:\n a_n = a_n\n b_n = m_n\n elif f(b_n)*f_m_n < 0:\n a_n = m_n\n b_n = b_n\n return (a_n + b_n)/2\n```\n\nSzukamy pierwiastka wielomianu $x^2-2$, w przedziale $[1,2]$. Rozwi\u0105zanie to $\\sqrt{2}$.\n\n\n```python\nf = lambda x: x**2 - 2 # definicja funkcji\nbisection(f,1,2,5) # 5 krok\u00f3w\n```\n\n\n\n\n 1.421875\n\n\n\n\n```python\nbisection(f,1,2,10) # 10 krok\u00f3w\n```\n\n\n\n\n 1.41455078125\n\n\n\n\n```python\nbisection(f,1,2,15) # 15 krok\u00f3w\n```\n\n\n\n\n 1.4141998291015625\n\n\n\n\n```python\nimport numpy as np\nnp.sqrt(2) \n```\n\n\n\n\n 1.4142135623730951\n\n\n\n## B\u0142\u0105d metody - podsumowanie\n- Praktycznie wszystkie metody numeryczne maj\u0105 jaki\u015b b\u0142\u0105d metody\n- Dobre algorytmy podaj\u0105 jednak jego oszacowanie, w ten spos\u00f3b wiemy jak daleko jeste\u015bmy od rozwi\u0105zania nawet jak przerwiemy obliczenia\n\n# Metody Numeryczne\n\n## B\u0142\u0119dy zaokr\u0105gle\u0144\n\n### dr hab. in\u017c. Jerzy Baranowski, Prof. AGH.\n\n## B\u0142\u0119dy zaokr\u0105gle\u0144\nKolejne nieusuwalne w pe\u0142ni \u017ar\u00f3d\u0142o b\u0142\u0119d\u00f3w, nad kt\u00f3rym mamy mniejsz\u0105 kontrol\u0119 ni\u017c nad b\u0142\u0119dem metody\n\n## Zaokr\u0105glenie i cyfry znacz\u0105ce\nLiczba $\\tilde{y}=\\mathrm{rd}(y)$ jest poprawnie zaokr\u0105glona do *d* miejsc po przecinku, je\u017celi \n\n$$\n\\varepsilon=|y-\\tilde{y}|\\leq\\frac{1}{2}\\cdot10^{-d}\n$$\n*k*-t\u0105 cyfr\u0119\u00a0dziesi\u0119tn\u0105 liczby $\\tilde{y}$ nazwiemy znacz\u0105c\u0105 gdy\n$$|y-\\tilde{y}|\\leq\\frac{1}{2}\\cdot10^{-k}$$\noraz \n$$|\\tilde{y}|\\geq10^{-k}\n$$\n\n## Rzeczywiste obliczenia zmiennoprzecinkowe\n$$\n\\begin{align}\n\\mathrm{fl}(x+y)={}&\\mathrm{rd}(x+y)\\\\\n\\mathrm{fl}(x-y)={}&\\mathrm{rd}(x-y)\\\\\n\\mathrm{fl}(x\\cdot y)={}&\\mathrm{rd}(x\\cdot y)\\\\\n\\mathrm{fl}(x/y)={}&\\mathrm{rd}(x/y)\\\\\n\\end{align}\n$$\n\n## Liczby maszynowe\n- Liczba maszynowa, to taka liczba jak\u0105 mo\u017cna przedstawi\u0107\u00a0w komputerze. Zbi\u00f3r tych liczb oznaczamy A\n- Dok\u0142adno\u015b\u0107 maszynow\u0105 (epsilon maszynowy) \u2013 eps, $\\varepsilon_m$, definiujemy:\n$$\n\\mathrm{eps}=\\min\\{x\\in{A}\\colon \\mathrm{fl}(1+x)>1,\\ x>0\\}\n$$\nInnymi s\u0142owy, jest to najmniejsza liczba, kt\u00f3r\u0105 mo\u017cemy doda\u0107 do 1, aby uzyska\u0107 co\u015b wi\u0119kszego od 1. 
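A quick check of this definition (my own addition, not part of the original slides): the sketch below halves a trial value until adding it to 1 no longer yields something larger than 1, and compares the result with what NumPy reports. Note that conventions differ by a factor of two: the slides quote the unit roundoff ($2^{-24}$ for single precision), while `np.finfo(...).eps` reports the spacing between 1 and the next representable number ($2^{-23}$ and $2^{-52}$), which is what this loop converges to.

```python
import numpy as np

def eps_machine(dtype):
    # halve x until adding it to 1 no longer gives something larger than 1
    x = dtype(1.0)
    while dtype(1.0) + x / dtype(2.0) > dtype(1.0):
        x = x / dtype(2.0)
    return x

for dt in (np.float32, np.float64):
    print(dt.__name__, eps_machine(dt), np.finfo(dt).eps)
```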
\n\n## Epsilon maszynowy w r\u00f3\u017cnych formatach\n\nZale\u017cy on od liczby bit\u00f3w na cz\u0119\u015b\u0107 u\u0142amkow\u0105\n- Single precision $\\varepsilon_m=2^{-24}\\approx 5.96\\cdot10^{-8}$\n- Double precision $\\varepsilon_m=2^{-52}\\approx 1.11\\cdot10^{-16}$\n\n### Przyk\u0142ad\n\n\n```python\na=10**(-15)\nb=10**(-17)\n1+a>1,1+b>1\n```\n\n\n\n\n (True, False)\n\n\n\n## Maksymalny b\u0142\u0105d reprezentacji\nDla ka\u017cdej liczby rzeczywistej $x$ istnieje taka liczba $\\varepsilon$, taka \u017ce $|\\varepsilon|<\\varepsilon_m$, \u017ce\n$\\mathrm{fl}(x)=x(1+\\varepsilon)$\n\nOznacza to, \u017ce **b\u0142\u0105d wzgl\u0119dny mi\u0119dzy liczb\u0105 rzeczywist\u0105, a jej najbli\u017csz\u0105 reprezentacj\u0105 zmiennoprzecinkow\u0105 jest zawsze mniejszy od $\\varepsilon_m$**\n\n## Lemat Wilkinsona\nB\u0142edy zaokr\u0105gle\u0144 powsta\u0142e podczas wykonywania dzia\u0142a\u0144 zmiennoprzecinkowych s\u0105 r\u00f3wnowa\u017cne zast\u0119pczemu zaburzeniu liczb, na kt\u00f3rych wykonujemy dzia\u0142ania \n\n$$\n\\begin{align}\n\\mathrm{fl}(x+y)={}&(x+y)(1+\\varepsilon_1)\\\\\n\\mathrm{fl}(x-y)={}&(x-y)(1+\\varepsilon_2)\\\\\n\\mathrm{fl}(x\\cdot y)={}&(x\\cdot y)(1+\\varepsilon_3)\\\\\n\\mathrm{fl}(x/y)={}&(x/y)(1+\\varepsilon_4)\\\\\n|\\varepsilon_i|<{}&\\varepsilon_m\n\\end{align}\n$$\n(dla ka\u017cdej pary liczb $x,\\ y$ zaburzenia zast\u0119pcze $\\varepsilon_i$ s\u0105 inne)\n\n## Konsekwencja lematu Wilkinsona\nPrawa \u0142\u0105czno\u015bci i rozdzielno\u015bci operacji matematycznych s\u0105 og\u00f3lnie nieprawdziwe dla oblicze\u0144 zmiennoprzecinkowych\n\n### Przyk\u0142ad\n\n\n```python\na=np.float32(0.23371258*10**(-4))\nb=np.float32(0.33678429*10**(2))\nc=np.float32(-0.33677811*10**(2))\nprint([a,b,c])\n```\n\n [2.3371258e-05, 33.67843, -33.67781]\n\n\nChcemy obliczy\u0107 ``a+b+c``\n\n## Obliczenia\n\n\n```python\n## Podej\u015bcie 1\nd=b+c\nwynik_1=a+d\nprint(wynik_1)\n```\n\n 0.0006413522\n\n\n\n```python\n## Podej\u015bcie 2\ne=a+b\nwynik_2=e+c\nprint(wynik_2)\n```\n\n 0.00064086914\n\n\n## Co tu si\u0119\u00a0porobi\u0142o?\n\n\n## Konsekwencje obliczen zmiennoprzecinkowych\n\n\n```python\nm_a, e_a = np.frexp(a)\nprint(m_a,e_a)\nm_b,e_b = np.frexp(b)\nprint(m_b,e_b)\nm_c,e_c = np.frexp(c)\nprint(m_c,e_c)\n```\n\n 0.7658294 -15\n 0.52622545 6\n -0.5262158 6\n\n\nWyk\u0142adnik ``a`` od wyk\u0142adnik\u00f3w ``b`` i ``c`` r\u00f3\u017cni si\u0119\u00a0o 21. Oznacza to, \u017ce z 23 bit\u00f3w mantysy liczby ``a`` po sprowadzeniu do wsp\u00f3lnego wyk\u0142adnika z ``b`` zostan\u0105\u00a0nam tylko 2 najbardziej znacz\u0105ce. \n\n## Konsekwencje cd..\nJe\u017celi dodajemy ma\u0142\u0105 liczb\u0119\u00a0do du\u017cej, zawsze musimy si\u0119\u00a0liczy\u0107 z zaokr\u0105gleniem i to normalne. W tym przypadku jednak dwie du\u017ce liczby ``b`` i ``c`` s\u0105 przeciwnych znak\u00f3w i bliskie co do warto\u015bci bezwzgl\u0119dnej. 
Wynik tego dzia\u0142ania:\n\n\n```python\nm_d,e_d = np.frexp(d)\nprint(m_d,e_d)\nprint(wynik_2)\n```\n\n 0.6328125 -10\n 0.00064086914\n\n\nW konsekwencji dodaj\u0105c ``a`` do ``d`` na zaokr\u0105gleniu stracimy jedynie 5 bit\u00f3w mantysy ``a``.\n\n## O ile si\u0119\u00a0pomylili\u015bmy (w stosunku do dok\u0142adniejszych oblicze\u0144)\n\n\n```python\na_dbl=(0.23371258*10**(-4))\nb_dbl=(0.33678429*10**(2))\nc_dbl=(-0.33677811*10**(2))\nd_dbl=b_dbl+c_dbl\nwynik_dbl=a_dbl+d_dbl\nepsilon_1=np.abs((wynik_1)-wynik_dbl)\neta_1=epsilon_1/np.abs(wynik_dbl)\nprint(\"Metoda 1: B\u0142\u0105d bezwzgl\u0119dny %10.2e, B\u0142\u0105d wzgl\u0119dny %10.2e\"%(epsilon_1,eta_1))\nepsilon_2=np.abs((wynik_2)-wynik_dbl)\neta_2=epsilon_2/np.abs(wynik_dbl)\nprint(\"Metoda 2: B\u0142\u0105d bezwzgl\u0119dny %10.2e, B\u0142\u0105d wzgl\u0119dny %10.2e\"%(epsilon_2,eta_2))\n\n\n```\n\n Metoda 1: B\u0142\u0105d bezwzgl\u0119dny 1.91e-08, B\u0142\u0105d wzgl\u0119dny 2.97e-05\n Metoda 2: B\u0142\u0105d bezwzgl\u0119dny 5.02e-07, B\u0142\u0105d wzgl\u0119dny 7.83e-04\n\n\n# Przenoszenie si\u0119\u00a0b\u0142\u0119d\u00f3w zaokr\u0105gle\u0144\nKorzystaj\u0105c z rachunku r\u00f3\u017cniczkowego (r\u00f3\u017cniczkowa analiza b\u0142\u0119d\u00f3w) mo\u017cemy poda\u0107 wz\u00f3r na przenoszenie si\u0119\u00a0b\u0142\u0119d\u00f3w.\n\nNiech $y=\\varphi(x_1,\\ x_2,,\\ldots\\ x_n)$ b\u0119dzie wielko\u015bci\u0105, kt\u00f3r\u0105 chcemy obliczy\u0107 a $x_i$ s\u0105 zaokr\u0105glone z b\u0142\u0119dem $\\varepsilon_{x_i}$. B\u0142\u0105d wzgl\u0119dny wyliczania $y$ wynosi w przybli\u017ceniu:\n\n$$\n\\varepsilon_y = \\sum_{i=0}^n \\frac{x_i}{\\varphi(\\mathbf{x})}\n\\cdot \\frac{\\partial\\varphi(\\mathbf{x})}{\\partial x_i}\\cdot\\varepsilon_{x_i}\n$$\n\n\n\n# Nieunikniony b\u0142\u0105d oblicze\u0144\nZe wzgl\u0119du na zaokr\u0105glenia pewnych b\u0142\u0119d\u00f3w nigdy nie unikniemy. 
Nieunikniony b\u0142\u0105d warto\u015bci sk\u0142ada si\u0119 z b\u0142\u0119du wyliczenia warto\u015bci (przeniesienia b\u0142\u0119d\u00f3w) oraz samego b\u0142\u0119du zaokr\u0105glenia:\n\n$$\n\\frac{\\Delta y}{y} = \\epsilon_y + \\mathrm{eps}\n$$\n\n## Przyk\u0142ad\nWyliczanie pierwiastka r\u00f3wnania kwadratowego $y^2+2py-q=0$ o mniejszej warto\u015bci bezwzgl\u0119dnej:\n$$ y=-p+\\sqrt{p^2+q} $$\nmo\u017cna policzy\u0107, \u017ce \n$$\n\\varepsilon_y=-\\frac{p}{\\sqrt{p^2-q}}\\varepsilon_p+\\frac{p+\\sqrt{p^2-q}}{2\\sqrt{p^2-q}}\\varepsilon_q\n$$\n\n## Analiza b\u0142\u0119du nieuniknionego\nPoniewa\u017c dla $q>0$ mamy\n\n$$\n\\left|\\frac{p}{\\sqrt{p^2-q}}\\right|\\leq1,\\quad \\left|\\frac{p+\\sqrt{p^2-q}}{2\\sqrt{p^2-q}}\\right|\\leq1\n$$\nto wtedy mamy (przyjmuj\u0105c, \u017ce nie zachodzi $p^2\\approx q$)\n$$\n\\mathrm{eps}\\leq\\left|\\frac{\\Delta y}{y}\\right| = |\\epsilon_y + \\mathrm{eps}|\\leq 3 \\mathrm{eps}\n$$\n\n## Por\u00f3wnanie algorytm\u00f3w\nRozpartrzmy dwa sposoby wyliczania $y$ dla $p$ i $q$ mniejszych od zera\n\n$$\n\\begin{aligned}\ns:={}&p^2\\\\\nt:={}&s+q\\\\\nu:={}&\\sqrt{t}\\\\\ny:={}&-p+q\n\\end{aligned}\n\\quad \\quad \\quad \\quad\n\\begin{aligned}\ns:={}&p^2\\\\\nt:={}&s+q\\\\\nu:={}&\\sqrt{t}\\\\\nv:={}&p+u\\\\\ny:={}&q/v\n\\end{aligned}\n$$\n\n\n## Algorytm 1\nPodstawowym \u017ar\u00f3d\u0142em b\u0142\u0119du b\u0119dzie wzmocnienie b\u0142\u0119du zaokr\u0105glenia wyliczania pierwiastka z $t$ poprzez odejmowanie dw\u00f3ch liczb przy wyliczaniu $y$\n$$\\varepsilon_y=\\frac{p\\sqrt{p^2+q}+p^2+q}{q}\\varepsilon=\\kappa\\varepsilon$$\n$\\kappa$ mo\u017cna oszacowa\u0107 z do\u0142u, przez \n$$\n\\kappa>\\frac{2 p^2}{q} >0\n$$\nco oznacza, \u017ce dla ma\u0142ych $q$ b\u0142\u0105d oblicze\u0144 b\u0119dzie du\u017co wi\u0119kszy ni\u017c b\u0142\u0105d nieunikniony.\n\n\n## Algorytm 2\nW tym algorytmie zakokr\u0105glenie przez odejmowanie nie wyst\u0105pi\n\n$$\n\\varepsilon_y = -\\frac{\\sqrt{p^2+q}}{p+\\sqrt{p^2+q}}\\varepsilon = \\kappa\\varepsilon\n$$\nw tym przypadku zawsze $|\\kappa|<1$.\n\n\n```python\ndef algorytm_1(p,q):\n s=p**2\n t=s+q\n u=np.sqrt(t)\n return u-p\n\ndef algorytm_2(p,q):\n s=p**2\n t=s+q\n u=np.sqrt(t)\n v=p+u\n return q/v\n```\n\n# Por\u00f3wnanie oblicze\u0144\n\n\n```python\np=1000\nq=0.018000000081\nexact_sol=np.max(np.roots([1,2*p,-q]))\n```\n\n\n```python\nepsilon_1=np.abs((algorytm_1(p,q))-exact_sol)\neta_1=epsilon_1/np.abs(exact_sol)\nepsilon_2=np.abs((algorytm_2(p,q))-exact_sol)\neta_2=epsilon_2/np.abs(exact_sol)\n```\n\n\n```python\nprint('Algorytm 1')\nprint(algorytm_1(p,q))\nprint('Algorytm 2')\nprint(algorytm_2(p,q))\nprint('Rozwi\u0105zanie dok\u0142adne')\nprint(exact_sol)\nprint(\"Algorytm 1: B\u0142\u0105d bezwzgl\u0119dny %10.2e, B\u0142\u0105d wzgl\u0119dny %10.2e\"%(epsilon_1,eta_1))\nprint(\"Algorytm 2: B\u0142\u0105d bezwzgl\u0119dny %10.2e, B\u0142\u0105d wzgl\u0119dny %10.2e\"%(epsilon_2,eta_2))\n```\n\n Algorytm 1\n 8.999999977277184e-06\n Algorytm 2\n 9e-06\n Rozwi\u0105zanie dok\u0142adne\n 9e-06\n Algorytm 1: B\u0142\u0105d bezwzgl\u0119dny 2.27e-14, B\u0142\u0105d wzgl\u0119dny 2.52e-09\n Algorytm 2: B\u0142\u0105d bezwzgl\u0119dny 0.00e+00, B\u0142\u0105d wzgl\u0119dny 0.00e+00\n\n\n# Metody Numeryczne\n\n## Ocena algorytm\u00f3w numerycznych\n\n### dr hab. in\u017c. Jerzy Baranowski, Prof. AGH.\n\n## Notacja O du\u017ce\n- M\u00f3wimy, \u017ce dla wielko\u015bci zale\u017cnej od parametru np. 
$F(n)$ zachodzi\n$$ \nF(n)=O(G(n))\n$$\nje\u017celi istnieje taka sta\u0142a $C$, \u017ce przy $n$ zmierzaj\u0105cym do niesko\u0144czono\u015bci (odpowiednio du\u017cym), mamy\n$$F(n)\u2264C G(n)$$\n- Je\u017celi interesuje nas $O(c)$, gdzie $c$ jest sta\u0142\u0105, zale\u017cno\u015b\u0107 ta ma zachodzi\u0107 niezale\u017cnie od wielko\u015bci parametru.\n- M\u00f3wimy potocznie, gdy b\u0142\u0105d jest r\u00f3wny $O(n^2)$, \u017ce b\u0142\u0105d jest rz\u0119du $n^2$\n \n\n## Ocena algorytmu\n- Naszym celem jest obliczenie pewnej wielko\u015bci $f(x)$, zale\u017cnej od danych wej\u015bciowych $x$\n- W przypadku oblicze\u0144 komputerowych zawsze mamy do czynienia z obliczaniem przybli\u017conym st\u0105d algorytm obliczania $f(x)$ b\u0119dziemy oznacza\u0107 jako $f^*(x)$\n- Dane w komputerze r\u00f3wnie\u017c s\u0105 reprezentowane w spos\u00f3b zaokr\u0105glony, wi\u0119c b\u0119dziemy je oznacza\u0107 jako $x^*$\n\n## Uwarunkowanie problemu\n\n- M\u00f3wimy, \u017ce problem $f(x)$ jest dobrze uwarunkowany, je\u017celi ma\u0142a zmiana $x$ powoduje ma\u0142\u0105 zmian\u0119\u00a0w $f(x)$\n- Problem jest \u017ale uwarunkowany, je\u017celi ma\u0142a zmiana $x$ powoduje du\u017c\u0105 zmian\u0119\u00a0w $f(x)$\n- Miar\u0105\u00a0uwarunkowania jest sta\u0142a $\\kappa$ (kappa), kt\u00f3ra (nieformalnie) okre\u015bla najwi\u0119kszy iloraz zaburze\u0144 $f(x)$ wywo\u0142anych przez najmniejsze zaburzenia $x$.\n- Sta\u0142\u0105 $\\kappa$ mo\u017cna wyliczy\u0107 tylko w niekt\u00f3rych probemach\n\n## Dok\u0142adno\u015b\u0107 algorytmu\n- Algorytm jest dok\u0142adny, je\u017celi\n$$\n\\frac{\\Vert f^*(x)-f(x) \\Vert}{\\Vert f(x)\\Vert}=O(\\varepsilon_m)\n$$\n- Zagwarantowanie, \u017ce algorytm jest dok\u0142adny wg tej definicji jest niezwykle trudne, zw\u0142aszcza dla \u017ale uwarunkowanych problem\u00f3w\n\n## Stabilno\u015b\u0107 algorytmu\n\nM\u00f3wimy, \u017ce algorytm jest stabilny, gdy dla ka\u017cdego $x$, zachodzi\n$$\n\\frac{\\Vert f^*(x)-f(x^*) \\Vert}{\\Vert f(x^*)\\Vert}=O(\\varepsilon_m)\n$$\ndla takich $x^*$, \u017ce\n$$\\frac{\\Vert x-x^* \\Vert}{\\Vert x\\Vert}=O(\\varepsilon_m)$$\nInnymi s\u0142owy\n**Stabilny algorytm daje prawie dobr\u0105 odpowied\u017a na prawie dobre pytanie**\n\n## Stabilno\u015b\u0107 wsteczna algorytmu\nAlgorytm jest stabilny wstecznie, je\u017celi dla ka\u017cdego $x$, zachodzi\n$$f^*(x)=f(x^*)$$\ndla takich $x^*$, \u017ce\n$$\\frac{\\Vert x-x^* \\Vert}{\\Vert x\\Vert}=O(\\varepsilon_m)$$\nInnymi s\u0142owy\n**Stabilny wstecznie algorytm daje prawid\u0142ow\u0105 odpowied\u017a na prawie dobre pytanie**\n\n\n\n\n\n## Dok\u0142adno\u015b\u0107 algorytm\u00f3w stabilnych wstecznie przy z\u0142ym uwarunkowaniu\nJe\u015bli algorytm jest stabilny wstecznie, to jego b\u0142\u0105d wzgl\u0119dny pogarsza si\u0119\u00a0proporcjonalnie do sta\u0142ej uwarunkowania tj. $O(\\kappa\\varepsilon_m)$\n\n# Metody Numeryczne\n\n## Problemy techniczne\n\n### dr hab. in\u017c. Jerzy Baranowski, Prof. 
AGH.\n", "meta": {"hexsha": "0cae0325b4d14c6106edf131487b06ff7ca7c38e", "size": 43817, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "Metody Numeryczne 2019/Lecture 1 (errors and stuff)/Lecture 1.ipynb", "max_stars_repo_name": "Piotrek12332121/Piotr-Polak-MN2", "max_stars_repo_head_hexsha": "2d5113981171a53716130cac8005835fbd7e0b76", "max_stars_repo_licenses": ["CC-BY-4.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "Metody Numeryczne 2019/Lecture 1 (errors and stuff)/Lecture 1.ipynb", "max_issues_repo_name": "Piotrek12332121/Piotr-Polak-MN2", "max_issues_repo_head_hexsha": "2d5113981171a53716130cac8005835fbd7e0b76", "max_issues_repo_licenses": ["CC-BY-4.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "Metody Numeryczne 2019/Lecture 1 (errors and stuff)/Lecture 1.ipynb", "max_forks_repo_name": "Piotrek12332121/Piotr-Polak-MN2", "max_forks_repo_head_hexsha": "2d5113981171a53716130cac8005835fbd7e0b76", "max_forks_repo_licenses": ["CC-BY-4.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 24.6578503095, "max_line_length": 236, "alphanum_fraction": 0.4919551772, "converted": true, "num_tokens": 9915, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.2598256379609837, "lm_q2_score": 0.320821300824607, "lm_q1q2_score": 0.08335759915822619}} {"text": "```python\n%matplotlib inline\n```\n\n\n```python\n#environment setup with watermark\n%load_ext watermark\n%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer\n```\n\n WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.\n\n\n Gopala KR \n last updated: 2018-01-30 \n \n CPython 3.6.3\n IPython 6.2.1\n \n watermark 1.6.0\n numpy 1.13.1\n pandas 0.20.3\n matplotlib 2.0.2\n nltk 3.2.5\n sklearn 0.19.0\n tensorflow 1.3.0\n theano 1.0.1\n mxnet 1.0.0\n chainer 3.3.0\n\n\n\n# The Johnson-Lindenstrauss bound for embedding with random projections\n\n\n\nThe `Johnson-Lindenstrauss lemma`_ states that any high dimensional\ndataset can be randomly projected into a lower dimensional Euclidean\nspace while controlling the distortion in the pairwise distances.\n\n\n\nTheoretical bounds\n==================\n\nThe distortion introduced by a random projection `p` is asserted by\nthe fact that `p` is defining an eps-embedding with good probability\nas defined by:\n\n\\begin{align}(1 - eps) \\|u - v\\|^2 < \\|p(u) - p(v)\\|^2 < (1 + eps) \\|u - v\\|^2\\end{align}\n\nWhere u and v are any rows taken from a dataset of shape [n_samples,\nn_features] and p is a projection by a random Gaussian N(0, 1) matrix\nwith shape [n_components, n_features] (or a sparse Achlioptas matrix).\n\nThe minimum number of components to guarantees the eps-embedding is\ngiven by:\n\n\\begin{align}n\\_components >= 4 log(n\\_samples) / (eps^2 / 2 - eps^3 / 3)\\end{align}\n\n\nThe first plot shows that with an increasing number of samples ``n_samples``,\nthe minimal number of dimensions ``n_components`` increased logarithmically\nin order to guarantee an ``eps``-embedding.\n\nThe second plot shows that an increase of the admissible\ndistortion ``eps`` allows to reduce drastically the minimal number of\ndimensions ``n_components`` for a given number 
of samples ``n_samples``\n\n\nEmpirical validation\n====================\n\nWe validate the above bounds on the digits dataset or on the 20 newsgroups\ntext document (TF-IDF word frequencies) dataset:\n\n- for the digits dataset, some 8x8 gray level pixels data for 500\n handwritten digits pictures are randomly projected to spaces for various\n larger number of dimensions ``n_components``.\n\n- for the 20 newsgroups dataset some 500 documents with 100k\n features in total are projected using a sparse random matrix to smaller\n euclidean spaces with various values for the target number of dimensions\n ``n_components``.\n\nThe default dataset is the digits dataset. To run the example on the twenty\nnewsgroups dataset, pass the --twenty-newsgroups command line argument to this\nscript.\n\nFor each value of ``n_components``, we plot:\n\n- 2D distribution of sample pairs with pairwise distances in original\n and projected spaces as x and y axis respectively.\n\n- 1D histogram of the ratio of those distances (projected / original).\n\nWe can see that for low values of ``n_components`` the distribution is wide\nwith many distorted pairs and a skewed distribution (due to the hard\nlimit of zero ratio on the left as distances are always positives)\nwhile for larger values of n_components the distortion is controlled\nand the distances are well preserved by the random projection.\n\n\nRemarks\n=======\n\nAccording to the JL lemma, projecting 500 samples without too much distortion\nwill require at least several thousands dimensions, irrespective of the\nnumber of features of the original dataset.\n\nHence using random projections on the digits dataset which only has 64 features\nin the input space does not make sense: it does not allow for dimensionality\nreduction in this case.\n\nOn the twenty newsgroups on the other hand the dimensionality can be decreased\nfrom 56436 down to 10000 while reasonably preserving pairwise distances.\n\n\n\n\n\n```python\nprint(__doc__)\n\nimport sys\nfrom time import time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.random_projection import johnson_lindenstrauss_min_dim\nfrom sklearn.random_projection import SparseRandomProjection\nfrom sklearn.datasets import fetch_20newsgroups_vectorized\nfrom sklearn.datasets import load_digits\nfrom sklearn.metrics.pairwise import euclidean_distances\n\n# Part 1: plot the theoretical dependency between n_components_min and\n# n_samples\n\n# range of admissible distortions\neps_range = np.linspace(0.1, 0.99, 5)\ncolors = plt.cm.Blues(np.linspace(0.3, 1.0, len(eps_range)))\n\n# range of number of samples (observation) to embed\nn_samples_range = np.logspace(1, 9, 9)\n\nplt.figure()\nfor eps, color in zip(eps_range, colors):\n min_n_components = johnson_lindenstrauss_min_dim(n_samples_range, eps=eps)\n plt.loglog(n_samples_range, min_n_components, color=color)\n\nplt.legend([\"eps = %0.1f\" % eps for eps in eps_range], loc=\"lower right\")\nplt.xlabel(\"Number of observations to eps-embed\")\nplt.ylabel(\"Minimum number of dimensions\")\nplt.title(\"Johnson-Lindenstrauss bounds:\\nn_samples vs n_components\")\n\n# range of admissible distortions\neps_range = np.linspace(0.01, 0.99, 100)\n\n# range of number of samples (observation) to embed\nn_samples_range = np.logspace(2, 6, 5)\ncolors = plt.cm.Blues(np.linspace(0.3, 1.0, len(n_samples_range)))\n\nplt.figure()\nfor n_samples, color in zip(n_samples_range, colors):\n min_n_components = johnson_lindenstrauss_min_dim(n_samples, eps=eps_range)\n 
plt.semilogy(eps_range, min_n_components, color=color)\n\nplt.legend([\"n_samples = %d\" % n for n in n_samples_range], loc=\"upper right\")\nplt.xlabel(\"Distortion eps\")\nplt.ylabel(\"Minimum number of dimensions\")\nplt.title(\"Johnson-Lindenstrauss bounds:\\nn_components vs eps\")\n\n# Part 2: perform sparse random projection of some digits images which are\n# quite low dimensional and dense or documents of the 20 newsgroups dataset\n# which is both high dimensional and sparse\n\nif '--twenty-newsgroups' in sys.argv:\n # Need an internet connection hence not enabled by default\n data = fetch_20newsgroups_vectorized().data[:500]\nelse:\n data = load_digits().data[:500]\n\nn_samples, n_features = data.shape\nprint(\"Embedding %d samples with dim %d using various random projections\"\n % (n_samples, n_features))\n\nn_components_range = np.array([300, 1000, 10000])\ndists = euclidean_distances(data, squared=True).ravel()\n\n# select only non-identical samples pairs\nnonzero = dists != 0\ndists = dists[nonzero]\n\nfor n_components in n_components_range:\n t0 = time()\n rp = SparseRandomProjection(n_components=n_components)\n projected_data = rp.fit_transform(data)\n print(\"Projected %d samples from %d to %d in %0.3fs\"\n % (n_samples, n_features, n_components, time() - t0))\n if hasattr(rp, 'components_'):\n n_bytes = rp.components_.data.nbytes\n n_bytes += rp.components_.indices.nbytes\n print(\"Random matrix with size: %0.3fMB\" % (n_bytes / 1e6))\n\n projected_dists = euclidean_distances(\n projected_data, squared=True).ravel()[nonzero]\n\n plt.figure()\n plt.hexbin(dists, projected_dists, gridsize=100, cmap=plt.cm.PuBu)\n plt.xlabel(\"Pairwise squared distances in original space\")\n plt.ylabel(\"Pairwise squared distances in projected space\")\n plt.title(\"Pairwise distances distribution for n_components=%d\" %\n n_components)\n cb = plt.colorbar()\n cb.set_label('Sample pairs counts')\n\n rates = projected_dists / dists\n print(\"Mean distances rate: %0.2f (%0.2f)\"\n % (np.mean(rates), np.std(rates)))\n\n plt.figure()\n plt.hist(rates, bins=50, normed=True, range=(0., 2.), edgecolor='k')\n plt.xlabel(\"Squared distances rate: projected / original\")\n plt.ylabel(\"Distribution of samples pairs\")\n plt.title(\"Histogram of pairwise distance rates for n_components=%d\" %\n n_components)\n\n # TODO: compute the expected value of eps and add them to the previous plot\n # as vertical lines / region\n\nplt.show()\n```\n\n\n```python\n\n```\n\n\n```python\ntest complete; Gopal\n```\n", "meta": {"hexsha": "0155afa65542fcd58fdf684a33295a2b40f695c3", "size": 232006, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "tests/scikit-learn/plot_johnson_lindenstrauss_bound.ipynb", "max_stars_repo_name": "gopala-kr/ds-notebooks", "max_stars_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_stars_repo_licenses": ["MIT"], "max_stars_count": 1, "max_stars_repo_stars_event_min_datetime": "2019-05-10T09:16:23.000Z", "max_stars_repo_stars_event_max_datetime": "2019-05-10T09:16:23.000Z", "max_issues_repo_path": "tests/scikit-learn/plot_johnson_lindenstrauss_bound.ipynb", "max_issues_repo_name": "gopala-kr/ds-notebooks", "max_issues_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_issues_repo_licenses": ["MIT"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "tests/scikit-learn/plot_johnson_lindenstrauss_bound.ipynb", "max_forks_repo_name": 
"gopala-kr/ds-notebooks", "max_forks_repo_head_hexsha": "bc35430ecdd851f2ceab8f2437eec4d77cb59423", "max_forks_repo_licenses": ["MIT"], "max_forks_count": 1, "max_forks_repo_forks_event_min_datetime": "2019-10-14T07:30:18.000Z", "max_forks_repo_forks_event_max_datetime": "2019-10-14T07:30:18.000Z", "avg_line_length": 526.0907029478, "max_line_length": 37944, "alphanum_fraction": 0.9409541133, "converted": true, "num_tokens": 1990, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4493926344647597, "lm_q2_score": 0.184767510648, "lm_q1q2_score": 0.08303315837360026}} {"text": "# Machine learning compilation of quantum circuits\n> Optimal compiling of unitaries reaching the theoretical lower bound\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [machine learning, compilation, qiskit, paper review]\n- image: images/grovercirc.png\n\n# Introduction\n\nI am going to review a recent [preprint](http://arxiv.org/abs/2106.05649) by Liam Madden and\nAndrea Simonetto that uses techniques from machine learning to tackle the problem of quantum circuits compilation. I find the approach suggested in the paper very interesting and the preliminary results quite promising.\n\n## What is compilation?\n> Note that a variety of terms are floating around the literature and used more or less interchangibly. Among those are **synthesis**, **compilation**, **transpilation** and **decomposition** of quantum circuits. I will not make a distinction and try to stick to **compilation**.\n\nBut first things first, what is a compilation of a quantum circuit? The best motivation and illustration for the problem is the following. Say you need to run a textbook quantum circuit on a real hardware. The real hardware usually allows only for a few basic one and two qubit gates. In contrast, your typical textbook quantum circuit may feature (1) complex many-qubit gates, for example multi-controlled gates and (2) one and two qubit gates which are not supported by the hardware. 
As a simple example take this 3-qubit Grover's circuit (from [qiskit textbook](https://qiskit.org/textbook/ch-algorithms/grover.html)):\n\n\n```python\n# collapse\n#initialization\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n# importing Qiskit\nfrom qiskit import IBMQ, Aer, assemble, transpile\nfrom qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister\nfrom qiskit.providers.ibmq import least_busy\n\n# import basic plot tools\nfrom qiskit.visualization import plot_histogram\n\ndef initialize_s(qc, qubits):\n \"\"\"Apply a H-gate to 'qubits' in qc\"\"\"\n for q in qubits:\n qc.h(q)\n return qc\n\ndef diffuser(nqubits):\n qc = QuantumCircuit(nqubits)\n # Apply transformation |s> -> |00..0> (H-gates)\n for qubit in range(nqubits):\n qc.h(qubit)\n # Apply transformation |00..0> -> |11..1> (X-gates)\n for qubit in range(nqubits):\n qc.x(qubit)\n # Do multi-controlled-Z gate\n qc.h(nqubits-1)\n qc.mct(list(range(nqubits-1)), nqubits-1) # multi-controlled-toffoli\n qc.h(nqubits-1)\n # Apply transformation |11..1> -> |00..0>\n for qubit in range(nqubits):\n qc.x(qubit)\n # Apply transformation |00..0> -> |s>\n for qubit in range(nqubits):\n qc.h(qubit)\n # We will return the diffuser as a gate\n U_s = qc.to_gate()\n U_s.name = \"U$_s$\"\n return U_s\n\nqc = QuantumCircuit(3)\nqc.cz(0, 2)\nqc.cz(1, 2)\noracle_ex3 = qc.to_gate()\noracle_ex3.name = \"U$_\\omega$\"\n\nn = 3\ngrover_circuit = QuantumCircuit(n)\ngrover_circuit = initialize_s(grover_circuit, [0,1,2])\ngrover_circuit.append(oracle_ex3, [0,1,2])\ngrover_circuit.append(diffuser(n), [0,1,2])\ngrover_circuit = grover_circuit.decompose()\ngrover_circuit.draw(output='mpl')\n```\n\nThe three qubit gates like Toffoli are not generally available on a hardware and one and two qubit gates my be different from those in the textbook algorithm. For example ion quantum computers are good with [M\u00f8lmer\u2013S\u00f8rensen gates](https://en.wikipedia.org/wiki/M%C3%B8lmer%E2%80%93S%C3%B8rensen_gate) and may need several native one qubit gates to implement the Hadamard gate.\n\nAdditional important problem is to take into account qubit connectivity. Usually textbook algorithms assume full connectivity, meaning that two-qubit gates can act on any pair of qubits. On most hardware platforms however a qubit can only interact with its neighbors. Assuming that one and two qubits gates available on the hardware can implement a SWAP gate between adjacent qubits, to solve the connectivity problem one can insert as many SWAPs as necessary to connect topologically disjoint qubits. Using SWAPs however leads to a huge overhead in the number of total gates in the compiled circuit, and it is of much importance use them as economically as possible. In fact, the problem of optimal SWAPping alone in generic situation is [NP-complete](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=on+the+complexity+of+quantum+circuit+compilation&btnG=).\n\n## Simplified problem\nWhen compiling a quantum circuit one has to decide which resulting circuits are considered to be efficient. Ideally, one should optimize for the total fidelity of the circuit. Let us imagine running the algorithm on a real device. Probably my theorist's image of a real device is still way too platonic, but I will try my best. Many details need to be taken into account. For example, gates acting on different qubits or pairs of qubits may have different fidelities. Decoherence of qubits with time can make circuits where many operations can be executed in parallel more favorable. 
Cross-talk (unwanted interactions) between neighboring qubits may lead to exotic patterns for optimal circuits. A simple proxy for the resulting fidelity that is often adopted is the number of two-qubit gates (which are generically much less accurate than a single-qubit gates). So the problem that is often studied, and that is addressed in the preprint we are going to discuss, is the problem of optimal compilation into a gate set consisting of arbitrary single-qubit gates and CNOTs, the only two qubits gate. The compiled circuit must \n\n1. Respect hardware connectivity.\n1. Have as few CNOTs as possible.\n1. Exceed a given fidelity threshold.\n\nLast item here means that we also allow for an approximate compilation. By increasing the number of CNOTs one can always achieve an exact compilation, but since in reality each additional CNOT comes with its own fidelity cost this might not be a good trade-off. Note also that a specific choice for two-qubit gate is made, a CNOT gate. Any two-qubit gate can be decomposed into at most 3 CNOTs [see e.g. here](https://arxiv.org/pdf/quant-ph/0308006.pdf), so in terms of computational complexity this is of course inconsequential. However in the following discussion we will care a lot about constant factors and may wish to revisit this choice at the end.\n\n## Existing results \n\nSince finding the exact optimal solution to the compilation problem is intractable, as with many things in life one needs to resort to heuristic methods. A combination of many heuristic methods, in fact. As an example one can check out the [transpilation workflow](https://qiskit.org/documentation/apidoc/transpiler.html) in `qiskit`. Among others, there is a step that compiles >2 qubit gates into one and two qubit gates; the one that tries to find a good initial placement of the logical qubits onto physical hardware; the one that 'routes' the desired circuit to match a given topology being as greedy on SWAPs as possible. Each of these steps can use several different heuristic optimization algorithms, which are continuously refined and extended (for example this [recent preprint](https://arxiv.org/abs/2106.06446) improves on the default rounting procedure in `qiskit`). In my opinion it would be waay better to have one unified heuristic for all steps of the process, especially taking into account that they are not completely independent. Although this might be too much to ask for, some advances are definitely possible and machine learning tools might prove very useful. The paper we are going to discuss is an excellent demonstration.\n\n## Theoretical lower bound and quantum Shannon decomposition\nThere is a couple of very nice theoretical results about the compilation problem that I need to mention. But first, let us agree that we will compile unitaries, not circuits. What is the difference? Of course, any quantum circuit (without measurements and neglecting losses) corresponds to a unitary matrix. However, to compute that unitary matrix for a large quantum circuit explicitly is generally an intractable problem, precisely for the same reasons that quantum computation is assumed to be more powerful than classical. Still, taking as the input a unitary matrix (which is in general hard to compute from the circuit) is very useful both theoretically and practically. I will discuss pros and cons of this approach later on.\n\nOK, now the fun fact. 
Generically, one needs at least this many CNOTs\n\n\\begin{align}\n L:=\\# \\text{CNOTs} \\geq \\frac14\\left(4^n-3n-1\\right) \\label{TLB}\n\\end{align}\n\nto exactly compile an $n$-qubit unitary. 'Generically' means that the set of $n$-qubit unitaries that can be compiled exactly with smaller amount of CNOTs has measure zero. Keep in mind though, that there are important unitaries in this class like multi-controlled gates or qubit permutations. We will discuss compilation of some gates from the 'measure-zero' later on. \n\nThe authors of the preprint (I hope you and me still remember that there is some actual results to discuss, not just my overly long introduction to read) refer to \\eqref{TLB} as the theoretical lower bound or TLB for short. The proof of this [fact](https://dl.acm.org/doi/10.5555/968879.969163) is actually rather simple and I will sketch it. A general $d\\times d$ unitary has $d^2$ real parameters. For $n$ qubits $d=2^n$. Single one-qubit gate has 3 real parameters. Any sequence of one-qubit gates applied to the same qubit can be reduced to a single one-qubit gate and hence can have no more than 3 parameters. That means, that without CNOTs we can only have 3n parameters in our circuit, 3 for each one-qubit gate. This is definitely not enough to describe an arbitrary unitary on $n$ qubits which has $d^2=4^n$ parameters.\n\nNow, adding a single CNOT allows to insert two more 1-qubit unitaries after it, like that\n\n\n```python\n#collapse\nfrom qiskit.circuit import Parameter\n\na1, a2, a3 = [Parameter(a) for a in ['a1', 'a2', 'a3']]\nb1, b2, b3 = [Parameter(b) for b in ['b1', 'b2', 'b3']]\n\nqc = QuantumCircuit(2)\nqc.cx(0, 1)\nqc.u(a1, a2, a3, 0) \nqc.u(b1, b2, b3, 1)\n \nqc.draw(output='mpl')\n```\n\nAt the first glance this allows to add 6 more parameters. However, each single-qubit unitary can be represented via the Euler angles as a product of only $R_z$ and $R_x$ rotations either as $U=R_z R_x R_z$ or $U=R_x R_y R_z$ (I do not specify angles). Now, CNOT can be represented as $CNOT=|0\\rangle\\langle 0|\\otimes I+|1\\rangle\\langle 1|\\otimes X$. It follows that $R_z$ commutes with the control of CNOT and $R_x$ commutes with the target of CNOT, hence they can be dragged to the left and joined with preceding one-qubit gates. So in fact each new CNOT gate allows to add only 4 real parameters:\n\n\n```python\n#collapse\na1, a2 = [Parameter(a) for a in ['a1', 'a2']]\nb1, b2 = [Parameter(b) for b in ['b1', 'b2']]\n\nqc = QuantumCircuit(2)\nqc.cx(0, 1)\nqc.rx(a1, 0) \nqc.rz(a2, 0)\nqc.rz(b1, 1)\nqc.rx(b2, 1)\n \nqc.draw(output='mpl')\n```\n\n That's it, there are no more caveats. Thus, the total number of parameters we can get with $L$ CNOTs is $3n+4L$ and we need to describe a $d\\times d$ unitary which has $4^n$ parameters. In fact, the global phase of the unitary is irrelevant so we only need $3n+4L \\geq 4^n-1$. Solving for $L$ gives the TLB \\eqref{TLB}. That's pretty cool, isn't it?\n\nNow there is an algorithm, called *quantum Shannon decomposition* (see [ref](https://arxiv.org/abs/quant-ph/0406176)), which gives an exact compilation of any unitary with the number of CNOTs twice as much as the TLB requires. In complexity-theoretic terms an overall factor of two is of course inessential, but for current NISQ devices we want to get as efficient as possible. 
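As a quick numerical aside (my own addition, not code from the preprint), the TLB is easy to evaluate explicitly. The sketch below prints it for a few qubit counts together with twice its value, roughly the CNOT count of the quantum Shannon decomposition; for $n=3$ and $n=5$ it reproduces the values $L=14$ and $L=252$ used in the experiments discussed below.

```python
import math

def tlb(n):
    # theoretical lower bound on the number of CNOTs for a generic n-qubit unitary
    return math.ceil((4 ** n - 3 * n - 1) / 4)

for n in range(2, 7):
    print(f"n={n}: TLB={tlb(n)}, ~Shannon decomposition={2 * tlb(n)}")
```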
Moreover, to my understanding the quantum Shannon decomposition is not easily extendable to restricted topology while inefficient generalizations lead to a much bigger overhead (roughly an order of magnitude).\n\n# What's in the preprint?\n## Templates\nI've already wrote an introduction way longer than intended so from now on I will try to be brief and to the point. The authors of the preprint propose two templates inspired by the quantum Shannon decomposition. The building block for each template is a 'CNOT unit'\n\n\n```python\n#collapse\na1, a2 = [Parameter(a) for a in ['a1', 'a2']]\nb1, b2 = [Parameter(b) for b in ['b1', 'b2']]\n\nqc = QuantumCircuit(2)\nqc.cx(0, 1)\nqc.ry(a1, 0) \nqc.rz(a2, 0)\nqc.ry(b1, 1)\nqc.rx(b2, 1)\n \nqc.draw(output='mpl')\n```\n\nFirst template is called **sequ** in the paper and is obtained as follows. There are $n(n-1)/2$ different CNOTs on $n$-qubit gates. We enumerate them somehow and simply stack sequentially. Here is a 3-qubut example with two layers (I use `qiskit` gates `cz` instead of our 'CNOT units' for the ease of graphical representation)\n\n\n```python\n#collapse\nqc = QuantumCircuit(3)\nfor _ in range(2):\n qc.cz(0, 1)\n qc.cz(0, 2)\n qc.cz(1, 2)\n qc.barrier()\nqc.draw(output='mpl')\n```\n\nThe second template is called **spin** and for 4 qubits looks as follows\n\n\n```python\n#collapse\nqc = QuantumCircuit(4)\nfor _ in range(2):\n qc.cz(0, 1)\n qc.cz(1, 2)\n qc.cz(2, 3)\n qc.barrier()\nqc.draw(output='mpl')\n```\n\nI'm sure you get the idea. That's it! The templates fix the pattern of CNOTs while angles of single-qubit gates are adjustable parameters which are collectively denoted by $\\theta$. \n\nThe idea now is simple. Try to optimize these parameters to achieve the highest possible fidelity for a given target unitary to compile. I am not at all an expert on the optimization methods, so I might miss many subtleties, but on the surface the problem looks rather straightforward. You can choose your favorite flavor of the gradient descent and hope for convergence. The problem appears to be non-convex but the gradient descent seems to work well in practice. One technical point that I do not fully understand is that the authors choose to work with fidelity defined by the Frobenius norm $||U-V||_F^2$ which is sensitive to the global phase of each unitary. To my understanding they often find that local minima of this fidelity coincides with the global minimum up to a global phase. OK, so in the rest of the post I refer to the 'gradient descent' as the magic numerical method which does good job of finding physically sound minimums.\n\n## Results\n### Compiling random unitaries\nOK, finally, for the surprising results. The authors find experimentally that both **sequ** and **spin** perform surprisingly well on random unitaries always coming very close to the TLB \\eqref{TLB} with good fidelity. More precisely, the tests proceed as follows. First, one generates a random unitary. Next, for each number $L$ of CNOTs below the TLB one runs the gradient descent to see how much fidelity can be achieved with this amount of CNOTs. Finally, one plots the fidelity as a function of $L$. Impressively, on the sample of hundred unitaries the fidelity always approaches 100% when the number of CNOTs reaches the TLB. For the $n=3$ qubits TLB is $L=14$, for $n=5$ $L=252$ (these are the two cases studied). So, in all cases studied, the gradient descent lead by the provided templates seems to always find the optimal compilation circuit! 
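To make the procedure concrete, here is a rough sketch of one such experiment. This is entirely my own reconstruction rather than the authors' code (names like `sequ_ansatz` are made up, and a serious implementation would at least use analytic gradients and multiple restarts): it builds a **sequ**-style ansatz with a fixed number of CNOT units and hands the phase-sensitive Frobenius cost to a generic optimizer.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import unitary_group
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

def sequ_ansatz(n, L, thetas):
    # One Euler rotation per qubit, then L 'CNOT units' (4 angles each) cycling
    # through the qubit pairs in a fixed order, as in the sequ template above.
    qc = QuantumCircuit(n)
    t = iter(thetas)
    for q in range(n):
        qc.rz(next(t), q); qc.ry(next(t), q); qc.rz(next(t), q)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for k in range(L):
        i, j = pairs[k % len(pairs)]
        qc.cx(i, j)
        qc.ry(next(t), i); qc.rz(next(t), i)   # control qubit
        qc.ry(next(t), j); qc.rx(next(t), j)   # target qubit
    return qc

def cost(thetas, n, L, target):
    U = Operator(sequ_ansatz(n, L, thetas)).data
    return np.linalg.norm(U - target) ** 2     # phase-sensitive Frobenius cost

n, L = 3, 14                                   # L = 14 is the TLB for 3 qubits
target = unitary_group.rvs(2 ** n, random_state=0)
x0 = np.random.default_rng(0).uniform(0, 2 * np.pi, 3 * n + 4 * L)
# finite-difference gradients over 3n + 4L parameters: slow, but fine for a demo
res = minimize(cost, x0, args=(n, L, target), method="L-BFGS-B")
print(res.fun)                                 # small values = good approximate compilation
```

Whether a single run like this reaches the global minimum depends on the initial point; in practice one would restart from several random seeds and keep the best result.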
Recall that this is two times better than quantum Shannon decomposition. Please see the original paper for nice plots that I do not reproduce here.\n\n\n### Compiling on restricted topology\nThese tests were performed on the fully connected circuits. The next remarkable discovery is that restricting the connectivity does not to seem to harm the performance of the compilation! More precisely, the authors considered two restricted topologies in the paper, 'star' where all qubits are connected to single central one and 'line' where well, they are connected by links on a line. The **spin** template can not be applied to star topology, but it can be applied to line topology. The **sequ** template can be generalized to any topology by simply omitting CNOTs that are not allowed. Again, as examining a hundred of random unitaries on $n=3$ and $n=5$ qubits shows, the fidelity nearing 100% can be achieved right at the TLB in all cases, which hints that topology restriction may not be a problem in this approach at all! To appreciate the achievement, imagine decomposing each unitary via the quantum Shannon decomposition and then routing on restricted topology with swarms of SWAPs, a terrifying picture indeed. It would be interesting to compare the results against the performance of `qiskit` transpiler which is unfortunately not done in the paper to my understanding.\n\n### Compiling specific 'measure zero' gates\nSome important multi-qubit gates fall into the 'measure zero' set which can be compiled with a smaller amount of CNOTs than is implied by the TLB \\eqref{TLB}. For example, 4-qubit Toffoli gate can be compiled with 14 CNOTs while the TLB requires 61 gates. Numerical tests show that the plain version of the algorithm presented above does not generically obtain the optimal compilation for special gates. However, with some tweaking and increasing the amount of attempts the authors were able to find optimal decompositions for a number of known gates such as 3- and 4-qubit Toffoli, 3-qubit Fredkin and 1-bit full adder on 4 qubits. The tweaking included randomly changing the orientation of some CNOTs (note that in both **sequ** and **spin** the control qubit is always at the top) and running many optimization cycles with random initial conditions. The best performing method appeared to be **sequ** with random flips of CNOTs. The whole strategy might look a bit fishy, but I would argue that it is not. My argument is simple: you only need to find a good compilation of the 4-qubit Toffoli *once*. After that you pat yourself on the back and use the result in all your algorithms. So it does not really matter how hard it was to find the compilation as long as you did not forget to write it down.\n\n### Compressing the quantum Shannon decomposition\nFinally, as a new twist on the plot the authors propose a method to compress the standard quantum Shannon decomposition (which is twice the TLB, remember?). The idea seems simple and works surprisingly well. The algorithm works as follows.\n1. Compile a unitary exactly using the quantum Shannon decomposition.\n1. Promote parameters in single-qubit gates variables (they have fixed values in quantum Shannon decomposition).\n2. Add [LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics)-type regularization term, which forces one-qubit gates to have small parameters, ideally zero (which makes the corresponding gates into identities).\n3. Run a gradient descent on the regularized cost function (fidelity+LASSO term). 
Some one-qubit gates will become identity after that (one might need to tune the regularization parameter here).\n4. After eliminating identity one-qubit gates one can end up in the situation where there is a bunch of CNOTs with no single-qubit gates in between. There are efficient algorithms for reducing the amount of CNOTs in this case. \n5. Recall that the fidelity was compromised by adding regularization terms. Run the gradient descent once more, this time without regularization, to squeeze out these last percents of fidelity.\n\nFrom the description of this algorithm it does not appear obvious that the required cancellations (elimination of single-qubit gates and cancellations in resulting CNOT clusters) is bound to happen, but the experimental tests show that they do. Again, from a bunch of random unitaries it seems that the $\\times 2$ reduction to the TLB is almost sure to happen! Please see the preprint for plots.\n\n## Weak spots\nAlthough I find results of the paper largely impressive, a couple of weak spots deserve a mention.\n### Limited scope of experiments\nThe numerical experiments were only carried out for $n=3$ and $n=5$ qubits which of course is not much. To see if the method keeps working as the number of qubits is scaled is sure very important. There may be two promblems. First, the templates can fail to be expressive enough for larger circuits. The authors hope to attack this problem from the theoretical side and show that the templates do fill the space of unitaries. Well, best of luck with that! Another potential problem is that although the templates work fine for higher $n$, the learning part might become way more challenging. Well, I guess we should wait and see. \n### Unitary as the input\nAs I discussed somewhere way above, for a realistic quantum computation we can not know the unitary matrix that we need to compile. If we did, there would no need in the quantum computer in the first place. I can make two objects here. First, we are still in the NISQ era and pushing the existing quantum computers to their edge is a very important task. Even if an algorithm can be simulated classically, running it on a real device might be invaluable. Second, even quantum circuits on 1000 qubits do not usually feature 100-qubit unitaries. So it could be possible to separate a realistic quantum circuit into pieces, each containing only a few qubits, and compile them separately.\n\n# Final remarks\nTo me, the algorithms presented in the preprint seem to be refreshingly efficient and universal. At some level it appears to be irrelevant which exact template do we use. Near the theoretical lower bound they all perform similarly well, even on restricted topology. This might be a justification for choosing CNOT as the two-qubit gate, as this probably does not matter in the end! 
I'm really cheering for a universal algorithm like that to win the compilation challenge over a complicated web of isolated heuristics, which are currently state of the art.\n", "meta": {"hexsha": "5f5e02022c847441ac3bb7161e200337baac58a0", "size": 71814, "ext": "ipynb", "lang": "Jupyter Notebook", "max_stars_repo_path": "_notebooks/2021-07-22-Machine learning compilation of quantum circuits.ipynb", "max_stars_repo_name": "idnm/blog", "max_stars_repo_head_hexsha": "a9e976ea45fe077b7b13a5fa3680fab1affc2c48", "max_stars_repo_licenses": ["Apache-2.0"], "max_stars_count": null, "max_stars_repo_stars_event_min_datetime": null, "max_stars_repo_stars_event_max_datetime": null, "max_issues_repo_path": "_notebooks/2021-07-22-Machine learning compilation of quantum circuits.ipynb", "max_issues_repo_name": "idnm/blog", "max_issues_repo_head_hexsha": "a9e976ea45fe077b7b13a5fa3680fab1affc2c48", "max_issues_repo_licenses": ["Apache-2.0"], "max_issues_count": null, "max_issues_repo_issues_event_min_datetime": null, "max_issues_repo_issues_event_max_datetime": null, "max_forks_repo_path": "_notebooks/2021-07-22-Machine learning compilation of quantum circuits.ipynb", "max_forks_repo_name": "idnm/blog", "max_forks_repo_head_hexsha": "a9e976ea45fe077b7b13a5fa3680fab1affc2c48", "max_forks_repo_licenses": ["Apache-2.0"], "max_forks_count": null, "max_forks_repo_forks_event_min_datetime": null, "max_forks_repo_forks_event_max_datetime": null, "avg_line_length": 166.2361111111, "max_line_length": 10804, "alphanum_fraction": 0.8604868132, "converted": true, "num_tokens": 5216, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. NO\n2. NO", "lm_q1_score": 0.4843800842769844, "lm_q2_score": 0.1710611959045317, "lm_q1q2_score": 0.08285863648875881}} {"text": "## Introduction to Python 2016\n\nUniversity of Melbourne\n\nSchool of Earth Sciences\n\nLouis Moresi\n\n[louis.moresi@unimelb.edu.au](mailto:louis.moresi@unimelb.edu.au)\n\n[www.moresi.info](http://www.moresi.info)\n\n\n### Quick links\n\n - [Browse](/notebooks/Content/Notebooks/) all files that make up the course, start a new notebook or access the terminal.\n - Run the [Mapping Notebooks](/notebooks/Content/Notebooks/Mapping) \n - If you are running in the Docker environment, you can drill through to [local files](/notebooks/Content/Notebooks/external) if they have been correctly mounted as a volume within the container. \n\n\n### What is this ?\n\n\nWe will be working in the iPython / Jupyter notebook system. I like these because they are a form of literate programming in which we can mix textbook instruction and explanations with code that can also be run and edited.\nThe text and mathematics in the notebooks requires a little preliminary learning. \n\nThe notebook system also includes a [file browser](/tree) which also allows you to add your own notebook, add a text file or start a terminal on the machine running this notebook. \n\n\n### Markdown\n\nYou can document your iPython notebooks by making some cells into **Markdown** cells. Markdown is a way of formatting text that is supposed to be almost as readable un-rendered as when it is tidied up. You might argue that it looks equally bad either way, but that's tough because the notebooks use it and that's how I want you to produce nice looking output to hand in as an assignment !\n\nIf you look at the **Markdown** cells as source code (by double-clicking on them) you will see how the raw text looks. 
To get back to the pretty version of the text, hit shift-enter.\n\n### Maths\n\nIn a browser, you can render beautiful equations using a JavaScript tool called **MathJax** which is built into the iPython notebooks. \n\nYou can embed symbols such as $\\pi$ and $\\epsilon$ in your text if you use the \\$ signs to indicate where your equations begin and end, and you know enough $\\LaTeX$ ([try it here!](http://www.codecogs.com/latex/eqneditor.php)) to get by.\n\nEquations in 'display' mode are written like this (again, look at the source for this cell to see what is used)\n\n\\\\[ e^{i\\pi} + 1 = 0 \\\\]\n\nor even like this\n\n\\begin{equation}\n%%\n \\nabla^4 \\psi = \\frac{\\partial T}{\\partial x}\n%%\n\\end{equation}\n\nGo back to the rendered form of the cell by 'running' it.\n\n### Links\n\n[Markdown Website](http://daringfireball.net/projects/markdown/)\n\n[MathJax Website](http://docs.mathjax.org)\n\n[Jupyter Notebooks](http://www.jupyter.org)\n\n\n```python\n\n```\n\n# Introduction to Jupyter\n\nJupyter is a web-based development environment.\n\nThe three moons in the symbol are _Julia_, _R_ and _Python_.\n\nIn these examples, we will use only Python 2 or 3 (depending on your taste).\nHere are some shortcut commands for the Notebook:\n\n```bash\n Alt + Return/Enter: Evaluates current cell and creates a new cell\n Shift + Return/Enter: Evaluates current cell and goes to next cell\n Ctrl + Return/Enter: Evaluates current cell and stays in current cell\n Tab: Gives you auto completion\n Shift + Tab: Gives you Help\n Enter: Goes into Edit mode (notice the color change in the cell from blue to green)\n Esc: Goes into Command mode (notice the color change in the cell from green to blue)\n Command Mode Actions:\n a: Creates Cell Above\n b: Creates Cell Below\n s: Saves notebook\n y: Changes Cell type to CODE\n m: Changes Cell type to Markdown\n d: Deletes Cell\n x: Cuts Cell\n And many more... 
go to Edit -> Keyboard Shortcuts\n Shift + Ctrl + P: Command Palette\n```\n\nEach **cell** accepts a series of commands that are run at the time of evaluation.\n\n```python\na = 1\nb = 2\na + b\n```\n\nYou can embed plots:\n\n```python\nimport matplotlib.pyplot as plt\nimport numpy as np\n```\n\n```python\nx = np.arange(-5, 5, 0.1)\ny = np.sin(x)\n```\n\n```python\nplt.plot(x, y, label='sin')\nplt.legend()\nplt.show()\n```\n\nOr you can change the way figures are displayed and retain some interactivity:\n\n```python\n%matplotlib notebook\n# use %matplotlib inline instead for static, non-interactive figures\nimport matplotlib.pyplot as plt\nimport numpy as np\nx = np.arange(-5, 5, 0.1)\ny = np.sin(x)\nplt.plot(x, y, label='sin')\nplt.show()\n# remember to close the figure when you are done\n```\n\nYou can set up packages with your functions if you are going to use them repetitively. If you want to share them later, document them thoroughly.\n\n```python\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom my_utils.utils import testf\n\nx, y = testf()\nplt.plot(x, y)\nplt.show()\n```\n\nThere are many formatting options:\n\n# Heading 1\n# Heading 2\n## Heading 2.1\n### Heading 2.1.1\n#### Heading 2.1.1.1 all the way to 6 (1-6 while in Command mode)\n\n1. Make a list\n2. With the entries\n 3. Organized\n 4. As you\n 5. Want\n 6. Oops\n\n* Well\n* Numbers\n * Are not exclusive\n * To the lists\n\nGo to [Jupyter Notebook](http://jupyter.org/) for more info!\n\n$$e^x=\\sum_{i=0}^\\infty \\frac{1}{i!}x^i$$\n\n\\begin{equation}\nc = a \\times b\n\\end{equation}\n\n```python\nprint(\"Hello World\")\n```\n\n| This | is | | Try it out! |\n|------|------|----|-------------|\n| a | table | | Huzzah |\n\nAn image from the local notebook directory can be embedded here with the usual Markdown image syntax.\n\nThanks to F. da Silva\n
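Picking up the earlier remark about packaging and documenting your own functions: a small, documented module along the lines of the `my_utils.utils` import used above might look like the sketch below. The contents are hypothetical (the real `testf` is not listed in this notebook); it simply returns something plottable so the earlier cell works.

```python
# my_utils/utils.py  (hypothetical contents)
import numpy as np


def testf(xmin=-5.0, xmax=5.0, step=0.1):
    """Return x values and a damped sine wave for quick plotting demos.

    Parameters
    ----------
    xmin, xmax : float
        Range of x values to generate.
    step : float
        Spacing between consecutive x values.
    """
    x = np.arange(xmin, xmax, step)
    y = np.sin(x) * np.exp(-np.abs(x) / 5.0)
    return x, y
```

A folder named `my_utils` containing this file plus an empty `__init__.py`, placed next to the notebook, is enough for the `from my_utils.utils import testf` line to work.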